The Sentinel (short story)
"The Sentinel" is a science fiction short story by British author Arthur C. Clarke, written in 1948 and first published in 1951 as "Sentinel of Eternity". It later served as a starting point for the novel and film "2001: A Space Odyssey".
"The Sentinel" was written in 1948 for a BBC competition (in which it failed to place) and was first published in the magazine "10 Story Fantasy" in its Spring 1951 issue, under the title "Sentinel of Eternity". It was subsequently published as part of the short story collections "Expedition to Earth" (1953), "The Nine Billion Names of God" (1967), and "The Lost Worlds of 2001" (1972). Despite the story's initial failure, it changed the course of Clarke's career.
"The Sentinel" (published 1982) is also the title of a collection of Arthur C. Clarke short stories, which includes the eponymous "The Sentinel", "Guardian Angel" (the inspiration for his "Childhood's End"), "The Songs of Distant Earth", and "Breaking Strain".
The story deals with the discovery of an artifact on Earth's Moon left behind eons ago by ancient aliens. The object is made of a polished mineral, is tetrahedral in shape, and is surrounded by a spherical forcefield. The narrator speculates at one point that the mysterious aliens who left this structure on the Moon may have used mechanisms belonging "to a technology that lies beyond our horizons, perhaps to the technology of para-physical forces."
The narrator speculates that for millions of years (evidenced by dust buildup around its forcefield) the artifact has been transmitting signals into deep space, but it ceases to transmit when, sometime later, it is destroyed "with the savage might of atomic power". The narrator hypothesizes that this "sentinel" was left on the Moon as a "warning beacon" for possible intelligent and spacefaring species that might develop on Earth.
In "2001: A Space Odyssey", the sentinel is activated when sunlight touches it for the first time after it is dug up.
Algis Budrys found "The Sentinel" to be infuriating, saying that "one can raise a formidable reputation for profundity by repeating, over and over again, that the universe is wide and man is very small ... while our instruments show that the universe is wide, they are our instruments and we managed somehow to build them. There is no evidence whatsoever that Man is that goddamned small".
The story was adapted and expanded into the 1968 film "2001: A Space Odyssey", directed by Stanley Kubrick. Kubrick and Clarke modified and fused the story with other ideas. Clarke expressed impatience with its common description as the story on which the novel and movie are based.
The Fountains of Paradise
The Fountains of Paradise is a science fiction novel by British writer Arthur C. Clarke. Set in the 22nd century, it describes the construction of a space elevator. This "orbital tower" is a giant structure rising from the ground and linking with a satellite in geostationary orbit at the height of approximately 36,000 kilometers (approx. 22,300 miles). Such a structure would be used to raise payloads to orbit without the expense of using rockets. The novel won both the Hugo and Nebula Awards for Best Novel.
The novel focuses primarily on a project proposed by the main character, Vannevar Morgan, known as the Orbital Tower. The tower is to stretch from the Earth's equator to a satellite that is in geostationary orbit. Such a structure would greatly reduce the cost of sending people and supplies into space.
The main story is framed by two other stories. The first one tells of King Kalidasa, living thousands of years before Morgan is born, who is constructing a large tower. The other story, taking place long after Morgan has died, deals with aliens making contact with Earth.
Due to many technical issues, there are only two locations on Earth where the Orbital Tower can be built. One is in the middle of the Pacific Ocean, and the other is the mountain Sri Kanda on the island of Taprobane (a fictionalized Sri Lanka). However, there is a Buddhist temple on the mountain, and Mahanayake Thero, the head of the order, refuses to give permission to begin construction.
Hearing of the difficulties, a group of people living on Mars contacts Morgan and suggests that the tower be built there instead. It would be smaller than the one planned for the Earth and reach from Mars to one of its moons, Deimos.
After a few setbacks, including some fatalities, construction of the tower gets underway. Although Morgan's heart is failing, he rides up the tower to take food and oxygen to a group of stranded students and their professor. After overcoming serious difficulties he succeeds, then dies of a heart attack on the way back down.
The main theme of the novel is preceded by, and to some extent juxtaposed with, the story of the life and death of King Kashyapa I of Sri Lanka (fictionalized as King Kalidasa). It foreshadows the exploits of Vannevar Morgan in his determination to realize the space elevator.
Other subplots include human colonization of the Solar system and the first contact with extraterrestrial intelligence.
Clarke envisions a microscopically thin (in his demonstrator sample) but strong "hyperfilament" that makes the elevator possible. Although the hyperfilament is constructed from "continuous pseudo-one-dimensional diamond crystal", Clarke later expressed his belief that another type of carbon, Buckminsterfullerene, would play the role of hyperfilament in a real space elevator. The latest developments in carbon nanotube technology bring the orbital elevator closer to possible realisation.
The story is set in the fictional equatorial island country of Taprobane, which Clarke has described as "about ninety percent congruent with the island of Sri Lanka", south of its real-world location. The ruins of the palace at Yakkagala as described in the book very closely match the real-life ruins at Sigiriya in Sri Lanka. The mountain on which the space elevator is built is called Sri Kanda in the book, and bears a strong resemblance to the real mountain Sri Pada.
Tagalog language
Tagalog is an Austronesian language spoken as a first language by the ethnic Tagalog people, who make up a quarter of the population of the Philippines, and as a second language by the majority. Its standardized form, officially named "Filipino", is the national language of the Philippines and one of its two official languages, alongside English.
Tagalog is closely related to other Philippine languages, such as the Bikol languages, Ilocano, the Visayan languages, Kapampangan, and Pangasinan, and more distantly to other Austronesian languages, such as the Formosan languages of Taiwan, Malay (Malaysian and Indonesian), Hawaiian, Māori, and Malagasy.
The word "Tagalog" is derived from the endonym "taga-ilog" ("river dweller"), composed of "tagá-" ("native of" or "from") and "ilog" ("river"). Linguists such as Dr. David Zorc and Dr. Robert Blust speculate that the Tagalogs and other Central Philippine ethno-linguistic groups originated in Northeastern Mindanao or the Eastern Visayas.
Possible words of Old Tagalog origin are attested in the Laguna Copperplate Inscription from the tenth century, which is largely written in Old Malay. The first known complete book to be written in Tagalog is the "Doctrina Christiana" (Christian Doctrine), printed in 1593. The "Doctrina" was written in Spanish and two transcriptions of Tagalog; one in the ancient, then-current Baybayin script and the other in an early Spanish attempt at a Latin orthography for the language.
Throughout the 333 years of Spanish rule, various grammars and dictionaries were written by Spanish clergymen. In 1610, the Dominican priest Francisco Blancas de San Jose published the "Arte y reglas de la lengua tagala" (which was subsequently revised with two editions in 1752 and 1832) in Bataan. In 1613, the Franciscan priest Pedro de San Buenaventura published the first Tagalog dictionary, his "Vocabulario de la lengua tagala" in Pila, Laguna.
The first substantial dictionary of the Tagalog language was written by the Czech Jesuit missionary Pablo Clain in the beginning of the 18th century. Clain spoke Tagalog and used it actively in several of his books. He prepared the dictionary, which he later passed over to Francisco Jansens and José Hernandez. Further compilation of his substantial work was prepared by P. Juan de Noceda and P. Pedro de Sanlucar and published as "Vocabulario de la lengua tagala" in Manila in 1754 and then repeatedly reedited, with the last edition being in 2013 in Manila.
Other works include the "Arte de la lengua tagala y manual tagalog para la administración de los Santos Sacramentos" (1850), in addition to early studies of the language.
The indigenous poet Francisco Baltazar (1788–1862) is regarded as the foremost Tagalog writer, his most notable work being the early 19th-century epic "Florante at Laura".
Tagalog differs from its Central Philippine counterparts in its treatment of the Proto-Philippine schwa vowel. In most Bikol and Visayan languages, this sound merged with /u/ and [o]; in Tagalog, it merged with /i/. For example, the Proto-Philippine word for "adhere, stick" is Tagalog "dikít" but Visayan and Bikol "dukot".
Proto-Philippine *r, *j, and *z merged with /d/, which becomes [l] between vowels. The Proto-Philippine words for "name" and "kiss" became Tagalog "ngalan" and "halík".
Proto-Philippine *R merged with /ɡ/. The words for "water" and "blood" became Tagalog "tubig" and "dugô".
Tagalog was declared the official language by the first revolutionary constitution in the Philippines, the Biak-na-Bato Constitution of 1897.
In 1935, the Philippine constitution designated English and Spanish as official languages, but mandated the development and adoption of a common national language based on one of the existing native languages. After study and deliberation, the National Language Institute, a committee of seven members representing various regions of the Philippines, chose Tagalog as the basis for the evolution and adoption of the national language. President Manuel L. Quezon proclaimed this selection on December 30, 1937, and in 1939 renamed the proposed Tagalog-based national language "Wikang Pambansâ" (national language). Under the Japanese puppet government during World War II, Tagalog as a national language was strongly promoted; the 1943 Constitution specified: "The government shall take steps toward the development and propagation of Tagalog as the national language."
In 1959, the language was further renamed as "Pilipino". Along with English, the national language has had official status under the 1973 constitution (as "Pilipino") and the present 1987 constitution (as Filipino).
The adoption of Tagalog in 1937 as the basis for a national language was not without controversy. Instead of specifying Tagalog, the national language was designated "Wikang Pambansâ" ("National Language") in 1939. Twenty years later, in 1959, it was renamed by then Secretary of Education José Romero as "Pilipino" to give it a national rather than ethnic label and connotation. The change of name did not, however, result in acceptance among non-Tagalogs, especially Cebuanos, who had not accepted the selection.
The national language issue was revived once more during the 1971 Constitutional Convention. The majority of the delegates were even in favor of scrapping the idea of a "national language" altogether. A compromise solution was worked out—a "universalist" approach to the national language, to be called "Filipino" rather than "Pilipino". The 1973 constitution makes no mention of Tagalog. When a new constitution was drawn up in 1987, it named Filipino as the national language. The constitution specified that as the Filipino language evolves, it shall be further developed and enriched on the basis of existing Philippine and other languages. However, more than two decades after the institution of the "universalist" approach, there seems to be little if any difference between Tagalog and Filipino.
Many of the older generation in the Philippines feel that the replacement of English by Tagalog in the popular visual media has had dire economic effects regarding the competitiveness of the Philippines in trade and overseas remittances.
Upon the issuance of "Executive Order No. 134", Tagalog was declared the basis of the National Language. On 12 April 1940, "Executive Order No. 263" was issued, ordering the teaching of the national language in all public and private schools in the country.
Article XIV, Section 6 of the 1987 Constitution of the Philippines specifies, in part:
Under Section 7, however:
In 2009, the Department of Education promulgated an order institutionalizing a system of mother-tongue based multilingual education ("MLE"), wherein instruction is conducted primarily in a student's mother tongue (one of the various regional Philippine languages) until at least grade three, with additional languages such as Filipino and English being introduced as separate subjects no earlier than grade two. In secondary school, Filipino and English become the primary languages of instruction, with the learner's first language taking on an auxiliary role. After pilot tests in selected schools, the MLE program was implemented nationwide from School Year (SY) 2012–2013.
Tagalog is the first language of a quarter of the population of the Philippines (particularly in Central and Southern Luzon) and the second language for the majority.
According to the Philippine Statistics Authority, as of 2014 there were 100 million people living in the Philippines, where the vast majority have some basic level of understanding of the language. The Tagalog homeland, Katagalugan, covers roughly much of the central to southern parts of the island of Luzon—particularly in Aurora, Bataan, Batangas, Bulacan, Cavite, Laguna, Metro Manila, Nueva Ecija, Quezon, Rizal, and Zambales. Tagalog is also spoken natively by inhabitants living on the islands of Marinduque and Mindoro, as well as Palawan to a lesser extent. Significant minorities are found in the other Central Luzon provinces of Pampanga and Tarlac, Ambos Camarines in Bicol Region, and the Cordillera city of Baguio. Tagalog is also the predominant language of Cotabato City in Mindanao, making it the only place outside of Luzon with a native Tagalog speaking majority.
At the 2000 Philippine Census, Tagalog was spoken by approximately 57.3 million Filipinos, or 96% of the household population able to attend school; slightly over 22 million, or 28% of the total Philippine population, spoke it as a native language.
The following regions and provinces of the Philippines are majority Tagalog-speaking (from north to south):
Tagalog speakers are also found in other parts of the Philippines, and through its standardized form of Filipino, the language serves as the national "lingua franca" of the country.
Tagalog also serves as the common language among Overseas Filipinos, though its use overseas is usually limited to communication between Filipino ethnic groups. The largest concentration of Tagalog speakers outside the Philippines is found in the United States, where in 2013, the U.S. Census Bureau reported (based on data collected in 2011) that it was the fourth most-spoken non-English language at home with almost 1.6 million speakers, behind Spanish, French (including Patois, Cajun, Creole), and Chinese (with figures for Cantonese and Mandarin combined). In urban areas, Tagalog ranked as the third most spoken non-English language, behind Spanish and Chinese varieties but ahead of French. Other countries with significant concentrations of overseas Filipinos and Tagalog speakers include Saudi Arabia, Canada, United Arab Emirates, Kuwait, and Malaysia.
Tagalog is a Central Philippine language within the Austronesian language family. Being Malayo-Polynesian, it is related to other Austronesian languages, such as Malagasy, Javanese, Malay (Malaysian and Indonesian), Tetum (of Timor), and Yami (of Taiwan). It is closely related to the languages spoken in the Bicol Region and the Visayas islands, such as the Bikol group and the Visayan group, including Waray-Waray, Hiligaynon and Cebuano.
At present, no comprehensive dialectology has been done in the Tagalog-speaking regions, though there have been descriptions in the form of dictionaries and grammars of various Tagalog dialects. Ethnologue lists Manila, Lubang, Marinduque, Bataan (Western Central Luzon), Batangas, Bulacan (Eastern Central Luzon), Tanay-Paete (Rizal-Laguna), and Tayabas (Quezon and Aurora) as dialects of Tagalog; however, there appear to be four main dialects, of which the aforementioned are a part: Northern (exemplified by the Bulacan dialect), Central (including Manila), Southern (exemplified by Batangas), and Marinduque.
Some example of dialectal differences are:
Perhaps the most divergent Tagalog dialects are those spoken in Marinduque. Linguist Rosa Soberano identifies two dialects, western and eastern, with the former being closer to the Tagalog dialects spoken in the provinces of Batangas and Quezon.
One example is the verb conjugation paradigms. While some of the affixes are different, Marinduque also preserves the imperative affixes, also found in Visayan and Bikol languages, that had mostly disappeared from other Tagalog dialects by the early 20th century; they have since merged with the infinitive.
Northern and central dialects form the basis for the national language.
The Tagalog language also features pronunciations unique to some parts of the Tagalog-speaking regions. For example, in some parts of Manila a strong pronunciation of "i" exists, and "o" and "u" are switched, so that "gising" (to wake) is pronounced "giseng" with a strong "e", and "tagu-taguan" (hide-and-seek) is pronounced "tago-tagoan" with a mild "o".
Batangas Tagalog boasts the most distinctive accent in Tagalog compared to the more Hispanized northern accents of the language. The Batangas accent has been featured in film and television and Filipino actor Leo Martinez speaks with this accent. Martinez's accent, however, will quickly be recognized by native Batangueños as representative of the accent in western Batangas which is milder compared to that used in the eastern part of the province.
Bulacan Tagalog retains more archaic vocabulary and an accent reminiscent of Tagalog as spoken during the Spanish period.
Tayabas Tagalog, spoken in Quezon and Aurora, has a unique accent. Quezon's Tagalog also has several unique words, and each town has a different tone, as in Sariaya, Atimonan, and Gumaca.
The Cavite accent, specifically in the lowland part of the province, is a mix of deep Tagalog and Chavacano, a language also spoken in Zamboanga. Upland Cavite, as in the municipalities of Alfonso and Magallanes and in Tagaytay City, uses an accent comparable to that of western Batangas, owing to its proximity.
Laguna also has a different set of accents: the municipality of Alaminos and the city of San Pablo have accents comparable to that of eastern Batangas, while the northern parts of Laguna, such as Biñan and San Pedro, use an accent comparable to Manila Tagalog.
Nueva Ecija's accent is like Bulacan's, but with different intonations. Tarlac also has this accent.
"Taglish" and "Englog" are names given to a mix of English and Tagalog. The amount of English vs. Tagalog varies from the occasional use of English loan words to changing language in mid-sentence. Such code-switching is prevalent throughout the Philippines and in various languages of the Philippines other than Tagalog.
Code-mixing also entails the use of foreign words that are "Filipinized" by reforming them using Filipino rules, such as verb conjugations. Users typically use Filipino or English words, whichever comes to mind first or whichever is easier to use.
City-dwellers are more likely to do this.
The practice is common in television, radio, and print media as well. Advertisements from companies like Wells Fargo, Wal-Mart, Albertsons, McDonald's, and Western Union have contained Taglish.
Tagalog has 33 phonemes: 19 of them are consonants and 14 are vowels. Syllable structure is relatively simple, being maximally CrVC, where Cr only occurs in borrowed words such as "trak" "truck" or "sombréro" "hat".
Tagalog has ten simple vowels, five long and five short, and four diphthongs. Before it appeared in the area north of the Pasig river, Tagalog had three vowel qualities: /a/, /i/, and /u/. This inventory was later expanded to five with the introduction of words from central and northern Philippine languages, such as Kapampangan, Pangasinan, and Ilocano, as well as Spanish words.
Nevertheless, simplification of the vowel pairs [o]~[u] and [ɛ]~[i] is likely to take place, especially in some registers of Tagalog as a second language, in remote locations, and among working-class speakers.
The four diphthongs are /aj/, /uj/, /aw/, and /iw/. Long vowels are not written apart from pedagogical texts, where an acute accent is used: "á é í ó ú".
The table above shows all the possible realizations for each of the five vowel sounds depending on the speaker's origin or proficiency. The five general vowels are in bold.
Below is a chart of Tagalog consonants. All the stops are unaspirated. The velar nasal occurs in all positions including at the beginning of a word. Loanword variants using these phonemes are italicized inside the angle brackets.
Glottal stop is not indicated. Glottal stops are most likely to occur when:
Stress is a distinctive feature in Tagalog. Primary stress occurs on either the final or the penultimate syllable of a word. Vowel lengthening accompanies primary or secondary stress except when stress occurs at the end of a word.
Tagalog words are often distinguished from one another by the position of the stress and/or the presence of a final glottal stop. In formal or academic settings, stress placement and the glottal stop are indicated by a diacritic ("tuldík") above the final vowel. The penultimate primary stress position ("malumay") is the default stress type and so is left unwritten except in dictionaries.
Tagalog, like other Philippine languages today, is written using the Latin alphabet. Prior to the arrival of the Spanish in 1521 and the beginning of their colonization in 1565, Tagalog was written in an abugida—or alphasyllabary—called Baybayin. This system of writing gradually gave way to the use and propagation of the Latin alphabet as introduced by the Spanish. As the Spanish began to record and create grammars and dictionaries for the various languages of the Philippine archipelago, they adopted systems of writing that closely followed the orthographic customs of the Spanish language and refined them over the years. Until the first half of the 20th century, most Philippine languages were widely written in a variety of ways based on Spanish orthography.
In the late 19th century, a number of educated Filipinos began proposing revisions to the spelling system used for Tagalog at the time. In 1884, Filipino doctor and student of languages Trinidad Pardo de Tavera published his study on the ancient Tagalog script, "Contribucion para el Estudio de los Antiguos Alfabetos Filipinos", and in 1887 published his essay "El Sanscrito en la lengua Tagalog", which made use of a new writing system he had developed. Meanwhile, Jose Rizal, inspired by Pardo de Tavera's 1884 work, also began developing a new system of orthography (unaware at first of Pardo de Tavera's own orthography). A major noticeable change in these proposed orthographies was the use of the letter ⟨k⟩ rather than ⟨c⟩ and ⟨q⟩ to represent the phoneme /k/.
In 1889, the new bilingual Spanish-Tagalog "La España Oriental" newspaper, of which Isabelo de los Reyes was an editor, began publishing using the new orthography stating in a footnote that it would "use the orthography recently introduced by ... learned Orientalis". This new orthography, while having its supporters, was also not initially accepted by several writers. Soon after the first issue of "La España", Pascual H. Poblete's "Revista Católica de Filipina" began a series of articles attacking the new orthography and its proponents. A fellow writer, Pablo Tecson was also critical. Among the attacks was the use of the letters "k" and "w" as they were deemed to be of German origin and thus its proponents were deemed as "unpatriotic". The publishers of these two papers would eventually merge as "La Lectura Popular" in January 1890 and would eventually make use of both spelling systems in its articles. Pedro Laktaw, a schoolteacher, published the first Spanish-Tagalog dictionary using the new orthography in 1890.
In April 1890, Jose Rizal authored the article "Sobre la Nueva Ortografia de la Lengua Tagalog" in the Madrid-based periodical La Solidaridad. In it, he addressed the criticisms of the new writing system by writers like Poblete and Tecson, as well as what he saw as the simplicity of the new orthography. Rizal described the orthography promoted by Pardo de Tavera as "more perfect" than what he himself had developed. The new orthography was, however, not broadly adopted initially and was used inconsistently in the bilingual periodicals of Manila until the early 20th century. The revolutionary society Kataás-taasan, Kagalang-galang Katipunan ng̃ mg̃á Anak ng̃ Bayan, or Katipunan, made use of the k-orthography, and the letter k featured prominently on many of its flags and insignias.
In 1937, Tagalog was selected to serve as basis for the country's national language. In 1940, the "Balarílà ng Wikang Pambansâ" of grammarian Lope K. Santos introduced the Abakada alphabet. This alphabet consists of 20 letters and became the standard alphabet of the national language. The orthography as used by Tagalog would eventually influence and spread to the systems of writing used by other Philippine languages (which had been using variants of the Spanish-based system of writing). In 1987, the ABAKADA was dropped in favor of the expanded Filipino alphabet.
Tagalog was written in an abugida (alphasyllabary) called Baybayin prior to the Spanish colonial period in the Philippines, in the 16th century. This particular writing system was composed of symbols representing three vowels and 14 consonants. Belonging to the Brahmic family of scripts, it shares similarities with the Old Kawi script of Java and is believed to be descended from the script used by the Bugis in Sulawesi.
Although it enjoyed a relatively high level of literacy, Baybayin gradually fell into disuse in favor of the Latin alphabet taught by the Spaniards during their rule.
There has been confusion about how to use Baybayin, which is actually an abugida, or alphasyllabary, rather than an alphabet. Not every letter in the Latin alphabet is represented by one of those in the Baybayin alphasyllabary. Rather than letters being put together to make sounds as in Western languages, Baybayin uses symbols to represent syllables.
A "kudlit" resembling an apostrophe is used above or below a symbol to change the vowel sound after its consonant. If the kudlit is used above, the vowel is an "E" or "I" sound. If the kudlit is used below, the vowel is an "O" or "U" sound. A special kudlit was later added by Spanish missionaries: a cross placed below the symbol removes the vowel sound altogether, leaving a bare consonant. Previously, a consonant without a following vowel was simply left out (for example, "bundok" being rendered as "budo"), forcing the reader to use context when reading such words.
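The pre-virama convention just described can be sketched as a toy transliteration rule (an illustrative sketch in romanized form, not actual Baybayin glyphs; its handling of digraphs such as "ng" is deliberately naive):

```python
import re

def pre_virama(word):
    """Sketch of pre-virama Baybayin spelling: keep only CV/V units,
    dropping any consonant not followed by a vowel."""
    return "".join(m.group(0) for m in re.finditer(r"[^aeiou]?[aeiou]", word))

print(pre_virama("bundok"))  # -> "budo", the example given above
```

The syllable-final "n" and "k" have no bare-consonant sign in the pre-Hispanic system, so they simply vanish from the written form.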
Baybayin is encoded in Unicode version 3.2 in the range U+1700–U+171F under the name "Tagalog".
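The block can be inspected with Python's standard `unicodedata` module (a minimal sketch; which code points are assigned depends on the Unicode data bundled with the interpreter):

```python
import unicodedata

# List the code points of the "Tagalog" (Baybayin) block, U+1700-U+171F.
for cp in range(0x1700, 0x1720):
    try:
        print(f"U+{cp:04X} {unicodedata.name(chr(cp))}")
    except ValueError:
        print(f"U+{cp:04X} (unassigned in this Unicode version)")
```

The first entry printed is U+1700 TAGALOG LETTER A.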
Until the first half of the 20th century, Tagalog was widely written in a variety of ways based on Spanish orthography consisting of 32 letters called 'ABECEDARIO' (Spanish for "alphabet"):
When the national language was based on Tagalog, grammarian Lope K. Santos introduced a new alphabet consisting of 20 letters called "ABAKADA" in school grammar books called "balarilà":
In 1987, the Department of Education, Culture and Sports issued a memo stating that the Philippine alphabet had changed from the Pilipino-Tagalog Abakada version to a new 28-letter alphabet to make room for loans, especially family names from Spanish and English:
The genitive marker "ng" and the plural marker "mga" (e.g. "Iyan ang mga damit ko." (Those are my clothes)) are abbreviations that are pronounced "nang" and "mangá" . "Ng", in most cases, roughly translates to "of" (ex. "Siya ay kapatid ng nanay ko." She is the sibling "of" my mother) while "nang" usually means "when" or can describe how something is done or to what extent (equivalent to the suffix "-ly" in English adverbs), among other uses.
In the first example, "nang" is used in lieu of the word "noong" (when; "Noong si Hudas ay madulas"). In the second, "nang" describes that the person woke up ("gumising") early ("maaga"); "gumising nang maaga". In the third, "nang" describes to what extent Juan improved ("gumaling"), which is "greatly" ("nang todo"). In the latter two examples, the ligature "na" and its variants "-ng" and "-g" may also be used ("Gumising na maaga/Maagang gumising"; "Gumaling na todo/Todong gumaling").
The longer "nang" may also have other uses, such as a ligature that joins a repeated word:
The words "pô/hô" and "opò/ohò" are traditionally used as polite iterations of the affirmative ""oo"" ("yes"). It is generally used when addressing elders or superiors such as bosses or teachers.
"Pô" and "opò" are specifically used to denote a high level of respect when addressing older persons of close affinity like parents, relatives, teachers and family friends. "Hô" and "ohò" are generally used to politely address older neighbours, strangers, public officials, bosses and nannies, and may suggest a distance in societal relationship and respect determined by the addressee's social rank and not their age. However, "pô" and "opò" can be used in any case in order to express an elevation of respect.
Used in the affirmative:
"Pô/Hô" may also be used in negation.
Tagalog vocabulary is composed mostly of words of native Austronesian origin, such as most of the words ending in the diphthong -iw (e.g. "saliw") and words that exhibit reduplication (e.g. "halo-halo", "patpat"). However, it has a significant number of Spanish loanwords; Spanish is the language that has bequeathed the most loanwords to Tagalog.
In pre-Hispanic times, Trade Malay was widely known and spoken throughout Maritime Southeast Asia.
Tagalog also includes many loanwords from English, Indian languages (Sanskrit and Tamil), Chinese languages (Hokkien, Cantonese, Mandarin), Japanese, Arabic and Persian.
Due to trade with Mexico via the Manila galleons from the 16th to the 19th centuries, many words from Nahuatl (Aztec) and Castilian (Spanish) were introduced to Tagalog.
The Philippines has long been a melting pot of nations. The islands have been subject to different influences and a meeting point of numerous migrations since the early prehistoric origins of trading activities, especially from the time of the Neolithic Period, the Silk Road, the Tang Dynasty, the Ming Dynasty, the Ryukyu Kingdom, the Spice Route and the Manila Galleon trading periods. This means that the evolution of the language is difficult to reconstruct (although many theories exist).
English has borrowed some words from Tagalog, such as abaca, barong, balisong, boondocks, jeepney, Manila hemp, pancit, ylang-ylang, and yaya, although the vast majority of these borrowed words are only used in the Philippines as part of the vocabularies of Philippine English.
Tagalog has contributed several words to Philippine Spanish, such as "barangay" (from "balan͠gay", meaning "barrio"), as well as "abacá", "cogon", "palay", and "dalaga".
Below is a chart of Tagalog and a number of other Austronesian languages comparing thirteen words.
Religious literature remains one of the most dynamic contributors to Tagalog literature. The first Bible in Tagalog, then called "Ang Biblia" ("the Bible") and now called "Ang Dating Biblia" ("the Old Bible"), was published in 1905. In 1970, the Philippine Bible Society translated the Bible into modern Tagalog. Even before the Second Vatican Council, devotional materials in Tagalog had been in circulation. There are at least four circulating Tagalog translations of the Bible.
When the Second Vatican Council (specifically the "Sacrosanctum Concilium") permitted the universal prayers to be translated into vernacular languages, the Catholic Bishops' Conference of the Philippines was one of the first to translate the Roman Missal into Tagalog. The Roman Missal in Tagalog was published as early as 1982.
Jehovah's Witnesses were printing Tagalog literature at least as early as 1941, and "The Watchtower" (the primary magazine of Jehovah's Witnesses) has been published in Tagalog since at least the 1950s. New releases are now regularly issued simultaneously in a number of languages, including Tagalog. The official website of Jehovah's Witnesses also has some publications available online in Tagalog. The revised Bible edition, the "New World Translation of the Holy Scriptures", was released in Tagalog in 2019, and it is distributed without charge in both printed and online versions.
Tagalog is quite a stable language, and very few revisions have been made to Catholic Bible translations. Also, as Protestantism in the Philippines is relatively young, liturgical prayers tend to be more ecumenical.
In Tagalog, the Lord's Prayer is exclusively known by its incipit, "Amá Namin" (literally, "Our Father").
This is Article 1 of the Universal Declaration of Human Rights ("Pángkalahatáng Pagpapahayag ng Karapatáng Pantao").
The numbers ("mga bilang") in the Tagalog language come in two sets. The first set consists of native Tagalog words; the other consists of Spanish loanwords. (This may be compared to other East Asian languages, except with the second set of numbers borrowed from Spanish instead of Chinese.) For example, when a person refers to the number "seven", it can be translated into Tagalog as "pito" or "siyete" (Spanish: "siete").
Months and days in Tagalog are also localised forms of Spanish months and days. "Month" in Tagalog is "buwán" (also the word for moon) and "day" is "araw" (the word also means sun). Unlike Spanish, however, months and days in Tagalog are always capitalised.
Time expressions in Tagalog are also Tagalized forms of the corresponding Spanish. "Time" in Tagalog is "panahon", or more commonly "oras". Unlike Spanish and English, times in Tagalog are capitalized whenever they appear in a sentence.
*Pronouns such as "niyo" (2nd person plural) and "nila" (3rd person plural) are used to address a single person in polite or formal language. See Tagalog grammar.
"Ang hindî marunong lumingón sa pinánggalingan ay hindî makaráratíng sa paroroonan." (José Rizal) One who knows not how to look back from whence he came will never get to where he is going.
"Unang kagat, tinapay pa rin." It means: "First bite, still bread," or "All fluff, no substance."
"Tao ka nang humarap, bilang tao kitang haharapin." (A proverb in Southern Tagalog that stresses the significance of sincerity in Tagalog communities. It means: "As a human you reach out to me; I will treat you as a human and never act as a traitor.")
"Hulí man daw (raw) at magalíng, nakáhahábol pa rin." If one is behind but capable, one will still be able to catch up.
"Magbirô ka na sa lasíng, huwág lang sa bagong gising." Make fun of someone drunk, if you must, but never one who has just awakened.
"Aanhín pa ang damó kung patáy na ang kabayo?" What use is the grass if the horse is already dead?
"Ang sakít ng kalingkingan, ramdám ng buóng katawán." The pain in the pinkie is felt by the whole body.
"Nasa hulí ang pagsisisi." Regret is always in the end.
"Pagkáhabà-habà man ng prusisyón, sa simbahan pa rin ang tulóy." The procession may stretch on and on, but it still ends up at the church.
"Kung 'dî mádaán sa santóng dasalan, daanin sa santóng paspasan." If it cannot be got through holy prayer, get it through blessed force.
Tokamak
A tokamak is a device which uses a powerful magnetic field to confine a hot plasma in the shape of a torus. The tokamak is one of several types of magnetic confinement devices being developed to produce controlled thermonuclear fusion power, and it is the leading candidate for a practical fusion reactor.
Tokamaks were initially conceptualized in the 1950s by Soviet physicists Igor Tamm and Andrei Sakharov, inspired by a letter from Oleg Lavrentiev. The first working tokamak is attributed to the work of Natan Yavlinsky on the T-1. It had been demonstrated that a stable plasma equilibrium requires magnetic field lines that wind around the torus in a helix. Devices like the z-pinch and stellarator had attempted this, but demonstrated serious instabilities. It was the development of the concept now known as the safety factor (labelled "q" in mathematical notation) that guided tokamak development; by arranging the reactor so this critical factor "q" was always greater than 1, the tokamaks strongly suppressed the instabilities which plagued earlier designs.
The first tokamak, the T-1, began operation in 1958. By the mid-1960s, the tokamak designs began to show greatly improved performance. Initial results were released in 1965, but were ignored; Lyman Spitzer dismissed them out of hand after noting potential problems in their system for measuring temperatures. A second set of results was published in 1968, this time claiming performance far in advance of any other machine, and was likewise considered unreliable. This led to the invitation of a delegation from the United Kingdom to make their own measurements. These confirmed the Soviet results, and their 1969 publication resulted in a stampede of tokamak construction.
By the mid-1970s, dozens of tokamaks were in use around the world. By the late 1970s, these machines had reached all of the conditions needed for practical fusion, although not at the same time nor in a single reactor. With the goal of breakeven (a fusion energy gain factor equal to 1) now in sight, a new series of machines were designed that would run on a fusion fuel of deuterium and tritium. These machines, notably the Joint European Torus (JET), Tokamak Fusion Test Reactor (TFTR) and JT-60, had the explicit goal of reaching breakeven.
Instead, these machines demonstrated new problems that limited their performance. Solving these would require a much larger and more expensive machine, beyond the abilities of any one country. After an initial agreement between Ronald Reagan and Mikhail Gorbachev in November 1985, the International Thermonuclear Experimental Reactor (ITER) effort emerged and remains the primary international effort to develop practical fusion power. Many smaller designs, and offshoots like the spherical tokamak, continue to be used to investigate performance parameters and other issues. JET remains the record holder for fusion output, having reached 16 MW of output for 24 MW of input heating power.
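As a quick sanity check on these figures, the fusion energy gain factor mentioned above is simply the ratio of fusion power out to heating power in. The sketch below (the function name `fusion_gain` is illustrative, not from any library) applies it to the JET numbers quoted in the text:

```python
def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
    """Fusion energy gain factor Q = fusion power out / heating power in."""
    return p_fusion_mw / p_heating_mw

# JET's record shot (figures from the text): 16 MW out for 24 MW of heating.
q_jet = fusion_gain(16.0, 24.0)
print(f"Q = {q_jet:.2f}")  # Q = 0.67, still short of breakeven (Q = 1)
```

A Q of about 0.67 shows why JET's record, while a milestone, left breakeven (Q = 1) as an open goal.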
The word tokamak is a transliteration of the Russian word "токамак", an acronym of either "тороидальная камера с магнитными катушками" ("toroidal chamber with magnetic coils") or "тороидальная камера с аксиальным магнитным полем" ("toroidal chamber with axial magnetic field").
The term was created in 1957 by Igor Golovin, the vice-director of the Laboratory of Measuring Apparatus of the Academy of Sciences, today's Kurchatov Institute. A similar term, "tokomag", was also proposed for a time.
In 1934, Mark Oliphant, Paul Harteck and Ernest Rutherford were the first to achieve fusion on Earth, using a particle accelerator to shoot deuterium nuclei into a metal foil containing deuterium or other atoms. This allowed them to measure the nuclear cross section of various fusion reactions, and determined that the deuterium-deuterium reaction occurred at a lower energy than other reactions, peaking at about 100,000 electronvolts (100 keV).
Accelerator-based fusion is not practical because the reaction cross section is tiny; most of the particles in the accelerator will scatter off the fuel, not fuse with it. These scatterings cause the particles to lose energy to the point where they can no longer undergo fusion. The energy put into these particles is thus lost, and it is easy to demonstrate this is much more energy than the resulting fusion reactions can release.
To maintain fusion and produce net energy output, the bulk of the fuel must be raised to high temperatures so its atoms are constantly colliding at high speed; this gives rise to the name "thermonuclear" due to the high temperatures needed to bring it about. In 1944, Enrico Fermi calculated the reaction would be self-sustaining at about 50,000,000 K; at that temperature, the rate that energy is given off by the reactions is high enough that they heat the surrounding fuel rapidly enough to maintain the temperature against losses to the environment, continuing the reaction.
During the Manhattan Project, the first practical way to reach these temperatures was created, using an atomic bomb. In 1944, Fermi gave a talk on the physics of fusion in the context of a then-hypothetical hydrogen bomb. However, some thought had already been given to a "controlled" fusion device, and James L. Tuck and Stanislaw Ulam had attempted such using shaped charges driving a metal foil infused with deuterium, although without success.
The first attempts to build a practical fusion machine took place in the United Kingdom, where George Paget Thomson had selected the pinch effect as a promising technique in 1945. After several failed attempts to gain funding, he gave up and asked two graduate students, Stan Cousins and Alan Ware, to build a device out of surplus radar equipment. This was successfully operated in 1948, but showed no clear evidence of fusion and failed to gain the interest of the Atomic Energy Research Establishment.
In 1950, Oleg Lavrentiev, then a Red Army sergeant stationed on Sakhalin with little to do, wrote a letter to the Central Committee of the Communist Party of the Soviet Union. The letter outlined the idea of using an atomic bomb to ignite a fusion fuel, and then went on to describe a system that used electrostatic fields to contain a hot plasma in a steady state for energy production.
The letter was sent to Andrei Sakharov for comment. Sakharov noted that "the author formulates a very important and not necessarily hopeless problem", but found that his main concern with the arrangement was that the plasma would hit the electrode wires, which would demand "wide meshes and a thin current-carrying part which will have to reflect almost all incident nuclei back into the reactor. In all likelihood, this requirement is incompatible with the mechanical strength of the device."
Some indication of the importance given to Lavrentiev's letter can be seen in the speed with which it was processed: the letter was received by the Central Committee on 29 July; Sakharov sent in his review on 18 August; by October, Sakharov and Igor Tamm had completed the first detailed study of a fusion reactor; and they asked for funding to build it in January 1951.
When heated to fusion temperatures, the electrons dissociate from their atoms, resulting in a fluid of nuclei and electrons known as a plasma. Unlike electrically neutral atoms, a plasma is electrically conductive, and can, therefore, be manipulated by electrical or magnetic fields.
Sakharov's concern about the electrodes led him to consider using magnetic confinement instead of electrostatic. In the case of a magnetic field, the particles will circle around the lines of force. As the particles are moving at high speed, their resulting paths look like a helix. If one arranges a magnetic field so lines of force are parallel and close together, the particles orbiting adjacent lines may collide, and fuse.
Such a field can be created in a solenoid, a cylinder with magnets wrapped around the outside. The combined fields of the magnets create a set of parallel magnetic lines running down the length of the cylinder. This arrangement prevents the particles from moving sideways to the wall of the cylinder, but it does not prevent them from running out the end. The obvious solution to this problem is to bend the cylinder around into a donut shape, or torus, so that the lines form a series of continual rings. In this arrangement, the particles circle endlessly.
Sakharov discussed the concept with Igor Tamm, and by the end of October 1950 the two had written a proposal and sent it to Igor Kurchatov, the director of the atomic bomb project within the USSR, and his deputy, Igor Golovin. However, this initial proposal ignored a fundamental problem; when arranged along a straight solenoid, the external magnets are evenly spaced, but when bent around into a torus, they are closer together on the inside of the ring than the outside. This leads to uneven forces that cause the particles to drift away from their magnetic lines.
During visits to the Laboratory of Measuring Instruments of the USSR Academy of Sciences (LIPAN), the Soviet nuclear research centre, Sakharov suggested two possible solutions to this problem. One was to suspend a current-carrying ring in the centre of the torus. The current in the ring would produce a magnetic field that would mix with the one from the magnets on the outside. The resulting field would be twisted into a helix, so that any given particle would find itself repeatedly on the outside, then inside, of the torus. The drifts caused by the uneven fields are in opposite directions on the inside and outside, so over the course of multiple orbits around the long axis of the torus, the opposite drifts would cancel out. Alternately, he suggested using an external magnet to induce a current in the plasma itself, instead of a separate metal ring, which would have the same effect.
In January 1951, Kurchatov arranged a meeting at LIPAN to consider Sakharov's concepts. They found widespread interest and support, and in February a report on the topic was forwarded to Lavrentiy Beria, who oversaw the atomic efforts in the USSR. For a time, nothing was heard back.
On 25 March 1951, Argentine President Juan Perón announced that a former German scientist, Ronald Richter, had succeeded in producing fusion at a laboratory scale as part of what is now known as the Huemul Project. Scientists around the world were excited by the announcement, but soon concluded it was not true; simple calculations showed that his experimental setup could not produce enough energy to heat the fusion fuel to the needed temperatures.
Although dismissed by nuclear researchers, the widespread news coverage meant politicians were suddenly aware of, and receptive to, fusion research. In the UK, Thomson, who had been repeatedly refused, was suddenly granted considerable funding. Over the next months, two projects based on the pinch system were up and running. In the US, Lyman Spitzer read the Huemul story, realized it was false, and set about designing a machine that would work. In May he was awarded $50,000 to begin research on his stellarator concept. Jim Tuck had returned to the UK briefly and saw Thomson's pinch machines. When he returned to Los Alamos he also applied for funding at the same time as Spitzer, but was turned down. Instead, he was given a matching $50,000 directly from the Los Alamos budget.
Similar events occurred in the USSR. In mid-April, Dmitri Efremov of the Scientific Research Institute of Electrophysical Apparatus stormed into Kurchatov's study with a magazine containing a story about Richter's work, demanding to know why they were beaten by the Argentines. Kurchatov immediately contacted Beria with a proposal to set up a separate fusion research laboratory with Lev Artsimovich as director. Only days later, on 5 May, the proposal had been signed by Joseph Stalin.
By October, Sakharov and Tamm had completed a much more detailed consideration of their original proposal, specifying both the major radius of the device (that of the torus as a whole) and its minor radius (that of the interior of the cylinder). The proposal suggested the system could produce tritium, or breed U-233, on a daily basis.
As the idea was further developed, it was realized that a current in the plasma could create a field that was strong enough to confine the plasma as well, removing the need for the external magnets. At this point, the Soviet researchers had re-invented the pinch system being developed in the UK, although they had come to this design from a very different starting point.
Once the idea of using the pinch effect for confinement had been proposed, a much simpler solution became evident. Instead of a large toroid, one could simply induce the current into a linear tube, which could cause the plasma within to collapse down into a filament. This had a huge advantage; the current in the plasma would heat it through normal resistive heating, but this would not heat the plasma to fusion temperatures. However, as the plasma collapsed, the adiabatic process would result in the temperature rising dramatically, more than enough for fusion. With this development, only Golovin and Natan Yavlinsky continued considering the more static toroidal arrangement.
On 4 July 1952, Nikolai Filippov's group measured neutrons being released from a linear pinch machine. Lev Artsimovich demanded that they check everything before concluding fusion had occurred, and during these checks, they found that the neutrons were not from fusion at all. This same linear arrangement had also occurred to researchers in the UK and US, and their machines showed the same behaviour. But the great secrecy surrounding the research meant none of the groups was aware that the others were working on it, let alone having the identical problem.
After much study, it was found the neutrons were caused by instabilities in the plasma. There were two common types of instability, the "sausage" that was seen primarily in linear machines, and the "kink" which was most common in the toroidal machines. Groups in all three countries began studying the formation of these instabilities and potential ways to address them. Important contributions to the field were made by Martin David Kruskal and Martin Schwarzschild in the US, and Shafranov in the USSR.
One idea that came from these studies became known as the "stabilized pinch". This concept added additional magnets to the outside of the chamber, which created a field that would be present in the plasma before the pinch discharge. In most concepts, the external field was relatively weak, and because a plasma is diamagnetic, it penetrated only the outer areas of the plasma. When the pinch discharge occurred and the plasma quickly contracted, this field became "frozen in" to the resulting filament, creating a strong field in its outer layers. In the US, this was known as "giving the plasma a backbone."
Sakharov revisited his original toroidal concepts and came to a slightly different conclusion about how to stabilize the plasma. The layout would be the same as the stabilized pinch concept, but the role of the two fields would be reversed. Instead of weak external fields providing stabilization and a strong pinch current responsible for confinement, in the new layout, the external magnets would be much more powerful in order to provide the majority of confinement, while the current would be much smaller and responsible for the stabilizing effect.
In 1955, with the linear approaches still subject to instability, the first toroidal device was built in the USSR. TMP was a classic pinch machine, similar to models in the UK and US of the same era. The vacuum chamber was made of ceramic, and the spectra of the discharges showed silica, meaning the plasma was not perfectly confined by the magnetic field and was hitting the walls of the chamber. Two smaller machines followed, using copper shells. The conductive shells were intended to help stabilize the plasma, but were not completely successful in any of the machines that tried it.
With progress apparently stalled, in 1955 Kurchatov called an All Union conference of Soviet researchers with the ultimate aim of opening up fusion research within the USSR. In April 1956, Kurchatov travelled to the UK as part of a widely publicized visit by Nikita Khrushchev and Nikolai Bulganin. He offered to give a talk at the Atomic Energy Research Establishment at the former RAF Harwell, where he shocked the hosts by presenting a detailed historical overview of the Soviet fusion efforts. He took time to note, in particular, the neutrons seen in early machines and warned that neutrons did not mean fusion.
Unknown to Kurchatov, the British ZETA stabilized pinch machine was being built at the far end of the former runway. ZETA was, by far, the largest and most powerful fusion machine to date. Supported by experiments on earlier designs that had been modified to include stabilization, ZETA was intended to produce low levels of fusion reactions. This was apparently a great success, and in January 1958 they announced that fusion had been achieved in ZETA, based on the release of neutrons and measurements of the plasma temperature.
Vitaly Shafranov and Stanislav Braginskii examined the news reports and attempted to figure out how it worked. One possibility they considered was the use of weak "frozen in" fields, but rejected this, believing the fields would not last long enough. They then concluded ZETA was essentially identical to the devices they had been studying, with strong external fields.
By this time, Soviet researchers had decided to build a larger toroidal machine along the lines suggested by Sakharov. In particular, their design considered one important point found in Kruskal's and Shafranov's works; if the helical path of the particles made them circulate around the plasma's circumference more rapidly than they circulated the long axis of the torus, the kink instability would be strongly suppressed.
Today this basic concept is known as the "safety factor". The ratio of the number of times the particle orbits the major axis compared to the minor axis is denoted "q", and the "Kruskal-Shafranov Limit" stated that the kink will be suppressed as long as "q" > 1. This path is controlled by the relative strengths of the external magnets compared to the field created by the internal current. To have "q" > 1, the external magnets must be much more powerful, or alternatively, the internal current has to be reduced.
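The relationship between the fields and "q" can be sketched numerically. For a circular plasma at large aspect ratio, the standard textbook approximation is q ≈ (r/R) × (B_toroidal/B_poloidal). The function name and field values below are illustrative, not historical data from any particular machine:

```python
def safety_factor(r_minor: float, r_major: float,
                  b_toroidal: float, b_poloidal: float) -> float:
    """Large-aspect-ratio approximation for a circular plasma:
    q ~ (r/R) * (B_toroidal / B_poloidal)."""
    return (r_minor / r_major) * (b_toroidal / b_poloidal)

# Illustrative numbers: a strong external toroidal field with a
# comparatively weak poloidal field from the plasma current.
q = safety_factor(r_minor=0.3, r_major=1.0, b_toroidal=3.0, b_poloidal=0.5)
print(f"q = {q:.1f}, kink-stable: {q > 1}")  # q = 1.8, kink-stable: True
```

Halving the plasma current (doubling B_poloidal's denominator effect) doubles "q", which is exactly the design lever the text describes: stronger external magnets or a weaker internal current.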
Following this criterion, design began on a new reactor, T-1, which today is known as the first real tokamak. T-1 used both stronger external magnets and a reduced current compared to stabilized pinch machines like ZETA. The success of the T-1 resulted in its recognition as the first working tokamak.
For his work on "powerful impulse discharges in a gas, to obtain unusually high temperatures needed for thermonuclear processes", Yavlinskii was awarded the Lenin Prize and the Stalin Prize in 1958. Yavlinskii was already preparing the design of an even larger model, later built as T-3. With the apparently successful ZETA announcement, Yavlinskii's concept was viewed very favourably.
Details of ZETA became public in a series of articles in "Nature" later in January. To Shafranov's surprise, the system did use the "frozen in" field concept. He remained sceptical, but a team at the Ioffe Institute in St. Petersburg began plans to build a similar machine known as Alpha. Only a few months later, in May, the ZETA team issued a release stating they had not achieved fusion, and that they had been misled by erroneous measurements of the plasma temperature.
T-1 began operation at the end of 1958. It demonstrated very high energy losses through radiation. This was traced to impurities in the plasma due to the vacuum system causing outgassing from the container materials. In order to explore solutions to this problem, another small device was constructed, T-2. This used an internal liner of corrugated metal that was baked to drive off trapped gases.
As part of the second Atoms for Peace meeting in Geneva in September 1958, the Soviet delegation released many papers covering their fusion research. Among them was a set of initial results on their toroidal machines, which at that point had shown nothing of note.
The "star" of the show was a large model of Spitzer's stellarator, which immediately caught the attention of the Soviets. In contrast to their designs, the stellarator produced the required twisted paths in the plasma without driving a current through it, using a series of magnets that could operate in the steady state rather than the pulses of the induction system. Kurchatov began asking Yavlinskii to change their T-3 design to a stellarator, but they convinced him that the current provided a useful second role in heating, something the stellarator lacked.
At the time of the show, the stellarator had suffered a long string of minor problems that were just being solved. Solving these revealed that the diffusion rate of the plasma was much faster than theory predicted. Similar problems were seen in all the contemporary designs, for one reason or another. The stellarator, various pinch concepts and the magnetic mirror machines in both the US and USSR all demonstrated problems that limited their confinement times.
From the first studies of controlled fusion, there was a problem lurking in the background. During the Manhattan Project, David Bohm had been part of the team working on isotopic separation of uranium. In the post-war era he continued working with plasmas in magnetic fields. Using basic theory, one would expect the plasma to diffuse across the lines of force at a rate inversely proportional to the square of the strength of the field, meaning that small increases in force would greatly improve confinement. But based on their experiments, Bohm developed an empirical formula, now known as Bohm diffusion, that suggested the rate was linear with the magnetic force, not its square.
If Bohm's formula was correct, there was no hope one could build a fusion reactor based on magnetic confinement. To confine the plasma at the temperatures needed for fusion, the magnetic field would have to be orders of magnitude greater than any known magnet. Spitzer ascribed the difference between the Bohm and classical diffusion rates to turbulence in the plasma, and believed the steady fields of the stellarator would not suffer from this problem. Various experiments at that time suggested the Bohm rate did not apply, and that the classical formula was correct.
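The gulf between the two scalings is easy to illustrate. Bohm's empirical rate is usually written D_B = k_B·T/(16·e·B); with the temperature expressed in electronvolts the charge factors cancel, leaving T/(16·B) in m²/s. The sketch below (function name illustrative) contrasts this with the classical 1/B² scaling:

```python
def bohm_diffusion(t_ev: float, b_tesla: float) -> float:
    """Bohm's empirical rate D_B = k_B*T / (16*e*B).
    With T in eV the factors of e cancel, giving T/(16*B) in m^2/s."""
    return t_ev / (16.0 * b_tesla)

# Doubling the field only halves Bohm diffusion, whereas classical
# diffusion (D ~ 1/B^2) would fall by a factor of four.
for b in (1.0, 2.0):
    print(f"B = {b} T: D_Bohm = {bohm_diffusion(100.0, b)} m^2/s")
```

Because losses shrink only linearly with B under Bohm scaling, no realistic increase in magnet strength could rescue confinement, which is precisely why the formula, if universal, doomed magnetic fusion.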
But by the early 1960s, with all of the various designs leaking plasma at a prodigious rate, Spitzer himself concluded that the Bohm scaling was an inherent quality of plasmas, and that magnetic confinement would not work. The entire field descended into what became known as "the doldrums", a period of intense pessimism.
In contrast to the other designs, the experimental tokamaks appeared to be progressing well, so well that a minor theoretical problem was now a real concern. In the presence of gravity, there is a small pressure gradient in the plasma, formerly small enough to ignore but now becoming something that had to be addressed. This led to the addition of yet another set of magnets in 1962, which produced a vertical field that offset these effects. These were a success, and by the mid-1960s the machines began to show signs that they were beating the Bohm limit.
At the 1965 Second International Atomic Energy Agency Conference on fusion at the UK's newly opened Culham Centre for Fusion Energy, Artsimovich reported that their systems were surpassing the Bohm limit by 10 times. Spitzer, reviewing the presentations, suggested that the Bohm limit may still apply; the results were within the range of experimental error of results seen on the stellarators, and the temperature measurements, based on the magnetic fields, were simply not trustworthy.
The next major international fusion meeting was held in August 1968 in Novosibirsk. By this time two additional tokamak designs had been completed, TM-2 in 1965, and T-4 in 1968. Results from T-3 had continued to improve, and similar results were coming from early tests of the new reactors. At the meeting, the Soviet delegation announced that T-3 was producing electron temperatures of 1000 eV (equivalent to 10 million degrees Celsius) and that confinement time was at least 50 times the Bohm limit.
These results were at least 10 times that of any other machine. If correct, they represented an enormous leap for the fusion community. Spitzer remained sceptical, noting that the temperature measurements were still based on the indirect calculations from the magnetic properties of the plasma. Many concluded they were due to an effect known as runaway electrons, and that the Soviets were measuring only those extremely energetic electrons and not the bulk temperature. The Soviets countered with several arguments suggesting the temperature they were measuring was Maxwellian, and the debate raged.
In the aftermath of ZETA, the UK teams began the development of new plasma diagnostic tools to provide more accurate measurements. Among these was the use of a laser to directly measure the temperature of the bulk electrons using Thomson scattering. This technique was well known and respected in the fusion community; Artsimovich had publicly called it "brilliant". Artsimovich invited Bas Pease, the head of Culham, to use their devices on the Soviet reactors. At the height of the cold war, in what is still considered a major political manoeuvre on Artsimovich's part, British physicists were allowed to visit the Kurchatov Institute, the heart of the Soviet nuclear bomb effort.
The British team, nicknamed "The Culham Five", arrived late in 1968. After a lengthy installation and calibration process, the team measured the temperatures over a period of many experimental runs. Initial results were available by August 1969; the Soviets were correct, their results were accurate. The team phoned the results home to Culham, who then passed them along in a confidential phone call to Washington. The final results were published in "Nature" in November 1969. The announcement has been described as triggering a "veritable stampede" of tokamak construction around the world.
One serious problem remained. Because the electrical current in the plasma was much lower and produced much less compression than in a pinch machine, the temperature of the plasma was limited to what resistive heating by the current could provide. First proposed in 1950, the Spitzer resistivity formula stated that the electrical resistance of a plasma falls as the temperature increases, meaning the heating rate of the plasma would slow as the devices improved and temperatures were pushed higher. Calculations demonstrated that the resulting maximum temperatures, while staying within "q" > 1, would be limited to the low millions of degrees. Artsimovich had been quick to point this out in Novosibirsk, stating that future progress would require new heating methods to be developed.
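The self-limiting nature of ohmic heating follows directly from the scaling. Spitzer resistivity falls as T^(-3/2), so at a fixed current density the heating power (η·J²) falls at the same rate. The sketch below uses only that scaling law, with the numerical constants omitted and the function name illustrative:

```python
def spitzer_resistivity_ratio(t1_ev: float, t2_ev: float) -> float:
    """Spitzer resistivity scales as T^(-3/2); return eta(t2) / eta(t1),
    constants and Coulomb-logarithm factors omitted."""
    return (t2_ev / t1_ev) ** -1.5

# Raising the electron temperature tenfold (e.g. 100 eV -> 1000 eV)
# cuts the resistivity, and hence the ohmic heating at fixed current,
# by a factor of 10^1.5, roughly 32.
ratio = spitzer_resistivity_ratio(100.0, 1000.0)
print(f"{1 / ratio:.0f}x less resistive heating")  # 32x less resistive heating
```

The hotter the plasma gets, the weaker its own heating mechanism becomes, which is why ohmic heating alone stalls in the low millions of degrees and auxiliary heating methods were needed.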
One of the people attending the Novosibirsk meeting in 1968 was Amasa Stone Bishop, one of the leaders of the US fusion program. One of the few other devices to show clear evidence of beating the Bohm limit at that time was the multipole concept. Both Lawrence Livermore and the Princeton Plasma Physics Laboratory (PPPL), home of Spitzer's stellarator, were building variations on the multipole design. While moderately successful on their own, T-3 greatly outperformed either machine. Bishop was concerned that the multipoles were redundant and thought the US should consider a tokamak of its own.
When he raised the issue at a December 1968 meeting, the directors of the labs refused to consider it. Melvin B. Gottlieb of Princeton was exasperated, asking "Do you think that this committee can out-think the scientists?" With the major labs demanding they control their own research, one lab found itself left out. Oak Ridge had originally entered the fusion field with studies for reactor fueling systems, but had branched out into a mirror program of its own. By the mid-1960s, their DCX designs were running out of ideas, offering nothing that the similar program at the more prestigious and politically powerful Livermore did not already offer. This made them highly receptive to new concepts.
After a considerable internal debate, Herman Postma formed a small group in early 1969 to consider the tokamak. They came up with a new design, later christened Ormak, that had several novel features. Primary among them was the way the external field was created in a single large copper block, fed power from a large transformer below the torus. This was as opposed to traditional designs that used magnet windings on the outside. They felt the single block would produce a much more uniform field. It would also have the advantage of allowing the torus to have a smaller major radius, lacking the need to route cables through the donut hole, leading to a lower "aspect ratio", which the Soviets had already suggested would produce better results.
In early 1969, Artsimovich visited MIT, where he was hounded by those interested in fusion. He finally agreed to give several lectures in April and then allowed lengthy question-and-answer sessions. As these went on, MIT itself grew interested in the tokamak, having previously stayed out of the fusion field for a variety of reasons. Bruno Coppi was at MIT at the time, and following the same concepts as Postma's team, came up with his own low-aspect-ratio concept, Alcator. Instead of Ormak's toroidal transformer, Alcator used traditional ring-shaped magnets but required them to be much smaller than existing designs. MIT's Francis Bitter Magnet Laboratory was the world leader in magnet design and they were confident they could build them.
During 1969, two additional groups entered the field. At General Atomics, Tihiro Ohkawa had been developing multipole reactors, and submitted a concept based on these ideas. This was a tokamak that would have a non-circular plasma cross-section; the same math that suggested a lower aspect-ratio would improve performance also suggested that a C or D-shaped plasma would do the same. He called the new design Doublet. Meanwhile, a group at University of Texas at Austin was proposing a relatively simple tokamak to explore heating the plasma through deliberately induced turbulence, the Texas Turbulent Tokamak.
When the members of the Atomic Energy Commission's Fusion Steering Committee met again in June 1969, they had "tokamak proposals coming out of our ears." The only major lab working on a toroidal design that was not proposing a tokamak was Princeton, which refused to consider it in spite of its Model C stellarator being just about perfect for such a conversion. They continued to offer a long list of reasons why the Model C should not be converted. When these were questioned, a furious debate broke out about whether the Soviet results were reliable.
Watching the debate take place, Gottlieb had a change of heart. There was no point moving forward with the tokamak if the Soviet electron temperature measurements were not accurate, so he formulated a plan to either prove or disprove their results. While swimming in the pool during the lunch break, he told Harold Furth his plan, to which Furth replied: "well, maybe you're right." After lunch, the various teams presented their designs, at which point Gottlieb presented his idea for a "stellarator-tokamak" based on the Model C.
The Standing Committee noted that this system could be complete in six months, while Ormak would take a year. It was only a short time later that the confidential results from the Culham Five were released. When they met again in October, the Standing Committee released funding for all of these proposals. The Model C's new configuration, soon named Symmetrical Tokamak, intended to simply verify the Soviet results, while the others would explore ways to go well beyond T-3.
Experiments on the Symmetrical Tokamak began in May 1970, and by early the next year they had confirmed the Soviet results. The stellarator was abandoned, and PPPL turned its considerable expertise to the problem of heating the plasma. Two concepts seemed to hold promise. PPPL proposed using magnetic compression, a pinch-like technique to compress a warm plasma to raise its temperature, but providing that compression through magnets rather than current. Oak Ridge suggested neutral beam injection, small particle accelerators that would shoot fuel atoms through the surrounding magnetic field where they would collide with the plasma and heat it.
PPPL's Adiabatic Toroidal Compressor (ATC) began operation in May 1972, followed shortly thereafter by a neutral-beam equipped Ormak. Both demonstrated significant problems, but PPPL leapt past Oak Ridge by fitting beam injectors to ATC and provided clear evidence of successful heating in 1973. This success "scooped" Oak Ridge, who fell from favour within the Washington Steering Committee.
By this time a much larger design based on beam heating was under construction, the Princeton Large Torus, or PLT. PLT was designed specifically to "give a clear indication whether the tokamak concept plus auxiliary heating can form a basis for a future fusion reactor". PLT was an enormous success, continually raising its internal temperature until it hit 60 million Celsius (8,000 eV, eight times T-3's record) in 1978. This was a key point in the development of the tokamak; fusion reactions become self-sustaining at temperatures between 50 and 100 million Celsius, and PLT demonstrated that this was technically achievable.
These experiments, especially PLT, put the US far in the lead in tokamak research. This was due largely to budget; a tokamak cost about $500,000 and the US annual fusion budget was around $25 million at that time. The US could afford to explore all of the promising methods of heating, ultimately discovering neutral beams to be among the most effective.
During this period, Robert Hirsch took over the Directorate of fusion development in the U.S. Atomic Energy Commission. Hirsch felt that the program could not be sustained at its current funding levels without demonstrating tangible results. He began to reformulate the entire program. What had once been a lab-led effort of mostly scientific exploration was now a Washington-led effort to build a working power-producing reactor. This was given a boost by the 1973 oil crisis, which led to greatly increased research into alternative energy systems.
By the late-1970s, tokamaks had reached all the conditions needed for a practical fusion reactor; in 1978 PLT had demonstrated ignition temperatures, the next year the Soviet T-7 successfully used superconducting magnets for the first time, Doublet proved to be a success and led to almost all future designs adopting this "shaped plasma" approach. It appeared all that was needed to build a power-producing reactor was to put all of these design concepts into a single machine, one that would be capable of running with the radioactive tritium in its fuel mix.
The race was on. During the 1970s, four major second-generation proposals were funded worldwide. The Soviets continued their development lineage with the T-15, while a pan-European effort was developing the Joint European Torus (JET) and Japan began the JT-60 effort (originally known as the "Breakeven Plasma Test Facility"). In the US, Hirsch began formulating plans for a similar design, skipping over proposals for another stepping-stone design directly to a tritium-burning one. This emerged as the Tokamak Fusion Test Reactor (TFTR), run directly from Washington and not linked to any specific lab. Originally favouring Oak Ridge as the host, Hirsch moved it to PPPL after others convinced him they would work the hardest on it because they had the most to lose.
The excitement was so widespread that several commercial ventures to produce tokamaks began around this time. Best known among these was an effort begun in 1978, when Bob Guccione, publisher of Penthouse Magazine, met Robert Bussard and became the world's biggest and most committed private investor in fusion technology, ultimately putting $20 million of his own money into Bussard's Compact Tokamak. Funding by the Riggs Bank led to this effort being known as the Riggatron.
TFTR won the construction race and began operation in 1982, followed shortly by JET in 1983 and JT-60 in 1985. JET quickly took the lead in critical experiments, moving from test gases to deuterium and increasingly powerful "shots". But it soon became clear that none of the new systems were working as expected. A host of new instabilities appeared, along with a number of more practical problems that continued to interfere with their performance. On top of this, dangerous "excursions" of the plasma hitting the walls of the reactor were evident in both TFTR and JET. Even when working perfectly, plasma confinement at fusion temperatures, measured by the so-called "fusion triple product" of density, temperature and confinement time, continued to be far below what would be needed for a practical reactor design.
Through the mid-1980s the reasons for many of these problems became clear, and various solutions were offered. However, these would significantly increase the size and complexity of the machines. A follow-on design incorporating these changes would be both enormous and vastly more expensive than either JET or TFTR. A new period of pessimism descended on the fusion field.
At the same time these experiments were demonstrating problems, much of the impetus for the US's massive funding disappeared; in 1986 Ronald Reagan declared the 1970s energy crisis was over, and funding for advanced energy sources had been slashed in the early 1980s.
Thoughts of an international reactor design had been circulating since June 1973 under the name INTOR, for INternational TOkamak Reactor. This was originally started through an agreement between Richard Nixon and Leonid Brezhnev, but had been moving slowly since its first real meeting on 23 November 1978.
During the Geneva Superpower Summit in November 1985, Reagan raised the issue with Mikhail Gorbachev and proposed reforming the organization. "... The two leaders emphasized the potential importance of the work aimed at utilizing controlled thermonuclear fusion for peaceful purposes and, in this connection, advocated the widest practicable development of international cooperation in obtaining this source of energy, which is essentially inexhaustible, for the benefit for all mankind."
The next year, an agreement was signed between the US, Soviet Union, European Union and Japan, creating the International Thermonuclear Experimental Reactor organization.
Design work began in 1988, and since that time the ITER reactor has been the primary tokamak design effort worldwide.
Positively charged ions and negatively charged electrons in a fusion plasma are at very high temperatures, and have correspondingly large velocities. In order to maintain the fusion process, particles from the hot plasma must be confined in the central region, or the plasma will rapidly cool. Magnetic confinement fusion devices exploit the fact that charged particles in a magnetic field experience a Lorentz force and follow helical paths along the field lines.
The simplest magnetic confinement system is a solenoid. A plasma in a solenoid will spiral about the lines of field running down its center, preventing motion towards the sides. However, this does not prevent motion towards the ends. The obvious solution is to bend the solenoid around into a circle, forming a torus. However, it was demonstrated that such an arrangement is not uniform; for purely geometric reasons, the field on the outside edge of the torus is lower than on the inside edge. This asymmetry causes the electrons and ions to drift across the field, and eventually hit the walls of the torus.
The solution is to shape the lines so they do not simply run around the torus, but twist around like the stripes on a barber pole or candycane. In such a field any single particle will find itself at the outside edge where it will drift one way, say up, and then as it follows its magnetic line around the torus it will find itself on the inside edge, where it will drift the other way. This cancellation is not perfect, but calculations showed it was enough to allow the fuel to remain in the reactor for a useful time.
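The helical motion described above has a characteristic scale, the gyroradius (Larmor radius), which sets how tightly particles spiral around a field line. A minimal sketch, using illustrative values (a deuteron at 10 keV in an assumed 5 T field, neither taken from the text):

```python
import math

def larmor_radius(mass_kg, charge_c, v_perp_ms, b_tesla):
    """Radius of the helical path a charged particle traces around a field line."""
    return mass_kg * v_perp_ms / (abs(charge_c) * b_tesla)

M_D = 3.344e-27   # deuteron mass, kg
Q_E = 1.602e-19   # elementary charge, C
E_J = 10e3 * Q_E  # assumed 10 keV of kinetic energy, in joules
v = math.sqrt(2 * E_J / M_D)
r = larmor_radius(M_D, Q_E, v, 5.0)  # assumed 5 T field
print(f"speed = {v:.2e} m/s, gyroradius = {r * 1000:.1f} mm")
```

The millimetre-scale gyroradius, tiny next to a metre-scale torus, is why particles are effectively tied to field lines and confinement reduces to shaping those lines.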
The first two solutions to making a design with the required twist were the stellarator, which did so through a mechanical arrangement, twisting the entire torus, and the z-pinch design, which ran an electrical current through the plasma to create a second magnetic field to the same end. Both demonstrated improved confinement times compared to a simple torus, but both also demonstrated a variety of effects that caused the plasma to be lost from the reactors at rates that were not sustainable.
The tokamak is essentially identical to the z-pinch concept in its physical layout. Its key innovation was the realization that the instabilities that were causing the pinch to lose its plasma could be controlled. The issue was how "twisty" the fields were; fields that caused the particles to transit inside and out more than once per orbit around the torus's long axis were much more stable than devices that had less twist. This ratio of twists to orbits became known as the "safety factor", denoted "q". Previous devices operated at "q" of about ⅓, while the tokamak operates at "q" ≫ 1. This increases stability by orders of magnitude.
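In the large-aspect-ratio (cylindrical) approximation, the safety factor can be estimated as q ≈ r·B_t / (R·B_p). A hedged sketch with hypothetical machine dimensions, showing how a weak poloidal field (the tokamak regime) pushes "q" well above 1 while a pinch-like strong plasma current drives it down toward ⅓:

```python
def safety_factor(minor_r, major_R, b_toroidal, b_poloidal):
    """Cylindrical estimate q ~ (r * B_t) / (R * B_p): field-line twists per orbit."""
    return (minor_r * b_toroidal) / (major_R * b_poloidal)

# Illustrative machine: minor radius 1 m, major radius 3 m, 5 T toroidal field.
q_pinch = safety_factor(1.0, 3.0, 5.0, 5.0)    # strong poloidal field, pinch-like
q_tokamak = safety_factor(1.0, 3.0, 5.0, 0.5)  # weak poloidal field, tokamak-like
print(q_pinch, q_tokamak)
```

The same toroidal field thus supports either regime; only the plasma current (and hence B_p) differs.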
When the problem is considered even more closely, the need for a vertical (parallel to the axis of rotation) component of the magnetic field arises. The Lorentz force of the toroidal plasma current in the vertical field provides the inward force that holds the plasma torus in equilibrium.
While the tokamak addresses the issue of plasma stability in a gross sense, plasmas are also subject to a number of dynamic instabilities. One of these, the kink instability, is strongly suppressed by the tokamak layout, a side-effect of the high safety factors of tokamaks. The lack of kinks allowed the tokamak to operate at much higher temperatures than previous machines, and this allowed a host of new phenomena to appear.
One of these, the banana orbits, is caused by the wide range of particle energies in a tokamak – much of the fuel is hot but a certain percentage is much cooler. Due to the high twist of the fields in the tokamak, particles following their lines of force rapidly move towards the inner edge and then the outer. As they move inward they are subject to increasing magnetic fields due to the smaller radius concentrating the field. The low-energy particles in the fuel will reflect off this increasing field and begin to travel backwards through the fuel, colliding with the higher energy nuclei and scattering them out of the plasma. This process causes fuel to be lost from the reactor, although it is slow enough that a practical reactor is still well within reach.
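The reflection of slow particles off the stronger inboard field is the magnetic-mirror effect; whether a particle is trapped on a banana orbit depends on how much of its velocity is parallel to the field. A rough sketch of the trapping condition, assuming the field varies as 1/R so that B_min/B_max ≈ (1 − ε)/(1 + ε) for inverse aspect ratio ε = r/R (an idealization, not the article's own formula):

```python
import math

def is_trapped(v_par_frac, eps):
    """True if a particle with parallel-velocity fraction v_par_frac is
    mirror-trapped (banana orbit) given inverse aspect ratio eps = r/R."""
    b_ratio = (1 - eps) / (1 + eps)        # B_min / B_max along the orbit
    boundary = math.sqrt(1 - b_ratio)      # trapped-passing boundary
    return abs(v_par_frac) < boundary

eps = 1.0 / 3.0                 # hypothetical r/R
print(is_trapped(0.2, eps))     # mostly perpendicular velocity: trapped
print(is_trapped(0.9, eps))     # mostly parallel velocity: passing
```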
One of the first goals for any controlled fusion device is to reach "breakeven", the point where the energy being released by the fusion reactions is equal to the amount of energy being used to maintain the reaction. The ratio of fusion energy output to input heating energy is denoted "Q", and breakeven corresponds to a "Q" of 1. A "Q" of at least one is needed for the reactor to generate net energy, but for practical reasons, it is desirable for it to be much higher.
Once breakeven is reached, further improvements in confinement generally lead to a rapidly increasing "Q". That is because some of the energy being given off by the fusion reactions of the most common fusion fuel, a 50-50 mix of deuterium and tritium, is in the form of alpha particles. These can collide with the fuel nuclei in the plasma and heat it, reducing the amount of external heat needed. At some point, known as "ignition", this internal self-heating is enough to keep the reaction going without any external heating, corresponding to an infinite "Q".
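The relationship between breakeven, high gain, and ignition can be sketched in a few lines. The 500 MW / 50 MW pairing below matches ITER's widely quoted Q = 10 design goal; the zero-heating case illustrates the infinite-"Q" ignition limit described above:

```python
def q_factor(p_fusion_mw, p_external_mw):
    """Fusion gain Q: fusion power out per unit of external heating power in."""
    if p_external_mw == 0:
        return float("inf")  # ignition: self-heating alone sustains the reaction
    return p_fusion_mw / p_external_mw

print(q_factor(500, 50))  # ITER design goal: Q = 10
print(q_factor(500, 0))   # ignition limit: Q -> infinity
```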
In the case of the tokamak, this self-heating process is maximized if the alpha particles remain in the fuel long enough to guarantee they will collide with the fuel. As the alphas are electrically charged, they are subject to the same fields that are confining the fuel plasma. The amount of time they spend in the fuel can be maximized by ensuring their orbit in the field remains within the plasma. It can be demonstrated that this occurs when the electrical current in the plasma is about 3 MA.
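Whether the alphas stay in the fuel comes down to their gyroradius relative to the plasma size. A hedged estimate for the 3.5 MeV alpha particle born in D-T fusion, in an assumed 5 T field (the field value is illustrative, not from the text):

```python
import math

Q_E = 1.602e-19       # elementary charge, C
M_ALPHA = 6.645e-27   # alpha particle mass, kg

e_j = 3.5e6 * Q_E                        # 3.5 MeV birth energy of a D-T alpha
v = math.sqrt(2 * e_j / M_ALPHA)
r_gyro = M_ALPHA * v / (2 * Q_E * 5.0)   # charge +2e, assumed 5 T field
print(f"alpha gyroradius ~ {r_gyro * 100:.1f} cm")
```

A gyroradius of a few centimetres is small compared with a metre-scale minor radius, so a sufficiently strong field (or, equivalently, sufficient plasma current) keeps the alphas' orbits inside the fuel.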
In the early 1970s, studies at Princeton into the use of high-power superconducting magnets in future tokamak designs examined the layout of the magnets. They noticed that the arrangement of the main toroidal coils meant that there was significantly more tension between the magnets on the inside of the curvature where they were closer together. Considering this, they noted that the tensional forces within the magnets would be evened out if they were shaped like a D, rather than an O. This became known as the "Princeton D-coil".
This was not the first time this sort of arrangement had been considered, although for entirely different reasons. The safety factor varies across the axis of the machine; for purely geometrical reasons, it is always smaller at the inside edge of the plasma closest to the machine's center because the long axis is shorter there. That means that a machine with an average "q" = 2 might still be less than 1 in certain areas. In the 1970s, it was suggested that one way to counteract this and produce a design with a higher average "q" would be to shape the magnetic fields so that the plasma only filled the outer half of the torus, shaped like a D or C when viewed end-on, instead of the normal circular cross section.
One of the first machines to incorporate a D-shaped plasma was JET, which began its design work in 1973. This decision was made for both theoretical and practical reasons; because the force is larger on the inside edge of the torus, there is a large net force pressing inward on the entire reactor. The D-shape reduced this net force, and also made the supported inside edge flatter so it was easier to support. Code exploring the general layout showed that a non-circular plasma would slowly drift vertically, which led to the addition of an active feedback system to hold it in the center. Once JET had selected this layout, the General Atomics Doublet III team redesigned that machine into the DIII-D with a D-shaped cross-section, and it was selected for the Japanese JT-60 design as well. This layout has been largely universal since then.
One problem seen in all fusion reactors is that the presence of heavier elements causes energy to be lost at an increased rate, cooling the plasma. During the very earliest development of fusion power, a solution to this problem was found, the "divertor", essentially a large mass spectrometer that would cause the heavier elements to be flung out of the reactor. This was initially part of the stellarator designs, where it is easy to integrate into the magnetic windings. However, designing a divertor for a tokamak proved to be a very difficult design problem.
Another problem seen in all fusion designs is the heat load that the plasma places on the wall of the confinement vessel. There are materials that can handle this load, but they are generally undesirable and expensive heavy metals. When such materials are sputtered in collisions with hot ions, their atoms mix with the fuel and rapidly cool it. A solution used on most tokamak designs is the "limiter", a small ring of light metal that projected into the chamber so that the plasma would hit it before hitting the walls. This eroded the limiter and caused its atoms to mix with the fuel, but these lighter materials cause less disruption than the wall materials.
When reactors moved to the D-shaped plasmas it was quickly noted that the escaping particle flux of the plasma could be shaped as well. Over time, this led to the idea of using the fields to create an internal divertor that flings the heavier elements out of fuel, typically towards the bottom of the reactor. There, a pool of liquid lithium metal is used as a sort of limiter; the particles hit it and are rapidly cooled, remaining in the lithium. This internal pool is much easier to cool, due to its location, and although some lithium atoms are released into the plasma, its very low mass makes it a much smaller problem than even the lightest metals used previously.
As machines began to explore this newly shaped plasma, they noticed that certain arrangements of the fields and plasma parameters would sometimes enter what is now known as the high-confinement mode, or H-mode, which operated stably at higher temperatures and pressures. Operating in the H-mode, which can also be seen in stellarators, is now a major design goal of the tokamak design.
Finally, it was noted that a plasma with a non-uniform density gives rise to internal electrical currents. This is known as the "bootstrap current". It allows a properly designed reactor to generate some of the internal current needed to twist the magnetic field lines without having to supply it from an external source. This has a number of advantages, and modern designs all attempt to generate as much of their total current through the bootstrap process as possible.
By the early 1990s, the combination of these features and others collectively gave rise to the "advanced tokamak" concept. This forms the basis of modern research, including ITER.
Tokamaks are subject to events known as "disruptions" that cause confinement to be lost in milliseconds. There are two primary mechanisms. In one, the "vertical displacement event" (VDE), the entire plasma moves vertically until it touches the upper or lower section of the vacuum chamber. In the other, the "major disruption", long wavelength, non-axisymmetric magnetohydrodynamical instabilities cause the plasma to be forced into non-symmetrical shapes, often squeezed into the top and bottom of the chamber.
When the plasma touches the vessel walls it undergoes rapid cooling, or "thermal quenching". In the major disruption case, this is normally accompanied by a brief increase in plasma current as the plasma concentrates. Quenching ultimately causes the plasma confinement to break up. In the case of the major disruption the current drops again, the "current quench". The initial increase in current is not seen in the VDE, where the thermal and current quench occur at the same time. In both cases, the thermal and electrical load of the plasma is rapidly deposited on the reactor vessel, which has to be able to handle these loads. ITER is designed to handle 2600 of these events over its lifetime.
For modern high-energy devices, where plasma currents are on the order of 15 megaamperes in ITER, it is possible the brief increase in current during a major disruption will cross a critical threshold. This occurs when the current produces a force on the electrons that is higher than the frictional forces of the collisions between particles in the plasma. In this event, electrons can be rapidly accelerated to relativistic velocities, creating so-called "runaway electrons" in the relativistic runaway electron avalanche. These retain their energy even as the current quench is occurring on the bulk of the plasma.
When confinement finally breaks down, these runaway electrons follow the path of least resistance and impact the side of the reactor. These can reach 12 megaamps of current deposited in a small area, well beyond the capabilities of any mechanical solution. In one famous case, the Tokamak de Fontenay aux Roses had a major disruption where the runaway electrons burned a hole through the vacuum chamber.
The occurrence of major disruptions in running tokamaks has always been rather high, on the order of a few percent of the total number of shots. In currently operated tokamaks, the damage is often large but rarely dramatic. In the ITER tokamak, it is expected that even a limited number of major disruptions would definitively damage the chamber, with no possibility of restoring the device. The development of systems to counter the effects of runaway electrons is considered a must-have technology for an operational-level ITER.
A large amplitude of the central current density can also result in internal disruptions, or sawteeth, which do not generally result in termination of the discharge.
In an operating fusion reactor, part of the energy generated will serve to maintain the plasma temperature as fresh deuterium and tritium are introduced. However, in the startup of a reactor, either initially or after a temporary shutdown, the plasma will have to be heated to its operating temperature of greater than 10 keV (over 100 million degrees Celsius). In current tokamak (and other) magnetic fusion experiments, insufficient fusion energy is produced to maintain the plasma temperature, and constant external heating must be supplied. Chinese researchers commissioned the Experimental Advanced Superconducting Tokamak (EAST) in 2006; in a test conducted in November 2018, it sustained a plasma at 100 million degrees Celsius (the Sun's core is about 15 million degrees Celsius), the temperature scale required to initiate fusion between hydrogen nuclei.
Since the plasma is an electrical conductor, it is possible to heat the plasma by inducing a current through it; the induced current that provides most of the poloidal field is also a major source of initial heating.
The heating caused by the induced current is called ohmic (or resistive) heating; it is the same kind of heating that occurs in an electric light bulb or in an electric heater. The heat generated depends on the resistance of the plasma and the amount of electric current running through it. But as the temperature of heated plasma rises, the resistance decreases and ohmic heating becomes less effective. It appears that the maximum plasma temperature attainable by ohmic heating in a tokamak is 20–30 million degrees Celsius. To obtain still higher temperatures, additional heating methods must be used.
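The diminishing returns of ohmic heating follow directly from the Spitzer scaling: resistivity falls as T^(−3/2), so at fixed current density the heating power density η·j² falls the same way. A minimal sketch (the 100 eV reference point is an arbitrary normalization, not a value from the text):

```python
def spitzer_resistivity_rel(t_ev, t_ref_ev=100.0):
    """Plasma resistivity relative to a reference temperature: eta ~ T^(-3/2).
    At fixed current density, ohmic heating power density scales the same way."""
    return (t_ev / t_ref_ev) ** -1.5

for t in (100.0, 1000.0, 3000.0):
    print(f"T = {t:6.0f} eV -> relative ohmic heating {spitzer_resistivity_rel(t):.4f}")
```

A tenfold temperature rise cuts the heating rate by a factor of about 30, which is why ohmic heating stalls in the 20–30 million degree range mentioned above.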
The current is induced by continually increasing the current through an electromagnetic winding linked with the plasma torus: the plasma can be viewed as the secondary winding of a transformer. This is inherently a pulsed process because there is a limit to the current through the primary (there are also other limitations on long pulses). Tokamaks must therefore either operate for short periods or rely on other means of heating and current drive.
A gas can be heated by sudden compression. In the same way, the temperature of a plasma is increased if it is compressed rapidly by increasing the confining magnetic field. In a tokamak, this compression is achieved simply by moving the plasma into a region of higher magnetic field (i.e., radially inward). Since plasma compression brings the ions closer together, the process has the additional benefit of facilitating attainment of the required density for a fusion reactor.
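For an ideal adiabatic compression, the temperature rise follows T₂ = T₁·C^(γ−1) for a volume compression factor C, with γ = 5/3 for a monatomic gas or simple plasma. A sketch with illustrative numbers (the starting temperature and compression factor are assumptions):

```python
def adiabatic_temp_rise(t_initial_k, compression_ratio, gamma=5.0 / 3.0):
    """T2 = T1 * C**(gamma - 1) for an ideal adiabatic compression by volume factor C."""
    return t_initial_k * compression_ratio ** (gamma - 1)

# Hypothetical: halving the plasma volume (C = 2) from 10 million K
t2 = adiabatic_temp_rise(1.0e7, 2.0)
print(f"{t2:.3e} K")
```

Halving the volume raises the temperature by 2^(2/3) ≈ 1.59, so compression is a useful boost but not a route to arbitrarily high temperatures on its own.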
Magnetic compression was an area of research in the early "tokamak stampede", and was the purpose of one major design, the ATC. The concept has not been widely used since then, although a somewhat similar concept is part of the General Fusion design.
Neutral-beam injection involves the introduction of high energy (rapidly moving) atoms or molecules into an ohmically heated, magnetically confined plasma within the tokamak.
The high energy atoms originate as ions in an arc chamber before being extracted through a high voltage grid set. The term "ion source" is generally used to mean the assembly consisting of a set of electron-emitting filaments, an arc chamber volume, and a set of extraction grids. A second device, similar in concept, is used to separately accelerate electrons to the same energy. The much lighter mass of the electrons makes this device much smaller than its ion counterpart. The two beams then intersect, where the ions and electrons recombine into neutral atoms, allowing them to travel through the magnetic fields.
Once the neutral beam enters the tokamak, interactions with the main plasma ions occur. This has two effects. One is that the injected atoms re-ionize and become charged, thereby becoming trapped inside the reactor and adding to the fuel mass. The other is that the process of being ionized occurs through impacts with the rest of the fuel, and these impacts deposit energy in that fuel, heating it.
This form of heating has no inherent energy (temperature) limitation, in contrast to the ohmic method, but its rate is limited to the current in the injectors. Ion source extraction voltages are typically on the order of 50–100 kV, and high voltage, negative ion sources (-1 MV) are being developed for ITER. The ITER Neutral Beam Test Facility in Padova will be the first ITER facility to start operation.
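Since beam heating power is simply the extracted current times the accelerating voltage, the injector current limit mentioned above translates directly into a power limit. A sketch with hypothetical source parameters (the 60 A / 100 kV figures are illustrative, not ITER's):

```python
def beam_power_mw(current_a, voltage_kv):
    """Neutral beam power = extracted current * accelerating voltage."""
    return current_a * voltage_kv * 1e3 / 1e6  # watts -> megawatts

# Hypothetical positive-ion source: 60 A extracted at 100 kV
print(beam_power_mw(60, 100), "MW")
```

This is also why ITER's sources push to much higher (−1 MV) voltages: more power per ampere, and faster atoms penetrate deeper into a large plasma.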
While neutral beam injection is used primarily for plasma heating, it can also be used as a diagnostic tool and in feedback control by making a pulsed beam consisting of a string of brief 2–10 ms beam blips. Deuterium is a primary fuel for neutral beam heating systems and hydrogen and helium are sometimes used for selected experiments.
High-frequency electromagnetic waves are generated by oscillators (often by gyrotrons or klystrons) outside the torus. If the waves have the correct frequency (or wavelength) and polarization, their energy can be transferred to the charged particles in the plasma, which in turn collide with other plasma particles, thus increasing the temperature of the bulk plasma. Various techniques exist including electron cyclotron resonance heating (ECRH) and ion cyclotron resonance heating. This energy is usually transferred by microwaves.
Plasma discharges within the tokamak's vacuum chamber consist of energized ions and atoms and the energy from these particles eventually reaches the inner wall of the chamber through radiation, collisions, or lack of confinement. The inner wall of the chamber is water-cooled and the heat from the particles is removed via conduction through the wall to the water and convection of the heated water to an external cooling system.
Turbomolecular or diffusion pumps allow for particles to be evacuated from the bulk volume and cryogenic pumps, consisting of a liquid helium-cooled surface, serve to effectively control the density throughout the discharge by providing an energy sink for condensation to occur. When done correctly, the fusion reactions produce large amounts of high energy neutrons. Being electrically neutral and relatively tiny, the neutrons are not affected by the magnetic fields nor are they stopped much by the surrounding vacuum chamber.
The neutron flux is reduced significantly at a purpose-built neutron shield boundary that surrounds the tokamak in all directions. Shield materials vary, but are generally materials whose atomic nuclei are close in mass to the neutron, because these work best to absorb the neutron and its energy. Good candidate materials include those with much hydrogen, such as water and plastics. Boron atoms are also good absorbers of neutrons. Thus, concrete and polyethylene doped with boron make inexpensive neutron shielding materials.
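Shield sizing can be sketched with a simple exponential attenuation model, I = I₀·exp(−x/λ). The relaxation length λ below is an assumed, material-dependent placeholder, not a value from the text:

```python
import math

def thickness_for_attenuation(factor, relaxation_length_cm):
    """Shield thickness needed to cut the neutron flux by `factor`,
    assuming simple exponential attenuation I = I0 * exp(-x / lambda)."""
    return relaxation_length_cm * math.log(factor)

# Assumed relaxation length of ~10 cm (hypothetical, depends strongly on material)
t = thickness_for_attenuation(1e6, 10.0)
print(f"~{t:.0f} cm of shielding for a million-fold flux reduction")
```

The logarithm is the useful intuition: each additional relaxation length of shield buys the same multiplicative reduction, so even large attenuation factors need only modest thicknesses.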
Once freed, the neutron has a relatively short half-life of about 10 minutes before it decays into a proton, an electron, and an antineutrino, with the emission of energy. When the time comes to actually try to make electricity from a tokamak-based reactor, some of the neutrons produced in the fusion process would be absorbed by a liquid metal blanket and their kinetic energy would be used in heat-transfer processes to ultimately drive a generator.
Turbopump
A turbopump is a propellant pump with two main components: a rotodynamic pump and a driving gas turbine, usually both mounted on the same shaft, or sometimes geared together. The purpose of a turbopump is to produce a high-pressure fluid for feeding a combustion chamber or other use.
There are two types of turbopumps: a centrifugal pump, where the pumping is done by throwing fluid outward at high speed, or an axial-flow pump, where alternating rotating and static blades progressively raise the pressure of a fluid.
Axial-flow pumps have small diameters but give relatively modest pressure increases per stage; although multiple compression stages are needed, they work well with low-density fluids. Centrifugal pumps deliver far greater pressure rise per stage with high-density fluids, but require large diameters or very high rotational speeds to achieve the same with low-density fluids.
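The density dependence can be illustrated with the idealized relation for a centrifugal stage, Δp ≈ ψρu², where u is the impeller tip speed and ψ a head coefficient. Both the head coefficient of 0.5 and the tip speed below are illustrative assumptions:

```python
def centrifugal_dp(rho, tip_speed, head_coeff=0.5):
    """Idealized pressure rise of one centrifugal stage:
    dp ~ psi * rho * u^2 (Euler work times an assumed head coefficient)."""
    return head_coeff * rho * tip_speed**2

# Same 400 m/s tip speed, two propellants of very different density:
for name, rho in [("liquid hydrogen", 71.0), ("RP-1 kerosene", 810.0)]:
    dp = centrifugal_dp(rho, 400.0)
    print(f"{name:15s}: ~{dp / 1e5:6.0f} bar per stage")
```

At the same tip speed, a hydrogen stage yields roughly a tenth of the pressure rise of a kerosene stage, which is why low-density propellants push designers toward larger diameters, higher shaft speeds, or more stages.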
Turbopumps operate in much the same way as turbocharger units for vehicles: higher fuel pressures allow fuel to be supplied to higher-pressure combustion chambers for higher-performance engines.
High-pressure pumps for larger missiles had been discussed by rocket pioneers such as Hermann Oberth. In mid-1935 Wernher von Braun initiated a fuel pump project at the southwest German firm "Klein, Schanzlin & Becker", which was experienced in building large fire-fighting pumps. The V-2 rocket design used hydrogen peroxide decomposed through a Walter steam generator to power the uncontrolled turbopump produced at the Heinkel plant at Jenbach, so V-2 turbopumps and combustion chambers were tested and matched to prevent the pump from overpressurizing the chamber. The first engine fired successfully in September, and on August 16, 1942, a trial rocket stopped in mid-air and crashed due to a failure in the turbopump. The first successful V-2 launch was on October 3, 1942.
The principal engineer for turbopump development at Aerojet was George Bosco. During the second half of 1947, Bosco and his group learned about the pump work of others and made preliminary design studies. Aerojet representatives visited Ohio State University where Florant was working on hydrogen pumps, and consulted Dietrich Singelmann, a German pump expert at Wright Field. Bosco subsequently used Singelmann's data in designing Aerojet's first hydrogen pump.
By mid-1948, Aerojet had selected centrifugal pumps for both liquid hydrogen and liquid oxygen. They obtained some German radial-vane pumps from the Navy and tested them during the second half of the year.
By the end of 1948, Aerojet had designed, built, and tested a liquid hydrogen pump (15 cm diameter). Initially, it used ball bearings that were run clean and dry, because the low temperature made conventional lubrication impractical. The pump was first operated at low speeds to allow its parts to cool down to operating temperature. When temperature gauges showed that liquid hydrogen had reached the pump, an attempt was made to accelerate from 5,000 to 35,000 revolutions per minute. The pump failed and examination of the pieces pointed to a failure of the bearing, as well as the impeller. After some testing, super-precision bearings, lubricated by oil that was atomized and directed by a stream of gaseous nitrogen, were used. On the next run, the bearings worked satisfactorily but the stresses were too great for the brazed impeller and it flew apart. A new one was made by milling from a solid block of aluminum. The next two runs with the new pump were a great disappointment; the instruments showed no significant flow or pressure rise. The problem was traced to the exit diffuser of the pump, which was too small and insufficiently cooled during the cool-down cycle, so that it limited the flow. This was corrected by adding vent holes in the pump housing; the vents were opened during cool-down and closed when the pump was cold. With this fix, two additional runs were made in March 1949 and both were successful. Flow rate and pressure were found to be in approximate agreement with theoretical predictions. The maximum pressure was 26 atmospheres (about 2.6 MPa) and the flow was 0.25 kilograms per second.
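As a sanity check, those test figures imply only a modest hydraulic power. The liquid-hydrogen density below is an assumed round value:

```python
# Hydraulic power implied by the March 1949 test results:
#   P = mdot * dp / rho
mdot = 0.25              # kg/s, measured flow
dp = 26 * 101325.0       # Pa, 26 atmospheres of pressure rise
rho_lh2 = 71.0           # kg/m^3, assumed liquid-hydrogen density

power_w = mdot * dp / rho_lh2
print(f"hydraulic power = {power_w / 1000:.1f} kW")
```

About 9 kW of hydraulic power: tiny by later flight-engine standards, but enough to validate the bearing, impeller, and diffuser fixes.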
The Space Shuttle Main Engine's turbopumps spun at over 30,000 rpm, delivering 150 lb (68 kg) of liquid hydrogen and 896 lb (406 kg) of liquid oxygen to the engine per second.
Most turbopumps are centrifugal: the fluid enters the pump near the axis and the rotor accelerates it to high speed. The fluid then passes through a diffuser, a progressively enlarging pipe that permits recovery of the dynamic pressure. The diffuser turns the high kinetic energy into high pressure (hundreds of bars is not uncommon), and if the outlet backpressure is not too high, high flow rates can be achieved.
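The diffuser's pressure recovery follows from Bernoulli's equation. The sketch below scales the ideal recovery by an assumed diffuser efficiency, and the density and flow speeds are illustrative:

```python
def diffuser_recovery(rho, v_in, v_out, efficiency=0.8):
    """Static-pressure rise from slowing the flow in a diffuser:
    ideal Bernoulli recovery 0.5*rho*(v_in^2 - v_out^2), scaled by
    an assumed diffuser efficiency."""
    return efficiency * 0.5 * rho * (v_in**2 - v_out**2)

# Liquid oxygen (~1140 kg/m^3) slowed from 150 m/s to 30 m/s:
dp = diffuser_recovery(1140.0, 150.0, 30.0)
print(f"recovered static pressure = {dp / 1e5:.0f} bar")
```

Even with imperfect recovery, decelerating a dense fluid from impeller exit speed yields a pressure rise on the order of 100 bar in this example, which is how the "hundreds of bars" figures quoted above arise.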
Axial turbopumps also exist. In this case the shaft essentially has propellers attached, and the fluid is forced by these parallel to the main axis of the pump. Generally, axial pumps give much lower pressures than centrifugal pumps, and a few bars is not uncommon. They are, however, still useful: axial pumps are commonly used as "inducers" for centrifugal pumps, raising the inlet pressure of the centrifugal pump enough to prevent excessive cavitation from occurring within it.
Turbopumps are notoriously hard to design for optimal performance. Whereas a well-engineered and debugged pump can manage 70–90% efficiency, figures less than half that are not uncommon. Low efficiency may be acceptable in some applications, but in rocketry it is a severe problem. Turbopumps in rockets are important and problematic enough that launch vehicles using one have been caustically described as a "turbopump with a rocket attached"; up to 55% of the total cost has been ascribed to this area.
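Why low efficiency is so punishing in rocketry follows from the required shaft power, P = ṁΔp/(ρη). The flow and pressure figures below are illustrative, loosely at the scale of a large liquid-hydrogen pump:

```python
def shaft_power(mdot, dp, rho, efficiency):
    """Turbine power needed to drive the pump: hydraulic power
    mdot*dp/rho divided by the pump efficiency."""
    return mdot * dp / (rho * efficiency)

# Illustrative large liquid-hydrogen pump: 68 kg/s raised by 400 bar.
mdot, dp, rho = 68.0, 400e5, 71.0
for eta in (0.75, 0.35):
    p_mw = shaft_power(mdot, dp, rho, eta) / 1e6
    print(f"pump efficiency {eta:.0%} -> {p_mw:5.1f} MW of turbine power")
```

Halving the efficiency doubles the turbine power, and that power must come from propellant burned in a preburner or gas generator rather than producing thrust.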
In addition to these efficiency concerns, the precise shape of the rotor itself is critical.
Steam turbine-powered turbopumps are employed when there is a source of steam, e.g. the boilers of steam ships. Gas turbines are usually used when electricity or steam is not available and place or weight restrictions prevent the use of more efficient sources of mechanical energy.
One such case is the rocket engine, which needs to pump fuel and oxidizer into its combustion chamber. This is necessary for large liquid rockets, since forcing the fluids or gases to flow simply by pressurizing the tanks is often not feasible: the high pressure needed for the required flow rates would demand strong, heavy tanks.
Ramjet motors are also usually fitted with turbopumps, the turbine being driven either directly by external freestream ram air or internally by airflow diverted from combustor entry. In both cases the turbine exhaust stream is dumped overboard. | https://en.wikipedia.org/wiki?curid=31440 |
Tragedy of the anticommons
The tragedy of the anticommons is a type of coordination breakdown in which a single resource has numerous rightsholders who prevent others from using it, frustrating what would be a socially desirable outcome. It is a mirror image of the older concept of the tragedy of the commons, in which numerous rights holders' combined use exceeds the capacity of a resource and depletes or destroys it. The "tragedy of the anticommons" covers a range of coordination failures, including patent thickets and submarine patents. Overcoming these breakdowns can be difficult, but there are assorted means, including eminent domain, laches, patent pools, or other licensing organizations.
The term originally appeared in Michael Heller's 1998 article of the same name and is the thesis of his 2008 book. The model was formalized by James M. Buchanan and Yong Yoon. In a 1998 "Science" article, Heller and Rebecca S. Eisenberg, while not disputing the role of patents in general in motivating invention and disclosure, argue that biomedical research was one of several key areas where competing patent rights could actually prevent useful and affordable products from reaching the marketplace.
In early aviation, the Wright brothers held patents on certain aspects of aircraft, while Glenn Curtiss held patents on ailerons, an advance on the Wrights' system; but antipathy between the patent holders prevented their combined use. The government was forced to step in and impose a patent pool during World War One.
In his 1998 "Harvard Law Review" article, Michael Heller noted that there were a lot of open air kiosks but also a lot of empty stores in many Eastern European cities after the fall of Communism. Upon investigation, he concluded that it was difficult or even impossible for a startup retailer to negotiate successfully for the use of that space because many different agencies and private parties had rights over the use of store space. Even though all the persons with ownership rights were losing money with the empty stores, and stores were in great demand, competing interests got in the way of the effective use of that space.
Michael Heller says that the rise of the "robber barons" in medieval Germany was the result of the tragedy of the anticommons. Nobles commonly attempted to collect tolls on stretches of the Rhine passing by or through their fiefs, building towers alongside the river and stretching iron chains to prevent boats from carrying cargo up and down the river without paying a fee. Repeated attempts were made by the Holy Roman Empire, including several efforts over the centuries led by the Emperor himself, to regulate toll collection on the Rhine, but it was not until the establishment of the "Rhine League" of the Emperor, certain nobles, and certain clergy that the control of the "robber barons" over the Rhine was crushed by military force. River tolls on the Rhine, increasingly imposed by states rather than individual lords, remained a sticking point in relations and commerce in the Rhine basin until the establishment of the Central Commission for Navigation on the Rhine in 1815.
Michael Heller and Rebecca Eisenberg are academic law professors who believe that biological patents create a "tragedy of the anticommons", "in which people underuse scarce resources because too many owners can block each other." Others claim that patents have not created this "anticommons" effect on research, based on surveys of scientists. | https://en.wikipedia.org/wiki?curid=31441 |
Thealogy
Thealogy views divine matters from feminine perspectives, including but not limited to feminism. Valerie Saiving, Isaac Bonewits (1976) and Naomi Goldenberg (1979) introduced the concept as a neologism (new word) in feminist terms. Its use then widened to mean all feminine ideas of the sacred, which Charlotte Caron usefully summarized in 1993 as "reflection on the divine in feminine or feminist terms". By 1996, when Melissa Raphael published "Thealogy and Embodiment", the term was well established.
As a neologism, the term derives from two Greek words: "thea", meaning "goddess", the feminine equivalent of "theos", "god" (from PIE root *"dhes"-); and "logos", plural "logoi", often found in English as the suffix "-logy", meaning "word", "reason" or "plan", and in Greek philosophy and theology the divine reason implicit in the cosmos.
Thealogy has areas in common with feminist theology, the study of God from a feminist perspective, often emphasising monotheism. The relation between the two is one of overlap, since thealogy is not limited to deity despite its etymology; the fields have been described as both related and interdependent.
The term's origin and initial use is open to continuing debate. Patricia 'Iolana traces the early use of the neologism to 1976 crediting both Valerie Saiving and Isaac Bonewits for its initial use.
Bonewits's 1976 coinage of 'thealogian' has been cited as the earliest use on record.
In the 1979 book "Changing of the Gods", Naomi Goldenberg introduces the term as a future possibility with respect to a distinct discourse, highlighting the masculine nature of theology. Also in 1979, in the first revised edition of "Real Magic", Bonewits defined "thealogy" in his Glossary as "Intellectual speculations concerning the nature of the Goddess and Her relations to the world in general and humans in particular; rational explanations of religious doctrines, practices and beliefs, which may or may not bear any connection to any religion as actually conceived and practiced by the majority of its members." Also in the same glossary, he defined "theology" with nearly identical words, changing the feminine pronouns with masculine pronouns appropriately.
Carol P. Christ used the term in "Laughter of Aphrodite" (1987), claiming that those creating thealogy could not avoid being influenced by the categories and questions posed in Christian and Jewish theologies. She further defined thealogy in her 2002 essay, "Feminist theology as post-traditional thealogy," as "the reflection on the meaning of the Goddess".
In her 1989 essay "On Mirrors, Mists and Murmurs: Toward an Asian American Thealogy", Rita Nakashima Brock defined thealogy as "the work of women reflecting on their experiences of and beliefs about divine reality". Also in 1989, Ursula King notes thealogy's growing usage as a fundamental departure from traditional male-oriented theology, characterized by its privileging of symbols over rational explanation.
In 1993, Charlotte Caron's inclusive and clear definition of thealogy as "reflection on the divine in feminine and feminist terms" appeared in "To Make and Make Again". By this time, the concept had gained considerable status among Goddess adherents.
Situated in relationship to the fields of theology and religious studies, thealogy is a discourse that critically engages the beliefs, wisdom, practices, questions, and values of the Goddess community, both past and present. Similar to theology, thealogy grapples with questions of meaning, including reflecting on the nature of the divine, the relationship of humanity to the environment, the relationship between the spiritual and sexual self, and the nature of belief. However, in contrast to theology, which often focuses on an exclusively logical and empirical discourse, thealogy embraces a postmodern discourse of personal experience and complexity.
The term suggests a feminist approach to theism and the context of God and gender within Paganism, Neopaganism, Goddess Spirituality and various nature-based religions. However, thealogy can be described as religiously pluralistic, as thealogians come from various religious backgrounds that are often hybrid in nature. In addition to Pagans, Neopagans, and Goddess-centred faith traditions, they are also Christian, Jewish, Buddhist, Muslim, Quakers, etc. or define themselves as Spiritual Feminists. As such, the term "thealogy" has also been used by feminists within mainstream monotheistic religions to describe in more detail the feminine aspect of a monotheistic deity or trinity, such as God/dess Herself, or the Heavenly Mother of the Latter Day Saint movement.
In 2000, Melissa Raphael wrote the text "Introducing Thealogy: Discourse on the Goddess" for the series Introductions in Feminist Theology. Written for an academic audience, it purports to introduce the main elements of thealogy within the context of Goddess feminism. She situates thealogy as a discourse that can be engaged with by Goddess feminists—those who are feminist adherents of the Goddess who may have left their church, synagogue, or mosque—or those who may still belong to their originally established religion. In the book, Raphael compares and contrasts thealogy with the Goddess movement. In 2007, Paul Reid-Bowen wrote the text "Goddess as Nature: Towards a Philosophical Thealogy", which can be regarded as another systematic approach to thealogy, but which integrates philosophical discourse.
In the past decade, other thealogians like Patricia 'Iolana and D'vorah Grenn have generated discourses that bridge thealogy with other academic disciplines. 'Iolana's Jungian thealogy bridges analytical psychology with thealogy, and Grenn's metaformic thealogy is a bridge between matriarchal studies and thealogy.
Contemporary Thealogians include Carol P. Christ, Melissa Raphael, Asphodel Long, Beverly Clack, Charlotte Caron, Naomi Goldenberg, Paul Reid-Bowen, Rita Nakashima Brock, and Patricia 'Iolana.
At least one Christian theologian dismisses thealogy as the creation of a new deity made up by radical feminists. Paul Reid-Bowen and Chaone Mallory point out that essentialism is a problematic slippery slope when Goddess feminists argue that women are inherently better than men or inherently closer to the Goddess. In his book "Goddess Unmasked: The Rise of Neopagan Feminist Spirituality", Philip G. Davis levels a number of criticisms against the Goddess movement, including logical fallacies, hypocrisies, and essentialism.
Thealogy has also been criticized for its objection to empiricism and reason. In this critique, thealogy is seen as flawed by rejecting a purely empirical worldview for a purely relativistic one. Meanwhile, scholars like Harding and Haraway seek a middle ground of feminist empiricism. | https://en.wikipedia.org/wiki?curid=31444 |
Torpedo boat
A torpedo boat is a relatively small and fast naval ship designed to carry torpedoes into battle. The first designs were steam-powered craft dedicated to ramming enemy ships with explosive spar torpedoes. Later evolutions launched variants of self-propelled Whitehead torpedoes.
These were inshore craft created to counter two problems at once: the threat of battleships and other slow, heavily armed ships, which they met with speed, agility, and powerful torpedoes; and the overwhelming expense of building a like number of capital ships to counter an enemy's. A swarm of expendable torpedo boats attacking en masse could overwhelm a larger ship's ability to fight them off using its large but cumbersome guns. A fleet of torpedo boats could pose a similar threat to an adversary's capital ships, albeit only in the coastal areas to which their small size and limited fuel load restricted them.
The introduction of fast torpedo boats in the late 19th century was a serious concern to the era's naval strategists, introducing the concept of tactical asymmetric warfare. In response, navies operating large ships introduced smaller but more seaworthy ships to counter torpedo boats, mounting light quick-firing guns. These ships, which came to be called "torpedo boat destroyers" (and later simply "destroyers"), initially were purely defensive, both attacking the torpedo boat threats and serving as expendable screens intercepting torpedoes intended for larger ships. In time they became larger and took on more roles, including making their own torpedo attacks on valuable enemy ships as well as defending against submarines and aircraft. Later yet they were armed with guided missiles and eventually became the predominant type of surface warship in the modern era.
Today, the old concept of a very small, fast, and cheap surface combatant with powerful offensive weapons is taken up by the "fast attack craft".
The American Civil War saw a number of innovations in naval warfare, including an early type of torpedo boat, armed with spar torpedoes. In 1861, President Abraham Lincoln instituted a naval blockade of Southern ports, which crippled the South's efforts to obtain war materiel from abroad. The South also lacked the means to construct a naval fleet capable of taking on the Union Navy on even terms. One strategy to counter the blockade saw the development of torpedo boats, small fast boats designed to attack the larger capital ships of the blockading fleet as a form of asymmetrical warfare.
The "David" class of torpedo boats were steam powered with a partially enclosed hull. They were not true submarines but were "semi-submersible"; when ballasted, only the smokestack and a few inches of the hull were above the water line. CSS "Midge" was a "David"-class torpedo boat. CSS "Squib", among others, represented another class of torpedo boats that were also low built but had open decks and lacked the ballasting tanks found on the "David"s.
The Confederate torpedo boats were armed with spar torpedoes. This was a charge of powder in a waterproof case, mounted to the bow of the torpedo boat below the water line on a long spar. The torpedo boat attacked by ramming her intended target, which stuck the torpedo to the target ship by means of a barb on the front of the torpedo. The torpedo boat would back away to a safe distance and detonate the torpedo, usually by means of a long cord attached to a trigger.
In general, the Confederate torpedo boats were not very successful. Their low sides made them susceptible to swamping in high seas, and even to having their boiler fires extinguished by spray from their own torpedo explosions. Torpedo misfires (firing too early) and duds were common. In 1864, Union Navy Lieutenant William B. Cushing fitted a steam launch with a spar torpedo to attack the Confederate ironclad . That same year the Union launched , a purpose-built craft with a number of technical innovations including variable ballast for attack operations and an extensible and reloadable torpedo placement spar.
A prototype self-propelled torpedo was created by a commission placed by Giovanni Luppis, an Austrian naval officer from Rijeka, then a port city of the Austro-Hungarian Empire, and Robert Whitehead, an English engineer who was the manager of a town factory. In 1864, Luppis presented Whitehead with the plans of the "salvacoste" (coastsaver), a floating weapon driven by ropes from the land that had been dismissed by the naval authorities due to the impractical steering and propulsion mechanisms.
Whitehead was unable to improve the machine substantially, since the clockwork motor, attached ropes, and surface attack mode all contributed to a slow and cumbersome weapon. However, he kept considering the problem after the contract had finished, and eventually developed a tubular device, designed to run underwater on its own, and powered by compressed air. The result was a submarine weapon, the "Minenschiff" (mine ship), the first modern self-propelled torpedo, officially presented to the Austrian Imperial Naval commission on December 21, 1866.
The first trials were not successful as the weapon was unable to maintain a course on a steady depth. After much work, Whitehead introduced his "secret" in 1868 which overcame this. It was a mechanism consisting of a hydrostatic valve and pendulum that caused the torpedo's hydroplanes to be adjusted so as to maintain a preset depth.
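The combined hydrostat-and-pendulum arrangement is, in modern terms, proportional feedback on both depth and pitch. A toy discrete-time sketch (all gains and dynamics are invented for illustration, not Whitehead's actual mechanism constants) shows how the two signals together settle the torpedo onto a preset depth:

```python
def run_torpedo(target_depth, seconds, dt=0.1, speed=10.0,
                plane_gain=0.5, k_hydrostat=0.2, k_pendulum=2.0):
    """Crude depth-keeping model: the hydrostatic valve feeds back the
    depth error, the pendulum feeds back pitch, and their sum sets the
    hydroplane angle."""
    depth, pitch = 0.0, 0.0
    for _ in range(int(seconds / dt)):
        plane = -(k_hydrostat * (depth - target_depth) + k_pendulum * pitch)
        pitch += plane_gain * plane * dt   # hydroplanes change pitch
        depth += speed * pitch * dt        # pitch changes depth
    return depth

print(f"depth after 30 s: {run_torpedo(3.0, 30.0):.2f} m")  # settles near 3 m
```

In this model, dropping the pendulum term leaves the loop undamped, so the torpedo porpoises about the set depth indefinitely; this is essentially the behaviour Whitehead's "secret" corrected.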
During the mid-19th century, the ships of the line were superseded by large steam powered ships with heavy gun armament and heavy armour, called ironclads. Ultimately this line of development led to the dreadnought class of all-big-gun battleship, starting with .
At the same time, the weight of armour slowed the battleships, and the huge guns needed to penetrate enemy armour fired at very slow rates. This allowed for the possibility of a small and fast ship that could attack the battleships, at a much lower cost. The introduction of the torpedo provided a weapon that could cripple, or even sink, any battleship.
The first warship of any kind to carry self-propelled torpedoes was HMS "Vesuvius" of 1873. The first seagoing vessel "designed" to fire the self-propelled Whitehead torpedo was . The boat was built by John Thornycroft at Church Wharf in Chiswick for the Royal Navy. It entered service in 1876 and was armed with self-propelled Whitehead torpedoes.
As originally built, "Lightning" had two drop collars to launch torpedoes; these were replaced in 1879 by a single torpedo tube in the bow. She carried also two reload torpedoes amidships. She was later renamed "Torpedo Boat No. 1". The French Navy followed suit in 1878 with "Torpilleur No 1", launched in 1878 though she had been ordered in 1875.
Another early such ship was the Norwegian warship , ordered from Thornycroft shipbuilding company, England, in either 1872 or 1873, and built at Thornycroft's shipyard at Church Wharf in Chiswick on the River Thames. Managing a speed of , she was one of the fastest boats afloat when completed. The Norwegians initially planned to arm her with a spar torpedo, but this may never have been fitted. "Rap" was outfitted with launch racks for the new self-propelled Whitehead torpedoes in 1879.
The first recorded launch of torpedoes from a torpedo boat—which itself was launched from a torpedo boat tender—in an actual battle was by future Russian Admiral Stepan Makarov on January 16, 1878, who used self-propelled Whitehead torpedoes against the Ottoman gunboat "İntibâh" during the Russo-Turkish War of 1877–1878. The first sinking of an armoured ship by a torpedo boat using self-propelled torpedoes, the , occurred during the battle of Caldera Bay during the 1891 Chilean Civil War.
In the late 19th century, many navies started to build torpedo boats in length, armed with up to three torpedo launchers and small guns. They were powered by steam engines and had a maximum speed of . They were relatively inexpensive and could be purchased in quantity, allowing mass attacks on fleets of larger ships. The loss of even a squadron of torpedo boats to enemy fire would be more than outweighed by the sinking of a capital ship.
The Russo-Japanese War of 1904–1905 was the first great war of the 20th century. It was the first practical testing of the new steel battleships, cruisers, destroyers, submarines, and torpedo boats. During the war the Imperial Russian Navy in addition to their other warships, deployed 86 torpedo boats and launched 27 torpedoes (from all warships) in three major campaigns, scoring 5 hits.
The Imperial Japanese Navy (IJN), like the Russians, often combined their TBs (which possessed only hull numbers) with their torpedo boat destroyers (TBDs) (often simply referring to them as "destroyers") and launched over 270 torpedoes (counting the opening engagement at Port Arthur naval base on 8 February 1904) during the war. The IJN deployed approximately 21 TBs during the conflict, and on 27 May 1905 the Japanese torpedo boat destroyers and TBs launched 16 torpedoes at the battleship , Admiral Zinovy Rozhestvensky's flagship at the battle of Tsushima. Admiral Tōgō Heihachirō, the IJN commander, had ordered his torpedo boats to finish off the enemy flagship, already gunned into a wreck, as he prepared to pursue the remnants of the Russian battle fleet.
Of the 16 torpedoes launched by the TBDs and TBs at the Russian battleship, only four hit their mark; two of those hits were from torpedo boats "#72" and "#75". By evening, the battleship rolled over and sank to the bottom of the Tsushima Straits. By war's end, torpedoes launched from warships had sunk one battleship, two armored cruisers, and two destroyers; the more than 80 other warships lost were sunk by guns, mines, scuttling, or shipwreck.
The introduction of the torpedo boat resulted in a flurry of activity in navies around the world, as smaller, quicker-firing guns were added to existing ships to ward off the new threat. In the mid-1880s the torpedo gunboat was developed, the first vessel designed for the explicit purpose of hunting and destroying torpedo boats. Essentially very small cruisers, torpedo gunboats were equipped with torpedo tubes and an adequate gun armament, intended for hunting down smaller enemy boats.
The first example of this was , designed by Nathaniel Barnaby in 1885. The gunboat was armed with torpedoes and designed for hunting and destroying smaller torpedo boats. She was armed with a single 4-inch/25-pounder breech-loading gun, six 3-pounder QF guns and four torpedo tubes, arranged with two fixed tubes at the bow and a set of torpedo dropping carriages on either side. Four torpedo reloads were carried.
A number of torpedo gunboat classes followed, including the "Grasshopper" class, the , the and the – all built for the Royal Navy during the 1880s and the 1890s. However, by the end of the 1890s torpedo gunboats had been made obsolete by their more successful contemporaries, the torpedo boat destroyers, which were much faster.
The first ships to bear the formal designation "torpedo boat destroyer" (TBD) were the of two ships and of two ships of the Royal Navy, ordered in 1892 by Rear Admiral Jackie Fisher. These were basically enlarged torpedo boats, with speed equal to or surpassing the torpedo boats, but were armed with heavier guns that could attack them before they were able to close on the main fleet.
After the Russo-Japanese War, these ships became known simply as destroyers. Destroyers became so much more useful, having better seaworthiness and greater capabilities than torpedo boats, that they eventually replaced most torpedo boats. The London Naval Treaty after World War I limited the tonnage of warships but placed no limits on ships of under 600 tons. The French, Italian, Japanese and German navies therefore developed torpedo boats around that displacement, 70 to 100 m long, armed with two or three guns of around 100 mm (4 in) and torpedo launchers. For example, the Royal Norwegian Navy s were in fact of a torpedo boat size, while the Italian s were closer in size to a destroyer escort. After World War II they were eventually subsumed into the revived corvette classification.
The Kriegsmarine torpedo boats were classified "Torpedoboot" with "T"-prefixed hull numbers. The classes designed in the mid-1930s, such as the Torpedo boat type 35, had few guns, relying almost entirely upon their torpedoes. This was found to be inadequate in combat, and the result was a "fleet torpedo boat" class ("Flottentorpedoboot"), which were significantly larger, up to 1,700 tons, comparable to small destroyers. This class of German boats could be highly effective, as in the action in which the British cruiser was sunk off Brittany by a torpedo salvo launched by the s T23 and T27.
Before World War I, steam torpedo boats larger and more heavily armed than their predecessors were in use. The new internal combustion engine generated much more power for a given weight and size than steam engines, and allowed the development of a new class of small and fast boats. These powerful engines could make use of planing hull designs and, under appropriate sea conditions, were capable of much higher speeds than displacement hulls. The boat could carry two to four torpedoes fired from simple fixed launchers, and several machine guns.
During the First World War, three junior officers of the Harwich Force suggested that small motor boats carrying a torpedo might be capable of travelling over the protective minefields and attacking ships of the Imperial German Navy at anchor in their bases. In 1915, the Admiralty produced a Staff Requirement requesting designs for a Coastal Motor Boat for service in the North Sea. These boats were expected to have a high speed, making use of the lightweight and powerful petrol engines then available. The speed of the boat when fully loaded was to be at least and sufficient fuel was to be carried to give a considerable radius of action.
They were to be armed in a variety of ways: with torpedoes, with depth charges, or with equipment for laying mines. Secondary armament would be provided by light machine guns, such as the Lewis gun. The CMBs were designed by Thornycroft, who had experience in small fast boats. Engines were not purpose-built maritime internal combustion engines (as these were in short supply) but adapted aircraft engines from firms such as Sunbeam and Napier. A total of 39 such vessels were built.
In 1917 Thornycroft produced an enlarged overall version. This allowed a heavier payload, and now two torpedoes could be carried. A mixed warload of a single torpedo and four depth charges could also be carried, the depth charges released from individual cradles over the sides, rather than a stern ramp. Speeds from were possible, depending on the various petrol engines fitted. At least two unexplained losses due to fires in port are thought to have been caused by a build-up of petrol vapour igniting.
Italian torpedo boats sank the Austrian-Hungarian in 1917, and in 1918. During the civil war in Russia, British torpedo boats made raids on Kronstadt harbour damaging two battleships and sinking a cruiser.
Such vessels remained useful through World War II. The Royal Navy's Motor Torpedo Boats (MTBs), Kriegsmarine 'S-Boote' ("Schnellboot" or "fast-boat": British termed them E-boats), (Italian) M.A.S. and M.S., Soviet Navy and U.S. PT boats (standing for "Patrol Torpedo") were all of this type.
A classic fast torpedo boat action was the Channel Dash in February 1942 when German "E-boats" and destroyers defended the flotilla of , , and several smaller ships as the passed through the Channel.
By World War II torpedo boats were seriously hampered by higher fleet speeds; although they still had a speed advantage, they could only catch the larger ships by running at very high speeds over very short distances, as demonstrated in the Channel Dash. An even greater threat was the widespread arrival of patrol aircraft, which could hunt down torpedo boats long before they could engage their targets.
During World War II United States naval forces employed fast wooden PT boats in the South Pacific in a number of roles in addition to the originally envisioned one of torpedo attack. PT boats performed reconnaissance, ferry, courier, search & rescue as well as attack and smoke screening duties. They took part in fleet actions and they worked in smaller groups and singly to harry enemy supply lines. Late in the Pacific War when large targets became scarce, many PT boats replaced two or all four of their torpedo tubes with additional guns for engaging enemy coastal supply boats and barges, isolating enemy-held islands from supply, reinforcement or evacuation.
The most significant warship sunk by a torpedo boat during World War II was the cruiser HMS Manchester, which was attacked by two Italian torpedo boats (M.S. 16 and M.S. 22) during Operation Pedestal on 13 August 1942. The torpedo that mortally struck Manchester appears to have been launched by M.S. 22 (TV Franco Mezzadra) from a distance of about 600 metres.
Boats similar to torpedo boats are still in use, but are armed with long-range anti-ship missiles that can be used at ranges between 30 and 70 km. This reduces the need for high-speed chases and gives them much more room to operate in while approaching their targets.
Aircraft are a major threat, making the use of boats against any fleet with air cover very risky. The low height of the radar mast makes it difficult to acquire and lock onto a target while maintaining a safe distance. As a result, fast attack craft are being replaced for use in naval combat by larger corvettes, which are able to carry radar-guided anti-aircraft missiles for self-defense, and helicopters for over-the-horizon targeting.
Although torpedo boats have disappeared from the majority of the world's navies, they remained in use until the late 1990s and early 2000s in a few specialised areas, most notably in the Baltic. The close confines of the Baltic and ground clutter effectively negated the range benefits of early ASMs. Operating close to shore in conjunction with land-based air cover and radars, and in the case of the Norwegian navy hidden bases cut into fjord sides, torpedo boats remained a cheap and viable deterrent to amphibious attack. Indeed, this is still the operational model followed by the Chinese Navy for the protection of its coastal and estuarial waters.
The Book of the Courtier
The Book of the Courtier by Baldassare Castiglione is a lengthy philosophical dialogue on the topic of what constitutes an ideal courtier or (in the third chapter) court lady, worthy to befriend and advise a Prince or political leader. The book quickly became enormously popular and was assimilated by its readers into the genre of prescriptive courtesy books or books of manners, dealing with issues of etiquette, self-presentation, and morals, particularly at princely or royal courts: books such as Giovanni Della Casa's "Galateo" (1558) and Stefano Guazzo's "The Civil Conversation" (1574). The "Book of the Courtier" was much more than that, however, having the character of a drama, an open-ended philosophical discussion, and an essay. It has been seen as a poignantly nostalgic evocation of an idealized milieu — that of the small courts of the High Renaissance which were vanishing in the Italian Wars; as a reverent tribute to the friends of Castiglione's youth, in particular the chastely married Duchess Elisabetta Gonzaga of Urbino, to whom Castiglione had addressed a sequence of Platonic sonnets and who died in 1526; and even as a veiled political allegory. It was composed over the course of twenty years, beginning in 1508, and ultimately published in 1528 by the Aldine Press in Venice just before the author's death. An influential English translation by Thomas Hoby was published in 1561.
The book is organized as a series of conversations that were supposed to have taken place over four nights in 1507 between the courtiers of the Duchy of Urbino, at a time when Castiglione was himself a member of the Duke's court, although he is not portrayed as one of the interlocutors. In the book, the courtier is described as having a cool mind, a good voice (with beautiful, elegant and brave words) along with proper bearing and gestures. At the same time though, the courtier is expected to have a warrior spirit, to be athletic, and have good knowledge of the humanities, Classics and fine arts.
Over the course of four evenings, members of the court try to describe the perfect gentleman of the court. In the process they debate the nature of nobility, humor, women, and love.
"The Book of the Courtier" was one of the most widely distributed books of the 16th century, with editions printed in six languages and in twenty European centers. The 1561 English translation by Thomas Hoby had a great influence on the English upper class's conception of English gentlemen. The Courtier enjoyed influence for some generations, not least in Elizabethan England following its first translation by Sir Thomas Hoby in 1561, a time when Italian culture was very much in fashion.
Of the many qualities Castiglione's characters attribute to their perfect courtier, oratory and the manner in which the courtier presents himself while speaking is amongst the most highly discussed. Wayne Rebhorn, a Castiglione scholar, states that the courtier's speech and behavior in general is “designed to make people marvel at him, to transform himself into a beautiful spectacle for others to contemplate." As explained by Count Ludovico, the success of the courtier depends greatly on his reception by the audience from the first impression. This partly explains why the group considers the courtier's dress so vital to his success.
Castiglione's characters opine about how their courtier can impress his audience and win its approval. Similar to the Classical Roman rhetoricians Cicero and Quintilian, Castiglione stresses the importance of delivery while speaking. In Book I, the Count states that when the courtier speaks he must have a “sonorous, clear, sweet and well sounding” voice that is neither too effeminate nor too rough and be “tempered by a calm face and with a play of the eyes that shall give an effect of grace.” (Castiglione 1.33) This grace, or "grazia", becomes an important element in the courtier's appearance to the audience. Edoardo Saccone states in his analysis of Castiglione, “"grazia" consists of, or rather is obtained through, "sprezzatura".”
According to the Count, "sprezzatura" is amongst the most important, if not the most important, rhetorical devices the courtier needs. Peter Burke describes "sprezzatura" in "The Book of the Courtier" as "nonchalance", "careful negligence", and "effortless ease". The ideal courtier is someone who "conceals art, and presents what is done and said as if it was done without effort and virtually without thought" (31).
The Count advocates the courtier engage in "sprezzatura", or this “certain nonchalance”, in all the activities he participates in, especially speech. In Book I, he states, "Accordingly we may affirm that to be true art which does not appear to be art; nor to anything must we give greater care than to conceal art, for if it is discovered, it quite destroys our credit and brings us into small esteem." (Castiglione 1.26) The Count reasons that by obscuring his knowledge of letters, the courtier gives the appearance that his “orations were composed very simply” as if they sprang up from “nature and truth [rather] than from study and art.” (1.26). This much more natural appearance, even though it is not natural by any means, is more advantageous to the courtier.
The Count contends that if the courtier wants to attain "grazia" and be esteemed excellent, it would be in his best interest to have this appearance of nonchalance. By failing to employ "sprezzatura", he destroys his opportunity for grace. By applying "sprezzatura" to his speech and everything else he does, the courtier appears to have "grazia" and impresses his audience, thereby achieving excellence and perfection. (Saccone 16).
Another feature of rhetoric which Castiglione discusses is the role of written language and style. Castiglione declined to imitate Boccaccio and write in Tuscan Italian, as was customary at the time; instead he wrote in the Italian used in his native Lombardy (he was born near Mantua): as the Count says, “certainly it would require a great deal of effort on my part if in these discussions of ours I wished to use those old Tuscan words which the Tuscans of today have discarded; and what’s more I’m sure you would all laugh at me” (Courtier 70). Here, the use of the old and outdated Tuscan language is seen as a form of excess rather than a desirable trait. Castiglione states that had he followed Tuscan usage in his book, his description of sprezzatura would appear hypocritical, in that his effort would be seen as lacking in nonchalance (Courtier 71).
Federico responds to the Count's assessment of the use of spoken language by posing the question as to what is the best language in which to write rhetoric. The Count's response basically states that the language does not matter, but rather the style, authority, and grace of the work matters most (Courtier 71). Robert J. Graham, a Renaissance literary scholar, notes that “questions of whose language is privileged at any given historical moment are deeply implicated in matters of personal, social and cultural significance”, which he states is the primary reason for Castiglione's usage of the native vernacular. This also illustrates the Count's response on the relativity of language in Latin. With the role of language set, Castiglione begins to describe the style and authority in which the courtier must write in order to become successful.
The Count explains, "it is right that greater pains would be taken to make what is written more polished and correct…they should be chosen from the most beautiful of those employed in speech" (Courtier 71). This is where the style in which the courtier writes encourages the persuasiveness or success of a speech. The success of a written speech, in contrast to the spoken speech, hinges on the notion that "we are willing to tolerate a great deal of improper and even careless usage" in oral rhetoric more readily than in written rhetoric. The Count explains that along with proper word usage, an ideal courtier must have a proper sense of style and flow to their words. These words must be factual yet entertaining, as the Count states, "then, it is necessary to arrange what is to be said or written in its logical order, and after that to express it well in words that, if I am not mistaken, should be appropriate, carefully chosen, clear and well formed, but above all that are still in popular use" (Courtier 77). This emphasis on language is noted by Graham: "Although the Count is aware that more traditional aspects of the orator (appearance, gestures, voice, etc.)…all this will be futile and of little consequence if the ideas conveyed by these words themselves are not witty or elegant to the requirements of the situation" (Graham 49).
Traceroute
In computing, traceroute and tracert are computer network diagnostic commands for displaying the route (path) and measuring transit delays of packets across an Internet Protocol (IP) network. The history of the route is recorded as the round-trip times of the packets received from each successive host (remote node) in the route (path); the sum of the mean times in each hop is a measure of the total time spent to establish the connection. Traceroute proceeds unless all (usually three) sent packets are lost more than twice in a row; the connection is then considered lost and the route cannot be evaluated. Ping, on the other hand, only computes the final round-trip times from the destination point.
For Internet Protocol Version 6 (IPv6) the tool sometimes has the name "traceroute6" or "tracert6".
The command traceroute is available on many modern operating systems. On Unix-like systems such as FreeBSD, macOS, and Linux it is available as a command-line tool. Traceroute is also graphically accessible in macOS within the Network Utility application.
Microsoft Windows and ReactOS provide a program named tracert that performs the same route-tracing function. Windows NT-based operating systems also provide PathPing, with similar functionality. The ReactOS version was developed by Ged Murphy and is licensed under the GPL.
On Unix-like operating systems, traceroute sends, by default, a sequence of User Datagram Protocol (UDP) packets, with destination port numbers ranging from 33434 to 33534; the implementations of traceroute shipped with Linux, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, and macOS include an option to use ICMP Echo Request packets ("-I"), or any arbitrary protocol ("-P") such as UDP, TCP using TCP SYN packets, or ICMP.
On Windows, tracert sends ICMP Echo Request packets, rather than the UDP packets traceroute sends by default.
The time-to-live (TTL) value, also known as "hop limit", is used in determining the intermediate routers being traversed towards the destination. Traceroute sends packets with TTL values that gradually increase from packet to packet, starting with TTL value of one. Routers decrement TTL values of packets by one when routing and discard packets whose TTL value has reached zero, returning the ICMP error message ICMP Time Exceeded. For the first set of packets, the first router receives the packet, decrements the TTL value and drops the packet because it then has TTL value zero. The router sends an ICMP Time Exceeded message back to the source. The next set of packets are given a TTL value of two, so the first router forwards the packets, but the second router drops them and replies with ICMP Time Exceeded. Proceeding in this way, traceroute uses the returned ICMP Time Exceeded messages to build a list of routers that packets traverse, until the destination is reached and returns an ICMP Destination Unreachable message if UDP packets are being used or an ICMP Echo Reply message if ICMP Echo messages are being used.
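The increasing-TTL probing loop described above can be sketched as a small simulation (an illustration only, not a real network probe; the router names used here are hypothetical):

```python
def trace_path(routers, destination):
    """Simulate traceroute's probing: send probes with TTL 1, 2, 3, ...
    and record which node answers each one. A probe's TTL is decremented
    at every node; the node where it reaches zero replies with ICMP Time
    Exceeded, while a reply from the destination itself ends the trace."""
    path = routers + [destination]
    discovered = []
    ttl = 1
    while True:
        # The probe expires at hop number `ttl` (or at the destination,
        # whichever comes first), and that node becomes the responder.
        responder = path[min(ttl, len(path)) - 1]
        discovered.append(responder)
        if responder == destination:
            return discovered
        ttl += 1

hops = trace_path(["r1", "r2", "r3"], "host")
# Each TTL value exposes one more hop: ["r1", "r2", "r3", "host"]
```

Real implementations send several probes per TTL value and time each reply; probes that go unanswered show up as asterisks in the output rather than hostnames.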
The timestamp values returned for each router along the path are the delay (latency) values, typically measured in milliseconds for each packet.
The sender expects a reply within a specified number of seconds. If a packet is not acknowledged within the expected interval, an asterisk is displayed. The Internet Protocol does not require packets to take the same route towards a particular destination, thus hosts listed might be hosts that other packets have traversed. If the host at hop #N does not reply, the hop is skipped in the output.
If a network has a firewall and operates both Windows and Unix-like systems, more than one protocol must be enabled inbound through the firewall for traceroute to work and receive replies.
Some traceroute implementations use TCP packets, such as "tcptraceroute" and layer four traceroute (lft). PathPing is a utility introduced with Windows NT that combines ping and traceroute functionality. MTR is an enhanced version of ICMP traceroute available for Unix-like and Windows systems. The various implementations of traceroute all rely on ICMP Time Exceeded (type 11) packets being sent to the source.
On Linux, "tracepath" is a utility similar to traceroute, with the primary difference of not requiring superuser privileges.
Cisco's implementation of traceroute also uses a sequence of UDP datagrams, each with incrementing TTL values, to an invalid port number at the remote host; by default, UDP port 33434 is used. An extended version of this command (known as the "extended traceroute" command) can change the destination port number used by the UDP probe messages.
Most implementations include at least options to specify the number of queries to send per hop, time to wait for a response, the hop limit and port to use. Invoking traceroute with no specified options displays the list of available options, while "man traceroute" presents more details, including the displayed error flags. An example on Linux:
$ traceroute -w 3 -q 1 -m 16 example.com
In the example above, selected options are to wait for three seconds (instead of five), send out only one query to each hop (instead of three), limit the maximum number of hops to 16 before giving up (instead of 30), with "example.com" as the final host.
This can help identify incorrect routing table definitions or firewalls that may be blocking ICMP traffic, or the high-port UDP probes of Unix traceroute, to a site. A firewall may permit ICMP packets but not permit packets of other protocols.
Traceroute is also used by penetration testers to gather information about network infrastructure and IP ranges around a given host.
It can also be used when downloading data, and if there are multiple mirrors available for the same piece of data, each mirror can be traced to get an idea of which mirror would be the fastest to use.
The traceroute manual page states that the original traceroute program was written by Van Jacobson in 1987 from a suggestion by Steve Deering, with particularly cogent suggestions or fixes from C. Philip Wood, Tim Seaver and Ken Adelman. The author of the ping program, Mike Muuss, states on his website that traceroute was written using kernel ICMP support that he had earlier coded to enable raw ICMP sockets when he first wrote the ping program.
Traceroute limitations are well known and should be taken into account when using the tool. For example, traceroute does not discover paths at the router level, but at the interface level. Another limitation appears when routers do not respond to probes or when routers have a limit for ICMP responses. In the presence of traffic load balancing, traceroute may indicate a path that does not actually exist; to minimize this problem there is a traceroute modification called Paris-traceroute, which maintains the flow identifier of the probes to avoid load balancing.
Time to live
Time to live (TTL) or hop limit is a mechanism that limits the lifespan or lifetime of data in a computer or network. TTL may be implemented as a counter or timestamp attached to or embedded in the data. Once the prescribed event count or timespan has elapsed, data is discarded or revalidated. In computer networking, TTL prevents a data packet from circulating indefinitely. In computing applications, TTL is commonly used to improve the performance and manage the caching of data.
Under the Internet Protocol, TTL is an 8-bit field. In the IPv4 header, TTL is the 9th octet of 20. In the IPv6 header, it is the 8th octet of 40. The maximum TTL value is 255, the maximum value of a single octet. A recommended initial value is 64.
The time-to-live value can be thought of as an upper bound on the time that an IP datagram can exist in an Internet system. The TTL field is set by the sender of the datagram, and reduced by every router on the route to its destination. If the TTL field reaches zero before the datagram arrives at its destination, then the datagram is discarded and an Internet Control Message Protocol (ICMP) error datagram (11 - Time Exceeded) is sent back to the sender. The purpose of the TTL field is to avoid a situation in which an undeliverable datagram keeps circulating on an Internet system, and such a system eventually becoming swamped by such "immortals".
In theory, under IPv4, time to live is measured in seconds, although every host that passes the datagram must reduce the TTL by at least one unit. In practice, the TTL field is reduced by one on every hop. To reflect this practice, the field is renamed "hop limit" in IPv6.
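The per-hop rule just described can be shown as a minimal sketch (illustrative Python, not actual router or kernel code):

```python
def forward(ttl):
    """Each router decrements the TTL by one; a packet whose TTL reaches
    zero is discarded and an ICMP Time Exceeded message (type 11) is
    returned to the sender."""
    ttl -= 1
    if ttl == 0:
        return None, "ICMP Time Exceeded"  # packet dropped at this hop
    return ttl, None                       # packet forwarded onward

# A datagram sent with the recommended initial TTL of 64 can therefore
# cross at most 63 intermediate routers before being discarded.
```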
TTLs also occur in the Domain Name System (DNS), where they are set by an authoritative name server for a particular resource record. When a caching (recursive) nameserver queries the authoritative nameserver for a resource record, it will cache that record for the time (in seconds) specified by the TTL. If a stub resolver queries the caching nameserver for the same record before the TTL has expired, the caching server will simply reply with the already cached resource record rather than retrieve it from the authoritative nameserver again. TTL for NXDOMAIN (non-existent domain) responses is set from the minimum of the MINIMUM field of the SOA record and the TTL of the SOA itself, and indicates how long a resolver may cache the negative answer.
Shorter TTLs can cause heavier loads on an authoritative name server, but can be useful when changing the address of critical services like web servers or MX records, and therefore are often lowered by the DNS administrator prior to a service being moved, in order to reduce possible disruptions.
The units used are seconds. An older common TTL value for DNS was 86400 seconds, which is 24 hours. A TTL value of 86400 would mean that, if a DNS record was changed on the authoritative nameserver, DNS servers around the world could still be showing the old value from their cache for up to 24 hours after the change.
Newer DNS methods that are part of a disaster recovery (DR) system may have some records deliberately set extremely low on TTL. For example, a 300-second TTL would help key records expire in 5 minutes to help ensure these records are flushed quickly worldwide. This gives administrators the ability to edit and update records in a timely manner. TTL values are "per record", and setting this value on specific records is generally honored automatically by standard DNS systems worldwide. However, a problem persists in that some caching DNS nameservers set their own TTLs regardless of the authoritative records, so it cannot be guaranteed that all downstream DNS servers have the new records after the TTL has expired.
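The caching behaviour described above can be illustrated with a minimal record cache (a sketch only; the class name and record values are illustrative, and real resolvers are far more involved):

```python
import time

class TtlCache:
    """Keep each record for its TTL (in seconds); after expiry the record
    is evicted and must be fetched from the authoritative server again."""

    def __init__(self, now=time.time):
        self.now = now        # injectable clock, to make expiry testable
        self.store = {}

    def put(self, name, value, ttl):
        self.store[name] = (value, self.now() + ttl)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None       # cache miss: ask the authoritative server
        value, expires = entry
        if self.now() >= expires:
            del self.store[name]
            return None       # TTL elapsed: treat as a miss
        return value

# With a 300-second TTL, a record answers from cache for 5 minutes:
clock = [0]
cache = TtlCache(now=lambda: clock[0])
cache.put("example.com", "93.184.216.34", ttl=300)
assert cache.get("example.com") == "93.184.216.34"
clock[0] = 301
assert cache.get("example.com") is None   # expired; must re-query
```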
Time to live may also be expressed as a date and time on which a record expires. The Expires header in HTTP responses, the Cache-Control header field in both requests and responses, and the Expires attribute in HTTP cookies express time-to-live in this way.
Tel Aviv
Tel Aviv-Yafo (Hebrew: "Tel Aviv-Yafo"; Arabic: "Tall ʾAbīb-Yāfā"), often referred to simply as Tel Aviv, is the most populous city in the Gush Dan metropolitan area of Israel. Located on the Israeli Mediterranean coastline and with a population of , it is the economic and technological center of the country. If East Jerusalem is considered part of Israel, Tel Aviv is the country's second most populous city after Jerusalem; if not, Tel Aviv is the most populous city, ahead of West Jerusalem.
Tel Aviv is governed by the Tel Aviv-Yafo Municipality, headed by Mayor Ron Huldai, and is home to many foreign embassies. It is a global city and is ranked 25th in the Global Financial Centres Index. Tel Aviv has the third- or fourth-largest economy and the largest economy per capita in the Middle East. The city has the 31st highest cost of living in the world. Tel Aviv receives over 2.5 million international visitors annually. A "party capital" in the Middle East, it has a lively nightlife and 24-hour culture. Tel Aviv has been called "The World's Vegan Food Capital", as it possesses the highest per capita population of vegans in the world, with many vegan eateries throughout the city. Tel Aviv is home to Tel Aviv University, the largest university in the country with more than 30,000 students.
The city was founded in 1909 by the Yishuv (Jewish residents) as a modern housing estate on the outskirts of the ancient port city of Jaffa, then part of the Mutasarrifate of Jerusalem within the Ottoman Empire. It was at first called 'Ahuzat Bayit' ("lit." "House Estate" or "Homestead"), the name of the association which established the neighbourhood. Its name was changed the following year to 'Tel Aviv', after the biblical name Tel Abib adopted by Nahum Sokolow as the title for his Hebrew translation of Theodor Herzl's 1902 novel "Altneuland" ("Old New Land"). Other Jewish suburbs of Jaffa established before Tel Aviv eventually became part of Tel Aviv, the oldest among them being Neve Tzedek (est. 1886). Tel Aviv was given "township" status within the Jaffa Municipality in 1921, and became independent from Jaffa in 1934. After the 1947–1949 Palestine war Tel Aviv began the municipal annexation of parts of Jaffa, fully unified with Jaffa under the name "Tel Aviv" in April 1950, and was renamed to "Tel Aviv-Yafo" in August 1950.
Immigration by mostly Jewish refugees meant that the growth of Tel Aviv soon outpaced that of Jaffa, which had a majority Arab population at the time. Tel Aviv and Jaffa were later merged into a single municipality in 1950, two years after the Israeli Declaration of Independence, which was proclaimed in the city. Tel Aviv's White City, designated a UNESCO World Heritage Site in 2003, comprises the world's largest concentration of International Style buildings, including Bauhaus and other related modernist architectural styles.
"Tel Aviv" is the Hebrew title of Theodor Herzl's "Altneuland" ("Old New Land"), translated from German by Nahum Sokolow. Sokolow had adopted the name of a Mesopotamian site near the city of Babylon mentioned in Ezekiel: "Then I came to them of the captivity at Tel Aviv, that lived by the river Chebar, and to where they lived; and I sat there overwhelmed among them seven days." The name was chosen in 1910 from several suggestions, including "Herzliya". It was found fitting as it embraced the idea of a renaissance in the ancient Jewish homeland. "Aviv" is Hebrew for "spring", symbolizing renewal, and "tel" is a man-made mound accumulating layers of civilization built one over the other and symbolizing the ancient.
Although founded in 1909 as a small settlement on the sand dunes north of Jaffa, Tel Aviv was envisaged as a future city from the start. Its founders hoped that in contrast to what they perceived as the squalid and unsanitary conditions of neighbouring Arab towns, Tel Aviv was to be a clean and modern city, inspired by the European cities of Warsaw and Odessa. The marketing pamphlets advocating for its establishment stated:
The walled city of Jaffa was the only inhabited part of what is now Tel Aviv in early modern times. Jaffa was an important port city in the region for millennia. Archaeological evidence shows signs of human settlement there starting in roughly 7,500 BC. The city was established around 1,800 BC at the latest. Its natural harbour has been used since the Bronze Age. By the time Tel Aviv was founded as a separate city during Ottoman rule of the region, Jaffa had been ruled by the Canaanites, Egyptians, Philistines, Israelites, Assyrians, Babylonians, Persians, Phoenicians, Ptolemies, Seleucids, Hasmoneans, Romans, Byzantines, the early Islamic caliphates, Crusaders, Ayyubids, and Mamluks before coming under Ottoman rule in 1515. It had been fought over numerous times. The city is mentioned in ancient Egyptian documents, as well as the Hebrew Bible.
Other ancient sites in Tel Aviv include Tell Qasile, Tel Gerisa, Abattoir Hill, Tel Hashash and Tell Qudadi.
During the First Aliyah in the 1880s, when Jewish immigrants began arriving in the region in significant numbers, new neighborhoods were founded outside Jaffa on the current territory of Tel Aviv. The first was Neve Tzedek, founded by Mizrahi Jews due to overcrowding in Jaffa and built on lands owned by Aharon Chelouche. Other neighborhoods were Neve Shalom (1890), Yafa Nof (1896), Achva (1899), Ohel Moshe (1904), Kerem HaTeimanim (1906), and others. Once Tel Aviv received city status in the 1920s, those neighborhoods joined the newly formed municipality, now becoming separated from Jaffa.
The Second Aliyah led to further expansion. In 1906, a group of Jews, among them residents of Jaffa, followed the initiative of Akiva Aryeh Weiss and banded together to form the "Ahuzat Bayit" (lit. "homestead") society. One of the society's goals was to form a "Hebrew urban centre in a healthy environment, planned according to the rules of aesthetics and modern hygiene." The urban planning for the new city was influenced by the Garden city movement. The first 60 plots were purchased in Kerem Djebali near Jaffa by Jacobus Kann, a Dutch citizen, who registered them in his name to circumvent the Turkish prohibition on Jewish land acquisition. Meir Dizengoff, later Tel Aviv's first mayor, also joined the Ahuzat Bayit society. His vision for Tel Aviv involved peaceful co-existence with Arabs.
On 11 April 1909, 66 Jewish families gathered on a desolate sand dune to parcel out the land by lottery using seashells. The date of this gathering is considered the official date of the establishment of Tel Aviv. The lottery was organised by Akiva Aryeh Weiss, president of the building society. Weiss collected 120 sea shells on the beach, half of them white and half of them grey. The members' names were written on the white shells and the plot numbers on the grey shells. A boy drew names from one box of shells and a girl drew plot numbers from the second box. A photographer, Avraham Soskin, documented the event. The first water well was later dug at this site, located on what is today Rothschild Boulevard, across from Dizengoff House. Within a year, Herzl, Ahad Ha'am, Yehuda Halevi, Lilienblum, and Rothschild streets were built; a water system was installed; and 66 houses (including some on six subdivided plots) were completed. At the end of Herzl Street, a plot was allocated for a new building for the Herzliya Hebrew High School, founded in Jaffa in 1906. The cornerstone for the building was laid on 28 July 1909. The town was originally named Ahuzat Bayit. On 21 May 1910, the name Tel Aviv was adopted. The flag and city arms of Tel Aviv (see above) contain, under the red Star of David, two words from the biblical book of Jeremiah: "I (God) will build You up again and you will be rebuilt." (Jer 31:4) Tel Aviv was planned as an independent Hebrew city with wide streets and boulevards, running water for each house, and street lights.
By 1914, Tel Aviv had grown to more than . In 1915 a census of Tel Aviv was conducted, recording a population of 2,679. However, growth halted in 1917 when the Ottoman authorities expelled the residents of Jaffa and Tel Aviv as a wartime measure. A report published in "The New York Times" by United States Consul Garrels in Alexandria, Egypt, described the Jaffa deportation of early April 1917. The orders of evacuation were aimed chiefly at the Jewish population. Jews were free to return to their homes in Tel Aviv at the end of the following year when, with the end of World War I and the defeat of the Ottomans, the British took control of Palestine.
The town had rapidly become an attraction to immigrants, with a local activist writing:
Tel Aviv, along with the rest of the Jaffa municipality, was conquered by the British imperial army in late 1917 during the Sinai and Palestine Campaign of World War I and became part of British-administered Mandatory Palestine until 1948.
Tel Aviv, established as a suburb of Jaffa, received "township" or local council status within the Jaffa Municipality in 1921. According to a census conducted in 1922 by the British Mandate authorities, the Tel Aviv township had a population of 15,185 inhabitants, consisting of 15,065 Jews, 78 Muslims and 42 Christians. By the 1931 census the population had increased to 46,101, in 12,545 houses.
With increasing Jewish immigration during the British administration, friction between Arabs and Jews in Palestine increased. On 1 May 1921, the Jaffa Riots resulted in the deaths of 48 Arabs and 47 Jews and injuries to 146 Jews and 73 Arabs. In the wake of this violence, many Jews left Jaffa for Tel Aviv, increasing the population of Tel Aviv from 2,000 in 1920 to around 34,000 by 1925.
Tel Aviv began to develop as a commercial center.
In 1923, Tel Aviv became the first town in Palestine to be wired for electricity, followed by Jaffa later the same year. The opening ceremony of the Jaffa Electric Company powerhouse, on 10 June 1923, celebrated the lighting of the two main streets of Tel Aviv.
In 1925, the Scottish biologist, sociologist, philanthropist and pioneering town planner Patrick Geddes drew up a master plan for Tel Aviv which was adopted by the city council led by Meir Dizengoff. Geddes's plan for developing the northern part of the district was based on Ebenezer Howard's garden city movement. While most of the northern area of Tel Aviv was built according to this plan, the influx of European refugees in the 1930s necessitated the construction of taller apartment buildings on a larger footprint in the city.
Ben Gurion House was built in 1930–31, part of a new workers' housing development. At the same time, Jewish cultural life was given a boost by the establishment of the Ohel Theatre and the decision of Habima Theatre to make Tel Aviv its permanent base in 1931.
Tel Aviv was granted the status of an independent municipality separate from Jaffa in 1934.
The Jewish population rose dramatically during the Fifth Aliyah after the Nazis came to power in Germany. By 1937 the Jewish population of Tel Aviv had risen to 150,000, compared to Jaffa's mainly Arab 69,000 residents. Within two years, it had reached 160,000, which was over a third of Palestine's total Jewish population. Many new Jewish immigrants to Palestine disembarked in Jaffa, and remained in Tel Aviv, turning the city into a center of urban life. Friction during the 1936–39 Arab revolt led to the opening of a local Jewish port, Tel Aviv Port, independent of Jaffa, in 1938. It closed on 25 October 1965. Lydda Airport (later Ben Gurion Airport) and Sde Dov Airport opened between 1937 and 1938.
Many German Jewish architects trained at the Bauhaus, the Modernist school of architecture in Germany, and left Germany during the 1930s. Some, like Arieh Sharon, came to Palestine and adapted the architectural outlook of the Bauhaus and similar schools to the local conditions there, creating what is recognized as the largest concentration of buildings in the International Style in the world.
Tel Aviv's White City emerged in the 1930s, and became a UNESCO World Heritage Site in 2003. During World War II, Tel Aviv was hit by Italian airstrikes on 9 September 1940, which killed 137 people in the city.
During the Jewish insurgency in Mandatory Palestine, Jewish Irgun and Lehi guerrillas launched repeated attacks against British military, police, and government targets in the city. In 1946, following the King David Hotel bombing, the British carried out Operation Shark, in which the entire city was placed under curfew while it was searched for Jewish militants and most of the residents were questioned. During the March 1947 martial law in Mandatory Palestine, Tel Aviv was placed under martial law by the British authorities for 15 days, with the residents kept under curfew for all but three hours a day as British forces hunted for militants. In spite of this, Jewish guerrilla attacks continued in Tel Aviv and other areas under martial law in Palestine.
According to the 1947 UN Partition Plan for dividing Palestine into Jewish and Arab states, Tel Aviv, by then a city of 230,000, was to be included in the proposed Jewish state. Jaffa with, as of 1945, a population of 101,580 people—53,930 Muslims, 30,820 Jews and 16,800 Christians—was designated as part of the Arab state. Civil War broke out in the country and in particular between the neighbouring cities of Tel Aviv and Jaffa, which had been assigned to the Jewish and Arab states respectively. After several months of siege, on 13 May 1948, Jaffa fell and the Arab population fled en masse.
When Israel declared independence on 14 May 1948, the population of Tel Aviv was over 200,000. Tel Aviv was the temporary government center of the State of Israel until the government moved to Jerusalem in December 1949. Due to the international dispute over the status of Jerusalem, most embassies remained in or near Tel Aviv.
The boundaries of Tel Aviv and Jaffa became a matter of contention between the Tel Aviv municipality and the Israeli government in 1948. The former wished to incorporate only the northern Jewish suburbs of Jaffa, while the latter wanted a more complete unification. The issue also had international sensitivity, since the main part of Jaffa was in the Arab portion of the United Nations Partition Plan, whereas Tel Aviv was not, and no armistice agreements had yet been signed. On 10 December 1948, the government announced the annexation to Tel Aviv of Jaffa's Jewish suburbs, the Palestinian neighborhood of Abu Kabir, the Arab village of Salama and some of its agricultural land, and the Jewish 'Hatikva' slum. On 25 February 1949, the depopulated Palestinian village of al-Shaykh Muwannis was also annexed to Tel Aviv. On 18 May 1949, Manshiya and part of Jaffa's central zone were added, for the first time including land that had been in the Arab portion of the UN partition plan. The government voted on the unification of Tel Aviv and Jaffa on 4 October 1949, but the decision was not implemented until 24 April 1950 due to the opposition of Tel Aviv mayor Israel Rokach. The name of the unified city was Tel Aviv until 19 August 1950, when it was renamed Tel Aviv-Yafo in order to preserve the historical name Jaffa.
Tel Aviv thus grew to . In 1949, a memorial to the 60 founders of Tel Aviv was constructed.
In the 1960s, some of the older buildings were demolished, making way for the country's first high-rises. The historic Herzliya Hebrew Gymnasium was controversially demolished, to make way for the Shalom Meir Tower, which was completed in 1965, and remained Israel's tallest building until 1999. Tel Aviv's population peaked in the early 1960s at 390,000, representing 16 percent of the country's total.
By the early 1970s, Tel Aviv had entered a long and steady period of continuous population decline, which was accompanied by urban decay. By 1981, Tel Aviv had entered not just natural population decline, but absolute population decline as well. In the late 1980s the city had an aging population of 317,000. Construction activity had moved away from the inner ring of Tel Aviv to its outer perimeter and adjoining cities. A mass out-migration of residents from Tel Aviv to adjoining cities like Petah Tikva and Rehovot, where better housing conditions were available, was underway by the beginning of the 1970s, and was only accelerated by the Yom Kippur War. Cramped housing conditions and high property prices pushed families out of Tel Aviv and deterred young people from moving in. From the beginning of the 1970s, the common image of Tel Aviv became that of a decaying city, as its population fell by 20 percent.
In the 1970s, the apparent sense of Tel Aviv's urban decline became a theme in the work of novelists such as Yaakov Shabtai, in works describing the city such as "Sof Davar" ("The End of Things") and "Zikhron Devarim" ("The Memory of Things"). A symptomatic article of 1980 asked "Is Tel Aviv Dying?" and portrayed what it saw as the city's existential problems: "Residents leaving the city, businesses penetrating into residential areas, economic and social gaps, deteriorating neighbourhoods, contaminated air - Is the First Hebrew City destined for a slow death? Will it become a ghost town?". However, others saw this as a transitional period. By the late 1980s, attitudes to the city's future had become markedly more optimistic. It had also become a center of nightlife and discotheques for Israelis who lived in the suburbs and adjoining cities. By 1989, Tel Aviv had acquired the nickname "Nonstop City", as a reflection of the growing recognition of its nightlife and 24/7 culture, and "Nonstop City" had to some extent replaced the former moniker of "First Hebrew City".
The largest project built in this era was the Dizengoff Center, Israel's first shopping mall, which was completed in 1983. Other notable projects included the construction of Marganit Tower in 1987, the opening of the Suzanne Dellal Center for Dance and Theater in 1989, and the Tel Aviv Cinematheque (opened in 1973 and relocated to its current building in 1989).
In the early 1980s, 13 embassies in Jerusalem moved to Tel Aviv as part of the UN's measures responding to Israel's 1980 Jerusalem Law. Today, most national embassies are located in Tel Aviv or environs.
In the 1990s, the decline in Tel Aviv's population began to be reversed and stabilized, at first temporarily due to a wave of immigrants from the former Soviet Union. Tel Aviv absorbed 42,000 immigrants from the FSU, many educated in scientific, technological, medical and mathematical fields. In this period, the number of engineers in the city doubled. Tel Aviv soon began to emerge as a global high-tech center. The construction of many skyscrapers and high-tech office buildings followed. In 1993, Tel Aviv was categorized as a world city.
However, the city's municipality struggled to cope with an influx of new immigrants. Tel Aviv's tax base had been shrinking for many years, as a result of its preceding long-term population decline, which meant there was little money available at the time to invest in the city's deteriorating infrastructure and housing. In 1998, Tel Aviv was on the "verge of bankruptcy". Economic difficulties were then compounded by a wave of Palestinian suicide bombings in the city from the mid-1990s to the end of the Second Intifada, as well as the bursting of the dot-com bubble, which affected the city's rapidly growing hi-tech sector.
On 4 November 1995, Israel's prime minister, Yitzhak Rabin, was assassinated at a rally in Tel Aviv in support of the Oslo peace accord. The outdoor plaza where this occurred, formerly known as Kikar Malchei Yisrael, was renamed Rabin Square.
New laws were introduced to protect Modernist buildings, and efforts to preserve them were aided by UNESCO recognition of Tel Aviv's White City as a world heritage site in 2003. In the early 2000s, the Tel Aviv municipality focused on attracting more young residents to the city. It made significant investments in major boulevards to create attractive pedestrian corridors. Former industrial areas, like the city's previously derelict northern port and the Jaffa railway station, were upgraded and transformed into leisure areas. A process of gentrification began in some of the poor neighborhoods of southern Tel Aviv, and many older buildings began to be renovated.
The demographic profile of the city changed in the 2000s, as it began to attract a higher proportion of young residents. By 2012, 28 percent of the city's population was aged 20–34. Between 2007 and 2012, the city's population growth averaged 6.29 percent. As a result of its population recovery and industrial transition, the city's finances were transformed, and by 2012 it was running a budget surplus and maintained a credit rating of AAA+.
In the 2000s and early 2010s, Tel Aviv received tens of thousands of illegal immigrants, primarily from Sudan and Eritrea, changing the demographic profile of areas of the city.
In 2009, Tel Aviv celebrated its official centennial. In addition to city- and country-wide celebrations, digital collections of historical materials were assembled. These include the History section of the official Tel Aviv-Yafo Centennial Year website; the Ahuzat Bayit collection, which focuses on the founding families of Tel Aviv and includes photographs and biographies; and Stanford University's Eliasaf Robinson Tel Aviv Collection, documenting the history of the city. Today, the city is regarded as a strong candidate for global city status. Over the past 60 years, Tel Aviv has developed into a secular, liberal-minded center with a vibrant nightlife and café culture.
In the Gulf War in 1991, Tel Aviv was attacked by Scud missiles from Iraq. Iraq hoped to provoke an Israeli military response, which could have destroyed the US–Arab alliance. The United States pressured Israel not to retaliate, and after Israel acquiesced, the US and the Netherlands rushed Patriot missiles to defend against the attacks, but they proved largely ineffective. Tel Aviv and other Israeli cities continued to be hit by Scuds throughout the war, and every city in the Tel Aviv area except for Bnei Brak was hit. A total of 74 Israelis died as a result of the Iraqi attacks, mostly from suffocation and heart attacks, while approximately 230 Israelis were injured. Extensive property damage was also caused, and some 4,000 Israelis were left homeless. It was feared that Iraq would fire missiles filled with nerve agents or sarin. As a result, the Israeli government issued gas masks to its citizens. When the first Iraqi missiles hit Israel, some people injected themselves with an antidote for nerve gas. The inhabitants of the southeastern suburb of HaTikva erected an angel-monument as a sign of their gratitude that "it was through a great miracle, that many people were preserved from being killed by a direct hit of a Scud rocket."
Since the First Intifada, Tel Aviv has suffered from Palestinian political violence. The first suicide attack in Tel Aviv occurred on 19 October 1994, on the Line 5 bus, when a bomber killed 22 civilians and injured 50 as part of a Hamas suicide campaign. On 6 March 1996, another Hamas suicide bomber killed 13 people (12 civilians and 1 soldier), many of them children, in the Dizengoff Center suicide bombing. Three women were killed by a Hamas terrorist in the Café Apropo bombing on 27 March 1997.
One of the most deadly attacks occurred on 1 June 2001, during the Second Intifada, when a suicide bomber exploded at the entrance to the Dolphinarium discothèque, killing 21, mostly teenagers, and injuring 132. Another Hamas suicide bomber killed six civilians and injured 70 in the Allenby Street bus bombing. Twenty-three civilians were killed and over 100 injured in the Tel Aviv central bus station massacre. Al-Aqsa Martyrs Brigades claimed responsibility for the attack. In the Mike's Place suicide bombing, an attack on a bar by a British Muslim suicide bomber resulted in the deaths of three civilians and wounded over 50. Hamas and Al Aqsa Martyrs Brigades claimed joint responsibility. An Islamic Jihad bomber killed five and wounded over 50 in the 25 February 2005 Stage Club bombing. The most recent suicide attack in the city occurred on 17 April 2006, when 11 people were killed and at least 70 wounded in a suicide bombing near the old central bus station.
Another attack took place on 29 August 2011 in which a Palestinian attacker stole an Israeli taxi cab and rammed it into a police checkpoint guarding the popular Haoman 17 nightclub in Tel Aviv which was filled with 2,000 Israeli teenagers. After crashing, the assailant went on a stabbing spree, injuring eight people. Due to an Israel Border Police roadblock at the entrance and immediate response of the Border Police team during the subsequent stabbings, a much larger and fatal mass-casualty incident was avoided.
On 21 November 2012, during Operation Pillar of Defense, the Tel Aviv area was targeted by rockets, and air raid sirens were sounded in the city for the first time since the Gulf War. All of the rockets either missed populated areas or were shot down by an Iron Dome rocket defense battery stationed near the city. During the operation, a bomb blast on a bus wounded at least 28 civilians, three seriously. This was described as a terrorist attack by Israel, Russia, and the United States and was condemned by the United Nations, United States, United Kingdom, France and Russia, whilst Hamas spokesman Sami Abu Zuhri declared that the organisation "blesses" the attack.
Tel Aviv is located on the Israeli Mediterranean coastline, in central Israel, the historic land bridge between Europe, Asia and Africa. Immediately north of the ancient port of Jaffa, Tel Aviv lies on land that used to be sand dunes and as such has relatively poor soil fertility. The land has been flattened and has no important gradients; its most notable geographical features are bluffs above the Mediterranean coastline and the Yarkon River mouth. Because of the expansion of Tel Aviv and the Gush Dan region, absolute borders between Tel Aviv and Jaffa and between the city's neighborhoods do not exist.
The city is located northwest of Jerusalem and south of the city of Haifa. Neighboring cities and towns include Herzliya to the north, Ramat HaSharon to the northeast, Petah Tikva, Bnei Brak, Ramat Gan and Giv'atayim to the east, Holon to the southeast, and Bat Yam to the south. The city is economically stratified between the north and south. Southern Tel Aviv is considered less affluent than northern Tel Aviv with the exception of Neve Tzedek and northern and north-western Jaffa. Central Tel Aviv is home to Azrieli Center and the important financial and commerce district along Ayalon Highway. The northern side of Tel Aviv is home to Tel Aviv University, Hayarkon Park, and upscale residential neighborhoods such as Ramat Aviv and Afeka.
Tel Aviv has a Mediterranean climate (Köppen climate classification: Csa), and enjoys plenty of sunshine throughout the year. Most precipitation falls in the form of rain between the months of October and April, with intervening dry summers. The average annual temperature is , and the average sea temperature is during the winter, and during the summer. The city averages of precipitation annually.
Summers in Tel Aviv last about five months, from June to October. August, the warmest month, averages a high of , and a low of . The high relative humidity, due to the city's location by the Mediterranean Sea, combines with the high temperatures to create thermal discomfort during the summer. Summer low temperatures in Tel Aviv seldom drop below .
Winters are mild and wet, with most of the annual precipitation falling within the months of December, January and February as intense rainfall and thunderstorms. In January, the coolest month, the average maximum temperature is , the minimum temperature averages . During the coldest days of winter, temperatures may vary between and . Both freezing temperatures and snowfall are extremely rare in the city.
Autumns and springs are characterized by sharp temperature changes, with heat waves caused by hot, dry air masses arriving from the nearby deserts. During heat waves in autumn and spring, temperatures usually climb up to and even up to , accompanied by exceptionally low humidity. An average day during autumn and spring has a high of to , and a low of to .
The highest recorded temperature in Tel Aviv was on 17 May 1916, and the lowest was on 7 February 1950, during a cold wave that brought the only recorded snowfall in Tel Aviv.
Tel Aviv is governed by a 31-member city council elected for a five-year term in direct proportional elections.
All Israeli citizens over the age of 18 with at least one year of residence in Tel Aviv are eligible to vote in municipal elections. The municipality is responsible for social services, community programs, public infrastructure, urban planning, tourism and other local affairs. The Tel Aviv City Hall is located at Rabin Square. Ron Huldai has been mayor of Tel Aviv since 1998. Huldai was reelected for a fifth term in the 2018 municipal elections, defeating former deputy Asaf Zamir, founder of the Ha'Ir party. Huldai has become the longest-serving mayor of the city, exceeding Shlomo Lahat's 19-year term, and will be term-limited from running for a sixth term. The shortest-serving was David Bloch, in office for two years, 1925–27.
Politically, Tel Aviv is known to be a stronghold of the left, in both local and national issues. The left-wing vote is especially prevalent in the city's mostly affluent central and northern neighborhoods, though this is not the case in its working-class southeastern neighborhoods, which tend to vote for right-wing parties in national elections. Outside the kibbutzim, Meretz receives more votes in Tel Aviv than in any other city in Israel.
Following the 2013 municipal elections, Meretz gained an unprecedented 6 seats on the council. However, having been reelected as mayor, Huldai and the Tel Aviv 1 list lead the coalition, which controls 29 of 31 seats.
In 2006, 51,359 children attended school in Tel Aviv, of whom 8,977 were in municipal kindergartens, 23,573 in municipal elementary schools, and 18,809 in high schools. Sixty-four percent of students in the city are entitled to matriculation, more than 5 percent higher than the national average. About 4,000 children are in first grade at schools in the city, and population growth is expected to raise this number to 6,000. As a result, 20 additional kindergarten classes were opened in 2008–09 in the city. A new elementary school is planned north of Sde Dov as well as a new high school in northern Tel Aviv.
The first Hebrew high school, called Herzliya Hebrew Gymnasium, was established in Jaffa in 1905 and moved to Tel Aviv after the city's founding in 1909, where a new campus on Herzl Street was constructed for it.
Tel Aviv University, the largest university in Israel, is known internationally for its physics, computer science, chemistry and linguistics departments. Together with Bar-Ilan University in neighboring Ramat Gan, the student population numbers over 50,000, including a sizeable international community. Its campus is located in the neighborhood of Ramat Aviv. Tel Aviv also has several colleges.
The Herzliya Hebrew Gymnasium, which moved from Jaffa to old Tel Aviv in 1909, relocated to Jabotinsky Street in the early 1960s. Other notable schools in Tel Aviv include Shevah Mofet, the second Hebrew school in the city, Ironi Alef High School for Arts, and Alliance.
Tel Aviv has a population of spread over a land area of , yielding a population density of 7,606 people per square km (19,699 per square mile). According to the Israel Central Bureau of Statistics (CBS), Tel Aviv's population is growing at an annual rate of 0.5 percent. Jews of all backgrounds form 91.8 percent of the population, Muslims and Arab Christians make up 4.2 percent, and the remainder belong to other groups (including various Christian and Asian communities). As Tel Aviv is a multicultural city, many languages are spoken in addition to Hebrew. According to some estimates, about 50,000 unregistered African and Asian foreign workers live in the city. Compared with other Western cities, crime in Tel Aviv is relatively low.
According to the Tel Aviv-Yafo Municipality, the average income in the city, which has an unemployment rate of 4.6 percent, is 20 percent above the national average. The city's education standards are above the national average: of its 12th-grade students, 64.4 percent are eligible for matriculation certificates. The age profile is relatively even, with 22.2 percent aged under 20, 18.5 percent aged 20–29, 24 percent aged 30–44, 16.2 percent aged between 45 and 59, and 19.1 percent older than 60.
Tel Aviv's population reached a peak in the early 1960s at around 390,000, falling to 317,000 in the late 1980s as high property prices forced families out and deterred young couples from moving in. Since the 1990s, population has steadily grown. Today, the city's population is young and growing. In 2006, 22,000 people moved to the city, while only 18,500 left, and many of the new families had young children. The population is expected to reach 450,000 by 2025; meanwhile, the average age of residents fell from 35.8 in 1983 to 34 in 2008. The population over age 65 stands at 14.6 percent compared with 19% in 1983.
Tel Aviv has 544 active synagogues, including historic buildings such as the Great Synagogue, established in the 1930s. In 2008, a center for secular Jewish Studies and a secular yeshiva opened in the city. Tensions between religious and secular Jews before the gay pride parade ended in vandalism of a synagogue. The number of churches has grown to accommodate the religious needs of diplomats and foreign workers. The population was 93 percent Jewish, 1 percent Muslim, and 1 percent Christian. The remaining 5 percent were not classified by religion. Israel Meir Lau is chief rabbi of the city.
Tel Aviv is an ethnically diverse city. The Jewish population, which forms the majority group in Tel Aviv, consists of the descendants of immigrants from all parts of the world, including Ashkenazi Jews from Europe, North America, South America, Australia and South Africa, as well as Sephardic and Mizrahi Jews from Southern Europe, North Africa, India, Central Asia, West Asia, and the Arabian Peninsula. There is also a sizable number of Ethiopian Jews and their descendants living in Tel Aviv. In addition to its Muslim and Arab Christian minorities, the city is home to several hundred Armenian Christians, concentrated mainly in Jaffa, and some Christians from the former Soviet Union who immigrated to Israel with Jewish spouses and relatives. In recent years, Tel Aviv has received many non-Jewish migrants from Asia and Africa: students, foreign workers (documented and undocumented) and refugees. Many economic migrants and refugees from African countries, primarily Eritrea and Sudan, live in the southern part of the city.
Tel Aviv is divided into nine districts that have formed naturally over the city's short history. The oldest of these is Jaffa, the ancient port city out of which Tel Aviv grew. This area is traditionally made up demographically of a greater percentage of Arabs, but recent gentrification is replacing them with a young professional and artist population. Similar processes are occurring in nearby Neve Tzedek, the original Jewish neighborhood outside of Jaffa. Ramat Aviv, a district in the northern part of the city that is largely made up of luxury apartments and includes Tel Aviv University, is currently undergoing extensive expansion and is set to absorb the beachfront property of Sde Dov Airport after its decommissioning. The area known as HaKirya is the Israel Defense Forces (IDF) headquarters and a large military base.
Moreover, in the past few years, Rothschild Boulevard, which begins in Neve Tzedek, has become an attraction for tourists, businesses and startups. It features a wide, tree-lined central strip with pedestrian and bike lanes.
Historically, there was a demographic split between the Ashkenazi northern side of the city, including the district of Ramat Aviv, and the southern, more Sephardi and Mizrahi neighborhoods including Neve Tzedek and Florentin.
Since the 1980s, major restoration and gentrification projects have been implemented in southern Tel Aviv. Baruch Yoscovitz, city planner for Tel Aviv beginning in 2001, reworked old British plans for the Florentin neighborhood from the 1920s, adding green areas, pedestrian malls, and housing. The municipality invested two million shekels in the project. The goal was to make Florentin the Soho of Tel Aviv, and attract artists and young professionals to the neighborhood. Indeed, street artists, such as Dede, installation artists such as Sigalit Landau, and many others made the upbeat neighborhood their home base. Florentin is now known as a hip, "cool" place to be in Tel Aviv with coffeehouses, markets, bars, galleries and parties.
Tel Aviv is home to different architectural styles that represent influential periods in its history. The early architecture of Tel Aviv consisted largely of European-style single-story houses with red-tiled roofs. Neve Tzedek, the first neighborhood to be constructed outside of Jaffa, is characterised by two-story sandstone buildings. By the 1920s, a new eclectic Orientalist style came into vogue, combining European architecture with Eastern features such as arches, domes and ornamental tiles. Municipal construction followed the "garden city" master plan drawn up by Patrick Geddes. Two- and three-story buildings were interspersed with boulevards and public parks.
Various architectural styles, such as Art Deco, classical and modernist also exist in Tel Aviv.
Bauhaus architecture was introduced in the 1920s and 1930s by German Jewish architects who settled in Palestine after the rise of the Nazis. Tel Aviv's White City, around the city center, contains more than 5,000 Modernist-style buildings inspired by the Bauhaus school and Le Corbusier. Construction of these buildings, later declared protected landmarks and, collectively, a UNESCO World Heritage Site, continued until the 1950s in the area around Rothschild Boulevard. Some 3,000 buildings were created in this style between 1931 and 1939 alone.
In the 1960s, this architectural style gave way to office towers and a chain of waterfront hotels and commercial skyscrapers. Some of the city's Modernist buildings were neglected to the point of ruin. Before legislation to preserve this landmark architecture, many of the old buildings were demolished. Efforts are under way to refurbish Bauhaus buildings and restore them to their original condition.
The Shalom Meir Tower, Israel's first skyscraper, was built in Tel Aviv in 1965 and remained the country's tallest building until 1999. At the time of its construction, the building rivaled Europe's tallest buildings in height, and was the tallest in the Middle East.
In the mid-1990s, the construction of skyscrapers began throughout the entire city, altering its skyline. Before that, Tel Aviv had had a generally low-rise skyline. However, the towers were not concentrated in certain areas, and were scattered at random locations throughout the city, creating a disjointed skyline.
New neighborhoods, such as Park Tzameret, have been constructed to house apartment towers such as YOO Tel Aviv towers, designed by Philippe Starck. Other districts, such as Sarona, have been developed with office towers. Other recent additions to Tel Aviv's skyline include the 1 Rothschild Tower and First International Bank Tower. As Tel Aviv celebrated its centennial in 2009, the city attracted a number of architects and developers, including I. M. Pei, Donald Trump, and Richard Meier. American journalist David Kaufman reported in "New York" magazine that since Tel Aviv "was named a UNESCO World Heritage site, gorgeous historic buildings from the Ottoman and Bauhaus era have been repurposed as fabulous hotels, eateries, boutiques, and design museums." In November 2009, "Haaretz" reported that Tel Aviv had 59 skyscrapers more than 100 meters tall. Currently, dozens of skyscrapers have been approved or are under construction throughout the city, and many more are planned. The tallest building approved is the Egged Tower, which would become Israel's tallest building upon completion. According to current plans, the tower is planned to have 80 floors, rise to a height of 270 meters, and will have a 50-meter spire.
In 2010, the Tel Aviv Municipality's Planning and Construction Committee launched a new master plan for the city for 2025. It decided not to allow the construction of any additional skyscrapers in the city center, while at the same time greatly increasing the construction of skyscrapers in the east. The ban extends to an area between the coast and Ibn Gabirol Street, and also between the Yarkon River and Eilat Street. It did not extend to towers already under construction or approved. One final proposed skyscraper project was approved, while dozens of others had to be scrapped. Any new buildings there will usually not be allowed to rise above six and a half stories. However, hotel towers along almost the entire beachfront will be allowed to rise up to 25 stories. According to the plan, large numbers of skyscrapers and high-rise buildings at least 18 stories tall would be built in the entire area between Ibn Gabirol Street and the eastern city limits, as part of the master plan's goal of doubling the city's office space to cement Tel Aviv as the business capital of Israel. Under the plan, "forests" of corporate skyscrapers will line both sides of the Ayalon Highway. Further south, skyscrapers rising up to 40 stories will be built along the old Ottoman railway between Neve Tzedek and Florentin, with the first such tower there being the Neve Tzedek Tower. Along nearby Shlavim Street, passing between Jaffa and south Tel Aviv, office buildings up to 25 stories will line both sides of the street, which will be widened to accommodate traffic from the city's southern entrance to the center.
In November 2012, it was announced that to encourage investment in the city's architecture, residential towers throughout Tel Aviv would be extended in height. Buildings in Jaffa and the southern and eastern districts may have two and a half stories added, while those on Ibn Gabirol Street might be extended by seven and a half stories.
Tel Aviv has been ranked as the twenty-fifth most important financial center in the world. As it was built on sand dunes in an area unsuitable for farming, it instead developed as a hub of business and scientific research. In 1926, the country's first shopping arcade, Passage Pensak, was built there. By 1936, as tens of thousands of middle class immigrants arrived from Europe, Tel Aviv was already the largest city in Palestine. A small port was built at the Yarkon estuary, and many cafes, clubs and cinemas opened. Herzl Street became a commercial thoroughfare at this time.
The city's economic activities account for 17 percent of Israel's GDP. In 2011, Tel Aviv had an unemployment rate of 4.4 percent. The city has been described as a "flourishing technological center" by "Newsweek" and a "miniature Los Angeles" by "The Economist". In 1998, the city was described by "Newsweek" as one of the 10 most technologically influential cities in the world. Since then, high-tech industry in the Tel Aviv area has continued to develop. The Tel Aviv metropolitan area (including satellite cities such as Herzliya and Petah Tikva) is Israel's center of high-tech, sometimes referred to as Silicon Wadi.
Tel Aviv is home to the Tel Aviv Stock Exchange (TASE), Israel's only stock exchange, which has reached record heights since the 1990s. The Tel Aviv Stock Exchange has also gained attention for its resilience and ability to recover from war and disasters. For example, it was higher on the last day of both the 2006 Lebanon war and the 2009 operation in Gaza than on the first day of fighting. Many international venture-capital firms, scientific research institutes and high-tech companies are headquartered in the city. Industries in Tel Aviv include chemical processing, textile plants and food manufacturers.
In 2016, the Globalization and World Cities Study Group and Network (GaWC) at Loughborough University reissued an inventory of world cities based on their level of advanced producer services. Tel Aviv was ranked as an alpha- world city.
The Kiryat Atidim high tech zone opened in 1972 and the city has become a major world high tech hub. In December 2012, the city was ranked second on a list of top places to found a high tech startup company, just behind Silicon Valley. In 2013, Tel Aviv had more than 700 startup companies and research and development centers, and was ranked the second-most innovative city in the world, behind Medellín and ahead of New York City.
According to Forbes, nine of the fifteen Israeli-born billionaires live in Israel; four live in Tel Aviv and its suburbs. The cost of living in Israel is high, with Tel Aviv being its most expensive city to live in. According to Mercer, a human resources consulting firm based in New York, Tel Aviv is the most expensive city in the Middle East and the 19th most expensive in the world.
Shopping malls in Tel Aviv include Dizengoff Center, Ramat Aviv Mall and Azrieli Shopping Mall and markets such as Carmel Market, Ha'Tikva Market, and Bezalel Market.
Tel Aviv is a major center of culture and entertainment. Eighteen of Israel's 35 major centers for the performing arts are located in the city, including five of the country's nine large theatres; 55 percent of all performances in the country and 75 percent of all attendance occur there. The Tel Aviv Performing Arts Center is home of the Israeli Opera, where Plácido Domingo was house tenor between 1962 and 1965, and the Cameri Theatre. With 2,482 seats, the Heichal HaTarbut is the city's largest theatre and home to the Israel Philharmonic Orchestra.
Habima Theatre, Israel's national theatre, was closed down for renovations in early 2008, and reopened in November 2011 after major remodeling. Enav Cultural Center is one of the newer additions to the cultural scene. Other theatres in Tel Aviv are the Gesher Theatre and Beit Lessin Theater; Tzavta and Tmuna are smaller theatres that host musical performances and fringe productions. In Jaffa, the Simta and Notzar theatres specialize in fringe as well. Tel Aviv is home to the Batsheva Dance Company, a world-famous contemporary dance troupe. The Israeli Ballet is also based in Tel Aviv. Tel Aviv's center for modern and classical dance is the Suzanne Dellal Center for Dance and Theatre in Neve Tzedek.
The city often hosts international musicians at venues such as Yarkon Park, Expo Tel Aviv, the Barby Club, the Zappa Club and Live Park Rishon Lezion just south of Tel Aviv. After Israel's Eurovision victory in 2018, Tel Aviv was named host city for the 2019 Eurovision Song Contest (the first Israeli-hosted Eurovision held outside Jerusalem). Opera and classical music performances are held daily in Tel Aviv, with many of the world's leading classical conductors and soloists performing on Tel Aviv stages over the years.
The Tel Aviv Cinematheque screens art movies, premieres of short and full-length Israeli films, and hosts a variety of film festivals, among them the Festival of Animation, Comics and Caricatures, "Icon" Science Fiction and Fantasy Festival, the Student Film Festival, the Jazz, Film and Videotape Festival and Salute to Israeli Cinema. The city has several multiplex cinemas.
Tel Aviv receives about 2.5 million international visitors annually, the fifth-most-visited city in the Middle East & Africa. In 2010, "Knight Frank"'s world city survey ranked it 34th globally. Tel Aviv has been named the third "hottest city for 2011" (behind only New York City and Tangier) by "Lonely Planet", third-best in the Middle East and Africa by "Travel + Leisure magazine" (behind only Cape Town and Jerusalem), and the ninth-best beach city in the world by "National Geographic". Tel Aviv is consistently ranked as one of the top LGBT destinations in the world. The city has also been ranked as one of the top 10 oceanfront cities.
Tel Aviv is known as "the city that never sleeps" and a "party capital" due to its thriving nightlife, young atmosphere and famous 24-hour culture. Tel Aviv has branches of some of the world's leading hotels, including the Crowne Plaza, Sheraton, Dan, Isrotel and Hilton. It is home to many museums, architectural and cultural sites, with city tours available in different languages. Apart from bus tours, architectural tours, Segway tours, and walking tours are also popular. Tel Aviv has 44 hotels with more than 6,500 rooms.
The beaches of Tel Aviv and the city's promenade play a major role in the city's cultural and touristic scene, and are often ranked among the best beaches in the world. Hayarkon Park is the most visited urban park in Israel, with 16 million visitors annually. Other parks within city limits include Charles Clore Park, Independence Park, Meir Park and Dubnow Park. About 19% of the city's land is green space.
Tel Aviv is an international hub of highly active and diverse nightlife with bars, dance bars and nightclubs staying open well past midnight. The largest area for nightclubs is the Tel Aviv port, where the city's large, commercial clubs and bars draw big crowds of young clubbers from both Tel Aviv and neighboring cities. The South of Tel Aviv is known for the popular Haoman 17 club, as well as for being the city's main hub of alternative clubbing, with underground venues including established clubs like the Block Club, Comfort 13 and Paradise Garage, as well as various warehouse and loft party venues. The Allenby/Rothschild area is another popular nightlife hub, featuring such clubs as the Pasaz, Radio EPGB and the Penguin. In 2013, Absolut Vodka introduced a specially designed bottle dedicated to Tel Aviv as part of its international cities series.
Tel Aviv has become an international center of fashion and design, and has been called the "next hot destination" for fashion. Israeli designers, such as the swimwear company Gottex, show their collections at leading fashion shows, including New York's Bryant Park fashion show. In 2011, Tel Aviv hosted its first Fashion Week since the 1980s, with Italian designer Roberto Cavalli as a guest of honor.
Named "the best gay city in the world" by American Airlines, Tel Aviv is one of the most popular destinations for LGBT tourists internationally, with a large LGBT community. American journalist David Kaufman has described the city as a place "packed with the kind of 'we're here, we're queer' vibe more typically found in Sydney and San Francisco". The city hosts a well-known pride parade, the biggest in Asia, attracting over 200,000 people yearly.
In January 2008, Tel Aviv's municipality established the city's LGBT Community centre, providing all of the municipal and cultural services to the LGBT community under one roof. In December 2008, Tel Aviv began putting together a team of gay athletes for the 2009 World Outgames in Copenhagen. In addition, Tel Aviv hosts an annual LGBT Film Festival.
Tel Aviv's LGBT community is the subject of Eytan Fox's 2006 film "The Bubble".
Tel Aviv is famous for its wide variety of world-class restaurants, offering traditional Israeli dishes as well as international fare. More than 100 sushi restaurants, the third highest concentration in the world, do business in the city.
Tel Aviv also has its own dessert specialties; the best known is halva ice cream, traditionally topped with date syrup and pistachios.
Israel has the highest number of museums per capita of any country, with three of the largest located in Tel Aviv. Among these are the Eretz Israel Museum, known for its collection of archaeology and history exhibits dealing with the Land of Israel, and the Tel Aviv Museum of Art. Housed on the campus of Tel Aviv University is Beth Hatefutsoth, a museum of the international Jewish diaspora that tells the story of Jewish prosperity and persecution throughout the centuries of exile. Batey Haosef Museum specializes in Israel Defense Forces military history. The Palmach Museum near Tel Aviv University offers a multimedia experience of the history of the Palmach. Right next to Charles Clore Park is a museum of the Etzel. The Israel Trade Fairs & Convention Center, located in the northern part of the city, hosts more than 60 major events annually. Many offbeat museums and galleries operate in the southern areas, including the Tel Aviv Raw Art contemporary art gallery.
Tel Aviv is the only city with three clubs in the Israeli Premier League, the country's top football league. Maccabi Tel Aviv Sports Club was founded in 1906 and competes in more than 10 fields of sport. Its basketball team, Maccabi Tel Aviv, is a world-renowned professional team that holds 50 Israeli titles, has won 39 editions of the Israel Cup, and has six European championships, while its football team has won 21 Israeli league titles, 23 State Cups, four Toto Cups and two Asian Club Championships. Yael Arad, an athlete in Maccabi's judo club, won a silver medal in the 1992 Olympic Games.
National Sport Center – Tel Aviv (also Hadar Yosef Sports Center) is a compound of stadiums and sports facilities. It also houses the Olympic Committee of Israel and the National Athletics Stadium with the Israeli Athletic Association.
Hapoel Tel Aviv Sports Club, founded in 1923, comprises more than 11 sports clubs, including Hapoel Tel Aviv Football Club (13 championships, 16 State Cups, one Toto Cup and one Asian championship), which plays at Bloomfield Stadium, as well as men's and women's basketball clubs.
Bnei Yehuda (once Israeli champion, twice State Cup winners and twice Toto Cup winner) is the only Israeli football team in the top division that represents a neighborhood, the Hatikva Quarter in Tel Aviv, and not a city.
Shimshon Tel Aviv and Beitar Tel Aviv both formerly played in the top division, but dropped into the lower leagues, and merged in 2000, the new club now playing in Liga Artzit, the third tier. Another former first division team, Maccabi Jaffa, is now defunct, as are Maccabi HaTzefon Tel Aviv, Hapoel HaTzefon Tel Aviv and Hakoah Tel Aviv, who merged with Maccabi Ramat Gan and moved to Ramat Gan in 1959.
Two rowing clubs operate in Tel Aviv. The Tel Aviv Rowing Club, established in 1935 on the banks of the Yarkon River, is the largest rowing club in Israel. Meanwhile, the beaches of Tel Aviv provide a vibrant Matkot (beach paddleball) scene. Tel Aviv Lightning represent Tel Aviv in the Israel Baseball League. Tel Aviv also has an annual half marathon, run in 2008 by 10,000 athletes, with runners coming from around the world.
In 2009, the Tel Aviv Marathon was revived after a fifteen-year hiatus, and has since been run annually, attracting a field of over 18,000 runners.
Tel Aviv has also been ranked the 10th-best skateboarding city in the world by Transworld Skateboarding.
The three largest newspaper companies in Israel—Yedioth Ahronoth, Maariv and Haaretz—are all based within the city limits. Several radio stations cover the Tel Aviv area, including the city-based Radio Tel Aviv.
The major Israeli television networks (the Israel Broadcasting Authority, Keshet, Reshet and Channel 10) are based in the city, as are two of the most popular radio stations in Israel: Galatz and Galgalatz, which are both based in Jaffa. The studios of the international news channel i24news are located at the Jaffa Port Customs House. An English-language radio station, TLV1, is based at Kikar Hamedina.
Tel Aviv is ranked as the greenest city in Israel. Since 2008, city lights have been turned off annually in support of Earth Hour. In February 2009, the municipality launched a water-saving campaign, including a competition granting free parking for a year to the household found to have consumed the least water per person.
In the early 21st century, Tel Aviv's municipality transformed a derelict power station into a public park, now named "Gan HaHashmal" ("Electricity Park"), paving the way for eco-friendly and environmentally conscious designs. In October 2008, Martin Weyl turned an old garbage dump near Ben Gurion International Airport, called Hiriya, into an attraction by building an arc of plastic bottles. The site, which was renamed Ariel Sharon Park to honor Israel's former prime minister, will serve as the centerpiece of what is to become an urban wilderness on the outskirts of Tel Aviv, designed by the German landscape architect Peter Latz.
At the end of the 20th century, the city began restoring historical neighborhoods such as Neve Tzedek and many buildings from the 1920s and 1930s. Since 2007, the city hosts its well-known, annual Open House Tel Aviv weekend, which offers the general public free entrance to the city's famous landmarks, private houses and public buildings. In 2010, the design of the renovated Tel Aviv Port ("Nemal Tel Aviv") won the award for outstanding landscape architecture at the European Biennial for Landscape Architecture in Barcelona.
In 2014, the Sarona Market complex opened, following an 8-year renovation of the Sarona colony.
Tel Aviv is a major transportation hub, served by a comprehensive public transport network, with many major routes of the national transportation network running through the city.
As with the rest of Israel, bus transport is the most common form of public transport and is very widely used. The Tel Aviv Central Bus Station is located in the southern part of the city. The main bus network in the Tel Aviv metropolitan area is operated by the Dan Bus Company, Metropoline and Kavim. The Egged Bus Cooperative, Israel's largest bus company, provides intercity transportation.
The city is also served by local and inter-city share taxis. Many local and inter-city bus routes also have sherut taxis that follow the same route and display the same route number in their window. Fares are standardised within the region and are comparable to or less expensive than bus fares. Unlike other forms of public transport, these taxis also operate on Fridays and Saturdays (the Jewish sabbath "Shabbat"). Private taxis are white with a yellow sign on top. Fares are standardised and metered, but may be negotiated ahead of time with the driver.
The Tel Aviv Central railway station is the main railway station of the city and the busiest station in Israel. The city has three additional railway stations along the Ayalon Highway: Tel Aviv University, HaShalom (adjacent to Azrieli Center) and HaHagana (near the Tel Aviv Central Bus Station). It is estimated that over a million passengers travel by rail to Tel Aviv monthly. The trains do not run on Saturday or the principal Jewish festivals (Rosh Hashana (2 days), Yom Kippur, Sukkot, Simchat Torah, the first and seventh days of Pesach (Passover), and Shavuot (Pentecost)).
Jaffa Railway Station was the first railway station in the Middle East. It served as the terminus for the Jaffa–Jerusalem railway. The station opened in 1891 and closed in 1948. In 2005–2009, the station was restored and converted into an entertainment and leisure venue marketed as "HaTachana", Hebrew for "the station".
The first line of a light rail system is under construction and scheduled to open in 2020. The Red Line starts at Petah Tikva's Central Bus Station, east of Tel Aviv, and follows Jabotinsky Road (Route 481) westwards at street level. At the point where Jabotinsky Road and Highway 4 intersect, the line drops into a tunnel through Bnei Brak, Ramat Gan and Tel Aviv, and emerges again to street level just before Jaffa, where it turns southwards towards Bat Yam.
The underground section will include 10 stations, including an interchange with Israel Railways services at Tel Aviv Central Railway Station and the nearby 2000 Terminal. A maintenance depot, connected via a branch line and tunnel to the main section of the line, will be constructed in Kiryat Arye, across from the existing Kiryat Arye suburban railway station. The intended builder and operator of the first line, MTS, had financial difficulties that postponed the line's opening. In May 2010, the ministry of finance decided to cancel the agreement with MTS due to the difficulties, and the agreement was cancelled in August 2010. The line is being built instead by NTA, the Tel Aviv region's mass transit development authority. Initially the line was targeted to open in 2012, but after several postponements caused by the disagreements with MTS and NTA's takeover of the project, the target was pushed back to 2016 and subsequently beyond.
The second line is scheduled to open in 2021.
The main highway leading to and within the city is the Ayalon Highway (Highway 20), which runs in the eastern side of the city from north to south along the Ayalon River riverbed. Driving south on Ayalon gives access to Highway 4 leading to Ashdod, Highway 1, leading to Ben Gurion International Airport and Jerusalem and Highway 431 leading to Jerusalem, Modiin, Rehovot and the Highway 6 Trans-Israel Highway. Driving north on Ayalon gives access to the Highway 2 coastal road leading to Netanya, Hadera and Haifa. Within the city, main routes include Kaplan Street, Allenby Street, Ibn Gabirol Street, Dizengoff Street, Rothschild Boulevard, and in Jaffa the main route is Jerusalem Boulevard. Namir Road connects the city to Highway 2, Israel's main north–south highway, and Begin/Jabotinsky Road, which provides access from the east through Ramat Gan, Bnei Brak and Petah Tikva. Tel Aviv, accommodating about 500,000 commuter cars daily, suffers from increasing congestion. In 2007, the Sadan Report recommended the introduction of a congestion charge similar to that of London in Tel Aviv as well as other Israeli cities. Under this plan, road users traveling into the city would pay a fixed fee.
The main airport serving Greater Tel Aviv is Ben Gurion International Airport. Located in the neighbouring city of Lod, it handled over 20 million passengers in 2017. Ben Gurion is the main hub of El Al, Arkia, Israir Airlines and Sun D'Or. The airport is southeast of Tel Aviv, on Highway 1 between Tel Aviv and Jerusalem. Sde Dov (IATA: SDV), in northwestern Tel Aviv, was a domestic airport that closed in 2019 in favor of real-estate development; its services were transferred to Ben Gurion Airport.
Tel Aviv Municipality encourages the use of bicycles in the city. Plans called for the expansion of the network of bicycle paths by 2009, and as of April 2011 the municipality had completed construction of the planned paths.
In April 2011, Tel Aviv municipality launched Tel-O-Fun, a bicycle sharing system, in which 150 stations of bicycles for rent were installed within the city limits. As of October 2011, there are 125 active stations, providing more than 1,000 bicycles.
The municipality of Tel Aviv has signed agreements with many cities worldwide.
The Israeli Interior Ministry is planning on eventually annexing the neighboring city of Bat Yam into Tel Aviv. Current plans call for the merger to take place in 2023 after a few years' preparation. It has been suggested that if this proves successful, other neighboring cities such as Ramat Gan and Givatayim would then be merged into Tel Aviv. Some officials envision that as part of these mergers, Tel Aviv will become a supercity with several sub-municipalities in the style of Greater London.
Clangers
Clangers (usually referred to as "The Clangers") is a British stop-motion children's television series, made of short films about a family of mouse-like creatures who live on, and inside, a small moon-like planet. They speak only in a whistled language. They eat only green soup (supplied by the Soup Dragon) and blue string pudding. The programmes were originally broadcast on BBC1 between 1969 and 1972, followed by a special episode which was broadcast in 1974. The series was revived in 2015, broadcast on CBeebies.
The series was made by Smallfilms, the company set up by Oliver Postgate (who was the show's writer, animator and narrator) and Peter Firmin (who was its modelmaker and illustrator). Firmin designed the characters, and his wife knitted and "dressed" them. The music, often part of the story, was provided by Vernon Elliott.
A third series, narrated by Monty Python actor Michael Palin, was broadcast in the UK from 15 June 2015 on the BBC's CBeebies TV channel, gaining hugely successful viewing figures, following on from a short special broadcast by the BBC earlier that year. The new programmes are still made using stop-motion animation (instead of the computer-generated imagery which had replaced the original stop-motion animation in revivals of other children's shows such as "Fireman Sam", "Thomas & Friends" and "The Wombles").
"Clangers" won a BAFTA in the Best Pre-School Animation category in 2015.
The Clangers originated in a series of children's books developed from another "Smallfilms" production, "Noggin the Nog". Publishers Kay and Ward created a series of books based on the "Noggin the Nog" television episodes, which was subsequently expanded into a series called "Noggin First Reader", aimed at teaching children to read.
In one of these, called "Noggin and the Moonmouse", published in 1967, a new horse-trough was put up in the middle of the town in the North-Lands. A spacecraft hurtled down and splash-landed in it: the top unscrewed, and out came a largish, mouse-like creature in a duffel coat, who wanted fuel for his spaceship. He showed Nooka and the children that what he needed was vinegar and soap-flakes, so they filled up the fuel tank of the little spherical ship, which then "took off in a dreadful cloud smelling of vinegar and soap-flakes, covering the town with bubbles".
In 1969 (the year of NASA's first landing on the Moon), the BBC asked "Smallfilms" to produce a new series for colour television, but without specifying a storyline. Postgate concluded that as space exploration was topical, the new series should take place in space (and, inspired by the real Moon landing, Peter Firmin designed a set which strongly resembled the Moon). Postgate adapted the Moonmouse from the 1967 story by simply removing its tail ("because it kept getting into the soup"). Hence the Clangers looked similar to mice (and, from their pink colour, pigs). They wore clothes reminiscent of Roman armour, "against the space debris that kept falling onto the planet, lost from other places, such as television sets and bits of an Iron Chicken". And they spoke in a whistled language.
"The Clangers" was described by Postgate as a family in space. They were small creatures living in peace and harmony on – and inside – a small, hollow planet, far, far away: nourished by Blue String Pudding, and by Green Soup harvested from the planet's volcanic soup wells by the Soup Dragon.
The word "Clanger" is said to derive from the sound made by opening the metal cover of one of the creatures' crater-like burrows, each of which was covered with an old metal dustbin lid, to protect against meteorite impacts (and space debris). In each episode there would be some problem to solve, typically concerning something invented or discovered, or some new visitor to meet. Music Trees, with note-shaped fruit, grew on the planet's surface, and music would often be an integral feature in the simple but amusing plots. In the "Fishing" episode, one of the Cheese Trees provided a cylindrical five-line staff for notes taken from the Music Trees.
Postgate provided the narration, for the most part in a soft, melodic voice, describing and accounting for the curious antics of the little blue planet's knitted pink inhabitants, and providing a "translation", as it were, for much of their whistled dialogue. Postgate claimed that in reality, when the Clangers were whistling, they were "swearing their little heads off".
The first of the 26 episodes (aired as two series of 13 programmes each) was broadcast on BBC1 from 16 November 1969. The last edition of the second series was transmitted on 10 November 1972.
However, there was also one final programme, a four-minute election special entitled "Vote for Froglet", broadcast on 10 October 1974 (the day of the General Election), which was not shown in the usual timeslot during children's programmes. Oliver Postgate said in a 2005 interview that he wasn't sure whether the 1974 special still existed, and it has been referred to as a "missing episode". In fact the whole episode is available from the British Film Institute.
The original Mother Clanger puppet was stolen in 1972. Today, Major Clanger and the second Mother Clanger are on display at the Rupert Bear Museum.
The Clangers grew in size between the first and last episodes, to allow Firmin to use an Action Man model figure in the episode "The Rock Collector".
In October 2013, the BBC's CBeebies channel announced that a new series would be produced for broadcasting in their 2015 schedules, with Michael Palin narrating in place of the late Oliver Postgate. The American pre-school channel Sprout added the series to their 2015 schedule, with William Shatner narrating.
In November 2015, "The Clangers" won the Best Pre-school Animation award at the BAFTAs.
The principal characters are the Clangers themselves, the females wearing waistcoats and the males brass armour:
Other characters appeared in only one or two episodes each:
One of the most noted aspects was the use of sound effects, with a score composed by Vernon Elliott under instructions from Postgate. Although the episodes were scripted, most of the music used in the two series was written in translation by Postgate in the form of "musical sketches" or graphs that he drew for Elliott, who converted the drawings into a musical score. The music was then recorded by the two, along with other musicians – dubbed the "Clangers Ensemble" – in a village hall, where they would often leave the windows open, leading to the sounds of birds outside being heard on some recordings. Much of the score was performed on Elliott's bassoon, and also included harp, clarinet, glockenspiel and bells.
The distinctive whistles made by the Clangers, performed on swanee whistles, have become as identifiable as the characters themselves, much imitated by viewers. The series creators have said that the Clangers, living in vacuum, did not communicate by sound, but rather by a type of nuclear magnetic resonance, which was translated to audible whistles for the human audience. These whistles followed the rhythm and intonation of a script in English. The action was also narrated by a voice-over from Postgate. However, when the series was shown without narration to a group of overseas students, many of them felt that the Clangers were speaking their particular language.
Postgate recounted: "When the BBC got the script, [they] rang me up and said “At the beginning of episode three, where the doors get stuck, Major Clanger says 'sod it, the bloody thing’s stuck again'. Well, darling, you can’t say that on Children’s television, you know, I mean you just can’t.” I said “It’s not going to be said, it’s going to be whistled”, but [they] just said “But people will know!” I said no, that if they had nice minds, they’d think “Oh dear, the silly thing’s not working properly”. If you watch the episode, the one where the rocket goes up and shoots down the Iron Chicken, Major Clanger kicks the door to make it work and his first words are “Sod it, the bloody thing’s stuck again”. Years later, when the merchandising took off, the Golden Bear company wanted a Clanger and a Clanger phrase for it to make when you squeezed it, they got “Sod it, the bloody thing’s stuck again”! "
John Du Prez, who wrote some of the music for "Monty Python" (another show Michael Palin was in) composed the score for the 2015 series.
The first series was transmitted on BBC1 at 5:55pm, except for the episode "Chicken" which went out at 5:50pm because there was a "Children in Need" appeal at 6:00pm.
The second series episodes were also transmitted weekly on BBC1, but in a wide variety of differing timeslots. Episodes 1 and 2 were seen at 4:50pm; episodes 3, 5 and 6 at 5:05pm; episodes 4 and 8 at 5:00pm; episode 7 at 4:40pm; episode 9 at 5:30pm; and episodes 10, 11, 12 and 13 (which followed episode 9 after a gap of more than a year) at 4:00pm.
The first of these was an election special, produced in 1974, entitled "Vote for Froglet". Inspired by what Postgate referred to as the "Winter of Discontent" (a phrase from Shakespeare's play "Richard III", usually employed to refer to the winter of 1978–79, but here meaning the miners' strike in the winter of 1973–74), and partly by his recollections of post-war Germany, it was broadcast on the night of the October 1974 General Election. The narrator explains the democratic process, and demonstrates it by asking the Clangers to vote between the Soup Dragon and a Froglet. The Soup Dragon wins the election on a policy of "No Soup for Froglets", but the Clangers are dissatisfied with the result. This special was believed to be a lost episode for many years, but it was released in full for free by the British Film Institute to coincide with the 2017 UK General Election.
Episodes 3–26 were first broadcast at 5:30 pm, and episodes 27–52 at 6:00 pm, on CBeebies.
Following the March 2015 special, a full series was commissioned for the summer of that year. The series was narrated by Michael Palin, and co-produced by Smallfilms with the involvement of Peter Firmin and Oliver Postgate's son, Dan. The series was directed by Chris Tichborne and Mole Hill, with music composed by John Du Prez. 52 11-minute episodes were commissioned. The voices of the Iron Chicken, the Soup Dragon, and the Baby Soup Dragon were by Dan Postgate.
The first episode of the new series aired on 15 June 2015 and proved a massive hit for CBeebies. BBC News Entertainment and Arts revealed that 65% of the episode's viewing audience of 484,000 were adults, and that it was CBeebies' most watched programme of 2015 up to that date. The rating was more than double the figures achieved that year by episodes of Alphablocks, Numberjacks, Waybuloo, Fimbles, Charlie and Lola, Teletubbies, The Lingo Show and The Octonauts, as well as by other CBeebies favourites since the station's launch in 2002, although an episode of Numberjacks had peaked at over 1 million viewers back in 2009.
According to the 7 June 2015 issue of "Parade" magazine, actor William Shatner has been chosen to be the American narrator for the series when it begins airing on the cable network Sprout.
A second series of the revival, and the fourth series overall, started on 11 September 2017.
Although not quite as popular as "Bagpuss" (which in 1999 was voted the best children's television programme ever made in a British television poll), Postgate's work has seen revived interest since his death in December 2008, and is considered to have had a notable influence on British culture throughout the 1960s, 1970s and 1980s. In 2007, Postgate and Firmin were jointly presented with the Action for Children's Arts J. M. Barrie Award "for a lifetime's achievement in delighting children".
The Soup Dragons, a Scottish alternative rock band of the late 1980s and early 1990s, took their name from the Clangers character.
In the 1972 "Doctor Who" serial "The Sea Devils", the Master is seen watching the episode "The Rock Collector". At first he believes the Clangers are real creatures and even tries to learn their language, but he is later told that they are merely television characters.
A Clanger (as a glove-puppet rather than a stop-motion puppet) appears as a member of the "Puppet Government" in "The Goodies" TV episode "The Goodies Rule – O.K.?".
From its launch until its discontinuation, the UK's Nick Jr. Classics block aired "Clangers" episodes specifically for parents who remembered the show.
Tiny Clanger (also as a glove-puppet) appeared on "Sprout's Sunny Side Up Show" in honour of the U.S. premiere of "Clangers".
The series was not widely broadcast outside the UK in the 1970s, mainly because it did not require additional money from sales abroad to finance its production. However the Norwegian Broadcasting Corporation showed the series in 1970 and 1982, entitled "Romlingane". It was narrated by Ingebrigt Davik, a popular author of children's books. It was shown on Swedish Television in the late 1960s and 1970s, entitled "Rymdlarna". The first 13 episodes were also shown on Czechoslovak Television in August 1972, entitled "Rámusíci" as a part of the children's evening program slot "Večerníček".
The revived version in 2015 has received funding from Sprout, a subsidiary of NBCUniversal, and has been pre-sold to other foreign broadcasters including the Australian Broadcasting Corporation. The American transmissions are narrated by William Shatner.
As from 2018, it is also broadcast on the Belgian channel Ketnet.
In 2001, a selection of the music and sound effects was compiled by Jonny Trunk from 128 musical cues held by Postgate; Postgate also contributed act one, "The Iron Chicken and the Music Trees", of "A Clangers Opera", with a libretto that he had compiled.
In the early 1990s, three VHS cassettes of the Clangers were released by BBC Enterprises Ltd. Later, another six cassettes were released by Universal Pictures. A number of DVDs have also been released by Universal Pictures (original series) and Signature Entertainment (revived series). | https://en.wikipedia.org/wiki?curid=31454 |
Terry Brooks
Terence Dean Brooks (born January 8, 1944) is an American writer of fantasy fiction. He writes mainly epic fantasy, and has also written two film novelizations. He has written 23 "New York Times" bestsellers during his writing career, and has sold over 25 million copies of his books in print. He is one of the biggest-selling living fantasy writers.
Brooks was born in the rural Midwestern town of Sterling, Illinois, and spent a large part of his life living there. He is an alumnus of Hamilton College, earning his B.A. in English literature in 1966. He later obtained a J.D. degree from Washington and Lee University. He was a practicing attorney before becoming a full-time author.
Brooks had been a writer since high school, writing mainly in the genres of science fiction, western fiction, and non-fiction. One day, early in his college years, he was given a copy of J. R. R. Tolkien's "The Lord of the Rings", which inspired him to concentrate on a single genre. While Tolkien inspired the genre, Brooks stated during his TEDxRainier talk "Why I Write about Elves", as well as at the Charlotte Literary Festival, that he credits William Faulkner's works with inspiring his style of writing. With this inspiration, he made his debut in 1977 with "The Sword of Shannara".
After finishing two sequels to "The Sword of Shannara", Brooks moved on to the series which would become known as the "Landover" novels. Brooks then wrote a four-book series titled "The Heritage of Shannara". For the next fourteen years, he wrote more "Landover" books, then went on to write "The Word and Void" trilogy. Continuing the "Shannara" series, Brooks wrote the prequel to "The Sword of Shannara", titled "First King of Shannara". He then wrote two series, "The Voyage of the Jerle Shannara" and "High Druid of Shannara" and finished a third, "Genesis of Shannara", a trilogy bridging his "Word and Void" and "Shannara" series. The sixth book in the "Landover" series, "A Princess of Landover", was released in August 2009.
Returning to Shannara, a duology, "Legends of Shannara", taking place after the events of "Genesis of Shannara", was written next. The first book, entitled "Bearers of the Black Staff", was released in August 2010 and the second, "The Measure of the Magic", was released in August 2011.
He next completed a trilogy entitled "The Dark Legacy of Shannara". The three books are "Wards of Faerie" (Feb 2013), "Bloodfire Quest" (June 2013), and "Witch Wraith" (Dec 2013). He followed this with the trilogy "Defenders of Shannara", which includes "The High Druid's Blade" (July 2014), "The Darkling Child" (June 2015), and "The Sorcerer's Daughter" (May 24, 2016).
According to his website, he is currently working on the final and concluding tetralogy of the Shannara series, known as The Fall of Shannara. The first book in the tetralogy, "The Black Elfstone", was released on June 13, 2017. The second book in the series, "The Skaar Invasion", was released on June 19, 2018. The third book in the series, "The Stiehl Assassin", was published on May 28, 2019.
A television series based on the Shannara works, entitled "The Shannara Chronicles", began showing on MTV in January 2016. The show starts with the second book of the original series, "Elfstones", as there are strong female roles which did not appear in the first book. The second season aired in 2017 on Spike TV. On January 16, 2018, it was announced that the series had been cancelled after two seasons. Producers later announced that the series is being shopped to other networks.
Brooks has written a number of other books, based on movies, science fiction and his own life. Novels include "Hook", based on the movie of the same name, first published November 24, 1991 and republished in 1998. "" was published April 21, 1999 with four differing dust jacket covers. His own writing life is reflected in two books, "", published February 3, 2004, and "Why I Write About Elves", published in 2005. A science fiction book, "Street Freaks", was released on October 2, 2018. Brooks has also written a number of e-book short stories, which will be collected into a book in October 2019 (or 2020, depending upon the publisher) along with all his other short stories.
Brooks resides in Seattle, Washington, with his wife, Judine.
After writing "Indomitable", a short story constituting an epilogue to "The Wishsong of Shannara", Terry Brooks declared: | https://en.wikipedia.org/wiki?curid=31455 |
Truck
A truck or lorry is a motor vehicle designed to transport cargo. Trucks vary greatly in size, power, and configuration; smaller varieties may be mechanically similar to some automobiles. Commercial trucks can be very large and powerful and may be configured to be mounted with specialized equipment, such as in the case of refuse trucks, fire trucks, concrete mixers, and suction excavators. In American English, a commercial vehicle without a trailer or other articulation is formally a "straight truck" while one designed specifically to pull a trailer is not a truck but a "tractor".
Modern trucks are largely powered by diesel engines, although small to medium size trucks with gasoline engines exist in the US, Canada, and Mexico. In the European Union, vehicles with a gross combination mass of up to are known as light commercial vehicles, and those over as large goods vehicles.
Trucks and cars have a common ancestor: the steam-powered "fardier" Nicolas-Joseph Cugnot built in 1769. However, steam wagons were not common until the mid-19th century. The roads of the time, built for horses and carriages, limited these vehicles to very short hauls, usually from a factory to the nearest railway station. The first semi-trailer appeared in 1881, towed by a steam tractor manufactured by De Dion-Bouton. Steam-powered wagons were sold in France and the United States until the eve of World War I, and in the United Kingdom until 1935, when a change in road tax rules made them uneconomic against the new diesel lorries.
In 1895 Karl Benz designed and built the first truck in history using the internal combustion engine. Later that year some of Benz's trucks were modified to become the first bus by the "Netphener", the first motorbus company in history. A year later, in 1896, another internal combustion engine truck was built by Gottlieb Daimler. Other companies, such as Peugeot, Renault and Büssing, also built their own versions. The first truck in the United States was built by Autocar in 1899 and was available with optional 5 or 8 horsepower motors.
Trucks of the era mostly used two-cylinder engines and had a carrying capacity of . In 1904, 700 heavy trucks were built in the United States, 1000 in 1907, 6000 in 1910, and 25000 in 1914.
After World War I, several advances were made: pneumatic tires replaced the previously common solid rubber versions, and electric starters, power brakes, 4-, 6-, and 8-cylinder engines, closed cabs, and electric lighting followed. The first modern semi-trailer trucks also appeared. Touring car builders such as Ford and Renault entered the heavy truck market.
Although it had been invented in 1897, the diesel engine did not appear in production trucks until Benz introduced it in 1923.
The diesel engine was not common in trucks in Europe until the 1930s. In the United States, Autocar introduced diesel engines for heavy applications in the mid-1930s. Demand was high enough that Autocar launched the "DC" model (diesel conventional) in 1939. However, it took much longer for diesel engines to be broadly accepted in the US: gasoline engines were still in use on heavy trucks in the 1970s.
"Truck" is used in American English, and is common in Canada, Australia, New Zealand, Pakistan and South Africa, while "lorry" is the equivalent in British English, and is the usual term in countries like Ireland, Malaysia, Singapore and India.
The first known usage of "truck" was in 1611, when it referred to the small strong wheels on ships' cannon carriages. In its extended usage it came to refer to carts for carrying heavy loads, a meaning known since 1771. Its expanded application to "motor-powered load carrier" has been in usage since 1930, shortened from "motor truck", which dates back to 1901.
"Lorry" has a more uncertain origin, but probably has its roots in the rail transport industry, where the word is known to have been used in 1838 to refer to a type of truck (a goods wagon as in British usage, not a bogie as in the American), specifically a large flat wagon. It might derive from the verb "lurry" (to carry or drag along; or to lug) which was in use as early as 1664, but that association is not definitive. The expanded meaning of "lorry", "self-propelled vehicle for carrying goods", has been in usage since 1911.
In the United States, Canada, and the Philippines "truck" is usually reserved for commercial vehicles larger than normal cars, and includes pickups and other vehicles having an open load bed. In Australia, New Zealand and South Africa, the word "truck" is mostly reserved for larger vehicles; in Australia and New Zealand, a pickup truck is usually called a "ute" (short for "utility"), while in South Africa it is called a "bakkie" (Afrikaans: "small open container"). In the United Kingdom, India, Malaysia, Singapore, Ireland, and Hong Kong "lorry" is used instead of "truck", but only for the medium and heavy types, while "truck" is used almost exclusively to refer to pickups.
Often produced as variations of golf cars, with internal combustion or battery electric drive, these are typically used off-highway on estates, golf courses, and parks. While not suitable for highway use, some variations may be licensed as slow-speed vehicles for operation on streets, generally as a body variation of a neighborhood electric vehicle. A few manufacturers produce specialized chassis for this type of vehicle, while Zap Motors markets a version of its Xebra electric tricycle (licensable in the U.S. as a motorcycle).
Popular in Europe and Asia, many mini trucks are factory redesigns of light automobiles, usually with monocoque bodies. Specialized designs with substantial frames such as the Italian Piaggio shown here are based upon Japanese designs (in this case by Daihatsu) and are popular for use in "old town" sections of European cities that often have very narrow alleyways.
Regardless of name, these small trucks serve a wide range of uses. In Japan, they are regulated under the Kei car laws, which allow vehicle owners a break in taxes for buying a smaller and less-powerful vehicle (currently, the engine is limited to 660 cc displacement). These vehicles are used as on-road utility vehicles in Japan. These Japanese-made mini trucks that were manufactured for on-road use are competing with off-road ATVs in the United States, and import regulations require that these mini trucks have a speed governor as they are classified as low speed vehicles. These vehicles have found uses in construction, large campuses (government, university, and industrial), agriculture, cattle ranches, amusement parks, and replacements for golf carts.
Major mini truck manufacturers and their brands:
Light trucks are car-sized (in the US, no more than ) and are used by individuals and businesses alike. In the EU they may not weigh more than , and are allowed to be driven with a driving licence for cars. Pickup trucks, called utes in Australia and New Zealand, are common in North America and some regions of Latin America, Asia and Africa, but not so in Europe, where this size of commercial vehicle is most often made as vans.
Medium trucks are larger than light but smaller than heavy trucks. In the US, they are defined as weighing between .
For the UK and the EU the weight is between . Local delivery and public service (dump trucks, garbage trucks and fire-fighting trucks) are normally around this size.
Heavy trucks are the largest on-road trucks, Class 8. These include vocational applications such as heavy dump trucks, concrete pump trucks, and refuse hauling, as well as ubiquitous long-haul 4×2 and 6×4 tractor units.
Road damage and wear increase very rapidly with the axle weight. The number of steering axles and the suspension type also influence the amount of the road wear. In many countries with good roads a six-axle truck may have a maximum weight of or more.
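The rapid growth of road wear with axle weight is commonly approximated by the "fourth power law" derived from the AASHO Road Test: damage per axle pass scales roughly with the fourth power of the axle load relative to a standard axle. A minimal sketch of that rule of thumb (the 8.16-tonne reference corresponds to the 18,000 lb standard axle; the exact exponent varies by pavement type):

```python
def relative_road_wear(axle_load_tonnes, reference_load_tonnes=8.16, exponent=4):
    """Approximate pavement wear of one axle pass, relative to a
    standard reference axle, using the generalised fourth-power law."""
    return (axle_load_tonnes / reference_load_tonnes) ** exponent

# Doubling the axle load multiplies wear by roughly 2**4 = 16.
print(relative_road_wear(16.32))  # -> 16.0
```

This is why regulators limit per-axle weight rather than only gross vehicle weight: spreading the same load over more axles reduces pavement damage dramatically.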
Off-road trucks include standard, extra heavy-duty highway-legal trucks, typically outfitted with off-road features such as a front driving axle and special tires for applications such as logging and construction, and purpose-built off-road vehicles unconstrained by weight limits, such as the Liebherr T 282B mining truck.
Australia has complex regulations over weight and length, including axle spacing, type of axle/axle group, rear overhang, kingpin to rear of trailer, drawbar length, ground clearance, as well as height and width laws. These limits are some of the highest in the world, a B-double can weigh and be long, and road trains used in the outback can weigh and be long.
The European Union also has complex regulations. The number and spacing of axles, steering, single or dual tires, and suspension type all affect maximum weights. Length of a truck, of a trailer, from axle to hitch point, kingpin to rear of trailer, and turning radius are all regulated. In addition, there are special rules for carrying containers, and countries can set their own rules for local traffic.
The United States Federal Bridge Law deals with the relation between the gross weight of the truck, the number of axles, the weight on and the spacing between the axles that the truck can have on the Interstate highway system. Each State determines the maximum permissible vehicle, combination, and axle weight on state and local roads.
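The federal relation between weight, axle count, and axle spacing is expressed by Bridge Formula B, W = 500(LN/(N−1) + 12N + 36), where W is the maximum weight in pounds on any group of consecutive axles, L the spacing in feet between the outer axles of the group, and N the number of axles in the group. A minimal sketch (ignoring the separate statutory caps, such as the 80,000 lb Interstate gross limit):

```python
def bridge_formula_limit(spacing_feet, num_axles):
    """US Federal Bridge Formula B: maximum gross weight (lb) allowed on
    a group of consecutive axles, given the distance L (ft) between the
    outermost axles of the group and the number of axles N in it."""
    L, N = spacing_feet, num_axles
    return 500 * (L * N / (N - 1) + 12 * N + 36)

# Example: a 5-axle group spanning 51 ft between its outer axles.
print(bridge_formula_limit(51, 5))  # 500*(63.75 + 60 + 36) = 79875.0
```

The formula rewards longer axle spacing: the same five axles spread over a greater length may legally carry more weight, because the load is distributed across more of a bridge span at once.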
Table: maximum permitted weights by country (maximum with three axles; with one trailer; maximum combination).
Thomas the Apostle
Thomas the Apostle ("Tʾōmā šliḥā" ("Thoma Sheliha")), also called Didymus ("twin"), was one of the Twelve Apostles of Jesus according to the New Testament. Thomas is commonly known as "Doubting Thomas" because he doubted Jesus' resurrection when first told of it (as related in the Gospel of John alone); later, he confessed his faith, "My Lord and my God," on seeing Jesus' crucifixion wounds.
According to traditional accounts of the Saint Thomas Christians of modern-day Kerala in India, Thomas is believed to have travelled outside the Roman Empire to preach the Gospel, travelling as far as the Malabar Coast which is in modern-day Kerala. According to their tradition, Thomas reached Muziris (modern-day North Paravur and Kodungalloor in the state of Kerala, India) in AD 52. In 1258, some of the relics were brought to Ortona, in Abruzzo, Italy, where they have been held in the Church of Saint Thomas the Apostle. He is often regarded as the patron saint of India, and the name "Thomas" remains quite popular among Saint Thomas Christians of India.
Thomas first speaks in the Gospel of John. In , when Lazarus had recently died, the apostles do not wish to go back to Judea, where some Jews had attempted to stone Jesus. Thomas says: "Let us also go, that we may die with him."
Thomas speaks again in . There, Jesus had just explained that he was going away to prepare a heavenly home for his followers, and that one day they would join him there. Thomas reacted by saying, "Lord, we know not whither thou goest; and how can we know the way?"
The name "Thomas" (Koine Greek: Θωμᾶς) given for the apostle in the New Testament is derived from the Aramaic תְּאוֹמָא or "Tāʾwma"/"Tʾōmā", equivalently from Hebrew תְּאוֹם "tʾóm", meaning "twin". The equivalent term for twin in Greek, which is also used in the New Testament, is Δίδυμος "Didymos".
The Nag Hammadi copy of the "Gospel of Thomas" begins: "These are the secret sayings that the living Jesus spoke and Didymos, Judas Thomas, recorded." Early Syrian traditions also relate the apostle's full name as Judas Thomas. Some have seen in the "Acts of Thomas" (written in east Syria in the early 3rd century, or perhaps as early as the first half of the 2nd century) an identification of Saint Thomas with the apostle Judas, Son of James, better known in English as Jude. However, the first sentence of the Acts follows the Gospels and the Acts of the Apostles in distinguishing the apostle Thomas and the apostle Judas son of James. Others, such as James Tabor, identify him as Judah, the brother of Jesus mentioned by Mark. In the Book of Thomas the Contender, part of the Nag Hammadi library, he is alleged to be a twin to Jesus: "Now, since it has been said that you are my twin and true companion, examine yourself…"
A "Doubting Thomas" is a skeptic who refuses to believe without direct personal experience—a reference to the Apostle Thomas, due to his refusal to believe the resurrected Jesus had appeared to the ten other apostles, until he could see and feel the wounds received by Jesus on the cross.
When the feast of Saint Thomas was inserted in the Roman calendar in the 9th century, it was assigned to 21 December. The "Martyrology of St. Jerome" mentioned the apostle on 3 July, the date to which the Roman celebration was transferred in 1969, so that it would no longer interfere with the major ferial days of Advent. Traditionalist Roman Catholics (who follow the General Roman Calendar of 1960 or earlier) and many Anglicans (including members of the Episcopal Church as well as members of the Church of England and the Lutheran Church, who worship according to the 1662 edition of the Book of Common Prayer), still celebrate his feast day on 21 December. However, most modern liturgical calendars (including the Common Worship calendar of the Church of England) prefer 3 July.
The Eastern Orthodox and Byzantine Catholic churches celebrate his feast day on 6 October (for those churches which follow the traditional Julian calendar, 6 October currently falls on 19 October of the modern Gregorian calendar). In addition, the next Sunday of the Easter (Pascha) is celebrated as the Sunday of Thomas, in commemoration of Thomas' question to Jesus, which led him to proclaim, according to Orthodox teaching, two natures of Jesus, both human and divine. Thomas is commemorated in common with all of the other apostles on 30 June (13 July), in a feast called the Synaxis of the Holy Apostles. He is also associated with the "Arabian" (or "Arapet") icon of the Theotokos (Mother of God), which is commemorated on 6 September (19 September). The Malankara Orthodox church celebrates his feast on three days, 3 July (in memory of the relic translation to Edessa), 18 December (the Day he was lanced), and 21 December (when he died).
"The Passing of Mary", adjudged heretical by Pope Gelasius I in 494, was attributed to Joseph of Arimathea. The document states that Thomas was the only witness of the Assumption of Mary into heaven. The other apostles were miraculously transported to Jerusalem to witness her death. Thomas was left in India, but after her first burial, he was transported to her tomb, where he witnessed her bodily assumption into heaven, from which she dropped her girdle. In an inversion of the story of Thomas' doubts, the other apostles are skeptical of Thomas' story until they see the empty tomb and the girdle. Thomas' receipt of the girdle is commonly depicted in medieval and pre-Tridentine Renaissance art, the apostle's doubting reduced to a metaphorical knot in the Bavarian baroque Mary Untier of Knots.
According to traditional accounts of the Saint Thomas Christians of India, the Apostle Thomas landed in Muziris (Cranganore) on the Kerala coast in AD 52 and was martyred in Mylapore, near Madras, in AD 72. The port was destroyed in 1341 by a massive flood that realigned the coasts. He is believed by the Saint Thomas Christian tradition to have established seven churches (communities) in Kerala. These churches are at Kodungallur, Palayoor, Kottakkavu (Paravur), Kokkamangalam, Niranam, Nilackal (Chayal), Kollam, and Thiruvithamcode. Thomas baptized several families, namely Pakalomattom, Sankarapuri, Thayyil, Payyappilly, Kalli, Kaliyankal, and Pattamukku. Other families claim to have origins almost as far back as these, and the religious historian Robert Eric Frykenberg notes that: "Whatever dubious historicity may be attached to such local traditions, there can be little doubt as to their great antiquity or to their great appeal in popular imagination."
St. Ephrem, a doctor of Syriac Christianity, writes in the forty-second of his "Carmina Nisibina" that the Apostle was put to death in India, and that his remains were subsequently buried in Edessa, brought there by an unnamed merchant.
According to Eusebius' record, Thomas and Bartholomew were assigned to Parthia and India.
We arrived at Edessa in the Name of Christ our God, and, on our arrival, we straightway repaired to the church and memorial of saint Thomas. There, according to custom, prayers were made and the other things that were customary in the holy places were done; we read also some things concerning saint Thomas himself. The church there is very great, very beautiful and of new construction, well worthy to be the house of God, and as there was much that I desired to see, it was necessary for me to make a three days' stay there.
According to Saint Theodoret of Cyrrhus, the bones of Saint Thomas were transferred by Cyrus I, Bishop of Edessa, from the martyrium outside of Edessa to a church in the south-west corner of the city on 22 August 394.
In 441, the "Magister militum per Orientem" Anatolius donated a silver coffin to hold the relics.
In AD 522, Cosmas Indicopleustes (called the Alexandrian) visited the Malabar Coast. He is the first traveller who mentions Syrian Christians in Malabar, in his book "Christian Topography." He mentions that in the town of "Kalliana" (Quilon or Kollam) there was a bishop who had been consecrated in Persia.
In 1144, the city was conquered by the Zengids and the shrine destroyed.
The reputed relics of St. Thomas remained at Edessa until they were translated to Chios in 1258. Some portion of the relics were later transported to the West, and now rest in the Cathedral of St. Thomas the Apostle in Ortona, Italy. However, the skull of Thomas is said to be at Monastery of Saint John the Theologian on the Greek island of Patmos.
Ortona's three galleys, led by General Leone Acciaiuoli, reached the island of Chios in 1258. Chios was considered the island where Saint Thomas, after his death in India, had been buried. Part of the fleet fought around the Peloponnese and the Aegean islands, the rest along the then-Syrian coast; the three Ortonese galleys operated on the second front of the war and so reached the island of Chios.
The tale is provided by Giambattista De Lectis, a physician and writer of 16th-century Ortona. After the looting, the Ortonese commander Leone went to pray in the main church of the island of Chios and was drawn to a chapel adorned and resplendent with lights. An elderly priest informed him through an interpreter that the body of Saint Thomas the Apostle was venerated in that oratory. Leone, filled with an unusual sweetness, knelt in deep prayer. At that moment a light hand twice invited him to come closer. He reached out and took a bone from the largest opening of the tombstone, on which were carved Greek letters and the half-length figure of a haloed bishop. This confirmed what the old priest had said: he was indeed in the presence of the Apostle's body. He went back to the galley and planned the theft for the next night, together with his companion Ruggiero Grogno. The two lifted the heavy gravestone and uncovered the relics beneath; they wrapped them in snow-white cloths, laid them in a wooden box (kept at Ortona until the looting of 1566), and brought them aboard the galley. Leone then returned to the church with other comrades, took up the tombstone, and carried it away. As soon as Admiral Chinardo learned of the precious cargo, he moved all the sailors of the Muslim faith onto other ships and ordered Leone to set course for Ortona.
He landed at the port of Ortona on 6 September 1258. According to De Lectis's account, the abbot Jacopo, responsible for the church of Ortona, was informed and arranged a full welcome, felt and shared by all the people. Since then the body of the apostle and the gravestone have been preserved in the crypt of the Basilica. A parchment drawn up in Bari in 1259 before the notary John Peacock, in the presence of five witnesses, and now preserved at the Diocesan Library in Ortona, confirms the veracity of the event reported, as mentioned, by Giambattista De Lectis, the 16th-century Ortonese physician and writer.
The relics survived both the Saracen looting of 1566 and the destruction of the battle of Ortona, fought in late December 1943. The basilica was blown up by the Germans because its bell tower could have served as a lookout point for the Allies, who were approaching by sea from San Vito Chietino. The relics, together with the treasure of Saint Thomas, were intended by the Germans to be sold, but the monks entombed them inside the bell tower, the only surviving part of the semi-ruined church.
The tombstone of Thomas, brought to Ortona from Chios along with the relics of the Apostle, is currently preserved in the crypt of the St Thomas Basilica, behind the altar; the urn containing the bones is placed under the altar. The stone is the cover of a false coffin, a fairly widespread burial form in the early Christian world, serving as the top of a tomb made of less expensive material. The slab bears an inscription and a bas-relief that recall, in many respects, the Syro-Mesopotamian world. On it can be read, in Greek uncial characters, the expression "osios Thomas", that is, Saint Thomas. On palaeographic and lexical grounds it can be dated to the 3rd–5th century, a period in which the term "osios" was still used as a synonym of "aghios", in that a saint is one who is in the grace of God and inserted in the Church: both words, therefore, indicate Christians. In the particular case of Saint Thomas's slab, the word "osios" may well be a translation of the Syriac "mar" ("lord"), attributed in the ancient world, and to the present day, to a saint or a bishop.
The finger bones of Saint Thomas were discovered during restoration work at the Church of Saint Thomas in Mosul, Iraq in 1964, and were housed there until the Fall of Mosul, after which the relics were transferred to the Monastery of Saint Matthew on 17 June 2014.
A number of early Christian writings written during the centuries immediately following the first Ecumenical Council of 325 mention Thomas' mission.
The "Transitus Mariae" describes each of the apostles purportedly being temporarily transported to heaven during the Assumption of Mary.
The main source is the apocryphal Acts of Thomas, sometimes called by its full name "The Acts of Judas Thomas", written circa 180–230 AD. These are generally regarded by various Christian religions as apocryphal, or even heretical. The two centuries that lapsed between the life of the apostle and the recording of this work cast doubt on their authenticity.
The king, Misdeus (or Mizdeos), was infuriated when Thomas converted the queen Tertia, the king's son Juzanes, sister-in-law princess Mygdonia and her friend Markia. Misdeus led Saint Thomas outside the city and ordered four soldiers to take him to the nearby hill, where the soldiers speared Thomas and killed him. After Thomas' death, Syphorus was elected the first presbyter of Mazdai by the surviving converts, while Juzanes was the first deacon. (The names Misdeus, Tertia, Juzanes, Syphorus, Markia and Mygdonia (c.f. Mygdonia, a province of Mesopotamia) may suggest Greek descent or cultural influences. Greek traders had long visited Muziris, and the Greek kingdoms in northern India and Bactria, founded by Alexander the Great, were vassals of the Indo-Parthians.)
The Doctrine of the Apostles attests that Thomas had written Christian doctrine from India.
Christian philosopher Origen taught with great acclaim in Alexandria and then in Caesarea. He is the first known writer to record the casting of lots by the Apostles. Origen's original work has been lost, but his statement about Parthia falling to Thomas has been preserved by Eusebius. "Origen, in the third chapter of his Commentary on Genesis, says that, according to tradition, Thomas's allotted field of labour was Parthia".
Quoting Origen, Eusebius of Caesarea says: "When the holy Apostles and disciples of our Saviour were scattered over all the world, Thomas, so the tradition has it, obtained as his portion Parthia…" "Judas, who is also called Thomas" has a role in the legend of king Abgar of Edessa (Urfa), for having sent Thaddaeus to preach in Edessa after the Ascension (Eusebius, "Historia ecclesiae" 1.13; III.1; Ephrem the Syrian also recounts this legend.)
Many devotional hymns composed by St. Ephraem bear witness to the Edessan Church's strong conviction concerning St. Thomas's Indian Apostolate. There the devil speaks of Saint Thomas as "the Apostle I slew in India". Also, "The merchant brought the bones" to Edessa.
Gregory of Nazianzus was born AD 330 and was consecrated a bishop by his friend St. Basil in 372; his father, the Bishop of Nazianzus, induced him to share his charge. In 379, the people of Constantinople called him to be their bishop. By the Orthodox Church, he is emphatically called "the Theologian". "What? were not the Apostles strangers amidst the many nations and countries over which they spread themselves? … Peter indeed may have belonged to Judea, but what had Paul in common with the gentiles, Luke with Achaia, Andrew with Epirus, John with Ephesus, Thomas with India, Mark with Italy?"
Saint Ambrose was thoroughly acquainted with the Greek and Latin Classics and had a good deal of information on India and Indians. He speaks of the Gymnosophists of India, the Indian Ocean, the river Ganges etc., a number of times. "This admitted of the Apostles being sent without delay according to the saying of our Lord Jesus… Even those Kingdoms which were shut out by rugged mountains became accessible to them, as India to Thomas, Persia to Matthew..."
Saint Gregory of Tours (died 594) testified: "Thomas the Apostle, according to the narrative of his martyrdom is stated to have suffered in India. His holy remains (corpus), after a long interval of time, were removed to the city of Edessa in Syria and there interred. In that part of India where they first rested, stand a monastery and a church of striking dimensions, elaborately adorned and designed. This Theodore, who had been to the place, narrated to us."
In the first two centuries of the Christian era, a number of writings were circulated. It is unclear now why Thomas was seen as an authority for doctrine, although this belief is documented in Gnostic groups as early as the "Pistis Sophia". In that Gnostic work, Mary Magdalene (one of the disciples) says:
An early, non-Gnostic tradition may lie behind this statement, which also emphasizes the primacy of the Gospel of Matthew in its Aramaic form, over the other canonical three.
Besides the "Acts of Thomas" there was a widely circulated "Infancy Gospel of Thomas" probably written in the later 2nd century, and probably also in Syria, which relates the miraculous events and prodigies of Jesus' boyhood. This is the document which tells for the first time the familiar legend of the twelve sparrows which Jesus, at the age of five, fashioned from clay on the Sabbath day, which took wing and flew away. The earliest manuscript of this work is a 6th-century one in Syriac. This gospel was first referred to by Irenaeus; Ron Cameron notes: "In his citation, Irenaeus first quotes a non-canonical story that circulated about the childhood of Jesus and then goes directly on to quote a passage from the infancy narrative of the Gospel of Luke. Since the Infancy Gospel of Thomas records both of these stories, in relative close proximity to one another, it is possible that the apocryphal writing cited by Irenaeus is, in fact, what is now known as the Infancy Gospel of Thomas. Because of the complexities of the manuscript tradition, however, there is no certainty as to when the stories of the Infancy Gospel of Thomas began to be written down."
The best known in modern times of these documents is the "sayings" document that is being called the Gospel of Thomas, a noncanonical work whose date is disputed. The opening line claims it is the work of "Didymos Judas Thomas" – whose identity is unknown. This work was discovered in a Coptic translation in 1945 at the Egyptian village of Nag Hammadi, near the site of the monastery of Chenoboskion. Once the Coptic text was published, scholars recognized that an earlier Greek version had been published from fragments of papyrus found at Oxyrhynchus in the 1890s.
In the 16th-century work "Jornada", Antonio Gouvea writes of ornate crosses known as "Saint Thomas Crosses". They are also known as the Nasrani Menorah, Persian Cross, or Mar Thoma Sleeva. These crosses are traditionally believed to date from the 6th century and are found in a number of churches in Kerala, Mylapore and Goa. "Jornada" is the oldest known written document to refer to this type of cross as a St. Thomas Cross. Gouvea also writes about the veneration of the Cross at Cranganore, referring to the cross as "Cross of Christians".
There are several interpretations of the Nasrani symbol. The interpretation based on Christian Jewish tradition assumes that its design was based on Jewish menorah, an ancient symbol of the Hebrews, which consists of seven branched lamp stand (candelabra). The interpretation based on local culture states that the Cross without the figure of Jesus and with flowery arms symbolizing "joyfulness" points to the resurrection theology of Saint Paul; the Holy Spirit on the top represents the role of Holy Spirit in the resurrection of Jesus Christ. The lotus symbolizing Buddhism and the Cross over it shows that Christianity was established in the land of Buddha. The three steps indicate Calvary and the rivulets, channels of Grace flowing from the Cross.
The Qur’anic account of the disciples of Jesus does not include their names, numbers, or any detailed accounts of their lives. Muslim exegesis, however, more-or-less agrees with the New Testament list and says that the disciples included Peter, Philip, Thomas, Bartholomew, Matthew, Andrew, James, Jude, John, and Simon the Zealot.
Tom Cruise
Thomas Cruise Mapother IV (born July 3, 1962) is an American actor and producer. He has received various accolades for his work, including three Golden Globe Awards and three nominations for Academy Awards. With a net worth of $570 million as of 2020, he is one of the highest-paid actors in the world. In addition, his films have grossed billions of dollars in North America and worldwide, making him one of the highest-grossing box office stars of all time.
Cruise began acting in the early 1980s and made his breakthrough with leading roles in the comedy film "Risky Business" (1983) and action drama film "Top Gun" (1986). Critical acclaim came with his roles in the drama films "The Color of Money" (1986), "Rain Man" (1988), and "Born on the Fourth of July" (1989). For his portrayal of Ron Kovic in the latter, he won a Golden Globe Award and received a nomination for the Academy Award for Best Actor. As a leading Hollywood star in the 1990s, he starred in several commercially successful films, including the drama "A Few Good Men" (1992), the thriller "The Firm" (1993), the horror film "Interview with the Vampire" (1994), and the romance "Jerry Maguire" (1996). For his role in the latter, he won a Golden Globe Award for Best Actor and received his second Academy Award nomination.
Cruise's performance as a motivational speaker in the drama film "Magnolia" (1999) earned him another Golden Globe Award and a nomination for the Academy Award for Best Supporting Actor. As an action star, he has played Ethan Hunt in six films of the "Mission: Impossible" series from 1996 to 2018. He also continued to feature in several science fiction and action films, including "Vanilla Sky" (2001), "Minority Report" (2002), "The Last Samurai" (2003), "Collateral" (2004), "War of the Worlds" (2005), "Knight and Day" (2010), "Jack Reacher" (2012), "Oblivion" (2013), and "Edge of Tomorrow" (2014).
Cruise has been married to actresses Mimi Rogers, Nicole Kidman, and Katie Holmes. He has three children, two of whom were adopted during his marriage to Kidman and the other of whom is a biological daughter he had with Holmes. Cruise is an outspoken advocate for the Church of Scientology and its associated social programs, and credits it with helping him overcome dyslexia. In the 2000s, he sparked controversy with his Church-affiliated criticisms of psychiatry and anti-depressant drugs, his efforts to promote Scientology as a religion in Europe, and a leaked video interview of him promoting Scientology.
Cruise was born Thomas Cruise Mapother IV in Syracuse, New York, on July 3, 1962, the son of special education teacher Mary Lee (née Pfeiffer; 1936–2017) and electrical engineer Thomas Cruise Mapother III (1934–1984). His parents were both from Louisville, Kentucky, and had English, German, and Irish ancestry. Cruise has three sisters named Lee Anne, Marian, and Cass. One of his cousins, William Mapother, is also an actor who has appeared alongside Cruise in five films. Cruise grew up in near poverty and had a Catholic upbringing. He later described his father as "a merchant of chaos", a "bully", and a "coward" who beat his children. He elaborated, "[My father] was the kind of person where, if something goes wrong, they kick you. It was a great lesson in my life—how he'd lull you in, make you feel safe and then, bang! For me, it was like, 'There's something wrong with this guy. Don't trust him. Be careful around him.'"
Cruise spent part of his childhood in Canada. When his father took a job as a defense consultant with the Canadian Armed Forces, his family moved in late 1971 to Beacon Hill, Ottawa. He attended the new Robert Hopkins Public School for his fourth and fifth grade education. He first became involved in drama in fourth grade, under the tutelage of George Steinburg. He and six other boys put on an improvised play to music called "IT" at the Carleton Elementary School drama festival. Drama organizer Val Wright, who was in the audience, later said that "the movement and improvisation were excellent [...] it was a classic ensemble piece". In sixth grade, Cruise went to Henry Munro Middle School in Ottawa. That year, his mother left his father, taking Cruise and his sisters back to the United States. In 1978, she married Jack South. Cruise's father died of cancer in 1984. Cruise briefly took a church scholarship and attended a Franciscan seminary in Cincinnati, Ohio; he aspired to become a priest before he became interested in acting. In total, he attended 15 schools in 14 years. In his senior year of high school, he played football for the varsity team as a linebacker, but was cut from the squad after getting caught drinking beer before a game. He went on to star in the school's production of "Guys and Dolls". In 1980, he graduated from Glen Ridge High School in Glen Ridge, New Jersey.
At age 18, with the blessing of his mother and stepfather, Cruise moved to New York City to pursue an acting career. After working as a busboy in New York, he went to Los Angeles to try out for television roles. He signed with CAA and began acting in films. He first appeared in a bit part in the 1981 film "Endless Love", followed by a major supporting role as a crazed military academy student in "Taps" later that year. In 1983, Cruise was part of the ensemble cast of "The Outsiders". That same year he appeared in "All the Right Moves" and "Risky Business", which has been described as "A Generation X classic, and a career-maker for Tom Cruise", and which, along with 1986's "Top Gun", cemented his status as a superstar. Cruise also played the male lead in the Ridley Scott film "Legend", released in 1985.
Cruise followed up "Top Gun" with "The Color of Money", which came out the same year, and which paired him with Paul Newman. 1988 saw him star in "Cocktail", which earned him a nomination for the Razzie Award for Worst Actor. Later that year he starred with Dustin Hoffman in "Rain Man", which won the Academy Award for Best Picture and Cruise the Kansas City Film Critics Circle Award for Best Supporting Actor. Cruise portrayed real-life paralyzed Vietnam War veteran Ron Kovic in 1989's "Born on the Fourth of July", which earned him a Golden Globe Award for Best Actor – Motion Picture Drama, the Chicago Film Critics Association Award for Best Actor, the People's Choice Award for Favorite Motion Picture Actor, a nomination for the BAFTA Award for Best Actor in a Leading Role, and Cruise's first Best Actor Academy Award nomination.
Cruise's next films were "Days of Thunder" (1990) and "Far and Away" (1992), both of which co-starred then-wife Nicole Kidman as his love interest, followed by the legal thriller "The Firm", which was a critical and commercial success. In 1994, Cruise starred along with Brad Pitt, Antonio Banderas and Christian Slater in Neil Jordan's "Interview with the Vampire", a gothic drama/horror film that was based on Anne Rice's best-selling novel. The film was well-received, although Rice was initially quite outspoken in her criticism of Cruise having been cast in the film, as Julian Sands was her first choice. Upon seeing the film however, she paid $7,740 for a two-page ad in "Daily Variety" praising his performance and apologizing for her previous doubts about him.
In 1996, Cruise appeared as superspy Ethan Hunt in the film reboot of "Mission: Impossible", which he also produced. It was a box office success, although it received criticism regarding the Jim Phelps character being made a villain despite having been a protagonist of the original television series.
In 1996, he took on the title role in "Jerry Maguire", for which he earned a Golden Globe and his second nomination for an Academy Award. In 1999, Cruise costarred with Kidman in the erotic Stanley Kubrick film "Eyes Wide Shut", and took a rare supporting role, as a motivational speaker, Frank T.J. Mackey, in "Magnolia", for which he received another Golden Globe and nomination for an Academy Award.
In 2000, Cruise returned as Ethan Hunt in the second installment of the "Mission: Impossible" films, "Mission: Impossible 2". The film was helmed by Hong Kong director John Woo and branded with his gun fu style, and continued the series' blockbuster success at the box office, taking in almost $547 million worldwide. Like its predecessor, it was the highest-grossing film of the year, though it had a mixed critical reception. Cruise received an MTV Movie Award for Best Male Performance for this film.
His next five films were major critical and commercial successes. The following year Cruise starred in the romantic thriller "Vanilla Sky" (2001) with Cameron Diaz and Penélope Cruz. In 2002, Cruise starred in the dystopian science fiction action film "Minority Report" which was directed by Steven Spielberg and based on the science fiction short story by Philip K. Dick.
In 2003, he starred in Edward Zwick's period action drama "The Last Samurai", for which he received a Golden Globe nomination for best actor. In 2004, Cruise received critical acclaim for his performance as Vincent in "Collateral", directed by Michael Mann. In 2005, Cruise worked again with Steven Spielberg in "War of the Worlds", a loose adaptation of the H. G. Wells novel of the same name, which became the fourth highest-grossing film of the year with US$591.4 million worldwide. Also in 2005, he won the People's Choice Award for Favorite Male Movie Star, and the MTV Generation Award. Cruise was nominated for seven Saturn Awards between 2002 and 2009, winning once. Nine of the ten films he starred in during the decade made over $100 million at the box office.
In 2006, he returned to his role as Ethan Hunt in the third installment of the "Mission: Impossible" film series, "Mission: Impossible III". The film was more positively received by critics than the previous films in the series, and grossed nearly $400 million at the box office. In 2007, Cruise took a rare supporting role for the second time in "Lions for Lambs", which was a commercial disappointment. This was followed by an unrecognizable appearance as "Les Grossman" in the 2008 comedy "Tropic Thunder" with Ben Stiller, Jack Black, and Robert Downey Jr. This performance earned Cruise a Golden Globe nomination. Cruise played the central role in the historical thriller "Valkyrie", released on December 25, 2008, to box office success.
In March 2010, Cruise completed filming the action-comedy "Knight and Day", in which he re-teamed with former costar Cameron Diaz; the film was released on June 23, 2010. On February 9, 2010, Cruise confirmed that he would star in "Mission: Impossible – Ghost Protocol", the fourth installment in the "Mission: Impossible" series. The film was released in December 2011 to high critical acclaim and box office success. Unadjusted for ticket price inflation, it was Cruise's biggest commercial success to that date.
On May 6, 2011, Cruise was awarded a humanitarian award from the Simon Wiesenthal Center and Museum of Tolerance for his work as a dedicated philanthropist. In mid-2011, Cruise started shooting the movie "Rock of Ages", in which he played the character Stacee Jaxx. The film was released in June 2012.
Cruise starred as Jack Reacher in the film adaptation of British author Lee Child's 2005 novel "One Shot." The film was released on December 21, 2012. It met with positive reviews from critics and was a box office success grossing $216,568,266 worldwide. In 2013, he starred in the science fiction film "Oblivion" based on director Joseph Kosinski's graphic novel of the same name. The film met with mixed reviews and grossed $285,600,588 worldwide. It also starred Morgan Freeman and Olga Kurylenko.
As of mid-2015, Cruise's films had grossed about $8.2 billion worldwide.
Cruise returned as Ethan Hunt in the fifth installment of the "Mission: Impossible" series, "Mission: Impossible – Rogue Nation", which he also produced. Returning cast members included Simon Pegg as Benji and Jeremy Renner as William Brandt, with Christopher McQuarrie as director. The film earned high critical acclaim and was a commercial success.
Cruise starred in the 2017 reboot of Boris Karloff's 1932 horror movie "The Mummy". The new film, also titled "The Mummy", was produced by Alex Kurtzman, Chris Morgan, and Sean Daniel, written by Jon Spaihts, and directed by Kurtzman; it received negative reviews and flopped at the box office. In 2018, Cruise again reprised Ethan Hunt in the sixth film in the franchise, "Mission: Impossible – Fallout". The film was more positively received by critics than the previous films in the series, and grossed over $791 million at the box office. Unadjusted for ticket price inflation, it is Cruise's biggest commercial success to date.
Cruise partnered with his former talent agent Paula Wagner to form Cruise/Wagner Productions in 1993, and the company has since co-produced several of Cruise's films, the first being "Mission: Impossible" in 1996, which was also Cruise's first project as a producer.
Cruise is noted as having negotiated some of the most lucrative film deals in Hollywood, and was described in 2005 by Hollywood economist Edward Jay Epstein as "one of the most powerful – and richest – forces in Hollywood." Epstein argues that Cruise is one of the few producers (the others being George Lucas, Steven Spielberg and Jerry Bruckheimer) who are regarded as able to guarantee the success of a billion-dollar film franchise. Epstein also contends that the public obsession with Cruise's tabloid controversies obscures full appreciation of Cruise's exceptional commercial prowess.
Cruise/Wagner Productions, Cruise's film production company, is said to be developing a screenplay based on Erik Larson's "New York Times" bestseller, "The Devil in the White City" about a real-life serial killer, H. H. Holmes, at Chicago's World's Columbian Exposition. Kathryn Bigelow is attached to the project to produce and helm. Meanwhile, Leonardo DiCaprio's production company, Appian Way, is also developing a film about Holmes and the World's Fair, in which DiCaprio will star.
Cruise has produced several films in which he appeared. He produced "Mission: Impossible", "Without Limits", "Mission: Impossible 2", "The Others", "Vanilla Sky" and many others.
On August 22, 2006, Paramount Pictures announced it was ending its 14-year relationship with Cruise. In the "Wall Street Journal", chairman of Viacom (Paramount's parent company) Sumner Redstone cited the economic damage to Cruise's value as an actor and producer from his controversial public behavior and views. Cruise/Wagner Productions responded that Paramount's announcement was a face-saving move after the production company had successfully sought alternative financing from private equity firms.
Industry analysts such as Edward Jay Epstein commented that the real reason for the split was most likely Paramount's discontent over Cruise/Wagner's exceptionally large share of DVD sales from the "Mission: Impossible" franchise.
In November 2006, Cruise and Paula Wagner announced that they had taken over the film studio United Artists. Cruise acts as a producer and star in films for United Artists, while Wagner serves as UA's chief executive.
Production began in 2007 of "Valkyrie", a thriller based on the July 20, 1944 assassination attempt against Adolf Hitler. The film was acquired in March 2007 by United Artists. On March 21, 2007, Cruise signed to play Claus von Stauffenberg, the protagonist. This project marked the second production to be greenlighted since Cruise and Wagner took control of United Artists. The first was its inaugural film, "Lions for Lambs", directed by Robert Redford and starring Redford, Meryl Streep and Cruise. "Lambs" was released on November 9, 2007, opening to unimpressive box office revenue and critical reception.
In August 2008, Wagner stepped down from her position at United Artists; she retains her stake in UA, which combined with Cruise's share amounts to 30 percent of the studio.
Cruise splits his time between homes in Beverly Hills, California; Telluride, Colorado; Clearwater, Florida; Dulwich, London; and East Grinstead, West Sussex. He had several relationships with older women in the early-to-mid-1980s, including Rebecca De Mornay (three years his senior), Patti Scialfa (nine years his senior), and Cher (16 years his senior).
Cruise married actress Mimi Rogers on May 9, 1987. They divorced on February 4, 1990. Rogers introduced Cruise to Scientology.
Cruise met his second wife, actress Nicole Kidman, on the set of their film "Days of Thunder" (1990). The couple married on December 24, 1990. They adopted two children: Isabella Jane (born 1992) and Connor Antony (born 1995). In February 2001, Cruise filed for divorce from Kidman while she was unknowingly pregnant. The pregnancy ended in a miscarriage. In 2007, Kidman clarified rumors of a miscarriage early in her marriage to Cruise, saying that it was wrongly reported and explaining that she had actually had an ectopic pregnancy.
Cruise was next romantically linked with Penélope Cruz, his co-star in "Vanilla Sky" (2001). Their relationship ended in 2004. An article in the October 2012 issue of "Vanity Fair" stated that several sources have said that after the breakup with Cruz, Scientologist leaders launched a secret project to find Cruise a new girlfriend. According to those sources, a series of "auditions" of Scientologist actresses resulted in a short-lived relationship with British-Iranian actress Nazanin Boniadi, who subsequently left Scientology. Scientology and Cruise's lawyers issued strongly worded denials and threatened to sue, accusing "Vanity Fair" of "shoddy journalism" and "religious bigotry". Journalist Roger Friedman later reported that he received an email from director and ex-Scientologist Paul Haggis confirming the story.
In April 2005, Cruise began dating actress Katie Holmes. On April 27 that year, Cruise and Holmes—dubbed "TomKat" by the media—made their first public appearance together in Rome. A month later, Cruise publicly declared his love for Holmes on "The Oprah Winfrey Show", famously jumping up and down on Winfrey's couch during the show. On October 6, 2005, Cruise and Holmes announced they were expecting a child. In April 2006, their daughter Suri was born. On November 18, 2006, Holmes and Cruise were married at the 15th-century Odescalchi Castle in Bracciano, in a Scientologist ceremony attended by many Hollywood stars. Their publicists said the couple had "officialized" their marriage in Los Angeles the day before the Italian ceremony. There has been widespread speculation that their marriage was arranged by the Church of Scientology. David Miscavige, the head of Scientology, served as Cruise's best man. On June 29, 2012, Holmes filed for divorce from Cruise. On July 9, the couple signed a divorce settlement worked out by their lawyers. New York law requires all divorce documents remain sealed, so the exact terms of the settlement are not publicly available.
Cruise is an outspoken advocate for the Church of Scientology and its associated social programs. He became involved with Scientology in 1990 through his first wife, Mimi Rogers. Cruise struggled with dyslexia at an early age and has said that Scientology, specifically the L. Ron Hubbard Study Tech, helped him overcome dyslexia.
In addition to promoting various programs that introduce people to Scientology, Cruise has campaigned for Scientology to be recognized as a religion in Europe. In 2005, the Paris city council revealed that Cruise had lobbied officials Nicolas Sarkozy and Jean-Claude Gaudin. They described him as a militant spokesman for Scientology, and barred any further dealings with him.
Cruise co-founded and raised donations for Downtown Medical to offer New York City 9/11 rescue workers detoxification therapy based on the works of L. Ron Hubbard. This drew criticism from the medical profession and from firefighters.
For such activities, Scientology leader David Miscavige created the Scientology Freedom Medal of Valor and awarded it to Cruise in late 2004.
In January 2004, Cruise made the controversial statement "I think psychiatry should be outlawed." Further controversy erupted in 2005 after he openly criticized actress Brooke Shields for using the drug Paxil (paroxetine), an anti-depressant to which Shields attributes her recovery from postpartum depression after the birth of her first daughter in 2003. Cruise asserted that there is no such thing as a chemical imbalance and that psychiatry is a form of pseudoscience. Shields responded that Cruise "should stick to saving the world from aliens and let women who are experiencing postpartum depression decide what treatment options are best for them". This led to a heated argument between Matt Lauer and Cruise on NBC's "Today" on June 24, 2005.
Medical authorities view Cruise's comments as furthering the social stigma of mental illness. Shields herself called Cruise's comments "a disservice to mothers everywhere." In late August 2006, Cruise apologized in person to Shields for his comments.
Scientology is well known for its opposition to mainstream psychiatry and the psychoactive drugs which are routinely prescribed for treatment. It was reported that Cruise's anti-psychiatry actions led to a rift with director Steven Spielberg. Spielberg had reportedly mentioned in Cruise's presence the name of a doctor friend who prescribed psychiatric medication. Shortly thereafter, the doctor's office was picketed by Scientologists, reportedly angering Spielberg.
On January 15, 2008, a video produced by the Church of Scientology featuring an interview with Cruise was posted on YouTube, showing Cruise discussing what being a Scientologist means to him. The Church of Scientology said the video had been "pirated and edited," and was taken from a three-hour video produced for members of Scientology. YouTube removed the Cruise video from their site under threat of litigation.
After YouTube investigated this claim, they found that the video did not breach copyright law, as it is covered by the fair use clause. It was subsequently reinstated on the site, and as of June 2020, the video has achieved over 14 million views. YouTube has declined to remove it again, due to the popularity of the video, and subsequent changes to copyright policy of the Web site.
In 2013, Cruise stated that ex-wife Katie Holmes divorced him in part to protect the couple's daughter Suri from Scientology. He also said that Suri is no longer a practicing member of the church.
In March 2004, his publicist of 14 years, Pat Kingsley, resigned. Cruise's next publicist was Lee Anne DeVette, Cruise's sister, who was herself a Scientologist. She served in that role until November 2005. DeVette was replaced with Paul Bloch from the publicity firm Rogers and Cowan. Such restructuring was seen as a move to curtail publicity of his views on Scientology, as well as the controversy surrounding his relationship with Katie Holmes.
The 2015 documentary "Going Clear: Scientology and the Prison of Belief" cast a spotlight on Cruise's role in Scientology. The film alleges that Cruise used Sea Org workers as a source of free labor. In the film, Cruise's former auditor Marty Rathbun claims that wife Nicole Kidman was wiretapped on Tom Cruise's suggestion (which Cruise's lawyer denies).
In 2006, "Premiere" ranked Cruise as Hollywood's most powerful actor, as Cruise came in at number 13 on the magazine's 2006 Power List, being the highest ranked actor. The same year, "Forbes" magazine ranked him as the world's most powerful celebrity. The founder of CinemaScore in 2016 cited Cruise and Leonardo DiCaprio as the "two stars, it doesn't matter how bad the film is, they can pull [the box office] up".
October 10, 2006, was declared "Tom Cruise Day" in Japan; the Japan Memorial Day Association said that he was awarded with a special day because he has made more trips to Japan than any other Hollywood star.
While reviewing "Days of Thunder", film critic Roger Ebert noted the similarities between several of Cruise's 1980s films and nicknamed the formula the Tom Cruise Picture. Some of Cruise's later films like "A Few Good Men" and "The Last Samurai" can also be considered to be part of this formula.
"Widescreenings" noted that for Tom Cruise's character Daniel Kaffee in "A Few Good Men",
[screenwriter] Aaron Sorkin interestingly takes the opposite approach of "Top Gun", where Cruise also starred as the protagonist. In "Top Gun", Cruise plays Mitchell, who is a 'hot shot' military underachiever who makes mistakes because he is trying to outperform his late father. Where Maverick Mitchell needs to rein in the discipline, Daniel Kaffee needs to let it go, finally see what he can do.
Sorkin and director Rob Reiner are praised for gradually unveiling Kaffee's potential in the film.
In 1998, Tom Cruise successfully sued the "Daily Express", a British tabloid which alleged that his marriage to Kidman was a sham designed to cover up his homosexuality. Cruise won the libel case.
In May 2001, he filed a lawsuit against gay porn actor Chad Slater. Slater had told the celebrity magazine "Actustar" that he had been involved in an affair with Cruise. This claim was strongly denied by Cruise, and Slater was later ordered to pay $10 million to Cruise in damages after Slater declared he could not afford to defend himself against the suit and would therefore default. Cruise requested a default judgment and, in January 2003, a Los Angeles judge decided against Slater after the porn actor said that his story was false.
Cruise also sued "Bold Magazine" publisher Michael Davis for $100 million, because Davis had alleged (though never confirmed) that he had video that would prove Cruise was gay. The suit was dropped in exchange for a public statement by Davis that the video was not of Cruise, and that Cruise was heterosexual.
In 2006, Cruise sued cybersquatter Jeff Burgar to obtain control of the TomCruise.com domain name. When owned by Burgar, the domain redirected to information about Cruise on Celebrity1000.com. The decision to turn TomCruise.com over to Cruise was handed down by the World Intellectual Property Organization (WIPO) on July 5, 2006.
In 2009, Michael Davis Sapir filed a suit charging that his phone had been wiretapped at Cruise's behest. That suit was dismissed by a Central Civil West court judge in Los Angeles on the grounds that the statute of limitations had expired on Sapir's claim.
In October 2012, Cruise filed a lawsuit against "In Touch" and "Life & Style" magazines for defamation after they claimed Cruise had "abandoned" his six-year-old daughter. During deposition, Cruise testified that due to his work load 110 days had passed without him seeing her. The suit was ultimately settled between the two parties.
The Smashing Pumpkins
The Smashing Pumpkins (or Smashing Pumpkins) are an American alternative rock band from Chicago. Formed in 1988 by frontman Billy Corgan (lead vocals, guitar), D'arcy Wretzky (bass), James Iha (guitar), and Jimmy Chamberlin (drums), the band has undergone many line-up changes. The current lineup features Corgan, Chamberlin, Iha and guitarist Jeff Schroeder.
Disavowing the punk rock roots of many of their alt-rock contemporaries, they have a diverse, densely layered, and guitar-heavy sound, containing elements of gothic rock, heavy metal, dream pop, psychedelic rock, progressive rock, shoegazing, and electronica in later recordings. Corgan is the group's primary songwriter; his musical ambitions and cathartic lyrics have shaped the band's albums and songs, which have been described as "anguished, bruised reports from Billy Corgan's nightmare-land".
The Smashing Pumpkins broke into the musical mainstream with their second album, 1993's "Siamese Dream". The group built its audience with extensive touring and their 1995 follow-up, the double album "Mellon Collie and the Infinite Sadness", debuted at number one on the "Billboard" 200 album chart. With 30 million albums sold worldwide, the Smashing Pumpkins were one of the most commercially successful and critically acclaimed bands of the 1990s. However, internal fighting, drug use, and diminishing record sales led to a 2000 break-up.
In 2006, Corgan and Chamberlin reconvened to record a new Smashing Pumpkins album, "Zeitgeist". After touring throughout 2007 and 2008 with a lineup including new guitarist Jeff Schroeder, Chamberlin left the band in early 2009. Later that year, Corgan began a new recording series with a rotating lineup of musicians entitled "Teargarden by Kaleidyscope", which encompassed the release of stand-alone singles, compilation EP releases, and two full albums that also fell under the project's scope—"Oceania" in 2012 and "Monuments to an Elegy" in 2014. Chamberlin and Iha officially rejoined the band in February 2018. The reunited lineup released the album "Shiny and Oh So Bright, Vol. 1 / LP: No Past. No Future. No Sun." in November 2018.
After the breakup of his gothic rock band the Marked, singer and guitarist Billy Corgan left St. Petersburg, Florida, to return to his native city of Chicago, where he took a job in a record store and formed the idea of a new band to be called the Smashing Pumpkins. While working there, he met guitarist James Iha. Adorning themselves with paisley and other psychedelic trappings, the two began writing songs together (with the aid of a drum machine) that were heavily influenced by the Cure and New Order. The duo performed live for the first time on July 9, 1988 at the Polish bar Chicago 21. This performance included only Corgan on bass and Iha on guitar with a drum machine. Shortly thereafter, Corgan met D'arcy Wretzky after a show by the Dan Reed Network where they argued the merits of the band. After finding out Wretzky played bass guitar, Corgan recruited her into the lineup, and the trio played a show at the Avalon Nightclub. After this show, Cabaret Metro owner Joe Shanahan agreed to book the band on the condition that they replace the drum machine with a live drummer.
Jazz drummer Jimmy Chamberlin was recommended by a friend of Corgan's. Chamberlin knew little of alternative music and immediately changed the sound of the nascent band. As Corgan recalled of the period, "We were completely into the sad-rock, Cure kind of thing. It took about two or three practices before I realized that the power in his playing was something that enabled us to rock harder than we could ever have imagined." On October 5, 1988, the complete band took the stage for the first time at the Cabaret Metro.
In 1989, the Smashing Pumpkins made their first appearance on record with the compilation album "Light Into Dark", which featured several Chicago alternative bands. The group released its first single, "I Am One", in 1990 on local Chicago label Limited Potential. The single sold out and they released a follow-up, "Tristessa", on Sub Pop, after which they signed to Caroline Records. The band recorded their 1991 debut studio album "Gish" with producer Butch Vig at his Smart Studios in Madison, Wisconsin for $20,000. In order to gain the consistency he desired, Corgan often played all instruments excluding drums, which created tension in the band. The music fused heavy metal guitars, psychedelia, and dream pop, garnering them comparisons to Jane's Addiction. "Gish" became a minor success, with the single "Rhinoceros" receiving some airplay on modern rock radio. After releasing the "Lull" EP in October 1991 on Caroline Records, the band formally signed with Virgin Records, which was affiliated with Caroline. The band supported the album with a tour that included opening for bands such as the Red Hot Chili Peppers, Jane's Addiction, and Guns N' Roses. During the tour, Iha and Wretzky went through a messy breakup, Chamberlin became addicted to narcotics and alcohol, and Corgan entered a deep depression, writing some songs for the upcoming album in the parking garage where he lived at the time.
With the breakthrough of alternative rock into the American mainstream due to the popularity of grunge bands such as Nirvana and Pearl Jam, the Smashing Pumpkins were poised for major commercial success. At this time, the Smashing Pumpkins were routinely lumped in with the grunge movement, with Corgan protesting, "We've graduated now from 'the next Jane's Addiction' to 'the next Nirvana', now we're 'the next Pearl Jam'."
Amid this environment of intense internal pressure for the band to break through to widespread popularity, the band relocated to Marietta, Georgia in late 1992 to begin work on their second album, with Butch Vig returning as producer. The decision to record so far away from their hometown was motivated partly by the band's desire to avoid friends and distractions during the recording, but largely as a desperate attempt to cut Chamberlin off from his known drug connections. The recording environment for "Siamese Dream" was quickly marred by discord within the band. As was the case with "Gish", Corgan and Vig decided that Corgan should play nearly all of the guitar and bass parts on the album, contributing to an air of resentment. The contemporary music press began to portray Corgan as a tyrant. Corgan's depression, meanwhile, had deepened to the point where he contemplated suicide, and he compensated by practically living in the studio. Meanwhile, Chamberlin quickly managed to find new connections and was often absent without any contact for days at a time. In all, it took over four months to complete the record, with the budget exceeding $250,000.
Despite all the problems in its recording, "Siamese Dream" debuted at number ten on the "Billboard" 200 chart, and sold over four million copies in the U.S. alone. Alongside the band's mounting mainstream recognition, the band's reputation as careerists among their former peers in the independent music community worsened. Indie rock band Pavement's 1994 song "Range Life" directly mocks the band in its lyrics, although Stephen Malkmus, lead singer of Pavement, has stated, "I never dissed their music. I just dissed their status." Former Hüsker Dü frontman Bob Mould called them "the grunge Monkees", and fellow Chicago musician/producer Steve Albini wrote a scathing letter in response to an article praising the band, derisively comparing them to REO Speedwagon ("by, of and for the mainstream") and predicting their ultimate insignificance. The opening track and lead single of "Siamese Dream", "Cherub Rock", directly addresses Corgan's feud with the "indie-world".
In 1994 Virgin released the B-sides/rarities compilation "Pisces Iscariot" which charted higher than "Siamese Dream" by reaching number four on the "Billboard" 200. Also released was a VHS cassette titled "Vieuphoria" featuring a mix of live performances and behind-the-scenes footage. Following relentless touring to support the recordings, including headline slots on the 1994 Lollapalooza tour and at Reading Festival in 1995, the band took time off to write the follow-up album.
During 1995, Corgan wrote about 56 songs, following which the band went into the studio with producers Flood and Alan Moulder to work on what Corgan described as ""The Wall" for Generation X", and which became "Mellon Collie and the Infinite Sadness", a double album of twenty-eight songs, lasting over two hours (the vinyl version of the album contained three records, two extra songs, and an alternate track listing). The songs were intended to hang together conceptually as a symbol of the cycle of life and death. Praised by "Time" as "the group's most ambitious and accomplished work yet", "Mellon Collie" debuted at number one on the "Billboard" 200 in October 1995. Even more successful than "Siamese Dream", it was certified ten times platinum in the United States and became the best-selling double album of the decade. It also garnered seven 1997 Grammy Award nominations, including Album of the Year. The band won only the Best Hard Rock Performance award, for the album's lead single "Bullet with Butterfly Wings". The album spawned five singles—"Bullet with Butterfly Wings", "1979", "Zero", "Tonight, Tonight" which Corgan stated was inspired by the Cheap Trick song "I'll Be with You Tonight", and "Thirty-Three"—of which the first three were certified gold and all but "Zero" entered the Top 40. Many of the songs that did not make it onto "Mellon Collie" were released as B-sides to the singles, and were later compiled in "The Aeroplane Flies High" box set. The set was originally limited to 200,000 copies, but more were produced to meet demand.
In 1996 the Pumpkins undertook an extended world tour in support of "Mellon Collie". Corgan's look during this period—a shaved head, a long sleeve black shirt with the word "Zero" printed on it, and silver pants—became iconic. That year, the band also made a guest appearance in an episode of "The Simpsons", "Homerpalooza". With considerable video rotation on MTV, major industry awards, and "Zero" shirts selling in many malls, the Pumpkins were considered one of the most popular bands of the time. But the year was far from entirely positive for the band. In May, the Smashing Pumpkins played a gig at the Point Theatre in Dublin, Ireland. Despite the band's repeated requests for moshing to stop, a seventeen-year-old fan named Bernadette O'Brien was crushed to death. The concert ended early and the following night's performance in Belfast was cancelled out of respect for her. However, while Corgan maintained that moshing's "time [had] come and gone", the band would continue to request open-floor concerts throughout the rest of the tour.
The band suffered a personal tragedy on the night of July 11, 1996, when touring keyboardist Jonathan Melvoin and Chamberlin overdosed on heroin in a hotel room in New York City. Melvoin died, and Chamberlin was arrested for drug possession. A few days later, the band announced that Chamberlin had been fired as a result of the incident. The Pumpkins chose to finish the tour, and hired drummer Matt Walker and keyboardist Dennis Flemion. Corgan later said the decision to continue touring was the worst decision the band had ever made, damaging both their music and their reputation. Chamberlin admitted in a 1994 "Rolling Stone" cover story that in the past he'd "gotten high in every city in this country and probably half the cities in Europe." But in recent years, he had reportedly been clean. On July 17, the Pumpkins issued a statement in which they said, "For nine years we have battled with Jimmy's struggles with the insidious disease of drug and alcohol addiction. It has nearly destroyed everything we are and stand for. … We wish [him] the best we have to offer". Meanwhile, the band had given interviews since the release of "Mellon Collie" stating that it would be the last conventional Pumpkins record, and that rock was becoming stale. James Iha said at the end of 1996, "The future is in electronic music. It really seems boring just to play rock music."
After the release of "Mellon Collie", the Pumpkins contributed a number of songs to various compilations. Released in early 1997, the song "Eye", which appeared on the soundtrack to David Lynch's "Lost Highway", relied almost exclusively on electronic instruments and signaled a drastic shift from the Pumpkins' previous musical styles. At the time, Corgan stated his "idea [was] to reconfigure the focus and get away from the classic guitars-bass-drum rock format." Later that year, the group contributed "The End Is the Beginning Is the End" to the soundtrack for the film "Batman & Robin". With Matt Walker on drums, the song featured a heavy sound similar to "Bullet with Butterfly Wings" while still having strong electronic influences. The song later won the 1998 Grammy for Best Hard Rock Performance. Though Corgan announced that the song represented the sound people could expect from the band in the future, the band's next album would feature few guitar-driven songs.
Recorded following the death of Corgan's mother and his divorce, 1998's "Adore" represented a significant change of style from the Pumpkins' previous guitar-based rock, veering into electronica. The record, cut with assistance from drum machines and studio drummers including Matt Walker, was infused with a darker aesthetic than much of the band's earlier work. The group also modified its public image, shedding its alternative rock look for a more subdued appearance. Although "Adore" received favorable reviews and was nominated for Best Alternative Performance at the Grammy Awards, the album had only sold about 830,000 copies in the United States by the end of the year. The album nonetheless sold three times as many copies overseas. The band began a seventeen-date, fifteen-city charity North American tour in support of "Adore". At each stop on the tour, the band donated 100 percent of ticket sales to a local charity organization. The tour's expenses were entirely funded out of the band's own pockets. All told, the band donated over $2.8 million to charity as a result of the tour.
In 1999 the band surprised fans by reuniting with a rehabilitated Jimmy Chamberlin for a brief tour dubbed "The Arising", which showcased both new and classic material. The lineup was short-lived, however, as the band announced the departure of Wretzky in September during work on the album "Machina/The Machines of God". Former Hole bassist Melissa Auf der Maur was recruited for the "Sacred and Profane" tour in support of the album and appeared in the videos accompanying its release. Released in 2000, "Machina" was initially promoted as the Pumpkins' return to a more traditional rock sound, after the more gothic, electronic-sounding "Adore". The album debuted at number three on the "Billboard" charts, but quickly disappeared and as of 2007 had only been certified gold. Music journalist Jim DeRogatis, who described the album as "one of the strongest of their career", noted that the stalled sales for "Machina" in comparison to teen pop ascendant at the time "seems like concrete proof that a new wave of young pop fans has turned a deaf ear toward alternative rock."
On May 23, 2000, in a live radio interview on KROQ-FM (Los Angeles), Billy Corgan announced the band's decision to break up at the end of that year following additional touring and recording. The group's final album before the break-up, "Machina II/The Friends & Enemies of Modern Music", was released in September 2000 in a limited pressing on vinyl with permission and instructions for free redistribution on the Internet by fans. Only twenty-five copies were cut, each of which was hand numbered and given to friends of the band along with band members themselves. The album, released under the Constantinople Records label created by Corgan, consisted of one double LP and three ten-inch EPs. Originally, the band asked Virgin to offer "Machina II" as a free download to anyone who bought "Machina". When the record label declined, Corgan opted to release the material independently.
On December 2, 2000, Smashing Pumpkins played a farewell concert at The Metro, the same Chicago club where their career had effectively started twelve years earlier. The four-and-a-half-hour-long show featured 35 songs spanning the group's career, and attendees were given a recording of the band's first concert at The Metro, "Live at Cabaret Metro 10-5-88". The single "Untitled" was released commercially to coincide with the farewell show.
In 2001 the compilation "Rotten Apples" was released. The double-disc version of the album, released as a limited edition, included a collection of B-sides and rarities called "Judas O". The "Greatest Hits Video Collection" DVD was also released at the same time. This was a compilation of all of the Pumpkins promo videos from "Gish" to "Machina" along with unreleased material. "Vieuphoria" was released on DVD in 2002, as was the soundtrack album "Earphoria", previously released solely to radio stations in 1994.
Billy Corgan and Jimmy Chamberlin reunited in 2001 as members of Corgan's next project, the short-lived supergroup Zwan. The group's only album, "Mary Star of the Sea", was released in 2003. After cancelling a few festival appearances, Corgan announced the demise of the band in 2003. During 2001 Corgan also toured as part of New Order and provided vocals on their comeback album "Get Ready". In October 2004 Corgan released his first book, "Blinking with Fists", a collection of poetry. In June 2005, he released a solo album, "TheFutureEmbrace", which he described as "(picking) up the thread of the as-yet-unfinished work of the Smashing Pumpkins". Despite this, it was greeted with generally mixed reviews and lackluster sales. Only one single, "Walking Shade", was released in support of the album.
In addition to drumming with Zwan, Jimmy Chamberlin also formed an alternative rock/jazz fusion project band called Jimmy Chamberlin Complex. The group released an album in 2005 titled "Life Begins Again". Corgan provided guest vocals on the track "Lokicat". James Iha served as a guitarist in A Perfect Circle, appearing on their "Thirteenth Step" club tour and 2004 album, "eMOTIVe". He has also been involved with other acts such as Chino Moreno's Team Sleep and Vanessa and the O's. He continues to work with Scratchie Records, his own record label, as well. D'arcy Wretzky has, aside from one radio interview in 2009, not made any public statements or appearances nor given any interviews since leaving the band in 1999. On January 25, 2000, she was arrested after she allegedly purchased three bags of cocaine, but after successfully completing a court-ordered drug education program, the charges were dropped.
Corgan insisted during this period that the band would not reform, although when Zwan broke up he announced, "I think my heart was in Smashing Pumpkins […] I think it was naive of me to think that I could find something that would mean as much to me." Corgan said in 2005, "I never wanted to leave the Smashing Pumpkins. That was never the plan." On February 17, 2004, Corgan posted a message on his personal blog calling Wretzky a "mean-spirited drug addict" and blaming Iha for the breakup of the Smashing Pumpkins. On June 3, 2004, he added that "the depth of my hurt [from Iha] is only matched with the depth of my gratitude". Iha responded to Corgan's claims in 2005, saying, "No, I didn't break up the band. The only person who could have done that is Billy."
On June 21, 2005, the day of the release of his album "TheFutureEmbrace", Corgan took out full-page advertisements in the "Chicago Tribune" and "Chicago Sun-Times" to announce that he planned to reunite the band. "For a year now", Corgan wrote, "I have walked around with a secret, a secret I chose to keep. But now I want you to be among the first to know that I have made plans to renew and revive the Smashing Pumpkins. I want my band back, and my songs, and my dreams". Corgan and Chamberlin were verified as participants in the reunion, but there was question as to whether other former members of the band would participate.
In April 2007 Iha and Auf der Maur separately confirmed that they were not taking part in the reunion. Chamberlin would later state that Iha and Wretzky "didn't want to be a part of" the reunion. The Smashing Pumpkins performed live for the first time since 2000 on May 22, 2007, in Paris, France. There, the band unveiled new touring members: guitarist Jeff Schroeder, bassist Ginger Reyes, and keyboardist Lisa Harriton. That same month, "Tarantula" was released as the first single from the band's forthcoming album. On July 7, the band performed at the Live Earth concert in New Jersey.
The band's new album, "Zeitgeist", was released that same month on Reprise Records, entering the "Billboard" charts at number two and selling 145,000 copies in its first week. "Zeitgeist" received mixed reviews, with much of the criticism targeted at the absence of half of the original lineup. The album divided the Pumpkins' fanbase. Corgan would later admit, "I know a lot of our fans are puzzled by "Zeitgeist". I think they wanted this massive, grandiose work, but you don't just roll out of bed after seven years without a functioning band and go back to doing that".
Corgan and Chamberlin continued to record as a duo, releasing the four-song EP "American Gothic" in January 2008 and the singles "Superchrist" and "G.L.O.W." later that year. That November, the group released the DVD "If All Goes Wrong", which chronicled the group's 2007 concert residences in Asheville, North Carolina and San Francisco, California. In late 2008, the band embarked on a controversy-riddled 20th Anniversary Tour. Around this time, Corgan said the group would make no more full-length records in order to focus exclusively on singles, explaining, "The listening patterns have changed, so why are we killing ourselves to do albums, to create balance, and do the arty track to set up the single? It's done."
In March 2009 Corgan announced on the band's website that Chamberlin had left the group and would be replaced. Chamberlin subsequently stated that his departure from the band was "a positive move forward for me. I can no longer commit all of my energy into something that I don't fully possess." Chamberlin stressed that the split was amicable, commenting, "I am glad [Corgan] has chosen to continue under the name. It is his right." Chamberlin soon formed the band Skysaw, which has released an album and toured in support of Minus the Bear. In July 2009 Billy Corgan formed a new group called Spirits in the Sky, initially as a tribute band to Sky Saxon of the Seeds, who had recently died. The following month Corgan confirmed on the band's website that 19-year-old Spirits in the Sky drummer Mike Byrne had replaced Chamberlin and that the pair was working on new Pumpkins recordings.
The group announced plans to release a 44-track concept album, "Teargarden by Kaleidyscope", for free over the Internet one track at a time. The first track, "A Song for a Son", was released in December 2009 to moderate press acclaim. In March 2010 Ginger Reyes officially left the band, prompting an open call for auditions for a new bassist. In May, Nicole Fiorentino announced she had joined the band as bass player, and would be working on "Teargarden by Kaleidyscope". The new lineup went on a world tour through to the end of 2010. One of the first shows with the new lineup was a concert to benefit Matthew Leone, bassist for the rock band Madina Lake, at the Metro on July 27, 2010. In late 2010 all four members contributed to the sessions for the third volume of "Teargarden".
On April 26, 2011, Corgan announced that the Smashing Pumpkins would be releasing a new album titled "Oceania", which he labeled "an album within an album" in regards to the "Teargarden by Kaleidyscope" project, in the fall. As with the previous recording sessions, all four band members contributed to the project. Also, the entire album catalog was to be remastered and reissued with bonus tracks, starting with "Gish" and "Siamese Dream" in November 2011. The pre-"Gish" demos, "Pisces Iscariot", and "Mellon Collie and the Infinite Sadness" were released in 2012, with "The Aeroplane Flies High" released the following year. "Adore" was released in 2014, and "Machina/The Machines of God" and the as-yet commercially unreleased "Machina II/Friends and Enemies of Modern Music" were expected to be combined, remixed, and released the same year. The band did a thirteen-city US tour in October 2011 followed by a European tour in November and December.
"Oceania" was released on June 19, 2012, and received generally positive reviews. The album debuted at No. 4 on the Billboard 200 and at No. 1 on the Billboard Independent Albums chart. The album spawned two singles, "The Celestials" and "Panopticon". The band proceeded to tour in support of the album, including a US tour involving playing the album in its entirety. By September 2012, Corgan stated that the band had already begun work on their next album. Despite this, the band concentrated on touring, playing at Glastonbury Festival, Dour Festival and the Barclays Center, where they recorded the live album "Oceania: Live in NYC", which was released on September 24, 2013.
On March 25, 2014, Corgan announced he had signed a new record deal with BMG for two new albums, titled "Monuments to an Elegy" and "Day for Night", respectively. In June, it was revealed that Mike Byrne was no longer in the band and would be replaced by Tommy Lee of Mötley Crüe on the new album, and that Fiorentino would not be recording on the album either. "Monuments to an Elegy" was released on December 5, 2014, to generally positive reviews. The band toured in support of the album starting on November 26, with Rage Against the Machine's Brad Wilk filling in on drums and the Killers' Mark Stoermer filling in on bass. The proposed follow-up album, "Day for Night", was slated for a delayed release in late 2015 or early 2016.
Later in 2015 Corgan announced that the band would embark on a co-headlining tour of North America with Marilyn Manson, "The End Times Tour", across July and August 2015. Prior to the co-headlining dates, the band performed a series of acoustic shows with drum machines and tapes for percussion. When the time came for the co-headlining tour, plans for a drummer fell through and Corgan recruited Chamberlin to reunite for the shows. On February 1, 2016, it was announced that the band would continue their "In Plainsong" acoustic tour with Jimmy Chamberlin on drums and were planning to head "straight to the studio after the dates to record a brand new album inspired by the sounds explored in the new acoustic setting". On February 25, 2016, Corgan posted a video from a Los Angeles studio on the band's Facebook account, giving an update on the writing process for the new songs for the upcoming album to be released after the "In Plainsong" tour.
The tour began in Portland, Oregon, on March 22, 2016.
On his birthday, March 26, 2016, original guitarist James Iha joined Billy Corgan, Jimmy Chamberlin, and Jeff Schroeder on stage unannounced at the Ace Hotel in downtown Los Angeles. He performed a few songs, including "Mayonaise", "Soma" and "Whir", marking his first appearance with the Smashing Pumpkins in 16 years. Iha also played at the second of the two Smashing Pumpkins shows at the Ace Hotel the following day, which was Easter Sunday. Iha joined the Pumpkins for a third time at their concert of April 14 at the Civic Opera House in Chicago. In July, Corgan began hinting at the possibility of reuniting the band's original lineup of himself, Iha, Wretzky, and Chamberlin, and in August, he stated he had begun reaching out to the original lineup about the feasibility of a reunion, including speaking to Wretzky for the first time in sixteen years. Despite the comments, Corgan would spend much of 2017 working on solo material – recording and releasing the solo album "Ogilala" and beginning work on another solo album for 2018. In June 2017 Chamberlin also mentioned the possibility of a reunion tour in 2018. In January 2018 Corgan shared a photo of himself, Iha, and Chamberlin together in a recording studio. In February 2018 Corgan announced that he was working with music producer Rick Rubin on a future Smashing Pumpkins album, that there were currently 26 songs he was actively working on, and that "the guitar feels once again like the preferred weapon of choice." Soon afterwards, Corgan shared a photo of sound equipment with Iha's name on a label, as well as announcing recording was finished on the upcoming album.
On February 15, 2018, the band officially announced that founding members Iha and Chamberlin were back in the band. They embarked on the "Shiny and Oh So Bright Tour" starting in July, with a focus on performing material from their first five studio albums. Original bassist D'arcy Wretzky claimed she had been offered a contract to rejoin the band but Corgan rescinded the offer soon after. Corgan released a statement denying the claims, stating "Ms. Wretzky has repeatedly been invited out to play with the group, participate in demo sessions, or at the very least, meet face-to-face, and in each and every instance she always deferred". Jack Bates (son of Joy Division bassist Peter Hook) played bass on the tour. Bates previously toured with the Smashing Pumpkins in 2015. Multi-instrumentalist Katie Cole rejoined the band for the tour as well, singing backup vocals and playing keyboards and guitar.
In March 2018, Corgan mentioned the band planned to release two EPs in 2018, with the first tentatively planned for May. On June 8, 2018, the first single from the set of music, "Solara", was released. On August 2, 2018, the band celebrated their 30th anniversary by performing in Holmdel, New Jersey. In September 2018, they announced the album "Shiny and Oh So Bright, Vol. 1 / LP: No Past. No Future. No Sun.", released via Napalm Records on November 16, 2018. The album debuted at number 54 on the Billboard 200 chart, making it their worst performing release since their first album, "Gish", debuted at 195 in 1991.
After touring through much of 2019, Corgan noted in January 2020 that the band was currently working on 21 songs for a future album release.
The direction of the band is dominated by lead guitarist, lead vocalist, keyboardist, bassist and principal songwriter Billy Corgan. Journalist Greg Kot wrote, "The music [of the Smashing Pumpkins] would not be what it is without his ambition and vision, and his famously fractured relationships with his family, friends, and bandmembers." Melissa Auf der Maur commented upon news of the group's reunion, "Everyone knows Billy doesn't need too many people to make a Pumpkins record, other than Jimmy [Chamberlin]—who he has on board." In a 2015 interview Corgan himself referred to the current iteration of the band as "sort of an open source collective", noting that "It's whoever feels right at the time." Many of Corgan's lyrics for the Pumpkins are cathartic expressions of emotion, full of personal musings and strong indictments of himself and those close to him. Music critics were not often fans of Corgan's angst-filled lyrics. Jim DeRogatis wrote in a 1993 "Chicago Sun-Times" article that Corgan's lyrics "too often sound like sophomoric poetry", although he viewed the lyrics of later albums "Adore" and "Machina" as an improvement. The band's songs have been described as "anguished, bruised reports from Billy Corgan's nightmare-land" by journalist William Shaw.
Smashing Pumpkins, unlike many alternative rock bands at the time, disavowed the influence of punk rock on their sound. Overall, they have a diverse, densely layered, and guitar-heavy sound, containing elements of gothic rock, heavy metal, dream pop, psychedelic rock, progressive rock, shoegazing, and electronica in later recordings.
The Smashing Pumpkins' distinctive sound up until "Adore" involved layering numerous guitar tracks onto a song during the recording process, a tactic that "Mellon Collie and the Infinite Sadness" coproducer Flood called the "Pumpkin guitar overdub army." Although there were a lot of overdubbed parts on "Gish", Corgan began to really explore the possibilities of overdubbing with "Siamese Dream"; Corgan has stated that "Soma" alone contains up to 40 overdubbed guitar parts. While Corgan knew many of the songs would be difficult or impossible to replicate from their recorded versions in concert (in fact, some songs were drastically altered for live performance), he has explained the use of overdubbing by posing the question "When you are faced with making a permanent recorded representation of a song, why not endow it with the grandest possible vision?" This use of multilayered sounds was inspired by Corgan's love of 1970s popular artists and bands such as David Bowie, Cheap Trick, Queen, Boston, and the Electric Light Orchestra, as well as shoegaze, a British alternative rock style of the late 1980s and early 1990s that relied on swirling layers of guitar noise for effect. "Mellon Collie" coproducer Alan Moulder was originally hired to mix "Siamese Dream" because Corgan was a fan of his work producing shoegaze bands such as My Bloody Valentine, Ride, and Slowdive.
Like many contemporary alternative bands, the Smashing Pumpkins utilized shifts in song dynamics, going from quiet to loud and vice versa. Hüsker Dü's seminal album "Zen Arcade" demonstrated to the band how they could place gentler material against more aggressive fare, and Corgan made such shifts in dynamics central to the pursuit of his grand musical ambitions. Corgan said he liked the idea of creating his own alternative universe through sound that essentially tells the listener, "Welcome to Pumpkin Land, this is what it sounds like on Planet Pumpkin." This emphasis on atmosphere carried through to "Adore" (described as "arcane night music" in prerelease promotion) and the "Machina" albums (concept records that tell the story of a fictional rock band).
The Pumpkins drew inspiration from a variety of other genres, some unfashionable during the 1990s among music critics. Corgan in particular was open about his appreciation of heavy metal, citing Dimebag Darrell of Pantera as his favorite contemporary guitarist. When one interviewer commented to Corgan and Iha that "Smashing Pumpkins is one of the groups that relegitimized heavy metal" and that they "were among the first alternative rockers to mention people like Ozzy and Black Sabbath with anything other than contempt", Corgan went on to rave about Black Sabbath's "Master of Reality" and Judas Priest's "Unleashed in the East". The song "Zero", which reminded Iha of Judas Priest, is an example of what the band dubbed "cybermetal." Post-punk and gothic rock bands like Joy Division/New Order, Bauhaus, the Cure, and Depeche Mode were formative influences on the band, which covered such artists in concert and on record. Corgan also cited Siouxsie and the Banshees, saying it was important to point back to bands that influenced them. Psychedelic rock was also referenced often in the band's early recordings; according to Corgan, "In typical Pumpkins fashion, no one at that point really liked loud guitars or psychedelic music so, of course, that's exactly what we had to do." Corgan acknowledged that a chord he jokingly claimed as "the Pumpkin chord" (a G# octave chord at the eleventh fret of a guitar with the low E string played over it), used as the basis for "Cherub Rock", "Drown", and other songs, was in fact previously used by Jimi Hendrix. Other early influences cited by Corgan include Cream, the Stooges, and Blue Cheer.
Regarding the band's influence upon other groups, Greg Kot wrote in 2001, "Whereas Nirvana spawned countless mini-Nirvanas, the Pumpkins remain an island unto themselves." Still, some artists and bands have been influenced by the Pumpkins, such as Nelly Furtado, Marilyn Manson, Third Eye Blind, Mark Hoppus of Blink-182, Tegan and Sara, Fall Out Boy, Rivers Cuomo, Panic! at the Disco, Silversun Pickups, and My Chemical Romance. My Chemical Romance vocalist Gerard Way has said that they pattern their career upon the Pumpkins', including music videos. The members of fellow Chicago band Kill Hannah are friends with Corgan, and lead singer Mat Devine has compared his group to the Pumpkins.
The group had sold over 30 million albums worldwide as of October 2012, with sales in the United States alone reaching 19.75 million.
The Smashing Pumpkins have been praised as "responsible for some of the most striking and memorable video clips" and for having "approached videos from a completely artistic standpoint rather than mere commercials to sell albums". MTV's 2001 anniversary special "Testimony: 20 Years of Rock on MTV" credited the Pumpkins, along with Nine Inch Nails, with treating music videos as an art form during the 1990s. Corgan has said, "We generally resisted the idea of what I call the classic MTV rock video, which is like lots of people jumping around and stuff." The band worked with video directors including Kevin Kerslake ("Cherub Rock"), Samuel Bayer ("Bullet with Butterfly Wings"), and, most frequently, the team of Jonathan Dayton and Valerie Faris ("Rocket", "1979", "Tonight, Tonight", "The End Is the Beginning Is the End", and "Perfect"). Corgan, who was frequently heavily involved in the conception of the videos, said of Dayton and Faris, "I know my [initial] versions are always darker, and they're always talking me into something a little kinder and gentler." Videos like "Today", "Rocket", and "1979" dealt with images taken from middle American culture, albeit exaggerated. The group's videos so often avoid the literal interpretation of the song lyrics that the video for "Thirty-Three", with images closely related to the words of the song, was created as an intentional stylistic departure.
The band was nominated for several MTV Video Music Awards during the 1990s. In 1996, the group won eight VMAs total for the "1979" and "Tonight, Tonight" videos, including the top award, Video of the Year, for "Tonight, Tonight". The video was also nominated for a Grammy at the 1997 ceremony. Of the "Tonight, Tonight" video, Corgan remarked, "I don't think we've ever had people react [like this]... it just seemed to touch a nerve."
Shortly after the band's 2000 breakup, the "Greatest Hits Video Collection" was released, collecting the band's music videos from 1991 to 2000 and including commentary from Corgan, Iha, Chamberlin, Wretzky, and various music video directors with outtakes, live performances, and the extended "Try, Try, Try" short film.
Current members
Live members
Former members
American Music Awards
Grammy Awards
MTV Europe Music Awards
MTV Video Music Awards
Studio albums
Notes
† Part of "Teargarden by Kaleidyscope" (2009–2014), an overarching project abandoned before completion.
Thomas Robert Malthus
Thomas Robert Malthus (13/14 February 1766 – 23 December 1834) was an English cleric, scholar and influential economist in the fields of political economy and demography.
In his 1798 book "An Essay on the Principle of Population", Malthus observed that an increase in a nation's food production improved the well-being of the populace, but the improvement was temporary because it led to population growth, which in turn restored the original per capita production level. In other words, humans had a propensity to utilize abundance for population growth rather than for maintaining a high standard of living, a view that has become known as the "Malthusian trap" or the "Malthusian spectre". Populations had a tendency to grow until the lower class suffered hardship, want and greater susceptibility to famine and disease, a view that is sometimes referred to as a Malthusian catastrophe. Malthus wrote in opposition to the popular view in 18th-century Europe that saw society as improving and in principle as perfectible.
Malthus saw population growth as being inevitable whenever conditions improved, thereby precluding real progress towards a utopian society: "The power of population is indefinitely greater than the power in the earth to produce subsistence for man". As an Anglican cleric, he saw this situation as divinely imposed to teach virtuous behaviour. Malthus wrote that "the increase of population is necessarily limited by the means of subsistence"; "population does invariably increase when the means of subsistence increase"; and "the superior power of population is repressed by moral restraint, vice and misery".
Malthus criticized the Poor Laws for leading to inflation rather than improving the well-being of the poor. He supported taxes on grain imports (the Corn Laws). His views became influential and controversial across economic, political, social and scientific thought. Pioneers of evolutionary biology read him, notably Charles Darwin and Alfred Russel Wallace. He remains a much-debated writer.
The sixth child of Henrietta Catherine Graham and Daniel Malthus, Robert Malthus grew up in The Rookery, a country house in Westcott, near Dorking in Surrey. William Petersen describes Daniel Malthus as "a gentleman of good family and independent means [...] [and] a friend of David Hume and Jean-Jacques Rousseau". The young Malthus received his education at home in Bramcote, Nottinghamshire, and then at the Warrington Academy from 1782. Warrington was a dissenting academy, which closed in 1783. Malthus continued for a period to be tutored by Gilbert Wakefield, who had taught him there.
Malthus entered Jesus College, Cambridge in 1784. While there, he took prizes in English declamation, Latin and Greek, and graduated with honours, Ninth Wrangler in mathematics. His tutor was William Frend. He took the MA degree in 1791, and was elected a Fellow of Jesus College two years later. In 1789, he took orders in the Church of England, and became a curate at Oakwood Chapel (also Okewood) in the parish of Wotton, Surrey.
Malthus was a demographer before he was ever considered an economist. He first came to prominence for his 1798 publication, "An Essay on the Principle of Population". In it, he raised the question of how population growth related to the economy. He affirmed that there were many events, good and bad, that affected the economy in ways no one had ever deliberated upon before. The main point of his essay was that population multiplies geometrically and food arithmetically, therefore whenever the food supply increases, population will rapidly grow to eliminate the abundance. Eventually in the future, there would not be enough food for the whole of humanity to consume and people would starve. Until that point, the more food made available, the more the population would increase. He also stated that there was a fight for survival amongst humans and that only the strong who could attain food and other needs would survive, unlike the impoverished population he saw during his time period.
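Malthus's contrast between the two growth laws can be illustrated numerically. The following Python sketch uses hypothetical starting values, ratio, and increment (not figures from Malthus) to show how a geometric series outruns an arithmetic one:

```python
# Illustration of Malthus's central claim: population growing
# geometrically (by a constant ratio) eventually outstrips food
# supply growing arithmetically (by a constant increment).
# All starting values and rates here are hypothetical.

def malthus_series(periods, pop0=1.0, food0=1.0, ratio=2.0, increment=1.0):
    """Return (population, food) series over the given number of periods."""
    population, food = [pop0], [food0]
    for _ in range(periods):
        population.append(population[-1] * ratio)  # geometric: 1, 2, 4, 8, ...
        food.append(food[-1] + increment)          # arithmetic: 1, 2, 3, 4, ...
    return population, food

population, food = malthus_series(8)
# Food per head shrinks toward zero as the two series diverge.
for t, (p, f) in enumerate(zip(population, food)):
    print(f"period {t}: population={p:g}, food={f:g}, food per head={f / p:.3f}")
```

However the ratio and increment are chosen, any geometric series with ratio greater than 1 eventually overtakes any arithmetic series, which is the logical core of the essay's argument.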
Malthus wrote the original text in reaction to the optimism of his father and his father's associates (notably Jean-Jacques Rousseau) regarding the future improvement of society. He also constructed his case as a specific response to writings of William Godwin (1756–1836) and of the Marquis de Condorcet (1743–1794). His assertions evoked questions and criticism, and between 1798 and 1826 he published six more versions of "An Essay on the Principle of Population", updating each edition to incorporate new material, to address criticism, and to convey changes in his own perspectives on the subject. Even so, the propositions made in "An Essay" were shocking to the public and largely disregarded during the 19th century. The negativity surrounding his essay created a space filled with opinions on population growth, connected with either praise or criticism of ideas about contraception and the future of agriculture.
The Malthusian controversy to which the "Essay" gave rise in the decades following its publication tended to focus attention on the birth rate and marriage rates. The neo-Malthusian controversy, comprising related debates of many years later, has seen a similar central role assigned to the numbers of children born.
On the whole, Malthus's revolutionary ideas on population growth remain relevant to economic thought and continue to shape how economists think about the future.
In 1799, Malthus made a European tour with William Otter, a close college friend, travelling part of the way with Edward Daniel Clarke and John Marten Cripps, visiting Germany, Scandinavia and Russia. Malthus used the trip to gather population data. Otter later wrote a "Memoir" of Malthus for the second (1836) edition of his "Principles of Political Economy". During the Peace of Amiens of 1802 he travelled to France and Switzerland, in a party that included his relation and future wife Harriet.
In 1803, he became rector of Walesby, Lincolnshire.
In 1805, Malthus became Professor of History and Political Economy at the East India Company College in Hertfordshire. His students affectionately referred to him as "Pop", "Population", or "web-toe" Malthus.
Near the end of 1817, the proposed appointment of Graves Champney Haughton to the College was made a pretext by Randle Jackson and Joseph Hume to launch an attempt to close it down. Malthus wrote a pamphlet defending the College, which was reprieved by the East India Company within the same year, 1817.
In 1818, Malthus became a Fellow of the Royal Society.
During the 1820s, there took place a setpiece intellectual discussion among the exponents of political economy, often called the Malthus–Ricardo debate after its leading figures, Malthus and theorist of free trade David Ricardo, both of whom had written books with the title "Principles of Political Economy". Under examination were the nature and methods of political economy itself, while it was simultaneously under attack from others. The roots of the debate were in the previous decade. In "The Nature of Rent" (1815), Malthus had dealt with economic rent, a major concept in classical economics. Ricardo defined a theory of rent in his "Principles of Political Economy and Taxation" (1817): he regarded rent as value in excess of real production—something caused by ownership rather than by free trade. Rent therefore represented a kind of negative money that landlords could pull out of the production of the land, by means of its scarcity. Contrary to this concept, Malthus proposed rent to be a kind of economic surplus.
The debate developed over the economic concept of a general glut, and the possibility of failure of Say's Law. Malthus laid importance on economic development and the persistence of disequilibrium. The context was the post-war depression; Malthus had a supporter in William Blake, in denying that capital accumulation (saving) was always good in such circumstances, and John Stuart Mill attacked Blake on the fringes of the debate.
Ricardo corresponded with Malthus from 1817 about his "Principles". He was drawn into considering political economy in a less restricted sense, which might be adapted to legislation and its multiple objectives, by the thought of Malthus. In "Principles of Political Economy" (1820) and elsewhere, Malthus addressed the tension, amounting to conflict he saw between a narrow view of political economy and the broader moral and political plane. Leslie Stephen wrote:
If Malthus and Ricardo differed, it was a difference of men who accepted the same first principles. They both professed to interpret Adam Smith as the true prophet, and represented different shades of opinion rather than diverging sects.
It is now considered that the different purposes seen by Malthus and Ricardo for political economy affected their technical discussion, and contributed to the lack of compatible definitions. For example, Jean-Baptiste Say used a definition of production based on goods and services and so queried the restriction of Malthus to "goods" alone.
In terms of public policy, Malthus was a supporter of the protectionist Corn Laws from the end of the Napoleonic Wars. He emerged as the only economist of note to support duties on imported grain. By encouraging domestic production, Malthus argued, the Corn Laws would guarantee British self-sufficiency in food.
Malthus was a founding member in 1821 of the Political Economy Club, where John Cazenove tended to be his ally against Ricardo and Mill. He was elected in the beginning of 1824 as one of the ten royal associates of the Royal Society of Literature. He was also one of the first fellows of the Statistical Society, founded in March 1834. In 1827 he gave evidence to a committee of the House of Commons on emigration.
In 1827, he published "Definitions in Political Economy". The first chapter put forth "Rules for the Definition and Application of Terms in Political Economy". In chapter 10, the penultimate chapter, he presented 60 numbered paragraphs putting forth terms and their definitions that he proposed should be used in discussing political economy following those rules. This collection of terms and definitions is remarkable for two reasons: first, Malthus was the first economist to explicitly organize, define, and publish his terms as a coherent glossary of defined terms; and second, his definitions were for the most part well-formed definitional statements. Between these chapters, he criticized several contemporary economists—Jean-Baptiste Say, David Ricardo, James Mill, John Ramsay McCulloch, and Samuel Bailey—for sloppiness in choosing, attaching meaning to, and using their technical terms.
McCulloch was the editor of "The Scotsman" of Edinburgh and replied cuttingly in a review printed on the front page of his newspaper in March 1827. He implied that Malthus wanted to dictate terms and theories to other economists. McCulloch clearly felt his ox gored, and his review of "Definitions" is largely a bitter defence of his own "Principles of Political Economy", and his counter-attack "does little credit to his reputation", being largely "personal derogation" of Malthus. The purpose of Malthus's "Definitions" was terminological clarity, and Malthus discussed appropriate terms, their definitions, and their use by himself and his contemporaries. This motivation of Malthus's work was disregarded by McCulloch, who responded that there was nothing to be gained "by carping at definitions, and quibbling about the meaning to be attached to" words. Given that statement, it is not surprising that McCulloch's review failed to address the rules of chapter 1 and did not discuss the definitions of chapter 10; he also barely mentioned Malthus's critiques of other writers.
In spite of this and in the wake of McCulloch's scathing review, the reputation of Malthus as economist dropped away for the rest of his life. On the other hand, Malthus did have supporters, including Thomas Chalmers, some of the Oriel Noetics, Richard Jones and William Whewell from Cambridge.
Malthus died suddenly of heart disease on 23 December 1834 at his father-in-law's house. He was buried in Bath Abbey. His portrait, and descriptions by contemporaries, present him as tall and good-looking, but with a cleft lip and palate. The cleft palate affected his speech: such birth defects had occurred before amongst his relatives.
On 13 March 1804, Malthus married Harriet, daughter of John Eckersall of Claverton House, near Bath. They had a son and two daughters. His first born Henry became vicar of Effingham, Surrey in 1835 and of Donnington, Sussex in 1837; he married Sofia Otter (1807–1889), daughter of Bishop William Otter and died in August 1882, aged 76. His middle child Emily died in 1885, outliving her parents and siblings. The youngest Lucille died unmarried and childless in 1825, months before her 18th birthday.
Malthus argued in his "Essay" (1798) that population growth generally expanded in times and in regions of plenty until the size of the population relative to the primary resources caused distress:
Malthus argued that two types of checks hold population within resource limits: "positive" checks, which raise the death rate; and "preventive" ones, which lower the birth rate. The positive checks include hunger, disease and war; the preventive checks: birth control, postponement of marriage and celibacy.
The rapid increase in the global population of the past century exemplifies Malthus's predicted population patterns; it also appears to describe socio-demographic dynamics of complex pre-industrial societies. These findings are the basis for neo-Malthusian modern mathematical models of "long-term historical dynamics".
Malthus wrote that in a period of resource abundance, a population could double in 25 years. However, the margin of abundance could not be sustained as population grew, leading to checks on population growth:
In later editions of his essay, Malthus clarified his view that if society relied on human misery to limit population growth, then sources of misery ("e.g.", hunger, disease, and war) would inevitably afflict society, as would volatile economic cycles. On the other hand, "preventive checks" to population that limited birthrates, such as later marriages, could ensure a higher standard of living for all, while also increasing economic stability. Regarding possibilities for freeing man from these limits, Malthus argued against a variety of imaginable solutions, such as the notion that agricultural improvements could expand without limit.
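Malthus's figure of a population doubling every 25 years (mentioned above) pins down a specific compound growth rate. The 25-year horizon is from the essay; the rest is back-of-envelope arithmetic:

```python
# If a population doubles in 25 years, the implied constant annual
# growth rate r satisfies (1 + r)**25 = 2.

annual_rate = 2 ** (1 / 25) - 1
print(f"implied annual growth rate: {annual_rate:.4%}")  # about 2.81%

# Conversely, under that rate a population of 1 million (a purely
# illustrative figure) reaches 2 million in 25 years and 4 million in 50.
pop = 1_000_000
for years in (25, 50):
    print(years, round(pop * (1 + annual_rate) ** years))
```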
Of the relationship between population and economics, Malthus wrote that when the population of laborers grows faster than the production of food, real wages fall because the growing population causes the cost of living ("i.e.", the cost of food) to go up. Difficulties of raising a family eventually reduce the rate of population growth, until the falling population again leads to higher real wages.
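The wage-population feedback described above can be sketched as a toy difference model. The linear response and all coefficients below are illustrative assumptions, not anything Malthus specified:

```python
# A minimal toy dynamic of the feedback Malthus describes: labour
# population grows when real wages are above subsistence and shrinks
# when below, while the real wage falls as population rises relative
# to food output. All coefficients here are hypothetical.

def simulate(periods, pop=100.0, food=100.0, subsistence=1.0, k=0.05):
    history = []
    for _ in range(periods):
        real_wage = food / pop                     # more mouths -> dearer food
        pop *= 1 + k * (real_wage - subsistence)   # growth tracks wage above subsistence
        history.append((pop, real_wage))
    return history

# Starting above subsistence, population rises and the real wage falls
# back toward the subsistence level rather than staying permanently high.
for pop, wage in simulate(5, pop=80.0):
    print(f"population={pop:6.2f}  real wage={wage:.3f}")
```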
In the second and subsequent editions Malthus put more emphasis on "moral restraint" as the best means of easing the poverty of the lower classes.
In this work, his first published pamphlet, Malthus argues against the notion prevailing in his locale that the greed of intermediaries caused the high price of provisions. Instead, Malthus says that the high price stems from the Poor Laws, which "increase the parish allowances in proportion to the price of corn." Thus, given a limited supply, the Poor Laws force up the price of daily necessities. However, he concludes by saying that in time of scarcity such Poor Laws, by raising the price of corn more evenly, actually produce a "beneficial" effect.
Although government in Britain had regulated the prices of grain, the Corn Laws originated in 1815. At the end of the Napoleonic Wars that year, Parliament passed legislation banning the importation of foreign corn into Britain until domestic corn cost 80 shillings per quarter. The high price caused the cost of food to increase and caused distress among the working classes in the towns. It led to serious rioting in London and to the Peterloo Massacre in Manchester in 1819.
In this pamphlet, printed during the parliamentary discussion, Malthus tentatively supported the free-traders. He argued that given the increasing cost of growing British corn, advantages accrued from supplementing it from cheaper foreign sources.
In 1820 Malthus published "Principles of Political Economy".
(A second edition was posthumously published in 1836.) Malthus intended this work to rival Ricardo's "Principles" (1817). It, and his 1827 "Definitions in political economy", defended Sismondi's views on "general glut" rather than Say's Law, which in effect states "there can be no general glut".
Malthus developed the theory of demand-supply mismatches that he called gluts. Discounted at the time, this theory foreshadowed later work by an admirer, John Maynard Keynes.
The vast bulk of continuing commentary on Malthus, however, extends and expands on the "Malthusian controversy" of the early 19th century.
The epitaph of Malthus in Bath Abbey reads [with commas inserted for clarity]:
Sacred to the memory of the Rev THOMAS ROBERT MALTHUS, long known to the lettered world by his admirable writings on the social branches of political economy, particularly by his essay on population.
One of the best men and truest philosophers of any age or country, raised by native dignity of mind above the misrepresentation of the ignorant and the neglect of the great, he lived a serene and happy life devoted to the pursuit and communication of truth, supported by a calm but firm conviction of the usefulness of his labours, content with the approbation of the wise and good.
His writings will be a lasting monument of the extent and correctness of his understanding.
The spotless integrity of his principles, the equity and candour of his nature, his sweetness of temper, urbanity of manners and tenderness of heart, his benevolence and his piety are still dearer recollections of his family and friends.
Born February 14, 1766 – Died 29 December 1834.
Tengwar
The tengwar are an artificial script created by J. R. R. Tolkien.
Within the fictional context of Tolkien's legendarium, the tengwar were invented by the Elf Fëanor, and used first to write the Elven tongues Quenya and Telerin. Later a great number of languages of Middle-earth were written using the tengwar, including Sindarin. Tolkien used tengwar to write English: most of Tolkien's tengwar samples are actually in English.
According to "The War of the Jewels" (Appendix D to "Quendi and Eldar"), Fëanor, when he created his script, introduced a change in terminology. He called a letter, i.e. a written representation of a spoken phoneme ("tengwë"), a "tengwa". Previously, any letter or symbol had been called a "sarat" (from "*sar" "incise"). The alphabet of Rúmil of Tirion, on which Fëanor supposedly based his own work, was known as Sarati. It later became known as "Tengwar of Rúmil".
The plural of "tengwa" was "tengwar", and this is the name by which Fëanor's system became known. Since, however, in commonly used modes, an individual "tengwa" was equivalent to a consonant, the term "tengwar" in popular use became equivalent to "consonant sign", and the vowel signs were known as "ómatehtar". By loan-translation, the tengwar became known as "tîw" (singular "têw") in Sindarin, when they were introduced to Beleriand. The letters of the earlier alphabet native to Sindarin were called "cirth" (singular "certh", probably from "*kirte" "cutting", and thus semantically analogous to Quenya "sarat"). This term was loaned into exilic Quenya as "certa", plural "certar".
The sarati, a script developed by Tolkien in the late 1910s and described in "Parma Eldalamberon 13", anticipates many features of the tengwar: vowel representation by diacritics (which is found in many tengwar varieties); different tengwar shapes; and a few correspondences between sound features and letter shape features (though inconsistent).
Even closer to the tengwar is the Valmaric script, described in "Parma Eldalamberon 14", which Tolkien used from about 1922 to 1925. It features many tengwar shapes, the inherent vowel found in some tengwar varieties, and the tables in the samples V12 and V13 show an arrangement that is very similar to one of the primary tengwar in the classical Quenya "mode".
Jim Allan ("An Introduction to Elvish") compared the tengwar with the "Universal Alphabet" of Francis Lodwick of 1686, both on grounds of the correspondence between shape features and sound features, and of the actual letter shapes.
The tengwar were probably developed in the late 1920s or in the early 1930s. "The Lonely Mountain Jar Inscription", the first published Tengwar sample, dates to 1937 ("The Hobbit", most editions with colour plates). The full explanation of the tengwar was published in Appendix E of "The Lord of the Rings" in 1955.
The "Mellonath Daeron Index of Tengwar Specimina" (DTS) lists most of the known samples of tengwar by Tolkien.
There are only a few known samples predating publication of "The Lord of the Rings" (many of them published posthumously):
The most notable characteristic of the tengwar script is that the shapes of the letters correspond to the distinctive features of the sounds they represent. The Quenya consonant system has five places of articulation: labial, dental, palatal, velar, and glottal. The velars distinguish between plain and labialized (that is, articulated with rounded lips, or followed by a "w" sound). Each point of articulation, and the corresponding tengwa series, has a name in the classical Quenya mode. Dental sounds are called "Tincotéma" and are represented with the tengwar in column I. Labial sounds are called "Parmatéma", and represented by the column II tengwar; velar sounds are called "Calmatéma", represented by column III; and labialized velar sounds are called "Quessetéma", represented by the "tengwar" of column IV. Palatal sounds are called "Tyelpetéma" and have no tengwa series of their own, but are represented by column III letters with an added diacritic for following .
Similarly shaped letters reflect not only similar places of articulation, but also similar manners of articulation. In the classical Quenya mode, row 1 represents voiceless stops, row 2 voiced prenasalized stops, row 3 voiceless fricatives, row 4 voiceless prenasalized stops, row 5 nasal stops, and row 6 approximants.
Most letters are constructed by a combination of two basic shapes: a vertical stem (either long or short) and either one or two rounded bows (which may or may not be underscored, and may be on the left or right of the stem).
These principal letters are divided into four series ("témar") that correspond to the main places of articulation and into six grades ("tyeller") that correspond to the main manners of articulation. Both vary among modes.
Each series is headed by the basic signs composed of a vertical stem descending below the line, and a single bow. These basic signs represent the voiceless stop consonants for that series. For the classical Quenya mode, they are "tinco", "parma", "calma", and "quessë", and the series are named "tincotéma", "parmatéma", "calmatéma", and "quessetéma", respectively; téma means "series" in Quenya.
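The series-by-grade organization can be modelled as a simple grid. This sketch uses only the place and manner labels given above for the classical Quenya mode; the letter shapes and actual tengwar are not represented:

```python
# Four series (témar, place of articulation) crossed with six rows
# (tyeller, manner of articulation), per the classical Quenya mode
# as described in the text. Only the descriptions are modelled here.

temar = {
    "tincotéma": "dental",
    "parmatéma": "labial",
    "calmatéma": "velar",
    "quessetéma": "labialized velar",
}
tyeller = [
    "voiceless stop",
    "voiced prenasalized stop",
    "voiceless fricative",
    "voiceless prenasalized stop",
    "nasal stop",
    "approximant",
]

# Each cell of the grid is a sound class realized by one tengwa.
grid = {(series, row + 1): f"{manner} at {place} place"
        for series, place in temar.items()
        for row, manner in enumerate(tyeller)}

print(grid[("tincotéma", 1)])  # voiceless stop at dental place
print(len(grid))               # 24 principal combinations (4 series x 6 rows)
```

The regularity of this grid is the point of the design: place is encoded by one shape feature of the letter and manner by another, so the table above is recoverable from the letterforms themselves.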
In rows of the "general use", there are the following correspondences between letter shapes and manners of articulation:
In addition to these variations of the tengwar shapes, there is yet another variation, the use of stems that are extended both above and below the line. This shape may correspond to other consonant variations required. Except for some English abbreviations, it is not used in any of the better known tengwar modes, but it occurs in a Qenya mode where the tengwa Parma with extended stem is used for and the tengwa Calma with extended stem is used for . The tengwar with raised stems sometimes occur in glyph variants that look like extended stems, as seen in the inscription of the One Ring.
Here is an example from the "parmatéma" (the signs with a closed bow on the right side) in the "general use" of the tengwar:
In some languages such as Quenya, which do not contain any voiced fricatives other than "v", the raised stem + doubled bow row is used for the very common nasal+stop sequences ("nt", "mp", "nk", "nqu"). In such cases, the "w" sign in the previous paragraph is used for "v". In the mode of Beleriand, found on the door to Moria, the bottom "tyellë" is used for nasals (e.g., "vala" is used for ) and the fifth "tyellë" for doubled nasals ("" for ).
There are additional letters that do not have regular shapes. They may represent, e.g., , , and . Their use varies considerably from mode to mode. Some aficionados have added more letters not found in Tolkien's writings for use in their modes.
A "tehta" (Quenya: "marking") is a diacritic placed above or below a tengwa. Tehtar can represent vowels, consonant doubling, or nasal sounds.
As Tolkien explained in the "ROTK" appendix, the "tehtar" for vowels resemble Latin diacritics: circumflex (î) , acute (í) , dot (i) , left curl (ı̔) , and right curl (ı̓) . Long vowels, excepting , may be indicated by doubling the signs. Some languages in which is absent, or appears only sparsely compared to , such as the Black Speech, use the left curl for ; other languages swap the signs for and .
A vowel occurring alone is drawn on the vowel carrier, which resembles dotless i (ı) for a short vowel or dotless j (ȷ) for a long vowel.
Just as with any alphabetic writing system, every specific language written in tengwar requires a specific orthography, depending on the phonology of that language. These tengwar orthographies are usually called "modes". Some modes follow pronunciation, while others follow traditional orthography.
Some modes map the basic consonants to , , and (classical mode in chart at right), while others use them to represent , , and (general mode at right).
In some modes, called "ómatehtar" (or "vowel tehtar") modes, the vowels are represented with diacritics called "tehtar" (Quenya for 'signs'; corresponding singular: "tehta", 'sign'). These ómatehtar modes can be loosely considered abjads rather than true alphabets. In some ómatehtar modes, the consonant signs feature an inherent vowel. These ómatehtar modes can be considered alphasyllabaries.
"Ómatehtar" modes can vary in that the vowel stroke can be placed either on top of the consonant preceding it, as in Quenya, or on the consonant following, as in Sindarin, English, and the notorious Black Speech inscription on the One Ring. The other main difference is in the fourth "tyellë" below, where those letters with raised stems and doubled bows can be either voiced fricatives, as in Sindarin (general mode at right), or nasalized stops, as in Quenya (classical mode).
In the "full writing" modes, the consonants and the vowels are represented by Tengwar. Only one such mode is well known. It is called the "mode of Beleriand" and one can read it on the Doors of Durin.
Since the publication of the first official description of the Tengwar at the end of "The Lord of the Rings", others have created modes for other languages such as English, Spanish, German, Swedish, French, Finnish, Italian, Hungarian and Welsh. Modes have also been devised for other constructed languages, such as Esperanto and Lojban.
Tolkien used multiple modes for English, including full-writing and ómatehtar alphabetic modes, as well as phonetic full modes and phonetic ómatehtar modes known from documents published after his death.
The contemporary de facto standard in the tengwar user community maps the tengwar characters onto the ISO 8859-1 character encoding, following the example of the tengwar typefaces by Dan Smith. This approach has a major flaw: if no corresponding tengwar font is installed, the text appears as a string of nonsense Latin characters.
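The font hack can be seen directly: under this encoding a "tengwar" string is ordinary ISO 8859-1 text, which is why it renders as Latin gibberish without the font. A minimal Python check, using the opening of the tengwar sample that appears later in this article:

```python
# Dan Smith-style fonts place tengwar glyphs at ISO 8859-1 codepoints, so
# the underlying text is plain Latin-1; without the font it displays as
# nonsense Latin characters. Opening of the tengwar UDHR sample below:
sample = "j#\u00b8 9t&5# w`Vb%_ 6EO w6Y5 e7`V`V"

def fits_in_latin1(text: str) -> bool:
    """True if every character has a codepoint within ISO 8859-1 (0-255)."""
    return all(ord(ch) <= 0xFF for ch in text)

# The string round-trips losslessly through the Latin-1 encoding:
print(fits_in_latin1(sample))                                # True
print(sample.encode("latin-1").decode("latin-1") == sample)  # True
```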
Since there are not enough places among ISO 8859-1's 191 printable codepoints for all the signs used in tengwar orthography, certain signs are included in a "tengwar A" font, which also maps its characters onto ISO 8859-1, overlapping with the first font.
For each tengwar diacritic, there are four different codepoints that are used depending on the width of the character which bears it.
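The four-variant scheme can be sketched as a lookup keyed on the bearer's width class. Note that the width classes and codepoint offsets below are hypothetical, chosen purely to illustrate the mechanism, not taken from any actual tengwar font:

```python
# Sketch of the width-variant selection described above: each tehta has
# four glyph variants, chosen by the width of the tengwa that bears it.
# WIDTH_CLASS values and the consecutive-codepoint layout are assumptions
# for illustration only.
WIDTH_CLASS = {"tinco": 2, "parma": 2, "calma": 3, "lambe": 4, "short-carrier": 1}

def tehta_variant(base_codepoint: int, bearer: str) -> int:
    """Pick one of four tehta codepoints by the bearer's width class."""
    width = WIDTH_CLASS.get(bearer, 2)    # default to a medium width
    return base_codepoint + (width - 1)   # variants stored consecutively

print(hex(tehta_variant(0xC0, "lambe")))  # '0xc3' under these assumptions
```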
Other tengwar typefaces with this encoding include Johan Winge's Tengwar Annatar, Måns Björkman's Tengwar Parmaitë, Enrique Mombello's Tengwar Élfica or Michal Nowakowski's Tengwar Formal (note that most of these differ in details).
The following sample shows the first article of the Universal Declaration of Human Rights written in English, according to the traditional English orthography. It should look similar to the picture at the top of the page, but if no tengwar font is installed, it will appear as a jumble of characters because the corresponding ISO 8859-1 characters will appear instead.
j#¸ 9t&5# w`Vb%_ 6EO w6Y5 e7`V`V 2{( zèVj# 5% 2x%51T`Û 2{( 7v%1+- 4hR 7EO 2{$yYO2 y4% 7]F85^ 2{( z5^8I`B5$I( 2{( dyYj2 zE1 1yY6E2_ 5^( 5#4^(7 5% `C 8q7T1T W w74^(692^H --
Note: Some browsers may not display these characters properly.
A proposal has been made by Michael Everson to include the tengwar in the Unicode standard. The codepoints are subject to change; the range to U+0160FF in the SMP is tentatively allocated for tengwar according to the current Unicode roadmap.
Tengwar are currently included in the unofficial ConScript Unicode Registry (CSUR), which assigns codepoints in the Private Use Area. Tengwar are mapped to the range U+E000 to U+E07F; see External links. The following Unicode sample (which repeats the one above) is meaningful when viewed under a typeface supporting tengwar glyphs in the area defined in the ConScript tengwar proposal. Some typefaces that support this proposal are Everson Mono, Tengwar Telcontar, Constructium, Tengwar Formal Unicode, and FreeMonoTengwar (James Kass's Code2000 and Code2001 use an older, incompatible version of the proposal).
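Because the CSUR assigns tengwar a fixed Private Use Area block (U+E000 to U+E07F, as stated above), testing whether a character belongs to that block is a simple range check:

```python
# The ConScript Unicode Registry maps tengwar to the Private Use Area
# range U+E000..U+E07F (per the article text above).
TENGWAR_CSUR_START, TENGWAR_CSUR_END = 0xE000, 0xE07F

def in_csur_tengwar_block(ch: str) -> bool:
    """True if the character lies in the CSUR tengwar PUA range."""
    return TENGWAR_CSUR_START <= ord(ch) <= TENGWAR_CSUR_END

print(in_csur_tengwar_block("\uE000"))  # True: first CSUR tengwar codepoint
print(in_csur_tengwar_block("A"))       # False: plain ASCII letter
```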
Tengwar has been used in Tolkien fandom since the publication of "The Lord of the Rings" in the 1950s.
Tengwar script appears in a bound volume in the Within Temptation music video for "Stand My Ground" (2004), though it appears to be a random selection of letters, with a tehta vowel appearing about every five words or so. Many tengwar are also repeated for no apparent reason. Another instance of this stylistic use of tengwar is the computer game "" (1997); again the tengwar are used meaninglessly. Tengwar is also used in "Alone in the Dark", a comic book, as a typeface describing an arcane language.
There has been a fashion of tengwar tattoos, especially in the wake of Peter Jackson's "The Lord of the Rings" film trilogy.
Celebrities with such tattoos include Spanish footballer Fernando Torres and Argentine footballer Sergio Agüero.
With the exception of John Rhys-Davies, the actors playing the "Fellowship of the Ring" in Peter Jackson's film trilogy have tattoos of the English word "nine" written in Quenya-mode tengwar.
Tori Amos
Tori Amos (born Myra Ellen Amos, August 22, 1963) is an American singer-songwriter and pianist. She is a classically trained musician with a mezzo-soprano vocal range. Having already begun composing instrumental pieces on piano, Amos won a full scholarship to the Peabody Institute at Johns Hopkins University at the age of five, the youngest person ever to have been admitted. She was expelled at the age of 11 for what "Rolling Stone" described as "musical insubordination". Amos was the lead singer of the short-lived 1980s pop group Y Kant Tori Read before achieving her breakthrough as a solo artist in the early 1990s. Her songs focus on a broad range of topics, including sexuality, feminism, politics and religion.
Her charting singles include "Crucify", "Silent All These Years", "God", "Cornflake Girl", "Caught a Lite Sneeze", "Professional Widow", "Spark", "1000 Oceans", "Flavor" and "A Sorta Fairytale", her most commercially successful single in the U.S. to date. Amos has received five MTV VMA nominations, eight Grammy Award nominations, and won an Echo Klassik award for her "Night of Hunters" classical crossover album. She is listed on VH1's 1999 "100 Greatest Women of Rock and Roll" at #71.
Amos is the third child of Mary Ellen (Copeland) and Edison McKinley Amos. She was born at the Old Catawba Hospital in Newton, North Carolina, during a trip from their Georgetown home in Washington, D.C. Amos has claimed that her maternal grandparents each had an Eastern Cherokee grandparent of their own. Of particular importance to her as a child was her maternal grandfather, Calvin Clinton Copeland, who was a great source of inspiration and guidance, offering a more pantheistic spiritual alternative to her father and paternal grandmother's traditional Christianity.
When she was two years old, her family moved to Baltimore, Maryland, where her father had transplanted his Methodist ministry from its original base in Washington, D.C. Her older brother and sister took piano lessons, but Tori did not need them. From the time she could reach the piano, she taught herself to play: when she was two, she could reproduce pieces of music she had only heard once, and, by the age of three, she was composing her own songs. She has described seeing music as structures of light since early childhood, an experience consistent with chromesthesia.
At five, she became the youngest student ever admitted to the preparatory division of the Peabody Conservatory of Music. She studied classical piano at Peabody from 1968 to 1974. In 1974, when she was eleven, her scholarship was discontinued, and she was asked to leave. Amos has asserted that she lost the scholarship because of her interest in rock and popular music, coupled with her dislike for reading from sheet music.
In 1972, the Amos family moved to Silver Spring, Maryland, where her father became pastor of the Good Shepherd United Methodist church. At thirteen, Amos began playing at gay bars and piano bars, chaperoned by her father.
Amos won a county teen talent contest in 1977, singing a song called "More Than Just a Friend". As a senior at Richard Montgomery High School, she co-wrote "Baltimore" with her brother, Mike Amos, for a competition involving the Baltimore Orioles. The song did not win the contest but became her first single, released as a 7-inch single pressed locally for family and friends in 1980 with another Amos-penned composition as a B-side, "Walking With You". Before this, she had performed under her middle name, Ellen, but permanently adopted Tori after a friend's boyfriend told her she looked like a Torrey pine, a tree native to the West Coast.
By the time she was 17, Amos had a stock of homemade demo tapes that her father regularly sent out to record companies and producers. Producer Narada Michael Walden responded favorably: he and Amos cut some tracks together, but none were released. Eventually, Atlantic Records responded to one of the tapes, and, when A&R man Jason Flom flew to Baltimore to audition her in person, the label was convinced and signed her.
In 1984, Amos moved to Los Angeles to pursue her music career after several years performing on the piano bar circuit in the D.C. area.
In 1986, Amos formed a musical group called Y Kant Tori Read, named for her difficulty sight reading. In addition to Amos, the group was composed of Steve Caton (who would later play guitars on all of her albums until 1999), drummer Matt Sorum, bass player Brad Cobb and, for a short time, keyboardist Jim Tauber. The band went through several iterations of songwriting and recording; Amos has said interference from record executives caused the band to lose its musical edge and direction during this time. Finally, in July 1988, the band's self-titled debut album, "Y Kant Tori Read", was released. Although its producer, Joe Chiccarelli, stated that Amos was very happy with the album at the time, Amos has since criticized it, once remarking: "The only good thing about that album is my ankle high boots."
Following the album's commercial failure and the group's subsequent disbanding, Amos began working with other artists (including Stan Ridgway, Sandra Bernhard, and Al Stewart) as a backup vocalist. She also recorded a song called "Distant Storm" for the film "China O'Brien." In the credits, the song is attributed to a band called Tess Makes Good.
Despite the disappointing reaction to "Y Kant Tori Read", Amos still had to comply with her six-record contract with Atlantic Records, which, in 1989, wanted a new record by March 1990. The initial recordings were declined by the label, which Amos felt was because the album had not been properly presented. The album was reworked and expanded under the guidance of Doug Morris and the musical talents of Steve Caton, Eric Rosse, Will MacGregor, Carlo Nuccio, and Dan Nebenzal, resulting in "Little Earthquakes", an album recounting her religious upbringing, sexual awakening, struggle to establish her identity, and sexual assault. This album became her commercial and artistic breakthrough, entering the British charts in January 1992 at Number 15. "Little Earthquakes" was released in the United States in February 1992 and slowly but steadily began to attract listeners, gaining more attention with the video for the single "Silent All These Years".
Amos traveled to New Mexico with personal and professional partner Eric Rosse in 1993 to write and largely record her second solo record, "Under the Pink". The album was received with mostly favorable reviews and sold enough copies to chart at No. 12 on the "Billboard 200", a significantly higher position than the preceding album's position at No. 54 on the same chart. However, the album found its biggest success in the UK, debuting at number one upon release in February 1994.
Her third solo album, "Boys for Pele", was released in January 1996. The album was recorded in an Irish church, in Delgany, County Wicklow, with Amos taking advantage of the church's acoustics. For this album, Amos used the harpsichord, harmonium, and clavichord as well as the piano. The album garnered mixed reviews upon its release, with some critics praising its intensity and uniqueness while others bemoaned its comparative impenetrability. Despite the album's erratic lyrical content and instrumentation, the latter of which kept it away from mainstream audiences, "Boys for Pele" is Amos's most successful simultaneous transatlantic release, reaching No. 2 on the UK Top 40 and No. 2 on the Billboard 200 upon its release.
Fueled by the desire to have her own recording studio to distance herself from record company executives, Amos had the barn of her home in Cornwall converted into the state-of-the-art recording studio of Martian Engineering Studios.
"From the Choirgirl Hotel" and "To Venus and Back", released in May 1998 and September 1999, respectively, differ greatly from previous albums. Amos's trademark acoustic, piano-based sound is largely replaced with arrangements that include elements of electronica and dance music with vocal washes. The underlying themes of both albums deal with womanhood and Amos's own miscarriages and marriage. Reviews for "From the Choirgirl Hotel" were mostly favorable and praised Amos's continued artistic originality. Debut sales for "From the Choirgirl Hotel" are Amos's best to date, selling 153,000 copies in its first week. "To Venus and Back", a two-disc release of original studio material and live material recorded from the previous world tour, received mostly positive reviews and included the first major-label single available for sale as a digital download.
Shortly after giving birth to her daughter, Amos decided to record a cover album, taking songs written by men about women and reversing the gender roles to reflect a woman's perspective. That became "Strange Little Girls", released in September 2001. The album is Amos's first concept album, with artwork featuring Amos photographed in character of the women portrayed in each song. Amos would later reveal that a stimulus for the album was to end her contract with Atlantic without giving them original songs; Amos felt that since 1998, the label had not been properly promoting her and had trapped her in a contract by refusing to sell her to another label.
With her Atlantic contract fulfilled after a 15-year stint, Amos signed to Epic in late 2001. In October 2002, Amos released "Scarlet's Walk", another concept album. Described as a "sonic novel", the album explores Amos's alter ego, Scarlet, intertwined with her cross-country concert tour following 9/11. Through the songs, Amos explores such topics as the history of America, American people, Native American history, pornography, masochism, homophobia and misogyny. The album had a strong debut at No. 7 on the Billboard 200. "Scarlet's Walk" is Amos's last album to date to reach certified gold status from the RIAA.
Not long after Amos was ensconced with her new label, she received unsettling news when Polly Anthony resigned as president of Epic Records in 2003. Anthony had been one of the primary reasons Amos signed with the label and as a result of her resignation, Amos formed the Bridge Entertainment Group. Further trouble for Amos occurred the following year when her label, Epic/Sony Music Entertainment, merged with BMG Entertainment as a result of the industry's decline.
Amos released two more albums with the label, "The Beekeeper" (2005) and "American Doll Posse" (2007). Both albums received generally favorable reviews. "The Beekeeper" was conceptually influenced by the ancient art of beekeeping, which she considered a source of female inspiration and empowerment. Through extensive study, Amos also wove in the stories of the Gnostic gospels and the removal of women from a position of power within the Christian church to create an album based largely on religion and politics. The album debuted at No. 5 on the Billboard 200, placing her in an elite group of women who have secured five or more US Top 10 album debuts. While the newly merged label was present throughout the production process of "The Beekeeper", Amos and her crew nearly completed her next project, "American Doll Posse", before inviting the label to listen to it. "American Doll Posse", another concept album, is fashioned around a group of girls (the "posse") who are used as a theme of alter-egos of Amos's. Musically and stylistically, the album saw Amos return to a more confrontational nature. Like its predecessor, "American Doll Posse" debuted at No. 5 on the Billboard 200.
During her tenure with Epic Records, Amos also released a retrospective collection titled "Tales of a Librarian" (2003) through her former label, Atlantic Records; a two-disc DVD set "Fade to Red" (2006) containing most of Amos's solo music videos, released through the Warner Bros. reissue imprint Rhino; a five disc box set titled "" (2006), celebrating Amos's 15-year solo career through remastered album tracks, remixes, alternate mixes, demos, and a string of unreleased songs from album recording sessions, also released through Rhino; and numerous official bootlegs from two world tours, "The Original Bootlegs" (2005) and "Legs & Boots" (2007) through Epic Records.
In May 2008, Amos announced that, due to creative and financial disagreements with Epic Records, she had negotiated an end to her contract with the record label, and would be operating independently of major record labels on future work. In September of the same year, Amos released a live album and DVD, "Live at Montreux 1991/1992", through Eagle Rock Entertainment, of two performances she gave at the Montreux Jazz Festival very early on in her career while promoting her debut solo album, "Little Earthquakes". By December, after a chance encounter with chairman and CEO of Universal Music Group, Doug Morris, Amos signed a "joint venture" deal with Universal Republic Records.
"Abnormally Attracted to Sin", Amos's tenth solo studio album and her first album released through Universal Republic, was released in May 2009 to mostly positive reviews. The album debuted in the top 10 of the Billboard 200, making it Amos's seventh album to do so. "Abnormally Attracted to Sin", admitted Amos, is a "personal album", not a conceptual one, with the album exploring themes of power, boundaries, and the subjective view of sin. Continuing her distribution deal with Universal Republic, Amos released "Midwinter Graces", her first seasonal album, in November of the same year. The album features reworked versions of traditional carols, as well as original songs written by Amos.
During her contract with the label, Amos recorded vocals for two songs for David Byrne's collaboration album with Fatboy Slim, titled "Here Lies Love", which was released in April 2010. In July of the same year, the DVD "Tori Amos- Live from the Artists Den" was released exclusively through Barnes & Noble.
After a brief tour from June to September 2010, Amos released a live album "From Russia With Love" in December the same year, recorded in Moscow on September 3, 2010. The limited edition set included a signature edition Lomography Diana F+ camera, along with two lenses, a roll of film and one of five photographs taken of Amos during her time in Moscow. The set was released exclusively through her website and only 2000 copies were produced.
In September 2011, Amos released her first classical-style music album, "Night of Hunters", featuring variations on a theme to pay tribute to composers such as Bach, Chopin, Debussy, Granados, Satie and Schubert, on the Deutsche Grammophon label, a division of Universal Music Group. Amos recorded the album with several musicians, including the Apollon Musagète string quartet.
To mark the 20th anniversary of her debut album, "Little Earthquakes" (1992), Amos released an album of songs from her back catalogue re-worked and re-recorded with the Metropole Orchestra. The album, titled "Gold Dust", was released in October 2012 through Deutsche Grammophon.
On May 1, 2012, Amos announced the formation of her own record label, Transmission Galactic, which she intends to use to develop new artists.
In 2013, Amos collaborated with the Bullitts on the track "Wait Until Tomorrow" from their debut album, "They Die by Dawn & Other Short Stories". She also stated in an interview that a new album and tour would materialize in 2014 and that it would be a "return to contemporary music".
September 2013 saw the launch of Amos's musical project adaptation of George MacDonald's "The Light Princess", along with book writer Samuel Adamson and Marianne Elliott. It premiered at London's Royal National Theatre and ended in February 2014. "The Light Princess" and its lead actress, Rosalie Craig, were nominated for Best Musical and Best Musical Performance respectively at the Evening Standard Award. Craig won the Best Musical Performance category.
Amos's 14th studio album, "Unrepentant Geraldines", was released on May 13, 2014, via Mercury Classics/Universal Music Classics in the US. Its first single, "Trouble's Lament", was released on March 28. The album was supported by the Unrepentant Geraldines Tour which began May 5, 2014, in Cork and continued across Europe, Africa, North America, and Australia, ending in Brisbane on November 21, 2014. In Sydney, Amos performed two orchestral concerts, reminiscent of the Gold Dust Orchestral Tour, with the Sydney Symphony Orchestra at the Sydney Opera House.
According to a press release, "Unrepentant Geraldines" was a "return to her core identity as a creator of contemporary songs of exquisite beauty following a series of more classically-inspired and innovative musical projects of the last four years. [It is] both one further step in the artistic evolution of one of the most successful and influential artists of her generation, and a return to the inspiring and personal music that Amos is known for all around the world."
The 2-CD set "The Light Princess (Original Cast Recording)" was released on October 9, 2015, via Universal/Mercury Classics. Apart from the original cast performances, the recording also includes two songs from the musical ("Highness in the Sky" and "Darkest Hour") performed by Amos.
On November 18, 2016, Amos released a deluxe version of the album "Boys for Pele" to commemorate the 20th anniversary of the original release. This follows the deluxe re-releases of her first two albums in 2015.
On September 8, 2017, Amos released "Native Invader", accompanied by a world tour. During the summer of 2017, Amos launched three songs from the album: "Cloud Riders", "Up the Creek" and "Reindeer King", the latter featuring string arrangements by John Philip Shenale. Produced by Amos, the album explores topics like American politics and environmental issues, mixed with mythological elements and first-person narrations.
The initial inspiration for the album came from a trip that Amos took to the Great Smoky Mountains (Tennessee-North Carolina), home of her alleged Native American ancestors; however, two events deeply influenced the final record: in November 2016, Donald Trump was elected President of the United States, and two months later, in January 2017, Amos's mother, Mary Ellen, suffered a stroke that left her unable to speak. Shocked by both events, Amos spent the first half of 2017 writing and recording the songs that would eventually form "Native Invader". The album was released in two formats, standard and deluxe: the standard version includes 13 songs, while the deluxe edition adds two extra songs to the tracklist, "Upside Down 2" and "Russia". "Native Invader" was well received by most music critics upon release, obtaining a score of 76 out of 100 on the review aggregator website Metacritic, based on 17 reviews, indicating "generally favorable reviews".
Released in conjunction with "The Beekeeper", Amos co-authored an autobiography with rock music journalist Ann Powers titled "Piece by Piece" (2005). The book's subject is Amos's interest in mythology and religion, exploring her songwriting process, rise to fame, and her relationship with Atlantic Records.
Image Comics released "Comic Book Tattoo" (2008), a collection of comic stories, each based on or inspired by songs recorded by Amos. Editor Rantz Hoseley worked with Amos to gather 80 different artists for the book, including Pia Guerra, David Mack, and Leah Moore.
Additionally, Amos and her music have been the subject of numerous official and unofficial books, as well as academic critique, including "Tori Amos: Lyrics" (2001) and an earlier biography, "Tori Amos: All These Years" (1996).
"Tori Amos: In the Studio" (2011) by Jake Brown features an in-depth look at Amos's career, discography and recording process.
Amos released her second memoir, "Resistance: A Songwriter’s Story of Hope, Change, and Courage", on May 5, 2020.
Amos married English sound engineer Mark Hawley on February 22, 1998. Their daughter was born in 2000. The family divides their time between Sewall's Point in Florida, US; Kinsale, County Cork, in Ireland; and Bude, Cornwall in the UK. Amos' mother, Mary Ellen, died on May 11, 2019.
Early in her professional career, Amos befriended author Neil Gaiman, who became a fan after she referred to him in the song "Tear in Your Hand" and also in print interviews. Although created before the two met, the character Delirium from Gaiman's "The Sandman" series is inspired by Amos; Gaiman has stated that they "steal shamelessly from each other". She wrote the foreword to his collection ""; he in turn wrote the introduction to "Comic Book Tattoo". Gaiman is godfather to her daughter and a poem written for her birth, "Blueberry Girl", was published as a children's book of the same name in 2009. In 2019, Amos performed the British standard "A Nightingale Sang in Berkeley Square" over the closing credits of Gaiman's TV series "Good Omens", based on the novel of the same name written by Gaiman and Terry Pratchett.
In June 1994, the Rape, Abuse & Incest National Network (RAINN), a toll-free help line in the US connecting callers with their local rape crisis center, was founded. Amos, who was raped at knifepoint when she was 21, answered the ceremonial first call to launch the hotline. She was the first national spokesperson for the organization and has continued to be closely associated with RAINN. On August 18, 2013, a concert in honor of her 50th birthday was held, an event which raised money for RAINN.
Amos, who has been performing in bars and clubs since as early as 1976, and under her professional name since 1991, has performed more than 1,000 shows since her first world tour in 1992. In 2003, Amos was voted fifth best touring act by the readers of "Rolling Stone" magazine. Her concerts are notable for set lists that change from night to night.
Little Earthquakes Tour
Under the Pink Tour
Dew Drop Inn Tour
Plugged '98 Tour
5 ½ Weeks Tour / To Dallas and Back
Strange Little Tour
On Scarlet's Walk / Lottapianos Tour
Original Sinsuality Tour / Summer of Sin
American Doll Posse World Tour
Sinful Attraction Tour
Night of Hunters tour
Gold Dust Orchestral Tour
Unrepentant Geraldines Tour
Native Invader Tour
Amos was inducted into the North Carolina Music Hall of Fame on October 11, 2012.
Transcription factor
In molecular biology, a transcription factor (TF) (or sequence-specific DNA-binding factor) is a protein that controls the rate of transcription of genetic information from DNA to messenger RNA, by binding to a specific DNA sequence. The function of TFs is to regulate—turn on and off—genes in order to make sure that they are expressed in the right cell at the right time and in the right amount throughout the life of the cell and the organism. Groups of TFs function in a coordinated fashion to direct cell division, cell growth, and cell death throughout life; cell migration and organization (body plan) during embryonic development; and intermittently in response to signals from outside the cell, such as a hormone. There are up to 1600 TFs in the human genome.
TFs work alone or with other proteins in a complex, by promoting (as an activator), or blocking (as a repressor) the recruitment of RNA polymerase (the enzyme that performs the transcription of genetic information from DNA to RNA) to specific genes.
A defining feature of TFs is that they contain at least one DNA-binding domain (DBD), which attaches to a specific sequence of DNA adjacent to the genes that they regulate. TFs are grouped into classes based on their DBDs. Other proteins such as coactivators, chromatin remodelers, histone acetyltransferases, histone deacetylases, kinases, and methylases are also essential to gene regulation, but lack DNA-binding domains, and therefore are not TFs.
TFs are of interest in medicine because TF mutations can cause specific diseases, and medications can be potentially targeted toward them.
Transcription factors are essential for the regulation of gene expression and are, as a consequence, found in all living organisms. The number of transcription factors found within an organism increases with genome size, and larger genomes tend to have more transcription factors per gene.
There are approximately 2800 proteins in the human genome that contain DNA-binding domains, and 1600 of these are presumed to function as transcription factors, though other studies indicate it to be a smaller number. Therefore, approximately 10% of genes in the genome code for transcription factors, which makes this family the single largest family of human proteins. Furthermore, genes are often flanked by several binding sites for distinct transcription factors, and efficient expression of each of these genes requires the cooperative action of several different transcription factors (see, for example, hepatocyte nuclear factors). Hence, the combinatorial use of a subset of the approximately 2000 human transcription factors easily accounts for the unique regulation of each gene in the human genome during development.
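The combinatorial argument above is easy to verify with back-of-the-envelope arithmetic: even small teams drawn from ~2000 transcription factors vastly outnumber the roughly 20,000 protein-coding genes they would need to regulate uniquely. The team size of 5 below is an arbitrary illustration, not a biological claim:

```python
# Counting distinct combinations of transcription factors (TFs): the number
# of possible 5-TF teams dwarfs the number of human protein-coding genes.
from math import comb

n_tfs = 2000       # approximate human transcription factors (per the text)
n_genes = 20_000   # rough count of human protein-coding genes (assumption)

teams_of_five = comb(n_tfs, 5)   # distinct 5-factor combinations
print(teams_of_five > n_genes)   # True: ~2.6e14 combinations vs 2e4 genes
```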
Transcription factors bind to either enhancer or promoter regions of DNA adjacent to the genes that they regulate. Depending on the transcription factor, the transcription of the adjacent gene is either up- or down-regulated. Transcription factors use a variety of mechanisms for the regulation of gene expression. These mechanisms include:
Transcription factors are one of the groups of proteins that read and interpret the genetic "blueprint" in the DNA. They bind to the DNA and help initiate a program of increased or decreased gene transcription. As such, they are vital for many important cellular processes. Below are some of the important functions and biological roles transcription factors are involved in:
In eukaryotes, an important class of transcription factors called general transcription factors (GTFs) are necessary for transcription to occur. Many of these GTFs do not actually bind DNA, but rather are part of the large transcription preinitiation complex that interacts with RNA polymerase directly. The most common GTFs are TFIIA, TFIIB, TFIID (see also TATA binding protein), TFIIE, TFIIF, and TFIIH. The preinitiation complex binds to promoter regions of DNA upstream to the gene that they regulate.
Other transcription factors differentially regulate the expression of various genes by binding to enhancer regions of DNA adjacent to regulated genes. These transcription factors are critical to making sure that genes are expressed in the right cell at the right time and in the right amount, depending on the changing requirements of the organism.
Many transcription factors in multicellular organisms are involved in development. Responding to stimuli, these transcription factors turn on/off the transcription of the appropriate genes, which, in turn, allows for changes in cell morphology or activities needed for cell fate determination and cellular differentiation. The Hox transcription factor family, for example, is important for proper body pattern formation in organisms as diverse as fruit flies and humans. Another example is the transcription factor encoded by the Sex-determining Region Y (SRY) gene, which plays a major role in determining sex in humans.
Cells can communicate with each other by releasing molecules that produce signaling cascades within another receptive cell. If the signal requires upregulation or downregulation of genes in the recipient cell, often transcription factors will be downstream in the signaling cascade. Estrogen signaling is an example of a fairly short signaling cascade that involves the estrogen receptor transcription factor: Estrogen is secreted by tissues such as the ovaries and placenta, crosses the cell membrane of the recipient cell, and is bound by the estrogen receptor in the cell's cytoplasm. The estrogen receptor then goes to the cell's nucleus and binds to its DNA-binding sites, changing the transcriptional regulation of the associated genes.
Not only do transcription factors act downstream of signaling cascades related to biological stimuli but they can also be downstream of signaling cascades involved in environmental stimuli. Examples include heat shock factor (HSF), which upregulates genes necessary for survival at higher temperatures, hypoxia inducible factor (HIF), which upregulates genes necessary for cell survival in low-oxygen environments, and sterol regulatory element binding protein (SREBP), which helps maintain proper lipid levels in the cell.
Many transcription factors, especially some that are proto-oncogenes or tumor suppressors, help regulate the cell cycle and as such determine how large a cell will get and when it can divide into two daughter cells. One example is the Myc oncogene, which has important roles in cell growth and apoptosis.
Transcription factors can also be used to alter gene expression in a host cell to promote pathogenesis. A well-studied example of this is the transcription activator-like effectors (TAL effectors) secreted by "Xanthomonas" bacteria. When injected into plants, these proteins can enter the nucleus of the plant cell, bind plant promoter sequences, and activate transcription of plant genes that aid in bacterial infection. TAL effectors contain a central repeat region in which there is a simple relationship between the identity of two critical residues in sequential repeats and sequential DNA bases in the TAL effector's target site. This property likely makes it easier for these proteins to evolve in order to better compete with the defense mechanisms of the host cell.
It is common in biology for important processes to have multiple layers of regulation and control. This is also true with transcription factors: Not only do transcription factors control the rates of transcription to regulate the amounts of gene products (RNA and protein) available to the cell but transcription factors themselves are regulated (often by other transcription factors). Below is a brief synopsis of some of the ways that the activity of transcription factors can be regulated:
Transcription factors (like all proteins) are transcribed from a gene on a chromosome into RNA, and then the RNA is translated into protein. Any of these steps can be regulated to affect the production (and thus activity) of a transcription factor. An implication of this is that transcription factors can regulate themselves. For example, in a negative feedback loop, the transcription factor acts as its own repressor: If the transcription factor protein binds the DNA of its own gene, it down-regulates the production of more of itself. This is one mechanism to maintain low levels of a transcription factor in a cell.
In eukaryotes, transcription factors (like most proteins) are transcribed in the nucleus but are then translated in the cell's cytoplasm. Many proteins that are active in the nucleus contain nuclear localization signals that direct them to the nucleus. But, for many transcription factors, this is a key point in their regulation. Important classes of transcription factors such as some nuclear receptors must first bind a ligand while in the cytoplasm before they can relocate to the nucleus.
Transcription factors may be activated (or deactivated) through their signal-sensing domain by a number of mechanisms including:
In eukaryotes, DNA is organized with the help of histones into compact particles called nucleosomes, where sequences of about 147 DNA base pairs make ~1.65 turns around histone protein octamers. DNA within nucleosomes is inaccessible to many transcription factors. Some transcription factors, so-called pioneer factors, are still able to bind their DNA binding sites on the nucleosomal DNA. For most other transcription factors, the nucleosome must be actively unwound by molecular motors such as chromatin remodelers. Alternatively, the nucleosome can be partially unwrapped by thermal fluctuations, allowing temporary access to the transcription factor binding site. In many cases, a transcription factor needs to compete for binding to its DNA binding site with other transcription factors and histones or non-histone chromatin proteins. Pairs of transcription factors and other proteins can play antagonistic roles (activator versus repressor) in the regulation of the same gene.
Most transcription factors do not work alone. Many large TF families form complex homotypic or heterotypic interactions through dimerization. For gene transcription to occur, a number of transcription factors must bind to DNA regulatory sequences. This collection of transcription factors, in turn, recruits intermediary proteins such as cofactors that allow efficient recruitment of the preinitiation complex and RNA polymerase. Thus, for a single transcription factor to initiate transcription, all of these other proteins must also be present, and the transcription factor must be in a state where it can bind to them if necessary.
Cofactors are proteins that modulate the effects of transcription factors. Cofactors are interchangeable between specific gene promoters; the protein complex that occupies the promoter DNA and the amino acid sequence of the cofactor determine its spatial conformation. For example, certain steroid receptors can exchange cofactors with NF-κB, which is a switch between inflammation and cellular differentiation; thereby steroids can affect the inflammatory response and function of certain tissues.
Transcription factors and methylated cytosines in DNA both have major roles in regulating gene expression. (Methylation of cytosine in DNA primarily occurs where cytosine is followed by guanine in the 5’ to 3’ DNA sequence, a CpG site.) Methylation of CpG sites in a promoter region of a gene usually represses gene transcription, while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.
The DNA binding sites of 519 transcription factors were evaluated. Of these, 169 transcription factors (33%) did not have CpG dinucleotides in their binding sites, and 33 transcription factors (6%) could bind to a CpG-containing motif but did not display a preference for a binding site with either a methylated or unmethylated CpG. There were 117 transcription factors (23%) that were inhibited from binding to their binding sequence if it contained a methylated CpG site, 175 transcription factors (34%) that had enhanced binding if their binding sequence had a methylated CpG site, and 25 transcription factors (5%) were either inhibited or had enhanced binding depending on where in the binding sequence the methylated CpG was located.
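As a quick arithmetic check (illustrative Python, not from the source), the five response categories above partition the full set of 519 factors exactly, and the quoted percentages follow from the raw counts:

```python
# Counts taken from the survey of 519 transcription factor binding sites above.
counts = {
    "no CpG in binding site": 169,
    "binds CpG motif, methylation-indifferent": 33,
    "inhibited by methylated CpG": 117,
    "enhanced by methylated CpG": 175,
    "depends on position of methylated CpG": 25,
}

total = sum(counts.values())
print(total)  # 519 -- the five categories cover every factor exactly once

for name, n in counts.items():
    # Rounded to whole percent, matching the figures quoted in the text.
    print(f"{name}: {n} ({round(n / total * 100)}%)")
```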
TET enzymes do not specifically bind to methylcytosine except when recruited (see DNA demethylation). Multiple transcription factors important in cell differentiation and lineage specification, including NANOG, SALL4A, WT1, EBF1, PU.1, and E2A, have been shown to recruit TET enzymes to specific genomic loci (primarily enhancers) to act on methylcytosine (mC) and convert it to hydroxymethylcytosine (hmC), in most cases marking it for subsequent complete demethylation to cytosine. TET-mediated conversion of mC to hmC appears to disrupt the binding of 5mC-binding proteins, including MECP2 and MBD (methyl-CpG-binding domain) proteins, facilitating nucleosome remodeling and the binding of transcription factors, thereby activating transcription of those genes. EGR1 is an important transcription factor in memory formation. It has an essential role in brain neuron epigenetic reprogramming. The transcription factor EGR1 recruits the TET1 protein, which initiates a pathway of DNA demethylation. EGR1, together with TET1, is employed in programming the distribution of methylation sites on brain DNA during brain development and in learning (see Epigenetics in learning and memory).
Transcription factors are modular in structure and contain the following domains:
The portion (domain) of the transcription factor that binds DNA is called its DNA-binding domain. Below is a partial list of some of the major families of DNA-binding domains/transcription factors:
The DNA sequence that a transcription factor binds to is called a transcription factor-binding site or response element.
Transcription factors interact with their binding sites using a combination of electrostatic (of which hydrogen bonds are a special case) and van der Waals forces. Due to the nature of these chemical interactions, most transcription factors bind DNA in a sequence-specific manner. However, not all bases in the transcription factor-binding site may actually interact with the transcription factor. In addition, some of these interactions may be weaker than others. Thus, transcription factors do not bind just one sequence but are capable of binding a subset of closely related sequences, each with a different strength of interaction.
For example, although the consensus binding site for the TATA-binding protein (TBP) is TATAAAA, the TBP transcription factor can also bind similar sequences such as TATATAT or TATATAA.
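The idea that a factor binds a family of closely related sequences with varying strength can be sketched as a mismatch count against the consensus. This is an illustrative Python sketch, not an actual binding model; the function names and the two-mismatch cutoff are hypothetical choices:

```python
# Illustrative sketch (not from the source): near-matches to the TBP
# consensus TATAAAA, scored by Hamming distance (number of mismatches).

CONSENSUS = "TATAAAA"

def mismatches(site: str, consensus: str = CONSENSUS) -> int:
    """Number of positions where a candidate site differs from the consensus."""
    assert len(site) == len(consensus)
    return sum(a != b for a, b in zip(site, consensus))

def candidate_sites(dna: str, max_mismatches: int = 2):
    """Scan a sequence for windows within max_mismatches of the consensus."""
    k = len(CONSENSUS)
    for i in range(len(dna) - k + 1):
        window = dna[i:i + k]
        d = mismatches(window)
        if d <= max_mismatches:
            yield i, window, d

# The article's examples differ from the consensus at only one or two positions:
print(mismatches("TATATAT"))  # 2
print(mismatches("TATATAA"))  # 1
```

A real binding model would weight each position differently (a position weight matrix) rather than counting mismatches uniformly.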
Because transcription factors can bind a set of related sequences and these sequences tend to be short, potential transcription factor binding sites can occur by chance if the DNA sequence is long enough. It is unlikely, however, that a transcription factor will bind all compatible sequences in the genome of the cell. Other constraints, such as DNA accessibility in the cell or availability of cofactors may also help dictate where a transcription factor will actually bind. Thus, given the genome sequence it is still difficult to predict where a transcription factor will actually bind in a living cell.
Additional recognition specificity, however, may be obtained through the use of more than one DNA-binding domain (for example tandem DBDs in the same transcription factor or through dimerization of two transcription factors) that bind to two or more adjacent sequences of DNA.
Transcription factors are of clinical significance for at least two reasons: (1) mutations can be associated with specific diseases, and (2) they can be targets of medications.
Due to their important roles in development, intercellular signaling, and cell cycle, some human diseases have been associated with mutations in transcription factors.
Many transcription factors are either tumor suppressors or oncogenes, and, thus, mutations or aberrant regulation of them is associated with cancer. Three groups of transcription factors are known to be important in human cancer: (1) the NF-kappaB and AP-1 families, (2) the STAT family and (3) the steroid receptors.
Below are a few of the better-studied examples:
Approximately 10% of currently prescribed drugs directly target the nuclear receptor class of transcription factors. Examples include tamoxifen and bicalutamide for the treatment of breast and prostate cancer, respectively, and various types of anti-inflammatory and anabolic steroids. In addition, transcription factors are often indirectly modulated by drugs through signaling cascades. It might be possible to directly target other less-explored transcription factors such as NF-κB with drugs. Transcription factors outside the nuclear receptor family are thought to be more difficult to target with small molecule therapeutics since it is not clear that they are "druggable", but progress has been made on Pax2 and the Notch pathway.
Gene duplications have played a crucial role in the evolution of species. This applies particularly to transcription factors. Once they occur as duplicates, mutations can accumulate in one copy without negatively affecting the regulation of its downstream targets. However, changes in the DNA binding specificity of the single-copy LEAFY transcription factor, which occurs in most land plants, have recently been elucidated. In that respect, a single-copy transcription factor can undergo a change of specificity through a promiscuous intermediate without losing function. Similar mechanisms have been proposed in the context of alternative phylogenetic hypotheses and the role of transcription factors in the evolution of species.
There are different technologies available to analyze transcription factors. On the genomic level, DNA sequencing and database research are commonly used. The protein version of the transcription factor is detectable by using specific antibodies. The sample is detected on a western blot. By using electrophoretic mobility shift assay (EMSA), the activation profile of transcription factors can be detected. A multiplex approach for activation profiling is a TF chip system, where several different transcription factors can be detected in parallel.
The most commonly used method for identifying transcription factor binding sites is chromatin immunoprecipitation (ChIP). This technique relies on chemical fixation of chromatin with formaldehyde, followed by co-precipitation of DNA and the transcription factor of interest using an antibody that specifically targets that protein. The DNA sequences can then be identified by microarray or high-throughput sequencing (ChIP-seq) to determine transcription factor binding sites. If no antibody is available for the protein of interest, DamID may be a convenient alternative.
As described in more detail below, transcription factors may be classified by their (1) mechanism of action, (2) regulatory function, or (3) sequence homology (and hence structural similarity) in their DNA-binding domains.
There are two mechanistic classes of transcription factors:
Transcription factors have been classified according to their regulatory function:
Transcription factors are often classified based on the sequence similarity and hence the tertiary structure of their DNA-binding domains:
Thebaine
Thebaine (paramorphine), also known as codeine methyl enol ether, is an opiate alkaloid, its name coming from the Greek Θῆβαι, "Thēbai" (Thebes), an ancient city in Upper Egypt. A minor constituent of opium, thebaine is chemically similar to both morphine and codeine, but has stimulatory rather than depressant effects. At high doses, it causes convulsions similar to strychnine poisoning. The synthetic enantiomer (+)-thebaine does show analgesic effects apparently mediated through opioid receptors, unlike the inactive natural enantiomer (−)-thebaine. While thebaine is not used therapeutically, it is the main alkaloid extracted from "Papaver bracteatum" (Iranian opium / Persian poppy) and can be converted industrially into a variety of compounds, including hydrocodone, hydromorphone, oxycodone, oxymorphone, nalbuphine, naloxone, naltrexone, buprenorphine and etorphine. Butorphanol can also be derived from thebaine.
Thebaine is controlled under international law, is listed as a Class A drug under the Misuse of Drugs Act 1971 in the United Kingdom, is controlled as an analog of a Schedule II drug per the Analog Act in the United States, and is controlled, with its derivatives and salts, as a Schedule I substance of the Controlled Drugs and Substances Act in Canada. The 2013 US Drug Enforcement Administration (DEA) aggregate manufacturing quota for thebaine (ACSCN 9333) was unchanged from the previous year at 145 metric tons.
This alkaloid is biosynthetically related to salutaridine, oripavine, morphine and reticuline.
In 2012, 146,000 kilograms of thebaine were produced. In 2013, Australia was the main producer of poppy straw rich in thebaine, followed by Spain and then France. Together, those three countries accounted for about 99 per cent of global production of such poppy straw. The "Papaver bracteatum" seed capsules are the primary source of thebaine, with significant amounts in the stem as well.
THC (disambiguation)
THC is tetrahydrocannabinol, the main active chemical compound in cannabis.
THC or ThC may also refer to:
Tangent
In geometry, the tangent line (or simply tangent) to a plane curve at a given point is the straight line that "just touches" the curve at that point. Leibniz defined it as the line through a pair of infinitely close points on the curve. More precisely, a straight line is said to be a tangent of a curve "y" = "f"("x") at a point "x" = "c" if the line passes through the point ("c", "f"("c")) on the curve and has slope "f" ′("c"), where "f" ′ is the derivative of "f". A similar definition applies to space curves and curves in "n"-dimensional Euclidean space.
As it passes through the point where the tangent line and the curve meet, called the point of tangency, the tangent line is "going in the same direction" as the curve, and is thus the best straight-line approximation to the curve at that point.
Similarly, the tangent plane to a surface at a given point is the plane that "just touches" the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized; see Tangent space.
The word "tangent" comes from the Latin , "to touch".
Euclid makes several references to the tangent ( "ephaptoménē") to a circle in book III of the "Elements" (c. 300 BC). In Apollonius' work "Conics" (c. 225 BC) he defines a tangent as being "a line such that no other straight line could fall between it and the curve".
Archimedes (c. 287 – c. 212 BC) found the tangent to an Archimedean spiral by considering the path of a point moving along the curve.
In the 1630s Fermat developed the technique of adequality to calculate tangents and other problems in analysis and used this to calculate tangents to the parabola. The technique of adequality is similar to taking the difference between "f"("x" + "h") and "f"("x") and dividing by a power of "h". Independently Descartes used his method of normals based on the observation that the radius of a circle is always normal to the circle itself.
These methods led to the development of differential calculus in the 17th century. Many people contributed. Roberval discovered a general method of drawing tangents, by considering a curve as described by a moving point whose motion is the resultant of several simpler motions.
René-François de Sluse and Johannes Hudde found algebraic algorithms for finding tangents. Further developments included those of John Wallis and Isaac Barrow, leading to the theory of Isaac Newton and Gottfried Leibniz.
An 1828 definition of a tangent was "a right line which touches a curve, but which when produced, does not cut it". This old definition prevents inflection points from having any tangent. It has been dismissed and the modern definitions are equivalent to those of Leibniz who defined the tangent line as the line through a pair of infinitely close points on the curve.
The intuitive notion that a tangent line "touches" a curve can be made more explicit by considering the sequence of straight lines (secant lines) passing through two points, "A" and "B", that lie on the curve. The tangent at "A" is the limit when point "B" approximates or tends to "A". The existence and uniqueness of the tangent line depends on a certain type of mathematical smoothness, known as "differentiability." For example, if two circular arcs meet at a sharp point (a vertex) then there is no uniquely defined tangent at the vertex because the limit of the progression of secant lines depends on the direction in which "point "B"" approaches the vertex.
At most points, the tangent touches the curve without crossing it (though it may, when continued, cross the curve at other places away from the point of tangency). A point where the tangent (at this point) crosses the curve is called an "inflection point". Circles, parabolas, hyperbolas and ellipses do not have any inflection points, but more complicated curves do, such as the graph of a cubic function, which has exactly one inflection point, or a sinusoid, which has two inflection points per period of the sine.
Conversely, it may happen that the curve lies entirely on one side of a straight line passing through a point on it, and yet this straight line is not a tangent line. This is the case, for example, for a line passing through the vertex of a triangle and not intersecting it otherwise—where the tangent line does not exist for the reasons explained above. In convex geometry, such lines are called supporting lines.
The geometrical idea of the tangent line as the limit of secant lines serves as the motivation for analytical methods that are used to find tangent lines explicitly. The question of finding the tangent line to a graph, or the tangent line problem, was one of the central questions leading to the development of calculus in the 17th century. In the second book of his "Geometry", René Descartes said of the problem of constructing the tangent to a curve, "And I dare say that this is not only the most useful and most general problem in geometry that I know, but even that I have ever desired to know".
Suppose that a curve is given as the graph of a function, "y" = "f"("x"). To find the tangent line at the point "p" = ("a", "f"("a")), consider another nearby point "q" = ("a" + "h", "f"("a" + "h")) on the curve. The slope of the secant line passing through "p" and "q" is equal to the difference quotient
("f"("a" + "h") − "f"("a")) / "h".
As the point "q" approaches "p", which corresponds to making "h" smaller and smaller, the difference quotient should approach a certain limiting value "k", which is the slope of the tangent line at the point "p". If "k" is known, the equation of the tangent line can be found in the point-slope form:
"y" − "f"("a") = "k"("x" − "a").
To make the preceding reasoning rigorous, one has to explain what is meant by the difference quotient approaching a certain limiting value "k". The precise mathematical formulation was given by Cauchy in the 19th century and is based on the notion of limit. Suppose that the graph does not have a break or a sharp edge at "p" and it is neither plumb nor too wiggly near "p". Then there is a unique value of "k" such that, as "h" approaches 0, the difference quotient gets closer and closer to "k", and the distance between them becomes negligible compared with the size of "h", if "h" is small enough. This leads to the definition of the slope of the tangent line to the graph as the limit of the difference quotients for the function "f". This limit is the derivative of the function "f" at "x" = "a", denoted "f" ′("a"). Using derivatives, the equation of the tangent line can be stated as follows:
"y" = "f"("a") + "f" ′("a")("x" − "a").
Calculus provides rules for computing the derivatives of functions that are given by formulas, such as the power function, trigonometric functions, exponential function, logarithm, and their various combinations. Thus, equations of the tangents to graphs of all these functions, as well as many others, can be found by the methods of calculus.
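The limiting difference quotient and the resulting tangent line can also be checked numerically. A minimal Python sketch (illustrative, not from the source; the step size h = 1e-7 is an arbitrary small value, not a prescribed choice):

```python
def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

def tangent_line(f, a, h=1e-7):
    """Approximate slope k of the tangent to y = f(x) at x = a, plus f(a),
    so the tangent in point-slope form is y = k*(x - a) + f(a)."""
    k = difference_quotient(f, a, h)
    return k, f(a)

f = lambda x: x * x            # f'(x) = 2x exactly, so f'(3) = 6
k, fa = tangent_line(f, 3.0)
print(round(k, 4))             # 6.0 -- matches the exact derivative
tangent = lambda x: k * (x - 3.0) + fa
print(round(tangent(5.0), 4))  # 21.0 -- the line y = 6x - 9 evaluated at x = 5
```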
Calculus also demonstrates that there are functions and points on their graphs for which the limit determining the slope of the tangent line does not exist. For these points the function "f" is "non-differentiable". There are two possible reasons for the method of finding the tangents based on the limits and derivatives to fail: either the geometric tangent exists, but it is a vertical line, which cannot be given in the point-slope form since it does not have a slope, or the graph exhibits one of three behaviors that preclude a geometric tangent.
The graph "y" = "x"1/3 illustrates the first possibility: here the difference quotient at "a" = 0 is equal to "h"1/3/"h" = "h"−2/3, which becomes very large as "h" approaches 0. This curve has a tangent line at the origin that is vertical.
The graph "y" = "x"2/3 illustrates another possibility: this graph has a "cusp" at the origin. This means that, when "h" approaches 0, the difference quotient at "a" = 0 approaches plus or minus infinity depending on the sign of "h". Thus both branches of the curve approach the half of the vertical line for which "y" ≥ 0, but neither approaches the negative part of this line. Basically, there is no tangent at the origin in this case, but in some contexts one may consider this line as a tangent, and even, in algebraic geometry, as a "double tangent".
The graph "y" = |"x"| of the absolute value function consists of two straight lines with different slopes joined at the origin. As a point "q" approaches the origin from the right, the secant line always has slope 1. As a point "q" approaches the origin from the left, the secant line always has slope −1. Therefore, there is no unique tangent to the graph at the origin. Having two different (but finite) slopes is called a "corner".
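The corner in "y" = |"x"| shows up directly in the one-sided secant slopes; a small Python sketch (illustrative, not from the source):

```python
def secant_slope(f, a, h):
    """Slope of the secant through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

f = abs  # the curve y = |x|

# Secant slopes as the second point approaches the origin from each side:
right = [secant_slope(f, 0.0, h) for h in (1.0, 0.1, 0.001)]
left = [secant_slope(f, 0.0, -h) for h in (1.0, 0.1, 0.001)]

print(right)  # [1.0, 1.0, 1.0] -- the right derivative is 1
print(left)   # [-1.0, -1.0, -1.0] -- the left derivative is -1: no unique tangent
```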
Finally, since differentiability implies continuity, the contrapositive states that discontinuity implies non-differentiability. Any such jump or point discontinuity will have no tangent line. This includes cases where one slope approaches positive infinity while the other approaches negative infinity, leading to an infinite jump discontinuity.
When the curve is given by "y" = "f"("x") then the slope of the tangent is d"y"/d"x" = "f" ′("x"),
so by the point–slope formula the equation of the tangent line at ("X", "Y") is
"y" − "Y" = "f" ′("X")("x" − "X")
where ("x", "y") are the coordinates of any point on the tangent line, and where the derivative "f" ′("X") is evaluated at "x" = "X".
When the curve is given by "y" = "f"("x"), the tangent line's equation can also be found by using polynomial division to divide "f"("x") by ("x" − "X")2; if the remainder is denoted by "g"("x"), then the equation of the tangent line is given by
"y" = "g"("x").
When the equation of the curve is given in the form "f"("x", "y") = 0 then the value of the slope can be found by implicit differentiation, giving
d"y"/d"x" = −(∂"f"/∂"x") / (∂"f"/∂"y").
The equation of the tangent line at a point ("X","Y") such that "f"("X","Y") = 0 is then
∂"f"/∂"x"("X","Y") · ("x" − "X") + ∂"f"/∂"y"("X","Y") · ("y" − "Y") = 0.
This equation remains true if ∂"f"/∂"y"("X","Y") = 0 but ∂"f"/∂"x"("X","Y") ≠ 0 (in this case the slope of the tangent is infinite). If both partial derivatives are zero at ("X","Y"), the tangent line is not defined and the point ("X","Y") is said to be singular.
For algebraic curves, computations may be simplified somewhat by converting to homogeneous coordinates. Specifically, let the homogeneous equation of the curve be "g"("x", "y", "z") = 0 where "g" is a homogeneous function of degree "n". Then, if ("X", "Y", "Z") lies on the curve, Euler's theorem implies
"X" · ∂"g"/∂"x" + "Y" · ∂"g"/∂"y" + "Z" · ∂"g"/∂"z" = "n" · "g"("X", "Y", "Z") = 0.
It follows that the homogeneous equation of the tangent line is
"x" · ∂"g"/∂"x"("X", "Y", "Z") + "y" · ∂"g"/∂"y"("X", "Y", "Z") + "z" · ∂"g"/∂"z"("X", "Y", "Z") = 0.
The equation of the tangent line in Cartesian coordinates can be found by setting "z"=1 in this equation.
To apply this to algebraic curves, write "f"("x", "y") as
"f" = "u""n" + "u""n"−1 + … + "u"1 + "u"0
where each "u""r" is the sum of all terms of degree "r". The homogeneous equation of the curve is then
"u""n" + "u""n"−1"z" + … + "u"1"z""n"−1 + "u"0"z""n" = 0.
Applying the equation above and setting "z" = 1 produces
"x" · ∂"f"/∂"x"("X", "Y") + "y" · ∂"f"/∂"y"("X", "Y") + Σ"r" ("n" − "r") "u""r"("X", "Y") = 0
as the equation of the tangent line. The equation in this form is often simpler to use in practice since no further simplification is needed after it is applied.
If the curve is given parametrically by
"x" = "x"("t"), "y" = "y"("t")
then the slope of the tangent is
d"y"/d"x" = (d"y"/d"t") / (d"x"/d"t")
giving the equation for the tangent line at "t" = "T" as
"x" ′("T") · ("y" − "y"("T")) = "y" ′("T") · ("x" − "x"("T")).
If "x" ′("T") = "y" ′("T") = 0 the tangent line is not defined. However, it may occur that the tangent line exists and may be computed from an implicit equation of the curve.
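The parametric slope formula dy/dx = (dy/dt)/(dx/dt) is straightforward to evaluate; a Python sketch using the unit circle as an illustrative example (function names are hypothetical, not from the source):

```python
import math

def parametric_tangent_slope(dxdt, dydt, t):
    """Slope dy/dx = (dy/dt) / (dx/dt); undefined where dx/dt = 0."""
    dx = dxdt(t)
    if dx == 0:
        raise ZeroDivisionError("dx/dt = 0: slope undefined (tangent may be vertical)")
    return dydt(t) / dx

# Unit circle: x(t) = cos t, y(t) = sin t, so dx/dt = -sin t and dy/dt = cos t.
t = math.pi / 4
slope = parametric_tangent_slope(lambda u: -math.sin(u), lambda u: math.cos(u), t)
print(round(slope, 6))  # -1.0 -- the tangent at 45 degrees on the unit circle
```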
The line perpendicular to the tangent line to a curve at the point of tangency is called the "normal line" to the curve at that point. The slopes of perpendicular lines have product −1, so if the equation of the curve is "y" = "f"("x") then the slope of the normal line is
−1 / "f" ′("x")
and it follows that the equation of the normal line at ("X", "Y") is
("x" − "X") + "f" ′("X")("y" − "Y") = 0.
Similarly, if the equation of the curve has the form "f"("x", "y") = 0 then the equation of the normal line is given by
∂"f"/∂"y"("X", "Y") · ("x" − "X") − ∂"f"/∂"x"("X", "Y") · ("y" − "Y") = 0.
If the curve is given parametrically by
"x" = "x"("t"), "y" = "y"("t")
then the equation of the normal line at "t" = "T" is
"x" ′("T") · ("x" − "x"("T")) + "y" ′("T") · ("y" − "y"("T")) = 0.
The angle between two curves at a point where they intersect is defined as the angle between their tangent lines at that point. More specifically, two curves are said to be tangent at a point if they have the same tangent at a point, and orthogonal if their tangent lines are orthogonal.
The formulas above fail when the point is a singular point. In this case there may be two or more branches of the curve that pass through the point, each branch having its own tangent line. When the point is the origin, the equations of these lines can be found for algebraic curves by factoring the equation formed by eliminating all but the lowest degree terms from the original equation. Since any point can be made the origin by a change of variables (or by translating the curve) this gives a method for finding the tangent lines at any singular point.
For example, the equation of the limaçon trisectrix shown to the right is
Expanding this and eliminating all but terms of degree 2 gives
which, when factored, becomes
So these are the equations of the two tangent lines through the origin.
When the curve is not self-crossing, the tangent at a reference point may still not be uniquely defined because the curve is not differentiable at that point although it is differentiable elsewhere. In this case the left and right derivatives are defined as the limits of the derivative as the point at which it is evaluated approaches the reference point from respectively the left (lower values) or the right (higher values). For example, the curve "y" = |"x" | is not differentiable at "x" = 0: its left and right derivatives have respective slopes −1 and 1; the tangents at that point with those slopes are called the left and right tangents.
Sometimes the slopes of the left and right tangent lines are equal, so the tangent lines coincide. This is true, for example, for the curve "y" = "x" 2/3, for which both the left and right derivatives at "x" = 0 are infinite; both the left and right tangent lines have equation "x" = 0.
Two circles of non-equal radius, both in the same plane, are said to be tangent to each other if they meet at only one point. Equivalently, two circles, with radii of "r""i" and centers at ("x""i", "y""i"), for "i" = 1, 2 are said to be tangent to each other if
("x"1 − "x"2)2 + ("y"1 − "y"2)2 = ("r"1 ± "r"2)2,
with the plus sign for external tangency and the minus sign for internal tangency.
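The tangency condition can be tested by comparing the distance between centers with the sum and difference of the radii (external vs. internal tangency). An illustrative Python sketch, with hypothetical function names:

```python
import math

def circles_tangency(c1, r1, c2, r2):
    """Classify two coplanar circles: externally tangent when the distance
    between centers equals r1 + r2, internally tangent when it equals |r1 - r2|."""
    d = math.dist(c1, c2)
    if math.isclose(d, r1 + r2):
        return "externally tangent"
    if d > 0 and math.isclose(d, abs(r1 - r2)):
        return "internally tangent"
    return "not tangent"

print(circles_tangency((0, 0), 1, (3, 0), 2))  # externally tangent: d = 3 = 1 + 2
print(circles_tangency((0, 0), 3, (1, 0), 2))  # internally tangent: d = 1 = 3 - 2
print(circles_tangency((0, 0), 1, (5, 0), 2))  # not tangent
```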
The "tangent plane" to a surface at a given point "p" is defined in an analogous way to the tangent line in the case of curves. It is the best approximation of the surface by a plane at "p", and can be obtained as the limiting position of the planes passing through 3 distinct points on the surface close to "p" as these points converge to "p". More generally, there is a "k"-dimensional tangent space at each point of a "k"-dimensional manifold in the "n"-dimensional Euclidean space.
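For a surface given implicitly by "F"("x", "y", "z") = 0, the tangent plane at a point "p" can be computed from the gradient, which is normal to the surface there. A minimal SymPy sketch, using the unit sphere at the point (0, 0, 1) as an illustrative example:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x**2 + y**2 + z**2 - 1          # unit sphere as F(x, y, z) = 0
p = {x: 0, y: 0, z: 1}              # point on the surface

grad = [sp.diff(F, v) for v in (x, y, z)]
normal = [g.subs(p) for g in grad]  # gradient at p is normal to the surface

# Tangent plane: normal . (r - p) = 0
plane = sum(n * (v - p[v]) for n, v in zip(normal, (x, y, z)))
print(sp.Eq(plane, 0))  # 2*(z - 1) = 0, i.e. the plane z = 1
```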
Stonewall Jackson
Thomas Jonathan "Stonewall" Jackson (January 21, 1824 – May 10, 1863) served as a Confederate general (1861–1863) during the American Civil War, and became one of the best-known Confederate commanders after General Robert E. Lee. Jackson played a prominent role in nearly all military engagements in the Eastern Theater of the war until his death, and had a key part in winning many significant battles.
Born in what was then part of Virginia (in present-day West Virginia), Jackson received an appointment to the United States Military Academy at West Point and graduated in the class of 1846. He served in the U.S. Army during the Mexican–American War of 1846–1848 and distinguished himself at Chapultepec. From 1851 to 1861 he taught at the Virginia Military Institute, where he was unpopular with his students. During this time, he married twice. His first wife died giving birth, but his second wife, Mary Anna Morrison, lived until 1915. When Virginia seceded from the Union in May 1861 after the attack on Fort Sumter, Jackson joined the Confederate Army. He distinguished himself commanding a brigade at the First Battle of Bull Run in July, providing crucial reinforcements and beating back a fierce Union assault. In this context Barnard Elliott Bee Jr. compared him to a "stone wall", hence his enduring nickname.
Jackson performed exceptionally well in the campaigns in the Shenandoah Valley in 1862. Despite an initial defeat due largely to faulty intelligence, through swift and careful maneuvers Jackson was able to defeat three separate Union armies and prevent any of them from reinforcing General George B. McClellan's Army of the Potomac in its campaign against Richmond. Jackson then quickly moved his three divisions to reinforce General Lee's Army of Northern Virginia in defense of Richmond. He performed poorly in the Seven Days Battles against McClellan's army, as he was frequently late arriving on the field. During the Northern Virginia Campaign that summer, Jackson's troops captured and destroyed an important supply depot for General John Pope's Army of Virginia, and then withstood repeated assaults from Pope's troops at the Second Battle of Bull Run. Jackson's troops played a prominent role in September's Maryland Campaign, capturing the town of Harpers Ferry, a strategic location, and providing a defense of the Confederate Army's left at Antietam. At Fredericksburg in December, Jackson's corps buckled but ultimately beat back an assault by the Union Army under Major General Ambrose Burnside. In late April and early May 1863, faced with a larger Union army now commanded by Joseph Hooker at Chancellorsville, Lee divided his force three ways. On May 2, Jackson took his 30,000 troops and launched a surprise attack against the Union right flank, driving the opposing troops back about two miles. That evening he was accidentally shot by Confederate pickets. The general lost his left arm to amputation; weakened by his wounds, he died of pneumonia eight days later.
Military historians regard Jackson as one of the most gifted tactical commanders in U.S. history. His tactics are studied even today. His death proved a severe setback for the Confederacy, affecting not only its military prospects, but also the morale of its army and the general public. After Jackson's death, his military exploits developed a legendary quality, becoming an important element of the ideology of the "Lost Cause".
Thomas Jonathan Jackson was the great-grandson of John Jackson (1715/1719–1801) and Elizabeth Cummins (also known as Elizabeth Comings and Elizabeth Needles) (1723–1828). John Jackson was an Irish Protestant from Coleraine, County Londonderry, Ireland. While living in London, England, he was convicted of the capital crime of larceny for stealing £170; the judge at the Old Bailey sentenced him to seven years penal transportation. Elizabeth, a strong, tall, blonde woman born in London, was also convicted of felony larceny in an unrelated case for stealing 19 pieces of silver, jewelry, and fine lace, and received a similar sentence. They both were transported on the merchant ship "Litchfield", which departed London in May 1749 with 150 convicts. John and Elizabeth met on board and were in love by the time the ship arrived at Annapolis, Maryland. Although they were sent to different locations in Maryland for their bond service, the couple married in July 1755.
The family migrated west across the Blue Ridge Mountains to settle near Moorefield, Virginia (now West Virginia) in 1758. In 1770, they moved farther west to the Tygart Valley. They began to acquire large parcels of virgin farming land near the present-day town of Buckhannon, including 3,000 acres (12 km²) in Elizabeth's name. John and his two teenage sons were early recruits for the American Revolutionary War, fighting in the Battle of Kings Mountain on October 7, 1780; John finished the war as a captain and served as a lieutenant of the Virginia militia after 1787. While the men were in the Army, Elizabeth converted their home to a haven, "Jackson's Fort", for refugees from Indian attacks.
John and Elizabeth had eight children. Their second son was Edward Jackson (March 1, 1759 – December 25, 1828), and Edward's third son was Jonathan Jackson, Thomas's father. Jonathan's mother died on April 17, 1796. Three years later, on October 13, 1799, his father married Elizabeth Wetherholt, and they had nine more children.
Thomas Jackson was born in the town of Clarksburg, Virginia, on January 21, 1824. He was the third child of Julia Beckwith (née Neale) Jackson (1798–1831) and Jonathan Jackson (1790–1826), an attorney. Both of Jackson's parents were natives of Virginia. The family already had two young children and were living in Clarksburg, in what is now West Virginia, when Thomas was born. He was named for his maternal grandfather. There is some dispute about the actual location of Jackson's birth. A historical marker on the floodwall in Parkersburg, West Virginia, claims that he was born in a cabin near that spot when his mother was visiting her parents who lived there. There are writings which indicate that in Jackson's early childhood, he was called "The Real Macaroni", though the origin of the nickname and whether it really existed are unclear.
Thomas's sister Elizabeth (age six) died of typhoid fever on March 6, 1826, with two-year-old Thomas at her bedside. His father also died of typhoid fever on March 26. Jackson's mother gave birth to Thomas's sister Laura Ann the day after Jackson's father died. Julia Jackson thus was widowed at 28 and was left with much debt and three young children (including the newborn). She sold the family's possessions to pay the debts. She declined family charity and moved into a small rented one-room house. Julia took in sewing and taught school to support herself and her three young children for about four years.
In 1830, Julia Neale Jackson remarried, against the wishes of her friends. Her new husband, Captain Blake B. Woodson, an attorney, did not like his stepchildren. There were continuing financial problems. The following year, after giving birth to Thomas's half-brother William Wirt Woodson, Julia died of complications, leaving her three older children orphaned. Julia was buried in an unmarked grave in a homemade coffin in Westlake Cemetery along the James River and Kanawha Turnpike in Fayette County within the corporate limits of present-day Ansted, West Virginia.
As their mother's health continued to fail, Jackson and his sister Laura Ann were sent to live with their half-uncle, Cummins Jackson, who owned a grist mill in Jackson's Mill (near present-day Weston in Lewis County in central West Virginia). Their older brother, Warren, went to live with other relatives on his mother's side of the family, but he later died of tuberculosis in 1841 at the age of twenty. Thomas and Laura Ann returned from Jackson's Mill in November 1831 to be at their dying mother's bedside. They spent four years together at the Mill before being separated—Laura Ann was sent to live with her mother's family, Thomas to live with his Aunt Polly (his father's sister) and her husband, Isaac Brake, on a farm four miles from Clarksburg. Thomas was treated by Brake as an outsider and, having suffered verbal abuse for over a year, ran away from the family. When his cousin in Clarksburg urged him to return to Aunt Polly's, he replied, "Maybe I ought to, ma'am, but I am not going to." He walked eighteen miles through mountain wilderness to Jackson's Mill, where he was welcomed by his uncles and he remained there for the following seven years.
Cummins Jackson was strict with Thomas, who looked up to Cummins as a schoolteacher. Jackson helped around the farm, tending sheep with the assistance of a sheepdog, driving teams of oxen and helping harvest wheat and corn. Formal education was not easily obtained, but he attended school when and where he could. Much of Jackson's education was self-taught. He once made a deal with one of his uncle's slaves to provide him with pine knots in exchange for reading lessons; Thomas would stay up at night reading borrowed books by the light of those burning pine knots. Virginia law forbade teaching a slave, free black or mulatto to read or write; nevertheless, Jackson secretly taught the slave, as he had promised. Once literate, the young slave fled to Canada via the Underground Railroad. In his later years at Jackson's Mill, Thomas served as a schoolteacher.
The Civil War has sometimes been referred to as a war of "brother against brother," but in the case of the Jackson family, it was brother against sister. Laura Jackson Arnold was close to her brother Thomas until the Civil War period. As the war loomed, she became a staunch Unionist in a somewhat divided Harrison County. She was so strident in her beliefs that she expressed mixed feelings upon hearing of Thomas's death. One Union officer said that she seemed depressed at hearing the news, but her Unionism was stronger than her family bonds. In a letter, he wrote that Laura had said she "would rather know that he was dead than to have him a leader in the rebel army." Her Union sentiment also estranged her later from her husband, Jonathan Arnold.
In 1842, Jackson was accepted to the United States Military Academy at West Point, New York. Because of his inadequate schooling, he had difficulty with the entrance examinations and began his studies at the bottom of his class. Displaying a dogged determination that was to characterize his life, he became one of the hardest working cadets in the academy, and moved steadily up the academic rankings. Jackson graduated 17th out of 59 students in the Class of 1846. It was said by his peers that if he had stayed there another year, he would have graduated first.
Jackson began his United States Army career as a second lieutenant in the 1st U.S. Artillery Regiment and was sent to fight in the Mexican–American War from 1846 to 1848. He served at the Siege of Veracruz and the battles of Contreras, Chapultepec, and Mexico City, eventually earning two brevet promotions, and the regular army rank of first lieutenant. It was in Mexico that Thomas Jackson first met Robert E. Lee.
During the assault on Chapultepec Castle on September 13, 1847, he refused what he felt was a "bad order" to withdraw his troops. Confronted by his superior, he explained his rationale, claiming withdrawal was more hazardous than continuing his overmatched artillery duel. His judgment proved correct, and a relieving brigade was able to exploit the advantage Jackson had opened. In contrast to this display of strength of character, he obeyed what he also felt was a "bad order" when he raked a civilian throng with artillery fire after the Mexican authorities failed to surrender Mexico City at the hour demanded by the U.S. forces. The former episode, and later aggressive action against the retreating Mexican army, earned him field promotion to the brevet rank of major.
After the war, Jackson was briefly assigned to forts in New York, and then to Florida during the Second Interbellum of the Seminole Wars, during which the Americans were attempting to force the remaining Seminoles to move West. He was stationed briefly at Fort Casey before being named second-in-command at Fort Meade, a small fort about thirty miles south of Tampa. His commanding officer was Major William H. French. Jackson and French disagreed often, and filed numerous complaints against each other. Jackson stayed in Florida less than a year.
In the spring of 1851, Jackson accepted a newly created teaching position at the Virginia Military Institute (VMI), in Lexington, Virginia. He became Professor of Natural and Experimental Philosophy and Instructor of Artillery. Parts of Jackson's curriculum are still taught at VMI, regarded as timeless military essentials: discipline, mobility, assessing the enemy's strength and intentions while attempting to conceal your own, and the efficiency of artillery combined with an infantry assault.
Though he spent a great deal of time preparing in depth for each class meeting, Jackson was unpopular as a teacher. His students called him "Tom Fool". He memorized his lectures and then recited them to the class; any student who came to ask for help was given the same explanation as before. And if a student asked for help a second time, Jackson viewed him as insubordinate and punished him. For his tests, Jackson typically had students simply recite memorized information that he had given them. The students mocked his apparently stern, religious nature and his eccentric traits. In 1856, a group of alumni attempted to have Jackson removed from his position.
Jackson's peculiar personal traits contributed to his unpopularity as an educator. With little sense of humor, he once tried to get a cadet dismissed from VMI for playing a prank on him. He was a hypochondriac who had sinus problems and arthritis and stood for long periods of time to keep his internal organs in place, a tiring activity that he believed contributed to good health. He rarely ate much food and often subsisted on crackers and milk. He required little sleep but was known to take catnaps. He liked mineral baths.
The founder of VMI and one of its first two faculty members was John Thomas Lewis Preston. Preston's second wife, Margaret Junkin Preston, was the sister of Jackson's first wife, Elinor. In addition to working together on the VMI faculty, Preston taught Sunday School with Jackson and served on his staff during the Civil War.
Little known as he was to the white inhabitants of Lexington, Jackson was revered by many of the African Americans in town, both slaves and free blacks. In 1855, he was instrumental in the organization of Sunday School classes for blacks at the Presbyterian Church. His second wife, Mary Anna Jackson, taught with Jackson, as "he preferred that my labors should be given to the colored children, believing that it was more important and useful to put the strong hand of the Gospel under the ignorant African race, to lift them up." The pastor, Dr. William Spottswood White, described the relationship between Jackson and his Sunday afternoon students: "In their religious instruction he succeeded wonderfully. His discipline was systematic and firm, but very kind. ... His servants reverenced and loved him, as they would have done a brother or father. ... He was emphatically the black man's friend." He addressed his students by name and they, in turn, referred to him affectionately as "Marse Major".
Jackson's family owned six slaves in the late 1850s. Three (Hetty, Cyrus, and George, a mother and two teenage sons) were received as a wedding present. Another, Albert, requested that Jackson purchase him and allow him to work for his freedom; he was employed as a waiter in one of the Lexington hotels and Jackson rented him to VMI. Amy also requested that Jackson purchase her from a public slave auction and she served the family as a cook and housekeeper. The sixth, Emma, was a four-year-old orphan with a learning disability, accepted by Jackson from an aged widow and presented to his second wife, Mary Anna, as a welcome-home gift. After the American Civil War began he appears to have hired out or sold his slaves, except, apparently, one slave: "A 'servant', Jim Lewis, had stayed with Jackson in the small house as he lay dying". Mary Anna Jackson, in her 1895 memoir, said, "our servants ... without the firm guidance and restraint of their master, the excitement of the times proved so demoralizing to them that he deemed it best for me to provide them with good homes among the permanent residents." James Robertson wrote about Jackson's view on slavery:
While an instructor at VMI in 1853, Thomas Jackson married Elinor "Ellie" Junkin, whose father, George Junkin, was president of Washington College (later named Washington and Lee University) in Lexington. An addition was built onto the president's residence for the Jacksons, and when Robert E. Lee became president of Washington College he lived in the same home, now known as the Lee–Jackson House. Ellie gave birth to a stillborn son on October 22, 1854, experiencing a hemorrhage an hour later that proved fatal.
After a tour of Europe, Jackson married again, in 1857. Mary Anna Morrison was from North Carolina, where her father was the first president of Davidson College. Her sister, Isabella Morrison, was married to Daniel Harvey Hill. They had a daughter named Mary Graham on April 30, 1858, but the baby died less than a month later. Another daughter was born in 1862, shortly before her father's death. The Jacksons named her Julia Laura, after his mother and sister.
Jackson purchased the only house he ever owned while in Lexington. Built in 1801, the brick town house at 8 East Washington Street was purchased by Jackson in 1859. He lived in it for two years before being called to serve in the Confederacy. Jackson never returned to his home.
In November 1859, at the request of the governor of Virginia, Major William Gilham led a contingent of the VMI Cadet Corps to Charles Town to provide an additional military presence at the hanging of militant abolitionist John Brown on December 2, following his raid on the federal arsenal at Harpers Ferry on October 16. Major Jackson was placed in command of the artillery, consisting of two howitzers manned by twenty-one cadets.
In 1861, after Virginia seceded from the Union and as the American Civil War broke out, Jackson became a drill master for some of the many new recruits in the Confederate Army. On April 27, 1861, Virginia Governor John Letcher ordered Colonel Jackson to take command at Harpers Ferry, where he would assemble and command the unit which later gained fame as the "Stonewall Brigade", consisting of the 2nd, 4th, 5th, 27th, and 33rd Virginia Infantry regiments. All of these units were from the Shenandoah Valley region of Virginia, where Jackson located his headquarters throughout the first two years of the war. Jackson became known for his relentless drilling of his troops; he believed discipline was vital to success on the battlefield. Following raids on the B&O Railroad on May 24, he was promoted to brigadier general on June 17.
Jackson rose to prominence and earned his most famous nickname at the First Battle of Bull Run (First Manassas) on July 21, 1861. As the Confederate lines began to crumble under heavy Union assault, Jackson's brigade provided crucial reinforcements on Henry House Hill, demonstrating the discipline he instilled in his men. Although under heavy fire for several continuous hours, Jackson received a wound that broke the middle finger of his left hand about midway between the hand and the knuckle, the ball passing on the side next to the index finger. The South Carolina troops commanded by Gen. Barnard Elliott Bee Jr. had been overwhelmed, and Bee rode up to Jackson in despair, exclaiming, "They are beating us back!" "Then," said Jackson, "we will give them the bayonet!" As he rode back to his command, Bee exhorted his own troops to re-form by shouting, "There is Jackson standing like a stone wall. Let us determine to die here, and we will conquer. Rally behind the Virginians!" There is some controversy over Bee's statement and intent, which could not be clarified because he was killed almost immediately after speaking and none of his subordinate officers wrote reports of the battle. Major Burnett Rhett, chief of staff to General Joseph E. Johnston, claimed that Bee was angry at Jackson's failure to come immediately to the relief of Bee's and Francis S. Bartow's brigades while they were under heavy pressure. Those who subscribe to this opinion believe that Bee's statement was meant to be pejorative: "Look at Jackson standing there like a stone wall!"
Regardless of the controversy and the delay in relieving Bee, Jackson's brigade, which would thenceforth be known as the Stonewall Brigade, stopped the Union assault and suffered more casualties than any other Southern brigade that day; Jackson has since then been generally known as Stonewall Jackson. During the battle, Jackson displayed a gesture common to him and held his left arm skyward with the palm facing forward – interpreted by his soldiers variously as an eccentricity or an entreaty to God for success in combat. His hand was struck by a bullet or a piece of shrapnel and he suffered a small loss of bone in his middle finger. He refused medical advice to have the finger amputated. After the battle, Jackson was promoted to major general (October 7, 1861) and given command of the Valley District, with headquarters in Winchester.
In the spring of 1862, Union Maj. Gen. George B. McClellan's Army of the Potomac approached Richmond from the southeast in the Peninsula Campaign. Maj. Gen. Irvin McDowell's large corps was poised to hit Richmond from the north, and Maj. Gen. Nathaniel P. Banks's army threatened the Shenandoah Valley. Jackson was ordered by Richmond to operate in the Valley to defeat Banks's threat and prevent McDowell's troops from reinforcing McClellan.
Jackson possessed the attributes to succeed against his poorly coordinated and sometimes timid opponents: a combination of great audacity, excellent knowledge and shrewd use of the terrain, and an uncommon ability to inspire his troops to great feats of marching and fighting.
The campaign started with a tactical defeat at Kernstown on March 23, 1862, when faulty intelligence led him to believe he was attacking a small detachment. But it became a strategic victory for the Confederacy, because his aggressiveness suggested that he possessed a much larger force, convincing President Abraham Lincoln to keep Banks' troops in the Valley and McDowell's 30,000-man corps near Fredericksburg, subtracting about 50,000 soldiers from McClellan's invasion force. As it transpired, it was Jackson's only defeat in the Valley.
By adding Maj. Gen. Richard S. Ewell's large division and Maj. Gen. Edward "Allegheny" Johnson's small division, Jackson increased his army to 17,000 men. He was still significantly outnumbered, but attacked portions of his divided enemy individually at McDowell, defeating both Brig. Gens. Robert H. Milroy and Robert C. Schenck. He defeated Banks at Front Royal and Winchester, ejecting him from the Valley. Lincoln decided that the defeat of Jackson was an immediate priority (though Jackson's orders were solely to keep Union forces occupied away from Richmond). He ordered Irvin McDowell to send 20,000 men to Front Royal and Maj. Gen. John C. Frémont to move to Harrisonburg. If both forces could converge at Strasburg, Jackson's only escape route up the Valley would be cut.
After a series of maneuvers, Jackson defeated Frémont's command at Cross Keys and Brig. Gen. James Shields at Port Republic on June 8–9. Union forces were withdrawn from the Valley.
It was a classic military campaign of surprise and maneuver. Jackson pressed his army through 48 days of hard marching and won five significant victories with a force of about 17,000 against a combined force of 60,000. Stonewall Jackson's reputation for moving his troops so rapidly earned them the oxymoronic nickname "foot cavalry". He became the most celebrated soldier in the Confederacy (until he was eventually eclipsed by Lee) and lifted the morale of the Southern public.
McClellan's Peninsula Campaign toward Richmond stalled at the Battle of Seven Pines on May 31 and June 1. After the Valley Campaign ended in mid-June, Jackson and his troops were called to join Robert E. Lee's Army of Northern Virginia in defense of the capital. By utilizing a railroad tunnel under the Blue Ridge Mountains and then transporting troops to Hanover County on the Virginia Central Railroad, Jackson and his forces made a surprise appearance in front of McClellan at Mechanicsville. Reports had last placed Jackson's forces in the Shenandoah Valley; their presence near Richmond added greatly to the Union commander's overestimation of the strength and numbers of the forces before him. This proved a crucial factor in McClellan's decision to re-establish his base at a point many miles downstream from Richmond on the James River at Harrison's Landing, essentially a retreat that ended the Peninsula Campaign and prolonged the war almost three more years.
Jackson's troops served well under Lee in the series of battles known as the Seven Days Battles, but Jackson's own performance in those battles is generally considered to be poor. He arrived late at Mechanicsville and inexplicably ordered his men to bivouac for the night within clear earshot of the battle. He was late at Savage's Station. At White Oak Swamp he failed to employ fording places to cross White Oak Swamp Creek, attempting for hours to rebuild a bridge, which limited his involvement to an ineffectual artillery duel and a missed opportunity to intervene decisively at the Battle of Glendale, which was raging nearby. At Malvern Hill Jackson participated in the futile, piecemeal frontal assaults against entrenched Union infantry and massed artillery, and suffered heavy casualties (but this was a problem for all of Lee's army in that ill-considered battle). The reasons for Jackson's sluggish and poorly coordinated actions during the Seven Days are disputed, although a severe lack of sleep after the grueling march and railroad trip from the Shenandoah Valley was probably a significant factor. Both Jackson and his troops were completely exhausted. An explanation for this and other lapses by Jackson was tersely offered by his colleague and brother-in-law General Daniel Harvey Hill: "Jackson's genius never shone when he was under the command of another."
The military reputations of Lee's corps commanders are often characterized as Stonewall Jackson representing the audacious, offensive component of Lee's army, whereas his counterpart, James Longstreet, more typically advocated and executed defensive strategies and tactics. Jackson has been described as the army's hammer, Longstreet its anvil. In the Northern Virginia Campaign of August 1862 this stereotype did not hold true. Longstreet commanded the Right Wing (later to become known as the First Corps) and Jackson commanded the Left Wing. Jackson started the campaign under Lee's orders with a sweeping flanking maneuver that placed his corps into the rear of Union Maj. Gen. John Pope's Army of Virginia. The Hotchkiss journal shows that Jackson, most likely, originally conceived the movement. In the journal entries for March 4 and 6, 1863, General Stuart tells Hotchkiss that "Jackson was entitled to all the credit" for the movement and that Lee thought the proposed movement "very hazardous" and "reluctantly consented" to it. At Manassas Junction, Jackson was able to capture all of the supplies of the Union Army depot. He then had his troops destroy all of it, for it was the main depot for the Union Army, before withdrawing to a defensive position and effectively inviting Pope to assault him. On August 28–29, the start of the Second Battle of Bull Run (Second Manassas), Pope launched repeated assaults against Jackson as Longstreet and the remainder of the army marched north to reach the battlefield.
On August 30, Pope came to believe that Jackson was starting to retreat, and Longstreet took advantage of this by launching a massive assault on the Union army's left with over 25,000 men. Although the Union troops put up a furious defense, Pope's army was forced to retreat in a manner similar to the embarrassing Union defeat at First Bull Run, fought on roughly the same battleground.
When Lee decided to invade the North in the Maryland Campaign, Jackson took Harpers Ferry, then hastened to join the rest of the army at Sharpsburg, Maryland, where they fought McClellan in the Battle of Antietam (Sharpsburg). Antietam was primarily a defensive battle against superior odds, although McClellan failed to exploit his advantage. Jackson's men bore the brunt of the initial attacks on the northern end of the battlefield and, at the end of the day, successfully resisted a breakthrough on the southern end when Jackson's subordinate, Maj. Gen. A. P. Hill, arrived at the last minute from Harpers Ferry. The Confederate forces held their position, but the battle was extremely bloody for both sides, and Lee withdrew the Army of Northern Virginia back across the Potomac River, ending the invasion. On October 10, Jackson was promoted to lieutenant general, ranking just behind Lee and Longstreet, and his command was redesignated the Second Corps.
Before the armies camped for winter, Jackson's Second Corps held off a strong Union assault against the right flank of the Confederate line at the Battle of Fredericksburg, in what became a Confederate victory. Just before the battle, Jackson was delighted to receive a letter about the birth of his daughter, Julia Laura Jackson, on November 23. Also before the battle, Maj. Gen. J. E. B. Stuart, Lee's dashing and well-dressed cavalry commander, presented to Jackson a fine general's frock coat that he had ordered from one of the best tailors in Richmond. Jackson's previous coat was threadbare and colorless from exposure to the elements, its buttons removed by admiring ladies. Jackson asked his staff to thank Stuart, saying that although the coat was too handsome for him, he would cherish it as a souvenir. His staff insisted that he wear it to dinner, which caused scores of soldiers to rush to see him in uncharacteristic garb. Jackson was so embarrassed with the attention that he did not wear the new uniform for months.
At the Battle of Chancellorsville, the Army of Northern Virginia was faced with a serious threat by the Army of the Potomac and its new commanding general, Major General Joseph Hooker. General Lee decided to employ a risky tactic to take the initiative and offensive away from Hooker's new southern thrust – he decided to divide his forces. Jackson and his entire corps went on an aggressive flanking maneuver to the right of the Union lines: this flanking movement would be one of the most successful and dramatic of the war. While riding with his infantry in a wide berth well south and west of the Federal line of battle, Jackson employed Maj. Gen. Fitzhugh Lee's cavalry to provide for better reconnaissance regarding the exact location of the Union right and rear. The results were far better than even Jackson could have hoped. Fitzhugh Lee found the entire right side of the Federal lines in the middle of open field, guarded merely by two guns that faced westward, as well as the supplies and rear encampments. The men were eating and playing games in carefree fashion, completely unaware that an entire Confederate corps was less than a mile away. What happened next is given in Fitzhugh Lee's own words:
Jackson immediately returned to his corps and arranged his divisions into a line of battle to charge directly into the oblivious Federal right. The Confederates marched silently until they were merely several hundred feet from the Union position, then released a bloodthirsty cry and full charge. Many of the Federal soldiers were captured without a shot fired, the rest were driven into a full rout. Jackson pursued relentlessly back toward the center of the Federal line until dusk.
Darkness ended the assault. As Jackson and his staff were returning to camp on May 2, they were mistaken for a Union cavalry force by the 18th North Carolina Infantry regiment, which shouted, "Halt, who goes there?", but fired before evaluating the reply. Frantic shouts by Jackson's staff identifying the party were answered by Major John D. Barry with the retort, "It's a damned Yankee trick! Fire!" A second volley was fired in response; in all, Jackson was hit by three bullets, two in the left arm and one in the right hand. Several other men on his staff were killed, in addition to many horses. Darkness and confusion prevented Jackson from getting immediate care. He was dropped from his stretcher while being evacuated because of incoming artillery rounds. Because of his injuries, Jackson's left arm had to be amputated by Dr. Hunter McGuire. Jackson was moved to Thomas C. Chandler's plantation named "Fairfield". He was offered Chandler's home for recovery, but Jackson refused and suggested using Chandler's plantation office building instead. He was thought to be out of harm's way, but unknown to the doctors he already had classic symptoms of pneumonia, complaining of a sore chest. This soreness was mistakenly thought to be the result of his rough handling in the battlefield evacuation.
Lee wrote to Jackson after learning of his injuries, stating: "Could I have directed events, I would have chosen for the good of the country to be disabled in your stead." Jackson died of complications from pneumonia on May 10, 1863, eight days after he was shot. On his deathbed, though he became weaker, he remained spiritually strong, saying towards the end: "It is the Lord's Day; my wish is fulfilled. I have always desired to die on Sunday."
Dr. McGuire wrote an account of Jackson's final hours and last words:
His body was moved to the Governor's Mansion in Richmond for the public to mourn, and he was then moved to be buried in the Stonewall Jackson Memorial Cemetery, Lexington, Virginia. The arm that was amputated on May 2 was buried separately by Jackson's chaplain (Beverly Tucker Lacy), at the J. Horace Lacy house, "Ellwood", (now preserved at the Fredericksburg National Battlefield) in the Wilderness of Orange County, near the field hospital.
Upon hearing of Jackson's death, Robert E. Lee mourned the loss of both a friend and a trusted commander. As Jackson lay dying, Lee sent a message through Chaplain Lacy, saying: "Give General Jackson my affectionate regards, and say to him: he has lost his left arm but I my right." The night Lee learned of Jackson's death, he told his cook: "William, I have lost my right arm", and, "I'm bleeding at the heart."
"Harper's Weekly" reported Jackson's death on May 23, 1863, as follows:
Jackson's sometimes unusual command style and personality traits, combined with his frequent success in battle, contribute to his legacy as one of the greatest generals of the Civil War. He was martial and stern in attitude and profoundly religious, a deacon in the Presbyterian Church. One of his many nicknames was "Old Blue Lights," a term applied to a military man whose evangelical zeal burned with the intensity of the blue light used for night-time display.
Jackson held a lifelong belief that one of his arms was longer than the other, and thus usually held the "longer" arm up to equalize his circulation. He was described as a "champion sleeper", and occasionally even fell asleep with food in his mouth. A paper presented to the Society of Clinical Psychologists hypothesized that Jackson had Asperger syndrome, although other possible explanations, such as a herniated diaphragm, exist. Jackson suffered a number of ailments, for which he sought relief through contemporary practices such as hydrotherapy, popular in America at the time, visiting establishments at Oswego, New York (1850) and Round Hill, Massachusetts (1860), though with little evidence of success. Jackson also suffered significant hearing loss in both ears as a result of his prior service in the U.S. Army as an artillery officer.
A recurring story concerns Jackson's love of lemons, which he allegedly gnawed whole to alleviate symptoms of dyspepsia. General Richard Taylor, son of President Zachary Taylor, wrote a passage in his war memoirs about Jackson eating lemons: "Where Jackson got his lemons 'no fellow could find out,' but he was rarely without one." However, recent research by his biographer, James I. Robertson, Jr., has found that none of Jackson's contemporaries, including members of his staff, his friends, or his wife, recorded any unusual obsessions with lemons. Jackson thought of a lemon as a "rare treat ... enjoyed greatly whenever it could be obtained from the enemy's camp". Jackson was fond of all fruits, particularly peaches, "but he enjoyed with relish lemons, oranges, watermelons, apples, grapes, berries, or whatever was available."
Jackson's religion has often been discussed. His biographer, Robert Lewis Dabney, suggested that "It was the fear of God which made him so fearless of all else." Jackson himself had said, "My religious belief teaches me to feel as safe in battle as in bed."
Stephen W. Sears states that "Jackson was fanatical in his Presbyterian faith, and it energized his military thought and character. Theology was the only subject he genuinely enjoyed discussing. His dispatches invariably credited an ever-kind Providence." According to Sears, "this fanatical religiosity had drawbacks. It warped Jackson's judgment of men, leading to poor appointments; it was said he preferred good Presbyterians to good soldiers." James I. Robertson, Jr. suggests that Jackson was "a Christian soldier in every sense of the word." According to Robertson, Jackson "thought of the war as a religious crusade", and "viewed himself as an Old Testament warrior – like David or Joshua – who went into battle to slay the Philistines."
Jackson encouraged the Confederate States Army revival that occurred in 1863, although it was probably more of a grass-roots movement than a top-down revival. Jackson strictly observed the Sunday Sabbath. James I. Robertson, Jr. notes that "no place existed in his Sunday schedule for labor, newspapers, or secular conversation."
In command, Jackson was extremely secretive about his plans and extremely meticulous about military discipline. This secretive nature did not stand him in good stead with his subordinates, who were often not aware of his overall operational intentions until the last minute, and who complained of being left out of key decisions.
Robert E. Lee could trust Jackson with deliberately undetailed orders that conveyed Lee's overall objectives, what modern doctrine calls the "end state". This was because Jackson had a talent for understanding Lee's sometimes unstated goals, and Lee trusted Jackson with the ability to take whatever actions were necessary to implement his end state requirements. Few of Lee's subsequent corps commanders had this ability. At Gettysburg, this resulted in lost opportunities. With a defeated and disorganized Union Army trying to regroup on high ground near town and vulnerable, Lee sent one of his new corps commanders, Richard S. Ewell, discretionary orders that the heights (Cemetery Hill and Culp's Hill) be taken "if practicable." Without Jackson's intuitive grasp of Lee's orders or the instinct to take advantage of sudden tactical opportunities, Ewell chose not to attempt the assault, and this failure is considered by historians to be the greatest missed opportunity of the battle.
Jackson had a poor reputation as a horseman. One of his soldiers, Georgia volunteer William Andrews, wrote that Jackson was "a very ordinary looking man of medium size, his uniform badly soiled as though it had seen hard service. He wore a cap pulled down nearly to his nose and was riding a rawboned horse that did not look much like a charger, unless it would be on hay or clover. He certainly made a poor figure on a horseback, with his stirrup leather six inches too short, putting his knees nearly level with his horse's back, and his heels turned out with his toes sticking behind his horse's foreshoulder. A sorry description of our most famous general, but a correct one." His horse was named "Little Sorrel" (also known as "Old Sorrel"), a small chestnut gelding which was a captured Union horse from a Connecticut farm. He rode Little Sorrel throughout the war, and was riding him when he was shot at Chancellorsville. Little Sorrel died at age 36 and is buried near a statue of Jackson on the parade grounds of VMI. (His mounted hide is on display in the VMI Museum.)
Jackson was greatly admired and respected by people throughout the South, and his death had a profound effect there on civilians and soldiers alike.
After the war, Jackson's wife and young daughter Julia moved from Lexington to North Carolina. Mary Anna Jackson wrote two books about her husband's life, including some of his letters. She never remarried, and was known as the "Widow of the Confederacy", living until 1915. His daughter Julia married, and bore children, but she died of typhoid fever at the age of 26 years.
A former Confederate soldier who admired Jackson, Captain Thomas R. Ranson of Staunton, Virginia, also remembered the tragic life of Jackson's mother. Years after the war, he went to the tiny mountain hamlet of Ansted in Fayette County, West Virginia, and had a marble marker placed over the unmarked grave of Julia Neale Jackson in Westlake Cemetery, to make sure that the site was not lost.
Many theorists through the years have postulated that if Jackson had lived, Lee might have prevailed at Gettysburg. Certainly Jackson's discipline and tactical sense were sorely missed.
As a boy, George Patton, later famous as a World War II general, prayed next to two portraits of Robert E. Lee and Stonewall Jackson, whom he assumed were God and Jesus. He once told Dwight D. Eisenhower, "I will be your Jackson." General Douglas MacArthur called Robert L. Eichelberger his Stonewall Jackson. Chesty Puller idolized Jackson and carried George Henderson's biography of Jackson with him on campaigns. Alexander Vandegrift also idolized Jackson.
Jackson's grandson and great-grandson, both namesakes, Thomas Jonathan Jackson Christian (1888-1952) and Thomas Jonathan Jackson Christian Jr. (1915-1944), both graduated from West Point. The elder Christian was a career US Army officer who served during both World Wars and rose to the rank of brigadier general. The younger Christian was a colonel in command of the 361st Fighter Group flying P-51 Mustangs in the European Theater of Operations in World War II when he was killed in action in August 1944; his personal aircraft, Lou IV, was one of the most photographed P-51s in the war.
As an important element of the ideology of the "Lost Cause", Jackson has been commemorated in numerous ways, including with statues, currency, and postage. A poem penned during the war soon became a popular song, "Stonewall Jackson's Way". The Stonewall Brigade Band is still active today.
West Virginia's Stonewall Jackson State Park is named in his honor. Nearby, at Stonewall Jackson's historical childhood home, his uncle's grist mill is the centerpiece of a historical site at the Jackson's Mill Center for Lifelong Learning and State 4-H Camp. The facility, located near Weston, serves as a special campus for West Virginia University and the WVU Extension Service.
During a training exercise in Virginia by U.S. Marines in 1921, the Marine commander, General Smedley Butler was told by a local farmer that Stonewall Jackson's arm was buried nearby under a granite marker, to which Butler replied, "Bosh! I will take a squad of Marines and dig up that spot to prove you wrong!" Butler found the arm in a box under the marker. He later replaced the wooden box with a metal one, and reburied the arm. He left a plaque on the granite monument marking the burial place of Jackson's arm; the plaque is no longer on the marker but can be viewed at the Chancellorsville Battlefield visitor's center.
Beginning in 1904 the Commonwealth of Virginia celebrated Jackson's birthday as a state holiday; the observance was eliminated, with Election Day as a replacement holiday, effective July 2020.
Jackson is featured on the 1925 Stone Mountain Memorial half dollar. | https://en.wikipedia.org/wiki?curid=31485 |
Tertiary education
Tertiary education, also referred to as third-level, third-stage or post-secondary education, is the educational level following the completion of secondary education. The World Bank, for example, defines tertiary education as including universities as well as trade schools and colleges. Higher education is taken to include undergraduate and postgraduate education, while vocational education beyond secondary education is known as "further education" in the United Kingdom, or "continuing education" in the United States.
Tertiary education generally culminates in the receipt of certificates, diplomas, or academic degrees.
UNESCO stated that tertiary education focuses on learning endeavors in specialized fields. It includes academic and higher vocational education.
The World Bank's 2019 World Development Report on the future of work argues that given the future of work and the increasing role of technology in value chains, tertiary education becomes even more relevant for workers to compete in the labor market.
Tertiary education systems are projected to keep expanding over the next 10 years. Globally, the gross enrolment ratio in tertiary education increased from 19% in 2000 to 38% in 2017, with the female enrolment ratio exceeding the male ratio by 4 percentage points.
The tertiary gross enrolment ratio ranges from 9% in low-income countries to 77% in high-income countries, where, after rapid growth in the 2000s, it reached a plateau in the 2010s.
Between now and 2030, the biggest increase in tertiary enrolment ratios is expected in middle-income countries, where it will reach 52%. Sustainable development goal 4 (SDG 4) commits countries to providing lifelong learning opportunities for all, including tertiary education.
This commitment is monitored through the global indicator for target 4.3 in the sustainable development goal 4 (SDG 4), which measures the participation rate of youth and adults in formal and non-formal education and training in the previous 12 months, whether for work or non-work purposes.
The term "tertiary education" aligns with the global term "higher education". Since the 1970s, however, specialized FE colleges have called themselves "tertiary colleges" despite being part of the secondary education process. These institutions offer courses, such as A Levels, that allow progression to HE, alongside vocational courses.
In some areas, where schools do not universally offer sixth forms, "tertiary colleges" function as a sixth-form college as well as a general FE college.
Unlike sixth-form colleges, the staff join lecturers' rather than teachers' unions.
Under devolution in the United Kingdom, education is administered separately in England, Wales, Northern Ireland and Scotland. In 2018 the Welsh Government adopted the term "tertiary education" to refer to post-16 education and training in Wales.
Within Australia, "tertiary education" refers to continuing studies after a student's Higher School Certificate. It also refers to any education a student receives after final compulsory schooling, which ends at the age of 17 in Australia. Tertiary-education options include university, technical and further education, or private universities.
The higher education system in the United States is decentralized and regulated independently by each state with accreditors playing a key role in ensuring institutions meet minimum standards. It is large and diverse with institutions that are privately governed and institutions that are owned and operated by state and local governments. Some private institutions are affiliated with religious organizations whereas others are secular with enrollment ranging from a few dozen to tens of thousands of students. In short, there are a wide variety of options which are often locally determined. The United States Department of Education presents a broad-spectrum view of tertiary education and detailed information on the nation's educational structure, accreditation procedures, and connections to state as well as federal agencies and entities.
The Carnegie Classification of Institutions of Higher Education provides one framework for classifying U.S. colleges and universities in several different ways. US tertiary education also includes various non-profit organizations promoting professional development of individuals in the field of higher education and helping expand awareness of related issues like international student services and complete campus internationalization.
Although tertiary education in the EU includes university, it can differ from country to country.
After going to nursery school (French: école maternelle), elementary school (French: école élémentaire), middle school (French: collège), and high school (French: lycée), a student may go to university, but may also stop at that point.
Tertiary education refers to post-secondary education received at Universities (Government or privately funded), Monotechnics, Polytechnics and Colleges of Education. After completing a secondary education, students may enroll in a tertiary institution or acquire a vocational education. Students are required to sit for the Joint Admissions and Matriculation Board Entrance Examination (JAMB) as well as the Secondary School Certificate Examination (SSCE) or General Certificate Examination (GCE) and meet varying cut-off marks to gain admission into a tertiary institution.
The 4th and 5th grades of colleges of technology, together with special training colleges, fall into this category.
Colleges of technology are provided for by Article 1 of Japan's education law, as are universities and junior colleges, and their upper grades are often regarded as two years of higher education; special training colleges, by contrast, are provided for by Article 124 of the law as a category of special training schools. Both are regular educational organisations, but special training colleges are not "schools" under the law, nor are they formally part of higher education.
Pupils who finish junior high school can enter a college of technology, but the 1st, 2nd and 3rd grades belong to secondary education and are outside the scope of this article. The college of technology is a special educational system in which secondary and tertiary education intermingle. Graduates of these schools are equivalent to graduates of junior colleges.
Whilst special training colleges are not "schools" under the law, they are regarded as schools by the public. Most of their courses last two years, but some run for one, three or four years. Graduates of courses longer than two years are equivalent to junior college graduates, and in recent years graduates of four-year courses have become eligible to enter graduate courses at universities.
Special training schools were included among miscellaneous schools under the current education law when it came into force in 1947. Article 83 of the law provided for them, and they were certainly miscellaneous.
Because miscellaneous schools at the time included educational organisations offering lessons only a few times a week, some organisations, including those that later became special training schools, were dissatisfied with the system. Being classified as miscellaneous also caused many problems.
Educational organisations that met certain conditions were authorised as miscellaneous schools by a reform of the law on 1 January 1957, but they remained within the miscellaneous system. Since the reform, the law has not applied to many other educational organisations.
The authorised schools varied widely: some, for example, set requirements on applicants' educational backgrounds, while others had none. Many problems remained, and special training schools were created in January 1976. They offer three courses: post-secondary, upper-secondary and general. Schools offering the post-secondary course, intended for graduates of senior high schools and people with equivalent educational backgrounds, are called special training colleges. The upper-secondary course is for graduates of junior high schools, and anyone can enter the general course; the latter is closest to the current miscellaneous schools.
Since 1994, graduates of special training colleges have been able to receive a diploma. Unlike the foundation degree that graduates of colleges of technology can receive, the diploma is not provided for in the law, but it is publicly recognised as a degree as well. | https://en.wikipedia.org/wiki?curid=31486
Trimix (breathing gas)
Trimix is a breathing gas consisting of oxygen, helium and nitrogen and is used in deep commercial diving, during the deep phase of dives carried out using technical diving techniques, and in advanced recreational diving.
The helium is included as a substitute for some of the nitrogen, to reduce the narcotic effect of the breathing gas at depth. With a mixture of three gases it is possible to create mixes suitable for different depths or purposes by adjusting the proportions of each gas. Oxygen content can be optimised for the depth to limit the risk of toxicity, and the inert component balanced between nitrogen (which is cheap but narcotic) and helium (which is not narcotic and reduces work of breathing, but is more expensive and increases heat loss).
The mixture of helium and oxygen with a 0% nitrogen content is generally known as Heliox. This is frequently used as a breathing gas in deep commercial diving operations, where it is often recycled to save the expensive helium component. Analysis of two-component gases is much simpler than three-component gases.
The main reason for adding helium to the breathing mix is to reduce the proportions of nitrogen and oxygen below those of air, to allow the gas mix to be breathed safely on deep dives. A lower proportion of nitrogen is required to reduce nitrogen narcosis and other physiological effects of the gas at depth. Helium has very little narcotic effect. A lower proportion of oxygen reduces the risk of oxygen toxicity on deep dives.
The lower density of helium reduces breathing resistance at depth.
Because of its low molecular weight, helium enters and leaves tissues more rapidly than nitrogen as the pressure is increased or reduced (this is called on-gassing and off-gassing). Because of its lower solubility, helium does not load tissues as heavily as nitrogen, but at the same time the tissues can not support as high an amount of helium when super-saturated. In effect, helium is a faster gas to saturate and desaturate, which is a distinct advantage in saturation diving, but less so in bounce diving, where the increased rate of off-gassing is largely counterbalanced by the equivalently increased rate of on-gassing.
Helium conducts heat six times faster than air, so helium-breathing divers often carry a separate supply of a different gas to inflate drysuits. This is to avoid the risk of hypothermia caused by using helium as inflator gas. Argon, carried in a small, separate tank connected only to the inflator of the drysuit, is preferred to air, since air conducts heat 50% faster than argon. Dry suits (if used together with a buoyancy compensator) still require a minimum of inflation to avoid "squeezing", i.e. damage to skin caused by pressurizing dry suit folds.
Some divers suffer from hyperbaric arthralgia (compression arthralgia) during descent and trimix has been shown to help symptoms on compression.
Helium dissolves into tissues (this is called on-gassing) more rapidly than nitrogen as the ambient pressure is increased. A consequence of the higher loading in some tissues is that many decompression algorithms require deeper decompression stops than a similar decompression dive using air, and helium is more likely to come out of solution and cause decompression sickness following a fast ascent.
In addition to physiological disadvantages, the use of trimix also has economic and logistic disadvantages. The price of helium has increased by over 51% between the years 2000 and 2011. This price increase affects open-circuit divers more than closed-circuit divers due to the larger volume of helium consumed on a typical trimix dive. Additionally, as trimix fills require a more elaborate blending and compressor setup than less complex air and nitrox fills, there are fewer trimix filling stations. The relative scarcity of trimix filling stations may necessitate going far out of one's way in order to procure the necessary mix for a deep dive that requires the gas.
Lowering the oxygen content increases the maximum operating depth and duration of the dive before which oxygen toxicity becomes a limiting factor. Most trimix divers limit their working oxygen partial pressure [PO2] to 1.4 bar and may reduce the PO2 further to 1.3 bar or 1.2 bar depending on the depth, the duration and the kind of breathing system used. A maximum oxygen partial pressure of 1.4 bar for the active sectors of the dive, and 1.6 bar for decompression stops is recommended by several recreational and technical diving certification agencies for open circuit, and 1.2 bar or 1.3 bar as maximum for the active sectors of a dive on closed circuit rebreather.
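As a worked illustration of these limits, the maximum operating depth implied by an oxygen fraction and a chosen PO2 ceiling follows from the usual approximation of 1 bar of ambient pressure per 10 metres of seawater plus 1 bar at the surface. This is a sketch only: the function name is ours, and salinity and altitude corrections are ignored.

```python
def mod_metres(f_o2: float, ppo2_max: float = 1.4) -> float:
    """Maximum operating depth (metres of seawater) for a mix with
    oxygen fraction f_o2, given a partial-pressure limit ppo2_max in
    bar. Assumes 1 bar per 10 msw plus 1 bar of surface pressure."""
    return 10 * (ppo2_max / f_o2 - 1)

# Trimix 18/45 (18% O2) against a 1.4 bar working limit:
print(round(mod_metres(0.18, 1.4), 1))   # -> 67.8
# Hypoxic trimix 10/70 against the same limit:
print(round(mod_metres(0.10, 1.4), 1))   # -> 130.0
```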
Retaining nitrogen in trimix can contribute to the prevention of High Pressure Nervous Syndrome, a problem that can occur when breathing heliox at depths beyond about . Nitrogen is also much less expensive than helium.
Conventionally, the mix is named by its oxygen percentage, helium percentage and optionally the balance percentage, nitrogen. For example, a mix named "trimix 10/70" or trimix 10/70/20, consisting of 10% oxygen, 70% helium, 20% nitrogen is suitable for a dive.
The ratio of gases in a particular mix is chosen to give a safe maximum operating depth and comfortable equivalent narcotic depth for the planned dive. Safe limits for mix of gases in trimix are generally accepted to be a maximum partial pressure of oxygen (PO2—see Dalton's law) of 1.0 to 1.6 bar and maximum equivalent narcotic depth of . At , "12/52" has a PO2 of 1.3 bar and an equivalent narcotic depth of .
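The equivalent narcotic depth calculation can be sketched as follows. Note that conventions differ between agencies: some compare only the nitrogen partial pressure against air's 0.79, while others count oxygen as equally narcotic and compare everything except helium against air's full 1.0. The function below supports both, uses the 10 msw per bar approximation, and is our own illustrative helper rather than any agency's official formula.

```python
def end_metres(depth_m: float, f_n2: float, f_o2: float = 0.0,
               oxygen_narcotic: bool = False) -> float:
    """Equivalent narcotic depth (msw). Nitrogen-only convention
    compares the mix's N2 partial pressure to air's 0.79; if oxygen
    is counted as narcotic, the narcotic fraction is f_n2 + f_o2
    compared against air's 1.0 (everything but helium)."""
    p_amb = depth_m / 10 + 1                      # ambient pressure, bar
    narcotic = (f_n2 + f_o2) if oxygen_narcotic else f_n2 / 0.79
    return max(0.0, 10 * (p_amb * narcotic - 1))

# Trimix 18/45 (18% O2, 45% He, 37% N2) at 60 m:
print(round(end_metres(60, 0.37), 1))                                   # -> 22.8
print(round(end_metres(60, 0.37, f_o2=0.18, oxygen_narcotic=True), 1))  # -> 28.5
```

The two conventions differ by several metres for the same dive, which is one reason quoted ENDs for the same mix vary between sources.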
In open-circuit scuba, two classes of trimix are commonly used: "normoxic" trimix—with a minimum PO2 at the surface of 0.18 and "hypoxic" trimix—with a PO2 less than 0.18 at the surface. A normoxic mix such as "19/30" is used in the depth range; a hypoxic mix such as "10/50" is used for deeper diving, as a bottom gas only, and cannot safely be breathed at shallow depths where the PO2 is less than 0.18 bar.
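The minimum safe depth for a hypoxic mix follows from the same partial-pressure arithmetic: the mix only becomes breathable once the ambient pressure raises its PO2 to the 0.18 bar floor mentioned above. The helper name is ours; the 10 msw per bar approximation applies as before.

```python
def min_depth_metres(f_o2: float, ppo2_min: float = 0.18) -> float:
    """Shallowest depth (msw) at which a mix reaches the minimum
    breathable oxygen partial pressure ppo2_min (bar); 0.0 means
    the mix is already breathable at the surface."""
    return max(0.0, 10 * (ppo2_min / f_o2 - 1))

print(min_depth_metres(0.19))            # -> 0.0 (normoxic "19/30")
print(round(min_depth_metres(0.10), 1))  # -> 8.0 (hypoxic "10/50")
```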
In fully closed-circuit rebreathers that use trimix diluents, the mix can be "hyperoxic" (meaning more oxygen than in air, as in enriched air nitrox) in shallow water, because the rebreather automatically adds oxygen to maintain a specific partial pressure of oxygen. Less commonly, hyperoxic trimix is sometimes used on open circuit scuba. Hyperoxic trimix is sometimes referred to as Helitrox, TriOx, or HOTx (High Oxygen Trimix) with the "x" in HOTx representing the mixture's fraction of helium as a percentage.
See breathing gas for more information on the composition and choice of gas blends.
Gas blending of trimix involves decanting oxygen and helium into the diving cylinder and then topping up the mix with air from a diving air compressor. To ensure an accurate mix, after each helium and oxygen transfer, the mix is allowed to cool, its pressure is measured and further gas is decanted until the correct pressure is achieved. This process often takes hours and is sometimes spread over days at busy blending stations.
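The arithmetic behind a partial-pressure fill into an empty cylinder can be sketched as below. This is an idealized illustration (our own function, ideal-gas behaviour assumed); in practice blenders must correct for temperature and gas compressibility, which is why the mix is re-measured after cooling.

```python
def fill_pressures(f_o2: float, f_he: float, p_fill: float,
                   air_f_o2: float = 0.21):
    """Partial-pressure blend of trimix into an empty cylinder:
    helium first, then oxygen, then an air top-up to p_fill (bar).
    Returns (p_he, p_o2, p_air). Ideal-gas approximation only."""
    p_he = f_he * p_fill
    # Oxygen to decant so that it, plus the oxygen contained in the
    # air top-up, yields the target oxygen fraction:
    p_o2 = (f_o2 * p_fill - air_f_o2 * (p_fill - p_he)) / (1 - air_f_o2)
    p_air = p_fill - p_he - p_o2
    return p_he, p_o2, p_air

# Trimix 18/45 to 200 bar:
p_he, p_o2, p_air = fill_pressures(0.18, 0.45, 200)
print(round(p_he, 1), round(p_o2, 1), round(p_air, 1))  # -> 90.0 16.3 93.7
```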
A second method called 'continuous blending' is now gaining favor. Oxygen, helium and air are blended on the intake side of a compressor. The oxygen and helium are fed into the air stream using flow meters, so as to achieve the rough mix. The low pressure mixture is analyzed for oxygen content and the oxygen and helium flows adjusted accordingly. On the high pressure side of the compressor a regulator is used to reduce pressure of a sample flow and the trimix is analyzed (preferably for both helium and oxygen) so that the fine adjustment to the intake gas flows can be made.
The benefit of such a system is that the helium delivery tank pressure need not be as high as that used in the partial pressure method of blending and residual gas can be 'topped up' to best mix after the dive. This is important mainly because of the high cost of helium.
Drawbacks may be that the high heat of compression of helium results in the compressor overheating (especially in tropical climates) and that the hot trimix entering the analyzer on the high pressure side can affect the reliability of the analysis. DIY versions of the continuous blend units can be made for as little as $200 (excluding analyzers).
Although theoretically trimix can be blended with almost any combination of helium and oxygen, a number of "standard" mixes have evolved (such as 21/35, 18/45 and 15/55—see "Naming conventions"). Most of these mixes originated from filling the cylinders with a certain percentage of helium, and then topping the mix with 32% enriched air nitrox. The "standard" mixes evolved because of three coinciding factors—the desire to keep that equivalent narcotic depth (END) of the mix at approximately , the requirement to keep the partial pressure of oxygen at 1.4 ATA or below at the deepest point of the dive, and the fact that many dive shops stored standard 32% enriched air nitrox in banks, which simplified mixing. The use of standard mixes makes it relatively easy to top up diving cylinders after a dive using residual mix — only helium and banked nitrox are needed to top up the residual gas from the last fill.
The method of mixing a known nitrox mix with helium allows analysis of the fractions of each gas using only an oxygen analyser, since the ratio of the oxygen fraction in the final mix to the oxygen fraction in the initial nitrox gives the fraction of nitrox in the final mix, hence the fractions of the three components are easily calculated. It is demonstrably true that the END of a nitrox-helium mixture at its maximum operating depth (MOD) is equal to the MOD of the nitrox alone.
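The single-analyser calculation described above can be sketched as follows, together with a numerical check of the END-at-MOD identity under the convention that oxygen is counted as narcotic (the identity does not hold under the nitrogen-only convention). Function names and the example readings are ours.

```python
def fractions_from_o2(f_o2_measured: float, f_o2_nitrox: float):
    """Recover all three fractions of a helium-topped nitrox blend
    from one oxygen reading: the nitrox share of the final mix is
    the ratio of measured O2 to the nitrox's O2 fraction."""
    nitrox_share = f_o2_measured / f_o2_nitrox
    f_he = 1 - nitrox_share
    f_n2 = nitrox_share * (1 - f_o2_nitrox)
    return f_o2_measured, f_n2, f_he

# EAN32 topped with helium; the analyser reads 16% O2:
f_o2, f_n2, f_he = fractions_from_o2(0.16, 0.32)
print(round(f_n2, 2), round(f_he, 2))   # -> 0.34 0.5

# END-at-MOD identity, counting oxygen as narcotic: the narcotic
# pressure of the mix at its MOD equals the ambient pressure at the
# MOD of the plain nitrox (here with a 1.4 bar PO2 limit).
ppo2_max = 1.4
print(abs((ppo2_max / f_o2) * (f_o2 + f_n2) - ppo2_max / 0.32) < 1e-6)  # -> True
```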
Heliair is a breathing gas consisting of mixture of oxygen, nitrogen and helium and is often used during the deep phase of dives carried out using technical diving techniques. This term, first used by Sheck Exley, is mostly used by Technical Diving International (TDI).
It is easily blended from helium and air and so has a fixed 21:79 ratio of oxygen to nitrogen with the balance consisting of a variable amount of helium. It is sometimes referred to as "poor man's trimix", because it is much easier to blend than trimix blends with variable oxygen content, since all that is required is to insert the requisite partial pressure of helium, and then top up with air from a conventional compressor. The more complicated (and dangerous) step of adding pure oxygen at pressure required to blend trimix is absent when blending heliair.
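Because heliair is just helium topped with air, its composition follows directly from the helium partial pressure, as this sketch shows (the function name and example pressures are ours; ideal-gas behaviour is assumed):

```python
def heliair_mix(p_he: float, p_fill: float):
    """Heliair: pure helium topped with air, so the O2:N2 ratio
    stays fixed at 21:79. Returns (f_o2, f_he, f_n2)."""
    f_he = p_he / p_fill
    return 0.21 * (1 - f_he), f_he, 0.79 * (1 - f_he)

# 60 bar of helium topped with air to 200 bar:
f_o2, f_he, f_n2 = heliair_mix(60, 200)
print(round(f_o2, 3), round(f_he, 2), round(f_n2, 3))  # -> 0.147 0.3 0.553
```

Note that at 30% helium the resulting 14.7% oxygen is already hypoxic by the 17% threshold mentioned below.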
Heliair blends are similar to the standard Trimix blends made with helium and Nitrox 32, but with a deeper END at MOD.
Heliair will always have less than 21% oxygen, and will be hypoxic (less than 17% oxygen) for mixes with more than 20% helium.
The National Association of Underwater Instructors (NAUI) uses the term "helitrox" for hyperoxic 26/17 Trimix, i.e. 26% oxygen, 17% helium, 57% nitrogen. Helitrox requires decompression stops similar to Nitrox-I (EAN32) and has a maximum operating depth of , where it has an equivalent narcotic depth of . This allows diving throughout the usual recreational range, while decreasing decompression obligation and narcotic effects compared to air.
GUE and UTD also promote hyperoxic trimix, but prefer the term "TriOx".
Other divers question whether this proliferation of terminology is useful, and feel that the term Trimix is sufficient, modified as appropriate with the terms hypoxic, normoxic and hyperoxic, and the usual forms for indicating constituent gas fraction.
Technical diver training and certification agencies may differentiate between levels of trimix diving qualifications. The usual distinction is between normoxic trimix and hypoxic trimix, the latter sometimes also called full trimix. | https://en.wikipedia.org/wiki?curid=31489
Theoretical chemistry
Theoretical chemistry is the branch of chemistry which develops theoretical generalizations that are part of the theoretical arsenal of modern chemistry: for example, the concepts of chemical bonding, chemical reaction, valence, the potential energy surface, molecular orbitals, orbital interactions, molecule activation, etc.
Theoretical chemistry unites principles and concepts common to all branches of chemistry. Within the framework of theoretical chemistry, there is a systematization of chemical laws, principles and rules, their refinement and detailing, and the construction of a hierarchy. The central place in theoretical chemistry is occupied by the doctrine of the interconnection of the structure and properties of molecular systems. It uses mathematical and physical methods to explain the structures and dynamics of chemical systems and to correlate, understand, and predict their thermodynamic and kinetic properties. In the most general sense, it is the explanation of chemical phenomena by the methods of theoretical physics. In contrast to theoretical physics, owing to the high complexity of chemical systems, theoretical chemistry often uses semi-empirical and empirical methods in addition to approximate mathematical ones.
In recent years, it has consisted primarily of quantum chemistry, i.e., the application of quantum mechanics to problems in chemistry. Other major components include molecular dynamics, statistical thermodynamics and theories of electrolyte solutions, reaction networks, polymerization, catalysis, molecular magnetism and spectroscopy.
Modern theoretical chemistry may be roughly divided into the study of chemical structure and the study of chemical dynamics. The former includes studies of: electronic structure, potential energy surfaces, and force fields; vibrational-rotational motion; equilibrium properties of condensed-phase systems and macro-molecules. Chemical dynamics includes: bimolecular kinetics and the collision theory of reactions and energy transfer; unimolecular rate theory and metastable states; condensed-phase and macromolecular aspects of dynamics.
Historically, the major field of application of theoretical chemistry has been in the following fields of research:
Hence, theoretical chemistry has emerged as a branch of research. With the rise of the density functional theory and other methods like molecular mechanics, the range of application has been extended to chemical systems which are relevant to other fields of chemistry and physics, including biochemistry, condensed matter physics, nanotechnology or molecular biology. | https://en.wikipedia.org/wiki?curid=31491 |
The Skeptical Environmentalist
The Skeptical Environmentalist: Measuring the Real State of the World (the Danish title translates literally as "The True State of the World") is a book by Danish environmentalist author Bjørn Lomborg, controversial for its claims that overpopulation, declining energy resources, deforestation, species loss, water shortages, certain aspects of global warming, and an assortment of other global environmental issues are unsupported by statistical analysis of the relevant data. It was first published in Danish in 1998, while the English edition was published as a work in environmental economics by Cambridge University Press in 2001.
Due to the scope of the project, comprising the range of topics addressed, the diversity of data and sources employed, and the many types of conclusions and comments advanced, "The Skeptical Environmentalist" does not fit easily into a particular scientific discipline or methodology. Although published by the social sciences division of Cambridge University Press, the findings and conclusions were widely challenged on the basis of natural science. This interpretation of "The Skeptical Environmentalist" as a work of environmental science generated much of the controversy and debate that surrounded the book.
Prior to becoming the Director of the Copenhagen Consensus Center and Adjunct Professor at the Copenhagen Business School, Bjørn Lomborg was an Associate Professor of Political Science at the University of Aarhus.
Some critics focus on his lack of training or professional experience in the environmental sciences or economics. Supporters argue his research is an appropriate application of his expertise in cost-benefit analysis, a standard analytical tool in policy assessment. His advocates further note that many of the scientists and environmentalists who criticized the book are not themselves environmental policy experts or experienced in cost-benefit research.
In numerous interviews, Lomborg ascribed his motivation for writing "The Skeptical Environmentalist" to his personal convictions, making clear that he was a pro-environmentalist and Greenpeace supporter. He has stated that he began his research as an attempt to counter what he saw as anti-ecological arguments by Julian Lincoln Simon in an article in "Wired", but changed his mind after starting to analyze data. Lomborg describes the views he attributes to environmental campaigners as the "Litany", which he at one time claims to have affirmed, but purports to correct in his work.
The general analytical approach employed by Lomborg is based on cost-benefit analyses as employed in economics, social science, and the formulation and assessment of government policy. Much of Lomborg's examination of his Litany is based on statistical data analysis, therefore his work may be considered a work of that nature. Since it examines the costs and benefits of its many topics, it could be considered a work in economics, as categorized by its publisher. However, "The Skeptical Environmentalist" is methodologically eclectic and cross-disciplinary, combining interpretation of data with assessments of the media and human behavior, evaluations of scientific theories, and other approaches, to arrive at its various conclusions.
In arriving at the final work, Lomborg has used a similar approach in each of his work's main areas and subtopics. He progresses from the general to the specific, starting with a broad concern, such as pollution or energy, dividing it into subtopics (e.g. air pollution; fossil fuel depletion), and then identifying one or more widely held fears and their source (e.g. our air is growing increasingly toxic, by X measure, according to Y). From there, Lomborg chooses data that he considers to be the most reliable and reasonable available. He then analyzes that data to prove or disprove his selected proposition. In every case, his calculations find that the claim is not substantiated, and is either an exaggeration, or a completely reversed portrayal of an improving situation, rather than a deteriorating one. Having established what he calls "the true state of the world", for each topic and subtopic, Lomborg examines a variety of theories, technologies, implementation strategies and costs, and suggests alternative ways to improve not-so-dire situations, or advance in other areas not currently considered as pressing.
"The Skeptical Environmentalist"'s subtitle refers to the "State of the World" report, published annually since 1984 by the Worldwatch Institute. Lomborg designated the report "one of the best-researched and academically most ambitious environmental policy publications," but criticized it for using short-term trends to predict disastrous consequences, in cases where long-term trends would not support the same conclusions.
In establishing its arguments, "The Skeptical Environmentalist" examined a wide range of issues in the general area of environmental studies, including environmental economics and science, and came to an equally broad set of conclusions and recommendations. Lomborg's work directly challenged popular examples of green concerns by interpreting data from some 3,000 assembled sources. The author suggested that environmentalists diverted potentially beneficial resources to less deserving environmental issues in ways that were economically damaging. Much of the book's methodology and integrity became the subject of criticisms arguing that Lomborg had distorted the fields of research he covered. Support for the book was staunch as well.
"The Litany" comprises very diverse areas where, Lomborg claims, overly pessimistic claims are made and bad policies are implemented as a result. He cites accepted mainstream sources, like the United States government, United Nations agencies and others, preferring global long-term data over regional and short-term statistics.
"The Skeptical Environmentalist" is arranged around four major themes:
Lomborg's main argument is that the vast majority of environmental problems—such as pollution, water shortages, deforestation, and species loss, as well as population growth, hunger, and AIDS—are area-specific and highly correlated with poverty. Therefore, challenges to human prosperity are essentially logistical matters, and can be solved largely through economic and social development. Concerning problems that are more pressing at the global level, such as the depletion of fossil fuels and global warming, Lomborg argues that these issues are often overstated and that recommended policies are often inappropriate if assessed against alternatives.
Lomborg analyzes three major themes: life expectancy, food and hunger, and prosperity, finding that life expectancy and health levels have "dramatically" improved over the past centuries, even though several regions of the world remain threatened, in particular by AIDS. He dismisses Thomas Malthus' theory that increases in the world's population lead to widespread hunger. On the contrary, Lomborg claims that food is widespread, and humanity's daily intake of calories is increasing, and will continue to rise until hunger's eradication, thanks to technological improvements in agriculture. However, Lomborg notes that Africa in particular still produces too little sustenance, an effect he attributes to the continent's dismal economic and political systems. Concerning prosperity, Lomborg argues that wealth, as measured by per capita GDP, should not be the only judging criterion. He points to improvements in education, safety, leisure, and ever more widespread access to consumer goods as signs that prosperity is increasing in most parts of the world.
In this section, Lomborg looks at the world's natural resources and draws a conclusion that contrasts starkly to that of the well known report "The Limits to Growth". First, he analyzes food once more, this time from an ecological perspective, and again claims that most food products are not threatened by human growth. An exception, however, is fish, which continues to be depleted. As a partial solution, Lomborg presents fish farms, which cause a less disruptive impact on the world's oceans. Next, Lomborg looks at forests. He finds no indication of widespread deforestation, and notes that even the Amazon still retains more than 80% of its 1978 tree cover. Lomborg points out that in developing countries, deforestation is linked to poverty and poor economic conditions, so he proposes that economic growth is the best means to tackle the loss of forests.
Concerning energy, Lomborg asserts that oil is not being depleted as fast as is claimed, and that improvements of technology will provide people with fossil fuels for years to come. The author further asserts that many alternatives already exist, and that with time they will replace fossil fuels as an energy source. Concerning other resources, such as metals, Lomborg suggests that based on their price history they are not in short supply. Examining the challenge of collecting sufficient amounts of water, Lomborg says that wars will probably not erupt over water because fighting such wars is not cost-effective (one week of war with the Palestinians, for instance, would cost Israel more than five desalination plants, according to an Israeli officer). Lomborg emphasizes the need for better water management, as water is distributed unequally around the world.
Lomborg considers pollution from different angles. He notes that air pollution in wealthy nations has steadily decreased in recent decades. He finds that air pollution levels are highly linked to economic development, with moderately developed countries polluting most. Again, Lomborg argues that faster growth in emerging countries would help them reduce their air pollution levels. Lomborg suggests that devoting resources to reduce the levels of specific air pollutants would provide the greatest health benefits and save the largest number of lives (per amount of money spent), continuing an already decades-long improvement in air quality in most developed countries. Concerning water pollution, Lomborg notes again that it is connected with economic progress. He also notes that water pollution in major Western rivers decreased rapidly after the use of sewage systems became widespread. Concerning waste, Lomborg notes once again that fears are overblown, as the entire waste produced by the United States in the 21st century could fit into a square 100 feet thick and 28 km along each side, or 0.009% of the total surface of the United States.
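The waste claim can be sanity-checked with one line of arithmetic; the US land-area figure used below is an approximation introduced here, since the book's base figure is not given in the text.

```python
# A 28 km x 28 km square as a share of total US surface area
square_km2 = 28.0 * 28.0        # 784 km^2
us_area_km2 = 9.6e6             # approximate total area; an assumption for this check
share_pct = square_km2 / us_area_km2 * 100
# about 0.008%, consistent with the ~0.009% quoted in the text
```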
In this last section, Lomborg puts forward his main assertion: based on a cost-benefit analysis, the environmental threats to human prosperity are overstated and much of policy response is misguided. As an example, Lomborg cites worries about pesticides and their link to cancer. He argues that such concerns are vastly exaggerated in the public perception, as alcohol and coffee are the foods that create by far the greatest risk of cancer, as opposed to vegetables that have been sprayed with pesticides. Furthermore, if pesticides were not used on fruit and vegetables, their cost would rise, and consequently their consumption would go down, which would cause cancer rates to increase. He goes on to criticize the fear of a vertiginous decline in biodiversity, proposing that 0.7% of species have gone extinct in the last 50 years (as compared to a maximum of 50%, as claimed by some biologists). While Lomborg admits that extinctions are a problem, he asserts that they are not the catastrophe claimed by some, and have little effect on human prosperity.
Lomborg's most contentious assertion, however, involves global warming. From the outset, Lomborg "accepts the reality of man-made global warming", though he refers to a number of uncertainties in the computer simulations of climate change and some aspects of data collection. His main contention involves not the science of global warming but the politics and the policy response to scientific findings. Lomborg points out that, given the amount of greenhouse gas reduction required to combat global warming, the current Kyoto protocol is grossly insufficient. He argues that the economic costs of legislative restrictions that aim to slow or reverse global warming are far higher than the alternative of international coordination. Moreover, he asserts that the cost of combating global warming would be disproportionately shouldered by developing countries. Lomborg proposes that since the Kyoto agreement limits economic activity, developing countries, which suffer most from pollution and poverty, will be perpetually handicapped economically.
Lomborg proposes that the importance of global warming in terms of policy priority is low compared to other policy issues such as fighting poverty, disease and aiding poor countries, which has direct and more immediate impact both in terms of welfare and the environment. He therefore suggests that a global cost-benefit analysis be undertaken before deciding on future measures. The Copenhagen Consensus that Lomborg later organized concluded that combating global warming does have a benefit but its priority compared to other issues is "poor" (ranked 13th) and three projects addressing climate change (optimal carbon tax, the Kyoto protocol and value-at-risk carbon tax), are the least cost-efficient of its proposals.
Lomborg concludes his book by once again reviewing the Litany, and noting that the real state of the world is much better than the Litany claims. According to Lomborg, this discrepancy poses a problem, as it focuses public attention on relatively unimportant issues, while ignoring those that are paramount. In the worst case, "The Skeptical Environmentalist" argues, the global community is pressured to adopt inappropriate policies which have adverse effects on humanity, wasting resources that could be put to better use in aiding poor countries or fighting diseases such as AIDS. Lomborg thus urges us to look at what he calls the true problems of the world, since solving those will also solve the Litany.
"The Skeptical Environmentalist" was controversial even before its English-language release, with anti-publication efforts launched against Cambridge University Press. Once in the public arena, the book elicited strong reactions in scientific circles and in the mainstream media. Opinion was largely polarized. Environmental groups were generally critical.
The January 2002 issue of "Scientific American" contained, under the heading "Misleading Math about the Earth", a set of essays by several scientists, which maintain that Lomborg and "The Skeptical Environmentalist" misrepresent both scientific evidence and scientific opinion. The magazine then refused Lomborg's request to print a lengthy point-by-point rebuttal in his own defence, on the grounds that the 32 pages would have taken a disproportionate share of the month's installment. "Scientific American" allowed Lomborg a one-page defense in the May 2002 edition, and then attempted to remove Lomborg's publication of his complete response online, citing a copyright violation. After receiving much criticism, the magazine published his complete rebuttal on its website, along with the counter rebuttals of John Rennie and John P. Holdren.
"Nature" also published a harsh review of Lomborg's book, in which Stuart Pimm of the Center for Environmental Research and Conservation at Columbia University and Jeff Harvey of the Netherlands Institute of Ecology wrote: "the text employs the strategy of those who, for example, argue that gay men aren't dying of AIDS, that Jews weren't singled out by the Nazis for extermination, and so on." Lomborg has also been criticized for using straw man arguments, with charges that his Litany of environmental doom-mongering does not accurately represent the mainstream views of the contemporary green movement.
The "separately written expert reviews" further detail the various expert opinions. Peter Gleick's assessment, for example, states:
Jerry Mahlman's appraisal of the chapter he was asked to evaluate, states:
David Pimentel, who was repeatedly criticized in the book, also wrote a critical review.
One critical article, "The Skeptical Environmentalist: A Case Study in the Manufacture of News", attributes this media success to its initial, influential supporters:
The media was criticized for the biased selection of reviewers and not informing readers of reviewers' background. Richard C. Bell, writing for Worldwatch noted that the Wall Street Journal, "instead of seeking scientists with a critical perspective," like many publications "put out reviews by people who were closely associated with Lomborg", with the Journal soliciting a review from the Competitive Enterprise Institute's Ronald Bailey, someone "who had earlier written a book called The True State of the World, from which much of Lomborg's claims were taken." Bell also criticized the Washington Post, whose Sunday Book World assigned the book review to Denis Dutton, identified as "a professor of philosophy who lectures on the dangers of pseudoscience at the science faculties of the University of Canterbury in New Zealand", and as the editor of the web site Arts and Letters Daily. Bell noted that:
"The Post did not tell its readers that Dutton's web site features links to the Global Climate Coalition, an anti-Kyoto consortium of oil and coal businesses, and to the messages of Julian Simon --the man whose denial that global warming was occurring apparently gave Lomborg the idea for his book in the first place. It was hardly surprising that Dutton anointed Lomborg's book as 'the most significant work on the environment since the appearance of its polar opposite, Rachel Carson's Silent Spring, in 1962. It's a magnificent achievement.'"
Some critics of "The Skeptical Environmentalist" took issue not with the statistical investigation of Lomborg's Litany, but with the suggestions and conclusions for which they were the foundation. This line of criticism considered the book as a contribution to the policy debate over environment rather than the work of natural science. In a BBC column from August 23, 2001, veteran BBC environmental correspondent Alex Kirby wrote:
Kirby's first concern was not with the extensive research and statistical analysis, but the conclusions drawn from them:
On September 5, 2001, at a Lomborg book reading in England, British environmentalist author Mark Lynas threw a cream pie in Lomborg's face. In a September 9, 2001, article, "Why I pied Lomborg", Lynas stated:
The December 12, 2001 issue of "Grist" devoted an issue to "The Skeptical Environmentalist", with a series of essays from various scientists challenging individual sections. A separate article examining the book's overall approach took issue with the framing of Lomborg's conclusions:
Addressing the apparent difficulty of scientists opposing "The Skeptical Environmentalist" in criticizing the book strictly on the basis of statistics and challenging the conclusions about areas of environmental sciences that were drawn from them, Lynas contends:
Influential UK newsweekly "The Economist" weighed in at the start with heavy support, publishing an advance essay by Lomborg in which he detailed his Litany, and following up with a highly favorable review and supportive coverage. It stated that "This is one of the most valuable books on public policy—not merely environmental policy—to have been written for the intelligent general reader in the past ten years... "The Skeptical Environmentalist" is a triumph."
Among the general media, "The New York Times" stated that "The primary target of the book, a substantial work of analysis with almost 3,000 footnotes, are statements made by environmental organizations like the Worldwatch Institute, the World Wildlife Fund and Greenpeace." The "Wall Street Journal" deemed Lomborg's work "a superbly documented and readable book." A review in "The Washington Post" claimed that "Bjørn Lomborg's good news about the environment is bad news for Green ideologues. His richly informative, lucid book is now the place from which environmental policy decisions must be argued. In fact, "The Skeptical Environmentalist" is the most significant work on the environment since the appearance of its polar opposite, Rachel Carson's "Silent Spring", in 1962. It's a magnificent achievement." "Rolling Stone" wrote that "Lomborg pulls off the remarkable feat of welding the techno-optimism of the Internet age with a lefty's concern for the fate of the planet."
In March 2003 the "New York Law School Law Review" published an examination of the critical reviews of "The Skeptical Environmentalist" in "Scientific American", "Nature" and "Science" by law professor David Shoenbrod and Christi Wilson, then a senior law student, both of New York Law School. The authors take the perspective of a court faced with an argument against hearing an expert witness, in order to evaluate whether Lomborg was credible as an expert and whether his testimony was confined to his expertise. They classify the types of criticisms leveled at Lomborg and his arguments, and proceed to evaluate each of the reasons given for disqualifying him. They conclude that a court should accept Lomborg as a credible expert in the field of statistics, and that his testimony was appropriately restricted to his area of expertise. Shoenbrod and Wilson note that Lomborg's factual conclusions may not be correct, nor his policy proposals effective, but argue that his criticisms should be addressed, not merely dismissed out of hand.
The Union of Concerned Scientists and the Danish Committees on Scientific Dishonesty raised concern about the responses of certain sections of the scientific community to a peer-reviewed book published under the category of environmental economics. The groups worried that the reception of Lomborg's work amounted to a politicization of science by scientists. This unease was reflected in the involvement of the Union of Concerned Scientists and the Danish Committees on Scientific Dishonesty in "When scientists politicize science: making sense of controversy over The Skeptical Environmentalist", where Roger A. Pielke argued:
In "Green with Ideology - The hidden agenda behind the "scientific" attacks on Bjørn Lomborg’s controversial book, The Skeptical Environmentalist", Ronald Bailey stated that "The bitter anti-Lomborg campaign reveals the hidden crisis of what we might call ideological environmentalism." He further wrote:
After the publication of "The Skeptical Environmentalist", Lomborg was accused of scientific dishonesty. Several environmental scientists brought a total of three complaints against Lomborg to the Danish Committees on Scientific Dishonesty (DCSD), a body under Denmark's Ministry of Science, Technology and Innovation. Lomborg was asked whether he regarded the book as a "debate" publication, and thereby not under the purview of the DCSD, or as a scientific work; he chose the latter, clearing the way for the inquiry that followed. The charges claimed that "The Skeptical Environmentalist" contained deliberately misleading data and flawed conclusions. Due to the similarity of the complaints, the DCSD decided to proceed on the three cases under one investigation.
On January 6, 2003, a mixed DCSD ruling was released, in which the Committees decided that "The Skeptical Environmentalist" was scientifically dishonest, but Lomborg was innocent of wrongdoing due to a lack of expertise in the relevant fields:
The DCSD cited "The Skeptical Environmentalist" for:
On February 13, 2003, Lomborg filed a complaint against the DCSD's decision with the Ministry of Science, Technology and Innovation (MSTI), which oversees the group.
On December 17, 2003, the Ministry found that the DCSD had made a number of procedural errors, including:
The Ministry remitted the case to the DCSD. In doing so the Ministry indicated that it regarded the DCSD's previous findings of scientific dishonesty in regard to the book as invalid. The Ministry also instructed the DCSD to decide whether to reinvestigate. On March 12, 2004, the Committee formally decided not to act further on the complaints, reasoning that renewed scrutiny would, in all likelihood, result in the same conclusion.
The original DCSD decision about Lomborg provoked a petition among Danish academics from 308 scientists, many from the social sciences, who criticised the DCSD's investigative methods.
Another group of Danish scientists collected signatures in support of the DCSD. The 640 signatures in this second petition came almost exclusively from the medical and natural sciences, and included Nobel laureate in Chemistry Jens Christian Skou, former university rector Kjeld Møllgård, and professor Poul Harremoës from the Technical University of Denmark.
A group of scientists published an article in 2005 in the "Journal of Information Ethics", in which they concluded that most criticism against Lomborg was unjustified, and that the scientific community had misused their authority to suppress the author.
The claim that allegations against Lomborg were unsubstantiated was challenged in the next issue of "Journal of Information Ethics" by Kåre Fog, one of the original DCSD petitioners. Fog reasserted his contention that, despite the ministry's decision, most of the accusations against Lomborg were valid, and rejected what he called "the Galileo hypothesis", which portrays Lomborg as a brave young man confronting an entrenched opposition.
Fog has established a curated catalogue of criticisms against Lomborg, which includes a section for each page of every "Skeptical Environmentalist" chapter. Fog enumerates and details what he believes to be flaws and errors in Lomborg's work. He explicitly indicates if particular mistakes may have been made deliberately by Lomborg, in order to mislead. According to Fog, since none of his denunciations of Lomborg's work have been proven false, the suspicion that Lomborg has misled deliberately is maintained. Lomborg has written a full text published online as Godehetens Pris (Danish) that goes through the main allegations put forward by Fog and others. | https://en.wikipedia.org/wiki?curid=31492 |
Tricyclic antidepressant
Tricyclic antidepressants (TCAs) are a class of medications that are used primarily as antidepressants. TCAs were discovered in the early 1950s and were marketed later in the decade. They are named after their chemical structure, which contains three rings of atoms. Tetracyclic antidepressants (TeCAs), which contain four rings of atoms, are a closely related group of antidepressant compounds.
Although TCAs are sometimes prescribed for depressive disorders, they have been largely replaced in clinical use in most parts of the world by newer antidepressants such as selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs) and norepinephrine reuptake inhibitors (NRIs). Adverse effects have been found to be of a similar level between TCAs and SSRIs.
The TCAs were developed amid the "explosive birth" of psychopharmacology in the early 1950s. The story begins with the synthesis of chlorpromazine in December 1950 by Rhône-Poulenc's chief chemist, Paul Charpentier, from synthetic antihistamines developed by Rhône-Poulenc in the 1940s. Its psychiatric effects were first noticed at a hospital in Paris in 1952. The first widely used psychiatric drug, by 1955 it was already generating significant revenue as an antipsychotic. Research chemists quickly began to explore other derivatives of chlorpromazine.
The first TCA reported for the treatment of depression was imipramine, a dibenzazepine analogue of chlorpromazine code-named G22355. It was not originally targeted for the treatment of depression. The drug's tendency to induce manic effects was "later described as 'in some patients, quite disastrous'". The paradoxical observation of a sedative inducing mania led to testing with depressed patients. The first trial of imipramine took place in 1955 and the first report of antidepressant effects was published by Swiss psychiatrist Roland Kuhn in 1957. Some testing of Geigy's imipramine, then known as Tofranil, took place at the Münsterlingen Hospital near Konstanz. Geigy later became Ciba-Geigy and eventually Novartis.
Dibenzazepine derivatives are described in U.S. patent 3,074,931 issued 1963-01-22 by assignment to Smith Kline & French Laboratories. The compounds described share a tricyclic backbone different from the backbone of the TCA amitriptyline.
Merck introduced the second member of the TCA family, amitriptyline (Elavil), in 1961. This compound has a different three-ring structure than imipramine.
The TCAs are used primarily in the clinical treatment of mood disorders such as major depressive disorder (MDD), dysthymia, and treatment-resistant variants. They are also used in the treatment of a number of other medical disorders, including: anxiety disorders such as generalized anxiety disorder (GAD), social phobia (SP), also known as social anxiety disorder (SAD), obsessive-compulsive disorder (OCD), and panic disorder (PD); post-traumatic stress disorder (PTSD); body dysmorphic disorder (BDD); eating disorders like anorexia nervosa and bulimia nervosa; certain personality disorders such as borderline personality disorder (BPD); neurological disorders such as attention-deficit hyperactivity disorder (ADHD) and Parkinson's disease; chronic pain, neuralgia or neuropathic pain, and fibromyalgia; headache or migraine; smoking cessation; Tourette syndrome; trichotillomania; irritable bowel syndrome (IBS); interstitial cystitis (IC); nocturnal enuresis (NE); narcolepsy; insomnia; pathological crying and/or laughing; chronic hiccups; ciguatera poisoning; and as an adjunct in schizophrenia.
For many years the TCAs were the first choice for pharmacological treatment of clinical depression. Although they are still considered to be highly effective, they have been increasingly replaced by antidepressants with an improved safety and side effect profile, such as the SSRIs and other newer antidepressants such as the reversible MAOI moclobemide. However, tricyclic antidepressants are possibly more effective in treating melancholic depression than other antidepressant drug classes. Newer antidepressants are thought to have fewer and less severe side effects and are also thought to be less likely to result in injury or death if used in a suicide attempt, as the margin between the doses required for clinical treatment and a potentially lethal overdose (see therapeutic index) is far wider in comparison.
Nonetheless, the TCAs are commonly prescribed for treatment-resistant depression that has failed to respond to therapy with newer antidepressants; they also tend to have fewer emotional-blunting and sexual side effects than SSRI antidepressants. They are not considered addictive and are somewhat preferable to the monoamine oxidase inhibitors (MAOIs). The side effects of the TCAs usually come to prominence before the therapeutic benefits against depression and/or anxiety do, and for this reason they may be somewhat dangerous, as volition can be increased before mood improves, possibly giving the patient a greater desire to attempt or commit suicide.
The TCAs were used in the past in the clinical treatment of ADHD, though they are not typically used anymore, having been replaced by more effective agents with fewer side effects such as atomoxetine (Strattera, Tomoxetin) and stimulants like methylphenidate (Ritalin, Focalin, Concerta) and amphetamine (Adderall, Attentin, Dexedrine, Vyvanse). ADHD is thought to be caused by an insufficiency of dopamine and norepinephrine activity in the prefrontal cortex of the brain. Most of the TCAs inhibit the reuptake of norepinephrine, though not dopamine, and as a result they show some efficacy in treating the disorder. Notably, the TCAs are more effective in treating the behavioral aspects of ADHD than the cognitive deficits, as they help limit hyperactivity and impulsivity but have little to no benefit on attention.
The TCAs show efficacy in the clinical treatment of a number of different types of chronic pain, notably neuralgia or neuropathic pain and fibromyalgia. The precise mechanism of action in explanation of their analgesic efficacy is unclear, but it is thought that they indirectly modulate the opioid system in the brain downstream via serotonergic and noradrenergic neuromodulation, among other properties. They are also effective in migraine prophylaxis, though not in the instant relief of an acute migraine attack. They may also be effective to prevent chronic tension headaches.
Many side effects may be related to the antimuscarinic properties of the TCAs. Such side effects are relatively common and may include dry mouth, dry nose, blurry vision, lowered gastrointestinal motility or constipation, urinary retention, cognitive and/or memory impairment, and increased body temperature.
Other side effects may include drowsiness, anxiety, emotional blunting (apathy/anhedonia), confusion, restlessness, dizziness, akathisia, hypersensitivity, changes in appetite and weight, sweating, muscle twitches, weakness, nausea and vomiting, hypotension, tachycardia, and rarely, irregular heart rhythms. Twitching, hallucinations, delirium and coma are also some of the toxic effects caused by overdose. Rhabdomyolysis or muscle breakdown has been rarely reported with this class of drugs as well.
Tolerance to the adverse effects of these drugs often develops if treatment is continued. Side effects may also be less troublesome if treatment is initiated with low doses that are then gradually increased, although this may also delay the onset of beneficial effects.
TCAs can behave like class 1A antiarrhythmics; as such, they can theoretically terminate ventricular fibrillation, decrease cardiac contractility, and increase collateral blood circulation to ischemic heart muscle. Naturally, in overdose they can be cardiotoxic, prolonging heart rhythms and increasing myocardial irritability.
Research has also revealed evidence of a link between long-term use of anticholinergic medications such as TCAs and dementia. Although many studies have investigated this link, one study taking a long-term approach (over seven years) was the first to find that dementias associated with anticholinergics may not be reversible even years after drug use stops. Anticholinergic drugs block the action of acetylcholine, which transmits messages in the nervous system. In the brain, acetylcholine is involved in learning and memory.
Antidepressants in general may produce a withdrawal syndrome. However, since the term "withdrawal" has been linked to addiction to recreational drugs like opioids, the medical profession and pharmaceutical public relations prefer that a different term be used, hence "discontinuation syndrome". Discontinuation symptoms can be managed by a gradual reduction in dosage over a period of weeks or months to minimise symptoms.
In tricyclics, discontinuation syndrome symptoms include anxiety, insomnia, headache, nausea, malaise, or motor disturbance.
TCA overdose is a significant cause of fatal drug poisoning. The severe morbidity and mortality associated with these drugs is well documented, owing to their cardiovascular and neurological toxicity. Additionally, overdose is a serious problem in the pediatric population due to the drugs' inherent toxicity and their availability in the home when prescribed for bed-wetting and depression. In the event of a known or suspected overdose, medical assistance should be sought immediately.
A number of treatments are effective in a TCA overdose.
A TCA overdose is especially dangerous because the drug is rapidly absorbed from the GI tract in the alkaline conditions of the small intestine. As a result, toxicity often becomes apparent within the first hour after an overdose. However, symptoms may take several hours to appear if a mixed overdose has caused delayed gastric emptying.
Many of the initial signs are those associated with the anticholinergic effects of TCAs, such as dry mouth, blurred vision, urinary retention, constipation, dizziness, and emesis (vomiting). Due to the location of norepinephrine receptors all over the body, many physical signs are also associated with a TCA overdose:
Treatment of TCA overdose depends on severity of symptoms:
Initially, gastric decontamination of the patient is achieved by administering, either orally or via a nasogastric tube, activated charcoal pre-mixed with water, which adsorbs the drug in the gastrointestinal tract (most useful if given within 2 hours of drug ingestion). Other decontamination methods such as stomach pumps, gastric lavage, whole bowel irrigation, or (ipecac induced) emesis, are "not" recommended in TCA poisoning.
If there is metabolic acidosis, intravenous infusion of sodium bicarbonate is recommended by Toxbase.org, the UK and Ireland poisons advice database (TCAs are protein bound and become less bound in more acidic conditions, so by reversing the acidosis, protein binding increases and bioavailability thus decreases – the sodium load may also help to reverse the Na+ channel blocking effects of the TCA).
The TCAs are highly metabolised by the cytochrome P450 (CYP) hepatic enzymes. Drugs that inhibit cytochrome P450 (for example cimetidine, methylphenidate, fluoxetine, antipsychotics, and calcium channel blockers) may decrease the TCAs' metabolism, leading to increases in their blood concentrations and accompanying toxicity. A major factor distinguishing SSRIs from one another is their inhibition of select CYP enzymes. Drugs that prolong the QT interval, including antiarrhythmics such as quinidine, the antihistamines astemizole and terfenadine, and some antipsychotics, may increase the chance of ventricular dysrhythmias. TCAs may enhance the response to alcohol and the effects of barbiturates and other CNS depressants. Side effects may also be enhanced by other drugs that have antimuscarinic properties.
The majority of the TCAs act primarily as SNRIs by blocking the serotonin transporter (SERT) and the norepinephrine transporter (NET), which results in an elevation of the synaptic concentrations of these neurotransmitters, and therefore an enhancement of neurotransmission. Notably, with the sole exception of amineptine, the TCAs have negligible affinity for the dopamine transporter (DAT), and therefore have no efficacy as dopamine reuptake inhibitors (DRIs). Both serotonin and norepinephrine have been highly implicated in depression and anxiety, and it has been shown that facilitation of their activity has beneficial effects on these mental disorders.
In addition to their reuptake inhibition, many TCAs also have high affinity as antagonists at the 5-HT2 (5-HT2A and 5-HT2C), 5-HT6, 5-HT7, α1-adrenergic, and NMDA receptors, and as agonists at the sigma receptors (σ1 and σ2), some of which may contribute to their therapeutic efficacy, as well as their side effects. The TCAs also have varying but typically high affinity for antagonising the H1 and H2 histamine receptors, as well as the muscarinic acetylcholine receptors. As a result, they also act as potent antihistamines and anticholinergics. These properties are often beneficial in antidepressants, especially with comorbid anxiety, as it provides a sedative effect.
Most, if not all, of the TCAs also potently inhibit sodium channels and L-type calcium channels, and therefore act as sodium channel blockers and calcium channel blockers, respectively. The former property is responsible for the high mortality rate upon overdose seen with the TCAs via cardiotoxicity. It may also be involved in their efficacy as analgesics, however.
In summary, tricyclic antidepressants can act through NMDA antagonism, opioidergic effects, sodium, potassium and calcium channel blocking, through interfering with the reuptake of serotonin and acting as antagonists to SHAM (serotonin, histamine, alpha, muscarinic) receptors. Thus their dangerous side effect profile limits their use in daily practice.
The binding profiles of various TCAs and some metabolites, in terms of their affinities for various receptors and transporters, are as follows:
With the exception of the sigma receptors, the TCAs act as antagonists or inverse agonists of the receptors and as inhibitors of the transporters. Tianeptine is included in this list due to it technically being a TCA, but with a vastly different pharmacology.
Therapeutic levels of TCAs are generally in the range of about 100 to 300 ng/mL, or 350 to 1,100 nM. Plasma protein binding is generally 90% or greater.
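The relationship between the two therapeutic ranges above is a simple mass-to-molar unit conversion. As a rough illustrative sketch (not a clinical tool), assuming amitriptyline's molar mass of about 277.4 g/mol — the exact molar mass depends on which TCA is being measured:

```python
def ng_per_ml_to_nm(conc_ng_ml: float, molar_mass_g_mol: float) -> float:
    """Convert a plasma concentration from ng/mL to nmol/L (nM).

    1 ng/mL equals 1 ug/L; dividing by the molar mass (g/mol) gives
    umol/L, and multiplying by 1000 gives nmol/L.
    """
    return conc_ng_ml / molar_mass_g_mol * 1000

# Illustrative assumption: amitriptyline, molar mass ~277.4 g/mol.
MW_AMITRIPTYLINE = 277.4

low = ng_per_ml_to_nm(100, MW_AMITRIPTYLINE)   # roughly 360 nM
high = ng_per_ml_to_nm(300, MW_AMITRIPTYLINE)  # roughly 1,080 nM
print(round(low), round(high))
```

The results land within the quoted 350 to 1,100 nM range; TCAs with lighter or heavier molecules shift the endpoints accordingly.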
There are two major groups of TCAs in terms of chemical structure, which most, but not all, TCAs fall into. The groupings are based on the tricyclic ring system. They are the dibenzazepines (imipramine, desipramine, clomipramine, trimipramine, lofepramine) and the dibenzocycloheptadienes (amitriptyline, nortriptyline, protriptyline, butriptyline). Minor TCA groups based on ring system include the dibenzoxepins (doxepin), the dibenzothiepines (dosulepin), and the dibenzoxazepines (amoxapine). In addition to classification based on the ring system, TCAs can also be usefully grouped based on the number of substitutions of the side chain amine. These groups include the tertiary amines (imipramine, clomipramine, trimipramine, amitriptyline, butriptyline, doxepin, dosulepin) and the secondary amines (desipramine, nortriptyline, protriptyline). Lofepramine is technically a tertiary amine, but acts largely as a prodrug of desipramine, a secondary amine, and hence is more similar in profile to the secondary amines than to the tertiary amines. Amoxapine does not have the TCA side chain and hence is neither a tertiary nor secondary amine, although it is often grouped with the secondary amines due to sharing more in common with them.
A very small number of cases involving non-medical use of antidepressants have been reported over the past 30 years. According to the US government classification of psychiatric medications, TCAs are "non-abusable" and generally have low abuse potential. Nonetheless, due to their atypical mechanisms of action (dopamine reuptake inhibition and μ-opioid receptor agonism, respectively), amineptine and tianeptine are the two TCAs with the highest addiction and abuse potential. Despite tianeptine's recreational value, many people use it as a nootropic and follow other countries' usage guidelines, such as those of France, as a way to treat their depression when other antidepressants have not worked. The French prescription guidelines are 12.5 mg three times a day, not to exceed 50 mg in one day. Tianeptine has no recreational value when taken at that dosage and kept under 50 mg a day. Many people report that tianeptine has treated their depression when SSRIs or SNRIs have not. Several cases of the misuse of amitriptyline, alone or together with methadone or by other drug-dependent patients, and of dosulepin, with alcohol or by methadone patients, have been reported.
Those that preferentially inhibit the reuptake of serotonin (by at least 10-fold over norepinephrine) include:
Those that preferentially inhibit the reuptake of norepinephrine (by at least 10-fold over serotonin) include:
Whereas either fairly balanced reuptake inhibitors of serotonin and norepinephrine or unspecified inhibitors include:
And the following are TCAs that act via main mechanisms other than serotonin or norepinephrine reuptake inhibition:
Legend:
Ted Williams
Theodore Samuel Williams (August 30, 1918 – July 5, 2002) was an American professional baseball player and manager. He played his entire 19-year Major League Baseball (MLB) career, primarily as a left fielder, for the Boston Red Sox from 1939 to 1960; his career was interrupted by military service during World War II and the Korean War. Nicknamed "Teddy Ballgame", "The Kid", "The Splendid Splinter", and "The Thumper", Williams is regarded as one of the greatest hitters in baseball history.
Williams was a nineteen-time All-Star, a two-time recipient of the American League (AL) Most Valuable Player Award, a six-time AL batting champion, and a two-time Triple Crown winner. He finished his playing career with a .344 batting average, 521 home runs, and a .482 on-base percentage, the highest of all time. His career batting average is the highest of any MLB player whose career was played primarily in the live-ball era, and ranks tied for 7th all-time (with Billy Hamilton).
Born and raised in San Diego, Williams played baseball throughout his youth. After joining the Red Sox in 1939, he immediately emerged as one of the sport's best hitters. In 1941, Williams posted a .406 batting average; he is the last MLB player to bat over .400 in a season. He followed this up by winning his first Triple Crown in 1942. Williams was required to interrupt his baseball career in 1943 to serve three years in the United States Navy and Marine Corps during World War II. Upon returning to MLB in 1946, Williams won his first AL MVP Award and played in his only World Series. In 1947, he won his second Triple Crown. Williams was returned to active military duty for portions of the 1952 and 1953 seasons to serve as a Marine combat aviator in the Korean War. In 1957 and 1958 at the ages of 39 and 40, respectively, he was the AL batting champion for the fifth and sixth time.
Williams retired from playing in 1960. He was inducted into the Baseball Hall of Fame in 1966, in his first year of eligibility. Williams managed the Washington Senators/Texas Rangers franchise from 1969 to 1972. An avid sport fisherman, he hosted a television program about fishing, and was inducted into the IGFA Fishing Hall of Fame. Williams' involvement in the Jimmy Fund helped raise millions in dollars for cancer care and research. In 1991 President George H. W. Bush presented Williams with the Presidential Medal of Freedom, the highest civilian award bestowed by the United States government. He was selected for the Major League Baseball All-Time Team in 1997 and the Major League Baseball All-Century Team in 1999.
Williams was born in San Diego on August 30, 1918, and named Theodore Samuel Williams after former president Theodore Roosevelt as well as his father, Samuel Stuart Williams. He later amended his birth certificate, removing his middle name, which he claimed originated from a maternal uncle (whose actual name was Daniel Venzor), who had been killed in World War I. His father was a soldier, sheriff, and photographer from New York, while his mother, May Venzor, a Mexican-American from El Paso, Texas, was an evangelist and lifelong soldier in the Salvation Army. Williams resented his mother's long hours working in the Salvation Army, and Williams and his brother cringed when she took them to the Army's street-corner revivals.
Williams' paternal ancestors were a mix of Welsh, English, and Irish. The maternal, Mexican side of Williams' family was quite diverse, having Spanish (Basque), Russian, and American Indian roots. Of his Mexican ancestry he said that "If I had my mother's name, there is no doubt I would have run into problems in those days, [considering] the prejudices people had in Southern California".
Williams lived in San Diego's North Park neighborhood (4121 Utah Street). At the age of 8, he was taught how to throw a baseball by his uncle, Saul Venzor. Saul was one of his mother's four brothers, as well as a former semi-professional baseball player who had pitched against Babe Ruth, Lou Gehrig, and Joe Gordon in an exhibition game. As a child, Williams' heroes were Pepper Martin of the St. Louis Cardinals and Bill Terry of the New York Giants. Williams graduated from Herbert Hoover High School in San Diego, where he played baseball as a pitcher and was the star of the team. During this time, he also played American Legion Baseball, later being named the 1960 American Legion Baseball Graduate of the Year.
Though he had offers from the St. Louis Cardinals and the New York Yankees while he was still in high school, his mother thought he was too young to leave home, so he signed up with the local minor league club, the San Diego Padres.
Throughout his career, Williams stated his goal was to have people point to him and remark, "There goes Ted Williams, the greatest hitter who ever lived."
Williams played back-up behind Vince DiMaggio and Ivey Shiver on the (then) Pacific Coast League San Diego Padres. While in the Pacific Coast League in 1936, Williams met future teammates and friends Dom DiMaggio and Bobby Doerr, who were on the league's San Francisco Seals. When Shiver announced he was quitting to become a high school football coach in Savannah, Georgia, the job, by default, was open for Williams. Williams posted a .271 batting average in 107 at bats over 42 games for the Padres in 1936. Unknown to Williams, he had caught the eye of the Boston Red Sox's general manager, Eddie Collins, while Collins was scouting Bobby Doerr and the shortstop George Myatt in August 1936. Collins later explained, "It wasn't hard to find Ted Williams. He stood out like a brown cow in a field of white cows." In the 1937 season, after graduating from Hoover High in the winter, Williams finally broke into the line-up on June 22, when he hit an inside-the-park home run to help the Padres win 3–2. The Padres went on to win the PCL title, while Williams ended up hitting .291 with 23 home runs. Meanwhile, Collins kept in touch with Padres general manager Bill Lane, calling him twice during the season. In December 1937, during the winter meetings, Lane and Collins made a deal sending Williams to the Boston Red Sox in exchange for $35,000, two major leaguers, Dom D'Allessandro and Al Niemiec, and two minor leaguers.
In 1938, the 19-year-old Williams was 10 days late to spring training camp in Sarasota, Florida, because a flood in California had blocked the railroads. Williams had to borrow $200 from a bank to make the trip from San Diego to Sarasota. Also during spring training, Williams was nicknamed "The Kid" by Red Sox equipment manager Johnny Orlando, who said, after Williams arrived in Sarasota for the first time, "'The Kid' has arrived". Orlando still called Williams "The Kid" 20 years later, and the nickname stuck with Williams for the rest of his life. Williams remained in major league spring training for about a week before being sent to the Double-A Minneapolis Millers. While in the Millers' training camp that spring, Williams met Rogers Hornsby, who had hit over .400 three times, including a .424 average in 1924, and was coaching the Millers for the spring. Hornsby gave Williams useful advice, including to "get a good pitch to hit". Talking with the game's greats would become a pattern for Williams, who talked with Hugh Duffy, who hit .438 in 1894, and Bill Terry, who hit .401 in 1930, and argued with Ty Cobb that a batter should hit up on the ball, as opposed to Cobb's view that a batter should hit down on it.
While in Minnesota, Williams quickly became the team's star. He collected his first hit in the Millers' first game of the season, and his first and second home runs during his third game. Both were inside-the-park home runs, with the second carrying on the fly to a center field fence. Williams later had a 22-game hitting streak that lasted from Memorial Day through mid-June. While the Millers finished sixth in an eight-team race, Williams ended up hitting .366 with 46 home runs and 142 RBIs. He won the American Association's Triple Crown and finished second in the voting for Most Valuable Player.
Williams came to spring training three days late in 1939, delayed by his drive from California to Florida and by respiratory problems, the latter of which would plague him for the rest of his career. In the winter, the Red Sox traded right fielder Ben Chapman to the Cleveland Indians to make room for Williams on the roster, even though Chapman had hit .340 in the previous season. This led "Boston Globe" sports journalist Gerry Moore to quip, "Not since Joe DiMaggio broke in with the Yankees by "five for five" in St. Petersburg in 1936 has any baseball rookie received the nationwide publicity that has been accorded this spring to Theodore Francis Williams". Williams inherited Chapman's number 9 on his uniform, as opposed to the number 5 he had worn in the previous spring training. He made his major league debut against the New York Yankees on April 20, going 1-for-4 against Yankee pitcher Red Ruffing. This was the only game that featured both Williams and Lou Gehrig playing against one another. In his first series at Fenway Park, Williams hit a double, a home run, and a triple, the first two against Cotton Pippen, who had given Williams his first professional strikeout while Williams was in San Diego. By July, Williams was hitting just .280 but leading the league in RBIs. Johnny Orlando, now Williams' friend, then gave Williams a quick pep talk, telling Williams that he should hit .335 with 35 home runs and drive in 150 runs. Williams said he would buy Orlando a Cadillac if this all came true. Williams ended up hitting .327 with 31 home runs and 145 RBIs, leading the league in the latter category, the first rookie to lead the league in RBIs, and finishing fourth in MVP voting. He also led the AL in walks with 107, a rookie record. Even though there was not yet a Rookie of the Year award in 1939, Babe Ruth declared Williams to be the Rookie of the Year, which Williams later said was "good enough for me".
Williams' pay doubled in 1940, going from $5,000 to $10,000. A new bullpen was added in right field of Fenway Park, reducing the distance from home plate from 400 feet to 380 feet; it was nicknamed "Williamsburg" because the addition was "obviously designed for Williams". Williams was then switched from right field to left field, as there would be less sun in his eyes and it would give Dom DiMaggio a chance to play. Finally, Williams was flip-flopped in the batting order with the great slugger Jimmie Foxx, with the idea that Williams would get more pitches to hit. Pitchers, though, were not afraid to walk him to get to the 33-year-old Foxx, and after that the 34-year-old Joe Cronin, the player-manager. Williams also made his first of 16 All-Star Game appearances in 1940, going 0-for-2. Although Williams hit .344, his power and runs batted in were down from the previous season, with 23 home runs and 113 RBIs. Williams also caused a controversy in mid-August when he called his salary "peanuts" and said he hated the city of Boston and reporters, leading reporters to lash back at him, saying that he should be traded. Williams said that the "only real fun" he had in 1940 was being able to pitch once, on August 24, when he pitched the last two innings of a 12–1 loss to the Detroit Tigers, allowing one earned run on three hits while striking out one batter, Rudy York.
In the second week of spring training in 1941, Williams broke a bone in his right ankle, limiting him to pinch hitting for the first two weeks of the season. Bobby Doerr later claimed that the injury was the foundation of Williams' season, as it forced him to put less pressure on his right foot for the rest of the season. Against the Chicago White Sox on May 7, in extra innings, Williams told the Red Sox pitcher, Charlie Wagner, to hold the White Sox, since he was going to hit a home run. In the 11th inning, Williams' prediction came true, as he hit a big blast to help the Red Sox win. The home run is still considered to be the longest ever hit in the old Comiskey Park. Williams' average slowly climbed in the first half of May, and on May 15 he started a 22-game hitting streak. From May 17 to June 1, Williams batted .536, with his season average going above .400 on May 25 and then continuing up to .430. By the All-Star break, Williams was hitting .406 with 62 RBIs and 16 home runs.
In the 1941 All-Star Game, Williams batted fourth behind Joe DiMaggio, who was in the midst of his record-breaking hitting streak, having hit safely in 48 consecutive games. In the fourth inning Williams doubled to drive in a run. With the National League (NL) leading 5–2 in the eighth inning, Williams struck out in the middle of an American League (AL) rally. In the ninth inning the AL still trailed 5–3; Ken Keltner and Joe Gordon singled, and Cecil Travis walked to load the bases. DiMaggio grounded to the infield and Billy Herman, attempting to complete a double play, threw wide of first base, allowing Keltner to score. With the score 5–4 and runners on first and third, Williams homered with his eyes closed to secure a 7–5 AL win. Williams later said that that game-winning home run "remains to this day the most thrilling hit of my life".
In late August, Williams was hitting .402. Williams said that "just about everybody was rooting for me" to hit .400 in the season, including Yankee fans, who gave pitcher Lefty Gomez a "hell of a boo" after walking Williams with the bases loaded after Williams had gotten three straight hits one game in September. In mid-September, Williams was hitting .413, but dropped a point a game from then on. Before the final two games on September 28, a doubleheader against the Philadelphia Athletics, he was batting .39955, which would have been officially rounded up to .400. Red Sox manager Joe Cronin offered him the chance to sit out the final day, but he declined. "If I'm going to be a .400 hitter", he said at the time, "I want more than my toenails on the line." Williams went 6-for-8 on the day, finishing the season at .406. (Sacrifice flies were counted as at-bats in 1941; under today's rules, Williams would have hit between .411 and .419, based on contemporaneous game accounts.) Philadelphia fans ran out on the field to surround Williams after the game, forcing him to protect his hat from being stolen; he was helped into the clubhouse by his teammates. Along with his .406 average, Williams also hit 37 home runs and batted in 120 runs, missing the triple crown by five RBI.
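The rounding at stake on that final day can be checked with a few lines of arithmetic. This is an illustrative sketch using the commonly cited figures (179 hits in 448 at-bats entering the doubleheader, then 6-for-8 on the day):

```python
def batting_average(hits: int, at_bats: int) -> float:
    """Batting average rounded to the customary three decimal places."""
    return round(hits / at_bats, 3)

# Commonly cited figures entering the season-ending doubleheader:
# 179 hits in 448 at-bats gives .39955, which rounds up to .400.
before = batting_average(179, 448)

# Williams went 6-for-8 on the day, finishing 185-for-456 (.406).
after = batting_average(185, 456)

print(before, after)
```

Sitting out would have left Williams with an official .400 only by rounding; playing raised the unrounded figure comfortably above the mark.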
Williams' 1941 season is often considered the best offensive season of all time, though the MVP award went to DiMaggio. The .406 batting average—his first of six batting championships—is still the highest single-season average in Red Sox history and the highest in the major leagues since 1924, and no major league player has hit over .400 for a season since while averaging at least 3.1 plate appearances per game. ("If I had known hitting .400 was going to be such a big deal", he quipped in 1991, "I would have done it again.") Williams' on-base percentage of .553 and slugging percentage of .735 that season are both also the highest single-season marks in Red Sox history. The .553 OBP stood as a major league record until it was broken by Barry Bonds in 2002, and his .735 slugging percentage was the highest mark in the major leagues between 1932 and 1994. His OPS of 1.287 that year, a Red Sox record, was the highest in the major leagues between 1923 and 2001. Despite playing in only 143 games that year, Williams led the league with 135 runs scored and 37 home runs, and he finished third with 335 total bases, the most home runs, runs scored, and total bases by a Red Sox player since Jimmie Foxx in 1938. Williams placed second in MVP voting; DiMaggio won, 291 votes to 254, on the strength of his record-breaking 56-game hitting streak and league-leading 125 RBI.
In January 1942, after World War II began, Williams was drafted into the military, being put into Class 1-A. A friend of Williams suggested that Williams see the advisor of the governor's Selective Service Appeal Agent, since Williams was the sole support of his mother, arguing that Williams should not have been placed in Class 1-A, and said Williams should be reclassified to Class 3-A. Williams was reclassified to 3-A ten days later. Afterwards, the public reaction was extremely negative, even though the baseball book "Season of '42" states only four All-Stars and one first-line pitcher entered military service during the 1942 season. (Many more MLB players would enter service during the 1943 season.)
Quaker Oats stopped sponsoring Williams, and Williams, who previously had eaten Quaker products "all the time", never "[ate] one since" the company stopped sponsoring him.
Despite the trouble with the draft board, Williams had a new salary of $30,000 in 1942. In the season, Williams won the Triple Crown, with a .356 batting average, 36 home runs, and 137 RBIs. On May 21, Williams also hit his 100th career home run. He was the third Red Sox player to hit 100 home runs with the team, following his teammates Jimmie Foxx and Joe Cronin. Despite winning the Triple Crown, Williams came in second in the MVP voting, losing to Joe Gordon of the Yankees. Williams felt that he should have gotten a "little more consideration" because of winning the Triple Crown, and he thought that "the reason I didn't get more consideration was because of the trouble I had with the draft [boards]".
Williams joined the Navy Reserve on May 22, 1942, went on active duty in 1943, and was commissioned a second lieutenant in the United States Marine Corps as a Naval Aviator on May 2, 1944. After eight weeks in Amherst, Massachusetts, and the Civilian Pilot Training Course, Williams went through pre-flight training in Chapel Hill, North Carolina, where he played on the base's baseball team along with his Red Sox teammate Johnny Pesky. While on the team, Williams was sent back to Fenway Park on July 12, 1943, to play on an All-Star team managed by Babe Ruth. The newspapers reported that Ruth said on finally meeting Williams, "Hiya, kid. You remind me a lot of myself. I love to hit. You're one of the most natural ballplayers I've ever seen. And if my record is broken, I hope you're the one to do it". Williams later said he was "flabbergasted" by the encounter, as "after all, it was Babe Ruth". In the game, Williams hit a 425-foot home run to help give the American League All-Stars a 9–8 win.
On August 18, 1945, when the war ended, Lt. Williams was in Pearl Harbor, Hawaii awaiting orders as a replacement pilot. While in Pearl Harbor, Williams played baseball in the Navy League. Also in that eight-team league were Joe DiMaggio, Joe Gordon, and Stan Musial. The Service World Series with the Army versus the Navy attracted crowds of 40,000 for each game. The players said it was even better than the actual World Series being played between the Detroit Tigers and Chicago Cubs that year.
Williams was discharged by the Marine Corps on January 28, 1946, in time to prepare for the upcoming pro baseball season. He rejoined the Red Sox, signing a $37,500 contract. On July 14, after Williams hit three home runs and drove in eight runs in the first game of a doubleheader, Lou Boudreau, prompted by Williams' consistent pull hitting to right field, created what would later be known as the Boudreau shift (also the Williams shift), leaving only one player (the left fielder) on the left side of second base. Ignoring the shift, Williams walked twice, doubled, and grounded out to the shortstop, who was positioned between first and second base. Also during 1946, the All-Star Game was held at Fenway Park. In the game, Williams homered in the fourth inning against Kirby Higbe, singled in a run in the fifth inning, singled in the seventh inning, and hit a three-run home run against Rip Sewell's "eephus pitch" in the eighth inning to help the American League win 12–0.
For the 1946 season, Williams hit .342 with 38 home runs and 123 RBIs, helping the Red Sox clinch the pennant on September 13. During the season, Williams hit the only inside-the-park home run of his major league career in a September 1–0 win at Cleveland, and in June hit what is considered the longest home run in Fenway Park history, its landing spot subsequently marked with a lone red seat in the Fenway bleachers. Williams won the MVP voting in a landslide. During an exhibition game at Fenway Park against an All-Star team in early October, Williams was hit on the elbow by a curveball from the Washington Senators' pitcher Mickey Haefner. Williams was immediately taken out of the game, and X-rays of his arm showed no damage, but his arm was "swelled up like a boiled egg", according to Williams. He could not swing a bat again until four days later, one day before the World Series, when he reported the arm as "sore". During the series, Williams batted .200, going 5-for-25 with no home runs and just one RBI. The Red Sox lost in seven games, with Williams going 0-for-4 in the last game. Fifty years later, asked what one thing he would have done differently in his life, Williams replied, "I'd have done better in the '46 World Series. God, I would". The 1946 World Series was the only World Series in which Williams ever appeared.
Williams signed a $70,000 contract in 1947. Williams was also almost traded for Joe DiMaggio in 1947. In late April, Red Sox owner Tom Yawkey and Yankees owner Dan Topping agreed to swap the players, but a day later canceled the deal when Yawkey requested that Yogi Berra come with DiMaggio. In May, Williams was hitting .337. Williams won the Triple Crown in 1947, but lost the MVP award to Joe DiMaggio, 202 points to 201 points. One writer left Williams off his ballot. Williams thought it was Mel Webb, whom Williams called a "grouchy old guy", although it now appears it was not Webb.
Through 2011, Williams was one of seven major league players to have had at least four 30-home run and 100-RBI seasons in their first five years, along with Chuck Klein, Joe DiMaggio, Ralph Kiner, Mark Teixeira, Albert Pujols, and Ryan Braun.
In 1948, under their new manager, Joe McCarthy, Williams hit a league-leading .369 with 25 home runs and 127 RBIs, and was third in MVP voting. On April 29, Williams hit his 200th career home run. He became just the second player to hit 200 home runs in a Red Sox uniform, joining his former teammate Jimmie Foxx. On October 2, against the Yankees, Williams hit his 222nd career home run, tying Foxx for the Red Sox all-time record. In the Red Sox' final two games of the regular schedule, they beat the Yankees (to force a one-game playoff against the Cleveland Indians) and Williams got on base eight times out of ten plate appearances. In the playoff, Williams went 1-for-4, with the Red Sox losing 8–3.
In 1949, Williams received a new salary of $100,000. He hit .343, losing the AL batting title by just .0002 to the Tigers' George Kell and thus missing another Triple Crown; he also hit a career-high 43 home runs and drove in 159 runs, tied for the league lead. At one point he reached base in 84 straight games, an MLB record that still stands, and he won the MVP award. On April 28, Williams hit his 223rd career home run, passing Jimmie Foxx for the most home runs in a Red Sox uniform; Williams remains the Red Sox career home run leader. However, despite leading the Yankees by one game entering a season-ending two-game series against them, the Red Sox lost both games. The Yankees won the first of what would be five straight World Series titles in 1949. For the rest of Williams' career, the Yankees won nine pennants and six World Series titles, while the Red Sox never finished better than third place.
In 1950, Williams was playing in his eighth All-Star Game. In the first inning, Williams caught a line drive off the bat of Ralph Kiner, slamming into the Comiskey Park scoreboard and breaking his left arm. Williams played on, even singling in a run to give the American League the lead in the fifth inning, but by then his arm was a "balloon" and he was in great pain, so he left the game. Both of the doctors who X-rayed Williams held little hope for a full recovery, and they operated on him for two hours. When the cast came off, Williams could extend his left arm only to within four inches of the full extension of his right. He played only 89 games in 1950, and after the season his elbow hurt so much that he considered retirement, thinking he would never be able to hit again. Tom Yawkey, the Red Sox owner, then sent Jack Fadden to Williams' Florida home to talk to him; Williams later thanked Fadden for saving his career.
In 1951, Williams "struggled" to hit .318, his elbow still hurting. He played in 148 games (59 more than the previous season), hit 30 home runs (two more than in 1950), and drove in 126 runs (29 more than in 1950). Despite his lower-than-usual production at bat, Williams made the All-Star team. On May 15, 1951, Williams became the 11th player in major league history to hit 300 career home runs. On May 21 he passed Chuck Klein for 10th place on the career home run list, on May 25 he passed Rogers Hornsby for 9th, and on July 5 he passed Al Simmons for 8th. After the season, manager Steve O'Neill was fired and replaced by Lou Boudreau, whose first announcement as manager was that all Red Sox players were "expendable", including Williams.
Williams' name was called from a list of inactive reserves for active duty in the Korean War on January 9, 1952. Williams, livid at being recalled, had a physical scheduled for April 2. He passed, and in May, after playing in only six major league games, began refresher flight training and qualification prior to service in Korea. Just before he left for Korea, the Red Sox held a "Ted Williams Day" at Fenway Park. Friends gave him a Cadillac, and the Red Sox gave him a memory book signed by 400,000 fans. The governor of Massachusetts and the mayor of Boston attended, along with Private Frederick Wolf, a wounded Korean War veteran from Brooklyn who used a wheelchair. Wolf presented Williams with gifts from wounded veterans; Williams choked up and was only able to say, "...ok kid...". At the end of the ceremony, everyone in the park held hands and sang "Auld Lang Syne" to Williams, a moment he later said "moved me quite a bit." The Red Sox went on to win the game 5–3, thanks to a two-run home run by Williams in the seventh inning. After he returned from the Korean War in August 1953, Williams practiced with the Red Sox for ten days before playing in his first game, drawing a large ovation from the crowd and hitting a home run in the eighth inning. He ended up hitting .407 for the season, with 13 home runs and 34 RBIs in 37 games and 110 at bats (not nearly enough plate appearances to qualify for that season's batting title). On September 6, Williams hit his 332nd career home run, passing Hank Greenberg for seventh all-time.
On the first day of spring training in 1954, Williams broke his collarbone chasing a line drive. He was out for six weeks, and in April he wrote an article with Joe Reichler of the "Saturday Evening Post" saying that he intended to retire at the end of the season. Williams returned to the Red Sox lineup on May 7 and hit .345 with 386 at bats in 117 games, but Bobby Ávila, who hit .341, won the batting championship, because the rules then required a batter to have 400 at bats to qualify. Williams led the league with 136 bases on balls, which kept him short of the at-bat requirement despite Lou Boudreau's attempt to bat him second in the lineup to get him more at bats. Under today's standard, which counts plate appearances instead, Williams would have been the champion; the rule was changed shortly thereafter to keep this from happening again. On August 25, Williams passed Johnny Mize for sixth place in career home runs, and on September 3 he passed Joe DiMaggio for fifth with his 362nd. He finished the season with 366 career home runs. On September 26, Williams "retired" after the Red Sox's final game of the season.
During the off-season of 1954, Williams was offered the chance to be manager of the Red Sox. Williams declined, and he suggested that Pinky Higgins, who had previously played on the 1946 Red Sox team as the third baseman, become the manager of the team. Higgins later was hired as the Red Sox manager in 1955. Williams sat out the first month of the 1955 season due to a divorce settlement with his wife, Doris. When Williams returned, he signed a $98,000 contract on May 13. Williams batted .356 in 320 at bats on the season, lacking enough at bats to win the batting title over Al Kaline, who batted .340. Williams hit 28 home runs and drove in 83 runs while being named the "Comeback Player of the Year."
On July 17, 1956, Williams became the fifth player to hit 400 home runs, following Mel Ott in 1941, Jimmie Foxx in 1938, Lou Gehrig in 1936, and Babe Ruth in 1927. On August 7, 1956, after Williams was booed for dropping a fly ball hit by Mickey Mantle, he spat at one of the fans taunting him from atop the dugout, and was fined $5,000 for the incident. The next day against Baltimore, Williams was greeted by a large ovation, and received an even larger one when he hit a home run in the sixth inning to break a 2–2 tie. "The Boston Globe" ran a "What Globe Readers Say About Ted" section made up of letters about Williams, most of which sided with him against the sportswriters and the "loud mouths" in the stands. Williams explained years later, "From '56 on, I realized that people were for me. The writers had written that the fans should show me they didn't want me, and I got the biggest ovation yet". Williams lost the batting title to Mickey Mantle in 1956, batting .345 to Mantle's .353, with Mantle on his way to winning the Triple Crown.
In 1957, Williams batted .388 to lead the Major Leagues, and at the age of 40 in 1958, he led the American League with a .328 batting average.
When Pumpsie Green became the first black player on the Boston Red Sox in 1959—the last major league team to integrate—Williams openly welcomed Green.
Williams ended his career, hitting a home run in his very last at-bat on September 28, 1960. An essay written by John Updike the following month for "The New Yorker", "Hub Fans Bid Kid Adieu", chronicles this event.
Williams is one of only 29 players in baseball history to date to have appeared in Major League games in four decades.
Williams was an obsessive student of hitting. He famously used a lighter bat than most sluggers, because it generated a faster swing. In 1970 he wrote a book on the subject, "The Science of Hitting" (revised 1986), which is still read by many baseball players. Pitchers apparently feared Williams; his bases-on-balls-to-plate-appearances ratio (.2065) is still the highest of any player in the Hall of Fame.
Williams nearly always took the first pitch.
He helped pass on his expertise in playing left field in front of the Green Monster to his successor on the Red Sox, Carl Yastrzemski.
Ted Williams was on uncomfortable terms with the Boston newspapers for nearly twenty years, as he felt they liked to discuss his personal life as much as his baseball performance. He maintained a career-long feud with "SPORT" magazine due to a 1948 feature article in which the "SPORT" reporter included a quote from Williams' mother. Insecure about his upbringing, and stubborn because of immense confidence in his own talent, Williams made up his mind that the "knights of the keyboard", as he derisively labeled the press, were against him. After winning the league's Triple Crown in 1947, Williams narrowly lost the MVP award in a vote in which one midwestern newspaper writer left him entirely off his ten-player ballot.
During his career, some sportswriters also criticized aspects of Williams' baseball performance, including what they viewed as his lackadaisical fielding and lack of clutch hitting. Williams pushed back, saying: "They're always saying that I don't hit in the clutches. Well, there are a lot [of games] when I do". He also asserted that it made no sense crashing into an outfield wall to try to make a difficult catch because of the risk of injury or being out of position to make the play after missing the ball.
He treated most of the press accordingly, as he described in his memoir, "My Turn at Bat." Williams also had an uneasy relationship with the Boston fans, though he could be very cordial one-on-one, and at times he felt a good deal of gratitude for their passion and their knowledge of the game. On the other hand, Williams was temperamental, high-strung, and at times tactless. In his biography, Ronald Reis relates how Williams committed two fielding miscues in a doubleheader in 1950 and was roundly booed by Boston fans. He bowed three times to various sections of Fenway Park and made an obscene gesture, and when he came to bat he spat in the direction of fans near the dugout. The incident caused an avalanche of negative media reaction and inspired sportswriter Austen Lake's famous comment that when Williams' name was announced, the sound was like "autumn wind moaning through an apple orchard."
Another incident occurred in 1958 in a game against the Washington Senators. Williams struck out, and as he stepped from the batter's box swung his bat violently in anger. The bat slipped from his hands, was launched into the stands and struck a 60-year-old woman who turned out to be the housekeeper of the Red Sox general manager Joe Cronin. While the incident was an accident and Williams apologized to the woman personally, to all appearances it seemed at the time that Williams had hurled the bat in a fit of temper.
Williams gave generously to those in need. He was especially linked with the Jimmy Fund of the Dana–Farber Cancer Institute, which provides support for children's cancer research and treatment. Williams used his celebrity to virtually launch the fund, which raised more than $750 million between 1948 and 2010. Throughout his career, Williams made countless bedside visits to children being treated for cancer, which he insisted go unreported. Often parents of sick children would learn at check-out time that "Mr. Williams has taken care of your bill." The Fund has stated that "Williams would travel everywhere and anywhere, no strings or paychecks attached, to support the cause ... His name is synonymous with our battle against all forms of cancer."
Williams demanded loyalty from those around him. He could not forgive the fickle nature of the fans – booing a player for booting a ground ball, and then turning around and roaring approval of the same player for hitting a home run. Despite the cheers and adulation of most of his fans, the occasional boos directed at him in Fenway Park led Williams to stop tipping his cap in acknowledgement after a home run.
Williams maintained this policy up to and including his swan song in 1960. After hitting a home run at Fenway Park, which would be his last career at-bat, Williams characteristically refused either to tip his cap as he circled the bases or to respond to prolonged cheers of "We want Ted!" from the crowd by making an appearance from the dugout. The Boston manager Pinky Higgins sent Williams to his fielding position in left field to start the ninth inning, but then immediately recalled him for his back-up Carroll Hardy, thus allowing Williams to receive one last ovation as he jogged on and off the field, but he did so without reacting to the crowd. Williams' aloof attitude led the writer John Updike to observe wryly that "Gods do not answer letters."
Williams' final home run did not take place during the final game of the 1960 season, but rather in the Red Sox's last home game that year. The Red Sox played three more games, but they were on the road in New York City and Williams did not appear in any of them, as it became clear that Williams' final home at-bat would be the last one of his career.
In 1991 on Ted Williams Day at Fenway Park, Williams pulled a Red Sox cap from out of his jacket and tipped it to the crowd. This was the first time that he had done so since his earliest days as a player.
A Red Smith profile from 1956 describes one Boston writer trying to convince Ted Williams that first cheering and then booing a ballplayer was no different from a moviegoer applauding a "western" movie actor one day and saying the next "He stinks! Whatever gave me the idea he could act?" Williams rejected this; when he liked a western actor like Hoot Gibson, he liked him in every picture, and would not think of booing him.
He once had a friendship with Ty Cobb, with whom he often had discussions about baseball. Williams often touted Rogers Hornsby as the greatest right-handed hitter of all time, an assertion said to have caused a split between the two men: during one of their yearly debates on the greatest hitters, Williams championed Hornsby, and Cobb, who had strong feelings about Hornsby, threw a fit and expelled Williams from his hotel room, effectively ending their friendship. Williams himself later refuted this story.
Williams served as a Naval Aviator during World War II and the Korean War. Unlike many other major league players, he did not spend all of his war-time playing on service teams. Williams had been classified 3-A by Selective Service prior to the war, a dependency deferment because he was his mother's sole means of financial support. When his classification was changed to 1-A following the American entry into World War II, Williams appealed to his local draft board. The draft board ruled that his draft status should not have been changed. He made a public statement that once he had built up his mother's trust fund, he intended to enlist. Even so, criticism in the media, including withdrawal of an endorsement contract by Quaker Oats, resulted in his enlistment in the U.S. Naval Reserve on May 22, 1942.
Williams did not opt for an easy assignment playing baseball for the Navy, but rather joined the V-5 program to become a Naval aviator. Williams was first sent to the Navy's Preliminary Ground School at Amherst College for six months of academic instruction in various subjects including math and navigation, where he achieved a 3.85 grade point average.
Williams was talented as a pilot, and so enjoyed flying that he had to be ordered by the Navy to leave training to personally accept his 1942 American League Triple Crown. Williams' Red Sox teammate Johnny Pesky, who went into the same aviation training program, said of Williams: "He mastered intricate problems in fifteen minutes which took the average cadet an hour, and half of the other cadets there were college grads." Pesky also described Williams' acumen in the advanced training, for which Pesky himself did not qualify: "I heard Ted literally tore the sleeve target to shreds with his angle dives. He'd shoot from wingovers, zooms, and barrel rolls, and after a few passes the sleeve was ribbons. At any rate, I know he broke the all-time record for hits." Williams went to Jacksonville for a course in aerial gunnery, the combat pilot's payoff test, and broke all the records in reflexes, coordination, and visual-reaction time. "From what I heard, Ted could make a plane and its six 'pianos' (machine guns) play like a symphony orchestra", Pesky said. "From what they said, his reflexes, coordination, and visual reaction made him a built-in part of the machine."
Williams completed pre-flight training in Athens, Georgia, his primary training at NAS Bunker Hill, Indiana, and his advanced flight training at NAS Pensacola. He received his gold Naval Aviator wings and his commission as a second lieutenant in the U.S. Marine Corps on May 2, 1944.
Williams served as a flight instructor at NAS Pensacola teaching young pilots to fly the complicated F4U Corsair fighter plane. Williams was in Pearl Harbor awaiting orders to join the Fleet in the Western Pacific when the War in the Pacific ended. He finished the war in Hawaii, and then he was released from active duty on January 12, 1946, but he did remain in the Marine Corps Reserve.
On May 1, 1952, 14 months after his promotion to captain in the Marine Corps Reserve, Williams was recalled to active duty for service in the Korean War. He had not flown any aircraft for eight years but he turned down all offers to sit out the war in comfort as a member of a service baseball team. Nevertheless, Williams was resentful of being called up, which he admitted years later, particularly regarding the Navy's policy of calling up Inactive Reservists rather than members of the Active Reserve.
After eight weeks of refresher flight training and qualification in the F9F Panther jet fighter at the Marine Corps Air Station Cherry Point, North Carolina, Williams was assigned to VMF-311, Marine Aircraft Group 33 (MAG-33), based at the K-3 airfield in Pohang, South Korea.
On February 16, 1953, Williams, flying as wingman for John Glenn (later an astronaut and then a U.S. Senator), was part of a 35-plane raid against a tank and infantry training school just south of Pyongyang, North Korea. During the mission, a piece of flak knocked out his hydraulics and electrical systems, forcing Williams to "limp" his plane back to K-13, a U.S. Air Force airfield close to the front lines. The plane burst into flames soon after he landed. For his actions that day, he was awarded the Air Medal.
Williams stayed on K-13 for several days while his plane was being repaired. Because he was so popular, GIs and airmen from all around the base came to see him and his plane. After it was repaired, Williams flew his plane back to his Marine Corps airfield.
Williams flew 39 combat missions in Korea, earning the Air Medal with two Gold Stars in lieu of second and third awards, before being withdrawn from flight status in June 1953 after a hospitalization for pneumonia. This resulted in the discovery of an inner ear infection that disqualified him from flight status. During the Korean War, Williams also served in the same Marine Corps unit with John Glenn; the future astronaut described Williams as one of the best pilots he knew, while his wife Annie described him as the most profane man she ever met. In the last half of his missions, Williams was flying as Glenn's wingman.
Williams likely would have exceeded 600 career home runs if he had not served in the military, and may have even approached Babe Ruth's then record of 714. He might have set the record for career RBIs as well, exceeding Hank Aaron's total. While the absences in the Marine Corps took almost five years out of his baseball career, he never publicly complained about the time devoted to service in the Marine Corps. His biographer, Leigh Montville, argued that Williams was not happy about being pressed into service in South Korea, but he did what he thought was his patriotic duty.
Following his return to the United States in August 1953, he resigned his Reserve commission to resume his baseball career.
After retirement from play, Williams helped Boston's new left fielder, Carl Yastrzemski, in hitting, and was a regular visitor to the Red Sox' spring training camps from 1961 to 1966, where he worked as a special batting instructor. He served as executive assistant to Tom Yawkey (1961–65), then was named a team vice president (1965–68) upon his election to the Hall of Fame. He resumed his spring training instruction role with the club in 1978.
Williams served as manager of the Washington Senators from 1969 to 1971, then continued with the team when it became the Texas Rangers after the 1971 season. His best season as a manager was 1969, when he led the expansion Senators to an 86–76 record in the team's only winning season in Washington; he was chosen "Manager of the Year" afterward. Like many great players, Williams became impatient with ordinary athletes' abilities and attitudes, particularly those of pitchers, whom he admitted he never respected. He occasionally appeared at Red Sox spring training as a guest hitting instructor. Beginning in 1961, he would spend summers at the Ted Williams Baseball Camp in Lakeville, Massachusetts, which he had established in 1958 with his friend Al Cassidy and two other business partners. For eight summers, and parts of others after that, he would give hitting clinics and talk baseball at the camp, and it was not uncommon to find him fishing in the pond there. The area is now owned by the town, and a few of the buildings still stand; in the main lodge one can still see memorabilia from Williams' playing days.
On the subject of pitchers, in Ted's autobiography written with John Underwood, Ted opines regarding Bob Lemon (a sinker-ball specialist) pitching for the Cleveland Indians around 1951: "I have to rate Lemon as one of the very best pitchers I ever faced. His ball was always moving, hard, sinking, fast-breaking. You could never really uhmmmph with Lemon."
Williams was much more successful in fishing. An avid and expert fly fisherman and deep-sea fisherman, he spent many summers after baseball fishing the Miramichi River, in Miramichi, New Brunswick. Williams was named to the International Game Fish Association Hall of Fame in 2000. Williams, Jim Brown, Cumberland Posey, and Cal Hubbard are the only athletes to be inducted into the Halls of Fame of more than one professional sport. Williams was also known as an accomplished hunter; he was fond of pigeon-shooting for sport in Fenway Park during his career, on one occasion drawing the ire of the Massachusetts Society for the Prevention of Cruelty to Animals. He later buried a bag of endangered terns beneath the pitcher's mound of Fenway Park (which today maintains an "open permit" to remove hawk nests from the stadium, after a 2008 hawk attack on a patron).
Williams reached an extensive deal with Sears, lending his name and talent toward marketing, developing, and endorsing a line of in-house sports equipment – such as the "Ted Williams" edition Gamefisher aluminum boat and 7.5 hp "Ted Williams" edition motor, as well as fishing, hunting, and baseball equipment. Williams continued his involvement in the Jimmy Fund, later losing a brother to leukemia, and spending much of his spare time, effort, and money in support of the cancer organization.
In his later years Williams became a fixture at autograph shows and card shows after his son (by his third wife), John Henry Williams, took control of his career, becoming his de facto manager. The younger Williams provided structure to his father's business affairs, exposed forgeries that were flooding the memorabilia market, and rationed his father's public appearances and memorabilia signings to maximize their earnings.
One of Ted Williams' final, and most memorable, public appearances was at the 1999 All-Star Game in Boston. Able to walk only a short distance, Williams was brought to the pitcher's mound in a golf cart. He proudly waved his cap to the crowd—a gesture he had withheld for most of his playing career—and fans responded with a standing ovation that lasted several minutes. At the mound he was surrounded by players from both teams, including fellow Red Sox player Nomar Garciaparra, and was assisted by Tony Gwynn in throwing out the first pitch of that year's All-Star Game. Later in the year, he was among the members of the Major League Baseball All-Century Team introduced to the crowd at Turner Field in Atlanta prior to Game Two of the World Series.
On May 4, 1944, Williams married Doris Soule, the daughter of his hunting guide. Their daughter, Barbara Joyce ("Bobbi Jo"), was born on January 28, 1948, while Williams was fishing in Florida. They divorced in 1954. Williams married the socialite model Lee Howard on September 10, 1961, and they were divorced in 1967.
Williams married Dolores Wettach, a former Miss Vermont and "Vogue" model, in 1968. Their son John-Henry was born on August 27, 1968, followed by daughter Claudia, on October 8, 1971. They were divorced in 1972.
Williams lived with Louise Kaufman for twenty years until her death in 1993. In his book, Cramer called her the love of Williams's life. After his death, her sons filed suit to recover her furniture from Williams's condominium as well as a half-interest in the condominium they claimed he gave her.
Williams had a strong respect for General Douglas MacArthur, referring to him as his "idol". For Williams' 40th birthday, MacArthur sent him an oil painting of himself with the inscription "To Ted Williams – not only America's greatest baseball player, but a great American who served his country. Your friend, Douglas MacArthur. General U.S. Army."
Politically, Williams was described by a biographer as "to the right of Attila the Hun", except when it came to civil rights. According to friends, Williams was an atheist, and this influenced his decision to be cryogenically frozen. His daughter Claudia stated, "It was like a religion, something we could have faith in... no different from holding the belief that you might be reunited with your loved ones in heaven".
Williams' brother Danny and his son John-Henry both died of leukemia.
In his last years, Williams suffered from cardiomyopathy. He had a pacemaker implanted in November 2000 and he underwent open-heart surgery in January 2001. After suffering a series of strokes and congestive heart failure, he died of cardiac arrest at the age of 83 on July 5, 2002, at Citrus Memorial Hospital, Inverness, Florida, near his home in Citrus Hills, Florida.
Though his will stated his desire to be cremated and his ashes scattered in the Florida Keys, Williams's son John-Henry and younger daughter Claudia chose to have his remains frozen cryonically.
Ted's elder daughter, Bobby-Jo Ferrell, brought a suit to have her father's wishes recognized. John-Henry's lawyer then produced an informal "family pact" signed by Ted, Claudia, and John-Henry, in which they agreed "to be put into biostasis after we die" to "be able to be together in the future, even if it is only a chance." Bobby-Jo and her attorney, Spike Fitzpatrick (former attorney of Ted Williams), contended that the family pact, which was scribbled on an ink-stained napkin, was forged by John-Henry and/or Claudia. Fitzpatrick and Ferrell believed that the signature was not obtained legally. Laboratory analysis proved that the signature was genuine. John-Henry said that his father was a believer in science and was willing to try cryonics if it held the possibility of reuniting the family.
Though the family pact upset some friends, family and fans, a public plea for financial support of the lawsuit by Ferrell produced little result. Citing financial difficulties, Ferrell dropped her lawsuit on the condition that a $645,000 trust fund left by Williams would immediately pay the sum out equally to the three children. Inquiries to cryonics organizations increased after the publicity from the case.
In "Ted Williams: The Biography of an American Hero", author Leigh Montville claims that the family cryonics pact was a practice Ted Williams autograph on a plain piece of paper, around which the agreement had later been handwritten. The pact document was signed "Ted Williams", the same as his autographs, whereas he would always sign his legal documents "Theodore Williams", according to Montville. However, Claudia testified to the authenticity of the document in an affidavit. Ted's two 24-hour private caregivers, who were with him for the entire period during which the note was said to have been created, also stated in affidavits that John-Henry and Claudia were never present at any time for the note to be produced.
Following John-Henry's unexpected illness and death from acute myeloid leukemia on March 6, 2004, John-Henry's body was also transported to Alcor, in fulfillment of the family agreement.
In 1954, Williams was also inducted by the San Diego Hall of Champions into the Breitbard Hall of Fame honoring San Diego's finest athletes both on and off the playing surface.
Williams was inducted into the Baseball Hall of Fame on July 25, 1966. In his induction speech, Williams included a statement calling for the recognition of the great Negro Leagues players: "I've been a very lucky guy to have worn a baseball uniform, and I hope some day the names of Satchel Paige and Josh Gibson in some way can be added as a symbol of the great Negro players who are not here only because they weren't given a chance." His successor in left field, Carl Yastrzemski, joined Williams in Cooperstown in 1989, one of the few known instances of one Hall of Famer directly succeeding another at the same position.
Williams was referring to two of the most famous names in the Negro Leagues, who were not given the opportunity to play in the Major Leagues before Jackie Robinson broke the color barrier in 1947. Gibson died early in 1947 and thus never played in the majors; and Paige's brief major league stint came long past his prime as a player. This powerful and unprecedented statement from the Hall of Fame podium was "a first crack in the door that ultimately would open and include Paige and Gibson and other Negro League stars in the shrine." Paige was the first inducted in 1971. Gibson and others followed, starting in 1972 and continuing off and on into the 21st century.
On November 18, 1991, President George H. W. Bush presented Williams with the Presidential Medal of Freedom, the highest civilian award in the US.
The Ted Williams Tunnel in Boston, Massachusetts, carrying the final stretch of Interstate 90 under Boston Harbor, opened in December 1995, and Ted Williams Parkway (California State Route 56) in San Diego County, California, opened in 1992; both were named in his honor while he was still alive. In 2016, the major league San Diego Padres inducted Williams into their hall of fame for his contributions to baseball in San Diego.
The Tampa Bay Rays home field, Tropicana Field, installed the Ted Williams Museum (formerly in Hernando, Florida, 1994–2006) behind the left field fence. From the Tampa Bay Rays website: "The Ted Williams Museum and Hitters Hall of Fame brings a special element to the Tropicana Field. Fans can view an array of different artifacts and pictures of the 'Greatest hitter that ever lived.' These memorable displays range from Ted Williams' days in the military through his professional playing career. This museum is dedicated to some of the greatest players to ever 'lace 'em up,' including Willie Mays, Joe DiMaggio, Mickey Mantle, Roger Maris."
At the time of his retirement, Williams ranked third all-time in home runs (behind Babe Ruth and Jimmie Foxx), seventh in RBIs (after Ruth, Cap Anson, Lou Gehrig, Ty Cobb, Foxx, and Mel Ott), and seventh in batting average (behind Cobb, Rogers Hornsby, Shoeless Joe Jackson, Lefty O'Doul, Ed Delahanty and Tris Speaker). His career batting average of .3444 is the highest of any player who played his entire career in the live-ball era following 1920.
Most modern statistical analyses place Williams, along with Ruth and Bonds, among the three most potent hitters ever to have played the game. Williams' 1941 season often compares favorably with the greatest seasons of Ruth and Bonds in terms of various offensive statistical measures such as slugging, on-base percentage, and "offensive winning percentage." As a further indication, of the ten best seasons for "OPS", short for "on-base plus slugging", a popular modern measure of offensive productivity, four each were achieved by Ruth and Bonds, and two by Williams.
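As a minimal illustration of how OPS is computed, the following sketch applies the standard on-base and slugging formulas; the numbers in the example are made up for arithmetic clarity, not Williams' actual statistics:

```python
def ops(hits, walks, hbp, at_bats, sac_flies, total_bases):
    """On-base plus slugging (OPS) = OBP + SLG.

    OBP = (H + BB + HBP) / (AB + BB + HBP + SF)
    SLG = total bases / at-bats
    """
    obp = (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)
    slg = total_bases / at_bats
    return obp + slg

# Toy line: one home run in two at-bats gives OBP 0.500 and SLG 2.000,
# for an OPS of 2.500.
print(ops(hits=1, walks=0, hbp=0, at_bats=2, sac_flies=0, total_bases=4))  # 2.5
```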
In 1999, Williams was ranked as number eight on "The Sporting News"' list of the 100 Greatest Baseball Players, where he was the highest-ranking left fielder.
Williams received the following decorations and awards:
Turners
Turners are members of German-American gymnastic clubs called "Turnverein". They promoted German culture, physical culture, and liberal politics, and supported the Union war effort during the American Civil War. Turners, especially Francis Lieber (1798–1872), were the leading sponsors of gymnastics as an American sport and as a field of academic study.
In Germany a major gymnastic movement was started by "Turnvater" ("father of gymnastics") Friedrich Ludwig Jahn in the early 19th century when Germany was occupied by Napoleon. The "Turnvereine" ("gymnastic unions"; from German "turnen" meaning “to practice gymnastics,” and "Verein" meaning “club, union”) were not only athletic, but also political, reflecting their origin in similar "nationalistic gymnastic" organizations in Europe. The Turner movement in Germany was generally liberal in nature, and many Turners took part in the Revolution of 1848.
After its defeat, the movement was suppressed and many Turners left Germany, some emigrating to the United States, especially to the Ohio Valley region. Several of these Forty-Eighters went on to become Union soldiers, and some became Republican politicians. Besides serving as physical education, social, political and cultural organizations for German immigrants, Turners were also active in public education and the labor movements. They were leading promoters of gymnastics in the United States as a sport, and as a school subject. In the United States, the movement declined after 1900, and especially after 1917.
The "Turnvereine" made a contribution to the integration of German-Americans into their new home. The organizations continue to exist in areas of heavy German immigration, such as Iowa, Texas, Wisconsin, Indiana, Ohio, Minnesota, Missouri, Syracuse, NY, Kentucky, New York City, Sacramento, and Los Angeles.
About 1000 Turners served as Union soldiers during the Civil War. Anti-slavery was a common element, as typified by Carl Schurz. Many Republican leaders in German communities were members. However, most German-Americans probably were Democrats in the 19th century. They provided the bodyguard at Abraham Lincoln's inauguration on March 4, 1861, and at his funeral in April 1865. In the Camp Jackson Affair, a large force of German volunteers helped prevent Confederate forces from seizing the government arsenal in St. Louis just prior to the beginning of the war. After the Civil War the national organization took a new name, "Nordamerikanischer Turnerbund", and supported German language teaching in the public high schools, as well as gymnastics. Women's auxiliaries were formed in the 1850s and 1860s. The high point in membership came in 1894, with 317 societies and about 40,000 adult male members, along with 25,000 children and 3000 women.
Like other German-American groups, the Turners experienced suspicion during World War I, even though by this time they had very little contact with Germany. German-language instruction ended at many schools and universities, and the federal government imposed restrictions on German-language publications. The younger generation generally demanded a switch to exclusive use of English in society affairs, which allowed many Turner societies to continue to function.
Cultural assimilation and the two World Wars with Germany took a gradual toll on membership, with some halls closing and others becoming regular dance halls, bars, or bowling alleys. Fifty-four Turner societies still existed around the U.S. as of 2011. The current headquarters of the American Turners is in Louisville, Kentucky.
In 1948, the U.S. Post Office issued a 3-cent commemorative stamp marking the 100th anniversary of the movement in the United States.
The Sacramento, California Turnverein, founded in 1854, claims to be the oldest still in existence in the United States. The Turnverein Vorwaerts of Fort Wayne, Indiana, owned the Hugh McCulloch House from 1906 until 1966. It was listed on the National Register of Historic Places in 1980.
Recession
In economics, a recession is a business cycle contraction when there is a general decline in economic activity. Recessions generally occur when there is a widespread drop in spending (an adverse demand shock). This may be triggered by various events, such as a financial crisis, an external trade shock, an adverse supply shock, the bursting of an economic bubble, or a large-scale natural or anthropogenic disaster (e.g. a pandemic). In the United States, it is defined as "a significant decline in economic activity spread across the market, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales". In the United Kingdom, it is defined as a negative economic growth for two consecutive quarters.
Governments usually respond to recessions by adopting expansionary macroeconomic policies, such as increasing money supply or increasing government spending and decreasing taxation.
Put simply, a recession is a decline in economic activity: households and businesses cut back on spending for a time, which can cause GDP to fall after a period of economic expansion. In a recession, the rate of inflation typically slows, stops, or turns negative.
In a 1974 "The New York Times" article, Commissioner of the Bureau of Labor Statistics Julius Shiskin suggested several rules of thumb for defining a recession, one of which was two consecutive quarters of negative GDP growth. In time, the other rules of thumb were forgotten. Some economists prefer a definition of a 1.5–2 percentage-point rise in unemployment within 12 months.
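Shiskin's two-consecutive-quarters rule of thumb can be sketched as a simple check over a series of quarterly growth figures (a minimal illustration, not an official dating method such as the NBER's):

```python
def two_quarter_rule(quarterly_growth):
    """Return True if any two consecutive quarters of real GDP
    growth are both negative (Shiskin's rule of thumb)."""
    return any(a < 0 and b < 0
               for a, b in zip(quarterly_growth, quarterly_growth[1:]))

print(two_quarter_rule([0.8, 0.3, -0.2, -0.5]))   # True: Q3 and Q4 both negative
print(two_quarter_rule([0.8, -0.3, 0.2, -0.5]))   # False: the negative quarters are not consecutive
```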
In the United States, the Business Cycle Dating Committee of the National Bureau of Economic Research (NBER) is generally seen as the authority for dating US recessions. The NBER, a private economic research organization, defines an economic recession as: "a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales". Almost universally, academics, economists, policy makers, and businesses refer to the determination by the NBER for the precise dating of a recession's onset and end.
In the United Kingdom, recessions are generally defined as two consecutive quarters of negative economic growth, as measured by the seasonal adjusted quarter-on-quarter figures for real GDP. The same definition is used by member states of the European Union.
A recession has many attributes that can occur simultaneously and includes declines in component measures of economic activity (GDP) such as consumption, investment, government spending, and net export activity. These summary measures reflect underlying drivers such as employment levels and skills, household savings rates, corporate investment decisions, interest rates, demographics, and government policies.
Economist Richard C. Koo wrote that under ideal conditions, a country's economy should have the household sector as net savers and the corporate sector as net borrowers, with the government budget nearly balanced and net exports near zero. When these relationships become imbalanced, recession can develop within the country or create pressure for recession in another country. Policy responses are often designed to drive the economy back towards this ideal state of balance.
A severe (GDP down by 10%) or prolonged (three or four years) recession is referred to as an economic depression, although some argue that their causes and cures can be different. As an informal shorthand, economists sometimes refer to different recession shapes, such as V-shaped, U-shaped, L-shaped and W-shaped recessions.
The type and shape of recessions are distinctive. In the US, v-shaped, or short-and-sharp contractions followed by rapid and sustained recovery, occurred in 1954 and 1990–91; U-shaped (prolonged slump) in 1974–75, and W-shaped, or double-dip recessions in 1949 and 1980–82. Japan's 1993–94 recession was U-shaped and its 8-out-of-9 quarters of contraction in 1997–99 can be described as L-shaped. Korea, Hong Kong and South-east Asia experienced U-shaped recessions in 1997–98, although Thailand’s eight consecutive quarters of decline should be termed L-shaped.
Recessions have psychological and confidence aspects. For example, if companies expect economic activity to slow, they may reduce employment levels and save money rather than invest. Such expectations can create a self-reinforcing downward cycle, bringing about or worsening a recession. Consumer confidence is one measure used to evaluate economic sentiment. The term animal spirits has been used to describe the psychological factors underlying economic activity. Economist Robert J. Shiller wrote that the term "...refers also to the sense of trust we have in each other, our sense of fairness in economic dealings, and our sense of the extent of corruption and bad faith. When animal spirits are on ebb, consumers do not want to spend and businesses do not want to make capital expenditures or hire people."
Behavioral economics has also identified psychological biases that may trigger a recession, including the availability heuristic, money illusion, and non-regressive prediction.
High levels of indebtedness or the bursting of a real estate or financial asset price bubble can cause what is called a "balance sheet recession". This is when large numbers of consumers or corporations pay down debt (i.e., save) rather than spend or invest, which slows the economy. The term balance sheet derives from an accounting identity that holds that assets must always equal the sum of liabilities plus equity. If asset prices fall below the value of the debt incurred to purchase them, then the equity must be negative, meaning the consumer or corporation is insolvent. Economist Paul Krugman wrote in 2014 that "the best working hypothesis seems to be that the financial crisis was only one manifestation of a broader problem of excessive debt—that it was a so-called "balance sheet recession". In Krugman's view, such crises require debt reduction strategies combined with higher government spending to offset declines from the private sector as it pays down its debt.
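The accounting identity behind the "balance sheet" terminology can be shown with a toy calculation (the numbers are illustrative only):

```python
def equity(assets, liabilities):
    """From the accounting identity assets = liabilities + equity,
    equity is assets minus liabilities."""
    return assets - liabilities

# A household or firm that borrowed 100 to buy an asset now worth 80:
e = equity(assets=80, liabilities=100)
print(e)       # -20: negative equity, i.e. technically insolvent
```

When many balance sheets look like this at once, the aggregate shift from spending to debt repayment is what the text above calls a balance sheet recession.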
For example, economist Richard Koo wrote that Japan's "Great Recession" that began in 1990 was a "balance sheet recession". It was triggered by a collapse in land and stock prices, which caused Japanese firms to have negative equity, meaning their assets were worth less than their liabilities. Despite zero interest rates and expansion of the money supply to encourage borrowing, Japanese corporations in aggregate opted to pay down their debts from their own business earnings rather than borrow to invest as firms typically do. Corporate investment, a key demand component of GDP, fell enormously (22% of GDP) between 1990 and its peak decline in 2003. Japanese firms overall became net savers after 1998, as opposed to borrowers. Koo argues that it was massive fiscal stimulus (borrowing and spending by the government) that offset this decline and enabled Japan to maintain its level of GDP. In his view, this avoided a U.S. type Great Depression, in which U.S. GDP fell by 46%. He argued that monetary policy was ineffective because there was limited demand for funds while firms paid down their liabilities. In a balance sheet recession, GDP declines by the amount of debt repayment and un-borrowed individual savings, leaving government stimulus spending as the primary remedy.
Krugman discussed the balance sheet recession concept during 2010, agreeing with Koo's situation assessment and view that sustained deficit spending when faced with a balance sheet recession would be appropriate. However, Krugman argued that monetary policy could also affect savings behavior, as inflation or credible promises of future inflation (generating negative real interest rates) would encourage less savings. In other words, people would tend to spend more rather than save if they believe inflation is on the horizon. In more technical terms, Krugman argues that the private sector savings curve is elastic even during a balance sheet recession (responsive to changes in real interest rates) disagreeing with Koo's view that it is inelastic (non-responsive to changes in real interest rates).
A July 2012 survey of balance sheet recession research reported that consumer demand and employment are affected by household leverage levels. Both durable and non-durable goods consumption declined as households moved from low to high leverage with the decline in property values experienced during the subprime mortgage crisis. Further, reduced consumption due to higher household leverage can account for a significant decline in employment levels. Policies that help reduce mortgage debt or household leverage could therefore have stimulative effects.
A liquidity trap is a Keynesian theory that a situation can develop in which interest rates reach near zero (zero interest-rate policy) yet do not effectively stimulate the economy. In theory, near-zero interest rates should encourage firms and consumers to borrow and spend. However, if too many individuals or corporations focus on saving or paying down debt rather than spending, lower interest rates have less effect on investment and consumption behavior; the lower interest rates are like "pushing on a string". Economist Paul Krugman described the U.S. 2009 recession and Japan's lost decade as liquidity traps. One remedy to a liquidity trap is expanding the money supply via quantitative easing or other techniques in which money is effectively printed to purchase assets, thereby creating inflationary expectations that cause savers to begin spending again. Government stimulus spending and mercantilist policies to stimulate exports and reduce imports are other techniques to stimulate demand. He estimated in March 2010 that developed countries representing 70% of the world's GDP were caught in a liquidity trap.
Behavior that may be optimal for an individual (e.g., saving more during adverse economic conditions) can be detrimental if too many individuals pursue the same behavior, as ultimately one person's consumption is another person's income. Too many consumers attempting to save (or pay down debt) simultaneously is called the paradox of thrift and can cause or deepen a recession. Economist Hyman Minsky also described a "paradox of deleveraging" as financial institutions that have too much leverage (debt relative to equity) cannot all de-leverage simultaneously without significant declines in the value of their assets.
During April 2009, U.S. Federal Reserve Vice Chair Janet Yellen discussed these paradoxes: "Once this massive credit crunch hit, it didn’t take long before we were in a recession. The recession, in turn, deepened the credit crunch as demand and employment fell, and credit losses of financial institutions surged. Indeed, we have been in the grips of precisely this adverse feedback loop for more than a year. A process of balance sheet deleveraging has spread to nearly every corner of the economy. Consumers are pulling back on purchases, especially on durable goods, to build their savings. Businesses are cancelling planned investments and laying off workers to preserve cash. And, financial institutions are shrinking assets to bolster capital and improve their chances of weathering the current storm. Once again, Minsky understood this dynamic. He spoke of the paradox of deleveraging, in which precautions that may be smart for individuals and firms—and indeed essential to return the economy to a normal state—nevertheless magnify the distress of the economy as a whole."
The U.S. Conference Board's Present Situation Index year-over-year change turns negative by more than 15 points before a recession.
The U.S. Conference Board Leading Economic Indicator year-over-year change turns negative before a recession.
When the CFNAI Diffusion Index drops below −0.35, there is an increased probability of a recession beginning. Usually the signal occurs within the first three months of the recession. The Diffusion Index signal tends to arrive about one month before the CFNAI-MA3 (3-month moving average) drops below the −0.7 level. The CFNAI-MA3 correctly identified the seven recessions between March 1967 and August 2019, while triggering only two false alarms.
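The two CFNAI-based thresholds described above can be expressed as a small check; this is a sketch using the threshold values quoted in the text, not an official Chicago Fed tool:

```python
DIFFUSION_THRESHOLD = -0.35   # CFNAI Diffusion Index signal level
MA3_THRESHOLD = -0.70         # CFNAI-MA3 (3-month moving average) signal level

def cfnai_signals(diffusion_value, ma3_value):
    """Return which CFNAI recession signals are active for one month."""
    return {
        "diffusion": diffusion_value < DIFFUSION_THRESHOLD,
        "ma3": ma3_value < MA3_THRESHOLD,
    }

# The diffusion index has crossed its threshold; the 3-month average has not yet:
print(cfnai_signals(-0.40, -0.50))  # {'diffusion': True, 'ma3': False}
```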
Except for the above, there are no known completely reliable predictors, but the following are considered possible predictors.
Analysis by Prakash Loungani of the International Monetary Fund found that only two of the sixty recessions around the world during the 1990s had been predicted by a consensus of economists one year earlier, while there were zero consensus predictions one year earlier for the 49 recessions during 2009.
Most mainstream economists believe that recessions are caused by inadequate aggregate demand in the economy, and favor the use of expansionary macroeconomic policy during recessions. Strategies favored for moving an economy out of a recession vary depending on which economic school the policymakers follow. Monetarists would favor the use of expansionary monetary policy, while Keynesian economists may advocate increased government spending to spark economic growth. Supply-side economists may suggest tax cuts to promote business capital investment. When interest rates reach the boundary of an interest rate of zero percent (zero interest-rate policy) conventional monetary policy can no longer be used and government must use other measures to stimulate recovery. Keynesians argue that fiscal policy—tax cuts or increased government spending—works when monetary policy fails. Spending is more effective because of its larger multiplier but tax cuts take effect faster.
For example, Paul Krugman wrote in December 2010 that significant, sustained government spending was necessary because indebted households were paying down debts and unable to carry the U.S. economy as they had previously: "The root of our current troubles lies in the debt American families ran up during the Bush-era housing bubble...highly indebted Americans not only can’t spend the way they used to, they’re having to pay down the debts they ran up in the bubble years. This would be fine if someone else were taking up the slack. But what’s actually happening is that some people are spending much less while nobody is spending more — and this translates into a depressed economy and high unemployment. What the government should be doing in this situation is spending more while the private sector is spending less, supporting employment while those debts are paid down. And this government spending needs to be sustained..."
John Maynard Keynes believed that government institutions could stimulate aggregate demand in a crisis.
“Keynes showed that if somehow the level of aggregate demand could be triggered, possibly by the government printing currency notes to employ people to dig holes and fill them up, the wages that would be paid out would resuscitate the economy by generating successive rounds of demand through the multiplier process”
Some recessions have been anticipated by stock market declines. In "Stocks for the Long Run", Siegel mentions that since 1948, ten recessions were preceded by a stock market decline, by a lead time of 0 to 13 months (average 5.7 months), while ten stock market declines of greater than 10% in the Dow Jones Industrial Average were not followed by a recession.
The real-estate market also usually weakens before a recession. However, real-estate declines can last much longer than recessions.
Since the business cycle is very hard to predict, Siegel argues that it is not possible to take advantage of economic cycles for timing investments. Even the National Bureau of Economic Research (NBER) takes a few months to determine if a peak or trough has occurred in the US.
During an economic decline, high-yield stocks such as fast-moving consumer goods, pharmaceuticals, and tobacco tend to hold up better. However, when the economy starts to recover and the bottom of the market has passed, growth stocks tend to recover faster. There is significant disagreement about how health care and utilities tend to recover. Diversifying one's portfolio into international stocks may provide some safety; however, economies that are closely correlated with that of the U.S. may also be affected by a recession in the U.S.
There is a view termed the "halfway rule" according to which investors start discounting an economic recovery about halfway through a recession. In the 16 U.S. recessions since 1919, the average length has been 13 months, although the recent recessions have been shorter. Thus, if the 2008 recession had followed the average, the downturn in the stock market would have bottomed around November 2008. The actual US stock market bottom of the 2008 recession was in March 2009.
Generally, an administration gets credit or blame for the state of the economy during its time. This has caused disagreements about how particular recessions actually started. In an economic cycle, a downturn can be considered a consequence of an expansion reaching an unsustainable state, corrected by a brief decline. Thus it is not easy to isolate the causes of specific phases of the cycle.
The 1981 recession is thought to have been caused by the tight-money policy adopted by Paul Volcker, chairman of the Federal Reserve Board, before Ronald Reagan took office. Reagan supported that policy. Economist Walter Heller, chairman of the Council of Economic Advisers in the 1960s, said that "I call it a Reagan-Volcker-Carter recession." The resulting taming of inflation did, however, set the stage for a robust growth period during Reagan's presidency.
Economists usually teach that to some degree recession is unavoidable, and its causes are not well understood.
Unemployment is particularly high during a recession. Many economists working within the neoclassical paradigm argue that there is a natural rate of unemployment which, when subtracted from the actual rate of unemployment, can be used to calculate the negative GDP gap during a recession. In other words, unemployment never reaches 0 percent, and thus is not a negative indicator of the health of an economy unless above the "natural rate," in which case it corresponds directly to a loss in the gross domestic product, or GDP.
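One common way to quantify the link between above-natural unemployment and lost output is Okun's law; the law and its coefficient are not named in the text above, so treat both as an assumption for illustration:

```python
def output_gap_okun(unemployment, natural_rate, okun_coefficient=2.0):
    """Estimate the GDP gap (percent of potential output) via Okun's law:
    gap ≈ -c * (u - u_n), where c is often taken to be roughly 2."""
    return -okun_coefficient * (unemployment - natural_rate)

# Unemployment 3 points above the natural rate implies output roughly
# 6% below potential under this rule of thumb:
print(output_gap_okun(unemployment=8.0, natural_rate=5.0))  # -6.0
```

When unemployment sits at or below the natural rate, the estimated gap is zero or positive, matching the text's point that unemployment is a negative indicator only above the "natural rate."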
The full impact of a recession on employment may not be felt for several quarters. Research in Britain shows that low-skilled, low-educated workers and the young are most vulnerable to unemployment in a downturn. After recessions in Britain in the 1980s and 1990s, it took five years for unemployment to fall back to its original levels. Many companies often expect employment discrimination claims to rise during a recession.
Productivity tends to fall in the early stages of a recession, then rises again as weaker firms close. The variation in profitability between firms rises sharply. The fall in productivity could also be attributed to several macro-economic factors, such as the loss in productivity observed across UK due to Brexit, which may create a mini-recession in the region. Global epidemics, such as COVID-19, could be another example, since they disrupt the global supply chain or prevent movement of goods, services and people.
Recessions have also provided opportunities for anti-competitive mergers, with a negative impact on the wider economy: the suspension of competition policy in the United States in the 1930s may have extended the Great Depression.
The living standards of people dependent on wages and salaries are more affected by recessions than those of people who rely on fixed incomes or welfare benefits. The loss of a job is known to have a negative impact on the stability of families, and on individuals' health and well-being. Fixed-income benefits may also receive small cuts, which make it tougher to survive.
According to the International Monetary Fund (IMF), "Global recessions seem to occur over a cycle lasting between eight and 10 years." The IMF takes many factors into account when defining a global recession. Until April 2009, the IMF several times communicated to the press that a global annual real GDP growth of 3.0 percent or less was, in its view, "...equivalent to a global recession".
By this measure, six periods since 1970 qualify: 1974–1975, 1980–1983, 1990–1993, 1998, 2001–2002, and 2008–2009. During what IMF in April 2002 termed the past three global recessions of the last three decades, global per capita output growth was zero or negative, and IMF argued—at that time—that because of the opposite being found for 2001, the economic state in this year by itself did not qualify as a "global recession".
In April 2009, the IMF changed its global recession definition to:
By this new definition, a total of four global recessions have taken place since World War II: 1975, 1982, 1991 and 2009. All of them lasted only one year, although the third would have lasted three years (1991–93) had the IMF used exchange-rate-weighted per-capita real world GDP, rather than purchasing-power-parity-weighted per-capita real world GDP, as the criterion.
The worst recession Australia has ever suffered came at the beginning of the 1930s. As a result of profit problems in agriculture and cutbacks in the late 1920s, 1931–1932 saw Australia's biggest recession in its entire history. Australia fared better than other nations that underwent depressions, but their poor economic state affected Australia as well, since it depended on them for exports and foreign investment. The nation also benefited from greater productivity in manufacturing, facilitated by trade protection, which helped it feel the effects less.
Due to a credit squeeze, the economy went into a brief recession in 1961.
Australia faced rising inflation in 1973, caused partially by the oil crisis of that same year, which pushed inflation to 13%. Economic recession hit by the middle of 1974, with no change in policy enacted by the government to counter the economic situation of the country. Consequently, unemployment rose and the trade deficit increased significantly.
Another recession came at the beginning of the 1990s. It was the result of a major stock market collapse in October 1987, now referred to as Black Monday. Although the collapse was larger than that of 1929, the global economy recovered quickly, but North America still suffered from the collapse of savings and loan institutions, which led to a crisis. The recession was not limited to America; it also affected partner nations such as Australia. Unemployment rose to 10.8%, employment declined by 3.4% and GDP fell by as much as 1.7%. Inflation, however, was successfully reduced. Australia entered recession again in 2020 due to the impact of the bushfires and of COVID-19 on tourism and other important parts of the economy.
The most recent recession to affect the United Kingdom was the late-2000s recession.
According to economists, since 1854, the U.S. has encountered 32 cycles of expansions and contractions, with an average of 17 months of contraction and 38 months of expansion. However, since 1980 there have been only eight periods of negative economic growth over one fiscal quarter or more, and four periods considered recessions:
For the past three recessions, the NBER decision has approximately conformed with the definition involving two consecutive quarters of decline. While the 2001 recession did not involve two consecutive quarters of decline, it was preceded by two quarters of alternating decline and weak growth.
Official economic data shows that a substantial number of nations were in recession as of early 2009. The US entered a recession at the end of 2007, and 2008 saw many other nations follow suit. The US recession of 2007 ended in June 2009 as the nation entered the current economic recovery. The timeline of the Great Recession details the many elements of this period.
The United States housing market correction (a consequence of the United States housing bubble) and subprime mortgage crisis significantly contributed to a recession.
The 2007–2009 recession saw private consumption fall for the first time in nearly 20 years. This indicated the depth and severity of the recession. With consumer confidence so low, economic recovery took a long time. Consumers in the U.S. were hit hard by the Great Recession, with the value of their houses dropping and their pension savings decimated on the stock market.
U.S. employers shed 63,000 jobs in February 2008, the most in five years. Former Federal Reserve chairman Alan Greenspan said on 6 April 2008 that "There is more than a 50 percent chance the United States could go into recession." On 1 October, the Bureau of Economic Analysis reported that an additional 156,000 jobs had been lost in September. On 29 April 2008, Moody's declared that nine US states were in a recession. In November 2008, employers eliminated 533,000 jobs, the largest single-month loss in 34 years. In 2008, an estimated 2.6 million U.S. jobs were eliminated.
The unemployment rate in the U.S. grew to 8.5 percent in March 2009, and there were 5.1 million job losses by March 2009 since the recession began in December 2007. That was about five million more people unemployed compared to just a year prior, which was the largest annual jump in the number of unemployed persons since the 1940s.
Although the US economy grew by 1% in the first quarter, by June 2008 some analysts stated that, due to a protracted credit crisis and "...rampant inflation in commodities such as oil, food, and steel," the country was nonetheless in a recession. The third quarter of 2008 brought a GDP contraction of 0.5%, the biggest decline since 2001. The 6.4% decline in third-quarter spending on non-durable goods, like clothing and food, was the largest since 1950.
A 17 November 2008 report from the Federal Reserve Bank of Philadelphia, based on a survey of 51 forecasters, suggested that the recession started in April 2008 and would last 14 months. The forecasters projected real GDP declining at an annual rate of 2.9% in the fourth quarter and 1.1% in the first quarter of 2009, significant downward revisions from the forecasts of three months earlier.
A 1 December 2008 report from the National Bureau of Economic Research stated that the U.S. had been in a recession since December 2007 (when economic activity peaked), based on a number of measures including job losses, declines in personal income, and declines in real GDP. By July 2009 a growing number of economists believed that the recession may have ended. The National Bureau of Economic Research announced on 20 September 2010 that the 2008/2009 recession ended in June 2009, making it the longest recession since World War II. Prior to the start of the recession, it appears that no known formal theoretical or empirical model was able to accurately predict the advance of this recession, except for minor signals in the sudden rise of forecasted probabilities, which were still well under 50%.
RSA (cryptosystem)
RSA (Rivest–Shamir–Adleman) is one of the first public-key cryptosystems and is widely used for secure data transmission. In such a cryptosystem, the encryption key is public and distinct from the decryption key which is kept secret (private). In RSA, this asymmetry is based on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". The acronym RSA is the initial letters of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who publicly described the algorithm in 1977. Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), had developed an equivalent system in 1973, which was not declassified until 1997.
A user of RSA creates and then publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers must be kept secret. Anyone can use the public key to encrypt a message, but only someone with knowledge of the prime numbers can decode the message.
Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question. There are no published methods to defeat the system if a large enough key is used.
RSA is a relatively slow algorithm, and because of this, it is less commonly used to directly encrypt user data. More often, RSA passes encrypted shared keys for symmetric key cryptography which in turn can perform bulk encryption-decryption operations at much higher speed.
The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time.
Ron Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts Institute of Technology made several attempts over the course of a year to create a one-way function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements. In April 1977, they spent Passover at the house of a student and drank a good deal of Manischewitz wine before returning to their homes at around midnight. Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA – the initials of their surnames in the same order as in their paper.
Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), described an equivalent system in an internal document in 1973. However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His discovery, however, was not revealed until 1997 due to its top-secret classification.
Kid-RSA (KRSA) is a simplified public-key cipher published in 1997, designed for educational purposes. Some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers, analogous to simplified DES.
MIT was granted a patent for a "Cryptographic communications system and method" that used the algorithm on September 20, 1983. Though the patent was going to expire on September 21, 2000 (the term of patent was 17 years at the time), the algorithm was released to the public domain by RSA Security on September 6, 2000, two weeks earlier. Since a detailed description of the algorithm had been published in the Mathematical Games column in the August 1977 issue of Scientific American, prior to the December 1977 filing date of the patent application, regulations in much of the rest of the world precluded patents elsewhere and only the US patent was granted. Had Cocks's work been publicly known, a patent in the United States would not have been legal either.
From the DWPI's abstract of the patent,
The RSA algorithm involves four steps: key generation, key distribution, encryption and decryption.
A basic principle behind RSA is the observation that it is practical to find three very large positive integers "e", "d", and "n", such that with modular exponentiation for all integers "m" (with 0 ≤ "m" < "n"):
("m"^"e")^"d" ≡ "m" (mod "n"),
and that knowing "e" and "n", or even "m", it can be extremely difficult to find "d". The triple bar (≡) here denotes modular congruence.
In addition, for some operations it is convenient that the order of the two exponentiations can be changed, and this relation also implies ("m"^"d")^"e" ≡ "m" (mod "n").
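The commuting relation can be checked directly with tiny illustrative parameters (the values below are assumptions for demonstration only; real moduli are thousands of bits):

```python
# Tiny illustrative parameters (assumed for demonstration; NOT secure).
p, q = 5, 11
n = p * q                      # n = 55
lam = 20                       # λ(n) = lcm(p - 1, q - 1) = lcm(4, 10)
e, d = 3, 7                    # e * d = 21 ≡ 1 (mod 20)

m = 9                          # any message with 0 <= m < n
c1 = pow(pow(m, e, n), d, n)   # (m^e)^d mod n
c2 = pow(pow(m, d, n), e, n)   # (m^d)^e mod n: same result
print(c1, c2)                  # both recover m = 9
```

Either order of exponentiation recovers the original message, which is what makes the same key pair usable for both encryption and signing.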
RSA involves a "public key" and a "private key." The public key can be known by everyone, and it is used for encrypting messages. The intention is that messages encrypted with the public key can only be decrypted in a reasonable amount of time by using the private key. The public key is represented by the integers "n" and "e", and the private key by the integer "d" (although "n" is also used during the decryption process, so it might be considered to be a part of the private key, too). "m" represents the message (previously prepared with a certain technique explained below).
The keys for the RSA algorithm are generated in the following way:
The "public key" consists of the modulus "n" and the public (or encryption) exponent "e". The "private key" consists of the private (or decryption) exponent "d", which must be kept secret. "p", "q", and "λ"("n") must also be kept secret because they can be used to calculate "d". In fact, they can all be discarded after "d" has been computed.
In the original RSA paper, the Euler totient function "φ"("n") = ("p" − 1)("q" − 1) is used instead of "λ"("n") for calculating the private exponent "d". Since "φ"("n") is always divisible by "λ"("n"), the algorithm works as well. That the Euler totient function can be used can also be seen as a consequence of Lagrange's theorem applied to the multiplicative group of integers modulo "pq". Thus any "d" satisfying "d"·"e" ≡ 1 (mod "φ"("n")) also satisfies "d"·"e" ≡ 1 (mod "λ"("n")). However, computing "d" modulo "φ"("n") will sometimes yield a result that is larger than necessary (i.e. "d" > "λ"("n")). Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponent "d" at all, rather than using the optimized decryption method based on the Chinese remainder theorem described below), but some standards like FIPS 186-4 may require that "d" < "λ"("n"). Any "oversized" private exponents not meeting that criterion may always be reduced modulo "λ"("n") to obtain a smaller equivalent exponent.
Since any common factors of ("p" − 1) and ("q" − 1) are present in the factorisation of "n" − 1 = "pq" − 1 = ("p" − 1)("q" − 1) + ("p" − 1) + ("q" − 1), it is recommended that ("p" − 1) and ("q" − 1) have only very small common factors, if any, besides the necessary 2.
Note: The authors of the original RSA paper carry out the key generation by choosing "d" and then computing "e" as the modular multiplicative inverse of "d" modulo "φ"("n"), whereas most current implementations of RSA, such as those following PKCS#1, do the reverse (choose "e" and compute "d"). Since the chosen key can be small whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.
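The key-generation procedure can be sketched as follows, using the small primes "p" = 61 and "q" = 53 (the factors of the example modulus 3233 used later in this article) and a toy public exponent "e" = 17; these concrete values are illustrative assumptions, and real keys use random primes of 1024 or more bits each:

```python
from math import gcd, lcm

# Toy key generation (example primes assumed; real p and q are random
# primes of 1024+ bits found with probabilistic primality tests).
p, q = 61, 53
n = p * q                    # modulus n = 3233
lam = lcm(p - 1, q - 1)      # Carmichael λ(n) = lcm(60, 52) = 780 (Python 3.9+)

e = 17                       # public exponent; must satisfy gcd(e, λ(n)) = 1
assert gcd(e, lam) == 1
d = pow(e, -1, lam)          # private exponent: modular inverse (Python 3.8+)

print((n, e), d)             # public key (3233, 17), private exponent d = 413
```

Note how "p", "q" and λ("n") are only needed during generation; afterwards the public key ("n", "e") and the private exponent "d" suffice.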
Suppose that Bob wants to send information to Alice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message and Alice must use her private key to decrypt the message.
To enable Bob to send his encrypted messages, Alice transmits her public key to Bob via a reliable, but not necessarily secret, route. Alice's private key is never distributed.
After Bob obtains Alice's public key, he can send a message to Alice.
To do it, he first turns "M" (strictly speaking, the un-padded plaintext) into an integer "m" (strictly speaking, the padded plaintext), such that 0 ≤ "m" < "n", by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext "c", using Alice's public key ("n", "e"), corresponding to
"c" ≡ "m"^"e" (mod "n").
This can be done reasonably quickly, even for very large numbers, using modular exponentiation. Bob then transmits "c" to Alice.
Alice can recover "m" from "c" by using her private key exponent "d", by computing
"c"^"d" ≡ ("m"^"e")^"d" ≡ "m" (mod "n").
Given "m", she can recover the original message "M" by reversing the padding scheme.
Here is an example of RSA encryption and decryption. The parameters used here are artificially small, but one can also use OpenSSL to generate and examine a real key pair.
The public key is ("n" = 3233, "e" = 17). For a padded plaintext message "m", the encryption function is "c"("m") = "m"^17 mod 3233.
The private key is ("n" = 3233, "d" = 413). For an encrypted ciphertext "c", the decryption function is "m"("c") = "c"^413 mod 3233.
For instance, in order to encrypt "m" = 65, we calculate "c" = 65^17 mod 3233 = 2790.
To decrypt "c" = 2790, we calculate "m" = 2790^413 mod 3233 = 65.
Both of these calculations can be computed efficiently using the square-and-multiply algorithm for modular exponentiation. In real-life situations the primes selected would be much larger; in our example it would be trivial to factor "n", 3233 (obtained from the freely available public key) back to the primes "p" and "q". "e", also from the public key, is then inverted to get "d", thus acquiring the private key.
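The square-and-multiply algorithm mentioned above can be sketched in a few lines; the exponents "e" = 17 and "d" = 413 used below are assumed textbook choices for the modulus 3233 given in the text:

```python
def modpow(base, exponent, modulus):
    """Right-to-left binary (square-and-multiply) modular exponentiation."""
    result, base = 1, base % modulus
    while exponent > 0:
        if exponent & 1:                  # low bit set: multiply step
            result = result * base % modulus
        base = base * base % modulus      # square step
        exponent >>= 1
    return result

n, e, d = 3233, 17, 413    # e and d are assumed textbook values for n = 3233
c = modpow(65, e, n)       # encrypt m = 65
m = modpow(c, d, n)        # decrypt back
print(c, m)                # 2790 65
```

Python's built-in three-argument `pow` implements the same operation; the explicit loop is shown only to illustrate the algorithm.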
Practical implementations use the Chinese remainder theorem to speed up the calculation using the moduli of the factors (computing mod "pq" via mod "p" and mod "q").
The values "d""p", "d""q" and "q"inv, which are part of the private key, are computed as follows: "d""p" = "d" mod ("p" − 1), "d""q" = "d" mod ("q" − 1), and "q"inv = "q"^−1 mod "p".
Here is how "d""p", "d""q" and "q"inv are used for efficient decryption. (Encryption is efficient by choice of a suitable "d" and "e" pair)
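The CRT decryption can be sketched with the toy key ("p" = 61 and "q" = 53 factor the "n" = 3233 from the text; "d" = 413 and the ciphertext "c" = 2790 are assumed textbook values):

```python
# Toy private key: p and q factor n = 3233; d = 413 and c = 2790 are
# assumed textbook values.
p, q, d, c = 61, 53, 413, 2790

d_p = d % (p - 1)            # 413 mod 60 = 53
d_q = d % (q - 1)            # 413 mod 52 = 49
q_inv = pow(q, -1, p)        # inverse of q modulo p (Python 3.8+)

m1 = pow(c, d_p, p)          # half-size exponentiation mod p
m2 = pow(c, d_q, q)          # half-size exponentiation mod q
h = q_inv * (m1 - m2) % p    # recombination step
m = m2 + h * q
print(m)                     # 65, matching pow(c, d, p * q)
```

The two exponentiations work with numbers roughly half the size of "n", which is where the speedup comes from.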
A working example in JavaScript using BigInteger.js. This code should not be used in production: the library's random-number generation is backed by JavaScript's Math.random, which is not a cryptographically secure pseudorandom number generator.
'use strict';
const bigInt = require('big-integer'); // BigInteger.js

const RSA = {};
RSA.generate = function (keysize) {
    // Random probable primes p and q of keysize/2 bits each (not CSPRNG-backed).
    const lo = bigInt(2).pow(keysize / 2 - 1), hi = bigInt(2).pow(keysize / 2);
    const prime = () => { let p; do { p = bigInt.randBetween(lo, hi); } while (!p.isProbablePrime(32)); return p; };
    const p = prime(), q = prime(), n = p.multiply(q), e = bigInt(65537);
    return { n, e, d: e.modInv(bigInt.lcm(p.prev(), q.prev())) }; // d = e^-1 mod λ(n)
};
RSA.encrypt = (m, n, e) => bigInt(m).modPow(e, n); // c = m^e mod n
RSA.decrypt = (c, d, n) => bigInt(c).modPow(d, n); // m = c^d mod n
Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice but Bob has no way of verifying that the message was actually from Alice since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used to sign a message.
Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces a hash value of the message, raises it to the power of "d" (modulo "n") (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power of "e" (modulo "n") (as he does when encrypting a message), and compares the resulting hash value with the message's actual hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key, and that the message has not been tampered with since.
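The sign-then-verify flow can be sketched as follows. The key values are assumed toy numbers, and the SHA-256 digest is reduced modulo the tiny "n" only so that it fits; real schemes use full-size moduli and a padding scheme such as RSA-PSS:

```python
import hashlib

# Toy sign/verify sketch; key values assumed, digest reduced mod the tiny n
# only so it fits (real schemes pad the digest, e.g. with RSA-PSS).
n, e, d = 3233, 17, 413

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), 'big') % n
    return pow(h, d, n)                   # raise the hash to the power d mod n

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), 'big') % n
    return pow(signature, e, n) == h      # raise the signature to the power e

sig = sign(b'Attack at dawn')
print(verify(b'Attack at dawn', sig))     # True
print(verify(b'Attack at dusk', sig))     # almost certainly False
```

Only the holder of "d" can produce a value whose "e"th power matches the message hash, which is what binds the signature to the private key.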
This works because of exponentiation rules: ("h"^"e")^"d" = "h"^"ed" = "h"^"de" = ("h"^"d")^"e" ≡ "h" (mod "n").
Thus, the keys may be swapped without loss of generality; that is, a private key of a key pair may be used either to decrypt a message intended only for the recipient (encrypted by anyone holding the public key), or to sign a message whose signature anyone can verify with the public key.
The proof of the correctness of RSA is based on Fermat's little theorem, stating that "a"^("p" − 1) ≡ 1 (mod "p") for any integer "a" and prime "p" not dividing "a".
We want to show that ("m"^"e")^"d" ≡ "m" (mod "pq") for every integer "m" when "p" and "q" are distinct prime numbers and "e" and "d" are positive integers satisfying "ed" ≡ 1 (mod "λ"("pq")).
Since "ed" − 1 is, by construction, divisible by both "p" − 1 and "q" − 1, we can write "ed" − 1 = "h"("p" − 1) = "k"("q" − 1) for some nonnegative integers "h" and "k".
To check whether two numbers, such as "m"^"ed" and "m", are congruent mod "pq", it suffices (and in fact is equivalent) to check that they are congruent mod "p" and mod "q" separately.
To show "m"^"ed" ≡ "m" (mod "p"), we consider two cases: if "m" ≡ 0 (mod "p"), then "m"^"ed" is a multiple of "p", so "m"^"ed" ≡ 0 ≡ "m" (mod "p"); otherwise, by Fermat's little theorem, "m"^"ed" = "m"^("ed" − 1)"m" = ("m"^("p" − 1))^"h" "m" ≡ 1^"h" "m" ≡ "m" (mod "p").
The verification that "m"^"ed" ≡ "m" (mod "q") proceeds in a completely analogous way.
This completes the proof that, for any integer "m", and integers "e", "d" such that "ed" ≡ 1 (mod "λ"("pq")), ("m"^"e")^"d" ≡ "m" (mod "pq").
"Notes:"
Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem.
We want to show that "m"^"ed" ≡ "m" (mod "n"), where "n" = "pq" is a product of two different prime numbers and "e" and "d" are positive integers satisfying "ed" ≡ 1 (mod "φ"("n")). Since "e" and "d" are positive, we can write "ed" = 1 + "h""φ"("n") for some non-negative integer "h". "Assuming" that "m" is relatively prime to "n", we have "m"^"ed" = "m"^(1 + "h""φ"("n")) = "m"("m"^"φ"("n"))^"h" ≡ "m"·1^"h" ≡ "m" (mod "n"),
where the second-last congruence follows from Euler's theorem.
More generally, for any "e" and "d" satisfying "ed" ≡ 1 (mod "λ"("n")), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that "m"^"λ"("n") ≡ 1 (mod "n") for all "m" relatively prime to "n".
When "m" is not relatively prime to "n", the argument just given is invalid. This is highly improbable (only a proportion of 1/"p" + 1/"q" − 1/("pq") numbers have this property), but even in this case the desired congruence is still true. Either "m" ≡ 0 (mod "p") or "m" ≡ 0 (mod "q"), and these cases can be treated using the previous proof.
There are a number of attacks against plain RSA as described below.
To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value "m" before encrypting it. This padding ensures that "m" does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts.
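The need for randomized padding can be demonstrated directly: textbook RSA is deterministic, so an attacker who can enumerate a small plaintext space recovers the message simply by re-encrypting candidates with the public key (the key values below are assumed toy numbers):

```python
# Toy key values assumed (n = 3233, e = 17). Textbook RSA is deterministic,
# so an eavesdropper who can enumerate the plaintext space re-encrypts every
# candidate with the public key and compares ciphertexts.
n, e = 3233, 17

secret = 42                          # e.g. a two-digit PIN
c = pow(secret, e, n)                # intercepted ciphertext

recovered = next(m for m in range(100) if pow(m, e, n) == c)
print(recovered)                     # 42, recovered without the private key
```

Randomized padding defeats this attack because the same plaintext encrypts to a different ciphertext each time.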
Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext "m" with some number of additional bits, the size of the un-padded message "M" must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks which may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al. showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS).
Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two US patents on PSS were granted (USPTO 6266771 and USPTO 7036014); however, these patents expired on 24 July 2009 and 25 April 2010, respectively. Use of PSS no longer seems to be encumbered by patents. Note that using different RSA key pairs for encryption and signing is potentially more secure.
For efficiency many popular crypto libraries (like OpenSSL, Java and .NET) use the following optimization for decryption and signing based on the Chinese remainder theorem. The following values are precomputed and stored as part of the private key:
These values allow the recipient to compute the exponentiation "m" = "c"^"d" (mod "pq") more efficiently as follows: "m"1 = "c"^("d""p") mod "p"; "m"2 = "c"^("d""q") mod "q"; "h" = "q"inv("m"1 − "m"2) mod "p"; "m" = "m"2 + "hq".
This is more efficient than computing exponentiation by squaring even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.
The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against "partial" decryption may require the addition of a secure padding scheme.
The RSA problem is defined as the task of taking "e"th roots modulo a composite "n": recovering a value "m" such that "c" ≡ "m"^"e" (mod "n"), where ("n", "e") is an RSA public key and "c" is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus "n". With the ability to recover prime factors, an attacker can compute the secret exponent "d" from a public key ("n", "e"), then decrypt "c" using the standard procedure. To accomplish this, an attacker factors "n" into "p" and "q", and computes lcm("p" − 1, "q" − 1), which allows the determination of "d" from "e". No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists. "See integer factorization for a discussion of this problem".
Multiple polynomial quadratic sieve (MPQS) can be used to factor the public modulus "n". The times taken to factor 128-bit and 256-bit "n" on a desktop computer are 2 seconds and 35 minutes, respectively.
A tool called YAFU can be used to optimize this process. It took about 5720 seconds to factor a 320-bit "n" on the same computer.
In 2009, Benjamin Moody factored an RSA-512 bit key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just less than five gigabytes of disk storage and about 2.5 gigabytes of RAM were required for the sieving process. The first RSA-512 factorization in 1999 required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months.
Rivest, Shamir, and Adleman noted that Miller has shown that – assuming the truth of the Extended Riemann Hypothesis – finding "d" from "n" and "e" is as hard as factoring "n" into "p" and "q" (up to a polynomial time difference). However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is equally as hard as factoring.
The largest factored RSA number to date is 795 bits long (240 decimal digits; see RSA-240). Its factorization, by a state-of-the-art distributed implementation, took around 900 CPU years. No larger RSA key is known publicly to have been factored. In practice, RSA keys are typically 1024 to 4096 bits long. Some experts believe that 1024-bit keys may become breakable in the near future or may already be breakable by a sufficiently well-funded attacker, though this is disputable. Few people see any way that 4096-bit keys could be broken in the foreseeable future. Therefore, it is generally presumed that RSA is secure if "n" is sufficiently large. If "n" is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software that is already freely available. Keys of 512 bits were shown to be practically breakable in 1999, when RSA-155 was factored using several hundred computers; such keys can now be factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011. A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys. It is currently recommended that "n" be at least 2048 bits long.
In 1994, Peter Shor showed that a quantum computer – if one could ever be practically created for the purpose – would be able to factor in polynomial time, breaking RSA; see Shor's algorithm.
Finding the large primes "p" and "q" is usually done by testing random numbers of the right size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes.
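A common choice of probabilistic test is Miller-Rabin. A minimal sketch follows (40 rounds is a typical engineering choice, not a value taken from this article):

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test (sketch)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is composite
    return True

print(is_probable_prime(3229), is_probable_prime(3233))   # prime, composite
```

Each round that fails to find a witness cuts the probability of a composite slipping through by at least a factor of four, so 40 rounds make false positives astronomically unlikely.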
The numbers "p" and "q" should not be "too close", lest the Fermat factorization of "n" succeed. If "p" − "q" is less than 2"n"^(1/4) (where "n" = "pq", so even for small 1024-bit values of "n" this bound is about 3×10^77), solving for "p" and "q" is trivial. Furthermore, if either "p" − 1 or "q" − 1 has only small prime factors, "n" can be factored quickly by Pollard's p − 1 algorithm, and such values of "p" or "q" should therefore be discarded.
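Fermat's factorization method shows why close primes are dangerous: it searches for "n" = "a"^2 − "b"^2 = ("a" − "b")("a" + "b"), and succeeds almost immediately when "p" and "q" lie near the square root of "n". A sketch with illustrative primes:

```python
from math import isqrt

def fermat_factor(n: int):
    """Fermat's method: fast when p and q are close together."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n              # look for a^2 - n that is a perfect square
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b     # n = (a - b)(a + b)
        a += 1

# Two dangerously close primes (illustrative choice, not from the article)
print(fermat_factor(10007 * 10009))   # (10007, 10009), found on the first try
```

For well-separated random primes the loop runs for an astronomical number of iterations, which is why the method only threatens carelessly generated keys.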
It is important that the private exponent "d" be large enough. Michael J. Wiener showed that if "p" is between "q" and 2"q" (which is quite typical) and "d" < "n"^(1/4)/3, then "d" can be computed efficiently from "n" and "e".
There is no known attack against small public exponents such as "e" = 3, provided that the proper padding is used. Coppersmith's attack has many applications in attacking RSA specifically if the public exponent "e" is small and if the encrypted message is short and not padded. 65537 is a commonly used value for "e"; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev 1 of August 2007) does not allow public exponents "e" smaller than 65537, but does not state a reason for this restriction.
In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPMs) were shown to be affected. Vulnerable RSA keys can be easily identified using a test program the team released.
A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes "p" and "q". An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.
They exploited a weakness unique to cryptosystems based on integer factorization. If "n" = "pq" is one public key and "n"′ = "p"′"q"′ is another, then if by chance "p" = "p"′ (but "q" is not equal to "q"′), then a simple computation of gcd("n", "n"′) = "p" factors both "n" and "n"′, totally compromising both keys. Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose "q" given "p", instead of choosing "p" and "q" independently.
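The shared-prime failure can be demonstrated with toy numbers (illustrative values, not taken from the study): a single gcd of the two moduli exposes the common factor and thus both factorizations.

```python
from math import gcd

# Two toy public moduli that accidentally share the prime p = 101
# (illustrative values only).
p, q1, q2 = 101, 103, 107
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)          # one cheap gcd recovers the shared prime
print(shared, n1 // shared, n2 // shared)   # 101 103 107
```

No factoring is required at all: the gcd runs in time polynomial in the bit length of the moduli, which is why poorly seeded generators are so catastrophic.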
Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key "n" against the product of all the other keys "n"′ they had found (a 729 million digit number), instead of computing each gcd("n","n"′) separately, thereby achieving a very significant speedup since after one large division the GCD problem is of normal size.
Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from over 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially and then reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise or atmospheric noise from a radio receiver tuned between stations should solve the problem.
Strong random number generation is important throughout every phase of public key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.
Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key "d" quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver). This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.
One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing "c"^"d" (mod "n"), Alice first chooses a secret random value "r" and computes ("r"^"e""c")^"d" (mod "n"). The result of this computation, after applying Euler's theorem, is "rc"^"d" (mod "n"), and so the effect of "r" can be removed by multiplying by its inverse. A new value of "r" is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.
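Blinding can be sketched with the toy key (the values "n" = 3233, "e" = 17, "d" = 413 and ciphertext "c" = 2790 are assumed, as in the worked example earlier):

```python
from math import gcd
import secrets

# Blinded decryption sketch with assumed toy key values.
n, e, d = 3233, 17, 413
c = 2790                                 # ciphertext of m = 65

r = 0
while gcd(r, n) != 1:
    r = secrets.randbelow(n)             # fresh secret blinding factor

c_blind = pow(r, e, n) * c % n           # blind: r^e * c mod n
m_blind = pow(c_blind, d, n)             # equals r * m mod n
m = m_blind * pow(r, -1, n) % n          # unblind with r^(-1) mod n
print(m)                                 # 65
```

Because the exponentiation is performed on "r"^"e""c" rather than on "c" itself, its running time is decorrelated from the attacker-chosen ciphertext.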
In 1998, Daniel Bleichenbacher described the first practical adaptive chosen ciphertext attack, against RSA-encrypted messages using the PKCS #1 v1 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Socket Layer protocol, and to recover session keys. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks.
A side-channel attack using branch prediction analysis (BPA) has been described. Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implement simultaneous multithreading (SMT). Branch prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors.
Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis", the authors of SBPA (Onur Acıiçmez and Çetin Kaya Koç) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.
A power-fault attack on RSA implementations was described in 2010. The author recovered the key by varying the CPU power supply voltage outside its specified limits; this caused multiple power faults on the server.
If the random number generator used to produce the prime candidates draws from a fixed, finite set of values (for example, because of a small or predictable seed), the generated primes can be precomputed, rainbow-table style, and the key recovered.
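The risk can be sketched as follows: if the seed space is small (here assumed to be just 1,000 values, purely for illustration, with toy 16-bit primes), an attacker can replay every seed and precompute the complete table of primes a victim could ever generate.

```python
import random

def is_prime(n: int) -> bool:
    """Trial-division primality test (fine for toy 16-bit candidates)."""
    if n < 2:
        return False
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return False
    return True

def gen_prime(seed: int) -> int:
    """Generate a 16-bit prime deterministically from a PRNG seed."""
    rng = random.Random(seed)        # fully determined by the seed
    while True:
        candidate = rng.randrange(1 << 15, 1 << 16)
        if is_prime(candidate):
            return candidate

secret_p = gen_prime(seed=423)       # the victim's "random" prime

# Attacker: enumerate the small seed space and rebuild every possible prime.
table = {gen_prime(s) for s in range(1000)}
assert secret_p in table             # the "secret" prime is in the table
```

Real key generation avoids this by seeding from a high-entropy source, so the set of reachable primes is far too large to enumerate.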
Some cryptography libraries that provide support for RSA include: Botan, Bouncy Castle, cryptlib, Crypto++, Libgcrypt, Nettle, OpenSSL, and wolfCrypt.
Robert A. Heinlein
Robert Anson Heinlein (; July 7, 1907 – May 8, 1988) was an American science-fiction author, aeronautical engineer, and retired Naval officer. Sometimes called the "dean of science fiction writers", he was among the first to emphasize scientific accuracy in his fiction, and was thus a pioneer of the subgenre of hard science fiction. His published works, both fiction and non-fiction, express admiration for competence and emphasize the value of critical thinking. His work continues to have an influence on the science-fiction genre, and on modern culture more generally.
Heinlein became one of the first American science-fiction writers to break into mainstream magazines such as "The Saturday Evening Post" in the late 1940s. He was one of the best-selling science-fiction novelists for many decades, and he, Isaac Asimov, and Arthur C. Clarke are often considered the "Big Three" of English-language science fiction authors. Notable Heinlein works include "Stranger in a Strange Land", "Starship Troopers" (which helped mold the space marine and mecha archetypes) and "The Moon Is a Harsh Mistress". His work sometimes had controversial aspects, such as plural marriage in "The Moon Is a Harsh Mistress", militarism in "Starship Troopers", and technologically competent women characters who were strong and independent, yet often stereotypically feminine – such as "Friday".
A writer also of numerous science-fiction short stories, Heinlein was one of a group of writers who came to prominence under the editorship (1937–1971) of John W. Campbell at "Astounding Science Fiction" magazine, though Heinlein denied that Campbell influenced his writing to any great degree.
Heinlein used his science fiction as a way to explore provocative social and political ideas, and to speculate how progress in science and engineering might shape the future of politics, race, religion, and sex. Within the framework of his science-fiction stories, Heinlein repeatedly addressed certain social themes: the importance of individual liberty and self-reliance, the nature of sexual relationships, the obligation individuals owe to their societies, the influence of organized religion on culture and government, and the tendency of society to repress nonconformist thought. He also speculated on the influence of space travel on human cultural practices.
Heinlein was named the first Science Fiction Writers Grand Master in 1974. Four of his novels won Hugo Awards. In addition, fifty years after publication, seven of his works were awarded "Retro Hugos"—awards given retrospectively for works that were published before the Hugo Awards came into existence. In his fiction, Heinlein coined terms that have become part of the English language, including grok, waldo, and speculative fiction, as well as popularizing existing terms like "TANSTAAFL", "pay it forward", and "space marine". He also anticipated mechanical computer-aided design with "Drafting Dan" and described a modern version of a waterbed in his novel "Beyond This Horizon". In the first chapter of the novel "Space Cadet" he anticipated the cell phone, 35 years before Motorola invented the technology. Several of Heinlein's works have been adapted for film and television.
Heinlein, born on July 7, 1907, to Rex Ivar Heinlein (an accountant) and Bam Lyle Heinlein, in Butler, Missouri, was the third of seven children. He was a sixth-generation German-American; a family tradition had it that Heinleins fought in every American war, starting with the War of Independence.
He spent his childhood in Kansas City, Missouri.
The outlook and values of this time and place (in his own words, "The Bible Belt") had a definite influence on his fiction, especially in his later works, as he drew heavily upon his childhood in establishing the setting and cultural atmosphere in works like "Time Enough for Love" and "To Sail Beyond the Sunset". The 1910 appearance of Halley's Comet inspired the young child's life-long interest in astronomy.
When Heinlein graduated from Central High School in Kansas City in 1924, he aspired to a career as an officer in the United States Navy. However, he was initially prevented from attending the United States Naval Academy at Annapolis because his older brother Rex was a student there, and regulations discouraged multiple family members from attending the Academy simultaneously. He instead matriculated at Kansas City Community College and began vigorously petitioning Missouri Senator James A. Reed for an appointment to the Naval Academy. In part due to the influence of the Pendergast machine, the Naval Academy admitted him in June 1925.
Heinlein's experience in the Navy exerted a strong influence on his character and writing. In 1929, he graduated from the Naval Academy with the equivalent of a Bachelor of Arts degree in Engineering, ranking fifth in his class academically but with a class standing of 20th of 243 due to disciplinary demerits. Shortly after graduation, he was commissioned as an ensign by the U.S. Navy. He advanced to lieutenant, junior grade while serving aboard the new aircraft carrier in 1931, where he worked in radio communications, then in its earlier phases, with the carrier's aircraft. The captain of this carrier was Ernest J. King, who served as the Chief of Naval Operations and Commander-in-Chief, U.S. Fleet during World War II. Heinlein was frequently interviewed during his later years by military historians who asked him about Captain King and his service as the commander of the U.S. Navy's first modern aircraft carrier. Heinlein also served as gunnery officer aboard the destroyer in 1933 and 1934, reaching the rank of lieutenant. His brother, Lawrence Heinlein, served in the U.S. Army, the U.S. Air Force, and the Missouri National Guard, reaching the rank of major general in the National Guard.
In 1929, Heinlein married Elinor Curry of Kansas City. However, their marriage only lasted about a year. His second marriage in 1932 to Leslyn MacDonald (1904–1981) lasted for 15 years. MacDonald was, according to the testimony of Heinlein's Navy friend, Rear Admiral Cal Laning, "astonishingly intelligent, widely read, and extremely liberal, though a registered Republican", while Isaac Asimov later recalled that Heinlein was, at the time, "a flaming liberal". (See section: Politics of Robert Heinlein.)
At the Philadelphia Naval Shipyard Heinlein met and befriended a chemical engineer named Virginia "Ginny" Gerstenfeld. After the war, her engagement having fallen through, she moved to UCLA for doctoral studies in chemistry and renewed contact with Heinlein.
As his second wife's alcoholism gradually spun out of control, Heinlein moved out and the couple filed for divorce. Heinlein's friendship with Virginia turned into a relationship, and on October 21, 1948—shortly after the decree nisi came through—they married in the town of Raton, New Mexico, soon afterwards setting up housekeeping in Colorado. They remained married until Heinlein's death.
As Heinlein's increasing success as a writer resolved their initial financial woes, they had a house custom built with various innovative features, later described in an article in "Popular Mechanics". In 1965, after various chronic health problems of Virginia's were traced back to altitude sickness, they moved to Santa Cruz, California, which is at sea level. Robert and Virginia then designed and built a new, circular residence themselves in the adjacent village of Bonny Doon, California.
Ginny undoubtedly served as a model for many of his intelligent, fiercely independent female characters. She was a chemist and rocket test engineer, and held a higher rank in the Navy than Heinlein himself. She was also an accomplished college athlete, earning four letters. In 1953–1954, the Heinleins voyaged around the world (mostly via ocean liners and cargo liners, as Ginny detested flying), which Heinlein described in "Tramp Royale", and which also provided background material for science fiction novels set aboard spaceships on long voyages, such as "Podkayne of Mars", "Friday" and "", the latter initially being set on a cruise much as detailed in "Tramp Royale". Ginny acted as the first reader of his manuscripts. Isaac Asimov believed that Heinlein made a swing to the right politically at the same time he married Ginny.
In 1934, Heinlein was discharged from the Navy due to pulmonary tuberculosis. During a lengthy hospitalization, and inspired by his own experience while bed-ridden, he developed a design for a waterbed.
After his discharge, Heinlein attended a few weeks of graduate classes in mathematics and physics at the University of California at Los Angeles (UCLA), but he soon quit either because of his health or from a desire to enter politics.
Heinlein supported himself at several occupations, including real estate sales and silver mining, but for some years found money in short supply. Heinlein was active in Upton Sinclair's socialist End Poverty in California movement (EPIC) in the early 1930s. He was deputy publisher of the EPIC News, which Heinlein noted "recalled a mayor, kicked out a district attorney, replaced the governor with one of our choice." When Sinclair gained the Democratic nomination for Governor of California in 1934, Heinlein worked actively in the campaign. Heinlein himself ran for the California State Assembly in 1938, but was unsuccessful. Heinlein was running as a left-wing Democrat in a conservative district, and he never made it past the Democratic primary because of trickery by his Republican opponent.
While not destitute after the campaign—he had a small disability pension from the Navy—Heinlein turned to writing to pay off his mortgage. His first published story, "Life-Line", was printed in the August 1939 issue of "Astounding Science Fiction". Originally written for a contest, he sold it to "Astounding" for significantly more than the contest's first-prize payoff. Another Future History story, "Misfit", followed in November. Heinlein's talent was apparent from his first story, and he was quickly acknowledged as a leader of the new movement toward "social" science fiction. In California he hosted the Mañana Literary Society, a 1940–41 series of informal gatherings of new authors. He was the guest of honor at Denvention, the 1941 Worldcon, held in Denver. During World War II, Heinlein was employed by the Navy as a civilian aeronautical engineer at the Navy Aircraft Materials Center at the Philadelphia Naval Shipyard in Pennsylvania. Heinlein recruited Isaac Asimov and L. Sprague de Camp to also work there. While at the Philadelphia Naval Shipyard, Asimov, Heinlein, and de Camp brainstormed unconventional approaches to kamikaze attacks, such as using sound to detect approaching planes.
As the war wound down in 1945, Heinlein began to re-evaluate his career. The atomic bombings of Hiroshima and Nagasaki, along with the outbreak of the Cold War, galvanized him to write nonfiction on political topics. In addition, he wanted to break into better-paying markets. He published four influential short stories for "The Saturday Evening Post" magazine, leading off, in February 1947, with "The Green Hills of Earth". That made him the first science fiction writer to break out of the "pulp ghetto". In 1950, the movie "Destination Moon"—the documentary-like film for which he had written the story and scenario, co-written the script, and invented many of the effects—won an Academy Award for special effects. Also, he embarked on a series of juvenile novels for the Charles Scribner's Sons publishing company that went from 1947 through 1959, at the rate of one book each autumn, in time for Christmas presents to teenagers. He also wrote for "Boys' Life" in 1952.
Heinlein had used topical materials throughout his juvenile series beginning in 1947, but in 1958 he interrupted work on "The Heretic" (the working title of "Stranger in a Strange Land") to write and publish a book exploring ideas of civic virtue, initially serialized as "Starship Soldiers". In 1959, his novel (now entitled "Starship Troopers") was considered by the editors and owners of Scribner's to be too controversial for one of its prestige lines, and it was rejected. Heinlein found another publisher (Putnam), feeling himself released from the constraints of writing novels for children. He had told an interviewer that he did not want to do stories that merely added to categories defined by other works. Rather he wanted to do his own work, stating: "I want to do my own stuff, my own way". He would go on to write a series of challenging books that redrew the boundaries of science fiction, including "Stranger in a Strange Land" (1961) and "The Moon Is a Harsh Mistress" (1966).
Beginning in 1970, Heinlein had a series of health crises, broken by strenuous periods of activity in his hobby of stonemasonry: in a private correspondence, he referred to that as his "usual and favorite occupation between books". The decade began with a life-threatening attack of peritonitis, recovery from which required more than two years, and treatment of which required multiple transfusions of Heinlein's rare blood type, A2 negative. As soon as he was well enough to write again, he began work on "Time Enough for Love" (1973), which introduced many of the themes found in his later fiction.
In the mid-1970s, Heinlein wrote two articles for the "Britannica Compton Yearbook". He and Ginny crisscrossed the country helping to reorganize blood donation in the United States in an effort to assist the system which had saved his life. At science fiction conventions, fans seeking his autograph would be asked to co-sign with Heinlein a beautifully embellished pledge form he supplied, stating that the recipient agreed to donate blood. He was the guest of honor at the Worldcon in 1976 for the third time at MidAmeriCon in Kansas City, Missouri. At that Worldcon, Heinlein hosted a blood drive and donors' reception to thank all those who had helped save lives.
Beginning in 1977 and including an episode while vacationing in Tahiti in early 1978, he had episodes of reversible neurologic dysfunction due to transient ischemic attacks. Over the next few months, he became more and more exhausted, and his health again began to decline. The problem was determined to be a blocked carotid artery, and he had one of the earliest known carotid bypass operations to correct it. Heinlein and Virginia had been smokers, and smoking appears often in his fiction, as do fictitious strikable self-lighting cigarettes.
In 1980 Robert Heinlein was a member of the Citizens Advisory Council on National Space Policy, chaired by Jerry Pournelle, which met at the home of SF writer Larry Niven to write space policy papers for the incoming Reagan Administration. Members included such aerospace industry leaders as former astronaut Buzz Aldrin, General Daniel O. Graham, aerospace engineer Max Hunter and North American Rockwell VP for Space Shuttle development George Merrick. Policy recommendations from the Council included ballistic missile defense concepts which were later transformed into what was called the Strategic Defense Initiative, or "Star Wars" as derided by Senator Ted Kennedy. Heinlein assisted with Council contribution to the Reagan "Star Wars" speech of Spring 1983.
Asked to appear before a Joint Committee of the United States Congress that year, he testified on his belief that spin-offs from space technology were benefiting the infirm and the elderly. Heinlein's surgical treatment re-energized him, and he wrote five novels from 1980 until he died in his sleep from emphysema and heart failure on May 8, 1988.
At that time, he had been putting together the early notes for another "World as Myth" novel. Several of his other works have been published posthumously. Based on an outline and notes created by Heinlein in 1955, Spider Robinson has written the novel "Variable Star". Heinlein's posthumously published nonfiction includes a selection of correspondence and notes edited into a somewhat autobiographical examination of his career, published in 1989 under the title "Grumbles from the Grave" by his wife, Virginia; his book on practical politics written in 1946 published as "Take Back Your Government"; and a travelogue of their first around-the-world tour in 1954, "Tramp Royale". The novels "Podkayne of Mars" and "Red Planet", which were edited against his wishes in their original release, have been reissued in restored editions. "Stranger In a Strange Land" was originally published in a shorter form, but both the long and short versions are now simultaneously available in print.
Heinlein's archive is housed by the Special Collections department of McHenry Library at the University of California at Santa Cruz. The collection includes manuscript drafts, correspondence, photographs and artifacts. A substantial portion of the archive has been digitized and it is available online through the Robert A. and Virginia Heinlein Archives.
Heinlein published 32 novels, 59 short stories, and 16 collections during his life. Four films, two television series, several episodes of a radio series, and a board game have been derived more or less directly from his work. He wrote a screenplay for one of the films. Heinlein edited an anthology of other writers' SF short stories.
Three nonfiction books and two poems have been published posthumously. "For Us, the Living: A Comedy of Customs" was published posthumously in 2003; "Variable Star", written by Spider Robinson based on an extensive outline by Heinlein, was published in September 2006. Four collections have been published posthumously.
Over the course of his career, Heinlein wrote three somewhat overlapping series:
Heinlein began his career as a writer of stories for "Astounding Science Fiction" magazine, which was edited by John Campbell. The science fiction writer Frederik Pohl has described Heinlein as "that greatest of Campbell-era sf writers". Isaac Asimov said that, from the time of his first story, the science fiction world accepted that Heinlein was the best science fiction writer in existence, adding that he would hold this title through his lifetime.
Alexei and Cory Panshin noted that Heinlein's impact was immediately felt. In 1940, the year after selling "Life-Line" to Campbell, he wrote three short novels, four novelettes, and seven short stories. They went on to say that "No one ever dominated the science fiction field as Bob did in the first few years of his career." Alexei expresses awe at Heinlein's ability to show readers a world so drastically different from the one we live in now, yet with so many similarities. He says that "We find ourselves not only in a world other than our own, but identifying with a living, breathing individual who is operating within its context, and thinking and acting according to its terms."
The first novel that Heinlein wrote, "" (1939), did not see print during his lifetime, but Robert James tracked down the manuscript and it was published in 2003. Though some regard it as a failure as a novel, considering it little more than a disguised lecture on Heinlein's social theories, some readers took a very different view. In a review of it, John Clute wrote: "I'm not about to suggest that if Heinlein had been able to publish [such works] openly in the pages of "Astounding" in 1939, SF would have gotten the future right; I would suggest, however, that if Heinlein, and his colleagues, had been able to publish adult SF in "Astounding" and its fellow journals, then SF might not have done such a grotesquely poor job of prefiguring something of the flavor of actually living here at the onset of 2004."
"For Us, the Living" was intriguing as a window into the development of Heinlein's radical ideas about man as a social animal, including his interest in free love. The root of many themes found in his later stories can be found in this book. It also contained a large amount of material that could be considered background for his other novels. This included a detailed description of the protagonist's treatment to avoid being banned to Coventry (a lawless land in the Heinlein mythos where unrepentant law-breakers are exiled).
It appears that Heinlein at least attempted to live in a manner consistent with these ideals, even in the 1930s, and had an open relationship in his marriage to his second wife, Leslyn. He was also a nudist; nudism and body taboos are frequently discussed in his work. At the height of the Cold War, he built a bomb shelter under his house, like the one featured in "Farnham's Freehold".
After "For Us, the Living", Heinlein began selling (to magazines) first short stories, then novels, set in a Future History, complete with a time line of significant political, cultural, and technological changes. A chart of the future history was published in the May 1941 issue of "Astounding". Over time, Heinlein wrote many novels and short stories that deviated freely from the Future History on some points, while maintaining consistency in some other areas. The Future History was eventually overtaken by actual events. These discrepancies were explained, after a fashion, in his later World as Myth stories.
Heinlein's first novel published as a book, "Rocket Ship Galileo", was initially rejected because going to the moon was considered too far-fetched, but he soon found a publisher, Scribner's, that began publishing a Heinlein juvenile once a year for the Christmas season. Eight of these books were illustrated by Clifford Geary in a distinctive white-on-black scratchboard style. Some representative novels of this type are "Have Space Suit—Will Travel", "Farmer in the Sky", and "Starman Jones". Many of these were first published in serial form under other titles, e.g., "Farmer in the Sky" was published as "Satellite Scout" in the Boy Scout magazine "Boys' Life". There has been speculation that Heinlein's intense obsession with his privacy was due at least in part to the apparent contradiction between his unconventional private life and his career as an author of books for children. However, "For Us, the Living" explicitly discusses the political importance Heinlein attached to privacy as a matter of principle.
The novels that Heinlein wrote for a young audience are commonly called "the Heinlein juveniles", and they feature a mixture of adolescent and adult themes. Many of the issues that he takes on in these books have to do with the kinds of problems that adolescents experience. His protagonists are usually intelligent teenagers who have to make their way in the adult society they see around them. On the surface, they are simple tales of adventure, achievement, and dealing with stupid teachers and jealous peers. Heinlein was a vocal proponent of the notion that juvenile readers were far more sophisticated and able to handle more complex or difficult themes than most people realized. His juvenile stories often had a maturity to them that made them readable for adults. "Red Planet", for example, portrays some subversive themes, including a revolution in which young students are involved; his editor demanded substantial changes in this book's discussion of topics such as the use of weapons by children and the misidentified sex of the Martian character. Heinlein was always aware of the editorial limitations put in place by the editors of his novels and stories, and while he observed those restrictions on the surface, was often successful in introducing ideas not often seen in other authors' juvenile SF.
In 1957, James Blish wrote that one reason for Heinlein's success "has been the high grade of machinery which goes, today as always, into his story-telling. Heinlein seems to have known from the beginning, as if instinctively, technical lessons about fiction which other writers must learn the hard way (or often enough, never learn). He does not always operate the machinery to the best advantage, but he always seems to be aware of it."
Heinlein decisively ended his juvenile novels with "Starship Troopers" (1959), a controversial work and his personal riposte to leftists calling for President Dwight D. Eisenhower to stop nuclear testing in 1958. "The 'Patrick Henry' ad shocked 'em", he wrote many years later. ""Starship Troopers" outraged 'em." "Starship Troopers" is a coming-of-age story about duty, citizenship, and the role of the military in society. The book portrays a society in which suffrage is earned by demonstrated willingness to place society's interests before one's own, at least for a short time and often under onerous circumstances, in government service; in the case of the protagonist, this was military service.
Later, in "Expanded Universe", Heinlein said that it was his intention in the novel that service could include positions outside strictly military functions such as teachers, police officers, and other government positions. This is presented in the novel as an outgrowth of the failure of unearned suffrage government and as a very successful arrangement. In addition, the franchise was only awarded after leaving the assigned service; thus those serving their terms—in the military, or any other service—were excluded from exercising any franchise. Career military were completely disenfranchised until retirement.
The name "Starship Troopers" was licensed for an unrelated, B movie script called "Bug Hunt at Outpost Nine", which was then retitled to benefit from the book's credibility. The resulting film, entitled "Starship Troopers" (1997), which was written by Ed Neumeier and directed by Paul Verhoeven, had little relationship to the book, beyond the inclusion of character names, the depiction of space marines, and the concept of suffrage earned by military service. Fans of Heinlein were critical of the movie, which they considered a betrayal of Heinlein's philosophy, presenting the society in which the story takes place as fascist.
Likewise, the powered armor technology that is not only central to the book, but became a standard subgenre of science fiction thereafter, is completely absent from the movie, where the characters use World War II-technology weapons and wear light combat gear little more advanced than that. Verhoeven commented that he had tried to read the book after he had bought the rights to it, in order to incorporate it into his existing script, but read only the first two chapters, finding it too boring to continue. He thought it was a bad book and asked Ed Neumeier to tell him the story because he could not read it.
From about 1961 ("Stranger in a Strange Land") to 1973 ("Time Enough for Love"), Heinlein explored some of his most important themes, such as individualism, libertarianism, and free expression of physical and emotional love. Three novels from this period, "Stranger in a Strange Land", "The Moon Is a Harsh Mistress", and "Time Enough for Love", won the Libertarian Futurist Society's Prometheus Hall of Fame Award, designed to honor classic libertarian fiction. Jeff Riggenbach described "The Moon Is a Harsh Mistress" as "unquestionably one of the three or four most influential libertarian novels of the last century".
Heinlein did not publish "Stranger in a Strange Land" until some time after it was written, and the themes of free love and radical individualism are prominently featured in his long-unpublished first novel, "For Us, the Living: A Comedy of Customs".
"The Moon Is a Harsh Mistress" tells of a war of independence waged by the Lunar penal colonies, with significant comments from a major character, Professor La Paz, regarding the threat posed by government to individual freedom.
Although Heinlein had previously written a few short stories in the fantasy genre, during this period he wrote his first fantasy novel, "Glory Road". In "Stranger in a Strange Land" and "I Will Fear No Evil", he began to mix hard science with fantasy, mysticism, and satire of organized religion. Critics William H. Patterson, Jr., and Andrew Thornton believe that this is simply an expression of Heinlein's longstanding philosophical opposition to positivism. Heinlein stated that he was influenced by James Branch Cabell in taking this new literary direction. The penultimate novel of this period, "I Will Fear No Evil", is according to critic James Gifford "almost universally regarded as a literary failure" and he attributes its shortcomings to Heinlein's near-death from peritonitis.
After a seven-year hiatus brought on by poor health, Heinlein produced five new novels in the period from 1980 ("The Number of the Beast") to 1987 ("To Sail Beyond the Sunset"). These books have a thread of common characters and time and place. They most explicitly communicated Heinlein's philosophies and beliefs, and many long, didactic passages of dialog and exposition deal with government, sex, and religion. These novels are controversial among his readers and one critic, David Langford, has written about them very negatively. Heinlein's four Hugo awards were all for books written before this period.
Most of the novels from this period are recognized by critics as forming an offshoot from the Future History series, and referred to by the term World as Myth.
The tendency toward authorial self-reference begun in "Stranger in a Strange Land" and "Time Enough for Love" becomes even more evident in novels such as "The Cat Who Walks Through Walls", whose first-person protagonist is a disabled military veteran who becomes a writer, and finds love with a female character.
The 1982 novel "Friday", a more conventional adventure story (borrowing a character and backstory from the earlier short story "Gulf", also containing suggestions of connection to "The Puppet Masters") continued a Heinlein theme of expecting what he saw as the continued disintegration of Earth's society, to the point where the title character is strongly encouraged to seek a new life off-planet. It concludes with a traditional Heinlein note, as in "The Moon Is a Harsh Mistress" or "Time Enough for Love", that freedom is to be found on the frontiers.
The 1984 novel "" is a sharp satire of organized religion. Heinlein himself was agnostic.
Several Heinlein works have been published since his death, including the aforementioned "" as well as 1989's "Grumbles from the Grave", a collection of letters between Heinlein and his editors and agent; 1992's "Tramp Royale", a travelogue of a southern hemisphere tour the Heinleins took in the 1950s; "Take Back Your Government", a how-to book about participatory democracy written in 1946; and a tribute volume called "Requiem: Collected Works and Tributes to the Grand Master", containing some additional short works previously unpublished in book form. "Off the Main Sequence", published in 2005, includes three short stories never before collected in any Heinlein book (Heinlein called them "stinkeroos").
Spider Robinson, a colleague, friend, and admirer of Heinlein, wrote "Variable Star", based on an outline and notes for a juvenile novel that Heinlein prepared in 1955. The novel was published as a collaboration, with Heinlein's name above Robinson's on the cover, in 2006.
A complete collection of Heinlein's published work has been published by the Heinlein Prize Trust as the "Virginia Edition", after his wife. See the Complete Works section of Robert A. Heinlein bibliography for details.
On February 1, 2019, Phoenix Pick announced that, through a collaboration with the Heinlein Prize Trust, a reconstruction of the full text of an unpublished Heinlein novel had been produced. The reconstructed novel, entitled "The Pursuit of the Pankera: A Parallel Novel about Parallel Universes", is an alternative version of "The Number of the Beast": its first third is mostly the same as that of "The Number of the Beast", but the remainder deviates entirely, with a completely different story-line. The novel pays homage to Edgar Rice Burroughs and E. E. "Doc" Smith, and was edited by Patrick Lobrutto. Some reviewers described it as more in line with the style of a traditional Heinlein novel than "The Number of the Beast", and some considered it superior to the original. Both "The Pursuit of the Pankera" and a new edition of "The Number of the Beast" were published in March 2020; the new edition of the latter shares the subtitle, and is thus entitled "The Number of the Beast: A Parallel Novel about Parallel Universes".
The primary influence on Heinlein's writing style may have been Rudyard Kipling, whose work provides the first known modern example of "indirect exposition", a writing technique for which Heinlein later became famous. In his essay "On the Writing of Speculative Fiction", Heinlein quotes Kipling:
"Stranger in a Strange Land" originated as a modernized version of Kipling's "The Jungle Book", his wife suggesting that the child be raised by Martians instead of wolves. Likewise, "Citizen of the Galaxy" can be seen as a reboot of Kipling's novel "Kim".
The "Starship Troopers" idea that one must serve in the military in order to vote can be found in Kipling's "The Army of a Dream":
Poul Anderson once said of Kipling's science fiction story "As Easy as A.B.C.", "a wonderful science fiction yarn, showing the same eye for detail that would later distinguish the work of Robert Heinlein".
Heinlein described himself as also being influenced by George Bernard Shaw, having read most of his plays. Shaw is an example of an earlier author who used the competent man, a favorite Heinlein archetype. He denied, though, any direct influence of "Back to Methuselah" on "Methuselah's Children".
Heinlein's books probe a range of ideas about a range of topics such as sex, race, politics, and the military. Many were seen as radical or as ahead of their time in their social criticism. His books have inspired considerable debate about the specifics, and the evolution, of Heinlein's own opinions, and have earned him both lavish praise and a degree of criticism. He has also been accused of contradicting himself on various philosophical questions.
Brian Doherty cites William Patterson, saying that the best way to gain an understanding of Heinlein is as a "full-service iconoclast, the unique individual who decides that things do not have to be, and won't continue, as they are". He says this vision is "at the heart of Heinlein, science fiction, libertarianism, and America. Heinlein imagined how everything about the human world, from our sexual mores to our religion to our automobiles to our government to our plans for cultural survival, might be flawed, even fatally so."
The critic Elizabeth Anne Hull, for her part, has praised Heinlein for his interest in exploring fundamental life questions, especially questions about "political power—our responsibilities to one another" and about "personal freedom, particularly sexual freedom".
Edward R. Murrow hosted a series on CBS Radio called "This I Believe", which solicited an entry from Heinlein that is probably the most enduring and popular of the series: "Our Noble, Essential Decency". In it, Heinlein broke with the program's usual trends, stating that he believed in his neighbors (some of whom he named and described), in his community, and in towns across America that share the same sense of good will and intentions as his own; he went on to apply this same philosophy to the US, and to humanity in general.
Heinlein's political positions shifted throughout his life. Heinlein's early political leanings were liberal. In 1934, he worked actively for the Democratic campaign of Upton Sinclair for Governor of California. After Sinclair lost, Heinlein became an anti-Communist Democratic activist. He made an unsuccessful bid for a California State Assembly seat in 1938. Heinlein's first novel, "For Us, the Living" (written 1939), consists largely of speeches advocating the Social Credit system, and the early story "Misfit" (1939) deals with an organization—"The Cosmic Construction Corps"—that seems to be Franklin D. Roosevelt's Civilian Conservation Corps translated into outer space.
Of this time in his life, Heinlein later said:
Heinlein's fiction of the 1940s and 1950s, however, began to espouse conservative views. After 1945, he came to believe that a strong world government was the only way to avoid mutual nuclear annihilation. His 1949 novel "Space Cadet" describes a future scenario where a military-controlled global government enforces world peace. Heinlein ceased considering himself a Democrat in 1954.
The Heinleins formed the Patrick Henry League in 1958, and they worked in the 1964 Barry Goldwater Presidential campaign.
That ad was entitled "Who Are the Heirs of Patrick Henry?". It started with the famous Henry quotation: "Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God! I know not what course others may take, but as for me, give me liberty, or give me death!". It then went on to admit that there was some risk from nuclear testing (albeit less than the "willfully distorted" claims of the test ban advocates), and a risk of nuclear war, but that "The alternative is surrender. We accept the risks." Heinlein was among those who in 1968 signed a pro-Vietnam War ad in "Galaxy Science Fiction". In his essay "Starship Stormtroopers", Michael Moorcock posits that Heinlein was a fascist who fetishized violence and militarism.
Heinlein always considered himself a libertarian; in a letter to Judith Merril in 1967 (never sent) he said, "As for libertarian, I've been one all my life, a radical one. You might use the term 'philosophical anarchist' or 'autarchist' about me, but 'libertarian' is easier to define and fits well enough."
"Stranger in a Strange Land" was embraced by the hippie counterculture, and libertarians have found inspiration in "The Moon Is a Harsh Mistress". Both groups found resonance with his themes of personal freedom in both thought and action.
Heinlein grew up in the era of racial segregation in the United States and wrote some of his most influential fiction at the height of the Civil Rights Movement. He explicitly made the case for using his fiction not only to predict the future but also to educate his readers about the value of racial equality and the importance of racial tolerance. His early novels were very much ahead of their time both in their explicit rejection of racism and in their inclusion of protagonists of color. In the context of science fiction before the 1960s, the mere existence of characters of color was a remarkable novelty, with green occurring more often than brown. For example, his 1948 novel "Space Cadet" explicitly uses aliens as a metaphor for minorities. In his novel "The Star Beast", the "de facto" foreign minister of the Terran government is an undersecretary, a Mr. Kiku, who is from Africa. Heinlein explicitly states his skin is "ebony black" and that Kiku is in an arranged marriage that is happy.
In a number of his stories, Heinlein challenges his readers' possible racial preconceptions by introducing a strong, sympathetic character, only to reveal much later that he or she is of African or other ancestry. In several cases, the covers of the books show characters as being light-skinned when the text states or at least implies that they are dark-skinned or of African ancestry. Heinlein repeatedly denounced racism in his nonfiction works, including numerous examples in "Expanded Universe".
Heinlein reveals in "Starship Troopers" that the novel's protagonist and narrator, Johnny Rico, the formerly disaffected scion of a wealthy family, is Filipino: his name is actually "Juan Rico", and he speaks Tagalog in addition to English.
Race was a central theme in some of Heinlein's fiction. The most prominent and controversial example is "Farnham's Freehold", which casts a white family into a future in which white people are the slaves of cannibalistic black rulers. In the 1941 novel "Sixth Column" (also known as "The Day After Tomorrow"), a white resistance movement in the United States defends itself against an invasion by an Asian fascist state (the "Pan-Asians") using a "super-science" technology that allows ray weapons to be tuned to specific races. The book is sprinkled with racist slurs against Asian people, and black and Hispanic people are not mentioned at all. The idea for the story was pushed on Heinlein by editor John W. Campbell, and Heinlein wrote later that he had "had to re-slant it to remove racist aspects of the original story line" and that he did not "consider it to be an artistic success". However, the novel prompted a heated debate in the scientific community regarding the plausibility of developing ethnic bioweapons.
In keeping with his belief in individualism, his work for adults—and sometimes even his work for juveniles—often portrays both the oppressors and the oppressed with considerable ambiguity. Heinlein believed that individualism was incompatible with ignorance. He believed that an appropriate level of adult competence was achieved through a wide-ranging education, whether this occurred in a classroom or not. In his juvenile novels, more than once a character looks with disdain at a student's choice of classwork, saying, "Why didn't you study something useful?" In "Time Enough for Love", Lazarus Long gives a long list of capabilities that anyone should have, concluding, "Specialization is for insects." The ability of the individual to create himself is explored in stories such as "I Will Fear No Evil", "—All You Zombies—", and "By His Bootstraps".
Heinlein claimed to have written "Starship Troopers" in response to "calls for the unilateral ending of nuclear testing by the United States". Heinlein suggests in the book that the Bugs are a good example of Communism being something that humans cannot successfully adhere to, since humans are strongly defined individuals, whereas the Bugs, being a collective, can all contribute to the whole without consideration of individual desire.
For Heinlein, personal liberation included sexual liberation, and free love was a major subject of his writing starting in 1939, with "For Us, the Living". During his early period, Heinlein's writing for younger readers needed to take account of both editorial perceptions of sexuality in his novels, and potential perceptions among the buying public; as critic William H. Patterson has put it, his dilemma was "to sort out what was really objectionable from what was only excessive over-sensitivity to imaginary librarians".
By his middle period, sexual freedom and the elimination of sexual jealousy became a major theme; for instance, in "Stranger in a Strange Land" (1961), the progressively minded but sexually conservative reporter, Ben Caxton, acts as a dramatic foil for the less parochial characters, Jubal Harshaw and Valentine Michael Smith (Mike). Another of the main characters, Jill, is homophobic, and says that "nine times out of ten, if a girl gets raped it's partly her own fault."
According to Gary Westfahl,
In books written as early as 1956, Heinlein dealt with incest and the sexual nature of children. Many of his books, including "Time for the Stars", "Glory Road", "Time Enough for Love", and "The Number of the Beast", dealt explicitly or implicitly with incest and with sexual feelings and relations between adults, children, or both. The treatment of these themes includes the romantic relationship and eventual marriage, once the girl becomes an adult via time travel, of a 30-year-old engineer and an 11-year-old girl in "The Door into Summer", and the more overt intra-familial incest in "To Sail Beyond the Sunset" and "Farnham's Freehold". Heinlein often posed situations where the nominal purpose of sexual taboos was irrelevant to a particular situation, due to future advances in technology. For example, in "Time Enough for Love" Heinlein describes a brother and sister (Joe and Llita) who were mirror twins, being complementary diploids with entirely disjoint genomes, and thus not at increased risk for unfavorable gene duplication due to consanguinity. In this instance, Llita and Joe were props used to explore the concept of incest where the usual objection—heightened risk of genetic defect in their children—was not a consideration. Peers such as L. Sprague de Camp and Damon Knight have commented critically on Heinlein's portrayal of incest and pedophilia in a lighthearted and even approving manner. However, Heinlein's intent seems more to provoke the reader and to question sexual mores than to promote any particular sexual agenda.
In "To Sail Beyond the Sunset", Heinlein has the main character, Maureen, state that the purpose of metaphysics is to ask questions: "Why are we here?" "Where are we going after we die?" (and so on); and that you are not allowed to answer the questions. "Asking" the questions is the point of metaphysics, but "answering" them is not, because once you answer this kind of question, you cross the line into religion. Maureen does not state a reason for this; she simply remarks that such questions are "beautiful" but lack answers. Maureen's son/lover Lazarus Long makes a related remark in "Time Enough for Love". In order for us to answer the "big questions" about the universe, Lazarus states at one point, it would be necessary to stand "outside" the universe.
During the 1930s and 1940s, Heinlein was deeply interested in Alfred Korzybski's general semantics and attended a number of seminars on the subject. His views on epistemology seem to have flowed from that interest, and his fictional characters continue to express Korzybskian views to the very end of his writing career. Many of his stories, such as "Gulf", "If This Goes On—", and "Stranger in a Strange Land", depend strongly on the premise, related to the well-known Sapir–Whorf hypothesis, that by using a correctly designed language, one can change or improve oneself mentally, or even realize untapped potential (as in the case of Joe in "Gulf" – whose last name may be Greene, Gilead or Briggs).
When Ayn Rand's novel "The Fountainhead" was published, Heinlein was very favorably impressed, as quoted in "Grumbles ..." and mentioned John Galt—the hero in Rand's "Atlas Shrugged"—as a heroic archetype in "The Moon Is a Harsh Mistress". He was also strongly affected by the religious philosopher P. D. Ouspensky. Freudianism and psychoanalysis were at the height of their influence during the peak of Heinlein's career, and stories such as "Time for the Stars" indulged in psychological theorizing.
However, he was skeptical about Freudianism, especially after a struggle with an editor who insisted on reading Freudian sexual symbolism into his juvenile novels. Heinlein was fascinated by the social credit movement in the 1930s. This is shown in "Beyond This Horizon" and in his 1938 novel "", which was finally published in 2003, long after his death.
The phrase "pay it forward", though it was already in occasional use as a quotation, was popularized by Robert A. Heinlein in his book "Between Planets", published in 1951:
He referred to this in a number of other stories, although sometimes just saying to pay a debt back by helping others, as in one of his last works, "Job, a Comedy of Justice".
Heinlein was a mentor to Ray Bradbury, giving him help and quite possibly passing on the concept; this was made famous by the publication of a letter of thanks from Bradbury to Heinlein. In Bradbury's novel "Dandelion Wine", published in 1957, the main character Douglas Spaulding reflects on his life being saved by Mr. Jonas, the Junkman:
Bradbury has also advised that writers he has helped thank him by helping other writers.
Heinlein both preached and practiced this philosophy; now the Heinlein Society, a humanitarian organization founded in his name, does so, attributing the philosophy to its various efforts, including Heinlein for Heroes, the Heinlein Society Scholarship Program, and Heinlein Society blood drives. Author Spider Robinson made repeated reference to the doctrine, attributing it to his spiritual mentor Heinlein.
Heinlein is usually identified, along with Isaac Asimov and Arthur C. Clarke, as one of the three masters of science fiction to arise in the so-called Golden Age of science fiction, associated with John W. Campbell and his magazine "Astounding".
In the 1950s he was a leader in bringing science fiction out of the low-paying and less prestigious "pulp ghetto". Most of his works, including short stories, have been continuously in print in many languages since their initial appearance and are still available as new paperbacks decades after his death.
He was at the top of his form during, and himself helped to initiate, the trend toward social science fiction, which went along with a general maturing of the genre away from space opera to a more literary approach touching on such adult issues as politics and human sexuality. In reaction to this trend, hard science fiction began to be distinguished as a separate subgenre, but paradoxically Heinlein is also considered a seminal figure in hard science fiction, due to his extensive knowledge of engineering and the careful scientific research demonstrated in his stories. Heinlein himself stated—with obvious pride—that in the days before pocket calculators, he and his wife Virginia once worked for several days on a mathematical equation describing an Earth-Mars rocket orbit, which was then subsumed in a single sentence of the novel "Space Cadet".
Heinlein is often credited with bringing serious writing techniques to the genre of science fiction.
For example, when writing about fictional worlds, previous authors were often limited by the reader's existing knowledge of a typical "space opera" setting, which led to relatively conventional settings: the same starships, death rays, and horrifying rubbery aliens became ubiquitous. This was hard to avoid unless the author was willing to go into long expositions about the setting of the story, at a time when word count was at a premium in SF.
But Heinlein utilized a technique called "indirect exposition", perhaps first introduced by Rudyard Kipling in his own science fiction venture, the Aerial Board of Control stories. Kipling had picked this up during his time in India, using it to avoid bogging down his stories set there with explanations for his English readers. This technique, mentioning details in a way that lets the reader infer more about the universe than is actually spelled out, became a trademark rhetorical device of both Heinlein and the generation of writers influenced by him. Heinlein was significantly influenced by Kipling beyond this, for example quoting him in "On the Writing of Speculative Fiction".
Likewise, Heinlein's name is often associated with the competent hero, a character archetype who, though he or she may have flaws and limitations, is a strong, accomplished person able to overcome any soluble problem set in their path. Such characters tend to feel confident overall, have broad life experience and a wide set of skills, and not give up when the going gets tough. This archetype influenced not only the writing of a generation of authors, but even their personal character. Harlan Ellison once said, "Very early in life when I read Robert Heinlein I got the thread that runs through his stories—the notion of the competent man ... I've always held that as my ideal. I've tried to be a very competent man."
When fellow writers, or fans, wrote Heinlein asking for writing advice, he famously gave out his own list of rules for becoming a successful writer:
About which he said:
Heinlein later published an entire article, "On the Writing of Speculative Fiction", which included his rules, and from which the above quote is taken. When he says "anything said above them", he refers to his other guidelines. For example, he describes most stories as fitting into one of a handful of basic categories:
In the article, Heinlein credits L. Ron Hubbard as having identified "The Man-Who-Learned-Better".
Heinlein has had a pervasive influence on other science fiction writers. In a 1953 poll of leading science fiction authors, he was cited more frequently as an influence than any other modern writer. Critic James Gifford writes that
Heinlein gave Larry Niven and Jerry Pournelle extensive advice on a draft manuscript of "The Mote in God's Eye". He contributed a cover blurb "Possibly the finest science fiction novel I have ever read." Writer David Gerrold, responsible for creating the tribbles in "Star Trek", also credited Heinlein as the inspiration for his "Dingilliad" series of novels. Gregory Benford refers to his novel "Jupiter Project" as a Heinlein tribute. Similarly, Charles Stross says his Hugo Award-nominated novel "Saturn's Children" is "a space opera and late-period Robert A. Heinlein tribute", referring to Heinlein's "Friday". The theme and plot of Kameron Hurley's novel, "The Light Brigade" clearly echo those of Heinlein's "Starship Troopers".
Even outside the science fiction community, several words and phrases coined or adopted by Heinlein have passed into common English usage:
In 1962, Oberon Zell-Ravenheart (then still using his birth name, Tim Zell) founded the Church of All Worlds, a Neopagan religious organization modeled in many ways (including its name) after the treatment of religion in the novel "Stranger in a Strange Land". This spiritual path included several ideas from the book, including non-mainstream family structures, social libertarianism, water-sharing rituals, an acceptance of all religious paths by a single tradition, and the use of several terms such as "grok", "Thou art God", and "Never Thirst". Though Heinlein was neither a member nor a promoter of the Church, there was a frequent exchange of correspondence between Zell and Heinlein, and he was a paid subscriber to their magazine, "Green Egg". This Church still exists as a 501(c)(3) religious organization incorporated in California, with membership worldwide, and it remains an active part of the neopagan community today. Zell-Ravenheart's wife, Morning Glory, coined the term "polyamory" in 1990; the polyamory movement likewise includes Heinlein concepts among its roots.
Heinlein was influential in making space exploration seem to the public more like a practical possibility. His stories in publications such as "The Saturday Evening Post" took a matter-of-fact approach to their outer-space setting, rather than the "gee whiz" tone that had previously been common. The documentary-like film "Destination Moon" advocated a Space Race with an unspecified foreign power almost a decade before such an idea became commonplace, and was promoted by an unprecedented publicity campaign in print publications. Many of the astronauts and others working in the U.S. space program grew up on a diet of the Heinlein juveniles, best evidenced by the naming of a crater on Mars after him, and a tribute interspersed by the Apollo 15 astronauts into their radio conversations while on the moon.
Heinlein was also a guest commentator (along with fellow sci-fi author Arthur C. Clarke) for Walter Cronkite's coverage of the Apollo 11 Moon landing. He remarked to Cronkite during the landing that, "This is the greatest event in human history, up to this time. This is—today is New Year's Day of the Year One." Businessman and entrepreneur Elon Musk says that Heinlein's books have helped inspire his career.
The Heinlein Society was founded by Virginia Heinlein on behalf of her husband, to "pay forward" the legacy of the writer to future generations of "Heinlein's Children". The foundation has programs to:
The Heinlein society also established the Robert A. Heinlein Award in 2003 "for outstanding published works in science fiction and technical writings to inspire the human exploration of space".
In his lifetime, Heinlein received four Hugo Awards, for "Double Star", "Starship Troopers", "Stranger in a Strange Land", and "The Moon Is a Harsh Mistress", and was nominated for four Nebula Awards, for "Stranger in a Strange Land", "Friday", "Time Enough for Love", and "Job: A Comedy of Justice". He was also given seven Retro-Hugos: two for best novel, "Beyond This Horizon" and "Farmer in the Sky"; three for best novella, "If This Goes On ...", "Waldo", and "The Man Who Sold the Moon"; one for best novelette, "The Roads Must Roll"; and one for best dramatic presentation, "Destination Moon".
The Science Fiction Writers of America named Heinlein its first Grand Master in 1974; the award was presented in 1975. Officers and past presidents of the association select a living writer for lifetime achievement; the award is now given annually and also encompasses fantasy literature.
Main-belt asteroid 6312 Robheinlein (1990 RH4), discovered on September 14, 1990, by H. E. Holt at Palomar, was named after him.
There is no lunar feature named explicitly for Heinlein, but in 1994 the International Astronomical Union named Heinlein crater on Mars in his honor.
The Science Fiction and Fantasy Hall of Fame inducted Heinlein in 1998, its third class of two deceased and two living writers and editors.
In 2001 the United States Naval Academy created the Robert A. Heinlein Chair In Aerospace Engineering.
In 2016, after an intensive online campaign to win a vote for the opening, Heinlein was inducted into the Hall of Famous Missourians. His bronze bust, created by Kansas City sculptor E. Spencer Schubert, is on permanent display in the Missouri State Capitol in Jefferson City.
The Libertarian Futurist Society has honored five of Heinlein's novels and two short stories with their Hall of Fame award. The first two were given during his lifetime for "The Moon Is a Harsh Mistress" and "Stranger in a Strange Land". Five more were awarded posthumously for "Red Planet", "Methuselah's Children", "Time Enough for Love", and the short stories "Requiem" and "Coventry". | https://en.wikipedia.org/wiki?curid=25389 |
Russia
Russia, or the Russian Federation, is a transcontinental country located in Eastern Europe and Northern Asia. It is the largest country in the world by area, spanning more than one-eighth of the Earth's inhabited land area, stretching across eleven time zones, and bordering 16 sovereign nations. The territory of Russia extends from the Baltic Sea in the west to the Pacific Ocean in the east, and from the Arctic Ocean in the north to the Black Sea and the Caucasus in the south. With 146.7 million inhabitants living in the country's 85 federal subjects, Russia is the most populous nation in Europe and the ninth-most populous nation in the world. Russia's capital and largest city is Moscow; other major urban areas include Saint Petersburg, Novosibirsk, Yekaterinburg, Nizhny Novgorod, Kazan and Chelyabinsk.
The East Slavs emerged as a recognisable group in Europe between the 3rd and 8th centuries AD. The medieval state of Rus' arose in the 9th century. In 988 it adopted Orthodox Christianity from the Byzantine Empire, beginning the synthesis of Byzantine and Slavic cultures that defined Russian culture for the next millennium. Rus' ultimately disintegrated into a number of smaller states, until it was finally reunified by the Grand Duchy of Moscow in the 15th century. By the 18th century, the nation had greatly expanded through conquest, annexation, and exploration to become the Russian Empire, which was the third-largest empire in history, stretching from Norway in the west to Alaska in the east. Following the Russian Revolution, the Russian Soviet Federative Socialist Republic (Russian SFSR) became the largest and leading constituent of the Union of Soviet Socialist Republics (USSR/Soviet Union), the world's first constitutionally socialist state. The Soviet Union played a decisive role in the Allied victory in World War II, and emerged as a recognised superpower and rival to the United States during the Cold War. The Soviet era saw some of the most significant technological achievements of the 20th century, including the world's first human-made satellite and the launching of the first humans into space. Following the dissolution of the Soviet Union in 1991, the Russian SFSR reconstituted itself as the Russian Federation and is recognised as the continuing legal personality and a successor of the USSR.
Since 1993, Russia is governed as a federal semi-presidential republic. Vladimir Putin has dominated Russia's political system since 2000, serving as either president or prime minister. His government has been accused by non-governmental organisations of numerous human rights abuses, authoritarianism and corruption. In response, Putin has argued that Western-style liberalism is obsolete in Russia, while maintaining that the country is still a democratic nation.
The Russian economy ranks as the fifth-largest in Europe, the eleventh-largest in the world by nominal GDP and the fifth-largest by PPP. Russia's extensive mineral and energy resources are the largest such reserves in the world, making it one of the leading producers of oil and natural gas globally. The country is one of the five recognised nuclear weapons states and possesses the largest stockpile of nuclear warheads. Russia is a major great power, as well as a regional power, and has been characterised as a potential superpower. The Russian Armed Forces have been ranked as the world's second most powerful, and the most powerful in Europe. Russia hosts the world's ninth-greatest number of UNESCO World Heritage Sites, at 29, and is among the world's most popular tourist destinations. It is a permanent member of the United Nations Security Council and an active global partner of ASEAN, as well as a member of the Shanghai Cooperation Organisation (SCO), the G20, the Council of Europe, the Asia-Pacific Economic Cooperation (APEC), the Organization for Security and Co-operation in Europe (OSCE), the International Investment Bank (IIB) and the World Trade Organization (WTO), as well as being the leading member of the Commonwealth of Independent States (CIS), the Collective Security Treaty Organization (CSTO) and a member of the Eurasian Economic Union (EAEU).
The name "Russia" is derived from Rus', a medieval state populated mostly by the East Slavs. However, this proper name became prominent only in later history, and the country was typically called by its inhabitants "Русская Земля" ("russkaja zemlja"), which can be translated as "Russian Land" or "Land of Rus'". To distinguish this state from other states derived from it, modern historiography denotes it as "Kievan Rus'". The name "Rus'" itself comes from the early medieval Rus' people, Swedish merchants and warriors who relocated from across the Baltic Sea and founded a state centered on Novgorod that later became Kievan Rus'.
An old Latin version of the name Rus' was Ruthenia, mostly applied to the western and southern regions of Rus' that were adjacent to Catholic Europe. The current name of the country, Россия ("Rossija"), comes from the Byzantine Greek designation of the Rus', Ρωσσία ("Rossía"), spelled Ρωσία ("Rosía") in Modern Greek.
The standard way to refer to citizens of Russia is "Russians" in English and "rossiyane" in Russian. There are two Russian words which are commonly translated into English as "Russians". One is "русские" ("russkiye"), which most often means "ethnic Russians". The other is "россияне" ("rossiyane"), which means "citizens of Russia, regardless of ethnicity". Translations into other languages often do not distinguish these two groups.
Nomadic pastoralism developed in the Pontic-Caspian steppe beginning in the Chalcolithic.
In classical antiquity, the Pontic Steppe was known as Scythia. Beginning in the 8th century BC, Ancient Greek traders brought their civilization to the trade emporiums in Tanais and Phanagoria. Ancient Greek explorers, most notably Pytheas, even went as far as modern-day Kaliningrad, on the Baltic Sea. Romans settled on the western part of the Caspian Sea, where their empire stretched towards the east. In the 3rd to 4th centuries AD, a semi-legendary Gothic kingdom of Oium existed in Southern Russia until it was overrun by Huns. Between the 3rd and 6th centuries AD, the Bosporan Kingdom, a Hellenistic polity which succeeded the Greek colonies, was also overwhelmed by nomadic invasions led by warlike tribes, such as the Huns and Eurasian Avars. A Turkic people, the Khazars, ruled the lower Volga basin steppes between the Caspian and Black Seas until the 10th century.
The ancestors of modern Russians are the Slavic tribes, whose original home is thought by some scholars to have been the wooded areas of the Pinsk Marshes. The East Slavs gradually settled Western Russia in two waves: one moving from Kiev toward present-day Suzdal and Murom and another from Polotsk toward Novgorod and Rostov. From the 7th century onwards, the East Slavs constituted the bulk of the population in Western Russia and assimilated the native Finno-Ugric peoples, including the Merya, the Muromians, and the Meshchera.
The establishment of the first East Slavic states in the 9th century coincided with the arrival of the Varangians, traders, warriors and settlers from the Baltic Sea region. Primarily they were Vikings of Scandinavian origin, who ventured along the waterways extending from the eastern Baltic to the Black and Caspian Seas. According to the "Primary Chronicle", a Varangian of the Rus' people named Rurik was elected ruler of Novgorod in 862. In 882, his successor Oleg ventured south and conquered Kiev, which had been previously paying tribute to the Khazars. Oleg, Rurik's son Igor and Igor's son Sviatoslav subsequently subdued all local East Slavic tribes to Kievan rule, destroyed the Khazar khaganate and launched several military expeditions to Byzantium and Persia.
In the 10th to 11th centuries Kievan Rus' became one of the largest and most prosperous states in Europe. The reigns of Vladimir the Great (980–1015) and his son Yaroslav the Wise (1019–1054) constitute the Golden Age of Kiev, which saw the acceptance of Orthodox Christianity from Byzantium and the creation of the first East Slavic written legal code, the "Russkaya Pravda".
In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Kipchaks and the Pechenegs, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north, particularly to the area known as Zalesye.
The age of feudalism and decentralization was marked by constant in-fighting between members of the Rurik Dynasty that ruled Kievan Rus' collectively. Kiev's dominance waned, to the benefit of Vladimir-Suzdal in the north-east, Novgorod Republic in the north-west and Galicia-Volhynia in the south-west.
Ultimately Kievan Rus' disintegrated, with the final blow being the Mongol invasion of 1237–40 that resulted in the destruction of Kiev and the death of about half the population of Rus'. The invading Mongol elite, together with their conquered Turkic subjects (Cumans, Kipchaks, Bulgars), became known as Tatars, forming the state of the Golden Horde, which pillaged the Russian principalities; the Mongols ruled the Cuman-Kipchak confederation and Volga Bulgaria (modern-day southern and central expanses of Russia) for over two centuries.
Galicia-Volhynia was eventually assimilated by the Kingdom of Poland, while the Mongol-dominated Vladimir-Suzdal and the Novgorod Republic, two regions on the periphery of Kiev, established the basis for the modern Russian nation. Novgorod, together with Pskov, retained some degree of autonomy during the time of the Mongol yoke and was largely spared the atrocities that affected the rest of the country. Led by Prince Alexander Nevsky, Novgorodians repelled the invading Swedes in the Battle of the Neva in 1240, as well as the Germanic crusaders in the Battle of the Ice in 1242, breaking their attempts to colonise the Northern Rus'.
The most powerful state to eventually arise after the destruction of Kievan Rus' was the Grand Duchy of Moscow ("Muscovy" in the Western chronicles), initially a part of Vladimir-Suzdal. While still under the domain of the Mongol-Tatars and with their connivance, Moscow began to assert its influence in the Central Rus' in the early 14th century, gradually becoming the leading force in the process of the Rus' lands' reunification and expansion of Russia. Moscow's last rival, the Novgorod Republic, prospered as the chief fur trade center and the easternmost port of the Hanseatic League.
Times remained difficult, with frequent Mongol-Tatar raids. Agriculture suffered from the beginning of the Little Ice Age. As in the rest of Europe, plague was a frequent occurrence between 1350 and 1490. However, because of the lower population density and better hygiene—the widespread practice of the banya, a wet steam bath—the death rate from plague was not as severe as in Western Europe, and population numbers recovered by 1500.
Led by Prince Dmitry Donskoy of Moscow and helped by the Russian Orthodox Church, the united army of Russian principalities inflicted a milestone defeat on the Mongol-Tatars in the Battle of Kulikovo in 1380. Moscow gradually absorbed the surrounding principalities, including formerly strong rivals such as Tver and Novgorod.
Ivan III ("the Great") finally threw off the control of the Golden Horde and consolidated the whole of Central and Northern Rus' under Moscow's dominion. He was also the first to take the title "Grand Duke of all the Russias". After the fall of Constantinople in 1453, Moscow claimed succession to the legacy of the Eastern Roman Empire. Ivan III married Sophia Palaiologina, the niece of the last Byzantine emperor Constantine XI, and made the Byzantine double-headed eagle his own, and eventually Russia's, coat-of-arms.
Developing the idea of Moscow as the Third Rome, Grand Duke Ivan IV ("the Terrible") was officially crowned the first "Tsar" ("Caesar") of Russia in 1547. The Tsar promulgated a new code of laws (the Sudebnik of 1550), established the first Russian feudal representative body (the Zemsky Sobor) and introduced local self-management into the rural regions.
During his long reign, Ivan the Terrible nearly doubled the already large Russian territory by annexing the three Tatar khanates (parts of the disintegrated Golden Horde): Kazan and Astrakhan along the Volga River, and the Siberian Khanate in southwestern Siberia. Thus, by the end of the 16th century Russia was transformed into a multiethnic, multidenominational and transcontinental state.
However, the Tsardom was weakened by the long and unsuccessful Livonian War against the coalition of Poland, Lithuania, and Sweden for access to the Baltic coast and sea trade. At the same time, the Tatars of the Crimean Khanate, the only remaining successor to the Golden Horde, continued to raid Southern Russia. In an effort to restore the Volga khanates, Crimeans and their Ottoman allies invaded central Russia and were even able to burn down parts of Moscow in 1571. But in the next year the large invading army was thoroughly defeated by the Russians in the Battle of Molodi, forever eliminating the threat of an Ottoman–Crimean expansion into Russia. The slave raids of the Crimeans, however, did not cease until the late 17th century, though the construction of new fortification lines across Southern Russia, such as the Great Abatis Line, constantly narrowed the area accessible to incursions.
The death of Ivan's sons marked the end of the ancient Rurik Dynasty in 1598, and in combination with the famine of 1601–03 led to civil war, the rule of pretenders, and foreign intervention during the Time of Troubles in the early 17th century. The Polish–Lithuanian Commonwealth occupied parts of Russia, including Moscow. In 1612, the Poles were forced to retreat by the Russian volunteer corps, led by two national heroes, merchant Kuzma Minin and Prince Dmitry Pozharsky. The Romanov Dynasty acceded to the throne in 1613 by the decision of Zemsky Sobor, and the country started its gradual recovery from the crisis.
Russia continued its territorial growth through the 17th century, which was the age of Cossacks. Cossacks were warriors organised into military communities, resembling pirates and pioneers of the New World. In 1648, the peasants of Ukraine joined the Zaporozhian Cossacks in rebellion against Poland-Lithuania during the Khmelnytsky Uprising in reaction to the social and religious oppression they had been suffering under Polish rule. In 1654, the Ukrainian leader, Bohdan Khmelnytsky, offered to place Ukraine under the protection of the Russian Tsar, Aleksey I. Aleksey's acceptance of this offer led to another Russo-Polish War. Finally, Ukraine was split along the Dnieper River, leaving the western part, right-bank Ukraine, under Polish rule and the eastern part (Left-bank Ukraine and Kiev) under Russian rule. Later, in 1670–71, the Don Cossacks led by Stenka Razin initiated a major uprising in the Volga Region, but the Tsar's troops were successful in defeating the rebels.
In the east, the rapid Russian exploration and colonisation of the huge territories of Siberia was led mostly by Cossacks hunting for valuable furs and ivory. Russian explorers pushed eastward primarily along the Siberian River Routes, and by the mid-17th century there were Russian settlements in Eastern Siberia, on the Chukchi Peninsula, along the Amur River, and on the Pacific coast. In 1648, the Bering Strait between Asia and North America was passed for the first time by Fedot Popov and Semyon Dezhnyov.
Under Peter the Great, Russia was proclaimed an Empire in 1721 and became recognised as a world power. Ruling from 1682 to 1725, Peter defeated Sweden in the Great Northern War, forcing it to cede West Karelia and Ingria (two regions lost by Russia in the Time of Troubles), as well as Estland and Livland, securing Russia's access to the sea and sea trade. On the Baltic Sea, Peter founded a new capital called Saint Petersburg, later known as Russia's "window to Europe". Peter the Great's reforms brought considerable Western European cultural influences to Russia.
The reign of Peter I's daughter Elizabeth in 1741–62 saw Russia's participation in the Seven Years' War (1756–63). During this conflict Russia annexed East Prussia for a while and even took Berlin. However, upon Elizabeth's death, all these conquests were returned to the Kingdom of Prussia by pro-Prussian Peter III of Russia.
Catherine II ("the Great"), who ruled in 1762–96, presided over the Age of Russian Enlightenment. She extended Russian political control over the Polish-Lithuanian Commonwealth and incorporated most of its territories into Russia during the Partitions of Poland, pushing the Russian frontier westward into Central Europe. In the south, after successful Russo-Turkish Wars against Ottoman Turkey, Catherine advanced Russia's boundary to the Black Sea, defeating the Crimean Khanate. As a result of victories over Qajar Iran through the Russo-Persian Wars, by the first half of the 19th century Russia also made significant territorial gains in Transcaucasia and the North Caucasus, forcing the former to irrevocably cede what is nowadays Georgia, Dagestan, Azerbaijan and Armenia to Russia. This continued with Alexander I's (1801–25) wresting of Finland from the weakened kingdom of Sweden in 1809 and of Bessarabia from the Ottomans in 1812. At the same time, Russians colonised Alaska and even founded settlements in California, such as Fort Ross.
In 1803–1806, the first Russian circumnavigation was made, later followed by other notable Russian sea exploration voyages. In 1820, a Russian expedition discovered the continent of Antarctica.
In alliances with various European countries, Russia fought against Napoleon's France. The French invasion of Russia at the height of Napoleon's power in 1812 reached Moscow, but eventually failed miserably as the obstinate resistance in combination with the bitterly cold Russian winter led to a disastrous defeat of invaders, in which more than 95% of the pan-European Grande Armée perished. Led by Mikhail Kutuzov and Barclay de Tolly, the Russian army ousted Napoleon from the country and drove through Europe in the war of the Sixth Coalition, finally entering Paris. Alexander I headed Russia's delegation at the Congress of Vienna that defined the map of post-Napoleonic Europe.
The officers of the Napoleonic Wars brought ideas of liberalism back to Russia with them and attempted to curtail the tsar's powers during the abortive Decembrist revolt of 1825. At the end of the conservative reign of Nicholas I (1825–55), a zenith period of Russia's power and influence in Europe was disrupted by defeat in the Crimean War. Between 1847 and 1851, about one million people died of Asiatic cholera.
Nicholas's successor Alexander II (1855–81) enacted significant changes in the country, including the emancipation reform of 1861. These "Great Reforms" spurred industrialization and modernised the Russian army, which had successfully liberated Bulgaria from Ottoman rule in the 1877–78 Russo-Turkish War.
The late 19th century saw the rise of various socialist movements in Russia. Alexander II was killed in 1881 by revolutionary terrorists, and the reign of his son
Alexander III (1881–94) was less liberal but more peaceful. The last Russian Emperor, Nicholas II (1894–1917), was unable to prevent the events of the Russian Revolution of 1905, triggered by the unsuccessful Russo-Japanese War and the demonstration incident known as Bloody Sunday. The uprising was put down, but the government was forced to concede major reforms (Russian Constitution of 1906), including granting the freedoms of speech and assembly, the legalization of political parties, and the creation of an elected legislative body, the State Duma of the Russian Empire. The Stolypin agrarian reform led to a massive peasant migration and settlement into Siberia. More than four million settlers arrived in that region between 1906 and 1914.
In 1914, Russia entered World War I in response to Austria-Hungary's declaration of war on Russia's ally Serbia, and fought across multiple fronts while isolated from its Triple Entente allies. In 1916, the Brusilov Offensive of the Russian Army almost completely destroyed the military of Austria-Hungary. However, the already-existing public distrust of the regime was deepened by the rising costs of war, high casualties, and rumors of corruption and treason. All this formed the climate for the Russian Revolution of 1917, carried out in two major acts.
The February Revolution forced Nicholas II to abdicate; he and his family were imprisoned and later executed in Yekaterinburg during the Russian Civil War. The monarchy was replaced by a shaky coalition of political parties that declared itself the Provisional Government. On 1 September (14), 1917, upon a decree of the Provisional Government, the Russian Republic was proclaimed. On 6 January (19), 1918, the Russian Constituent Assembly declared Russia a democratic federal republic (thus ratifying the Provisional Government's decision). The next day the Constituent Assembly was dissolved by the All-Russian Central Executive Committee.
An alternative socialist establishment co-existed, the Petrograd Soviet, wielding power through the democratically elected councils of workers and peasants, called "Soviets". The rule of the new authorities only aggravated the crisis in the country, instead of resolving it. Eventually, the October Revolution, led by Bolshevik leader Vladimir Lenin, overthrew the Provisional Government and gave full governing power to the Soviets, leading to the creation of the world's first socialist state.
Following the October Revolution, a civil war broke out between the anti-Communist White movement and the new Soviet regime with its Red Army. Bolshevist Russia lost its Ukrainian, Polish, Baltic, and Finnish territories by signing the Treaty of Brest-Litovsk that concluded hostilities with the Central Powers of World War I. The Allied powers launched an unsuccessful military intervention in support of anti-Communist forces. In the meantime both the Bolsheviks and White movement carried out campaigns of deportations and executions against each other, known respectively as the Red Terror and White Terror. By the end of the civil war, Russia's economy and infrastructure were heavily damaged. There were an estimated 7–12 million casualties during the war, mostly civilians. Millions became White émigrés, and the Russian famine of 1921–22 claimed up to five million victims.
The Russian Soviet Federative Socialist Republic (called "Russian Socialist Federative Soviet Republic" at the time), together with the Ukrainian, Byelorussian, and Transcaucasian Soviet Socialist Republics, formed the Union of Soviet Socialist Republics (USSR), or Soviet Union, on 30 December 1922. Out of the 15 republics that would make up the USSR, the largest in size and over half of the total USSR population was the Russian SFSR, which came to dominate the union for its entire 69-year history.
Following Lenin's death in 1924, a troika was designated to govern the Soviet Union. However, Joseph Stalin, the elected General Secretary of the Communist Party, managed to suppress all opposition groups within the party and consolidate power in his hands. Leon Trotsky, the main proponent of world revolution, was exiled from the Soviet Union in 1929, and Stalin's idea of Socialism in One Country became the primary line. The continued internal struggle in the Bolshevik party culminated in the Great Purge, a period of mass repressions in 1937–38, during which hundreds of thousands of people were executed, including original party members and military leaders accused of coup d'état plots.
Under Stalin's leadership, the government launched a command economy, industrialization of the largely rural country, and collectivization of its agriculture. During this period of rapid economic and social change, millions of people were sent to penal labor camps, including many political convicts for their opposition to Stalin's rule; millions were deported and exiled to remote areas of the Soviet Union. The transitional disorganisation of the country's agriculture, combined with the harsh state policies and a drought, led to the Soviet famine of 1932–1933, which killed between 2 and 3 million people in the Russian SFSR. The Soviet Union made the costly transformation from a largely agrarian economy to a major industrial powerhouse in a short span of time.
Under the doctrine of state atheism in the Soviet Union, there was a "government-sponsored program of forced conversion to atheism" conducted by Communists. The communist regime targeted religions based on State interests, and while most organised religions were never outlawed, religious property was confiscated, believers were harassed, and religion was ridiculed while atheism was propagated in schools. In 1925 the government founded the League of Militant Atheists to intensify the persecution. Accordingly, although personal expressions of religious faith were not explicitly banned, a strong sense of social stigma was imposed on them by the official structures and mass media and it was generally considered unacceptable for members of certain professions (teachers, state bureaucrats, soldiers) to be openly religious. As for the Russian Orthodox Church, Soviet authorities sought to control it and, in times of national crisis, to exploit it for the regime's own purposes; but their ultimate goal was to eliminate it. During the first five years of Soviet power, the Bolsheviks executed 28 Russian Orthodox bishops and over 1,200 Russian Orthodox priests. Many others were imprisoned or exiled. Believers were harassed and persecuted. Most seminaries were closed, and the publication of most religious material was prohibited. By 1941 only 500 churches remained open out of about 54,000 in existence prior to World War I.
The Appeasement policy of Great Britain and France towards Adolf Hitler's annexation of Austria and Czechoslovakia did not stem an increase in the power of Nazi Germany. Around the same time, the Third Reich allied with the Empire of Japan, a rival of the USSR in the Far East and an open enemy of the USSR in the Soviet–Japanese Border Wars in 1938–39.
In August 1939, the Soviet government decided to improve relations with Germany by concluding the Molotov–Ribbentrop Pact, pledging non-aggression between the two countries and dividing Eastern Europe into their respective spheres of influence. When Germany launched the invasion of Poland, the Soviets followed weeks later with their own invasion of the country, claiming the eastern half of Poland while avoiding war with the Allied Powers. The Soviet government engaged in significant cooperation with Nazi Germany between 1939 and 1941, through extensive trade agreements which supplied Germany with vital raw materials for its war effort against Britain and France. While the other European powers were busy fighting in World War II, the USSR expanded its own military, annexed parts of Finland as a result of the Winter War, annexed the Baltic states, and annexed Bessarabia, Northern Bukovina and the Hertza region from Romania.
On 22 June 1941, Nazi Germany broke their non-aggression treaty with their erstwhile partner and invaded the Soviet Union with the largest and most powerful invasion force in human history, opening the largest theater of World War II. The Nazi Hunger Plan foresaw the "extinction of industry as well as a great part of the population". Nearly 3 million Soviet POWs in German captivity were murdered in just eight months of 1941–42. Although the German army had considerable early success, their attack was halted in the Battle of Moscow. Subsequently, the Germans were dealt major defeats first at the Battle of Stalingrad in the winter of 1942–43, and then in the Battle of Kursk in the summer of 1943. Another German failure was the Siege of Leningrad, in which the city was fully blockaded on land between 1941 and 1944 by German and Finnish forces, and suffered starvation and more than a million deaths, but never surrendered. Under Stalin's administration and the leadership of such commanders as Georgy Zhukov and Konstantin Rokossovsky, Soviet forces took Eastern Europe in 1944–45 and captured Berlin in May 1945. In August 1945 the Soviet Army ousted the Japanese from China's Manchukuo and North Korea, contributing to the allied victory over Japan.
The 1941–45 period of World War II is known in Russia as the "Great Patriotic War". The Soviet Union, together with the United States, the United Kingdom and China, was considered one of the Big Four of Allied powers in World War II and later became one of the Four Policemen, the foundation of the United Nations Security Council. During this war, which included many of the most lethal battle operations in human history, Soviet civilian and military deaths were about 27 million, accounting for about a third of all World War II casualties. The full demographic loss to the Soviet peoples was even greater. The Soviet economy and infrastructure suffered massive devastation, which caused the Soviet famine of 1946–47, but the Soviet Union emerged as an acknowledged military superpower on the continent.
The Soviet rear was also badly damaged by the German invasion, as the Luftwaffe bombed the cities of the Soviet Union from the air. Gorky, the main industrial center of the USSR, located near the Moscow Defence Zone, suffered the most from the bombing. The raids on the Volga capital destroyed GAZ, the country's largest automobile plant, which supplied tanks to the front; whole residential areas and other large factories of the city were destroyed as well. From 1941 to 1943, German pilots bombed different areas of the city in a bombardment comparable to the London Blitz. Some of the damage remains to this day.
After the war, Eastern and Central Europe, including East Germany and part of Austria, was occupied by the Red Army in accordance with the Potsdam Conference. Dependent socialist governments were installed in the Eastern Bloc satellite states. After becoming the world's second nuclear weapons power, the USSR established the Warsaw Pact alliance and entered into a struggle for global dominance, known as the Cold War, with the United States and NATO. The Soviet Union supported revolutionary movements across the world, including the newly formed People's Republic of China, the Democratic People's Republic of Korea and, later on, the Republic of Cuba. Significant amounts of Soviet resources were allocated in aid to the other socialist states.
After Stalin's death and a short period of collective rule, the new leader Nikita Khrushchev denounced the cult of personality of Stalin and launched the policy of de-Stalinization. The penal labor system was reformed and many prisoners were released and rehabilitated (many of them posthumously). The general easement of repressive policies became known later as the Khrushchev Thaw. At the same time, tensions with the United States heightened when the two rivals clashed over the deployment of the United States Jupiter missiles in Turkey and Soviet missiles in Cuba.
In 1957, the Soviet Union launched the world's first artificial satellite, "Sputnik 1", thus starting the Space Age. Russia's cosmonaut Yuri Gagarin became the first human to orbit the Earth, aboard the "Vostok 1" manned spacecraft on 12 April 1961.
Following the ousting of Khrushchev in 1964, another period of collective rule ensued, until Leonid Brezhnev became the leader. The era of the 1970s and the early 1980s was later designated as the Era of Stagnation, a period when economic growth slowed and social policies became static. The 1965 Kosygin reform aimed for partial decentralization of the Soviet economy and shifted the emphasis from heavy industry and weapons to light industry and consumer goods but was stifled by the conservative Communist leadership.
In 1979, after a Communist-led revolution in Afghanistan, Soviet forces entered that country. The occupation drained economic resources and dragged on without achieving meaningful political results. Ultimately, the Soviet Army was withdrawn from Afghanistan in 1989 due to international opposition, persistent anti-Soviet guerrilla warfare, and a lack of support by Soviet citizens.
From 1985 onwards, the last Soviet leader Mikhail Gorbachev, who sought to enact liberal reforms in the Soviet system, introduced the policies of "glasnost" (openness) and "perestroika" (restructuring) in an attempt to end the period of economic stagnation and to democratise the government. This, however, led to the rise of strong nationalist and separatist movements. Prior to 1991, the Soviet economy was the second largest in the world, but during its last years it was afflicted by shortages of goods in grocery stores, huge budget deficits, and explosive growth in the money supply leading to inflation.
By 1991, economic and political turmoil began to boil over, as the Baltic states chose to secede from the Soviet Union. On 17 March, a referendum was held, in which the vast majority of participating citizens voted in favour of changing the Soviet Union into a renewed federation. In August 1991, a coup d'état attempt by members of Gorbachev's government, directed against Gorbachev and aimed at preserving the Soviet Union, instead led to the end of the Communist Party of the Soviet Union. On 25 December 1991, the USSR was dissolved into 15 post-Soviet states.
In June 1991, Boris Yeltsin became the first directly elected president in Russian history when he was elected President of the Russian Soviet Federative Socialist Republic, which became the independent Russian Federation in December of that year. The economic and political collapse of the USSR led to a deep and prolonged depression, characterised by a 50% decline in both GDP and industrial output between 1990 and 1995, although some of the recorded declines may have been a result of an upward bias in Soviet-era economic data. During and after the disintegration of the Soviet Union, wide-ranging reforms including privatization and market and trade liberalization were undertaken, including radical changes along the lines of "shock therapy" as recommended by the United States and the International Monetary Fund.
The privatization largely shifted control of enterprises from state agencies to individuals with inside connections in the government. Many of the newly rich moved billions in cash and assets outside of the country in an enormous capital flight. The depression of the economy led to the collapse of social services; the birth rate plummeted while the death rate skyrocketed. Millions plunged into poverty, from a level of 1.5% in the late Soviet era to 39–49% by mid-1993. The 1990s saw extreme corruption and lawlessness, the rise of criminal gangs and violent crime.
The 1990s were plagued by armed conflicts in the North Caucasus, both local ethnic skirmishes and separatist Islamist insurrections. From the time Chechen separatists declared independence in the early 1990s, an intermittent guerrilla war has been fought between the rebel groups and the Russian military. Terrorist attacks against civilians carried out by separatists, most notably the Moscow theater hostage crisis and Beslan school siege, caused hundreds of deaths and drew worldwide attention.
Russia took up the responsibility for settling the USSR's external debts, even though its population made up just half of the population of the USSR at the time of its dissolution. In 1992, most consumer price controls were eliminated, causing extreme inflation and significantly devaluing the Ruble. With a devalued Ruble, the Russian government struggled to pay back its debts to internal debtors, as well as international institutions like the International Monetary Fund. Despite significant attempts at economic restructuring, Russia's debt outpaced GDP growth. High budget deficits coupled with increasing capital flight and inability to pay back debts caused the 1998 Russian financial crisis and resulted in a further GDP decline.
On 31 December 1999, President Yeltsin unexpectedly resigned, handing the post to the recently appointed Prime Minister, Vladimir Putin, who then won the 2000 presidential election. Putin suppressed the Chechen insurgency, although sporadic violence still occurs throughout the Northern Caucasus. High oil prices and the initially weak currency, followed by increasing domestic demand, consumption, and investments, helped the economy grow at an average of 7% per year from 1998 to 2008, improving the standard of living and increasing Russia's influence on the world stage. Following the world economic crisis of 2008 and a subsequent drop in oil prices, Russia's economy stagnated and poverty again started to rise, until 2017 when, after the prolonged recession, Russia's economy began to grow again, supported by stronger global growth, higher oil prices, and solid macro fundamentals. While many reforms made during the Putin presidency have been generally criticised by Western nations as undemocratic, Putin's leadership over the return of order, stability, and progress has won him widespread admiration in Russia.
On 2 March 2008, Dmitry Medvedev was elected President of Russia while Putin became Prime Minister. Putin returned to the presidency following the 2012 presidential election, and Medvedev was appointed Prime Minister. This arrangement was dubbed "tandemocracy" by outside media. Some critics claimed that the leadership change was superficial and that Putin remained the decision-making force in the Russian government. Within the context of the ongoing Russia–Ukraine gas dispute in early January 2009, Nikolai Petrov, an analyst with the Carnegie Moscow Center, said: "What we see right now is the dominant role of Putin. We see him as a real head of state. ... This is not surprising. We are still living in Putin's Russia." Some Russian political analysts and commentators viewed political power as a true tandem between Medvedev and Putin. Prior to the 2008 election, political scientists Gleb Pavlovsky and Stanislav Belkovsky discussed the future configuration of power. According to Pavlovsky, people would be well served by a union of Putin and Medvedev "similar to the two Consuls of Rome". Belkovsky called Medvedev "President of a dream", referring to the early 1990s when people ostensibly dreamed of a time when they "would live without the stranglehold of ubiquitous ideology, and a common person would become the head of the state".
In 2014, after President Viktor Yanukovych of Ukraine fled as a result of a revolution, Putin requested and received authorization from the Russian Parliament to deploy Russian troops to Ukraine, leading to the takeover of Crimea. Following a Crimean referendum in which separation was favored by a large majority of voters, the Russian leadership announced the accession of Crimea into the Russian Federation, though this and the referendum that preceded it were not accepted internationally. On 27 March the United Nations General Assembly voted in favor of a non-binding resolution opposing the Russian annexation of Crimea by a vote of 100 member states in favor, 11 against and 58 abstentions. The annexation of Crimea led to sanctions by Western countries, to which the Russian government responded with counter-sanctions against a number of countries.
In September 2015, Russia began a military intervention in the Syrian Civil War, consisting of air strikes against militant groups including the Islamic State, the al-Nusra Front (al-Qaeda in the Levant), and the Army of Conquest.
According to the Constitution of Russia, the country is an asymmetric federation and semi-presidential republic, wherein the President is the head of state and the Prime Minister is the head of government. The Russian Federation is fundamentally structured as a multi-party representative democracy, with the federal government composed of three branches: the legislative (the bicameral Federal Assembly), the executive (the Government, headed by the Prime Minister), and the judiciary (the court system).
The president is elected by popular vote for a six-year term (eligible for a second term, but not for a third consecutive term). The Government is composed of the Prime Minister, his deputies, ministers, and selected other individuals; all are appointed by the President on the recommendation of the Prime Minister (whereas the appointment of the latter requires the consent of the State Duma). Leading political parties in Russia include United Russia, the Communist Party, the Liberal Democratic Party, and A Just Russia. In 2019, Russia was ranked 134th of 167 countries in the Democracy Index, compiled by The Economist Intelligence Unit, while the World Justice Project ranked Russia 80th of 99 countries surveyed in terms of rule of law.
The Russian Federation is recognised in international law as a successor state of the former Soviet Union. Russia continues to implement the international commitments of the USSR, and has assumed the USSR's permanent seat in the UN Security Council, membership in other international organisations, the rights and obligations under international treaties, and property and debts. Russia has a multifaceted foreign policy: it maintains diplomatic relations with 191 countries and has 144 embassies. The foreign policy is determined by the President and implemented by the Ministry of Foreign Affairs of Russia.
Although it is the successor state to a former superpower, Russia is commonly accepted to be a great power, as well as a regional power. Russia is one of five permanent members of the UN Security Council. The country participates in the Quartet on the Middle East and the Six-party talks with North Korea. Russia is a member of the Council of Europe, OSCE, and APEC. Russia usually takes a leading role in regional organisations such as the CIS, EurAsEC, CSTO, and the SCO. Russia became the 39th member state of the Council of Europe in 1996. In 1998, Russia ratified the European Convention on Human Rights. The legal basis for EU relations with Russia is the Partnership and Cooperation Agreement, which came into force in 1997. The Agreement recalls the parties' shared respect for democracy and human rights, political and economic freedom, and commitment to international peace and security. In May 2003, the EU and Russia agreed to reinforce their cooperation on the basis of common values and shared interests. President Vladimir Putin advocated a strategic partnership with close integration in various dimensions, including the establishment of EU–Russia Common Spaces. After the dissolution of the Soviet Union, Russia initially developed friendlier relations with the United States and NATO, but the relationship has since deteriorated significantly over a number of issues and conflicts between Russia and the Western countries. The NATO–Russia Council was established in 2002 to allow the United States, Russia and the 27 allies in NATO to work together as equal partners to pursue opportunities for joint collaboration.
Russia maintains strong and positive relations with other SCO and BRICS countries. In recent years, the country has significantly strengthened bilateral ties with the People's Republic of China by signing the Treaty of Friendship as well as building the Trans-Siberian oil pipeline and gas pipeline from Siberia to China, and has since formed a special relationship with China. India is the largest customer of Russian military equipment and the two countries share extensive defense and strategic relations.
An important aspect of Russia's relations with the West is the criticism of Russia's political system and human rights record (including LGBT rights, media freedom, and reports about killed journalists) by Western governments, the mass media and the leading democracy and human rights watchdogs. In particular, organisations such as Amnesty International and Human Rights Watch consider Russia insufficiently democratic and to allow few political rights and civil liberties to its citizens. Freedom House, an international organisation funded by the United States, ranks Russia as "not free", citing "carefully engineered elections" and "absence" of debate. Russian authorities dismiss these claims and especially criticise Freedom House. The Russian Ministry of Foreign Affairs has called the 2006 "Freedom in the World" report "prefabricated", stating that human rights issues have been turned into a political weapon, in particular by the United States. The ministry also claims that such organisations as Freedom House and Human Rights Watch use the same scheme of voluntary extrapolation of "isolated facts that of course can be found in any country" into "dominant tendencies".
Russia's power on the international stage depends on its petroleum revenue. If the world completes a transition to renewable energy and international demand for Russian oil, gas and coal falls dramatically, Russia's international power will diminish accordingly. Russia is ranked 148th out of 156 countries in the index of Geopolitical Gains and Losses after energy transition (GeGaLo).
The Russian military is divided into the Ground Forces, Navy, and Air Force. There are also three independent arms of service: the Strategic Missile Troops, the Aerospace Defence Forces, and the Airborne Troops. The military comprises over one million active-duty personnel, the fifth largest in the world. Additionally, there are over 2.5 million reservists, with the total number of reserve troops possibly as high as 20 million. It is mandatory for all male citizens aged 18–27 to be drafted for a year of service in the Armed Forces.
Russia has the largest stockpile of nuclear weapons in the world, the second largest fleet of ballistic missile submarines, and the only modern strategic bomber force outside the United States. More than 90% of world's 14,000 nuclear weapons are owned by Russia and the United States. Russia's tank force is the largest in the world, while its surface navy and air force are among the largest.
The country has a large and fully indigenous arms industry, producing most of its own military equipment, with only a few types of weapons imported. It has been one of the world's top suppliers of arms since 2001, accounting for around 30% of worldwide weapons sales and exporting weapons to about 80 countries. The Stockholm International Peace Research Institute (SIPRI) found that Russia was the second biggest exporter of arms in 2010–14, increasing its exports by 37 per cent from the period 2005–09. SIPRI estimated in 2020 that Russia was the third biggest exporter of arms, behind only the US and China. In 2010–14, Russia delivered weapons to 56 states and to rebel forces in eastern Ukraine.
The Russian government's official 2014 military budget is about 2.49 trillion rubles (approximately US$69.3 billion), the third largest in the world behind the US and China. The official budget is set to rise to 3.03 trillion rubles (approximately US$83.7 billion) in 2015, and 3.36 trillion rubles (approximately US$93.9 billion) in 2016. However, unofficial estimates put the budget significantly higher; for example, the Stockholm International Peace Research Institute (SIPRI) 2013 Military Expenditure Database estimated Russia's military expenditure in 2012 at US$90.749 billion, an increase of more than US$18 billion on SIPRI's estimate of the Russian military budget for 2011 (US$71.9 billion). Russia's military budget is higher than that of any other European nation.
According to the Constitution, the country comprises eighty-five federal subjects, including the disputed Republic of Crimea and federal city of Sevastopol. In 1993, when the Constitution was adopted, there were eighty-nine federal subjects listed, but later some of them were merged. These subjects have equal representation—two delegates each—in the Federation Council. However, they differ in the degree of autonomy they enjoy.
Federal subjects are grouped into eight federal districts, each administered by an envoy appointed by the President of Russia. Unlike the federal subjects, the federal districts are not a subnational level of government, but are a level of administration of the federal government. Federal districts' envoys serve as liaisons between the federal subjects and the federal government and are primarily responsible for overseeing the compliance of the federal subjects with the federal laws.
Russia is the largest country in the world, larger than the continents of Oceania, Europe, and Antarctica. It lies between latitudes 41° and 82° N, and longitudes 19° E and 169° W.
Russia's territorial expansion was achieved largely in the late 16th century under the Cossack Yermak Timofeyevich during the reign of Ivan the Terrible, at a time when competing city-states in the western regions of Russia had banded together to form one country. Yermak mustered an army and pushed eastward where he conquered nearly all the lands once belonging to the Mongols, defeating their ruler, Khan Kuchum.
Russia has a wide natural resource base, including major deposits of timber, petroleum, natural gas, coal, ores and other mineral resources.
The two most widely separated points in Russia are the Vistula Spit on the boundary with Poland, separating the Gdańsk Bay from the Vistula Lagoon, and the southeasternmost point of the Kuril Islands. The points farthest apart in longitude are the same spit in the west and Big Diomede Island in the east. The Russian Federation spans 11 time zones.
Most of Russia consists of vast stretches of plains that are predominantly steppe to the south and heavily forested to the north, with tundra along the northern coast. Russia possesses 10% of the world's arable land. Mountain ranges are found along the southern borders, such as the Caucasus (containing Mount Elbrus, the highest point in both Russia and Europe) and the Altai (containing Mount Belukha, the highest point of Siberia outside the Russian Far East); and in the eastern parts, such as the Verkhoyansk Range and the volcanoes of the Kamchatka Peninsula (containing Klyuchevskaya Sopka, the highest active volcano in Eurasia and the highest point of Asian Russia). The Ural Mountains, rich in mineral resources, form a north–south range that divides Europe and Asia.
Russia has an extensive coastline along the Arctic and Pacific Oceans, as well as along the Baltic Sea, Sea of Azov, Black Sea and Caspian Sea. The Barents Sea, White Sea, Kara Sea, Laptev Sea, East Siberian Sea, Chukchi Sea, Bering Sea, Sea of Okhotsk, and the Sea of Japan are linked to Russia via the Arctic and Pacific. Russia's major islands and archipelagos include Novaya Zemlya, Franz Josef Land, Severnaya Zemlya, the New Siberian Islands, Wrangel Island, the Kuril Islands, and Sakhalin. The Diomede Islands (one controlled by Russia, the other by the United States) lie only a short distance apart in the Bering Strait, and Kunashir Island lies just off Hokkaido, Japan.
Russia has thousands of rivers and inland bodies of water, providing it with one of the world's largest surface water resources. Its lakes contain approximately one-quarter of the world's liquid fresh water. The largest and most prominent of Russia's bodies of fresh water is Lake Baikal, the world's deepest, purest, oldest and most capacious fresh water lake. Baikal alone contains over one-fifth of the world's fresh surface water. Other major lakes include Ladoga and Onega, two of the largest lakes in Europe. Russia is second only to Brazil in volume of the total renewable water resources. Of the country's 100,000 rivers, the Volga is the most famous, not only because it is the longest river in Europe, but also because of its major role in Russian history. The Siberian rivers Ob, Yenisey, Lena and Amur are among the longest rivers in the world.
The enormous size of Russia and the remoteness of many areas from the sea result in the dominance of the humid continental climate, which is prevalent in all parts of the country except for the tundra and the extreme southwest. Mountains in the south obstruct the flow of warm air masses from the Indian Ocean, while the plain of the west and north makes the country open to Arctic and Atlantic influences.
Most of Northern European Russia and Siberia has a subarctic climate, with extremely severe winters in the inner regions of Northeast Siberia (mostly the Sakha Republic, where the Northern Pole of Cold is located), and more moderate winters elsewhere. Both the strip of land along the shore of the Arctic Ocean and the Russian Arctic islands have a polar climate.
The coastal part of Krasnodar Krai on the Black Sea, most notably in Sochi, possesses a humid subtropical climate with mild and wet winters. In many regions of East Siberia and the Far East, winter is dry compared to summer; other parts of the country experience more even precipitation across seasons. Winter precipitation in most parts of the country usually falls as snow. The region along the Lower Volga and Caspian Sea coast, as well as some areas of southernmost Siberia, possesses a semi-arid climate.
Throughout much of the territory there are only two distinct seasons—winter and summer—as spring and autumn are usually brief periods of change between extremely low and extremely high temperatures. The coldest month is January (February on the coastline); the warmest is usually July. Great ranges of temperature are typical. In winter, temperatures get colder both from south to north and from west to east. Summers can be quite hot, even in Siberia. The continental interiors are the driest areas.
From north to south the East European Plain, also known as Russian Plain, is clad sequentially in Arctic tundra, coniferous forest (taiga), mixed and broad-leaf forests, grassland (steppe), and semi-desert (fringing the Caspian Sea), as the changes in vegetation reflect the changes in climate. Siberia supports a similar sequence but is largely taiga. Russia has the world's largest forest reserves, known as "the lungs of Europe", second only to the Amazon Rainforest in the amount of carbon dioxide it absorbs.
There are 266 mammal species and 780 bird species in Russia. A total of 415 animal species had been included in the Red Data Book of the Russian Federation as of 1997 and are now protected. There are 28 UNESCO World Heritage Sites in Russia, 40 UNESCO biosphere reserves, 41 national parks and 101 nature reserves. Many ecosystems remain untouched by man, mainly in the northern taiga and in the subarctic tundra of Siberia. Over time, Russia has improved and applied environmental legislation, developed and implemented federal and regional strategies and programmes, and studied, inventoried and protected rare and endangered plants, animals and other organisms, including them in the Red Data Book of the Russian Federation.
Russia has an upper-middle income mixed economy with enormous natural resources, particularly oil and natural gas. It has the 11th largest economy in the world by nominal GDP and the 6th largest by purchasing power parity (PPP). Since the turn of the 21st century, higher domestic consumption and greater political stability have bolstered economic growth in Russia. The country ended 2008 with its ninth straight year of growth, but growth has since slowed with the decline in the price of oil and gas. Real GDP per capita (PPP, current international dollars) was $19,840 in 2010. Growth was primarily driven by non-traded services and goods for the domestic market, as opposed to oil or mineral extraction and exports. The average nominal salary in Russia was $967 per month in early 2013, up from $80 in 2000. In May 2016 the average nominal monthly wage fell below $450 per month, and tax on the income of individuals is payable at the rate of 13% on most incomes. Approximately 19.2 million Russians lived below the national poverty line in 2016, significantly up from 16.1 million in 2015. Unemployment in Russia was 5.4% in 2014, down from about 12.4% in 1999. Officially, about 20–25% of the Russian population is categorised as middle class; however, some economists and sociologists think this figure is inflated and that the real fraction is about 7%. After the United States, the European Union and other countries imposed economic sanctions following the annexation of Crimea, and oil prices collapsed, the proportion of the middle class could decrease drastically. The economic development of the country has been geographically uneven, with the Moscow region contributing a very large share of the country's GDP.
Oil, natural gas, metals, and timber account for more than 80% of Russian exports abroad. Since 2003, exports of natural resources have decreased in economic importance as the internal market has strengthened considerably. Even so, the oil-and-gas sector accounted for 16% of GDP, 52% of federal budget revenues and over 80% of total exports. Oil export earnings allowed Russia to increase its foreign reserves from $12 billion in 1999 to $597.3 billion on 1 August 2008; reserves have since fallen to US$332 billion. The macroeconomic policy under Finance Minister Alexei Kudrin was prudent and sound, with excess income being stored in the Stabilization Fund of Russia. In 2006, Russia repaid most of its formerly massive debts, leaving it with one of the lowest foreign debts among major economies. The Stabilization Fund helped Russia to come out of the global financial crisis in a much better state than many experts had expected.
A simpler, more streamlined tax code adopted in 2001 reduced the tax burden on people and dramatically increased state revenue. Russia has a flat tax rate of 13%. This ranks it as the country with the second most attractive personal tax system for single managers in the world after the United Arab Emirates. According to Bloomberg, Russia is considered well ahead of most other resource-rich countries in its economic development, with a long tradition of education, science, and industry. The country has a higher proportion of higher education graduates than any other country in Eurasia.
Inequality of household income and wealth has also been noted, with Credit Suisse finding Russian wealth distribution so much more extreme than that of the other countries studied that it "deserves to be placed in a separate category".
Another problem is the modernisation of infrastructure, which is ageing and inadequate after years of neglect in the 1990s; the government has said $1 trillion will be invested in the development of infrastructure by 2020. In December 2011, Russia was approved as a member of the World Trade Organisation after 18 years of dialogue, allowing it greater access to overseas markets. Some analysts estimate that WTO membership could bring the Russian economy a boost of up to 3% annually. Russia ranks as the second-most corrupt country in Europe (after Ukraine), according to the Corruption Perceptions Index. The Norwegian-Russian Chamber of Commerce also states that "[c]orruption is one of the biggest problems both Russian and international companies have to deal with." Corruption in Russia is perceived as a significant problem impacting all aspects of life, including public administration, law enforcement, healthcare and education. The phenomenon of corruption is strongly established in the historical model of public governance in Russia and is attributed to the general weakness of the rule of law. According to Transparency International's Corruption Perceptions Index, Russia's public sector ranked 137th (out of 180 countries) with a score of 28 out of 100 in 2019.
The Russian central bank announced plans in 2013 to free-float the Russian ruble in 2015. According to a stress test conducted by the central bank, the Russian financial system would be able to handle a currency decline of 25–30% without major central bank intervention. However, the Russian economy began stagnating in late 2013, and in combination with the War in Donbass it is in danger of entering stagflation, a combination of slow growth and high inflation. The decline in the Russian ruble has increased the costs for Russian companies of making interest payments on debt issued in U.S. dollars or other foreign currencies that have strengthened against the ruble; it thus costs Russian companies more of their ruble-denominated revenue to repay their debt holders in dollars or other foreign currencies. The ruble has been devalued by more than 50 percent since July 2014. Moreover, after inflation was brought down to 3.6% in 2012, the lowest rate since the dissolution of the Soviet Union, it jumped to nearly 7.5% in 2014, causing the central bank to raise its lending rate from 5.5% in 2013 to 8%. In an October 2014 article in "Bloomberg Businessweek", it was reported that Russia had begun significantly shifting its economy towards China in response to increasing financial tensions following its annexation of Crimea and subsequent Western economic sanctions.
In recent years, Russia has frequently been described in the media as an energy superpower. The country has the world's largest natural gas reserves, the 8th largest oil reserves, and the second largest coal reserves. Russia is the world's leading natural gas exporter and second largest natural gas producer, while also the largest oil exporter and the largest oil producer.
Russia is the third largest electricity producer in the world and the 5th largest renewable energy producer, the latter because of the well-developed hydroelectricity production in the country. Large cascades of hydropower plants are built in European Russia along big rivers like the Volga. The Asian part of Russia also features a number of major hydropower stations; however, the gigantic hydroelectric potential of Siberia and the Russian Far East largely remains unexploited.
Russia was the first country to develop civilian nuclear power and to construct the world's first nuclear power plant. Currently the country is the 4th largest nuclear energy producer, with all nuclear power in Russia being managed by Rosatom State Corporation. The sector is rapidly developing, with an aim of increasing the total share of nuclear energy from the current 16.9% to 23% by 2020. The Russian government plans to allocate 127 billion rubles ($5.42 billion) to a federal program dedicated to the next generation of nuclear energy technology. About 1 trillion rubles ($42.7 billion) is to be allocated from the federal budget to nuclear power and industry development before 2015.
In May 2014 on a two-day trip to Shanghai, President Putin signed a deal on behalf of Gazprom for the Russian energy giant to supply China with 38 billion cubic meters of natural gas per year. Construction of a pipeline to facilitate the deal was agreed whereby Russia would contribute $55bn to the cost, and China $22bn, in what Putin described as "the world's biggest construction project for the next four years." The natural gas would begin to flow sometime between 2018 and 2020 and would continue for 30 years at an ultimate cost to China of $400bn.
Russia recorded a trade surplus of US$130.1 billion in 2017. Russia's Trade Balance recorded a surplus of US$19.7 billion in October 2018, compared with a surplus of US$10.1 billion in October 2017.
The European Union is Russia's largest trading partner and Russia is the EU's fourth largest trading partner. 75% of foreign direct investment (FDI) stocks in Russia come from the EU.
Reuters reported that U.S. companies "generated more than $90 billion in revenue from Russia in 2017." According to the AALEP, "there are almost 3,000 American companies in Russia, and the U.S. is also the leader in terms of foreign companies in Special Economic Zones, with 11 projects."
Russia recorded a trade surplus of US$15.8 billion in 2013. The balance of trade in Russia is reported by the Central Bank of Russia. Historically, from 1997 until 2013, Russia's balance of trade averaged US$8,338 million, reaching an all-time high of US$20,647 million in December 2011 and a record low of −US$185 million in February 1998. Russia runs regular trade surpluses primarily due to exports of commodities.
In 2015, Russia's main exports were oil and natural gas (62.8% of total exports), ores and metals (5.9%), chemical products (5.8%), machinery and transport equipment (5.4%) and food (4.7%). Others included agricultural raw materials (2.2%) and textiles (0.2%).
Russia imports food, ground transport equipment, pharmaceuticals, and textiles and footwear. Its main trading partners are China (7% of total exports and 10% of imports), Germany (7% of exports and 8% of imports) and Italy. Exports, as reported by the Central Bank of Russia, decreased to US$39,038 million in January 2013 from US$48,568 million in December 2012; historically, from 1994 until 2013, Russian exports averaged US$18,669 million, reaching an all-time high of US$51,338 million in December 2011 and a record low of US$4,087 million in January 1994. Russia was the 16th largest export economy in the world in 2016 and is a leading exporter of oil and natural gas. Services are the biggest sector of the Russian economy, accounting for 58% of GDP; the most important segments are wholesale and retail trade and the repair of motor vehicles, motorcycles and personal and household goods (17% of total GDP); public administration, health and education (12%); real estate (9%); and transport, storage and communications (7%). Industry contributes 40% of total output, with mining (11% of GDP), manufacturing (13%) and construction (4%) the most important segments; agriculture accounts for the remaining 2%. Imports decreased to US$21,296 million in January 2013 from US$31,436 million in December 2012; historically, from 1994 until 2013, Russian imports averaged US$11,392 million, reaching an all-time high of US$31,553 million in October 2012 and a record low of US$2,691 million in January 1999. Russia's main imports are food (13% of total imports) and ground transport equipment (12%); others include pharmaceuticals, textiles and footwear, plastics and optical instruments. Main import partners are China (10% of total imports) and Germany (8%); others include Italy, France, Japan and the United States.
Foreign trade rose 34% to $151.5 billion in the first half of 2005, mainly due to the increase in oil and gas prices which now form 64% of all exports by value. Trade with CIS countries is up 13.2% to $23.3 billion. Trade with the EU forms 52.9%, with the CIS 15.4%, Eurasian Economic Community 7.8% and Asia-Pacific Economic Cooperation 15.9%.
Russia's area of cultivated land is the fourth largest in the world. From 1999 to 2009, Russia's agriculture grew steadily, and the country turned from a grain importer into the third largest grain exporter after the EU and the United States. The production of meat grew from 6,813,000 tonnes in 1999 to 9,331,000 tonnes in 2008, and continues to grow.
The 2014 devaluation of the rouble and imposition of sanctions spurred domestic production, and in 2016 Russia exceeded Soviet grain production levels, and became the world's largest exporter of wheat.
This restoration of agriculture was supported by the government's credit policy, helping both individual farmers and large privatised corporate farms that once were Soviet kolkhozes and still own a significant share of agricultural land. While large farms concentrate mainly on grain production and husbandry products, small private household plots produce most of the country's potatoes, vegetables and fruits.
Since Russia borders three oceans (the Atlantic, Arctic, and Pacific), Russian fishing fleets are a major world fish supplier. Russia captured 3,191,068 tons of fish in 2005. Both exports and imports of fish and sea products grew significantly in recent years, reaching $2,415 and $2,036 million, respectively, in 2008.
Sprawling from the Baltic Sea to the Pacific Ocean, Russia has more than a fifth of the world's forests, which makes it the largest forest country in the world. However, according to a 2012 study by the Food and Agriculture Organization of the United Nations and the Government of the Russian Federation, the considerable potential of Russian forests is underutilised and Russia's share of the global trade in forest products is less than four percent.
Railway transport in Russia is mostly under the control of the state-run Russian Railways monopoly. The company accounts for over 3.6% of Russia's GDP and handles 39% of total freight traffic (including pipelines) and more than 42% of passenger traffic. The total length of common-use railway track is second only to that of the United States. Russia has the largest length of electrified track in the world, and in addition there is an extensive network of industrial non-common carrier lines. Railways in Russia, unlike in most of the world, use a broad gauge, with the exception of Sakhalin Island, which uses a narrow gauge. The most renowned railway in Russia is the Trans-Siberian ("Transsib"), spanning a record seven time zones and serving the longest single continuous services in the world: Moscow–Vladivostok, Moscow–Pyongyang and Kiev–Vladivostok.
Much of Russia's inland waterways, which total , consist of natural rivers or lakes. In the European part of the country, a network of canals connects the basins of the major rivers. Russia's capital, Moscow, is sometimes called "the port of the five seas" because of its waterway connections to the Baltic, White, Caspian, Azov and Black Seas.
Major sea ports of Russia include Rostov-on-Don on the Azov Sea, Novorossiysk on the Black Sea, Astrakhan and Makhachkala on the Caspian, Kaliningrad and St Petersburg on the Baltic, Arkhangelsk on the White Sea, Murmansk on the Barents Sea, Petropavlovsk-Kamchatsky and Vladivostok on the Pacific Ocean. In 2008 the country owned 1,448 merchant marine ships. The world's only fleet of nuclear-powered icebreakers advances the economic exploitation of the Arctic continental shelf of Russia and the development of sea trade through the Northern Sea Route between Europe and East Asia.
By total length of pipelines Russia is second only to the United States. Currently many new pipeline projects are being realised, including Nord Stream and South Stream natural gas pipelines to Europe, and the Eastern Siberia – Pacific Ocean oil pipeline (ESPO) to the Russian Far East and China.
Russia has 1,216 airports, the busiest being Sheremetyevo, Domodedovo, and Vnukovo in Moscow, and Pulkovo in St. Petersburg.
Typically, major Russian cities have well-developed systems of public transport, the most common modes being bus, trolleybus and tram. Seven Russian cities, namely Moscow, Saint Petersburg, Nizhny Novgorod, Novosibirsk, Samara, Yekaterinburg, and Kazan, have underground metros, while Volgograd features a metrotram. The total length of metro lines in Russia is . The Moscow Metro and the Saint Petersburg Metro are the oldest in Russia, opened in 1935 and 1955 respectively. They are among the fastest and busiest metro systems in the world, and both are famous for the rich decoration and unique design of their stations, a tradition common to Russian metros and railways.
Science and technology in Russia have blossomed since the Age of Enlightenment, when Peter the Great founded the Russian Academy of Sciences and Saint Petersburg State University, and the polymath Mikhail Lomonosov established Moscow State University, paving the way for a strong native tradition in learning and innovation. In the 19th and 20th centuries the country produced a large number of notable scientists and inventors.
The Russian physics school began with Lomonosov, who proposed the law of conservation of matter, preceding the energy conservation law. Russian discoveries and inventions in physics include the electric arc, electrodynamical Lenz's law, space groups of crystals, the photoelectric cell, superfluidity, Cherenkov radiation, electron paramagnetic resonance, heterotransistors and 3D holography. Lasers and masers were co-invented by Nikolai Basov and Alexander Prokhorov, while the idea of the tokamak for controlled nuclear fusion was introduced by Igor Tamm, Andrei Sakharov and Lev Artsimovich, eventually leading to the modern international ITER project, in which Russia participates.
Since the time of Nikolay Lobachevsky (the "Copernicus of Geometry", who pioneered non-Euclidean geometry) and the prominent teacher Pafnuty Chebyshev, the Russian mathematical school has been one of the most influential in the world. Chebyshev's students included Aleksandr Lyapunov, who founded modern stability theory, and Andrey Markov, who invented Markov chains. In the 20th century Soviet mathematicians such as Andrey Kolmogorov, Israel Gelfand, and Sergey Sobolev made major contributions to various areas of mathematics. Nine Soviet/Russian mathematicians have been awarded the Fields Medal, one of the most prestigious awards in mathematics. Grigori Perelman was offered the first ever Clay Millennium Prize for his proof of the Poincaré conjecture, completed in 2002.
Russian chemist Dmitry Mendeleev invented the periodic table, the main framework of modern chemistry. Aleksandr Butlerov was one of the creators of the theory of chemical structure, which plays a central role in organic chemistry. Russian biologists include Dmitry Ivanovsky, who discovered viruses; Ivan Pavlov, the first to experiment with classical conditioning; and Ilya Mechnikov, a pioneering researcher of the immune system and probiotics.
Many Russian scientists and inventors were émigrés, like Igor Sikorsky, who built the first airliners and modern-type helicopters; Vladimir Zworykin, often called the father of television; chemist Ilya Prigogine, noted for his work on dissipative structures and complex systems; Nobel Prize-winning economists Simon Kuznets and Wassily Leontief; physicist George Gamow, a developer of the Big Bang theory; and social scientist Pitirim Sorokin. Many foreigners, such as Leonhard Euler and Alfred Nobel, worked in Russia for long periods.
Russian inventions include arc welding by Nikolay Benardos, further developed by Nikolay Slavyanov, Konstantin Khrenov and other Russian engineers. Gleb Kotelnikov invented the knapsack parachute, while Evgeniy Chertovsky introduced the pressure suit. Alexander Lodygin and Pavel Yablochkov were pioneers of electric lighting, and Mikhail Dolivo-Dobrovolsky introduced the first three-phase electric power systems, widely used today. Sergei Lebedev invented the first commercially viable and mass-produced type of synthetic rubber. The first ternary computer, "Setun", was developed by Nikolay Brusentsov.
In the 20th century a number of prominent Soviet aerospace engineers, inspired by the fundamental works of Nikolai Zhukovsky, Sergei Chaplygin and others, designed many hundreds of models of military and civilian aircraft and founded a number of "KBs" ("Construction Bureaus") that now constitute the bulk of Russian United Aircraft Corporation. Famous Russian aircraft include the civilian Tu-series, Su and MiG fighter aircraft, Ka and Mi-series helicopters; many Russian aircraft models are on the list of most produced aircraft in history.
Famous Russian battle tanks include the T-34, the most heavily produced tank design of World War II, and further tanks of the T-series, including the most produced tank in history, the T-54/55. The AK-47 and AK-74 by Mikhail Kalashnikov constitute the most widely used type of assault rifle throughout the world: more AK-type rifles have been manufactured than all other assault rifles combined.
For all these achievements, however, since the late Soviet era Russia has lagged behind the West in a number of technologies, mostly those related to energy conservation and consumer goods production. The crisis of the 1990s led to a drastic reduction of state support for science and to a brain drain of researchers emigrating from Russia.
In the 2000s, on the wave of a new economic boom, the situation in Russian science and technology improved, and the government launched a campaign aimed at modernisation and innovation. Russian President Dmitry Medvedev formulated the top priorities for the country's technological development.
Russia has now completed the GLONASS satellite navigation system. The country is developing its own fifth-generation jet fighter and constructing the world's first serially produced mobile nuclear plant.
Russian achievements in the field of space technology and space exploration trace back to Konstantin Tsiolkovsky, the father of theoretical astronautics. His works inspired leading Soviet rocket engineers such as Sergey Korolyov and Valentin Glushko, and many others who contributed to the success of the Soviet space program in the early stages of the Space Race and beyond.
In 1957 the first Earth-orbiting artificial satellite, "Sputnik 1", was launched; in 1961 the first human trip into space was successfully made by Yuri Gagarin. Many other Soviet and Russian space exploration records ensued, including the first spacewalk, performed by Alexei Leonov; "Luna 9", the first spacecraft to land on the Moon; "Zond 5", which carried the first Earthlings (two tortoises and other life forms) to circumnavigate the Moon; "Venera 7", the first to land on another planet (Venus); "Mars 3", then the first to land on Mars; the first space exploration rover, "Lunokhod 1"; and the first space stations, "Salyut 1" and "Mir".
After the collapse of the Soviet Union, some government-funded space exploration programs, including the Buran space shuttle program, were cancelled or delayed, while participation of the Russian space industry in commercial activities and international cooperation intensified.
Russia is now the world's largest launcher of satellites. After the United States Space Shuttle program ended in 2011, Soyuz rockets became the only means of transporting astronauts to the International Space Station.
Luna-Glob is a Russian Moon exploration programme, with the first mission launch planned for 2021. Roscosmos is also developing the Orel spacecraft to replace the aging Soyuz; it could also conduct missions to lunar orbit as early as 2026. In February 2019, it was announced that Russia intends to conduct its first crewed mission to land on the Moon in 2031.
In Russia, approximately 70 per cent of drinking water comes from surface water and 30 per cent from groundwater. In 2004, water supply systems had a total capacity of 90 million cubic metres a day. The average residential water use was 248 litres per capita per day. One fourth of the world's fresh surface and groundwater is located in Russia. The water utilities sector is one of the largest industries in Russia serving the entire Russian population.
Lake Baikal is famous for its record depth and clear waters. It contains 20% of the world's liquid fresh water. However, as water pollution worsens, there are concerns that the lake could degrade into a swamp.
There are many different estimates of the actual cost of corruption. According to official government statistics from Rosstat, the "shadow economy" occupied only 15% of Russia's GDP in 2011, and this included unreported salaries (to avoid taxes and social payments) and other types of tax evasion. According to Rosstat's estimates, corruption in 2011 amounted to only 3.5 to 7% of GDP. In comparison, some independent experts maintain that corruption consumes as much as 25% of Russia's GDP. A World Bank report puts this figure at 48%. There is also an interesting shift in the main focus of bribery: whereas previously officials took bribes to shut their eyes to legal infractions, they now take them simply to perform their duties. Many experts admit that in recent years corruption in Russia has become a business. In the 1990s, businessmen had to pay different criminal groups to provide a "krysha" (literally, a "roof", i.e., protection). Nowadays, this "protective" function is performed by officials. Corrupt hierarchies characterise different sectors of the economy, including education.
In the end, the Russian population pays for this corruption. For example, some experts believe that the rapid increases in tariffs for housing, water, gas and electricity, which significantly outpace the rate of inflation, are a direct result of high volumes of corruption at the highest levels. In recent years the reaction to corruption has changed: starting from Putin's second term, very few corruption cases have been the subject of outrage. Putin's system is remarkable for its ubiquitous and open merging of the civil service and business, as well as its use of relatives, friends, and acquaintances to benefit from budgetary expenditures and take over state property. Corporate, property, and land raiding is commonplace.
On 26 March 2017, protests against alleged corruption in the federal Russian government took place simultaneously in many cities across the country. They were triggered by the lack of proper response from the Russian authorities to the published investigative film "He Is Not Dimon To You", which has garnered more than 20 million views on YouTube.
In the 2018 results of the Corruption Perceptions Index by Transparency International, Russia ranked 138th out of 180 countries with a score of 28 out of 100, tying with Guinea, Iran, Lebanon, Mexico and Papua New Guinea.
With a population of 142.8 million according to the 2010 census, rising to 146.7 million as of 2020, Russia is the most populous country in Europe and the ninth-most populous country in the world; its population density stands at 9 inhabitants per square kilometre (23 per square mile). The overall life expectancy in Russia at birth is 72.4 years (66.9 years for males and 77.6 years for females). Since the 1990s, Russia's death rate has exceeded its birth rate. As of 2018, the total fertility rate (TFR) across Russia was estimated to be 1.57 children born per woman, one of the lowest fertility rates in the world, below the replacement rate of 2.1, and considerably below the high of 7.44 children born per woman in 1908. Consequently, the country has one of the oldest populations in the world, with an average age of 40.3 years.
Nevertheless, Russia's overall birth rate is higher than that of most European countries (13.3 births per 1000 people in 2014 compared to the European Union average of 10.1 per 1000), though its death rate is also substantially higher (in 2014, Russia's death rate was 13.1 per 1000 people compared to the EU average of 9.7 per 1000). Since 2010, Russia has seen increased population growth due to declining death rates, increased birth rates and increased immigration. In 2009, it recorded annual population growth for the first time in fifteen years, with total growth of 10,500. In 2012, the trend continued, with 1,896,263 births, the highest since 1990, and even exceeding annual births during the period 1967–1969.
Russia is home to approximately 111 million ethnic Russians, and about 20 million ethnic Russians live outside Russia in the former republics of the Soviet Union, mostly in Ukraine and Kazakhstan. The 2010 census recorded 81% of the population as ethnically Russian, and 19% as other ethnicities: 3.7% Tatars; 1.4% Ukrainians; 1.1% Bashkirs; 1% Chuvashes; 11.8% others and unspecified. According to the Census, 84.93% of the Russian population belongs to European ethnic groups (Slavic, Germanic, Finnic, Greek, and others).
The government is implementing a number of programs designed to increase the birth rate and attract more migrants. Monthly government child-assistance payments were doubled to US$55, and since 2007 a one-time payment of US$9,200 has been offered to women who have a second child.
The number of Russian emigrants steadily declined from 359,000 in 2000 to 32,000 in 2009. According to the UN, Russia's immigrant population is the third largest in the world, numbering 11.6 million. Ukraine, Uzbekistan, Tajikistan, Azerbaijan, Moldova and Kazakhstan were the leading countries of origin for immigrants to Russia. There are about 3 million Ukrainians living in Russia. In 2016, 196,000 migrants arrived, mostly from the ex-Soviet states.
In 2006, the Russian government began simplifying immigration laws and launched a state program "for providing assistance to voluntary immigration of ethnic Russians from former Soviet republics". In light of these trends, President Putin declared that Russia's population could reach 146 million by 2025, mainly as a result of immigration.
Ethnic Russians comprise 81% of the country's population. Russia is a multi-national state with over 185 ethnic groups designated as nationalities; the populations of these groups vary enormously, from millions (e.g., Russians and Tatars) to under 10,000 (e.g., Samis and Eskimo).
Russia's 185 ethnic groups speak over 100 languages. According to the 2002 Census, 142.6 million people speak Russian, followed by Tatar with 5.3 million and Ukrainian with 1.8 million speakers. Russian is the only official state language, but the Constitution gives the individual republics the right to establish their own state languages in addition to Russian.
Despite its wide distribution, the Russian language is homogeneous throughout the country. Russian is the most geographically widespread language of Eurasia, as well as the most widely spoken Slavic language. It belongs to the Indo-European language family and is one of the living members of the East Slavic languages, the others being Belarusian and Ukrainian (and possibly Rusyn). Written examples of Old East Slavic ("Old Russian") are attested from the 10th century onwards.
Russian is the second-most used language on the Internet after English, one of two official languages aboard the International Space Station and is one of the six official languages of the UN. 35 languages are officially recognised in Russia in various regions by local governments.
Though a secular state under its constitution, Russia is often said to have Russian Orthodoxy as its "de facto" national religion, despite the presence of other religious minorities: "The Russian Orthodox Church is de facto privileged religion of the state, claiming the right to decide which other religions or denominations are to be granted the right of registration".
Russians have practised Orthodox Christianity since the 10th century. According to the historical traditions of the Orthodox Church, Christianity was first brought to the territory of modern Belarus, Russia and Ukraine by Saint Andrew, the first Apostle of Jesus Christ. Following the "Primary Chronicle", the definitive Christianization of Kievan Rus' dates from the year 988 (the year is disputed), when Vladimir the Great was baptised in Chersonesus and proceeded to baptise his family and people in Kiev. The latter events are traditionally referred to as the "baptism of Rus'" (, ) in Russian and Ukrainian literature. Much of the Russian population, like other Slavic peoples, preserved for centuries a double belief ("dvoeverie") in both indigenous religion and Orthodox Christianity.
At the time of the 1917 Revolution, the Russian Orthodox Church was deeply integrated into the autocratic state, enjoying official status. This was a significant factor that contributed to the Bolshevik attitude to religion and the steps they took to control it. Moreover, the Bolsheviks included many people with non-Russian and/or non-Christian backgrounds, such as Vladimir Lenin, Leon Trotsky, Grigory Zinoviev, Lev Kamenev, and Grigori Sokolnikov, who were at best indifferent towards Christianity and at worst hostile to it. The ideas of German philosopher Karl Marx were synthesised with Lenin's own political thought to form the Communist Party.
Thus the USSR became one of the first communist states to proclaim, as an ideological objective, the elimination of religion and its replacement with universal atheism. The communist government ridiculed religions and their believers, and propagated atheism in schools. The confiscation of religious assets was often based on accusations of illegal accumulation of wealth. State atheism in the Soviet Union was known in Russian as "gosateizm", and was based on the ideology of Marxism–Leninism, which consistently advocated the control, suppression, and elimination of religion. Within about a year of the revolution, the state expropriated all church property, including the churches themselves, and in the period from 1922 to 1926, 28 Russian Orthodox bishops and more than 1,200 priests were killed. Many more were persecuted.
After the collapse of the Soviet Union there has been a renewal of religions in Russia, and among Slavs various movements have emerged besides Christianity, including Rodnovery (Slavic Native Faith), Assianism, and other ethnic Paganisms, Roerichism, Ringing Cedars' Anastasianism, Hinduism, Siberian shamanism or Tengrism, and other religions.
According to various sociological surveys on religious adherence, from 41% to over 80% of the total population of Russia adhere to the Russian Orthodox Church. In 2012 the research organization Sreda, in cooperation with the 2010 census and the Ministry of Justice, published the Arena Atlas, a detailed enumeration of religious populations and nationalities in Russia, based on a large-sample country-wide survey. The results showed that 46.8% of Russians declared themselves Christians—including 41% Russian Orthodox, 1.5% simply Orthodox or members of non-Russian Orthodox churches, 4.1% unaffiliated Christians, and less than 1% each for Old Believers, Catholics, and Protestants—while 25% were spiritual but not religious, 13% were atheists, 6.5% were Muslims, 1.2% were followers of "traditional religions honoring gods and ancestors" (including Rodnovery, Tengrism and other ethnic religions), 0.5% were Buddhists, 0.1% were religious Jews and 0.1% were Hindus.
The 2017 Survey "Religious Belief and National Belonging in Central and Eastern Europe" made by the Pew Research Center showed that 73% of Russians declared themselves Christians—including 71% Orthodox, 1% Catholic, and 2% Other Christians, while 15% were unaffiliated, 10% were Muslims, and 1% were from other religions. According to the same study, Christianity experienced significant increase since the fall of the USSR in 1991, and more Russians say they are Christian now (73%) than say they were raised Christian (65%).
According to various reports, the proportion of non-religious people in Russia is between 16% and 48% of the population. According to recent studies, the proportion of atheists has significantly decreased over the decades since the dissolution of the Soviet Union.
Orthodox Christianity, Islam, Buddhism, and Paganism (either preserved or revived) are recognised by law as Russia's traditional religions, marking the country's "historical heritage".
An estimated 95% of the registered Orthodox parishes belong to the Russian Orthodox Church while there are a number of smaller Orthodox churches. However, the vast majority of Orthodox believers do not attend church on a regular basis. Easter is the most popular religious holiday in Russia, celebrated by a large segment of the Russian population, including large numbers of those who are non-religious. More than three-quarters of the Russian population celebrate Easter by making traditional Easter cakes, coloured eggs and paskha.
Islam is the second largest religion in Russia after Russian Orthodoxy. It is the traditional or predominant religion amongst some Caucasian ethnicities (notably the Chechens, the Ingush and the Circassians), and amongst some Turkic peoples (notably the Tatars and the Bashkirs). A survey published in 2019 by the Pew Research Center found that 76% of Russians had a favourable view of Muslims.
Buddhism is traditional in three republics of Russia: Buryatia, Tuva, and Kalmykia, the latter being the only region in Europe where Buddhism is the most practiced religion.
In cultural and social affairs, Vladimir Putin has collaborated closely with the Russian Orthodox Church. Patriarch Kirill of Moscow, head of the Church, endorsed his election in 2012. Steven Lee Myers reports, "The church, once heavily repressed, had emerged from the Soviet collapse as one of the most respected institutions... Now Kirill led the faithful directly into an alliance with the state." Baptist minister Mark Woods provides specific examples of how the Church under Patriarch Kirill of Moscow has backed the expansion of Russian power into Crimea and eastern Ukraine.
The Holy Synod of the Russian Orthodox Church, at its session on 15 October 2018, severed ties with the Ecumenical Patriarchate of Constantinople. The decision was taken in response to the move made by the Patriarchate of Constantinople a few days prior that effectively ended the Moscow Patriarchate's jurisdiction over Ukraine and promised autocephaly to Ukraine.
On 26 April 2017, for the first time, the U.S. Commission on International Religious Freedom classified Russia as one of the world's worst violators of religious liberty, recommending in its 2017 annual report that the U.S. government deem Russia a "country of particular concern" under the International Religious Freedom Act and negotiate for religious liberty. The report states, "it is the sole state to have not only continually intensified its repression of religious freedom since USCIRF commenced monitoring it, but also to have expanded its repressive policies...ranging from administrative harassment to arbitrary imprisonment to extrajudicial killing, are implemented in a fashion that is systematic, ongoing, and egregious." On 4 April 2017 UN Special Rapporteur on Freedom of Opinion and Expression David Kaye, UN Special Rapporteur on Freedoms of Peaceful Assembly and Association Maina Kiai, and UN Special Rapporteur on Freedom of Religion and Belief Ahmed Shaheed condemned Russia's treatment of Jehovah's Witnesses. Many other countries and international organizations have spoken out on Russia's religious abuses.
The Russian Constitution guarantees free, universal health care for all its citizens. In practice, however, free health care is partially restricted because of mandatory registration. While Russia has more physicians, hospitals, and health care workers than almost any other country in the world on a per capita basis, since the dissolution of the Soviet Union the health of the Russian population has declined considerably as a result of social, economic, and lifestyle changes; the trend has been reversed only in recent years, with average life expectancy having increased by 6.8 years for males and 4.2 years for females between 2006 and 2018.
Due to the ongoing Russian financial crisis since 2014, major cuts in health spending have resulted in a decline in the quality of service of the state healthcare system. About 40% of basic medical facilities have fewer staff than they are supposed to have, with others being closed down. Waiting times for treatment have increased, and patients have been forced to pay for more services that were previously free.
The average life expectancy at birth in Russia is 72.4 years (66.9 years for males and 77.6 years for females). The biggest factor contributing to the relatively low life expectancy for males is a high mortality rate among working-age males. Deaths mostly occur from preventable causes, including alcohol poisoning, smoking, traffic accidents and violent crime. As a result, Russia has one of the world's most female-biased sex ratios, with 0.859 males to every female.
At 54%, Russia has the world's highest proportion of the population with college-level or higher education. Russia has a free education system, guaranteed for all citizens by the Constitution; however, entry to subsidised higher education is highly competitive. As a result of the great emphasis on science and technology in education, Russian medical, mathematical, scientific, and aerospace research is generally of a high order.
Since 1990, an 11-year school education has been standard. Education in state-owned secondary schools is free. University-level education is free, with exceptions; a substantial share of students is enrolled for full tuition, as many state institutions have opened commercial places in recent years.
The oldest and largest Russian universities are Moscow State University and Saint Petersburg State University. In the 2000s, in order to create higher education and research institutions of comparable scale in Russian regions, the government launched a program of establishing "federal universities", mostly by merging existing large regional universities and research institutes and providing them with special funding. These new institutions include the Southern Federal University, Siberian Federal University, Kazan (Volga) Federal University, North-Eastern Federal University, and Far Eastern Federal University. According to the 2018 QS World University Rankings, the highest-ranking Russian educational institution is Moscow State University, rated 95th in the world.
There are over 160 different ethnic groups and indigenous peoples in Russia. The country's vast cultural diversity spans ethnic Russians with their Slavic Orthodox traditions, Tatars and Bashkirs with their Turkic Muslim culture, Buddhist nomadic Buryats and Kalmyks, Shamanistic peoples of the Extreme North and Siberia, highlanders of the Northern Caucasus, and Finno-Ugric peoples of the Russian North West and Volga Region.
Handicrafts, like the Dymkovo toy, khokhloma, gzhel and palekh miniatures, represent an important aspect of Russian folk culture. Ethnic Russian clothes include the kaftan, kosovorotka and ushanka for men, the sarafan and kokoshnik for women, with lapti and valenki as common shoes. The clothes of Cossacks from Southern Russia include the burka and papaha, which they share with the peoples of the Northern Caucasus.
Russian cuisine widely uses fish, caviar, poultry, mushrooms, berries, and honey. Crops of rye, wheat, barley, and millet provide the ingredients for various breads, pancakes and cereals, as well as for kvass, beer and vodka. Black bread is considerably more popular in Russia than in most of the world. Flavourful soups and stews include shchi, borsch, ukha, solyanka and okroshka. Smetana (a heavy sour cream) is often added to soups and salads. Pirozhki, blini and syrniki are native types of pancakes. Chicken Kiev, pelmeni and shashlyk are popular meat dishes, the last two being of Tatar and Caucasus origin respectively. Other meat dishes include stuffed cabbage rolls "(golubtsy)", usually filled with meat. Salads include Olivier salad, vinegret and dressed herring.
Russia's large number of ethnic groups have distinctive traditions regarding folk music. Typical ethnic Russian musical instruments are gusli, balalaika, zhaleika, and garmoshka. Folk music had a significant influence on Russian classical composers, and in modern times it is a source of inspiration for a number of popular folk bands, like Melnitsa. Russian folk songs, as well as patriotic Soviet songs, constitute the bulk of the repertoire of the world-renowned Red Army choir and other popular ensembles.
Russians have many traditions, including washing in the banya, a hot steam bath somewhat similar to a sauna. Old Russian folklore takes its roots in the pagan Slavic religion. Many Russian fairy tales and epic bylinas were adapted as animated films, or as feature films by prominent directors such as Aleksandr Ptushko ("Ilya Muromets", "Sadko") and Aleksandr Rou ("Morozko", "Vasilisa the Beautiful"). Russian poets, including Pyotr Yershov and Leonid Filatov, made a number of well-known poetic interpretations of the classical fairy tales, and some, like Alexander Pushkin, also created fully original fairy tale poems of great popularity.
Since the Christianization of Kievan Rus', and for several centuries afterwards, Russian architecture was influenced predominantly by Byzantine architecture. Apart from fortifications (kremlins), the main stone buildings of ancient Rus' were Orthodox churches with their many domes, often gilded or brightly painted.
Aristotle Fioravanti and other Italian architects brought Renaissance trends into Russia from the late 15th century, while the 16th century saw the development of unique tent-like churches, culminating in Saint Basil's Cathedral. By that time the onion dome design was also fully developed. In the 17th century, the "fiery style" of ornamentation flourished in Moscow and Yaroslavl, gradually paving the way for the Naryshkin baroque of the 1690s. After the reforms of Peter the Great, changes in architectural style in Russia generally followed those of Western Europe.
The 18th-century taste for rococo architecture led to the ornate works of Bartolomeo Rastrelli and his followers. The reigns of Catherine the Great and her grandson Alexander I saw the flourishing of Neoclassical architecture, most notably in the capital city of Saint Petersburg. The second half of the 19th century was dominated by the Neo-Byzantine and Russian Revival styles. Prevalent styles of the 20th century were Art Nouveau, Constructivism, and the Stalin Empire style.
With the change in values imposed by communist ideology, the tradition of preservation was broken. Independent preservation societies, even those that defended only secular landmarks such as Moscow-based OIRU were disbanded by the end of the 1920s. A new anti-religious campaign, launched in 1929, coincided with collectivization of peasants; destruction of churches in the cities peaked around 1932. A number of churches were demolished, including the Cathedral of Christ the Saviour in Moscow. In Moscow alone losses of 1917–2006 are estimated at over 640 notable buildings (including 150 to 200 listed buildings, out of a total inventory of 3,500) – some disappeared completely, others were replaced with concrete replicas.
In 1955, a new Soviet leader, Nikita Khrushchev, condemned the "excesses" of the former academic architecture, and the late Soviet era was dominated by plain functionalism in architecture. This helped somewhat to resolve the housing problem, but created a large quantity of buildings of low architectural quality, much in contrast with the previous bright styles. In 1959 Nikita Khrushchev launched his anti-religious campaign. By 1964 over 10 thousand churches out of 20 thousand were shut down (mostly in rural areas) and many were demolished. Of 58 monasteries and convents operating in 1959, only sixteen remained by 1964; of Moscow's fifty churches operating in 1959, thirty were closed and six demolished.
Early Russian painting is represented in icons and vibrant frescos, the two genres inherited from Byzantium. As Moscow rose to power, Theophanes the Greek, Dionisius and Andrei Rublev became vital names associated with a distinctly Russian art.
The Russian Academy of Arts was created in 1757 and gave Russian artists an international role and status. Ivan Argunov, Dmitry Levitzky, Vladimir Borovikovsky and other 18th-century academicians mostly focused on portrait painting. In the early 19th century, when neoclassicism and romantism flourished, mythological and Biblical themes inspired many prominent paintings, notably by Karl Briullov and Alexander Ivanov.
In the mid-19th century the "Peredvizhniki" ("Wanderers") group of artists broke with the Academy and initiated a school of art liberated from academic restrictions. These were mostly realist painters who captured Russian identity in landscapes of wide rivers, forests, and birch clearings, as well as vigorous genre scenes and robust portraits of their contemporaries. Some artists focused on depicting dramatic moments in Russian history, while others turned to social criticism, showing the conditions of the poor and caricaturing authority; critical realism flourished under the reign of Alexander II. Leading realists include Ivan Shishkin, Arkhip Kuindzhi, Ivan Kramskoi, Vasily Polenov, Isaac Levitan, Vasily Surikov, Viktor Vasnetsov, Ilya Repin, and Boris Kustodiev.
The turn of the 20th century saw the rise of symbolist painting, represented by Mikhail Vrubel, Kuzma Petrov-Vodkin, and Nicholas Roerich.
The Russian avant-garde was a large, influential wave of modernist art that flourished in Russia from approximately 1890 to 1930. The term covers many separate, but inextricably related art movements that occurred at the time, namely neo-primitivism, suprematism, constructivism, rayonism, and Russian Futurism. Notable artists from this era include El Lissitzky, Kazimir Malevich, Wassily Kandinsky, and Marc Chagall. Since the 1930s the revolutionary ideas of the avant-garde clashed with the newly emerged conservative direction of socialist realism.
Soviet art produced works that were furiously patriotic and anti-fascist during and after the Great Patriotic War. Multiple war memorials, marked by a great restrained solemnity, were built throughout the country. Soviet artists often combined innovation with socialist realism, notably the sculptors Vera Mukhina, Yevgeny Vuchetich and Ernst Neizvestny.
Music in 19th-century Russia was defined by the tension between classical composer Mikhail Glinka along with other members of The Mighty Handful, who embraced Russian national identity and added religious and folk elements to their compositions, and the Russian Musical Society led by the composers Anton and Nikolay Rubinstein, which was musically conservative. The later tradition of Pyotr Ilyich Tchaikovsky, one of the greatest composers of the Romantic era, was continued into the 20th century by Sergei Rachmaninoff. World-renowned composers of the 20th century include Alexander Scriabin, Igor Stravinsky, Sergei Prokofiev, Dmitri Shostakovich and Alfred Schnittke.
Russian conservatories have turned out generations of famous soloists. Among the best known are violinists Jascha Heifetz, David Oistrakh, Leonid Kogan, Gidon Kremer, and Maxim Vengerov; cellists Mstislav Rostropovich, Natalia Gutman; pianists Vladimir Horowitz, Sviatoslav Richter, Emil Gilels, Vladimir Sofronitsky and Evgeny Kissin; and vocalists Fyodor Shalyapin, Mark Reizen, Elena Obraztsova, Tamara Sinyavskaya, Nina Dorliak, Galina Vishnevskaya, Anna Netrebko and Dmitry Hvorostovsky.
During the early 20th century, Russian ballet dancers Anna Pavlova and Vaslav Nijinsky rose to fame, and impresario Sergei Diaghilev and his Ballets Russes' travels abroad profoundly influenced the development of dance worldwide. Soviet ballet preserved the perfected 19th-century traditions, and the Soviet Union's choreography schools produced many internationally famous stars, including Galina Ulanova, Maya Plisetskaya, Rudolf Nureyev, and Mikhail Baryshnikov. The Bolshoi Ballet in Moscow and the Mariinsky Ballet in St Petersburg remain famous throughout the world.
Modern Russian rock music has its roots both in Western rock and roll and heavy metal, and in the traditions of the Russian bards of the Soviet era, such as Vladimir Vysotsky and Bulat Okudzhava. Popular Russian rock groups include Mashina Vremeni, DDT, Aquarium, Alisa, Kino, Kipelov, Nautilus Pompilius, Aria, Grazhdanskaya Oborona, Splean, and Korol i Shut. Russian pop music developed from what was known in Soviet times as "estrada" into a full-fledged industry, with some performers gaining wide international recognition, such as t.A.T.u., Nu Virgos and Vitas.
In the 18th century, during the era of Russian Enlightenment, the development of Russian literature was boosted by the works of Mikhail Lomonosov and Denis Fonvizin. By the early 19th century a modern national tradition had emerged, producing some of the greatest writers in Russian history. This period, known also as the Golden Age of Russian Poetry, began with Alexander Pushkin, who is considered the founder of the modern Russian literary language and often described as the "Russian Shakespeare". It continued with the poetry of Mikhail Lermontov and Nikolay Nekrasov, dramas of Alexander Ostrovsky and Anton Chekhov, and the prose of Nikolai Gogol and Ivan Turgenev. Leo Tolstoy and Fyodor Dostoyevsky have been described by literary critics as the greatest novelists of all time.
By the 1880s, the age of the great novelists was over, and short fiction and poetry became the dominant genres. The next several decades became known as the Silver Age of Russian Poetry, when the previously dominant literary realism was replaced by symbolism. Leading authors of this era include such poets as Valery Bryusov, Vyacheslav Ivanov, Alexander Blok, Nikolay Gumilev and Anna Akhmatova, and novelists Leonid Andreyev, Ivan Bunin, and Maxim Gorky.
Russian philosophy blossomed in the 19th century, when it was defined initially by the opposition of Westernisers, who advocated Western political and economical models, and Slavophiles, who insisted on developing Russia as a unique civilization. The latter group includes Nikolai Danilevsky and Konstantin Leontiev, the founders of eurasianism. In its further development Russian philosophy was always marked by a deep connection to literature and interest in creativity, society, politics and nationalism; Russian cosmism and religious philosophy were other major areas. Notable philosophers of the late 19th and the early 20th centuries include Vladimir Solovyev, Sergei Bulgakov, and Vladimir Vernadsky.
Following the Russian Revolution of 1917 many prominent writers and philosophers left the country, including Bunin, Vladimir Nabokov and Nikolay Berdyayev, while a new generation of talented authors joined together in an effort to create a distinctive working-class culture appropriate for the new Soviet state. In the 1930s censorship over literature was tightened in line with the policy of socialist realism. In the late 1950s restrictions on literature were eased, and by the 1970s and 1980s, writers were increasingly ignoring official guidelines. Leading authors of the Soviet era include novelists Yevgeny Zamyatin (emigrated), Ilf and Petrov, Mikhail Bulgakov (censored) and Mikhail Sholokhov, and poets Vladimir Mayakovsky, Yevgeny Yevtushenko, and Andrey Voznesensky.
The Soviet Union was also a major producer of science fiction, written by authors like Arkady and Boris Strugatsky, Kir Bulychov, Alexander Belayev and Ivan Yefremov. Traditions of Russian science fiction and fantasy are continued today by numerous writers.
Russian and later Soviet cinema was a hotbed of invention in the period immediately following 1917, resulting in world-renowned films such as "The Battleship Potemkin" by Sergei Eisenstein. Eisenstein was a student of filmmaker and theorist Lev Kuleshov, who developed the Soviet montage theory of film editing at the world's first film school, the All-Union Institute of Cinematography. Dziga Vertov, whose "kino-glaz" ("film-eye") theory—that the camera, like the human eye, is best used to explore real life—had a huge impact on the development of documentary film making and cinema realism. The subsequent state policy of socialist realism somewhat limited creativity; however, many Soviet films in this style were artistically successful, including "Chapaev", "The Cranes Are Flying", and "Ballad of a Soldier".
The 1960s and 1970s saw a greater variety of artistic styles in Soviet cinema. Eldar Ryazanov's and Leonid Gaidai's comedies of that time were immensely popular, with many of the catch phrases still in use today. In 1961–68 Sergey Bondarchuk directed an Oscar-winning film adaptation of Leo Tolstoy's epic "War and Peace", which was the most expensive film made in the Soviet Union. In 1969, Vladimir Motyl's "White Sun of the Desert" was released, a very popular film in the ostern genre; the film is traditionally watched by cosmonauts before any trip into space.
Russian animation dates back to late Russian Empire times. During the Soviet era, Soyuzmultfilm studio was the largest animation producer. Soviet animators developed a great variety of pioneering techniques and aesthetic styles, with prominent directors including Ivan Ivanov-Vano, Fyodor Khitruk and Aleksandr Tatarsky. Many Soviet cartoon heroes such as the Russian-style Winnie-the-Pooh, cute little Cheburashka, Wolf and Hare from "Nu, Pogodi!", are iconic images in Russia and many surrounding countries.
The late 1980s and 1990s were a period of crisis in Russian cinema and animation. Although Russian filmmakers became free to express themselves, state subsidies were drastically reduced, resulting in fewer films produced. The early years of the 21st century have brought increased viewership and subsequent prosperity to the industry on the back of the economic revival. Production levels are already higher than in Britain and Germany. Russia's total box-office revenue in 2007 was $565 million, up 37% from the previous year. In 2002 "Russian Ark" became the first feature film ever to be shot in a single take. The traditions of Soviet animation have recently been continued by directors such as Aleksandr Petrov and studios like Melnitsa Animation.
While there were few stations and channels in Soviet times, in the past two decades many new state-run and privately owned radio stations and TV channels have appeared. In 2005 the state-run English-language channel Russia Today began broadcasting, and its Arabic version, Rusiya Al-Yaum, was launched in 2007. Censorship and media freedom have long been central issues of the Russian media.
Soviet and later Russian athletes have always been in the top four for the number of gold medals collected at the Summer Olympics. Soviet gymnasts, track-and-field athletes, weightlifters, wrestlers, boxers, fencers, shooters, cross country skiers, biathletes, speed skaters and figure skaters were consistently among the best in the world, along with Soviet basketball, handball, volleyball and ice hockey players. The 1980 Summer Olympics were held in Moscow while the 2014 Winter Olympics were hosted in Sochi.
Although ice hockey was only introduced during the Soviet era, the Soviet Union national team managed to win gold at almost all the Olympics and World Championships they contested. Russian players Valery Kharlamov, Sergei Makarov, Vyacheslav Fetisov and Vladislav Tretiak hold four of six positions in the IIHF "Team of the Century". Russia has not won the Olympic ice hockey tournament since the Unified Team won gold in 1992. Russia won the 1993, 2008, 2009, 2012 and the 2014 IIHF World Championships.
The Kontinental Hockey League (KHL) was founded in 2008 as a successor to the Russian Superleague. It is ranked the top hockey league in Europe and the second-best in the world. It is an international professional ice hockey league in Eurasia and consists of 29 teams, of which 21 are based in Russia and 7 more are located in Latvia, Kazakhstan, Belarus, Finland, Slovakia, Croatia and China. The KHL ranks fourth in Europe by attendance.
Bandy, also known as Russian hockey, is another traditionally popular ice sport. The Soviet Union won all the Bandy World Championships for men between 1957 and 1979, and several thereafter. After the dissolution of the Soviet Union, Russia has continuously been one of the most successful teams, winning many world championships.
Association football is one of the most popular sports in modern Russia. The Soviet national team became the first European Champions by winning Euro 1960. Appearing in four FIFA World Cups from 1958 to 1970, Lev Yashin is regarded as one of the greatest goalkeepers in the history of football, and was chosen on the FIFA World Cup Dream Team. The Soviet national team reached the finals of Euro 1988. In 1956 and 1988, the Soviet Union won gold at the Olympic football tournament. Russian clubs CSKA Moscow and Zenit St Petersburg won the UEFA Cup in 2005 and 2008. The Russian national football team reached the semi-finals of Euro 2008, losing only to the eventual champions Spain. Russia was the host nation for the 2018 FIFA World Cup. The matches were held from 14 June to 15 July 2018 in the stadiums of 11 host cities located in the European part of the country and in the Ural region. This was the first football World Cup ever held in Eastern Europe, and the first held in Europe since 2006. Russia will also host games of Euro 2020.
In 2007, the Russian national basketball team won the European Basketball Championship. The Russian basketball club PBC CSKA Moscow is one of the top teams in Europe, winning the Euroleague in 2006 and 2008.
Larisa Latynina, who currently holds the record for the most gold Olympic medals won by a woman, established the USSR as the dominant force in gymnastics for many years. Today, Russia is the leading nation in rhythmic gymnastics with Yevgeniya Kanayeva. Double 50 m and 100 m freestyle Olympic gold medalist Alexander Popov is widely considered the greatest sprint swimmer in history. Russian synchronised swimming is the best in the world, with almost all gold medals at Olympics and World Championships having been swept by Russians in recent decades. Figure skating is another popular sport in Russia, especially pair skating and ice dancing. With the exception of 2010 and 2018 a Soviet or Russian pair has won gold at every Winter Olympics since 1964.
Since the end of the Soviet era, tennis has grown in popularity and Russia has produced a number of famous players, including Maria Sharapova. In martial arts, Russia produced the sport Sambo and renowned fighters, like Fedor Emelianenko. Chess is a widely popular pastime in Russia; from 1927, Russian grandmasters have held the world chess championship almost continuously.
The 2014 Winter Olympics were held in Sochi in the south of Russia. In 2016 the McLaren Report found evidence of widespread state-sponsored doping and an institutional conspiracy to cover up Russian competitors' positive drug tests. Twenty-five athletes were disqualified and 11 medals were stripped.
Formula One is also becoming increasingly popular in Russia. In 2010 Vitaly Petrov of Vyborg became the first Russian to drive in Formula One, and was soon followed by a second – Daniil Kvyat, from Ufa – in 2014. There had only been two Russian Grands Prix (in 1913 and 1914), but the Russian Grand Prix returned as part of the Formula One season in 2014, as part of a six-year deal.
Russia has had the most Olympic medals stripped for doping violations of any country (51), four times the number of the runner-up and more than a third of the global total, as well as 129 athletes caught doping at the Olympics, also the most of any country. From 2011 to 2015, more than a thousand Russian competitors in various sports, including summer, winter, and Paralympic sports, benefited from a state-sponsored cover-up, with no indication that the program has ceased since then.
There are seven public holidays in Russia, excluding those always celebrated on a Sunday. Russian New Year traditions resemble those of the Western Christmas, with New Year trees and gifts, and Ded Moroz (Father Frost) playing the same role as Santa Claus. Orthodox Christmas falls on 7 January because the Russian Orthodox Church still follows the Julian calendar, under which all Orthodox holidays fall 13 days after their Western counterparts. Two other major Christian holidays are Easter and Trinity Sunday. Kurban Bayram and Uraza Bayram are celebrated by Russian Muslims.
Further Russian public holidays include Defender of the Fatherland Day (23 February), which honors Russian men, especially those serving in the army; International Women's Day (8 March), which combines the traditions of Mother's Day and Valentine's Day; Spring and Labor Day (1 May); Victory Day (9 May); Russia Day (12 June); and Unity Day (4 November), commemorating the popular uprising which expelled the Polish occupation force from Moscow in 1612.
Victory Day is the second most popular holiday in Russia; it commemorates the victory over Nazi Germany and its allies in the Great Patriotic War. A huge military parade, hosted by the President of Russia, is annually organised in Moscow on Red Square. Similar parades take place in all major Russian cities and cities with the status "Hero city" or "City of Military Glory".
Popular non-public holidays include Old New Year (the New Year according to the Julian Calendar on 14 January), Tatiana Day (students holiday on 25 January), Maslenitsa (a pre-Christian spring holiday a week before the Great Lent), Cosmonautics Day (in tribute to the first human trip into space), Ivan Kupala Day (another pre-Christian holiday on 7 July) and Peter and Fevronia Day (which takes place on 8 July and is the Russian analogue of Valentine's Day, focusing, however, on family love and fidelity).
State symbols of Russia include the Byzantine double-headed eagle, combined with St. George of Moscow in the Russian coat of arms. The Russian flag dates from the late Tsardom of Russia period and has been widely used since the time of the Russian Empire. The Russian anthem shares its music with the Soviet Anthem, though not the lyrics. The imperial motto "God is with us" and the Soviet motto "Proletarians of all countries, unite!" are now obsolete and no new motto has replaced them. The hammer and sickle and the full Soviet coat of arms are still widely seen in Russian cities as a part of old architectural decorations. The Soviet Red Stars are also encountered, often on military equipment and war memorials. The Red Banner continues to be honored, especially the Banner of Victory of 1945.
The Matryoshka doll is a recognizable symbol of Russia, and the towers of Moscow Kremlin and Saint Basil's Cathedral in Moscow are Russia's main architectural icons. Cheburashka is a mascot of the Russian national Olympic team. St. Mary, St. Nicholas, St. Andrew, St. George, St. Alexander Nevsky, St. Sergius of Radonezh and St. Seraphim of Sarov are Russia's patron saints. Chamomile is the national flower, while birch is the national tree. The Russian bear is an animal symbol and Mother Russia a national personification of Russia, though the bear image has a Western origin and Russians themselves have accepted it only fairly recently.
Tourism in Russia has seen rapid growth since the late Soviet period, first domestic tourism and then international tourism, fueled by the rich cultural heritage and great natural variety of the country. Major tourist routes in Russia include a journey around the Golden Ring theme route of ancient cities, cruises on the big rivers like the Volga, and long journeys on the famous Trans-Siberian Railway. In 2013, Russia was visited by 28.4 million tourists; it is the ninth-most visited country in the world and the seventh-most visited in Europe. The number of Western visitors dropped in 2014.
The most visited destinations in Russia are Moscow and Saint Petersburg, the current and former capitals of the country. Recognised as World Cities, they feature such world-renowned museums as the Tretyakov Gallery and the Hermitage, famous theaters like Bolshoi and Mariinsky, ornate churches like Saint Basil's Cathedral, Cathedral of Christ the Saviour, Saint Isaac's Cathedral and Church of the Savior on Blood, impressive fortifications like the Kremlin and Peter and Paul Fortress, beautiful squares and streets like Red Square, Palace Square, Tverskaya Street, Nevsky Prospect, and Arbat Street. Rich palaces and parks are found in the former imperial residences in suburbs of Moscow (Kolomenskoye, Tsaritsyno) and St Petersburg (Peterhof, Strelna, Oranienbaum, Gatchina, Pavlovsk and Tsarskoye Selo). Moscow displays Soviet architecture at its best, along with modern skyscrapers, while St Petersburg, nicknamed "Venice of the North", boasts of its classical architecture, many rivers, canals and bridges.
Kazan, the capital of Tatarstan, shows a mix of Christian Russian and Muslim Tatar cultures. The city has registered a brand "The Third Capital of Russia", though a number of other major cities compete for this status, including Novosibirsk, Yekaterinburg and Nizhny Novgorod.
The warm subtropical Black Sea coast of Russia is the site of a number of popular sea resorts, such as Sochi, which hosted the 2014 Winter Olympics. The mountains of the Northern Caucasus contain popular ski resorts such as Dombay. The most famous natural destination in Russia is Lake Baikal, "the Blue Eye of Siberia". This unique lake, the oldest and deepest in the world, has crystal-clear waters and is surrounded by taiga-covered mountains. Other popular natural destinations include Kamchatka with its volcanoes and geysers, Karelia with its lakes and granite rocks, the snowy Altai Mountains, and the wild steppes of Tuva.
Rational choice theory
Rational choice theory, also known as choice theory or rational action theory, is a framework for understanding, and often formally modeling, social and economic behavior. The basic premise of rational choice theory is that aggregate social behavior results from the behavior of individual actors, each of whom is making their own decisions. The theory also focuses on the determinants of the individual choices (methodological individualism). Rational choice theory then assumes that an individual has preferences among the available choice alternatives that allow them to state which option they prefer. These preferences are assumed to be complete (the person can always say which of two alternatives they consider preferable, or that neither is preferred to the other) and transitive (if option A is preferred over option B and option B is preferred over option C, then A is preferred over C). The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best course of action. In simpler terms, the theory holds that every person, even when carrying out the most mundane of tasks, performs a personal cost-benefit analysis to determine whether an action is worth pursuing, and then chooses the option with the best expected outcome. This could be a student deciding whether to attend a lecture or stay in bed, a shopper bringing their own bag to avoid a five-pence charge, or a voter deciding which candidate or party to support based on who will best address the issues that affect them.
Rationality is widely used as an assumption of the behavior of individuals in microeconomic models and analyses and appears in almost all economics textbook treatments of human decision-making. It is also used in political science, sociology, and philosophy. Gary Becker was an early proponent of applying rational actor models more widely. Becker won the 1992 Nobel Memorial Prize in Economic Sciences for his studies of discrimination, crime, and human capital.
A particular version of rationality is instrumental rationality, which involves seeking the most cost-effective means to achieve a specific goal without reflecting on the worthiness of that goal.
Rational choice theorists do not claim that the theory describes the choice "process", but rather that it predicts the outcome and pattern of choices.
An assumption often added to the rational choice paradigm is that individual preferences are self-interested, in which case the individual can be referred to as a homo economicus. Such an individual acts "as if" balancing costs against benefits to arrive at action that maximizes personal advantage. Proponents of such models, particularly those associated with the Chicago school of economics, do not claim that a model's assumptions are an accurate description of reality, only that they help formulate clear and falsifiable hypotheses. In this view, the only way to judge the success of a hypothesis is empirical tests. To use an example from Milton Friedman, if a theory that says that the behavior of the leaves of a tree is explained by their rationality passes the empirical test, it is seen as successful.
Without specifying the individual's goal or preferences it may not be possible to empirically test, or falsify, the rationality assumption. However, the predictions made by a specific version of the theory are testable. In recent years, the most prevalent version of rational choice theory, expected utility theory, has been challenged by the experimental results of behavioral economics. Economists are learning from other fields, such as psychology, and are enriching their theories of choice in order to get a more accurate view of human decision-making. For example, the behavioral economist and experimental psychologist Daniel Kahneman won the Nobel Memorial Prize in Economic Sciences in 2002 for his work in this field.
Rational choice theory has become increasingly employed in social sciences other than economics, such as sociology, evolutionary theory and political science in recent decades. It has had far-reaching impacts on the study of political science, especially in fields like the study of interest groups, elections, behaviour in legislatures, coalitions, and bureaucracy. In these fields, the use of the rational choice paradigm to explain broad social phenomena is the subject of controversy.
The concept of rationality used in rational choice theory is different from the colloquial and most philosophical use of the word. Colloquially, "rational" behaviour typically means "sensible", "predictable", or "in a thoughtful, clear-headed manner." Rational choice theory uses a narrower definition of rationality. At its most basic level, behavior is rational if it is goal-oriented, reflective (evaluative), and consistent (across time and different choice situations). This contrasts with behavior that is random, impulsive, conditioned, or adopted by (unevaluative) imitation.
Early neoclassical economists writing about rational choice, including William Stanley Jevons, assumed that agents make consumption choices so as to maximize their happiness, or utility. Contemporary theory bases rational choice on a set of choice axioms that need to be satisfied, and typically does not specify where the goal (preferences, desires) comes from. It mandates just a consistent ranking of the alternatives. Individuals choose the best action according to their personal preferences and the constraints facing them. For example, there is nothing irrational in preferring fish to meat the first time, but there is something irrational in preferring fish to meat in one instant and preferring meat to fish in another, without anything else having changed.
The premise of rational choice theory as a social science methodology is that the aggregate behavior in society reflects the sum of the choices made by individuals. Each individual, in turn, makes their choice based on their own preferences and the constraints (or choice set) they face.
At the individual level, rational choice theory stipulates that the agent chooses the action (or outcome) they most prefer. In the case where actions (or outcomes) can be evaluated in terms of costs and benefits, a rational individual chooses the action (or outcome) that provides the maximum net benefit, i.e., the maximum benefit minus cost.
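The net-benefit rule above can be sketched in a few lines of Python. The actions and their costs and benefits below are invented for illustration; the point is only that the rational agent's choice reduces to an argmax over benefit minus cost:

```python
# Toy illustration of rational choice as net-benefit maximization.
# The actions and their costs/benefits are invented for this example.
actions = {
    "attend lecture": {"benefit": 8.0, "cost": 3.0},
    "stay in bed":    {"benefit": 5.0, "cost": 1.0},
    "go to the gym":  {"benefit": 6.0, "cost": 4.0},
}

def net_benefit(action):
    """Net benefit = benefit minus cost."""
    a = actions[action]
    return a["benefit"] - a["cost"]

# The rational agent chooses the action with the maximum net benefit.
best = max(actions, key=net_benefit)
print(best)  # attend lecture (net benefit 5.0)
```

Any monotonic rescaling of the benefits and costs that preserves the ordering of net benefits leaves the chosen action unchanged, which is why only the ranking matters.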
The theory applies to more general settings than those identified by costs and benefits. In general, rational decision making entails choosing among all available alternatives the alternative that the individual most prefers. The "alternatives" can be a set of actions ("what to do?") or a set of objects ("what to choose/buy"). In the case of actions, what the individual really cares about are the outcomes that result from each possible action. Actions, in this case, are only an instrument for obtaining a particular outcome.
The available alternatives are often expressed as a set of objects, for example a set of "j" exhaustive and exclusive actions: A = {a_1, a_2, ..., a_j}.
For example, if a person can choose to vote for either Roger or Sara or to abstain, their set of possible alternatives is: {Vote for Roger, Vote for Sara, Abstain}.
The theory makes two technical assumptions about individuals' preferences over alternatives: completeness (for any two alternatives, the individual either prefers one to the other or is indifferent between them) and transitivity (if alternative A is weakly preferred to alternative B, and B to C, then A is weakly preferred to C).
Together these two assumptions imply that, given a set of exhaustive and exclusive actions to choose from, an individual can rank the elements of this set in terms of his preferences in an internally consistent way (the ranking constitutes a complete ordering), and the set has at least one maximal element.
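What completeness and transitivity amount to can be checked mechanically for a finite set of alternatives. The weak-preference relation below (read "x is at least as good as y") is a hypothetical example consistent with the voting scenario discussed in this article:

```python
from itertools import product

# A weak-preference relation over a small set of alternatives,
# given as a set of ordered pairs (invented example).
alternatives = ["Roger", "Sara", "Abstain"]
prefers = {("Sara", "Roger"), ("Roger", "Abstain"), ("Sara", "Abstain"),
           ("Sara", "Sara"), ("Roger", "Roger"), ("Abstain", "Abstain")}

def complete(relation, items):
    # Completeness: for every pair, at least one direction must hold.
    return all((x, y) in relation or (y, x) in relation
               for x, y in product(items, repeat=2))

def transitive(relation, items):
    # Transitivity: if x >= y and y >= z, then x >= z must hold.
    return all((x, z) in relation
               for x, y, z in product(items, repeat=3)
               if (x, y) in relation and (y, z) in relation)

print(complete(prefers, alternatives))    # True
print(transitive(prefers, alternatives))  # True
```

Dropping any pair, say ("Sara", "Abstain"), would break either completeness or transitivity, illustrating how the two axioms jointly pin down a consistent ranking.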
The preference between two alternatives can be: strict preference (the individual definitely prefers one alternative to the other), weak preference (the individual prefers one alternative at least as much as the other), or indifference (the individual regards the two alternatives as equally preferable).
Research that took off in the 1980s sought to develop models which drop these assumptions and argue that such behaviour could still be rational (Anand, 1993). This work, often conducted by economic theorists and analytical philosophers, suggests ultimately that the assumptions or axioms above are not completely general and might at best be regarded as approximations.
Alternative theories of human action include such components as Amos Tversky and Daniel Kahneman's prospect theory, which reflects the empirical finding that, contrary to standard preferences assumed under neoclassical economics, individuals attach extra value to items that they already own compared to similar items owned by others. Under standard preferences, the amount that an individual is willing to pay for an item (such as a drinking mug) is assumed to equal the amount he or she is willing to be paid in order to part with it. In experiments, the latter price is sometimes significantly higher than the former (but see Plott and Zeiler 2005, Plott and Zeiler 2007 and Klass and Zeiler, 2013). Tversky and Kahneman do not characterize loss aversion as irrational. Behavioral economics includes a large number of other amendments to its picture of human behavior that go against neoclassical assumptions.
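The asymmetry between gains and losses that prospect theory captures can be illustrated with the Kahneman–Tversky value function. The parameter values below (alpha = beta = 0.88, loss-aversion coefficient lambda = 2.25) are the estimates Tversky and Kahneman reported; the code is a sketch of the functional form, not a calibrated model:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    convex and steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Losing $10 hurts more than gaining $10 pleases:
print(prospect_value(10))   # about 7.59
print(prospect_value(-10))  # about -17.07
```

This shape is one way to formalize the endowment-effect finding mentioned above: the subjective loss from giving up an owned item outweighs the subjective gain from acquiring the same item.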
Often preferences are described by a utility function or "payoff function". This is an ordinal assignment: the individual attaches a number to each available action, such as:

u(a1), u(a2), ..., u(aj)
The individual's preferences are then expressed as the relation between these ordinal assignments. For example, if an individual prefers the candidate Sara over Roger over abstaining, their preferences would have the relation:

u(Vote for Sara) > u(Vote for Roger) > u(Abstain)
A preference relation that, as above, satisfies completeness and transitivity, and in addition continuity, can be equivalently represented by a utility function.
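Under such a representation, rational choice reduces to picking a maximal element of the utility ranking. A sketch, with hypothetical ordinal values consistent with the voting example (the names and numbers are illustrative assumptions):

```python
def choose(alternatives, utility):
    # A rational agent selects an alternative that maximizes utility.
    return max(alternatives, key=utility)

# Hypothetical ordinal utilities: Sara preferred to Roger, preferred to abstaining.
u = {"vote_sara": 3, "vote_roger": 2, "abstain": 1}.get
print(choose(["abstain", "vote_roger", "vote_sara"], u))  # vote_sara
```

Because the representation is ordinal, any monotone relabeling of the numbers (say 30, 20, 10 instead of 3, 2, 1) induces the same ranking and hence the same choice.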
Both the assumptions and the behavioral predictions of rational choice theory have sparked criticism from various camps. As mentioned above, some economists have developed models of bounded rationality, which hope to be more psychologically plausible without completely abandoning the idea that reason underlies decision-making processes. Other economists have developed theories of human decision-making that allow for the roles of uncertainty, institutions, and the determination of individual tastes by their socioeconomic environment (cf. Fernandez-Huerga, 2008).
Martin Hollis and Edward J. Nell's 1975 book offers both a philosophical critique of neo-classical economics and an innovation in the field of economic methodology. Further, they outlined an alternative vision to neo-classicism based on a rationalist theory of knowledge. Within neo-classicism, the authors addressed consumer behaviour (in the form of indifference curves and simple versions of revealed preference theory) and marginalist producer behaviour in both product and factor markets. Both are based on rational optimizing behaviour. They consider imperfect as well as perfect markets, since neo-classical thinking embraces many market varieties and disposes of a whole system for their classification. However, the authors believe that the issues arising from basic maximizing models have extensive implications for econometric methodology (Hollis and Nell, 1975, p. 2). In particular it is this class of models – rational behaviour as maximizing behaviour – which provides support for specification and identification. And this, they argue, is where the flaw is to be found. Hollis and Nell (1975) argued that positivism (broadly conceived) has provided neo-classicism with important support, which they then show to be unfounded. They base their critique of neo-classicism not only on their critique of positivism but also on the alternative they propose, rationalism. Indeed, they argue that rationality is central to neo-classical economics – as rational choice – and that this conception of rationality is misused. Demands are made of it that it cannot fulfill.
In their 1994 work, "Pathologies of Rational Choice Theory", Donald P. Green and Ian Shapiro argue that the empirical outputs of rational choice theory have been limited. They contend that much of the applicable literature, at least in political science, was done with weak statistical methods and that, when corrected, many of the empirical outcomes no longer hold. Seen in this light, rational choice theory has contributed very little to the overall understanding of political interaction, an amount certainly disproportionately small relative to its prominence in the literature. Yet they concede that cutting-edge research by scholars well-versed in the general scholarship of their fields (such as work on the U.S. Congress by Keith Krehbiel, Gary Cox, and Mat McCubbins) has generated valuable scientific progress.
Duncan K. Foley (2003, p. 1) has also provided an important criticism of the concept of "rationality" and its role in economics. He argued that: “Rationality” has played a central role in shaping and establishing the hegemony of contemporary mainstream economics. As the specific claims of robust neoclassicism fade into the history of economic thought, an orientation toward situating explanations of economic phenomena in relation to rationality has increasingly become the touchstone by which mainstream economists identify themselves and recognize each other. This is not so much a question of adherence to any particular conception of rationality, but of taking rationality of individual behavior as the unquestioned starting point of economic analysis.
Foley (2003, p. 9) went on to argue that: The concept of rationality, to use Hegelian language, represents the relations of modern capitalist society one-sidedly. The burden of rational-actor theory is the assertion that ‘naturally’ constituted individuals facing existential conflicts over scarce resources would rationally impose on themselves the institutional structures of modern capitalist society, or something approximating them. But this way of looking at matters systematically neglects the ways in which modern capitalist society and its social relations in fact constitute the ‘rational’, calculating individual. The well-known limitations of rational-actor theory, its static quality, its logical antinomies, its vulnerability to arguments of infinite regress, its failure to develop a progressive concrete research program, can all be traced to this starting-point.
Schram and Caterino (2006) contains a fundamental methodological criticism of rational choice theory for promoting the view that the natural science model is the only appropriate methodology in social science and that political science should follow this model, with its emphasis on quantification and mathematization. Schram and Caterino argue instead for methodological pluralism. The same argument is made by William E. Connolly, who in his work Neuropolitics shows that advances in neuroscience further illuminate some of the problematic practices of rational choice theory.
More recently, Edward J. Nell and Karim Errouaki (2011, Ch. 1) argued that: The DNA of neoclassical economics is defective. Neither the induction problem nor the problems of methodological individualism can be solved within the framework of neoclassical assumptions. The neoclassical approach is to call on rational economic man to solve both. Economic relationships that reflect rational choice should be ‘projectible’. But that attributes a deductive power to ‘rational’ that it cannot have consistently with positivist (or even pragmatist) assumptions (which require deductions to be simply analytic). To make rational calculations projectible, the agents may be assumed to have idealized abilities, especially foresight; but then the induction problem is out of reach because the agents of the world do not resemble those of the model. The agents of the model can be abstract, but they cannot be endowed with powers actual agents could not have. This also undermines methodological individualism; if behaviour cannot be reliably predicted on the basis of the ‘rational choices of agents’, a social order cannot reliably follow from the choices of agents.
Furthermore, Pierre Bourdieu fiercely opposed rational choice theory as grounded in a misunderstanding of how social agents operate. Bourdieu argued that social agents do not continuously calculate according to explicit rational and economic criteria. According to Bourdieu, social agents operate according to an implicit practical logic—a practical sense—and bodily dispositions. Social agents act according to their "feel for the game" (the "feel" being, roughly, habitus, and the "game" being the field).
Other social scientists, inspired in part by Bourdieu's thinking, have expressed concern about the inappropriate use of economic metaphors in other contexts, suggesting that this may have political implications. The argument they make is that by treating everything as a kind of "economy", such metaphors make a particular vision of the way an economy works seem more natural. Thus, they suggest, rational choice is as much ideological as it is scientific, which does not in and of itself negate its scientific utility.
An evolutionary psychology perspective is that many of the seeming contradictions and biases regarding rational choice can be explained as rational in the context of maximizing biological fitness in the ancestral environment, but not necessarily in the current one. Thus, when living at a subsistence level where a reduction of resources could mean death, it may have been rational to place a greater value on losses than on gains. Proponents argue it may also explain differences between groups.
The rational choice approach allows preferences to be represented as real-valued utility functions. Economic decision making then becomes a problem of maximizing this utility function, subject to constraints (e.g. a budget). This has many advantages. It provides a compact theory that makes empirical predictions with a relatively sparse model: just a description of the agent's objectives and constraints. Furthermore, optimization theory is a well-developed field of mathematics. These two factors make rational choice models tractable compared to other approaches to choice. Most importantly, this approach is strikingly general. It has been used to analyze not only personal and household choices about traditional economic matters like consumption and savings, but also choices about education, marriage, child-bearing, migration, crime and so on, as well as business decisions about output, investment, hiring, entry, exit, etc., with varying degrees of success.
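The constrained-maximization framing can be illustrated with a brute-force search over a two-good budget set. The Cobb-Douglas utility, prices, and income below are illustrative assumptions, not drawn from the text:

```python
def maximize(utility, prices, budget, step=0.01):
    # Search quantities of good 1 along a grid; spend the remaining budget on good 2,
    # so every candidate bundle exhausts the budget constraint.
    best, best_u = None, float("-inf")
    x = 0.0
    while prices[0] * x <= budget:
        y = (budget - prices[0] * x) / prices[1]
        u = utility(x, y)
        if u > best_u:
            best, best_u = (x, y), u
        x += step
    return best

# Cobb-Douglas utility with equal exponents; the analytic optimum splits
# spending evenly between the two goods.
cobb_douglas = lambda x, y: (x ** 0.5) * (y ** 0.5)
x_star, y_star = maximize(cobb_douglas, prices=(1.0, 2.0), budget=10.0)
print(round(x_star, 2), round(y_star, 2))  # approximately 5.0 and 2.5
```

With income 10 and prices (1, 2), half the budget goes to each good, giving x = 5 and y = 2.5; the grid search recovers this textbook result up to the step resolution. Real applications would replace the grid with an analytic first-order condition or a numerical optimizer.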
Despite the empirical shortcomings of rational choice theory, the flexibility and tractability of rational choice models (and the lack of equally powerful alternatives) lead to them still being widely used.
The relationship between rational choice theory and politics takes many forms, whether in voter behaviour, the actions of world leaders, or even the way that important matters are dealt with.
Voter behaviour shifts significantly under rational choice assumptions, most markedly in times of economic trouble. This was assessed in detail by Anthony Downs, who concluded that voters act on expectations of higher income, since a person ‘votes for whatever party he believes would provide him with the highest utility income from government action’. This is a significant simplification of how the theory influences people's thinking, but it makes up a core part of rational choice theory as a whole. More dramatically, voters often react radically in times of real economic strife, which can lead to an increase in extremism: voters hold the government responsible and therefore see a need for change. Some of the most infamous extremist parties came to power on the back of economic recessions, the most significant being the far-right Nazi Party in Germany, which used the hyperinflation of the time to gain power rapidly by promising a solution and a scapegoat for the blame. There is a pattern to this, as a comprehensive study carried out by three political scientists concluded: a ‘turn to the right’ occurs, and the attribution to rational calculation is supported by the observation that within ten years politics returns to a more usual state.
The fear for many is that rational thinking does not allow for an efficient resolution of some of the most troubling world problems, such as the climate crisis: on this view, nationalism prevents countries from working together, and thus the criticisms of the theory should be noted very carefully.
Romance languages
The Romance languages (less commonly called Latin languages or Neo-Latin languages) are the modern languages that evolved from Vulgar Latin between the third and eighth centuries. They are a subgroup of the Italic languages in the Indo-European language family.
Today, around 800 million people are native speakers of Romance languages worldwide, mainly in the Americas, Europe, and parts of Africa, as well as elsewhere. Additionally, the major Romance languages have many non-native speakers and are in widespread use as lingua francas. This is especially the case for French, which is in widespread use throughout Central and West Africa, Madagascar, Mauritius, Seychelles, Comoros, Djibouti, Lebanon and North Africa (excluding Egypt, where it is a minority language).
The five most widely spoken Romance languages by number of native speakers are Spanish (480 million), Portuguese (255 million), French (77 million), Italian (65 million), and Romanian (24 million).
Because of the difficulty of imposing boundaries on a continuum, various counts of the modern Romance languages are given; for example, Dalby lists 23 based on mutual intelligibility. The following, more extensive list includes 35 current living languages and one extinct language, Dalmatian:
The term "Romance" comes from the Vulgar Latin adverb , "in Roman", derived from : for instance, in the expression , "to speak in Roman" (that is, the Latin vernacular), contrasted with , "to speak in Latin" (Medieval Latin, the conservative version of the language used in writing and formal contexts or as a lingua franca), and with , "to speak in Barbarian" (the non-Latin languages of the peoples living outside the Roman Empire). From this adverb the noun "romance" originated, which applied initially to anything written , or "in the Roman vernacular".
Lexical and grammatical similarities among the Romance languages, and between Latin and each of them, are apparent from the following examples having the same meaning in various Romance lects:
English: She always closes the window before she dines / before dining.
Romance-based creoles and pidgins
Some of the divergence comes from semantic change: where the same root words have developed different meanings. For example, the Portuguese word is descended from Latin "window" (and is thus cognate to French , Italian , Romanian and so on), but now means "skylight" and "slit". Cognates may exist but have become rare, such as in Spanish, or dropped out of use entirely. The Spanish and Portuguese terms meaning "to throw through a window" and meaning "replete with windows" also have the same root, but are later borrowings from Latin.
Likewise, Portuguese also has the word , a cognate of Italian and Spanish , but uses it in the sense of "to have a late supper" in most varieties, while the preferred word for "to dine" is (related to archaic Spanish "to eat") because of semantic changes in the 19th century. Galician has both (from medieval "fẽestra", the ancestor of standard Portuguese ) and the less frequently used and .
As an alternative to (originally the genitive form), Italian has the pronoun , a cognate of the other words for "she", but it is hardly ever used in speaking.
Spanish, Asturian, and Leonese and Mirandese and Sardinian come from Latin "wind" (cf. English "window", etymologically 'wind eye'), and Portuguese , Galician , Mirandese from Latin * "small opening", a derivative of "door".
Sardinian (alternative for /) comes from Old Italian and is similar to other Romance languages such as French (from Italian ), Portuguese , Romanian , Spanish , Catalan and Corsican (alternative for ).
The classification of the Romance languages is inherently difficult, because most of the linguistic area is a dialect continuum, and in some cases political biases can come into play. Along with Latin (which is not included among the Romance languages) and a few extinct languages of ancient Italy, they make up the Italic branch of the Indo-European family.
There are various schemes used to subdivide the Romance languages. Three of the most common schemes are as follows:
The main subfamilies that have been proposed by Ethnologue within the various classification schemes for Romance languages are:
This three-way division is made primarily based on the outcome of Vulgar Latin (Proto-Romance) vowels:
Italo-Western is in turn split along the so-called "La Spezia–Rimini Line" in northern Italy, which divides the central and southern Italian languages from the so-called Western Romance languages to the north and west. The primary characteristics dividing the two are:
The reality is somewhat more complex. All of the "southeast" characteristics apply to all languages southeast of the line, and all of the "northwest" characteristics apply to all languages in France and (most of) Spain. However, the Gallo-Italic languages are somewhere in between. All of these languages do have the "northwest" characteristics of lenition and loss of gemination. However:
On top of this, the ancient Mozarabic language in southern Spain, at the far end of the "northwest" group, had the "southeast" characteristics of lack of lenition and palatalization of /k/ to . Certain languages around the Pyrenees (e.g. some highland Aragonese dialects) also lack lenition, and northern French dialects such as Norman and Picard have palatalization of /k/ to (although this is possibly an independent, secondary development, since /k/ between vowels, i.e. when subject to lenition, developed to /dz/ rather than , as would be expected for a primary development).
The usual solution to these issues is to create various nested subgroups. Western Romance is split into the Gallo-Iberian languages, in which lenition happens and which include nearly all the Western Romance languages, and the Pyrenean-Mozarabic group, which includes the remaining languages without lenition (and is unlikely to be a valid clade; probably at least two clades, one for Mozarabic and one for Pyrenean). Gallo-Iberian is split in turn into the Iberian languages (e.g. Spanish and Portuguese), and the larger Gallo-Romance languages (stretching from eastern Spain to northeast Italy).
Probably a more accurate description, however, would be to say that there was a focal point of innovation located in central France, from which a series of innovations spread out as areal changes. The La Spezia–Rimini Line represents the farthest point to the southeast that these innovations reached, corresponding to the northern chain of the Apennine Mountains, which cuts straight across northern Italy and forms a major geographic barrier to further language spread.
This would explain why some of the "northwest" features (almost all of which can be characterized as innovations) end at differing points in northern Italy, and why some of the languages in geographically remote parts of Spain (in the south, and high in the Pyrenees) are lacking some of these features. It also explains why the languages in France (especially standard French) seem to have innovated earlier and more extensively than other Western Romance languages.
Many of the "southeast" features also apply to the Eastern Romance languages (particularly, Romanian), despite the geographic discontinuity. Examples are lack of lenition, maintenance of intertonic vowels, use of vowel-changing plurals, and palatalization of /k/ to . This has led some researchers to postulate a basic two-way East-West division, with the "Eastern" languages including Romanian and central and southern Italian, although this view is troubled by the contrast of numerous Romanian phonological developments with those found in Italy below the La Spezia-Rimini line. Among these features, in Romanian geminates reduced historically to single units — which may be an independent development or perhaps due to Slavic influence — and /kt/ developed into /pt/, whereas in central and southern Italy geminates are preserved and /kt/ underwent assimilation to /tt/.
Despite being the first Romance language to evolve from Vulgar Latin, Sardinian does not fit at all into this sort of division. It is clear that Sardinian became linguistically independent from the remainder of the Romance languages at an extremely early date, possibly already by the first century BC. Sardinian contains a large number of archaic features, including total lack of palatalization of /k/ and /g/ and a large amount of vocabulary preserved nowhere else, including some items already archaic by the time of Classical Latin (first century BC). Sardinian has plurals in /s/ but post-vocalic lenition of voiceless consonants is normally limited to the status of an allophonic rule (e.g. [k]"ane" 'dog' but "su" [g]"ane" or "su" [ɣ]"ane" 'the dog'), and there are a few innovations unseen elsewhere, such as a change of /au/ to /a/. Use of "su" < "ipsum" as an article is a retained archaic feature that also exists in the Catalan of the Balearic Islands and that used to be more widespread in Occitano-Romance, and is known as "" (literally the "salted article"), while Sardinian shares develarisation of earlier /kw/ and /gw/ with Romanian: Sard. "abba", Rum. "apă" 'water'; Sard. "limba", Rom. "limbă" 'language' (cf. Italian "acqua", "lingua").
Gallo-Romance can be divided into the following subgroups:
The following groups are also sometimes considered part of Gallo-Romance:
The Gallo-Romance languages are generally considered the most innovative (least conservative) among the Romance languages. Characteristic Gallo-Romance features generally developed earliest and appear in their most extreme manifestation in the Langue d'oïl, gradually spreading out along riverways and transalpine roads.
In some ways, however, the Gallo-Romance languages are conservative. The older stages of many of the languages preserved a two-case system consisting of nominative and oblique, fully marked on nouns, adjectives and determiners, inherited almost directly from the Latin nominative and accusative and preserving a number of different declensional classes and irregular forms. The languages closest to the oïl epicenter preserve the case system the best, while languages at the periphery lose it early.
Notable characteristics of the Gallo-Romance languages are:
Some Romance languages have developed varieties which seem dramatically restructured as to their grammars or to be mixtures with other languages. It is not always clear whether they should be classified as Romance, pidgins, creole languages, or mixed languages. Some other languages, such as Modern English, are sometimes thought of as creoles of semi-Romance ancestry. There are several dozens of creoles of French, Spanish, and Portuguese origin, some of them spoken as national languages in former European colonies.
Creoles of French:
Creoles of Spanish:
Creoles of Portuguese:
Latin and the Romance languages have also served as the inspiration and basis of numerous auxiliary and constructed languages, so-called "neo-romantic languages".
The concept was first developed in 1903 by the Italian mathematician Giuseppe Peano, under the title Latino sine flexione. He wanted to create a "naturalistic" international language, as opposed to an autonomous constructed language like Esperanto or Volapük, which were designed for maximal simplicity of lexicon and derivation of words. Peano used Latin as the base of his language because at the time it was the "de facto" international language of scientific communication.
Other languages developed since include Idiom Neutral, Occidental, Lingua Franca Nova, and most famously and successfully, Interlingua. Each of these languages has attempted to varying degrees to achieve a pseudo-Latin vocabulary as common as possible to living Romance languages.
There are also languages created for artistic purposes only, such as Talossan. Because Latin is a very well attested ancient language, some amateur linguists have even constructed Romance languages that mirror real languages that developed from other ancestral languages. These include Brithenig (which mirrors Welsh), Breathanach (mirrors Irish), Wenedyk (mirrors Polish), Þrjótrunn (mirrors Icelandic), and Helvetian (mirrors German).
The Romance language most widely spoken natively today is Spanish, followed by Portuguese, French, Italian and Romanian, which together cover a vast territory in Europe and beyond, and work as official and national languages in dozens of countries.
French, Italian, Portuguese, Spanish, and Romanian are also official languages of the European Union. Spanish, Portuguese, French, Italian, Romanian, and Catalan were the official languages of the defunct Latin Union; and French and Spanish are two of the six official languages of the United Nations. Outside Europe, French, Portuguese and Spanish are spoken and enjoy official status in various countries that emerged from the respective colonial empires.
Spanish is an official language in Spain and in nine countries of South America, home to about half that continent's population; in six countries of Central America (all except Belize); and in Mexico. In the Caribbean, it is official in Cuba, the Dominican Republic, and Puerto Rico. In all these countries, Latin American Spanish is the vernacular language of the majority of the population, giving Spanish the most native speakers of any Romance language. In Africa it is an official language of Equatorial Guinea.
Portuguese, in its original homeland, Portugal, is spoken by virtually the entire population of 10 million.
As the official language of Brazil, it is spoken by more than 200 million people in that country, as well as by neighboring residents of eastern Paraguay and northern Uruguay, accounting for a little more than half the population of South America, thus making Portuguese the most spoken official Romance language in a single country. It is the official language of six African countries (Angola, Cape Verde, Guinea-Bissau, Mozambique, Equatorial Guinea, and São Tomé and Príncipe), and is spoken as a first language by perhaps 30 million residents of that continent. In Asia, Portuguese is co-official with other languages in East Timor and Macau, while most Portuguese-speakers in Asia—some 400,000—are in Japan due to return immigration of Japanese Brazilians. In North America 1,000,000 people speak Portuguese as their home language.
In Oceania, Portuguese is the second most spoken Romance language, after French, due mainly to the number of speakers in East Timor. Its closest relative, Galician, has official status in the autonomous community of Galicia in Spain, together with Spanish.
Outside Europe, French is spoken natively most in the Canadian province of Quebec, and in parts of New Brunswick and Ontario. Canada is officially bilingual, with French and English being the official languages. In parts of the Caribbean, such as Haiti, French has official status, but most people speak creoles such as Haitian Creole as their native language. French also has official status in much of Africa, but relatively few native speakers. In France's overseas possessions, native use of French is increasing.
Although Italy also had some colonial possessions before World War II, its language did not remain official after the end of the colonial domination. As a result, Italian outside of Italy and Switzerland is now spoken only as a minority language by immigrant communities in North and South America and Australia. In some former Italian colonies in Africa—namely Libya, Eritrea and Somalia—it is spoken by a few educated people in commerce and government.
Romania did not establish a colonial empire, but beyond its native territory in southeastern Europe, the Romanian language is spoken as a minority language by autochthonous populations in Serbia, Bulgaria, and Hungary, and in some parts of the former Greater Romania (before 1945), as well as in Ukraine (Bukovina, Budjak) and in some villages between the Dniester and Bug rivers. The Aromanian language is spoken today by Aromanians in Bulgaria, Macedonia, Albania, Kosovo, and Greece. Romanian also spread to other countries on the Mediterranean (especially the other Romance-speaking countries, most notably Italy and Spain), and elsewhere such as Israel, where it is the native language of five percent of the population, and is spoken by many more as a secondary language. This is due to the large number of Romanian-born Jews who moved to Israel after World War II. And finally, some 2.6 million people in the former Soviet republic of Moldova speak a variety of Romanian, called variously Moldovan or Romanian by them.
The total native speakers of Romance languages are divided as follows (with their ranking within the languages of the world in brackets):
Catalan is the official language of Andorra. In Spain, it is co-official with Spanish in Catalonia, the Valencian Community, and the Balearic Islands, and it is recognized, but not official, in La Franja, and in Aragon. In addition, it is spoken by many residents of Alghero, on the island of Sardinia, and it is co-official in that city. Galician, with more than a million native speakers, is official together with Spanish in Galicia, and has legal recognition in neighbouring territories in Castilla y León. A few other languages have official recognition on a regional or otherwise limited level; for instance, Asturian and Aragonese in Spain; Mirandese in Portugal; Friulan, Sardinian and Franco-Provençal in Italy; and Romansh in Switzerland.
The remaining Romance languages survive mostly as spoken languages for informal contact. National governments have historically viewed linguistic diversity as an economic, administrative or military liability, as well as a potential source of separatist movements; therefore, they have generally fought to eliminate it, by extensively promoting the use of the official language, restricting the use of the other languages in the media, recognizing them as mere "dialects", or even persecuting them. As a result, all of these languages are considered endangered to varying degrees according to the UNESCO Red Book of Endangered Languages, ranging from "vulnerable" (e.g. Sicilian and Venetian) to "severely endangered" (Arpitan, most of the Occitan varieties). Since the late twentieth and early twenty-first centuries, increased sensitivity to the rights of minorities has allowed some of these languages to start recovering their prestige and lost rights. Yet it is unclear whether these political changes will be enough to reverse the decline of minority Romance languages.
Romance languages are the continuation of Vulgar Latin, the popular and colloquial sociolect of Latin spoken by soldiers, settlers, and merchants of the Roman Empire, as distinguished from the classical form of the language spoken by the Roman upper classes, the form in which the language was generally written. Between 350 BC and 150 AD, the expansion of the Empire, together with its administrative and educational policies, made Latin the dominant native language in continental Western Europe. Latin also exerted a strong influence in southeastern Britain, the Roman province of Africa, western Germany, Pannonia and the whole Balkans.
During the Empire's decline, and after its fragmentation and the collapse of its Western half in the fifth and sixth centuries, the spoken varieties of Latin became more isolated from each other, with the western dialects coming under heavy Germanic influence (the Goths and Franks in particular) and the eastern dialects coming under Slavic influence. The dialects diverged from classical Latin at an accelerated rate and eventually evolved into a continuum of recognizably different typologies. The colonial empires established by Portugal, Spain, and France from the fifteenth century onward spread their languages to the other continents to such an extent that about two-thirds of all Romance language speakers today live outside Europe.
Despite other influences (e.g. "substratum" from pre-Roman languages, especially Continental Celtic languages; and "superstratum" from later Germanic or Slavic invasions), the phonology, morphology, and lexicon of all Romance languages consist mainly of evolved forms of Vulgar Latin. However, some notable differences occur between today's Romance languages and their Roman ancestor. With only one or two exceptions, Romance languages have lost the declension system of Latin and, as a result, have SVO sentence structure and make extensive use of prepositions.
Documentary evidence is limited about Vulgar Latin for the purposes of comprehensive research, and the literature is often hard to interpret or generalize. Many of its speakers were soldiers, slaves, displaced peoples, and forced resettlers, more likely to be natives of conquered lands than natives of Rome. In Western Europe, Latin gradually replaced Celtic and other Italic languages, which were related to it by a shared Indo-European origin. Commonalities in syntax and vocabulary facilitated the adoption of Latin.
Vulgar Latin is believed to have already had most of the features shared by all Romance languages, which distinguish them from Classical Latin, such as the almost complete loss of the Latin grammatical case system and its replacement by prepositions; the loss of the neuter grammatical gender and comparative inflections; replacement of some verb paradigms by innovations (e.g. the synthetic future gave way to an originally analytic strategy now typically formed by infinitive + evolved present indicative forms of 'have'); the use of articles; and the initial stages of the palatalization of the plosives /k/, /g/, and /t/.
To some scholars, this suggests that the form of Vulgar Latin that evolved into the Romance languages was already current during the time of the Roman Empire (from the end of the first century BC), and was spoken alongside the written Classical Latin, which was reserved for official and formal occasions. Other scholars argue that the distinctions are more rightly viewed as indicative of sociolinguistic and register differences normally found within any language. The two remained mutually intelligible as one and the same language until very approximately the second half of the 7th century. Within the following two centuries, however, Latin became a dead language, in the sense that "the Romanized people of Europe could no longer understand texts that were read aloud or recited to them": Latin had ceased to be a first language and had become a foreign language that had to be learned, if the label "Latin" is restricted to a state of the language frozen in past time and confined to linguistic features for the most part typical of higher registers.
With the rise of the Roman Empire, Vulgar Latin spread first throughout Italy and then through southern, western, central, and southeast Europe, and northern Africa, along with parts of western Asia.
During the political decline of the Western Roman Empire in the fifth century, there were large-scale migrations into the empire, and the Latin-speaking world was fragmented into several independent states. Central Europe and the Balkans were occupied by Germanic and Slavic tribes, as well as by Huns. These incursions isolated the Vlachs from the rest of Romance-speaking Europe.
British and African Romance—the forms of Vulgar Latin used in Britain and the Roman province of Africa, where it had been spoken by much of the urban population—disappeared in the Middle Ages (as did Pannonian Romance in what is now Hungary, and Moselle Romance in Germany). But the Germanic tribes that had penetrated Roman Italy, Gaul, and Hispania eventually adopted Latin/Romance and the remnants of the culture of ancient Rome alongside existing inhabitants of those regions, and so Latin remained the dominant language there. In part due to regional dialects of the Latin language and local environments, several languages evolved from it.
Meanwhile, large-scale migrations into the Eastern Roman Empire started with the Goths and continued with Huns, Avars, Bulgars, Slavs, Pechenegs, Hungarians and Cumans. The invasions of Slavs were the most thoroughgoing, and they partially reduced the Romanic element in the Balkans.
The invasion of the Turks and conquest of Constantinople in 1453 marked the end of the empire. The Slavs named the Romance-speaking population Vlachs, while the latter called themselves "Rumân" or "Român", from the Latin "Romanus". The Daco-Roman dialect became fully distinct from the three dialects spoken South of the Danube—Macedo-Romanian, Istro-Romanian, and Megleno-Romanian—during the ninth and tenth centuries, when the Romanians (sometimes called Vlachs or Wallachians) emerged as a people.
Over the course of the fourth to eighth centuries, local changes in phonology, morphology, syntax and lexicon accumulated to the point that the speech of any locale was noticeably different from another. In principle, differences between any two lects increased the more they were separated geographically, reducing easy mutual intelligibility between speakers of distant communities. Clear evidence of some levels of change is found in the "Reichenau Glosses", an eighth-century compilation of about 1,200 words from the fourth-century Vulgate of Jerome that had changed in phonological form or were no longer normally used, along with their eighth-century equivalents in proto-Franco-Provençal. The following are some examples with reflexes in several modern Romance languages for comparison:
In all of the above examples, the words appearing in the fourth century Vulgate are the same words as would have been used in Classical Latin of c. 50 BC. It is likely that some of these words had already disappeared from casual speech by the time of the "Glosses"; but if so, they may well have been still widely understood, as there is no recorded evidence that the common people of the time had difficulty understanding the language.
By the 8th century, the situation was very different. During the late 8th century, Charlemagne, holding that "Latin of his age was by classical standards intolerably corrupt", successfully imposed Classical Latin as an artificial written vernacular for Western Europe. This meant, however, that parishioners could no longer understand the sermons of their priests, forcing the Council of Tours in 813 to issue an edict that priests needed to translate their speeches into the "rustica romana lingua", an explicit acknowledgement of the reality of the Romance languages as separate languages from Latin.
By this time, and possibly as early as the 6th century according to Price (1984), the Romance lects had split apart enough to be able to speak of separate Gallo-Romance, Ibero-Romance, Italo-Romance and Eastern Romance languages. Some researchers have postulated that the major divergences in the spoken dialects began or accelerated considerably in the 5th century, as the formerly widespread and efficient communication networks of the Western Roman Empire rapidly broke down, leading to the total disappearance of the Western Roman Empire by the end of the century. The critical period between the 5th–10th centuries AD is poorly documented because little or no writing from the chaotic "Dark Ages" of the 5th–8th centuries has survived, and writing after that time was in consciously classicized Medieval Latin, with vernacular writing only beginning in earnest in the 11th or 12th centuries. An exception such as the Oaths of Strasbourg is evidence that by the ninth century effective communication with a non-learnèd audience was carried out in evolved Romance.
A language that was closely related to medieval Romanian was spoken during the Dark Ages by Vlachs in the Balkans, Herzegovina, Dalmatia (Morlachs), Ukraine (Hutsuls), Poland (Gorals), Slovakia, and Czech Moravia, but gradually these communities lost their maternal language.
Between the 10th and 13th centuries, some local vernaculars developed a written form and began to supplant Latin in many of its roles. In some countries, such as Portugal, this transition was expedited by force of law; whereas in others, such as Italy, many prominent poets and writers used the vernacular of their own accord – some of the most famous in Italy being Giacomo da Lentini and Dante Alighieri. Well before that, the vernacular was also used for practical purposes, such as the testimonies in the Placiti Cassinesi, written 960-963.
The invention of the printing press brought a tendency towards greater uniformity of standard languages within political boundaries, at the expense of other Romance languages and dialects less favored politically. In France, for instance, the dialect spoken in the region of Paris gradually spread to the entire country, and the Occitan of the south lost ground.
Significant sound changes affected the consonants of the Romance languages.
There was a tendency to eliminate final consonants in Vulgar Latin, either by dropping them (apocope) or adding a vowel after them (epenthesis).
Many final consonants were rare, occurring only in certain prepositions (e.g. "ad" "towards", "apud" "at, near (a person)"), conjunctions ("sed" "but"), demonstratives (e.g. "illud" "that (over there)", "hoc" "this"), and nominative singular noun forms, especially of neuter nouns (e.g. "lac" "milk", "mel" "honey", "cor" "heart"). Many of these prepositions and conjunctions were replaced by others, while the nouns were regularized into forms based on their oblique stems that avoided the final consonants (e.g. *"lacte", *"mele", *"core").
Final "-m" was dropped in Vulgar Latin. Even in Classical Latin, final "-am", "-em", "-um" (inflectional suffixes of the accusative case) were often elided in poetic meter, suggesting the "m" was weakly pronounced, probably marking the nasalisation of the vowel before it. This nasal vowel lost its nasalization in the Romance languages except in monosyllables, where it became e.g. Spanish "quien" < "quem" "whom", French "rien" "anything" < "rem" "thing"; note especially French and Catalan "mon" < "meum" "my (m.sg.)" pronounced as one syllable ( > *) but Spanish "mío" and Portuguese and Catalan "meu" < "meum" pronounced as two ( > *).
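The rule just described, loss of final "-m" except in monosyllables, can be sketched as a toy string transformation. This is a simplified orthographic model: the real change also involved nasalization of the preceding vowel, and the rough syllable counter below (runs of vowel letters) deliberately treats "meum"-type words as single nuclei, matching their monosyllabic Romance outcomes.

```python
import re

def syllable_count(word: str) -> int:
    # Rough count: each run of vowel letters is one nucleus.
    return len(re.findall(r"[aeiouy]+", word))

def drop_final_m(word: str) -> str:
    """Vulgar Latin loss of final -m; monosyllables such as
    "rem" and "quem" kept it (as vowel nasalization)."""
    if word.endswith("m") and syllable_count(word) > 1:
        return word[:-1]
    return word

print(drop_final_m("terram"))  # terra
print(drop_final_m("rem"))     # rem (kept: monosyllable)
```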
As a result, only the following final consonants occurred in Vulgar Latin:
Final "-t" was eventually dropped in many languages, although this often occurred several centuries after the Vulgar Latin period. For example, the reflex of "-t" was dropped in Old French and Old Spanish only around 1100. In Old French, this occurred only when a vowel still preceded the "t" (generally < Latin "a"). Hence "amat" "he loves" > Old French "aime" but "venit" "he comes" > Old French "vient": the /t/ was never dropped and survives into Modern French in liaison, e.g. "vient-il?" "is he coming?" (the corresponding /t/ in "aime-t-il?" is analogical, not inherited). Old French also kept the third-person plural ending "-nt" intact.
In Italo-Romance and the Eastern Romance languages, eventually "all" final consonants were either dropped or protected by an epenthetic vowel, except in clitic forms (e.g. prepositions "con", "per"). Modern Standard Italian still has almost no consonant-final words, although Romanian has reintroduced them through later loss of final /u/ and /i/. For example, "amās" "you love" > "ame" > Italian "ami"; "amant" "they love" > *"aman" > Ital. "amano". On the evidence of "sloppily written" Lombardic language documents, however, the loss of final /s/ in Italy did not occur until the 7th or 8th century, after the Vulgar Latin period, and the presence of many former final consonants is betrayed by the syntactic gemination ("raddoppiamento sintattico") that they trigger. It is also thought that /s/ after a long vowel became /j/ rather than simply disappearing: "nōs" > "noi" "we", *"ses" > "sei" "you are", "crās" > "crai" "tomorrow" (southern Italian). In unstressed syllables, the resulting diphthongs were simplified: "canēs" > *"canei" > "cani" "dogs"; "amīcās" > *"amicai" > "amiche" "(female) friends", where nominative "amīcae" should produce "**amice" rather than "amiche" (note masculine "amīcī" > "amici" not "**amichi").
Central Western Romance languages eventually regained a large number of final consonants through the general loss of final /e/ and /o/, e.g. Catalan "llet" "milk" < "lactem", "foc" "fire" < "focum", "peix" "fish" < "piscem". In French, most of these secondary final consonants (as well as primary ones) were lost before around 1700, but tertiary final consonants later arose through the loss of /ə/ < "-a". Hence masculine "frīgidum" "cold" > Old French "freit" > "froid", feminine "frigidam" > Old French "freide" > "froide".
Palatalization was one of the most important processes affecting consonants in Vulgar Latin. This eventually resulted in a whole series of palatal and postalveolar consonants in most Romance languages.
The following historical stages occurred:
Note how the environments become progressively less "palatal", and the languages affected become progressively fewer.
The outcomes of palatalization depended on the historical stage, the consonants involved, and the languages involved. The primary division is between the Western Romance languages, with /ts/ resulting from palatalization of /k/, and the remaining languages (Italo-Dalmatian and Eastern Romance), with /tʃ/ resulting. It is often suggested that /tʃ/ was the original result in all languages, with /tʃ/ > /ts/ a later innovation in the Western Romance languages. Evidence of this is the fact that Italian has both /tʃ/ and /ts/ as outcomes of palatalization in different environments, while Western Romance has only /ts/. Even more suggestive is the fact that the Mozarabic language in al-Andalus (modern southern Spain) had /tʃ/ as the outcome despite being in the "Western Romance" area and geographically disconnected from the remaining areas; this suggests that Mozarabic was an outlying "relic" area where the change /tʃ/ > /ts/ failed to reach. (Northern French dialects, such as Norman and Picard, also had /tʃ/, but this may be a secondary development, i.e. due to a later sound change /ts/ > /tʃ/.) Note that /ts, dz, dʒ/ eventually became /s, z, ʒ/ in most Western Romance languages. Thus Latin "caelum" (sky, heaven), pronounced with an initial /k/, became Italian "cielo", Romanian "cer", Spanish "cielo", French "ciel", Catalan "cel", and Portuguese "céu".
The outcome of palatalized and is less clear:
This suggests that palatalized > > either or depending on location, while palatalized > ; after this, > in most areas, but Spanish and Gascon (originating from isolated districts behind the western Pyrenees) were relic areas unaffected by this change.
In French, the outcomes of /k/ palatalized by /e, i/ and by /a/ were different: "centum" "hundred" > "cent" but "cantum" "song" > "chant". French also underwent palatalization of labials before /j/: Vulgar Latin > Old French ("sēpia" "cuttlefish" > "seiche", "rubeus" "red" > "rouge", "sīmia" "monkey" > "singe").
The original outcomes of palatalization must have continued to be phonetically palatalized even after they had developed into alveolar or postalveolar consonants. This is clear from French, where all originally palatalized consonants triggered the development of a following glide in certain circumstances (most visible in the endings "-āre", "-ātum/ātam"). In some cases this came from a consonant palatalized by an adjoining consonant after the late loss of a separating vowel. For example, "mansiōnātam" > early Old French "maisnieḍe" "household". Similarly, "mediētātem" > early Old French "meitieḍ" > modern French "moitié" "half". In both cases, phonetic palatalization must have remained in primitive Old French at least through the time when unstressed intertonic vowels were lost (?8th century), well after the fragmentation of the Romance languages.
The effect of palatalization is indicated in the writing systems of almost all Romance languages, where the letters ⟨c, g⟩ have the "hard" pronunciation in most situations, but a "soft" pronunciation (e.g. French/Portuguese /s, ʒ/, Italian/Romanian /tʃ, dʒ/) before ⟨e, i⟩. (This orthographic trait has passed into Modern English through Norman French-speaking scribes writing Middle English; this replaced the earlier system of Old English, which had developed its own hard-soft distinction with the soft ⟨c, g⟩ representing /tʃ, j/.) This has the effect of keeping the modern spelling similar to the original Latin spelling, but complicates the relationship between sound and letter. In particular, the hard sounds must be written differently before ⟨e, i⟩ (e.g. Italian ⟨ch, gh⟩, Portuguese ⟨qu, gu⟩), and likewise for the soft sounds when not before these letters (e.g. Italian ⟨ci, gi⟩, Portuguese ⟨ç, j⟩). Furthermore, in Spanish, Catalan, Occitan and Brazilian Portuguese, the use of digraphs containing ⟨u⟩ to signal the hard pronunciation before ⟨e, i⟩ means that a different spelling is also needed to signal the sounds /kw, gw/ before these vowels (Spanish ⟨cu, gü⟩; Catalan, Occitan and Brazilian Portuguese ⟨qü, gü⟩). This produces a number of orthographic alternations in verbs whose pronunciation is entirely regular. The following are examples of corresponding first-person plural indicative and subjunctive in a number of regular Portuguese verbs: "marcamos, marquemos" "we mark"; "caçamos, cacemos" "we hunt"; "chegamos, cheguemos" "we arrive"; "averiguamos, averigüemos" "we verify"; "adequamos, adeqüemos" "we adapt"; "oferecemos, ofereçamos" "we offer"; "dirigimos, dirijamos" "we drive"; "erguemos, ergamos" "we raise"; "delinquimos, delincamos" "we commit a crime". In the case of Italian, the convention of digraphs ⟨ch⟩ and ⟨gh⟩ to represent /k/ and /g/ before written ⟨e, i⟩ results in similar orthographic alternations, such as "dimentico" 'I forget', "dimentichi" 'you forget', "baco" 'worm', "bachi" 'worms' with [k] or "pago" 'I pay', "paghi" 'you pay' and "lago" 'lake', "laghi" 'lakes' with [g].
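The Portuguese alternations listed above all follow one mechanical rule: the stem-final sound stays constant while its spelling adapts to the following vowel. A minimal sketch of that rule (the function name and phoneme labels are illustrative, and the ⟨qu/gu⟩ + diaeresis cases are ignored):

```python
# Spelling of a stem-final sound: (before a/o/u, before e/i).
SPELLINGS = {
    "k": ("c", "qu"),   # marcar:  marcamos / marquemos
    "s": ("ç", "c"),    # caçar:   caçamos  / cacemos
    "g": ("g", "gu"),   # chegar:  chegamos / cheguemos
    "ʒ": ("j", "g"),    # dirigir: dirijamos / dirigimos
}

def spell_form(prefix: str, final_sound: str, ending: str) -> str:
    """Join a verb stem (prefix + final sound) to an ending,
    picking the spelling that keeps the sound regular."""
    before_front = ending[0] in "ei"
    return prefix + SPELLINGS[final_sound][1 if before_front else 0] + ending

print(spell_form("mar", "k", "amos"))   # marcamos
print(spell_form("mar", "k", "emos"))   # marquemos
print(spell_form("diri", "ʒ", "amos"))  # dirijamos
```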
The use in Italian of ⟨ci⟩ and ⟨gi⟩ to represent /tʃ/ or /dʒ/ before vowels written ⟨a, o, u⟩ neatly distinguishes "dico" 'I say' with /k/ from "dici" 'you say' with /tʃ/ or "ghiro" 'dormouse' /g/ and "giro" 'turn, revolution' /dʒ/, but with orthographic ⟨ci⟩ and ⟨gi⟩ also representing the sequence of /tʃ/ or /dʒ/ and the actual vowel /i/ (/ditʃi/ "dici", /dʒiro/ "giro"), and no generally observed convention of indicating stress position, the status of "i" when followed by another vowel in spelling can be unrecognizable. For example, the written forms offer no indication that ⟨cia⟩ in "camicia" 'shirt' represents a single unstressed syllable /tʃa/ with no /i/ at any level (/kaˈmitʃa/ → [kaˈmiːtʃa] ~ [kaˈmiːʃa]), but that underlying the same spelling in "farmacia" 'pharmacy' is a bisyllabic sequence consisting of the stressed syllable /tʃi/ and syllabic /a/ (/farmaˈtʃia/ → [farmaˈtʃiːa] ~ [farmaˈʃiːa]).
Stop consonants shifted by lenition in Vulgar Latin in some areas.
The voiced labial consonants /b/ and /w/ (represented by ⟨b⟩ and ⟨v⟩, respectively) both developed a fricative as an intervocalic allophone. This is clear from the orthography; in medieval times, the spelling ⟨b⟩ is often used for what had been a consonantal ⟨v⟩ in Classical Latin, or the two spellings were used interchangeably. In many Romance languages (Italian, French, Portuguese, Romanian, etc.), this fricative later developed into a /v/; but in others (Spanish, Galician, some Catalan and Occitan dialects, etc.) reflexes of ⟨b⟩ and ⟨v⟩ simply merged into a single phoneme.
Several other consonants were "softened" in intervocalic position in Western Romance (Spanish, Portuguese, French, Northern Italian), but normally not phonemically in the rest of Italy (except some cases of "elegant" or Ecclesiastical words), nor apparently at all in Romanian. The dividing line between the two sets of dialects is called the La Spezia–Rimini Line and is one of the most important isoglosses of the Romance dialects. The changes (instances of diachronic lenition resulting in phonological restructuring) are as follows:
Single voiceless plosives became voiced: "-p-, -t-, -c-" > "-b-, -d-, -g-". Subsequently, in some languages they were further weakened, either becoming fricatives or approximants (as in Spanish) or disappearing entirely (as /t/ and /k/, but not /p/, in French). The following example shows progressive weakening of original /t/: e.g. "vītam" > Italian "vita", Portuguese "vida" (European Portuguese ), Spanish "vida" (Southern Peninsular Spanish ), and French "vie". Some scholars once speculated that these sound changes may be due in part to the influence of Continental Celtic languages, but scholarship of the past few decades challenges that hypothesis.
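The first stage of this lenition can be modelled as a one-step rewrite: single voiceless plosives between vowels become voiced. A simplified orthographic sketch (using ⟨c⟩ for /k/); note that geminates fall outside the pattern automatically, since each half of a double consonant has a consonant on one side:

```python
import re

VOICED = {"p": "b", "t": "d", "c": "g"}

def lenite(word: str) -> str:
    """Western Romance first-stage lenition: -p-, -t-, -c- >
    -b-, -d-, -g- between vowels. Geminates (e.g. "gutta")
    do not match and so are preserved."""
    return re.sub(r"(?<=[aeiou])([ptc])(?=[aeiou])",
                  lambda m: VOICED[m.group(1)], word)

print(lenite("vita"))    # vida   (vītam > Spanish vida)
print(lenite("sapere"))  # sabere (> Spanish/Portuguese saber)
print(lenite("gutta"))   # gutta  (geminate untouched)
```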
Consonant length is no longer phonemically distinctive in most Romance languages. However some languages of Italy (Italian, Sardinian, Sicilian, and numerous other varieties of central and southern Italy) do have long consonants like , , etc., where the doubling indicates either actual length or, in the case of plosives and affricates, a short hold before the consonant is released, in many cases with distinctive lexical value: e.g. "note" (notes) vs. "notte" (night), "cade" (s/he, it falls) vs. "cadde" (s/he, it fell), "caro" (dear, expensive) vs. "carro" (cart). They may even occur at the beginning of words in Romanesco, Neapolitan, Sicilian and other southern varieties, and are occasionally indicated in writing, e.g. Sicilian "cchiù" (more), and "ccà" (here). In general, the consonants , , and are long at the start of a word, while the archiphoneme is realised as a trill in the same position. In much of central and southern Italy, the affricates /t͡ʃ/ and /d͡ʒ/ weaken synchronically to fricative [ʃ] and [ʒ] between vowels, while their geminate congeners do not, e.g. "cacio" (cheese) vs. "caccio" (I chase).
A few languages have regained secondary geminate consonants. The double consonants of Piedmontese exist only after stressed , written "ë", and are not etymological: "vëdde" (Latin "vidēre", to see), "sëcca" (Latin "sicca", dry, feminine of "sech"). In standard Catalan and Occitan, there exists a geminate sound written "ŀl" (Catalan) or "ll" (Occitan), but it is usually pronounced as a simple sound in colloquial (and even some formal) speech in both languages.
In Late Latin a prosthetic vowel /i/ (lowered to /e/ in most languages) was inserted at the beginning of any word that began with /s/ (referred to as "s impura") followed by a voiceless consonant (#sC- > isC-):
Prosthetic /i/ ~ /e/ in Romance languages may have been influenced by Continental Celtic languages, although the phenomenon exists or existed in some areas where Celtic was never present (e.g. Sardinia, southern Italy). While Western Romance words undergo prothesis, cognates in Balkan Romance and southern Italo-Romance do not, e.g. Italian "scrivere", "spada", "spirito", "Stefano", and "stato". In Italian, syllabification rules were preserved instead by vowel-final articles, thus feminine "spada" as "la spada", but instead of rendering the masculine "*il spaghetto", "lo spaghetto" came to be the norm. Though receding at present, Italian once had a prosthetic /i/ if a consonant preceded such clusters, so that 'in Switzerland' was "in Isvizzera". Some speakers still use the prothetic /i/ productively, and it is fossilized in a few set locutions such as "in ispecie" 'especially' or "per iscritto" 'in writing' (although in this case its survival may be due partly to the influence of the separate word "iscritto" < Latin "īnscrīptus"). The association of /i/ ~ /j/ and /s/ also led to the vocalization of word-final -"s" in Italian, Romanian, certain Occitan dialects, and the Spanish dialect of Chocó in Colombia.
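The prothesis rule (#sC- > isC-/esC-) reduces to a check for word-initial ⟨s⟩ plus a consonant. A toy orthographic sketch, defaulting to the Western Romance vowel "e" (accents and later vowel changes ignored):

```python
import re

def prothesis(word: str, vowel: str = "e") -> str:
    """Insert a prosthetic vowel before word-initial s + consonant,
    as in Latin spiritum > Spanish espíritu (accents omitted)."""
    return vowel + word if re.match(r"s[^aeiou]", word) else word

print(prothesis("spiritu"))  # espiritu
print(prothesis("scola"))    # escola
print(prothesis("sole"))     # sole (no cluster, unchanged)
```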
One profound change that affected Vulgar Latin was the reorganisation of its vowel system. Classical Latin had five short vowels, "ă, ĕ, ĭ, ŏ, ŭ", and five long vowels, "ā, ē, ī, ō, ū", each of which was an individual phoneme (see the table on the right for their likely pronunciation in IPA), and four diphthongs, "ae", "oe", "au" and "eu" (five according to some authors, including "ui"). There were also long and short versions of "y", representing the rounded vowel /y/ in Greek borrowings, which however probably came to be pronounced /i/ even before Romance vowel changes started.
There is evidence that in the imperial period all the short vowels except "a" differed by quality as well as by length from their long counterparts. So, for example, "ē" was pronounced close-mid /eː/ while "ĕ" was pronounced open-mid /ɛ/, and "ī" was pronounced close /iː/ while "ĭ" was pronounced near-close /ɪ/.
During the Proto-Romance period, phonemic length distinctions were lost. Vowels came to be automatically pronounced long in stressed, open syllables (i.e. when followed by only one consonant), and pronounced short everywhere else. This situation is still maintained in modern Italian: "cade" "he falls" vs. "cadde" "he fell".
The Proto-Romance loss of phonemic length originally produced a system with nine different quality distinctions in monophthongs, where only original "ă" and "ā" had merged. Soon, however, many of these vowels coalesced:
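For the Italo-Western branch, the most widely cited coalescence scheme (Sardinian and Eastern Romance merged differently) can be tabulated as a simple mapping from the ten Classical Latin vowel phonemes to seven Proto-Romance qualities:

```python
# Classical Latin vowel -> Italo-Western Proto-Romance quality
ITALO_WESTERN = {
    "ī": "i",
    "ĭ": "e", "ē": "e",   # ĭ and ē merge as close-mid /e/
    "ĕ": "ɛ",
    "ă": "a", "ā": "a",   # the pair already merged in Proto-Romance
    "ŏ": "ɔ",
    "ō": "o", "ŭ": "o",   # ō and ŭ merge as close-mid /o/
    "ū": "u",
}

# Ten Latin vowel phonemes reduce to seven qualities.
print(len(ITALO_WESTERN), "->", len(set(ITALO_WESTERN.values())))
```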
The Proto-Romance allophonic vowel-length system was rephonemicized in the Gallo-Romance languages as a result of the loss of many final vowels. Some northern Italian languages (e.g. Friulan) still maintain this secondary phonemic length, but most languages dropped it by either diphthongizing or shortening the new long vowels.
French phonemicized a third vowel length system around AD 1300 as a result of the sound change /VsC/ > /VhC/ > /VːC/ (where "V" is any vowel and "C" any consonant). This vowel length was eventually lost by around AD 1700, but the former long vowels are still marked with a circumflex. A fourth vowel length system, still non-phonemic, has now arisen: All nasal vowels as well as the oral vowels (which mostly derive from former long vowels) are pronounced long in all stressed closed syllables, and all vowels are pronounced long in syllables closed by the voiced fricatives /v, z, ʒ, ʁ/. This system in turn has been phonemicized in some non-standard dialects (e.g. Haitian Creole), as a result of the loss of final .
The Latin diphthongs "ae" and "oe", pronounced and in earlier Latin, were early on monophthongized.
"ae" became /ɛː/ by the 1st century at the latest. Although this sound was still distinct from all existing vowels, the neutralization of Latin vowel length eventually caused its merger with /ɛ/ < short "e": e.g. "caelum" "sky" > French "ciel", Spanish/Italian "cielo", Portuguese "céu", with the same vowel as in "mele" "honey" > French/Spanish "miel", Italian "miele", Portuguese "mel". Some words show an early merger of "ae" with /eː/, as in "praeda" "booty" > *"prēda" > French "proie" (vs. expected **"priée"), Italian "preda" (not **"prieda") "prey"; or "faenum" "hay" > *"fēnum" > Spanish "heno", French "foin" (but Italian "fieno" /fjɛno/).
"oe" generally merged with /eː/: "poenam" "punishment" > Romance *"pena" > Spanish/Italian "pena", French "peine"; "foedus" "ugly" > Romance *"fedu" > Spanish "feo", Portuguese "feio". There are relatively few such outcomes, since "oe" was rare in Classical Latin (most original instances had become Classical "ū", as in Old Latin "oinos" "one" > Classical "ūnus") and so "oe" was mostly limited to Greek loanwords, which were typically learned (high-register) terms.
"au" merged with "ō" in the popular speech of Rome already by the 1st century BC. A number of authors remarked on this explicitly, e.g. Cicero's taunt that the populist politician Publius Clodius Pulcher had changed his name from "Claudius" to ingratiate himself with the masses. This change never penetrated far from Rome, however, and the pronunciation /au/ was maintained for centuries in the vast majority of Latin-speaking areas, although it eventually developed into some variety of "o" in many languages. For example, Italian and French have /ɔ/ as the usual reflex, but this post-dates diphthongization of /ɔ/ and the French-specific palatalization /ka/ > /tʃa/ (hence "causa" > French "chose", Italian "cosa" not **"cuosa"). Spanish has /o/, but Portuguese spelling maintains ⟨ou⟩, which has developed to /o/ (and still remains as [ou] in some dialects, and [oi] in others). Occitan, Romanian, southern Italian languages, and many other minority Romance languages still have /au/. A few common words, however, show an early merger with "ō", evidently reflecting a generalization of the popular Roman pronunciation: e.g. French "queue", Italian "coda", Occitan "co(d)a", Romanian "coadă" (all meaning "tail") must all derive from "cōda" rather than Classical "cauda" (but notice Portuguese "cauda"). Similarly, Spanish "oreja", Portuguese "orelha", French "oreille", Romanian "ureche", and Sardinian "olícra", "orícla" "ear" must derive from "ōric(u)la" rather than Classical "auris" (Occitan "aurelha" was probably influenced by the unrelated "ausir" < "audīre" "to hear"), and the form "oricla" is in fact reflected in the Appendix Probi.
An early process that operated in all Romance languages to varying degrees was metaphony (vowel mutation), conceptually similar to the umlaut process so characteristic of the Germanic languages. Depending on the language, certain stressed vowels were raised (or sometimes diphthongized) either by a final /i/ or /u/ or by a directly following /j/. Metaphony is most extensive in the Italo-Romance languages, and applies to nearly all languages in Italy; however, it is absent from Tuscan, and hence from standard Italian. In many languages affected by metaphony, a distinction exists between final /u/ (from most cases of Latin "-um") and final /o/ (from Latin "-ō", "-ud" and some cases of "-um", esp. masculine "mass" nouns), and only the former triggers metaphony.
Some examples:
A number of languages diphthongized some of the free vowels, especially the open-mid vowels /ɛ/ and /ɔ/:
These diphthongizations had the effect of reducing or eliminating the distinctions between open-mid and close-mid vowels in many languages. In Spanish and Romanian, all open-mid vowels were diphthongized, and the distinction disappeared entirely. Portuguese is the most conservative in this respect, keeping the seven-vowel system more or less unchanged (but with changes in particular circumstances, e.g. due to metaphony). Other than before palatalized consonants, Catalan keeps intact, but split in a complex fashion into and then coalesced again in the standard dialect (Eastern Catalan) in such a way that most original have reversed their quality to become .
In French and Italian, the distinction between open-mid and close-mid vowels occurred only in closed syllables. Standard Italian more or less maintains this. In French, /e/ and merged by the twelfth century or so, and the distinction between and was eliminated without merging by the sound changes , . Generally this led to a situation where both and occur allophonically, with the close-mid vowels in open syllables and the open-mid vowels in closed syllables. This is still the situation in modern Spanish, for example. In French, however, both and were partly rephonemicized: Both and occur in open syllables as a result of , and both and occur in closed syllables as a result of .
Old French also had numerous falling diphthongs resulting from diphthongization before palatal consonants or from a fronted /j/ originally following palatal consonants in Proto-Romance or later: e.g. "pācem" /patsʲe/ "peace" > PWR */padzʲe/ (lenition) > OF "paiz" /pajts/; *"punctum" "point" > Gallo-Romance */ponʲto/ > */pojɲto/ (fronting) > OF "point" /põjnt/. During the Old French period, preconsonantal /l/ [ɫ] vocalized to /w/, producing many new falling diphthongs: e.g. "dulcem" "sweet" > PWR */doltsʲe/ > OF "dolz" /duɫts/ > "douz" /duts/; "fallet" "fails, is deficient" > OF "falt" > "faut" "is needed"; "bellus" "beautiful" > OF "bels" > "beaus" . By the end of the Middle French period, "all" falling diphthongs either monophthongized or switched to rising diphthongs: proto-OF > early OF > modern spelling > mod. French .
In both French and Portuguese, nasal vowels eventually developed from sequences of a vowel followed by a nasal consonant (/m/ or /n/). Originally, all vowels in both languages were nasalized before any nasal consonants, and nasal consonants not immediately followed by a vowel were eventually dropped. In French, nasal vowels before remaining nasal consonants were subsequently denasalized, but not before causing the vowels to lower somewhat, e.g. "dōnat" "he gives" > OF "dune" > "donne" , "fēminam" > "femme" . Other vowels remained diphthongized, and were dramatically lowered: "fīnem" "end" > "fin" (often pronounced ); "linguam" "tongue" > "langue" ; "ūnum" "one" > "un" .
In Portuguese, /n/ between vowels was dropped, and the resulting hiatus eliminated through vowel contraction of various sorts, often producing diphthongs: "manum, *manōs" > PWR *"manu, ˈmanos" "hand(s)" > "mão, mãos" ; "canem, canēs" "dog(s)" > PWR *"kane, ˈkanes" > *"can, ˈcanes" > "cão, cães" ; "ratiōnem, ratiōnēs" "reason(s)" > PWR *"raˈdʲzʲone, raˈdʲzʲones" > *"raˈdzon, raˈdzones" > "razão, razões" (Brazil), (Portugal). Sometimes the nasalization was eliminated: "lūna" "moon" > Galician-Portuguese "lũa" > "lua"; "vēna" "vein" > Galician-Portuguese "vẽa" > "veia". Nasal vowels that remained actually tend to be raised (rather than lowered, as in French): "fīnem" "end" > "fim" ; "centum" "hundred" > PWR "tʲsʲɛnto" > "cento" ; "pontem" "bridge" > PWR "pɔnte" > "ponte" (Brazil), (Portugal). In Portugal, vowels before a nasal consonant have become denasalized, but in Brazil they remain heavily nasalized.
Characteristic of the Gallo-Romance languages and Rhaeto-Romance languages are the front rounded vowels . All of these languages show an unconditional change /u/ > /y/, e.g. "lūnam" > French "lune" , Occitan . Many of the languages in Switzerland and Italy show the further change /y/ > /i/. Also very common is some variation of the French development (lengthened in open syllables) > > , with mid back vowels diphthongizing in some circumstances and then re-monophthongizing into mid-front rounded vowels. (French has both and , with developing from in certain circumstances.)
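Derivations such as "pācem" > /padzʲe/ > "paiz" or "lūnam" > "lune" are the result of sound changes applied in a fixed historical order. As a purely illustrative sketch (not part of the historical record), ordered rewrite rules can be modelled in a few lines of Python; the rules below are crude, context-free stand-ins for the conditioned changes described above, since real sound changes depend on phonological environment:

```python
# Toy model of applying ordered sound changes ("rewrite rules") to a form.
# The rules are simplified stand-ins for changes described in the text
# (lenition /ts/ > /dz/, unconditional /u/ > /y/); genuine changes are
# conditioned by environment, which this sketch deliberately ignores.

def derive(form, rules):
    """Apply an ordered list of (old, new) replacements to a form,
    returning every intermediate stage of the derivation."""
    stages = [form]
    for old, new in rules:
        form = form.replace(old, new)
        stages.append(form)
    return stages

# A crude echo of pācem /patsʲe/ > PWR /padzʲe/ (lenition):
print(derive("patse", [("ts", "dz")]))   # ['patse', 'padze']
# The unconditional /u/ > /y/ change, as in lūnam > French lune:
print(derive("luna", [("u", "y")]))      # ['luna', 'lyna']
```

Because the rules apply in sequence, a later rule can feed on the output of an earlier one, which is why the relative chronology of changes matters in derivation chains like those given above.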
There was more variability in the result of the unstressed vowels. Originally in Proto-Romance, the same nine vowels developed in unstressed syllables as in stressed ones, and in Sardinian, they coalesced into the same five vowels in the same way.
In Italo-Western Romance, however, vowels in unstressed syllables were significantly different from stressed vowels, with yet a third outcome for final unstressed syllables. In non-final unstressed syllables, the seven-vowel system of stressed syllables developed, but then the low-mid vowels merged into the high-mid vowels . This system is still preserved, largely or completely, in all of the conservative Romance languages (e.g. Italian, Spanish, Portuguese, Catalan).
In final unstressed syllables, results were somewhat complex. One of the more difficult issues is the development of final short "-u", which appears to have been raised to rather than lowered to , as happened in all other syllables. However, it is possible that in reality, final comes from "long" *"-ū" < "-um", where original final "-m" caused vowel lengthening as well as nasalization. Evidence of this comes from Rhaeto-Romance, in particular Sursilvan, which preserves reflexes of both final "-us" and "-um", and where the latter, but not the former, triggers metaphony. This suggests the development "-us" > > , but "-um" > > .
The original five-vowel system in final unstressed syllables was preserved as-is in some of the more conservative central Italian languages, but in most languages there was further coalescence:
Various later changes happened in individual languages, e.g.:
The so-called "intertonic vowels" are word-internal unstressed vowels, i.e. not in the initial, final, or "tonic" (i.e. stressed) syllable, hence intertonic. Intertonic vowels were the most subject to loss or modification. Already in Vulgar Latin intertonic vowels between a single consonant and a following /r/ or /l/ tended to drop: "vétulum" "old" > "veclum" > Dalmatian "vieklo", Sicilian "vecchiu", Portuguese "velho". But many languages ultimately dropped almost all intertonic vowels.
Generally, those languages south and east of the La Spezia–Rimini Line (Romanian and Central-Southern Italian) maintained intertonic vowels, while those to the north and west (Western Romance) dropped all except /a/. Standard Italian generally maintained intertonic vowels, but typically raised unstressed /e/ > /i/. Examples:
Portuguese is more conservative in maintaining some intertonic vowels other than /a/: e.g. *"offerḗscere" "to offer" > Portuguese "oferecer" vs. Spanish "ofrecer", French "offrir" (< *"offerīre"). French, on the other hand, drops even intertonic /a/ after the stress: "Stéphanum" "Stephen" > Spanish "Esteban" but Old French "Estievne" > French "Étienne". Many cases of /a/ before the stress also ultimately dropped in French: "sacraméntum" "sacrament" > Old French "sairement" > French "serment" "oath".
The Romance languages for the most part have kept the writing system of Latin, adapting it to their evolution.
One exception was Romanian before the nineteenth century, where, after the Roman retreat, literacy was reintroduced through the Romanian Cyrillic alphabet, a Slavic influence. A Cyrillic alphabet was also used for Romanian (Moldovan) in the USSR. The non-Christian populations of Spain also used the scripts of their religions (Arabic and Hebrew) to write Romance languages such as Ladino and Mozarabic in "aljamiado".
The Romance languages are written with the classical Latin alphabet of 23 letters – "A", "B", "C", "D", "E", "F", "G", "H", "I", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "V", "X", "Y", "Z" – subsequently modified and augmented in various ways. In particular, the single Latin letter "V" split into "V" (consonant) and "U" (vowel), and the letter "I" split into "I" and "J". The Latin letter "K" and the new letter "W", which came to be widely used in Germanic languages, are seldom used in most Romance languages – mostly for unassimilated foreign names and words. Indeed, in Italian prose is properly . Catalan eschews importation of "foreign" letters more than most languages. Thus Wikipedia is in Catalan but in Spanish.
While most of the 23 basic Latin letters have maintained their phonetic value, for some of them it has diverged considerably; and the new letters added since the Middle Ages have been put to different uses in different scripts. Some letters, notably "H" and "Q", have been variously combined in digraphs or trigraphs (see below) to represent phonetic phenomena that could not be recorded with the basic Latin alphabet, or to get around previously established spelling conventions. Most languages added auxiliary marks (diacritics) to some letters, for these and other purposes.
The spelling rules of most Romance languages are fairly simple, and consistent within any language. Since the spelling systems are based on phonemic structures rather than phonetics, however, the actual pronunciation of what is represented in standard orthography can be subject to considerable regional variation, as well as to allophonic differentiation by position in the word or utterance. Among the letters representing the most conspicuous phonological variations, between Romance languages or with respect to Latin, are the following:
Otherwise, letters that are not combined as digraphs generally represent the same phonemes as suggested by the International Phonetic Alphabet (IPA), whose design was, in fact, greatly influenced by Romance spelling systems.
Since most Romance languages have more sounds than can be accommodated in the Latin alphabet, they all resort to the use of digraphs and trigraphs – combinations of two or three letters with a single phonemic value. The concept (but not the actual combinations) is derived from Classical Latin, which used, for example, "TH", "PH", and "CH" when transliterating the Greek letters "θ", "ϕ" (later "φ"), and "χ". These were once aspirated sounds in Greek before changing to the corresponding fricatives, and the "H" represented what sounded to the Romans like an following , , and respectively. Some of the digraphs used in modern scripts are:
While the digraphs "CH", "PH", "RH" and "TH" were at one time used in many words of Greek origin, most languages have now replaced them with "C/QU", "F", "R" and "T". Only French has kept these etymological spellings, which now represent or , , and , respectively.
Gemination, in the languages where it occurs, is usually indicated by doubling the consonant, except when it does not contrast phonemically with the corresponding short consonant, in which case gemination is not indicated. In Jèrriais, long consonants are marked with an apostrophe: is a long , is a long , and is a long . The phonemic contrast between geminate and single consonants is widespread in Italian, and normally indicated in the traditional orthography: "fatto" 'done' vs. "fato" 'fate, destiny'; "cadde" 's/he, it fell' vs. "cade" 's/he, it falls'. The double consonants in French orthography, however, are merely etymological. In Catalan, the gemination of "l" is marked by a "punt volat" ("flying point"): "l·l".
Romance languages also introduced various marks (diacritics) that may be attached to some letters, for various purposes. In some cases, diacritics are used as an alternative to digraphs and trigraphs; namely to represent a larger number of sounds than would be possible with the basic alphabet, or to distinguish between sounds that were previously written the same. Diacritics are also used to mark word stress, to indicate exceptional pronunciation of letters in certain words, and to distinguish words with same pronunciation (homophones).
Depending on the language, some letter-diacritic combinations may be considered distinct letters, e.g. for the purposes of lexical sorting. This is the case, for example, of Romanian "ș" () and Spanish "ñ" ().
The following are the most common uses of diacritics in Romance languages.
Most languages are written with a mixture of two distinct but phonetically identical variants or "cases" of the alphabet: majuscule ("uppercase" or "capital letters"), derived from Roman stone-carved letter shapes, and minuscule ("lowercase"), derived from Carolingian writing and Medieval quill pen handwriting which were later adapted by printers in the fifteenth and sixteenth centuries.
In particular, all Romance languages capitalize (use uppercase for the first letter of) the following words: the first word of each complete sentence, most words in names of people, places, and organizations, and most words in titles of books. The Romance languages do not follow the German practice of capitalizing all nouns including common ones. Unlike English, the names of months, days of the weeks, and derivatives of proper nouns are usually not capitalized: thus, in Italian one capitalizes "Francia" ("France") and "Francesco" ("Francis"), but not "francese" ("French") or "francescano" ("Franciscan"). However, each language has some exceptions to this general rule.
The tables below provide a vocabulary comparison that illustrates a number of examples of sound shifts that have occurred between Latin and Romance languages. Words are given in their conventional spellings. In addition, for French the actual pronunciation is given, due to the dramatic differences between spelling and pronunciation. (French spelling approximately reflects the pronunciation of Old French, c. 1200 AD.)
Rugby football
Rugby football is a collective name for the team sports of rugby union and rugby league, as well as the earlier forms of football from which both games evolved. Canadian football, and to a lesser extent American football, were also broadly considered forms of rugby football but are now seldom referred to as such.
Rugby football started about 1845 at Rugby School in Rugby, Warwickshire, England, although forms of football in which the ball was carried and tossed date to medieval times. Rugby split into two sports in 1895, when twenty-one clubs left the Rugby Football Union at the George Hotel, Huddersfield, to form the Northern Rugby Football Union (renamed the Rugby Football League in 1922) over broken-time payments to players who took time off from work to play the sport, making rugby league the first code to turn professional and pay players. Rugby union turned professional one hundred years later, following the 1995 Rugby World Cup in South Africa. The respective world governing bodies are World Rugby (rugby union) and the Rugby League International Federation (rugby league).
Rugby football was one of many versions of football played at English public schools in the 19th century. Although rugby league initially used rugby union rules, they are now wholly separate sports. In addition to these two codes, both American and Canadian football evolved from rugby football at the beginning of the 20th century.
Following the 1895 split in rugby football, the two forms, rugby league and rugby union, differed only in administration. Soon, however, the rules of rugby league were modified, resulting in two distinctly different forms of rugby. A hundred years later, rugby union joined rugby league and most other forms of football as an openly professional sport.
The Olympic form of rugby is known as Rugby Sevens. In this form of the game, each team has seven players on the field at one time playing seven-minute halves. The rules and pitch size are the same as rugby union.
Although rugby football was codified at Rugby School, many rugby playing countries had pre-existing football games not dissimilar to rugby.
Forms of traditional football similar to rugby have been played throughout Europe and beyond. Many of these involved handling of the ball, and scrummaging formations. For example, New Zealand had Ki-o-rahi, Australia marn grook, Japan kemari, Georgia lelo burti, the Scottish Borders Jeddart Ba' and Cornwall Cornish hurling, Central Italy Calcio Fiorentino, South Wales cnapan, East Anglia Campball and Ireland had caid, an ancestor of Gaelic football.
In 1871, English clubs met to form the Rugby Football Union (RFU). In 1895, after charges of professionalism (compensation of team members) had been made against some clubs for paying players for missing work, the Northern Rugby Football Union, usually called the Northern Union (NU), was formed. The existing rugby union authorities responded by issuing sanctions against the clubs, players, and officials involved in the new organization. After the schism, the separate codes became known as "rugby league" and "rugby union".
Rugby union is both a professional and amateur game, and is dominated by the first tier unions: New Zealand, Ireland, Wales, England, South Africa, Australia, Argentina, Scotland, Italy and France. Second and third tier unions include Belgium, Brazil, Canada, Chile, Fiji, Georgia, Germany, Hong Kong, Japan, Kenya, Namibia, the Netherlands, Portugal, Romania, Russia, Samoa, Spain, Tonga, the United States and Uruguay. Rugby union is administered by World Rugby (WR), whose headquarters are located in Dublin, Ireland. It is the national sport in New Zealand, Wales, Fiji, Samoa, Tonga, Georgia and Madagascar, and is the most popular form of rugby globally. The Olympic Games have admitted the seven-a-side version of the game, known as rugby sevens, into the programme from Rio de Janeiro in 2016 onwards. Sevens was considered as a possible demonstration sport for the 2012 London Olympics, but it was among several sports dropped from consideration.
In Canada and the United States, rugby developed into gridiron football. During the late 1800s and early 1900s, the two forms of the game were very similar (to the point where the United States was able to win the gold medal for rugby union at the 1924 Summer Olympics), but numerous rule changes, introduced by Walter Camp in the United States and John Thrift Meldrum Burnside in Canada, have differentiated the gridiron-based game from its rugby counterpart. Among the unique features of the North American game are the separation of play into downs instead of releasing the ball immediately upon tackling, and the requirement that the team with the ball set into a set formation for at least one second before resuming play after a tackle (with up to 40 seconds allowed to do so). One forward pass is permitted from behind the site of the last tackle on each down. Other distinctive developments include hard plastic equipment (particularly the football helmet and shoulder pads); a smaller, pointier ball that is well suited to passing but makes drop kicks impractical; a generally smaller and narrower field measured in customary units rather than metric (in some variants of the American game a field can be as short as 50 yards between end zones); and a field shaped like a gridiron, from which the code's nickname is derived, with lines marked at five-yard intervals.
Rugby league is also both a professional and amateur game, administered on a global level by the Rugby League International Federation. In addition to amateur and semi-professional competitions in the United States, Russia, Lebanon, Serbia, Europe and Australasia, there are two major professional competitions—the Australasian National Rugby League and the Super League. International Rugby League is dominated by Australia, England and New Zealand. In Papua New Guinea and New Zealand, it is the national sport. Other nations from the South Pacific and Europe also play in the Pacific Cup and European Cup respectively.
Distinctive features common to both rugby codes include the oval ball and the prohibition on throwing the ball forward, meaning that players can gain ground only by running with the ball or by kicking it. As the sport of rugby league moved further away from its union counterpart, rule changes were implemented with the aim of making a faster-paced and more try-oriented game. Unlike American and Canadian football, the players do not wear any sort of protection or armour.
The main differences between the two games, besides league having teams of 13 players and union of 15, involve the tackle and its aftermath:
Set pieces of the union code include the "scrum", which occurs after a minor infringement of the rules (most often a knock-on, when a player knocks the ball forward), where packs of opposing players push against each other for possession, and the "line-out", in which parallel lines of players from each team, arranged perpendicular to the touch-line, attempt to catch the ball thrown from touch. A rule has been added to line-outs which allows the jumper to be pulled down once a player's feet are on the ground.
In the league code, the scrum still exists, but with greatly reduced importance as it involves fewer players and is rarely contested. Set pieces are generally started from the play-the-ball situation. Many of the rugby league positions have names and requirements similar to rugby union positions, but there are no flankers in rugby league.
In England, rugby union is widely regarded as an "establishment" sport, played mostly by members of the upper and middle classes. For example, many pupils at public schools and grammar schools play rugby union, although the game (which had a long history of being played at state schools until the 1980s) is becoming increasingly popular in comprehensive schools. Despite this stereotype, the game, particularly in the West Country, is popular amongst all classes. In contrast, rugby league has traditionally been seen as a working-class pursuit. Another exception to rugby union's upper-class stereotype is in Wales, where it has been traditionally associated with small village teams made up of coal miners and other industrial workers who played on their days off. In Ireland, both rugby union and rugby league are unifying forces across the national and sectarian divide, with the Ireland international teams representing both political entities.
In Australia, support for both codes is concentrated in New South Wales, Queensland and the Australian Capital Territory. The same perceived class barrier as exists between the two games in England also occurs in these states, fostered by rugby union's prominence and support at private schools.
Exceptions to the above include New Zealand (although rugby league is still considered to be a lower class game by many or a game for 'westies' referring to lower class western suburbs of Auckland and more recently, southern Auckland where the game is also popular), Wales, France (except Paris), Cornwall, Gloucestershire, Somerset, Scottish Borders, County Limerick (see Munster) and the Pacific Islands, where rugby union is popular in working class communities. Nevertheless, rugby league is perceived as the game of the working-class people in northern England and in the Australian states of New South Wales and Queensland.
In the United Kingdom, rugby union fans sometimes used the term "rugger" as an alternative name for the sport (see Oxford '-er'), although this archaic expression has not had currency since the 1950s or earlier.
Rugby union
Rugby union, widely known simply as rugby, is a full contact team sport that originated in England in the first half of the 19th century. One of the two codes of rugby football, it is based on running with the ball in hand. In its most common form, a game is played between two teams of 15 players using an oval-shaped ball on a rectangular field called a pitch. The field has H-shaped goalposts at both ends.
Rugby union is a popular sport around the world, played by male and female players of all ages. Rules do not differ between the sexes. In 2014, there were more than 6 million people playing worldwide, of whom 2.36 million were registered players. World Rugby, previously called the International Rugby Football Board (IRFB) and the International Rugby Board (IRB), has been the governing body for rugby union since 1886, and currently has 101 countries as full members and 18 associate members.
In 1845, the first football laws were written by pupils at Rugby School; other significant events in the early development of rugby include the decision by Blackheath F.C. to leave the Football Association in 1863 and, in 1895, the acrimonious split between the then amateur rugby union and the professional rugby league. Historically rugby union was an amateur sport, but in 1995 formal restrictions on payments to players were removed, making the game openly professional at the highest level for the first time.
Rugby union spread from the Home Nations of Great Britain and Ireland and was embraced by many of the countries associated with the British Empire. Early exponents of the sport included Australia, New Zealand, South Africa and France. Countries that have adopted rugby union as their "de facto" national sport include Fiji, Georgia, Madagascar, New Zealand, Samoa, and Tonga.
International matches have taken place since 1871 when the first game was played between Scotland and England at Raeburn Place in Edinburgh. The Rugby World Cup, first held in 1987, is contested every four years. The Six Nations Championship in Europe and The Rugby Championship in the Southern Hemisphere are other major international competitions that are held annually.
National club and provincial competitions include the Premiership in England, the Top 14 in France, the Mitre 10 Cup in New Zealand, the National Rugby Championship in Australia, and the Currie Cup in South Africa. Other transnational club competitions include the European Rugby Champions Cup, the Pro14 in Europe and South Africa, and Super Rugby in the Southern Hemisphere and Japan.
The origin of rugby football is reputed to be an incident during a game of English school football at Rugby School in Warwickshire in 1823, when William Webb Ellis is said to have picked up the ball and run with it. Although the story may well be apocryphal, it was immortalised at the school with a commemorative plaque that was unveiled in 1895, and the Rugby World Cup trophy is named after Webb Ellis. Rugby football stems from the form of the game played at Rugby School, which former pupils then introduced to their universities.
Former Rugby School student Albert Pell is credited with having formed the first "football" team while a student at Cambridge University. Major private schools each used different rules during this early period, with former pupils from Rugby and Eton attempting to carry their preferred rules through to their universities. A significant event in the early development of rugby football was the production of a written set of rules at Rugby School in 1845, followed by the Cambridge Rules that were drawn up in 1848.
Formed in 1863, the national governing body The Football Association (FA) began codifying a set of universal football rules. These new rules specifically banned players from running with the ball in hand and also disallowed hacking (kicking players in the shins), both of which were legal and common tactics under the Rugby School's rules of the sport. In protest at the imposition of the new rules, the Blackheath Club left the FA followed by several other clubs that also favoured the "Rugby Rules". Although these clubs decided to ban hacking soon afterwards, the split was permanent, and the FA's codified rules became known as "association football" whilst the clubs that had favoured the Rugby Rules formed the Rugby Football Union in 1871, and their code became known as "rugby football".
In 1895, there was a major schism within rugby football in England in which numerous clubs from Northern England resigned from the RFU over the issue of reimbursing players for time lost from their workplaces. The split highlighted the social and class divisions in the sport in England, and led directly to the creation of the separate code of "rugby league". The existing sport thereafter took on the name "rugby union" to differentiate it from rugby league, but both versions of the sport are known simply as "rugby" throughout most of the world.
The first rugby football international was played on 27 March 1871 between Scotland and England in Edinburgh. Scotland won the game 1–0. By 1881 both Ireland and Wales had representative teams and in 1883 the first international competition, the Home Nations Championship had begun. 1883 is also the year of the first rugby sevens tournament, the Melrose Sevens, which is still held annually.
Two important overseas tours took place in 1888: a British Isles team visited Australia and New Zealand—although a private venture, it laid the foundations for future British and Irish Lions tours; and the 1888–89 New Zealand Native football team brought the first overseas team to British spectators.
During the early history of rugby union, a time before commercial air travel, teams from different continents rarely met. The first two notable tours both took place in 1888: the British Isles team touring New Zealand and Australia, followed by the New Zealand team touring Europe. Traditionally the most prestigious tours were those of the Southern Hemisphere countries of Australia, New Zealand and South Africa to the Northern Hemisphere, and the return tours made by a joint British and Irish team. Tours would last for months, due to long travelling times and the number of games undertaken; the 1888 New Zealand team began their tour in Hawke's Bay in June and did not complete their schedule until August 1889, having played 107 rugby matches. Touring international sides would play Test matches against international opponents, including national, club and county sides in the case of Northern Hemisphere rugby, or provincial/state sides in the case of Southern Hemisphere rugby.
Between 1905 and 1908, all three major Southern Hemisphere rugby countries sent their first touring teams to the Northern Hemisphere: New Zealand in 1905, followed by South Africa in 1906 and Australia in 1908. All three teams brought new styles of play, fitness levels and tactics, and were far more successful than critics had expected.
The New Zealand 1905 touring team performed a haka before each match, leading Welsh Rugby Union administrator Tom Williams to suggest that Wales player Teddy Morgan lead the crowd in singing the Welsh National Anthem, "Hen Wlad Fy Nhadau", as a response. After Morgan began singing, the crowd joined in: the first time a national anthem was sung at the start of a sporting event. In 1905 France played England in its first international match.
Rugby union was included as an event in the Olympic Games four times during the early 20th century. No international rugby games or union-sponsored club matches were played during the First World War, but competitions continued through service teams such as the New Zealand Army team. During the Second World War no international matches were played by most countries, though Italy, Germany and Romania played a limited number of games, and Cambridge and Oxford continued their annual University Match.
The first officially sanctioned international rugby sevens tournament took place in 1973 at Murrayfield, one of Scotland's biggest stadiums, as part of the Scottish Rugby Union centenary celebrations.
In 1987 the first Rugby World Cup was held in Australia and New Zealand, and the inaugural winners were New Zealand. The first World Cup Sevens tournament was held at Murrayfield in 1993. Rugby sevens was introduced into the Commonwealth Games in 1998 and was added to the Olympic Games of 2016. Both men's and women's sevens will again take place at the 2020 Olympic Games in Tokyo.
Rugby union was an amateur sport until the IRB declared the game "open" in August 1995 (shortly after the completion of the 1995 World Cup), removing restrictions on payments to players. However, the pre-1995 period of rugby union was marked by frequent accusations of "shamateurism", including an investigation in Britain by a House of Commons Select committee in early 1995. Following the introduction of professionalism trans-national club competitions were started, with the Heineken Cup in the Northern Hemisphere and Super Rugby in the Southern Hemisphere.
The Tri Nations, an annual international tournament involving Australia, New Zealand and South Africa, kicked off in 1996. In 2012, this competition was extended to include Argentina, a country whose impressive performances in international games (especially finishing in third place in the 2007 Rugby World Cup) were deemed to merit inclusion in the competition. As a result of the expansion to four teams, the tournament was renamed The Rugby Championship.
Each team starts the match with 15 players on the field and seven or eight substitutes. Players in a team are divided into eight forwards (two more than in rugby league) and seven backs.
The main responsibilities of the forward players are to gain and retain possession of the ball. Forwards play a vital role in tackling and rucking opposing players. Players in these positions are generally bigger and stronger and take part in the scrum and line-out. The forwards are often collectively referred to as the 'pack', especially when in the scrum formation.
The front row consists of three players: two props (the loosehead prop and the tighthead prop) and the hooker. The role of the two props is to support the hooker during scrums, to provide support for the jumpers during line-outs and to provide strength and power in rucks and mauls. The third position in the front row is the hooker. The hooker is a key position in attacking and defensive play and is responsible for winning the ball in the scrum. Hookers normally throw the ball in at line-outs.
The second row consists of two locks or lock forwards. Locks are usually the tallest players in the team, and specialise as line-out jumpers. The main role of the lock in line-outs is to make a standing jump, often supported by the other forwards, to either collect the thrown ball or ensure the ball comes down on their side. Locks also have an important role in the scrum, binding directly behind the three front row players and providing forward drive.
The back row, not to be confused with the backs, is the third and final row of the forward positions; its players are often referred to as the loose forwards. The three positions in the back row are the two flankers and the number 8. The two flanker positions, the blindside flanker and the openside flanker, form the final row in the scrum. They are usually the most mobile forwards in the game. Their main role is to win possession through 'turnovers'. The number 8 packs down between the two locks at the back of the scrum. The role of the number 8 in the scrum is to control the ball after it has been heeled back from the front of the pack, and the position provides a link between the forwards and backs during attacking phases.
The role of the backs is to create and convert point-scoring opportunities. They are generally smaller, faster and more agile than the forwards. Another distinction between the backs and the forwards is that the backs are expected to have superior kicking and ball-handling skills, especially the fly-half, scrum-half, and full-back.
The half-backs consist of two positions, the scrum-half and the fly-half. The fly-half is crucial to a team's game plan, orchestrating the team's performance. They are usually the first to receive the ball from the scrum-half following a breakdown, lineout, or scrum, and need to be decisive with what actions to take and be effective at communicating with the outside backs. Many fly-halves are also their team's goal kickers. The scrum-half is the link between the forwards and the backs. They receive the ball from the lineout and remove the ball from the back of the scrum, usually passing it to the fly-half. They also feed the scrum and sometimes have to act as a fourth loose forward.
There are four three-quarter positions: two centres (inside and outside) and two wings (left and right). In defence the centres attempt to tackle attacking players; in attack, they employ speed and strength to breach opposition defences. The wings are generally positioned on the outside of the backline. Their primary function is to finish off moves and score tries. Wings are usually the fastest players in the team and are elusive runners who use their speed to avoid tackles.
The full-back is normally positioned several metres behind the back line. They often field opposition kicks and are usually the last line of defence should an opponent break through the back line. Two of the most important attributes of a good full-back are dependable catching skills and a good kicking game.
Rugby union is played between two teams – the one that scores more points wins the game. Points can be scored in several ways: a try, scored by grounding the ball in the in-goal area (between the goal line and the dead-ball line), is worth 5 points and a subsequent conversion kick scores 2 points; a successful penalty kick or a drop goal each score 3 points. The values of each of these scoring methods have been changed over the years.
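Since each scoring method has a fixed point value, a match score is simply a sum over scoring events. The following is a minimal, illustrative sketch of that arithmetic; the event labels and function name are invented for this example and are not part of any official rugby software or the laws of the game:

```python
# Point values as described above (try 5, conversion 2,
# penalty kick 3, drop goal 3). Labels are hypothetical.
POINTS = {
    "try": 5,          # grounding the ball in the in-goal area
    "conversion": 2,   # kick at goal following a try
    "penalty": 3,      # successful penalty kick
    "drop_goal": 3,    # drop kick over the crossbar in open play
}

def match_score(events):
    """Sum the point value of each scoring event in a match."""
    return sum(POINTS[e] for e in events)

# A converted try plus a penalty: 5 + 2 + 3
print(match_score(["try", "conversion", "penalty"]))  # prints 10
```

Note that these values have changed over the sport's history (a try, for instance, was once worth fewer points), so any such table is specific to the modern laws.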
The field of play on a rugby pitch is as near as possible to a maximum of 100 metres long by 70 metres wide. In actual gameplay the length of a pitch can vary: there are typically 100 metres between the two try-lines, but the distance can be as short as 94 metres. An in-goal area extending between 6 and 22 metres behind each try line completes the playing area. The pitch must be at least 68 metres wide, up to the maximum of 70 metres.
Rugby goalposts are H-shaped and are situated in the middle of the goal lines at each end of the field. They consist of two poles, 5.6 metres apart, connected by a horizontal crossbar 3 metres above the ground. The minimum height for the posts is 3.4 metres.
At the beginning of the game, the captains and the referee toss a coin to decide which team will kick off first. Play then starts with a drop kick, with the players chasing the ball into the opposition's territory, and the other side trying to retrieve the ball and advance it. The ball must make contact with the ground before being kicked. If the ball does not reach the opponents' 10-metre line, the opposing team has two choices: to have the ball kicked off again, or to have a scrum at the centre of the half-way line.
If the player with the ball is tackled, frequently a ruck will result.
Games are divided into 40-minute halves, with a break in the middle. The sides exchange ends of the field after the half-time break. Stoppages for injury or to allow the referee to take disciplinary action do not count as part of the playing time, so that the elapsed time is usually longer than 80 minutes. The referee is responsible for keeping time, even when—as in many professional tournaments—he is assisted by an official time-keeper. If time expires while the ball is in play, the game continues until the ball is "dead", and only then will the referee blow the whistle to signal half-time or full-time; but if the referee awards a penalty or free-kick, the game continues.
In the knockout stages of rugby competitions, most notably the Rugby World Cup, two extra-time periods of 10 minutes each are played (with an interval of 5 minutes in between) if the game is tied after full-time. If scores are level after 100 minutes, the rules call for 20 minutes of sudden-death extra time to be played. If the sudden-death period produces no score, a kicking competition is used to determine the winner. However, no match in the history of the Rugby World Cup has ever gone past 100 minutes into sudden-death extra time.
Forward passing (throwing the ball ahead to another player) is not allowed; the ball can be passed laterally or backwards. The ball tends to be moved forward in three ways: by kicking, by a player running with it, or by driving it forward within a scrum or maul. Only the player with the ball may be tackled or rucked. When a ball is knocked forward by a player with their arms, a "knock-on" is committed, and play is restarted with a scrum.
Any player may kick the ball forward in an attempt to gain territory. When a player anywhere in the playing area kicks indirectly into touch so that the ball first bounces in the field of play, the throw-in is taken where the ball went into touch. If the player kicks directly into touch (i.e. without bouncing in-field first) from within their own 22-metre line, the lineout is taken by the opposition where the ball went into touch, but if the ball is kicked into touch directly by a player outside the 22, the lineout is taken level with where the kick was taken.
The aim of the defending side is to stop the player with the ball, either by bringing them to ground (a tackle, which is frequently followed by a ruck) or by contesting for possession with the ball-carrier on their feet (a maul). Such a circumstance is called a breakdown and each is governed by a specific law.
Tackling
A player may tackle an opposing player who has the ball by holding them while bringing them to ground. Tacklers cannot tackle above the shoulder (the neck and head are out of bounds), and the tackler has to attempt to wrap their arms around the player being tackled to complete the tackle. It is illegal to push, shoulder-charge, or to trip a player using feet or legs, but hands may be used (this being referred to as a tap-tackle or ankle-tap). Tacklers may not tackle an opponent who has jumped to catch a ball until the player has landed.
Rucking and Mauling
Mauls occur after a player with the ball has come into contact with an opponent but the handler remains on their feet; once any combination of at least three players have bound themselves, a maul has been set. A ruck is similar to the maul, but in this case the ball has gone to ground, with at least one player from each team on their feet and in contact over it as both sides attempt to secure the ball.
When the ball leaves the side of the field, a line-out is awarded against the team which last touched the ball. Forward players from each team line up a metre apart, perpendicular to the touchline and between 5 and 15 metres from the touchline. The ball is thrown from the touchline down the centre of the lines of forwards by a player (usually the hooker) from the team that did not play the ball into touch. The exception to this is when the ball went out from a penalty, in which case the side who gained the penalty throws the ball in.
Both sides compete for the ball and players may lift their teammates. A jumping player cannot be tackled until they stand and only shoulder-to-shoulder contact is allowed; deliberate infringement of this law is dangerous play, and results in a penalty kick.
A scrum is a way of restarting the game safely and fairly after a minor infringement. It is awarded when the ball has been knocked or passed forward, if a player takes the ball over their own try line and puts the ball down, when a player is accidentally offside or when the ball is trapped in a ruck or maul with no realistic chance of being retrieved. A team may also opt for a scrum if awarded a penalty.
A scrum is formed by the eight forwards from each team crouching down and binding together in three rows, before interlocking with the opposing team. For each team, the front row consists of two props (loosehead and tighthead) either side of the hooker. The two props are typically amongst the strongest players on the team. The second row consists of two locks and the two flankers. Behind the second row is the number 8. This formation is known as the 3–4–1 formation. Once a scrum is formed the scrum-half from the team awarded the "feed" rolls the ball into the gap between the two front-rows known as the "tunnel". The two hookers then compete for possession by hooking the ball backwards with their feet, while each pack tries to push the opposing pack backwards to help gain possession. The side that wins possession can either keep the ball under their feet while driving the opposition back, in order to gain ground, or transfer the ball to the back of the scrum where it can be picked up by the number 8 or by the scrum-half.
There are three match officials: a referee, commonly addressed as "Sir", and two assistant referees. The assistant referees, formerly known as touch judges, had the primary function of indicating when the ball had gone into "touch"; their role has been expanded and they are now expected to assist the referee in a number of areas, such as watching for foul play and checking offside lines. In addition, for matches in high-level competitions, there is often a television match official (TMO; popularly called the "video referee") to assist with certain decisions, linked to the referee by radio. The officials have a system of hand signals to indicate their decisions.
Common offences include tackling above the shoulders, collapsing a scrum, ruck or maul, not releasing the ball when on the ground, or being offside. The non-offending team has a number of options when awarded a penalty: a "tap" kick, when the ball is kicked a very short distance from hand, allowing the kicker to regather the ball and run with it; a punt, when the ball is kicked a long distance from hand, for field position; a place-kick, when the kicker will attempt to score a goal; or a scrum. Players may be sent off (signalled by a red card) or temporarily suspended ("sin-binned") for ten minutes (yellow card) for foul play or repeated infringements, and may not be replaced.
Occasionally, infringements are not caught by the referee during the match and these may be "cited" by the citing commissioner after the match and have punishments (usually suspension for a number of weeks) imposed on the infringing player.
During the match, players may be replaced (for injury) or substituted (for tactical reasons). A player who has been replaced may not rejoin play unless he was temporarily replaced to have bleeding controlled; a player who has been substituted may return temporarily, to replace a player who has a blood injury or has suffered a concussion, or permanently, if he is replacing a front-row forward. In international matches, eight replacements are allowed; in domestic or cross-border tournaments, at the discretion of the responsible national union(s), the number of replacements may be nominated to a maximum of eight, of whom three must be sufficiently trained and experienced to provide cover for the three front row positions.
Prior to 2016, all substitutions, no matter the cause, counted against the limit during a match. In 2016, World Rugby changed the law so that substitutions made to replace a player deemed unable to continue due to foul play by the opposition would no longer count against the match limit. This change was introduced in January of that year in the Southern Hemisphere and June in the Northern Hemisphere.
The most basic items of equipment for a game of rugby union are the ball itself, a rugby shirt (also known as a "jersey"), rugby shorts, socks, and boots. The rugby ball is oval in shape (technically a prolate spheroid) and is made up of four panels. The ball was historically made of leather, but in the modern era most games use a ball made from a synthetic material. World Rugby lays out specific dimensions for the ball: 280–300 millimetres in length, 740–770 millimetres in circumference measured end to end and 580–620 millimetres in circumference measured around the width. Rugby boots have soles with studs to allow grip on the turf of the pitch. The studs may be either metal or plastic but must not have any sharp edges or ridges.
Protective equipment is optional and strictly regulated. The most common items are mouthguards, which are worn by almost all players, and are compulsory in some rugby-playing nations. Other protective items that are permitted include headgear, thin (not more than 10 mm thick) non-rigid shoulder pads, and shin guards, which are worn underneath the socks. Bandages or tape can be worn to support or protect injuries; some players wear tape around the head to protect the ears in scrums and rucks. Female players may also wear chest pads. Although not worn for protection, some types of fingerless mitts are allowed to aid grip.
It is the responsibility of the match officials to check players' clothing and equipment before a game to ensure that it conforms to the laws of the game.
The international governing body of rugby union (and associated games such as sevens) is World Rugby (WR). The WR headquarters are in Dublin, Ireland. WR, founded in 1886, governs the sport worldwide and publishes the game's laws and rankings. As of February 2014, WR (then known as the IRB, for International Rugby Board) recorded 119 unions in its membership, 101 full members and 18 associate member countries. According to WR, rugby union is played by men and women in over 100 countries. WR controls the Rugby World Cup, the Women's Rugby World Cup, Rugby World Cup Sevens, HSBC Sevens Series, HSBC Women's Sevens Series, World Under 20 Championship, World Under 20 Trophy, Nations Cup and the Pacific Nations Cup. WR holds votes to decide where each of these events are to be held, except in the case of the Sevens World Series for men and women, for which WR contracts with several national unions to hold individual events.
Six regional associations, which are members of WR, form the next level of administration.
SANZAAR (South Africa, New Zealand, Australia and Argentina Rugby) is a joint venture of the South African Rugby Union, New Zealand Rugby, Rugby Australia and the Argentine Rugby Union (UAR) that operates Super Rugby and The Rugby Championship (formerly the Tri Nations before the entry of Argentina). Although UAR initially had no representation on the former SANZAR board, it was granted input into the organisation's issues, especially with regard to The Rugby Championship, and became a full SANZAAR member in 2016 (when the country entered Super Rugby).
National unions oversee rugby union within individual countries and are affiliated to WR. Since 2016, the WR Council has 40 seats. A total of 11 unions—the eight foundation unions of Scotland, Ireland, Wales, England, Australia, New Zealand, South Africa and France, plus Argentina, Italy and Japan—have two seats each. In addition, the six regional associations have two seats each. Four more unions—Canada, Georgia, Romania and the USA—have one seat each. Finally, the Chairman and Vice Chairman, who usually come from one of the eight foundation unions (although the current Vice Chairman, Agustín Pichot, is with the non-foundation Argentine union), have one vote each.
The earliest countries to adopt rugby union were England, the country of inception, and the other three Home Nations, Scotland, Ireland and Wales. The spread of rugby union as a global sport has its roots in the exporting of the game by British expatriates, military personnel, and overseas university students.
The first rugby club in France was formed by British residents in Le Havre in 1872, while the next year Argentina recorded its first game: 'Banks' v 'City' in Buenos Aires.
At least seven countries have adopted rugby union as their de facto national sport; they are Fiji, Georgia, Madagascar, New Zealand, Samoa, Tonga and Wales.
A rugby club was formed in Sydney, New South Wales, Australia in 1864; while the sport was said to have been introduced to New Zealand by Charles Monro in 1870, who played rugby while a student at Christ's College, Finchley.
Several island nations have embraced the sport of rugby. Rugby was first played in Fiji circa 1884 by European and Fijian soldiers of the Native Constabulary at Ba on Viti Levu island. Fiji then sent their first overseas team to Samoa in 1924, who in turn set up their own union in 1927. Along with Tonga, other countries to have national rugby teams in Oceania include the Cook Islands, Niue, Papua New Guinea and Solomon Islands.
In North America a club formed in Montreal in 1868, Canada's first club. The city of Montreal also played its part in the introduction of the sport in the United States, when students of McGill University played against a team from Harvard University in 1874.
Although the exact date of arrival of rugby union in Trinidad and Tobago is unknown, their first club Northern RFC was formed in 1923, a national team was playing by 1927 and due to a cancelled tour to British Guiana in 1933, switched their venue to Barbados; introducing rugby to the island. Other Atlantic countries to play rugby union include Jamaica and Bermuda.
The growth of rugby union in Europe outside the 6 Nations countries in terms of playing numbers has been sporadic. Historically, British and Irish home teams played the Southern Hemisphere teams of Australia, New Zealand, and South Africa, as well as France. The rest of Europe were left to play amongst themselves. During a period when it had been isolated by the British and Irish Unions, France, lacking international competition, became the only European team from the top tier to regularly play the other European countries; mainly Belgium, the Netherlands, Germany, Spain, Romania, Poland, Italy and Czechoslovakia. In 1934, instigated by the French Rugby Federation, FIRA (Fédération Internationale de Rugby Amateur) was formed to organise rugby union outside the authority of the IRFB, its founding members being France and a group of its regular continental European opponents.
Other European rugby playing nations of note include Russia, whose first officially recorded match is marked by an encounter between Dynamo Moscow and the Moscow Institute of Physical Education in 1933. Rugby union in Portugal also took hold between the First and Second World Wars, with a Portuguese National XV set up in 1922 and an official championship started in 1927.
In 1999, FIRA agreed to place itself under the auspices of the IRB, transforming itself into a strictly European organising body. Accordingly, it changed its name to FIRA–AER (Fédération Internationale de Rugby Amateur – Association Européenne de Rugby). It adopted its current name of Rugby Europe in 2014.
Although Argentina is the best-known rugby playing nation in South America, founding the Argentine Rugby Union in 1899, several other countries on the continent have a long history. Rugby had been played in Brazil since the end of the 19th century, but the game was played regularly only from 1926, when São Paulo beat Santos in an inter-city match. It took Uruguay several aborted attempts to adapt to rugby, led mainly by the efforts of the Montevideo Cricket Club; these efforts succeeded in 1951 with the formation of a national league and four clubs. Other South American countries that formed a rugby union include Chile (1948), and Paraguay (1968).
Many Asian countries have a tradition of playing rugby dating from the British Empire. India began playing rugby in the early 1870s, the Calcutta Football Club forming in 1873. However, with the departure of a local British army regiment, interest in rugby diminished in the area. In 1878, The Calcutta Football Club was disbanded, and rugby in India faltered. Sri Lanka claims to have founded their union in 1878, and although little official information from the period is available, the team won the All-India cup in Madras in 1920. The first recorded match in Malaysia was in 1892, but the first confirmation of rugby is the existence of the "HMS Malaya Cup" which was first presented in 1922 and is still awarded to the winners of the Malay sevens.
Rugby union was introduced to Japan in 1899 by two Cambridge students: Ginnosuke Tanaka and Edward Bramwell Clarke. The Japan RFU was founded in 1926, and the country's place in rugby history was cemented when it was chosen to host the 2019 World Cup, the first to be held outside the Commonwealth, Ireland and France; the IRB viewed this as an opportunity for rugby union to extend its reach, particularly in Asia. Other Asian playing countries of note include Singapore, South Korea, China and the Philippines, while the former British colony of Hong Kong is notable within rugby for its development of the rugby sevens game, especially the Hong Kong Sevens tournament, which was founded in 1976.
Rugby in the Middle East and the Gulf States has its history in the 1950s, with clubs formed by British and French Services stationed in the region after the Second World War. When these servicemen left, the clubs and teams were kept alive by young professionals, mostly Europeans, working in these countries. The official union of Oman was formed in 1971. Bahrain founded its union a year later, while in 1975 the Dubai Sevens, the Gulf's leading rugby tournament, was created. Rugby remains a minority sport in the region, with Israel and the United Arab Emirates, as of 2019, being the only member unions from the Middle East to be included in the IRB World Rankings.
In 1875, rugby was introduced to South Africa by British soldiers garrisoned in Cape Town. During the late 19th and early 20th century, the sport in Africa was spread by settlers and colonials who often adopted a "whites-only" policy to playing the game. This resulted in rugby being viewed as a bourgeois sport by the indigenous people with limited appeal. The earliest countries to see the playing of competitive rugby include South Africa, and neighbouring Rhodesia (modern-day Zimbabwe), which formed the Rhodesia Rugby Football Union in 1895.
In more recent times the sport has been embraced by several African nations. In the early 21st century Madagascar has experienced crowds of 40,000 at national matches, while Namibia, whose history of rugby can be dated from 1915, have qualified for the final stages of the World Cup four times since 1999. Other African nations to be represented in the World Rugby Rankings as Member Unions include Côte d'Ivoire, Kenya, Uganda and Zambia. South Africa and Kenya are among the 15 "core teams" that participate in every event of the men's World Rugby Sevens Series.
Records of women's rugby football date from the late 19th century, with the first documented source being Emily Valentine's writings, in which she states that she set up a rugby team in Portora Royal School in Enniskillen, Ireland in 1887. Although there are reports of early women's matches in New Zealand and France, one of the first notable games for which primary evidence survives is the 1917 war-time encounter between Cardiff Ladies and Newport Ladies; a photograph shows the Cardiff team before the match at Cardiff Arms Park. Since the 1980s, the game has grown in popularity among female athletes, and by 2010, according to World Rugby, women's rugby was being played in over 100 countries.
The English-based Women's Rugby Football Union (WRFU), responsible for women's rugby in England, Scotland, Ireland, and Wales, was founded in 1983, and is the oldest formally organised national governing body for women's rugby. This was replaced in 1994 by the Rugby Football Union for Women (RFUW) in England with each of the other Home Nations governing their own countries.
The premier international competition in rugby union for women is the Women's Rugby World Cup, first held in 1991; from 1994 through 2014, it was held every four years. After the 2014 event, the tournament was brought forward a year to 2017 to avoid clashing with other sporting cycles, in particular the Rugby World Cup Sevens competition. The Women's Rugby World Cup returned to a four-year cycle after 2017, with future competitions to be held in the middle year of the men's World Cup cycle.
The most important competition in rugby union is the Rugby World Cup, a men's tournament that has taken place every four years since the inaugural event in 1987. South Africa are the reigning champions, having defeated England in the final of the 2019 Rugby World Cup in Yokohama. New Zealand and South Africa have each won the title three times (New Zealand: 1987, 2011, 2015; South Africa: 1995, 2007, 2019), Australia have won twice (1991 and 1999), and England once (2003). England is the only team from the Northern Hemisphere to have won the Rugby World Cup.
The Rugby World Cup has continued to grow since its inception in 1987. The first tournament, in which 16 teams competed for the title, was broadcast to 17 countries with an accumulated total of 230 million television viewers. Ticket sales during the pool stages and finals of the same tournament were less than a million. The 2007 World Cup was contested by 94 countries with ticket sales of 3,850,000 over the pool and final stage. The accumulated television audience for the event, then broadcast to 200 countries, was a claimed 4.2 billion.
The 2019 Rugby World Cup took place in Japan between 20 September and 2 November. It was the ninth edition and the first time the tournament has been held in Asia.
Major international competitions are the Six Nations Championship and The Rugby Championship, held in Europe and the Southern Hemisphere respectively.
The Six Nations is an annual competition involving the European teams England, France, Ireland, Italy, Scotland and Wales. Each country plays the other five once. Following the first internationals between England and Scotland, Ireland and Wales began competing in the 1880s, forming the "Home International Championships". France joined the tournament in the 1900s and in 1910 the term "Five Nations" first appeared. However, the Home Nations (England, Ireland, Scotland, and Wales) excluded France in 1931 amid a run of poor results, allegations of professionalism and concerns over on-field violence. France then rejoined in 1939–1940, though World War II halted proceedings for a further eight years. France has played in all the tournaments since WWII, the first of which was played in 1947. In 2000, Italy became the sixth nation in the contest, and Rome's Stadio Olimpico has replaced Stadio Flaminio as the venue for Italy's home games since 2013. The current Six Nations champions are Wales.
The Rugby Championship is the Southern Hemisphere's annual international series for that region's top national teams. From its inception in 1996 through 2011, it was known as the Tri Nations, as it featured the hemisphere's traditional powers of Australia, New Zealand and South Africa. These teams have dominated world rankings in recent years, and many considered the Tri Nations to be the toughest competition in international rugby. The Tri Nations was initially played on a home and away basis with the three nations playing each other twice.
In 2006 a new system was introduced where each nation plays the others three times, though in 2007 and 2011 the teams played each other only twice, as both were World Cup years. Since Argentina's strong performances in the 2007 World Cup, after the 2009 Tri Nations tournament, SANZAR (South Africa, New Zealand and Australian Rugby) invited the Argentine Rugby Union (UAR) to join an expanded Four Nations tournament in 2012. The competition has been officially rechristened as The Rugby Championship beginning with the 2012 edition. The competition reverted to the Tri Nations' original home-and-away format, but now involving four teams. In World Cup years, an abbreviated tournament is held in which each team plays the others only once.
Rugby union was played at the Olympic Games in 1900, 1908, 1920 and 1924. As per Olympic rules, the nations of Scotland, Wales and England were not allowed to play separately as they are not sovereign states. In 1900, France won the gold, beating Great Britain 27 points to 8 and defeating Germany 27 points to 17. In 1908, Australia defeated Great Britain, claiming the gold medal, the score being 32 points to three. In 1920, the United States, fielding a team with many players new to the sport of rugby, upset France in a shock win, eight points to zero. In 1924, the United States again defeated France 17 to 3, becoming the only team to win gold twice in the sport.
In 2009 the International Olympic Committee voted, by a majority of 81 to 8, to reinstate rugby union as an Olympic sport for at least the 2016 and 2020 games, in the seven-a-side format played as a short tournament over several days. Olympic inclusion was a long-standing aspiration of the rugby world, and Bernard Lapasset, president of the International Rugby Board, said an Olympic gold medal in sevens would be considered "the pinnacle of our sport".
Rugby sevens has been played at the Commonwealth Games since the 1998 Games in Kuala Lumpur. New Zealand hold the most gold medals, having won the competition on four successive occasions before South Africa beat them in 2014. Rugby union has also been an Asian Games event since the 1998 games in Bangkok, Thailand. In the 1998 and 2002 editions of the games, both the usual fifteen-a-side variety and rugby sevens were played, but from 2006 onwards, only rugby sevens was retained. In 2010, the women's rugby sevens event was introduced. The event is likely to remain a permanent fixture of the Asian Games due to the elevation of rugby sevens to an Olympic sport from the 2016 Olympics onwards. The present gold medal holders in the sevens tournament, held in 2014, are Japan in the men's event and China in the women's.
Women's international rugby union began in 1982, with a match between France and the Netherlands played in Utrecht. As of 2009 over six hundred women's internationals have been played by over forty different nations.
The first Women's Rugby World Cup was held in Wales in 1991, and was won by the United States. The second tournament took place in 1994, and from that time through 2014 was held every four years. The New Zealand Women's team then won four straight World Cups (1998, 2002, 2006, 2010) before England won in 2014. Following the 2014 event, World Rugby moved the next edition of the event to 2017, with a new four-year cycle from that point forward. New Zealand are the current World Cup holders.
As well as the Women's Rugby World Cup there are also other regular tournaments, including a Six Nations, run in parallel to the men's competition. The Women's Six Nations, first played in 1996, has been dominated by England, who have won the tournament on 14 occasions, including a run of seven consecutive wins from 2006 to 2012. However, since then, England have won only in 2017; reigning champion France have won in each even-numbered year (2014, 2016, 2018) whilst Ireland won in 2013 and 2015.
Rugby union has been professionalised since 1995. The following table shows fully professional rugby competitions (semi-professional competitions are excluded from this list).
Rugby union has spawned several variants of the full-contact, 15-a-side game. The two most common differences in adapted versions are fewer players and reduced player contact.
The oldest variant is rugby sevens (sometimes 7s or VIIs), a fast-paced game which originated in Melrose, Scotland in 1883. In rugby sevens, there are only seven players per side, and each half is normally seven minutes. Major tournaments include the Hong Kong Sevens and Dubai Sevens, both held in areas not normally associated with the highest levels of the 15-a-side game.
A more recent variant of the sport is rugby tens (10s or Xs), a Malaysian invention with ten players per side.
Touch rugby, in which "tackles" are made by simply touching the ball carrier with two hands, is popular both as a training game and more formally as a mixed sex version of the sport played by both children and adults.
Several variants have been created to introduce the sport to children with a less physical contact. Mini rugby is a version aimed at fostering the sport in children. It is played with only eight players and on a smaller pitch.
Tag Rugby is a version in which the players wear a belt with two tags attached by velcro, the removal of either counting as a 'tackle'. Tag Rugby also varies in that kicking the ball is not allowed. Similar to Tag Rugby, American Flag Rugby (AFR) is a mixed-gender, non-contact imitation of rugby union designed for American children entering grades K-9. Both American Flag Rugby and Mini Rugby differ from Tag Rugby in that they introduce more advanced elements of rugby union as the participants age.
Other less formal variants include beach rugby and snow rugby.
Rugby league was formed after the Northern Union broke from the Rugby Football Union in a disagreement over payment to players. It went on to change its laws and became a code in its own right. The two sports continue to influence each other to this day.
American football and Canadian football are derived from early forms of rugby.
Australian rules football was influenced by rugby football and other games originating in English public schools.
James Naismith took aspects of many sports including rugby to invent basketball. The most obvious contribution is the jump ball's similarity to the line-out as well as the underhand shooting style that dominated the early years of the sport. Naismith played rugby at McGill University.
Swedish football was a code whose rules were a mix of Association and Rugby football rules.
Rugby lends its name to wheelchair rugby, a full-contact sport which contains elements of rugby such as crossing a try line with the ball to score.
According to a 2011 report by the Centre for the International Business of Sport, over four and a half million people play rugby union or one of its variants organised by the IRB. This is an increase of 19 percent since the previous report in 2007. The report also claimed that since 2007 participation has grown by 33 percent in Africa, 22 percent in South America and 18 percent in Asia and North America. In 2014 the IRB published a breakdown of the total number of players worldwide by national unions. It recorded a total of 6.6 million players globally, of those, 2.36 million were registered members playing for a club affiliated to their country's union. The 2016 World Rugby Year in Review reported 8.5 million players, of which 3.2 million were registered union players and 1.9 million were registered club players; 22% of all players were female.
The most capped international player from the tier 1 nations is former New Zealand openside flanker and captain Richie McCaw, who played in 148 internationals, while the top-scoring tier 1 international player is New Zealand's Dan Carter, who amassed 1,442 points during his career. In April 2010, Lithuania, a second-tier rugby nation, broke the record for consecutive international wins by a second-tier nation. In 2016, the All Blacks of New Zealand set a new record of 18 consecutive test wins among tier 1 rugby nations, bettering their previous run of 17; this record was equalled by England on 11 March 2017 with a win over Scotland at Twickenham. The highest-scoring international match between two recognised unions was Hong Kong's 164–13 victory over Singapore on 27 October 1994, while the largest winning margin of 152 points is held by two countries: Japan (a 155–3 win over Chinese Taipei) and Argentina (152–0 over Paraguay), both in 2002.
The record attendance for a rugby union game was set on 15 July 2000 in which New Zealand defeated Australia 39–35 in a Bledisloe Cup game at Stadium Australia in Sydney before 109,874 fans. The record attendance for a match in Europe of 104,000 (at the time a world record) was set on 1 March 1975 when Scotland defeated Wales 12–10 at Murrayfield in Edinburgh during the 1975 Five Nations Championship. The record attendance for a domestic club match is 99,124, set when Racing 92 defeated Toulon in the 2016 Top 14 final on 24 June at Camp Nou in Barcelona. The match had been moved from its normal site of Stade de France near Paris due to scheduling conflicts with France's hosting of UEFA Euro 2016.
Thomas Hughes's 1857 novel "Tom Brown's Schooldays", set at Rugby School, includes a rugby football match, also portrayed in the 1940s film of the same name. James Joyce mentions Irish team Bective Rangers in several of his works, including "Ulysses" (1922) and "Finnegans Wake" (1939), while his 1916 semi-autobiographical work "A Portrait of the Artist as a Young Man" has an account of Ireland international James Magee. Sir Arthur Conan Doyle, in his 1924 Sherlock Holmes tale "The Adventure of the Sussex Vampire", mentions that Dr Watson played rugby for Blackheath.
Henri Rousseau's 1908 work "Joueurs de football" shows two pairs of rugby players competing. Other French artists to have represented the sport in their works include Albert Gleizes' "Les Joueurs de football" (1912), Robert Delaunay's "Football. L'Équipe de Cardiff" (1916) and André Lhote's "Partie de Rugby" (1917). The 1928 Gold Medal for Art at the Amsterdam Olympics was won by Luxembourg's Jean Jacoby for his work "Rugby".
In film, Ealing Studios' 1949 comedy "A Run for Your Money" and the 1979 BBC Wales television film "Grand Slam" both centre on fans attending a match. Films that explore the sport in more detail include independent production "Old Scores" (1991) and "Forever Strong" (2008). "Invictus" (2009), based on John Carlin's book "Playing the Enemy", explores the events of the 1995 Rugby World Cup and Nelson Mandela's attempt to use the sport to connect South Africa's people post-apartheid.
In public art and sculpture there are many works dedicated to the sport. There is a 27 ft bronze statue of a rugby line-out by pop artist Gerald Laing at Twickenham and one of rugby administrator Sir Tasker Watkins at the Millennium Stadium. Rugby players to have been honoured with statues include Gareth Edwards in Cardiff and Danie Craven in Stellenbosch.
Rugby World Cup
The Rugby World Cup is a men's rugby union tournament contested every four years between the top international teams. It was first held in 1987, co-hosted by New Zealand and Australia.
The winners are awarded the Webb Ellis Cup, named after William Webb Ellis, the Rugby School pupil who, according to a popular legend, invented rugby by picking up the ball during a football game. Four countries have won the trophy; New Zealand and South Africa three times, Australia twice, and England once. South Africa are the current champions, having defeated England in the final of the 2019 tournament in Japan.
The tournament is administered by World Rugby, the sport's international governing body. Sixteen teams were invited to participate in the inaugural tournament in 1987; since 1999, twenty teams have taken part. Japan hosted the 2019 Rugby World Cup, and France will host the next in 2023.
On 21 August 2019, World Rugby announced that gender designations would be removed from the titles of the men's and women's World Cups. Accordingly, all future World Cups for men and women will officially bear the "Rugby World Cup" name. The first tournament to be affected by the new policy will be the next women's tournament to be held in New Zealand in 2021, which will officially be titled as "Rugby World Cup 2021".
The inaugural World Cup in 1987 did not involve any qualifying process; instead, the 16 places were automatically filled by seven eligible International Rugby Football Board (IRFB, now World Rugby) member nations, and the rest by invitation. Qualifying tournaments were introduced for the second tournament, where eight of the sixteen places were contested in a twenty-four-nation tournament.
In 2003 and 2007, the qualifying format allowed for eight of the twenty available positions to be filled by automatic qualification, as the eight quarter-finalists of the previous tournament enter its successor. The remaining twelve positions were filled by continental qualifying tournaments. Positions were filled by three teams from the Americas, one from Asia, one from Africa, three from Europe and two from Oceania. Another two places were allocated for repechage. The first repechage place was determined by a match between the runners-up from the Africa and Europe qualifying tournaments, with that winner then playing the Americas runner-up to determine the place. The second repechage position was determined between the runners-up from the Asia and Oceania qualifiers.
The current format allows for 12 of the 20 available positions to be filled by automatic qualification, as the teams who finish third or better in the group (pool) stages of the previous tournament enter its successor (where they will be seeded). The qualification system for the remaining eight places is region-based, with a total eight teams allocated for Europe, five for Oceania, three for the Americas, two for Africa, and one for Asia. The last place is determined by an intercontinental play-off.
The 2015 tournament involved twenty nations competing over six weeks. There were two stages, a pool and a knockout. Nations were divided into four pools, A through to D, of five nations each. The teams were seeded before the start of the tournament, with the seedings taken from the World Rankings in December 2012. The four highest-ranked teams were drawn into pools A to D. The next four highest-ranked teams were then drawn into pools A to D, followed by the next four. The remaining positions in each pool were filled by the qualifiers.
Nations play four pool games, playing their respective pool members once each. A bonus points system is used during pool play. If two or more teams are level on points, a system of criteria is used to determine the higher ranked; the sixth and final criterion decides the higher rank through the official World Rankings.
The winner and runner-up of each pool enter the knockout stage. The knockout stage consists of quarter- and semi-finals, and then the final. The winner of each pool is placed against a runner-up of a different pool in a quarter-final. The winner of each quarter-final goes on to the semi-finals, and the respective winners proceed to the final. Losers of the semi-finals contest for third place, called the 'Bronze Final'. If a match in the knockout stages ends in a draw, the winner is determined through extra time. If that fails, the match goes into sudden death and the next team to score any points is the winner. As a last resort, a kicking competition is used.
Prior to the Rugby World Cup, there was no truly global rugby union competition, but there were a number of other tournaments. One of the oldest is the annual Six Nations Championship, which started in 1883 as the Home Nations Championship, a tournament between England, Ireland, Scotland and Wales. It expanded to the Five Nations in 1910, when France joined the tournament. France did not participate from 1931 to 1939, during which period it reverted to a Home Nations championship. In 2000, Italy joined the competition, which became the Six Nations.
Rugby union was also played at the Summer Olympic Games, first appearing at the 1900 Paris games and subsequently at London in 1908, Antwerp in 1920, and Paris again in 1924. France won the first gold medal, followed by Australasia, with the last two titles won by the United States. However, rugby union ceased to be on the Olympic programme after 1924.
The idea of a Rugby World Cup had been suggested on numerous occasions going back to the 1950s, but met with opposition from most unions in the IRFB. The idea resurfaced several times in the early 1980s, with the Australian Rugby Union (ARU; now known as Rugby Australia) in 1983, and the New Zealand Rugby Union (NZRU; now known as New Zealand Rugby) in 1984 independently proposing the establishment of a world cup. A proposal was again put to the IRFB in 1985 and this time passed 10–6. The delegates from Australia, France, New Zealand and South Africa all voted for the proposal, and the delegates from Ireland and Scotland against; the English and Welsh delegates were split, with one from each country for and one against.
The inaugural tournament, jointly hosted by Australia and New Zealand, was held in May and June 1987, with sixteen nations taking part. New Zealand became the first-ever champions, defeating France 29–9 in the final. The subsequent 1991 tournament was hosted by England, with matches played throughout Britain, Ireland and France. This tournament saw the introduction of a qualifying tournament; eight places were allocated to the quarter-finalists from 1987, and the remaining eight decided by a thirty-five nation qualifying tournament. Australia won the second tournament, defeating England 12–6 in the final.
In 1992, eight years after their last official series, South Africa hosted New Zealand in a one-off test match. The resumption of international rugby in South Africa came after the dismantling of the apartheid system, and was only done with permission of the African National Congress. With their return to test rugby, South Africa were selected to host the 1995 Rugby World Cup. After upsetting Australia in the opening match, South Africa continued to advance through the tournament until they met New Zealand in the final. After a tense final that went into extra time, South Africa emerged 15–12 winners, with then President Nelson Mandela, wearing a Springbok jersey, presenting the trophy to South Africa's captain, Francois Pienaar.
The tournament in 1999 was hosted by Wales with matches also being held throughout the rest of the United Kingdom, Ireland and France. The tournament included a repechage system, alongside specific regional qualifying places, and an increase from sixteen to twenty participating nations. Australia claimed their second title, defeating France in the final.
The 2003 event was hosted by Australia, although it was originally intended to be held jointly with New Zealand. England emerged as champions defeating Australia in extra time. England's win was unique in that it broke the southern hemisphere's dominance in the event. Such was the celebration of England's victory that an estimated 750,000 people gathered in central London to greet the team, making the day the largest sporting celebration of its kind ever in the United Kingdom.
The 2007 competition was hosted by France, with matches also being held in Wales and Scotland. South Africa claimed their second title by defeating defending champions England 15–6. The 2011 tournament was awarded to New Zealand in November 2005, ahead of bids from Japan and South Africa. The All Blacks reclaimed their place atop the rugby world with a narrow 8–7 win over France in the 2011 final.
In the 2015 edition of the tournament, hosted by England, New Zealand once again won the final, this time against established rivals Australia. In doing so, they became the first team in World Cup history to win three titles, as well as the first to successfully defend a title. It was also New Zealand's first title victory on foreign soil.
The 2019 World Cup, hosted by Japan, saw South Africa claim their third trophy to match New Zealand for the most Rugby World Cup titles. South Africa defeated England 32–12 in the final.
The Webb Ellis Cup is the prize presented to winners of the Rugby World Cup, named after William Webb Ellis. The trophy is also referred to simply as the "Rugby World Cup". The trophy was chosen in 1987 as an appropriate cup for use in the competition, and was created in 1906 by Garrard's Crown Jewellers. The trophy is restored after each game by fellow Royal Warrant holder Thomas Lyte. The words 'The International Rugby Football Board' and 'The Webb Ellis Cup' are engraved on the face of the cup. It stands thirty-eight centimetres high and is silver gilded in gold, and supported by two cast scroll handles, one with the head of a satyr, and the other a head of a nymph. In Australia the trophy is colloquially known as "Bill" — a reference to William Webb Ellis.
Tournaments are organised by Rugby World Cup Ltd (RWCL), which is itself owned by World Rugby. The selection of host is decided by a vote of World Rugby Council members. The voting procedure is managed by a team of independent auditors, and the voting kept secret. The allocation of a tournament to a host nation is now made five or six years prior to the commencement of the event, for example New Zealand were awarded the 2011 event in late 2005.
The tournament has been hosted by multiple nations. For example, the 1987 tournament was co-hosted by Australia and New Zealand. World Rugby requires that the hosts must have a venue with a capacity of at least 60,000 spectators for the final. Host nations sometimes construct or upgrade stadia in preparation for the World Cup, such as Millennium Stadium – purpose built for the 1999 tournament – and Eden Park, upgraded for 2011. The first country outside of the traditional rugby nations of SANZAAR or the Six Nations to be awarded the hosting rights was 2019 host Japan. France will host the 2023 tournament.
Organizers of the Rugby World Cup, as well as the Global Sports Impact, state that the Rugby World Cup is the third largest sporting event in the world, behind only the FIFA World Cup and the Olympics, although other sources question whether this is accurate.
Reports emanating from World Rugby and its business partners have frequently touted the tournament's media growth, with cumulative worldwide television audiences of 300 million for the inaugural 1987 tournament, 1.75 billion in 1991, 2.67 billion in 1995, 3 billion in 1999, 3.5 billion in 2003, and 4 billion in 2007. The 4 billion figure was widely dismissed as the global audience for television is estimated to be about 4.2 billion.
However, independent reviews have called into question the methodology of those growth estimates, pointing to factual inconsistencies. The event's supposed drawing power outside of a handful of rugby strongholds was also downplayed significantly, with an estimated 97 percent of the 33 million average audience produced by the 2007 final coming from Australasia, South Africa, the British Isles and France. Other sports have been accused of exaggerating their television reach over the years; such claims are not exclusive to the Rugby World Cup.
While the event's global popularity remains a matter of dispute, high interest in traditional rugby nations is well documented. The 2003 final, between Australia and England, became the most watched rugby union match in the history of Australian television.
†Typhoon Hagibis caused 3 group stage matches to be cancelled. As a result, only 45 of the scheduled 48 matches were played in the 2019 Rugby World Cup.
Notes:
Twenty-five nations have participated at the Rugby World Cup (excluding qualifying tournaments). The only nations to host and win a tournament are New Zealand (1987 and 2011) and South Africa (1995). The performance of other host nations includes England (1991 final hosts) and Australia (2003 hosts) both finishing runners-up, while France (2007 hosts) finished fourth, and Wales (1999 hosts) and Japan (2019 hosts) reached the quarter-finals. Wales became the first host nation to be eliminated at the pool stages in 1991 while England became the first solo host nation to be eliminated at the pool stages in 2015. Of the twenty-five nations that have participated in at least one tournament, eleven of them have never missed a tournament.
1 South Africa was excluded from the first two tournaments due to a sporting boycott during the apartheid era.
The record for most points overall is held by English player Jonny Wilkinson, who scored 277 during his World Cup career. New Zealand All Black Grant Fox holds the record for most points in one competition, with 126 in 1987; Jason Leonard of England holds the record for most World Cup matches: 22 between 1991 and 2003. All Black Simon Culhane holds the record for most points in a match by one player, 45, as well as the record for most conversions in a match, 20. All Black Marc Ellis holds the record for most tries in a match, six, which he scored against Japan in 1995.
New Zealand All Black Jonah Lomu is the youngest player to appear in a final – aged 20 years and 43 days at the 1995 Final. Lomu (playing in two tournaments) and South African Bryan Habana (playing in three tournaments) share the record for most total World Cup tournament tries, both scoring 15. Lomu (in 1999) and Habana (in 2007) also share the record, along with All Black Julian Savea (in 2015), for most tries in a tournament, with 8 each. South Africa's Jannie de Beer kicked five drop-goals against England in 1999 – an individual record for a single World Cup match. The record for most penalties in a match is 8, held by Australian Matt Burke, Argentinian Gonzalo Quesada, Scotland's Gavin Hastings and France's Thierry Lacroix, with Quesada also holding the record for most penalties in a tournament, with 31.
The most points scored in a game is 145, by the All Blacks against Japan in 1995, while the widest winning margin is 142, held by Australia in a match against Namibia in 2003.
A total of 16 players have been sent off (red carded) in the tournament. Welsh lock Huw Richards was the first, while playing against New Zealand in 1987. No player has been red carded more than once.
Recursion
Recursion (adjective: "recursive") occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no infinite loop or infinite chain of references can occur.
In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties: a simple base case (or cases), a terminating scenario that does not use recursion to produce an answer; and a recursive step, a set of rules that reduces all successive cases toward the base case.
For example, the following is a recursive definition of a person's "ancestor". One's ancestor is either: one's parent (base case), or an ancestor of one's parent (recursive step).
The Fibonacci sequence is another classic example of recursion:
Fib(0) = 0 (base case 1)
Fib(1) = 1 (base case 2)
For all integers n > 1, Fib(n) = Fib(n − 1) + Fib(n − 2)
Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: "Zero is a natural number, and each natural number has a successor, which is also a natural number." By this base case and recursive rule, one can generate the set of all natural numbers.
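The Peano construction above can be transcribed almost literally into code. The following C sketch (the names `Nat`, `succ` and `add` are illustrative, not from the article) represents a natural number as either zero or the successor of another natural number, and defines addition by recursion on that structure:

```c
#include <stdlib.h>

/* A Peano natural is either zero (represented as NULL) or the
 * successor of another natural: a direct transcription of the rule
 * "zero is a natural number, and each natural number has a successor". */
typedef struct Nat { struct Nat *pred; } Nat;

Nat *zero(void) { return NULL; }

Nat *succ(Nat *n) {
    Nat *s = malloc(sizeof *s);
    s->pred = n;
    return s;
}

/* Recover a machine integer by recursively peeling off successors. */
unsigned to_uint(const Nat *n) {
    return n == NULL ? 0 : 1 + to_uint(n->pred);
}

/* Addition defined recursively: a + 0 = a, and a + succ(b) = succ(a + b). */
Nat *add(Nat *a, Nat *b) {
    return b == NULL ? a : succ(add(a, b->pred));
}
```

For example, `to_uint(add(succ(succ(zero())), succ(zero())))` evaluates to 3: the recursion in `add` bottoms out at zero, mirroring the base case of the axioms. (Memory management is omitted for brevity.)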
Other recursively defined mathematical objects include factorials, functions (e.g., recurrence relations), sets (e.g., Cantor ternary set), and fractals.
There are various more tongue-in-cheek definitions of recursion; see recursive humor.
Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.
To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps.
Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure.
When a procedure is defined as such, this immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete.
But even if it is properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old, partially executed invocation of the procedure; this requires some administration as to how far various simultaneous instances of the procedures have progressed. For this reason, recursive definitions are very rare in everyday situations.
Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.
This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: "Dorothy thinks witches are dangerous", in which the sentence "witches are dangerous" occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. This is really just a special case of the mathematical definition of recursion.
This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it immediately predicts that sentences can be of arbitrary length: "Dorothy thinks that Toto suspects that Tin Man said that...". There are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another. Over the years, languages in general have proved amenable to this kind of analysis.
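This unbounded embedding is easy to mimic in code. The short C sketch below (purely illustrative; the function name `embed` and the sentence fragments are invented for this example) builds a grammatical sentence of arbitrary depth by recursively nesting one clause inside a larger one:

```c
#include <string.h>

/* Recursively realize the rule  Sentence -> Noun "thinks that" Sentence,
 * with the plain clause "it is true" as the base case.  The caller must
 * supply at least `depth` names and a buffer large enough for the result. */
void embed(char *out, const char *const *names, int depth) {
    if (depth == 0) {
        strcat(out, "it is true");    /* base case: no further embedding */
        return;
    }
    strcat(out, names[0]);
    strcat(out, " thinks that ");
    embed(out, names + 1, depth - 1); /* recursive step: embed a sentence */
}
```

Calling it with the names `{"Dorothy", "Toto"}` and depth 2 on an empty buffer yields "Dorothy thinks that Toto thinks that it is true"; increasing the depth produces ever longer, still grammatical sentences.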
Recently, however, the generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this. Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.
Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word "and", for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. In order to provide a single denotation for it that is suitably flexible, "and" is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.
A recursive grammar is a formal grammar that contains recursive production rules.
Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of:
A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book "The C Programming Language"; the index entry recursively references itself ("recursion 86, 139, 141, 182, 202, 269"). Early versions of this joke can be found in "Let's talk Lisp" by Laurent Siklóssy (published by Prentice Hall PTR on December 1, 1975 with a copyright date of 1976) and in "Software Tools" by Kernighan and Plauger (published by Addison-Wesley Professional on January 11, 1976). The joke also appears in "The UNIX Programming Environment" by Kernighan and Pike. It did not appear in the first edition of "The C Programming Language". The joke is part of the Functional programming folklore and was already widespread in the functional programming community before the publication of the aforementioned books.
Another joke is that "To understand recursion, you must understand recursion." In the English-language version of the Google web search engine, when a search for "recursion" is made, the site suggests "Did you mean: recursion." An alternative form is the following, from Andrew Plotkin: "If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is."
Recursive acronyms are other examples of recursive humor. PHP, for example, stands for "PHP Hypertext Preprocessor", WINE stands for "WINE Is Not an Emulator", GNU stands for "GNU's not Unix", and SPARQL denotes the "SPARQL Protocol and RDF Query Language".
The canonical example of a recursively defined set is given by the natural numbers: 0 is in ℕ, and if n is in ℕ then n + 1 is in ℕ; the set of natural numbers ℕ is the smallest set satisfying these two properties.
In mathematical logic, the Peano axioms (or Peano postulates or Dedekind–Peano axioms), are axioms for the natural numbers presented in the 19th century by the German mathematician Richard Dedekind and by the Italian mathematician Giuseppe Peano. The Peano Axioms define the natural numbers referring to a recursive successor function and addition and multiplication as recursive functions.
Another interesting example is the set of all "provable" propositions in an axiomatic system, defined in terms of a proof procedure which is inductively (or recursively) defined as follows: if a proposition is an axiom, it is a provable proposition; and if a proposition can be derived from provable propositions by means of the rules of inference, it is a provable proposition.
Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard `middle thirds' technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.
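The middle-thirds rule can be sketched as a recursive procedure. The toy C function below (an illustrative sketch, not from the article) applies the subdivision rule and counts the intervals that remain, a number that doubles with each level:

```c
/* Apply the "middle thirds" subdivision rule recursively: each interval
 * [a, b] is replaced by its two outer thirds.  Returns the number of
 * intervals remaining after `depth` subdivisions, i.e. 2^depth. */
int cantor_intervals(double a, double b, int depth) {
    if (depth == 0)
        return 1;                  /* base case: keep the interval whole */
    double third = (b - a) / 3.0;
    return cantor_intervals(a, a + third, depth - 1)   /* left third  */
         + cantor_intervals(b - third, b, depth - 1);  /* right third */
}
```

After three subdivisions of [0, 1], eight intervals remain, each of length 1/27; iterating the rule forever yields the Cantor set itself.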
A function may be recursively defined in terms of itself. A familiar example is the Fibonacci number sequence: "F"("n") = "F"("n" − 1) + "F"("n" − 2). For such a definition to be useful, it must be reducible to non-recursively defined values: in this case "F"(0) = 0 and "F"(1) = 1.
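Written in C, in the same style as the factorial example later in this article, the definition translates directly; the base values are what make the recursion terminate:

```c
/* Direct transcription of F(n) = F(n-1) + F(n-2),
 * reducible to the non-recursive base values F(0) = 0 and F(1) = 1. */
unsigned long long fib(unsigned int n) {
    if (n < 2)
        return n;                    /* base cases F(0) and F(1) */
    return fib(n - 1) + fib(n - 2);  /* recursive step */
}
```

For example, `fib(10)` returns 55. Note that this naive form recomputes the same subproblems many times; it is correct, but exponential in n.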
A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, cannot be expressed without recursion.
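A standard definition of the Ackermann function, rendered here as a C sketch, shows the nested self-application that defeats any attempt to rewrite it with simple bounded loops:

```c
/* The two-argument Ackermann function.  It is total and computable,
 * but not primitive recursive: note the recursive call nested inside
 * another recursive call in the final case. */
unsigned long ackermann(unsigned long m, unsigned long n) {
    if (m == 0)
        return n + 1;
    if (n == 0)
        return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}
```

Even tiny inputs explode: `ackermann(2, 3)` is 9, `ackermann(3, 3)` is 61, and `ackermann(4, 2)` already has 19,729 decimal digits.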
Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction — a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.
Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).
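A minimal illustration in C (a sketch; the table size assumes 64-bit values) restates the recursive Fibonacci definition bottom-up, the simplest form of this idea: the value at each step is written in terms of values already computed at earlier steps:

```c
/* Bottom-up dynamic programming for F(n): rather than re-deriving each
 * value from the recursive definition, tabulate from the base cases
 * upward so every subproblem is solved exactly once. */
unsigned long long fib_dp(unsigned int n) {
    /* F(93) is the largest Fibonacci number that fits in 64 bits. */
    unsigned long long table[94] = {0, 1};
    if (n > 93)
        return 0;       /* out of range for 64-bit results */
    if (n < 2)
        return n;
    for (unsigned int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];  /* step in terms of earlier steps */
    return table[n];
}
```

This runs in linear time, in contrast to the exponential cost of evaluating the recursive definition naively.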
In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f : X → X, the theorem states that there is a unique function F : ℕ → X (where ℕ denotes the set of natural numbers including zero) such that

F(0) = a,
F(n + 1) = f(F(n))

for any natural number n.

Proof of uniqueness. Take two functions F : ℕ → X and G : ℕ → X such that:

F(0) = a and G(0) = a,
F(n + 1) = f(F(n)) and G(n + 1) = f(G(n)),

where a is an element of X.

It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:

Base case: F(0) = a = G(0), so the equality holds for n = 0.
Inductive step: Suppose F(k) = G(k) for some k ∈ ℕ. Then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1).

By induction, F(n) = G(n) for all n ∈ ℕ.
A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming. This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.
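Merge sort is the textbook instance of divide and conquer, and a compact C sketch makes the recursive shape visible: split, solve each half by the same procedure, then combine:

```c
#include <string.h>

/* Divide and conquer: recursively sort each half of the array
 * (smaller instances of the same problem), then merge the two
 * sorted halves.  Arrays of length 0 or 1 are the base case. */
void merge_sort(int *a, int n) {
    if (n < 2)
        return;                        /* base case: already sorted */
    int mid = n / 2;
    merge_sort(a, mid);                /* solve the left half  */
    merge_sort(a + mid, n - mid);      /* solve the right half */

    int tmp[n];                        /* combine: merge the sorted halves */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof *a);
}
```

(The variable-length `tmp` array is a C99 feature; a production version would allocate scratch space once up front.)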
A classic example of recursion is the definition of the factorial function, given here in C code:
unsigned int factorial(unsigned int n) {
    if (n == 0)
        return 1;                    /* base case: 0! = 1 */
    return n * factorial(n - 1);     /* recursive step */
}
The function calls itself recursively on a smaller version of the input, n − 1, and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.
Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.
Recurrence relations are equations which define one or more sequences recursively. Some specific kinds of recurrence relation can be "solved" to obtain a non-recursive definition (e.g., a closed-form expression).
Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually the simplicity of instructions. The main disadvantage is that the memory usage of recursive algorithms may grow very quickly, rendering them impractical for larger instances.
Shapes that seem to have been created by recursive processes sometimes appear in plants and animals, e.g. in branching structures where one large part branches out to two or more similar smaller parts. One example is Romanesco broccoli.
The Russian Doll or Matryoshka doll is a physical artistic example of the recursive concept.
Recursion has been used in paintings since Giotto's "Stefaneschi Triptych", made in 1320. Its central panel contains the kneeling figure of Cardinal Stefaneschi, holding up the triptych itself as an offering.
M. C. Escher's "Print Gallery" (1956) is a print which depicts a distorted city containing a gallery which recursively contains the picture, and so "ad infinitum".
Robert Byrd
Robert Carlyle Byrd (born Cornelius Calvin Sale Jr.; November 20, 1917 – June 28, 2010) was an American politician who served as a United States Senator from West Virginia for over 51 years, from 1959 until his death in 2010. A member of the Democratic Party, Byrd previously served as a U.S. Representative from 1953 until 1959. He was the longest-serving U.S. Senator in history and the longest-serving member in the history of the United States Congress until surpassed by Representative John Dingell of Michigan; he was the last remaining member of the U.S. Senate to have served during the presidency of Dwight Eisenhower and the last remaining member of Congress to have served during the presidency of Harry S. Truman. Byrd was also the only West Virginian to have served in both chambers of the state legislature and both chambers of Congress.
Byrd served in the West Virginia House of Delegates from 1947 to 1950, and the West Virginia State Senate from 1950 to 1952. Initially elected to the United States House of Representatives in 1952, Byrd served there for six years before being elected to the Senate in 1958. He rose to become one of the Senate's most powerful members, serving as secretary of the Senate Democratic Caucus from 1967 to 1971 and—after defeating his longtime colleague, Ted Kennedy—as Senate Majority Whip from 1971 to 1977. Over the next three decades, Byrd led the Democratic caucus in numerous roles depending on whether his party held control of the Senate, including Senate Majority Leader, Senate Minority Leader, President pro tempore of the United States Senate and President pro tempore emeritus. As President pro tempore—a position he held four times in his career—he was third in the line of presidential succession, after the Vice President and the Speaker of the House of Representatives.
Serving three different tenures as Chairman of the United States Senate Committee on Appropriations enabled Byrd to steer a great deal of federal money toward projects in West Virginia. Critics derided his efforts as pork barrel spending, while Byrd argued that the many federal projects he worked to bring to West Virginia represented progress for the people of his state. He filibustered against the 1964 Civil Rights Act and supported the Vietnam War, but later renounced racism and segregation, and spoke in opposition to the Iraq War. Renowned for his knowledge of Senate precedent and parliamentary procedure, Byrd wrote a four-volume history of the Senate in later life.
Near the end of his life, Byrd was in declining health and was hospitalized several times. He died in office on June 28, 2010, at the age of 92. Byrd is the oldest member of Congress to die in office. He was buried at Columbia Gardens Cemetery in Arlington, Virginia.
Robert Byrd was born on November 20, 1917, as Cornelius Calvin Sale Jr. in North Wilkesboro, North Carolina, to Cornelius Calvin Sale Sr. and his wife Ada Mae (Kirby). When he was ten months old, his mother died in the 1918 flu pandemic. In accordance with his mother's wishes, his father dispersed their children among relatives. Calvin Jr. was adopted by his aunt and uncle, Titus and Vlurma Byrd, who changed his name to Robert Carlyle Byrd and raised him in the coal-mining region of southern West Virginia, primarily in the coal town of Stotesbury, West Virginia.
Byrd was valedictorian of his 1934 graduating class at Mark Twain High School.
On May 29, 1936, Byrd married Erma Ora James (June 12, 1917 – March 25, 2006) who was born to a coal mining family in Floyd County, Virginia. Her family moved to Raleigh County, West Virginia, where she met Byrd when they attended the same high school.
Robert Byrd had two daughters (Mona Byrd Fatemi and Marjorie Byrd Moore), six grandchildren, and seven great-grandchildren.
In the early 1940s, Byrd recruited 150 of his friends and associates to create a new chapter of the Ku Klux Klan in Sophia, West Virginia.
As a young boy, Byrd had witnessed his adoptive father walk in a Klan parade in Matoaka, West Virginia. While growing up, Byrd had heard that "the Klan defended the American way of life against racemixers and communists". He then wrote to Joel L. Baskin, Grand Dragon of the Realm of Virginia, West Virginia, Maryland and Delaware, who responded that he would come and organize a chapter when Byrd had recruited 150 people. Byrd’s house couldn't fit 150 people, so he arranged to hold the ceremony at the home of C.M. “Clyde” Goodwin, a former law enforcement officer who lived in Crab Orchard, West Virginia. When Baskin called for nominations for Exalted Cyclops, the highest-ranking official in the Klavern, Byrd was nominated and quickly elected by unanimous vote.
It was Baskin who told Byrd, "You have a talent for leadership, Bob ... The country needs young men like you in the leadership of the nation." Byrd later recalled, "Suddenly lights flashed in my mind! Someone important had recognized my abilities! I was only 23 or 24 years old, and the thought of a political career had never really hit me. But strike me that night, it did." Byrd became a recruiter and leader of his chapter.
In December 1944, Byrd wrote to segregationist Mississippi Senator Theodore G. Bilbo:
In 1946, Byrd wrote a letter to a Grand Wizard stating, "The Klan is needed today as never before, and I am anxious to see its rebirth here in West Virginia and in every state in the nation." However, when running for the United States House of Representatives in 1952, he announced "After about a year, I became disinterested, quit paying my dues, and dropped my membership in the organization. During the nine years that have followed, I have never been interested in the Klan." He said he had joined the Klan because he felt it offered excitement and was anti-communist.
Byrd later called joining the KKK "the greatest mistake I ever made." In 1997, he told an interviewer he would encourage young people to become involved in politics but also warned, "Be sure you avoid the Ku Klux Klan. Don't get that albatross around your neck. Once you've made that mistake, you inhibit your operations in the political arena." In his last autobiography, Byrd explained that he was a KKK member because he "was sorely afflicted with tunnel vision—a jejune and immature outlook—seeing only what I wanted to see because I thought the Klan could provide an outlet for my talents and ambitions." Byrd also said in 2005, "I know now I was wrong. Intolerance had no place in America. I apologized a thousand times ... and I don't mind apologizing over and over again. I can't erase what happened."
Byrd worked as a gas station attendant, a grocery store clerk, a shipyard welder during World War II, and a butcher before he won a seat in the West Virginia House of Delegates in 1946, representing Raleigh County from 1947 to 1950. Byrd became a local celebrity after a radio station in Beckley began broadcasting his "fiery fundamentalist lessons." In 1950, he was elected to the West Virginia Senate, where he served from December 1950 to December 1952.
In 1951, Byrd was among the official witnesses of the execution of Harry Burdette and Fred Painter, which was the first use of the electric chair in West Virginia. In 1965 the state abolished capital punishment, with the last execution having occurred in 1959.
Early in his career Byrd attended Beckley College, Concord College, Morris Harvey College, Marshall College, and George Washington University Law School, and joined the Tau Kappa Epsilon fraternity.
Byrd began night classes at American University Washington College of Law in 1953, while a member of the United States House of Representatives. He earned his J.D. "cum laude" a decade later, by which time he was a U.S. Senator. President John F. Kennedy spoke at the commencement ceremony on June 10, 1963 and presented the graduates their diplomas, including Byrd. Byrd completed law school in an era when undergraduate degrees were not a requirement. He later decided to complete his Bachelor of Arts degree in political science, and in 1994 he graduated "summa cum laude" from Marshall University.
In 1952, Byrd was elected to the United States House of Representatives for West Virginia's 6th congressional district, succeeding E. H. Hedrick, who retired from the House to make an unsuccessful run for the Democratic nomination for Governor. Byrd was re-elected twice from this district, anchored in Charleston and also including his home in Sophia, serving from January 3, 1953 to January 3, 1959. Byrd defeated Republican incumbent W. Chapman Revercomb for the United States Senate in 1958. Revercomb's record supporting civil rights had become an issue, playing in Byrd's favor. Byrd was re-elected to the Senate eight times. He was West Virginia's junior senator for his first four terms; his colleague from 1959 to 1985 was Jennings Randolph, who had been elected on the same day as Byrd's first election in a special election to fill the seat of the late Senator Matthew Neely.
While Byrd faced some vigorous Republican opposition in his career, his last serious electoral opposition occurred in 1982 when he was challenged by freshman Congressman Cleve Benedict. Despite his tremendous popularity in the state, Byrd ran unopposed only once, in 1976. On three other occasions – in 1970, 1994 and 2000 – he won all 55 of West Virginia's counties. In his re-election bid in 2000, he won all but seven precincts. Congresswoman Shelley Moore Capito, the daughter of one of Byrd's longtime foes, former governor Arch Moore Jr., briefly considered a challenge to Byrd in 2006 but decided against it. Capito's district covered much of the territory Byrd had represented in the U.S. House.
In the 1960 Democratic presidential election primaries, Byrd – a close Senate ally of Lyndon B. Johnson – endorsed and campaigned for Hubert Humphrey over front-runner John F. Kennedy in the state's crucial primary. However, Kennedy won the state's primary and eventually the general election.
Byrd was elected to a record ninth consecutive full Senate term on November 7, 2006. He became the longest-serving senator in American history on June 12, 2006, surpassing Strom Thurmond of South Carolina with 17,327 days of service. On November 18, 2009, Byrd became the longest-serving member in congressional history, with 56 years, 320 days of combined service in the House and Senate, passing Carl Hayden of Arizona. Previously, Byrd had held the record for the longest unbroken tenure in the Senate (Thurmond resigned during his first term and was re-elected seven months later). He is the only senator ever to serve more than 50 years. Including his tenure as a state legislator from 1947 to 1953, Byrd's service on the political front exceeded 60 continuous years. Byrd, who never lost an election, cast his 18,000th vote on June 21, 2007, the most of any senator in history. John Dingell broke Byrd's record as longest-serving member of Congress on June 7, 2013.
Upon the death of former Florida Senator George Smathers on January 20, 2007, Byrd became the last living United States Senator from the 1950s.
Having taken part in the admission of Alaska and Hawaii to the union, Byrd was the last surviving senator to have voted on a bill granting statehood to a U.S. territory. At the time of Byrd's death, fourteen sitting or former members of the Senate had not been born when Byrd's tenure in the Senate began, President Barack Obama among them.
These are the committee assignments for Sen. Byrd's 9th and final term.
Byrd was a member of the wing of the Democratic Party that opposed federally-mandated desegregation and civil rights. However, despite his early career in the KKK, Byrd was linked to such senators as John C. Stennis, J. William Fulbright and George Smathers, who based their segregationist positions on their view of states' rights in contrast to senators like James Eastland, who held a reputation as a committed racist.
Byrd joined with Democratic senators to filibuster the Civil Rights Act of 1964, personally filibustering the bill for 14 hours, a move he later said he regretted. Despite an 83-day filibuster in the Senate, both parties in Congress voted overwhelmingly in favor of the Act (Democrats 47–16, Republicans 30–2), and President Johnson signed the bill into law. Byrd cast no vote on the Voting Rights Act of 1965, and voted against the confirmation of Thurgood Marshall to the U.S. Supreme Court. He did not sign the 1956 Southern Manifesto and voted for the Civil Rights Acts of 1957, 1960, and 1968, as well as the 24th Amendment to the U.S. Constitution. In 2005, Byrd told "The Washington Post" that his membership in the Baptist church led to a change in his views. In the opinion of one reviewer, Byrd, like other Southern and border-state Democrats, came to realize that he would have to temper "his blatantly segregationist views" and move to the Democratic Party mainstream if he wanted to play a role nationally.
In February 1968, Byrd questioned General Earle Wheeler during the latter's testimony to the Senate Armed Services Committee. During a White House meeting between President Johnson and congressional Democratic leaders on February 6, Byrd voiced his concerns about the ongoing Vietnam War, citing the US's lack of intelligence and preparation, its underestimation of the morale and vitality of the Viet Cong, and its overestimation of the support Americans would receive from South Vietnam.
President Johnson rejected Byrd's observations, replying: "Anyone can kick a barn down. It takes a good carpenter to build one."
During the 1968 Democratic Party presidential primaries, Byrd supported the incumbent President Johnson. Of the challenging Robert F. Kennedy, Byrd said, "Bobby-come-lately has made a mistake. I won't even listen to him. There are many who liked his brother—as Bobby will find out—but who don't like him." Byrd praised Chicago Mayor Richard J. Daley's police response to protest activity at that year's Democratic National Convention, stating that the violence that resulted was the fault of the protesters, while the police only tried to restore order. Vice President Hubert Humphrey won the presidential nomination, and Byrd campaigned for him that fall.
Byrd served in the Senate Democratic leadership. He succeeded George Smathers as secretary of the Senate Democratic Conference from 1967 to 1971. He unseated Ted Kennedy in 1971 to become majority whip, the second highest-ranking Democrat, a post he held until 1977. Smathers recalled that, "Ted was off playing. While Ted was away at Christmas, down in the islands, floating around having a good time with some of his friends, male and female, here was Bob up here calling on the phone. 'I want to do this, and would you help me?' He had it all committed so that when Teddy got back to town, Teddy didn't know what hit him, but it was already all over. That was Lyndon Johnson's style. Bob Byrd learned that from watching Lyndon Johnson." Byrd himself had told Smathers that "I have never in my life played a game of cards. I have never in my life had a golf club in my hand. I have never in life hit a tennis ball. I have—believe it or not—never thrown a line over to catch a fish. I don't do any of those things. I have only had to work all my life. And every time you told me about swimming, I don't know how to swim."
In 1976, Byrd was the "favorite son" Presidential candidate in West Virginia's primary. His easy victory gave him control of the delegation to the Democratic National Convention. Byrd had the inside track as majority whip but focused most of his time running for majority leader, more so than for re-election to the Senate, as he was virtually unopposed for his fourth term. By the time the vote for majority leader came, his lead was so secure that his lone rival, Minnesota's Hubert Humphrey, withdrew before the balloting took place. From 1977 to 1989 Byrd was the leader of the Senate Democrats, serving as majority leader from 1977 to 1981 and 1987 to 1989, and as minority leader from 1981 to 1987.
Byrd was known for steering federal dollars to West Virginia, one of the country's poorest states. He was called the "King of Pork" by Citizens Against Government Waste. After becoming chair of the Appropriations Committee in 1989, Byrd set a goal of securing a total of for public works in the state. He passed that mark in 1991, and funds for highways, dams, educational institutions, and federal agency offices flowed unabated over the course of his membership. More than 30 existing or pending federal projects bear his name. He commented on his reputation for attaining funds for projects in West Virginia in August 2006, when he called himself "Big Daddy" at the dedication for the Robert C. Byrd Biotechnology Science Center. Examples of this ability to claim funds and projects for his state include the Federal Bureau of Investigation's repository for computerized fingerprint records as well as several United States Coast Guard computing and office facilities.
Byrd was also known for using his knowledge of parliamentary procedure. Byrd frustrated Republicans with his encyclopedic knowledge of the inner workings of the Senate, particularly prior to the Reagan Revolution. From 1977 to 1979 he was described as "performing a procedural tap dance around the minority, outmaneuvering Republicans with his mastery of the Senate's arcane rules." In 1988, majority leader Byrd moved a call of the Senate, which was adopted by the majority present, in order to have the Sergeant-at-Arms arrest members not in attendance. One member (Robert Packwood, R-Oregon) was escorted back to the chamber by the Sergeant-at-Arms in order to obtain a quorum.
As the longest-serving Democratic senator, Byrd served as President pro tempore four times when his party was in the majority: from 1989 until the Republicans won control of the Senate in 1995; for 17 days in early 2001, when the Senate was evenly split between parties and outgoing Vice President Al Gore broke the tie in favor of the Democrats; when the Democrats regained the majority in June 2001 after Senator Jim Jeffords of Vermont left the Republican Party to become an independent; and again from 2007 to his death in 2010, as a result of the 2006 Senate elections. In this capacity, Byrd was third in the line of presidential succession at the time of his death, behind Vice President Joe Biden and House Speaker Nancy Pelosi.
In 1969, Byrd launched a Scholastic Recognition Award; he also began to present a savings bond to valedictorians from high schools—public and private—in West Virginia. In 1985 Congress approved the nation's only merit-based scholarship program funded through the U.S. Department of Education, a program which Congress later named in Byrd's honor. The Robert C. Byrd Honors Scholarship Program initially comprised a one-year, $1,500 award to students with "outstanding academic achievement" who had been accepted at a college or university. In 1993, the program began providing four-year scholarships.
In 2002 Byrd secured unanimous approval for a major national initiative to strengthen the teaching of "traditional American history" in K-12 public schools. The Department of Education competitively awards $50 to a year to school districts (in amounts of about $500,000 to ). The money goes to teacher training programs that are geared to improving the knowledge of history teachers. The Continuing Appropriations Act, 2011 eliminated funding for the Robert C. Byrd Honors Scholarship Program.
Television cameras were first introduced to the House of Representatives on March 19, 1979, by C-SPAN. Unsatisfied that Americans only saw Congress as the House of Representatives, Byrd and others pushed to televise Senate proceedings to prevent the Senate from becoming the "invisible branch" of government, succeeding in June 1986.
To help introduce the public to the inner workings of the legislative process, Byrd launched a series of one hundred speeches based on his examination of the Roman Republic and the intent of the Framers. Byrd published a four-volume series on Senate history: "The Senate: 1789–1989: Addresses on the History of the Senate". The first volume won the Henry Adams Prize of the Society for History in the Federal Government as "an outstanding contribution to research in the history of the Federal Government." He also published "The Senate of the Roman Republic: Addresses on the History of Roman Constitutionalism".
In 2004, Byrd received the American Historical Association's first Theodore Roosevelt-Woodrow Wilson Award for Civil Service; in 2007, Byrd received the Friend of History Award from the Organization of American Historians. Both awards honor individuals outside the academy who have made a significant contribution to the writing and/or presentation of history. In 2014, The Byrd Center for Legislative Studies began assessing the archiving of Senator Byrd's electronic correspondence and floor speeches in order to preserve these documents and make them available to the wider community.
On July 19, 2007, Byrd gave a 25-minute speech in the Senate against dog fighting, in response to the indictment of football player Michael Vick. In recognition of the speech, People for the Ethical Treatment of Animals named Byrd their "2007 Person of the Year".
For 2007, Byrd was deemed the fourteenth-most powerful senator, as well as the twelfth-most powerful Democratic senator.
On May 19, 2008, one week after the West Virginia Democratic Primary, in which Hillary Clinton defeated Obama by 67 to 25 percent, Byrd endorsed then-Senator Barack Obama for president.
Reptile
Reptiles are tetrapod animals in the class Reptilia, comprising today's turtles, crocodilians, snakes, amphisbaenians, lizards, tuatara, and their extinct relatives. The study of these traditional reptile orders, historically combined with that of modern amphibians, is called herpetology.
Because some reptiles are more closely related to birds than they are to other reptiles (e.g., crocodiles are more closely related to birds than they are to lizards), the traditional groups of "reptiles" listed above do not together constitute a monophyletic grouping or clade (consisting of all descendants of a common ancestor). For this reason, many modern scientists prefer to consider the birds part of Reptilia as well, thereby making Reptilia a monophyletic class, including all living diapsids. The term "reptiles" is sometimes used as shorthand for 'non-avian Reptilia'.
The earliest known proto-reptiles originated around 312 million years ago during the Carboniferous period, having evolved from advanced reptiliomorph tetrapods that became increasingly adapted to life on dry land. Some early examples include the lizard-like "Hylonomus" and "Casineria". In addition to the living reptiles, there are many diverse groups that are now extinct, in some cases due to mass extinction events. In particular, the Cretaceous–Paleogene extinction event wiped out the pterosaurs, plesiosaurs, ornithischians, and sauropods, alongside many species of theropods, crocodyliforms, and squamates (e.g., mosasaurs).
Modern non-avian reptiles inhabit all the continents except Antarctica, although some birds are found on the periphery of Antarctica. Several living subgroups are recognized: Testudines (turtles and tortoises), 350 species; Rhynchocephalia (tuatara from New Zealand), 1 species; Squamata (lizards, snakes, and worm lizards), over 10,200 species; and Crocodilia (crocodiles, gharials, caimans, and alligators), 24 species.
Reptiles are tetrapod vertebrates, creatures that either have four limbs or, like snakes, are descended from four-limbed ancestors. Unlike amphibians, reptiles do not have an aquatic larval stage. Most reptiles are oviparous, although several species of squamates are viviparous, as were some extinct aquatic clades – the fetus develops within the mother, contained in a placenta rather than an eggshell. As amniotes, reptile eggs are surrounded by membranes for protection and transport, which adapt them to reproduction on dry land. Many of the viviparous species feed their fetuses through various forms of placenta analogous to those of mammals, with some providing initial care for their hatchlings. Extant reptiles range in size from a tiny gecko, "Sphaerodactylus ariasae", which can grow up to , to the saltwater crocodile, "Crocodylus porosus", which can reach in length and weigh over .
In the 13th century the category of "reptile" was recognized in Europe as consisting of a miscellany of egg-laying creatures, including "snakes, various fantastic monsters, lizards, assorted amphibians, and worms", as recorded by Vincent of Beauvais in his "Mirror of Nature".
In the 18th century, the reptiles were, from the outset of classification, grouped with the amphibians. Linnaeus, working from species-poor Sweden, where the common adder and grass snake are often found hunting in water, included all reptiles and amphibians in class "III – Amphibia" in his "Systema Naturæ".
The terms "reptile" and "amphibian" were largely interchangeable, "reptile" (from Latin "repere", 'to creep') being preferred by the French. Josephus Nicolaus Laurenti was the first to formally use the term "Reptilia" for an expanded selection of reptiles and amphibians basically similar to that of Linnaeus. Today, the two groups are still commonly treated under the single heading herpetology.
It was not until the beginning of the 19th century that it became clear that reptiles and amphibians are, in fact, quite different animals, and Pierre André Latreille erected the class "Batracia" (1825) for the latter, dividing the tetrapods into the four familiar classes of reptiles, amphibians, birds, and mammals. The British anatomist Thomas Henry Huxley made Latreille's definition popular and, together with Richard Owen, expanded Reptilia to include the various fossil "antediluvian monsters", including dinosaurs and the mammal-like (synapsid) "Dicynodon" he helped describe. This was not the only possible classification scheme: In the Hunterian lectures delivered at the Royal College of Surgeons in 1863, Huxley grouped the vertebrates into mammals, sauroids, and ichthyoids (the latter containing the fishes and amphibians). He subsequently proposed the names of Sauropsida and Ichthyopsida for the latter two groups. In 1866, Haeckel demonstrated that vertebrates could be divided based on their reproductive strategies, and that reptiles, birds, and mammals were united by the amniotic egg.
The terms "Sauropsida" ('lizard faces') and "Theropsida" ('beast faces') were used again in 1916 by E.S. Goodrich to distinguish between lizards, birds, and their relatives on the one hand (Sauropsida) and mammals and their extinct relatives (Theropsida) on the other. Goodrich supported this division by the nature of the hearts and blood vessels in each group, and other features, such as the structure of the forebrain. According to Goodrich, both lineages evolved from an earlier stem group, Protosauria ("first lizards") in which he included some animals today considered reptile-like amphibians, as well as early reptiles.
In 1956, D.M.S. Watson observed that the first two groups diverged very early in reptilian history, so he divided Goodrich's Protosauria between them. He also reinterpreted Sauropsida and Theropsida to exclude birds and mammals, respectively. Thus his Sauropsida included Procolophonia, Eosuchia, Millerosauria, Chelonia (turtles), Squamata (lizards and snakes), Rhynchocephalia, Crocodilia, "thecodonts" (paraphyletic basal Archosauria), non-avian dinosaurs, pterosaurs, ichthyosaurs, and sauropterygians.
In the late 19th century, a number of definitions of Reptilia were offered. The traits listed by Lydekker in 1896, for example, include a single occipital condyle, a jaw joint formed by the quadrate and articular bones, and certain characteristics of the vertebrae. The animals singled out by these formulations, the amniotes other than the mammals and the birds, are still those considered reptiles today.
The synapsid/sauropsid division supplemented another approach, one that split the reptiles into four subclasses based on the number and position of temporal fenestrae, openings in the sides of the skull behind the eyes. This classification was initiated by Henry Fairfield Osborn and elaborated and made popular by Romer's classic "Vertebrate Paleontology". Those four subclasses were:
The composition of Euryapsida was uncertain. Ichthyosaurs were, at times, considered to have arisen independently of the other euryapsids, and given the older name Parapsida. Parapsida was later discarded as a group for the most part (ichthyosaurs being classified as "incertae sedis" or with Euryapsida). However, four (or three if Euryapsida is merged into Diapsida) subclasses remained more or less universal for non-specialist work throughout the 20th century. It has largely been abandoned by recent researchers: in particular, the anapsid condition has been found to occur so variably among unrelated groups that it is not now considered a useful distinction.
By the early 21st century, vertebrate paleontologists were beginning to adopt phylogenetic taxonomy, in which all groups are defined in such a way as to be monophyletic; that is, groups include all descendants of a particular ancestor. The reptiles as historically defined are paraphyletic, since they exclude both birds and mammals. These respectively evolved from dinosaurs and from early therapsids, which were both traditionally called reptiles. Birds are more closely related to crocodilians than the latter are to the rest of extant reptiles. Colin Tudge wrote:
Mammals are a clade, and therefore the cladists are happy to acknowledge the traditional taxon Mammalia; and birds, too, are a clade, universally ascribed to the formal taxon Aves. Mammalia and Aves are, in fact, subclades within the grand clade of the Amniota. But the traditional class Reptilia is not a clade. It is just a section of the clade Amniota: the section that is left after the Mammalia and Aves have been hived off. It cannot be defined by synapomorphies, as is the proper way. Instead, it is defined by a combination of the features it has and the features it lacks: reptiles are the amniotes that lack fur or feathers. At best, the cladists suggest, we could say that the traditional Reptilia are 'non-avian, non-mammalian amniotes'.
Despite the early proposals for replacing the paraphyletic Reptilia with a monophyletic Sauropsida, which includes birds, that term was never adopted widely or, when it was, was not applied consistently. When Sauropsida was used, it often had the same content or even the same definition as Reptilia. In 1988, Jacques Gauthier proposed a cladistic definition of Reptilia as a monophyletic node-based crown group containing turtles, lizards and snakes, crocodilians, and birds, their common ancestor and all its descendants. While Gauthier's definition was close to the modern consensus, nonetheless, it became considered inadequate because the actual relationship of turtles to other reptiles was not yet well understood at this time. Major revisions since have included the reassignment of synapsids as non-reptiles, and classification of turtles as diapsids.
A variety of other definitions were proposed by other scientists in the years following Gauthier's paper. The first such new definition, which attempted to adhere to the standards of the PhyloCode, was published by Modesto and Anderson in 2004. Modesto and Anderson reviewed the many previous definitions and proposed a modified definition, which they intended to retain most traditional content of the group while keeping it stable and monophyletic. They defined Reptilia as all amniotes closer to "Lacerta agilis" and "Crocodylus niloticus" than to "Homo sapiens". This stem-based definition is equivalent to the more common definition of Sauropsida, which Modesto and Anderson synonymized with Reptilia, since the latter is better known and more frequently used. Unlike most previous definitions of Reptilia, however, Modesto and Anderson's definition includes birds, as they are within the clade that includes both lizards and crocodiles.
Classification to order level of the reptiles, after Benton, 2014.
The cladogram presented here illustrates the "family tree" of reptiles, and follows a simplified version of the relationships found by M.S. Lee in 2013. All genetic studies have supported the hypothesis that turtles are diapsids; some have placed turtles within Archosauriformes, though a few have recovered turtles as Lepidosauriformes instead. The cladogram below used a combination of genetic (molecular) and fossil (morphological) data to obtain its results.
The placement of turtles has historically been highly variable. Classically, turtles were considered to be related to the primitive anapsid reptiles. Molecular work has usually placed turtles within the diapsids. As of 2013, three turtle genomes have been sequenced. The results place turtles as a sister clade to the archosaurs, the group that includes crocodiles, dinosaurs, and birds. However, in their comparative analysis of the timing of organogenesis, Werneburg and Sánchez-Villagra (2009) found support for the hypothesis that turtles belong to a separate clade within Sauropsida, outside the saurian clade altogether.
The origin of the reptiles lies about 310–320 million years ago, in the steaming swamps of the late Carboniferous period, when the first reptiles evolved from advanced reptiliomorphs.
The oldest known animal that may have been an amniote is "Casineria" (though it may have been a temnospondyl). A series of footprints from the fossil strata of Nova Scotia, dated to the late Carboniferous, show typical reptilian toes and imprints of scales. These tracks are attributed to "Hylonomus", the oldest unquestionable reptile known.
It was a small, lizard-like animal, about 20 to 30 cm (8–12 in) long, with numerous sharp teeth indicating an insectivorous diet. Other examples include "Westlothiana" (for the moment considered a reptiliomorph rather than a true amniote) and "Paleothyris", both of similar build and presumably similar habit.
The earliest amniotes, including stem-reptiles (those amniotes closer to modern reptiles than to mammals), were largely overshadowed by larger stem-tetrapods, such as "Cochleosaurus", and remained a small, inconspicuous part of the fauna until the Carboniferous Rainforest Collapse. This sudden collapse affected several large groups. Primitive tetrapods were particularly devastated, while stem-reptiles fared better, being ecologically adapted to the drier conditions that followed. Primitive tetrapods, like modern amphibians, need to return to water to lay eggs; in contrast, amniotes, like modern reptiles – whose eggs possess a shell that allows them to be laid on land – were better adapted to the new conditions. Amniotes acquired new niches at a faster rate than before the collapse and at a much faster rate than primitive tetrapods. They acquired new feeding strategies, including herbivory and carnivory, where previously they had been limited to insectivory and piscivory. From this point forward, reptiles dominated communities and had a greater diversity than primitive tetrapods, setting the stage for the Mesozoic (known as the Age of Reptiles). One of the best known early stem-reptiles is "Mesosaurus", a genus from the Early Permian that had returned to water, feeding on fish.
It was traditionally assumed that the first reptiles retained an anapsid skull inherited from their ancestors. This type of skull has a skull roof with openings only for the nostrils, eyes and a pineal eye. The discovery of synapsid-like openings (see below) in the skull roofs of several members of Parareptilia (the clade containing most of the amniotes traditionally referred to as "anapsids"), including lanthanosuchoids, millerettids, bolosaurids, some nycteroleterids, some procolophonoids and at least some mesosaurs, has made the picture more ambiguous, and it is currently uncertain whether the ancestral amniote had an anapsid-like or synapsid-like skull. These animals are traditionally referred to as "anapsids", and form a paraphyletic basal stock from which other groups evolved. Very shortly after the first amniotes appeared, a lineage called Synapsida split off; this group was characterized by a temporal opening in the skull behind each eye, giving room for the jaw muscles to move. These are the "mammal-like amniotes", or stem-mammals, that later gave rise to the true mammals. Soon after, another group evolved a similar trait, this time with a double opening behind each eye, earning them the name Diapsida ("two arches"). The function of the holes in these groups was to lighten the skull and give room for the jaw muscles to move, allowing for a more powerful bite.
Turtles have been traditionally believed to be surviving parareptiles, on the basis of their anapsid skull structure, which was assumed to be a primitive trait. The rationale for this classification has been disputed, with some arguing that turtles are diapsids that evolved anapsid skulls in order to improve their armor. Later morphological phylogenetic studies with this in mind placed turtles firmly within Diapsida. All molecular studies have strongly upheld the placement of turtles within diapsids, most commonly as a sister group to extant archosaurs.
With the close of the Carboniferous, the amniotes became the dominant tetrapod fauna. While primitive, terrestrial reptiliomorphs still existed, the synapsid amniotes evolved the first truly terrestrial megafauna (giant animals) in the form of pelycosaurs, such as "Edaphosaurus" and the carnivorous "Dimetrodon". In the mid-Permian period, the climate became drier, resulting in a change of fauna: The pelycosaurs were replaced by the therapsids.
The parareptiles, whose massive skull roofs had no postorbital holes, continued and flourished throughout the Permian. The pareiasaurian parareptiles reached giant proportions in the late Permian, eventually disappearing at the close of the period (the turtles being possible survivors).
Early in the period, the modern reptiles, or crown-group reptiles, evolved and split into two main lineages: the Archosauromorpha (forebears of turtles, crocodiles, and dinosaurs) and the Lepidosauromorpha (predecessors of modern lizards and tuataras). Both groups remained lizard-like and relatively small and inconspicuous during the Permian.
The close of the Permian saw the greatest mass extinction known (see the Permian–Triassic extinction event), an event prolonged by the combination of two or more distinct extinction pulses. Most of the earlier parareptile and synapsid megafauna disappeared, being replaced by the true reptiles, particularly archosauromorphs. These were characterized by elongated hind legs and an erect pose, the early forms looking somewhat like long-legged crocodiles. The archosaurs became the dominant group during the Triassic period, though it took 30 million years before their diversity was as great as the animals that lived in the Permian. Archosaurs developed into the well-known dinosaurs and pterosaurs, as well as the ancestors of crocodiles. Since reptiles, first rauisuchians and then dinosaurs, dominated the Mesozoic era, the interval is popularly known as the "Age of Reptiles". The dinosaurs also developed smaller forms, including the feather-bearing smaller theropods. In the Cretaceous period, these gave rise to the first true birds.
The sister group to Archosauromorpha is Lepidosauromorpha, containing lizards and tuataras, as well as their fossil relatives. Lepidosauromorpha contained at least one major group of the Mesozoic sea reptiles: the mosasaurs, which lived during the Cretaceous period. The phylogenetic placement of other main groups of fossil sea reptiles – the ichthyopterygians (including ichthyosaurs) and the sauropterygians, which evolved in the early Triassic – is more controversial. Different authors linked these groups either to lepidosauromorphs or to archosauromorphs, and ichthyopterygians were also argued to be diapsids that did not belong to the least inclusive clade containing lepidosauromorphs and archosauromorphs.
The close of the Cretaceous period saw the demise of the Mesozoic era reptilian megafauna (see the Cretaceous–Paleogene extinction event). Of the large marine reptiles, only sea turtles were left; and of the non-marine large reptiles, only the semi-aquatic crocodiles and broadly similar choristoderes survived the extinction, with the latter becoming extinct in the Miocene. Of the great host of dinosaurs dominating the Mesozoic, only the small beaked birds survived. This dramatic extinction pattern at the end of the Mesozoic led into the Cenozoic. Mammals and birds filled the empty niches left behind by the reptilian megafauna and, while reptile diversification slowed, bird and mammal diversification took an exponential turn. However, reptiles were still important components of the megafauna, particularly in the form of large and giant tortoises.
After the extinction of most archosaur and marine reptile lines by the end of the Cretaceous, reptile diversification continued throughout the Cenozoic. Squamates took a massive hit during the K–Pg extinction event, only recovering ten million years after it, but they underwent a great radiation event once they recovered, and today squamates make up the majority of living reptiles (> 95%). Approximately 10,000 extant species of traditional reptiles are known, with birds adding about 10,000 more, almost twice the number of mammals, represented by about 5,700 living species (excluding domesticated species).
All squamates and turtles have a three-chambered heart consisting of two atria, one variably partitioned ventricle, and two aortas that lead to the systemic circulation. The degree of mixing of oxygenated and deoxygenated blood in the three-chambered heart varies depending on the species and physiological state. Under different conditions, deoxygenated blood can be shunted back to the body or oxygenated blood can be shunted back to the lungs. This variation in blood flow has been hypothesized to allow more effective thermoregulation and longer diving times for aquatic species, but has not been shown to be a fitness advantage.
For example, iguana hearts, like those of most squamates, are composed of three chambers, with two aortae and one ventricle, and are made of involuntary cardiac muscle. The main structures of the heart are the sinus venosus, the pacemaker, the left atrium, the right atrium, the atrioventricular valve, the cavum venosum, the cavum arteriosum, the cavum pulmonale, the muscular ridge, the ventricular ridge, the pulmonary veins, and the paired aortic arches.
Some squamate species (e.g., pythons and monitor lizards) have three-chambered hearts that become functionally four-chambered hearts during contraction. This is made possible by a muscular ridge that subdivides the ventricle during ventricular diastole and completely divides it during ventricular systole. Because of this ridge, some of these squamates are capable of producing ventricular pressure differentials that are equivalent to those seen in mammalian and avian hearts.
Crocodilians have an anatomically four-chambered heart, similar to birds, but also have two systemic aortas and are therefore capable of bypassing their pulmonary circulation.
Modern non-avian reptiles exhibit some form of cold-bloodedness (i.e. some mix of poikilothermy, ectothermy, and bradymetabolism) so that they have limited physiological means of keeping the body temperature constant and often rely on external sources of heat. Due to a less stable core temperature than birds and mammals, reptilian biochemistry requires enzymes capable of maintaining efficiency over a greater range of temperatures than is the case for warm-blooded animals. The optimum body temperature range varies with species, but is typically below that of warm-blooded animals; for many lizards, it falls in the 24°–35 °C (75°–95 °F) range, while extreme heat-adapted species, like the American desert iguana "Dipsosaurus dorsalis", can have optimal physiological temperatures in the mammalian range, between 35° and 40 °C (95° and 104 °F). While the optimum temperature is often encountered when the animal is active, the low basal metabolism makes body temperature drop rapidly when the animal is inactive.
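The paired Celsius/Fahrenheit ranges quoted above follow the standard conversion F = C × 9⁄5 + 32; a quick sketch checks that the figures are consistent:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Typical lizard optimum range: 24-35 degrees C
print(c_to_f(24), c_to_f(35))  # 75.2 95.0
# Heat-adapted desert iguana range: 35-40 degrees C
print(c_to_f(35), c_to_f(40))  # 95.0 104.0
```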
As in all animals, reptilian muscle action produces heat. In large reptiles, like leatherback turtles, the low surface-to-volume ratio allows this metabolically produced heat to keep the animals warmer than their environment even though they do not have a warm-blooded metabolism. This form of homeothermy is called gigantothermy; it has been suggested as having been common in large dinosaurs and other extinct large-bodied reptiles.
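Gigantothermy rests on simple geometric scaling: for a body of characteristic length L, surface area grows as L² while volume grows as L³, so the surface-to-volume ratio shrinks as 1/L and large animals lose proportionally less of their metabolic heat. A minimal sketch using spheres as a crude body model (the spherical shape and the radii are illustrative assumptions, not anatomical claims):

```python
import math

def surface_to_volume(radius):
    """Surface-to-volume ratio of a sphere of given radius (equals 3 / radius)."""
    area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return area / volume

small = surface_to_volume(0.05)  # roughly lizard-sized body, 5 cm radius
large = surface_to_volume(0.5)   # roughly leatherback-sized body, 50 cm radius
# The small body exposes ten times more surface per unit volume,
# so it sheds heat proportionally faster than the large one.
print(small / large)  # 10.0
```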
The benefit of a low resting metabolism is that it requires far less fuel to sustain bodily functions. By using temperature variations in their surroundings, or by remaining cold when they do not need to move, reptiles can save considerable amounts of energy compared to endothermic animals of the same size. A crocodile needs from a tenth to a fifth of the food necessary for a lion of the same weight and can live half a year without eating. Lower food requirements and adaptive metabolisms allow reptiles to dominate the animal life in regions where net calorie availability is too low to sustain large-bodied mammals and birds.
It is generally assumed that reptiles are unable to produce the sustained high energy output necessary for long distance chases or flying. Higher energetic capacity might have been responsible for the evolution of warm-bloodedness in birds and mammals. However, investigation of correlations between active capacity and thermophysiology shows a weak relationship. Most extant reptiles are carnivores with a sit-and-wait feeding strategy; whether reptiles are cold blooded due to their ecology is not clear. Energetic studies on some reptiles have shown active capacities equal to or greater than those of similarly sized warm-blooded animals.
All reptiles breathe using lungs. Aquatic turtles have developed more permeable skin, and some species have modified their cloaca to increase the area for gas exchange. Even with these adaptations, breathing is never fully accomplished without lungs. Lung ventilation is accomplished differently in each main reptile group. In squamates, the lungs are ventilated almost exclusively by the axial musculature. This is also the same musculature that is used during locomotion. Because of this constraint, most squamates are forced to hold their breath during intense runs. Some, however, have found a way around it. Varanids, and a few other lizard species, employ buccal pumping as a complement to their normal "axial breathing". This allows the animals to completely fill their lungs during intense locomotion, and thus remain aerobically active for a long time. Tegu lizards are known to possess a proto-diaphragm, which separates the pulmonary cavity from the visceral cavity. While not actually capable of movement, it does allow for greater lung inflation, by taking the weight of the viscera off the lungs.
Crocodilians actually have a muscular diaphragm that is analogous to the mammalian diaphragm. The difference is that the muscles for the crocodilian diaphragm pull the pubis (part of the pelvis, which is movable in crocodilians) back, which brings the liver down, thus freeing space for the lungs to expand. This type of diaphragmatic setup has been referred to as the "hepatic piston". The airways form a number of double tubular chambers within each lung. On inhalation and exhalation air moves through the airways in the same direction, thus creating a unidirectional airflow through the lungs. A similar system is found in birds, monitor lizards and iguanas.
Most reptiles lack a secondary palate, meaning that they must hold their breath while swallowing. Crocodilians have evolved a bony secondary palate that allows them to continue breathing while remaining submerged (and protect their brains against damage by struggling prey). Skinks (family Scincidae) also have evolved a bony secondary palate, to varying degrees. Snakes took a different approach and extended their trachea instead. Their tracheal extension sticks out like a fleshy straw, and allows these animals to swallow large prey without suffering from asphyxiation.
How turtles and tortoises breathe has been the subject of much study. To date, only a few species have been studied thoroughly enough to get an idea of how those turtles breathe. The varied results indicate that turtles and tortoises have found a variety of solutions to this problem.
The difficulty is that most turtle shells are rigid and do not allow for the type of expansion and contraction that other amniotes use to ventilate their lungs. Some turtles, such as the Indian flapshell ("Lissemys punctata"), have a sheet of muscle that envelops the lungs. When it contracts, the turtle can exhale. When at rest, the turtle can retract the limbs into the body cavity and force air out of the lungs. When the turtle protracts its limbs, the pressure inside the lungs is reduced, and the turtle can suck air in. Turtle lungs are attached to the inside of the top of the shell (carapace), with the bottom of the lungs attached (via connective tissue) to the rest of the viscera. By using a series of special muscles (roughly equivalent to a diaphragm), turtles are capable of pushing their viscera up and down, resulting in effective respiration, since many of these muscles have attachment points in conjunction with their forelimbs (indeed, many of the muscles expand into the limb pockets during contraction).
Breathing during locomotion has been studied in three species, and they show different patterns. Adult female green sea turtles do not breathe as they crutch along their nesting beaches. They hold their breath during terrestrial locomotion and breathe in bouts as they rest. North American box turtles breathe continuously during locomotion, and the ventilation cycle is not coordinated with the limb movements. This is because they use their abdominal muscles to breathe during locomotion. The last species to have been studied is the red-eared slider, which also breathes during locomotion, but takes smaller breaths during locomotion than during small pauses between locomotor bouts, indicating that there may be mechanical interference between the limb movements and the breathing apparatus. Box turtles have also been observed to breathe while completely sealed up inside their shells.
Reptilian skin is covered in a horny epidermis, making it watertight and enabling reptiles to live on dry land, in contrast to amphibians. Compared to mammalian skin, that of reptiles is rather thin and lacks the thick dermal layer that produces leather in mammals.
Exposed parts of reptiles are protected by scales or scutes, sometimes with a bony base (osteoderms), forming armor. In lepidosaurians, such as lizards and snakes, the whole skin is covered in overlapping epidermal scales. Such scales were once thought to be typical of the class Reptilia as a whole, but are now known to occur only in lepidosaurians. The scales found in turtles and crocodiles are of dermal, rather than epidermal, origin and are properly termed scutes. In turtles, the body is hidden inside a hard shell composed of fused scutes.
Lacking a thick dermis, reptilian leather is not as strong as mammalian leather. It is used in leather-wares for decorative purposes for shoes, belts and handbags, particularly crocodile skin.
Reptiles shed their skin through a process called ecdysis, which occurs continuously throughout their lifetime. Younger reptiles, because of their rapid growth rate, tend to shed once every 5–6 weeks, while adults shed only 3–4 times a year; once full size is reached, the frequency of shedding decreases drastically. The process of ecdysis involves forming a new layer of skin under the old one. Proteolytic enzymes and lymphatic fluid are secreted between the old and new layers of skin, lifting the old skin from the new one and allowing shedding to occur. Snakes shed from the head to the tail, while lizards shed in a "patchy pattern". Dysecdysis, a common skin disease in snakes and lizards, occurs when ecdysis, or shedding, fails. Shedding can fail for numerous reasons, including inadequate humidity and temperature, nutritional deficiencies, dehydration and traumatic injuries: nutritional deficiencies decrease the proteolytic enzymes, dehydration reduces the lymphatic fluid needed to separate the skin layers, and traumatic injuries form scars that prevent new scales from forming and disrupt the process of ecdysis.
Excretion is performed mainly by two small kidneys. In diapsids, uric acid is the main nitrogenous waste product; turtles, like mammals, excrete mainly urea. Unlike the kidneys of mammals and birds, reptile kidneys are unable to produce liquid urine more concentrated than their body fluid. This is because they lack a specialized structure called a loop of Henle, which is present in the nephrons of birds and mammals. Because of this, many reptiles use the colon to aid in the reabsorption of water. Some are also able to take up water stored in the bladder. Excess salts are also excreted by nasal and lingual salt glands in some reptiles.
In all reptiles, the urinogenital ducts and the anus both empty into an organ called a cloaca. In some reptiles, but not all, a midventral wall in the cloaca opens into a urinary bladder. A bladder is present in all turtles and tortoises as well as most lizards, but is lacking in monitor lizards and legless lizards, and is absent in snakes, alligators, and crocodiles.
Many turtles, tortoises, and lizards have proportionally very large bladders. Charles Darwin noted that the Galapagos tortoise had a bladder which could store up to 20% of its body weight. Such adaptations are the result of environments such as remote islands and deserts where water is very scarce. Other desert-dwelling reptiles have large bladders that can store a long-term reservoir of water for up to several months and aid in osmoregulation.
Turtles have two or more accessory urinary bladders, located lateral to the neck of the urinary bladder and dorsal to the pubis, occupying a significant portion of their body cavity. Their bladder is also usually bilobed with a left and right section. The right section is located under the liver, which prevents large stones from remaining in that side while the left section is more likely to have calculi.
Most reptiles are insectivorous or carnivorous and have simple and comparatively short digestive tracts due to meat being fairly simple to break down and digest. Digestion is slower than in mammals, reflecting their lower resting metabolism and their inability to divide and masticate their food. Their poikilotherm metabolism has very low energy requirements, allowing large reptiles like crocodiles and large constrictors to live from a single large meal for months, digesting it slowly.
While modern reptiles are predominantly carnivorous, during the early history of reptiles several groups produced some herbivorous megafauna: in the Paleozoic, the pareiasaurs; and in the Mesozoic several lines of dinosaurs. Today, turtles are the only predominantly herbivorous reptile group, but several lines of agamas and iguanas have evolved to live wholly or partly on plants.
Herbivorous reptiles face the same problems of mastication as herbivorous mammals but, lacking the complex teeth of mammals, many species swallow rocks and pebbles (so-called gastroliths) to aid in digestion: the rocks are washed around in the stomach, helping to grind up plant matter. Fossil gastroliths have been found associated with both ornithopods and sauropods, though whether they actually functioned as a gastric mill in the latter is disputed. Saltwater crocodiles also use gastroliths as ballast, stabilizing them in the water or helping them to dive. A dual function as both stabilizing ballast and digestion aid has been suggested for gastroliths found in plesiosaurs.
The reptilian nervous system contains the same basic parts as the amphibian brain, but the reptile cerebrum and cerebellum are slightly larger. Most typical sense organs are well developed, with certain exceptions, most notably the snake's lack of external ears (middle and inner ears are present). There are twelve pairs of cranial nerves. Due to their short cochlea, reptiles use electrical tuning to expand their range of audible frequencies.
Reptiles are generally considered less intelligent than mammals and birds. The size of their brain relative to their body is much less than that of mammals, the encephalization quotient being about one tenth of that of mammals, though larger reptiles can show more complex brain development. Larger lizards, like the monitors, are known to exhibit complex behavior, including cooperation and cognitive abilities allowing them to optimize their foraging and territoriality over time. Crocodiles have relatively larger brains and show a fairly complex social structure. The Komodo dragon is even known to engage in play, as are turtles, which are also considered to be social creatures, and sometimes switch between monogamy and promiscuity in their sexual behavior. One study found that wood turtles were better than white rats at learning to navigate mazes. Another study found that giant tortoises are capable of learning through operant conditioning and visual discrimination, and retain learned behaviors in long-term memory. Sea turtles have been regarded as having simple brains, but their flippers are used for a variety of foraging tasks (holding, bracing, corralling) in common with marine mammals.
Most reptiles are diurnal animals. Their vision is typically adapted to daylight conditions, with color vision and more advanced visual depth perception than in amphibians and most mammals.
Reptiles usually have excellent vision, allowing them to detect shapes and motions at long distances. However, they have relatively few rod cells and so have poor vision in low-light conditions. At the same time, they have cells called "double cones", which give them sharp color vision and enable them to see ultraviolet wavelengths. In some species, such as blind snakes, vision is reduced.
Many lepidosaurs have a photosensory organ on the top of their heads called the parietal eye, also known as the third eye, pineal eye or pineal gland. This "eye" does not work the same way as a normal eye does: it has only a rudimentary retina and lens and thus cannot form images. It is, however, sensitive to changes in light and dark and can detect movement.
Some snakes have extra sets of visual organs (in the loosest sense of the word) in the form of pits sensitive to infrared radiation (heat). Such heat-sensitive pits are particularly well developed in the pit vipers, but are also found in boas and pythons. These pits allow the snakes to sense the body heat of birds and mammals, enabling pit vipers to hunt rodents in the dark.
Most reptiles, including birds, possess a nictitating membrane, a translucent third eyelid which is drawn over the eye from the inner corner. Notably, it protects a crocodilian's eyeball surface while allowing a degree of vision underwater. However, many squamates, geckos and snakes in particular, lack eyelids, which are replaced by a transparent scale called the brille, spectacle, or eyecap. The brille is usually not visible, except when the snake molts, and it protects the eyes from dust and dirt.
Reptiles generally reproduce sexually, though some are capable of asexual reproduction. All reproductive activity occurs through the cloaca, the single exit/entrance at the base of the tail where waste is also eliminated. Most reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis, while squamates, including snakes and lizards, possess a pair of hemipenes, only one of which is typically used in each session. Tuatara, however, lack copulatory organs, and so the male and female simply press their cloacas together as the male discharges sperm.
Most reptiles lay amniotic eggs covered with leathery or calcareous shells. An amnion, chorion, and allantois are present during embryonic life. The eggshell (1) protects the crocodile embryo (11) and keeps it from drying out, but it is flexible to allow gas exchange. The chorion (6) aids in gas exchange between the inside and outside of the egg. It allows carbon dioxide to exit the egg and oxygen gas to enter the egg. The albumin (9) further protects the embryo and serves as a reservoir for water and protein. The allantois (8) is a sac that collects the metabolic waste produced by the embryo. The amniotic sac (10) contains amniotic fluid (12) which protects and cushions the embryo. The amnion (5) aids in osmoregulation and serves as a saltwater reservoir. The yolk sac (2) surrounding the yolk (3) contains protein and fat rich nutrients that are absorbed by the embryo via vessels (4) that allow the embryo to grow and metabolize. The air space (7) provides the embryo with oxygen while it is hatching. This ensures that the embryo will not suffocate while it is hatching. There are no larval stages of development. Viviparity and ovoviviparity have evolved in many extinct clades of reptiles and in squamates. In the latter group, many species, including all boas and most vipers, utilize this mode of reproduction. The degree of viviparity varies; some species simply retain the eggs until just before hatching, others provide maternal nourishment to supplement the yolk, and yet others lack any yolk and provide all nutrients via a structure similar to the mammalian placenta. The earliest documented case of viviparity in reptiles is the Early Permian mesosaurs, although some individuals or taxa in that clade may also have been oviparous because a putative isolated egg has also been found. Several groups of Mesozoic marine reptiles also exhibited viviparity, such as mosasaurs, ichthyosaurs, and Sauropterygia, a group that includes pachypleurosaurs and Plesiosauria.
Asexual reproduction has been identified in squamates in six families of lizards and one snake. In some species of squamates, a population of females is able to produce a unisexual diploid clone of the mother. This form of asexual reproduction, called parthenogenesis, occurs in several species of gecko, and is particularly widespread in the teiids (especially "Aspidocelis") and lacertids ("Lacerta"). In captivity, Komodo dragons (Varanidae) have reproduced by parthenogenesis.
Parthenogenetic species are suspected to occur among chameleons, agamids, xantusiids, and typhlopids.
Some reptiles exhibit temperature-dependent sex determination (TDSD), in which the incubation temperature determines whether a particular egg hatches as male or female. TDSD is most common in turtles and crocodiles, but also occurs in lizards and tuatara. To date, there has been no confirmation of whether TDSD occurs in snakes.
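As a rough illustration, TDSD can be sketched as a threshold function of incubation temperature. The pivotal temperature and the below/above sex assignment here are hypothetical placeholders, not measured values for any species; real species show several patterns, including female–male–female across the temperature range:

```python
def tdsd_sex(incubation_temp_c, pivotal_temp_c=29.0):
    """Toy threshold model of temperature-dependent sex determination.

    Assumes the simplest pattern, in which eggs incubated below a
    hypothetical pivotal temperature hatch as one sex and those at or
    above it hatch as the other. The 29.0 degree C pivot and the
    direction of the assignment are illustrative assumptions only.
    """
    return "female" if incubation_temp_c < pivotal_temp_c else "male"

print(tdsd_sex(26.5))  # female
print(tdsd_sex(31.0))  # male
```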
Many small reptiles, such as snakes and lizards that live on the ground or in the water, are vulnerable to being preyed on by all kinds of carnivorous animals. Thus avoidance is the most common form of defense in reptiles. At the first sign of danger, most snakes and lizards crawl away into the undergrowth, and turtles and crocodiles will plunge into water and sink out of sight.
Reptiles tend to avoid confrontation through camouflage. Two major groups of reptile predators are birds and other reptiles, both of which have well-developed color vision. Thus the skins of many reptiles have cryptic coloration of plain or mottled gray, green, and brown to allow them to blend into the background of their natural environment. Aided by the reptiles' capacity for remaining motionless for long periods, the camouflage of many snakes is so effective that people or domestic animals are most typically bitten because they accidentally step on them.
When camouflage fails to protect them, blue-tongued skinks will try to ward off attackers by displaying their blue tongues, and the frill-necked lizard will display its brightly colored frill. These same displays are used in territorial disputes and during courtship. If danger arises so suddenly that flight is useless, crocodiles, turtles, some lizards, and some snakes hiss loudly when confronted by an enemy. Rattlesnakes rapidly vibrate the tip of the tail, which is composed of a series of nested, hollow beads, to ward off approaching danger.
In contrast to the normal drab coloration of most reptiles, the lizards of the genus "Heloderma" (the Gila monster and the beaded lizard) and many of the coral snakes have high-contrast warning coloration, warning potential predators that they are venomous. A number of non-venomous North American snake species have colorful markings similar to those of the coral snake, an oft-cited example of Batesian mimicry.
Camouflage does not always fool a predator. When caught out, snake species adopt different defensive tactics and use a complicated set of behaviors when attacked. Some first elevate their head and spread out the skin of their neck in an effort to look large and threatening. Failure of this strategy may lead to other measures practiced particularly by cobras, vipers, and closely related species, which use venom to attack. The venom is modified saliva, delivered through fangs from a venom gland. Some non-venomous snakes, such as American hognose snakes or European grass snake, play dead when in danger; some, including the grass snake, exude a foul-smelling liquid to deter attackers.
When a crocodilian is concerned about its safety, it will gape to expose the teeth and yellow tongue. If this doesn't work, the crocodilian gets a little more agitated and typically begins to make hissing sounds. After this, the crocodilian will start to change its posture dramatically to make itself look more intimidating. The body is inflated to increase apparent size. If absolutely necessary it may decide to attack an enemy.
Some species try to bite immediately. Some will use their heads as sledgehammers and literally smash an opponent, some will rush or swim toward the threat from a distance, even chasing the opponent onto land or galloping after it. The main weapon in all crocodiles is the bite, which can generate very high bite force. Many species also possess canine-like teeth. These are used primarily for seizing prey, but are also used in fighting and display.
Geckos, skinks, and other lizards that are captured by the tail will shed part of the tail structure through a process called autotomy and thus be able to flee. The detached tail will continue to wiggle, creating a deceptive sense of continued struggle and distracting the predator's attention from the fleeing prey animal. The detached tails of leopard geckos can wiggle for up to 20 minutes. In many species the tails are of a separate and dramatically more intense color than the rest of the body so as to encourage potential predators to strike for the tail first. In the shingleback skink and some species of geckos, the tail is short and broad and resembles the head, so that the predators may attack it rather than the more vulnerable front part.
Reptiles that are capable of shedding their tails can partially regenerate them over a period of weeks. The new section will however contain cartilage rather than bone, and will never grow to the same length as the original tail. It is often also distinctly discolored compared to the rest of the body and may lack some of the external sculpting features seen in the original tail.
Dinosaurs have been widely depicted in culture since the English palaeontologist Richard Owen coined the name "dinosaur" in 1842. As soon as 1854, the Crystal Palace Dinosaurs were on display to the public in south London. One dinosaur appeared in literature even earlier, as Charles Dickens placed a "Megalosaurus" in the first chapter of his novel "Bleak House" in 1852. The dinosaurs featured in books, films, television programs, artwork, and other media have been used for both education and entertainment. The depictions range from the realistic, as in the television documentaries of the 1990s and first decade of the 21st century, to the fantastic, as in the monster movies of the 1950s and 1960s.
The snake or serpent has played a powerful symbolic role in different cultures. In Egyptian history, the Nile cobra adorned the crown of the pharaoh. It was worshipped as one of the gods and was also used for sinister purposes: murder of an adversary and ritual suicide (Cleopatra). In Greek mythology, snakes are associated with deadly antagonists, as a chthonic symbol, roughly translated as "earthbound". The nine-headed Lernaean Hydra that Hercules defeated and the three Gorgon sisters are children of Gaia, the earth. Medusa, one of the three Gorgon sisters defeated by Perseus, is described as a hideous mortal with snakes instead of hair and the power to turn men to stone with her gaze. After killing her, Perseus gave her head to Athena, who fixed it to her shield, the Aegis. The Titans are depicted in art with their legs replaced by bodies of snakes for the same reason: they are children of Gaia, so they are bound to the earth.

In Hinduism, snakes are worshipped as gods, with many women pouring milk on snake pits. The cobra is seen on the neck of Shiva, while Vishnu is often depicted sleeping on a seven-headed snake or within the coils of a serpent. There are temples in India solely for cobras, sometimes called "Nagraj" (King of Snakes), and snakes are believed to be symbols of fertility. In the annual Hindu festival of Nag Panchami, snakes are venerated and prayed to.

In religious terms, the snake and jaguar are arguably the most important animals in ancient Mesoamerica: "In states of ecstasy, lords dance a serpent dance; great descending snakes adorn and support buildings from Chichen Itza to Tenochtitlan, and the Nahuatl word "coatl" meaning serpent or twin, forms part of primary deities such as Mixcoatl, Quetzalcoatl, and Coatlicue." In Christianity and Judaism, a serpent appears in Genesis to tempt Adam and Eve with the forbidden fruit from the Tree of Knowledge of Good and Evil.
The turtle has a prominent position as a symbol of steadfastness and tranquility in religion, mythology, and folklore from around the world. A tortoise's longevity is suggested by its long lifespan and its shell, which was thought to protect it from any foe. In the cosmological myths of several cultures a "World Turtle" carries the world upon its back or supports the heavens.
Deaths from snakebites are uncommon in many parts of the world, but are still counted in tens of thousands per year in India. Snakebite can be treated with antivenom made from the venom of the snake. To produce antivenom, a mixture of the venoms of different species of snake is injected into the body of a horse in ever-increasing dosages until the horse is immunized. Blood is then extracted; the serum is separated, purified and freeze-dried. The cytotoxic effect of snake venom is being researched as a potential treatment for cancers.
Lizards such as the Gila monster produce toxins with medical applications. Gila toxin reduces plasma glucose; the substance is now synthesised for use in the anti-diabetes drug exenatide (Byetta). Another toxin from Gila monster saliva has been studied for use as an anti-Alzheimer's drug.
Geckos have also been used as medicine, especially in China. Turtles have been used in Chinese traditional medicine for thousands of years, with every part of the turtle believed to have medical benefits. There is a lack of scientific evidence that would correlate claimed medical benefits to turtle consumption. Growing demand for turtle meat has placed pressure on vulnerable wild populations of turtles.
Crocodiles are protected in many parts of the world, and are farmed commercially. Their hides are tanned and used to make leather goods such as shoes and handbags; crocodile meat is also considered a delicacy. The most commonly farmed species are the saltwater and Nile crocodiles. Farming has resulted in an increase in the saltwater crocodile population in Australia, as eggs are usually harvested from the wild, so landowners have an incentive to conserve their habitat. Crocodile leather is made into wallets, briefcases, purses, handbags, belts, hats, and shoes. Crocodile oil has been used for various purposes.
Snakes are also farmed, primarily in East and Southeast Asia, and their production has become more intensive in the last decade. Snake farming has been troubling for conservation in the past as it can lead to overexploitation of wild snakes and their natural prey to supply the farms. However, farming snakes can limit the hunting of wild snakes, while reducing the slaughter of higher-order vertebrates like cows. The energy efficiency of snakes is higher than expected for carnivores, due to their ectothermy and low metabolism. Waste protein from the poultry and pig industries is used as feed in snake farms. Snake farms produce meat, snake skin, and antivenom.
Turtle farming is another known but controversial practice. Turtles have been farmed for a variety of reasons, ranging from food to traditional medicine, the pet trade, and scientific conservation. Demand for turtle meat and medicinal products is one of the main threats to turtle conservation in Asia. Though commercial breeding would seem to insulate wild populations, it can stoke the demand for them and increase wild captures. Even the potentially appealing concept of raising turtles at a farm to release into the wild is questioned by some veterinarians who have had some experience with farm operations. They caution that this may introduce into the wild populations infectious diseases that occur on the farm, but have not (yet) been occurring in the wild.
In the Western world, some snakes (especially docile species such as the ball python and corn snake) are kept as pets. Numerous species of lizard are kept as pets, including bearded dragons, iguanas, anoles, and geckos (such as the popular leopard gecko).
Turtles and tortoises are an increasingly popular pet, but keeping them can be challenging due to particular requirements, such as temperature control and a varied diet, as well as the long lifespans of turtles, who can potentially outlive their owners. Good hygiene and significant maintenance is necessary when keeping reptiles, due to the risks of "Salmonella" and other pathogens.
A herpetarium is a zoological exhibition space for reptiles or amphibians.
Rhode Island
Rhode Island (, like "road"), officially the State of Rhode Island and Providence Plantations, is a state in the New England region of the United States. It is the smallest U.S. state by area and the seventh least populous, but it is also the second most densely populated. Rhode Island is bordered by Connecticut to the west, Massachusetts to the north and east, and the Atlantic Ocean to the south via Rhode Island Sound and Block Island Sound. It also shares a small maritime border with New York. Providence is the state capital and most populous city in Rhode Island.
On May 4, 1776, the Colony of Rhode Island was the first of the Thirteen Colonies to renounce its allegiance to the British Crown, and it was the fourth state to ratify the Articles of Confederation, doing so on February 9, 1778. The state boycotted the 1787 convention which drew up the United States Constitution and initially refused to ratify it; it was the last of the original 13 states to do so on May 29, 1790.
Rhode Island's official nickname is "The Ocean State", a reference to the large bays and inlets that amount to about 14 percent of its total area.
Despite its name, most of Rhode Island is located on the mainland of the United States. Its official name is "State of Rhode Island and Providence Plantations", which is derived from the merger of four Colonial settlements. The settlements of Newport and Portsmouth were situated on what is commonly called Aquidneck Island today but was called "Rhode Island" in Colonial times. "Providence Plantation" was the name of the colony founded by Roger Williams in the state's capital of Providence. This was adjoined by the settlement of Warwick; hence the plural Providence Plantations.
It is unclear how the island came to be named "Rhode Island", but two historical events may have been of influence:
The earliest documented use of the name "Rhode Island" for Aquidneck was in 1637 by Roger Williams. The name was officially applied to the island in 1644 with these words: "Aquethneck shall be henceforth called the Isle of Rodes or Rhode-Island." The name "Isle of Rodes" is used in a legal document as late as 1646. Dutch maps as early as 1659 call the island "Red Island" ("Roodt Eylant").
The first English settlement in Rhode Island was the town of Providence, which the Narragansett granted to Roger Williams in 1636. At that time, Williams obtained no permission from the English crown as he believed the English had no legitimate claim on Narragansett and Wampanoag territory. However, in 1643, Williams petitioned Charles I of England to grant Providence and neighboring towns a colonial patent, due to threats of invasion from the colonies of Boston and Plymouth. In his petition he used the term "Providence Plantations," plantation being the contemporary English term for a colony. "Providence Plantations" was therefore the official name of the colony from 1643 to 1663, when a new charter was issued. Following the American Revolution, the new state incorporated both "Rhode Island" and "Providence Plantations" in its official name. The word "plantation" in the state's name has become a contested issue, and the Rhode Island General Assembly voted on June 25, 2009, to hold a general referendum determining whether "and Providence Plantations" would be dropped from the official name.
Advocates for excising "plantation" claimed that the word symbolized an alleged legacy of disenfranchisement for many Rhode Islanders, as well as the proliferation of slavery in the colonies and in the post-colonial United States. Advocates for retaining the name argued that "plantation" was simply an archaic synonym for "colony" and bore no relation to slavery. The referendum election was held on November 2, 2010, and the people voted overwhelmingly (78% to 22%) to retain the entire original name.
On June 18, 2020, State Senator Harold Metts sponsored a resolution in the State Senate that would allow for another ballot referendum on removing the words "and Providence Plantations" from the state's name. The motion was passed unanimously amidst the George Floyd protests and nationwide calls to address systemic racism. Metts said, "Whatever the meaning of the term 'plantations' in the context of Rhode Island's history, it carries a horrific connotation when considering the tragic and racist history of our nation." The resolution's companion legislation is expected to be voted on in the state House of Representatives in July 2020. If passed, the question would be decided as part of the 2020 United States elections. On June 22, 2020, Rhode Island governor Gina Raimondo issued an executive order to remove the phrase "Providence Plantations" from a range of official documents and state websites.
In 1636, Roger Williams was banished from the Massachusetts Bay Colony for his religious views, and he settled at the top of Narragansett Bay on land sold or given to him by Narragansett sachem Canonicus. He named the site Providence, "having a sense of God's merciful providence unto me in my distress", and it became a place of religious freedom where all were welcome.
In 1638 (after conferring with Williams), Anne Hutchinson, William Coddington, John Clarke, Philip Sherman, and other religious dissenters settled on Aquidneck Island (then known as Rhode Island), which was purchased from the local tribes who called it Pocasset. This settlement was called Portsmouth and was governed by the Portsmouth Compact. The southern part of the island became the separate settlement of Newport after disagreements among the founders.
Samuel Gorton purchased lands at Shawomet in 1642 from the Narragansetts, precipitating a dispute with the Massachusetts Bay Colony. In 1644, Providence, Portsmouth, and Newport united for their common independence as the Colony of Rhode Island and Providence Plantations, governed by an elected council and "president". Gorton received a separate charter for his settlement in 1648 which he named Warwick after his patron.
Brown University was founded in 1764 as the College in the English Colony of Rhode Island and Providence Plantations. It was one of nine Colonial colleges granted charters before the American Revolution, but was the first college in America to accept students regardless of religious affiliation.
Metacomet was the Wampanoag tribe's war leader, whom the colonists called King Philip. They invaded and burned down several of the towns in the area during King Philip's War (1675–1676), including Providence which was attacked twice. A force of Massachusetts, Connecticut, and Plymouth militia under General Josiah Winslow invaded and destroyed the fortified Narragansett Indian village in the Great Swamp in South Kingstown, Rhode Island on December 19, 1675. In one of the final actions of the war, an Indian associated with Benjamin Church killed King Philip in Bristol, Rhode Island.
The colony was amalgamated into the Dominion of New England in 1686, as King James II attempted to enforce royal authority over the autonomous colonies in British North America, but the colony regained its independence under the Royal Charter after the Glorious Revolution of 1688. Slaves were introduced in Rhode Island at this time, although there is no record of any law legalizing slave-holding. The colony later prospered under the slave trade, distilling rum to sell in Africa as part of a profitable triangular trade in slaves and sugar with the Caribbean. Rhode Island's legislative body passed an act in 1652 abolishing the holding of slaves (the first British colony to do so), but this edict was never enforced, and the proportion of enslaved Blacks exceeded 6% of the colony's population by 1800 (nearly twice the ratio of other New England colonies).
Rhode Island's tradition of independence and dissent gave it a prominent role in the American Revolution. At approximately 2 a.m. on June 10, 1772, a band of Providence residents attacked the grounded revenue schooner "Gaspee", burning it to the waterline for enforcing unpopular trade regulations within Narragansett Bay. Rhode Island was the first of the thirteen colonies to renounce its allegiance to the British Crown on May 4, 1776. It was also the last of the thirteen colonies to ratify the United States Constitution on May 29, 1790, and only under threat of heavy trade tariffs from the other former colonies and after assurances were made that a Bill of Rights would become part of the Constitution. During the Revolution, the British occupied Newport in December 1776. A combined Franco-American force fought to drive them off Aquidneck Island. Portsmouth was the site of the first African-American military unit, the 1st Rhode Island Regiment, to fight for the U.S. in the unsuccessful Battle of Rhode Island of August 29, 1778. A month earlier, the appearance of a French fleet off Newport caused the British to scuttle some of their own ships in an attempt to block the harbor. The British abandoned Newport in October 1779, concentrating their forces in New York City. An expedition of 5,500 French troops under Count Rochambeau arrived in Newport by sea on July 10, 1780. The celebrated march to Yorktown, Virginia in 1781 ended with the defeat of the British at the Siege of Yorktown and the Battle of the Chesapeake.
Rhode Island was heavily involved in the slave trade during the post-revolution era. In 1774, the slave population of Rhode Island was 6.3% of the total, nearly twice as high as any other New England colony.
Rhode Island was also heavily involved in the Industrial Revolution, which began in America in 1787 when Thomas Somers reproduced textile machine plans which he imported from England. He helped to produce the Beverly Cotton Manufactory, in which Moses Brown of Providence took an interest. Moses Brown teamed up with Samuel Slater and helped to create the second cotton mill in America, a water-powered textile mill. The Industrial Revolution moved large numbers of workers into the cities, creating a permanently landless class who were therefore, by the law of the time, also voteless. By 1829, 60% of the state's free white males were ineligible to vote. Several unsuccessful attempts were made to address this problem, and a new state constitution was passed in 1843 allowing landless men to vote if they could pay a $1 poll tax.
For the first several decades of statehood, Rhode Island was governed in accordance with the 1663 colonial charter. Voting rights were restricted to landowners holding at least $134 in property, disenfranchising well over half of the state's male citizens. The charter apportioned legislative seats equally among the state's towns, over-representing rural areas and under-representing the growing industrial centers. Additionally, the charter disallowed landless citizens from filing civil suits without endorsement from a landowner. Bills were periodically introduced in the legislature to expand suffrage, but they were invariably defeated. In 1841, activists led by Thomas W. Dorr organized an extralegal convention to draft a state constitution, arguing that the charter government violated the Guarantee Clause in Article Four, Section Four of the United States Constitution. In 1842, the charter government and Dorr's supporters held separate elections, and two rival governments claimed sovereignty over the state. Dorr's supporters led an armed rebellion against the charter government, and Dorr was arrested and imprisoned for treason against the state. Later that year, the legislature drafted a state constitution, removing property requirements for American-born citizens but keeping them in place for immigrants, and retaining urban under-representation in the legislature.
In the early 19th century, Rhode Island was subject to a tuberculosis outbreak which led to public hysteria about vampirism.
During the American Civil War, Rhode Island was the first Union state to send troops in response to President Lincoln's request for help from the states. Rhode Island furnished 25,236 fighting men, of whom 1,685 died. On the home front, Rhode Island and the other northern states used their industrial capacity to supply the Union Army with the materials that it needed to win the war. The United States Naval Academy moved to Rhode Island temporarily during the war.
In 1866, Rhode Island abolished racial segregation in the public schools throughout the state.
The 50 years following the Civil War were a time of prosperity and affluence that author William G. McLoughlin calls "Rhode Island's halcyon era." Rhode Island was a center of the Gilded Age and provided a home or summer home to many of the country's most prominent industrialists. This was a time of growth in textile mills and manufacturing and brought an influx of immigrants to fill those jobs, bringing population growth and urbanization. In Newport, New York's wealthiest industrialists created a summer haven to socialize and build grand mansions. Thousands of French-Canadian, Italian, Irish, and Portuguese immigrants arrived to fill jobs in the textile and manufacturing mills in Providence, Pawtucket, Central Falls, and Woonsocket.
During World War I, Rhode Island furnished 28,817 soldiers, of whom 612 died. After the war, the state was hit hard by the Spanish Influenza.
In the 1920s and 1930s, rural Rhode Island saw a surge in Ku Klux Klan membership, largely in reaction to large waves of immigrants moving to the state. The Klan is believed to be responsible for burning the Watchman Industrial School in Scituate, which was a school for African-American children.
Since the Great Depression, the Rhode Island Democratic Party has dominated local politics. Rhode Island has comprehensive health insurance for low-income children and a large social safety net. Many urban areas still have a high rate of children in poverty. Due to an influx of residents from Boston, increasing housing costs have resulted in more homelessness in Rhode Island.
The 350th Anniversary of the founding of Rhode Island was celebrated with a free concert held on the tarmac of the Quonset State Airport on August 31, 1986. Performers included Chuck Berry, Tommy James, and headliner Bob Hope.
In 2003, a nightclub fire in West Warwick claimed 100 lives and resulted in nearly twice as many injured, catching national attention. The fire resulted in criminal sentences.
In March 2010, areas of the state received record flooding due to rising rivers from heavy rain. The first period of rainy weather in mid-March caused localized flooding and, two weeks later, more rain caused more widespread flooding in many towns, especially south of Providence. Rain totals on March 29–30, 2010 exceeded 14 inches (35.5 cm) in many locales, resulting in the inundation of area rivers—especially the Pawtuxet River which runs through central Rhode Island. The overflow of the Pawtuxet River, nearly above flood stage, submerged a sewage treatment plant and closed a five-mile (8 km) stretch of Interstate 95. In addition, it flooded two shopping malls, numerous businesses, and many homes in the towns of Warwick, West Warwick, Cranston, and Westerly. Amtrak service was also suspended between New York and Boston during this period. Following the flood, Rhode Island was in a state of emergency for two days. The Federal Emergency Management Agency (FEMA) was called in to help flood victims.
Rhode Island covers an area of located within the New England region and is bordered on the north and east by Massachusetts, on the west by Connecticut, and on the south by Rhode Island Sound and the Atlantic Ocean. It shares a narrow maritime border with New York State between Block Island and Long Island. The mean elevation of the state is . It is only wide and long, yet the state has a tidal shoreline on Narragansett Bay and the Atlantic Ocean of .
Rhode Island is nicknamed the Ocean State and has a number of oceanfront beaches. It is mostly flat with no real mountains, and the state's highest natural point is Jerimoth Hill, above sea level. The state has two distinct natural regions. Eastern Rhode Island contains the lowlands of the Narragansett Bay, while Western Rhode Island forms part of the New England upland. Rhode Island's forests are part of the Northeastern coastal forests ecoregion.
Narragansett Bay is a major feature of the state's topography. There are more than 30 islands within the bay; the largest is Aquidneck Island which holds the municipalities of Newport, Middletown, and Portsmouth. The second-largest island is Conanicut, and the third is Prudence. Block Island lies about off the southern coast of the mainland and separates Block Island Sound and the Atlantic Ocean proper.
A rare type of rock called Cumberlandite is found only in Rhode Island (specifically, in the town of Cumberland) and is the state rock. There were initially two known deposits of the mineral, but it is an ore of iron, and one of the deposits was extensively mined for its ferrous content.
Most of Rhode Island has a humid continental climate, with warm summers and cold winters. The southern coastal portions of the state are the broad transition zone into subtropical climates, with hot summers and cool winters with a mix of rain and snow. Block Island has an oceanic climate. The highest temperature recorded in Rhode Island was , recorded on August 2, 1975 in Providence. The lowest recorded temperature in Rhode Island was on February 5, 1996 in Greene. Monthly average temperatures range from a high of to a low of .
Rhode Island is vulnerable to tropical storms and hurricanes due to its location in New England, catching the brunt of many storms blowing up the eastern seaboard. Some hurricanes that have done significant damage in the state are the 1938 New England hurricane, Hurricane Carol (1954), Hurricane Donna (1960), and Hurricane Bob (1991).
The capital of Rhode Island is Providence. The state's current governor is Gina Raimondo (D), and the lieutenant governor is Daniel McKee (D). Raimondo became Rhode Island's first female governor with a plurality of the vote in the November 2014 state elections. Its United States senators are Jack Reed (D) and Sheldon Whitehouse (D). Rhode Island's two United States representatives are David Cicilline (D-1) and Jim Langevin (D-2). "See congressional districts map." Rhode Island is one of a few states that do not have an official governor's residence. "See List of Rhode Island Governors."
The state legislature is the Rhode Island General Assembly, consisting of the 75-member House of Representatives and the 38-member Senate. Both houses of the bicameral body are currently dominated by the Democratic Party; the presence of the Republican Party is minor in the state government, with Republicans holding a handful of seats in both the Senate and House of Representatives.
Rhode Island's population barely crosses the threshold that earns it more than the minimum of three votes in the Electoral College and more than one seat in the federal House of Representatives; it is therefore well represented relative to its population, with the eighth-highest number of electoral votes per resident and the second-highest number of House Representatives per resident. Based on its area, Rhode Island also has the highest density of electoral votes.
Federally, Rhode Island is a reliably Democratic state during presidential elections, usually supporting the Democratic presidential nominee. The state voted for the Republican presidential candidate until 1908. Since then, it has voted for the Republican nominee for president seven times, and the Democratic nominee 17 times. The last 16 presidential elections in Rhode Island have resulted in the Democratic Party winning the Ocean State's Electoral College votes 12 times. In the 1980 presidential election, Rhode Island was one of six states to vote against Republican Ronald Reagan. Reagan was the last Republican to win any of the state's counties in a Presidential election until Donald Trump won Kent County in 2016. In 1988, George H. W. Bush won over 40% of the state's popular vote, something that no Republican has done since.
Rhode Island was the Democrats' leading state in 1988 and 2000, and second-best in 1968, 1996, and 2004. Rhode Island's most one-sided Presidential election result was in 1964, with over 80% of Rhode Island's votes going for Lyndon B. Johnson. In 2004, Rhode Island gave John Kerry more than a 20-percentage-point margin of victory (the third-highest of any state), with 59.4% of its vote. All but three of Rhode Island's 39 cities and towns voted for the Democratic candidate. The exceptions were East Greenwich, West Greenwich, and Scituate. In 2008, Rhode Island gave Barack Obama a 28-percentage-point margin of victory (the third-highest of any state), with 63% of its vote. All but one of Rhode Island's 39 cities and towns voted for the Democratic candidate (the exception being Scituate).
Rhode Island is one of 21 states that have abolished capital punishment; it was the second to do so, just after Michigan, and carried out its last execution in the 1840s. Rhode Island was the second-to-last state to make prostitution illegal; until November 2009, Rhode Island law made prostitution legal provided it took place indoors. In a 2009 study, Rhode Island was listed as the 9th-safest state in the country.
In 2011, Rhode Island became the third state in the United States to pass legislation allowing the use of medical marijuana. Additionally, the Rhode Island General Assembly passed civil unions, and the bill was signed into law by Governor Lincoln Chafee on July 2, 2011, making Rhode Island the eighth state to recognize either same-sex marriage or civil unions. Legislation legalizing same-sex marriage was signed on May 2, 2013, and took effect on August 1.
Rhode Island has some of the highest taxes in the country, particularly its property taxes, ranking seventh in local and state taxes, and sixth in real estate taxes.
The United States Census Bureau estimates that the population of Rhode Island was 1,059,361 on July 1, 2019, a 0.65% increase since the 2010 United States Census. The center of population of Rhode Island is located in Providence County, in the city of Cranston. A corridor of population can be seen from the Providence area, stretching northwest following the Blackstone River to Woonsocket, where 19th-century mills drove industry and development.
According to the 2010 Census, 81.4% of the population was White (76.4% non-Hispanic white), 5.7% was Black or African American, 0.6% American Indian and Alaska Native, 2.9% Asian, 0.1% Native Hawaiian and other Pacific Islander, 3.3% from two or more races. 12.4% of the total population was of Hispanic or Latino origin (they may be of any race).
Of the people residing in Rhode Island, 58.7% were born in Rhode Island, 26.6% were born in a different state, 2.0% were born in Puerto Rico, U.S. island areas, or abroad to American parent(s), and 12.6% were foreign born.
According to the U.S. Census Bureau, Rhode Island had an estimated population of 1,056,298, which is an increase of 1,125, or 0.10%, from the prior year and an increase of 3,731, or 0.35%, since the year 2010. This includes a natural increase since the last census of 15,220 people (that is, 66,973 births minus 51,753 deaths) and an increase due to net migration of 14,001 people into the state. Immigration from outside the United States resulted in a net increase of 18,965 people, and migration within the country produced a net decrease of 4,964 people.
Hispanics in the state make up 12.8% of the population, predominantly Dominican, Puerto Rican, and Guatemalan populations.
According to the 2000 U.S. Census, 84% of the population aged 5 and older spoke only English at home, while 8.07% spoke Spanish, 3.80% Portuguese, 1.96% French, 1.39% Italian, and 0.78% spoke other languages at home.
The state's most populous ethnic group, non-Hispanic white, has declined from 96.1% in 1970 to 76.5% in 2011. In 2011, 40.3% of Rhode Island's children under the age of one belonged to racial or ethnic minority groups, meaning that they had at least one parent who was not non-Hispanic white.
6.1% of Rhode Island's population were reported as under 5, 23.6% under 18, and 14.5% were 65 or older. Females made up approximately 52% of the population.
According to the 2010–2015 American Community Survey, the largest ancestry groups were Irish (18.3%), Italian (18.0%), English (10.5%), French (10.4%), and Portuguese (9.3%).
Rhode Island has a higher percentage of Americans of Portuguese ancestry, including Portuguese Americans and Cape Verdean Americans, than any other state in the nation. Additionally, the state also has the highest percentage of Liberian immigrants, with more than 15,000 residing in the state. Italian Americans make up a plurality in central and southern Providence County, and French-Canadian Americans form a large part of northern Providence County. Irish Americans have a strong presence in Newport and Kent counties. Americans of English ancestry still have a presence in the state as well, especially in Washington County, and are often referred to as "Swamp Yankees." African immigrants, including Cape Verdean Americans, Liberian Americans, Nigerian Americans, and Ghanaian Americans, form significant and growing communities in Rhode Island.
Although Rhode Island has the smallest land area of all 50 states, it has the second highest population density of any state in the Union, second to that of New Jersey.
A Pew survey of Rhode Island residents' religious self-identification showed the following distribution of affiliations: Roman Catholic 43%, Protestant 27%, Jewish 1%, Orthodox 1%, Jehovah's Witnesses 1%, Buddhism 1%, Mormonism 0.5%, Hinduism 0.5%, Islam 0.5% and Non-religious 23%. The largest denominations are the Roman Catholic Church with 456,598 adherents, the Episcopal Church with 19,377, the American Baptist Churches USA with 15,220, and the United Methodist Church with 6,901 adherents.
Rhode Island has the highest proportion of Roman Catholic residents of any state, mainly due to large Irish, Italian, and French-Canadian immigration in the past; recently, significant Portuguese and various Hispanic communities have also been established in the state. Though it has the highest overall Catholic percentage of any state, none of Rhode Island's individual counties ranks among the 10 most Catholic in the United States, as Catholics are very evenly spread throughout the state.
The Jewish community of Rhode Island is centered in the Providence area and emerged during a wave of Jewish immigration, predominantly from Eastern European shtetls, between 1880 and 1920. The presence of the Touro Synagogue in Newport, the oldest existing synagogue in the United States, emphasizes that these second-wave immigrants did not create Rhode Island's first Jewish community; a comparatively smaller wave of Spanish and Portuguese Jews had immigrated to Newport during the colonial era.
Rhode Island is divided into five counties, but it has no county governments. The entire state is divided into municipalities, which handle all local government affairs.
There are 39 cities and towns in Rhode Island. Major population centers today result from historical factors; development took place predominantly along the Blackstone, Seekonk, and Providence Rivers with the advent of the water-powered mill. Providence is the base of a large metropolitan area.
The state's 18 largest municipalities, ranked by population, are:
Some of Rhode Island's cities and towns are further partitioned into villages, in common with many other New England states. Notable villages include Kingston in the town of South Kingstown, which houses the University of Rhode Island; Wickford in the town of North Kingstown, the site of an annual international art festival; and Wakefield, also in South Kingstown, where that town's Town Hall is located.
The Rhode Island economy had a colonial base in fishing.
The Blackstone River Valley was a major contributor to the American Industrial Revolution. It was in Pawtucket that Samuel Slater set up Slater Mill in 1793, using the waterpower of the Blackstone River to power his cotton mill. For a while, Rhode Island was one of the leaders in textiles. However, with the Great Depression, most textile factories relocated to southern U.S. states. The textile industry still constitutes a part of the Rhode Island economy but does not have the same power that it once had.
Other important industries in Rhode Island's past included toolmaking, costume jewelry, and silverware. An interesting by-product of Rhode Island's industrial history is the number of abandoned factories, many of them now being used for condominiums, museums, offices, and low-income and elderly housing. Today, much of the economy of Rhode Island is based in services, particularly healthcare and education, and still manufacturing to some extent. The state's nautical history continues in the 21st century in the form of nuclear submarine construction.
Per the 2013 American Community Survey, Rhode Island has the highest-paid elementary school teachers in the country, with an average salary of $75,028 (adjusted for inflation).
Citizens Financial Group, the 14th-largest bank in the United States, is headquartered in Providence. The Fortune 500 companies CVS Caremark and Textron are based in Woonsocket and Providence, respectively. FM Global, GTECH Corporation, Hasbro, American Power Conversion, Nortek, and Amica Mutual Insurance are all Fortune 1000 companies that are based in Rhode Island.
Rhode Island's 2000 total gross state product was $46.18 billion (adjusted for inflation), placing it 45th in the nation. Its 2000 "per capita" personal income was $41,484 (adjusted for inflation), 16th in the nation. Rhode Island has the lowest level of energy consumption per capita of any state. Additionally, Rhode Island is rated as the fifth most energy-efficient state in the country. In December 2012, the state's unemployment rate was 10.2%.
Health services are Rhode Island's largest industry. Second is tourism, supporting 39,000 jobs, with tourism-related sales at $4.56 billion (adjusted for inflation) in the year 2000. The third-largest industry is manufacturing. Its industrial outputs are submarine construction, shipbuilding, costume jewelry, fabricated metal products, electrical equipment, machinery, and boatbuilding. Rhode Island's agricultural outputs are nursery stock, vegetables, dairy products, and eggs.
Rhode Island's taxes were appreciably higher than neighboring states, because Rhode Island's income tax was based on 25% of the payer's federal income tax payment. Former Governor Donald Carcieri claimed that the higher tax rate had an inhibitory effect on business growth in the state and called for reductions to increase the competitiveness of the state's business environment. In 2010, the Rhode Island General Assembly passed a new state income tax structure that was then signed into law on June 9, 2010 by Governor Carcieri. The income tax overhaul has now made Rhode Island competitive with other New England states by lowering its maximum tax rate to 5.99% and reducing the number of tax brackets to three. The state's first income tax was enacted in 1971.
The largest employers in Rhode Island (excluding employees of municipalities) are the following:
The Rhode Island Public Transit Authority (RIPTA) operates statewide intra- and intercity bus transport from its hubs at Kennedy Plaza in Providence, Pawtucket, and Newport. RIPTA bus routes serve 38 of Rhode Island's 39 cities and towns. (New Shoreham on Block Island is not served). RIPTA currently operates 58 routes, including daytime trolley service (using trolley-style replica buses) in Providence and Newport.
From 2000 through 2008, RIPTA offered seasonal ferry service linking Providence and Newport (already connected by highway), funded by grant money from the United States Department of Transportation. Though the service was popular with residents and tourists, RIPTA was unable to continue it after the federal funding ended, and service was discontinued. The service resumed in 2016 and has been successful. The privately run Block Island Ferry links Block Island with Newport and Narragansett with traditional and fast-ferry service, while the Prudence Island Ferry connects Bristol with Prudence Island. Private ferry services also link several Rhode Island communities with ports in Connecticut, Massachusetts, and New York. The Vineyard Fast Ferry offers seasonal service to Martha's Vineyard from Quonset Point with bus and train connections to Providence, Boston, and New York. Viking Fleet offers seasonal service from Block Island to New London, Connecticut, and Montauk, New York.
The MBTA Commuter Rail's Providence/Stoughton Line links Providence and T. F. Green Airport with Boston. The line was later extended southward to Wickford Junction, with service beginning April 23, 2012. The state hopes to extend the MBTA line to Kingston and Westerly, as well as to explore the possibility of extending Connecticut's Shore Line East to T. F. Green Airport. Amtrak's Acela Express stops at Providence Station (the only Acela stop in Rhode Island), linking Providence to other cities in the Northeast Corridor. Amtrak's Northeast Regional service makes stops at Providence Station, Kingston, and Westerly.
Rhode Island's primary airport for passenger and cargo transport is T. F. Green Airport in Warwick, though most Rhode Islanders who wish to travel internationally on direct flights and those who seek a greater availability of flights and destinations often fly through Logan International Airport in Boston.
Interstate 95 (I-95) runs southwest to northeast across the state, linking Rhode Island with other states along the East Coast. I-295 functions as a partial beltway encircling Providence to the west. I-195 provides a limited-access highway connection from Providence (and Connecticut and New York via I-95) to Cape Cod. Initially built as the easternmost link in the (now cancelled) extension of I-84 from Hartford, Connecticut, a portion of U.S. Route 6 (US 6) through northern Rhode Island is limited-access and links I-295 with downtown Providence.
Several Rhode Island highways extend the state's limited-access highway network. Route 4 is a major north–south freeway linking Providence and Warwick (via I-95) with suburban and beach communities along Narragansett Bay. Route 10 is an urban connector linking downtown Providence with Cranston and Johnston. Route 37 is an important east–west freeway through Cranston and Warwick and links I-95 with I-295. Route 99 links Woonsocket with Providence (via Route 146). Route 146 travels through the Blackstone Valley, linking Providence and I-95 with Worcester, Massachusetts and the Massachusetts Turnpike. Route 403 links Route 4 with Quonset Point.
Several bridges cross Narragansett Bay connecting Aquidneck Island and Conanicut Island to the mainland, most notably the Claiborne Pell Newport Bridge and the Jamestown-Verrazano Bridge.
The East Bay Bike Path stretches from Providence to Bristol along the eastern shore of Narragansett Bay, while the Blackstone River Bikeway will eventually link Providence and Worcester. In 2011, Rhode Island completed work on a marked on-road bicycle path through Pawtucket and Providence, connecting the East Bay Bike Path with the Blackstone River Bikeway, completing a bicycle route through the eastern side of the state. The William C. O'Neill Bike Path (commonly known as the South County Bike Path) is a path through South Kingstown and Narragansett. The Washington Secondary Bike Path stretches from Cranston to Coventry, and the Ten Mile River Greenway path runs through East Providence and Pawtucket.
On May 29, 2014, Governor Lincoln D. Chafee announced that Rhode Island was one of eight states to release a collaborative Action Plan to put 3.3 million zero emission vehicles on the roads by 2025. The goal of the plan is to reduce greenhouse gas and smog-causing emissions. The Action Plan covers promoting zero emission vehicles and investing in the infrastructure to support them.
In 2014, Rhode Island received grants from the Environmental Protection Agency in the amount of $2,711,685 to clean up Brownfield sites in eight locations. The intent of the grants was to provide communities with the funding necessary to assess, clean up, and redevelop contaminated properties, boost local economies, and leverage jobs while protecting public health and the environment.
In 2013, the "Lots of Hope" program was established in the City of Providence to focus on increasing the city's green space and local food production, improve urban neighborhoods, promote healthy lifestyles and improve environmental sustainability. "Lots of Hope", supported by a $100,000 grant, will partner with the City of Providence, the Southside Community Land Trust and the Rhode Island Foundation to convert city-owned vacant lots into productive urban farms.
In 2012, Rhode Island passed bill S2277/H7412, "An act relating to Health and Safety – Environmental Cleanup Objectives for Schools", informally known as the "School Siting Bill." The bill, sponsored by Senator Juan Pichardo and Representative Scott Slater and signed into law by the Governor, made Rhode Island the first state in the US to prohibit school construction on Brownfield Sites where there is an ongoing potential for toxic vapors to negatively impact indoor air quality. It also creates a public participation process whenever a city or town considers building a school on any other kind of contaminated site.
Rhode Island has several colleges and universities:
Some Rhode Islanders speak with the distinctive, non-rhotic, traditional Rhode Island accent that many compare to a cross between the New York City and Boston accents (e.g., "water" sounds like "watuh"). Many Rhode Islanders distinguish a strong "aw" sound (i.e., do not exhibit the cot–caught merger) as one might hear in New Jersey or New York City, as in the word "coffee". This type of accent may have been brought to the region by early settlers from eastern England in the Puritan migration to New England in the mid-17th century.
Rhode Islanders refer to a drinking fountain as a "bubbler" (sometimes pronounced "bubahluh") and sometimes call milkshakes "cabinets". A foot-long, overstuffed sandwich (of whatever kind) is called a "grinder."
Rhode Island, like the rest of New England, has a tradition of clam chowder. Both the white New England and the red Manhattan varieties are popular, but there is also a unique clear-broth chowder known as "Rhode Island Clam Chowder" available in many restaurants. A culinary tradition in Rhode Island is the "clam cake" (also known as a clam fritter outside of Rhode Island), a deep fried ball of buttery dough with chopped bits of clam inside. They are sold by the half-dozen or dozen in most seafood restaurants around the state, and the quintessential summer meal in Rhode Island is chowder and clam cakes.
The quahog is a large local clam usually used in a chowder. It is also ground and mixed with stuffing or spicy minced sausage, and then baked in its shell to form a "stuffie". Calamari (squid) is sliced into rings and fried as an appetizer in most Italian restaurants, typically served Sicilian-style with sliced banana peppers and marinara sauce on the side. Clams Casino originated in Rhode Island, invented by Julius Keller, the maitre d' in the original Casino next to the seaside Towers in Narragansett. Clams Casino resemble the beloved stuffed quahog but are generally made with the smaller littleneck or cherrystone clam and are unique in their use of bacon as a topping.
The official state drink of Rhode Island is "coffee milk", a beverage created by mixing milk with coffee syrup. This unique syrup was invented in the state and is sold in almost all Rhode Island supermarkets, as well as its bordering states. Johnnycakes have been a Rhode Island staple since Colonial times, made with corn meal and water then pan-fried much like pancakes.
Submarine sandwiches are called "grinders" throughout Rhode Island, and the Italian grinder is especially popular, made with cold cuts such as ham, prosciutto, capicola, salami, and Provolone cheese. Linguiça or chouriço is a spicy Portuguese sausage, frequently served with peppers among the state's large Portuguese community and eaten with hearty bread.
The Farrelly brothers and Seth MacFarlane depict Rhode Island in popular culture, often making comedic parodies of the state. MacFarlane's television series "Family Guy" is based in a fictional Rhode Island city named Quahog, and notable local events and celebrities are regularly lampooned. Peter Griffin is seen working at the Pawtucket brewery, and other state locations are mentioned.
The movie "High Society" (starring Bing Crosby, Grace Kelly, and Frank Sinatra) was set in Newport, Rhode Island.
The 1974 film adaptation of "The Great Gatsby" was also filmed in Newport.
Jacqueline Bouvier and John F. Kennedy were married at St. Mary's church in Newport. Their reception was held at Hammersmith Farm, the Bouvier summer home in Newport.
Cartoonist Don Bousquet, a state icon, has made a career out of Rhode Island culture, drawing Rhode Island-themed gags in "The Providence Journal" and "Yankee" magazine. These cartoons have been reprinted in the "Quahog" series of paperbacks ("I Brake for Quahogs", "Beware of the Quahog", and "The Quahog Walks Among Us"). Bousquet has also collaborated with humorist and "Providence Journal" columnist Mark Patinkin on two books: "The Rhode Island Dictionary" and "The Rhode Island Handbook".
The 1998 film "Meet Joe Black" was filmed at Aldrich Mansion in the Warwick Neck area of Warwick.
The first season of "Body of Proof" was filmed entirely in Rhode Island. The show premiered on March 29, 2011.
The 2007 Steve Carell and Dane Cook film "Dan in Real Life" was filmed in various coastal towns in the state. The sunset scene with the entire family on the beach takes place at Napatree Point.
"Jersey Shore" star Pauly D filmed part of his spin-off "The Pauly D Project" in his hometown of Johnston.
The Comedy Central cable television series "Another Period" is set in Newport during the Gilded Age.
Rhode Island has been the first in a number of initiatives. The Colony of Rhode Island and Providence Plantations enacted the first law prohibiting slavery in America on May 18, 1652.
The first act of armed rebellion in America against the British Crown was the boarding and burning of the Revenue Schooner "Gaspee" in Narragansett Bay on June 10, 1772. The idea of a Continental Congress was first proposed at a town meeting in Providence on May 17, 1774. Rhode Island elected the first delegates (Stephen Hopkins and Samuel Ward) to the Continental Congress on June 15, 1774. The Rhode Island General Assembly created the first standing army in the colonies (1,500 men) on April 22, 1775. On June 15, 1775, the first naval engagement took place in the American Revolution between an American sloop commanded by Capt. Abraham Whipple and an armed tender of the British Frigate "Rose". The tender was chased aground and captured. Later in June, the General Assembly created the American Navy when it commissioned the sloops "Katy" and , armed with 24 guns and commanded by Abraham Whipple who was promoted to Commodore. Rhode Island was the first Colony to declare independence from Britain on May 4, 1776.
Slater Mill in Pawtucket was the first commercially successful cotton-spinning mill with a fully mechanized power system in America and was the birthplace of the Industrial Revolution in the US. The oldest Fourth of July parade in the country is still held annually in Bristol, Rhode Island. The first Baptist church in America was founded in Providence in 1638. Ann Smith Franklin of the Newport "Mercury" was the first female newspaper editor in America (August 22, 1762). Touro Synagogue was the first synagogue in America, founded in Newport in 1763.
Pelham Street in Newport was the first in America to be illuminated by gaslight in 1806. The first strike in the United States in which women participated occurred in Pawtucket in 1824. Watch Hill has the nation's oldest flying horses carousel that has been in continuous operation since 1850. The motion picture machine was patented in Providence on April 23, 1867. The first lunch wagon in America was introduced in Providence in 1872. The first nine-hole golf course in America was completed in Newport in 1890. The first state health laboratory was established in Providence on September 1, 1894. The Rhode Island State House was the first building with an all-marble dome to be built in the United States (1895–1901). The first automobile race on a track was held in Cranston on September 7, 1896. The first automobile parade was held in Newport on September 7, 1899 on the grounds of Belcourt Castle.
Rhode Island is nicknamed "The Ocean State", and the nautical nature of Rhode Island's geography pervades its culture. Newport Harbor, in particular, holds many pleasure boats. In the lobby of T. F. Green, the state's main airport, is a large life-sized sailboat, and the state's license plates depict an ocean wave or a sailboat.
Additionally, the large number of beaches in Washington County lures many Rhode Islanders south for summer vacation.
The state was notorious for organized crime activity from the 1950s into the 1990s when the Patriarca crime family held sway over most of New England from its Providence headquarters.
Rhode Islanders developed a unique style of architecture in the 17th century called the stone-ender.
Rhode Island is the only state to still celebrate Victory over Japan Day which is officially named "Victory Day" but is sometimes referred to as "VJ Day." It is celebrated on the second Monday in August.
Nibbles Woodaway, more commonly referred to as "The Big Blue Bug", is a 58-foot-long termite mascot for a Providence extermination business. Since its construction in 1980, it has been featured in several movies and television shows, and has come to be recognized as a cultural landmark by many locals.
Rhode Island has two professional sports teams, both of which are top-level minor league affiliates for teams in Boston. The Pawtucket Red Sox baseball team of the Triple-A International League are an affiliate of the Boston Red Sox. They play at McCoy Stadium in Pawtucket and have won four league titles, the Governors' Cup, in 1973, 1984, 2012, and 2014. McCoy Stadium also has the distinction of being home to the longest professional baseball game ever played – 33 innings.
The other professional minor league team is the Providence Bruins ice hockey team of the American Hockey League, who are an affiliate of the Boston Bruins. They play in the Dunkin' Donuts Center in Providence and won the AHL's Calder Cup during the 1998–99 AHL season.
The Providence Reds were a hockey team that played in the Canadian-American Hockey League (CAHL) between 1926 and 1936 and the American Hockey League (AHL) from 1936 to 1977, the last season of which they played as the Rhode Island Reds. The team won the Calder Cup in 1938, 1940, 1949, and 1956. The Reds played at the Rhode Island Auditorium, located on North Main Street in Providence, Rhode Island from 1926 through 1972, when the team affiliated with the New York Rangers and moved into the newly built Providence Civic Center. The team name came from the rooster known as the Rhode Island Red. They moved to New York in 1977, then to Connecticut in 1997, and are now called the Hartford Wolf Pack.
The Reds are the oldest continuously operating minor-league hockey franchise in North America, having fielded a team in one form or another since 1926 in the CAHL. It is also the only AHL franchise to have never missed a season. The AHL returned to Providence in 1992 in the form of the Providence Bruins.
Before the great expansion of athletic teams all over the country, Providence and Rhode Island in general played a great role in supporting teams. The Providence Grays won the first World Championship in baseball history in 1884. The team played their home games at the old Messer Street Field in Providence. The Grays played in the National League from 1878 to 1885. They defeated the New York Metropolitans of the American Association in a best-of-five series at the Polo Grounds in New York. Providence won three straight games to become the first champions in major league baseball history. Babe Ruth played for the minor league Providence Grays of 1914 and hit his only official minor league home run for that team before being recalled by the Grays' parent club, the Boston Red Sox.
The now-defunct professional football team the Providence Steam Roller won the 1928 NFL title. They played in a 10,000-person stadium called the Cycledrome. The Providence Steamrollers played in the Basketball Association of America, which became the National Basketball Association.
Rhode Island is also home to a top semi-professional soccer club, the Rhode Island Reds, which competes in the National Premier Soccer League, the fourth division of U.S. Soccer.
Rhode Island is home to one top-level non-minor league team, the Rhode Island Rebellion, a semi-professional rugby league team that competes in the USA Rugby League, the top competition in the United States for the sport of rugby league. The Rebellion play their home games at Classical High School in Providence.
There are four NCAA Division I schools in Rhode Island. All four schools compete in different conferences. The Brown University Bears compete in the Ivy League, the Bryant University Bulldogs compete in the Northeast Conference, the Providence College Friars compete in the Big East Conference, and the University of Rhode Island Rams compete in the Atlantic-10 Conference. Three of the schools' football teams compete in the Football Championship Subdivision, the second-highest level of college football in the United States. Brown plays FCS football in the Ivy League, Bryant plays FCS football in the Northeast Conference, and Rhode Island plays FCS football in the Colonial Athletic Association. All four of the Division I schools in the state compete in an intrastate all-sports competition known as the Ocean State Cup, with Bryant winning the most recent cup in the 2011–12 academic year.
From 1930 to 1983, America's Cup races were sailed off Newport, and the extreme-sport X Games and Gravity Games were founded and hosted in the state's capital city.
The International Tennis Hall of Fame is in Newport at the Newport Casino, site of the first U.S. National Championships in 1881. The Hall of Fame and Museum were established in 1954 by James Van Alen as "a shrine to the ideals of the game".
Rhode Island is also home to the headquarters of the governing body for youth rugby league in the United States, the American Youth Rugby League Association (AYRLA). The AYRLA has started the first-ever youth rugby league competition in Providence middle schools and a program at the RI Training School, in addition to starting the first high school competition in the US in Providence public high schools.
The state capitol building is made of white Georgian marble. On top is the world's fourth largest self-supported marble dome. It houses the Rhode Island Charter granted by King Charles II in 1663, the Brown University charter, and other state treasures.
The First Baptist Church of Providence is the oldest Baptist church in the Americas, founded by Roger Williams in 1638.
The first fully automated post office in the country is located in Providence. There are many historic mansions in the seaside city of Newport, including The Breakers, Marble House, and Belcourt Castle. Also located there is the Touro Synagogue, dedicated on December 2, 1763, considered by locals to be the first synagogue within the United States (see below for information on New York City's claim), and still serving. The synagogue showcases the religious freedoms that were established by Roger Williams, as well as impressive architecture in a mix of the classic colonial and Sephardic style. The Newport Casino is a National Historic Landmark building complex that presently houses the International Tennis Hall of Fame and features an active grass-court tennis club.
Scenic Route 1A (known locally as Ocean Road) is in Narragansett. "The Towers" is also located in Narragansett featuring a large stone arch. It was once the entrance to a famous Narragansett casino that burned down in 1900. The Towers now serve as an event venue and host the local Chamber of Commerce, which operates a tourist information center.
The Newport Tower has been hypothesized to be of Viking origin, although most experts believe that it was a Colonial-era windmill.
Rock and roll
Rock and roll (often written as rock & roll, rock 'n' roll, or rock 'n roll) is a genre of popular music that originated and evolved in the United States during the late 1940s and early 1950s from musical styles such as gospel, jump blues, jazz, boogie woogie, rhythm and blues, and country music. While elements of what was to become rock and roll can be heard in blues records from the 1920s and in country records of the 1930s, the genre did not acquire its name until 1954.
According to journalist Greg Kot, "rock and roll" refers to a style of popular music originating in the U.S. in the 1950s prior to its development by the mid-1960s into "the more encompassing international style known as rock music, though the latter also continued to be known as rock and roll." For the purpose of differentiation, this article deals with the first definition.
In the earliest rock and roll styles, either the piano or saxophone was typically the lead instrument, but these instruments were generally replaced or supplemented by guitar in the middle to late 1950s. The beat is essentially a dance rhythm with an accentuated backbeat, which is almost always provided by a snare drum. Classic rock and roll is usually played with one or two electric guitars (one lead, one rhythm), a double bass (string bass) or after the mid-1950s an electric bass guitar, and a drum kit.
Beyond just a musical style, rock and roll, as depicted in movies, in fan magazines, and on television, influenced lifestyles, fashion, attitudes, and language. Rock and roll may have contributed to the civil rights movement because both African American and White American teenagers enjoyed the music.
The term "rock and roll" is defined by "Encyclopædia Britannica" as the music that originated in the mid-1950s and later developed "into the more encompassing international style known as rock music". The term is sometimes also used as synonymous with "rock music" and is defined as such in some dictionaries.
The phrase "rocking and rolling" originally described the movement of a ship on the ocean, but by the early 20th century was used both to describe the spiritual fervor of black church rituals and as a sexual analogy. Various gospel, blues and swing recordings used the phrase before it became used more frequently – but still intermittently – in the 1940s, on recordings and in reviews of what became known as "rhythm and blues" music aimed at a black audience.
In 1934, the song "Rock and Roll" by the Boswell Sisters appeared in the film "Transatlantic Merry-Go-Round". In 1942, "Billboard" magazine columnist Maurie Orodenker started to use the term "rock-and-roll" to describe upbeat recordings such as "Rock Me" by Sister Rosetta Tharpe. By 1943, the "Rock and Roll Inn" in South Merchantville, New Jersey, was established as a music venue. In 1951, Cleveland, Ohio, disc jockey Alan Freed began playing this music style while popularizing the phrase to describe it.
The origins of rock and roll have been fiercely debated by commentators and historians of music. There is general agreement that it arose in the Southern United States – a region that would produce most of the major early rock and roll acts – through the meeting of various influences that embodied a merging of the African musical tradition with European instrumentation. The migration of many former slaves and their descendants to major urban centers such as St. Louis, Memphis, New York City, Detroit, Chicago, Cleveland, and Buffalo (See: Second Great Migration (African American)) meant that black and white residents were living in close proximity in larger numbers than ever before, and as a result heard each other's music and even began to emulate each other's fashions. Radio stations that made white and black forms of music available to both groups, the development and spread of the gramophone record, and African-American musical styles such as jazz and swing which were taken up by white musicians, aided this process of "cultural collision".
The immediate roots of rock and roll lay in the rhythm and blues, then called "race music", and country music of the 1940s and 1950s. Particularly significant influences were jazz, blues, gospel, country, and folk. Commentators differ in their views of which of these forms were most important and the degree to which the new music was a re-branding of African-American rhythm and blues for a white market, or a new hybrid of black and white forms.
In the 1930s, jazz, and particularly swing, both in urban-based dance bands and in blues-influenced country swing (Jimmie Rodgers, Moon Mullican, and other similar singers), were among the first styles of music to present African-American sounds for a predominantly white audience. One particularly noteworthy example of a jazz song with recognizably rock and roll elements is Big Joe Turner and pianist Pete Johnson's 1939 single "Roll 'Em Pete", which is regarded as an important precursor of rock and roll. The 1940s saw the increased use of blaring horns (including saxophones), shouted lyrics and boogie woogie beats in jazz-based music. During and immediately after World War II, with shortages of fuel and limitations on audiences and available personnel, large jazz bands were less economical and tended to be replaced by smaller combos, using guitars, bass and drums. In the same period, particularly on the West Coast and in the Midwest, the development of jump blues, with its guitar riffs, prominent beats and shouted lyrics, prefigured many later developments. In the documentary film "Hail! Hail! Rock 'n' Roll", Keith Richards proposes that Chuck Berry developed his brand of rock and roll by transposing the familiar two-note lead line of jump blues piano directly to the electric guitar, creating what is instantly recognizable as rock guitar. Similarly, country boogie and Chicago electric blues supplied many of the elements that would be seen as characteristic of rock and roll. Inspired by electric blues, Chuck Berry introduced an aggressive guitar sound to rock and roll, and established the electric guitar as its centrepiece, adapting his rock band instrumentation from the basic blues band instrumentation of a lead guitar, second chord instrument, bass and drums.
Rock and roll arrived at a time of considerable technological change, soon after the development of the electric guitar, amplifier and microphone, and the 45 rpm record. There were also changes in the record industry, with the rise of independent labels like Atlantic, Sun and Chess servicing niche audiences and a similar rise of radio stations that played their music. It was the realization that relatively affluent white teenagers were listening to this music that led to the development of what was to be defined as rock and roll as a distinct genre. Because the development of rock and roll was an evolutionary process, no single record can be identified as unambiguously "the first" rock and roll record. Contenders for the title of "first rock and roll record" include Sister Rosetta Tharpe's "Strange Things Happening Every Day" (1944), "That's All Right" by Arthur Crudup (1946), "The Fat Man" by Fats Domino (1949), Goree Carter's "Rock Awhile" (1949), Jimmy Preston's "Rock the Joint" (1949), which was later covered by Bill Haley & His Comets in 1952, and "Rocket 88" by Jackie Brenston and his Delta Cats (Ike Turner and his band the Kings of Rhythm), recorded by Sam Phillips for Sun Records in March 1951. In terms of its wide cultural impact across society in the US and elsewhere, Bill Haley's "Rock Around the Clock", recorded in April 1954 but not a commercial success until the following year, is generally recognized as an important milestone, but it was preceded by many recordings from earlier decades in which elements of rock and roll can be clearly discerned.
Other artists with early rock and roll hits included Chuck Berry, Bo Diddley, Little Richard, Jerry Lee Lewis, and Gene Vincent. Chuck Berry's 1955 classic "Maybellene" in particular features a distorted electric guitar solo with warm overtones created by his small valve amplifier. However, the use of distortion was predated by electric blues guitarists such as Joe Hill Louis, Guitar Slim, Willie Johnson of Howlin' Wolf's band, and Pat Hare; the latter two also made use of distorted power chords in the early 1950s. Also in 1955, Bo Diddley introduced the "Bo Diddley beat" and a unique electric guitar style, influenced by African and Afro-Cuban music and in turn influencing many later artists.
"Rockabilly" usually (but not exclusively) refers to the type of rock and roll music which was played and recorded in the mid-1950s primarily by white singers such as Elvis Presley, Carl Perkins, Johnny Cash, and Jerry Lee Lewis, who drew mainly on the country roots of the music. Elvis Presley was greatly influenced by African American musicians such as B.B. King, Chuck Berry, and Fats Domino, and incorporated elements of their styles into his own music. This blending of black influences created controversy during a turbulent time in history. Many other popular rock and roll singers of the time, such as Fats Domino and Little Richard, came out of the black rhythm and blues tradition, making the music attractive to white audiences, and are not usually classed as "rockabilly".
Bill Flagg, a Connecticut resident, began referring to his mix of hillbilly and rock 'n' roll music as rockabilly around 1953. His song "Guitar Rock" is considered classic rockabilly.
In July 1954, Elvis Presley recorded the regional hit "That's All Right" at Sam Phillips' Sun Studio in Memphis. Three months earlier, on April 12, 1954, Bill Haley & His Comets recorded "Rock Around the Clock". Although only a minor hit when first released, when used in the opening sequence of the movie "Blackboard Jungle" a year later, it set the rock and roll boom in motion. The song became one of the biggest hits in history, and frenzied teens flocked to see Haley and the Comets perform it, causing riots in some cities. "Rock Around the Clock" was a breakthrough for both the group and for all of rock and roll music. If everything that came before laid the groundwork, "Rock Around the Clock" introduced the music to a global audience.
In 1956, the arrival of rockabilly was underlined by the success of songs like "Folsom Prison Blues" by Johnny Cash, "Blue Suede Shoes" by Perkins and the No. 1 hit "Heartbreak Hotel" by Presley. For a few years it became the most commercially successful form of rock and roll. Later rockabilly acts, particularly performing songwriters like Buddy Holly, would be a major influence on British Invasion acts, and particularly on the songwriting of the Beatles and through them on the nature of later rock music.
Doo-wop was one of the most popular forms of 1950s rhythm and blues, often compared with rock and roll, with an emphasis on multi-part vocal harmonies and meaningless backing lyrics (from which the genre later gained its name), which were usually supported with light instrumentation. Its origins were in African-American vocal groups of the 1930s and 40s, such as the Ink Spots and the Mills Brothers, who had enjoyed considerable commercial success with arrangements based on close harmonies. They were followed by 1940s R&B vocal acts such as the Orioles, the Ravens and the Clovers, who injected a strong element of traditional gospel and, increasingly, the energy of jump blues. By 1954, as rock and roll was beginning to emerge, a number of similar acts began to cross over from the R&B charts to mainstream success, often with added honking brass and saxophone, with the Crows, the Penguins, the El Dorados and the Turbans all scoring major hits. Despite the subsequent explosion in records from doo-wop acts in the later '50s, many failed to chart or were one-hit wonders. Exceptions included the Platters, with songs including "The Great Pretender" (1955) and the Coasters with humorous songs like "Yakety Yak" (1958), both of which ranked among the most successful rock and roll acts of the era. Towards the end of the decade there were increasing numbers of white, particularly Italian-American, singers taking up doo-wop, creating all-white groups like the Mystics and Dion and the Belmonts and racially integrated groups like the Del-Vikings and the Impalas. Doo-wop would be a major influence on vocal surf music, soul and early Merseybeat, including the Beatles.
Many of the earliest white rock and roll hits were covers or partial re-writes of earlier black rhythm and blues or blues songs. Through the late 1940s and early 1950s, R&B music had been gaining a stronger beat and a wilder style, with artists such as Fats Domino and Johnny Otis speeding up the tempos and increasing the backbeat to great popularity on the juke joint circuit. Before the efforts of Freed and others, black music was taboo on many white-owned radio outlets, but artists and producers quickly recognized the potential of rock and roll. Some of Presley's early recordings were covers of black rhythm and blues or blues songs, such as "That's All Right" (a countrified arrangement of a blues number), "Baby Let's Play House", "Lawdy Miss Clawdy" and "Hound Dog". The racial lines, however, are rather more clouded by the fact that some of these R&B songs originally recorded by black artists had been written by white songwriters, such as the team of Jerry Leiber and Mike Stoller. Songwriting credits were often unreliable; many publishers, record executives, and even managers (both white and black) would insert their name as a composer in order to collect royalty checks.
Covers were customary in the music industry at the time; they were made particularly easy by the compulsory license provision of United States copyright law (still in effect). Among the first relevant successful covers were Wynonie Harris's transformation of Roy Brown's 1947 original jump blues hit "Good Rocking Tonight" into a more showy rocker, the Louis Prima rocker "Oh Babe" in 1950, and Amos Milburn's cover of what may have been the first white rock and roll record, Hardrock Gunter's "Birmingham Bounce", in 1949. The most notable trend, however, was white pop covers of black R&B numbers. The more familiar sound of these covers may have been more palatable to white audiences, and there may have been an element of prejudice, but labels aimed at the white market also had much better distribution networks and were generally much more profitable. Famously, Pat Boone recorded sanitized versions of songs recorded by the likes of Fats Domino, Little Richard, the Flamingos and Ivory Joe Hunter. Later, as those songs became popular, the original artists' recordings received radio play as well.
The cover versions were not necessarily straightforward imitations. For example, Bill Haley's incompletely bowdlerized cover of "Shake, Rattle and Roll" transformed Big Joe Turner's humorous and racy tale of adult love into an energetic teen dance number, while Georgia Gibbs replaced Etta James's tough, sarcastic vocal in "Roll With Me, Henry" (covered as "Dance With Me, Henry") with a perkier vocal more appropriate for an audience unfamiliar with the song to which James's song was an answer, Hank Ballard's "Work With Me, Annie". Elvis's rock and roll version of "Hound Dog", taken mainly from a version recorded by the pop band Freddie Bell and the Bellboys, was very different from the blues shouter that Big Mama Thornton had recorded four years earlier. Other white artists who recorded cover versions of rhythm & blues songs included Gale Storm (Smiley Lewis's "I Hear You Knockin'"), the Diamonds (the Gladiolas' "Little Darlin'" and Frankie Lymon & the Teenagers' "Why Do Fools Fall in Love?"), the Crew Cuts (the Chords' "Sh-Boom" and Nappy Brown's "Don't Be Angry"), the Fontane Sisters (the Jewels' "Hearts of Stone") and the McGuire Sisters (the Moonglows' "Sincerely").
Some commentators have suggested a decline of rock and roll in the late 1950s and early 1960s. By the end of 1959, the deaths of Buddy Holly, the Big Bopper and Ritchie Valens in a plane crash (February 1959), the departure of Elvis for service in the United States Army (March 1958), the retirement of Little Richard to become a preacher (October 1957), the scandal surrounding Jerry Lee Lewis's marriage to his thirteen-year-old cousin (May 1958), the arrest of Chuck Berry (December 1959), and the breaking of the payola scandal implicating major figures, including Alan Freed, in bribery and corruption in promoting individual acts or songs (November 1959), gave a sense that the initial phase of rock and roll had come to an end.
During the late 1950s and early 1960s, the rawer sounds of Elvis Presley, Gene Vincent, Jerry Lee Lewis and Buddy Holly were commercially superseded by a more polished, commercial style of rock and roll. Marketing frequently emphasized the physical looks of the artist rather than the music, contributing to the successful careers of Ricky Nelson, Tommy Sands, Bobby Vee and the Philadelphia trio of Bobby Rydell, Frankie Avalon and Fabian, who all became "teen idols."
Some music historians have also pointed to important and innovative developments that built on rock and roll in this period, including multitrack recording (developed by Les Paul), the electronic treatment of sound by innovators such as Joe Meek, the "Wall of Sound" productions of Phil Spector, the continued desegregation of the charts, the rise of surf music and garage rock, and the Twist dance craze. Surf rock in particular, noted for the use of reverb-drenched guitars, became one of the most popular forms of American rock of the 1960s.
In the 1950s, Britain was well placed to receive American rock and roll music and culture. It shared a common language, had been exposed to American culture through the stationing of troops in the country, and shared many social developments, including the emergence of distinct youth sub-cultures, which in Britain included the Teddy Boys and the rockers. Trad Jazz became popular, and many of its musicians were influenced by related American styles, including boogie woogie and the blues. The skiffle craze, led by Lonnie Donegan, utilised amateurish versions of American folk songs and encouraged many of the subsequent generation of rock and roll, folk, R&B and beat musicians to start performing. At the same time British audiences were beginning to encounter American rock and roll, initially through films including "Blackboard Jungle" (1955) and "Rock Around the Clock" (1956). Both movies contained the Bill Haley & His Comets hit "Rock Around the Clock", which first entered the British charts in early 1955 – four months before it reached the US pop charts – topped the British charts later that year and again in 1956, and helped identify rock and roll with teenage delinquency. American rock and roll acts such as Elvis Presley, Little Richard, Buddy Holly, Chuck Berry and Carl Perkins thereafter became major forces in the British charts.
The initial response of the British music industry was to attempt to produce copies of American records, recorded with session musicians and often fronted by teen idols. More grassroots British rock and rollers soon began to appear, including Wee Willie Harris and Tommy Steele. During this period American rock and roll remained dominant; however, in 1958 Britain produced its first "authentic" rock and roll song and star, when Cliff Richard reached number 2 in the charts with "Move It". At the same time, TV shows such as "Six-Five Special" and "Oh Boy!" promoted the careers of British rock and rollers like Marty Wilde and Adam Faith. Cliff Richard and his backing band, the Shadows, were the most successful home-grown rock and roll based acts of the era. Other leading acts included Billy Fury, Joe Brown, and Johnny Kidd & the Pirates, whose 1960 hit song "Shakin' All Over" became a rock and roll standard.
As interest in rock and roll was beginning to subside in America in the late 1950s and early 1960s, it was taken up by groups in major British urban centres like Liverpool, Manchester, Birmingham, and London. About the same time, a British blues scene developed, initially led by purist blues followers such as Alexis Korner and Cyril Davies who were directly inspired by American musicians such as Robert Johnson, Muddy Waters and Howlin' Wolf. Many groups moved towards the beat music of rock and roll and rhythm and blues from skiffle, like the Quarrymen who became the Beatles, producing a form of rock and roll revivalism that carried them and many other groups to national success from about 1963 and to international success from 1964, known in America as the British Invasion. Groups that followed the Beatles included the beat-influenced Freddie and the Dreamers, Wayne Fontana and the Mindbenders, Herman's Hermits and the Dave Clark Five. Early British rhythm and blues groups with more blues influences include the Animals, the Rolling Stones, and the Yardbirds.
Rock and roll influenced lifestyles, fashion, attitudes, and language. In addition, rock and roll may have contributed to the civil rights movement because both African-American and white American teens enjoyed the music.
Many early rock and roll songs dealt with issues of cars, school, dating, and clothing. The lyrics of rock and roll songs described events and conflicts that most listeners could relate to through personal experience. Topics such as sex that had generally been considered taboo began to appear in rock and roll lyrics. This new music tried to break boundaries and express emotions that people were actually feeling but had not talked about. An awakening began to take place in American youth culture.
In the crossover of African-American "race music" to a growing white youth audience, the popularization of rock and roll involved both black performers reaching a white audience and white musicians performing African-American music. Rock and roll appeared at a time when racial tensions in the United States were entering a new phase, with the beginnings of the civil rights movement for desegregation, leading to the U.S. Supreme Court ruling that abolished the policy of "separate but equal" in 1954, but leaving a policy which would be extremely difficult to enforce in parts of the United States. The coming together of white youth audiences and black music in rock and roll inevitably provoked strong white racist reactions within the US, with many whites condemning its breaking down of barriers based on color. Many observers saw rock and roll as heralding the way for desegregation, in creating a new form of music that encouraged racial cooperation and shared experience. Many authors have argued that early rock and roll was instrumental in the way both white and black teenagers identified themselves.
Several rock historians have claimed that rock and roll was one of the first music genres to define an age group. It gave teenagers a sense of belonging, even when they were alone. Rock and roll is often identified with the emergence of teen culture among the first baby boomer generation, who had greater relative affluence and leisure time and adopted rock and roll as part of a distinct subculture. This involved not just music, absorbed via radio, record buying, jukeboxes and TV programs like "American Bandstand", but also extended to film, clothes, hair, cars and motorbikes, and distinctive language. The youth culture exemplified by rock and roll was a recurring source of concern for older generations, who worried about juvenile delinquency and social rebellion, particularly because to a large extent rock and roll culture was shared by different racial and social groups.
In America, that concern was conveyed even in youth cultural artifacts such as comic books. In "There's No Romance in Rock and Roll" from "True Life Romance" (1956), a defiant teen dates a rock and roll-loving boy but drops him for one who likes traditional adult music—to her parents' relief. In Britain, where postwar prosperity was more limited, rock and roll culture became attached to the pre-existing Teddy Boy movement, largely working class in origin, and eventually to the rockers. Rock and roll has been seen as reorienting popular music toward a youth market, as in Dion and the Belmonts' "A Teenager in Love" (1959).
From its early 1950s beginnings through the early 1960s, rock and roll spawned new dance crazes including the twist. Teenagers found the syncopated backbeat rhythm especially suited to reviving Big Band-era jitterbug dancing. Sock hops, school and church gym dances, and home basement dance parties became the rage, and American teens watched Dick Clark's "American Bandstand" to keep up on the latest dance and fashion styles. From the mid-1960s on, as "rock and roll" was rebranded as "rock," later dance genres followed, leading to funk, disco, house, techno, and hip hop. | https://en.wikipedia.org/wiki?curid=25412 |
Religion
Religion is a social-cultural system of designated behaviors and practices, morals, worldviews, texts, sanctified places, prophecies, ethics, or organizations, that relates humanity to supernatural, transcendental, or spiritual elements. However, there is no scholarly consensus over what precisely constitutes a religion.
Different religions may or may not contain various elements ranging from the divine, sacred things, faith, a supernatural being or supernatural beings or "some sort of ultimacy and transcendence that will provide norms and power for the rest of life". Religious practices may include rituals, sermons, commemoration or veneration (of deities and/or saints), sacrifices, festivals, feasts, trances, initiations, funerary services, matrimonial services, meditation, prayer, music, art, dance, public service, or other aspects of human culture. Religions have sacred histories and narratives, which may be preserved in sacred scriptures, and symbols and holy places, that aim mostly to give a meaning to life. Religions may contain symbolic stories, which are sometimes said by followers to be true, that have the side purpose of explaining the origin of life, the universe, and other things. Traditionally, faith, in addition to reason, has been considered a source of religious beliefs.
There are an estimated 10,000 distinct religions worldwide. About 84% of the world's population is affiliated with Christianity, Islam, Hinduism, Buddhism, or some form of folk religion. The religiously unaffiliated demographic includes those who do not identify with any particular religion, atheists, and agnostics. While the religiously unaffiliated have grown globally, many of the religiously unaffiliated still have various religious beliefs.
The study of religion encompasses a wide variety of academic disciplines, including theology, comparative religion and social scientific studies. Theories of religion offer various explanations for the origins and workings of religion, including the ontological foundations of religious being and belief.
"Religion" (from O.Fr. "religion" "religious community", from L. "religionem" (nom. "religio") "respect for what is sacred, reverence for the gods, sense of right, moral obligation, sanctity", "obligation, the bond between man and the gods") is derived from the Latin "religiō", the ultimate origins of which are obscure. One possible interpretation, traced to Cicero, connects "lego" ("read"), i.e. "re" (again) with "lego" in the sense of choose, go over again, or consider carefully. The definition of "religio" by Cicero is "cultum deorum", "the proper performance of rites in veneration of the gods." Julius Caesar used "religio" to mean "obligation of an oath" when discussing captured soldiers making an oath to their captors. The Roman naturalist Pliny the Elder applied the term "religio" to elephants, in that they venerate the sun and the moon. Modern scholars such as Tom Harpur and Joseph Campbell favor the derivation from "ligare" ("bind, connect"), probably from a prefixed "re-ligare", i.e. "re" (again) + "ligare", "to reconnect", which was made prominent by St. Augustine, following the interpretation given by Lactantius in "Divinae institutiones", IV, 28. The medieval usage alternates with "order" in designating bonded communities like those of monastic orders: "we hear of the 'religion' of the Golden Fleece, of a knight 'of the religion of Avys'".
In classical antiquity, "religio" broadly meant conscientiousness, sense of right, moral obligation, or duty to anything. In the ancient and medieval world, the etymological Latin root "religio" was understood as an individual virtue of worship in mundane contexts; never as doctrine, practice, or actual source of knowledge. In general, "religio" referred to broad social obligations towards anything including family, neighbors, rulers, and even towards God. "Religio" was most often used by the ancient Romans not in the context of a relation towards gods, but as a range of general emotions such as hesitation, caution, anxiety, fear; feelings of being bound, restricted, inhibited; which arose from heightened attention in any mundane context. The term was also closely related to other terms like "scrupulus", which meant "very precisely", and some Roman authors related the term "superstitio", which meant too much fear or anxiety or shame, to "religio" at times. When "religio" came into English around the 1200s as religion, it took the meaning of "life bound by monastic vows" or monastic orders. The compartmentalized concept of religion, where religious things were separated from worldly things, was not used before the 1500s. The concept of religion was first used in the 1500s to distinguish the domain of the church from the domain of civil authorities.
In ancient Greece, the Greek term "threskeia" was loosely translated into Latin as "religio" in late antiquity. The term was sparsely used in classical Greece but became more frequently used in the writings of Josephus in the first century CE. It was used in mundane contexts and could mean multiple things, from respectful fear to excessive or harmfully distracting practices of others to cultic practices. It was often contrasted with the Greek word "deisidaimonia", which meant too much fear.
The modern concept of religion, as an abstraction that entails distinct sets of beliefs or doctrines, is a recent invention in the English language. Such usage began with texts from the 17th century due to events such as the splitting of Christendom during the Protestant Reformation and globalization in the age of exploration, which involved contact with numerous foreign cultures with non-European languages.
Some argue that regardless of its definition, it is not appropriate to apply the term religion to non-Western cultures. Others argue that applying the term to non-Western cultures distorts what people do and believe.
The concept of religion was formed in the 16th and 17th centuries, despite the fact that ancient sacred texts like the Bible, the Quran, and others did not have a word or even a concept of religion in the original languages and neither did the people or the cultures in which these sacred texts were written. For example, there is no precise equivalent of religion in Hebrew, and Judaism does not distinguish clearly between religious, national, racial, or ethnic identities. One of its central concepts is "halakha", meaning the walk or path, sometimes translated as law, which guides religious practice and belief and many aspects of daily life. Even though the beliefs and traditions of Judaism are found in the ancient world, ancient Jews saw Jewish identity as being about an ethnic or national identity and did not entail a compulsory belief system or regulated rituals. Even in the 1st century CE, Josephus had used the Greek term "ioudaismos", which some translate as Judaism today, even though he used it as an ethnic term, not one linked to modern abstract concepts of religion as a set of beliefs. It was in the 19th century that Jews began to see their ancestral culture as a religion analogous to Christianity. The Greek word "threskeia", which was used by Greek writers such as Herodotus and Josephus, is found in the New Testament. "Threskeia" is sometimes translated as religion in today's translations; however, the term was understood as worship well into the medieval period. In the Quran, the Arabic word "din" is often translated as religion in modern translations, but up to the mid-1600s translators expressed "din" as law.
The Sanskrit word dharma, sometimes translated as religion, also means law. Throughout classical South Asia, the study of law consisted of concepts such as penance through piety and ceremonial as well as practical traditions. Medieval Japan at first had a similar union between imperial law and universal or Buddha law, but these later became independent sources of power.
Throughout the Americas, Native Americans never had a concept of "religion"; some scholars argue that any suggestion otherwise is a colonial imposition by Christians.
Though traditions, sacred texts, and practices have existed throughout time, most cultures did not align with Western conceptions of religion since they did not separate everyday life from the sacred. In the 18th and 19th centuries, the terms Buddhism, Hinduism, Taoism, Confucianism, and world religions first entered the English language. No one self-identified as a Hindu or Buddhist or other similar terms before the 1800s. "Hindu" has historically been used as a geographical, cultural, and later religious identifier for people indigenous to the Indian subcontinent. Throughout its long history, Japan had no concept of religion since there was no corresponding Japanese word, nor anything close to its meaning, but when American warships appeared off the coast of Japan in 1853 and forced the Japanese government to sign treaties demanding, among other things, freedom of religion, the country had to contend with this Western idea.
According to the philologist Max Müller in the 19th century, the root of the English word religion, the Latin "religio", was originally used to mean only reverence for God or the gods, careful pondering of divine things, piety (which Cicero further derived to mean diligence). Max Müller characterized many other cultures around the world, including Egypt, Persia, and India, as having a similar power structure at this point in history. What is called ancient religion today, they would have only called law.
Scholars have failed to agree on a definition of religion. There are, however, two general definition systems: the sociological/functional and the phenomenological/philosophical.
Religion is a modern Western concept. Parallel concepts are not found in many current and past cultures; there is no equivalent term for religion in many languages. Scholars have found it difficult to develop a consistent definition, with some giving up on the possibility of a definition. Others argue that regardless of its definition, it is not appropriate to apply it to non-Western cultures.
An increasing number of scholars have expressed reservations about ever defining the essence of religion. They observe that the way we use the concept today is a particularly modern construct that would not have been understood through much of history and in many cultures outside the West (or even in the West until after the Peace of Westphalia). The MacMillan Encyclopedia of Religions states:
The anthropologist Clifford Geertz defined religion as a
Alluding perhaps to Tylor's "deeper motive", Geertz remarked that
The theologian Antoine Vergote took the term supernatural simply to mean whatever transcends the powers of nature or human agency. He also emphasized the cultural reality of religion, which he defined as
Peter Mandaville and Paul James intended to get away from the modernist dualisms or dichotomous understandings of immanence/transcendence, spirituality/materialism, and sacredness/secularity. They define religion as
According to the MacMillan Encyclopedia of Religions, there is an experiential aspect to religion which can be found in almost every culture:
Friedrich Schleiermacher in the late 18th century defined religion as "das schlechthinnige Abhängigkeitsgefühl", commonly translated as "the feeling of absolute dependence".
His contemporary Georg Wilhelm Friedrich Hegel disagreed thoroughly, defining religion as "the Divine Spirit becoming conscious of Himself through the finite spirit."
Edward Burnett Tylor defined religion in 1871 as "the belief in spiritual beings". He argued that narrowing the definition to mean the belief in a supreme deity or judgment after death or idolatry and so on, would exclude many peoples from the category of religious, and thus "has the fault of identifying religion rather with particular developments than with the deeper motive which underlies them". He also argued that the belief in spiritual beings exists in all known societies.
In his book "The Varieties of Religious Experience", the psychologist William James defined religion as "the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine". By the term divine James meant "any object that is godlike, whether it be a concrete deity or not" to which the individual feels impelled to respond with solemnity and gravity.
The sociologist Émile Durkheim, in his seminal book "The Elementary Forms of the Religious Life", defined religion as a "unified system of beliefs and practices relative to sacred things". By sacred things he meant things "set apart and forbidden"; these beliefs and practices "unite into one single moral community called a Church, all those who adhere to them". Sacred things are not, however, limited to gods or spirits. On the contrary, a sacred thing can be "a rock, a tree, a spring, a pebble, a piece of wood, a house, in a word, anything can be sacred". Religious beliefs, myths, dogmas and legends are the representations that express the nature of these sacred things, and the virtues and powers which are attributed to them.
Echoes of James' and Durkheim's definitions are to be found in the writings of, for example, Frederick Ferré who defined religion as "one's way of valuing most comprehensively and intensively". Similarly, for the theologian Paul Tillich, faith is "the state of being ultimately concerned", which "is itself religion. Religion is the substance, the ground, and the depth of man's spiritual life."
When religion is seen in terms of sacred, divine, intensive valuing, or ultimate concern, then it is possible to understand why scientific findings and philosophical criticisms (e.g., those made by Richard Dawkins) do not necessarily disturb its adherents.
Traditionally, faith, in addition to reason, has been considered a source of religious beliefs. The interplay between faith and reason, and their use as perceived support for religious beliefs, have been a subject of interest to philosophers and theologians. The origin of religious belief as such is an open question, with possible explanations including awareness of individual death, a sense of community, and dreams.
The word "myth" has several meanings.
Ancient polytheistic religions, such as those of Greece, Rome, and Scandinavia, are usually categorized under the heading of mythology. Religions of pre-industrial peoples, or cultures in development, are similarly called myths in the anthropology of religion. The term myth can be used pejoratively by both religious and non-religious people. By defining another person's religious stories and beliefs as mythology, one implies that they are less real or true than one's own religious stories and beliefs. Joseph Campbell remarked, "Mythology is often thought of as 'other people's' religions, and religion can be defined as mis-interpreted mythology."
In sociology, however, the term myth has a non-pejorative meaning. There, myth is defined as a story that is important for the group whether or not it is objectively or provably true. An example is the resurrection of Jesus, which, to Christians, explains the means by which they are freed from sin, is symbolic of the power of life over death, and is also said to be a historical event. But from a mythological outlook, whether or not the event actually occurred is unimportant. Instead, the symbolism of the death of an old life and the start of a new life is what is most significant. Religious believers may or may not accept such symbolic interpretations.
The practices of a religion may include rituals, sermons, commemoration or veneration (of a deity, gods, or goddesses), sacrifices, festivals, feasts, trances, initiations, funerary services, matrimonial services, meditation, prayer, religious music, religious art, sacred dance, public service, or other aspects of human culture.
Religions have a societal basis, either as a living tradition which is carried by lay participants, or with an organized clergy, and a definition of what constitutes adherence or membership.
A number of disciplines study the phenomenon of religion: theology, comparative religion, history of religion, evolutionary origin of religions, anthropology of religion, psychology of religion (including neuroscience of religion and evolutionary psychology of religion), law and religion, and sociology of religion.
Daniel L. Pals mentions eight classical theories of religion, focusing on various aspects of religion: animism and magic, by E.B. Tylor and J.G. Frazer; the psycho-analytic approach of Sigmund Freud; and the theories of Émile Durkheim, Karl Marx, Max Weber, Mircea Eliade, E.E. Evans-Pritchard, and Clifford Geertz.
Michael Stausberg gives an overview of contemporary theories of religion, including cognitive and biological approaches.
Sociological and anthropological theories of religion generally attempt to explain the origin and function of religion. These theories define what they present as universal characteristics of religious belief and practice.
The origin of religion is uncertain. There are a number of theories regarding the subsequent origins of religious practices.
According to anthropologists John Monaghan and Peter Just, "Many of the great world religions appear to have begun as revitalization movements of some sort, as the vision of a charismatic prophet fires the imaginations of people seeking a more comprehensive answer to their problems than they feel is provided by everyday beliefs. Charismatic individuals have emerged at many times and places in the world. It seems that the key to long-term success—and many movements come and go with little long-term effect—has relatively little to do with the prophets, who appear with surprising regularity, but more to do with the development of a group of supporters who are able to institutionalize the movement."
The development of religion has taken different forms in different cultures. Some religions place an emphasis on belief, while others emphasize practice. Some religions focus on the subjective experience of the religious individual, while others consider the activities of the religious community to be most important. Some religions claim to be universal, believing their laws and cosmology to be binding for everyone, while others are intended to be practiced only by a closely defined or localized group. In many places, religion has been associated with public institutions such as education, hospitals, the family, government, and political hierarchies.
Anthropologists John Monaghan and Peter Just state that, "it seems apparent that one thing religion or belief helps us do is deal with problems of human life that are significant, persistent, and intolerable. One important way in which religious beliefs accomplish this is by providing a set of ideas about how and why the world is put together that allows people to accommodate anxieties and deal with misfortune."
While religion is difficult to define, one standard model of religion, used in religious studies courses, was proposed by Clifford Geertz, who simply called it a "cultural system". A critique of Geertz's model by Talal Asad categorized religion as "an anthropological category". Richard Niebuhr's (1894–1962) five-fold classification of the relationship between Christ and culture, however, indicates that religion and culture can be seen as two separate systems, though not without some interplay.
One modern academic theory of religion, social constructionism, says that religion is a modern concept that suggests all spiritual practice and worship follows a model similar to the Abrahamic religions as an orientation system that helps to interpret reality and define human beings. Among the main proponents of this theory of religion are Daniel Dubuisson, Timothy Fitzgerald, Talal Asad, and Jason Ānanda Josephson. The social constructionists argue that religion is a modern concept that developed from Christianity and was then applied inappropriately to non-Western cultures.
Cognitive science of religion is the study of religious thought and behavior from the perspective of the cognitive and evolutionary sciences. The field employs methods and theories from a very broad range of disciplines, including: cognitive psychology, evolutionary psychology, cognitive anthropology, artificial intelligence, cognitive neuroscience, neurobiology, zoology, and ethology. Scholars in this field seek to explain how human minds acquire, generate, and transmit religious thoughts, practices, and schemas by means of ordinary cognitive capacities.
Hallucinations and delusions with religious content occur in about 60% of people with schizophrenia. While this number varies across cultures, it has led to theories about a number of influential religious phenomena and their possible relation to psychotic disorders. A number of prophetic experiences are consistent with psychotic symptoms, although retrospective diagnoses are practically impossible. Schizophrenic episodes are also experienced by people who do not believe in gods.
Religious content is also common in temporal lobe epilepsy and obsessive-compulsive disorder. Atheistic content is likewise common in temporal lobe epilepsy.
Comparative religion is the branch of the study of religions concerned with the systematic comparison of the doctrines and practices of the world's religions. In general, the comparative study of religion yields a deeper understanding of the fundamental philosophical concerns of religion such as ethics, metaphysics, and the nature and form of salvation. Studying such material is meant to give one a richer and more sophisticated understanding of human beliefs and practices regarding the sacred, numinous, spiritual and divine.
In the field of comparative religion, a common geographical classification of the main world religions includes Middle Eastern religions (including Zoroastrianism and Iranian religions), Indian religions, East Asian religions, African religions, American religions, Oceanic religions, and classical Hellenistic religions.
In the 19th and 20th centuries, the academic practice of comparative religion divided religious belief into philosophically defined categories called world religions. Some academics studying the subject have divided religions into three broad categories:
Some recent scholarship has argued that not all types of religion are necessarily separated by mutually exclusive philosophies, and furthermore that the utility of ascribing a practice to a certain philosophy, or even calling a given practice religious, rather than cultural, political, or social in nature, is limited. The current state of psychological study about the nature of religiousness suggests that it is better to refer to religion as a largely invariant phenomenon that should be distinguished from cultural norms (i.e. religions).
Some scholars classify religions as either "universal religions" that seek worldwide acceptance and actively look for new converts, or "ethnic religions" that are identified with a particular ethnic group and do not seek converts. Others reject the distinction, pointing out that all religious practices, whatever their philosophical origin, are ethnic because they come from a particular culture. Christianity, Islam, Buddhism and Jainism are universal religions while Hinduism and Judaism are ethnic religions.
The five largest religious groups by world population, estimated to account for 5.8 billion people and 84% of the population, are Christianity, Islam, Buddhism, Hinduism (with the relative numbers for Buddhism and Hinduism dependent on the extent of syncretism) and traditional folk religion.
A global poll in 2012 surveyed 57 countries and reported that 59% of the world's population identified as religious, 23% as not religious, 13% as convinced atheists, and also a 9% decrease in identification as religious when compared to the 2005 average from 39 countries. A follow-up poll in 2015 found that 63% of the globe identified as religious, 22% as not religious, and 11% as convinced atheists. On average, women are more religious than men. Some people follow multiple religions or multiple religious principles at the same time, regardless of whether or not the religious principles they follow traditionally allow for syncretism.
Abrahamic religions are monotheistic religions which believe they descend from Abraham.
Judaism is the oldest Abrahamic religion, originating in the people of ancient Israel and Judea. The Torah is its foundational text, and is part of the larger text known as the Tanakh or Hebrew Bible. It is supplemented by oral tradition, set down in written form in later texts such as the Midrash and the Talmud. Judaism includes a wide corpus of texts, practices, theological positions, and forms of organization. Within Judaism there are a variety of movements, most of which emerged from Rabbinic Judaism, which holds that God revealed his laws and commandments to Moses on Mount Sinai in the form of both the Written and Oral Torah; historically, this assertion was challenged by various groups. The Jewish people were scattered after the destruction of the Temple in Jerusalem in 70 CE. Today there are about 13 million Jews, about 40 per cent living in Israel and 40 per cent in the United States. The largest Jewish religious movements are Orthodox Judaism (Haredi Judaism and Modern Orthodox Judaism), Conservative Judaism and Reform Judaism.
Christianity is based on the life and teachings of Jesus of Nazareth (1st century) as presented in the New Testament. The Christian faith is essentially faith in Jesus as the Christ, the Son of God, and as Savior and Lord. Almost all Christians believe in the Trinity, which teaches the unity of Father, Son (Jesus Christ), and Holy Spirit as three persons in one Godhead. Most Christians can describe their faith with the Nicene Creed. As the religion of the Byzantine Empire in the first millennium and of Western Europe during the time of colonization, Christianity has been propagated throughout the world. The main divisions of Christianity are, according to the number of adherents:
There are also smaller groups, including:
Islam is based on the Qur'an, the holy book considered by Muslims to be revealed by God, and on the teachings (hadith) of the Islamic prophet Muhammad, a major political and religious figure of the 7th century CE. Islam recognizes the earlier Abrahamic prophets of Judaism and Christianity before Muhammad as prophets of the same God. It is the most widely practiced religion of Southeast Asia, North Africa, Western Asia, and Central Asia, while Muslim-majority countries also exist in parts of South Asia, Sub-Saharan Africa, and Southeast Europe. There are also several Islamic republics, including Iran, Pakistan, Mauritania, and Afghanistan.
Other denominations of Islam include Nation of Islam, Ibadi, Sufism, Quranism, Mahdavia, and non-denominational Muslims. Wahhabism is the dominant Muslim school of thought in the Kingdom of Saudi Arabia.
Whilst Judaism, Christianity and Islam are commonly seen as the three Abrahamic faiths, there are smaller and newer traditions which lay claim to the designation as well.
For example, the Bahá'í Faith is a new religious movement that has links to the major Abrahamic religions as well as other religions (e.g. of Eastern philosophy). Founded in 19th-century Iran, it teaches the unity of all religious philosophies and accepts all of the prophets of Judaism, Christianity, and Islam as well as additional prophets (Buddha, Mahavira) and its own founder, Bahá'u'lláh. It is an offshoot of Bábism. One of its divisions is the Orthodox Bahá'í Faith.
Even smaller regional Abrahamic groups also exist, including Samaritanism (primarily in Israel and the West Bank), the Rastafari movement (primarily in Jamaica), and Druze (primarily in Syria and Lebanon).
East Asian religions (also known as Far Eastern religions or Taoic religions) consist of several religions of East Asia which make use of the concept of Tao (in Chinese) or Dō (in Japanese or Korean). They include:
Indian religions are practiced or were founded in the Indian subcontinent. They are sometimes classified as the "dharmic religions", as they all feature dharma, the specific law of reality and duties expected according to the religion.
Indigenous religions or folk religions refers to a broad category of traditional religions that can be characterised by shamanism, animism and ancestor worship, where traditional means "indigenous, that which is aboriginal or foundational, handed down from generation to generation…". These are religions that are closely associated with a particular group of people, ethnicity or tribe; they often have no formal creeds or sacred texts. Some faiths are syncretic, fusing diverse religious beliefs and practices.
Folk religions are often omitted as a category in surveys even in countries where they are widely practiced, e.g. in China.
African traditional religion encompasses the traditional religious beliefs of people in Africa. In West Africa, these religions include the Akan religion, Dahomey (Fon) mythology, Efik mythology, Odinani, Serer religion (A ƭat Roog), and Yoruba religion, while Bushongo mythology, Mbuti (Pygmy) mythology, Lugbara mythology, Dinka religion, and Lotuko mythology come from central Africa. Southern African traditions include Akamba mythology, Masai mythology, Malagasy mythology, San religion, Lozi mythology, Tumbuka mythology, and Zulu mythology. Bantu mythology is found throughout central, southeast, and southern Africa. In north Africa, these traditions include Berber and ancient Egyptian.
There are also notable African diasporic religions practiced in the Americas, such as Santeria, Candomble, Vodun, Lucumi, Umbanda, and Macumba.
Iranian religions are ancient religions whose roots predate the Islamization of Greater Iran. Nowadays these religions are practiced only by minorities.
Zoroastrianism is based on the teachings of prophet Zoroaster in the 6th century BCE. Zoroastrians worship the creator Ahura Mazda. In Zoroastrianism, good and evil have distinct sources, with evil trying to destroy the creation of Mazda, and good trying to sustain it.
Mandaeism is a monotheistic religion with a strongly dualistic worldview. Mandaeans are sometimes labeled as the last Gnostics.
Kurdish religions include the traditional beliefs of the Yazidi, Alevi, and Ahl-e Haqq. Sometimes these are labeled Yazdânism.
The study of law and religion is a relatively new field, with several thousand scholars involved in law schools, and academic departments including political science, religion, and history since 1980. Scholars in the field are not only focused on strictly legal issues about religious freedom or non-establishment, but also study religions as they are qualified through judicial discourses or legal understanding of religious phenomena. Exponents look at canon law, natural law, and state law, often in a comparative perspective. Specialists have explored themes in Western history regarding Christianity and justice and mercy, rule and equity, and discipline and love. Common topics of interest include marriage and the family and human rights. Outside of Christianity, scholars have looked at law and religion links in the Muslim Middle East and pagan Rome.
Studies have focused on secularization. In particular, the issue of wearing religious symbols in public, such as headscarves, which are banned in French schools, has received scholarly attention in the context of human rights and feminism.
Science acknowledges reason, empiricism, and evidence, while religions include revelation, faith, and sacredness, whilst also acknowledging philosophical and metaphysical explanations with regard to the study of the universe. Neither science nor religion is monolithic, timeless, or static; both are complex social and cultural endeavors that have changed through time across languages and cultures.
The concepts of science and religion are a recent invention: the term religion emerged in the 17th century in the midst of colonization and globalization and the Protestant Reformation. The term science emerged in the 19th century out of natural philosophy in the midst of attempts to narrowly define those who studied nature (natural science), and the phrase religion and science emerged in the 19th century due to the reification of both concepts. It was in the 19th century that the terms Buddhism, Hinduism, Taoism, and Confucianism first emerged. In the ancient and medieval world, the etymological Latin roots of both science ("scientia") and religion ("religio") were understood as inner qualities of the individual or virtues, never as doctrines, practices, or actual sources of knowledge.
In general, the scientific method gains knowledge by testing hypotheses to develop theories through elucidation of facts or evaluation by experiments, and thus only answers cosmological questions about the universe that can be observed and measured. It develops theories of the world which best fit physically observed evidence. All scientific knowledge is subject to later refinement, or even rejection, in the face of additional evidence. Scientific theories that have an overwhelming preponderance of favorable evidence are often treated as "de facto" verities in general parlance, such as the theories of general relativity and natural selection, which explain respectively the mechanisms of gravity and evolution.
Religion does not have a method per se, partly because religions emerge through time from diverse cultures; each is an attempt to find meaning in the world, and to explain humanity's place in it, its relationship to it, and to any posited entities. In terms of Christian theology and ultimate truths, people rely on reason, experience, scripture, and tradition to test and gauge what they experience and what they should believe. Furthermore, religious models, understanding, and metaphors are also revisable, as are scientific models.
Regarding religion and science, Albert Einstein stated (1940): "For science can only ascertain what is, but not what should be, and outside of its domain value judgments of all kinds remain necessary. Religion, on the other hand, deals only with evaluations of human thought and action; it cannot justifiably speak of facts and relationships between facts…Now, even though the realms of religion and science in themselves are clearly marked off from each other, nevertheless there exist between the two strong reciprocal relationships and dependencies. Though religion may be that which determines the goals, it has, nevertheless, learned from science, in the broadest sense, what means will contribute to the attainment of the goals it has set up."
Many religions have value frameworks regarding personal behavior meant to guide adherents in determining between right and wrong. These include the Triple Gems of Jainism, Judaism's Halacha, Islam's Sharia, Catholicism's Canon Law, Buddhism's Eightfold Path, and Zoroastrianism's good thoughts, good words, and good deeds concept, among others.
Religion and morality are not synonymous. While the connection is "an almost automatic assumption" in Christianity, morality can have a secular basis.
The study of religion and morality can be contentious due to ethnocentric views on morality, failure to distinguish between in group and out group altruism, and inconsistent definitions of religiosity.
Religion has had a significant impact on the political system in many countries. Notably, most Muslim-majority countries adopt various aspects of sharia, the Islamic law. Some countries even define themselves in religious terms, such as The Islamic Republic of Iran. The sharia thus affects up to 23% of the global population, or 1.57 billion people who are Muslims. However, religion also affects political decisions in many western countries. For instance, in the United States, 51% of voters would be less likely to vote for a presidential candidate who did not believe in God, and only 6% more likely. Christians make up 92% of members of the US Congress, compared with 71% of the general public (as of 2014). At the same time, while 23% of U.S. adults are religiously unaffiliated, only one member of Congress (Kyrsten Sinema, D-Arizona), or 0.2% of that body, claims no religious affiliation. In most European countries, however, religion has a much smaller influence on politics although it used to be much more important. For instance, same-sex marriage and abortion were illegal in many European countries until recently, following Christian (usually Catholic) doctrine. Several European leaders are atheists (e.g. France's former president Francois Hollande or Greece's prime minister Alexis Tsipras). In Asia, the role of religion differs widely between countries. For instance, India is still one of the most religious countries and religion still has a strong impact on politics, given that Hindu nationalists have been targeting minorities like the Muslims and the Christians, who historically belonged to the lower castes. By contrast, countries such as China or Japan are largely secular and thus religion has a much smaller impact on politics.
Secularization is the transformation of the politics of a society from close identification with a particular religion's values and institutions toward nonreligious values and secular institutions. The purpose of this is frequently modernization or protection of the population's religious diversity.
One study has found a negative correlation between self-defined religiosity and the wealth of nations. In other words, the richer a nation is, the less likely its inhabitants are to call themselves religious, whatever this word means to them (many people identify with a religion, rather than irreligion, but do not self-identify as religious).
Sociologist and political economist Max Weber has argued that Protestant Christian countries are wealthier because of their Protestant work ethic.
According to a study from 2015, Christians hold the largest amount of wealth (55% of the total world wealth), followed by Muslims (5.8%), Hindus (3.3%) and Jews (1.1%). The same study found that adherents classified under irreligion or other religions hold about 34.8% of the total global wealth.
Mayo Clinic researchers examined the association between religious involvement and spirituality, and physical health, mental health, health-related quality of life, and other health outcomes. The authors reported that: "Most studies have shown that religious involvement and spirituality are associated with better health outcomes, including greater longevity, coping skills, and health-related quality of life (even during terminal illness) and less anxiety, depression, and suicide."
The authors of a subsequent study concluded that the influence of religion on health is largely beneficial, based on a review of related literature. According to academic James W. Jones, several studies have discovered "positive correlations between religious belief and practice and mental and physical health and longevity."
An analysis of data from the 1998 US General Social Survey, whilst broadly confirming that religious activity was associated with better health and well-being, also suggested that the role of different dimensions of spirituality/religiosity in health is rather more complicated. The results suggested "that it may not be appropriate to generalize findings about the relationship between spirituality/religiosity and health from one form of spirituality/religiosity to another, across denominations, or to assume effects are uniform for men and women."
Critics like Hector Avalos, Regina Schwartz, Christopher Hitchens and Richard Dawkins have argued that religions are inherently violent and harmful to society by using violence to promote their goals, in ways that are endorsed and exploited by their leaders.
Anthropologist Jack David Eller asserts that religion is not inherently violent, arguing "religion and violence are clearly compatible, but they are not identical." He asserts that "violence is neither essential to nor exclusive to religion" and that "virtually every form of religious violence has its nonreligious corollary."
Practised by some (but not all) religions, animal sacrifice is the ritual killing and offering of an animal to appease or maintain favour with a deity. It has been banned in India.
Greek and Roman pagans, who saw their relations with the gods in political and social terms, scorned the man who constantly trembled with fear at the thought of the gods ("deisidaimonia"), as a slave might fear a cruel and capricious master. The Romans called such fear of the gods "superstitio". Ancient Greek historian Polybius described superstition in Ancient Rome as an "instrumentum regni", an instrument of maintaining the cohesion of the Empire.
Superstition has been described as the non-rational establishment of cause and effect. Religion is more complex and is often composed of social institutions and has a moral aspect. Some religions may include superstitions or make use of magical thinking. Adherents of one religion sometimes think of other religions as superstition. Some atheists, deists, and skeptics regard religious belief as superstition.
The Roman Catholic Church considers superstition to be sinful in the sense that it denotes a lack of trust in the divine providence of God and, as such, is a violation of the first of the Ten Commandments. The Catechism of the Catholic Church states that superstition "in some sense represents a perverse excess of religion" (para. #2110). "Superstition," it says, "is a deviation of religious feeling and of the practices this feeling imposes. It can even affect the worship we offer the true God, e.g., when one attributes an importance in some way magical to certain practices otherwise lawful or necessary. To attribute the efficacy of prayers or of sacramental signs to their mere external performance, apart from the interior dispositions that they demand is to fall into superstition. Cf. Matthew 23:16–22" (para. #2111)
The terms atheist (lack of belief in any gods) and agnostic (belief in the unknowability of the existence of gods), though specifically contrary to theistic (e.g. Christian, Jewish, and Muslim) religious teachings, do not by definition mean the opposite of religious. There are religions (including Buddhism, Taoism, and Hinduism), in fact, that classify some of their followers as agnostic, atheistic, or nontheistic. The true opposite of religious is the word irreligious. Irreligion describes an absence of any religion; antireligion describes an active opposition or aversion toward religions in general.
Because religion continues to be recognized in Western thought as a universal impulse, many religious practitioners have aimed to band together in interfaith dialogue, cooperation, and religious peacebuilding. The first major dialogue was the Parliament of the World's Religions at the 1893 Chicago World's Fair, which affirmed universal values and recognition of the diversity of practices among different cultures. The 20th century has been especially fruitful in use of interfaith dialogue as a means of solving ethnic, political, or even religious conflict, with Christian–Jewish reconciliation representing a complete reverse in the attitudes of many Christian communities towards Jews.
Recent interfaith initiatives include A Common Word, launched in 2007 and focused on bringing Muslim and Christian leaders together, the "C1 World Dialogue", the Common Ground initiative between Islam and Buddhism, and a United Nations sponsored "World Interfaith Harmony Week".
Culture and religion have usually been seen as closely related. Paul Tillich looked at religion as the soul of culture and culture as the form or framework of religion. In his own words:
Religion as ultimate concern is the meaning-giving substance of culture, and culture is the totality of forms in which the basic concern of religion expresses itself. In abbreviation: religion is the substance of culture, culture is the form of religion. Such a consideration definitely prevents the establishment of a dualism of religion and culture. Every religious act, not only in organized religion, but also in the most intimate movement of the soul, is culturally formed.
Ernst Troeltsch, similarly, looked at culture as the soil of religion and thought that, therefore, transplanting a religion from its original culture to a foreign culture would actually kill it in the same manner that transplanting a plant from its natural soil to an alien soil would kill it. However, there have been many attempts in the modern pluralistic situation to distinguish culture from religion. Domenic Marbaniang has argued that elements grounded on beliefs of a metaphysical nature (religious) are distinct from elements grounded on nature and the natural (cultural). For instance, language (with its grammar) is a cultural element while sacralization of language in which a particular religious scripture is written is more often a religious practice. The same applies to music and the arts.
Criticism of religion is criticism of the ideas, the truth, or the practice of religion, including its political and social implications.
Reed College
Reed College is a private liberal arts college in Portland, Oregon. Founded in 1908, Reed is a residential college with a campus in the Eastmoreland neighborhood, with Tudor-Gothic style architecture, and a forested canyon nature preserve at its center.
Reed is known for its academic rigor, mandatory freshman humanities program, senior thesis, and unusually high proportion of graduates who go on to earn doctorates and other postgraduate degrees. The college has many prominent alumni, including over a hundred Fulbright Scholars, 67 Watson Fellows, and three Winston Churchill Scholars; its 32 Rhodes Scholars are the second-highest count for a liberal arts college. Reed is ranked fourth among all U.S. colleges for the percentage of its graduates who go on to earn a PhD.
The Reed Institute (the legal name of the college) was founded in 1908, and held its first classes in 1911. Reed is named for Oregon pioneers Simeon Gannett Reed (1830–1895) and Amanda Reed (died 1904). Simeon was an entrepreneur involved in several enterprises, including trade on the Willamette and Columbia Rivers with his close friend and associate, former Portland Mayor William S. Ladd (for whom Ladd's Addition is named). Unitarian minister Thomas Lamb Eliot, who knew the Reeds from the church choir, is credited with convincing Reed of the need for "a lasting legacy, a 'Reed Institute of Lectures,' and joked it would 'need a mine to run it.'" Reed's will suggested his wife could "devote some portion of my estate to benevolent objects, or to the cultivation, illustration, or development of the fine arts in the city of Portland, or to some other suitable purpose, which shall be of permanent value and contribute to the beauty of the city and to the intelligence, prosperity, and happiness of the inhabitants". Ladd's son, William Mead Ladd, donated 40 acres from the Ladd Estate Company to build the new college. Reed's first president (1910–1919) was William Trufant Foster, a former professor at Bates College and Bowdoin College in Maine.
Contrary to popular belief, the college did not grow out of student revolts and experimentation, but out of a desire to provide a "more flexible, individualized approach to a rigorous liberal arts education". Founded explicitly in reaction to the "prevailing model of East Coast, Ivy League education", the college's lack of varsity athletics, fraternities, and exclusive social clubs – as well as its coeducational, nonsectarian, and egalitarian status – gave rise to an intensely academic and intellectual college whose purpose was to devote itself to "the life of the mind", that life being understood primarily as the academic life.
During the 1930s, President Dexter Keezer became very concerned about what he considered to be dishonorable behavior at Reed. Foremost among these behaviors was fraternization between male and female students, but the consumption of alcohol was also an issue. A large portion of the Student Council even took the position that Oregon's liquor laws did not apply to Reed's campus. Policies restricting students from visiting the dormitories of the opposite sex were fiercely resisted.
The college has a reputation for political liberalism.
According to sociologist Burton Clark, Reed is one of the most unusual institutions of higher learning in the United States, featuring a traditional liberal arts and natural sciences curriculum. It requires freshmen to take Humanities 110, an intensive introduction to multidisciplinary inquiry, covering ancient Greece and Rome, the Hebrew Bible and ancient Jewish history, and, as of 2019, Tenochtitlan/Mexico City and the Harlem Renaissance. Its program in the sciences is likewise unusual: its TRIGA research reactor makes it the only school in the United States to have a nuclear reactor operated primarily by undergraduates. Reed also requires all students to complete a thesis (a two-semester research project conducted under the guidance of professors) during the senior year as a prerequisite of graduation; successful completion of a junior qualifying exam at the end of the junior year is a prerequisite to beginning the thesis. Upon completion of the senior thesis, students must also pass an oral exam that may encompass questions not only about the thesis but also about any course previously taken.
Reed maintains a 9:1 student-to-faculty ratio, and its small classes emphasize a "conference" style where the teacher often acts as a mediator for discussion rather than a lecturer. While large lecture-style classes exist, Reed emphasizes its smaller lab and conference sections.
Although letter grades are given to students, grades are de-emphasized at Reed and focus is placed on a narrative evaluation. According to the school, "A conventional letter grade for each course is recorded for every student, but the registrar's office does not distribute grades to students, provided that work continues at satisfactory (C or higher) levels. Unsatisfactory grades are reported directly to the student and the student's adviser. Papers and exams are generally returned to students with lengthy comments but without grades affixed." There is no dean's list or honor roll "per se", but students who maintain a GPA of 3.5 or above for an academic year receive academic commendations at the end of the spring semester which are noted on their transcripts. Many Reed students graduate without knowing their cumulative GPA or their grades in individual classes. Reed also claims to have experienced very little grade inflation over the years, noting, for example, that only ten students graduated with a perfect 4.0 GPA in the period from 1983 to 2012. (Transcripts are accompanied by a card explaining Reed's relatively tough grading system so as not to penalize students applying to graduate schools.) Although Reed does not award Latin honors to graduates, it confers several awards for academic achievement at commencement, including naming students to Phi Beta Kappa.
Reed has no fraternities or sororities and few NCAA sports teams, although physical education classes (which range from kayaking to juggling to capoeira) are required for graduation. Reed also has several intercollegiate athletic clubs, most notably the rugby, Ultimate Frisbee, and soccer teams.
Reed's ethical code is known as "The Honor Principle". First introduced as an agreement to promote ethical academic behavior with the explicit end of relieving the faculty of policing student behavior, the Honor Principle was extended to cover all aspects of student life. While inspired by traditional honor systems, Reed's Honor Principle differs from these in that it is a guide for ethical standards themselves and not just their enforcement. Under the Honor Principle, there are no codified rules governing behavior. Rather, the onus is on students individually and as a community to define which behaviors are acceptable and which are not.
Discrete cases of grievance, known as "Honor Cases," are adjudicated by a Judicial Board of twelve full-time students. There is also an "Honor Council" of students, faculty, and staff who educate the community on the Honor Principle and mediate conflict between individuals.
Reed categorizes its academic program into five Divisions and the Humanities program. Overall, Reed offers five Humanities courses, twenty-six department majors, twelve interdisciplinary majors, six dual-degree programs with other colleges and universities, and programs for pre-medical and pre-veterinary students.
Reed President Richard Scholz in 1922 called the educational program as a whole "an honest effort to disregard old historic rivalries and hostilities between the sciences and the arts, between professional and cultural subjects, and, ... the formal chronological cleavage between the graduate and the undergraduate attitude of mind". The Humanities program, which came into being in 1943 (as the union of two year-long courses, one in "world" literature, the other in "world" history) is one manifestation of this effort. One change to the program was the addition of a course in Chinese Civilization in 1995. The faculty has also recently approved several significant changes to the introductory syllabus. These changes include expanding the parameters of the course to include more material regarding urban and cultural environments.
Reed's Humanities program includes the mandatory freshman course "Introduction to Western Humanities" covering ancient Greek and Roman literature, history, art, religion, and philosophy. Sophomores, juniors, and seniors may take "Early Modern Europe" covering Renaissance thought and literature; "Modern Humanities" covering the Enlightenment, the French Revolution, the Industrial Revolution, and Modernism, and/or "Foundations of Chinese Civilization". There is also a Humanities Senior Symposium.
Reed also offers interdisciplinary programs in American studies, Environmental Studies, Biochemistry and Molecular Biology, Chemistry-Physics, Classics-Religion, Dance/Theatre, History-Literature, International and Comparative Policy Studies (ICPS), Literature-Theatre, Mathematics-Economics, and Mathematics-Physics.
Reed offers dual-degree programs in Computer Science (with University of Washington), Engineering (with Caltech, Columbia University, and Rensselaer Polytechnic Institute), Forestry or Environmental Management (with Duke University), and Fine Art (with the Pacific Northwest College of Art).
For Fall 2016, the freshman class had 357 students. 10% were valedictorians of their high school classes and another 2% were salutatorians. 32% ranked in the top 5% of their class. The median scores on their SAT tests were 680 math, 710 verbal, and 680 writing, which put them at the 96th percentile. The class was drawn from the largest pool ever—5,705 applicants—and was the most selective in Reed's history, with an admittance rate of 31%. To increase student enrollment from historically underrepresented minorities, Reed offers an all-expenses-paid "Discover Reed Fly-In Program" to non-white US citizens and permanent residents.
The total direct cost for the 2018–19 academic year, including tuition, fees and room-and-board, is $70,550. Indirect costs (books, supplies, transportation, personal expenses) can add another $3,950. For the 2017–18 academic year, the average financial aid package – including grants, loans, and work opportunities – was approximately $45,325. In 2017–18 about half of students received financial aid from the college. In 2004 (the most recent data available), 1.4% of Reed graduates defaulted on their student loans – below the national Cohort Default Rate average of 5.1%.
Reed's endowment as of June 30, 2014, was $543 million. In the economic downturn that began in late 2007, Reed's total endowment had declined from $455 million in June 2007 to $311 million in June 2009. By the end of 2013, however, the endowment surpassed the $500 million mark.
In 1995, Reed College refused to participate in the "U.S. News & World Report" "best colleges" rankings, making it the first educational institution in the United States to refuse to participate in college rankings. According to Reed's Office of Admissions, the school's refusal to participate is based on 1994 disclosures by the "Wall Street Journal" about institutions flagrantly manipulating data in order to move up in the rankings in "U.S. News" and other popular college guides. "U.S. News" maintains that their rankings are "a very legitimate tool for getting at a certain level of knowledge about colleges." In 2019, a team of statistics students recreated the formula used by "U.S. News", identified and quantified the penalty imposed on Reed, and found the college to be ranked an estimated 52 places below an unbiased application of the U.S. News scoring rubric.
In 2015, "Money" magazine ranked Reed College 196th among U.S. colleges with an overall score of C+ based on its aggregate score on measures of educational quality, tuition costs, and post-graduation alumni earnings.
Reed is ranked as tied for the 93rd best liberal arts college by "U.S. News & World Report" in its 2016 rankings, and tied for 18th in its high school counselor rankings, although the former has been harshly criticized by the college.
In 2006, "Newsweek" magazine named Reed as one of twenty-five "New Ivies", listing it among "the nation's elite colleges." In 2012, "Newsweek" ranked Reed the 15th "most rigorous" college in the nation.
Reed College ranked in the bottom 6% of four-year colleges nationwide in the Brookings Institution's rating of U.S. colleges by incremental impact on alumni earnings 10 years post-enrollment.
Reed has produced the second-highest number of Rhodes scholars for any liberal arts college—32—as well as over fifty Fulbright Scholars, over sixty Watson Fellows, and two MacArthur ("Genius") Award winners. A very high proportion of Reed graduates go on to earn PhDs, particularly in the sciences, history, political science, and philosophy. Reed is third in percentage of its graduates who go on to earn PhDs in all disciplines, after only Caltech and Harvey Mudd. In 1961, "Scientific American" declared that second only to Caltech, "This small college in Oregon has been far and away more productive of future scientists than any other institution in the U.S." Reed is first in this percentage in biology, second in chemistry and humanities, third in history, foreign languages, and political science, fourth in science and mathematics, fifth in physics and social sciences, sixth in anthropology, seventh in area and ethnic studies and linguistics, and eighth in English literature and medicine.
Reed's debating team, which had existed for only two years at the time, was awarded the first place sweepstakes trophy for Division II schools at the final tournament of the Northwest Forensics Conference in February 2004.
Loren Pope, former education editor for "The New York Times," writes about Reed in "Colleges That Change Lives," saying, "If you're a genuine intellectual, live the life of the mind, and want to learn for the sake of learning, the place most likely to empower you is not Harvard, Yale, Princeton, Chicago, or Stanford. It is the most intellectual college in the country—Reed in Portland, Oregon."
Since the 1960s, Reed has had a reputation for tolerating open drug use among its students. "The Insider's Guide to the Colleges", written by the staff of "Yale Daily News", notes an impression among students of institutional permissiveness: "According to students, the school does not bust students for drug or alcohol use unless they cause harm or embarrassment to another student."
In April 2008, student Alex Lluch died of a heroin overdose in his on-campus dorm room. His death prompted revelations of several previous incidents, including the near-death heroin overdose of another student only months earlier. College President Colin Diver said "I don't honestly know" whether the drug death was an isolated incident or part of a larger problem. "When you say Reed," Diver said, "two words often come to mind. One is brains. One is drugs." Local reporter James Pitkin of the newspaper "Willamette Week" editorialized that "Reed College, a private school with one of the most prestigious academic programs in the U.S., is one of the last schools in the country where students enjoy almost unlimited freedom to experiment openly with drugs, with little or no hassles from authorities," though "Willamette Week" stated the following week concerning Pitkin's editorial: "As of press time, almost 500 responses, many expressing harsh criticism of "Willamette Week", had been posted on our website."
In March 2010, another student died of drug-related causes in his off-campus residence. This led "The New York Times" to conclude that "Reed…has long been known almost as much for its unusually permissive atmosphere as for its impressively rigorous academics." Law enforcement authorities promised to take action, including sending undercover agents to Reed's annual Renn Fayre celebration.
In February 2012, the Reed administration chose to call the police following the discovery of "two to three pounds of marijuana and a small amount of ecstasy and LSD in the on-campus apartment of two juniors." Following campus debate, Reed's president at the time, Colin Diver, issued a letter to students and staff, saying the college would not tolerate illegal drug use on campus: "Such behavior endangers the health and welfare of the entire community, attracts potentially dangerous criminal activity on campus, undermines the academic mission of the college, and violates the college's obligations under state and federal law."
Reed has a reputation for being politically liberal.
During the McCarthy era of the 1950s, then-President Duncan Ballantine fired Marxist philosopher Stanley Moore, a tenured professor, for his failure to cooperate with the House Un-American Activities Committee (HUAC) investigation. According to an article in the college's alumni magazine, "because of the decisive support expressed by Reed's faculty, students, and alumni for the three besieged teachers and for the principle of academic freedom, Reed College's experience with McCarthyism stands apart from that of most other American colleges and universities. Elsewhere in the academic world both tenured and nontenured professors with alleged or admitted communist party ties were fired with relatively little fuss or protest. At Reed, however, opposition to the political interrogations of the teachers was so strong that some believed the campus was in danger of closure." A statement of "regret" by the Reed administration and Board of Trustees was published in 1981, formally revising the judgment of the 1954 trustees. In 1993, then-President Steve Koblik invited Moore to visit the College, and in 1995 the last surviving member of the Board that fired Moore expressed his regret and apologized to him.
On September 26, 2016, students organized a boycott of all college operations in participation with the National Day of Boycott, a national day of protest which was proposed by actor Isaiah Washington on Twitter in response to the issue of police brutality against African-Americans. Following the boycott, students created an activist group called Reedies Against Racism (RAR) and presented a list of demands for the college purportedly on behalf of students from marginalized backgrounds. The primary demand concerned Reed's mandatory freshman Humanities course, proposing that the course either be changed to be more inclusive of world literature and classics or to be made not mandatory. One element of the class deemed racist by the protestors was the use of the 1978 Steve Martin song "King Tut" in a discussion about cultural appropriation. Students began a protest campaign against the curriculum by sitting in during lectures with signs with quotations from various African-American and non-white academics. Other protests separate from the Humanities course also included efforts to shout down speakers, including Kimberly Peirce after she was accused of profiting from transphobia while making the film "Boys Don't Cry". The group eventually focused on Reed's banking relationship with Wells Fargo, based on allegations that the bank had invested in the Dakota Access Pipeline project and the private prison industry, and staged an occupation of Reed's Eliot Hall.
There was some opposition to the lecture protests, notably by Reed professor of English Lucía Martínez Valdivia, who stated that a protest during her lecture on Sappho would aggravate her pre-existing case of PTSD. In November 2017, Chris Bodenner of "The Atlantic" wrote about growing student resentment toward the tactics of RAR. In response to the protests, the faculty decided to undergo the decennial review process a year early, and to complete the process in three months instead of the usual year. In January 2018, Humanities 110 chair professor Libby Drumm announced in a campus-wide email that the course curriculum would be restructured after years of faculty discussion, in response to student feedback as well as input from an external review committee composed of humanities faculty from other institutions. The new "four-module structure" would include texts from the Americas and allow greater flexibility in the curriculum, to be integrated beginning fall 2018. The external review had not in fact been completed or reviewed at the time of the announcement.
Following "a contentious year of protests, including an anti-racism sit-in in Kroger’s office," college president John Kroger resigned, effective June 2018.
The Reed College campus was established on a tract of land in southeast Portland known in 1910 as Crystal Springs Farm, a part of the Ladd Estate, formed in the 1870s from original land claims. The college's grounds comprise a contiguous tract of land, including a wooded wetland known as Reed Canyon.
Portland architect A. E. Doyle developed a plan, never implemented in full, modeled on the University of Oxford's St. John's College. The original campus buildings (including the Library, the Old Dorm Block, and what is now the primary administration building, Eliot Hall) are brick Tudor Gothic buildings in a style similar to Ivy League campuses. In contrast, the science section of campus, including the physics, biology, and psychology (originally chemistry) buildings, were designed in the Modernist style. The Psychology Building, completed in 1949, was designed by Modernist architect Pietro Belluschi at the same time as his celebrated Equitable Building in downtown Portland.
The campus and buildings have undergone several phases of growth, and there are now 21 academic and administrative buildings and 18 residence halls. Since 2004, Reed's campus has expanded to include adjacent properties beyond its historic boundaries, such as the Birchwood Apartments complex and former medical administrative offices on either side of SE 28th Avenue, and the Parker House, across SE Woodstock from Prexy. At the same time the Willard House (donated to Reed in 1964), across from the college's main entrance at SE Woodstock and SE Reed College Place, was converted from faculty housing to administrative use. Reed announced on July 13, 2007, that it had purchased the Rivelli farm, a tract of land south of the Garden House and west of Botsford Drive. Reed's "immediate plans for the acquired property include housing a small number of students in the former Rivelli home during the 2007–08 academic year. Longer term, the college anticipates that it may seek to develop the northern portion of the property for additional student housing".
Reed houses 946 students in 18 residence halls on campus and several college-owned houses and apartment buildings on or adjacent to campus. Residence halls on campus range from the traditional (i.e., Gothic Old Dorm Block, referred to as "ODB") to the eclectic (e.g., Anna Mann, a Tudor-style cottage built in the 1920s by Reed's founding architect A. E. Doyle, originally used as a women's hall), language houses (Spanish, Russian, French, German, and Chinese), "temporary" housing, built in the 1960s (Cross Canyon – Chittick, Woodbridge, McKinley, Griffin), to more recently built dorms (Bragdon, Naito, Sullivan). There are also theme residence halls including everything from substance-free living to Japanese culture to music to a dorm for students interested in outdoors activities (hiking, climbing, bicycling, kayaking, skiing, etc.). The college's least-loved complex (as measured by applications to the College's housing lottery), MacNaughton and Foster-Scholz, is known on campus as "Asylum Block" because of its post-World War II modernist architecture and interior spaces dominated by long, straight corridors lined with identical doors, said by students to resemble that of an insane asylum. Until 2006, it was thought that these residence halls had been designed by architect Pietro Belluschi.
Under the 10-year Campus Master Plan adopted in 2006, Foster-Scholz is scheduled to be demolished and replaced, and MacNaughton to be remodeled. According to the master plan, "The College's goal is to provide housing on or adjacent to the campus that accommodates 75% of the [full-time] student population. At present, the College provides on-campus housing for 838 students".
In Spring 2007, the College broke ground on the construction of a new quadrangle called the Grove, with four new LEED-certified residence halls (Aspen, Sequoia, Sitka, Bidwell) on the northwest side of the campus, which opened in Fall 2008. A new Spanish House residence was also completed. Together, the five new residences added 142 new beds.
Reed also has off-campus housing. Many houses in the Woodstock and Eastmoreland Portland neighborhoods are traditionally rented to Reed students.
On February 21, 2018, Reed announced the construction of the "largest residence hall in its history." Set to be complete by Fall 2019, it will house an additional 180 students, boosting Reed's housing capacity to nearly 80% of the student body, up from 68%. This will guarantee housing for both freshman and sophomores, as students were formerly subjected to a housing lottery after freshman year. The new building is also designed to meet "LEED Platinum standards," and Reed is currently evaluating proposals to put solar panels on the roof.
The Reed College Canyon, a natural area and national wildlife preserve, bisects the campus, separating the academic buildings from many of the residence halls (the so-called "cross-canyon halls"). The canyon is filled by Crystal Creek Springs, a natural spring that drains into Johnson Creek.
Canyon Day, a tradition dating back to 1915, is held twice a year. On Canyon Day students and Reed neighbors join canyon crew workers to spend a day helping with restoration efforts.
A landmark of the campus, the Blue Bridge, spans the canyon. This bridge replaced the unique cantilevered bridge that served in that spot between 1959 and 1991, which "featured stressed plywood girders – the first time this construction had been used on a span of this size. It attracted great architectural interest during its lifetime".
A new pedestrian and bicycle bridge spanning the canyon was opened in Fall 2008. This bridge, dubbed the "Bouncy Bridge" or "Amber Bridge" by students, is long, about a third longer than the Blue Bridge, and "connect[s] the new north campus quad to Gray Campus Center, the student union, the library, and academic buildings on the south side of campus".
Reed's Cooley Gallery is an internationally recognized contemporary art space located at the entrance to the Eric V. Hauser Memorial Library. It was established in 1988 as the result of a gift from Susan and Edward Cooley in honor of their late son. The Cooley Gallery has exhibited international artists such as Mona Hatoum, Al Held, David Reed and Gregory Crewdson as well as the contemporary art collection of Michael Ovitz. In pursuit of its mission to support the curriculum of the art, art history, and humanities programs at Reed, the gallery produces three or four exhibitions each year, along with lectures, colloquia, and artist visits. The gallery is currently under the directorship of Stephanie Snyder, who succeeded founding director Susan Fillin-Yeh in 2004.
The cafeteria, known simply as "Commons," has a reputation for ecologically sustainable food services. The Commons dining hall is operated by Bon Appétit, and food is purchased on an item-by-item basis. Suiting the student body, vegan and vegetarian dishes feature heavily on the menu. It is currently the only cafeteria on the small campus, with the exception of Caffe Circo (formerly Caffe Paradiso), a small cafe on the other side of campus which also operates on board points. Scrounging, a long tradition at Reed College, allows students without board points to take unfinished Commons food, offered by other students, from trays as they are returned to be washed.
The Reed College Co-ops are a theme community that reside in the Farm and Garden Houses, after many years on the first floor of MacNaughton Hall. These are the only campus dorms that are independent of the school's board plan. They traditionally throw an alternative "Thanksgiving" celebration that has sometimes included a square-dance. The Co-ops house students who purchase and prepare food together, sharing chores and conducting weekly, consensus-based meetings. It is a close community valuing sustainability, organic food, consensus-based decisions, self-government, music, and plants.
The Paradox ("Est. in the 80s") is a student-run coffee shop located on campus. In 2003 the Paradox opened a second coffee shop, dubbing it the "Paradox Lost" (an allusion to John Milton's "Paradise Lost") at the southern end of the biology building, in the space commonly called the "Bio Fishbowl." The new north-campus dorms, which opened in Fall 2008, feature yet another small cafe, originally dubbed "Cafe Paradiso," thereby providing three coffee shops on campus. The recent addition of a circus-themed mural to the cafe prompted a name change, and it now operates as Caffe Circo. This third shop is not student-run, but is operated by Bon Appétit. Bon Appétit has a monopoly on the food services at Reed as they are the only ones who accept board points; written into their contract is the prohibition of food carts on campus.
The official mascot of Reed is the griffin. In mythology, the griffin often pulled the chariot of the sun; in canto 32 of Dante's "Commedia" the griffin is associated with the Tree of Knowledge. The griffin was featured on the coat-of-arms of founder Simeon Reed and is now on the official seal of Reed College.
The official school color of Reed is Richmond Rose. Over the years, institutional memory of this fact has faded and the color appearing on the school's publications and merchandise has darkened to a shade of maroon. The most common examples of "Richmond Rose" are the satin tapes securing the degree certificate inside a Reed College diploma.
The school song, "Fair Reed," is sung to the tune of the 1912 popular song "Believe Me, if All Those Endearing Young Charms." It may be imitative of the Harvard anthem "Fair Harvard," which is also sung to the tune of "Believe Me, if All Those Endearing Young Charms." It was composed by former president William Trufant Foster shortly after Reed's founding, and is rarely heard today.
An unofficial Reed Alma Mater, "Epistemology Forever," sung to the tune of "The Battle Hymn of the Republic," has been sung by Reed students since the 1950s.
Reed students and alumni referred to themselves as "Reedites" in the early years of the college. This term faded out in favor of the now ubiquitous "Reedie" after World War II. Around campus, prospective students are called "prospies."
An unofficial motto of Reed is "Communism, Atheism, Free Love," and can be found in the Reed College Bookstore on sweaters, T-shirts, etc. It was a label that the Reed community claimed from critics during the 1920s as a "tongue-in-cheek slogan" in reference to Reed's nonconformism. Reed's founding president William T. Foster's outspoken opposition against the entrance of the United States into World War I, as well as the college's support for feminism, its adherence to academic freedom (i.e., inviting a leader of the Socialist Party of America to speak on campus about the Russian Revolution’s potential effect on militarism, emancipation of women, and ending the persecution of Jews), and its nonsectarian status made the college a natural target for what was originally meant to be a pejorative slur.
The faux Reed Seal has changed over the years. In its original form the griffin was holding a hammer and sickle in its paws. Later versions had the griffin wearing boxing gloves.
One of the unofficial symbols of Reed is the Doyle Owl, a concrete statue that has been continuously stolen and re-stolen since about 1919. The original Doyle Owl (originally "House F Owl" after the dormitory named House F that later became Doyle dormitory) was a garden sculpture from the neighborhood stolen by House F residents as a prank (there is a photo of House F residents around the original owl that has been made into a T-shirt). The on-campus folklore of events surrounding the Doyle Owl is sufficiently large that, in 1983, a senior thesis was written on the topic of the Owl's oral history. The original Doyle Owl was destroyed many years ago; the current avatar is Doyle Owl number 13, plus or minus 11. At the present time only one Owl is being shown.
Each January, before the beginning of second-semester classes, the campus holds an interim period called Paideia (drawn from the Greek, meaning 'education'). Originally conceived and approved by the faculty in 1968 for unstructured independent study or "UIS," Paideia ran for the full month of January from 1969–1981, supervised by a committee of faculty, staff and students. This festival of learning takes the form of classes and seminars put on by anyone who wishes to teach, including students, professors, staff members, and outside educators invited on-campus by members of the Reed Community. The classes are intended to be informal, yet intellectual activities free of the usual academic pressure endemic to Reed. Many such classes are explicitly trivial (one long-running tradition is to hold an underwater basket weaving class), while others are trivially academic (such as "Giant Concrete Gnome Construction," a class that, incidental to building monolithic gnomes, includes some content relating to the construction of pre-Christian monoliths). More structured classes (such as martial arts seminars and mini-classes on obscure academic topics), tournaments, and film festivals round out the schedule, which is different every year. The objective of Paideia is not only to learn new (possibly non-useful) things, but to turn the tables on students and encourage them to teach.
In his 2005 Stanford commencement lecture, Apple Inc. founder and Reed dropout Steve Jobs credited a Reed calligraphy class taught by Robert Palladino for his focus on choosing quality typefaces for the Macintosh. While the full calligraphy course is no longer taught at Reed, Paideia usually features a short course on the subject in addition to the informal, weekly gatherings (currently held every Thursday night) of aspiring calligraphy enthusiasts.
Renn Fayre is an annual three-day celebration with a different theme each year. Born in the 1960s as an actual renaissance fair, it has long since lost all connection to anachronism and the Renaissance, although its name has persisted. The event is initiated by a procession of seniors throwing their thesis notes in a large bonfire after the completed theses are submitted.
Reed Arts Week is a week-long celebration of the arts at Reed. It features music, dance, film, creative writing, and the visual arts.
According to Reed's website, each semester, a $130 student body fee "is collected from each full-time student by the business office, acting as agent for the student senate. The fee underwrites publication of the student newspaper and extracurricular activities, and partially supports the student union and ski cabin."
Student body funds (totaling roughly $370,000 annually) are distributed each semester to groups that place among the top 40 organizations in the semester's funding poll. The funding poll uses a voting system in which each organization provides a description that is ranked by each member of the student body with one of 'top six,' 'approve,' 'no opinion,' or 'disapprove.' A former 'deep six' option was eliminated from the system in 2019. These ranks are then tabulated by assigning numbers to each rank and summing across all voters. Afterwards, the top forty organizations present their budgets to the student body senate during Funding Circus. The following day the senate makes decisions about each budget in a process called Funding Hell.
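The rank-and-sum tabulation described above can be sketched in a few lines of Python. This is an illustrative sketch only: the point values assigned to each rank and the sample organization names are hypothetical, not the senate's actual weights.

```python
# A minimal sketch of the funding-poll tabulation. The point values and
# the sample ballots below are hypothetical illustrations.
RANK_POINTS = {"top six": 2, "approve": 1, "no opinion": 0, "disapprove": -1}

def tabulate(ballots):
    """Sum each organization's score across all voters, highest first."""
    totals = {}
    for ballot in ballots:  # each ballot maps organization -> chosen rank
        for org, rank in ballot.items():
            totals[org] = totals.get(org, 0) + RANK_POINTS[rank]
    # The top 40 organizations by total score advance to Funding Circus.
    return sorted(totals, key=totals.get, reverse=True)[:40]

ballots = [
    {"Quest": "top six", "Beer Nation": "approve", "MLLL": "disapprove"},
    {"Quest": "approve", "Beer Nation": "no opinion", "MLLL": "top six"},
]
print(tabulate(ballots))
```

Any choice of numeric weights that preserves the ordering of the four ranks would implement the same scheme; only the final top-40 cutoff is specified by the text.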
The school's student-run newspaper, "The Reed College Quest "or simply the "Quest," has been published since 1913, and its radio station KRRC had been broadcasting, with a few interruptions, from 1955 until its 2012 sale to a Portland community group; the station is now KXRY.
Most student organizations are highly informal, although some that partner with outside groups such as Oxfam or Planned Parenthood are more structured. There is no formal process for forming a student organization at Reed; a group of students (or a single student) announcing themselves as, or just considering themselves, a student organization is enough. Groups that want funding from the school's Student Activities office or Student Body Fees, however, must register with Student Activities or through the Student Senate. The Reed archive of comic books and graphic novels, the MLLL (Comic Book Reading Room), is well into its fourth decade, and Beer Nation, the student group that organizes and manages various beer gardens throughout the year and during Renn Fayre, has existed for many years. Some organizations, such as the Motorized Couch Collective – dedicated to installing motors and wheels into furniture – have become more Reed myth than reality in recent years.
Reed has ample recreational facilities on campus, a ski cabin on Mount Hood, recreational clubs such as the Reed Outing Club (ROC), and Club Sports (with college-paid coaches), including ultimate frisbee, co-ed soccer, rugby, basketball, and squash.
According to a "Washington Post" analysis of federal campus safety data from 2014, Reed College had 12.9 reports of rape per 1,000 students, the "highest total of reports of rape" per 1,000 students of any college in the nation on its main campus.
In 2012, Reed College had the third highest reported sexual assault rate among U.S. colleges and universities. It is unclear whether this high reporting rate arises from an environment that is more supportive of reporting by crime victims or from a higher underlying rate of sexual assault. In 2013 there were 19 reported forcible sexual offenses among the approximately 1,400 students at the college. In 2011 a student member of Reed's Judicial Board resigned over the college's handling of sexual assault cases. An investigation by the Center for Public Integrity found that those found responsible in cases of sexual assault frequently faced few consequences, while the lives of the victims were left in turmoil.
Notable Reed alumni include Tektronix co-founder Howard Vollum (1936), businessman John Sperling (1948), Pulitzer Prize-winning poet Gary Snyder (1951), fantasy author David Eddings (1954), distance learning pioneer John Bear (1959), socialist and feminist activist and author Barbara Ehrenreich (1963), radio personality Dr. Demento (1963), programmer, software publisher, author, and philanthropist Peter Norton (1965), former U.S. Secretary of the Navy Richard Danzig (1965), alpinist and biophysical chemist Arlene Blum (1966), chemist Mary Jo Ondrechen (1974), computer engineer Daniel Kottke (1976), and Wikipedia co-founder Larry Sanger (1991).
Among those who attended but did not graduate from Reed are Academy Award-nominated actress Hope Lange, chef James Beard, and Apple co-founder and CEO Steve Jobs.
Notable Reed faculty of the past and present include former U.S. Senator from Illinois Paul Douglas, and physicists Richard Crandall and David Griffiths. | https://en.wikipedia.org/wiki?curid=25417 |
Proof by contradiction
In logic and mathematics, proof by contradiction is a form of proof that establishes the truth or the validity of a proposition, by showing that assuming the proposition to be false leads to a contradiction. Proof by contradiction is also known as indirect proof, proof by assuming the opposite, and reductio ad impossibile.
Proof by contradiction is based on the law of noncontradiction as first formalized as a metaphysical principle by Aristotle. Noncontradiction is also a theorem in propositional logic. This states that an assertion or mathematical statement cannot be both true and false. That is, a proposition "Q" and its negation ¬"Q" ("not-"Q"") cannot both be true. In a proof by contradiction, it is shown that the denial of the statement being proved results in such a contradiction. It has the form of a "reductio ad absurdum" argument, and usually proceeds as follows:
1. The proposition to be proved, "P", is assumed to be false; that is, ¬"P" is assumed to be true.
2. It is then shown that ¬"P" implies a falsehood (a contradiction).
3. Since assuming "P" to be false leads to a contradiction, it is concluded that "P" is in fact true.
The third step is based on the possible truth-value cases of a valid implication "p" → "q": the implication fails only when "p" is true and "q" is false. Hence, if a false statement is reached via valid logic from an assumed statement, the assumed statement must itself be false. This fact is what proof by contradiction uses.
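The truth-table fact just described can be checked mechanically by enumerating all truth assignments; a minimal Python sketch:

```python
# Check by truth table: whenever an implication p -> q holds and q is
# false, p must be false as well -- the fact underlying step 3 above.
def implies(p, q):
    """Material implication: p -> q is false only when p and not q."""
    return (not p) or q

# Collect every assignment where the implication holds and q is false.
consistent = [(p, q) for p in (True, False) for q in (True, False)
              if implies(p, q) and not q]
print(consistent)  # only p = False survives
```

The enumeration leaves a single case, with p false, which is exactly the inference proof by contradiction relies on.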
Proof by contradiction is formulated as (¬"P" → ⊥) → "P", where ⊥ is a logical contradiction or a "false" statement (a statement whose truth value is "false"). If ⊥ is reached from ¬"P" via valid logic, then ¬"P" → ⊥ is proved true, and so "P" is proved true.
An alternate form of proof by contradiction derives a contradiction with the statement to be proved by showing that ¬"P" implies "P". This is a contradiction, so the assumption ¬"P" must be false, and equivalently "P" must be true. This is formulated as (¬"P" → "P") → "P".
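Both formulations of proof by contradiction, (¬P → ⊥) → P and (¬P → P) → P, are tautologies of classical propositional logic, which a short brute-force check over truth values confirms:

```python
# Verify that both formulations of proof by contradiction are classical
# tautologies by checking every truth assignment of P.
def implies(p, q):
    return (not p) or q

FALSUM = False  # the contradiction symbol: a statement that is always false

for P in (True, False):
    assert implies(implies(not P, FALSUM), P)  # (not-P -> falsum) -> P
    assert implies(implies(not P, P), P)       # (not-P -> P) -> P
print("both formulations hold for every truth value of P")
```

Intuitionistic logic, mentioned below, rejects precisely this step: without the law of the excluded middle, (¬P → ⊥) → P is not derivable in general.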
An existence proof by contradiction assumes that some object doesn't exist, and then proves that this would lead to a contradiction; thus, such an object must exist. Although it is quite freely used in mathematical proofs, not every school of mathematical thought accepts this kind of nonconstructive proof as universally valid.
Proof by contradiction also depends on the law of the excluded middle, also first formulated by Aristotle. This states that either an assertion or its negation must be true.
That is, there is no other truth value besides "true" and "false" that a proposition can take. Combined with the principle of noncontradiction, this means that exactly one of "P" and ¬"P" is true. In proof by contradiction, this permits the conclusion that since the possibility of ¬"P" has been excluded, "P" must be true.
The law of the excluded middle is accepted in virtually all formal logics; however, some intuitionist mathematicians do not accept it, and thus reject proof by contradiction as a viable proof technique.
Proof by contradiction is closely related to proof by contrapositive, and the two are sometimes confused, though they are distinct methods. The main distinction is that a proof by contrapositive applies only to statements that can be written in the form "P" → "Q" (i.e., implications), whereas the technique of proof by contradiction applies to statements of any form: one assumes the negation of the statement, whatever its form, and derives a contradiction.
In the case where the statement to be proven "is" an implication "P" → "Q", the differences between direct proof, proof by contrapositive, and proof by contradiction can be outlined as follows: a direct proof assumes "P" and deduces "Q"; a proof by contrapositive assumes ¬"Q" and deduces ¬"P"; and a proof by contradiction assumes both "P" and ¬"Q" and deduces a contradiction.
A classic proof by contradiction from mathematics is the proof that the square root of 2 is irrational. If it were rational, it would be expressible as a fraction "a"/"b" in lowest terms, where "a" and "b" are integers, at least one of which is odd. But if "a"/"b" = √2, then "a"2 = 2"b"2. Therefore, "a"2 must be even, and because the square of an odd number is odd, that in turn implies that "a" is itself even, which means that "b" must be odd because "a"/"b" is in lowest terms.
On the other hand, if "a" is even, then "a"2 is a multiple of 4. If "a"2 is a multiple of 4 and "a"2 = 2"b"2, then 2"b"2 is a multiple of 4, and therefore "b"2 must be even, which means that so is "b" too.
So "b" is both odd and even, a contradiction. Therefore, the initial assumption, that √2 can be expressed as a fraction, must be false.
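As a sanity check on the argument above, one can confirm exhaustively for small denominators that the equation "a"2 = 2"b"2 has no positive integer solution, which is exactly what the irrationality of √2 asserts:

```python
from math import isqrt

# The proof shows a^2 = 2*b^2 has no solution in positive integers.
# Confirm this exhaustively for small b: the integer nearest b*sqrt(2)
# never squares to exactly 2*b^2.
for b in range(1, 200):
    a = isqrt(2 * b * b)       # floor of b * sqrt(2)
    assert a * a != 2 * b * b  # never an exact equality
print("no integer solution to a^2 = 2*b^2 for 1 <= b < 200")
```

The check is of course no substitute for the proof, which covers all denominators at once; it merely illustrates what the proof rules out.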
The method of proof by contradiction has also been used to show that for any non-degenerate right triangle, the length of the hypotenuse is less than the sum of the lengths of the two remaining sides. By letting "c" be the length of the hypotenuse and "a" and "b" be the lengths of the legs, one can also express the claim more succinctly as "a" + "b" > "c". In which case, a proof by contradiction can then be made by appealing to the Pythagorean theorem.
First, the claim is negated to assume that "a" + "b" ≤ "c". Squaring both sides then yields ("a" + "b")2 ≤ "c"2, or equivalently, "a"2 + 2"ab" + "b"2 ≤ "c"2. A triangle is non-degenerate if each of its edges has positive length, so it may be assumed that both "a" and "b" are greater than 0. Therefore, "a"2 + "b"2 < "a"2 + 2"ab" + "b"2 ≤ "c"2, and by transitivity, "a"2 + "b"2 < "c"2.
On the other hand, it is also known from the Pythagorean theorem that "a"2 + "b"2 = "c"2. This would result in a contradiction since strict inequality and equality are mutually exclusive. The contradiction means that it is impossible for both to be true and it is known that the Pythagorean theorem holds. It follows from there that the assumption "a" + "b" ≤ "c" must be false and hence "a" + "b" > "c", proving the claim.
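The proved inequality "a" + "b" > "c" can also be spot-checked numerically for right triangles, taking "c" = √("a"2 + "b"2) as the Pythagorean theorem dictates:

```python
from math import hypot

# Spot-check a + b > c for a few non-degenerate right triangles,
# where c = sqrt(a^2 + b^2) by the Pythagorean theorem.
for a, b in [(3, 4), (5, 12), (8, 15), (0.3, 0.4), (1, 1)]:
    c = hypot(a, b)  # length of the hypotenuse
    assert a + b > c
print("a + b > c holds in every sampled right triangle")
```

The sampled legs are arbitrary; the proof by contradiction above establishes the inequality for every non-degenerate right triangle.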
Consider the proposition, "P": "there is no smallest rational number greater than 0". In a proof by contradiction, we start by assuming the opposite, ¬"P": that there "is" a smallest rational number, say, "r".
Now, "r"/2 is a rational number greater than 0 and smaller than "r". But that contradicts the assumption that "r" was the "smallest" rational number (if "Q" is the statement ""r" is the smallest rational number greater than 0", then from ""r"/2 is a rational number greater than 0 and smaller than "r"" one can infer ¬"Q"). This contradiction shows that the original proposition, "P", must be true: "there is no smallest rational number greater than 0".
For other examples, see proof that the square root of 2 is not rational (where indirect proofs different from the one above can be found) and Cantor's diagonal argument.
Proofs by contradiction sometimes end with the word "Contradiction!". Isaac Barrow and Baermann used the notation Q.E.A., for ""quod est absurdum"" ("which is absurd"), along the lines of Q.E.D., but this notation is rarely used today. A graphical symbol sometimes used for contradictions is a downwards zigzag arrow "lightning" symbol (U+21AF: ↯), for example in Davey and Priestley. Others sometimes used include a pair of opposing arrows (such as →← or ⇒⇐), struck-out arrows (such as ↮), a stylized form of hash (such as U+2A33: ⨳), or the "reference mark" (U+203B: ※). The "up tack" symbol (U+22A5: ⊥) used by philosophers and logicians (see contradiction) also appears, but is often avoided due to its usage for orthogonality.
A curious logical consequence of the principle of non-contradiction is that a contradiction implies any statement; if a contradiction is accepted as true, any proposition (including its negation) can be proved from it. This is known as the principle of explosion ("ex falso quodlibet", "from a falsehood, anything [follows]", or "ex contradictione quodlibet", "from a contradiction, anything follows"), or the principle of pseudo-Scotus.
Thus a contradiction in a formal axiomatic system is disastrous; since any theorem can be proven true, it destroys the conventional meaning of truth and falsity.
The discovery of contradictions at the foundations of mathematics at the beginning of the 20th century, such as Russell's paradox, threatened the entire structure of mathematics due to the principle of explosion. This motivated a great deal of work during the 20th century to create consistent axiomatic systems to provide a logical underpinning for mathematics. It has also led a few philosophers such as Newton da Costa, Walter Carnielli and Graham Priest to reject the principle of non-contradiction, giving rise to theories such as paraconsistent logic and dialetheism, which accept that there exist statements that are both true and false.
G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game." | https://en.wikipedia.org/wiki?curid=25418 |
Rapping
Rapping (or rhyming, spitting, emceeing, MCing) is a musical form of vocal delivery that incorporates "rhyme, rhythmic speech, and street vernacular", which is performed or chanted in a variety of ways, usually over a backing beat or musical accompaniment. The components of rap include "content" (what is being said), "flow" (rhythm, rhyme), and "delivery" (cadence, tone). Rap differs from spoken-word poetry in that it is usually performed in time to musical accompaniment. Rap being a primary ingredient of hip hop music, it is commonly associated with that genre in particular; however, the origins of rap precede hip-hop culture. The earliest precursor to modern rap is the West African griot tradition, in which "oral historians", or "praise-singers", would disseminate oral traditions and genealogies, or use their rhetorical techniques for gossip or to "praise or critique individuals." Griot traditions connect to rap along a lineage of black verbal reverence, through James Brown interacting with the crowd and the band between songs, to Muhammad Ali's verbal taunts and the poems of The Last Poets. Therefore, rap lyrics and music are part of the "Black rhetorical continuum", and aim to reuse elements of past traditions while expanding upon them through "creative use of language and rhetorical styles and strategies". The person credited with originating the style of "delivering rhymes over extensive music", that would become known as rap, was Anthony "DJ Hollywood" Holloway from Harlem, New York.
Rap is usually delivered over a beat, typically provided by a DJ, turntablist, beatboxer, or performed a cappella without accompaniment. Stylistically, rap occupies a gray area between speech, prose, poetry, and singing. The word, which predates the musical form, originally meant "to lightly strike", and is now used to describe quick speech or repartee. The word had been used in British English since the 16th century. It was part of the African American dialect of English in the 1960s meaning "to converse", and very soon after that in its present usage as a term denoting the musical style. Today, the term rap is so closely associated with hip-hop music that many writers use the terms interchangeably.
The English verb "rap" has various meanings, these include "to strike, especially with a quick, smart, or light blow", as well "to utter sharply or vigorously: to rap out a command". The "Shorter Oxford English Dictionary" gives a date of 1541 for the first recorded use of the word with the meaning "to utter (esp. an oath) sharply, vigorously, or suddenly". Wentworth and Flexner's "Dictionary of American Slang" gives the meaning "to speak to, recognize, or acknowledge acquaintance with someone", dated 1932, and a later meaning of "to converse, esp. in an open and frank manner". It is these meanings from which the musical form of "rapping" derives, and this definition may be from a shortening of repartee. A "rapper" refers to a performer who "raps". By the late 1960s, when Hubert G. Brown changed his name to H. Rap Brown, "rap" was a slang term referring to an oration or speech, such as was common among the "hip" crowd in the protest movements, but it did not come to be associated with a musical style for another decade.
"Rap" was used to describe talking on records as early as 1971, on Isaac Hayes' album "Black Moses" with track names such as "Ike's Rap", "Ike's Rap II", "Ike's Rap III", and so on. Hayes' "husky-voiced sexy spoken 'raps' became key components in his signature sound".
Del the Funky Homosapien similarly states that "rap" was used to refer to talking in a stylistic manner in the early 1970s: "I was born in '72 ... back then what rapping meant, basically, was you trying to convey something—you're trying to convince somebody. That's what rapping is, it's in the way you talk."
Rapping can be traced back to its African roots. Centuries before hip-hop music existed, the griots of West Africa were delivering stories rhythmically, over drums and sparse instrumentation. Such connections have been acknowledged by many modern artists, modern day "griots", spoken word artists, mainstream news sources, and academics.
Blues music, rooted in the work songs and spirituals of slavery and influenced greatly by West African musical traditions, was first played by black Americans, and later by some white Americans, in the Mississippi Delta region of the United States around the time of the Emancipation Proclamation. Grammy-winning blues musician/historian Elijah Wald and others have argued that the blues were being rapped as early as the 1920s. Wald went so far as to call hip hop "the living blues." A notable recorded example of rapping in blues music was the 1950 song "Gotta Let You Go" by Joe Hill Louis.
Jazz, which developed from the blues and other African-American and European musical traditions and originated around the beginning of the 20th century, has also influenced hip hop and has been cited as a precursor of hip hop. Not just jazz music and lyrics but also jazz poetry. According to John Sobol, the jazz musician and poet who wrote "Digitopia Blues", rap "bears a striking resemblance to the evolution of jazz both stylistically and formally". Boxer Muhammad Ali anticipated elements of rap, often using rhyme schemes and spoken word poetry, both for when he was trash talking in boxing and as political poetry for his activism outside of boxing, paving the way for The Last Poets in 1968, Gil Scott-Heron in 1970, and the emergence of rap music in the 1970s.
Precursors also exist in non-African/African-American traditions, especially in vaudeville and musical theater. A tradition of competitive, improvised performance poetry was a popular pastime in many pastoral societies. One striking example comes from the Mediterranean island of Corsica, where the tradition of "chjamu e rispondi" (call and response) originated as a way of passing the time among young men engaged in shepherding, who developed a practice remarkably similar to the modern rap battle: long head-to-head verbal jousts formed of sixteen-syllable impromptu poems, frequently featuring complex wordplay, braggadocious boasts, elaborate insults, and old grudges and rivalries, and often involving interactive crowd support for the opposing performers.
Another more modern comparable tradition is the patter song exemplified by Gilbert and Sullivan, but which has origins in earlier Italian opera. "Rock Island" from Meredith Willson's "The Music Man" is wholly spoken by an ensemble of travelling salesmen, as are most of the numbers for British actor Rex Harrison in the 1964 Lerner and Loewe musical My Fair Lady. Glenn Miller's "The Lady's in Love with You" and "The Little Man Who Wasn't There" (both 1939) each contain distinctly rap-like sequences set to a driving beat, as does the 1937 song "Doin' the Jive". In musical theater, the term "vamp" is identical to its meaning in jazz, gospel, and funk, and it fulfills the same function. Semi-spoken music has long been especially popular in British entertainment, and such examples as David Croft's theme to the 1970s sitcom "Are You Being Served?" have elements indistinguishable from modern rap.
In classical music, semi-spoken delivery was stylized by composer Arnold Schoenberg as Sprechstimme, and famously used in Ernst Toch's 1924 "Geographical Fugue" for spoken chorus and the final scene in Darius Milhaud's 1915 ballet "Les Choéphores". In the French chanson field, irrigated by a strong poetry tradition, such singer-songwriters as Léo Ferré or Serge Gainsbourg made their own use of spoken word over rock or symphonic music from the very beginning of the 1970s. Although these probably did not have a direct influence on rap's development in the African-American cultural sphere, they paved the way for acceptance of spoken word music in the media market, as well as providing a broader backdrop, in a range of cultural contexts distinct from that of the African American experience, upon which rapping could later be grafted.
With the decline of disco in the early 1980s, rap became a new form of expression. Rap arose from musical experimentation with rhyming, rhythmic speech. Rap was a departure from disco. Sherley Anne Williams refers to the development of rap as "anti-Disco" in style and means of reproduction. The early producers of rap after disco sought a more simplified manner of producing the tracks they were to sing over. Williams explains how rap composers and DJs opposed the heavily orchestrated and ritzy multi-tracks of disco in favor of "break beats", which were created by compiling different records from numerous genres and did not require the equipment of professional recording studios. Because professional studios were not necessary, the production of rap was open to the youth who, as Williams explains, felt "locked out" by the capital needed to produce disco records.
More directly related to the African-American community were items like schoolyard chants and taunts, clapping games, jump-rope rhymes, some with unwritten folk histories going back hundreds of years across many nationalities. Sometimes these items contain racially offensive lyrics. A related area that is not strictly folklore is rhythmical cheering and cheerleading for military and sports.
In his narration between the tracks on George Russell's 1958 jazz album New York, N.Y., the singer Jon Hendricks recorded something close to modern rap, since it all rhymed and was delivered in a hip, rhythm-conscious manner. Art forms such as spoken word jazz poetry and comedy records had an influence on the first rappers. Coke La Rock, often credited as hip-hop's first MC, cites the Last Poets among his influences, as well as comedians such as Wild Man Steve and Richard Pryor. Comedian Rudy Ray Moore released under-the-counter albums in the 1960s and 1970s such as "This Pussy Belongs To Me" (1970), which contained "raunchy, sexually explicit rhymes that often had to do with pimps, prostitutes, players, and hustlers", and which later led to him being called "The Godfather of Rap".
Gil Scott-Heron, a jazz poet/musician, has been cited as an influence on rappers such as Chuck D and KRS-One. Scott-Heron himself was influenced by Melvin Van Peebles, whose first album was 1968's "Brer Soul". Van Peebles describes his vocal style as "the old Southern style", which was influenced by singers he had heard growing up in South Chicago. Van Peebles also said that he was influenced by older forms of African-American music: "... people like Blind Lemon Jefferson and the field hollers. I was also influenced by spoken word song styles from Germany that I encountered when I lived in France."
During the mid-20th century, the musical culture of the Caribbean was constantly influenced by the concurrent changes in American music. As early as 1956, deejays were toasting (an African tradition of "rapped out" tales of heroism) over dubbed Jamaican beats. It was called "rap", expanding the word's earlier meaning in the African-American community—"to discuss or debate informally."
The early rapping of hip-hop developed out of the announcements DJs and masters of ceremonies made over the microphone at parties, which later evolved into more complex raps. Grandmaster Caz states: "The microphone was just used for making announcements, like when the next party was gonna be, or people's moms would come to the party looking for them, and you have to announce it on the mic. Different DJs started embellishing what they were saying. I would make an announcement this way, and somebody would hear that and they add a little bit to it. I'd hear it again and take it a little step further 'til it turned from lines to sentences to paragraphs to verses to rhymes."
One of the first rappers at the beginning of the hip hop period, at the end of the 1970s, was also hip hop's first DJ, DJ Kool Herc. Herc, a Jamaican immigrant, started delivering simple raps at his parties, which some claim were inspired by the Jamaican tradition of toasting. However, Kool Herc himself denies this link (in the 1984 book "Hip Hop"), saying, "Jamaican toasting? Naw, naw. No connection there. I couldn't play reggae in the Bronx. People wouldn't accept it. The inspiration for rap is James Brown and the album "Hustler's Convention"". Herc also suggests he was too young while in Jamaica to get into sound system parties: "I couldn't get in. Couldn't get in. I was ten, eleven years old," and that while in Jamaica, he was listening to James Brown: "I was listening to American music in Jamaica and my favorite artist was James Brown. That's who inspired me. A lot of the records I played were by James Brown."
However, in terms of what we identify in the 2010s as "rap", the source came from Manhattan. Pete DJ Jones said the first person he heard rap was DJ Hollywood, a Harlem (not Bronx) native who was the house DJ at the Apollo Theater. Kurtis Blow also says the first person he heard rhyme was DJ Hollywood. In a 2014 interview, Hollywood said: "I used to like the way Frankie Crocker would ride a track, but he wasn't syncopated to the track though. I liked [WWRL DJ] Hank Spann too, but he wasn't on the one. Guys back then weren't concerned with being musical. I wanted to flow with the record". And in 1975, he ushered in what became known as the hip-hop style by rhyming syncopated to the beat of an existing record, uninterruptedly, for nearly a minute. He adapted the lyrics of Isaac Hayes' "Good Love 6-9969" and rhymed them over the breakdown part of "Love Is the Message". His partner Kevin Smith, better known as Lovebug Starski, took this new style and introduced it to the Bronx hip-hop set, which until then was composed of DJing and b-boying (or beatboxing), with traditional "shout out" style rapping.
The style that Hollywood created, and that his partner introduced to the hip-hop set, quickly became the standard. What did Hollywood actually do? He created "flow". Before then, MCs rhymed in the manner of radio DJs: short stretches of patter that were disconnected thematically, each separate unto itself. By using song lyrics, Hollywood gave his rhymes an inherent flow and theme. This was the game changer. By the end of the 1970s, artists such as Kurtis Blow and The Sugarhill Gang were just starting to receive radio airplay and make an impact far outside of New York City, on a national scale. Blondie's 1981 single "Rapture" was one of the first songs featuring rap to top the U.S. "Billboard" Hot 100 chart.
Old school rap (1979–84) was "easily identified by its relatively simple raps" according to AllMusic, "the emphasis was not on lyrical technique, but simply on good times", one notable exception being Melle Mel, who set the way for future rappers through his socio-political content and creative wordplay.
Golden age hip hop (the mid-1980s to early '90s) was the time period where hip-hop lyricism went through its most drastic transformation – writer William Jelani Cobb says "in these golden years, a critical mass of mic prodigies were literally creating themselves and their art form at the same time" and Allmusic writes, "rhymers like PE's Chuck D, Big Daddy Kane, KRS-One, and Rakim basically invented the complex wordplay and lyrical kung-fu of later hip-hop". The golden age is considered to have ended around 1993–94, marking the end of rap lyricism's most innovative period.
"Flow" is defined as "the rhythms and rhymes" of a hip-hop song's lyrics and how they interact – the book "How to Rap" breaks flow down into rhyme, rhyme schemes, and rhythm (also known as cadence). 'Flow' is also sometimes used to refer to elements of the delivery (pitch, timbre, volume) as well, though often a distinction is made between the flow and the delivery.
Staying on the beat is central to rap's flow – many MCs note the importance of staying on-beat in "How to Rap" including Sean Price, Mighty Casey, Zion I, Vinnie Paz, Fredro Starr, Del The Funky Homosapien, Tech N9ne, People Under The Stairs, Twista, B-Real, Mr Lif, 2Mex, and Cage.
MCs stay on beat by stressing syllables in time to the four beats of the musical backdrop. Poetry scholar Derek Attridge describes how this works in his book "Poetic Rhythm" – "rap lyrics are written to be performed to an accompaniment that emphasizes the metrical structure of the verse". He says rap lyrics are made up of, "lines with four stressed beats, separated by other syllables that may vary in number and may include other stressed syllables. The strong beat of the accompaniment coincides with the stressed beats of the verse, and the rapper organizes the rhythms of the intervening syllables to provide variety and surprise".
The same technique is also noted in the book "How to Rap", where diagrams are used to show how the lyrics line up with the beat – "stressing a syllable on each of the four beats gives the lyrics the same underlying rhythmic pulse as the music and keeps them in rhythm ... other syllables in the song may still be stressed, but the ones that fall in time with the four beats of a bar are the only ones that need to be emphasized in order to keep the lyrics in time with the music".
In rap terminology, 16-bars is the amount of time that rappers are generally given to perform a guest verse on another artist's song; one bar is typically equal to four beats of music.
Old school flows were relatively basic, using only a few syllables per bar, simple rhythmic patterns, and basic rhyming techniques and rhyme schemes.
Melle Mel is cited as an MC who epitomizes the old school flow – Kool Moe Dee says, "from 1970 to 1978 we rhymed one way [then] Melle Mel, in 1978, gave us the new cadence we would use from 1978 to 1986 ... He's the first emcee to explode in a new rhyme cadence, and change the way every emcee rhymed forever. Rakim, The Notorious B.I.G., and Eminem have flipped the flow, but Melle Mel's downbeat on the two, four, kick to snare cadence is still the rhyme foundation all emcees are building on".
Artists and critics often credit Rakim with creating the overall shift from the more simplistic old school flows to more complex flows near the beginning of hip hop's new school – Kool Moe Dee says, "any emcee that came after 1986 had to study Rakim just to know what to be able to do. Rakim, in 1986, gave us flow and that was the rhyme style from 1986 to 1994. From that point on, anybody emceeing was forced to focus on their flow". Kool Moe Dee explains that before Rakim, the term 'flow' wasn't widely used – "Rakim is basically the inventor of flow. We were not even using the word flow until Rakim came along. It was called rhyming, it was called cadence, but it wasn't called flow. Rakim created flow!" He adds that while Rakim upgraded and popularized the focus on flow, "he didn't invent the word".
Kool Moe Dee states that Biggie introduced a newer flow which "dominated from 1994 to 2002", and also says that Method Man was "one of the emcees from the early to mid-'90s that ushered in the era of flow ... Rakim invented it, Big Daddy Kane, KRS-One, and Kool G Rap expanded it, but Biggie and Method Man made flow the single most important aspect of an emcee's game". He also cites Craig Mack as an artist who contributed to developing flow in the '90s.
Music scholar Adam Krims says, "the flow of MCs is one of the profoundest changes that separates out new-sounding from older-sounding music ... it is widely recognized and remarked that rhythmic styles of many commercially successful MCs since roughly the beginning of the 1990s have progressively become faster and more 'complex'". He cites "members of the Wu-Tang Clan, Nas, AZ, Big Pun, and Ras Kass, just to name a few" as artists who exemplify this progression.
Kool Moe Dee adds, "in 2002 Eminem created the song that got the first Oscar in Hip-Hop history [Lose Yourself] ... and I would have to say that his flow is the most dominant right now (2003)".
There are many different styles of flow, and different people use different terminology for them – stic.man of Dead Prez and music scholar Adam Krims, for example, have each proposed their own sets of terms.
MCs use many different rhyming techniques, including complex rhyme schemes, as Adam Krims points out – "the complexity ... involves multiple rhymes in the same rhyme complex (i.e. section with consistently rhyming words), internal rhymes, [and] offbeat rhymes". There is also widespread use of multisyllabic rhymes, by artists such as Kool G Rap, Big Daddy Kane, Rakim, Big L, Nas and Eminem.
It has been noted that rap's use of rhyme is some of the most advanced in all forms of poetry – music scholar Adam Bradley notes, "rap rhymes so much and with such variety that it is now the largest and richest contemporary archive of rhymed words. It has done more than any other art form in recent history to expand rhyme's formal range and expressive possibilities".
In the book "How to Rap", Masta Ace explains how Rakim and Big Daddy Kane caused a shift in the way MCs rhymed: "Up until Rakim, everybody who you heard rhyme, the last word in the sentence was the rhyming [word], the connection word. Then Rakim showed us that you could put rhymes within a rhyme ... now here comes Big Daddy Kane — instead of going three words, he's going multiple". "How to Rap" explains that "rhyme is often thought to be the most important factor in rap writing ... rhyme is what gives rap lyrics their musicality".
Many of the rhythmic techniques used in rapping come from percussive techniques and many rappers compare themselves to percussionists. "How to Rap 2" identifies all the rhythmic techniques used in rapping such as triplets, flams, 16th notes, 32nd notes, syncopation, extensive use of rests, and rhythmic techniques unique to rapping such as West Coast "lazy tails", coined by Shock G. Rapping has also been done in various time signatures, such as 3/4 time.
Since the 2000s, rapping has evolved into a style of rap that spills over the boundaries of the beat, closely resembling spoken English. Rappers like MF Doom and Eminem have exhibited this style, and since then, rapping has been difficult to notate. The American hip-hop group Crime Mob exhibited a new rap flow in songs such as "Knuck If You Buck", heavily dependent on triplets. Rappers including Drake, Kanye West, Rick Ross, Young Jeezy and more have included this influence in their music. In 2014, an American hip-hop collective from Atlanta, Migos, popularized this flow, and is commonly referred to as the "Migos Flow" (a term that is contentious within the hip-hop community).
The standard form of rap notation is the flow diagram, where rappers line-up their lyrics underneath "beat numbers". Different rappers have slightly different forms of flow diagram that they use: Del the Funky Homosapien says, "I'm just writing out the rhythm of the flow, basically. Even if it's just slashes to represent the beats, that's enough to give me a visual path.", Vinnie Paz states, "I've created my own sort of writing technique, like little marks and asterisks to show like a pause or emphasis on words in certain places.", and Aesop Rock says, "I have a system of maybe 10 little symbols that I use on paper that tell me to do something when I'm recording."
Hip-hop scholars also make use of the same flow diagrams: the books "How to Rap" and "How to Rap 2" use the diagrams to explain rap's triplets, flams, rests, rhyme schemes, runs of rhyme, and breaking rhyme patterns, among other techniques. Similar systems are used by PhD musicologists Adam Krims in his book "Rap Music and the Poetics of Identity" and Kyle Adams in his academic work on flow.
Because rap revolves around a strong 4/4 beat, with certain syllables said in time to the beat, all the notational systems have a similar structure: they all have the same 4 beat numbers at the top of the diagram, so that syllables can be written in-line with the beat numbers. This allows devices such as rests, "lazy tails", flams, and other rhythmic techniques to be shown, as well as illustrating where different rhyming words fall in relation to the music.
To successfully deliver a rap, a rapper must also develop vocal presence, enunciation, and breath control. Vocal presence is the distinctiveness of a rapper's voice on record. Enunciation is essential to a flowing rap; some rappers choose also to exaggerate it for comic and artistic effect. Breath control, taking in air without interrupting one's delivery, is an important skill for a rapper to master, and a must for any MC. An MC with poor breath control cannot deliver difficult verses without making unintentional pauses.
Raps are sometimes delivered with melody. West Coast rapper Egyptian Lover was the first notable MC to deliver "sing-raps". Popular rappers such as 50 Cent and Ja Rule add a slight melody to their otherwise purely percussive raps whereas some rappers such as Cee-Lo Green are able to harmonize their raps with the beat. The Midwestern group Bone Thugs-n-Harmony was one of the first groups to achieve nationwide recognition for using the fast-paced, melodic and harmonic raps that are also practiced by Do or Die, another Midwestern group. Another rapper that harmonized his rhymes was Nate Dogg, a rapper part of the group 213. Rakim experimented not only with following the beat, but also with complementing the song's melody with his own voice, making his flow sound like that of an instrument (a saxophone in particular).
The ability to rap quickly and clearly is sometimes regarded as an important sign of skill. In certain hip-hop subgenres such as chopped and screwed, slow-paced rapping is often considered optimal. The current record for fastest rapper is held by Spanish rapper Domingo Edjang Moreno, known by his alias Chojin, who rapped 921 syllables in one minute on December 23, 2008.
In the late 1970s, the term Emcee, MC or M.C., derived from the term master of ceremonies, became an alternative title for a rapper, and for their role within hip-hop music and culture. An MC uses rhyming verses, pre-written or ad lib ('freestyled'), to introduce the DJ with whom they work, to keep the crowd entertained or to glorify themselves. As hip hop progressed, the title MC acquired backronyms such as 'mike chanter', 'microphone controller', 'microphone checker', 'music commentator', and one who 'moves the crowd'. Some use this word interchangeably with the term "rapper", while for others the term denotes a superior level of skill and connection to the wider culture.
MC can often be used as a term of distinction, referring to an artist with good performance skills. As Kool G Rap notes, "masters of ceremony, where the word 'M.C.' comes from, means just keeping the party alive" [sic]. Many people in hip-hop, including DJ Premier and KRS-One, feel that James Brown was the first MC. James Brown had the lyrics, moves, and soul that greatly influenced many rappers in hip-hop, and arguably even delivered the first MC rhyme.
For some rappers the term carried a distinction, as with MC Hammer, who acquired the nickname "MC" for being a "Master of Ceremonies", a title he used when he began performing at various clubs while on the road with the Oakland A's and, eventually, in the military (United States Navy). It was within the lyrics of a rap song called "This Wall" that Hammer first identified himself as M.C. Hammer, a name he later marketed on his debut album "Feel My Power".
Uncertainty over the acronym's expansion may be considered evidence for its ubiquity: the full term "Master of Ceremonies" is very rarely used in the hip-hop scene. This confusion prompted the hip-hop group A Tribe Called Quest to include a statement on the matter in the liner notes to their 1993 album "Midnight Marauders".
The use of the term MC when referring to a rhyming wordsmith originates from the dance halls of Jamaica. At each event, there would be a master of ceremonies who would introduce the different musical acts and would say a toast in style of a rhyme, directed at the audience and to the performers. He would also make announcements such as the schedule of other events or advertisements from local sponsors. The term MC continued to be used by the children of women who moved to New York City to work as maids in the 1970s. These MCs eventually created a new style of music called hip-hop based on the rhyming they used to do in Jamaica and the breakbeats used in records. MC has also recently been accepted to refer to all who engineer music.
"Party rhymes", meant to pump up the crowd at a party, were nearly the exclusive focus of old school hip hop, and they remain a staple of hip-hop music to this day. In addition to party raps, rappers also tend to make references to love and sex. Love raps were first popularized by Spoonie Gee of the Treacherous Three, and later, in the golden age of hip hop, Big Daddy Kane, Heavy D, and LL Cool J would continue this tradition.
Hip-hop artists such as KRS-One, Hopsin, Public Enemy, Lupe Fiasco, Mos Def, Talib Kweli, Jay-Z, Nas, The Notorious B.I.G. (Biggie), and dead prez are known for their sociopolitical subject matter. Their West Coast counterparts include Emcee Lynx, The Coup, Paris, and Michael Franti. Tupac Shakur was also known for rapping about social issues such as police brutality, teenage pregnancy, and racism.
Other rappers take a less critical approach to urbanity, sometimes even embracing such aspects as crime. Schoolly D was the first notable MC to rap about crime. Early on KRS-One was accused of celebrating crime and a hedonistic lifestyle, but after the death of his DJ, Scott La Rock, KRS-One went on to speak out against violence in hip hop and has spent the majority of his career condemning violence and writing on issues of race and class. Ice-T was one of the first rappers to call himself a "playa" and discuss guns on record, but his theme tune to the 1988 film "Colors" contained warnings against joining gangs. Gangsta rap, made popular largely because of N.W.A, brought rapping about crime and the gangster lifestyle into the musical mainstream.
Materialism has also been a popular topic in hip-hop since at least the early 1990s, with rappers boasting about their own wealth and possessions, and name-dropping specific brands: liquor brands Cristal and Rémy Martin, car manufacturers Bentley and Mercedes-Benz and clothing brands Gucci and Versace have all been popular subjects for rappers.
Various politicians, journalists, and religious leaders have accused rappers of fostering a culture of violence and hedonism among hip-hop listeners through their lyrics. However, there are also rappers whose messages may not be in conflict with these views, for example Christian hip hop. Others have praised the "political critique, innuendo and sarcasm" of hip-hop music.
In contrast to the more hedonistic approach of gangsta rappers, some rappers have a spiritual or religious focus. Christian rap is currently the most commercially successful form of religious rap. With Christian rappers like Lecrae, Thi'sl and Hostyle Gospel winning national awards and making regular appearances on television, Christian hip hop seems to have found its way into the hip-hop family. Aside from Christianity, the Five Percent Nation, an Islamic esotericist religious/spiritual group, has been represented more than any other religious group in popular hip hop. Artists such as Rakim, the members of the Wu-Tang Clan, Brand Nubian, X-Clan and Busta Rhymes have had success in spreading the theology of the Five Percenters.
Rappers use the literary techniques of double entendres, alliteration, and forms of wordplay that are found in classical poetry. Similes and metaphors are used extensively in rap lyrics; rappers such as Fabolous and Lloyd Banks have written entire songs in which every line contains similes, whereas MCs like Rakim, GZA, and Jay-Z are known for the metaphorical content of their raps. Rappers such as Lupe Fiasco are known for the complexity of their songs that contain metaphors within extended metaphors.
Many hip-hop listeners believe that a rapper's lyrics are enhanced by a complex vocabulary. Kool Moe Dee claims that he appealed to older audiences by using a complex vocabulary in his raps. Rap is famous, however, for having its own vocabulary—from international hip-hop slang to regional slang. Some artists, like the Wu-Tang Clan, develop an entire lexicon among their clique. African-American English has always had a significant effect on hip-hop slang and vice versa. Certain regions have introduced their unique regional slang to hip-hop culture, such as the Bay Area (Mac Dre, E-40), Houston (Chamillionaire, Paul Wall), Atlanta (Ludacris, Lil Jon, T.I.), and Kentucky (Nappy Roots). The Nation of Gods and Earths, aka The Five Percenters, has influenced mainstream hip-hop slang with the introduction of phrases such as "word is bond" that have since lost much of their original spiritual meaning. Preference toward one or the other has much to do with the individual; GZA, for example, prides himself on being very visual and metaphorical but also succinct, whereas underground rapper MF DOOM is known for heaping similes upon similes. In still another variation, 2Pac was known for saying exactly what he meant, literally and clearly.
Rap music's breakthrough into popular culture in the 1990s can be credited to the album "Niggaz4life" by Niggaz With Attitude, the first rap group ever to take the top spot of the "Billboard" Top 200 in the United States, in 1991. With this victory came the beginning of an era of popular culture guided by the musical influences of hip-hop and rap itself, moving away from the influences of rock music. As rap continued to develop and disseminate, it went on to influence clothing brands, movies, sports, and dancing through popular culture. As rap has become more of a presence in popular culture, it has focused on a particular demographic, adolescents and young adults. As such, it has had a significant impact on the modern vernacular of this portion of the population, which has diffused throughout society.
The effects of rap music on modern vernacular can be explored through the study of semiotics. Semiotics is the study of signs and symbols, or the study of language as a system. French literary theorist Roland Barthes furthers this study with his own theory of myth. He maintains that the first order of signification is language and that the second is "myth", arguing that a word has both its literal meaning and its mythical meaning, which is heavily dependent on socio-cultural context. To illustrate, Barthes uses the example of a rat: it has a literal meaning (a physical, objective description) and it has a greater socio-cultural understanding. This contextual meaning is subjective and is dynamic within society.
Through Barthes' semiotic theory of language and myth, it can be shown that rap music has culturally influenced the language of its listeners, as they influence the connotative message to words that already exist. As more people listen to rap, the words that are used in the lyrics become culturally bound to the song, and then are disseminated through the conversations that people have using these words.
Most often, the terms that rappers use are pre-established words that have been prescribed new meaning through their music, and that are eventually disseminated through social spheres. Such a newly contextualized word is called a neosemanticism. Neosemanticisms are forgotten words that are often brought forward from subcultures that attract the attention of members of the reigning culture of their time, and are then carried forward by influential voices in society – in this case, rappers. To illustrate, the acronym YOLO was popularized by the rapper, actor and R&B singer Drake in 2012 when he featured it in his own song, "The Motto". That year the term YOLO was so popular that it was printed on T-shirts, became a trending hashtag on Twitter, and was even considered as the inspiration for several tattoos. However, although the rapper may have popularized the acronym, the motto itself was in no way first established by Drake. Similar messages can be seen in many well-known sayings, and as early as 1896 in the English translation of Honoré de Balzac's "La Comédie humaine", where one of his free-spirited characters tells another, "You Only Live Once!" Another example of a neosemanticism is the word "broccoli": rapper E-40 used "broccoli" to refer to marijuana on his hit track "Broccoli" in 1993. In contemporary society, the artists D.R.A.M. and Lil Yachty are often credited for this slang because of their hit song, also titled "Broccoli".
With the rise in technology and mass media, the dissemination of subcultural terms has only become easier. Dick Hebdige, author of "Subculture: The Meaning of Style", argues that subcultures often use music to vocalize the struggles of their experiences. As rap is also the culmination of a prevalent subculture in African-American social spheres, rappers' own personal cultures are often disseminated through their lyrics.
It is here that lyrics can be categorized as either historically influenced or (more commonly) considered as slang. Vernon Andrews, the professor of the course "American Studies 111: Hip-Hop Culture", suggests that many words, such as "hood", "homie", and "dope", are historically influenced. Most importantly, this also brings forward the anarchistic culture of rap music. Common themes from rap are anti-establishment and instead, promote black excellence and diversity. It is here that rap can be seen to reclaim words, namely, "nigga", a historical term used to subjugate and oppress Black people in America. This word has been reclaimed by Black Americans and is heavily used in rap music. Niggaz With Attitude embodies this notion by using it as the first word of their influential rap group name.
There are two kinds of freestyle rap: one is scripted (recitation) but has no particular overriding subject matter; the second, typically referred to as "freestyling" or "spitting", is the improvisation of rapped lyrics. When freestyling, some rappers inadvertently reuse old lines, or even "cheat" by preparing segments or entire verses in advance. Therefore, freestyles with proven spontaneity are valued above generic, always-usable lines. Rappers will often reference places or objects in their immediate setting, or specific (usually demeaning) characteristics of opponents, to prove their authenticity and originality.
Battle rapping, which can be freestyled, is the competition between two or more rappers in front of an audience. The tradition of insulting one's friends or acquaintances in rhyme goes back to the dozens, and was portrayed famously by Muhammad Ali in his boxing matches. The winner of a battle is decided by the crowd and/or preselected judges. According to Kool Moe Dee, a successful battle rap focuses on an opponent's weaknesses, rather than one's own strengths. Television shows such as MTV's "DFX" and BET's "106 and Park" host weekly freestyle battles live on the air. Battle rapping gained widespread public recognition outside of the African-American community with rapper Eminem's movie "8 Mile".
The strongest battle rappers will generally perform their rap fully freestyled. This is the most effective form in a battle as the rapper can comment on the other person, whether it be what they look like, or how they talk, or what they wear. It also allows the rapper to reverse a line used to "diss" him or her if they are the second rapper to battle. This is known as a "flip". Jin The Emcee was considered "World Champion" battle rapper in the mid-2000s.
Throughout hip hop's history, new musical styles and genres have developed that contain rapping. Entire genres, such as rap rock and its derivatives rapcore and rap metal (rock/metal/punk with rapped vocals), or hip house, have resulted from the fusion of rap and other styles. Many popular music genres with a focus on percussion have contained rapping at some point; be it disco (DJ Hollywood), jazz (Gang Starr), new wave (Blondie), funk (Fatback Band), contemporary R&B (Mary J. Blige), reggaeton (Daddy Yankee), or even Japanese dance music (Soul'd Out). UK garage music has begun to focus increasingly on rappers in a subgenre called grime, which emerged in London in the early 2000s and was pioneered and popularized by the MC Dizzee Rascal. The music's increased popularity has seen more UK rappers travel to America and tour there, with Sway DaSafo possibly signing with Akon's label Konvict. Hyphy is the latest of these spin-offs. It is typified by slowed-down atonal vocals with instrumentals that borrow heavily from the hip-hop scene and lyrics centered on illegal street racing and car culture. Another Oakland, California group, Beltaine's Fire, has recently gained attention for their Celtic fusion sound, which blends hip-hop beats with Celtic melodies. Unlike the majority of hip-hop artists, all their music is performed live without samples, synths, or drum machines, drawing comparisons to The Roots and Rage Against the Machine.
Bhangra, a widely popular style of music from Punjab, India has been mixed numerous times with reggae and hip-hop music. The most popular song in this genre in the United States was "Mundian to Bach Ke" or "Beware the Boys" by Panjabi MC and Jay-Z. Although "Mundian To Bach Ke" had been released previously, the mixing with Jay-Z popularized the genre further.
Although the majority of rappers are male, there have been a number of female rap stars, including Lauryn Hill, MC Lyte, Lil' Kim, Missy Elliott, Queen Latifah, Da Brat, Eve, Trina, Nicki Minaj, Khia, M.I.A., CL from 2NE1, Foxy Brown, Iggy Azalea, and Lisa Lopes from TLC. There is also deaf rap artist Signmark.
Rock music
Rock music is a broad genre of popular music that originated as "rock and roll" in the United States in the late 1940s and early 1950s, and developed into a range of different styles in the mid-1960s and later, particularly in the United States and the United Kingdom. It has its roots in 1940s and 1950s rock and roll, a style which drew heavily from the genres of blues, rhythm and blues, and from country music. Rock music also drew strongly from a number of other genres such as electric blues and folk, and incorporated influences from jazz, classical and other musical styles. Musically, rock has centered on the electric guitar, usually as part of a rock group with electric bass, drums, and one or more singers. Usually, rock is song-based music with a 4/4 time signature using a verse–chorus form, but the genre has become extremely diverse. Like pop music, lyrics often stress romantic love but also address a wide variety of other themes that are frequently social or political.
By the late 1960s "classic rock" period, a number of distinct rock music subgenres had emerged, including hybrids like blues rock, folk rock, country rock, southern rock, raga rock, and jazz rock, many of which contributed to the development of psychedelic rock, which was influenced by the countercultural psychedelic and hippie scene. New genres that emerged included progressive rock, which extended the artistic elements, glam rock, which highlighted showmanship and visual style, and the diverse and enduring subgenre of heavy metal, which emphasized volume, power, and speed. In the second half of the 1970s, punk rock reacted by producing stripped-down, energetic social and political critiques. Punk was an influence in the 1980s on new wave, post-punk and eventually alternative rock. From the 1990s alternative rock began to dominate rock music and break into the mainstream in the form of grunge, Britpop, and indie rock. Further fusion subgenres have since emerged, including pop punk, electronic rock, rap rock, and rap metal, as well as conscious attempts to revisit rock's history, including the garage rock/post-punk and techno-pop revivals in the early 2000s. The late 2000s and 2010s saw a slow decline in the cultural relevancy of the genre, which was usurped by hip hop as the most popular genre in the United States in 2017.
Rock music has also embodied and served as the vehicle for cultural and social movements, leading to major subcultures including mods and rockers in the UK and the hippie counterculture that spread out from San Francisco in the US in the 1960s. Similarly, 1970s punk culture spawned the goth, punk, and emo subcultures. Inheriting the folk tradition of the protest song, rock music has been associated with political activism as well as changes in social attitudes to race, sex and drug use, and is often seen as an expression of youth revolt against adult consumerism and conformity.
The sound of rock is traditionally centered on the amplified electric guitar, which emerged in its modern form in the 1950s with the popularity of rock and roll, and was also influenced by the sounds of electric blues guitarists. The sound of an electric guitar in rock music is typically supported by an electric bass guitar, which was pioneered in jazz music in the same era, and percussion produced from a drum kit that combines drums and cymbals. This trio of instruments has often been complemented by the inclusion of other instruments, particularly keyboards such as the piano, the Hammond organ, and the synthesizer. The basic rock instrumentation was derived from the basic blues band instrumentation (prominent lead guitar, second chordal instrument, bass, and drums). A group of musicians performing rock music is termed a rock band or rock group, and typically consists of between three (the power trio) and five members. Classically, a rock band takes the form of a quartet whose members cover one or more roles, including vocalist, lead guitarist, rhythm guitarist, bass guitarist, drummer, and often keyboard player or other instrumentalist.
Rock music is traditionally built on a foundation of simple unsyncopated rhythms in a 4/4 meter, with a repetitive snare drum back beat on beats two and four. Melodies often originate from older musical modes such as the Dorian and Mixolydian, as well as major and minor modes. Harmonies range from the common triad to parallel perfect fourths and fifths and dissonant harmonic progressions. Since the late 1950s and particularly from the mid 1960s onwards, rock music often used the verse-chorus structure derived from blues and folk music, but there has been considerable variation from this model. Critics have stressed the eclecticism and stylistic diversity of rock. Because of its complex history and its tendency to borrow from other musical and cultural forms, it has been argued that "it is impossible to bind rock music to a rigidly delineated musical definition."
Unlike many earlier styles of popular music, rock lyrics have dealt with a wide range of themes, including romantic love, sex, rebellion against "The Establishment", social concerns, and lifestyles. These themes were inherited from a variety of sources such as the Tin Pan Alley pop tradition, folk music, and rhythm and blues. Music journalist Robert Christgau characterizes rock lyrics as a "cool medium" with simple diction and repeated refrains, and asserts that rock's primary "function" "pertains to music, or, more generally, noise." The predominance of white, male, and often middle class musicians in rock music has often been noted, and rock has been seen as an appropriation of black musical forms for a young, white and largely male audience. As a result, it has also been seen to articulate the concerns of this group in both style and lyrics. Christgau, writing in 1972, said in spite of some exceptions, "rock and roll usually implies an identification of male sexuality and aggression".
Since the term "rock" started being used in preference to "rock and roll" from the late-1960s, it has usually been contrasted with pop music, with which it has shared many characteristics, but from which it is often distanced by an emphasis on musicianship, live performance, and a focus on serious and progressive themes as part of an ideology of authenticity that is frequently combined with an awareness of the genre's history and development. According to Simon Frith, rock was "something more than pop, something more than rock and roll" and "[r]ock musicians combined an emphasis on skill and technique with the romantic concept of art as artistic expression, original and sincere".
In the new millennium, the term "rock" has occasionally been used as a blanket term including forms like pop music, reggae music, soul music, and even hip hop, styles which have influenced rock and with which it has often been contrasted through much of its history. Christgau has used the term broadly to refer to popular and semipopular music that caters to his sensibility as "a rock-and-roller", including a fondness for a good beat, a meaningful lyric with some wit, and the theme of youth, which holds an "eternal attraction" so objective "that all youth music partakes of sociology and the field report." Writing in "" (1990), he said this sensibility is evident in the music of folk singer-songwriter Michelle Shocked, rapper LL Cool J, and synth-pop duo Pet Shop Boys—"all kids working out their identities"—as much as it is in the music of Chuck Berry, the Ramones, and the Replacements.
The foundations of rock music are in rock and roll, which originated in the United States during the late 1940s and early 1950s, and quickly spread to much of the rest of the world. Its immediate origins lay in a melding of various black musical genres of the time, including rhythm and blues and gospel music, with country and western. In 1951, Cleveland, Ohio disc jockey Alan Freed began playing rhythm and blues music (then termed "race music") for a multi-racial audience, and is credited with first using the phrase "rock and roll" to describe the music.
Debate surrounds which record should be considered the first rock and roll record. Contenders include Goree Carter's "" (1949); Jimmy Preston's "Rock the Joint" (1949), which was later covered by Bill Haley & His Comets in 1952; and "Rocket 88" by Jackie Brenston and his Delta Cats (in fact, Ike Turner and his band the Kings of Rhythm), recorded by Sam Phillips for Sun Records in 1951. Four years later, Bill Haley's "Rock Around the Clock" (1955) became the first rock and roll song to top "Billboard" magazine's main sales and airplay charts, and opened the door worldwide for this new wave of popular culture.
It also has been argued that "That's All Right (Mama)" (1954), Elvis Presley's first single for Sun Records in Memphis, could be the first rock and roll record, but, at the same time, Big Joe Turner's "Shake, Rattle & Roll", later covered by Haley, was already at the top of the Billboard R&B charts. Other artists with early rock and roll hits included Chuck Berry, Bo Diddley, Fats Domino, Little Richard, Jerry Lee Lewis, and Gene Vincent. Soon rock and roll was the major force in American record sales and crooners, such as Eddie Fisher, Perry Como, and Patti Page, who had dominated the previous decade of popular music, found their access to the pop charts significantly curtailed.
Rock and roll has been seen as leading to a number of distinct subgenres, including rockabilly, combining rock and roll with "hillbilly" country music, which was usually played and recorded in the mid-1950s by white singers such as Carl Perkins, Jerry Lee Lewis, Buddy Holly and with the greatest commercial success, Elvis Presley. Hispanic and Latino American movements in rock and roll, which would eventually lead to the success of Latin rock and Chicano rock within the United States, began to rise in Southwestern United States; with rock and roll standard musician Ritchie Valens and even those within other heritage genres, such as Al Hurricane along with his brothers Tiny Morrie and Baby Gaby as they began combining rock and roll with country-western within traditional New Mexico music. Other styles like doo wop placed an emphasis on multi-part vocal harmonies and backing lyrics (from which the genre later gained its name), which were usually supported with light instrumentation and had its origins in 1930s and 1940s African American vocal groups. Acts like the Crows, the Penguins, the El Dorados and the Turbans all scored major hits, and groups like the Platters, with songs including "The Great Pretender" (1955), and the Coasters with humorous songs like "Yakety Yak" (1958), ranked among the most successful rock and roll acts of the period.
The era also saw the growth in popularity of the electric guitar, and the development of a specifically rock and roll style of playing through such exponents as Chuck Berry, Link Wray, and Scotty Moore. The use of distortion, pioneered by electric blues guitarists such as Guitar Slim, Willie Johnson and Pat Hare in the early 1950s, was popularized by Chuck Berry in the mid-1950s. The use of power chords, pioneered by Willie Johnson and Pat Hare in the early 1950s, was popularized by Link Wray in the late 1950s.
In the United Kingdom, the trad jazz and folk movements brought visiting blues music artists to Britain. Lonnie Donegan's 1955 hit "Rock Island Line" was a major influence and helped to develop the trend of skiffle music groups throughout the country, many of which, including John Lennon's Quarrymen, moved on to play rock and roll.
Commentators have traditionally perceived a decline of rock and roll in the late 1950s and early 1960s. By 1959, the death of Buddy Holly, The Big Bopper and Ritchie Valens in a plane crash, the departure of Elvis for the army, the retirement of Little Richard to become a preacher, prosecutions of Jerry Lee Lewis and Chuck Berry and the breaking of the payola scandal (which implicated major figures, including Alan Freed, in bribery and corruption in promoting individual acts or songs), gave a sense that the rock and roll era established at that point had come to an end.
The term "pop" has been used since the early 20th century to refer to popular music in general, but from the mid-1950s it began to be used for a distinct genre, aimed at a youth market, often characterized as a softer alternative to rock and roll. From about 1967, it was increasingly used in opposition to the term rock music, to describe a form that was more commercial, ephemeral and accessible. In contrast rock music was seen as focusing on extended works, particularly albums, was often associated with particular sub-cultures (like the counterculture of the 1960s), placed an emphasis on artistic values and "authenticity", stressed live performance and instrumental or vocal virtuosity and was often seen as encapsulating progressive developments rather than simply reflecting existing trends. Nevertheless, much pop and rock music has been very similar in sound, instrumentation and even lyrical content.
The period of the later 1950s and early 1960s has traditionally been seen as an era of hiatus for rock and roll. More recently some authors have emphasised important innovations and trends in this period without which future developments would not have been possible. While early rock and roll, particularly through the advent of rockabilly, saw the greatest commercial success for male and white performers, in this era the genre was dominated by black and female artists. Rock and roll had not disappeared at the end of the 1950s and some of its energy can be seen in the Twist dance craze of the early 1960s, mainly benefiting the career of Chubby Checker.
Cliff Richard had the first British rock and roll hit with "Move It", effectively ushering in the sound of British rock. At the start of the 1960s, his backing group the Shadows was the most successful group recording instrumentals. While rock 'n' roll was fading into lightweight pop and ballads, British rock groups at clubs and local dances, heavily influenced by blues-rock pioneers like Alexis Korner, were starting to play with an intensity and drive seldom found in white American acts.
Also significant was the advent of soul music as a major commercial force. Developing out of rhythm and blues with a re-injection of gospel music and pop, led by pioneers like Ray Charles and Sam Cooke from the mid-1950s, by the early 1960s figures like Marvin Gaye, James Brown, Aretha Franklin, Curtis Mayfield and Stevie Wonder were dominating the R&B charts and breaking through into the main pop charts, helping to accelerate their desegregation, while Motown and Stax/Volt Records were becoming major forces in the record industry. Some historians of music have also pointed to important and innovative technical developments that built on rock and roll in this period, including the electronic treatment of sound by such innovators as Joe Meek, and the elaborate production methods of the Wall of Sound pursued by Phil Spector.
The instrumental rock and roll of performers such as Duane Eddy, Link Wray and the Ventures was developed by Dick Dale, who added distinctive "wet" reverb, rapid alternate picking, and Middle Eastern and Mexican influences. He produced the regional hit "Let's Go Trippin'" in 1961 and launched the surf music craze, following up with songs like "Misirlou" (1962). Like Dale and his Del-Tones, most early surf bands were formed in Southern California, including the Bel-Airs, the Challengers, and Eddie & the Showmen. The Chantays scored a top ten national hit with "Pipeline" in 1963 and probably the best known surf tune was 1963's "Wipe Out", by the Surfaris, which hit number 2 and number 10 on the "Billboard" charts in 1965.
Surf music achieved its greatest commercial success as vocal music, particularly the work of the Beach Boys, formed in 1961 in Southern California. Their early albums included both instrumental surf rock (among them covers of music by Dick Dale) and vocal songs, drawing on rock and roll and doo wop and the close harmonies of vocal pop acts like the Four Freshmen. The Beach Boys' first chart hit, "Surfin'" in 1962, reached the "Billboard" top 100 and helped make the surf music craze a national phenomenon. It is often argued that the surf music craze and the careers of almost all surf acts were effectively ended by the arrival of the British Invasion from 1964, because most surf music hits were recorded and released between 1961 and 1965. Only the Beach Boys were able to sustain a creative career into the mid-late 1960s, producing a string of hit singles and albums, including the highly regarded "Pet Sounds" in 1966, which made them, arguably, the only American rock or pop act that could rival The Beatles.
By the end of 1962, what would become the British rock scene had started with beat groups like the Beatles, Gerry & the Pacemakers and the Searchers from Liverpool and Freddie and the Dreamers, Herman's Hermits and the Hollies from Manchester. They drew on a wide range of American influences including 1950s rock and roll, soul, rhythm and blues, and surf music, initially reinterpreting standard American tunes and playing for dancers. Bands like the Animals from Newcastle and Them from Belfast, and particularly those from London like the Rolling Stones and the Yardbirds, were much more directly influenced by rhythm and blues and later blues music. Soon these groups were composing their own material, combining US forms of music and infusing it with a high energy beat. Beat bands tended towards "bouncy, irresistible melodies", while early British blues acts tended towards less sexually innocent, more aggressive songs, often adopting an anti-establishment stance. There was, however, particularly in the early stages, considerable musical crossover between the two tendencies. By 1963, led by the Beatles, beat groups had begun to achieve national success in Britain, soon to be followed into the charts by the more rhythm and blues focused acts.
"I Want to Hold Your Hand" was the Beatles' first number one hit on the "Billboard" Hot 100, spending seven weeks at the top and a total of 15 weeks on the chart. Their first appearance on "The Ed Sullivan Show" on 9 February 1964, drawing an estimated 73 million viewers (at the time a record for an American television program) is often considered a milestone in American pop culture. During the week of 4 April 1964, the Beatles held 12 positions on the "Billboard" Hot 100 singles chart, including the entire top five. The Beatles went on to become the biggest selling rock band of all time and they were followed into the US charts by numerous British bands. During the next two years British acts dominated their own and the US charts with Peter and Gordon, the Animals, Manfred Mann, Petula Clark, Freddie and the Dreamers, Wayne Fontana and the Mindbenders, Herman's Hermits, the Rolling Stones, the Troggs, and Donovan all having one or more number one singles. Other major acts that were part of the invasion included the Kinks and the Dave Clark Five.
The British Invasion helped internationalize the production of rock and roll, opening the door for subsequent British (and Irish) performers to achieve international success. In America it arguably spelled the end of instrumental surf music, vocal girl groups and (for a time) the teen idols, that had dominated the American charts in the late 1950s and early 1960s. It dented the careers of established R&B acts like Fats Domino and Chubby Checker and even temporarily derailed the chart success of surviving rock and roll acts, including Elvis. The British Invasion also played a major part in the rise of a distinct genre of rock music, and cemented the primacy of the rock group, based on guitars and drums and producing their own material as singer-songwriters.
Garage rock was a raw form of rock music, particularly prevalent in North America in the mid-1960s and so called because of the perception that it was rehearsed in the suburban family garage. Garage rock songs often revolved around the traumas of high school life, with songs about "lying girls" and unfair social circumstances being particularly common. The lyrics and delivery tended to be more aggressive than was common at the time, often with growled or shouted vocals that dissolved into incoherent screaming. They ranged from crude one-chord music (like the Seeds) to near-studio musician quality (including the Knickerbockers, the Remains, and the Fifth Estate). There were also regional variations in many parts of the country with flourishing scenes particularly in California and Texas. The Pacific Northwest states of Washington and Oregon had perhaps the most defined regional sound.
The style had been evolving from regional scenes as early as 1958. "Tall Cool One" (1959) by The Wailers and "Louie Louie" by the Kingsmen (1963) are mainstream examples of the genre in its formative stages. By 1963, garage band singles were creeping into the national charts in greater numbers, including Paul Revere and the Raiders (Boise), the Trashmen (Minneapolis) and the Rivieras (South Bend, Indiana). Other influential garage bands, such as the Sonics (Tacoma, Washington), never reached the "Billboard" Hot 100.
The British Invasion greatly influenced garage bands, providing them with a national audience, leading many (often surf or hot rod groups) to adopt a British influence, and encouraging many more groups to form. Thousands of garage bands were extant in the US and Canada during the era and hundreds produced regional hits. Despite scores of bands being signed to major or large regional labels, most were commercial failures. It is generally agreed that garage rock peaked both commercially and artistically around 1966. By 1968 the style largely disappeared from the national charts and at the local level as amateur musicians faced college, work or the draft. New styles had evolved to replace garage rock. In Detroit, garage rock's legacy remained alive into the early 1970s, with bands such as the MC5 and the Stooges, who employed a much more aggressive approach to the form. These bands began to be labelled punk rock and are now often seen as proto-punk or proto-hard rock.
Although the first impact of the British Invasion on American popular music was through beat and R&B based acts, the impetus was soon taken up by a second wave of bands that drew their inspiration more directly from American blues, including the Rolling Stones and the Yardbirds. British blues musicians of the late 1950s and early 1960s had been inspired by the acoustic playing of figures such as Lead Belly, who was a major influence on the Skiffle craze, and Robert Johnson. Increasingly they adopted a loud amplified sound, often centered on the electric guitar, based on the Chicago blues, particularly after the tour of Britain by Muddy Waters in 1958, which prompted Cyril Davies and guitarist Alexis Korner to form the band Blues Incorporated. The band involved and inspired many of the figures of the subsequent British blues boom, including members of the Rolling Stones and Cream, combining blues standards and forms with rock instrumentation and emphasis.
The other key focus for British blues was John Mayall; his band, the Bluesbreakers, included Eric Clapton (after Clapton's departure from the Yardbirds) and later Peter Green. Particularly significant was the release of the "Blues Breakers with Eric Clapton (Beano)" album (1966), considered one of the seminal British blues recordings, the sound of which was much emulated in both Britain and the United States. Eric Clapton went on to form supergroups Cream, Blind Faith, and Derek and the Dominos, followed by an extensive solo career that helped bring blues rock into the mainstream. Green, along with the Bluesbreakers' rhythm section Mick Fleetwood and John McVie, formed Peter Green's Fleetwood Mac, who enjoyed some of the greatest commercial success in the genre. In the late 1960s Jeff Beck, also an alumnus of the Yardbirds, moved blues rock in the direction of heavy rock with his band, the Jeff Beck Group. The last Yardbirds guitarist was Jimmy Page, who went on to form "The New Yardbirds", which rapidly became Led Zeppelin. Many of the songs on their first three albums, and occasionally later in their careers, were expansions on traditional blues songs.
In America, blues rock had been pioneered in the early 1960s by guitarist Lonnie Mack, but the genre began to take off in the mid-1960s as acts developed a sound similar to British blues musicians. Key acts included Paul Butterfield (whose band acted like Mayall's Bluesbreakers in Britain as a starting point for many successful musicians), Canned Heat, the early Jefferson Airplane, Janis Joplin, Johnny Winter, the J. Geils Band and Jimi Hendrix with his power trios, the Jimi Hendrix Experience (which included two British members, and was founded in Britain), and Band of Gypsys, whose guitar virtuosity and showmanship would be among the most emulated of the decade. Blues rock bands from the southern states, like the Allman Brothers Band, Lynyrd Skynyrd, and ZZ Top, incorporated country elements into their style to produce the distinctive genre Southern rock.
Early blues rock bands often emulated jazz, playing long, involved improvisations, which would later be a major element of progressive rock. From about 1967 bands like Cream and the Jimi Hendrix Experience had moved away from purely blues-based music into psychedelia. By the 1970s, blues rock had become heavier and more riff-based, exemplified by the work of Led Zeppelin and Deep Purple, and the lines between blues rock and hard rock "were barely visible", as bands began recording rock-style albums. The genre was continued in the 1970s by figures such as George Thorogood and Pat Travers, but, particularly on the British scene (except perhaps for the advent of groups such as Status Quo and Foghat who moved towards a form of high energy and repetitive boogie rock), bands became focused on heavy metal innovation, and blues rock began to slip out of the mainstream.
By the 1960s, the scene that had developed out of the American folk music revival had grown to a major movement, utilising traditional music and new compositions in a traditional style, usually on acoustic instruments. In America the genre was pioneered by figures such as Woody Guthrie and Pete Seeger and often identified with progressive or labor politics. In the early sixties figures such as Joan Baez and Bob Dylan had come to the fore in this movement as singer-songwriters. Dylan had begun to reach a mainstream audience with hits including "Blowin' in the Wind" (1963) and "Masters of War" (1963), which brought "protest songs" to a wider public, but, although beginning to influence each other, rock and folk music had remained largely separate genres, often with mutually exclusive audiences.
Early attempts to combine elements of folk and rock included the Animals' "House of the Rising Sun" (1964), which was the first commercially successful folk song to be recorded with rock and roll instrumentation and the Beatles "I'm a Loser" (1964), arguably the first Beatles song to be influenced directly by Dylan. The folk rock movement is usually thought to have taken off with The Byrds' recording of Dylan's "Mr. Tambourine Man" which topped the charts in 1965. With members who had been part of the cafe-based folk scene in Los Angeles, the Byrds adopted rock instrumentation, including drums and 12-string Rickenbacker guitars, which became a major element in the sound of the genre. Later that year Dylan adopted electric instruments, much to the outrage of many folk purists, with his "Like a Rolling Stone" becoming a US hit single. Folk rock particularly took off in California, where it led acts like the Mamas & the Papas and Crosby, Stills and Nash to move to electric instrumentation, and in New York, where it spawned performers including The Lovin' Spoonful and Simon and Garfunkel, with the latter's acoustic "The Sounds of Silence" (1965) being remixed with rock instruments to be the first of many hits.
These acts directly influenced British performers like Donovan and Fairport Convention. In 1969 Fairport Convention abandoned their mixture of American covers and Dylan-influenced songs to play traditional English folk music on electric instruments. This British folk-rock was taken up by bands including Pentangle, Steeleye Span and the Albion Band, which in turn prompted Irish groups like Horslips and Scottish acts like the JSD Band, Spencer's Feat and later Five Hand Reel, to use their traditional music to create a brand of Celtic rock in the early 1970s.
Folk-rock reached its peak of commercial popularity in the period 1967–68, before many acts moved off in a variety of directions, including Dylan and the Byrds, who began to develop country rock. However, the hybridization of folk and rock has been seen as having a major influence on the development of rock music, bringing in elements of psychedelia, and helping to develop the ideas of the singer-songwriter, the protest song, and concepts of "authenticity".
Psychedelic music's LSD-inspired vibe began in the folk scene. The first group to advertise themselves as psychedelic rock were the 13th Floor Elevators from Texas. The Beatles introduced many of the major elements of the psychedelic sound to audiences in this period, such as guitar feedback, the Indian sitar and backmasking sound effects. Psychedelic rock particularly took off in California's emerging music scene as groups followed the Byrds's shift from folk to folk rock from 1965. The psychedelic lifestyle, which revolved around hallucinogenic drugs, had already developed in San Francisco, and particularly prominent products of the scene were Big Brother and the Holding Company, the Grateful Dead and Jefferson Airplane. The Jimi Hendrix Experience's lead guitarist, Jimi Hendrix, played extended distorted, feedback-filled jams, which became a key feature of psychedelia. Psychedelic rock reached its apogee in the last years of the decade. 1967 saw the Beatles release their definitive psychedelic statement in "Sgt. Pepper's Lonely Hearts Club Band", including the controversial track "Lucy in the Sky with Diamonds"; the Rolling Stones responded later that year with "Their Satanic Majesties Request"; and Pink Floyd debuted with "The Piper at the Gates of Dawn". Key recordings included Jefferson Airplane's "Surrealistic Pillow" and the Doors' "Strange Days". These trends peaked in the 1969 Woodstock festival, which saw performances by most of the major psychedelic acts.
Progressive rock, a term sometimes used interchangeably with art rock, moved beyond established musical formulas by experimenting with different instruments, song types, and forms. From the mid-1960s the Left Banke, the Beatles, the Rolling Stones and the Beach Boys had pioneered the inclusion of harpsichords, wind, and string sections on their recordings to produce a form of Baroque rock, which can be heard in singles like Procol Harum's "A Whiter Shade of Pale" (1967), with its Bach-inspired introduction. The Moody Blues used a full orchestra on their album "Days of Future Passed" (1967) and subsequently created orchestral sounds with synthesizers. Classical orchestration, keyboards, and synthesizers were a frequent addition to the established rock format of guitars, bass, and drums in subsequent progressive rock.
Instrumentals were common, while songs with lyrics were sometimes conceptual, abstract, or based in fantasy and science fiction. The Pretty Things' "SF Sorrow" (1968), and the Kinks' "Arthur (Or the Decline and Fall of the British Empire)" (1969) introduced the format of rock operas and opened the door to concept albums, often telling an epic story or tackling a grand overarching theme. King Crimson's 1969 début album, "In the Court of the Crimson King", which mixed powerful guitar riffs and mellotron, with jazz and symphonic music, is often taken as the key recording in progressive rock, helping the widespread adoption of the genre in the early 1970s among existing blues-rock and psychedelic bands, as well as newly formed acts. The vibrant Canterbury scene saw acts following Soft Machine from psychedelia, through jazz influences, toward more expansive hard rock, including Caravan, Hatfield and the North, Gong, and National Health.
Greater commercial success was enjoyed by Pink Floyd, who also moved away from psychedelia after the departure of Syd Barrett in 1968, with "The Dark Side of the Moon" (1973), seen as a masterpiece of the genre, becoming one of the best-selling albums of all time. There was an emphasis on instrumental virtuosity, with Yes showcasing the skills of both guitarist Steve Howe and keyboard player Rick Wakeman, while Emerson, Lake & Palmer were a supergroup who produced some of the genre's most technically demanding work. Jethro Tull and Genesis both pursued very different, but distinctly English, brands of music. Renaissance, formed in 1969 by ex-Yardbirds Jim McCarty and Keith Relf, evolved into a high-concept band featuring the three-octave voice of Annie Haslam. Most British bands depended on a relatively small cult following, but a handful, including Pink Floyd, Genesis, and Jethro Tull, managed to produce top ten singles at home and break the American market. The American brand of progressive rock varied from the eclectic and innovative Frank Zappa, Captain Beefheart and Blood, Sweat & Tears, to more pop rock orientated bands like Boston, Foreigner, Kansas, Journey, and Styx. These, alongside British bands Supertramp and ELO, all demonstrated a prog rock influence and ranked among the most commercially successful acts of the 1970s, heralding the era of "pomp" or "arena rock", which lasted until the costs of complex shows (often with theatrical staging and special effects) saw them replaced by more economical rock festivals as major live venues in the 1990s.
The instrumental strand of the genre resulted in albums like Mike Oldfield's "Tubular Bells" (1973), the first record, and worldwide hit, for the Virgin Records label, which became a mainstay of the genre. Instrumental rock was particularly significant in continental Europe, allowing bands like Kraftwerk, Tangerine Dream, Can, and Faust to circumvent the language barrier. Their synthesiser-heavy "krautrock", along with the work of Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock. With the advent of punk rock and technological changes in the late 1970s, progressive rock was increasingly dismissed as pretentious and overblown. Many bands broke up, but some, including Genesis, ELP, Yes, and Pink Floyd, regularly scored top ten albums with successful accompanying worldwide tours. Some bands which emerged in the aftermath of punk, such as Siouxsie and the Banshees, Ultravox, and Simple Minds, showed the influence of progressive rock, as well as their more usually recognized punk influences.
In the late 1960s, jazz-rock emerged as a distinct subgenre out of the blues-rock, psychedelic, and progressive rock scenes, mixing the power of rock with the musical complexity and improvisational elements of jazz. AllMusic states that the term jazz-rock "may refer to the loudest, wildest, most electrified fusion bands from the jazz camp, but most often it describes performers coming from the rock side of the equation." Jazz-rock "...generally grew out of the most artistically ambitious rock subgenres of the late '60s and early '70s", including the singer-songwriter movement. Many early US rock and roll musicians had begun in jazz and carried some of these elements into the new music. In Britain the subgenre of blues rock, and many of its leading figures, like Ginger Baker and Jack Bruce of the Eric Clapton-fronted band Cream, had emerged from the British jazz scene. Often highlighted as the first true jazz-rock recording is the only album by the relatively obscure New York-based the Free Spirits with "Out of Sight and Sound" (1966). The first group of bands to self-consciously use the label were R&B oriented white rock bands that made use of jazzy horn sections, like Electric Flag, Blood, Sweat & Tears and Chicago, to become some of the most commercially successful acts of the later 1960s and the early 1970s.
British acts to emerge in the same period from the blues scene, making use of the tonal and improvisational aspects of jazz, included Nucleus and the Graham Bond and John Mayall spin-off Colosseum. From the psychedelic rock and Canterbury scenes came Soft Machine, who, it has been suggested, produced one of the most artistically successful fusions of the two genres. Perhaps the most critically acclaimed fusion came from the jazz side of the equation, with Miles Davis, particularly influenced by the work of Hendrix, incorporating rock instrumentation into his sound for the album "Bitches Brew" (1970). It was a major influence on subsequent rock-influenced jazz artists, including Herbie Hancock, Chick Corea and Weather Report. The genre began to fade in the late 1970s, as a mellower form of fusion began to take its audience, but acts like Steely Dan, Frank Zappa and Joni Mitchell recorded significant jazz-influenced albums in this period, and it has continued to be a major influence on rock music.
Roots rock is the term now used to describe a move away from what some saw as the excesses of the psychedelic scene, to a more basic form of rock and roll that incorporated its original influences, particularly country and folk music, leading to the creation of country rock and Southern rock. In 1966 Bob Dylan went to Nashville to record the album "Blonde on Blonde". This, and subsequent more clearly country-influenced albums, have been seen as creating the genre of country folk, a route pursued by a number of largely acoustic folk musicians. Other acts that followed the back-to-basics trend were the Canadian group the Band and the California-based Creedence Clearwater Revival, both of which mixed basic rock and roll with folk, country and blues, to be among the most successful and influential bands of the late 1960s. The same movement saw the beginning of the recording careers of Californian solo artists like Ry Cooder, Bonnie Raitt and Lowell George, and influenced the work of established performers, as heard on the Rolling Stones' "Beggars Banquet" (1968) and the Beatles' "Let It Be" (1970). Reflecting on this change of trends in rock music over the past few years, Christgau wrote in his June 1970 "Consumer Guide" column that this "new orthodoxy" and "cultural lag" abandoned improvisatory, studio-ornamented productions in favor of an emphasis on "tight, spare instrumentation" and song composition: "Its referents are '50s rock, country music, and rhythm-and-blues, and its key inspiration is the Band."
In 1968, Gram Parsons recorded "Safe at Home" with the International Submarine Band, arguably the first true country rock album. Later that year he joined the Byrds for "Sweetheart of the Rodeo" (1968), generally considered one of the most influential recordings in the genre. The Byrds continued in the same vein, but Parsons left to be joined by another ex-Byrds member Chris Hillman in forming the Flying Burrito Brothers who helped establish the respectability and parameters of the genre, before Parsons departed to pursue a solo career. Bands in California that adopted country rock included Hearts and Flowers, Poco, New Riders of the Purple Sage, the Beau Brummels, and the Nitty Gritty Dirt Band. Some performers also enjoyed a renaissance by adopting country sounds, including: the Everly Brothers; one-time teen idol Rick Nelson who became the frontman for the Stone Canyon Band; former Monkee Mike Nesmith who formed the First National Band; and Neil Young. The Dillards were, unusually, a country act, who moved towards rock music. The greatest commercial success for country rock came in the 1970s, with artists including the Doobie Brothers, Emmylou Harris, Linda Ronstadt and the Eagles (made up of members of the Burritos, Poco, and Stone Canyon Band), who emerged as one of the most successful rock acts of all time, producing albums that included "Hotel California" (1976).
The founders of Southern rock are usually thought to be the Allman Brothers Band, who developed a distinctive sound, largely derived from blues rock, but incorporating elements of boogie, soul, and country in the early 1970s. The most successful act to follow them were Lynyrd Skynyrd, who helped establish the "Good ol' boy" image of the subgenre and the general shape of 1970s' guitar rock. Their successors included the fusion/progressive instrumentalists Dixie Dregs, the more country-influenced Outlaws, jazz-leaning Wet Willie and (incorporating elements of R&B and gospel) the Ozark Mountain Daredevils. After the loss of original members of the Allmans and Lynyrd Skynyrd, the genre began to fade in popularity in the late 1970s, but was sustained into the 1980s with acts like .38 Special, Molly Hatchet and the Marshall Tucker Band.
Glam rock emerged from the English psychedelic and art rock scenes of the late 1960s and can be seen as both an extension of and reaction against those trends. Musically diverse, varying from the simple rock and roll revivalism of figures like Alvin Stardust to the complex art rock of Roxy Music, glam can be seen as much as a fashion as a musical subgenre. Visually it was a mesh of various styles, ranging from 1930s Hollywood glamor, through 1950s pin-up sex appeal, pre-war Cabaret theatrics, Victorian literary and symbolist styles, science fiction, to ancient and occult mysticism and mythology; manifesting itself in outrageous clothes, makeup, hairstyles, and platform-soled boots. Glam is most noted for its sexual and gender ambiguity and representations of androgyny, beside extensive use of theatrics. It was prefigured by the showmanship and gender-identity manipulation of American acts such as the Cockettes and Alice Cooper.
The origins of glam rock are associated with Marc Bolan, who had renamed his folk duo to T. Rex and taken up electric instruments by the end of the 1960s. Often cited as the moment of inception is his appearance on the UK TV programme "Top of the Pops" in December 1970 wearing glitter, to perform what would be his first number 1 single "Ride a White Swan". From 1971, already a minor star, David Bowie developed his Ziggy Stardust persona, incorporating elements of professional make up, mime and performance into his act. These performers were soon followed in the style by acts including Roxy Music, Sweet, Slade, Mott the Hoople, Mud and Alvin Stardust. While highly successful in the single charts in the UK, very few of these musicians were able to make a serious impact in the United States; Bowie was the major exception becoming an international superstar and prompting the adoption of glam styles among acts like Lou Reed, Iggy Pop, New York Dolls and Jobriath, often known as "glitter rock" and with a darker lyrical content than their British counterparts. In the UK the term glitter rock was most often used to refer to the extreme version of glam pursued by Gary Glitter and his support musicians the Glitter Band, who between them achieved eighteen top ten singles in the UK between 1972 and 1976. A second wave of glam rock acts, including Suzi Quatro, Roy Wood's Wizzard and Sparks, dominated the British single charts from about 1974 to 1976. Existing acts, some not usually considered central to the genre, also adopted glam styles, including Rod Stewart, Elton John, Queen and, for a time, even the Rolling Stones. It was also a direct influence on acts that rose to prominence later, including Kiss and Adam Ant, and less directly on the formation of gothic rock and glam metal as well as on punk rock, which helped end the fashion for glam from about 1976. 
Glam has since enjoyed sporadic modest revivals through bands such as Chainsaw Kittens and the Darkness, and in the R&B crossover act Prince.
The early 1960s saw the early recording careers of Al Hurricane and his brothers Tiny Morrie and Baby Gaby, who blended their traditional New Mexico music with rock and country-western to great success. They had earlier begun recording instrumental rock during the 1950s, but Al Hurricane saw his first hit singles, as a singer-songwriter, with his 1960s vocal recordings of "Sentimiento" and "Mi Saxophone". Carlos Santana began his recording career in the late 1960s, with his band simply referred to as Santana. His first hit single, "Evil Ways", debuted in 1969 on the eponymous album "Santana".
After the early successes of Latin rock in the 1960s, Chicano musicians like Carlos Santana and Al Hurricane continued to have successful careers throughout the 1970s. Santana opened the decade with the hit single "Black Magic Woman" from the 1970 album "Abraxas". His third album "Santana III" yielded the single "No One to Depend On", and his fourth album "Caravanserai" experimented with his sound to mixed reception. He later released a series of four albums that all achieved gold status: "Welcome", "Borboletta", "Amigos", and "Festivál". Al Hurricane continued to mix his rock music with New Mexico music, though he was also experimenting more heavily with jazz, which led to several successful singles, especially on his "Vestido Mojado" album, including the eponymous "Vestido Mojado", as well as "Por Una Mujer Casada" and "Puño de Tierra"; his brothers had successful New Mexico music singles in "La Del Moño Colorado" by Tiny Morrie and "La Cumbia De San Antone" by Baby Gaby. Al Hurricane Jr. would also begin his successful rock-infused New Mexico music recording career in the 1970s, with his 1976 rendition of "Flor De Las Flores". Los Lobos also gained popularity at this time, with their first album "Los Lobos del Este de Los Angeles" in 1977.
Reflecting on developments in rock music at the start of the 1970s, Robert Christgau later wrote in "" (1981):
Rock saw greater commodification during this decade, turning into a multibillion-dollar industry and doubling its market while, as Christgau noted, suffering a significant "loss of cultural prestige". "Maybe the Bee Gees became more popular than the Beatles, but they were never more popular than Jesus", he said. "Insofar as the music retained any mythic power, the myth was self-referential — there were lots of songs about the rock and roll life but very few about how rock could change the world, except as a new brand of painkiller ... In the '70s the powerful took over, as rock industrialists capitalized on the national mood to reduce potent music to an often reactionary species of entertainment—and to transmute rock's popular base from the audience to market."
From the late 1960s it became common to divide mainstream rock music into soft and hard rock. Soft rock was often derived from folk rock, using acoustic instruments and putting more emphasis on melody and harmonies. Major artists included Carole King, Cat Stevens and James Taylor. It reached its commercial peak in the mid- to late 1970s with acts like Billy Joel, America and the reformed Fleetwood Mac, whose "Rumours" (1977) was the best-selling album of the decade.
In contrast to soft rock, hard rock was more often derived from blues-rock and was played louder and with more intensity. It often emphasised the electric guitar, both as a rhythm instrument using simple repetitive riffs and as a solo lead instrument, and was more likely to be used with distortion and other effects. Key 1960s acts included British Invasion bands like the Kinks, as well as psychedelic era performers like Cream, Jimi Hendrix and the Jeff Beck Group. Hard rock-influenced bands that enjoyed international success in the 1970s and 1980s included Queen, Thin Lizzy, Aerosmith, AC/DC, and Van Halen.
From the late 1960s the term "heavy metal" began to be used to describe some hard rock played with even more volume and intensity, first as an adjective and by the early 1970s as a noun. The term was first used in music in Steppenwolf's "Born to Be Wild" (1967) and began to be associated with pioneer bands like San Francisco's Blue Cheer, Cleveland's James Gang and Michigan's Grand Funk Railroad. By 1970 three key British bands had developed the characteristic sounds and styles which would help shape the subgenre. Led Zeppelin added elements of fantasy to their riff laden blues-rock, Deep Purple brought in symphonic and medieval interests from their progressive rock phase and Black Sabbath introduced facets of the gothic and modal harmony, helping to produce a "darker" sound. These elements were taken up by a "second generation" of heavy metal bands into the late 1970s, including: Judas Priest, UFO, Motörhead and Rainbow from Britain; Kiss, Ted Nugent, and Blue Öyster Cult from the US; Rush from Canada and Scorpions from Germany, all marking the expansion in popularity of the subgenre. Despite a lack of airplay and very little presence on the singles charts, late-1970s heavy metal built a considerable following, particularly among adolescent working-class males in North America and Europe.
Although many established bands continued to perform and record, heavy metal suffered a hiatus in the face of the punk movement in the mid-1970s. Part of the reaction saw the popularity of bands like Motörhead, who had adopted a punk sensibility, and Judas Priest, who created a stripped-down sound, largely removing the remaining elements of blues music, from their 1978 album "Stained Class". This change of direction was compared to punk and in the late 1970s became known as the new wave of British heavy metal (NWOBHM). During this era, almost all heavy metal performers were male, with the exception of the all-female band Girlschool from the UK. These bands were soon followed by acts including Iron Maiden, Vardis, Diamond Head, Saxon, Def Leppard and Venom, many of which began to enjoy considerable success in the US. In the same period Eddie Van Halen established himself as a metal guitar virtuoso after his band's self-titled 1978 album. Randy Rhoads and Yngwie Malmsteen also became established virtuosos, associated with what would be known as the neoclassical metal style.
In the late 1980s metal fragmented into several subgenres, including thrash metal, which developed in the US from the style known as speed metal, under the influence of hardcore punk, with low-register guitar riffs typically overlaid by shredding leads. Lyrics often expressed nihilistic views or dealt with social issues using visceral, gory language. It was popularised by the "Big Four of Thrash": Metallica, Anthrax, Megadeth, and Slayer. Death metal developed out of thrash, particularly influenced by the bands Venom and Slayer. Florida's Death and the Bay Area's Possessed emphasized lyrical elements of blasphemy, diabolism and millenarianism, with vocals usually delivered as guttural "death growls" or high-pitched screaming, complemented by downtuned, highly distorted guitars and extremely fast double bass percussion. Black metal, again influenced by Venom and pioneered by Denmark's Mercyful Fate, Switzerland's Hellhammer and Celtic Frost, and Sweden's Bathory, had many similarities in sound to death metal, but was often intentionally lo-fi in production and placed greater emphasis on satanic and pagan themes. Bathory were particularly important in inspiring the further subgenres of Viking metal and folk metal. Power metal emerged in Europe in the late 1980s as a reaction to the harshness of death and black metal and was established by Germany's Helloween, who combined a melodic approach with thrash's speed and energy. England's DragonForce and Florida's Iced Earth have a sound indebted to NWOBHM, while acts such as Florida's Kamelot, Finland's Nightwish, Italy's Rhapsody of Fire, and Russia's Catharsis feature a keyboard-based "symphonic" sound, sometimes employing orchestras and opera singers.
In contrast to other subgenres doom metal, influenced by Gothic rock, slowed down the music, with bands like England's Pagan Altar and Witchfinder General and the United States' Pentagram, Saint Vitus and Trouble, emphasizing melody, down-tuned guitars, a 'thicker' or 'heavier' sound and a sepulchral mood. American bands such as Queensrÿche and Dream Theater pioneered an often instrumentally challenging fusion of NWOBHM and progressive rock called progressive metal, with bands such as Symphony X combining aspects of power metal and classical music with the style, while Sweden's Opeth developed a unique style indebted to both death metal and atmospheric 1970s prog rock.
Rock, mostly the heavy metal genre, has been criticized by some Christian leaders, who have condemned it as immoral, anti-Christian and even demonic. However, Christian rock began to develop in the late 1960s, particularly out of the Jesus movement beginning in Southern California, and emerged as a subgenre in the 1970s with artists like Larry Norman, usually seen as the first major "star" of Christian rock. The genre has been particularly popular in the United States. Many Christian rock performers have ties to the contemporary Christian music scene, while other bands and artists are closely linked to independent music. Since the 1980s Christian rock performers have gained mainstream success, including figures such as the American gospel-to-pop crossover artist Amy Grant and the British singer Cliff Richard. While these artists were largely acceptable in Christian communities, the adoption of heavy rock and glam metal styles by bands like Petra and Stryper, who achieved considerable mainstream success in the 1980s, was more controversial. From the 1990s there were increasing numbers of acts who attempted to avoid the Christian band label, preferring to be seen as groups who were also Christians, including P.O.D. and Collective Soul.
American working-class oriented heartland rock, characterized by a straightforward musical style, and a concern with the lives of ordinary, blue-collar American people, developed in the second half of the 1970s. The term heartland rock was first used to describe Midwestern arena rock groups like Kansas, REO Speedwagon and Styx, but came to be associated with a more socially concerned form of roots rock more directly influenced by folk, country and rock and roll. It has been seen as an American Midwest and Rust Belt counterpart to West Coast country rock and the Southern rock of the American South. Led by figures who had initially been identified with punk and New Wave, it was most strongly influenced by acts such as Bob Dylan, the Byrds, Creedence Clearwater Revival and Van Morrison, and the basic rock of 1960s garage and the Rolling Stones.
Exemplified by the commercial success of singer-songwriters Bruce Springsteen, Bob Seger, and Tom Petty, along with less widely known acts such as Southside Johnny and the Asbury Jukes and Joe Grushecky and the Houserockers, it was partly a reaction to post-industrial urban decline in the East and Mid-West, often dwelling on issues of social disintegration and isolation, beside a form of good-time rock and roll revivalism. The genre reached its commercial, artistic and influential peak in the mid-1980s, with Springsteen's "Born in the U.S.A." (1984) topping the charts worldwide and spawning a series of top ten singles, together with the arrival of artists including John Mellencamp, Steve Earle and more gentle singer-songwriters such as Bruce Hornsby. It can also be heard as an influence on artists as diverse as Billy Joel, Kid Rock and the Killers.
Heartland rock faded away as a recognized genre by the early 1990s, as rock music in general, and blue-collar and white working class themes in particular, lost influence with younger audiences, and as heartland's artists turned to more personal works. Many heartland rock artists continue to record today with critical and commercial success, most notably Bruce Springsteen, Tom Petty, and John Mellencamp, although their works have become more personal and experimental and no longer fit easily into a single genre. Newer artists whose music would perhaps have been labeled heartland rock had it been released in the 1970s or 1980s, such as Missouri's Bottle Rockets and Illinois' Uncle Tupelo, often find themselves labeled alt-country.
Inspired by NWOBHM and Van Halen's success, a metal scene began to develop in Southern California from the late 1970s, based on the clubs of L.A.'s Sunset Strip and including such bands as Quiet Riot, Ratt, Mötley Crüe, and W.A.S.P., who, along with similarly styled acts such as New York's Twisted Sister and Pennsylvania's Poison, incorporated the theatrics (and sometimes makeup) of glam rock acts like Alice Cooper and Kiss. The lyrics of these glam metal bands characteristically emphasized hedonism and wild behavior, and musically they were distinguished by rapid-fire shred guitar solos, anthemic choruses, and a relatively melodic, pop-oriented approach. The most commercially significant release of the era was "Slippery When Wet" (1986) by Bon Jovi from New Jersey, which sold over 12 million copies in the US alone. The album has been credited with widening the audience for the subgenre, particularly by appealing to women as well as the traditional male dominated audience, and opening the door to MTV and commercial success for other bands at the end of the decade. By the mid-1980s bands were beginning to emerge from the L.A. scene that pursued a less glam image and a rawer sound, particularly Guns N' Roses, breaking through with the chart-topping "Appetite for Destruction" (1987), and Jane's Addiction, who emerged with their major label debut "Nothing's Shocking" the following year.
Punk rock was developed between 1974 and 1976 in the United States and the United Kingdom. Rooted in garage rock and other forms of what is now known as protopunk music, punk rock bands eschewed the perceived excesses of mainstream 1970s rock. They created fast, hard-edged music, typically with short songs, stripped-down instrumentation, and often political, anti-establishment lyrics. Punk embraces a DIY (do it yourself) ethic, with many bands self-producing their recordings and distributing them through informal channels.
By late 1976, acts such as the Ramones and Patti Smith, in New York City, and the Sex Pistols and the Clash, in London, were recognized as the vanguard of a new musical movement. The following year saw punk rock spreading around the world. Punk quickly, though briefly, became a major cultural phenomenon in the United Kingdom. For the most part, punk took root in local scenes that tended to reject association with the mainstream. An associated punk subculture emerged, expressing youthful rebellion and characterized by distinctive clothing styles and a variety of anti-authoritarian ideologies.
By the beginning of the 1980s, faster, more aggressive styles such as hardcore and Oi! had become the predominant mode of punk rock. This has resulted in several evolved strains of hardcore punk, such as D-beat (a distortion-heavy subgenre influenced by the UK band Discharge), anarcho-punk (such as Crass), grindcore (such as Napalm Death), and crust punk. Musicians identifying with or inspired by punk also pursued a broad range of other variations, giving rise to New wave, post-punk and the alternative rock movement.
Although punk rock was a significant social and musical phenomenon, it achieved less in the way of record sales (being distributed by small specialty labels such as Stiff Records), or American radio airplay (as the radio scene continued to be dominated by mainstream formats such as disco and album-oriented rock). Punk rock had attracted devotees from the art and collegiate world and soon bands sporting a more literate, arty approach, such as Talking Heads and Devo began to infiltrate the punk scene; in some quarters the description "new wave" began to be used to differentiate these less overtly punk bands. Record executives, who had been mostly mystified by the punk movement, recognized the potential of the more accessible new wave acts and began aggressively signing and marketing any band that could claim a remote connection to punk or new wave. Many of these bands, such as the Cars and the Go-Go's can be seen as pop bands marketed as new wave; other existing acts, including the Police, the Pretenders and Elvis Costello, used the new wave movement as the springboard for relatively long and critically successful careers, while "skinny tie" bands exemplified by the Knack, or the photogenic Blondie, began as punk acts and moved into more commercial territory.
Between 1979 and 1985, influenced by Kraftwerk, Yellow Magic Orchestra, David Bowie and Gary Numan, British new wave went in the direction of such New Romantics as Spandau Ballet, Ultravox, Japan, Duran Duran, A Flock of Seagulls, Culture Club, Talk Talk and the Eurythmics, sometimes using the synthesizer to replace all other instruments. This period coincided with the rise of MTV and led to a great deal of exposure for this brand of synth-pop, creating what has been characterised as a second British Invasion. Some more traditional rock bands adapted to the video age and profited from MTV's airplay, most obviously Dire Straits, whose "Money for Nothing" gently poked fun at the station, despite the fact that it had helped make them international stars, but in general, guitar-oriented rock was commercially eclipsed.
If hardcore most directly pursued the stripped down aesthetic of punk, and new wave came to represent its commercial wing, post-punk emerged in the later 1970s and early 1980s as its more artistic and challenging side. Major influences beside punk bands were the Velvet Underground, Frank Zappa and Captain Beefheart, and the New York-based no wave scene which placed an emphasis on performance, including bands such as James Chance and the Contortions, DNA and Sonic Youth. Early contributors to the genre included the US bands Pere Ubu, Devo, the Residents and Talking Heads.
The first wave of British post-punk included Gang of Four, Siouxsie and the Banshees and Joy Division, who placed less emphasis on art than their US counterparts and more on the dark emotional qualities of their music. Bands like Siouxsie and the Banshees, Bauhaus, the Cure, and the Sisters of Mercy, moved increasingly in this direction to found Gothic rock, which had become the basis of a major sub-culture by the early 1980s. Similar emotional territory was pursued by Australian acts like the Birthday Party and Nick Cave. Members of Bauhaus and Joy Division explored new stylistic territory as Love and Rockets and New Order respectively. Another early post-punk movement was the industrial music developed by British bands Throbbing Gristle and Cabaret Voltaire, and New York-based Suicide, using a variety of electronic and sampling techniques that emulated the sound of industrial production and which would develop into a variety of forms of post-industrial music in the 1980s.
The second generation of British post-punk bands that broke through in the early 1980s, including the Fall, the Pop Group, the Mekons, Echo and the Bunnymen and the Teardrop Explodes, tended to move away from dark sonic landscapes. Arguably the most successful band to emerge from post-punk was Ireland's U2, who incorporated elements of religious imagery together with political commentary into their often anthemic music, and by the late 1980s had become one of the biggest bands in the world. Although many post-punk bands continued to record and perform, it declined as a movement in the mid-1980s as acts disbanded or moved off to explore other musical areas, but it has continued to influence the development of rock music and has been seen as a major element in the creation of the alternative rock movement.
Post-hardcore developed in the US, particularly in the Chicago and Washington, DC areas, in the early to mid-1980s, with bands that were inspired by the do-it-yourself ethics and guitar-heavy music of hardcore punk, but influenced by post-punk, adopting longer song formats, more complex musical structures and sometimes more melodic vocal styles. Emo also emerged from the hardcore scene in 1980s Washington, D.C., initially as "emocore", used as a term to describe bands who favored expressive vocals over the more common abrasive, barking style. The early emo scene operated as an underground, with short-lived bands releasing small-run vinyl records on tiny independent labels.
The term alternative rock was coined in the early 1980s to describe rock artists who did not fit into the mainstream genres of the time. Bands dubbed "alternative" had no unified style, but were all seen as distinct from mainstream music. Alternative bands were linked by their collective debt to punk rock, through hardcore, New Wave or the post-punk movements. Important alternative rock bands of the 1980s in the US included R.E.M., Hüsker Dü, Jane's Addiction, Sonic Youth, and the Pixies, and in the UK the Cure, New Order, the Jesus and Mary Chain, and the Smiths. Artists were largely confined to independent record labels, building an extensive underground music scene based on college radio, fanzines, touring, and word-of-mouth. They rejected the dominant synth-pop of the early 1980s, marking a return to group-based guitar rock.
Few of these early bands achieved mainstream success, although exceptions to this rule include R.E.M., the Smiths, and the Cure. Despite a general lack of spectacular album sales, the original alternative rock bands exerted a considerable influence on the generation of musicians who came of age in the 1980s and ended up breaking through to mainstream success in the 1990s. Styles of alternative rock in the U.S. during the 1980s included jangle pop, associated with the early recordings of R.E.M., which incorporated the ringing guitars of mid-1960s pop and rock, and college rock, used to describe alternative bands that began in the college circuit and college radio, including acts such as 10,000 Maniacs and the Feelies. In the UK Gothic rock was dominant in the early 1980s, but by the end of the decade it had given way to indie or dream pop acts like Primal Scream, Bogshed, Half Man Half Biscuit and the Wedding Present, and to what were dubbed shoegaze bands like My Bloody Valentine, Slowdive, Ride and Lush. Particularly vibrant was the Madchester scene, which produced such bands as Happy Mondays, Inspiral Carpets and the Stone Roses. The next decade would see the success of grunge in the United States and Britpop in the United Kingdom, bringing alternative rock into the mainstream.
Disaffected by commercialized and highly produced pop and rock in the mid-1980s, bands in Washington state (particularly in the Seattle area) formed a new style of rock which sharply contrasted with the mainstream music of the time. The developing genre came to be known as "grunge", a term descriptive of the dirty sound of the music and the unkempt appearance of most musicians, who actively rebelled against the over-groomed images of other artists. Grunge fused elements of hardcore punk and heavy metal into a single sound, and made heavy use of guitar distortion, fuzz and feedback. The lyrics were typically apathetic and angst-filled, and often concerned themes such as social alienation and entrapment, although it was also known for its dark humor and parodies of commercial rock. Bands such as Green River, Soundgarden, Melvins and Skin Yard pioneered the genre, with Mudhoney becoming the most successful by the end of the decade.
Grunge remained largely a local phenomenon until 1991, when Nirvana's album "Nevermind" became a huge success, containing the anthemic song "Smells Like Teen Spirit". "Nevermind" was more melodic than its predecessors, and by signing to Geffen Records the band became one of the first grunge acts to employ traditional corporate promotion and marketing mechanisms such as an MTV video, in-store displays and the use of radio "consultants" who promoted airplay at major mainstream rock stations. During 1991 and 1992, other grunge albums such as Pearl Jam's "Ten", Soundgarden's "Badmotorfinger" and Alice in Chains' "Dirt", along with the "Temple of the Dog" album featuring members of Pearl Jam and Soundgarden, became among the 100 top-selling albums. Major record labels signed most of the remaining grunge bands in Seattle, while a second influx of acts moved to the city in the hope of success. However, with the death of Kurt Cobain and the subsequent break-up of Nirvana in 1994, touring problems for Pearl Jam and the departure of Alice in Chains' lead singer Layne Staley in 1998, the genre began to decline, partly overshadowed by Britpop and more commercial-sounding post-grunge.
Britpop emerged from the British alternative rock scene of the early 1990s and was characterised by bands particularly influenced by British guitar music of the 1960s and 1970s. The Smiths were a major influence, as were bands of the Madchester scene, which had dissolved in the early 1990s. The movement has been seen partly as a reaction against various U.S.-based, musical and cultural trends in the late 1980s and early 1990s, particularly the grunge phenomenon and as a reassertion of a British rock identity. Britpop was varied in style, but often used catchy tunes and hooks, beside lyrics with particularly British concerns and the adoption of the iconography of the 1960s British Invasion, including the symbols of British identity previously utilised by the mods. It was launched around 1993 with releases by groups such as Suede and Blur, who were soon joined by others including Oasis, Pulp, Supergrass, and Elastica, who produced a series of successful albums and singles. For a while the contest between Blur and Oasis was built by the popular press into the "Battle of Britpop", initially won by Blur, but with Oasis achieving greater long-term and international success, directly influencing later Britpop bands, such as Ocean Colour Scene and Kula Shaker. Britpop groups brought British alternative rock into the mainstream and formed the backbone of a larger British cultural movement known as Cool Britannia. Although its more popular bands, particularly Blur and Oasis, were able to spread their commercial success overseas, especially to the United States, the movement had largely fallen apart by the end of the decade.
The term post-grunge was coined for the generation of bands that followed the emergence into the mainstream and subsequent hiatus of the Seattle grunge bands. Post-grunge bands emulated their attitudes and music, but with a more radio-friendly, commercially oriented sound. Often they worked through the major labels and came to incorporate diverse influences from jangle pop, pop-punk, alternative metal or hard rock. The term post-grunge was originally pejorative, applied to bands that emerged once grunge was mainstream and were suspected of emulating the grunge sound, suggesting that they were simply musically derivative, or a cynical response to an "authentic" rock movement. From 1994, former Nirvana drummer Dave Grohl's new band, the Foo Fighters, helped popularize the genre and define its parameters.
Some post-grunge bands, like Candlebox, were from Seattle, but the subgenre was marked by a broadening of the geographical base of grunge, with bands like Los Angeles' Audioslave and Georgia's Collective Soul, and, beyond the US, Australia's Silverchair and Britain's Bush, who all cemented post-grunge as one of the most commercially viable subgenres of the late 1990s. Although male bands predominated in post-grunge, female solo artist Alanis Morissette's 1995 album "Jagged Little Pill", labelled as post-grunge, also became a multi-platinum hit. Post-grunge morphed during the late 1990s and early 2000s as bands like Creed, Nickelback, Shinedown, Seether, 3 Doors Down, and Puddle of Mudd emerged. They abandoned most of the angst and anger of the original movement for more conventional anthems, narratives, and romantic songs, with considerable commercial success.
The origins of 1990s pop punk can be seen in the more song-oriented bands of the 1970s punk movement like Buzzcocks and the Clash, commercially successful new wave acts such as the Jam and the Undertones, and the more hardcore-influenced elements of alternative rock in the 1980s. Pop-punk tends to use power-pop melodies and chord changes with speedy punk tempos and loud guitars. Punk music provided the inspiration for some California-based bands on independent labels in the early 1990s, including Rancid, Pennywise, Weezer and Green Day. In 1994 Green Day moved to a major label and produced the album "Dookie", which found a new, largely teenage, audience and proved a surprise diamond-selling success, leading to a series of hit singles, including two number ones in the US. They were soon followed by the eponymous debut from Weezer, which spawned three top ten singles in the US. This success opened the door for the multi-platinum sales of metallic punk band the Offspring with "Smash" (1994). This first wave of pop punk reached its commercial peak with Green Day's "Nimrod" (1997) and The Offspring's "Americana" (1998).
A second wave of pop punk was spearheaded by Blink-182, with their breakthrough album "Enema of the State" (1999), followed by bands such as Good Charlotte, Simple Plan and Sum 41, who made use of humour in their videos and had a more radio-friendly tone to their music, while retaining the speed, some of the attitude and even the look of 1970s punk. Later pop-punk bands, including All Time Low, 5 Seconds Of Summer, the All-American Rejects and Fall Out Boy, had a sound that has been described as closer to 1980s hardcore, while still achieving commercial success.
In the 1980s the terms indie rock and alternative rock were used interchangeably. By the mid-1990s, as elements of the movement began to attract mainstream interest, particularly grunge and then Britpop, post-grunge and pop-punk, the term alternative began to lose its meaning. Those bands following the less commercial contours of the scene were increasingly referred to by the label indie. They characteristically attempted to retain control of their careers by releasing albums on their own or small independent labels, while relying on touring, word-of-mouth, and airplay on independent or college radio stations for promotion. Linked by an ethos more than a musical approach, the indie rock movement encompassed a wide range of styles, from hard-edged, grunge-influenced bands like the Cranberries and Superchunk, through do-it-yourself experimental bands like Pavement, to punk-folk singers such as Ani DiFranco. It has been noted that indie rock has a relatively high proportion of female artists compared with preceding rock genres, a tendency exemplified by the development of feminist-informed Riot Grrrl music. Many countries have developed an extensive local indie scene, flourishing with bands popular enough to survive within their home country but virtually unknown outside it.
By the end of the 1990s many recognisable subgenres, most with their origins in the late 1980s alternative movement, were included under the umbrella of indie. Lo-fi eschewed polished recording techniques for a D.I.Y. ethos and was spearheaded by Beck, Sebadoh and Pavement. The work of Talk Talk and Slint helped inspire both post-rock, an experimental style influenced by jazz and electronic music, pioneered by Bark Psychosis and taken up by acts such as Tortoise, Stereolab, and Laika, and the more dense and complex, guitar-based math rock, developed by acts like Polvo and Chavez. Space rock looked back to progressive roots, with drone-heavy and minimalist acts like Spacemen 3, the two bands created out of its split, Spectrum and Spiritualized, and later groups including Flying Saucer Attack, Godspeed You! Black Emperor and Quickspace. In contrast, sadcore emphasised pain and suffering through melodic use of acoustic and electronic instrumentation in the music of bands like American Music Club and Red House Painters, while the revival of Baroque pop reacted against lo-fi and experimental music by placing an emphasis on melody and classical instrumentation, with artists like Arcade Fire, Belle and Sebastian and Rufus Wainwright.
Alternative metal emerged from the hardcore scene of alternative rock in the US in the later 1980s, but gained a wider audience after grunge broke into the mainstream in the early 1990s. Early alternative metal bands mixed a wide variety of genres with hardcore and heavy metal sensibilities, with acts like Jane's Addiction and Primus utilizing progressive rock, Soundgarden and Corrosion of Conformity using garage punk, the Jesus Lizard and Helmet mixing noise rock, Ministry and Nine Inch Nails influenced by industrial music, Monster Magnet moving into psychedelia, Pantera, Sepultura and White Zombie creating groove metal, while Biohazard and Faith No More turned to hip hop and rap.
Hip hop had gained attention from rock acts in the early 1980s, including The Clash with "The Magnificent Seven" (1980) and Blondie with "Rapture" (1980). Early crossover acts included Run DMC and the Beastie Boys. Detroit rapper Esham became known for his "acid rap" style, which fused rapping with a sound that was often based in rock and heavy metal. Rappers who sampled rock songs included Ice-T, The Fat Boys, LL Cool J, Public Enemy and Whodini. The mixing of thrash metal and rap was pioneered by Anthrax on their 1987 comedy-influenced single "I'm the Man".
In 1990, Faith No More broke into the mainstream with their single "Epic", often seen as the first truly successful combination of heavy metal with rap. This paved the way for the success of existing bands like 24-7 Spyz and Living Colour, and new acts including Rage Against the Machine and Red Hot Chili Peppers, who all fused rock and hip hop among other influences. Among the first wave of performers to gain mainstream success as rap rock were 311, Bloodhound Gang, and Kid Rock. A more metallic sound, "nu metal", was pursued by bands including Limp Bizkit, Korn and Slipknot. Later in the decade this style, which contained a mix of grunge, punk, metal, rap and turntable scratching, spawned a wave of successful bands like Linkin Park, P.O.D. and Staind, who were often classified as rap metal or nu metal; the first of these, Linkin Park, is the best-selling band of the genre.
In 2001, nu metal reached its peak with albums like Staind's "Break the Cycle", P.O.D's "Satellite", Slipknot's "Iowa" and Linkin Park's "Hybrid Theory". New bands also emerged like Disturbed, Godsmack and Papa Roach, whose major label début "Infest" became a platinum hit. Korn's long-awaited fifth album "Untouchables", and Papa Roach's second album "Lovehatetragedy", did not sell as well as their previous releases, while nu metal bands were played more infrequently on rock radio stations and MTV began focusing on pop punk and emo. Since then, many bands have changed to a more conventional hard rock, heavy metal, or electronic music sound.
From about 1997, as dissatisfaction grew with the concept of Cool Britannia, and Britpop as a movement began to dissolve, emerging bands began to avoid the Britpop label while still producing music derived from it. Many of these bands tended to mix elements of British traditional rock (or British trad rock), particularly the Beatles, Rolling Stones and Small Faces, with American influences, including post-grunge. Drawn from across the United Kingdom (with several important bands emerging from the north of England, Scotland, Wales and Northern Ireland), the themes of their music tended to be less parochially centered on British, English and London life and more introspective than had been the case with Britpop at its height. This, beside a greater willingness to engage with the American press and fans, may have helped some of them in achieving international success.
Post-Britpop bands have been seen as presenting the image of the rock star as an ordinary person and their increasingly melodic music was criticised for being bland or derivative. Post-Britpop bands like Travis from "The Man Who" (1999), Stereophonics from "Performance and Cocktails" (1999), Feeder from "Echo Park" (2001), and particularly Coldplay from their debut album "Parachutes" (2000), achieved much wider international success than most of the Britpop groups that had preceded them, and were some of the most commercially successful acts of the late 1990s and early 2000s, arguably providing a launchpad for the subsequent garage rock or post-punk revival, which has also been seen as a reaction to their introspective brand of rock.
Emo broke into mainstream culture in the early 2000s with the platinum-selling success of Jimmy Eat World's "Bleed American" (2001) and Dashboard Confessional's "The Places You Have Come to Fear the Most" (2003). The new emo had a much more mainstream sound than in the 1990s and a far greater appeal amongst adolescents than its earlier incarnations. At the same time, use of the term emo expanded beyond the musical genre, becoming associated with fashion, a hairstyle and any music that expressed emotion. By 2003 post-hardcore bands had also caught the attention of major labels and began to enjoy mainstream success in the album charts. A number of these bands were seen as a more aggressive offshoot of emo and given the often vague label of screamo.
In the early 2000s, a new group of bands that played a stripped down and back-to-basics version of guitar rock, emerged into the mainstream. They were variously characterised as part of a garage rock, post-punk or new wave revival. Because the bands came from across the globe, cited diverse influences (from traditional blues, through New Wave to grunge), and adopted differing styles of dress, their unity as a genre has been disputed. There had been attempts to revive garage rock and elements of punk in the 1980s and 1990s and by 2000 scenes had grown up in several countries.
The commercial breakthrough from these scenes was led by four bands: the Strokes, who emerged from the New York club scene with their début album "Is This It" (2001); the White Stripes, from Detroit, with their third album "White Blood Cells" (2001); the Hives from Sweden after their compilation album "Your New Favourite Band" (2001); and the Vines from Australia with "Highly Evolved" (2002). They were christened by the media as the "The" bands, and dubbed "The saviours of rock 'n' roll", leading to accusations of hype. A second wave of bands that gained international recognition due to the movement included Black Rebel Motorcycle Club, the Killers, Interpol and Kings of Leon from the US, the Libertines, Arctic Monkeys, Bloc Party, Kaiser Chiefs and Franz Ferdinand from the UK, Jet from Australia, and the Datsuns and the D4 from New Zealand.
In the 2000s, as computer technology became more accessible and music software advanced, it became possible to create high quality music using little more than a single laptop computer. This resulted in a massive increase in the amount of home-produced electronic music available to the general public via the expanding internet, and new forms of performance such as laptronica and live coding. These techniques also began to be used by existing bands and by developing genres that mixed rock with digital techniques and sounds, including indie electronic, electroclash, dance-punk and new rave.
The Telecommunications Act of 1996 brought drastic changes in the American music industry. There was an increase in homogenization and monopoly of U.S. radios and media outlets; consequently, the Act influenced the trends of music broadcast worldwide.
After 1999, the year the music industry achieved its historical peak in sales with a record $28.9 billion (in large part due to the compact disc format, which also reached its historical peak that year), there was a progressive decline in physical sales and revenue, a trend that signaled a crisis in the music industry. By contrast, online consumption (with projects as early as Napster in 1999) grew steadily, to the point that by the 2010s most of the revenue was made by streaming and music downloads.
During the late 2000s and 2010s, rock music saw a decline in mainstream popularity. Some commentators cite the popularity of electronic dance music at that time as a contributing factor to rock's declining popularity, and hip hop surpassed rock as the United States's most consumed musical genre in 2017. Critics in the latter half of the 2010s took notice of the genre's waning popularity, increasing vagueness, a perceived inability by newer artists to evolve the genre, and changing attitudes in music creation. Bill Flanagan, in a 2016 opinion piece for "The New York Times", compared the state of rock during this period to the state of jazz in the early 1980s, "slowing down and looking back." "Vice" suggests that this decline in popularity could actually benefit the genre by attracting outsiders with "something to prove and nothing to gain."
Despite rock's decline in mainstream popularity, some rock bands have continued to achieve mainstream success in the 2010s, including Tool, Maroon 5, Imagine Dragons, Fall Out Boy, Greta Van Fleet, Panic! at the Disco, The Lumineers, Twenty One Pilots, Walk the Moon, Portugal. The Man, and The Black Keys.
The COVID-19 pandemic brought drastic changes to the rock scene worldwide. Due to the quarantine, there were massive cancellations and postponements of concerts, tours, festivals, album releases, award ceremonies, and competitions. Artists resorted to online performances to make their careers stay active. Another scheme to circumvent the quarantine limitations was used at a concert of Danish rock musician Mads Langer: the attendees saw the performance inside cars, much like in a drive-in theater. Musically, the pandemic brought an increase in rock subgenres that were slower, less energetic, and more acoustic.
Different subgenres of rock were adopted by, and became central to, the identity of a large number of sub-cultures. In the 1950s and 1960s, respectively, British youths adopted the Teddy Boy and Rocker subcultures, which revolved around US rock and roll. The counterculture of the 1960s was closely associated with psychedelic rock. The mid-late 1970s punk subculture began in the US, but it was given a distinctive look by British designer Vivienne Westwood, a look which spread worldwide. Out of the punk scene, the Goth and Emo subcultures grew, both of which presented distinctive visual styles.
When an international rock culture developed, it supplanted cinema as the major source of fashion influence. Paradoxically, followers of rock music have often mistrusted the world of fashion, which has been seen as elevating image above substance. Rock fashions have been seen as combining elements of different cultures and periods, as well as expressing divergent views on sexuality and gender, and rock music in general has been noted and criticised for facilitating greater sexual freedom. Rock has also been associated with various forms of drug use, including the amphetamines taken by mods in the early to mid-1960s, through the LSD, mescaline, hashish and other hallucinogenic drugs linked with psychedelic rock in the late 1960s and early 1970s; and sometimes to cannabis, cocaine and heroin, all of which have been eulogised in song.
Rock has been credited with changing attitudes to race by opening up African-American culture to white audiences; but at the same time, rock has been accused of appropriating and exploiting that culture. While rock music has absorbed many influences and introduced Western audiences to different musical traditions, the global spread of rock music has been interpreted as a form of cultural imperialism. Rock music inherited the folk tradition of protest song, making political statements on subjects such as war, religion, poverty, civil rights, justice and the environment. Political activism reached a mainstream peak with the "Do They Know It's Christmas?" single (1984) and Live Aid concert for Ethiopia in 1985, which, while successfully raising awareness of world poverty and funds for aid, have also been criticised (along with similar events), for providing a stage for self-aggrandisement and increased profits for the rock stars involved.
Since its early development, rock music has been associated with rebellion against social and political norms, most obviously in early rock and roll's rejection of an adult-dominated culture, the counterculture's rejection of consumerism and conformity, and punk's rejection of all forms of social convention; however, it can also be seen as providing a means of commercial exploitation of such ideas and of diverting youth away from political action.
Professional women instrumentalists are uncommon in rock genres such as heavy metal, although bands such as Within Temptation have featured women as lead singers with men playing instruments. According to Schaap and Berkers, "playing in a band is largely a male homosocial activity, that is, learning to play in a band is largely a peer-based ... experience, shaped by existing sex-segregated friendship networks." They note that rock music "is often defined as a form of male rebellion vis-à-vis female bedroom culture." (The theory of "bedroom culture" argues that society influences girls to not engage in crime and deviance by virtually trapping them in their bedroom; it was developed by a sociologist named Angela McRobbie.) In popular music, there has been a gendered "distinction between public (male) and private (female) participation" in music. "Several scholars have argued that men exclude women from bands or from the bands' rehearsals, recordings, performances, and other social activities". "Women are mainly regarded as passive and private consumers of allegedly slick, prefabricated (hence, inferior) pop music ..., excluding them from participating as high status rock musicians". One of the reasons that there are rarely mixed-gender bands is that "bands operate as tight-knit units in which homosocial solidarity (social bonds between people of the same sex) ... plays a crucial role". In the 1960s rock music scene, "singing was sometimes an acceptable pastime for a girl, but playing an instrument ... simply wasn't done".
"The rebellion of rock music was largely a male rebellion; the womenoften, in the 1950s and '60s, girls in their teensin rock usually sang songs as personæ utterly dependent on their macho boyfriends ...". Philip Auslander says that "Although there were many women in rock by the late 1960s, most performed only as singers, a traditionally feminine position in popular music". Though some women played instruments in American all-female garage rock bands, none of these bands achieved more than regional success. So they "did not provide viable templates for women's on-going participation in rock". In relation to the gender composition of heavy metal bands, it has been said that "[h]eavy metal performers are almost exclusively male" "...at least until the mid-1980s" apart from "...exceptions such as Girlschool". However, "...now [in the 2010s] maybe more than ever–strong metal women have put up their dukes and got down to it", "carv[ing] out a considerable place for [them]selves." When Suzi Quatro emerged in 1973, "no other prominent female musician worked in rock simultaneously as a singer, instrumentalist, songwriter, and bandleader". According to Auslander, she was "kicking down the male door in rock and roll and proving that a female "musician" ... and this is a point I am extremely concerned about ... could play as well if not better than the boys".
An all-female band is a musical group in genres such as rock and blues which is exclusively composed of female musicians. This is distinct from a girl group, in which the female members are solely vocalists, though this terminology is not universally followed.
Retronym
A retronym is a newer name for an existing thing that differentiates the original form or version from a more recent one. It is thus a word or phrase created to avoid confusion between two types where previously, before there was more than one type, no clarification was required.
Advances in technology are often responsible for the coinage of retronyms. For example, the term "acoustic guitar" was coined with the advent of electric guitars; analog watches were renamed to distinguish them from digital watches once the latter were invented; "association football" was coined to distinguish from the later sports of rugby football and gridiron football; and "push bike" was created to distinguish from motorbikes and motorized bicycles.
The first bicycles with two wheels of equal size were called "safety bicycles" because they were easier to handle than the then-dominant style that had one large wheel and one small wheel, which then became known as an "ordinary" bicycle. Since the end of the 19th century, most bicycles have been expected to have two equal sized wheels, and the other type has been renamed "penny-farthing" or "high-wheeler" bicycle.
The Atari Video Computer System platform was rebranded the "Atari 2600" (after its product code, CX-2600) in 1982 following the launch of its successor, the Atari 5200, and all hardware and software related to the platform were released under this new branding from that point on.
The original Game Boy was referred to as the "Game Boy Classic" after the release of the Game Boy Color. Another game console example is the original Xbox being referred to as the "Xbox 1" prior to the release of the Xbox One. Since the Xbox One's release, the first Xbox has commonly been referred to as the "original Xbox" instead.
The term "retronym", a neologism composed of the combining forms "retro-" (from Latin "retro"", "before")" + "-nym" (from Greek ónoma, "“name”"), was coined by Frank Mankiewicz in 1980 and popularized by William Safire in "The New York Times Magazine".
In 2000 "The American Heritage Dictionary" (4th edition) became the first major dictionary to include the word "retronym". | https://en.wikipedia.org/wiki?curid=25424 |
Superman
Superman is a fictional superhero. The character was created by writer Jerry Siegel and artist Joe Shuster, and first appeared in the comic book "Action Comics" #1 (cover-dated June 1938 and published April 18, 1938). The character regularly appears in comic books published by DC Comics, and has been adapted to a number of radio serials, movies, and television shows.
Superman was born on the planet Krypton and was given the name Kal-El at birth. As a baby, his parents sent him to Earth in a small spaceship moments before Krypton was destroyed in a natural cataclysm. His ship landed in the American countryside, near the fictional town of Smallville. He was found and adopted by farmers Jonathan and Martha Kent, who named him Clark Kent. Clark developed various superhuman abilities, such as incredible strength and impervious skin. His foster parents advised him to use his abilities for the benefit of humanity, and he decided to fight crime as a vigilante. To protect his privacy, he changes into a colorful costume and uses the alias "Superman" when fighting crime. Clark Kent resides in the fictional American city of Metropolis, where he works as a journalist for the "Daily Planet". Superman's supporting characters include his love interest and fellow journalist Lois Lane, "Daily Planet" photographer Jimmy Olsen and editor-in-chief Perry White. His most well-known villain is Lex Luthor. Superman is part of the DC Universe, and as such often appears in stories alongside other DC Universe heroes such as Batman and Wonder Woman.
Although Superman was not the first superhero character, he popularized the superhero archetype and defined its conventions. Superheroes are usually judged by how closely they resemble the standard established by Superman. He was the best-selling superhero character in American comic books up until the 1980s.
Jerry Siegel and Joe Shuster met in 1932 while attending Glenville High School in Cleveland and bonded over their admiration of fiction. Siegel aspired to become a writer and Shuster aspired to become an illustrator. Siegel wrote amateur science fiction stories, which he self-published as a magazine called "Science Fiction: The Advance Guard of Future Civilization". His friend Shuster often provided illustrations for his work. In January 1933, Siegel published a short story in his magazine titled "The Reign of the Superman". The titular character is a vagrant named Bill Dunn who is tricked by an evil scientist into consuming an experimental drug. The drug gives Dunn the powers of mind-reading, mind-control, and clairvoyance. He uses these powers maliciously for profit and amusement, but then the drug wears off, leaving him a powerless vagrant again. Shuster provided illustrations, depicting Dunn as a bald man.
Siegel and Shuster shifted to making comic strips, with a focus on adventure and comedy. They wanted to become syndicated newspaper strip authors, so they showed their ideas to various newspaper editors. However, the newspaper editors told them that their ideas were not sensational enough: if they wanted to make a successful comic strip, it had to be something more sensational than anything else on the market. This prompted Siegel to revisit Superman as a comic strip character. Siegel modified Superman's powers to make him even more sensational: like Bill Dunn, the second prototype of Superman is given powers against his will by an unscrupulous scientist, but instead of psychic abilities, he acquires superhuman strength and bullet-proof skin. Additionally, this new Superman was a crime-fighting hero instead of a villain, because Siegel noted that comic strips with heroic protagonists tended to be more successful. Siegel later recalled that this Superman wore a "bat-like" cape in some panels, but typically he and Shuster agreed there was no costume yet, and there is none apparent in the surviving artwork.
Siegel and Shuster showed this second concept of Superman to Consolidated Book Publishers, based in Chicago. In May 1933, Consolidated had published a proto-comic book titled "Detective Dan: Secret Operative 48". It contained all-original stories as opposed to reprints of newspaper strips, which was a novelty at the time. Siegel and Shuster put together a comic book in similar format called "The Superman". A delegation from Consolidated visited Cleveland that summer on a business trip and Siegel and Shuster took the opportunity to present their work in person. Although Consolidated expressed interest, they later pulled out of the comics business without ever offering a book deal because the sales of "Detective Dan" were disappointing.
Siegel believed publishers kept rejecting them because he and Shuster were young and unknown, so he looked for an established artist to replace Shuster. When Siegel told Shuster what he was doing, Shuster reacted by burning their rejected Superman comic, sparing only the cover. They continued collaborating on other projects, but for the time being Shuster was through with Superman.
Siegel wrote to numerous artists. The first response came in July 1933 from Leo O'Mealia, who drew the "Fu Manchu" strip for the Bell Syndicate. In the script that Siegel sent O'Mealia, Superman's origin story changes: he is a "scientist-adventurer" from the far future, when humanity has naturally evolved "superpowers". Just before the Earth explodes, he escapes in a time-machine to the modern era, whereupon he immediately begins using his superpowers to fight crime. O'Mealia produced a few strips and showed them to his newspaper syndicate, but they were rejected. O'Mealia never sent Siegel copies of his strips, and they have been lost.
In June 1934, Siegel found another partner: an artist in Chicago named Russell Keaton. Keaton drew the "Buck Rogers" and "Skyroads" comic strips. In the script that Siegel sent Keaton in June, Superman's origin story further evolved: In the distant future, when Earth is on the verge of exploding due to "giant cataclysms", the last surviving man sends his three-year-old son back in time to the year 1935. The time-machine appears on a road where it is discovered by motorists Sam and Molly Kent. They leave the boy in an orphanage, but the staff struggles to control him because he has superhuman strength and impenetrable skin. The Kents adopt the boy and name him Clark, and teach him that he must use his fantastic natural gifts for the benefit of humanity. In November, Siegel sent Keaton an extension of his script: an adventure where Superman foils a conspiracy to kidnap a star football player. The extended script mentions that Clark puts on a special "uniform" when assuming the identity of Superman, but it is not described. Keaton produced two weeks' worth of strips based on Siegel's script. In November, Keaton showed his strips to a newspaper syndicate, but they too were rejected, and he abandoned the project.
Siegel and Shuster reconciled and resumed developing Superman together. The character became an alien from the planet Krypton. Shuster designed the now-familiar costume: tights with an "S" on the chest, over-shorts, and a cape. They made Clark Kent a journalist who pretends to be timid, and conceived his colleague Lois Lane, who is attracted to the bold and mighty Superman but does not realize that he and Kent are the same person.
In June 1935 Siegel and Shuster finally found work with National Allied Publications, a comic magazine publishing company in New York owned by Malcolm Wheeler-Nicholson. Wheeler-Nicholson published two of their strips in "New Fun Comics" #6 (1935): "Henri Duval" and "Doctor Occult". Siegel and Shuster also showed him Superman and asked him to market Superman to the newspapers on their behalf. In October, Wheeler-Nicholson offered to publish Superman in one of his own magazines. Siegel and Shuster refused his offer because Wheeler-Nicholson had demonstrated himself to be an irresponsible businessman. He had been slow to respond to their letters and hadn't paid them for their work in "New Fun Comics" #6. They chose to keep marketing Superman to newspaper syndicates themselves. Despite the erratic pay, Siegel and Shuster kept working for Wheeler-Nicholson because he was the only publisher who was buying their work, and over the years they produced other adventure strips for his magazines.
Wheeler-Nicholson's financial difficulties continued to mount. In 1936, he formed a joint corporation with Harry Donenfeld and Jack Liebowitz called Detective Comics, Inc., in order to release his third magazine, titled "Detective Comics". Siegel and Shuster produced stories for "Detective Comics" too, such as "Slam Bradley". Wheeler-Nicholson fell into deep debt to Donenfeld and Liebowitz, and in early January 1938, Donenfeld and Liebowitz petitioned Wheeler-Nicholson's company into bankruptcy and seized it.
In early December 1937, Siegel visited Liebowitz in New York, and Liebowitz asked him to produce some comics for an upcoming comic anthology magazine called "Action Comics". Siegel proposed some new stories, but not Superman: he and Shuster were at the time negotiating a deal with the McClure Newspaper Syndicate for Superman. In early January 1938, Siegel had a three-way telephone conversation with Liebowitz and an employee of McClure named Max Gaines. Gaines informed Siegel that McClure had rejected Superman, and asked if he could forward their Superman strips to Liebowitz so that Liebowitz could consider them for "Action Comics". Siegel agreed. Liebowitz and his colleagues were impressed by the strips, and they asked Siegel and Shuster to develop them into 13 pages for "Action Comics". Having grown tired of rejections, the pair accepted the offer. They submitted their work in late February and were paid $130 for it ($10 per page). In early March they signed a contract (at Liebowitz's request) in which they released the copyright for Superman to Detective Comics, Inc. This was normal practice in the business, and Siegel and Shuster had given away the copyrights to their previous works as well.
The duo's revised version of Superman appeared in the first issue of "Action Comics", which was published on April 18, 1938. The issue was a huge success, thanks in large part to the Superman feature.
Siegel and Shuster read pulp science-fiction and adventure magazines, and many stories featured characters with fantastical abilities such as telepathy, clairvoyance, and superhuman strength. An influence was John Carter of Mars, a character from the novels by Edgar Rice Burroughs. John Carter is a human who is transported to Mars, where the lower gravity makes him stronger than the natives and allows him to leap great distances. Another influence was Philip Wylie's 1930 novel "Gladiator", featuring a protagonist named Hugo Danner who had similar powers.
Superman's stance and devil-may-care attitude was influenced by the characters of Douglas Fairbanks, who starred in adventure films such as "The Mark of Zorro" and "Robin Hood". The name of Superman's home city, Metropolis, was taken from the 1927 film of the same name. Popeye cartoons were also an influence.
Clark Kent's harmless facade and dual identity were inspired by the protagonists of such movies as Don Diego de la Vega in "The Mark of Zorro" and Sir Percy Blakeney in "The Scarlet Pimpernel". Siegel thought this would make for interesting dramatic contrast and good humor. Another inspiration was slapstick comedian Harold Lloyd. The archetypal Lloyd character was a mild-mannered man who finds himself abused by bullies but later in the story snaps and fights back furiously.
Kent is a journalist because Siegel often imagined himself becoming one after leaving school. The love triangle between Lois Lane, Clark, and Superman were inspired by Siegel's own awkwardness with girls.
The pair collected comic strips in their youth, with a favorite being Winsor McCay's fantastical "Little Nemo". Shuster remarked on the artists who played an important part in the development of his own style: "Alex Raymond and Burne Hogarth were my idols – also Milt Caniff, Hal Foster, and Roy Crane." Shuster taught himself to draw by tracing over the art in the strips and magazines they collected.
As a boy, Shuster was interested in fitness culture and a fan of strongmen such as Siegmund Breitbart and Joseph Greenstein. He collected fitness magazines and manuals and used their photographs as visual references for his art.
The visual design of Superman came from multiple influences. The tight-fitting suit and shorts were inspired by the costumes of wrestlers, boxers, and strongmen. In early concept art, Shuster gave Superman laced sandals like those of strongmen and classical heroes, but these were eventually changed to red boots. The costumes of Douglas Fairbanks were also an influence. The emblem on his chest may have been inspired by the uniforms of athletic teams. Many pulp action heroes such as swashbucklers wore capes. Superman's face was based on Johnny Weissmuller with touches derived from the comic-strip character Dick Tracy and from the work of cartoonist Roy Crane.
The word "superman" was commonly used in the 1920s and 1930s to describe men of great ability, most often athletes and politicians. It occasionally appeared in pulp fiction stories as well, such as "The Superman of Dr. Jukes". It is unclear whether Siegel and Shuster were influenced by Friedrich Nietzsche's concept of the "Übermensch"; they never acknowledged as much.
Since 1938, Superman stories have been regularly published in periodical comic books published by DC Comics. The first and oldest of these is "Action Comics", which began in April 1938. "Action Comics" was initially an anthology magazine, but it eventually became dedicated to Superman stories. The second oldest periodical is "Superman", which began in June 1939. "Action Comics" and "Superman" have been published without interruption (ignoring changes to the title and numbering scheme). A number of other shorter-lived Superman periodicals have been published over the years. Superman is part of the DC Universe, which is a shared universe of superhero characters owned by DC Comics, and consequently he frequently appears in stories alongside the likes of Batman, Wonder Woman, and others.
Superman has sold more comic books over his publication history than any other American superhero character. Exact sales figures for the early decades of Superman comic books are hard to find because, like most publishers of the era, DC Comics concealed this data from competitors, but given general market trends, sales of "Action Comics" and "Superman" probably peaked in the mid-1940s and steadily declined thereafter. Sales data first became public in 1960, and showed that Superman was the best-selling comic book character of the 1960s and 1970s. Sales rose again starting in 1987. "Superman" #75 (Nov 1992) sold over 23 million copies, making it the best-selling issue of a comic book of all time, thanks to a media sensation over the supposedly permanent death of the character in that issue. Sales declined from that point on. In March 2018, "Action Comics" sold just 51,534 copies, although such low figures are normal for superhero comic books in general (for comparison, "Amazing Spider-Man" #797 sold only 128,189 copies). The comic books are today considered a niche aspect of the Superman franchise due to low readership, though they remain influential as creative engines for the movies and television shows. Comic book stories can be produced quickly and cheaply, and are thus an ideal medium for experimentation.
Whereas comic books in the 1950s were read by children, since the 1990s the average reader has been an adult. A major reason for this shift was DC Comics' decision in the 1970s to sell its comic books to specialty stores instead of traditional magazine retailers (supermarkets, newsstands, etc.) — a model called "direct distribution". This made comic books less accessible to children.
Beginning in January 1939, a "Superman" daily comic strip appeared in newspapers, syndicated through the McClure Syndicate. A color Sunday version was added that November. Jerry Siegel wrote most of the strips until he was conscripted in 1943. The Sunday strips had a narrative continuity separate from the daily strips, possibly because Siegel had to delegate the Sunday strips to ghostwriters. By 1941, the newspaper strips had an estimated readership of 20 million. Joe Shuster drew the early strips, then passed the job to Wayne Boring. From 1949 to 1956, the newspaper strips were drawn by Win Mortimer. The strip ended in May 1966, but was revived from 1977 to 1983 to coincide with a series of movies released by Warner Bros.
Initially, Siegel was allowed to write Superman more or less as he saw fit because nobody had anticipated the success and rapid expansion of the franchise. But soon Siegel and Shuster's work was put under careful oversight for fear of trouble with censors. Siegel was forced to tone down the violence and social crusading that characterized his early stories. Editor Whitney Ellsworth, hired in 1940, dictated that Superman not kill. Sexuality was banned, and colorfully outlandish villains such as the Ultra-Humanite and Toyman were favored because they were thought to be less nightmarish for young readers.
Mort Weisinger was the editor on Superman comics from 1941 to 1970, his tenure briefly interrupted by military service. Siegel and his fellow writers had developed the character with little thought of building a coherent mythology, but as the number of Superman titles and the pool of writers grew, Weisinger demanded a more disciplined approach. Weisinger assigned story ideas, and the logic of Superman's powers, his origin, the locales, and his relationships with his growing cast of supporting characters were carefully planned. Elements such as Bizarro, Supergirl, the Phantom Zone, the Fortress of Solitude, alternate varieties of kryptonite, robot doppelgangers, and Krypto were introduced during this era. The complicated universe built under Weisinger was beguiling to devoted readers but alienating to casual ones. Weisinger favored lighthearted stories over serious drama, and avoided sensitive subjects such as the Vietnam War and the American civil rights movement because he feared his right-wing views would alienate his left-leaning writers and readers. Weisinger also introduced letters columns in 1958 to encourage feedback and build intimacy with readers.
Weisinger retired in 1970 and Julius Schwartz took over. By his own admission, Weisinger had grown out of touch with newer readers. Schwartz updated Superman by removing overused plot elements such as kryptonite and robot doppelgangers and making Clark Kent a television anchor. Schwartz also scaled Superman's powers down to a level closer to Siegel's original. These changes would eventually be reversed by later writers. Schwartz allowed stories with serious drama such as "For the Man Who Has Everything" ("Superman Annual" #11), in which the villain Mongul torments Superman with an illusion of happy family life on a living Krypton.
Schwartz retired from DC Comics in 1986 and was succeeded by Mike Carlin as an editor on Superman comics. His retirement coincided with DC Comics' decision to streamline the shared continuity called the DC Universe with the companywide-crossover storyline "Crisis on Infinite Earths". Writer John Byrne rewrote the Superman mythos, again reducing Superman's powers, which writers had slowly re-strengthened, and revised many supporting characters, such as making Lex Luthor a billionaire industrialist rather than a mad scientist, and making Supergirl an artificial shapeshifting organism because DC wanted Superman to be the sole surviving Kryptonian.
Carlin was promoted to Executive Editor for the DC Universe books in 1996, a position he held until 2002. K.C. Carlson took his place as editor of the Superman comics.
In the earlier decades of Superman comics, artists were expected to conform to a certain "house style". Joe Shuster defined the aesthetic style of Superman in the 1940s. After Shuster left National, Wayne Boring succeeded him as the principal artist on Superman comic books. He redrew Superman taller and more detailed. Around 1955, Curt Swan in turn succeeded Boring. The 1980s saw a boom in the diversity of comic book art and now there is no single "house style" in Superman comics.
The first adaptation of Superman beyond comic books was a radio show, "The Adventures of Superman", which ran from 1940 to 1951 for 2,088 episodes, most of which were aimed at children. The episodes were initially 15 minutes long, but after 1949 they were lengthened to 30 minutes. Most episodes were done live. Bud Collyer was the voice actor for Superman in most episodes. The show was produced by Robert Maxwell and Allen Ducovny, who were employees of Superman, Inc. and Detective Comics, Inc. respectively.
Paramount Pictures released a series of Superman theatrical animated shorts between 1941 and 1943. Seventeen episodes in total were made, each 8–10 minutes long. The first nine episodes were produced by Fleischer Studios and the next eight were produced by Famous Studios. Bud Collyer provided the voice of Superman. The first episode had a production budget of $50,000 with the remaining episodes at $30,000 each, which was exceptionally lavish for the time. Joe Shuster provided model sheets for the characters, so the visuals resembled the contemporary comic book aesthetic.
The first live-action adaptation of Superman was a movie serial released in 1948, targeted at children. Kirk Alyn became the first actor to portray the hero onscreen. The production cost up to $325,000, and it was the most profitable movie serial in history. A sequel serial, "Atom Man vs. Superman", was released in 1950. For flying scenes, Superman was hand-drawn in animated form and composited onto live-action footage.
The first feature film was "Superman and the Mole Men", a 58-minute B-movie released in 1951, produced on an estimated budget of $30,000. It starred George Reeves as Superman and was intended to promote the subsequent television series.
The first big-budget movie was "Superman" in 1978, starring Christopher Reeve and produced by Alexander and Ilya Salkind. It was 143 minutes long and was made on a budget of $55 million. It is the most successful Superman feature film to date in terms of box office revenue adjusted for inflation. The soundtrack was composed by John Williams and was nominated for an Academy Award; the title theme has become iconic. "Superman" (1978) was the first big-budget superhero movie, and its success arguably paved the way for later superhero movies like "Batman" (1989) and "Spider-Man" (2002). The 1978 movie spawned four sequels: "Superman II" (1980), "Superman III" (1983), "Superman IV: The Quest for Peace" (1987) and "Superman Returns" (2006); the last of these replaced Reeve with Brandon Routh.
In 2013, "Man of Steel" was released by Warner Bros. as a reboot of the film series, starring Henry Cavill as Superman. Its sequel, "Batman v Superman: Dawn of Justice" (2016), featured Superman alongside Batman and Wonder Woman, making it the first theatrical movie in which Superman appeared alongside other superheroes from the DC Universe. Cavill reprised his role in "Justice League" (2017) and is under contract to play Superman in one more film.
"Adventures of Superman", which aired from 1952 to 1958, was the first television series based on a superhero. It starred George Reeves as Superman. Whereas the radio serial was aimed at children, this television show was aimed at a general audience, although children made up the majority of viewers. Robert Maxwell, who produced the radio serial, was the producer for the first season. For the second season, Maxwell was replaced with Whitney Ellsworth. Ellsworth toned down the violence of the show to make it more suitable for children, though he still aimed for a general audience. This show was extremely popular in Japan, where it achieved an audience share rating of 74.2% in 1958.
"Superboy" aired from 1988 to 1992. It was produced by Alexander and Ilya Salkind, the same men who had produced the Superman movies starring Christopher Reeve.
"Lois & Clark: The New Adventures of Superman" aired from 1993 to 1997. This show was aimed at adults and focused on the relationship between Clark Kent and Lois Lane as much as on Superman's heroics. Dean Cain played Superman, and Teri Hatcher played Lois.
"Smallville" aired from 2001 to 2011. This show was targeted at young adult women. The show covered Clark Kent's life prior to becoming Superman, spanning ten years from his high school years in Smallville to his early life in Metropolis. Although Clark engages in heroics in this show, he doesn't wear a costume, nor does he call himself Superboy. Rather, he relies on misdirection and his blinding speed to avoid being recognized.
The first animated television series was "The New Adventures of Superman", which aired from 1966 to 1970.
"Superman: The Animated Series" (with Tim Daly voicing the title character) aired from 1996 to 2000. After the show's cancellation, this version of Superman appeared in the sequel shows "Batman Beyond" (1999–2001, voiced by Christopher McDonald) and "Justice League" and "Justice League Unlimited" (2001–2006, voiced by George Newbern). All of these shows were produced by Bruce Timm. This was the most successful and longest-running animated version of Superman.
Superman has appeared in a series of direct-to-video animated movies produced by Warner Bros. Animation called DC Universe Animated Original Movies, beginning with "Superman: Doomsday" in 2007. Many of these movies are adaptations of popular comic book stories.
Tyler Hoechlin appears as Superman in The CW Arrowverse television series "Supergirl", "The Flash" and "Arrow".
The first electronic game was simply titled "Superman", and released in 1979 for the Atari 2600. The last game centered on Superman was "Superman Returns" (adapted from the movie) in 2006. Superman has, however, appeared in more recent games starring the Justice League, such as "Injustice 2" (2017).
DC Comics trademarked the Superman chest logo in August 1938. Jack Liebowitz established Superman, Inc. in October 1939 to develop the franchise beyond the comic books. Superman, Inc. merged with DC Comics in October 1946. After DC Comics merged with Warner Communications in 1967, licensing for Superman was handled by the Licensing Corporation of America.
The Licensing Letter (an American market research firm) estimated that Superman licensed merchandise made $634 million in sales globally in 2018 (43.3% of this revenue came from the North American market). For comparison, in the same year, Spider-Man merchandise made $1.075 billion and Star Wars merchandise made $1.923 billion globally.
The earliest paraphernalia appeared in 1939: a button proclaiming membership in the Supermen of America club. The first toy was a wooden doll in 1939 made by the Ideal Novelty and Toy Company. "Superman" #5 (May 1940) carried an advertisement for a "Krypto-Raygun", which was a gun-shaped device that could project images on a wall. The majority of Superman merchandise is targeted at children, but since the 1970s, adults have been increasingly targeted because the comic book readership has gotten older.
During World War II, Superman was used to support the war effort. "Action Comics" and "Superman" carried messages urging readers to buy war bonds and participate in scrap drives.
In a contract dated 1 March 1938, Jerry Siegel and Joe Shuster gave away the copyright to Superman to their employer, DC Comics (then known as Detective Comics, Inc.) prior to Superman's first publication in April. Contrary to popular perception, the $130 that DC Comics paid them was for their first Superman story, not the copyright to the character; that, they gave away for free. This was normal practice in the comic magazine industry, and they had done the same with their previous published works (Slam Bradley, Doctor Occult, etc.), but Superman became far more popular and valuable than they anticipated, and they deeply regretted giving him away. DC Comics retained Siegel and Shuster, and they were paid well because they were popular with the readers. Between 1938 and 1947, DC Comics paid them together over $400,000.
Siegel wrote most of the magazine and daily newspaper stories until he was conscripted into the army in 1943, whereupon the task was passed to ghostwriters. While Siegel was serving in Hawaii, DC Comics published a story featuring a child version of Superman called "Superboy", which was based on a script Siegel had submitted several years before. Siegel was furious because DC Comics did this without having bought the character.
After Siegel's discharge from the Army, he and Shuster sued DC Comics in 1947 for the rights to Superman and Superboy. The judge ruled that Superman belonged to DC Comics, but that Superboy was a separate entity that belonged to Siegel. Siegel and Shuster settled out-of-court with DC Comics, which paid the pair $94,013.16 in exchange for the full rights to both Superman and Superboy. DC Comics then fired Siegel and Shuster.
DC Comics rehired Jerry Siegel as a writer in 1957.
In 1965, Siegel and Shuster attempted to regain rights to Superman using the renewal option in the Copyright Act of 1909, but the court ruled Siegel and Shuster had transferred the renewal rights to DC Comics in 1938. Siegel and Shuster appealed, but the appeals court upheld this decision. DC Comics fired Siegel when he filed this second lawsuit.
In 1975, Siegel and a number of other comic book writers and artists launched a public campaign for better compensation and treatment of comic creators. Warner Brothers agreed to give Siegel and Shuster a yearly stipend, full medical benefits, and credit their names in all future Superman productions in exchange for never contesting ownership of Superman. Siegel and Shuster upheld this bargain.
Shuster died in 1992. DC Comics offered Shuster's heirs a stipend in exchange for never challenging ownership of Superman, which they accepted for some years.
Siegel died in 1996. His heirs attempted to take the rights to Superman using the termination provision of the Copyright Act of 1976. DC Comics negotiated an agreement wherein it would pay the Siegel heirs several million dollars and a yearly stipend of $500,000 in exchange for permanently granting DC the rights to Superman. DC Comics also agreed to insert the line "By Special Arrangement with the Jerry Siegel Family" in all future Superman productions. The Siegels accepted DC's offer in an October 2001 letter.
Copyright lawyer and movie producer Marc Toberoff then struck a deal with the heirs of both Siegel and Shuster to help them get the rights to Superman in exchange for signing the rights over to his production company, Pacific Pictures. Both groups accepted. The Siegel heirs called off their deal with DC Comics and in 2004 sued DC for the rights to Superman and Superboy. In 2008, the judge ruled in favor of the Siegels. DC Comics appealed the decision, and the appeals court ruled in favor of DC, arguing that the October 2001 letter was binding. In 2003, the Shuster heirs served a termination notice for Shuster's grant of his half of the copyright to Superman. DC Comics sued the Shuster heirs in 2010, and the court ruled in DC's favor on the grounds that the 1992 agreement with the Shuster heirs barred them from terminating the grant.
Superman is due to enter the public domain in 2033. However, this would only apply to the character as he is depicted in "Action Comics" #1 (1938). Versions of him with later developments, such as his power of "heat vision" (introduced in 1949), may persist under copyright until the works they were introduced in enter the public domain themselves.
Superman's success immediately begat a wave of imitations. The most successful of these at this early age was Captain Marvel, first published by Fawcett Comics in December 1939. Captain Marvel had many similarities to Superman: Herculean strength, invulnerability, the ability to fly, a cape, a secret identity, and a job as a journalist. DC Comics filed a lawsuit against Fawcett Comics for copyright infringement.
The trial began in March 1948 after seven years of discovery. The judge ruled that Fawcett had indeed infringed on Superman. However, the judge also found that the copyright notices that appeared with the Superman newspaper strips did not meet the technical standards of the Copyright Act of 1909 and were therefore invalid. Furthermore, since the newspaper strips carried stories adapted from "Action Comics", the judge ruled that DC Comics had effectively abandoned the copyright to the "Action Comics" stories. The judge ruled that DC Comics had effectively abandoned the copyright to Superman and therefore forfeited its right to sue Fawcett for copyright infringement.
DC Comics appealed this decision. The appeals court ruled that unintentional mistakes in the copyright notices of the newspaper strips did not invalidate the copyrights. Furthermore, Fawcett knew that DC Comics never intended to abandon the copyrights, so its infringement was not an innocent misunderstanding, and Fawcett owed damages to DC Comics. The appeals court remanded the case back to the lower court to determine how much Fawcett owed in damages.
At that point, Fawcett Comics decided to settle out of court with DC Comics. Fawcett paid DC Comics $400,000 and agreed to stop publishing Captain Marvel. The last Captain Marvel story from Fawcett Comics was published in September 1953. DC Comics licensed the rights to Captain Marvel in 1972, and acquired them outright by 1991; the character is today marketed under the title "Shazam!"
This section details the most consistent elements of the Superman narrative in the myriad stories published since 1938.
In "Action Comics" #1 (1938), Superman is born on an alien world to a technologically advanced species that resembles humans. Shortly after he is born, his planet is destroyed in a natural cataclysm, but Superman's scientist father foresaw the calamity and saves his baby son by sending him to Earth in a small spaceship. The ship is too small to carry anyone else, so Superman's parents stay behind and die. The earliest newspaper strips name the planet "Krypton", the baby "Kal-L", and his biological parents "Jor-L" and "Lora"; the parents' names were changed to "Jor-el" and "Lara" in a 1942 spinoff novel by George Lowther. The ship lands in the American countryside, where the baby is discovered by the Kents, a farming couple.
The Kents name the boy Clark and raise him in a farming community. A 1947 episode of the radio serial places this unnamed community in Iowa. It is named Smallville in "Superboy" #2 (June 1949). The 1978 Superman movie placed it in Kansas, as have most Superman stories since. "New Adventures of Superboy" #22 (Oct. 1981) places it in Maryland.
In "Action Comics" #1 and most stories before 1986, Superman's powers begin developing in infancy. From 1944 to 1986, DC Comics regularly published stories of Superman's childhood and adolescent adventures, when he called himself "Superboy". In "Man of Steel" #1, Superman's powers emerged more slowly and he began his superhero career as an adult.
The Kents teach Clark he must conceal his otherworldly origins and use his fantastic powers to do good. Clark creates the costumed identity of Superman so as to protect his personal privacy and the safety of his loved ones. As Clark Kent, he wears eyeglasses to disguise his face and wears his Superman costume underneath his clothes so that he can change at a moment's notice. To complete this disguise, Clark avoids violent confrontation, preferring to slip away and change into Superman when danger arises, and he suffers occasional ridicule for his apparent cowardice.
In "Superboy" #78 (1960), Superboy makes his costume out of the indestructible blankets found in the ship he came to Earth in. In "Man of Steel" #1 (1986), Martha Kent makes the costume from human-manufactured cloth, and it is rendered indestructible by an "aura" that Superman projects. The "S" on Superman's chest at first was simply an initial for "Superman". When writing the script for the 1978 movie, Tom Mankiewicz made it Superman's Kryptonian family crest. This was carried over into some comic book stories and later movies, such as "Man of Steel". In the comic story "", the crest is described as an old Kryptonian symbol for hope.
Clark works as a newspaper journalist. In the earliest stories, he worked for "The Daily Star", but the second episode of the radio serial changed this to the "Daily Planet". In comics from the early 1970s, Clark worked as a television journalist (an attempt to modernize the character). However, for the 1978 movie, the producers chose to make Clark a newspaper journalist again because that was how most of the public thought of him.
The first story in which Superman dies was published in "Superman" #149 (1961), in which he is murdered by Lex Luthor by means of kryptonite. This story was "imaginary" and thus was ignored in subsequent books. In "Superman" #188 (April 1966), Superman is killed by kryptonite radiation but is revived in the same issue by one of his android doppelgangers. In the 1990s "The Death and Return of Superman" story arc, after a deadly battle with Doomsday, Superman died in "Superman" #75 (Jan. 1993). He was later revived by the Eradicator using Kryptonian technology. In "Superman" #52 (May 2016) Superman is killed by kryptonite poisoning, and this time he is not resurrected, but replaced by the Superman of an alternate timeline.
Superman maintains a secret hideout called the "Fortress of Solitude", which is located somewhere in the Arctic. Here, Superman keeps a collection of mementos and a laboratory for science experiments. In "Action Comics" #241, the Fortress of Solitude is a cave in a mountain, sealed with a very heavy door that is opened with a gigantic key too heavy for anyone but Superman to use. In the 1978 movie, the Fortress of Solitude is a structure made out of ice. The movie "Man of Steel" portrays the Fortress as a Kryptonian exploratory craft buried deep beneath rock and ice.
Superman's secret identity is Clark Joseph Kent, a reporter for the "Daily Planet". Although his name and history were taken from his early life with his adoptive Earth parents, everything about Clark was staged for the benefit of his alternate identity: as a reporter for the "Daily Planet", he receives late-breaking news before the general public, has a plausible reason to be present at crime scenes, and need not strictly account for his whereabouts as long as he makes his story deadlines. He sees his job as a journalist as an extension of his Superman responsibilities—bringing truth to the forefront and fighting for the little guy. He believes that everybody has the right to know what is going on in the world, regardless of who is involved.
To deflect suspicion that he is Superman, Clark Kent adopted a largely passive and introverted personality with conservative mannerisms, a higher-pitched voice, and a slight slouch. This personality is typically described as "mild-mannered", perhaps most famously by the opening narration of Max Fleischer's "Superman" animated theatrical shorts. These traits extended into Clark's wardrobe, which typically consists of a bland-colored business suit, a red necktie, black-rimmed glasses, combed-back hair, and occasionally a fedora. Clark wears his Superman costume underneath his street clothes, allowing easy changes between the two personae and the dramatic gesture of ripping open his shirt to reveal the familiar "S" emblem when called into action. Superman usually stores his Clark Kent clothing compressed in a secret pouch within his cape, though some stories have shown him leaving his clothes in some covert location (such as the "Daily Planet" storeroom) for later retrieval.
As Superman's alter ego, the personality, concept, and name of Clark Kent have become ingrained in popular culture as well, becoming synonymous with secret identities and innocuous fronts for ulterior motives and activities. In 1992, Superman co-creator Joe Shuster told the "Toronto Star" that the name derived from 1930s cinematic leading men Clark Gable and Kent Taylor, but the persona from bespectacled silent film comic Harold Lloyd and himself. Another, perhaps more likely, possibility is that Jerry Siegel drew on his own love of the pulp heroes Doc Savage (whose real name is Clark Savage) and The Shadow (alias Kent Allard). This idea was notably stated in the book "Men of Tomorrow: Geeks, Gangsters, and the Rise of the American Comic Book". Clark's middle name is given variously as either Joseph, Jerome, or Jonathan, all being allusions to creators Jerry Siegel and Joe Shuster.
In the original Siegel and Shuster stories, Superman's personality is rough and aggressive. He often uses excessive force and terror against criminals, on some occasions even killing them. This came to an end in late 1940 when new editor Whitney Ellsworth instituted a code of conduct for his characters to follow, banning Superman from ever killing. The character was softened and given a sense of humanitarianism. Ellsworth's code, however, is not to be confused with "the Comics Code", which was created in 1954 by the Comics Code Authority and ultimately abandoned by every major comic book publisher by the early 21st century.
In his first appearances, Superman was considered a vigilante by the authorities, being fired upon by the National Guard as he razed a slum so that the government would create better housing conditions for the poor. By 1942, however, Superman was working side-by-side with the police. Today, Superman is commonly seen as a brave and kind-hearted hero with a strong sense of justice, morality, and righteousness. He adheres to an unwavering moral code instilled in him by his adoptive parents. His commitment to operating within the law has been an example to many citizens and other heroes, but has stirred resentment and criticism among others, who refer to him as the "big blue boy scout". Superman can be rather rigid in this trait, causing tensions in the superhero community. This was most notable with Wonder Woman, one of his closest friends, after she killed Maxwell Lord. Booster Gold had an initial icy relationship with the Man of Steel, but grew to respect him.
Having lost his home world of Krypton, Superman is very protective of Earth, and especially of Clark Kent's family and friends. This same loss, combined with the pressure of using his powers responsibly, has caused Superman to feel lonely on Earth, despite having his friends and parents. Previous encounters with people he thought to be fellow Kryptonians, Power Girl (who is, in fact, from the Krypton of the Earth-Two universe) and Mon-El, have led to disappointment. The arrival of Supergirl, who has been confirmed to be not only from Krypton, but also his cousin, has relieved this loneliness somewhat. Superman's Fortress of Solitude acts as a place of solace for him in times of loneliness and despair.
In "Superman/Batman" #3 (Dec. 2003), Batman, under writer Jeph Loeb, observes, "It is a remarkable dichotomy. In many ways, Clark is the most human of us all. Then ... he shoots fire from the skies, and it is difficult not to think of him as a god. And how fortunate we all are that it does not occur to "him"." In writer Geoff Johns' "Infinite Crisis" #1 (Dec. 2005), part of the 2005–2006 "Infinite Crisis" crossover storyline, Batman admonishes him for identifying with humanity too much and failing to provide the strong leadership that superhumans need.
The catalog of Superman's abilities and his strength has varied considerably over the vast body of Superman fiction released since 1938.
Since "Action Comics" #1 (1938), Superman has superhuman strength. The cover of "Action Comics" #1 shows him effortlessly lifting a car over his head. Another classic feat of strength on Superman's part is breaking steel chains. In some stories, he is strong enough to shift the orbits of planets and crush coal into diamond with his hands.
Since "Action Comics" #1 (1938), Superman has a highly durable body, invulnerable for most practical purposes. At the very least, bullets bounce harmlessly off his body. In some stories, such as "Kingdom Come", not even a nuclear bomb can harm him.
In some stories, Superman is said to project an aura that renders invulnerable any tight-fitting clothes he wears, and hence his costume is as durable as he is despite being made of common human-manufactured cloth. This concept was first introduced in "Man of Steel" #1 (1986). In other stories, Superman's costume is made out of exotic materials that are as tough as he is.
In "Action Comics" #1, Superman could not fly. He traveled by running and leaping, which he could do to a prodigious degree thanks to his strength. Superman gained the ability to fly in the second episode of the radio serial in 1940. Superman can fly at great speeds. He can break the sound barrier, and in some stories, he can even fly faster than light to travel to distant galaxies.
Superman can project and perceive X-rays via his eyes, which allows him to see through objects. He first uses this power in "Action Comics" #11 (1939). Certain materials such as lead can block his X-ray vision.
Superman can project beams of heat from his eyes which are hot enough to melt steel. He first used this power in "Superman" #59 (1949) by applying his X-ray vision at its highest intensity. In later stories, this ability is simply called "heat vision".
Superman can hear sounds that are too faint for a human to hear, and at frequencies outside the human hearing range. This ability was introduced in "Action Comics" #11 (1939).
Since "Action Comics" #20 (1940), Superman possesses superhuman breath, which enables him to inhale or blow huge amounts of air, and to hold his breath indefinitely so that he can remain underwater or in space without adverse effects. He can also focus his breath intensely enough to freeze targets by blowing on them. This "freezing breath" was first demonstrated in "Superman" #129 (1959).
"Action Comics" #1 (1938) explained that Superman's strength was common to all Kryptonians because they were a species "millions of years advanced of our own". Later stories explained they evolved superhuman strength simply because of Krypton's higher gravity. "Superman" #146 (1961) explains that his abilities other than strength (flight, durability, etc.) are activated by the light of Earth's yellow sun. In "Action Comics" #300 (1963), all of his powers including strength are activated by yellow sunlight and can be deactivated by red sunlight similar to that of Krypton's sun.
Exposure to green kryptonite radiation nullifies Superman's powers and incapacitates him with pain and nausea; prolonged exposure will eventually kill him. Although green kryptonite is the most commonly seen form, writers have introduced other forms over the years: such as red, gold, blue, white, and black, each with its own effect. Gold kryptonite, for instance, permanently nullifies Superman's powers but otherwise does not harm him. Kryptonite first appeared in a 1943 episode of the radio serial. It first appeared in comics in "Superman" #61 (Dec. 1949).
Superman is also vulnerable to magic. Enchanted weapons and magical spells affect Superman as easily as they would a normal human. This weakness was established in "Superman" #171 (1964).
Superman's first and most famous supporting character is Lois Lane, introduced in "Action Comics" #1. She is a fellow journalist at the "Daily Planet". As Jerry Siegel conceived her, Lois considers Clark Kent to be a wimp, but she is infatuated with the bold and mighty Superman, not knowing that Kent and Superman are the same person. Siegel objected to any proposal that Lois discover that Clark is Superman because he felt that, as implausible as Clark's disguise is, the love triangle was too important to the book's appeal. However, Siegel wrote stories in which Lois suspects Clark is Superman and tries to prove it, with Superman always duping her in the end; the first such story was in "Superman" #17 (July–August 1942). This was a common plot in comic book stories prior to the 1970s. In a story in "Action Comics" #484 (June 1978), Clark Kent admits to Lois that he is Superman, and they marry. This was the first story in which Superman and Lois marry that wasn't an "imaginary tale." Many Superman stories since then have depicted Superman and Lois as a married couple, but about as many depict them in the classic love triangle.
Other supporting characters include Jimmy Olsen, a photographer at the "Daily Planet", who is friends with both Superman and Clark Kent, though in most stories he doesn't know that Clark is Superman. Jimmy is frequently described as "Superman's pal", and was conceived to give young male readers a relatable character through which they could fantasize being friends with Superman.
In the earliest comic book stories, Clark Kent's employer is George Taylor of "The Daily Star", but the second episode of the radio serial changed this to Perry White of the "Daily Planet".
Clark Kent's foster parents are Ma and Pa Kent. In many stories, one or both of them have died by the time Clark becomes Superman. Clark's parents taught him that he should use his abilities for altruistic means, but that he should also find some way to safeguard his private life.
The villains Superman faced in the earliest stories were ordinary humans, such as gangsters, corrupt politicians, and violent husbands; but they soon grew more colorful and outlandish so as to avoid offending censors or scaring children. The mad scientist Ultra-Humanite, introduced in "Action Comics" #13 (June 1939), was Superman's first recurring villain. Superman's best-known nemesis, Lex Luthor, was introduced in "Action Comics" #23 (April 1940) and has been depicted as either a mad scientist or a wealthy businessman (sometimes both). In 1944, the magical imp Mister Mxyzptlk, Superman's first recurring super-powered adversary, was introduced. Superman's first alien villain, Brainiac, debuted in "Action Comics" #242 (July 1958). The monstrous Doomsday, introduced in "" #17–18 (Nov.-Dec. 1992), was the first villain to evidently kill Superman in physical combat. Other adversaries include the odd Superman-doppelgänger Bizarro, the Kryptonian criminal General Zod, and alien tyrants Darkseid and Mongul.
The details of Superman's story and supporting cast vary across his large body of fiction released since 1938, but most versions conform to the basic template described above. A few stories feature radically altered versions of Superman. An example is the graphic novel "", which depicts a communist Superman who rules the Soviet Union. DC Comics has on some occasions published crossover stories where different versions of Superman interact with each other using the plot device of parallel universes. For instance, in the 1960s, the Superman of "Earth-One" would occasionally feature in stories alongside the Superman of "Earth-Two", the latter of whom resembled Superman as he was portrayed in the 1940s. DC Comics has not developed a consistent and universal system to classify all versions of Superman.
Superman is often thought of as the first superhero. This point is debated by historians: Ogon Bat, the Phantom, Zorro, and Mandrake the Magician arguably fit the definition of the superhero yet predate Superman. Nevertheless, Superman popularized the archetype and established its conventions: a costume, a codename, extraordinary abilities, and an altruistic mission. Superman's success in 1938 began a wave of imitations, which include Batman, Wonder Woman, Green Lantern, Captain America, and Captain Marvel. This flourishing is today referred to as America's Golden Age of Comic Books, which lasted from 1938 to about 1950. The Golden Age ended when American superhero book sales declined, leading to the cancellation of many characters; but Superman was one of the few superhero franchises that survived this decline, and his sustained popularity into the late 1950s helped the second flourishing in the Silver Age of Comic Books, when characters such as Spider-Man, Iron Man, and The X-Men were created.
After World War II, American superhero fiction entered Japanese culture. Astro Boy, first published in 1952, was inspired by Mighty Mouse, which itself was a parody of Superman. The "Superman" animated shorts from the 1940s were first broadcast on Japanese television in 1955, and they were followed in 1956 by the TV show "Adventures of Superman" starring George Reeves. These shows were popular with the Japanese and inspired Japan's own prolific genre of superheroes. The first Japanese superhero movie, "Super Giant", was released in 1957. The first Japanese superhero TV show was "Moonlight Mask" in 1958. Notable characters include Ultraman, Kamen Rider, and Sailor Moon.
Superman has also featured as an inspiration for musicians, with songs by numerous artists from several generations celebrating the character. Donovan's "Billboard" Hot 100-topping single "Sunshine Superman" utilized the character in both the title and the lyric, declaring "Superman and Green Lantern ain't got nothing on me." Folk singer-songwriter Jim Croce sang about the character in a list of warnings in the chorus of his song "You Don't Mess Around with Jim", introducing the phrase "you don't tug on Superman's cape" into the popular lexicon. Other tracks to reference the character include Genesis' "Land of Confusion", the video for which featured a Spitting Image puppet of Ronald Reagan dressed as Superman, "(Wish I Could Fly Like) Superman" by The Kinks on their 1979 album "Low Budget" and "Superman" by The Clique, a track later covered by R.E.M. on its 1986 album "Lifes Rich Pageant". This cover is referenced by Grant Morrison in "Animal Man", in which Superman meets the character, and the track comes on Animal Man's Walkman immediately after. Crash Test Dummies' "Superman's Song", from the 1991 album "The Ghosts That Haunt Me", explores the isolation and commitment inherent in Superman's life. Five for Fighting released "Superman (It's Not Easy)" in 2000, which is from Superman's point of view, although Superman is never mentioned by name. From 1988 to 1993, American composer Michael Daugherty composed "Metropolis Symphony", a five-movement orchestral work inspired by Superman comics.
Superman is the prototypical superhero and consequently the most frequently parodied. The first popular parody was "Mighty Mouse", introduced in "The Mouse of Tomorrow" animated short in 1942. While the character swiftly took on a life of its own, moving beyond parody, other animated characters soon took their turn to parody the character. In 1943, Bugs Bunny was featured in a short, "Super-Rabbit", which sees the character gaining powers through eating fortified carrots. This short ends with Bugs stepping into a phone booth to change into a real "Superman" and emerging as a U.S. Marine. In 1956, Daffy Duck assumes the mantle of "Cluck Trent" in the short "Stupor Duck", a role later reprised in various issues of the "Looney Tunes" comic book. In the United Kingdom, Monty Python created the character Bicycle Repairman, who fixes bicycles in a world full of Supermen, for a sketch in their BBC show. Also on the BBC was the sitcom "My Hero", which presented Thermoman as a slightly dense Superman pastiche, attempting to save the world and pursue romantic aspirations. In the United States, "Saturday Night Live" has often parodied the figure, with Margot Kidder reprising her role as Lois Lane in a 1979 episode. The manga and anime series "Dr. Slump" featured the character "Suppaman"; a short, fat, pompous man who changes into a thinly veiled Superman-like alter-ego by eating a sour-tasting umeboshi. Jerry Seinfeld, a noted Superman fan, filled his series "Seinfeld" with references to the character and in 1997 asked for Superman to co-star with him in a commercial for American Express. The commercial aired during the 1998 NFL Playoffs and Super Bowl, with Superman animated in the style of artist Curt Swan, again at the request of Seinfeld. Superman has also been used as a reference point for writers, with Steven T. Seagle's graphic novel "Superman: It's a Bird" exploring Seagle's feelings on his own mortality as he struggles to develop a story for a Superman tale.
Brad Fraser used the character as a reference point for his play "Poor Super Man", with "The Independent" noting the central character, a gay man who has lost many friends to AIDS as someone who "identifies all the more keenly with Superman's alien-amid-deceptive-lookalikes status." Superman's image was also used in an AIDS awareness campaign by French organization AIDES. Superman was depicted as emaciated and breathing from an oxygen tank, demonstrating that no-one is beyond the reach of the disease, and it can destroy the lives of everyone.
Superman has been interpreted and discussed in many forms in the years since his debut, with Umberto Eco noting that "he can be seen as the representative of all his similars". Writing in "Time" in 1971, Gerald Clarke stated: "Superman's enormous popularity might be looked upon as signaling the beginning of the end for the Horatio Alger myth of the self-made man." Clarke viewed the comics characters as having to continuously update in order to maintain relevance and thus representing the mood of the nation. He regarded Superman's character in the early seventies as a comment on the modern world, which he saw as a place in which "only the man with superpowers can survive and prosper." Andrew Arnold, writing in the early 21st century, has noted Superman's partial role in exploring assimilation, the character's alien status allowing the reader to explore attempts to fit in on a somewhat superficial level.
A.C. Grayling, writing in "The Spectator", traces Superman's stances through the decades, from his 1930s campaign against crime being relevant to a nation under the influence of Al Capone, through the 1940s and World War II, a period in which Superman helped sell war bonds, and into the 1950s, where Superman explored the new technological threats. Grayling notes the period after the Cold War as being one where "matters become merely personal: the task of pitting his brawn against the brains of Lex Luthor and Brainiac appeared to be independent of bigger questions", and discusses events post 9/11, stating that as a nation "caught between the terrifying George W. Bush and the terrorist Osama bin Laden, America is in earnest need of a Saviour for everything from the minor inconveniences to the major horrors of world catastrophe. And here he is, the down-home clean-cut boy in the blue tights and red cape".
An influence on early Superman stories is the context of the Great Depression. Superman took on the role of social activist, fighting crooked businessmen and politicians and demolishing run-down tenements. Comics scholar Roger Sabin sees this as a reflection of "the liberal idealism of Franklin Roosevelt's New Deal", with Shuster and Siegel initially portraying Superman as champion to a variety of social causes. In later Superman radio programs the character continued to take on such issues, tackling a version of the Ku Klux Klan in a 1946 broadcast, as well as combating anti-semitism and veteran discrimination.
Scott Bukatman has discussed Superman, and the superhero in general, noting the ways in which they humanize large urban areas through their use of the space, especially in Superman's ability to soar over the large skyscrapers of Metropolis. He writes that the character "represented, in 1938, a kind of Corbusierian ideal. Superman has X-ray vision: walls become permeable, transparent. Through his benign, controlled authority, Superman renders the city open, modernist and democratic; he furthers a sense that Le Corbusier described in 1925, namely, that 'Everything is known to us'."
Jules Feiffer has argued that Superman's real innovation lay in the creation of the Clark Kent persona, noting that what "made Superman extraordinary was his point of origin: Clark Kent." Feiffer develops the theme to establish Superman's popularity in simple wish fulfillment, a point Siegel and Shuster themselves supported, Siegel commenting that "If you're interested in what made Superman what it is, here's one of the keys to what made it universally acceptable. Joe and I had certain inhibitions ... which led to wish-fulfillment which we expressed through our interest in science fiction and our comic strip. That's where the dual-identity concept came from" and Shuster supporting that as being "why so many people could relate to it".
Ian Gordon suggests that the many incarnations of Superman across media use nostalgia to link the character to an ideology of the American Way. He defines this ideology as a means of associating individualism, consumerism, and democracy and as something that took shape around WWII and underpinned the war effort. Superman, he notes, was very much part of that effort.
Superman is considered the prototypical superhero. He established the major conventions of the archetype: a selfless, prosocial mission; extraordinary, perhaps superhuman, abilities; a secret identity and codename; and a colorful costume that expresses his nature. Superman's cape and skintight suit are widely recognized as the generic superhero costume.
Superman's immigrant status is a key aspect of his appeal. Aldo Regalado saw the character as pushing the boundaries of acceptance in America. The extraterrestrial origin was seen by Regalado as challenging the notion that Anglo-Saxon ancestry was the source of all might. Gary Engle saw the "myth of Superman [asserting] with total confidence and a childlike innocence the value of the immigrant in American culture." He argues that Superman allowed the superhero genre to take over from the Western as the expression of immigrant sensibilities. Through the use of a dual identity, Superman allowed immigrants to identify with both of their cultures. Clark Kent represents the assimilated individual, allowing Superman to express the immigrants' cultural heritage for the greater good. David Jenemann has offered a contrasting view. He argues that Superman's early stories portray a threat: "the possibility that the exile would overwhelm the country." David Rooney, a theater critic for "The New York Times", in his evaluation of the play, "Year Zero", considers Superman to be the "quintessential immigrant story ... (b)orn on an alien planet, he grows stronger on Earth, but maintains a secret identity tied to a homeland that continues to exert a powerful hold on him even as his every contact with those origins does him harm."
Some see Judaic themes in Superman. The British rabbi Simcha Weinstein notes that Superman's story has some parallels to that of Moses. For example, Moses as a baby was sent away by his parents in a reed basket to escape death and adopted by a foreign culture. Weinstein also posits that Superman's Kryptonian name, "Kal-El", resembles the Hebrew words קל-אל, which can be taken to mean "voice of God". The historian Larry Tye suggests that this "Voice of God" is an allusion to Moses' role as a prophet. The suffix "el", meaning "(of) God", is also found in the name of angels (e.g. Gabriel, Ariel), who are airborne humanoid agents of good with superhuman powers. The Nazis also thought Superman was a Jew and in 1940 Joseph Goebbels publicly denounced Superman and his creator Jerry Siegel. However, the historian Martin Lund argues that the evidence for Jewish influence is circumstantial, and notes that Jerry Siegel was not a practicing Jew and that he never acknowledged the influence of Judaism in any memoir or interview.
Superman stories have occasionally exhibited Christian themes as well. Screenwriter Tom Mankiewicz consciously made Superman an allegory for Christ in the 1978 movie starring Christopher Reeve: baby Kal-El's ship resembles the Star of Bethlehem, and Jor-El gives his son a messianic mission to lead humanity into a brighter future.
Splay tree
A splay tree is a self-balancing binary search tree with the additional property that recently accessed elements are quick to access again. It performs basic operations such as insertion, look-up and removal in O(log n) amortized time. For many sequences of non-random operations, splay trees perform better than other search trees, even when the specific pattern of the sequence is unknown. The splay tree was invented by Daniel Sleator and Robert Tarjan in 1985.
All normal operations on a binary search tree are combined with one basic operation, called "splaying". Splaying the tree for a certain element rearranges the tree so that the element is placed at the root of the tree. One way to do this with the basic search operation is to first perform a standard binary tree search for the element in question, and then use tree rotations in a specific fashion to bring the element to the top. Alternatively, a top-down algorithm can combine the search and the tree reorganization into a single phase.
Good performance for a splay tree depends on the fact that it is self-optimizing, in that frequently accessed nodes will move nearer to the root where they can be accessed more quickly. The worst-case height—though unlikely—is O(n), with the average being O(log "n").
Having frequently-used nodes near the root is an advantage for many practical applications (also see Locality of reference), and is particularly useful for implementing caches and garbage collection algorithms.
Advantages include:
The most significant disadvantage of splay trees is that the height of a splay tree can be linear. For example, this will be the case after accessing all "n" elements in non-decreasing order. Since the height of a tree corresponds to the worst-case access time, this means that the actual cost of an operation can be high. However the amortized access cost of this worst case is logarithmic, O(log "n"). Also, the expected access cost can be reduced to O(log "n") by using a randomized variant.
The representation of splay trees can change even when they are accessed in a 'read-only' manner (i.e. by "find" operations). This complicates the use of such splay trees in a multi-threaded environment. Specifically, extra management is needed if multiple threads are allowed to perform "find" operations concurrently. This also makes them unsuitable for general use in purely functional programming, although even there they can be used in limited ways to implement priority queues.
When a node "x" is accessed, a splay operation is performed on "x" to move it to the root. To perform a splay operation we carry out a sequence of "splay steps", each of which moves "x" closer to the root. By performing a splay operation on the node of interest after every access, the recently accessed nodes are kept near the root and the tree remains roughly balanced, so that we achieve the desired amortized time bounds.
Each particular step depends on three factors: whether "x" is the left or right child of its parent node "p"; whether "p" is the root or not; and, if not, whether "p" is the left or right child of its own parent "g" (the grandparent of "x").
It is important to remember to set "gg" (the "great-grandparent" of x) to now point to x after any splay operation. If "gg" is null, then x obviously is now the root and must be updated as such.
There are three types of splay steps, each of which has two symmetric variants: left- and right-handed. For the sake of brevity, only one of these two is shown for each type. (In the following diagrams, circles indicate nodes of interest and triangles indicate single nodes or sub-trees.) The three types of splay steps are:
Zig step: this step is done when "p" is the root. The tree is rotated on the edge between "x" and "p". Zig steps exist to deal with the parity issue and will be done only as the last step in a splay operation and only when "x" has odd depth at the beginning of the operation.
Zig-zig step: this step is done when "p" is not the root and "x" and "p" are either both right children or are both left children. The picture below shows the case where "x" and "p" are both left children. The tree is rotated on the edge joining "p" with its parent "g", then rotated on the edge joining "x" with "p". Note that zig-zig steps are the only thing that differentiate splay trees from the "rotate to root" method introduced by Allen and Munro prior to the introduction of splay trees.
Zig-zag step: this step is done when "p" is not the root and "x" is a right child and "p" is a left child or vice versa. The tree is rotated on the edge between "p" and "x", and then rotated on the resulting edge between "x" and "g".
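As a sketch, the three step types can be expressed in C++ over a parent-pointer representation. This is a minimal illustration, not a complete splay tree; the "Node" layout and function names are assumptions for the example.

```cpp
// Hypothetical node type for illustration: key plus child and parent links.
struct Node {
    int key;
    Node *left = nullptr, *right = nullptr, *parent = nullptr;
};

// Rotate x above its parent (a single left or right rotation).
void rotate(Node* x) {
    Node *p = x->parent, *g = p->parent;
    if (p->left == x) {            // right rotation
        p->left = x->right;
        if (x->right) x->right->parent = p;
        x->right = p;
    } else {                       // left rotation
        p->right = x->left;
        if (x->left) x->left->parent = p;
        x->left = p;
    }
    p->parent = x;
    x->parent = g;
    if (g) (g->left == p ? g->left : g->right) = x;
}

// Bottom-up splay: zig, zig-zig and zig-zag expressed via rotate().
void splay(Node* x) {
    while (x->parent) {
        Node *p = x->parent, *g = p->parent;
        if (!g) {
            rotate(x);                                   // zig: p is the root
        } else if ((g->left == p) == (p->left == x)) {
            rotate(p); rotate(x);                        // zig-zig: rotate p first
        } else {
            rotate(x); rotate(x);                        // zig-zag: rotate x twice
        }
    }
}
```

Note the order of rotations: a zig-zig rotates the parent first, while a zig-zag rotates "x" twice in succession.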
Given two trees S and T such that all elements of S are smaller than the elements of T, the following steps can be used to join them into a single tree:
Given a tree and an element "x", return two new trees: one containing all elements less than or equal to "x" and the other containing all elements greater than "x". This can be done in the following way:
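Join and split can both be sketched on top of a splay routine. The following C++ is a minimal illustration using a compact recursive splay without parent pointers; the "Node" type and function names are assumptions for the example, not code from the article.

```cpp
#include <climits>
#include <utility>

// Hypothetical node type for illustration.
struct Node {
    int key;
    Node *left = nullptr, *right = nullptr;
};

Node* rotateRight(Node* p) { Node* x = p->left;  p->left  = x->right; x->right = p; return x; }
Node* rotateLeft (Node* p) { Node* x = p->right; p->right = x->left;  x->left  = p; return x; }

// Splay the node holding `key` (or the last node on its search path) to the root.
Node* splay(Node* t, int key) {
    if (!t) return nullptr;
    if (key < t->key) {
        if (!t->left) return t;
        if (key < t->left->key) {                      // zig-zig
            t->left->left = splay(t->left->left, key);
            t = rotateRight(t);
        } else if (key > t->left->key) {               // zig-zag
            t->left->right = splay(t->left->right, key);
            if (t->left->right) t->left = rotateLeft(t->left);
        }
        return t->left ? rotateRight(t) : t;           // final zig
    }
    if (key > t->key) {
        if (!t->right) return t;
        if (key > t->right->key) {                     // zig-zig
            t->right->right = splay(t->right->right, key);
            t = rotateLeft(t);
        } else if (key < t->right->key) {              // zig-zag
            t->right->left = splay(t->right->left, key);
            if (t->right->left) t->right = rotateRight(t->right);
        }
        return t->right ? rotateLeft(t) : t;           // final zig
    }
    return t;
}

// Join: every key in s must be smaller than every key in t.
Node* join(Node* s, Node* t) {
    if (!s) return t;
    if (!t) return s;
    s = splay(s, INT_MAX);   // largest element of s becomes the root, with no right child
    s->right = t;
    return s;
}

// Split around x: first tree gets keys <= x, second gets keys > x.
std::pair<Node*, Node*> split(Node* t, int x) {
    if (!t) return {nullptr, nullptr};
    t = splay(t, x);
    if (t->key <= x) { Node* r = t->right; t->right = nullptr; return {t, r}; }
    Node* l = t->left; t->left = nullptr; return {l, t};
}
```

Splaying the maximum of S to its root guarantees it has no right child, so attaching T there preserves the search-tree order.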
To insert a value "x" into a splay tree:
Alternatively:
To delete a node "x", use the same method as with a binary search tree:
In this way, deletion is reduced to the problem of removing a node with 0 or 1 children. Unlike in a plain binary search tree, after the deletion we splay the parent of the removed node to the top of the tree.
Alternatively:
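Both insertion and deletion reduce to a splay plus a small amount of pointer surgery. The following C++ is a minimal sketch, again built on a compact recursive splay without parent pointers; the "Node" type and function names are illustrative assumptions.

```cpp
// Hypothetical node type for illustration.
struct Node {
    int key;
    Node *left = nullptr, *right = nullptr;
};

Node* rotateRight(Node* p) { Node* x = p->left;  p->left  = x->right; x->right = p; return x; }
Node* rotateLeft (Node* p) { Node* x = p->right; p->right = x->left;  x->left  = p; return x; }

// Splay the node holding `key` (or the last node on its search path) to the root.
Node* splay(Node* t, int key) {
    if (!t) return nullptr;
    if (key < t->key) {
        if (!t->left) return t;
        if (key < t->left->key) {                      // zig-zig
            t->left->left = splay(t->left->left, key);
            t = rotateRight(t);
        } else if (key > t->left->key) {               // zig-zag
            t->left->right = splay(t->left->right, key);
            if (t->left->right) t->left = rotateLeft(t->left);
        }
        return t->left ? rotateRight(t) : t;           // final zig
    }
    if (key > t->key) {
        if (!t->right) return t;
        if (key > t->right->key) {                     // zig-zig
            t->right->right = splay(t->right->right, key);
            t = rotateLeft(t);
        } else if (key < t->right->key) {              // zig-zag
            t->right->left = splay(t->right->left, key);
            if (t->right->left) t->right = rotateRight(t->right);
        }
        return t->right ? rotateLeft(t) : t;           // final zig
    }
    return t;
}

// Insert: splay x's neighborhood to the root, then hang the old root off the new node.
Node* insert(Node* t, int x) {
    Node* n = new Node{x};
    if (!t) return n;
    t = splay(t, x);
    if (x < t->key)      { n->left = t->left;   n->right = t; t->left = nullptr; }
    else if (x > t->key) { n->right = t->right; n->left = t;  t->right = nullptr; }
    else                 { delete n; return t; }   // key already present
    return n;
}

// Delete: splay x to the root, then join its two subtrees.
Node* erase(Node* t, int x) {
    if (!t) return nullptr;
    t = splay(t, x);
    if (t->key != x) return t;     // key not found
    Node *l = t->left, *r = t->right;
    delete t;
    if (!l) return r;
    l = splay(l, x);               // maximum of l rises to the root (no right child)
    l->right = r;
    return l;
}
```

The delete here is the "alternative" split-and-join form: the splayed root is discarded and its left subtree, splayed on the deleted key, absorbs the right subtree.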
Splaying, as mentioned above, is performed during a second, bottom-up pass over the access path of a node. It is possible to record the access path during the first pass for use during the second, but that requires extra space during the access operation. Another alternative is to keep a parent pointer in every node, which avoids the need for extra space during access operations but may reduce overall time efficiency because of the need to update those pointers.
Another method which can be used is based on the argument that we can restructure the tree on our way down the access path instead of making a second pass. This top-down splaying routine uses three sets of nodes - left tree, right tree and middle tree. The first two contain all items of original tree known to be less than or greater than current item respectively. The middle tree consists of the sub-tree rooted at the current node. These three sets are updated down the access path while keeping the splay operations in check. Another method, semisplaying, modifies the zig-zig case to reduce the amount of restructuring done in all operations.
An implementation of splay trees in C++ can use pointers to represent each node of the tree, based on the bottom-up splaying version and the second method of deletion described above. Note that if, unlike the definition above, the implementation does "not" splay the tree on finds, splaying only on insertions and deletions, then the find operation has linear worst-case time complexity.
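In such a variant, since finds do not splay, a find is an ordinary iterative binary-search-tree lookup, which is why a chain-shaped tree makes it linear. A minimal sketch, with the "Node" type again an assumption for the example:

```cpp
// Hypothetical pointer-based node type for illustration.
struct Node {
    int key;
    Node *left = nullptr, *right = nullptr, *parent = nullptr;
};

// Plain BST search with no restructuring: O(height), so O(n) on a degenerate tree.
Node* find(Node* t, int key) {
    while (t && t->key != key)
        t = key < t->key ? t->left : t->right;
    return t;   // nullptr if the key is absent
}
```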
A simple amortized analysis of static splay trees can be carried out using the potential method. Define:
Φ will tend to be high for poorly balanced trees and low for well-balanced trees.
To apply the potential method, we first calculate ΔΦ: the change in the potential caused by a splay operation. We check each case separately. Denote by rank′ the rank function after the operation. x, p and g are the nodes affected by the rotation operation (see figures above).
The amortized cost of any operation is ΔΦ plus the actual cost. The actual cost of any zig-zig or zig-zag operation is 2 since there are two rotations to make. Hence:
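The per-step bounds, reconstructed here from the standard analysis (writing rank′ for the rank after the step), are:

```latex
\begin{aligned}
\text{amortized-cost(zig)}     &\le 1 + 3\bigl(\operatorname{rank}'(x) - \operatorname{rank}(x)\bigr)\\
\text{amortized-cost(zig-zig)} &\le 3\bigl(\operatorname{rank}'(x) - \operatorname{rank}(x)\bigr)\\
\text{amortized-cost(zig-zag)} &\le 3\bigl(\operatorname{rank}'(x) - \operatorname{rank}(x)\bigr)
\end{aligned}
```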
When summed over the entire splay operation, this telescopes to 3(rank(root) − rank("x")), which is O(log "n"). The zig operation adds an amortized cost of 1, but there is at most one such operation.
So now we know that the total "amortized" time for a sequence of "m" operations is:
To go from the amortized time to the actual time, we must add the decrease in potential from the initial state before any operation is done (Φ"i") to the final state after all operations are completed (Φ"f").
where the last inequality comes from the fact that for every node "x", the minimum rank is 0 and the maximum rank is log("n").
Now we can finally bound the actual time:
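Putting the pieces together, the standard overall bound (a reconstruction of the usual balance-theorem statement, not a formula specific to this text) is:

```latex
T_{\text{actual}} \;=\; T_{\text{amortized}} + \Phi_i - \Phi_f
\;\le\; O(m \log n) + n \log n \;=\; O\bigl((m+n)\log n\bigr)
```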
The above analysis can be generalized in the following way.
The same analysis applies and the amortized cost of a splaying operation is again:
where "W" is the sum of all weights.
The decrease from the initial to the final potential is bounded by:
since the maximum size of any single node is "W" and the minimum is "w(x)".
Hence the actual time is bounded by:
There are several theorems and conjectures regarding the worst-case runtime for performing a sequence "S" of "m" accesses in a splay tree containing "n" elements.
In addition to the proven performance guarantees for splay trees there is an unproven conjecture of great interest from the original Sleator and Tarjan paper. This conjecture is known as the "dynamic optimality conjecture" and it claims that splay trees perform as well as any other binary search tree algorithm, up to a constant factor.
There are several corollaries of the dynamic optimality conjecture that remain unproven:
In order to reduce the number of restructuring operations, it is possible to replace the splaying with "semi-splaying", in which an element is splayed only halfway towards the root.
Another way to reduce restructuring is to do full splaying, but only in some of the access operations - only when the access path is longer than a threshold, or only in the first "m" access operations.
List of The Sandman characters
This is a list of characters appearing in "The Sandman" comic book, published by DC Comics' Vertigo imprint. This page discusses not only events which occur in "The Sandman" (1989–1994), but also some occurring in spinoffs of "The Sandman", such as "The Dreaming" (1996–2001) and "Lucifer" (1999–2007), as well as characters from earlier stories which "The Sandman" was based on. These stories occur in the DC Universe, but are generally tangential to the mainstream DC stories.
The Endless are a family of seven anthropomorphic personifications of universal concepts, around whom much of the series revolves. From eldest to youngest, they are:
All debuted in the "Sandman" series, except Destiny, who was created by Marv Wolfman and Berni Wrightson in "Weird Mystery Tales" #1 (1972).
These inhabitants of the Dreaming are often gods, myths, and even ordinary human beings who later became dreams.
Cain and Abel are based on the Biblical Cain and Abel and adapted by editor Joe Orlando with Bob Haney (writer) and Jack Sparling (artist) (Cain), and Mark Hannerfeld (writer) and Bill Draut (artist) (Abel). They were depicted together in Abel's first appearance, and parted to their respective Houses at the end of the story. Although Cain would abuse Abel, he was not shown killing him until "Swamp Thing" vol. 2 #33. In "Elvira's House of Mystery" #11, Cain expresses shock at having killed his brother in recent times. In the same issue, a contest-winning letter establishes that Cain and the House exist both in the dream world and the real world, and that only in the dream world does Cain continue to harm Abel. In "The Sandman", Cain is shown to kill Abel quite often. In issue #2, Lucien calls this unusual, and recent.
Originally they were the respective "hosts" of the EC-style horror comic anthologies "House of Mystery" and "House of Secrets", which ran from the 1950s through 1983—Cain debuting in "House of Mystery" #175 (1968) and Abel in "DC Special" #4 and "House of Secrets" #81 (both 1969). During the 1970s, they also co-hosted the horror/humor anthology "Plop!". They were also both recurring characters in DC's "Elvira's House of Mystery" (1986–88).
In 1985, the characters were revived by writer Alan Moore, who introduced them into his "Swamp Thing" series in issue #33, retelling the Swamp Thing's original origin story from a 1971 issue of "House of Secrets". Gary Cohn and Dan Mishkin included them in the pages of "Blue Devil" in 1986. Jamie Delano also occasionally used them in a cameo role in his title "Hellblazer".
In Gaiman's "Sandman" universe, the biblical Cain and Abel live in the Dreaming at Dream's invitation. This is based on the verse in the Bible which says that Cain was sent to live in the Land of Nod. They live as neighbors in two houses near a graveyard: Cain in the broad House of Mystery and Abel in the tall House of Secrets. According to their appearance in "Swamp Thing", the difference is that 'a mystery may be shared, but a secret must be forgotten if one tries to tell it'.
Gaiman's Cain is an aggressive, overbearing character. He is a thin, long-limbed man with an angular, drawn face, glasses, a tufty beard, and hair drawn into two points above his ears. He has been described by other characters as sounding "just like Vincent Price".
Gaiman's Abel is a nervous, stammering, kind-hearted man; somewhat similar in appearance to Cain, with a tufty beard and hair that comes to points above his ears, though his hair is black rather than brown. He is shorter and fatter than Cain, with a more open face. It is eventually learned that the only time he does not stutter is when he is telling a story or when he is dead.
Cain frequently kills Abel in brutal ways; whereupon Abel later returns to life, and frequently hopes for a more harmonious relationship between the two.
Cain and Abel own a large green draconic gargoyle named Gregory, who also made his debut in "House of Mystery" #175. In the first appearance of the characters in "Sandman", issue #2, Cain gives Abel an egg that soon hatches into another gargoyle, a small golden one. Abel names the gargoyle "Irving", but Cain insists that the names of gargoyles must always begin with a "G.", and Abel (after another death and resurrection) renames the gargoyle "Goldie", after an invisible/imaginary friend to whom Abel told his early "House of Secrets" stories. A letter in issue #91 was attributed to Goldie, who claimed that it was she who was depicted on the cover of issue #88.
They shelter Dream until his strength is restored following his 72-year-long imprisonment. In the fourth story arc, "", Cain is sent to Hell to give a message to Lucifer because Cain is protected by a curse that would deter Lucifer from harming him. Cain and Abel also aid The Corinthian with the child Daniel during "", the penultimate story arc of the series. Abel is one of the victims of the Furies in this series, and is brought back to life by the new Dream.
The Corinthian is a nightmare created by Dream, of human appearance but with two small additional mouths in place of his eyes. He enjoys eating the eyeballs of people he kills.
The first version of the Corinthian is destroyed by Dream for spending several unsupervised decades on Earth as a serial killer (in Dream's view, a waste of his potential), and it is shown in "" (2013) that Dream intended to do this before his imprisonment. Near the end of the series Dream creates a second Corinthian, altering his personality to be obedient and useful rather than homicidal. In a later story in "The Dreaming", the second Corinthian is haunted by the actions of the first.
Eve is based on the biblical Eve, the mother of humanity and wife of Adam.
Eve originally appeared in "Secrets of Sinister House" #6 (August–September 1972); she was the series' principal host, often in stock images, usually with her raven. After issue #15, in which Eve reveals in the letter column that her raven, Edgar Allen, is an enchanted deceased human, editor Joe Orlando departed from the series and so did she, the series focusing thereafter on "sinister houses". That month (December 1973), she started hosting one story per month in "Weird Mystery Tales".
She became the principal host of "Weird Mystery Tales" with issue #15, Destiny having moved to "Secrets of Haunted House" as principal host. In "Plop!", Eve, Cain, and Abel each tell one story per issue. She also makes a few appearances in "House of Mystery" and "House of Secrets". In her early appearances, she appears only as a crone, is often identified as a witch, and has a tendency to sharp speech. In her first appearance, she scares Cain and Abel, and shouts at them, "Get out of the kitchen when it gets too hot, you cowardly mortals! Old Eve doesn't care..." Her letter column, which was answered in character, was called "Witch's Tales". She appeared as a principal character in stories in "Secrets of Sinister House" #9 and #11 and "Weird Mystery Tales" #18. In issue #9, she stays in an apartment building under an assumed name (she denies it is her in the letters column of issue #13), where the smell of her cooking causes her neighbor to report her to the superintendent, so she curses the neighbor to repeat a day—which begins wonderfully and ends in two deaths—over and over again.
In "Weird Mystery Tales" #3 (November–December 1972), Destiny insisted that Eve, Cain, and Abel are not their Biblical eponyms. When she is shown in "Sandman" #2, Lucien's comment about her addresses her unfriendly nature prior to Dream's return, stating that she confines herself to nightmares.
Eve lives in a cave in the Dreaming, and is often accompanied by Dream's raven. The first raven, Lucien, taught her how to bury Abel after Cain murdered him and she has been accompanied by a raven ever since. She is kind and has a maternal nature, though she retains her sharp language. Most of the time she appears as a black-haired woman of indeterminate age; but sometimes appears as a young, attractive maiden, a middle-aged mother, or an elderly crone. When we first see her in "The Sandman" #2, she looks little different from her original appearances. Next, in #24, she has put on much weight, has a friendlier face, and shows her ability to de-age as she embraces Matthew. Her largest appearance is in #40, wherein she appears young and beautiful for the first time.
Fiddler's Green is a place in the Dreaming which all travellers (specifically sailors) dream of someday finding, which sometimes assumes human form and goes wandering, under the alias Gilbert; a kindly, portly man who, in appearance and behavior, resembles G. K. Chesterton. As 'Gilbert', Fiddler's Green accompanied Rose Walker to find her brother Jed, and gave her the means by which to summon Dream to rescue her from danger; and thereafter returned to the Dreaming. He was killed by the Three in "", and himself refused resurrection by the new Dream. Here, it is implied that he was "in love, a little" with Rose.
A wyvern, a griffin and a hippogriff are the guardians of Dream's castle. The hippogriff has a horse's head instead of the traditional eagle's head.
They derive all their power and authority from Dream, so when Dream was captured and lost his power, they could no longer guard or protect the Dreaming.
After the griffin was destroyed by the Furies, the new Dream did not remake him, but asked the gryphons of Greek myth to send one of their own. (#71)
A large green gargoyle, the pet of Cain. Gregory communicates in 'grunts' which inhabitants of the Dreaming appear to understand. He helps Goldie re-assemble Abel when Cain kills him. He first appeared as the child of two stone gargoyles in "House of Mystery" #175, wherein his parents perched on the House of Mystery until they were able to kill their sculptor, a boarder in the house who had murdered their designer, and left without their egg. He later appears during the "Blackest Night" crossover, defending Scandal Savage, the new owner of the House of Mystery, from members of the Suicide Squad.
Goldie is Abel's pet gargoyle.
Goldie is a pet (baby) gargoyle, given to Abel by his brother Cain in "". Abel originally intended to name him "Irving", but Cain insisted that gargoyles' names must all begin with a "G." Cain then proceeded to murder Abel over this, after which Abel names the gargoyle Goldie, after a friend who went away (in fact Abel's "imaginary" girlfriend, who appeared on the cover of "The House of Secrets" #88, and to whom he addressed many of his stories).
Goldie takes centre stage in "The Dreaming", a "Sandman" spin-off series not written by Gaiman. In "The Goldie Factor," Goldie leaves the Dreaming and ends up in the Garden of Eden.
Lucien is the chief librarian in The Dreaming, and is a tall thin, bookish man. He first appeared in "Weird Mystery Tales" #18 (May 1975) and was apparently killed in "Secrets of Haunted House" #44 (January 1982).
Like Cain and Abel, Lucien, created by Paul Levitz, Nestor Redondo, and Joe Orlando, was originally the host of a 1970s "weird tales" comic, specifically the three-issue "Tales of Ghost Castle" (May/June–October 1975). In that series, he is portrayed as the guardian of a castle in Transylvania abandoned by both sides during World War II, watching over its forgotten library with his companion, a werewolf named Rover. In his first appearance in "" (issue #2) this is retroactively revealed to be Dream's castle.
Lucien is the effective keeper of the Dreaming in Dream's absence, and becomes one of Dream's most faithful and trusted servants after proving his loyalty by never abandoning his post during that period. His primary function is to protect the Library, wherein are contained all the books that have ever been dreamed of, including the ones that have never been written. The titles of some of these books, many of which are sequels to real works, are visible. He is, despite his frail appearance, apparently quite capable in combat, "[dealing] with" several unpleasant creatures who escape imprisonment during the events of "The Kindly Ones".
In issue #68, it is revealed that Lucien's existence in the Dreaming began as serving the role of Dream's first raven. When writing "The Sandman Companion", author Hy Bender interpreted this as meaning that Lucien was also the first man. An allusion to "Mr. Raven", the ghostly librarian in George MacDonald's novel "Lilith", may be intended.
Matthew is the raven companion of Dream of the Endless.
Matthew was originally Matthew Cable, a long-time supporting character in the "Swamp Thing" series, but because he died while asleep in the Dreaming, he was offered the chance to become a dream raven and serve Dream if he wished, and he accepted.
Matthew is not the first of Morpheus' ravens. Former ravens include Aristeas of Marmora, who returned to his life as a man for one year at one point, and Lucien, the first of the ravens. Morpheus seems to keep the ravens around out of some sort of unspoken need for companionship, though he also sends them on occasional missions.
Matthew's word balloons and font style are scratchy and uneven, probably to represent a hoarse, cawing voice, and perhaps as an indicator of his crude, smart-aleck personality. Underneath his frequently irreverent manner, Matthew is actually very loyal to Dream, and he is one of the characters who takes it the hardest when Dream perishes, initially seeking release from his service, but eventually coming to terms with his loss and choosing to remain as Daniel's raven.
Mervyn Pumpkinhead is Dream's cantankerous, cigar-smoking janitor: an animated scarecrow whose head is a jack-o'-lantern. He resembles Jack Pumpkinhead of L. Frank Baum's Oz books.
Mervyn is first seen in "" when Dream travels on a bus. Thereafter Merv is in charge of the construction, maintenance, and demolition work in the Dreaming, though he sometimes complains that his job is superfluous because Dream can change any of the Dreaming at will. One issue of the "Dreaming" spin-off comic focuses on a dreamer who enjoys working under Merv's supervision.
Mervyn was one of the few who took arms against the Furies in ""; but is easily killed. He is returned to life by the new Dream in "".
In a past incarnation shown in "", Mervyn was seen to have had a turnip for a head instead of a pumpkin, as pumpkins were not then known in Europe.
Bast, in Neil Gaiman's comic book series "The Sandman", is the DC Universe version of the goddess Bast of Egyptian mythology. She was once a major goddess, but the loss of her believers over time has significantly reduced her powers. She is often coquettish toward Dream, who sometimes goes to her for advice or companionship; but she has often claimed never to have been his lover. Bast has also appeared in issues of "Wonder Woman" and "Hawkgirl", wherein she is one of the chief goddesses worshiped by the Amazons of Bana-Mighdall. She appears in "Sandman Presents: Bast: Eternity Game" (2003), where she attempts to regain her lost power.
The Presence is the "Sandman" universe's equivalent of a Supreme Being, and he shares many characteristics with the standard Abrahamic God, such as almost never taking a physical form, being a Creator deity, and having unmatched power. Nevertheless, Gaiman has on several occasions stated that he never intended the Creator to be any specific religion's god, just as he makes it clear in the first appearance of the abode of the angels, the Silver City, that it "is not Paradise. It is not Heaven. It is the Silver City, that is not part of the order of created things", although the Silver City is often identified as "Heaven" in the "Lucifer" comic book series.
In that series, one of the critical turning points is the Presence's abandonment of his Creation, which leads to a large number of problems, including struggles to claim the power that the Creator has abandoned, to make the destruction of the universe inevitable and to the slow unraveling of the universe due to the disappearance of the Name of the Creator written on every atom in existence. This is an ongoing storyline in "Lucifer".
Loki is a trickster god seen in ""; based on the Norse god Loki. In his own form, Loki is a tall, thin man with yellow eyes and long red hair that resembles flames; but he is capable of assuming any appearance at will. He is sometimes nicknamed 'Lie-Smith' and 'Sky-walker' by other characters.
He is temporarily freed from his punishment by Odin to accompany his negotiations for the rulership of Hell; whereafter he deceives Odin and Thor into imprisoning another god in his place, but fails to fool Dream, who frees the other god and sends a simulacrum of Loki to take his punishment, in exchange for Loki's debt to himself. Loki returns in "", wherein he works with Puck to kidnap Daniel, a child under Dream's protection. The Corinthian and Matthew eventually find Daniel, and Loki attempts to fool them by taking the form of Dream; but the Corinthian strangles Loki and consumes his eyes. Loki, now blind, is taken by Odin and Thor back to his punishment.
Loki reappears in "Lucifer", wherein Lucifer comes to Loki to take his ship for his own universe, and destroys the snake that tortures Loki, who therefore allows him the ship.
Odin, as based on the Norse God Odin, appears as an old man wearing a wide-brimmed hat and cloak and carrying a staff. He is usually depicted as a dark, mysterious figure, missing one eye and accompanied by two ravens, Hugin and Munin ("thought" and "memory"), and two wolves, Geri and Freki.
The Three appear in the form of any group of three women; usually the Mother, the Maiden and the Crone, the three aspects of the Triple Goddess in many mythologies. Sometimes they appear in the form of the three witches from DC's horror anthology, "The Witching Hour": Mildred, Mordred, and Cynthia. As these witches, they also appeared in a prestige format limited series of the same title, and two standard limited series, "Witchcraft" and "Witchcraft: Le Terreur".
The Three repeatedly appear throughout "The Sandman", fulfilling different functions at different points in the story. Their first appearance is in "The Sandman" #2, where they appear as the three witches, Mildred (mother), Mordred (crone), and Cynthia (maiden) from the DC horror anthology "The Witching Hour". They later take many different forms over the course of the series, and the "three women" symbol remains an extremely common one, often blurring the lines between when characters are supposed to be merely themselves and when they are supposed to be representations of the Three. The Three represent the female principle, prophecy, and mystery, and they are often a vaguely menacing and enigmatic presence in the series. Incarnations of the Three include the Erinyes (Furies) in their vengeful aspect and the Moirai (Fates) or Weird Sisters in their divinatory aspect. They also sometimes subtly appear in the form of other characters (such as Eve) or groups of characters.
The Three later appeared in a graphic novel named "WitchCraft", in which one of their priestesses in ancient Rome, Ursula, is raped by barbarians. She is then reincarnated three times, followed by the witches, and wronged again by reincarnations of the barbarian leader until the modern age, when she comes back as his elderly mother-in-law and manages to defeat him.
The Three then assure that he would be reincarnated as each of the priestesses he had raped, in order, with the exception of Ursula. He would never know what was happening until the moment of death, at which point it would start all over again.
The Three are satisfied, and in the end decide that Ursula will live another twenty years and become an accomplished and respected witch in her twilight years, and her grandchild will be beautiful.
Azazel is a former ruler of Hell, reigning for a time alongside Lucifer and Beelzebub. Based on a statement from Agony and Ecstasy in "Hellblazer" #12, he may have usurped his position from Belial (who they stated at the time was the third member of the triumvirate). He appears as a ragged opening into darkness, full of disembodied eyes and mouths. He was cast out after Lucifer abandoned Hell, and later imprisoned by Dream in a glass jar. He reappears, still in Dream's glass jar, in "Lucifer Volume 2" (2015).
He is based on the demon Azazel.
Azazel first appeared in DC Comics battling Madame Xanadu in the story intended for "Doorway to Nightmare" #6 (it was cancelled after #5) that was eventually published in "Cancelled Comic Cavalcade" #2 and "The Unexpected" #190. As with Lucifer's appearance in "The Brave and the Bold", he looked more like a traditional devil, but was identified as an incubus: here, a creature who steals people's dreams and imprints them upon tapestries that give him power, and cannot be destroyed without killing the victims.
Along with Lucifer and Azazel, Beelzebub was the third King of Hell. He often appears as either a gigantic green fly, or a fly's head on two short human legs. Sometimes a human face can be seen between the fly's eyes. His constant buzzing slurs his speech (for example, 'Bbbbut nooo. Itzzz a Triummmvirate.') He is based on the demon Beelzebub.
Choronzon is a former duke of Hell who served under Beelzebub. He has pink skin and two mouths, one under the other.
He had possession of Dream's helm, but lost it in a challenge. He later reappeared briefly as one of Azazel's tactics to gain ownership of Hell.
He is based on the demon Choronzon.
Choronzon appears in "52" #25 (Late October 2006).
Duma is a fallen angel from the DC Vertigo series "The Sandman". Duma's name means "silence", and he is based on the angel Duma from Jewish mythology. In "", Lucifer abdicates Hell and gives the key to Dream until God assigns Duma and Remiel to control of Hell. Remiel and Duma lose ownership of Hell in the "Lucifer" spin-off series. Duma eventually allies with Lucifer and Elaine Belloc to save creation, and persuades Hell's new ruler Christopher Rudd to bring his army to Heaven's aid at the Battle of Armageddon.
Lucifer is the sometime ruler of Hell, and a fallen angel. He is based on the fallen angel Lucifer as portrayed in John Milton's epic poem "Paradise Lost". Neil Gaiman also used the character Lucifer in his short story 'Murder Mysteries', wherein he was a captain in the Silver City, with Azazel as his protégé.
In the book "Hanging out with the Dream King" (a book consisting of interviews with Gaiman's collaborators), one of Gaiman's artists, Kelley Jones, states that Lucifer's appearance is based on that of David Bowie:
"...Neil was adamant that the Devil was David Bowie. He just said, 'He is. You "must" draw David Bowie. Find David Bowie, or I'll send you David Bowie. Because if it isn't David Bowie, you're going to have to redo it until it "is" David Bowie.' So I said, 'Okay, it's David Bowie.'..."
Lucifer made at least three previous appearances in DC Comics ("Superman's Pal Jimmy Olsen" #65, "Weird Mystery Tales" #4, and "DC Special Series" #8, a.k.a. "The Brave and the Bold Special"), but his appearance was more traditional. Lucifer as he appeared in "The Sandman" also appeared in issues of the series "The Demon" (vol. 3) and "The Spectre" (vol. 2) and in the miniseries "Stanley and His Monster" (vol. 2).
Mazikeen is a fictional character from Neil Gaiman's "Sandman" mythos. The name "Mazikeen" comes from that of a shapeshifting demon of Jewish mythology.
Mazikeen first appeared in "The Sandman", where she was Lucifer's consort while he reigned in Hell. At the time, half of her face was normal, but the other half was horribly misshapen and skeletal, causing her speech to be nearly unintelligible. (Gaiman wrote Mazikeen's dialogue by trying to speak using only half of his mouth, and writing down phonetically what came out.)
When Lucifer resigned, Mazikeen left Hell and ended up following her master, becoming part of the staff at the "Lux" (Latin for "light", and the first root word in "Lucifer"), an elite Los Angeles bar that Lucifer had opened and played piano at. To conceal her demonic nature, she covered the deformed half of her face with a white mask and rarely spoke.
In the ongoing comic book series "Lucifer", Mazikeen is a devoted ally of Lucifer Morningstar and the war leader of the Lilin, a race descended from Lilith. A fearsome warrior and a respected leader, Mazikeen is a prominent character in the Lucifer comics. She has the appearance of a human female with long black hair.
In "Lucifer", Mazikeen's face was turned fully human when she was resuscitated by the Basanos following the destruction of the Lux in a fire. This was because the vessel of the Basanos, Jill Presto, did not realize that Mazikeen's face was naturally deformed, and assumed that it was burned in the fire.
When Lucifer refused to assist her in restoring her face to its former state, she defected to her family, the Lilim-in-Exile. As their war leader, she led their army against Lucifer's cosmos, allying herself briefly with the Basanos. However, this was a ruse; after a desperate gamble, she bought Lucifer enough time to destroy the Basanos and regain control of his creation. Lucifer then accepted her into his service once more and made the Lilim-in-Exile the standing army of his universe.
Lucifer ultimately restores Mazikeen's half-skeletal face shortly before departing the known universes.
Remiel is an angel in the comic book series "The Sandman"; based on the angel Remiel. He first appears in "". In Biblical and Judaic traditions, Remiel is an Archangel and a "Grigori"; a Choir/Hierarchy of angels, whose role is to observe humanity, lending a helping hand when necessary but not interfering.
Remiel, along with Duma, is sent to observe when Dream is given the key to Hell. Dream finally gives the key to Remiel and Duma, and the two angels descend to Hell to rule over the countless sinners and demons there.
Following the end of the "Sandman" series, Remiel and Duma lose ownership of Hell in the "Lucifer" spin-off series. At the end of the series, Remiel tries to rebel against Elaine Belloc, refusing to accept her as God's successor. When he tries to kill Gaudium and Spera, friends of Elaine's, she puts him in his own Hell until he reforms.
Remiel is confirmed to be appearing in the fourth season of "Lucifer". Remiel is depicted as female, another younger sibling of Lucifer and Amenadiel (as are Azrael and the late Uriel). Her personality is similar to Amenadiel's early in the series. She is portrayed by actress Vinessa Vidotto.
Inhabitants of Faerie.
The Cluracan is a courtier of the Queen of Faerie and the brother to Nuala, the Dream King's fairy servant. An amoral, merry, capricious, homosexual rogue, Cluracan features in "", "", "", and "". He is strongly reminiscent of the "trickster" archetype also associated with Loki. Following the events of "", Cluracan offends his queen so badly that she sends him to the court of Llinor, where tradition demands that he marry a lady of the royal house; whereupon Cluracan's nemesis – identical to him in every way except his sexual orientation – takes Cluracan's place.
The Cluracan is named after a drunken leprechaun of Irish mythology, the Cluricaun.
Nuala is a faerie given to Dream at the end of "", who takes on the housekeeping duties of the Dreaming, only stopping when her brother Cluracan brings her back to Faerie in "". When she leaves, Dream grants her permission to summon him at need; and when she asks to become his paramour, he refuses.
She subsequently appears in the "Sandman" spinoff series, "The Dreaming."
Auberon is a character in the comic book series "The Sandman" and "The Books of Magic". He is seen for the first time in as Auberon of Dom-Daniel, and again in several issues of "The Books of Magic" and in the "Books of Faerie" miniseries.
The character was inspired by Oberon of Shakespeare's "A Midsummer Night's Dream".
Titania is a character in Neil Gaiman's comic book series "The Sandman".
Titania is the queen of the fay; she first appears in . The character was inspired by Shakespeare's Titania (Fairy Queen) in the play "A Midsummer Night's Dream". There is implication that she in the past was a lover of Dream's, although this is never confirmed.
Titania is also a major character in the comic book "The Books of Magic", of which the first four issues were written by Gaiman, and its spin-off series "The Books of Faerie". In the latter series, it is revealed that she was a human girl who crossed over into the fay realm, was adopted by the previous queen of the fay, and received her faerie powers from a circlet she seized from that queen. Despite this power, she is illiterate, and so regularly uses Dream's library, whose special properties allow its users to read books in any language, including those they cannot speak. There are suggestions that she may be the mother of the series' protagonist, Timothy Hunter.
Puck is a brown-furred trickster and hobgoblin, who appears several times in "The Sandman". Puck aids the Norse God Loki in kidnapping Daniel, playing a small role in the death of the Sandman and Daniel's subsequent assuming of the title. Puck later appeared in an issue of "The Books of Magic", hiding as a gangster called Mr. Robbins in Brighton whose true nature is discovered—but not exposed—by Timothy Hunter. The character was inspired by Puck of Shakespeare's "A Midsummer Night's Dream".
Robert "Hob" Gadling is a human granted immortality, who meets with Dream once every hundred years.
Hob was granted immortality in a pub named the White Horse in 1389, when he simply declared that he "had decided never to die"; whereupon Death agreed, at Dream's request, to forgo him. Hob thereupon takes up a variety of occupations over the centuries, including slaving, and periodically reinvents himself as a descendant of his previous persona. Gradually, he acquires a conscience, and by the 20th century is full of remorse at his past deeds. Dream converses with Gadling once per century, hearing of his latest occupations. At their 20th-century meeting, Dream admits that the purpose of the exercise was simply for him to have a friend. In "", Death offers to end his six-hundred-year life, but Gadling declines.
Orpheus is the son of Dream and the muse Calliope. He is based on Orpheus of Greek mythology.
In "", the Endless attend Orpheus's wedding to Eurydice. Eurydice dies on the same night, and Orpheus asks his father to retrieve her from Hades. Dream refuses, but Orpheus gets help from Destruction and Death. As in the legend, Orpheus travels to Hades, plays his sad music, and loses Eurydice again; he is torn apart by the Bacchanae (the beloved madwomen of Dionysus) but, because of his immortality, survives as a disembodied head. Dream establishes a priesthood to take care of his son, saying that they will never meet again.
In "", Johanna Constantine is asked by Dream to rescue Orpheus from Revolutionary France. Orpheus's singing stuns Robespierre and Louis de Saint-Just, leading to the Thermidorian Reaction. Orpheus misses his father, who still has not visited him.
In "", Dream has to talk to Orpheus in order to find Destruction. In return, Orpheus is granted his wish of death.
Thessaly is the last of the millennia-old witches of Thessaly. She makes her first appearance in "". She has a bookish appearance with straight hair and thick glasses that belie her personality: amoral, cold-blooded, proud, and ruthless, though not malicious. She will kill people who are potential threats with no hesitation or remorse.
Neil Gaiman named this character after the land of witches, Thessaly, in Greece. In one of Plato's dialogues, the Gorgias, Socrates states "I would not have us risk that which is dearest on the acquisition of this power, like the Thessalian enchantresses, who, as they say, bring down the moon from heaven at the risk of their own perdition." In the series, Thessaly does exactly that, with deadly consequences, just as Socrates predicts. Later in the series, Thessaly changes her name to Larissa, which is the capital of Thessaly. Larissa was actually the local fountain nymph, after whom the town was named. It is suggested however that Thessaly is even older than this civilization and may date from Neolithic times.
Thessaly returns in the later volumes, where she is Dream's lover for a time, but this relationship ends unhappily for both and is never actually shown in the series. When it is alluded to in "" Thessaly is never mentioned by name, so only in "" is this romance revealed. Also in "The Kindly Ones", Thessaly provides Lyta Hall with protection and sanctuary from Dream while he is being targeted for death by the Furies, who are using Hall as a vessel.
In "" she attends Dream's wake and funeral. She speaks with two of Dream's lovers and recalls her relationship with Dream, remarking that part of his attraction to her was that she was not intimidated by him. To her surprise, she later dreamed of Morpheus, and the two kindled a romance, with Dream madly in love with Thessaly (though this affection was not mutual). When Morpheus ended his courtship and resumed working, Thessaly realized she did not love Morpheus and left the Dreaming.
When Lyta wakes up after Dream's death, Thessaly calmly advises her to leave. Thessaly suggests that many people, including herself, would be more than happy to murder Lyta for her part in Morpheus' destruction.
Thessaly also is the star of two spin-off comic series, "The Thessaliad" and "Thessaly, Witch for Hire" written by Bill Willingham. In the spin-offs, Thessaly (under that name) and her companion, a ghost named Fetch, first set out to tackle various gods of the underworld who want her dead. Later she is unwillingly pressured into a monster-killing contract.
She is alluded to in the Faction Paradox series, in the character Thessalia and her protégé Larissa.
A London tramp born in 1741. At the time of "Sandman" #3, she was 247 years old. She appears frequently in other DC comics such as "Hellblazer", first appearing in #9. She also had a large role in "Death: The High Cost of Living", where she is shown to be rude and miserly, constantly complaining about the lack of knowledge that present-day youths have. She has been accused of being a witch and also appears to have abilities as a haruspex; however, she merely states that "you don't get to your two hundred and fiftieth without learning a few tricks".
Hettie later appears in the series "The Dreaming", in which it is revealed that she had dealings with Destiny, Johanna Constantine, and President Thomas Jefferson.
In "The Sandman: Overture", it is revealed that she had stolen a magical timepiece in her youth, which remained hidden in her memories until Daniel retrieved it.
Appearing for the first time in "", The Silk Man is an immortal sorcerer, described by Lucifer as "...a fossil remnant from an earlier, cruder creation. His body is a weaving that has to be renewed constantly. His spirit too, come to that. A messy form of immortality, but it seems to do the job." In earlier days he was the leader of the Arao Jinn. He appears as a mercenary, hired by the angel Perdissa to kill Lucifer. He seems to need to consume living things to stay alive, weaving them into himself. He is severely damaged by Perdissa and eventually killed by Lucifer.
In , Vassily appears as an old man telling his teen-aged granddaughter a tale from "the old country", medieval Russia. A youth raised in a remote forest has a series of adventures, including meeting with Lucien (to whom he gives a book) and Baba Yaga, and marrying a fellow shape-changing wolf. At the end of the story, it is revealed that the grandfather is the youth in his own story.
Alex Burgess is the son of Roderick Burgess, mother unknown (but probably Ethel Cripps, and therefore half-brother of Doctor Destiny). He is taught by his father, and takes part in his rituals. Upon Roderick Burgess' death, Alex inherits his estate, including his magical order. He keeps Dream imprisoned, as his father did, trying to bargain for power and immortality in exchange for Dream's release.
The Order of the Ancient Mysteries enjoys a resurgence in popularity in the 1960s, but by the 1970s it is in decline again. Alex passes ownership of the Order on to his boyfriend, Paul McGuire, and becomes obsessed with his prisoner and with his father. Finally, in 1988, Dream escapes and puts Alex into a nightmare of "eternal waking," in which he is forever dreaming he is waking up, and each waking degenerates into another horrible nightmare. This nightmare lasts for years, ending only with Dream's death in "".
Alex is quite tall and near-sighted. He has brown hair which he wears in a variety of styles throughout his life, but by old age he is bald and has come to resemble his father very closely. His relationship with McGuire is deep and heartfelt, but his obsessions with his father and with Dream eventually come to rule his life. In "", he appears again as the child that we see in his first appearance.
Alex is in many ways a tragic figure, perhaps the first statement of the theme that Desire explores in "": "The bonds of family bind both ways". Had Alex not been born the son of his father, inheriting the imprisoned Dream, his life might have been much happier. However, he is finally able to find some measure of fulfillment in his old age, following Dream's death.
His name almost certainly derives from Anthony Burgess's "A Clockwork Orange", the protagonist of which is named Alex, but could also be a nod to Aleister Crowley, whose original middle name was Alexander and who was mentioned in the first issue.
Roderick Burgess (1863–1947) was the Lord Magus of The Order of the Ancient Mysteries. Born Morris Burgess Brocklesby and known also as The Daemon King, his magical fraternity was based in "Fawney Rig" in Sussex, and was initially funded by his inherited industrial wealth. Burgess is a magician rather in the vein of the real Aleister Crowley, and within the DC world is Crowley's rival.
The series begins with Burgess' attempt to capture and bind Death, which fails, capturing Dream instead. Burgess keeps Dream trapped in a glass globe for the rest of his (Burgess') life, attempting to bargain with Dream, but Dream remains silent. Burgess dies from a heart attack still attempting to get a response out of Dream. His order passes the globe and Dream to his son Alex.
Burgess is a bald-headed, slightly pot-bellied man with a large hook nose. He is ultimately self-centred; his sole purpose for the Order is to bring money and power to himself, and he is consumed by his desire to achieve immortality. His relationship with his son is only briefly touched on, though it is implied that it is unhealthy, with Burgess pushing his son to spend his life pursuing his father's dreams.
Lady Johanna Constantine is an 18th-century supernatural adventuress. Dream encounters her several times, once to ask her to recover the head of his son, Orpheus – a mission she performed so successfully that one of its aftereffects was the end of the French Revolution's Reign of Terror.
In the "Hellblazer Special: Lady Constantine" graphic novel, an ancient evil refers to Johanna Constantine as 'the Constantine', the 'laughing magician', and the 'constant one', all titles that have been used (usually by other ancient evils) to describe John Constantine. The evil taunts her, saying "did you think to trick us with a new form?" There is the implication that throughout all times there have been recurring incarnations of Constantine who contain the spark of magic. In the story Johanna Constantine learns that "the Devil and the Wandering Jew" meet once every hundred years in a London pub; this meeting is actually between Dream and Hob Gadling, as she discovers when she interrupts the meeting. The story's conclusion shows Johanna Constantine inheriting a property she calls "Fawney Rig", after the con job wherein a gilded ring is sold as though it were solid gold... the implication being that she attained the property through trickery. This property was later owned by Roderick Burgess, the mage who captured Dream in the beginning of The Sandman story.
In her middle age, Johanna Constantine is charged by persons unknown with the key to a box containing the sigil of America, allegedly created by Destiny. This is stolen and hidden in the future by the wanderer, Mad Hettie. Hettie both blackmails ('I knows about you and the little Corsican') and bribes Johanna for her silence, promising her that she would live to age 99. This promise proves true, with Johanna dying at age 99 while getting out of her wheelchair when she hears the song of her old companion, Orpheus.
Johanna is an ancestor of John Constantine, as revealed in the miniseries The Sandman Presents: Love Street.
She is also mentioned in the Doctor Who novel "The Man in the Velvet Mask", set in an alternate post-Revolutionary France.
John Constantine is a con man and magician who accompanies Dream on a quest to find his pouch of sand.
John Constantine has his own series, "John Constantine: Hellblazer", which occasionally has guest appearances by Cain and Abel. He is also prominently featured in another series, "Swamp Thing", from which he originated.
Ethel Cripps, also known as Ethel Dee, is the mother of John Dee. She was the mistress of Roderick Burgess until she fled with Ruthven Sykes.
Her last joy was her son, John Dee, whom she sought for 10 years. She discovered that he had become a living corpse, which happened because of his use of the Sandman's Ruby.
At this time, she was 90 years old, and it was alluded that she had been kept alive by an amulet in the shape of an eye which granted its user protection – the amulet that Ruthven Sykes had been given by the demon Choronzon in exchange for Dream's helmet. Sykes, formerly second-in-command of The Order of Ancient Mysteries, needed protection from Roderick Burgess, who sought retribution for Sykes' theft of £200,000 and Dream's magical items, then in the Order's possession, when he fled with Ethel Cripps to San Francisco in 1930. "Magical war" was declared upon them, and Ruthven knew he would need a way to protect himself from the hexes Burgess sought to put upon him.
In 1936, Ethel walked out on Ruthven, taking with her the amulet of protection and Dream's Ruby. While in his possession, the amulet protected Sykes from Burgess' hexes, but without it, he died a messy and painful death, with his insides exploding out of him. The amulet continued to protect Ethel while Choronzon was still in possession of Dream's helmet.
After Dream escaped and sought to regain his items, he descended to Hell to find his helmet. He had to battle Choronzon to regain it, and after his victory, the compact was withdrawn and the power of protection the amulet possessed ended, which also ended the life of Ethel Dee.
John Dee, also known as Doctor Destiny, is a DC Comics villain whose powers were derived from his use of Dream's Ruby. His name is almost certainly a reference to the real-life John Dee. He was incarcerated in Arkham Asylum, with other Batman villains such as The Scarecrow and The Joker, until freed by the amulet given to him by his mother, Ethel Dee, former mistress to Roderick Burgess. He had previously fought the Sandman (Garrett Sanford) alongside the Justice League.
John originally named himself 'Doctor Destiny' to protect his mother's surname, but after her death changed it back. The Ruby had drained away his mental and physical state until he was no longer able to sleep or dream without it. This had the unpleasant effect of turning him into a browned, living corpse.
Being able to control dreams, he used the ruby to bring out the 'darkness' and 'bestiality' of many people across the world. He originally sought power, money, and above all the restoration of his human body, but the madness brought about by overuse of the relic drove him to savage, monstrous acts of depravity using the ruby. To quote: 'I think I'll dismember the world and then I'll dance in the wreckage.'
While doing this, over a period of 24 hours he focused the energy of the ruby on several people in a cafe, one of them a friend of Rose Walker and an ex-lover of Foxglove. He used them as puppets, making them murder and degrade each other as if they were toys, until all were dead.
Dream double-bluffed him into destroying the ruby, which Dee believed to be Dream's life. In fact it stored only some of Dream's power, and with that power released, Dream became even more powerful than before. Easily overpowering Dee, Dream decided not to destroy him, and instead returned him to Arkham. Dee was finally able to sleep, and his sadism and depravity faded as he could again dream.
He has since appeared in "Justice League" and "Justice Society" stories, having retained some residual power from the ruby. He even managed to replicate the ruby's power perfectly, although the second ruby has since passed out of his grasp. However, since the new ruby is attuned to him, he has not regressed to his previous vicious persona, mostly seeking dominion over dreams, or over the waking world through dreams.
Wesley Dodds, also known as Sandman, is the original costumed crimefighter who used the name. According to Gaiman, he was merely filling a hole in the universe, in a manner similar to evolution, in which animals fill an empty niche (for instance, something that should fly). He is first seen in "The Sandman" series in a two-panel cameo in issue #1, and in another cameo in issue #26. Dream occasionally appeared in dream sequences in Dodds's own series, "Sandman Mystery Theatre". The two finally met for real in Gaiman's "Sandman Midnight Theatre". Dodds appeared out of costume during "" (#72). The reason for his prophetic visions is explained as his being embodied with a small portion of Dream's essence. His reason for assuming the role of the Sandman is given as nightmares of Dream in his helmet that plague him until he begins his career as a crimefighter, after which "Wesley Dodds sleeps the sleep of the just."
Foxglove (Donna Cavanagh) is a lesbian writer and musician who first appears in "".
She is mentioned in "" as the girlfriend of Judy, one of the patrons at the diner who dies in the story concerning John Dee, titled "24 Hours." In "A Game of You", Foxglove is going out with Hazel McNamara, and the two help Thessaly rescue Barbie.
In "", Foxglove has become a pop superstar after being seen by a promoter in "". She is raising a child with Hazel named Alvie. Alvie dies of cot death, leading Hazel to make a deal with Death. However, even in the world of the Endless there is no such thing as a free lunch, and another character's life has to be sacrificed for the child's.
Daniel Hall is the son of Lyta Hall, and the successor to the role of Dream of the Endless.
Hippolyta "Lyta" Hall is a major character, the mother of Daniel. During Dream's captivity, pregnant Lyta and her husband were held captive in a dream-realm controlled by Brute and Glob, two of Dream's minions. In this pocket realm, Lyta remained pregnant for two years, giving birth to her son Daniel only after Dream destroys the pocket realm (and Lyta's husband) and frees her. When Dream tells Lyta that the child she gestated in dreams will one day belong to him, Lyta swears she will protect Daniel at all costs. When Daniel goes missing, Lyta is convinced that Dream has stolen him and seeks revenge, unwittingly setting into motion the events of Dream's death.
John Hathaway is the senior curator of the Royal Museum. He steals the Magdalene Grimoire from the museum's collection to aid Roderick Burgess in his attempt to gain immortality after his son, Edmund, dies. He commits suicide in 1920 using a dagger from the museum after a stock taking reveals his theft. His suicide note, implicating Roderick Burgess in a multitude of crimes, is never found.
Hazel McNamara is Foxglove's lover. She appears in "" and "".
She has a son, Alvie, from her one heterosexual encounter. It is likely that Alvie is named after Wanda (see below). In "" Alvie dies of cot death and Hazel makes a deal with Death to bring him back.
Unity Kinkaid first appears as one of the victims of the sleepy sickness that follows Dream's capture in the first collection of issues in the series, "". Following his capture, she sleeps until he escapes. While asleep, she gives birth to a daughter, Miranda Walker. It is later shown that the father of this child was Desire. Unity is later identified as a "vortex of Dream": a rare entity with the ability to telepathically combine the dreams of other beings, and who can thus cause the destruction of The Dreaming. The only time Dream is allowed to take a human life is to kill a vortex. Desire's intervention transfers the vortex to Unity's granddaughter, Rose Walker, in the hope that Dream will kill one of their relatives, and thus incur the vengeance of the Furies. Before Dream can kill Rose, Unity reclaims the vortex and dies in her stead.
Unity is of medium height, with reddish-brown hair that she wears long and loose in the final dream-meeting between herself, Rose, and Dream; as the old woman of waking life, she has grey hair and wears a curiously old-fashioned dress.
Prez Rickard is a fictional character who first appeared in "Prez" #1 (December 1973). He is the subject of the story "The Golden Boy", in "Sandman" #54, where he is the first 19-year-old to be elected President of the United States.
Ruthven Sykes is a bespectacled Afro-Caribbean man with short hair.
He is Roderick Burgess' second-in-command of the Order of the Ancient Mysteries until November 1930, when he steals a number of treasures (including Dream's helmet, ruby and pouch of sand) and £200,000 in cash from the order and flees to San Francisco with Roderick's mistress, Ethel Cripps. In December 1930, he trades the helmet to the demon Choronzon for an amulet that looks like an eyeball on a chain. This amulet protects him from the magics of Burgess until 1936, when Ethel Cripps leaves him, taking the amulet with her. He is then killed.
Jed Walker, created by Joe Simon and Jack Kirby, first appeared in "The Sandman", vol. 1, #1, where he was protected from nightmare monsters by the titular hero. In "Cancelled Comic Cavalcade" #2, he was revealed to be the Earth-1 equivalent of Kirby's Kamandi. In Neil Gaiman's revisionist version of "The Sandman", Jed is the brother of Rose Walker and the grandson of Unity Kinkaid and Desire. He was raised by his grandfather, Ezra Paulsen, then taken and imprisoned by his aunt and uncle at the behest of Desire. Once Rose rescues him, he is revealed in "The Wake" to have become close to her.
Rose Walker is a fictional character from the "Sandman" series written by Neil Gaiman. She makes her first appearance in issue #10, part one of "" story arc. She is a young blonde with red- and purple-dyed streaks in her hair. In later issues, she is shown as having red hair with a blonde streak. In "", several characters remark that Rose looks much younger than her actual age; Rose's responses to these comments imply that while she may not be a true immortal, she is aware that she is aging more slowly than normal. She is the granddaughter of Desire.
Clarice and Barnaby, aunt and uncle of Jed and Rose, were introduced in "The Sandman" vol. 1, #5, created by Michael Fleisher and Jack Kirby. The pair mysteriously show up on Dolphin Island a few hours after the drowning death of Jed's grandfather, fisherman Ezra Paulsen. They take him to live with their own children, Bruce and Susie. They treat him as a personal slave not unlike Cinderella, with minimal food even as he does all the cooking. Eventually, their treatment of him is revealed to have become much more abusive—after he runs away from home, they place him in a basement dungeon with no toilet. This is told in issues 5 and 6 of the first series, "The Best of DC" #22, and recapped in Rose's diary in issue #11 of the Gaiman series. In issue #12, their mysterious appearance is revealed to have been because they were being paid an $800 monthly stipend by social services. In issue #14, they are revealed to have been killed.
Wanda: A transgender woman featured in "" who is Barbie's best friend. She dies in a storm caused by Thessaly's magic and is buried as 'Alvin Mann', her former identity. Wanda is last seen, along with Death, in Barbie's dream.
Barnabas is a sarcastic talking dog who belonged to Destruction and was assigned to guard Delirium. His origins are unknown.
The Basanos is a living Tarot deck created by the seraph Meleos to duplicate the divining power of Destiny's book. The cards are incredibly powerful because they control probability, making whatever outcome they desire not merely likely but inevitable.
After escaping from Meleos, the Basanos took possession of Jill Presto, a cabaret worker. Lucifer Morningstar sought them out for a tarot reading, which they granted.
When Lucifer created his new universe, the Basanos moved to take control of it so that they could breed (something that is impossible in God's cosmos). Though initially successful in their plan, forming an alliance with Lucifer's enemies, their ability to control randomness was severely limited by Lucifer's creation, and Lucifer was able to outmaneuver them. Lucifer finally gave them an ultimatum: destroy themselves or risk letting the egg they laid in Jill Presto die. The Basanos chose death and extinguished themselves.
"Basanos" is Greek for touchstone. Such a touchstone may be a piece of slate used to test gold, or it may be a metaphor for torture or torment to test truthfulness. Why Meleos chose this name for his creation is unknown.
Charles Rowland was the only boy left at his boarding school during the holidays when Lucifer closed Hell, sending its former inhabitants back to Earth. While the adults of the school are preoccupied with the dead spirits who came back into their own lives, Charles is tortured and killed by three dead boys who used to go to the same school. Edwin Paine is a previous victim of the trio, his body still trapped on the grounds. He befriends Charles, but is unable to keep him from dying. When Death shows up, Charles refuses to go with her, and she lets him go, preferring to focus on all the other trouble Hell's closure has brought her. They later appeared in other books as the Dead Boy Detectives.
Eblis O'Shaughnessy: a golem and envoy created by the Endless to obtain the Cerements and the "Book of Ritual" for the funeral rites of their brother Dream. Five of the Endless participated in the creation of Eblis O'Shaughnessy, and Delirium named him. He thereafter accompanied them at the funeral. He reappears in the Vertigo story "The Girl Who Would Be Death" (1999).
Alianora was first introduced in "A Game of You" as the original inhabitant of The Land, a region of the Dreaming that Barbie has visited since childhood, which is threatened by the Cuckoo. After the Hierogram is broken and The Land is dissolved, Alianora appears and speaks to Dream. Her history is expanded in "The Sandman: Overture", where it is revealed that she was created by Desire to be Dream's lover and to help him escape imprisonment after the Dreaming is invaded by two unspecified gods. Together, they vanquish the gods, but Dream is unable to make her happy, so he creates The Land as a place in which she can be free and contented.
Seattle Seahawks
The Seattle Seahawks are a professional American football team based in Seattle, Washington. They compete in the National Football League (NFL) as a member club of the league's National Football Conference (NFC) West division. The Seahawks joined the NFL in 1976 as an expansion team. Currently coached by Pete Carroll, they have played their home games at CenturyLink Field (formerly Qwest Field) in Seattle's SoDo neighborhood since 2002. They previously played home games in the Kingdome (1976–1999) and Husky Stadium (1994, 2000–2001).
Seahawks fans have been referred to collectively as the "12th Man", "12th Fan", or "12s". The team's fans have twice set the Guinness World Record for the loudest crowd noise at a sporting event, first registering 136.6 decibels during a game against the San Francisco 49ers in September 2013, and later during a "Monday Night Football" game against the New Orleans Saints a few months later, with a then record-setting 137.6 dB. The Seahawks are the only NFL franchise based in the Pacific Northwest region of North America, and thus attract support from a wide geographical area, including some parts of Oregon, Montana, Idaho, and Alaska, as well as Canadian fans in British Columbia, Alberta and Saskatchewan.
Steve Largent, Cortez Kennedy, Walter Jones, and Kenny Easley have been voted into the Pro Football Hall of Fame primarily or wholly for their accomplishments as Seahawks. In addition to them, Dave Brown, Jacob Green, Dave Krieg, Curt Warner, and Jim Zorn have been inducted into the Seahawks Ring of Honor along with Pete Gross (radio announcer) and Chuck Knox (head coach). The Seahawks have won 10 division titles and three conference championships, and are the only team to have played in both the AFC and NFC Championship Games. They have appeared in three Super Bowls: losing 21–10 to the Pittsburgh Steelers in Super Bowl XL, defeating the Denver Broncos 43–8 for their first championship in Super Bowl XLVIII, and losing 28–24 to the New England Patriots in Super Bowl XLIX.
Under the terms of the 1970 AFL–NFL merger, the NFL began planning to expand from 26 to 28 teams. In June 1972, Seattle Professional Football Inc., a group of Seattle business and community leaders, announced its intention to acquire an NFL franchise for the city of Seattle. In June 1974, the NFL gave the city an expansion franchise. That December, NFL Commissioner Pete Rozelle announced the official signing of the franchise agreement by Lloyd W. Nordstrom, representing the Nordstrom family as majority partners for the consortium.
In March 1975, John Thompson, former Executive Director of the NFL Management Council and a former Washington Huskies executive, was hired as the general manager of the new team. The name "Seattle Seahawks" ("seahawk" is another name for osprey) was selected on June 17, 1975, after a public naming contest which drew more than 20,000 entries and over 1,700 names.
Thompson recruited and hired Jack Patera, a Minnesota Vikings assistant coach, to be the first head coach of the Seahawks; the hiring was announced on January 3, 1976. The expansion draft was held March 30–31, 1976, with Seattle and the Tampa Bay Buccaneers alternating picks through the rounds, selecting unprotected players from the other 26 teams in the league. The Seahawks were awarded the second overall pick in the 1976 draft, which they used on defensive tackle Steve Niehaus. The team took the field for the first time on August 1, 1976, in a pre-season game against the San Francisco 49ers in the then newly opened Kingdome.
The Seahawks are the only NFL team to switch conferences twice in the post-merger era. The franchise began play in 1976 in the aforementioned NFC West but switched conferences with the Buccaneers after one season and joined the AFC West. This realignment was dictated by the league as part of the 1976 expansion plan, so that both expansion teams could play each other twice and every other NFL franchise once (the ones in their conference at the time) during their first two seasons. The Seahawks won both matchups against the Buccaneers in their first two seasons, the former of which was the Seahawks' first regular season victory.
In 1983, the Seahawks hired Chuck Knox as head coach. Finishing with a 9–7 record, the Seahawks made their first post-season appearance, defeating the Denver Broncos in the Wild Card Round, and then the Miami Dolphins, before losing in the AFC Championship to the eventual Super Bowl champion Los Angeles Raiders. The following season, the Seahawks had their best season before 2005, finishing 12–4. Knox won the NFL Coach of the Year Award.
In 1988, Ken Behring and partner Ken Hofmann purchased the team for a reported $80 million. The Seahawks won their first division title in 1988, but from 1989 to 1998 had poor records; their best record in that span came in 1990, when the team finished 9–7, and the lowest point came in 1992 when the team finished with its worst record in team history, 2–14.
In 1996, Behring and Hofmann moved the team's operations to Anaheim, California, a widely criticized decision, although the team continued to play in Seattle. The franchise nearly relocated and was briefly in bankruptcy. The NFL threatened to fine Behring $500,000 a day if he did not move the team's operations back to Seattle; under this pressure, Behring and Hofmann sold the team to Microsoft co-founder Paul Allen in 1997 for $200 million. In 1999, Mike Holmgren was hired as head coach, a position he would hold for 10 seasons; that year the Seahawks won their second division title and a playoff berth.
In 2002, the Seahawks returned to the NFC West as part of an NFL realignment plan that gave each conference four balanced divisions of four teams each. The realignment restored the AFC West to its initial post-merger roster of original AFL teams Denver, San Diego, Kansas City, and Oakland. That same year, the team opened its new home stadium, Seahawks Stadium, after spending the previous two seasons at Husky Stadium following the Kingdome's implosion in 2000.
In the 2005 season, the Seahawks had their best season in franchise history to that point (a feat later matched in 2013) with a record of 13–3, which included a 42–0 rout of the Philadelphia Eagles on Monday Night Football. The 13–3 record earned them the number one seed in the NFC. They won the NFC Championship Game, but lost Super Bowl XL to the Pittsburgh Steelers. The loss was controversial; NFL Films ranks Super Bowl XL number 8 on its top-ten list of games with controversial referee calls, and referee Bill Leavy later admitted that he missed calls that altered the game. Before 2005, Seattle had the longest drought of playoff victories of any NFL team, dating back to the 1984 season; that drought ended with a 20–10 win over the Washington Redskins in the 2005 playoffs.
In the 2009 NFL season, the Seahawks finished third in the NFC West with a 5–11 record. After just one full season with the Seahawks, head coach Jim L. Mora was fired on January 8, 2010, and replaced by former USC Trojans head coach Pete Carroll. Shortly thereafter, Mora became the head coach of the UCLA Bruins football team.
In the 2010 NFL season, the Seahawks made history by reaching the playoffs despite a 7–9 record. They had the best record in a division full of losing teams (Seahawks 7–9, Rams 7–9, 49ers 6–10, Cardinals 5–11), claiming the title by winning the decisive season finale against the Rams (both teams entered the game with 3–2 division records). In the playoffs, the Seahawks won their first game 41–36 over the defending Super Bowl XLIV champion New Orleans Saints. The Seahawks made even more history during that game when Marshawn Lynch broke nine tackles on a 67-yard run to clinch the victory; the fans reacted so loudly that a small earthquake (a bit above 2 on the Richter scale) was recorded by seismic equipment around Seattle, an event nicknamed the "Beast Quake". The Seahawks lost their second game to the Bears, 35–24.
The 2012 NFL season began with doubt, as the Seahawks lost their season opener to the Arizona Cardinals after the highly touted Seattle defense gave up a go-ahead score late in the fourth quarter and rookie quarterback Russell Wilson failed to throw the game-winning touchdown despite multiple attempts in the red zone. However, Wilson and the Seahawks went 4–1 over their next five games en route to an 11–5 overall record (their first winning record since 2007). Their 2012 campaign included big wins over the Green Bay Packers, New England Patriots, and San Francisco 49ers. The Seahawks entered the playoffs as the #5 seed and the only team that season to go undefeated at home. In the Wild Card Round, the Seahawks overcame a 14-point deficit to defeat the Washington Redskins, their first road playoff win since the 1983 Divisional Round. However, in the 2012 Divisional Round, overcoming a 20-point fourth-quarter deficit was not enough to defeat the #1 seed Atlanta Falcons; an ill-advised timeout and a defensive breakdown late in the game cost the Seahawks their season in a 30–28 loss. Wilson won the 2012 Pepsi Max Rookie of the Year award.
In the 2013 NFL season, the Seahawks continued their momentum from the previous season, finishing tied with the Denver Broncos for an NFL-best regular season record of 13–3 while earning the NFC's #1 playoff seed. Their 2013 campaign included big wins over the Carolina Panthers, New Orleans Saints, and San Francisco 49ers. Six Seahawks players were named to the Pro Bowl: quarterback Russell Wilson, center Max Unger, running back Marshawn Lynch, cornerback Richard Sherman, free safety Earl Thomas, and strong safety Kam Chancellor. None of them were able to play in the Pro Bowl, however, as the Seahawks defeated the New Orleans Saints 23–15 and the San Francisco 49ers 23–17 in the playoffs to advance to Super Bowl XLVIII against the Denver Broncos. On February 2, 2014, the Seahawks won the franchise's first Super Bowl championship, defeating Denver 43–8. The Seahawks' defensive performance in 2013 has been acclaimed as one of the best of the Super Bowl era.
The 2014 campaign saw the team lose some key pieces, including WR Golden Tate to free agency and WR Sidney Rice and DE Chris Clemons to retirement. Percy Harvin was also let go mid-season after several underachieving weeks and clashes with the rest of the locker room. Despite starting 3–3, they rallied to a 12–4 record, good enough once again for the #1 seed in the NFC Playoffs. After dispatching the Carolina Panthers handily in the Divisional Round 31–17, they faced the Green Bay Packers in the NFC Championship Game. Despite five turnovers and trailing 19–3 late in the contest, the Seahawks prevailed in overtime to reach Super Bowl XLIX against the New England Patriots, but an ill-fated interception at the 1-yard line late in the championship game stymied a comeback attempt and thwarted the Seahawks' bid to be the first repeat Super Bowl champions since the Patriots had won Super Bowls XXXVIII and XXXIX.
The Seahawks returned to the playoffs in both 2015 and 2016, but despite winning the Wild Card game each year they failed to win either Divisional Round game on the road. The 2017 team, however, missed the playoffs for the first time in six years, as injuries to core players, coupled with disappointing acquisitions of RB Eddie Lacy and K Blair Walsh, left them short in a competitive NFC. In 2018 the team cut ties with most of the remaining players from its meteoric rise and turned over both its offensive and defensive coaching staffs; an influx of young talent helped propel the team to a 10–6 record and another playoff berth, which ended in a Wild Card loss. In October 2018, owner Paul Allen died after a prolonged fight with cancer.
From 2011 to 2014, the Seahawks and the San Francisco 49ers emerged as two of the best teams in the NFC, and naturally developed a heated rivalry as a result. The 49ers' head coach at the time, Jim Harbaugh, had a contentious history with Seahawks coach Pete Carroll dating to Harbaugh's previous job coaching Stanford against Carroll's USC Trojans. While the 49ers had the upper hand in the early stages of the rivalry, winning the first three head-to-head contests against Carroll's Seahawks in 2011 and Week 7 of 2012, the tide began to turn when the Seahawks defeated the 49ers soundly in Week 16 of 2012 in prime time by a score of 42–13. Both teams reached the playoffs that year, and the 49ers reached Super Bowl XLVII only to lose to the Baltimore Ravens. In 2013, the Seahawks again thumped the 49ers 29–3 in a Week 2 contest, but the 49ers would triumph in Week 14 by a score of 19–17. The Seahawks ultimately had the last laugh, however, beating the 49ers 23–17 in the 2013 NFC Championship Game. The game was back and forth until the final moments, when a pass intended for 49ers WR Michael Crabtree was tipped by Richard Sherman and intercepted by LB Malcolm Smith to ice the game. The Seahawks won both games against the 49ers in 2014, notably trouncing them 19–3 in a Thanksgiving night game at Candlestick Park in San Francisco. Harbaugh was fired at the end of the season, effectively rendering the rivalry dormant.
Since rejoining the NFC West, the Seahawks lead the series against the 49ers 23–12, including playoffs. Overall, the Seahawks lead the series 25–16.
Since moving to the NFC, the Seahawks have faced the Green Bay Packers several times in the playoffs, developing an intense rivalry as well. Notable moments include the clubs' first playoff meeting, in which Seahawks quarterback Matt Hasselbeck threw a game-losing pick-six in overtime after guaranteeing a game-winning drive; the Fail Mary; and Russell Wilson overcoming four interceptions and a 16–0 Packers lead to carry Seattle to a 28–22 overtime win and a berth in Super Bowl XLIX.
From the 1980s to the 2002 league realignment, the Denver Broncos were a major rival for the Seahawks. With John Elway, the Broncos were one of the best teams in the NFL, going 200–124–1 overall, and were 32–18 against the Seahawks. Since 2002, Denver has won three of five interconference meetings, and the teams met in Super Bowl XLVIII on February 2, 2014, where the Seahawks won 43–8.
During the Seahawks' first ten seasons (1976–85), the team's headquarters was in Kirkland at the southern end of the Lake Washington Shipyard (now Carillon Point), on the shores of Lake Washington. The summer training camps were held across the state at Eastern Washington University in Cheney, southwest of Spokane.
When the team's new headquarters across town in Kirkland were completed in 1986, the Seahawks held training camp at home for the next eleven seasons (1986–96), staying in the dormitories of the adjacent Northwest College. In 1997, Dennis Erickson's third season as head coach, the team returned to the hotter and more isolated Cheney for training camp, which continued through 2006. In 2007, training camp returned to the Seahawks' Kirkland facility because of the scheduled NFL China Bowl game, which was later canceled. In 2008, the Seahawks held the first three weeks of camp in Kirkland, then moved to the new Virginia Mason Athletic Center (VMAC) on August 18 for the final week of training camp, where the team has held its training camps since. The new facility, adjacent to Lake Washington in Renton, has four full-size practice fields: three natural grass outdoors and one FieldTurf indoors.
When the Seahawks debuted in 1976, the team's logo was a stylized royal blue and forest green osprey's head based on Kwakwakaʼwakw art masks. The helmet and pants were silver, while the home jerseys were royal blue with white and green sleeve stripes and white numerals and names. The road jersey was white, with white, blue, and green sleeve stripes and blue numerals and names. The socks were blue with the same green and white striping pattern seen on the blue jerseys. The team wore black shoes for its first four seasons, one of the few NFL clubs to do so in the late 1970s, when most teams wore white shoes; the Seahawks switched to white shoes in 1980.
In 1983, coinciding with the arrival of Chuck Knox as head coach, the uniforms were updated slightly. The striping on the arms now incorporated the Seahawks logo, and the "TV numbers", previously located on the sleeves, moved onto the shoulders. The helmet facemasks changed from gray to blue, and the socks went solid blue at the top and white on the bottom. In the 1985 season, the team wore a 10th Anniversary patch on the right side of their pants, depicting the Seahawks logo streaking through the number 10. In 1994, the year of the NFL's 75th Anniversary, the Seahawks changed their numbering to the Pro Block style, which they used until 2001. That same year, the Seahawks wore a vintage jersey for select games resembling the 1976–82 uniforms, although the helmet facemasks remained blue. The logos also became sewn on instead of screen-printed. In 2000, Shaun Alexander's rookie year and Cortez Kennedy's last, the Seattle Seahawks celebrated their 25th Anniversary with a logo worn on the upper left chest of the jersey. In 2001, the Seahawks switched to the new Reebok uniform system, keeping their then-current design, after that company signed a 10-year deal to be the exclusive uniform supplier to the NFL; it would be their last season in this uniform. Prior to this, various companies had made the team's uniforms.
On March 1, 2002, to coincide with the team's move to the NFC as well as the opening of Seahawks Stadium (which would later be renamed Qwest Field, then CenturyLink Field), both the logo and the uniforms were heavily redesigned. The wordmark was designed by Mark Verlander and the logo by NFL Properties' in-house design team. The colors were modified to a lighter "Seahawks Blue", a darker "Seahawks Navy", and lime green piping. The helmets were also changed from silver to the lighter "Seahawks Blue" color after a fan poll was conducted; silver would not be seen again until 2012. The logo artwork was also subtly altered, with an arched eyebrow and a forward-facing pupil suggesting a more aggressive-looking bird. At first, the team had planned to wear silver helmets at home and blue helmets on the road, but since NFL rules forbid the use of multiple helmets, the team held the fan poll to decide which color helmet would be worn. The team has usually worn all blue at home and all white on the road since 2003. The blue jersey and white pants combination has been worn for only one regular season game, the 2005 season opener at the Jacksonville Jaguars, while the white jersey and blue pants combination had not been worn regularly since late in the 2002 season until the Seahawks revived it late in 2009 for road games against Minnesota (November 22), St. Louis (November 29), Houston (December 13), and Green Bay (December 27).
The Seahawks wore their home blue jerseys during Super Bowl XL despite being designated as the visitor, since the Pittsburgh Steelers, the designated home team, elected to wear their white jerseys.
Since the Oakland Raiders wore their white jerseys at home for the first time ever in a game against the San Diego Chargers on September 28, 2008, the Seahawks are currently the only NFL team never to have worn their white jerseys at home.
On September 27, 2009, the Seahawks wore lime green jerseys for the first time, paired with new dark navy blue pants in a game against the Chicago Bears. The jerseys matched their new sister team, the expansion Seattle Sounders FC of Major League Soccer who wear green jerseys with blue pants. On December 6, 2009, the Seahawks wore their Seahawks blue jersey with the new dark navy blue pants for the first time, in a game against the San Francisco 49ers. The Seahawks broke out the same combo two weeks later against the Tampa Bay Buccaneers, and two weeks after that in the 2009 regular season finale against the Tennessee Titans. In December 2009, then-coach Jim Mora announced that the new lime green jerseys were being retired because the team did not win in them, because he liked the standard blue home jerseys better, and added that the home jersey is a better match for the navy pants. In the same press conference, he stated that the new navy pants "felt better" on players as opposed to the Seahawks blue pants. For the 2010 season, Seattle returned to the traditional all "Seahawks Blue" at home and all white on the road.
On April 3, 2012, Nike, which took over from Reebok as the league's official uniform supplier, unveiled new uniform and logo designs for the Seahawks for the 2012 season. The new designs incorporate a new accent color, "Wolf Grey", alongside the main colors "College Navy" and "Action Green". The uniforms feature "feather trims": multiple feathers on the crown of the helmet, and twelve feathers printed on the neckline and down each pant leg to represent the "12th Man", the team's fans. The Seahawks have three jersey colors (navy blue, white, and an alternate grey) and three pants options (navy blue with green feathers, gray with navy blue feathers, and white with navy blue feathers). The new logo replaces the Seahawks blue with wolf grey. Altogether, nine different uniform combinations are possible.
The Seahawks wore their Nike home blue jerseys for the first regular season game on September 16, 2012 against the Dallas Cowboys. The uniform Marshawn Lynch wore in that game is preserved at the Pro Football Hall of Fame. On September 9, 2012, the Seahawks wore their Nike white away jerseys for the first regular season game against the Arizona Cardinals; on October 14, 2012, with the Carolina Panthers wearing white at home, they wore their blue jerseys with gray pants (and would do so again against the Miami Dolphins seven weeks later); and on December 16, 2012, they wore their Alternate Wolf Grey jerseys for the first time against the Buffalo Bills.
The all-navy ensemble is currently the Seahawks' primary uniform option for home games, with the gray pants being used as an alternate. On the road, the Seahawks primarily pair their white uniforms with the navy pants (that combination was used during their Super Bowl XLVIII win), although they also pair the white uniforms with either white or gray pants on occasion. The all-gray uniforms are worn occasionally on the road.
In 2016, the Seahawks unveiled their NFL Color Rush uniform, an all-Action Green ensemble. They first wore the uniform December 16 against the Los Angeles Rams at home, marking the first time they wore green uniforms since 2009. The Seahawks continue to wear the Color Rush set as an alternate uniform alongside the all-gray combination.
During a home matchup with the Vikings on December 3, 2019, the Seahawks wore their Color Rush green tops and regular navy pants.
As of the end of the 2019 season, the Seattle Seahawks have competed in 44 NFL seasons, dating back to their expansion year of 1976. The team has compiled a 355–336–1 record (17–17 in the playoffs) for a .514 winning percentage (.500 in the playoffs). Seattle has reached the playoffs in 18 seasons, including 2005, when they lost Super Bowl XL to the Pittsburgh Steelers; 2013, when they defeated the Denver Broncos to win Super Bowl XLVIII; and 2014, when they lost Super Bowl XLIX to the New England Patriots. In the 2010 season, the Seahawks became the first team in NFL history to earn a spot in the playoffs with a losing record (7–9, .438) in a full season, by virtue of winning their division. The Seahawks went on to defeat the reigning Super Bowl champion New Orleans Saints in the Wild Card Round, becoming the first team ever to win a playoff game with a losing record. Until Week 7 of the 2016 season against the Arizona Cardinals, the Seahawks had never recorded a tied game in their history.
The 35th Anniversary team was voted upon by users on Seahawks.com and announced in 2010.
Note: Although Mike McCormack served as head coach, president, and general manager for the Seahawks, he is "only" listed in the Pro Football Hall of Fame for his contributions as a tackle for the New York Yanks and the Cleveland Browns.
The Seahawks' cheerleaders were long known as the Sea Gals. Prior to the 2019 NFL season, however, the Seahawks re-branded the cheerleading group to include male dancers, and it is now known as the Seahawks Dancers. During the off-season, a select performing group travels to parades and appears with other NFL cheerleaders on the road.
The 12th man (also known as the 12s) refers to the fan support of the Seahawks. The team's first home stadium, the Kingdome, was one of the loudest and most disruptive environments in the NFL. Opponents were known to practice with rock music blaring at full blast to prepare for the often painfully high decibel levels generated at games in the Kingdome.
In 2002, the Seahawks began playing at what is now CenturyLink Field. Every regular season and playoff game at CenturyLink Field since the second week of the 2003 season has been played before a sellout crowd. Like the Kingdome before it, CenturyLink Field is one of the loudest stadiums in the league: the stadium's partial roof and seating decks trap and amplify the noise and reflect it back down to the field. The noise has caused problems for opposing teams, leading them to commit numerous false-start penalties. From 2002 through 2012, visiting teams committed 143 false-start penalties in Seattle, a total second only to that of visitors to the Minnesota Vikings.
The Seahawks' fans have twice set the Guinness World Record for the loudest crowd noise at a sporting event, first on September 15, 2013, registering 136.6 dB during a game against the San Francisco 49ers, and again on December 2, 2013, during a Monday Night Football game against the New Orleans Saints, with a roar of 137.6 dB. As of September 29, 2014, the record of 142.2 dB has been held by fans of the Kansas City Chiefs at Arrowhead Stadium.
Prior to kickoff of each home game, the Seahawks salute their fans by raising a giant #12 flag at the south end of the stadium. Current and former players, coaches, local celebrities, prominent fans, Seattle-area athletes, and former owner Paul Allen have raised the flag. Earlier, the Seahawks retired the #12 jersey on December 15, 1984 as a tribute to their fans. Before their Super Bowl win, the Seahawks ran onto the field under a giant 12th Man flag.
In September 1990, Texas A&M filed, and was later granted, a trademark application for the "12th Man" term, based on their continual usage of the term since the 1920s. In January 2006, Texas A&M filed suit against the Seattle Seahawks to protect the trademark and in May 2006, the dispute was settled out of court. In the agreement, which expired in 2016, Texas A&M licensed the Seahawks to continue using the phrase, in exchange for a licensing fee, public acknowledgement of A&M's trademark when using the term, a restriction in usage of the term to seven states in the Northwest United States, and a prohibition from selling any "12th Man" merchandise. Once the agreement expired, the Seahawks were allowed to continue using the number "12" but were no longer permitted to use the "12th Man" phrase. In August 2015, the Seahawks decided to drop their signage of the "12th Man" term and shifted towards referring to their fans as the "12s" instead.
Blitz has been the Seahawks' official mascot since the 1998 season. In the 2003 and 2004 seasons, a hawk named Faith would fly around the stadium just before the team came out of the tunnel. However, because of her relatively small size and an inability to be trained to lead the team out of a tunnel, Faith was replaced by an augur hawk named Taima before the start of the 2005 season. Taima began leading the team out of the tunnel in September 2006. Beginning in 2004, the Seahawks introduced their drum line, the Blue Thunder, which plays at every home game as well as over 100 events in the Seattle community.
The Seahawks' flagship station anchors a radio network of 47 stations in five western states and Canada. Microsoft holds naming rights for the broadcasts for its web search engine under the moniker of the "Bing Radio Network". The current announcers are former Seahawks players Steve Raible (previously the team's color commentator) and Warren Moon. The Raible–Moon regular season pairing has been together since 2004; during the preseason, Moon works for the local television broadcast, so the color commentary is split among former Seahawks Paul Moyer, Sam Adkins, and Brock Huard. Pete Gross, who called the games until just days before his death from cancer, is a member of the team's Ring of Honor. Other past announcers include Steve Thomas, Lee Hamilton (also known as "Hacksaw"), and Brian Davis.
Preseason games not shown on national networks were produced by Seahawks Broadcasting and televised by KING-TV, channel 5 (and, in 2008, also on sister station KONG-TV, since KING, an NBC affiliate, was committed to the Summer Olympics in China). Seahawks Broadcasting is the Emmy Award-winning in-house production and syndication unit for the Seattle Seahawks. Curt Menefee (the host of "Fox NFL Sunday") has been the Seahawks' TV voice since the 2009 preseason. Since the 2012 season, KCPQ-TV, which airs most of the Seahawks' regular season games as the Seattle–Tacoma area's Fox affiliate, has been the team's television partner, replacing KING 5 as broadcaster of preseason games, while simulcasts of any Seahawks games on ESPN's "Monday Night Football" air (as of the 2018 season) on CBS affiliate KIRO-TV. In addition, any Saturday or Sunday afternoon games broadcast by CBS (usually, but not always, with the Seahawks hosting an AFC opponent) air on KIRO-TV.
The Saint (Simon Templar)
The Saint is the nickname of the fictional character Simon Templar, featured in a series of novels and short stories by Leslie Charteris published between 1928 and 1963. After that date, other authors collaborated with Charteris on books until 1983; two additional works produced without Charteris's participation were published in 1997. The character has also been portrayed in motion pictures, radio dramas, comic strips, comic books and three television series.
Simon Templar is a Robin Hood-like figure known as the Saint – plausibly from his initials, but the exact reason for his nickname is unknown (although the reader is told that he was given it at the age of nineteen). Templar has aliases, often using the initials S.T. such as "Sebastian Tombs" or "Sugarman Treacle". Blessed with boyish humour, he makes humorous and off-putting remarks and leaves a "calling card" at his "crimes," a stick figure of a man with a halo over his head. This is used as the logo of the books, the movies, and the three TV series. He is described as "a buccaneer in the suits of Savile Row, amused, cool, debonair, with hell-for-leather blue eyes and a saintly smile".
His origin remains a mystery; he is explicitly British, but in early books (e.g. "Meet the Tiger") there are references which suggest that he had spent some time in the United States battling Prohibition bad guys. Presumably, his acquaintance with Bronx sidekick Hoppy Uniatz dates from this period. In the books, his income is derived from the pockets of the "ungodly" (as he terms those who live by a lesser moral code than his own), whom he is given to "socking on the boko." There are references to a "ten percent collection fee" to cover expenses when he extracts large sums from victims, the remainder being returned to the owners, given to charity, shared among Templar's colleagues, or some combination of those possibilities.
Templar's targets include corrupt politicians, warmongers, and other low life. "He claims he's a Robin Hood," says one victim, "but to me he's just a robber and a hood." Robin Hood appears to be one inspiration for the character; Templar stories were often promoted as featuring "The Robin Hood of modern crime," and this phrase to describe Templar appears in several stories. A term used by Templar to describe his acquisitions is "boodle," a term also applied to the short story collection.
The Saint has a dark side, as he is willing to ruin the lives of the "ungodly," and even kill them, if he feels that more innocent lives can be saved. In the early books, Templar refers to this as murder, although he considers his actions justified and righteous, a view usually shared by partners and colleagues. Several adventures centre on his intention to kill. (For example, "Arizona" in "The Saint Goes West" has Templar planning to kill a Nazi scientist.)
During the 1920s and early 1930s, the Saint is fighting European arms dealers, drug runners, and white slavers while based in his London home. His battles with Rayt Marius mirror the 'four rounds with Carl Petersen' of Hugh "Bull-dog" Drummond. During the first half of the 1940s, Charteris cast Templar as a willing operative of the American government, fighting Nazi interests in the United States during World War II.
Beginning with the "Arizona" novella, Templar fights his own war against Germany. "The Saint Steps In" reveals that Templar is operating on behalf of a mysterious American government official known as Hamilton, who appears again in the next WWII-era Saint book, "The Saint on Guard"; Templar continues to act as a secret agent for Hamilton in the first post-war novel, "The Saint Sees It Through". The later books move away from confidence games, murder mysteries, and wartime espionage, recasting Templar as a global adventurer.
According to "Saint" historian Burl Barer, Charteris made the decision to remove Templar from his usual confidence-game trappings, not to mention his usual co-stars Uniatz, girlfriend Patricia Holm, valet Orace, and police foil Claud Eustace Teal, as they were all inappropriate for the post-war stories he was writing.
Although the Saint functions as an ordinary detective in some stories, others depict ingenious plots to get even with vanity publishers and other rip-off artists, greedy bosses who exploit their workers, con men, etc.
The Saint has many partners, though none last throughout the series. For the first half of the series, until the late 1940s, the most recurrent is Patricia Holm, his girlfriend, who was introduced in the first story, the 1928 novel "Meet the Tiger," in which she shows herself a capable adventurer. Holm appeared erratically throughout the series, sometimes disappearing for books at a time. Templar and Holm lived together at a time when common-law relationships were uncommon and, in some areas, illegal.
They have an easy, non-binding relationship, as Templar is shown flirting with other women from time to time. However, his heart remains true to Holm in the early books, culminating in his considering marriage in the novella "The Melancholy Journey of Mr. Teal," only to have Holm say she had no interest in marrying. Holm disappeared in the late 1940s, and according to Barer's history of "The Saint," Charteris refused to allow Templar a steady girlfriend, or Holm to return. (However, according to the Saintly Bible website, Charteris did write a film story that would have seen Templar encountering a son he had had with Holm.) Holm's final appearance as a character was in the short stories "Iris," "Lida," and "Luella," contained within the 1948 collection "Saint Errant;" the next direct reference to her does not appear in print until the 1983 novel "Salvage for the Saint."
Another recurring character, Scotland Yard Inspector Claud Eustace Teal, could be found attempting to put the Saint behind bars, although in some books they work in partnership. In "The Saint in New York," Teal's American counterpart, NYPD Inspector John Henry Fernack, was introduced, and he would become, like Teal, an Inspector Lestrade-like foil and pseudo-nemesis in a number of books, notably the American-based World War II novels of the 1940s.
The Saint had a band of compatriots, including Roger Conway, Norman Kent, Archie Sheridan, Richard "Dicky" Tremayne (a name that appeared in the 1990s TV series, "Twin Peaks"), Peter Quentin, Monty Hayward, and his ex-military valet, Orace.
In later stories, the dim-witted and constantly soused but reliable American thug Hoppy Uniatz was at Templar's side. Of the Saint's companions, only Norman Kent was killed during an adventure (he sacrifices himself to save Templar in the novel "The Last Hero"); the other males are presumed to have settled down and married (two to former female criminals: Dicky Tremayne to "Straight Audrey" Perowne and Peter Quentin to Kathleen "The Mug" Allfield; Archie Sheridan is mentioned to have married in "The Lawless Lady" in "Enter the Saint," presumably to Lilla McAndrew after the events of the story "The Wonderful War" in "Featuring the Saint").
Charteris gave Templar interests and quirks as the series went on. Early talents as an amateur poet and songwriter were displayed, often to taunt villains, though the novella "The Inland Revenue" established that poetry was also a hobby. That story revealed that Templar wrote an adventure novel featuring a South American hero not far removed from The Saint himself.
Templar also occasionally breaks the fourth wall in an almost metafictional sense, referring to being part of a story and remarking in one early story that he cannot be killed so early in the narrative; the 1960s television series would likewise have Templar address viewers. Charteris's narration frequently breaks the fourth wall as well, referring to the "chronicler" of the Saint's adventures and directly addressing the reader. In the story "The Sizzling Saboteur" in "The Saint on Guard", Charteris inserts his own name. The story "Judith" in "Saint Errant" contains the line, "'This,' the Saint said to nobody in particular, 'sounds like one of those stories that fellow Charteris might write.'" Furthermore, in the 1955 story "The Unkind Philanthropist," published in the collection "The Saint on the Spanish Main," Templar states outright that (in his fictional universe) his adventures are indeed written about by a man named Leslie Charteris.
The origins of the Saint can be found in early works by Charteris, some of which predated the first Saint novel, 1928's "Meet the Tiger", or were written after it but before Charteris committed to writing a Saint series. Burl Barer reveals that an obscure early work, "Daredevil", not only featured a heroic lead who shared "Saintly" traits (down to driving the same make of car) but also shared his adventures with Inspector Claud Eustace Teal—a character later a regular in Saint books. Barer writes that several early Saint stories were rewritten from non-Saint stories, including the novel "She Was a Lady", which appeared in magazine form featuring a different lead character.
Charteris utilized three formats for delivering his stories. Besides full-length novels, he wrote novellas, for the most part published in magazines; he notably developed the character in the pages of the British story-paper "The Thriller" under the tutelage of Monty Hayden, who was developing the "desperado" character type for the magazine. These novellas were later collected in hardback volumes of two or three stories each. He also wrote short stories featuring the character, again mostly for magazines and later compiled into omnibus editions. In later years these short stories carried a common theme, such as the women Templar meets or the exotic places he visits. With the exception of "Meet the Tiger", chapter titles of Templar novels usually contain a phrase describing the events of the chapter; for example, Chapter Four of "Knight Templar" is titled "How Simon Templar dozed in the Green Park and discovered a new use for toothpaste".
Although Charteris's novels and novellas had more conventional thriller plots than his confidence game short stories, both novels and stories are admired. As in the past, the appeal lies in the vitality of the character, a hero who can go into a brawl and come out with his hair combed and who, faced with death, lights a cigarette and taunts his enemy with the signature phrase "As the actress said to the bishop ..."
The period of the books begins in the 1920s and moves to the 1970s as the 50 books progress (the character being seemingly ageless). In early books most activities are illegal, although directed at villains. In later books, this becomes less so. In books written during World War II, the Saint was recruited by the government to help track spies and similar undercover work. Later he became a cold warrior fighting Communism. The quality of writing also changes; early books have a freshness which becomes replaced by cynicism in later works. A few Saint stories crossed into science fiction and fantasy, "The Man Who Liked Ants" and the early novel "The Last Hero" being examples; one Saint short story, "The Darker Drink" (also published as "Dawn"), was even published in the October 1952 issue of "The Magazine of Fantasy & Science Fiction". When early Saint books were republished in the 1960s to the 1980s, it was not uncommon to see freshly written introductions by Charteris apologizing for the out-of-date tone; according to a Charteris "apology" in a 1969 paperback of "Featuring the Saint", he attempted to update some earlier stories when they were reprinted but gave up and let them sit as period pieces. The 1963 edition of the short story collection "The Happy Highwayman" contains examples of abandoned revisions; in one story published in the 1930s ("The Star Producers"), references to actors of the 1930s were replaced for 1963 with names of current movie stars; another 1930s-era story, "The Man Who Was Lucky", added references to atomic power. Although Templar is depicted as ageless, Charteris occasionally acknowledged the passing of time for those around him, such as in the 1956 short story collection "The Saint Around the World" which features the retirement of Inspector Teal in one story.
Charteris started retiring from writing books following 1963's "The Saint in the Sun". The next book to carry Charteris's name, 1964's "Vendetta for the Saint", was written by science fiction author Harry Harrison, who had worked on the "Saint" comic strip, after which Charteris edited and revised the manuscript. Between 1964 and 1983, another 14 "Saint" books would be published, credited to Charteris but written by others. In his introduction to the first, "The Saint on TV", Charteris called these volumes a team effort in which he oversaw selection of stories, initially adaptations of scripts written for the 1962–1969 TV series "The Saint", and with Fleming Lee writing the adaptations (other authors took over from Lee). Charteris and Lee collaborated on two Saint novels in the 1970s, "The Saint in Pursuit" (based on a story by Charteris for the "Saint" comic strip) and "The Saint and the People Importers". The "team" writers were usually credited on the title page, if not the cover. One later volume, "Catch the Saint", was an experiment in returning The Saint to his period, prior to World War II (as opposed to recent Saint books set in the present day). Several later volumes also adapted scripts from the 1970s revival TV series "Return of the Saint".
The last "Saint" volume in the line of books starting with "Meet the Tiger" in 1928 was "Salvage for the Saint", published in 1983. According to the Saintly Bible website, every Saint book published between 1928 and 1983 saw the first edition issued by Hodder & Stoughton in the United Kingdom (a company that originally published only religious books) and The Crime Club (an imprint of Doubleday that specialized in mystery and detective fiction) in the United States. For the first 20 years, the books were first published in Britain, with the United States edition following up to a year later. By the late 1940s to early 1950s, this situation had been reversed. In one case—"The Saint to the Rescue"—a British edition did not appear until nearly two years after the American one.
French language books published over 30 years included translated volumes of Charteris originals as well as novelisations of radio scripts from the English-language radio series and comic strip adaptations. Many of these books credited to Charteris were written by others, including Madeleine Michel-Tyl.
Charteris died in 1993. Two additional Saint novels appeared around the time of the 1997 film starring Val Kilmer: a novelisation of the film (which had little connection to the Charteris stories) and "Capture the Saint", a more faithful work published by The Saint Club and originated by Charteris in 1936. Both books were written by Burl Barer, who in the early 1990s published a history of the character in books, radio, and television.
Charteris wrote 14 novels between 1928 and 1971 (the last two co-written), 34 novellas, and 95 short stories featuring Simon Templar. Between 1963 and 1997, an additional seven novels and fourteen novellas were written by others.
In 2014, all the Saint books from "Enter the Saint" to "Salvage for the Saint" (but not "Meet the Tiger" nor Burl Barer's "Capture the Saint") were republished in both the United Kingdom and United States.
Several radio drama series were produced in North America, Ireland, and Britain. The earliest was for Radio Éireann's Radio Athlone in 1940 and starred Terence De Marney. Both NBC and CBS produced "Saint" series during 1945, starring Edgar Barrier and Brian Aherne. Many early shows were adaptations of published stories, although Charteris wrote several storylines for the series which were novelised as short stories and novellas.
The longest-running radio incarnation was Vincent Price, who played the character in a series between 1947 and 1951 on three networks: CBS, Mutual and NBC. Like "The Whistler", the program had an opening whistle theme with footsteps. Price left in May 1951, to be replaced by Tom Conway, who played the role for several more months; his brother, George Sanders, had played Templar on film. For more about the Saint on American radio, see "The Saint (radio program)".
The next English-language radio series aired on Springbok Radio in South Africa between 1953 and 1957. These were fresh adaptations of the original stories and starred Tom Meehan. Around 1965 to 1966 the South African version of Lux Radio Theatre produced a single dramatization of "The Saint". The English service of South Africa produced another series of radio adventures for six months in 1970–1971. The most recent English-language incarnation was a series of three one-hour-long radio plays on BBC Radio 4 in 1995, all adapted from Charteris novels: "Saint Overboard", "The Saint Closes The Case" and "The Saint Plays With Fire", starring Paul Rhys as Templar.
Not long after creating The Saint, Charteris began a long association with Hollywood as a screenwriter. He was successful in getting a major studio, RKO Radio Pictures, interested in a film based on one of his works. The first, "The Saint in New York" in 1938, based on the 1935 novel of the same name, starred Louis Hayward as Templar and Jonathan Hale as Inspector Henry Fernack, the American counterpart of Mr Teal.
The film was a success and RKO began a Saint series. Some of the films were based on Charteris's original novels or novellas; others were original stories based upon outlines by Charteris. George Sanders took over the leading role. Sanders's offhand manner captured the urbane yet daring qualities of the Saint character, but after five films RKO assigned him to a new series, "The Falcon", in which Sanders played the same kind of debonair adventurer. Charteris saw this as both plagiarism and an attempt to deprive him of royalties, and he sued RKO.
Hugh Sinclair replaced Sanders in 1941 and portrayed Templar in two films, both produced by RKO's British unit (the second film was ultimately released by Republic Pictures in 1943).
In 1953, British Hammer Film Productions produced "The Saint's Return" (known as "The Saint's Girl Friday" in the United States), for which Louis Hayward returned to the role. This was followed by an unsuccessful French production in 1960.
In the mid-1980s, the "National Enquirer" and other newspapers reported that Moore was planning to produce a movie based on "The Saint" with Pierce Brosnan as Templar, but it was never made. (Ironically Brosnan was to be Moore's successor as James Bond, after Timothy Dalton left the role.) In 1989, six movies were made by Taffner starring Simon Dutton. These were syndicated in the United States as part of a series of films titled "Mystery Wheel of Adventure," while in the United Kingdom they were shown as a series on ITV.
In 1991, as detailed by Burl Barer in his 1992 history of "The Saint," plans were announced for a series of motion pictures. Ultimately, however, no such franchise appeared. A feature film "The Saint" starring Val Kilmer was released in 1997, but it diverged in style from the Charteris books, although it did revive Templar's use of aliases. Kilmer's Saint is unable to defeat a Russian gangster in hand-to-hand combat and is forced to flee; this would have been unthinkable in a Charteris tale. Whereas the original Saint resorted to aliases that had the initials S.T., Kilmer's character used Christian saints, regardless of initials. This Saint refrained from killing, and even his main enemies live to stand trial, whereas Charteris's version had no qualms about taking another life. Kilmer's Saint is presented as a master of disguise, but Charteris's version hardly used the sophisticated ones shown in this film. The film mirrored aspects of Charteris's own life, notably his origins in the Far East, though not in an orphanage as the film portrayed. Sir Roger Moore features throughout in cameo as the BBC Newsreader heard in Simon Templar's Volvo.
Actor Roger Moore brought Simon Templar to the new medium of television in the series "The Saint", which ran from 1962 to 1969, and Moore remains the actor most closely identified with the character. According to the book Spy Television by Wesley Britton, the first actor offered the role was Patrick McGoohan of "Danger Man" and "The Prisoner."
Since Moore, other actors have played him in later series, notably "Return of the Saint" (1978–1979) starring Ian Ogilvy; the series ran for one season on CBS and ITV. A television pilot for a series to be called "The Saint in Manhattan," starring Australian actor Andrew Clarke, was shown on CBS in 1987 as part of the "CBS Summer Playhouse;" the pilot was produced by Donald L. Taffner but was not picked up as a series.
Inspector John Fernack of the NYPD, played by Kevin Tighe, made his first film appearance since the 1940s in that production, while Templar (sporting a moustache) got about in a black Lamborghini bearing the ST1 licence plate.
Since the 1997 Val Kilmer film "The Saint", there have been several failed attempts at producing pilots for potential new "Saint" television series. On 13 March 2007, TNT said it was developing a one-hour series to be executive produced by William J. MacDonald and produced by Jorge Zamacona. James Purefoy was announced as the new Simon Templar. Production of the pilot, which was to have been directed by Barry Levinson, did not go ahead. Another attempt at production was planned for 2009 with Scottish actor Dougray Scott starring as Simon Templar. Roger Moore announced on his website that he would be appearing in a small role in the new production, which was being produced by his son, Geoffrey Moore.
It was announced in December 2012 that a third attempt would be made to produce a pilot for a potential TV series. This time, English actor Adam Rayner was cast as Simon Templar and American actress Eliza Dushku as Patricia Holm (a character from the novels never before portrayed on television and only once in the films), with Roger Moore producing. Unlike the prior attempts, production of the Rayner pilot did commence in December 2012 and continued into early 2013, with Moore and Ogilvy making cameo appearances, according to a cast list posted on the official Leslie Charteris website and subsequently confirmed in the trailer that was released. The pilot was not picked up for a series and was broadcast as the TV movie "The Saint" on 11 July 2017.
Since 1938, numerous films have been produced in the United States, France and Australia based to varying degrees upon the Saint. A few were based, usually loosely, upon Charteris's stories, but most were original.
This is a list of the films featuring Simon Templar and of the actors who played the Saint:
In the 1930s, RKO purchased the rights to produce a film adaptation of "Saint Overboard," but no such movie was ever produced.
Three of the actors to play Templar — Roger Moore, Ian Ogilvy, and Simon Dutton — have been appointed vice presidents of "The Saint Club" that was founded by Leslie Charteris in 1936.
In the late 1940s Charteris and sometime Sherlock Holmes scriptwriter Denis Green wrote a stage play titled "The Saint Misbehaves".
It was never publicly performed, as soon after writing it Charteris decided to focus on non-Saint work. For many years it was thought to be lost; however, two copies are known to exist in private hands, and correspondence relating to the play can be found in the Leslie Charteris Collection at Boston University.
The Saint appeared in a long-running series starting as a daily comic strip 27 September 1948 with a Sunday added on 20 March the following year. The early strips were written by Leslie Charteris, who had previous experience writing comic strips, having replaced Dashiell Hammett as the writer of the "Secret Agent X-9" strip. The original artist was Mike Roy. In 1951, when John Spranger replaced Roy as the artist, he altered the Saint's appearance by depicting him with a beard. Bob Lubbers illustrated "The Saint" in 1959 and 1960. The final two years of the strip were drawn by Doug Wildey before it came to an end on 16 September 1961.
Concurrent with the comic strip, Avon Comics published 12 issues of a "The Saint" comic book between 1947 and 1952 (some of these stories were reprinted in the 1980s). Some issues included uncredited short stories; an additional short story, "Danger No. 5", appeared as filler in issue 2 of the 1952 war comic "Captain Steve Savage".
The 1960s TV series is unusual in that it is one of the few major programs of its genre that was not adapted as a comic book in the United States.
In Sweden, a long-running Saint comic book was published from 1966 to 1985 under the title "Helgonet". It originally reprinted the newspaper strip, but soon original stories were commissioned for "Helgonet". These stories were also later reprinted in other European countries. Two of the main writers were Norman Worker and Donne Avenell; the latter also co-wrote the novels "The Saint and the Templar Treasure" and the novella collection "Count on the Saint", while Worker contributed to the novella collection "Catch the Saint".
A new American comic book series was launched by Moonstone in the summer of 2012, but it never went beyond a single promotional issue "zero".
The original Saint novellas first appeared in "The Thriller" (1929–1940), edited by Monty Hayden, a friend of the author, who was sometimes given a thinly disguised role in the early stories. Charteris also edited or oversaw several magazines that tied in with the Saint. The first of these were anthologies titled "The Saint's Choice" that ran for seven issues in 1945–46. A few years later Charteris launched "The Saint Detective Magazine" (later titled "The Saint Mystery Magazine" and "The Saint Magazine"), which ran for 141 issues between 1953 and 1967, with a separate British edition that ran just as long but published different material. In most issues of "The Saint's Choice" and the later magazines, Charteris included at least one Saint story, usually one previously published in one of his books but occasionally an original. In several mid-1960s issues, however, he substituted "Instead of the Saint", a series of essays on topics of interest to him. The rest of the material in the magazines consisted of novellas and short stories by other mystery writers of the day. An Australian edition was also published for a few years in the 1950s. In 1984 Charteris attempted to revive the "Saint" magazine, but it ran for only three issues.
Leslie Charteris himself portrayed The Saint in a photo play in "Life magazine": "The Saint Goes West".
Most Saint books were collections of novellas or short stories, some of which were published individually either in magazines or in smaller paperback form. Many of the books have also been published under different titles over the years; the titles used here are the more common ones for each book.
From 1964 to 1983, the Saint books were collaborative works; Charteris acted in an editorial capacity and received front cover author credit, while other authors wrote these stories and were credited inside the book. These collaborative authors are noted. (Sources: Barer and the editions themselves.)
A number of "Saint" adventures were published in French over a 30-year period, most of which have yet to be published in English. Many of these stories were ghostwritten by Madeleine Michel-Tyl and credited to Charteris (who exercised some editorial control). The French books were generally novelisations of scripts from the radio series, or novels adapted from stories in the American "Saint" comic strip. One of the writers who worked on the French series, Fleming Lee, later wrote for the English-language books.
Burl Barer's history of the Saint identifies two manuscripts that to date have not been published. The first is a collaboration between Charteris and Fleming Lee called "Bet on the Saint" that was rejected by Doubleday, the American publishers of the Saint series. Charteris, Barer writes, chose not to submit it to his United Kingdom publishers, Hodder & Stoughton. The rejection of the manuscript by Doubleday meant that The Crime Club's long-standing right of first refusal on any new Saint works was now ended and the manuscript was then submitted to other United States publishers, without success. Barer also tells of a 1979 novel titled "The Saint's Lady" by a Scottish fan, Joy Martin, which had been written as a present for and as a tribute to Charteris. Charteris was impressed by the manuscript and attempted to get it published, but it too was ultimately rejected. The manuscript, which according to Barer is in the archives of Boston University, features the return of Patricia Holm.
According to the Saintly Bible website, at one time Leslie Charteris biographer Ian Dickerson was working on a manuscript (based upon a film story idea by Charteris) for a new novel titled "Son of the Saint" in which Templar shares an adventure with his son by Patricia Holm. The book has, to date, not been published.
A fourth unpublished manuscript, this time written by Charteris himself, titled "The Saint's Second Front" was written during the Second World War but was rejected at the time; believed lost for decades, it emerged at an auction in 2017.
In the 2003 BBC documentary series "Heroes and Weapons of World War II", in the episode titled "The Man Who Designed the Spitfire" (Episode 2), at approximately 18 minutes into the film an RAF pilot is seen at rest in his dispersal hut with a large Saint stick-man logo on his flying gear. He is perhaps showing some personal identification with Simon Templar's own war against Germany in the novella "Arizona".
In 1980 English punk band Splodgenessabounds released a single "Simon Templer" (misspelling intentional). It reached number 7 in the UK charts. The song appears to mock the TV character, concluding "I think Simon's a bit of a bore/Ian Ogilvy and Podgy Moore."
Stuttering
Stuttering, also known as stammering and dysphemia, is a speech disorder in which the flow of speech is disrupted by involuntary repetitions and prolongations of sounds, syllables, words or phrases as well as involuntary silent pauses or blocks in which the person who stutters is unable to produce sounds. The term "stuttering" is most commonly associated with involuntary sound repetition, but it also encompasses the abnormal hesitation or pausing before speech, referred to by people who stutter as "blocks", and the prolongation of certain sounds, usually vowels or semivowels. According to Watkins et al., stuttering is a disorder of "selection, initiation, and execution of motor sequences necessary for fluent speech production". For many people who stutter, repetition is the main problem. The term "stuttering" covers a wide range of severity, from barely perceptible impediments that are largely cosmetic to severe symptoms that effectively prevent oral communication. Approximately four times as many men as women stutter; about 70 million people worldwide, or roughly 1% of the world's population, are affected.
The impact of stuttering on a person's functioning and emotional state can be severe. This may include fears of having to enunciate specific vowels or consonants, fears of being caught stuttering in social situations, self-imposed isolation, anxiety, stress, shame, low self-esteem, being a possible target of bullying (especially in children), having to use word substitution and rearrange words in a sentence to hide stuttering, or a feeling of "loss of control" during speech. Stuttering is sometimes popularly seen as a symptom of anxiety, but there is no direct correlation in that direction (though as mentioned the inverse can be true, as social anxiety may develop in individuals as a result of their stuttering).
Stuttering is generally not a problem with the physical production of speech sounds or putting thoughts into words. Acute nervousness and stress are not thought to cause stuttering, but they can trigger stuttering in people who have the speech disorder, and living with a stigmatized disability can result in anxiety and high allostatic stress load (chronic nervousness and stress) that reduce the amount of acute stress necessary to trigger stuttering in any given person who stutters, worsening the problem in the manner of a positive feedback system; the name 'stuttered speech syndrome' has been proposed for this condition. Neither acute nor chronic stress, however, itself creates any predisposition to stuttering.
The disorder is also "variable", which means that in certain situations, such as talking on the telephone or in a large group, the stuttering might be more or less severe, depending on whether or not the stutterer is self-conscious about their stuttering. Stutterers often find that their stuttering fluctuates and that they have "good" days, "bad" days and "stutter-free" days; the fluctuations can be random. Although the exact etiology, or cause, of stuttering is unknown, both genetics and neurophysiology are thought to contribute. There are many treatments and speech therapy techniques available that may help decrease speech disfluency in some people who stutter, sometimes to the point where an untrained ear cannot identify a problem; however, there is essentially no cure for the disorder at present. The severity of a person's stuttering generally corresponds to the amount of speech therapy needed to decrease disfluency; severe stuttering requires long-term therapy and hard work to reduce.
Common stuttering behaviors are observable signs of speech disfluencies, for example: repeating sounds, syllables, words or phrases, silent blocks and prolongation of sounds. These differ from the normal disfluencies found in all speakers in that stuttering disfluencies may last longer, occur more frequently, and are produced with more effort and strain. Stuttering disfluencies also vary in quality: common disfluencies tend to be repeated movements, fixed postures, or superfluous behaviors. Each of these three categories is composed of subgroups of stutters and disfluencies.
The severity of a stutter is often not constant even for people who severely stutter. People who stutter commonly report dramatically increased fluency when talking in unison with another speaker, copying another's speech, whispering, singing, and acting or when talking to pets, young children, or themselves. Other situations, such as public speaking and speaking on the telephone, are often greatly feared by people who stutter, and increased stuttering is reported.
Stuttering can have a significant negative cognitive and affective impact on the person who stutters. It has been described in terms of the analogy to an iceberg, with the immediately visible and audible symptoms of stuttering above the waterline and a broader set of symptoms such as negative emotions hidden below the surface. Feelings of embarrassment, shame, frustration, fear, anger, and guilt are frequent in people who stutter, and may actually increase tension and effort, leading to increased stuttering. With time, continued exposure to difficult speaking experiences may crystallize into a negative self-concept and self-image. Many perceive stutterers as less intelligent due to their disfluency; however, as a group, individuals who stutter tend to be of above average intelligence. A person who stutters may project his or her attitudes onto others, believing that they think he or she is nervous or stupid. Such negative feelings and attitudes may need to be a major focus of a treatment program.
Many people who stutter report a high emotional cost, including jobs or promotions not received, as well as relationships broken or not pursued.
Linguistic tasks can invoke speech disfluency, and people who stutter may experience varying disfluency depending on the task. Tasks that trigger disfluency usually require controlled language processing, which involves linguistic planning. Many individuals who stutter do not demonstrate disfluencies in tasks that allow for automatic processing without substantial planning. For example, singing "Happy Birthday" or other relatively common, repeated linguistic discourses can be fluid in people who stutter. Such tasks reduce semantic, syntactic, and prosodic planning, whereas spontaneous, "controlled" speech or reading aloud requires thoughts to be transformed into linguistic material and thereafter into syntax and prosody. Some researchers hypothesize that the circuitry activated by controlled language consistently malfunctions in people who stutter, whereas people who do not stutter only sometimes display disfluent speech and abnormal circuitry.
No single, exclusive cause of developmental stuttering is known. A variety of hypotheses and theories suggests multiple factors contributing to stuttering. Among these is the strong evidence that stuttering has a genetic basis. Children who have first-degree relatives who stutter are three times as likely to develop a stutter. However, twin and adoption studies suggest that genetic factors interact with environmental factors for stuttering to occur, and many people who stutter have no family history of the disorder.
There is evidence that stuttering is more common in children who also have concurrent speech, language, learning or motor difficulties. Robert West, a pioneer of genetic studies in stuttering, has suggested that the presence of stuttering is connected to the fact that articulated speech is the last major acquisition in human evolution.
Another view is that a stutter or stammer is a complex tic. This view rests on the following reasoning. Stuttering always arises from the repetition of sounds or words. Young children enjoy repetition, and the more tense they feel, the more they rely on this outlet for their tension – an understandable and quite normal reaction. They are capable of repeating all types of behaviour. The more tension is felt, the less one likes change; and the greater the change, the greater the repetition can be. So, when a three-year-old finds he has a new baby brother or sister, he may start repeating sounds. The repetitions can become conditioned and automatic, and the ensuing struggles against them result in prolongations and blocks in his speech. More boys stammer than girls, in a ratio of about 3–4 boys to 1 girl. On this view, the difference arises because the male hypothalamic–pituitary–adrenal (HPA) axis is more active: males release more cortisol than females under the same provocation, and may therefore become tense, anxious, and repetitive.
In a 2010 article, three genes were found by Dennis Drayna and team to correlate with stuttering: GNPTAB, GNPTG, and NAGPA. Researchers estimated that alterations in these three genes were present in 9% of people who stutter who have a family history of stuttering.
For some people who stutter, congenital factors may play a role, including physical trauma at or around birth, learning disabilities, and cerebral palsy. For others, stressful situations such as the birth of a sibling, moving house, or a sudden growth in linguistic ability may have an added impact.
There is clear empirical evidence for structural and functional differences in the brains of people who stutter. Research is complicated somewhat by the possibility that such differences could be consequences of stuttering rather than a cause, but recent research on older children confirms structural differences, thereby strengthening the argument that at least some of the differences are not a consequence of stuttering.
Auditory processing deficits have also been proposed as a cause of stuttering. Stuttering is less prevalent in deaf and hard-of-hearing individuals, and stuttering may be reduced when auditory feedback is altered, such as by masking, delayed auditory feedback (DAF), or frequency altered feedback. There is some evidence that the functional organization of the auditory cortex may be different in people who stutter.
There is evidence of differences in linguistic processing between people who stutter and people who do not stutter. Brain scans of adults who stutter have found greater activation in the right hemisphere, which is associated with emotions, than in the left hemisphere, which is associated with speech. In addition, reduced activation in the left auditory cortex has been observed.
The "capacities and demands" model has been proposed to account for the heterogeneity of the disorder. In this approach, speech performance varies depending on the "capacity" that the individual has for producing fluent speech, and the "demands" placed upon the person by the speaking situation. Capacity for fluent speech may be affected by a predisposition to the disorder, auditory processing or motor speech deficits, and cognitive or affective issues. Demands may be increased by internal factors such as lack of confidence or self esteem or inadequate language skills or external factors such as peer pressure, time pressure, stressful speaking situations, insistence on perfect speech, and the like. In stuttering, the severity of the disorder is seen as likely to increase when demands placed on the person's speech and language system exceed their capacity to deal with these pressures. However, the precise nature of the capacity or incapacity has not been delineated.
Though neuroimaging studies have not yet found specific neural correlates, there is much evidence that the brains of adults who stutter differ from the brains of adults who do not stutter. Several neuroimaging studies have emerged to identify areas associated with stuttering. In general, during stuttering, cerebral activities change dramatically, in comparison to silent rest or fluent speech, between people who stutter and people who do not stutter. There is evidence that people who stutter activate motor programs before the articulatory or linguistic processing is initiated. Brain imaging studies have primarily been focused on adults. However, the neurological abnormalities found in adults do not establish whether childhood stuttering caused these abnormalities or whether the abnormalities cause stuttering.
Studies utilizing positron emission tomography (PET) have found that, during tasks that invoke disfluent speech, people who stutter show hypoactivity in cortical areas associated with language processing, such as Broca's area, but hyperactivity in areas associated with motor function. One such study, which evaluated the stutter period, found overactivation in the cerebrum and cerebellum, and relative deactivation of the left-hemisphere auditory areas and frontal temporal regions.
Functional magnetic resonance imaging (fMRI) has found abnormal activation in the right frontal operculum (RFO), an area associated with time-estimation tasks that is occasionally involved in complex speech.
Researchers have explored temporal cortical activations by utilizing magnetoencephalography (MEG). In single-word-recognition tasks, people who do not stutter showed cortical activation first in occipital areas, then in left inferior-frontal regions such as Broca's area, and finally, in motor and premotor cortices. The people who stutter also first had cortical activation in the occipital areas but the left inferior-frontal regions were activated only after the motor and premotor cortices were activated.
During speech production, people who stutter show overactivity in the anterior insula, cerebellum and bilateral midbrain. They show underactivity in the ventral premotor, Rolandic opercular and sensorimotor cortex bilaterally and Heschl's gyrus in the left hemisphere. Additionally, speech production in people who stutter yields underactivity in cortical motor and premotor areas.
Much evidence from neuroimaging techniques has supported the theory that the right hemisphere of people who stutter interferes with left-hemisphere speech production.
Adults who stutter show anatomical differences in the gyri of the perisylvian frontotemporal areas. Using voxel-based morphometry (VBM), increased white matter has been found in the right hemisphere, including the region of the superior temporal gyrus. Conversely, reduced white matter is found in the left inferior arcuate fasciculus, which connects the temporal and frontal areas, in adults who stutter.
Results have shown less coordination between the speech motor and planning regions in the brain's left hemisphere of men and women who stutter, compared with a non-stuttering control group. Anatomical connectivity between the speech motor and planning regions is weaker in adults who stutter, especially women. Men who stutter seem to have more right-sided motor connectivity, whereas women who stutter have less connectivity with the right motor regions.
In non-stuttering, typical speech, PET scans show that both hemispheres are active but that the left hemisphere may be more active. By contrast, people who stutter show more activity in the right hemisphere, suggesting that it might be interfering with left-hemisphere speech production. Another comparison of scans found that anterior forebrain regions are disproportionately active in stuttering subjects, while post-Rolandic regions are relatively inactive.
Bilateral increases and unusual right–left asymmetry have been found in the planum temporale when comparing people who stutter with people who do not stutter. These studies have also found anatomical differences in the Rolandic operculum and arcuate fasciculus.
The corpus callosum transfers information between the left and right cerebral hemispheres. Its rostrum and anterior mid-body sections are larger in adults who stutter than in normally fluent adults. This difference may reflect unusual brain organization in adults who stutter, or may result from how adults who stutter perform language-relevant tasks. Furthermore, previous research has found that the cerebral hemispheres of adults who stutter contain uncommon proportions and allocations of gray- and white-matter tissue.
Recent studies have found that adults who stutter have elevated levels of the neurotransmitter dopamine, and have thus found dopamine antagonists that reduce stuttering (see anti-stuttering medication below). Overactivity of the midbrain has been found at the level of the substantia nigra, extending to the red nucleus and subthalamic nucleus, which all contribute to the production of dopamine. However, increased dopamine does not imply increased excitatory function, since dopamine's effect can be either excitatory or inhibitory depending upon which dopamine receptors (labelled D1–D5) have been stimulated.
Some characteristics of stuttered speech are not easy for listeners to detect. As a result, diagnosing stuttering requires the skills of a certified speech-language pathologist (SLP). Diagnosis of stuttering employs information both from direct observation of the individual and from the individual's background, through a case history. Information from both sources should consider factors such as the individual's age, when the disfluencies have occurred, and any other impediments. The SLP may collect a case history on the individual through a detailed interview or conversation with the parents (if the client is a child). They may also observe parent–child interactions and the speech patterns of the child's parents. The overall goal of assessment for the SLP is (1) to determine whether a speech disfluency exists, and (2) to assess whether its severity warrants concern for further treatment.
During direct observation of the client, the SLP will observe various aspects of the individual's speech behaviors. In particular, the therapist might test for factors including the types of disfluencies present (using a test such as the Disfluency Type Index (DTI)), their frequency and duration (number of iterations, percentage of syllables stuttered (%SS)), and speaking rate (syllables per minute (SPM), words per minute (WPM)). They may also test for naturalness and fluency in speaking (naturalness rating scale (NAT), test of childhood stuttering (TOCS)) and physical concomitants during speech ("Riley’s Stuttering Severity Instrument Fourth Edition (SSI-4)"). They might also employ a test to evaluate the severity of the stuttering and predictions for its course. One such test includes the stuttering prediction instrument for young children (SPI), which analyzes the child's case history, part-word repetitions and prolongations, and stuttering frequency in order to determine the severity of the disfluency and its prognosis for chronicity for the future.
Stuttering is a multifaceted, complex disorder that can impact an individual's life in a variety of ways. Children and adults are monitored and evaluated for evidence of possible social, psychological or emotional signs of stress related to their disorder. Some common assessments of this type measure factors including: anxiety (Endler multidimensional anxiety scales (EMAS)), attitudes (personal report of communication apprehension (PRCA)), perceptions of self (stutterers’ self-rating of reactions to speech situations (SSRSS)), quality of life (overall assessment of the speaker's experience of stuttering (OASES)), behaviors (older adult self-report (OASR)), and mental health (composite international diagnostic interview (CIDI)).
The SLP will then attempt to combine the information garnered from the client's case study along with the information acquired from the assessments in order to make a final decision regarding the existence of a fluency disorder and determine the best course of treatment for the client.
Stuttering can also be diagnosed per the DSM-5 diagnostic codes by clinical psychologists with adequate expertise. The most recent version of the DSM-5 describes this speech disorder as "Childhood-Onset Fluency Disorder (Stuttering)" for developmental stuttering, and "Adult-onset Fluency Disorder". However, the specific rationale for this change from the DSM-IV is ill-documented in the APA's published literature, and is felt by some to promote confusion between the very different terms "fluency" and "disfluency".
Preschool-aged children often have difficulties with speech concerning motor planning and execution; this often manifests as disfluencies related to speech development (referred to as normal disfluency or "other disfluencies"). This type of disfluency is a normal part of speech development and is temporarily present in preschool-aged children who are learning to speak. These normal disfluencies can present as interjections ("Um"), multisyllable repetitions ("I want I want to do that"), or revised/abandoned utterances ("I want/ hey what's that?"). Normal disfluency should be ruled out before diagnosing stuttering.
Developmental stuttering (also known as childhood onset fluency disorder) is stuttering that originates when a child is learning to speak and may persist as the child matures into adulthood. Stuttering that persists after the age of seven is classified as persistent stuttering.
Other much less common causes of stuttering include neurogenic stuttering (stuttering that occurs secondary to brain damage, such as after a stroke) and psychogenic stuttering (stuttering related to a psychological condition).
Other disorders with symptoms resembling stuttering include autism, cluttering, Parkinson's disease, essential tremor, palilalia, spasmodic dysphonia, selective mutism, and social anxiety.
Stuttering is typically a developmental disorder beginning in early childhood and continuing into adulthood in at least 20% of affected children. The mean onset of stuttering is 30 months. Although there is variability, early stuttering behaviours usually consist of word or syllable repetitions, while secondary behaviours such as tension, avoidance or escape behaviours are absent. Most young children are unaware of the interruptions in their speech. With young stutterers, disfluency may be episodic, and periods of stuttering are followed by periods of relatively decreased disfluency.
Though the rate of early recovery is very high, with time a young person who stutters may transition from easy, relaxed repetition to more tense and effortful stuttering, including blocks and prolongations. Some propose that parental reactions may affect the development of a chronic stutter. Recommendations to "slow down", "take a breath", "say it again", etc., may increase the child's anxiety and fear, leading to more difficulties with speaking and, in the "cycle of stuttering," to yet more fear, anxiety and expectation of stuttering. With time secondary stuttering, including escape behaviours such as eye blinking and lip movements, may be used, as well as fear and avoidance of sounds, words, people, or speaking situations. Eventually, many become fully aware of their disorder and begin to identify themselves as stutterers. With this may come deeper frustration, embarrassment and shame. Other, rarer patterns of stuttering development have been described, including sudden onset with the child being unable to speak, despite attempts to do so. The child usually is unable to utter the first sound of a sentence, and shows high levels of awareness and frustration. Another variety also begins suddenly with frequent word and phrase repetition, and does not include the development of secondary stuttering behaviours.
Stuttering is also believed to have neurophysiological causes. Neurogenic stuttering is a type of fluency disorder in which a person has difficulty producing speech in a normal, smooth fashion. Individuals with fluency disorders may have speech that sounds fragmented or halting, with frequent interruptions and difficulty producing words without effort or struggle. Neurogenic stuttering typically appears following injury or disease of the central nervous system, including injuries to the cortex, subcortex, cerebellum, and even the neural pathways of the brain and spinal cord.
In rare cases, stuttering may be acquired in adulthood as the result of a neurological event such as a head injury, tumour, stroke, or drug use. The stuttering has different characteristics from its developmental equivalent: it tends to be limited to part-word or sound repetitions, and is associated with a relative lack of anxiety and secondary stuttering behaviors. Techniques such as altered auditory feedback (see below), which may promote decreasing disfluency in people who stutter with the developmental condition, are not effective with the acquired type.
Psychogenic stuttering may also arise after a traumatic experience such as grief, the breakup of a relationship, or as a psychological reaction to physical trauma. Its symptoms tend to be homogeneous: the stuttering is of sudden onset and associated with a significant event, it is constant and uninfluenced by different speaking situations, and there is little awareness or concern shown by the speaker.
Before beginning treatment, an assessment is needed, as diagnosing stuttering requires the skills of a certified speech-language pathologist (SLP).
While there is no complete cure for stuttering, several treatment options exist that help individuals to better control their speech. Many of the available treatments focus on learning strategies to minimize stuttering through speed reduction, breathing regulation, and gradual progression from single-syllable responses to longer words, and eventually more complex sentences. Furthermore, some stuttering therapies help to address the anxiety that is often caused by stuttering, and consequently worsens stuttering symptoms. This method of treatment is referred to as a comprehensive approach, in which the main emphasis of treatment is directed toward improving the speaker's attitudes toward communication and minimizing the negative impact stuttering can have on the speaker's life. Treatment from a qualified S-LP can benefit people who stutter of any age.
Speech language pathologists teach people who stutter to control and monitor the rate at which they speak. In addition, people may learn to start saying words in a slightly slower and less physically tense manner. They may also learn to control or monitor their breathing. When learning to control speech rate, people often begin by practising smooth, fluent speech at rates that are much slower than typical speech, using short phrases and sentences. Over time, people learn to produce smooth speech at faster rates, in longer sentences, and in more challenging situations until speech sounds both fluent and natural. When treating stuttering in children, some researchers recommend that an evaluation be conducted every three months in order to determine whether or not the selected treatment option is working effectively. "Follow-up" or "maintenance" sessions are often necessary after completion of formal intervention to prevent relapse.
Fluency shaping therapy, also known as "speak more fluently", "prolonged speech", or "connected speech", trains people who stutter to speak less disfluently by controlling their breathing, phonation, and articulation (lips, jaw, and tongue). It is based on operant conditioning techniques.
People who stutter are trained to reduce their speaking rate by stretching vowels and consonants, and using other disfluency-reducing techniques such as continuous airflow and soft speech contacts. The result is very slow, monotonic, but fluent speech, used only in the speech clinic. After the person who stutters masters these skills, the speaking rate and intonation are increased gradually. This more normal-sounding, fluent speech is then transferred to daily life outside the speech clinic, though lack of speech naturalness at the end of treatment remains a frequent criticism. Fluency shaping approaches are often taught in intensive group therapy programs, which may take two to three weeks to complete, but more recently the Camperdown program, using a much shorter schedule, has been shown to be effective.
The goal of stuttering modification therapy is not to eliminate stuttering but to modify it so that stuttering is easier and less effortful. The rationale is that, since fear and anxiety cause increased stuttering, stuttering more easily, with less fear and avoidance, will cause stuttering to decrease. The most widely known approach was published by Charles Van Riper in 1973 and is also known as block modification therapy. However, depending on the patient, speech therapy may be ineffective.
Altered auditory feedback, so that people who stutter hear their voice differently, has been used for over 50 years in the treatment of stuttering. Altered auditory feedback effect can be produced by speaking in chorus with another person, by blocking out the person who stutters' voice while talking (masking), by delaying slightly the voice of the person who stutters (delayed auditory feedback) or by altering the frequency of the feedback (frequency altered feedback). Studies of these techniques have had mixed results, with some people who stutter showing substantial reductions in stuttering, while others improved only slightly or not at all. In a 2006 review of the efficacy of stuttering treatments, none of the studies on altered auditory feedback met the criteria for experimental quality, such as the presence of control groups.
There are specialized mobile applications and PC programs for stuttering treatment. The goal of such applications is restoration of the speech cycle – I say → I hear → I build a phrase → I say, and so on – using various methods of stutter correction.
The user interacts with the application through altered auditory feedback: they say something into the headset's microphone and listen to their own voice in the headphones processed by a certain method.
The stutter correction methods typically used in such applications are forms of altered auditory feedback, such as masking, delayed auditory feedback, and frequency-altered feedback.
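The core of delayed auditory feedback in such applications is simply a fixed-length delay line between the microphone and the headphones. The following is a minimal sketch, assuming mono audio arriving in blocks of float samples; the class name, parameters, and block-based interface are illustrative, not taken from any real product.

```python
# Minimal sketch of a delayed-auditory-feedback (DAF) delay line.
# Assumes mono audio arrives as blocks of float samples; in a real app
# this would sit inside an audio I/O callback (mic in, headphones out).

from collections import deque


class DelayLine:
    """Ring buffer that returns each input sample after a fixed delay."""

    def __init__(self, delay_ms: float, sample_rate: int = 16_000):
        self.delay_samples = int(sample_rate * delay_ms / 1000)
        # Pre-fill with silence so early reads are well defined.
        self.buffer = deque([0.0] * self.delay_samples)

    def process(self, block: list[float]) -> list[float]:
        """Feed one block of mic samples; get the delayed block to play."""
        out = []
        for sample in block:
            self.buffer.append(sample)     # newest sample goes in
            out.append(self.buffer.popleft())  # oldest comes out
        return out


# Example: a 50 ms delay at 1 kHz (chosen small for illustration)
# corresponds to a 50-sample delay.
daf = DelayLine(delay_ms=50, sample_rate=1000)
output = daf.process([1.0] * 60)
# The first 50 output samples are the pre-filled silence;
# the speaker's voice only emerges after the configured delay.
```

Typical DAF delays in the stuttering literature are on the order of 50–200 ms; masking and frequency-altered feedback would replace or transform the delayed samples rather than merely postponing them.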
Although no medication is FDA approved for stuttering, several studies have shown certain medications to have beneficial effects on reducing the severity of stuttering symptoms. Although different classes of medications have been investigated, those with dopamine blocking activity have been shown in numerous trials to have positive effects on stuttering. These medications are FDA approved in the United States and hold similar approval in most countries for other conditions and their safety profiles are well established in these disorders.
The best studied medication in stuttering is olanzapine whose effectiveness has been established in replicated trials. Olanzapine acts as a dopamine antagonist to D2 receptors in the mesolimbic pathway, and works similarly on serotonin 5HT2A receptors in the frontal cortex. At doses between 2.5–5 mg, olanzapine has been shown to be more effective than placebo at reducing stuttering symptoms, and may serve as a first-line pharmacological treatment for stuttering based on the preponderance of its efficacy data. However, other medications are generally better tolerated with less weight gain and less risk of metabolic effects than olanzapine.
Risperidone and haloperidol have also shown effectiveness in the treatment of stuttering. However, haloperidol in particular often results in poor long-term compliance due to concerning side effects such as movement disorders and prolactin elevation, which can also occur with risperidone. Other dopamine-active medications reported to positively treat stuttering include aripiprazole, asenapine, and lurasidone, which tend to be better tolerated than olanzapine, with less weight gain. All these medications, as well as olanzapine, carry the potential risk of a long-term movement disorder known as tardive dyskinesia.
The investigational compound ecopipam is unique among dopamine antagonists in that it acts on D1 receptors instead of D2, posing little, if any, risk of movement disorders. An open-label study of ecopipam in adults demonstrated significantly improved stuttering symptoms with no reports of parkinsonian-like movement disorders or tardive dyskinesia, which can be seen with D2 antagonists. In addition, ecopipam had no reported weight gain, and has instead been reported to lead to weight loss. In a preliminary study, it was well tolerated, effectively reduced stuttering severity, and was even associated in a short-term study with improved quality of life in persons who stutter. Further research is still warranted, but this novel mechanism shows promise in the pharmacologic treatment of stuttering.
One should always consult with a medical doctor before considering medication treatment of stuttering to review potential risks and benefits.
With existing behavioral and prosthetic treatments providing limited relief and pharmacologic treatments in need of FDA approval for widespread use, support groups and the self-help movement continue to gain popularity and support from professionals and people who stutter. Self-help groups provide people who stutter a shared forum within which they can access resources and support from others facing the same challenges of stuttering. One of the basic tenets behind the self-help movement is that since a cure does not exist, quality of life can be improved by not thinking about the stammer for prolonged periods. Psychoanalysis has claimed success in the treatment of stuttering. Hypnotherapy has also been explored as a management alternative. Support groups further focus on the fact that stuttering is not a physical impediment but a psychological one.
Cognitive behavior therapy has been used to treat stuttering. Sociological approaches have also been explored, examining how social groups maintain stuttering through social norms.
Several treatment initiatives, for example the McGuire programme, and the Starfish Project advocate diaphragmatic breathing (or "costal breathing") as a means by which stuttering can be controlled.
Among preschoolers with stuttering, the prognosis for recovery is good. Based on research, about 65% to 87.5% of preschoolers who stutter recover spontaneously by 7 years of age or within the first 2 years of stuttering, and about 74% recover by their early teens. In particular, girls seem to recover well. For others, early intervention is effective in helping the child overcome disfluency.
Once stuttering has become established, and the child has developed secondary behaviors, the prognosis is more guarded, and only 18% of children who stutter after five years recover spontaneously. Stuttering that persists after the age of seven is classified as persistent stuttering, and is associated with a much lower chance of recovery. However, with treatment young children may be left with little evidence of stuttering.
For adults who stutter, there is no known cure, though partial or even complete recovery may be achieved with intervention. People who stutter often learn to stutter less severely, though others may make no progress with therapy.
Emotional sequelae associated with stuttering primarily relate to state-dependent anxiety about the speech disorder itself. However, this anxiety is typically isolated to social contexts that require speaking, is not a trait anxiety, and does not persist if stuttering remits spontaneously. Research attempting to correlate stuttering with generalized or state anxiety, personality profiles, trauma history, or decreased IQ has failed to find adequate empirical support for any of these claims.
The lifetime prevalence, or the proportion of individuals expected to stutter at one time in their lives, is about 5%, and overall males are affected two to five times more often than females. However, there is not much information known about the underlying cause for such a skewed sex ratio. Most stuttering begins in early childhood, and studies suggest that 2.5% of children under the age of 5 stutter. As seen in children who have just begun stuttering, there is an equivalent number of boys and girls who stutter. Still, the sex ratio appears to widen as children grow: among preschoolers, boys who stutter outnumber girls who stutter by about a two to one ratio, or less. This ratio widens to three to one during first grade, and five to one during fifth grade, as girls have higher recovery rates. Due to high (approximately 65–75%) rates of early recovery, the overall prevalence of stuttering is generally considered to be approximately 1%.
Cross-cultural studies of stuttering prevalence were very active in early and mid-20th century, particularly under the influence of the works of Wendell Johnson, who claimed that the onset of stuttering was connected to the cultural expectations and the pressure put on young children by anxious parents. Johnson claimed there were cultures where stuttering, and even the word "stutterer", were absent (for example, among some tribes of American Indians). Later studies found that this claim was not supported by the facts, so the influence of cultural factors in stuttering research declined. It is generally accepted by contemporary scholars that stuttering is present in every culture and in every race, although the attitude towards the actual prevalence differs. Some believe stuttering occurs in all cultures and races at similar rates, about 1% of general population (and is about 5% among young children) all around the world. A US-based study indicated that there were no racial or ethnic differences in the incidence of stuttering in preschool children. At the same time, there are cross-cultural studies indicating that the difference between cultures may exist. For example, summarizing prevalence studies, E. Cooper and C. Cooper conclude: "On the basis of the data currently available, it appears the prevalence of fluency disorders varies among the cultures of the world, with some indications that the prevalence of fluency disorders labeled as stuttering is higher among black populations than white or Asian populations" (Cooper & Cooper, 1993:197). In his "Stuttering and its Treatment: Eleven lectures" Mark Onslow remarked that "one recent study with many participants (N=119,367) convincingly reported more stuttering among African Americans than other Americans. Why this could be the case is challenging to explain..."
Different regions of the world are researched very unevenly. The largest number of studies has been conducted in European countries and in North America, where the experts agree on the mean estimate to be about 1% of the general population (Bloodstein, 1995. A Handbook on Stuttering). African populations, particularly from West Africa, might have the highest stuttering prevalence in the world—reaching in some populations 5%, 6%, and even over 9%. Many regions of the world are not researched sufficiently, and for some major regions there are no prevalence studies at all (for example, in China). Some claim the reason for this might be a lower incidence in the general population in China.
Because of the unusual-sounding speech that is produced and the behaviors and attitudes that accompany a stutter, stuttering has long been a subject of scientific interest and speculation as well as discrimination and ridicule. Accounts of people who stutter can be traced back centuries, to the likes of Demosthenes, who tried to control his disfluency by speaking with pebbles in his mouth. The Talmud interprets Bible passages to indicate that Moses also stuttered, and that placing a burning coal in his mouth had caused him to be "slow and hesitant of speech" (Exodus 4, v.10).
Galen's humoral theories were influential in Europe in the Middle Ages for centuries afterward. In this theory, stuttering was attributed to imbalances of the four bodily humors—yellow bile, blood, black bile, and phlegm. Hieronymus Mercurialis, writing in the sixteenth century, proposed methods to redress the imbalance including changes in diet, reduced libido (in men only), and purging. Believing that fear aggravated stuttering, he suggested techniques to overcome this. Humoral manipulation continued to be a dominant treatment for stuttering until the eighteenth century. Partly due to a perceived lack of intelligence because of his stutter, the man who became the Roman emperor Claudius was initially shunned from the public eye and excluded from public office.
In and around eighteenth and nineteenth century Europe, surgical interventions for stuttering were recommended, including cutting the tongue with scissors, removing a triangular wedge from the posterior tongue, and cutting nerves, or neck and lip muscles. Others recommended shortening the uvula or removing the tonsils. All were abandoned due to the high danger of bleeding to death and their failure to stop stuttering. Less drastically, Jean Marc Gaspard Itard placed a small forked golden plate under the tongue in order to support "weak" muscles.
Italian pathologist Giovanni Morgagni attributed stuttering to deviations in the hyoid bone, a conclusion he came to via autopsy. Blessed Notker of St. Gall (c. 840–912), called Balbulus ("The Stutterer") and described by his biographer as being "delicate of body but not of mind, stuttering of tongue but not of intellect, pushing boldly forward in things Divine," was invoked against stammering.
A famous Briton who stammered was King George VI, who went through years of speech therapy, most successfully under Australian speech therapist Lionel Logue. This is dealt with in the Academy Award-winning film "The King's Speech" (2010), in which Colin Firth plays George VI. The film is based on an original screenplay by David Seidler, who himself stuttered as a child until age 16.
Another notable case was that of British Prime Minister Winston Churchill. Churchill claimed, perhaps not directly discussing himself, that "[s]ometimes a slight and not unpleasing stammer or impediment has been of some assistance in securing the attention of the audience..." However, those who knew Churchill and commented on his stutter believed that it was or had been a significant problem for him. His secretary Phyllis Moir commented that "Winston Churchill was born and grew up with a stutter" in her 1941 book "I was Winston Churchill's Private Secretary". She also noted about one incident, "'It’s s-s-simply s-s-splendid,' he stuttered—as he always did when excited." Louis J. Alber, who helped to arrange a lecture tour of the United States, wrote in Volume 55 of "The American Mercury" (1942) that "Churchill struggled to express his feelings but his stutter caught him in the throat and his face turned purple" and that "born with a stutter and a lisp, both caused in large measure by a defect in his palate, Churchill was at first seriously hampered in his public speaking. It is characteristic of the man’s perseverance that, despite his staggering handicap, he made himself one of the greatest orators of our time."
For centuries "cures" such as consistently drinking water from a snail shell for the rest of one's life, "hitting a stutterer in the face when the weather is cloudy", strengthening the tongue as a muscle, and various herbal remedies were used. Similarly, in the past people have subscribed to theories about the causes of stuttering which today are considered odd. Proposed causes of stuttering have included tickling an infant too much, eating improperly during breastfeeding, allowing an infant to look in the mirror, cutting a child's hair before the child spoke his or her first words, having too small a tongue, or the "work of the devil".
Some people who stutter, who are part of the disability rights movement, have begun to embrace their stuttering voices as an important part of their identity. In July 2015 the UK Ministry of Defence announced the launch of the Defence Stammering Network to support and champion the interests of British military personnel and MOD civil servants who stammer and to raise awareness of the condition.
Bilingualism is the ability to speak two languages. Many bilingual people have been exposed to more than one language since birth and throughout childhood. Since language and culture are relatively fluid factors in a person's understanding and production of language, bilingualism may be a feature that affects speech fluency. There are several ways in which stuttering may be noticed in bilingual children, including the following.
Stuttering may present differently depending on the languages the individual uses. For example, morphological and other linguistic differences between languages may make presentation of disfluency appear to be more or less of a problem depending on the individual case.
Much research is being conducted on the prevalence of stuttering in bilingual populations and the differences between languages. For instance, one study concluded that bilingual children who spoke English and another language had an increased risk of stuttering and a lower chance of recovery from stuttering than monolingual speakers and speakers who spoke solely a language other than English. Another study, though methodologically weak, showed relatively indistinguishable percentages of monolingual and bilingual stutterers. Because of this conflicting data, the relationship between bilingualism and stuttering has been called enigmatic, which reflects the intricacy of the topic and encourages further research to clarify what impact, if any, bilingualism has on stuttering.
Jazz and Eurodance musician Scatman John wrote the song "Scatman (Ski Ba Bop Ba Dop Bop)" to help children who stutter overcome adversity. Born John Paul Larkin, Scatman spoke with a stutter himself and won the American Speech-Language-Hearing Association's Annie Glenn Award for outstanding service to the stuttering community.
Saxony
Saxony, officially the Free State of Saxony, is a landlocked state of Germany, bordering the states of Brandenburg, Saxony-Anhalt, Thuringia, and Bavaria, as well as the countries of Poland and the Czech Republic. Its capital is Dresden, and its largest city is Leipzig. Saxony is the tenth largest of Germany's sixteen states by area, and the sixth most populous, with more than 4 million inhabitants.
The history of Saxony spans more than a millennium. It has been a medieval duchy, an electorate of the Holy Roman Empire, a kingdom, and twice a republic. The first Free State of Saxony was established in 1918 as a constituent state of the Weimar Republic. After World War II, it became part of the German Democratic Republic and was abolished by the communist government in 1952. Following German reunification, the Free State of Saxony was reconstituted with slightly altered borders in 1990 and became one of the Federal Republic of Germany's new states.
The area of the modern state of Saxony should not be confused with Old Saxony, the area inhabited by Saxons. Old Saxony corresponds roughly to the modern German states of Lower Saxony, Saxony-Anhalt, and the Westphalian part of North Rhine-Westphalia.
Saxony has a long history as a duchy, an electorate of the Holy Roman Empire (the Electorate of Saxony), and finally as a kingdom (the Kingdom of Saxony). In 1918, after Germany's defeat in World War I, its monarchy was overthrown and a republican form of government was established under the current name. The state was broken up into smaller units during communist rule (1949–1989), but was re-established on 3 October 1990 on the reunification of East and West Germany.
In prehistoric times, the territory of Saxony was the site of some of the largest of the ancient central European monumental temples, dating from the fifth century BC. Notable archaeological sites have been discovered in Dresden and the villages of Eythra and Zwenkau near Leipzig. The Slavic and Germanic presence in the territory of today's Saxony is thought to have begun in the first century BC.
Parts of Saxony were possibly under the control of the Germanic King Marobod during the Roman era. By the late Roman period, several tribes known as the Saxons emerged, from which the subsequent state(s) draw their name.
The first medieval Duchy of Saxony was a late Early Middle Ages "Carolingian stem duchy", which emerged around the start of the 8th century AD and grew to include the greater part of Northern Germany, what are now the modern German states of Bremen, Hamburg, Lower Saxony, North Rhine-Westphalia, Schleswig-Holstein and Saxony-Anhalt. The Saxons converted to Christianity during this period.
While the Saxons were facing pressure from Charlemagne's Franks, they were also facing a westward push by Slavs from the east. The territory of the Free State of Saxony, called White Serbia, had been populated by Slavs since the 5th century before being conquered by Germanic peoples such as the Saxons and the Thuringii. A legacy of this period is the Sorb population in Saxony. Eastern parts of present-day Saxony were ruled by Poland between 1002 and 1032 and by Bohemia from 1293.
The territory of the Free State of Saxony became part of the Holy Roman Empire by the 10th century, when the dukes of Saxony were also kings (or emperors) of the Holy Roman Empire, comprising the Ottonian, or Saxon, Dynasty. Around this time, the Billungs, a Saxon noble family, received extensive fields in Saxony. The emperor eventually gave them the title of dukes of Saxony. After Duke Magnus died in 1106, causing the extinction of the male line of Billungs, oversight of the duchy was given to Lothar of Supplinburg, who also became emperor for a short time.
The Margravate of Meissen was founded in 985 as a frontier march, that soon extended to the Kwisa (Queis) river to the east and as far as the Ore Mountains. In the process of Ostsiedlung, settlement of German farmers in the sparsely populated area was promoted.
In 1137, control of Saxony passed to the Guelph dynasty, descendants of Wulfhild Billung, eldest daughter of the last Billung duke, and the daughter of Lothar of Supplinburg. In 1180 large portions west of the Weser were ceded to the Bishops of Cologne, while some central parts between the Weser and the Elbe remained with the Guelphs, becoming later the Duchy of Brunswick-Lüneburg. The remaining eastern lands, together with the title of Duke of Saxony, passed to an Ascanian dynasty (descended from Eilika Billung, Wulfhild's younger sister) and were divided in 1260 into the two small states of Saxe-Lauenburg and Saxe-Wittenberg. The former state was also named "Lower Saxony", the latter "Upper Saxony", thence the later names of the two Imperial Circles Saxe-Lauenburg and Saxe-Wittenberg. Both claimed the Saxon electoral privilege for themselves, but the Golden Bull of 1356 accepted only Wittenberg's claim, with Lauenburg nevertheless continuing to maintain its claim. In 1422, when the Saxon electoral line of the Ascanians became extinct, the Ascanian Eric V of Saxe-Lauenburg tried to reunite the Saxon duchies.
However, Sigismund, King of the Romans, had already granted Margrave Frederick IV the Warlike of Meissen (House of Wettin) an expectancy of the Saxon electorate in order to remunerate his military support. On 1 August 1425 Sigismund enfeoffed the Wettinian Frederick as Prince-Elector of Saxony, despite the protests of Eric V. Thus the Saxon territories remained permanently separated. The Electorate of Saxony was then merged with the much bigger Wettinian Margraviate of Meissen, however using the higher-ranking name Electorate of Saxony and even the Ascanian coat-of-arms for the entire monarchy. Thus Saxony came to include Dresden and Meissen. In the 18th and 19th centuries Saxe-Lauenburg was colloquially called the Duchy of Lauenburg, which in 1876 merged with Prussia as the Duchy of Lauenburg district.
Saxony-Wittenberg, in modern Saxony-Anhalt, became subject to the margravate of Meissen, ruled by the Wettin dynasty in 1423. This established a new and powerful state, occupying large portions of the present Free State of Saxony, Thuringia, Saxony-Anhalt and Bavaria (Coburg and its environs). Although the centre of this state was far to the southeast of the former Saxony, it came to be referred to as Upper Saxony and then simply Saxony, while the former Saxon territories were now known as Lower Saxony.
In 1485, Saxony was split. A collateral line of the Wettin princes received what later became Thuringia and founded several small states there (see "Ernestine duchies"). The remaining Saxon state became still more powerful and was known in the 18th century for its cultural achievements, although it was politically weaker than Prussia and Austria, states which oppressed Saxony from the north and south, respectively.
Between 1697 and 1763, the Electors of Saxony were also elected Kings of Poland in personal union.
In 1756, Saxony joined a coalition of Austria, France and Russia against Prussia. Frederick II of Prussia chose to attack preemptively and invaded Saxony in August 1756, precipitating the Third Silesian War (part of the Seven Years' War). The Prussians quickly defeated Saxony and incorporated the Saxon army into the Prussian army. At the end of the Seven Years' War, Saxony recovered its independence in the 1763 Treaty of Hubertusburg.
In 1806, French Emperor Napoleon abolished the Holy Roman Empire and established the Electorate of Saxony as a kingdom in exchange for military support. The Elector Frederick Augustus III accordingly became King Frederick Augustus I of Saxony. Frederick Augustus remained loyal to Napoleon during the wars that swept Europe in the following years; he was taken prisoner and his territories declared forfeit by the allies in 1813, after the defeat of Napoleon. Prussia intended to annex Saxony, but the opposition of Austria, France, and the United Kingdom to this plan resulted in the restoration of Frederick Augustus to his throne at the Congress of Vienna, although he was forced to cede the northern part of the kingdom to Prussia, a loss of nearly 50% of Saxon territory. These lands became the Prussian province of Saxony, now incorporated in the modern state of Saxony-Anhalt, except the westernmost part around Bad Langensalza, now in the state of Thuringia. Lower Lusatia also became part of the Province of Brandenburg, and the northeastern part of Upper Lusatia became part of the Province of Silesia. The remnant of the Kingdom of Saxony was roughly identical with the present state, albeit slightly smaller.
Meanwhile, in 1815, the southern part of Saxony, now called the "State of Saxony" joined the German Confederation. (This German Confederation should not be confused with the North German Confederation mentioned below.) In the politics of the Confederation, Saxony was overshadowed by Prussia. King Anthony of Saxony came to the throne of Saxony in 1827. Shortly thereafter, liberal pressures in Saxony mounted and broke out in revolt during 1830—a year of revolution in Europe. The revolution in Saxony resulted in a constitution for the State of Saxony that served as the basis for its government until 1918.
During the 1848–49 constitutionalist revolutions in Germany, Saxony became a hotbed of revolutionaries, with anarchists such as Mikhail Bakunin and democrats including Richard Wagner and Gottfried Semper taking part in the May Uprising in Dresden in 1849. (Scenes of Richard Wagner's participation in the May 1849 uprising in Dresden are depicted in the 1983 movie "Wagner" starring Richard Burton as Richard Wagner.) The May uprising in Dresden forced King Frederick Augustus II of Saxony to concede further reforms to the Saxon government.
In 1854 Frederick Augustus II's brother, King John of Saxony, succeeded to the throne. A scholar, King John translated Dante. King John followed a federalistic and pro-Austrian policy throughout the early 1860s until the outbreak of the Austro-Prussian War. During that war, Prussian troops overran Saxony without resistance and then invaded Austrian (today's Czech) Bohemia. After the war, Saxony was forced to pay an indemnity and to join the North German Confederation in 1867. Under the terms of the North German Confederation, Prussia took over control of the Saxon postal system, railroads, military and foreign affairs. In the Franco-Prussian War of 1870, Saxon troops fought together with Prussian and other German troops against France. In 1871, Saxony joined the newly formed German Empire.
After King Frederick Augustus III of Saxony abdicated on 13 November 1918, Saxony, remaining a constituent state of Germany (the Weimar Republic), became the Free State of Saxony under a new constitution enacted on 1 November 1920. In October 1923 the federal government under Chancellor Gustav Stresemann overthrew the legally elected SPD-Communist coalition government of Saxony. The state retained its name and borders during the Nazi era as Gau Saxony, but lost its quasi-autonomous status and its parliamentary democracy.
As World War II drew to its end, U.S. troops under General George Patton occupied the western part of Saxony in April 1945, while Soviet troops occupied the eastern part. That summer, the entire state was handed over to Soviet forces as agreed in the London Protocol of September 1944. Britain, the US, and the USSR then negotiated Germany's future at the Potsdam Conference. Under the Potsdam Agreement, all German territory East of the Oder-Neisse line was annexed by Poland and the Soviet Union, and, unlike in the aftermath of World War I, the annexing powers were allowed to expel the inhabitants. During the following three years, Poland and Czechoslovakia forcibly expelled German-speaking people from their territories, and some of these expellees came to Saxony. Only a small area of Saxony lying east of the Neisse River and centred around the town of Reichenau (now called Bogatynia), was annexed by Poland. The Soviet Military Administration in Germany (SVAG) merged that very small part of the Prussian province of Lower Silesia that remained in Germany with Saxony.
The traditionally close relations of Saxony with the neighbouring German-speaking Egerland were thus completely destroyed, making the border of Saxony along the Ore Mountains a linguistic border.
On 20 October 1946, SVAG organised elections for the Saxon state parliament (), but many people were arbitrarily excluded from candidacy and suffrage, and the Soviet Union openly supported the Socialist Unity Party of Germany (SED). The new minister-president Rudolf Friedrichs (SED), had been a member of the SPD until April 1946. He met his Bavarian counterparts in the U.S. zone of occupation in October 1946 and May 1947, but died suddenly in mysterious circumstances the following month. He was succeeded by Max Seydewitz, a loyal follower of Joseph Stalin.
The German Democratic Republic (East Germany), including Saxony, was established in 1949 out of the Soviet zone of occupied Germany, becoming a constitutionally socialist state, part of COMECON and the Warsaw Pact, under the leadership of the SED. In 1952 the government abolished the Free State of Saxony and divided its territory into three Bezirke: Leipzig, Dresden, and Karl-Marx-Stadt (formerly and currently Chemnitz). Areas around Hoyerswerda were also part of the Cottbus Bezirk.
The Free State of Saxony was reconstituted with slightly altered borders in 1990, following German reunification. Besides the formerly Silesian area of Saxony, which was mostly included in the territory of the new Saxony, the free state gained further areas north of Leipzig that had belonged to Saxony-Anhalt until 1952.
The highest mountain in Saxony is the Fichtelberg (1,215 m) in the Western Ore Mountains.
There are numerous rivers in Saxony. The Elbe is the most dominant one. The Neisse defines the border between Saxony and Poland. Other rivers include the Mulde and the White Elster.
The largest cities and towns in Saxony according to the 30 September 2018 estimate are listed below.
To this can be added that Leipzig forms a metropolitan-like region with Halle, known as "Ballungsraum Leipzig/Halle". The latter city is located just across the border of Saxony-Anhalt. Leipzig shares, for instance, an S-train system (known as "S-Bahn Mitteldeutschland") and an airport with Halle.
Saxony is a parliamentary democracy. A Minister President heads the government of Saxony. Michael Kretschmer has been Minister President since 13 December 2017.
In the 2019 state election the AfD received its highest share of the vote in any state or federal election, while the CDU and The Left both fell to record lows in Saxony. The CDU formed a government coalition with the Greens and the SPD.
Summary of the 1 September 2019 election results for the Landtag of Saxony: 2,166,457 valid votes were cast (100.0%), filling all 119 seats; there were a further 22,029 blank and invalid votes (1.02%); of 3,288,643 registered voters, turnout was 66.5%.
Saxony is divided into 10 districts:
1. Bautzen (BZ)
2. Erzgebirgskreis (ERZ)
3. Görlitz (GR)
4. Leipzig (L)
5. Meißen (MEI)
6. Mittelsachsen (FG)
7. Nordsachsen (TDO)
8. Sächsische Schweiz-Osterzgebirge (PIR)
9. Vogtlandkreis (V)
10. Zwickau (Z)
In addition, three cities have the status of an urban district:
Between 1990 and 2008, Saxony was divided into the three regions ("Regierungsbezirke") of Chemnitz, Dresden, and Leipzig. After a reform in 2008, these regions, with some alterations of their respective areas, were called "Direktionsbezirke". In 2012, the authorities of these regions were merged into one central authority.
Saxony is a densely populated state compared with more rural German states such as Bavaria or Lower Saxony. However, the population has declined over time. The population of Saxony began declining in the 1950s due to emigration, a process which accelerated after the fall of the Berlin Wall in 1989. After bottoming out in 2013, the population has stabilized thanks to increased immigration and higher fertility rates. The cities of Leipzig, Dresden and Chemnitz, and the towns of Radebeul and Markkleeberg in their vicinity, have seen their populations increase since 2000.
The average number of children per woman in Saxony was 1.60 in 2018, the fourth-highest rate of all German states. Within Saxony, the highest is the Bautzen district with 1.77, while Leipzig is the lowest with 1.49. Dresden's fertility rate of 1.58 is the highest of all German cities with more than 500,000 inhabitants.
Saxony is home to the Sorbs. There are currently between 45,000 and 60,000 Sorbs living in Saxony (in the Upper Lusatia region). Today's Sorbian minority is the remainder of the Slavic population which settled throughout Saxony in the early Middle Ages and which over time slowly assimilated into German-speaking society. Many geographic names in Saxony are of Sorbian origin (including those of the three largest cities: Chemnitz, Dresden and Leipzig). The Sorbian language and culture are protected by special laws, and cities and villages in eastern Saxony inhabited by a significant number of Sorbian residents have bilingual street signs, with administrative offices providing service in both German and Sorbian. The Sorbs enjoy cultural self-administration, which is exercised through the Domowina. Former Minister President Stanislaw Tillich is of Sorbian ancestry and was the first leader of a German state to come from a national minority.
As of 2011, the Evangelical Church in Germany represented the largest faith in the state, adhered to by 21.4% of the population. Members of the Roman Catholic Church formed a minority of 3.8%. About 0.9% of the Saxons belonged to an Evangelical free church ("Evangelische Freikirche", i.e. various Protestants outside the EKD), 0.3% to Orthodox churches and 1% to other religious communities, while 72.6% did not belong to any public-law religious society. The Moravian Church (see above) still maintains its religious centre in Herrnhut and it is there where 'The Daily Watchwords' (Losungen) are selected each year which are in use in many churches worldwide. In particular in the larger cities, there are numerous smaller religious communities. The international Church of Jesus Christ of Latter-day Saints has a presence in the Freiberg Germany Temple which was the first of its kind in Germany, opened in 1985 even before its counterpart in Western Germany. It now also serves as a religious center for the church members in Poland, the Czech Republic, Slovakia, and Hungary. In Leipzig, there is a significant Buddhist community, which mainly caters to the population of Vietnamese origin, with one Buddhist temple built in 2008 and another one currently under construction.
The gross domestic product (GDP) of the state was 124.6 billion euros in 2018, accounting for 3.7% of German economic output. GDP per capita adjusted for purchasing power was 28,100 euros, or 93% of the EU27 average, in the same year. The GDP per employee was 85% of the EU average. The GDP per capita was the highest of the states of the former GDR. Saxony has a 'very high' Human Development Index value of 0.930 (2018), which is at the same level as Denmark; within Germany, Saxony is ranked 9th.
Saxony has, after Saxony-Anhalt, the most vibrant economy of the states of the former East Germany (GDR). Its economy grew by 1.9% in 2010. Nonetheless, unemployment remains above the German average. The eastern part of Germany, excluding Berlin, qualifies as an "Objective 1" development region within the European Union, and was eligible to receive investment subsidies of up to 30% until 2013. FutureSAX, a business plan competition and entrepreneurial support organisation, has been in operation since 2002.
Microchip-makers near Dresden have given the region the nickname "Silicon Saxony". The publishing and porcelain industries of the region are well known, although their contributions to the regional economy are no longer significant. Today, the automobile industry, machinery production, and services mainly contribute to the economic development of the region.
Saxony reported an average unemployment of 5.5% in 2019.
The Leipzig area, which until recently was among the regions with the highest unemployment rate, could benefit greatly from investments by Porsche and BMW. With the VW Phaeton factory in Dresden, and many parts suppliers, the automobile industry has again become one of the pillars of Saxon industry, as it was in the early 20th century. Zwickau is another major Volkswagen location. Freiberg, a former mining town, has emerged as a foremost location for solar technology. Dresden and some other regions of Saxony play a leading role in some areas of international biotechnology, such as electronic bioengineering. While these high-technology sectors do not yet offer a large number of jobs, they have stopped or even reversed the brain drain that was occurring until the early 2000s in many parts of Saxony. Regional universities have strengthened their positions by partnering with local industries. Unlike smaller towns, Dresden and Leipzig in the past experienced significant population growth.
Saxony has a strongly export-oriented economy. In 2018, exports amounted to 40.48 billion euros while imports stood at 24.41 billion euros. The largest export partner of Saxony is China, with 6.72 billion euros, while the second-largest export market is the United States, with 3.59 billion. The largest exporting sectors are the automobile industry and mechanical engineering.
Saxony is a renowned tourist destination in Germany. The cities of Dresden and Leipzig are two of Germany's most visited cities. Areas along the border with the Czech Republic, such as the Lusatian Mountains, Ore Mountains, Saxon Switzerland, and Vogtland, attract significant numbers of visitors. In addition, Saxony has well-preserved historic towns such as Görlitz, Bautzen, Freiberg, Pirna, Meissen and Stolpen as well as numerous castles and palaces. New tourist destinations are developing, notably in the Lusatian Lake District.
Saxony's school system is among the best-performing in Germany. It has been ranked first in the German school assessment (Bildungsmonitor) for several years.
Saxony has four large universities and five "Fachhochschulen" or Universities of Applied Sciences. The Dresden University of Technology (TU Dresden), founded in 1828, is one of Germany's oldest universities. With 36,066 students as of 2010, it is the largest university in Saxony and one of the ten largest universities in Germany. It is a member of TU9, a consortium of nine leading German Institutes of Technology.
Leipzig University, founded in 1409, is one of the oldest universities in the world and the second-oldest university (by consecutive years of existence) in Germany. Famous alumni include Leibniz, Goethe, Ranke, Nietzsche, Wagner, Cai Yuanpei, Angela Merkel, Raila Odinga and Tycho Brahe, and nine Nobel laureates are associated with the university.
Saxony is part of 'Central Germany' as a cultural area. As such, throughout German history it played an important role in shaping German culture.
The most common dialects spoken in Saxony belong to the group of Thuringian and Upper Saxon dialects. Because the term "Saxon dialects" is used imprecisely in colloquial language, the attribute "Upper Saxon" has been added to distinguish them from Old Saxon and Low Saxon. Other German dialects spoken in Saxony are the dialects of the Erzgebirge (Ore Mountains), which have been influenced by Upper Saxon dialects, and the dialects of the Vogtland, which are more influenced by East Franconian dialects.
Upper Sorbian (a West Slavic language) is still actively spoken in the parts of Upper Lusatia that are inhabited by the Sorbian minority. The Germans in Upper Lusatia speak distinct dialects of their own (Lusatian dialects).
Saxony is often seen as the "motherland of the Reformation". It was predominantly Lutheran Protestant from the Reformation until the late 20th century.
The Electoral Saxony, a predecessor of today's Saxony, was the original birthplace of the Reformation. The elector was Lutheran starting in 1525. The Lutheran church was organized through the late 1510s and the early 1520s. It was officially established in 1527 by John the Steadfast. Although some of the sites associated with Martin Luther also lie in the current state of Saxony-Anhalt (including Wittenberg, Eisleben and Mansfeld), today's Saxony is usually viewed as the formal successor to what used to be Luther's country back in the 16th century (i.e. the Electoral Saxony).
Martin Luther personally oversaw the Lutheran church in Saxony and shaped it consistently with his own views and ideas. The 16th, 17th and 18th centuries were heavily dominated by Lutheran orthodoxy. In addition, the Reformed faith made inroads with the so-called crypto Calvinists, but was strongly persecuted in an overwhelmingly Lutheran state. In the 17th century, Pietism became an important influence. In the 18th century, the Moravian Church was set up on Count von Zinzendorf's property at Herrnhut. From 1525, the rulers were traditionally Lutheran and widely acknowledged as defenders of the Protestant faith, although – beginning with Augustus II the Strong, who was required to convert to Roman Catholicism in 1697 in order to become King of Poland – its monarchs were exclusively Roman Catholic. That meant Augustus and the subsequent Electors of Saxony, who were Roman Catholic, ruled over a state with an almost entirely Protestant population.
In 1925, 90.3% of the Saxon population was Protestant, 3.6% was Roman Catholic, 0.4% was Jewish and 5.7% was placed in other religious categories.
After World War II, Saxony was incorporated into East Germany which pursued a policy of state atheism. After 45 years of Communist rule, the majority of the population has become unaffiliated. Nonetheless, even during this time Saxony remained an important place of religious dialogue and it was at Meissen where the agreement on mutual recognition between the German Evangelical Church and the Church of England was signed in 1988.
Saxon cuisine encompasses the regional cooking traditions of Saxony. In general the cuisine is very hearty and features many peculiarities of central Germany, such as a great variety of sauces accompanying the main dish and the custom of serving potato dumplings (Klöße/Knödel) as a side dish instead of potatoes, pasta or rice. Freshwater fish is also widely used in Saxon cuisine. The area around Dresden is home to the easternmost wine region in Germany (see: Saxony (wine region)).
Saxony prides itself on having been one of the first places in the world where modern recreational rock climbing was developed. The Falkenstein rock in the area of Bad Schandau is considered to be the place where the German rock climbing tradition started in 1864.
Scottish Gaelic
Scottish Gaelic (or Scots Gaelic), sometimes referred to simply as Gaelic, is a Goidelic language (in the Celtic branch of the Indo-European language family) native to the Gaels of Scotland. As a Goidelic language, Scottish Gaelic, like Modern Irish and Manx, developed out of Old Irish. It became a distinct spoken language sometime in the 13th century, in the Middle Irish period, although a common literary language was shared by Gaels in both Ireland and Scotland down to the 16th century. Most of modern Scotland was once Gaelic-speaking, as evidenced especially by Gaelic-language place names.
In the 2011 census of Scotland, 57,375 people (1.1% of the Scottish population aged over 3 years old) reported being able to speak Gaelic, 1,275 fewer than in 2001. The highest percentages of Gaelic speakers were in the Outer Hebrides. Nevertheless, there are revival efforts, and the number of speakers of the language under age 20 did not decrease between the 2001 and 2011 censuses. Outside Scotland, a dialect known as Canadian Gaelic has been spoken in eastern Canada since the 18th century. In the 2016 national census, nearly 4,000 Canadian residents claimed knowledge of Scottish Gaelic, with a particular concentration in Nova Scotia.
Scottish Gaelic is not an official language of the United Kingdom. However, it is classed as an indigenous language under the European Charter for Regional or Minority Languages, which the UK Government has ratified, and the Gaelic Language (Scotland) Act 2005 established a language-development body.
Aside from "Scottish Gaelic", the language may also be referred to simply as "Gaelic". "Gaelic" may, however, also refer to the Irish language ("Gaeilge") and the Manx language ("Gaelg").
Scottish Gaelic is distinct from Scots, the Middle English-derived language varieties which had come to be spoken in most of the Lowlands of Scotland by the early modern era. Prior to the 15th century, these dialects were known as "Inglis" ("English") by their own speakers, with Gaelic being called "Scottis" ("Scottish"). Beginning in the late 15th century, it became increasingly common for such speakers to refer to Scottish Gaelic as "Erse" ("Irish") and the Lowland vernacular as "Scottis". Today, Scottish Gaelic is recognised as a separate language from Irish, so the word "Erse" in reference to Scottish Gaelic is no longer used.
Based on medieval traditional accounts and the apparent evidence from linguistic geography, Gaelic has been commonly believed to have been brought to Scotland, in the 4th–5th centuries CE, by settlers from Ireland who founded the Gaelic kingdom of on Scotland's west coast in present-day Argyll. An alternative view has recently been voiced by archaeologist Dr. Ewan Campbell, who has argued that the putative migration or takeover is not reflected in archeological or placename data (as pointed out earlier by Leslie Alcock). Campbell has also questioned the age and reliability of the medieval historical sources speaking of a conquest. Instead, he has inferred that Argyll formed part of a common Q-Celtic-speaking area with Ireland, connected rather than divided by the sea, since the Iron Age. These arguments have been opposed by some scholars defending the early dating of the traditional accounts and arguing for other interpretations of the archeological evidence. Regardless of how it came to be spoken in the region, Gaelic in Scotland was mostly confined to until the eighth century, when it began expanding into Pictish areas north of the Firth of Forth and the Firth of Clyde. By 900, Pictish appears to have become extinct, completely replaced by Gaelic. An exception might be made for the Northern Isles, however, where Pictish was more likely supplanted by Norse rather than by Gaelic. During the reign of (900–943), outsiders began to refer to the region as the kingdom of Alba rather than as the kingdom of the Picts. However, though the Pictish language did not disappear suddenly, a process of Gaelicisation (which may have begun generations earlier) was clearly under way during the reigns of and his successors. By a certain point, probably during the 11th century, all the inhabitants of Alba had become fully Gaelicised Scots, and Pictish identity was forgotten.
In 1018, after the conquest of the Lothians by the Kingdom of Scotland, Gaelic reached its social, cultural, political, and geographic zenith. Colloquial speech in Scotland had been developing independently of that in Ireland since the eighth century. For the first time, the entire region of modern-day Scotland was called ' in Latin, and Gaelic was the '. In southern Scotland, Gaelic was strong in Galloway, adjoining areas to the north and west, West Lothian, and parts of western Midlothian. It was spoken to a lesser degree in north Ayrshire, Renfrewshire, the Clyde Valley and eastern Dumfriesshire. In south-eastern Scotland, there is no evidence that Gaelic was ever widely spoken.
Many historians mark the reign of King Malcolm Canmore (Malcolm III) as the beginning of Gaelic's eclipse in Scotland. His wife Margaret of Wessex spoke no Gaelic, gave her children Anglo-Saxon rather than Gaelic names, and brought many English bishops, priests, and monastics to Scotland. When Malcolm and Margaret died in 1093, the Gaelic aristocracy rejected their anglicised sons and instead backed Malcolm's brother Donald Bàn. Donald had spent 17 years in Gaelic Ireland and his power base was in the thoroughly Gaelic west of Scotland. He was the last Scottish monarch to be buried on Iona, the traditional burial place of the Gaelic Kings of and the Kingdom of Alba. However, during the reigns of Malcolm Canmore's sons, Edgar, Alexander I and David I (their successive reigns lasting 1097–1153), Anglo-Norman names and practices spread throughout Scotland south of the Forth–Clyde line and along the northeastern coastal plain as far north as Moray. Norman French completely displaced Gaelic at court. The establishment of royal burghs throughout the same area, particularly under David I, attracted large numbers of foreigners speaking Old English. This was the beginning of Gaelic's status as a predominantly rural language in Scotland.
Clan chiefs in the northern and western parts of Scotland continued to support Gaelic bards who remained a central feature of court life there. The semi-independent Lordship of the Isles in the Hebrides and western coastal mainland remained thoroughly Gaelic since the language's recovery there in the 12th century, providing a political foundation for cultural prestige down to the end of the 15th century.
By the mid-14th century what eventually came to be called Scots (at that time termed Inglis) emerged as the official language of government and law. Scotland's emergent nationalism in the era following the conclusion of the Wars of Scottish Independence was organized using Scots as well. For example, the nation's great patriotic literature, including John Barbour's "The Brus" (1375) and Blind Harry's "The Wallace" (before 1488), was written in Scots, not Gaelic. By the end of the 15th century, English/Scots speakers referred to Gaelic instead as 'Yrisch' or 'Erse', i.e. Irish, and to their own language as 'Scottis'.
A steady shift away from Scottish Gaelic continued into and through the modern era. Some of this was driven by policy decisions by government or other organisations, some originated from social changes. In the last quarter of the 20th century, efforts began to encourage use of the language.
The Statutes of Iona, enacted by James VI in 1609, were one piece of legislation that addressed, among other things, the Gaelic language. They compelled the heirs of clan chiefs to be educated in Lowland, Protestant, English-speaking schools. James VI took several such measures to impose his rule on the Highland and Island region. In 1616 the Privy Council proclaimed that schools teaching in English should be established. Gaelic was seen, at this time, as one of the causes of the instability of the region. It was also associated with Catholicism.
The Society in Scotland for the Propagation of Christian Knowledge (SSPCK) was founded in 1709. It met in 1716, immediately after the failed Jacobite rebellion of 1715, to consider the reform and civilisation of the Highlands, which it sought to achieve by teaching English and the Protestant religion. Initially its teaching was entirely in English, but soon the impracticality of educating Gaelic-speaking children in this way gave rise to a modest concession: in 1723 teachers were allowed to translate English words in the Bible into Gaelic to aid comprehension, but no further use was permitted. Other less prominent schools worked in the Highlands at the same time, also teaching in English. This process of anglicisation paused when evangelical preachers arrived in the Highlands, convinced that people should be able to read religious texts in their own language. The first well-known translation of the Bible into Scottish Gaelic was made in 1767, when Dr James Stuart of Killin and Dugald Buchanan of Rannoch produced a translation of the New Testament. In 1798, four tracts in Gaelic were published by the Society for Propagating the Gospel at Home, with 5,000 copies of each printed. Other publications followed, with a full Gaelic Bible in 1801. The influential and effective Gaelic Schools Society was founded in 1811; its purpose was to teach Gaels to read the Bible in their own language. In the first quarter of the 19th century, the SSPCK (despite its anti-Gaelic attitude in prior years) and the British and Foreign Bible Society distributed 60,000 Gaelic Bibles and 80,000 New Testaments. It is estimated that this overall schooling and publishing effort gave some 300,000 people in the Highlands some basic literacy. Very few European languages have made the transition to a modern literary language without an early modern translation of the Bible; the lack of a well-known translation may have contributed to the decline of Scottish Gaelic.
Counterintuitively, access to schooling in Gaelic increased knowledge of English. In 1829 the Gaelic Schools Society reported that parents were unconcerned about their children learning Gaelic, but were anxious to have them taught English. The SSPCK also found Highlanders to have significant prejudice against Gaelic. T M Devine attributes this to an association between English and the prosperity of employment: the Highland economy relied greatly on seasonal migrant workers travelling outside the . In 1863, an observer sympathetic to Gaelic stated that "knowledge of English is indispensable to any poor islander who wishes to learn a trade or to earn his bread beyond the limits of his native Isle." Generally, rather than Gaelic speakers themselves, it was Celtic societies in the cities and professors of Celtic at universities who sought to preserve the language.
The Education (Scotland) Act 1872 provided universal education in Scotland, but completely ignored Gaelic in its plans. The mechanisms for supporting Gaelic through the Education Codes issued by the Scottish Education Department were steadily used to overcome this omission, with many concessions in place by 1918. However, the members of Highland school boards tended to have anti-Gaelic attitudes and served as an obstacle to Gaelic education in the late 19th and early 20th centuries.
The Linguistic Survey of Scotland surveyed both the dialects of Scottish Gaelic and the mixed use of English and Gaelic across the Highlands and Islands.
Dialects of Lowland Gaelic have been defunct since the 18th century. Gaelic in the Eastern and Southern Scottish Highlands, although alive in the mid-20th century, is now largely defunct. Although modern Scottish Gaelic is dominated by the dialects of the Outer Hebrides and Isle of Skye, there remain some speakers of the Inner Hebridean dialects of Tiree and Islay, and even a few native speakers from Highland areas including Wester Ross, northwest Sutherland, Lochaber, and Argyll. Dialects on both sides of the Straits of Moyle (the North Channel) linking Scottish Gaelic with Irish are now extinct, though native speakers were still to be found on the Mull of Kintyre, on Rathlin and in North East Ireland as late as the mid-20th century. Records of their speech show that Irish and Scottish Gaelic existed in a dialect chain with no clear language boundary. Some features of moribund dialects have been preserved in Nova Scotia, including the pronunciation of the broad or velarised l () as , as in the Lochaber dialect.
The Endangered Languages Project lists Gaelic's status as "threatened", with "20,000 to 30,000 active users". UNESCO classifies Gaelic as "definitely endangered".
The 1755–2001 figures are census data quoted by MacAulay. The 2011 Gaelic speakers figures come from table KS206SC of the 2011 Census. The 2011 total population figure comes from table KS101SC. Note that the numbers of Gaelic speakers relate to the numbers aged 3 and over, and the percentages are calculated using those and the number of the total population aged 3 and over.
The 2011 UK Census showed a total of 57,375 Gaelic speakers in Scotland (1.1% of the population over three years old), of whom only 32,400 could also read and write the language, due to the lack of Gaelic medium education in Scotland. Compared to the 2001 Census, there has been a decrease of approximately 1,300 people. This is the smallest drop between censuses since the Gaelic language question was first asked in 1881. The Scottish Government's language minister took this as evidence that Gaelic's long decline has slowed.
The main stronghold of the language continues to be the Outer Hebrides (), where the overall proportion of speakers is 52.2%. Important pockets of the language also exist in the Highlands (5.4%) and in Argyll and Bute (4.0%), and Inverness, where 4.9% speak the language. The locality with the largest absolute number is Glasgow with 5,878 such persons, who make up over 10% of all of Scotland's Gaelic speakers.
Gaelic continues to decline in its traditional heartland. Between 2001 and 2011, the absolute number of Gaelic speakers fell sharply in the Western Isles (−1,745), Argyll & Bute (−694), and Highland (−634). The drop in Stornoway, the largest parish in the Western Isles by population, was especially acute, from 57.5% of the population in 1991 to 43.4% in 2011. The only parish outside the Western Isles over 40% Gaelic-speaking is Kilmuir in Northern Skye at 46%. The islands in the Inner Hebrides with significant percentages of Gaelic speakers are Tiree (38.3%), Raasay (30.4%), Skye (29.4%), Lismore (26.9%), Colonsay (20.2%), and Islay (19.0%).
As a result of continued decline in the traditional Gaelic heartlands, today no civil parish in Scotland has a proportion of Gaelic speakers greater than 65% (the highest value is in Barvas, Lewis, with 64.1%). In addition, no civil parish on mainland Scotland has a proportion of Gaelic speakers greater than 20% (the highest value is in Ardnamurchan, Highland, with 19.3%). Out of a total of 871 civil parishes in Scotland, the proportion of Gaelic speakers exceeds 50% in 7 parishes, exceeds 25% in 14 parishes, and exceeds 10% in 35 parishes. Decline in traditional areas has recently been balanced by growth in the Scottish Lowlands. Between the 2001 and 2011 censuses, the number of Gaelic speakers rose in nineteen of the country's 32 council areas. The largest absolute gains were in Aberdeenshire (+526), North Lanarkshire (+305), Aberdeen City (+216), and East Ayrshire (+208). The largest relative gains were in Aberdeenshire (+0.19%), East Ayrshire (+0.18%), Moray (+0.16%), and Orkney (+0.13%).
In 2018, the census of pupils in Scotland showed that 520 students in publicly funded schools had Gaelic as the main language at home, an increase of 5% from 497 in 2014. During the same period, Gaelic medium education in Scotland grew, with 4,343 pupils (6.3 per 1000) being educated in a Gaelic-immersion environment in 2018, up from 3,583 pupils (5.3 per 1000) in 2014. Data collected in 2007–08 indicated that even among pupils enrolled in Gaelic medium schools, 81% of primary students and 74% of secondary students reported using English more often than Gaelic when speaking with their mothers at home. The effect on these figures of the significant increase in Gaelic medium enrolment since that time is unknown.
Gaelic has long suffered from its lack of use in educational and administrative contexts and was long suppressed.
The UK government has ratified the European Charter for Regional or Minority Languages in respect of Gaelic. Gaelic, along with Irish and Welsh, is designated under Part III of the Charter, which requires the UK Government to take a range of concrete measures in the fields of education, justice, public administration, broadcasting and culture. It has not received the same degree of official recognition from the UK Government as Welsh. With the advent of devolution, however, Scottish matters have begun to receive greater attention, and it achieved a degree of official recognition when the Gaelic Language (Scotland) Act was enacted by the Scottish Parliament on 21 April 2005.
The key provisions of the Act are:
Following a consultation period, in which the government received many submissions, the majority of which asked that the bill be strengthened, a revised bill was published; the main alteration was that the guidance of the is now statutory (rather than advisory). In the committee stages in the Scottish Parliament, there was much debate over whether Gaelic should be given 'equal validity' with English. Due to executive concerns about resourcing implications if this wording was used, the Education Committee settled on the concept of 'equal respect'. It is not clear what the legal force of this wording is.
The Act was passed by the Scottish Parliament unanimously, with support from all sectors of the Scottish political spectrum, on 21 April 2005. Under the provisions of the Act, it will ultimately fall to BnG to secure the status of the Gaelic language as an official language of Scotland.
Some commentators, such as (2006), argue that the Gaelic Act falls so far short of the status accorded to Welsh that one would be foolish or naïve to believe that any substantial change will occur in the fortunes of the language as a result of 's efforts.
On 10 December 2008, to celebrate the 60th anniversary of the Universal Declaration of Human Rights, the Scottish Human Rights Commission had the UDHR translated into Gaelic for the first time.
However, as there are no longer any monolingual Gaelic speakers, and following an appeal in the court case of "Taylor v Haughney" (1982), which involved the status of Gaelic in judicial proceedings, the High Court ruled against a general right to use Gaelic in court proceedings.
The Scottish Qualifications Authority offers two streams of Gaelic examination across all levels of the syllabus: Gaelic for learners (equivalent to the modern foreign languages syllabus) and Gaelic for native speakers (equivalent to the English syllabus).
In October 2009, a new agreement was made which allows Scottish Gaelic to be used formally between Scottish Government ministers and European Union officials. The deal was signed by Britain's representative to the EU, Sir Kim Darroch, and the Scottish government. This does not give Scottish Gaelic official status in the EU, but gives it the right to be a means of formal communications in the EU's institutions. The Scottish government will have to pay for the translation from Gaelic to other European languages. The deal was received positively in Scotland; Secretary of State for Scotland Jim Murphy said the move was a strong sign of the UK government's support for Gaelic. He said that "Allowing Gaelic speakers to communicate with European institutions in their mother tongue is a progressive step forward and one which should be welcomed". Culture Minister Mike Russell said that "this is a significant step forward for the recognition of Gaelic both at home and abroad and I look forward to addressing the council in Gaelic very soon. Seeing Gaelic spoken in such a forum raises the profile of the language as we drive forward our commitment to creating a new generation of Gaelic speakers in Scotland."
Bilingual road signs, street names, business and advertisement signage (in both Gaelic and English) are gradually being introduced throughout Gaelic-speaking regions in the Highlands and Islands, including Argyll. In many cases, this has simply meant re-adopting the traditional spelling of a name (such as ' or ' rather than the anglicised forms "Ratagan" or "Lochailort" respectively).
Bilingual railway station signs are now more frequent than they used to be. Practically all the stations in the Highland area use both English and Gaelic, and the spread of bilingual station signs is becoming ever more frequent in the Lowlands of Scotland, including areas where Gaelic has not been spoken for a long time.
This has been welcomed by many supporters of the language as a means of raising its profile as well as securing its future as a 'living language' (i.e. allowing people to use it to navigate from A to B in place of English) and creating a sense of place. However, in some places, such as Caithness, the Highland Council's intention to introduce bilingual signage has incited controversy.
The Ordnance Survey has acted in recent years to correct many of the mistakes in Gaelic place names that appear on its maps. It announced in 2004 that it intended to correct them, and set up a committee to determine the correct forms of Gaelic place names for its maps. Ainmean-Àite na h-Alba ("Place names in Scotland") is the national advisory partnership for Gaelic place names in Scotland.
In the nineteenth century, Canadian Gaelic was the third-most widely spoken European language in British North America and Gaelic-speaking immigrant communities could be found throughout what is modern-day Canada. Gaelic poets in Canada produced a significant literary tradition. The number of Gaelic-speaking individuals and communities declined sharply, however, after the First World War.
At the start of the 21st century, it was estimated that no more than 500 people in Nova Scotia still spoke Scottish Gaelic as a first language. In the 2011 census, 300 people claimed to have Gaelic as their first language (a figure that may include Irish Gaelic). In the same 2011 census, 1,275 people claimed to speak Gaelic, a figure that not only included all Gaelic languages but also those people who are not first language speakers, of whom 300 claim to have Gaelic as their "mother tongue."
The Nova Scotia government maintains the Office of Gaelic Affairs ("Iomairtean na Gàidhlig"), which is dedicated to the development of Scottish Gaelic language, culture and tourism in Nova Scotia, and which estimates that there are about 2,000 Gaelic speakers in the province. As in Scotland, areas of north-eastern Nova Scotia and Cape Breton have bilingual street signs. Nova Scotia is also home to the Gaelic Council of Nova Scotia, a non-profit society dedicated to the maintenance and promotion of the Gaelic language and culture in Maritime Canada. In 2018, the Nova Scotia government launched a new Gaelic vehicle license plate to raise awareness of the language and help fund Gaelic language and culture initiatives.
Maxville Public School in Maxville, Glengarry, Ontario, offers Scottish Gaelic lessons weekly.
In Prince Edward Island, the Colonel Gray High School now offers both an introductory and an advanced course in Gaelic; both language and history are taught in these classes. This is the first recorded time that Gaelic has ever been taught as an official course on Prince Edward Island.
The province of British Columbia is host to the Gaelic Society of Vancouver, the Vancouver Gaelic Choir, the Victoria Gaelic Choir, and an annual Gaelic festival in Vancouver. The city of Vancouver's Scottish Cultural Centre also holds seasonal Scottish Gaelic evening classes.
The BBC operates a Gaelic-language radio station as well as a television channel, BBC Alba. Launched on 19 September 2008, BBC Alba is widely available in the UK (on Freeview, Freesat, Sky and Virgin Media). It also broadcasts across Europe on the Astra 2 satellites. The channel is operated in partnership between BBC Scotland and an organisation funded by the Scottish Government which works to promote the Gaelic language in broadcasting. The ITV franchise in central Scotland, STV Central, produces a number of Scottish Gaelic programmes for both BBC Alba and its own main channel.
Until BBC Alba was broadcast on Freeview, viewers were able to receive the channel TeleG, which broadcast for an hour every evening. Upon BBC Alba's launch on Freeview, it took the channel number that was previously assigned to TeleG.
There are also television programmes in the language on other BBC channels and on the independent commercial channels, usually subtitled in English. The ITV franchise in the north of Scotland, STV North (formerly "Grampian Television") produces some non-news programming in Scottish Gaelic.
The Education (Scotland) Act 1872, which completely ignored Gaelic, and led to generations of Gaels being forbidden to speak their native language in the classroom, is now recognised as having dealt a major blow to the language. People still living in 2001 could recall being beaten for speaking Gaelic in school. Even later, when these attitudes had changed, little provision was made for Gaelic medium education in Scottish schools. As late as 1958, even in Highland schools, only 20% of primary students were taught Gaelic as a subject, and only 5% were taught other subjects through the Gaelic language.
Gaelic-medium playgroups for young children began to appear in Scotland during the late 1970s and early 1980s. Parent enthusiasm may have been a factor in the "establishment of the first Gaelic medium primary school units in Glasgow and Inverness in 1985".
The first modern solely Gaelic-medium secondary school, ("Glasgow Gaelic School"), was opened at Woodside in Glasgow in 2006 (61 partially Gaelic-medium primary schools and approximately a dozen Gaelic-medium secondary schools also exist). According to , a total of 2,092 primary pupils were enrolled in Gaelic-medium primary education in 2008–09, as opposed to 24 in 1985.
The Columba Initiative is a body that seeks to promote links between speakers of Scottish Gaelic and Irish.
In November 2019, the language-learning app Duolingo opened a beta course in Gaelic.
Starting in summer 2020, children starting school in the Western Isles will be enrolled in GME (Gaelic-medium education) unless parents request otherwise. Children will be taught Scottish Gaelic from P1 to P4, after which English will be introduced to give them a bilingual education.
In May 2004, the Nova Scotia government announced the funding of an initiative to support the language and its culture within the province. Several public schools in Northeastern Nova Scotia and Cape Breton offer Gaelic classes as part of the high-school curriculum.
A number of Scottish and some Irish universities offer full-time degrees including a Gaelic language element, usually as part of a degree in Celtic Studies.
In Nova Scotia, Canada, St. Francis Xavier University, the Gaelic College of Celtic Arts and Crafts and Cape Breton University (formerly known as the "University College of Cape Breton") offer Celtic Studies degrees and/or Gaelic language programs. The government's Office of Gaelic Affairs offers lunch-time lessons to public servants in Halifax.
In Russia, Moscow State University offers courses in Gaelic language, history and culture.
The University of the Highlands and Islands offers a range of Gaelic language, history and culture courses at the National Certificate, Higher National Diploma, Bachelor of Arts (ordinary), Bachelor of Arts (Honours) and Master of Science levels. It offers opportunities for postgraduate research through the medium of Gaelic. Residential courses at on the Isle of Skye offer adults the chance to become fluent in Gaelic in one year. Many continue to complete degrees, or to follow up as distance learners. A number of other colleges offer a one-year certificate course, which is also available online (pending accreditation).
Lews Castle College's Benbecula campus offers an independent 1-year course in Gaelic and Traditional Music (FE, SQF level 5/6).
In the Western Isles, the isles of Lewis, Harris and North Uist have a Presbyterian majority (largely Church of Scotland – ' in Gaelic, Free Church of Scotland and Free Presbyterian Church of Scotland). The isles of South Uist and Barra have a Catholic majority. All these churches have Gaelic-speaking congregations throughout the Western Isles. Notable city congregations with regular services in Gaelic are St Columba's Church, Glasgow and Greyfriars Tolbooth & Highland Kirk, Edinburgh. '—a shorter Gaelic version of the English-language Book of Common Order—was published in 1996 by the Church of Scotland.
The widespread use of English in worship has often been suggested as one of the historic reasons for the decline of Gaelic. The Church of Scotland is supportive today, but has a shortage of Gaelic-speaking ministers. The Free Church also recently announced plans to abolish Gaelic-language communion services, citing both a lack of ministers and a desire to have their congregations united at communion time.
From the sixth century to the present day, Scottish Gaelic has been used as the language of literature. Two prominent writers of the twentieth century are Anne Frater and Sorley Maclean.
Gaelic has its own version of European-wide names which also have English forms, for example: ' (John), ' (Alexander), ' (William), ' (Catherine), ' (Robert), ' (Christina), ' (Ann), ' (Mary), ' (James), ' (Patrick) and " (Thomas).
Not all traditional Gaelic names have direct equivalents in English: ', which is normally rendered as "Euphemia" (Effie) or "Henrietta" (Etta) (formerly also as Henny or even as Harriet), or, ', which is "matched" with "Dorothy", simply on the basis of a certain similarity in spelling. Many of these traditional Gaelic-only names are now regarded as old-fashioned, and hence are rarely or never used.
Some names have come into Gaelic from Old Norse; for example, ' ( < '), ' (< '), ' or ' (< '), ' (< '), ' (""). These are conventionally rendered in English as "Sorley" (or, historically, "Somerled"), "Norman", "Ronald" or "Ranald", "Torquil" and "Iver" (or "Evander").
Some Scottish names are Anglicized forms of Gaelic names: ' → (Angus) and ' → (Donald), for instance. ' and the recently established ' (pronounced ) come from the Gaelic for, respectively, James and Mary, but derive from the form of the names as they appear in the vocative case: ' (James) (nom.) → ' (voc.), and ' (Mary) (nom.) → ' (voc.).
The most common class of Gaelic surnames are those beginning with (Gaelic for "son"), such as / (MacLean). The female form is (Gaelic for "daughter"), so Catherine MacPhee is properly called in Gaelic, (strictly, is a contraction of the Gaelic phrase , meaning "daughter of the son", thus really means "daughter of MacDonald" rather than "daughter of Donald"). The "of" part actually comes from the genitive form of the patronymic that follows the prefix; in the case of , ("of Donald") is the genitive form of ("Donald").
Several colours give rise to common Scottish surnames: (Bain – white), (Roy – red), (Dow, Duff – black), (Dunn – brown), (Bowie – yellow) although in Gaelic these occur as part of a fuller form such as 'son of the servant of', i.e. .
Most varieties of Gaelic show either 8 or 9 vowel qualities () in their inventory of vowel phonemes, which can be either long or short. There are also two reduced vowels () which only occur short. Although some vowels are strongly nasal, instances of distinctive nasality are rare. There are about nine diphthongs and a few triphthongs.
Most consonants have both palatal and non-palatal counterparts, including a very rich system of liquids, nasals and trills (i.e. 3 contrasting "l" sounds, 3 contrasting "n" sounds and 3 contrasting "r" sounds). The historically voiced stops have lost their voicing, so the phonemic contrast today is between unaspirated and aspirated . In many dialects, these stops may, however, gain voicing through secondary articulation from a preceding nasal, for example ' "door" but ' "the door" as or .
In some fixed phrases, these changes are shown permanently, as the link with the base words has been lost, as in ' "now", from ' "this time/period".
In medial and final position, the aspirated stops are preaspirated rather than aspirated.
Scottish Gaelic is an Indo-European language with an inflecting morphology, verb–subject–object word order and two grammatical genders.
Gaelic nouns inflect for four cases (nominative/accusative, vocative, genitive and dative) and three numbers (singular, dual and plural).
They are also normally classed as either masculine or feminine. A small number of words that used to belong to the neuter class show some degree of gender confusion. For example, in some dialects ' "the sea" behaves as a masculine noun in the nominative case, but as a feminine noun in the genitive (').
Nouns are marked for case in a number of ways, most commonly involving various combinations of lenition, palatalisation and suffixation.
There are 12 irregular verbs. Most other verbs follow a fully predictable paradigm, although polysyllabic verbs ending in laterals can deviate from this paradigm as they show syncopation.
Word order is strictly verb–subject–object, including questions, negative questions and negatives. Only a restricted set of preverb particles may occur before the verb.
The majority of the vocabulary of Scottish Gaelic is native Celtic. There are a large number of borrowings from Latin (', ' from '), Norse (' from ', ' from '), French (' from ') and Scots (', ').
There are also many Brythonic influences on Scottish Gaelic. Scottish Gaelic contains a number of apparently P-Celtic loanwords, but it is not always possible to disentangle P and Q Celtic words. However some common words such as = Welsh , Cumbric are clearly of P-Celtic origin.
In common with other Indo-European languages, the neologisms coined for modern concepts are typically based on Greek or Latin, although often coming through English; "television", for instance, becomes ' and "computer" becomes '. Some speakers use an English word even if there is a Gaelic equivalent, applying the rules of Gaelic grammar. With verbs, for instance, they will simply add the verbal suffix (', or, in Lewis, ', as in, "" "watch' (Lewis, "watch'") ' telly" (I am watching the television), instead of "'". This phenomenon was described over 170 years ago by the minister who compiled the account covering the parish of Stornoway in the "New Statistical Account of Scotland", and examples can be found dating to the eighteenth century. However, as Gaelic-medium education grows in popularity, a newer generation of literate Gaels is becoming more familiar with modern Gaelic vocabulary.
Scottish Gaelic has also influenced the Scots language and English, particularly Scottish Standard English. Loanwords include: whisky, slogan, brogue, jilt, clan, trousers, gob, as well as familiar elements of Scottish geography like ben ('), glen (') and . Irish has also influenced Lowland Scots and English in Scotland, but it is not always easy to distinguish its influence from that of Scottish Gaelic.
The modern Scottish Gaelic alphabet has 18 letters:
The letter "", now mostly used to indicate lenition (historically sometimes inaccurately called "aspiration") of a consonant, was in general not used in the oldest orthography, as lenition was instead indicated with a dot over the lenited consonant. The letters of the alphabet were traditionally named after trees, but this custom has fallen out of use.
Long vowels are marked with a grave accent ('), indicated through digraphs (e.g. ' is ) or conditioned by certain consonant environments (e.g. a ' preceding a non-intervocalic ' is ). Traditional spelling systems also use the acute accent on the letters ', ' and "" to denote a change in vowel quality rather than length, but the reformed spellings have replaced these with the grave.
Certain 18th century sources used only an acute accent along the lines of Irish, such as in the writings of Alasdair mac Mhaighstir Alasdair (1741–51) and the earliest editions (1768–90) of Duncan Ban MacIntyre.
The 1767 New Testament set the standard for Scottish Gaelic. The 1981 Scottish Examination Board recommendations for Scottish Gaelic, the Gaelic Orthographic Conventions, were adopted by most publishers and agencies, although they remain controversial among some academics, most notably Ronald Black.
The quality of consonants (palatalised or non-palatalised) is indicated in writing by the vowels surrounding them. So-called "slender" consonants are palatalised while "broad" consonants are neutral or velarised. The vowels ' and ' are classified as slender, and ', ', and ' as broad. The spelling rule known as ' ("slender to slender and broad to broad") requires that a word-medial consonant or consonant group followed by a written ' or ' be also preceded by an ' or '; and similarly if followed by ', ' or ' be also preceded by an ', ', or '.
This rule sometimes leads to the insertion of an orthographic vowel that does not affect pronunciation. For example, plurals in Gaelic are often formed with the suffix ' , for example, ' (shoe) / ' (shoes). But because of the spelling rule, the suffix is spelled ' (but pronounced the same, ) after a slender consonant, as in ' ((a) people) / ' (peoples) where the written ' is purely a graphic vowel inserted to conform with the spelling rule because an ' precedes the '.
Unstressed vowels omitted in speech can be omitted in informal writing. For example:
Gaelic orthographic rules are mostly regular; however, English sound-to-letter correspondences cannot be applied to written Gaelic.
Scots English orthographic rules have also been used at various times in Gaelic writing. Notable examples of Gaelic verse composed in this manner are the Book of the Dean of Lismore and the manuscript.
Note: Items in brackets denote archaic or dialectal forms
Seleucid Empire
The Seleucid Empire (; , "Basileía tōn Seleukidōn") was a Hellenistic state ruled by the Seleucid dynasty which existed from 312 BC to 63 BC; Seleucus I Nicator founded it following the division of the Macedonian Empire vastly expanded by Alexander the Great. Seleucus received Babylonia (321 BC) and from there expanded his dominions to include much of Alexander's near-eastern territories. At the height of its power, the Empire included central Anatolia, Persia, the Levant, Mesopotamia, and what is now Kuwait, Afghanistan, and parts of Pakistan and Turkmenistan.
The Seleucid Empire became a major center of Hellenistic culture – it maintained the preeminence of Greek customs where a Greek political elite dominated, mostly in the urban areas. The Greek population of the cities who formed the dominant elite were reinforced by immigration from Greece. Seleucid attempts to defeat their old enemy Ptolemaic Egypt were frustrated by Roman demands. Having come into conflict in the east (305 BC) with the Maurya Empire, Seleucus I entered into an agreement with its leader, Chandragupta, whereby he ceded vast territory west of the Indus, including the Hindu Kush, modern-day Afghanistan, and the Balochistan province of Pakistan and offered his daughter in marriage to the Maurya Emperor to formalize the alliance.
Antiochus III the Great attempted to project Seleucid power and authority into Hellenistic Greece, but his attempts were thwarted by the Roman Republic and by Greek allies such as the Kingdom of Pergamon, culminating in a Seleucid defeat at the 190 BC Battle of Magnesia. In the subsequent Treaty of Apamea in 188 BC, the Seleucids were compelled to pay costly war reparations and relinquished claims to territories west of the Taurus Mountains.
The Parthians under Mithridates I of Parthia conquered much of the remaining eastern part of the Seleucid Empire in the mid-2nd century BC, while the independent Greco-Bactrian Kingdom continued to flourish in the northeast. However, the Seleucid kings continued to rule a rump state from Syria until the invasion by Armenian king Tigranes the Great in 83 BC and their ultimate overthrow by the Roman general Pompey in 63 BC.
Contemporary sources, such as a loyalist decree honoring Antiochus I from Ilium, written in the Greek language, define the Seleucid state both as an empire ("arche") and as a kingdom ("basileia"). Similarly, Seleucid rulers were described as kings in Babylonia.
Starting from the 2nd century BC, ancient writers referred to the Seleucid ruler as the King of Syria, Lord of Asia, and other designations; the evidence for the Seleucid rulers representing themselves as kings of Syria is provided by the inscription of Antigonus son of Menophilus, who described himself as the "admiral of Alexander, king of Syria". He refers to either Alexander Balas or Alexander II Zabinas as a ruler.
Alexander, who quickly conquered the Persian Empire under its last Achaemenid dynast, Darius III, died young in 323 BC, leaving an expansive empire of partly Hellenised culture without an adult heir. The empire was put under the authority of a regent in the person of Perdiccas, and the territories were divided among Alexander's generals, who thereby became satraps, at the Partition of Babylon, all in that same year.
Alexander's generals (the Diadochi) jostled for supremacy over parts of his empire. Ptolemy, a former general and the satrap of Egypt, was the first to challenge the new system; this led to the demise of Perdiccas. Ptolemy's revolt led to a new subdivision of the empire with the Partition of Triparadisus in 320 BC. Seleucus, who had been "Commander-in-Chief of the Companion cavalry" ("hetairoi") and appointed first or court chiliarch (which made him the senior officer in the Royal Army after the regent and commander-in-chief Perdiccas since 323 BC, though he helped to assassinate him later) received Babylonia and, from that point, continued to expand his dominions ruthlessly. Seleucus established himself in Babylon in 312 BC, the year used as the foundation date of the Seleucid Empire.
The rise of Seleucus in Babylon threatened the eastern extent of Antigonus I's territory in Asia. Antigonus, along with his son Demetrius I of Macedon, unsuccessfully led a campaign to annex Babylon. The victory of Seleucus ensured his claim of Babylon and legitimacy. He ruled not only Babylonia, but the entire enormous eastern part of Alexander's empire, as described by Appian:
In the region of Punjab, Chandragupta Maurya (Sandrokottos) founded the Maurya Empire in 321 BC. Chandragupta conquered the Nanda Empire in Magadha, and relocated to the capital of Pataliputra. Chandragupta then redirected his attention back to the Indus and by 317 BC he conquered the remaining Greek satraps left by Alexander. Expecting a confrontation, Seleucus gathered his army and marched to the Indus. It is said that Chandragupta himself fielded an army of 600,000 men and 9,000 war elephants.
Mainstream scholarship asserts that Chandragupta received, formalized through a treaty, vast territory west of the Indus, including the Hindu Kush, modern day Afghanistan, and the Balochistan province of Pakistan. Archaeologically, concrete indications of Mauryan rule, such as the inscriptions of the Edicts of Ashoka, are known as far as Kandahar in southern Afghanistan. According to Appian:
It is generally thought that Chandragupta married Seleucus's daughter, or a Macedonian princess, a gift from Seleucus to formalize an alliance. In a return gesture, Chandragupta sent 500 war elephants, a military asset which would play a decisive role at the Battle of Ipsus in 301 BC. In addition to this treaty, Seleucus dispatched an ambassador, Megasthenes, to Chandragupta, and later Deimakos to his son Bindusara, at the Mauryan court at Pataliputra (modern Patna in Bihar state). Megasthenes wrote detailed descriptions of India and Chandragupta's reign, which have been partly preserved to us through Diodorus Siculus. Later Ptolemy II Philadelphus, the ruler of Ptolemaic Egypt and contemporary of Ashoka the Great, is also recorded by Pliny the Elder as having sent an ambassador named Dionysius to the Mauryan court.
Other territories ceded before Seleucus' death were Gedrosia in the south-east of the Iranian plateau, and, to the north of this, Arachosia on the west bank of the Indus River.
Following his and Lysimachus' victory over Antigonus Monophthalmus at the decisive Battle of Ipsus in 301 BC, Seleucus took control over eastern Anatolia and northern Syria.
In the latter area, he founded a new capital at Antioch on the Orontes, a city he named after his father. An alternative capital was established at Seleucia on the Tigris, north of Babylon. Seleucus's empire reached its greatest extent following his defeat of his erstwhile ally, Lysimachus, at Corupedion in 281 BC, after which Seleucus expanded his control to encompass western Anatolia. He hoped further to take control of Lysimachus's lands in Europe – primarily Thrace and even Macedonia itself, but was assassinated by Ptolemy Ceraunus on landing in Europe.
His son and successor, Antiochus I Soter, was left with an enormous realm consisting of nearly all of the Asian portions of the Empire, but faced with Antigonus II Gonatas in Macedonia and Ptolemy II Philadelphus in Egypt, he proved unable to pick up where his father had left off in conquering the European portions of Alexander's empire.
Antiochus I (reigned 281–261 BC) and his son and successor Antiochus II Theos (reigned 261–246 BC) were faced with challenges in the west, including repeated wars with Ptolemy II and a Celtic invasion of Asia Minor—distracting attention from holding the eastern portions of the Empire together. Towards the end of Antiochus II's reign, various provinces simultaneously asserted their independence, such as Bactria and Sogdiana under Diodotus, Cappadocia under Ariarathes III, and Parthia under Andragoras. A few years later, the latter was defeated and killed by the invading Parni of Arsaces – the region would then become the core of the Parthian Empire.
Diodotus, governor for the Bactrian territory, asserted independence in around 245 BC, although the exact date is far from certain, to form the Greco-Bactrian Kingdom. This kingdom was characterized by a rich Hellenistic culture and was to continue its domination of Bactria until around 125 BC when it was overrun by the invasion of northern nomads. One of the Greco-Bactrian kings, Demetrius I of Bactria, invaded India around 180 BC to form the Indo-Greek Kingdoms.
The rulers of Persis, called Fratarakas, also seem to have established some level of independence from the Seleucids during the 3rd century BC, especially from the time of Vahbarz. They would later overtly take the title of Kings of Persis, before becoming vassals to the newly formed Parthian Empire.
The Seleucid satrap of Parthia, named Andragoras, first claimed independence, in a parallel to the secession of his Bactrian neighbour. Soon after, however, a Parthian tribal chief called Arsaces invaded the Parthian territory around 238 BC to form the Arsacid dynasty, from which the Parthian Empire originated.
Antiochus II's son Seleucus II Callinicus came to the throne around 246 BC. Seleucus II was soon dramatically defeated in the Third Syrian War against Ptolemy III of Egypt and then had to fight a civil war against his own brother Antiochus Hierax. Taking advantage of this distraction, Bactria and Parthia seceded from the empire. In Asia Minor too, the Seleucid dynasty seemed to be losing control: the Gauls had fully established themselves in Galatia, semi-independent semi-Hellenized kingdoms had sprung up in Bithynia, Pontus, and Cappadocia, and the city of Pergamum in the west was asserting its independence under the Attalid Dynasty. The Seleucid economy started to show the first signs of weakness, as Galatians gained independence and Pergamum took control of coastal cities in Anatolia. Consequently, these powers managed to partially block Seleucid contact with the West.
A revival would begin when Seleucus II's younger son, Antiochus III the Great, took the throne in 223 BC. Although initially unsuccessful in the Fourth Syrian War against Egypt, which led to a defeat at the Battle of Raphia (217 BC), Antiochus would prove himself to be the greatest of the Seleucid rulers after Seleucus I himself. He spent the next ten years on his anabasis (journey) through the eastern parts of his domain and restoring rebellious vassals like Parthia and Greco-Bactria to at least nominal obedience. He won the Battle of the Arius and besieged the Bactrian capital, and even emulated Alexander with an expedition into India where he met with king Sophagasenus (Sanskrit: "Subhagasena") receiving war elephants.
Actual translation of Polybius 11.34 (No other source except Polybius makes any reference to Sophagasenus):
"He (Antiochus) crossed the Caucasus Indicus (Paropamisus) ("Hindu Kush") and descended into India; renewed his friendship with Sophagasenus the king of the Indians; received more elephants, until he had a hundred and fifty altogether; and having once more provisioned his troops, set out again personally with his army: leaving Androsthenes of Cyzicus the duty of taking home the treasure which this king had agreed to hand over to him. Having traversed Arachosia and crossed the river Enymanthus, he came through Drangene to Carmania; and as it was now winter, he put his men into winter quarters there."
When he returned to the west in 205 BC, Antiochus found that with the death of Ptolemy IV, the situation now looked propitious for another western campaign. Antiochus and Philip V of Macedon then made a pact to divide the Ptolemaic possessions outside of Egypt, and in the Fifth Syrian War, the Seleucids ousted Ptolemy V from control of Coele-Syria. The Battle of Panium (198 BC) definitively transferred these holdings from the Ptolemies to the Seleucids. Antiochus appeared, at the least, to have restored the Seleucid Kingdom to glory.
Following the defeat of his erstwhile ally Philip by Rome in 197 BC, Antiochus saw the opportunity for expansion into Greece itself. Encouraged by the exiled Carthaginian general Hannibal, and making an alliance with the disgruntled Aetolian League, Antiochus launched an invasion across the Hellespont. With his huge army he aimed to establish the Seleucid empire as the foremost power in the Hellenic world, but these plans put the empire on a collision course with the new rising power of the Mediterranean, the Roman Republic. At the battles of Thermopylae (191 BC) and Magnesia (190 BC), Antiochus's forces suffered resounding defeats, and he was compelled to make peace and sign the Treaty of Apamea (188 BC), the main clause of which saw the Seleucids agree to pay a large indemnity, to retreat from Anatolia and to never again attempt to expand Seleucid territory west of the Taurus Mountains. The Kingdom of Pergamum and the Republic of Rhodes, Rome's allies in the war, gained the former Seleucid lands in Anatolia. Antiochus died in 187 BC on another expedition to the east, where he sought to extract money to pay the indemnity.
The reign of his son and successor Seleucus IV Philopator (187–175 BC) was largely spent in attempts to pay the large indemnity, and Seleucus was ultimately assassinated by his minister Heliodorus.
Seleucus' younger brother, Antiochus IV Epiphanes, now seized the throne. He attempted to restore Seleucid power and prestige with a successful war against the old enemy, Ptolemaic Egypt, which met with initial success as the Seleucids defeated and drove the Egyptian army back to Alexandria itself. As the king planned on how to conclude the war, he was informed that Roman commissioners, led by the Proconsul Gaius Popillius Laenas, were near and requesting a meeting with the Seleucid king. Antiochus agreed, but when they met and Antiochus held out his hand in friendship, Popilius placed in his hand the tablets on which was written the decree of the senate and told him to read it. When the king said that he would call his friends into council and consider what he ought to do, Popilius drew a circle in the sand around the king's feet with the stick he was carrying and said, "Before you step out of that circle give me a reply to lay before the senate." For a few moments he hesitated, astounded at such a peremptory order, and at last replied, "I will do what the senate thinks right." He then chose to withdraw rather than set the empire to war with Rome again.
The latter part of his reign saw a further disintegration of the Empire despite his best efforts. Weakened economically, militarily and by loss of prestige, the Empire became vulnerable to rebels in the eastern areas of the empire, who began to further undermine the empire while the Parthians moved into the power vacuum to take over the old Persian lands. Antiochus' aggressive Hellenizing (or de-Judaizing) activities provoked a full scale armed rebellion in Judea—the Maccabean Revolt. Efforts to deal with both the Parthians and the Jews as well as retain control of the provinces at the same time proved beyond the weakened empire's power. Antiochus died during a military expedition against the Parthians in 164 BC.
After the death of Antiochus IV Epiphanes, the Seleucid Empire became increasingly unstable. Frequent civil wars made central authority tenuous at best. Epiphanes' young son, Antiochus V Eupator, was first overthrown by Seleucus IV's son, Demetrius I Soter in 161 BC. Demetrius I attempted to restore Seleucid power in Judea particularly, but was overthrown in 150 BC by Alexander Balas – an impostor who (with Egyptian backing) claimed to be the son of Epiphanes. Alexander Balas reigned until 145 BC when he was overthrown by Demetrius I's son, Demetrius II Nicator. Demetrius II proved unable to control the whole of the kingdom, however. While he ruled Babylonia and eastern Syria from Damascus, the remnants of Balas' supporters – first supporting Balas' son Antiochus VI, then the usurping general Diodotus Tryphon – held out in Antioch.
Meanwhile, the decay of the Empire's territorial possessions continued apace. By 143 BC, the Jews in the form of the Maccabees had fully established their independence. Parthian expansion continued as well. In 139 BC, Demetrius II was defeated in battle by the Parthians and was captured. By this time, the entire Iranian Plateau had been lost to Parthian control.
Demetrius Nicator's brother, Antiochus VII Sidetes, took the throne after his brother's capture. He faced the enormous task of restoring a rapidly crumbling empire, one facing threats on multiple fronts. Hard-won control of Coele-Syria was threatened by the Jewish Maccabee rebels. Once-vassal dynasties in Armenia, Cappadocia, and Pontus were threatening Syria and northern Mesopotamia; the nomadic Parthians, brilliantly led by Mithridates I of Parthia, had overrun upland Media (home of the famed Nisean horse herd); and Roman intervention was an ever-present threat. Sidetes managed to bring the Maccabees to heel and frighten the Anatolian dynasts into a temporary submission; then, in 133, he turned east with the full might of the Royal Army (supported by a body of Jews under the Hasmonean prince, John Hyrcanus) to drive back the Parthians.
Sidetes' campaign initially met with spectacular success, recapturing Mesopotamia, Babylonia, and Media. In the winter of 130/129 BC, his army was scattered in winter quarters throughout Media and Persis when the Parthian king, Phraates II, counter-attacked. Moving to intercept the Parthians with only the troops at his immediate disposal, he was ambushed and killed. Antiochus Sidetes is sometimes called the last great Seleucid king.
After the death of Antiochus VII Sidetes, all of the recovered eastern territories were recaptured by the Parthians. The Maccabees again rebelled, civil war soon tore the empire to pieces, and the Armenians began to encroach on Syria from the north.
By 100 BC, the once formidable Seleucid Empire encompassed little more than Antioch and some Syrian cities. Despite the clear collapse of their power, and the decline of their kingdom around them, nobles continued to play kingmakers on a regular basis, with occasional intervention from Ptolemaic Egypt and other outside powers. The Seleucids existed solely because no other nation wished to absorb them – seeing as they constituted a useful buffer between their other neighbours. In the wars in Anatolia between Mithridates VI of Pontus and Sulla of Rome, the Seleucids were largely left alone by both major combatants.
Mithridates' ambitious son-in-law, Tigranes the Great, king of Armenia, however, saw opportunity for expansion in the constant civil strife to the south. In 83 BC, at the invitation of one of the factions in the interminable civil wars, he invaded Syria and soon established himself as ruler of Syria, putting the Seleucid Empire virtually at an end.
Seleucid rule was not entirely over, however. Following the Roman general Lucullus' defeat of both Mithridates and Tigranes in 69 BC, a rump Seleucid kingdom was restored under Antiochus XIII. Even so, civil wars could not be prevented, as another Seleucid, Philip II, contested rule with Antiochus. After the Roman conquest of Pontus, the Romans became increasingly alarmed at the constant source of instability in Syria under the Seleucids. Once Mithridates was defeated by Pompey in 63 BC, Pompey set about the task of remaking the Hellenistic East, by creating new client kingdoms and establishing provinces. While client nations like Armenia and Judea were allowed to continue with some degree of autonomy under local kings, Pompey saw the Seleucids as too troublesome to continue; doing away with both rival Seleucid princes, he made Syria into a Roman province.
The Seleucid empire's geographical span, from the Aegean Sea to what is now Afghanistan and Pakistan, created a melting pot of various peoples, such as Greeks, Armenians, Georgians, Persians, Medes, Assyrians and Jews. The immense size and encompassing nature of the empire encouraged the Seleucid rulers to implement a policy of ethnic unity—a policy initiated by Alexander.
The Hellenization of the Seleucid empire was achieved by the establishment of Greek cities throughout the empire. Historically significant towns and cities, such as Antioch, were created or renamed with more appropriate Greek names. The creation of new Greek cities and towns was aided by the fact that the Greek mainland was overpopulated and therefore made the vast Seleucid empire ripe for colonization. Colonization was used to further Greek interest while facilitating the assimilation of many native groups. Socially, this led to the adoption of Greek practices and customs by the educated native classes in order to further themselves in public life, and at the same time the ruling Macedonian class gradually adopted some of the local traditions. By 313 BC, Hellenic ideas had begun their almost 250-year expansion into the Near East, Middle East, and Central Asian cultures. The empire's governmental framework was to rule by establishing hundreds of cities for trade and occupational purposes. Many of the existing cities began—or were compelled by force—to adopt Hellenized philosophic thought, religious sentiments, and politics, although the Seleucid rulers did incorporate Babylonian religious tenets to gain support.
Synthesizing Hellenic and indigenous cultural, religious, and philosophical ideas met with varying degrees of success—resulting in times of simultaneous peace and rebellion in various parts of the empire. Such was the case with the Jewish population of the Seleucid empire; the Jews' refusal to willingly Hellenize their religious beliefs or customs posed a significant problem which eventually led to war. Contrary to the accepting nature of the Ptolemaic empire towards native religions and customs, the Seleucids gradually tried to force Hellenization upon the Jewish people in their territory by outlawing Judaism. This eventually led to the revolt of the Jews under Seleucid control, which would later lead to the Jews achieving independence from the Seleucid empire.
Silesia
Silesia (, also , ) is a historical region of Central Europe located mostly in Poland, with small parts in the Czech Republic and Germany. Its area is approximately and the population is estimated at around 8,000,000 inhabitants. Silesia is split into two main sub-regions of Lower Silesia in the west and Upper Silesia in the east. Throughout history, Silesia developed a unique culture featuring diverse architecture, costumes, cuisine, traditions and the Silesian language.
Silesia is located along the Oder River, with the Sudeten Mountains extending across the southern border. The region possesses many historical landmarks and UNESCO World Heritage Sites. It is also rich in mineral and natural resources, and includes several important industrial areas. Silesia's largest city and historical capital is Wrocław. The biggest metropolitan area is the Upper Silesian metropolitan area, the centre of which is Katowice. Parts of the Czech city of Ostrava and the German city of Görlitz fall within the borders of Silesia.
Silesia's borders and national affiliation have changed over time, both when it was a hereditary possession of noble houses and after the rise of modern nation-states. The varied history with changing aristocratic possessions resulted in an abundance of castles, especially in the Jelenia Góra valley. The first known states to hold power in Silesia were probably those of Greater Moravia at the end of the 9th century and Bohemia early in the 10th century. In the 10th century, Silesia was incorporated into the early Polish state, and after its division in the 12th century became a Piast duchy. In the 14th century, it became a constituent part of the Bohemian Crown Lands under the Holy Roman Empire, which passed to the Austrian Habsburg Monarchy in 1526. As a result of the Silesian Wars, the region was annexed by Prussia in 1742.
After World War I, the easternmost part of Upper Silesia was granted to Poland by the Entente Powers after insurrections by Poles and the Upper Silesian plebiscite. The remaining former Austrian parts of Silesia were partitioned to Czechoslovakia, forming part of Czechoslovakia's Sudetenland region, and are today part of the Czech Republic. In 1945, after World War II, the bulk of Silesia was transferred to Polish jurisdiction by the Potsdam Agreement between the victorious Allies and became part of Poland, whose Communist government expelled the majority of Silesia's previous population. The small Lusatian strip west of the Oder–Neisse line, which had belonged to Silesia since 1815, remained in Germany.
As a result of the forced population shifts of 1945–48, today's inhabitants of Silesia speak the national languages of their respective countries. Previously German-speaking Lower Silesia has developed a new mixed Polish dialect and novel costumes. There is an ongoing debate over whether Silesian speech should be considered a dialect of Polish or a separate language. The Lower Silesian German dialect is nearing extinction owing to the expulsion of its speakers.
The names of Silesia in different languages most likely share their etymology (Latin, Spanish and English: "Silesia"). The names all relate to the name of a river (now the Ślęza) and a mountain (Mount Ślęża) in mid-southern Silesia, which served as a cult site for pagans before Christianization.
"Ślęża" is listed as one of the numerous Pre-Indo-European topographic names in the region (see old European hydronymy).
According to some Polonists, the name "Ślęża" or "Ślęż" is directly related to the Old Polish words "ślęg" or "śląg", which mean dampness, moisture, or humidity. They disagree with the hypothesis that the name "Śląsk" derives from the name of the Silings tribe, an etymology preferred by some German authors.
In the fourth century BC, Celts entered Silesia from the south through the Kłodzko (Glatz) Valley and settled around Mount Ślęża, near modern Wrocław, Oława and Strzelin.
Germanic Lugii tribes were first recorded within Silesia in the 1st century. West Slavs and Lechites arrived in the region around the 7th century, and by the early ninth century their settlements had stabilized. Local West Slavs started to erect boundary structures like the Silesian Przesieka and the Silesia Walls. The eastern border of Silesian settlement lay west of Bytom and east of Racibórz and Cieszyn. East of this line dwelt a closely related Lechitic tribe, the Vistulans. Their northern border was in the valley of the Barycz River, north of which lived the Western Polans tribe who gave Poland its name.
The first known states in Silesia were Greater Moravia and Bohemia. In the 10th century, the Polish ruler Mieszko I of the Piast dynasty incorporated Silesia into the Polish state. During the Fragmentation of Poland, Silesia and the rest of the country were divided among many independent duchies ruled by various Silesian dukes. During this time, German cultural and ethnic influence increased as a result of immigration from German-speaking parts of the Holy Roman Empire. In 1178, parts of the Duchy of Kraków around Bytom, Oświęcim, Chrzanów, and Siewierz were transferred to the Silesian Piasts, although their population was primarily Vistulan and not of Silesian descent.
In 1241, after raiding the Lesser Poland region, the Mongols invaded Europe and Silesia, causing widespread panic and mass flight. They looted much of the region and defeated the combined Polish and German forces under Henry II the Pious at the Battle of Legnica, which took place at Legnickie Pole near the Silesian city of Legnica. Upon the death of Ögedei Khan, the Mongols chose not to press further into Europe, but returned east to participate in the election of a new Grand Khan (leader).
Between 1289 and 1292, the Bohemian king Wenceslaus II became "suzerain" of some of the Upper Silesian duchies. Polish monarchs did not renounce their hereditary rights to Silesia until 1335. The province became part of the Bohemian Crown under the Holy Roman Empire, and passed with that crown to the Habsburg Monarchy of Austria in 1526.
In the 15th century, several changes were made to Silesia's borders. Parts of the territories which had been transferred to the Silesian Piasts in 1178 were bought by the Polish kings in the second half of the 15th century (the Duchy of Oświęcim in 1457; the Duchy of Zator in 1494). The Bytom area remained in the possession of the Silesian Piasts, though it was a part of the Diocese of Kraków. The Duchy of Crossen was inherited by the Margraviate of Brandenburg in 1476, and with the renunciation of King Ferdinand I and the estates of Bohemia in 1538, became an integral part of Brandenburg.
In 1742, most of Silesia was seized by King Frederick the Great of Prussia in the War of the Austrian Succession, eventually becoming the Prussian Province of Silesia in 1815; consequently, Silesia became part of the German Empire when it was proclaimed in 1871.
After World War I, a part of Silesia, Upper Silesia, was contested by Germany and the newly independent Second Polish Republic. The League of Nations organized a plebiscite to decide the issue in 1921. It resulted in 60% of votes being cast for Germany and 40% for Poland. Following the third Silesian Uprising (1921), however, the easternmost portion of Upper Silesia (including Katowice), with a majority ethnic Polish population, was awarded to Poland, becoming the Silesian Voivodeship. The Prussian Province of Silesia within Germany was then divided into the provinces of Lower Silesia and Upper Silesia. Meanwhile, Austrian Silesia, the small portion of Silesia retained by Austria after the Silesian Wars, was mostly awarded to the new Czechoslovakia (becoming known as Czech Silesia and Zaolzie), although most of Cieszyn and territory to the east of it went to Poland.
Polish Silesia was among the first regions invaded during Germany's 1939 attack on Poland. One of the stated goals of the Nazi occupation, particularly in Upper Silesia, was the extermination of those whom the Nazis viewed as subhuman, namely Jews and ethnic Poles. The Polish and Jewish population of the then Polish part of Silesia was subjected to genocide involving ethnic cleansing and mass murder, while Germans were settled there in pursuit of "Lebensraum". Two thousand Polish intellectuals, politicians, and businessmen were murdered in the "Intelligenzaktion Schlesien" in 1940 as part of a Poland-wide Germanization program. Silesia also housed one of the two main wartime centers where the Nazis conducted medical experiments on kidnapped Polish children.
The Potsdam Conference of 1945 defined the Oder–Neisse line as the border between Germany and Poland, pending a final peace conference with Germany which never took place. At the end of World War II, Germans in Silesia fled the battleground, assuming they would return once the war was over. They could not return, however, and those who had stayed were expelled; a new Polish population, drawn from central Poland or itself forcibly resettled from the Soviet Union, took their place. After 1945 and in 1946, nearly all of the 4.5 million Silesians of German descent fled or were interned in camps and forcibly expelled, including some thousand German Jews who had survived the Holocaust and returned to Silesia; 634,106 Silesians died in the expulsion, nearly 14% of the population. The newly formed Polish United Workers' Party created a Ministry of the Recovered Territories, which claimed half of the available arable land for state-run collectivized farms. Many of the new Polish Silesians, who resented the Germans for the 1939 invasion and the brutality of the occupation, now resented the newly formed Polish communist government for its population shifting and its interference in agricultural and industrial affairs.
The administrative division of Silesia within Poland has changed several times since 1945. Since 1999, it has been divided between Lubusz Voivodeship, Lower Silesian Voivodeship, Opole Voivodeship, and Silesian Voivodeship. Czech Silesia is now part of the Czech Republic, forming the Moravian-Silesian Region and the northern part of the Olomouc Region. Germany retains the Silesia-Lusatia region ("Niederschlesien-Oberlausitz" or "Schlesische Oberlausitz") west of the Neisse, which is part of the federal state of Saxony.
Most of Silesia is relatively flat, although its southern border is generally mountainous. It is primarily located in a swath running along both banks of the upper and middle Oder (Odra) River, but it extends eastwards to the upper Vistula River. The region also includes many tributaries of the Oder, including the Bóbr (and its tributary the Kwisa), the Barycz and the Nysa Kłodzka. The Sudeten Mountains run along most of the southern edge of the region, though at its south-eastern extreme it reaches the Silesian Beskids and Moravian-Silesian Beskids, which belong to the Carpathian Mountains range.
Historically, Silesia was bounded to the west by the Kwisa and Bóbr Rivers, while the territory west of the Kwisa was in Upper Lusatia (earlier "Milsko"). However, because part of Upper Lusatia was included in the Province of Silesia in 1815, in Germany Görlitz, Niederschlesischer Oberlausitzkreis and neighbouring areas are considered parts of historical Silesia. Those districts, along with Poland's Lower Silesian Voivodeship and parts of Lubusz Voivodeship, make up the geographic region of Lower Silesia.
Silesia has undergone a similar notional extension at its eastern extreme. Historically, it extended only as far as the Brynica River, which separates it from Zagłębie Dąbrowskie in the Lesser Poland region. However, to many Poles today, Silesia ("Śląsk") is understood to cover all of the area around Katowice, including Zagłębie. This interpretation is given official sanction in the use of the name Silesian Voivodeship ("województwo śląskie") for the province covering this area. In fact, the word "Śląsk" in Polish (when used without qualification) now commonly refers exclusively to this area (also called "Górny Śląsk" or Upper Silesia).
As well as the Katowice area, historical Upper Silesia also includes the Opole region (Poland's Opole Voivodeship) and Czech Silesia. Czech Silesia consists of a part of the Moravian-Silesian Region and the Jeseník District in the Olomouc Region.
Silesia is a resource-rich and populous region. Coal has been mined there since the middle of the 18th century. The industry grew while Silesia was part of Germany and peaked in the 1970s under the People's Republic of Poland. During this period, Silesia became one of the world's largest producers of coal, with a record tonnage in 1979. Coal mining declined over the next two decades, but increased again following the end of Communist rule.
The 41 coal mines in Silesia are mostly part of the Upper Silesian Coal Basin, which lies in the Silesian Upland. The coalfield has an area of about 4,500 km2. Deposits in Lower Silesia have proven to be difficult to exploit and the area's unprofitable mines were closed in 2000. In 2008, an estimated 35 billion tonnes of lignite reserves were found near Legnica, making them some of the largest in the world.
Iron ore has been mined in the upland areas of Silesia since the fourth century BC, as were lead, copper, silver, and gold in the same period. Zinc, cadmium, arsenic, and uranium have also been mined in the region. Lower Silesia features large-scale copper mining and processing between the cities of Legnica, Głogów, Lubin, and Polkowice.
The region is known for stone quarrying to produce limestone, marl, marble, and basalt.
The region also has a thriving agricultural sector, which produces cereals (wheat, rye, barley, oats, corn), potatoes, rapeseed, sugar beets and other crops. Milk production is well developed. Opole Silesia has for decades occupied the top spot in Poland in indices of the effectiveness of agricultural land use.
Mountainous parts of southern Silesia feature many significant and attractive tourist destinations (e.g., Karpacz, Szczyrk, Wisła). Silesia is generally well forested, as greenness is highly valued by the local population, particularly in the heavily industrialized parts of Silesia.
Silesia has been historically diverse in every aspect. Nowadays, the largest part of Silesia is located in Poland; it is often cited as one of the most diverse regions in that country.
The United States Immigration Commission, in its "Dictionary of Races or Peoples" (published in 1911, during the period of intense immigration from Silesia to the USA), considered "Silesian" a geographical (not ethnic) term, denoting the inhabitants of Silesia. It also mentioned the existence of both Polish Silesian and German Silesian dialects in the region.
Modern Silesia is inhabited by Poles, Silesians, Germans, and Czechs. Germans first came to Silesia during the Late Medieval Ostsiedlung. The last Polish census of 2011 showed that the Silesians are the largest ethnic or national minority in Poland, Germans being the second; both groups are located mostly in Upper Silesia. The Czech part of Silesia is inhabited by Czechs, Moravians, Silesians, and Poles.
In the early 19th century the population of the Prussian part of Silesia was between 2/3 and 3/4 German-speaking, between 1/5 and 1/3 Polish-speaking, with Sorbs, Czechs, Moravians and Jews forming other smaller minorities (see Table 1. below).
Before the Second World War, Silesia was inhabited mostly by Germans, with Poles a large minority forming a majority in Upper Silesia. Silesia was also home to Czech and Jewish minorities. The German population tended to be based in the urban centres and in the rural areas to the north and west, whilst the Polish population was mostly rural and could be found in the east and in the south.
The ethnic structure of Prussian Upper Silesia (Opole regency) during the 19th and early 20th centuries can be found in Table 2:
The Austrian part of Silesia had a mixed German, Polish and Czech population, with Polish speakers forming a majority in Cieszyn Silesia.
Historically, Silesia was about equally split between Protestants (overwhelmingly Lutherans) and Roman Catholics. In an 1890 census taken in the German part, Roman Catholics made up a slight majority of 53%, while the remaining 47% were almost entirely Lutheran. Geographically speaking, Lower Silesia was mostly Lutheran except for the Glatzer Land (now Kłodzko County). Upper Silesia was mostly Roman Catholic except for some of its northwestern parts, which were predominantly Lutheran. Generally speaking, the population was mostly Protestant in the western parts, and it tended to be more Roman Catholic the further east one went. In Upper Silesia, Protestants were concentrated in larger cities and often identified as German. After World War II, the religious demographics changed drastically as Germans, who constituted the bulk of the Protestant population, were forcibly expelled. Poles, who were mostly Roman Catholic, were resettled in their place. Today, Silesia remains predominantly Roman Catholic.
Existing since the 12th century, Silesia's Jewish community was concentrated around Wrocław and Upper Silesia, and numbered 48,003 (1.1% of the population) in 1890, decreasing to 44,985 persons (0.9%) by 1910. In Polish East Upper Silesia, the number of Jews was around 90,000–100,000. Historically the community had suffered a number of localised expulsions such as their 1453 expulsion from Wrocław. From 1712 to 1820 a succession of men held the title Chief Rabbi of Silesia ("Landesrabbiner"): Naphtali ha-Kohen (1712–16); Samuel ben Naphtali (1716–22); Ḥayyim Jonah Te'omim (1722–1727); Baruch b. Reuben Gomperz (1733–54); Joseph Jonas Fränkel (1754–93); Jeremiah Löw Berliner (1793–99); Lewin Saul Fränkel (1800–7); Aaron Karfunkel (1807–16); and Abraham ben Gedaliah Tiktin (1816–20).
After the German invasion of Poland in 1939, in accordance with Nazi racial policy, the Jewish population of Silesia was subjected to Nazi genocide, with executions carried out by Einsatzgruppe z. B.V. led by Udo von Woyrsch and Einsatzgruppe I led by Bruno Streckenbach, imprisonment in ghettos, and ethnic cleansing to the General Government. In their efforts to exterminate the Jews through murder and ethnic cleansing, the Nazis established the Auschwitz and Gross-Rosen camps in the province of Silesia. Expulsions were carried out openly and reported in the local press. From 1942, those sent to ghettos were deported to concentration and work camps. Between 5 May and 17 June, 20,000 Silesian Jews were sent to the gas chambers at Birkenau, and during August 1942, 10,000 to 13,000 Silesian Jews were murdered by gassing at Auschwitz. Most Jews in Silesia were exterminated by the Nazis. After the war, Silesia became a major centre for the repatriation of Poland's Jewish population that had survived the Nazi German extermination: in autumn 1945 there were 15,000 Jews in Lower Silesia, mostly Polish Jews returned from territories annexed by the Soviet Union, a figure that rose to seventy thousand in 1946 as Jewish survivors from other regions of Poland were relocated there.
The majority of Germans fled or were expelled from the present-day Polish and Czech parts of Silesia during and after World War II. From June 1945 to January 1947, 1.77 million Germans were expelled from Lower Silesia, and 310,000 from Upper Silesia. Today, most German Silesians and their descendants live in the territory of the Federal Republic of Germany, many of them in the Ruhr area working as miners, like their ancestors in Silesia. To smooth their integration into West German society after 1945, they were placed into officially recognized organizations, like the Landsmannschaft Schlesien, with financing from the federal West German budget. One of its most notable but controversial spokesmen was the Christian Democratic Union politician Herbert Hupka.
The expulsion of Germans led to widespread underpopulation. The population of the town of Glogau fell from 33,500 to 5,000, and from 1939 to 1966 the population of Wrocław fell by 25%. Attempts to repopulate Silesia proved unsuccessful in the 1940s and 1950s, and Silesia's population did not reach pre-war levels until the late 1970s. The Polish settlers who repopulated Silesia were partly from the former Polish Eastern Borderlands, which was annexed by the Soviet Union in 1939. The former German city of Breslau was partly repopulated with refugees from the formerly Polish city of Lwów.
The following table lists the cities in Silesia with a population greater than 30,000 (2015).
The emblems of Lower Silesia and Upper Silesia originate from the emblems of the Piasts of Lower Silesia and Upper Silesia. The coat of arms of Upper Silesia depicts a golden eagle on a blue shield. The coat of arms of Lower Silesia depicts a black eagle on a golden (yellow) shield.
The colors of Silesian flags refer to the coat of arms of Silesia.
Sudetes
The Sudetes (also known as the Sudeten after their German name; Czech: "Krkonošsko-jesenická subprovincie" or "Sudety"; Polish: "Sudety") are a mountain range in Central Europe. They are the highest part of the Bohemian Massif. They stretch from the Saxon capital of Dresden in the northwest to the Głubczyce plateau ("Płaskowyż Głubczycki") in Poland and to the Ostrava Basin and Moravian Gate ("Moravská brána") in the Czech Republic in the east. Geographically, the Sudetes are a "Mittelgebirge" with some characteristics typical of high mountains. Their plateaus and subtle summit relief make the Sudetes more akin to the mountains of Northern Europe than to the Alps.
In the west, the Sudetes border the Elbe Sandstone Mountains. The westernmost point of the Sudetes lies in the Dresden Heath ("Dresdner Heide"), the westernmost part of the West Lusatian Hill Country and Uplands, in Dresden. In the east, the Moravian Gate and the Ostrava Basin separate the Sudetes from the Carpathian Mountains. The Sudetes' highest mountain is Mount Sněžka/Śnieżka (1,603 m/5,259 ft) in the Krkonoše/Karkonosze Mountains, lying on the border between the Czech Republic and Poland; it is also the highest mountain of the Czech Republic, Bohemia, Silesia, and the Lower Silesian Voivodeship. Mount Praděd (1,491 m/4,893 ft) in the Hrubý Jeseník Mountains is the highest mountain of Moravia. Lusatia's highest point (1,072 m/3,517 ft) lies on Mount Smrk/Smrek in the Jizera Mountains, and the Sudetes' highest mountain in Germany, which is also the country's highest mountain east of the River Elbe, is Mount Lausche/Luž (Upper Sorbian: "Łysa"; 793 m/2,600 ft) in the Zittau Mountains, the highest part of the Lusatian Mountains. The most notable rivers rising in the Sudetes are the Elbe, Oder, Spree, Morava, Bóbr, Lusatian Neisse, Eastern Neisse, Jizera and Kwisa. The highest parts of the Sudetes are protected by national parks: Karkonosze and Stołowe in Poland and Krkonoše in the Czech Republic.
The Sudeten Germans (the German-speaking inhabitants of Czechoslovakia) as well as the Sudetenland (the border regions of Bohemia, Moravia, and Czech Silesia they inhabited) are named after the Sudetes.
The name "Sudetes" is derived from "Sudeti montes", a Latinization of the name "Soudeta ore" used in the "Geographia" by the Greco-Roman writer Ptolemy (Book 2, Chapter 10) c. AD 150 for a range of mountains in Germania in the general region of the modern Czech Republic.
There is no consensus about which mountains he meant, and he could for example have intended the Ore Mountains, joining the modern Sudetes to their west, or even (according to Schütte) the Bohemian Forest (although this is normally considered to be equivalent to Ptolemy's Gabreta forest). The modern Sudetes are probably Ptolemy's Askiburgion mountains.
Ptolemy wrote "Σούδητα" in Greek, which is a neuter plural. Latin "mons", however, is masculine, hence "Sudeti". The Latin version, and the modern geographical identification, is likely a scholastic innovation, as it is not attested in classical Latin literature. The meaning of the name is not known. In one hypothetical derivation, it means "Mountains of Wild Boars", relying on Indo-European *"su-", "pig". Perhaps a better etymology is from Latin "sudis", plural "sudes", "spines", which can be used of spiny fish or spiny terrain.
The Sudetes are usually divided into the Western, Central and Eastern Sudetes.
The High Sudetes is a collective name for the Krkonoše, Hrubý Jeseník and Śnieżnik mountain ranges. The Sudetes also comprise larger basins such as the Jelenia Góra valley and the Kłodzko Valley.
The highest mountains, those located along the Czech–Polish border, receive annual precipitation of around 1,500 mm. The Stołowe Mountains, which reach 919 m, receive precipitation ranging from 750 mm at lower locations to 920 mm in the upper parts, with July being the rainiest month. Snow cover in the Stołowe Mountains typically lasts 70 to 95 days, depending on altitude.
Settlement, logging and clearance have left forest pockets in the foothills, with dense and continuous forest found in the upper parts of the mountains. Due to logging in recent centuries, little remains of the broad-leaved trees such as beech, sycamore, ash and littleleaf linden that were once common in the Sudetes. Instead, Norway spruce was planted in their place in the early 19th century, in some places amounting to monocultures. To provide more space for spruce plantations, various peatlands were drained in the 19th and 20th centuries. Some spruce plantations have suffered severe damage because the seeds used came from lowland specimens not adapted to mountain conditions. Silver fir grows naturally in the Sudetes and was more widespread in the past, before clearance from the Late Middle Ages onward and subsequent industrial pollution reduced the stands.
Many arctic–alpine and alpine vascular plants have a disjunct distribution, being notably absent from the central Sudetes despite suitable habitats. Possibly this is the result of a warm period during the Holocene (the last 10,000 years), which wiped out cold-adapted vascular plants in the medium-sized mountains of the central Sudetes, where there was no higher ground to serve as refugia. Besides altitude, the distribution of some alpine plants is influenced by soil. This is the case for "Aster alpinus", which grows preferentially on calcareous ground. Other alpine plants such as "Cardamine amara", "Epilobium anagallidifolium", "Luzula sudetica" and "Solidago virgaurea" occur beyond their altitudinal zonation in very humid areas.
Peatlands are common in the mountains, occurring on high plateaus or in valley bottoms. Fens occur on slopes.
The higher mountains of the Sudetes lie above the timber line, which is made up of Norway spruce. Spruces in wind-exposed areas display features such as flag-like disposition of branches, tilted stems and elongated stem cross-sections. Forest-free areas above the timber line have been enlarged historically by deforestation, yet the lowering of the timber line by human activity is minimal. Areas above the timber line appear discontinuously as "islands" in the Sudetes. At Krkonoše the timber line lies at c. 1230 m a.s.l., while to the southeast, in the Hrubý Jeseník mountains, it lies at c. 1310 m a.s.l. Parts of the Hrubý Jeseník mountains have been above the timber line for no less than 5000 years. The mountains rise considerably above the timber line, by at most 400 m, a characteristic that sets the Sudetes apart from other "Mittelgebirge" of Central Europe.
Geological research has been hampered by the multinational geography of the Sudetes and the limitation of studies to state boundaries.
The igneous and metamorphic rocks of the Sudetes originated during the Variscan orogeny and its aftermath. The Sudetes are the northeasternmost accessible part of the Variscan orogen, as in the North European Plain the orogen is buried beneath sediments. Plate tectonic movements during the Variscan orogeny assembled four major and two to three lesser tectonostratigraphic terranes. The assemblage of the terranes ought to have involved the closure of at least two ocean basins containing oceanic crust and marine sediments. This is reflected in the ophiolites, MORB-basalts, blueschists and eclogites that occur in-between terranes. Various terranes of the Sudetes are likely extensions of the Armorican terrane, while other terranes may be the fringes of the ancient Baltica continent. One possibility for the amalgamation of terranes in the Sudetes is that the Góry Sowie-Kłodzko terrane collided with the Orlica-Śnieżnik terrane, causing the closure of a small oceanic basin. This event led to obduction of the Central Sudetic ophiolite in the Devonian period. In the Early Carboniferous, the joint Góry Sowie-Kłodzko-Orlica-Śnieżnik terrane collided with the Brunovistulian terrane. This last terrane was part of the Old Red Continent and could correspond either to Baltica or to the eastern tip of the narrow Avalonia terrane. Also by the Early Carboniferous, the Saxothuringian terrane collided with the Góry Sowie-Kłodzko-Orlica-Śnieżnik terrane, closing the Rheic Ocean.
Once the main phase of deformation of the orogeny was over, basins that had formed between metamorphic rock massifs were filled by sedimentary rock in the Devonian and Carboniferous periods. During and after sedimentation, large granitic plutons intruded the crust. Viewed on a map today, these plutons make up about 15% of the Sudetes. The granites are of S-type. The granites and granitic gneisses of Izera in the western Sudetes are disassociated from the orogeny and are thought to have formed during rifting along a passive continental margin. The Karkonosze Granite, also in the western Sudetes, has been dated to c. 318 million years ago, at the beginning of the Variscan orogeny. The Karkonosze Granite is intruded by somewhat younger lamprophyre dykes.
A NW-SE to WNW-ESE oriented strike-slip fault, the Intra-Sudetic Fault, runs the length of the Sudetes. The Intra-Sudetic Fault is parallel with the Upper Elbe Fault and the Middle Odra Fault. Other main faults in the Sudetes are also NW-SE oriented, dextral and of strike-slip type; these include the Tłumaczów-Sienna Fault and the Sudetic Marginal Fault.
There are remnants of lava flows and volcanic plugs in the Sudetes. The volcanic rocks making up these outcrops are of mafic chemistry, include basanite, and represent episodes of volcanism in the Oligocene and Miocene periods. Volcanism affected not only the Sudetes but also parts of the Sudetic Foreland, forming part of a SW-NE oriented Bohemo-Silesian belt of volcanic rocks. Mantle xenoliths have been recovered from the lavas of a volcano at the Ještěd-Kozákov Ridge in the Czech western Sudetes. These pyroxenite xenoliths arrived at the surface from approximate depths of 35, 70 and 73 km and indicate a complex history for the mantle beneath the Sudetes.
There are thermal springs in the Sudetes with measured temperatures of 29 to 44 °C. Drilling has revealed the existence of waters at 87 °C at depths of 2000 m. These modern waters are believed to be associated with the Late Cenozoic volcanism of Central Europe.
The Sudetes form the NE border of the Bohemian Massif. In detail, the Sudetes are made up of a series of massifs that are rectangular or rhomboid in plan view. These mountains correspond to horsts and domes separated by basins, including grabens. The mountains took their present form after the Late Mesozoic retreat of the seas from the area, which left the Sudetes subject to denudation for at least 65 million years. This meant that during the Late Cretaceous and Early Cenozoic, 8 to 4 km of rock was eroded from the top of what is now the Sudetes. Concurrently with the Cenozoic denudation, the climate cooled due to the northward drift of Europe. The collision between Africa and Europe resulted in the deformation and uplift of the Sudetes; as such, the uplift is related to the contemporary rise of the Alps and Carpathians. Uplift was accomplished by the creation or reactivation of numerous faults, leading to a reshaping of the relief by renewed erosion. Various "hanging valleys" attest to this uplift. Block tectonics has uplifted or sunk crustal blocks. While the Late Cenozoic uplift raised the Sudetes as a whole, some grabens predate this uplift.
Weathering during the Cenozoic led to the formation of an etchplain in parts of the Sudetes. While this etchplain has been eroded, various landforms and weathering mantles have been suggested to attest to its former existence. At present the mountain range shows a remarkable diversity of landforms, including escarpments, inselbergs, bornhardts, granitic domes, tors, flared slopes and weathering pits. Various escarpments originated from faults and may reach heights of up to 500 m. To the northeast, the Sudetes are separated from the Sudetic Foreland by a sharp mountain front made up of an escarpment linked to the Sudetic Marginal Fault. Near Kaczawa this escarpment reaches 80 to 120 m in height. The relative influence of Pliocene–Quaternary tectonic movements and erosion in shaping the mountain landscape may vary along the northern front of the Sudetes.
During the Quaternary glaciations, the Krkonoše mountains were the most glaciated part of the Sudetes. Evidence of this includes their glacial cirques and the glacial valleys that developed next to them. The precise timing of the glaciations in the Sudetes is poorly constrained. Parts of the Sudetes remained free from glacier ice, developing permafrost soils and periglacial landforms such as rock glaciers, nivation hollows, patterned ground, blockfields, solifluction landforms, blockstreams, tors and cryoplanation terraces. The occurrence of these periglacial landforms depends on altitude, the steepness and direction of slopes, and the underlying rock type.
Other than debris flows, there is little contemporary mass wasting in the mountains. Avalanches, however, are common in the Sudetes.
By the 12th century the area around the Sudetes had been relatively densely settled, with agriculture and settlements expanding further in the High Middle Ages from the 13th century onward. The majority of settlers were Germans from neighbouring Silesia, founding typical Waldhufendörfer. As this trend continued, forest thinning and deforestation had become clearly unsustainable by the 14th century. In the 15th and 16th centuries agriculture reached the inner part of the Stołowe Mountains in the Central Sudetes. Destruction and degradation of the Sudetes forest peaked in the 16th and 17th centuries, with demand for firewood coming from the glasshouses that operated throughout the area in the early modern period.
A limited form of forest management began in the 18th century, while in the industrial age demand for firewood was sustained by the metallurgical industries in the settlements and cities around the mountains. In the 19th century the Central Sudetes had an economic boom with sandstone quarrying and a flourishing tourism industry centered on the natural scenery. Despite this, there was, at least since the 1880s, a trend of depopulation of villages and hamlets which continued into the 20th century. Since World War II various areas that were cleared of forest have been re-naturalized. Industrial activity across Europe has caused considerable damage to the forests, as acid rain and heavy metals have arrived with westerly and southwesterly winds. Silver firs have proven particularly vulnerable to industrial soil contamination.
After World War I the name "Sudetenland" came into use to describe areas of the First Czechoslovak Republic with large ethnic German populations. In 1918 the short-lived rump state of German-Austria proclaimed a Province of the Sudetenland in northern Moravia and Austrian Silesia around the city of Opava ("Troppau").
The term was used in a wider sense when on 1 October 1933 Konrad Henlein founded the Sudeten German Party and in Nazi German parlance "Sudetendeutsche" (Sudeten Germans) referred to all indigenous ethnic Germans in Czechoslovakia. They were heavily clustered in the entire mountainous periphery of Czechoslovakia—not only in the former Moravian "Provinz Sudetenland" but also along the northwestern Bohemian borderlands with German Lower Silesia, Saxony and Bavaria, in an area formerly called German Bohemia. In total, the German minority of pre-World War II Czechoslovakia amounted to around 20% of the national population.
Sparking a "Sudeten Crisis", Hitler got his future enemies to concede the "Sudetenland" with most of the Czechoslovak border fortifications in the 1938 Munich Agreement, leaving the remainder of Czechoslovakia shorn of its natural borders and buffer zone, finally occupied by Germany in March 1939. After being annexed by Nazi Germany, much of the region was redesignated as the "Reichsgau Sudetenland".
After World War II, most of the previous population of the Sudetes was forcibly expelled on the basis of the Potsdam Agreement and the Beneš decrees, and the region was re-settled by new Polish and Czechoslovak citizens. A considerable proportion of the Czechoslovak populace thereafter strongly objected to the use of the term "Sudety". In the Czech Republic the designation "Krkonošsko-jesenická subprovincie" is used officially and in maps etc. usually only the discrete Czech names for the individual mountain ranges (e.g. Krkonoše) appear, as under Subdivisions above.
Part of the economy of the Sudetes is dedicated to tourism. Coal mining towns like Wałbrzych have re-oriented their economies towards tourism since the decline of mining in the 1980s. As of 2000 the scholar Krzysztof R. Mazurski judged that the Sudetes, much like Poland's Baltic coast and the Carpathians, were unlikely to attract much foreign tourism. Sandstone was quarried in the Sudetes during the 19th and 20th centuries. Volcanic rock has likewise been quarried, to such a degree that untouched volcanoes are scarce. Sandstone labyrinths have been a notable tourist attraction since the 19th century, with considerable investment made in laying out trails, some of which involve rock engineering.
In the Sudetes there are many spa towns with sanatoria. Many places also have a developed tourist infrastructure of hotels, guest houses and ski facilities.
The nearest international airports are Dresden Airport in Dresden and Copernicus Airport Wrocław in Wrocław.
Notable towns in this area include:
Sigismund Báthory
Sigismund Báthory (1573 – 27 March 1613) was Prince of Transylvania several times between 1586 and 1602, and Duke of Racibórz and Opole in Silesia in 1598. His father, Christopher Báthory, ruled Transylvania as voivode (or deputy) of the absent prince, Stephen Báthory. Sigismund was still a child when the Diet of Transylvania elected him voivode at his dying father's request in 1581. Initially, regency councils administered Transylvania on his behalf, but Stephen Báthory made János Ghyczy the sole regent in 1585. Sigismund adopted the title of prince after Stephen Báthory died.
The Diet proclaimed Sigismund to be of age in 1588, but only after he agreed to expel the Jesuits. Pope Sixtus V excommunicated him, but the ban was lifted in 1590, and the Jesuits returned a year later. His blatant favoritism towards the Catholics made him unpopular among his Protestant subjects. He decided to join the Holy League against the Ottoman Empire. Since he could not convince the Diet to support his plan, he renounced the throne in July 1594, but the commanders of the army convinced him to revoke his abdication. At their proposal, he purged the noblemen who opposed the war against the Ottomans. He officially joined the Holy League and married Maria Christina of Habsburg, a niece of the Holy Roman Emperor, Rudolph II. The marriage was never consummated.
Michael the Brave, Voivode of Wallachia, and Ștefan Răzvan, Voivode of Moldavia, acknowledged his suzerainty. Their united forces defeated an Ottoman army in the Battle of Giurgiu. The triumph was followed by a series of Ottoman victories, and Sigismund abdicated in favor of Rudolph II in early 1598, receiving the duchies of Racibórz and Opole as compensation. His maternal uncle, Stephen Bocskai, persuaded him to return in late summer, but he could not make peace with the Ottoman Empire. He renounced Transylvania in favor of Andrew Báthory and settled in Poland in 1599. During the following years, Transylvania was regularly pillaged by unpaid mercenaries and Ottoman marauders. Sigismund returned at the head of a Polish army in 1601, but he could not strengthen his position. He again abdicated in favor of Rudolph and settled in Bohemia in July 1602. After he was accused of a conspiracy against the emperor, he spent fourteen months in jail in Prague in 1610 and 1611. He died at his Bohemian estate.
Sigismund was the son of Christopher Báthory and his second wife, Elisabeth Bocskai. He was born in Várad (now Oradea in Romania) in 1573, according to the Transylvanian historian, István Szamosközy. At the time of Sigismund's birth, his uncle, Stephen Báthory, was the voivode of Transylvania. After being elected King of Poland in late 1575, Stephen Báthory adopted the title of Prince of Transylvania and made Sigismund's father voivode. Stephen Báthory set up a separate chancellery in Kraków to supervise the administration of the principality.
Sigismund's father and uncle were Roman Catholic, but his mother was Calvinist. According to the Jesuit Antonio Possevino, Sigismund demonstrated his devotion to Catholicism as early as the age of seven. His mother mocked him for his piety, saying that he only wanted to secure his uncle's goodwill. Sigismund was especially hostile towards the Anti-Trinitarians in his youth. His mother died in early 1581.
Christopher Báthory fell seriously ill after his wife's death. At his request, the Diet of Transylvania elected Sigismund voivode in Kolozsvár (present-day Cluj-Napoca in Romania) around 15 May 1581. Since Sigismund was still a minor, his dying father tasked a council of twelve noblemen with the government. Christopher Báthory's cousin, Dénes Csáky, and his brother-in-law, Stephen Bocskai, headed the council. Christopher Báthory died on 27 May.
The Ottoman Sultan, Murad III, confirmed Sigismund's election on 3 July 1581, reminding him of his obligation to pay a yearly tribute of 15,000 florins. However, Pál Márkházy, a young nobleman who lived in Istanbul, offered to double the tribute and to pay an additional tax of 100,000 florins if he was made the ruler of Transylvania. The Grand Vizier, Koca Sinan Pasha, supported Márkházy's claim. Taking advantage of the situation, Murad demanded the same payments from Sigismund, but Stephen Báthory and the "Three Nations of Transylvania" resisted. After receiving the customary tribute from Transylvania, the sultan again confirmed Sigismund's rule in November 1581.
Stephen Báthory, who took charge of Sigismund's education, confirmed the position of his Jesuit tutors, János Leleszi and Gergely Vásárhelyi. According to Szamosközy, Stephen Báthory also ordered Sigismund's companions to talk of foreign lands, wars, and hunting with him during their dinners together. He reorganized the government on 3 May 1583, charging Sándor Kendi, Farkas Kovacsóczy, and László Sombori with the administration of Transylvania during Sigismund's minority. The Diet suggested to Stephen Báthory that he dismiss them, but he only dissolved the council on 1 May 1585. He replaced the three councillors with the devout Calvinist János Ghyczy, making him regent for Sigismund.
Sigismund adopted the title of Prince of Transylvania after Stephen Báthory died on 13 December 1586. He was still a minor, and Ghyczy continued to rule as regent. Sigismund was one of the candidates to the throne of the Polish–Lithuanian Commonwealth. His advisors knew that he had little chance to win, but they wanted to demonstrate that the Báthorys had a valid claim to rule the Commonwealth. Kovacsóczy officially announced Sigismund's application at the "Sejm" (or general assembly) on 14 August 1587. Five days later, the assembly elected Sigismund III Vasa king. During the ensuing war of succession, Transylvanian troops supported Sigismund III against Maximilian of Habsburg, who had also laid claim to Poland and Lithuania.
Sigismund's cousins, Balthasar and Stephen Báthory, returned from Poland to Transylvania. Balthasar wanted to take charge of the government, making his court at Fogaras (present-day Făgăraș in Romania) the center of those who opposed Ghyczy's rule. Kovacsóczy, the chancellor of Transylvania, remained neutral in the conflict.
In October 1588 the Diet proposed to declare the sixteen-year-old Sigismund of age if he banished the Jesuits from Transylvania. He did not accept the offer, mainly because he did not want to expel his confessor, Alfonso Carillo. The Diet was dissolved, but Sigismund's cousins convinced him not to resist the Diet, which was dominated by Protestant delegates. The Diet was again summoned in late 1588; on 8 December it ordered the expulsion of the Jesuits and declared Sigismund to be of age.
Sigismund took the customary oath of the Transylvanian monarchs on 23 December 1588. Pope Sixtus V excommunicated him for the expulsion of the Jesuits. Sigismund's cousin, Cardinal Andrew Báthory, urged the pope to lift the ban, saying that the prince's Protestant advisors had forced him to throw out the priests. The pope authorized Sigismund to employ a confessor in May 1589, and the excommunication was revoked on Easter 1590.
Sigismund made several attempts to strengthen the position of the Roman Catholic Church, especially by appointing Catholics to the highest positions of state administration. Carillo and other Jesuit priests returned to Sigismund's court in disguise in early 1591. Sigismund met Andrew and Balthasar Báthory in August to seek their support for the legalization of the Jesuits' presence, but they refused to stand by the priests at the Diet.
Sigismund dispatched his favorite, István Jósika, to Tuscany to start negotiations regarding his marriage to Eleonora Orsini (a niece of Ferdinando I de' Medici), although his cousins had sharply opposed Jósika's appointment. He also invited Italian artists and artisans to his court, making them his advisors or butlers. Szamosközy described them as "the trashiest representatives of the noblest nation". The delegates of the "Three Nations" criticized Sigismund for his prodigal way of life at the Diet in Gyulafehérvár in November. To reduce his authority, the Diet prescribed that Sigismund should only make decisions in the royal council. Sigismund deprived his cousins of the allowances that the royal treasury had paid to them.
Gossip about conspiracies spread during the following months. Sándor Kendi accused Sigismund's former tutor, János Gálffy, of deliberately stirring up debates between the prince and his cousins. Other courtiers claimed that Balthasar Báthory was planning to dethrone Sigismund. A Jesuit priest was informed at Vienna that Gálffy and his allies wanted to murder the prince and his cousins. In late 1591 Sigismund stated that he was willing to renounce in favor of Balthasar if the members of the royal council favored his cousin. His offer was refused, but during the debate Kendi referred to Sigismund and Balthasar as the "two monsters and greatest disasters of the Transylvanian realm". Pope Clement VIII's legate, Attilio Amalteo, mediated a reconciliation between Sigismund and his cousins in the summer of 1592. The pope also urged Sigismund to marry a Catholic princess from the House of Lorraine.
At the demand of the sultan, Transylvanian troops assisted Aaron the Tyrant, Voivode of Moldavia. The sultan also ordered Sigismund to pay double the amount of the yearly tribute. Balthasar Báthory murdered Sigismund's secretary, Pál Gyulai, on 10 December 1592. He also persuaded Sigismund to order the execution of Gálffy on 8 March 1593. That summer, Sigismund went to Kraków in disguise to start negotiations regarding his marriage with Anna, the sister of Sigismund III of Poland. The Holy See had proposed the marriage, which could have enabled Sigismund to rule Poland during the absence of the king, who was also King of Sweden, but the plan came to nothing.
Murad III declared war against the Holy Roman Emperor, Rudolph in August. The sultan ordered Sigismund to send reinforcements to support the Ottoman army in Royal Hungary. According to diplomatic sources, the grand vizier was planning to occupy Transylvania. At the proposal of Jan Zamoyski, Chancellor of Poland, Sigismund sent envoys to Elizabeth I of England, asking her to intervene on his behalf at the Sublime Porte. She ordered her ambassador at Istanbul, Edward Barton, to support Sigismund.
Pope Clement VIII wanted to persuade Sigismund to join the Holy League that the pope had organized against the Ottoman Empire. After Rudolph's troops defeated the Ottomans in a series of battles in the autumn of 1593, Sigismund decided to join the Holy League, provided that Rudolph acknowledged the independence of Transylvania from the Hungarian Crown. However, the delegates of the Three Nations refused to declare war against the Ottoman Empire at three consecutive Diets between May and July. Sigismund abdicated, tasking Balthasar Báthory with the government in late July. Balthasar wanted to seize the throne, but Kovacsóczy, Kendi, and the other leading officials decided to set up an aristocratic council to administer Transylvania.
The commanders of the army (including Stephen Bocskai) and Friar Carillo jointly convinced Sigismund to return on 8 August. They also persuaded him to order the arrest of Kovacsóczy, Kendi, Balthasar Báthory, and twelve other noblemen who had opposed the war against the Ottomans on 28 August, accusing them of plotting. Sándor and Gábor Kendi were beheaded along with two other members of the royal council; Balthasar Báthory, Kovacsóczy, and Ferenc Kendi were strangled in prison. All but one of the murdered noblemen were Protestants, mostly Unitarians. Many of their relatives converted to Catholicism to prevent the confiscation of their estates.
Sigismund decided to join the Holy League together with Aaron the Tyrant, voivode of Moldavia, and Michael the Brave, voivode of Wallachia, on 5 October 1594. The two voivodes had started direct negotiations with the Holy See, but Sigismund, who claimed suzerainty over them, prevented them from conducting further direct negotiations. Sigismund's envoy, Stephen Bocskai, signed the document that confirmed the membership of Transylvania in the Holy League in Prague on 28 January 1595. According to the treaty, Rudolph II recognized Sigismund's hereditary right to rule Transylvania and Partium and to use the title of prince, but he also stipulated that the principality was to be re-united with the Hungarian Crown if Sigismund's family died out. The Diet of Transylvania confirmed the treaty on 16 April. The Diet also prohibited religious innovations, which gave rise to the persecution of Szekler Sabbatarians in Udvarhelyszék.
The Wallachian boyars and prelates recognized Sigismund's suzerainty over Wallachia on behalf of Michael the Brave in Gyulafehérvár on 20 May 1595. According to the treaty, Michael was forbidden to enter into an alliance with foreign powers without Sigismund's approval. The voivode's right to sentence his boyars to death was also limited. The Diet of Transylvania was authorized to impose taxes in Wallachia with a council of twelve boyars. After Aaron the Tyrant refused to sign a similar treaty, Sigismund invaded Moldavia and captured him in Iași. He made Ștefan Răzvan the new voivode on 3 June, forcing him to swear fealty to him. Thereafter, Sigismund styled himself "By the Grace of God, Prince of Transylvania, Wallachia and Moldavia, Prince of the Holy Roman Empire, Count of the Székelys and Lord of Parts of the Kingdom of Hungary".
Sigismund married Maria Christierna of Habsburg, a niece of Rudolph II, on 6 August. However, the marriage was never consummated. Sigismund accused Margaret Majláth (the mother of his executed cousin, Balthasar Báthory) of using witchcraft to cause his impotence. Historian László Nagy notes that Sigismund's contemporaries made no reference to his relationships with women, suggesting that Sigismund may have been homosexual.
György Borbély, Ban of Karánsebes, captured Lippa (now Lipova in Romania) and other Ottoman fortresses along the Maros River before the end of August. Koca Sinan Pasha broke into Wallachia, forcing Michael the Brave to retreat towards Transylvania. Michael routed the invaders in the Battle of Călugăreni, but he could not prevent them from seizing Târgoviște and Bucharest. He withdrew to Stoenești to await the arrival of the Transylvanian and Moldavian troops.
Since the Ottoman army outnumbered the forces at Sigismund's disposal, he promised the Székely commoners (who had been reduced to serfdom in the 1560s) that their freedom would be restored if they joined his campaign against the Ottomans. The Székelys accepted his offer, enabling Sigismund to launch a counter-invasion of Wallachia in early October. The united forces of Transylvania, Wallachia, and Moldavia defeated the retreating Ottoman army in the Battle of Giurgiu on 25 October. Although the victory was not decisive, the battle enabled the two voivodes to maintain their alliance with the Holy League.
Ignoring the Székely warriors' preeminent role during the war, the Diet of Transylvania refused to restore their freedom on 15 December. Sigismund left for Prague to start negotiations with Rudolph II in early January 1596, tasking his wife and Stephen Bocskai with the government. The Székelys tried to secure their freedom, but Bocskai repressed their movement with extraordinary cruelty during the "Bloody Carnival" in early 1596.
Rudolph II promised to send Sigismund reinforcements and money to continue the war against the Ottomans. Sigismund returned to Transylvania on 4 March. He laid siege to Temesvár (now Timișoara in Romania), but he lifted the siege when an Ottoman army 20,000 strong approached the fortress. The Ottoman Sultan Mehmed III invaded Royal Hungary in the summer. Sigismund joined his forces with the royal army, which was under the command of Maximilian of Habsburg. However, the Ottomans routed their united army in the Battle of Mezőkeresztes between 23 and 26 October.
Sigismund again went to Prague to meet Rudolph II and offered to abdicate in January 1597. After he returned to Transylvania, he restored the Roman Catholic bishopric in Gyulafehérvár. He sent envoys to Italy to demand the supreme command of a new Christian army, but his delegates at Istanbul started negotiations regarding a reconciliation with the sultan.
The failure of his marriage and the defeats of the Holy League diminished Sigismund's self-confidence. He sent his envoys to Rudolph II and again offered to abdicate in September 1597. An agreement regarding his abdication was signed on 23 December 1597. Rudolph II granted Sigismund the Silesian duchies of Racibórz and Opole and a yearly subsidy of 50,000 thalers. The agreement was kept secret for months.
The Diet of Transylvania acknowledged Sigismund's abdication on 23 March 1598. Maria Christierna took charge of the government until the arrival of Maximilian of Habsburg, whom Rudolph II had appointed to administer Transylvania. Sigismund went to Silesia, but he did not like his new duchies. Bocskai, who had been dismissed after Sigismund's abdication, urged him to return.
Sigismund came to Kolozsvár on 21 August. On the following day, Bocskai convoked the Diet to his military camp at Szászsebes (now Sebeș in Romania), and the delegates proclaimed Sigismund prince. Most Transylvanians accepted the decision, but György Király, the deputy captain of Várad, remained loyal to Rudolph II. In September an Ottoman army invaded the principality, capturing the fortresses along the Maros. Sigismund sent his envoys to the commander of the army, Mehmed, convincing him to attack Várad instead of breaking into Transylvania proper.
All of Sigismund's attempts to make peace with the sultan failed. He sent his envoys to Prague to negotiate with Rudolph II, while his confessor, Carillo, started negotiations with Jan Zamoyski in Poland. At Sigismund's invitation, his cousin, Andrew Báthory, returned from Poland. Sigismund abdicated at the Diet in Medgyes (now Mediaș in Romania) on 21 March 1599. Eight days later, the Diet proclaimed Andrew Báthory prince, hoping that Andrew could make peace with the Ottomans with the assistance of Poland. Sigismund left Transylvania for Poland in June. His marriage with Maria Christierna was declared invalid in Rome in August.
Andrew Báthory lost his throne and his life fighting against Michael the Brave and his Székely allies in autumn. Michael the Brave administered Transylvania as Rudolph II's governor, but his rule was unpopular among the noblemen, especially because of the pillaging raids made by his unpaid soldiers. As early as 9 February 1600 Sigismund announced that he was ready to return to Transylvania. Moses Székely, a commander-in-chief during Michael the Brave's campaign against Moldavia in May, deserted Michael and came to Poland to meet Sigismund.
The elected leader of the Transylvanian noblemen, István Csáky, sought assistance from Rudolph II's military commander, Giorgio Basta, against Michael. Basta invaded Transylvania and expelled Michael the Brave in September. Basta's unpaid soldiers regularly pillaged the principality, while Ottoman and Tatar marauders made frequent incursions across the frontiers. Sigismund returned to Transylvania across Moldavia at the head of a Polish army on 24 March 1601. The Diet proclaimed him prince in Kolozsvár on 3 April. Basta and Michael the Brave invaded Transylvania in the summer. They routed Sigismund's army in the Battle of Goroszló on 3 August 1601. After the battle, Sigismund fled to Moldavia, but he returned on 6 September.
The sultan's envoy confirmed Sigismund's position as Prince of Transylvania in Brassó (now Brașov in Romania) on 2 October. At the head of an army which also included Ottoman and Tatar soldiers, Sigismund expanded his rule over most regions of the principality, but he could not capture Kolozsvár in late November. He started new negotiations with Basta over his abdication in March 1602, because he did not trust his own supporters. He referred to them as "intoxicated and brutish sons of a bitch" and asked István Csáky to help him to leave their camp on 2 July. He left Transylvania for the last time on 26 July 1602.
Basta's soldiers accompanied Sigismund to Tokaj. Before long, he went to Prague to beg for Rudolph II's mercy. He received the "incolatus" (or the right to own lands in Bohemia) in 1604. After the Diet of Transylvania proclaimed Stephen Bocskai prince in February 1605, Rudolph tried to persuade Sigismund to return to Transylvania, but he did not accept the offer. The ambassadors of Venice and Spain and the emperor again tried to convince him to lay claim to Transylvania in July 1606, but Sigismund refused, saying that he had no information about the affairs of his former principality. In December he again met Rudolph in Prague, but still resisted the emperor's offer.
Sigismund received the domain of Libochovice in Bohemia. After one of his employees accused him of plotting against the emperor, Sigismund was imprisoned for fourteen months in the jails of Prague Castle in 1610. Sigismund died of a stroke in Libochovice on 27 March 1613. He was buried in a crypt in St. Vitus Cathedral in Prague.
Scopes Trial
The Scopes Trial, formally known as The State of Tennessee v. John Thomas Scopes and commonly referred to as the Scopes Monkey Trial, was an American legal case in July 1925 in which a high school teacher, John T. Scopes, was accused of violating Tennessee's Butler Act, which had made it unlawful to teach human evolution in any state-funded school. The trial was deliberately staged in order to attract publicity to the small town of Dayton, Tennessee, where it was held. Scopes was unsure whether he had ever actually taught evolution, but he incriminated himself purposely so the case could have a defendant.
Scopes was found guilty and fined $100, but the verdict was overturned on a technicality. The trial served its purpose of drawing intense national publicity, as national reporters flocked to Dayton to cover the big-name lawyers who had agreed to represent each side. William Jennings Bryan, three-time presidential candidate, argued for the prosecution, while Clarence Darrow, the famed defense attorney, spoke for Scopes. The trial publicized the Fundamentalist–Modernist controversy, which set Modernists, who said evolution was not inconsistent with religion, against Fundamentalists, who said the Word of God as revealed in the Bible took priority over all human knowledge. The case was thus seen both as a theological contest and as a trial on whether modern science should be taught in schools.
State Representative John W. Butler, a Tennessee farmer and head of the World Christian Fundamentals Association, lobbied state legislatures to pass anti-evolution laws. He succeeded when the Butler Act was passed in Tennessee, on March 25, 1925. Butler later stated, "I didn't know anything about evolution... I'd read in the papers that boys and girls were coming home from school and telling their fathers and mothers that the Bible was all nonsense." Tennessee governor Austin Peay signed the law to gain support among rural legislators, but believed the law would neither be enforced nor interfere with education in Tennessee schools. William Jennings Bryan thanked Peay enthusiastically for the bill: "The Christian parents of the state owe you a debt of gratitude for saving their children from the poisonous influence of an unproven hypothesis."
In response, the American Civil Liberties Union financed a test case in which John Scopes, a Tennessee high school science teacher, agreed to be tried for violating the Act. Scopes, who had substituted for the regular biology teacher, was charged on May 5, 1925, with teaching evolution from a chapter in George William Hunter's textbook, "Civic Biology: Presented in Problems" (1914), which described the theory of evolution, race, and eugenics. The two sides brought in the biggest legal names in the nation, William Jennings Bryan for the prosecution and Clarence Darrow for the defense, and the trial was followed on radio transmissions throughout the United States.
The American Civil Liberties Union (ACLU) offered to defend anyone accused of teaching the theory of evolution in defiance of the Butler Act. On April 5, 1925, George Rappleyea, local manager for the Cumberland Coal and Iron Company, arranged a meeting with county superintendent of schools Walter White and local attorney Sue K. Hicks at Robinson's Drug Store, convincing them that the controversy of such a trial would give Dayton much needed publicity. According to Robinson, Rappleyea said, "As it is, the law is not enforced. If you win, it will be enforced. If I win, the law will be repealed. We're game, aren't we?" The men then summoned 24-year-old John T. Scopes, a Dayton high school science and math teacher. The group asked Scopes to admit to teaching the theory of evolution.
Rappleyea pointed out that, while the Butler Act prohibited the teaching of the theory of evolution, the state required teachers to use a textbook that explicitly described and endorsed the theory of evolution, and that teachers were, therefore, effectively required to break the law. Scopes mentioned that he could not remember whether he had actually taught evolution in class, but that he had gone through the evolution chart and chapter with the class. Scopes added to the group: "If you can prove that I've taught evolution and that I can qualify as a defendant, then I'll be willing to stand trial."
Scopes urged students to testify against him and coached them in their answers. He was indicted on May 25, after three students testified against him at the grand jury; one student afterwards told reporters, "I believe in part of evolution, but I don't believe in the monkey business." Judge John T. Raulston accelerated the convening of the grand jury and "...all but instructed the grand jury to indict Scopes, despite the meager evidence against him and the widely reported stories questioning whether the willing defendant had ever taught evolution in the classroom". Scopes was charged with having taught from the chapter on evolution to a high-school class in violation of the Butler Act and nominally arrested, though he was never actually detained. Paul Patterson, owner of "The Baltimore Sun", put up $500 in bail for Scopes.
The original prosecutors were Herbert E. and Sue K. Hicks, two brothers who were local attorneys and friends of Scopes, but the prosecution was ultimately led by Tom Stewart, a graduate of Cumberland School of Law, who later became a U.S. Senator. Stewart was aided by Dayton attorney Gordon McKenzie, who supported the anti-evolution bill on religious grounds, and described evolution as "detrimental to our morality" and an assault on "the very citadel of our Christian religion".
Hoping to attract major press coverage, George Rappleyea went so far as to write to the British novelist H. G. Wells asking him to join the defense team. Wells replied that he had no legal training in Britain, let alone in America, and declined the offer. John R. Neal, a law school professor from Knoxville, announced that he would act as Scopes' attorney whether Scopes liked it or not, and he became the nominal head of the defense team.
Baptist pastor William Bell Riley, the founder and president of the World Christian Fundamentals Association, was instrumental in calling lawyer and three-time Democratic presidential nominee, former United States Secretary of State, and lifelong Presbyterian William Jennings Bryan to act as that organization's counsel. Bryan had originally been invited by Sue Hicks to become an associate of the prosecution and Bryan had readily accepted, despite the fact he had not tried a case in thirty-six years. As Scopes pointed out to James Presley in the book "Center of the Storm", on which the two collaborated: "After [Bryan] was accepted by the state as a special prosecutor in the case, there was never any hope of containing the controversy within the bounds of constitutionality."
In response, the defense sought out Clarence Darrow, an agnostic. Darrow originally declined, fearing his presence would create a circus atmosphere, but eventually realized that the trial would be a circus with or without him, and agreed to lend his services to the defense, later saying he "realized there was no limit to the mischief that might be accomplished unless the country was aroused to the evil at hand". After many changes back and forth, the defense team consisted of Darrow, ACLU attorney Arthur Garfield Hays, Dudley Field Malone, an international divorce lawyer who had worked at the State Department, W.O. Thompson, who was Darrow's law partner, and F.B. McElwee. The defense was also assisted by librarian and Biblical authority Charles Francis Potter, who was a Modernist Unitarian preacher.
The prosecution team was led by Tom Stewart, district attorney for the 18th Circuit (and future United States Senator), and included, in addition to Herbert and Sue Hicks, Ben B. McKenzie and William Jennings Bryan.
The trial was covered by famous journalists from the South and around the world, including H. L. Mencken for "The Baltimore Sun", which was also paying part of the defense's expenses. It was Mencken who provided the trial with its most colorful labels such as the "Monkey Trial" of "the infidel Scopes". It was also the first United States trial to be broadcast on national radio.
The ACLU had originally intended to oppose the Butler Act on the grounds that it violated the teacher's individual rights and academic freedom, and was therefore unconstitutional. Principally because of Clarence Darrow, this strategy changed as the trial progressed. The earliest argument proposed by the defense once the trial had begun was that there was actually no conflict between evolution and the creation account in the Bible; later, this viewpoint would be called theistic evolution. In support of this claim, they brought in eight experts on evolution. But other than Dr. Maynard Metcalf, a zoologist from Johns Hopkins University, the judge would not allow these experts to testify in person. Instead, they were allowed to submit written statements so their evidence could be used at the appeal. In response to this decision, Darrow made a sarcastic comment to Judge Raulston (as he often did throughout the trial), suggesting that the judge had ruled favorably only on the prosecution's motions. Darrow apologized the next day, and so avoided being found in contempt of court.
The presiding judge, John T. Raulston, was accused of being biased towards the prosecution and frequently clashed with Darrow. At the outset of the trial, Raulston quoted Genesis and the Butler Act. He also instructed the jury not to judge the merit of the law (which would become the focus of the trial) but only whether the Act had been violated, which he called a 'high misdemeanor'. The jury foreman himself was unconvinced of the merit of the Act but he acted, as did most of the jury, on the instructions of the judge.
Bryan chastised evolution for teaching children that humans were but one of 35,000 types of mammals and bemoaned the notion that human beings were descended "Not even from American monkeys, but from old world monkeys".
Malone responded for the defense in a speech that was universally considered the oratorical climax of the trial. Arousing fears of "inquisitions", Malone argued that the Bible should be preserved in the realm of theology and morality and not put into a course of science. In his conclusion, Malone declared that Bryan's "duel to the death" against evolution should not be made one-sided by a court ruling that took away the chief witnesses for the defense. Malone promised there would be no duel because "there is never a duel with the truth." The courtroom went wild when Malone finished; Scopes declared Malone's speech to be the dramatic high point of the entire trial and insisted that part of the reason Bryan wanted to go on the stand was to regain some of his tarnished glory.
On the sixth day of the trial, the defense ran out of witnesses. The judge declared that all the defense testimony on the Bible was irrelevant and should not be presented to the jury (which had been excluded during the defense testimony). On the seventh day of the trial, the defense asked the judge to call Bryan as a witness to question him on the Bible, as their own experts had been rendered irrelevant; Darrow had planned this the day before and called Bryan a "Bible expert". This move surprised those present in the court, as Bryan was a counsel for the prosecution, and Bryan himself (according to a journalist reporting the trial) never claimed to be an expert, although he did tout his knowledge of the Bible. The testimony revolved around several questions regarding Biblical stories and Bryan's beliefs (as shown below); it culminated in Bryan declaring that Darrow was using the court to "slur the Bible", while Darrow replied that Bryan's statements on the Bible were "foolish".
On the seventh day of the trial, Clarence Darrow took the unorthodox step of calling William Jennings Bryan, counsel for the prosecution, to the stand as a witness in an effort to demonstrate that belief in the historicity of the Bible and its many accounts of miracles was unreasonable. Bryan accepted, on the understanding that Darrow would in turn submit to questioning by Bryan. Although Hays would claim in his autobiography that the examination of Bryan was unplanned, Darrow spent the night before in preparation. The scientists the defense had brought to Dayton—and Charles Francis Potter, a modernist minister who had engaged in a series of public debates on evolution with the fundamentalist preacher John Roach Straton—prepared topics and questions for Darrow to address to Bryan on the witness stand. Kirtley Mather, chairman of the geology department at Harvard and also a devout Baptist, played Bryan and answered questions as he believed Bryan would. Raulston had moved the proceedings to a platform on the courthouse lawn, ostensibly because he was "afraid of the building" with so many spectators crammed into the courtroom, but probably because of the stifling heat.
An area of questioning involved the book of Genesis, including questions about whether Eve was actually created from Adam's rib, where Cain got his wife, and how many people lived in Ancient Egypt. Darrow used these examples to suggest that the stories of the Bible could not be scientific and should not be used in teaching science with Darrow telling Bryan, "You insult every man of science and learning in the world because he does not believe in your fool religion." Bryan's declaration in response was, "The reason I am answering is not for the benefit of the superior court. It is to keep these gentlemen from saying I was afraid to meet them and let them question me, and I want the Christian world to know that any atheist, agnostic, unbeliever, can question me anytime as to my belief in God, and I will answer him."
Stewart objected for the prosecution, demanding to know the legal purpose of Darrow's questioning. Bryan, gauging the effect the session was having, snapped that its purpose was "to cast ridicule on everybody who believes in the Bible". Darrow, with equal vehemence, retorted, "We have the purpose of preventing bigots and ignoramuses from controlling the education of the United States."
A few more questions followed in the charged open-air courtroom. Darrow asked where Cain got his wife; Bryan answered that he would "leave the agnostics to hunt for her". When Darrow addressed the issue of the temptation of Eve by the serpent, Bryan insisted that the Bible be quoted verbatim rather than allowing Darrow to paraphrase it in his own terms. However, after another angry exchange, Judge Raulston banged his gavel, adjourning the court.
The confrontation between Bryan and Darrow lasted approximately two hours on the afternoon of the seventh day of the trial. It is likely that it would have continued the following morning but for Judge Raulston's announcement that he considered the whole examination irrelevant to the case and his decision that it should be "expunged" from the record. Thus Bryan was denied the chance to cross-examine the defense lawyers in return, although after the trial Bryan would distribute nine questions to the press to bring out Darrow's "religious attitude". The questions and Darrow's short answers were published in newspapers the day after the trial ended, with "The New York Times" characterizing Darrow as answering Bryan's questions "with his agnostic's creed, 'I don't know,' except where he could deny them with his belief in natural, immutable law".
After the defense's final attempt to present evidence was denied, Darrow asked the judge to bring in the jury only to have them come to a guilty verdict:
We claim that the defendant is not guilty, but as the court has excluded any testimony, except as to the one issue as to whether he taught that man descended from a lower order of animals, and we cannot contradict that testimony, there is no logical thing to come except that the jury find a verdict that we may carry to the higher court, purely as a matter of proper procedure. We do not think it is fair to the court or counsel on the other side to waste a lot of time when we know this is the inevitable result and probably the best result for the case.
After they were brought in, Darrow then addressed the jury:
We came down here to offer evidence in this case and the court has held under the law that the evidence we had is not admissible, so all we can do is to take an exception and carry it to a higher court to see whether the evidence is admissible or not... we cannot even explain to you that we think you should return a verdict of not guilty. We do not see how you could. We do not ask it.
Darrow closed the case for the defense without a final summation. Under Tennessee law, when the defense waived its right to make a closing speech, the prosecution was also barred from summing up its case, preventing Bryan from presenting his prepared summation.
Scopes never testified since there was never a factual issue as to whether he had taught evolution. Scopes later admitted that, in reality, he was unsure of whether he had taught evolution (another reason the defense did not want him to testify), but the point was not contested at the trial.
William Jennings Bryan's summation of the Scopes trial (distributed to reporters but not read in court):
Science is a magnificent force, but it is not a teacher of morals. It can perfect machinery, but it adds no moral restraints to protect society from the misuse of the machine. It can also build gigantic intellectual ships, but it constructs no moral rudders for the control of storm-tossed human vessel. It not only fails to supply the spiritual element needed but some of its unproven hypotheses rob the ship of its compass and thus endanger its cargo. In war, science has proven itself an evil genius; it has made war more terrible than it ever was before. Man used to be content to slaughter his fellowmen on a single plane, the earth's surface. Science has taught him to go down into the water and shoot up from below and to go up into the clouds and shoot down from above, thus making the battlefield three times as bloody as it was before; but science does not teach brotherly love. Science has made war so hellish that civilization was about to commit suicide; and now we are told that newly discovered instruments of destruction will make the cruelties of the late war seem trivial in comparison with the cruelties of wars that may come in the future. If civilization is to be saved from the wreckage threatened by intelligence not consecrated by love, it must be saved by the moral code of the meek and lowly Nazarene. His teachings, and His teachings alone, can solve the problems that vex the heart and perplex the world.
After eight days of trial, it took the jury only nine minutes to deliberate. Scopes was found guilty on July 21 and ordered by Raulston to pay a $100 fine. Raulston imposed the fine before Scopes was given an opportunity to say anything about why the court should not impose punishment upon him, and after Neal brought the error to the judge's attention the defendant spoke for the first and only time in court:
Your honor, I feel that I have been convicted of violating an unjust statute. I will continue in the future, as I have in the past, to oppose this law in any way I can. Any other action would be in violation of my ideal of academic freedom—that is, to teach the truth as guaranteed in our constitution, of personal and religious freedom. I think the fine is unjust.
Bryan died suddenly five days after the trial's conclusion. The connection between the trial and his death is still debated by historians.
Scopes' lawyers appealed, challenging the conviction on several grounds. First, they argued that the statute was overly vague because it prohibited the teaching of "evolution", a very broad term. The court rejected that argument, holding:
Evolution, like prohibition, is a broad term. In recent bickering, however, evolution has been understood to mean the theory which holds that man has developed from some pre-existing lower type. This is the popular significance of evolution, just as the popular significance of prohibition is prohibition of the traffic in intoxicating liquors. It was in that sense that evolution was used in this act. It is in this sense that the word will be used in this opinion, unless the context otherwise indicates. It is only to the theory of the evolution of man from a lower type that the act before us was intended to apply, and much of the discussion we have heard is beside this case.
Second, the lawyers argued that the statute violated Scopes' constitutional right to free speech because it prohibited him from teaching evolution. The court rejected this argument, holding that the state was permitted to regulate his speech as an employee of the state:
He was an employee of the state of Tennessee or of a municipal agency of the state. He was under contract with the state to work in an institution of the state. He had no right or privilege to serve the state except upon such terms as the state prescribed. His liberty, his privilege, his immunity to teach and proclaim the theory of evolution, elsewhere than in the service of the state, was in no wise touched by this law.
Third, it was argued that the terms of the Butler Act violated the Tennessee State Constitution, which provided that "It shall be the duty of the General Assembly in all future periods of this government, to cherish literature and science." The argument was that the theory of the descent of man from a lower order of animals was now established by the preponderance of scientific thought, and that the prohibition of the teaching of such theory was a violation of the legislative duty to cherish science. The court rejected this argument, holding that the determination of what laws cherished science was an issue for the legislature, not the judiciary:
The courts cannot sit in judgment on such acts of the Legislature or its agents and determine whether or not the omission or addition of a particular course of study tends to cherish science.
Fourth, the defense lawyers argued that the statute violated the provisions of the Tennessee Constitution that prohibited the establishment of a state religion. The Religious Preference provisions of the Tennessee Constitution (Section 3 of Article I) stated, "no preference shall ever be given, by law, to any religious establishment or mode of worship".
Writing for the court, Chief Justice Grafton Green rejected this argument, holding that the Tennessee Religious Preference clause was designed to prevent the establishment of a state religion as had been the experience in England and Scotland at the writing of the Constitution, and held:
We are not able to see how the prohibition of teaching the theory that man has descended from a lower order of animals gives preference to any religious establishment or mode of worship. So far as we know, there is no religious establishment or organized body that has in its creed or confession of faith any article denying or affirming such a theory. So far as we know, the denial or affirmation of such a theory does not enter into any recognized mode of worship. Since this cause has been pending in this court, we have been favored, in addition to briefs of counsel and various amici curiae, with a multitude of resolutions, addresses, and communications from scientific bodies, religious factions, and individuals giving us the benefit of their views upon the theory of evolution. Examination of these contributions indicates that Protestants, Catholics, and Jews are divided among themselves in their beliefs, and that there is no unanimity among the members of any religious establishment as to this subject. Belief or unbelief in the theory of evolution is no more a characteristic of any religious establishment or mode of worship than is belief or unbelief in the wisdom of the prohibition laws. It would appear that members of the same churches quite generally disagree as to these things.
Further, the court held that while the statute "forbade" the teaching of evolution (as the court had defined it) it did not "require" teaching any other doctrine and thus did not benefit any one religious doctrine or sect over others.
Nevertheless, having found the statute to be constitutional, the court set aside the conviction on appeal because of a legal technicality: the jury should have decided the fine, not the judge, since under the state constitution, Tennessee judges could not at that time set fines above $50, and the Butler Act specified a minimum fine of $100.
Justice Green added a totally unexpected recommendation:
The court is informed that the plaintiff in error is no longer in the service of the state. We see nothing to be gained by prolonging the life of this bizarre case. On the contrary, we think that the peace and dignity of the state, which all criminal prosecutions are brought to redress, will be the better conserved by the entry of a "nolle prosequi" herein. Such a course is suggested to the Attorney General.
Attorney General L. D. Smith immediately announced that he would not seek a retrial, while Scopes' lawyers offered angry comments on the stunning decision.
In 1968, the Supreme Court of the United States ruled in "Epperson v. Arkansas" 393 U.S. 97 (1968) that such bans contravene the Establishment Clause of the First Amendment because their primary purpose is religious. Tennessee had repealed the Butler Act the previous year.
The trial revealed a growing chasm in American Christianity between two ways of finding truth, one "biblical" and one "evolutionist". Author David Goetz writes that the majority of Christians denounced evolution at the time.
Author Mark Edwards contests the conventional view that in the wake of the Scopes trial, a humiliated fundamentalism retreated into the political and cultural background, a viewpoint evidenced in the film "Inherit the Wind" (1960) and the majority of contemporary historical accounts. Rather, the cause of fundamentalism's retreat was the death of its leader, Bryan. Most fundamentalists saw the trial as a victory and not a defeat, but Bryan's death soon after created a leadership void that no other fundamentalist leader could fill. Bryan, unlike the other leaders, brought name recognition, respectability, and the ability to forge a broad-based coalition of fundamentalist and mainline religious groups to argue for the anti-evolutionist position.
Adam Shapiro criticized the view that the Scopes trial was an essential and inevitable conflict between religion and science, claiming that such a view was "self-justifying". Shapiro instead emphasizes that the Scopes trial was the result of particular circumstances, such as politics postponing adoption of new textbooks.
The trial escalated the political and legal conflict in which strict creationists and scientists struggled over the teaching of evolution in Arizona and California science classes. Before the Dayton trial only the South Carolina, Oklahoma, and Kentucky legislatures had dealt with anti-evolution laws or riders to educational appropriations bills.
After Scopes was convicted, creationists throughout the United States sought similar anti-evolution laws for their states.
By 1927, there were 13 states, both in the North and in the South, that had deliberated over some form of anti-evolution law. At least 41 bills or resolutions were introduced into the state legislatures, with some states facing the issue repeatedly. Nearly all these efforts were rejected, but Mississippi and Arkansas did put anti-evolution laws on the books after the Scopes trial, laws that would outlive the Butler Act (which survived until 1967).
In the Southwest, anti-evolution crusaders included ministers R. S. Beal and Aubrey L. Moore in Arizona and members of the Creation Research Society in California. They sought to ban evolution as a topic for study in the schools or, failing that, to relegate it to the status of unproven hypothesis perhaps taught alongside the biblical version of creation. Educators, scientists, and other distinguished laymen favored evolution. This struggle occurred later in the Southwest than elsewhere, finally collapsing in the Sputnik era after 1957, when the national mood inspired increased trust for science in general and for evolution in particular.
The opponents of evolution made a transition from the anti-evolution crusade of the 1920s to the creation science movement of the 1960s. Despite some similarities between these two causes, the creation science movement represented a shift from overtly religious to covertly religious objections to evolutionary theory—sometimes described as a Wedge Strategy—raising what it claimed was scientific evidence in support of a literal interpretation of the Bible. Creation science also differed in terms of popular leadership, rhetorical tone, and sectional focus. It lacked a prestigious leader like Bryan, utilized pseudoscientific rather than religious rhetoric, and was a product of California and Michigan instead of the South.
The Scopes trial had both short- and long-term effects on the teaching of science in schools in the United States. Though often portrayed as influencing public opinion against fundamentalism, the victory was not complete. Though the ACLU had taken on the trial as a cause, in the wake of Scopes' conviction they were unable to find more volunteers to challenge the Butler Act and, by 1932, had given up. The anti-evolutionary legislation was not challenged again until 1965, and in the meantime, William Jennings Bryan's cause was taken up by a number of organizations, including the Bryan Bible League and the Defenders of the Christian Faith.
The effects of the Scopes Trial on high school biology texts have not been unanimously agreed upon by scholars. Of the most widely used textbooks after the trial, only one included the word "evolution" in its index; the relevant page includes biblical quotations. Some scholars have accepted that this was the result of the Scopes Trial: for example, Hunter, the author of the biology text Scopes was on trial for teaching, revised the text by 1926 in response to the controversy. However, George Gaylord Simpson challenged this notion as confusing cause and effect, and instead posited that the trend of anti-evolution movements and laws that provoked the Scopes Trial was also to blame for the removal of evolution from biology texts, and that the trial itself had little effect. The fundamentalists' target slowly veered off evolution in the mid-1930s. Miller and Grabiner suggest that as the anti-evolutionist movement died out, biology textbooks began to include the previously removed evolutionary theory. This also corresponds to the emerging demand that science textbooks be written by scientists rather than educators or education specialists.
This account of history has also been challenged. In "Trying Biology", Adam Shapiro examines many of the eminent biology textbooks of the 1910s–1920s and finds that, while they may have avoided the word "evolution" to placate anti-evolutionists, the overall focus on the subject was not greatly diminished, and the books were still implicitly evolution based. It has also been suggested that the narrative of evolution being removed from textbooks due to religious pressure, only to be reinstated decades later, was an example of "Whig history" propagated by the Biological Sciences Curriculum Study, and that the shift in the ways biology textbooks discussed evolution can be attributed to other race- and class-based factors.
In 1958 the National Defense Education Act was passed with the encouragement of many legislators who feared the United States education system was falling behind that of the Soviet Union. The act yielded textbooks, produced in cooperation with the American Institute of Biological Sciences, which stressed the importance of evolution as the unifying principle of biology. The new educational regime was not unchallenged. The greatest backlash was in Texas where attacks were launched in sermons and in the press. Complaints were lodged with the State Textbook Commission. However, in addition to federal support, a number of social trends had turned public discussion in favor of evolution. These included increased interest in improving public education, legal precedents separating religion and public education, and continued urbanization in the South. This led to a weakening of the backlash in Texas, as well as to the repeal of the Butler Law in Tennessee in 1967.
Edward J. Larson, a historian who won the Pulitzer Prize for History for his book "Summer for the Gods: The Scopes Trial and America's Continuing Debate Over Science and Religion", notes: "Like so many archetypal American events, the trial itself began as a publicity stunt." The press coverage of the "Monkey Trial" was overwhelming. The front pages of newspapers like "The New York Times" were dominated by the case for days. More than 200 newspaper reporters from all parts of the country and two from London were in Dayton. Twenty-two telegraphers sent out 165,000 words per day on the trial, over thousands of miles of telegraph wires hung for the purpose; more words were transmitted to Britain about the Scopes trial than for any previous American event. Trained chimpanzees performed on the courthouse lawn. Chicago's WGN radio station broadcast the trial with announcer Quin Ryan via clear-channel broadcasting, the first on-the-scene radio coverage of a criminal trial. Two movie cameramen had their film flown out daily in a small plane from a specially prepared airstrip.
H.L. Mencken's trial reports were heavily slanted against the prosecution and the jury, which were "unanimously hot for Genesis". He mocked the town's inhabitants as "yokels" and "morons". He called Bryan a "buffoon" and his speeches "theologic bilge". In contrast, he called the defense "eloquent" and "magnificent". Even today, some American creationists, fighting in courts and state legislatures to demand that creationism be taught on an equal footing with evolution in the schools, have claimed that it was Mencken's trial reports in 1925 that turned public opinion against creationism. The media's portrayal of Darrow's cross-examination of Bryan, and the play and movie "Inherit the Wind" (1960), caused millions of Americans to ridicule religious-based opposition to the theory of evolution.
The trial also brought publicity to the town of Dayton, Tennessee, and was hatched as a publicity stunt. From "The Salem Republican", June 11, 1925:
The whole matter has assumed the portion of Dayton and her merchants endeavoring to secure a large amount of notoriety and publicity with an open question as whether Scopes is a party to the plot or not.
In a $1 million restoration of the Rhea County Courthouse in Dayton, completed in 1979, the second-floor courtroom was restored to its appearance during the Scopes trial. A museum of trial events in its basement contains such memorabilia as the microphone used to broadcast the trial, trial records, photographs, and an audiovisual history. Every July, local people re-enact key moments of the trial in the courtroom. In front of the courthouse stands a commemorative plaque erected by the Tennessee Historical Commission, reading as follows:
2B 23 THE SCOPES TRIAL
Here, from July 10 to 21, 1925, John Thomas Scopes, a County High School teacher, was tried for teaching that a man descended from a lower order of animals in violation of a lately passed state law. William Jennings Bryan assisted the prosecution; Clarence Darrow, Arthur Garfield Hays, and Dudley Field Malone the defense. Scopes was convicted.
The Rhea County Courthouse was designated a National Historic Landmark by the National Park Service in 1976. It was placed on the National Register of Historic Places in 1972.
Anticipating that Scopes would be found guilty, the press fitted the defendant for martyrdom and created an onslaught of ridicule, and hosts of cartoonists added their own portrayals to the attack. For example:
Overwhelmingly, the butt of these jokes was the prosecution and those aligned with it: Bryan, the city of Dayton, the state of Tennessee, and the entire South, as well as fundamentalist Christians and anti-evolutionists. Rare exceptions were found in the Southern press, where the fact that Darrow had saved Leopold and Loeb from the death penalty continued to be a source of ugly humor. The most widespread form of this ridicule was directed at the inhabitants of Tennessee. "Life" described Tennessee as "not up to date in its attitude to such things as evolution". "Time" magazine related Bryan's arrival in town with the disparaging comment, "The populace, Bryan's to a moron, yowled a welcome."
Attacks on Bryan were frequent and acidic: "Life" awarded him its "Brass Medal of the Fourth Class" for having "successfully demonstrated by the alchemy of ignorance hot air may be transmuted into gold, and that the Bible is infallibly inspired except where it differs with him on the question of wine, women, and wealth".
Famously vituperative attacks came from journalist H. L. Mencken, whose syndicated columns from Dayton for "The Baltimore Sun" drew vivid caricatures of the "backward" local populace, referring to the people of Rhea County as "Babbits", "morons", "peasants", "hill-billies", "yaps", and "yokels". He chastised the "degraded nonsense which country preachers are ramming and hammering into yokel skulls". However, Mencken did enjoy certain aspects of Dayton, writing,
The town, I confess, greatly surprised me. I expected to find a squalid Southern village, with darkies snoozing on the horse-blocks, pigs rooting under the houses and the inhabitants full of hookworm and malaria. What I found was a country town full of charm and even beauty—a somewhat smallish but nevertheless very attractive Westminster or Belair.
He described Rhea County as priding itself on a kind of tolerance or what he called "lack of Christian heat", opposed to outside ideas but without hating those who held them. He pointed out, "The Klan has never got a foothold here, though it rages everywhere else in Tennessee." Mencken attempted to perpetrate a hoax, distributing flyers for the "Rev. Elmer Chubb", but the claims that Chubb would drink poison and preach in lost languages were ignored as commonplace by the people of Dayton, and only "Commonweal" magazine bit. Mencken continued to attack Bryan, including in his famously withering obituary of Bryan, "In Memoriam: W.J.B.", in which he charged Bryan with "insincerity"—not for his religious beliefs but for the inconsistent and contradictory positions he took on a number of political questions during his career. Years later, Mencken did question whether dismissing Bryan "as a quack pure and unadulterated" was "really just". Mencken's columns made the Dayton citizens irate and drew general indignation from the Southern press. After Raulston ruled against the admission of scientific testimony, Mencken left Dayton, declaring in his last dispatch, "All that remains of the great cause of the State of Tennessee against the infidel Scopes is the formal business of bumping off the defendant." Consequently, the journalist missed Darrow's cross-examination of Bryan on Monday.
Stephen Báthory
Stephen Báthory (27 September 1533 – 12 December 1586) was Voivode of Transylvania (1571–1576), Prince of Transylvania (1576–1586), and, from 1576, Queen Anna Jagiellon's husband and "jure uxoris" King of Poland and Grand Duke of Lithuania (1576–1586).
The son of Stephen VIII Báthory and a member of the Hungarian Báthory noble family, Báthory was a ruler of Transylvania in the 1570s, defeating another challenger for that title, Gáspár Bekes. In 1576 Báthory became the third elected king of Poland. He worked closely with chancellor Jan Zamoyski. The first years of his reign were focused on establishing power, defeating a fellow claimant to the throne, Maximilian II, Holy Roman Emperor, and quelling rebellions, most notably, the Danzig rebellion. He reigned only a decade, but is considered one of the most successful kings in Polish history, particularly in the realm of military history. His signal achievement was his victorious campaign in Livonia against Russia in the middle part of his reign, in which he repulsed a Russian invasion of Commonwealth borderlands and secured a highly favorable treaty of peace (the Peace of Jam Zapolski).
Stephen Báthory was born on 27 September 1533 in the castle at Somlyó, also known as Szilágysomlyó (today's Șimleu Silvaniei). He was the son of Stephen VIII Báthory (d. 1534) of the noble Hungarian Báthory family and his wife Catherine Telegdi. He had at least five siblings: two brothers and three sisters.
Little is known about his childhood. Around 1549–1550, he briefly visited Italy and probably spent a few months attending lectures at the Padua University. Upon his return, he joined the army of Ferdinand I, Holy Roman Emperor, and took part in his military struggle against the Turks. Some time after 1553, Báthory was captured by the Turks, and after Ferdinand I refused to pay his ransom, joined the opposing side, supporting John II Sigismund Zápolya in his struggle for power in the Eastern Hungarian Kingdom. As Zápolya's supporter, Báthory acted as a feudal lord, military commander and diplomat. During one of his trips to Vienna he was put under house arrest for two years. During this time he fell out of favour at Zápolya's court, and his position was largely assumed by another Hungarian noble, Gáspár Bekes. Báthory briefly retired from politics, but he still wielded considerable influence and was seen as a possible successor to Zápolya.
After Zápolya's death in 1571, the Transylvanian estates elected Báthory Voivode of Transylvania. Bekes, supported by the Habsburgs, disputed his election, but by 1573, Báthory emerged victorious in the resulting civil war and drove Bekes out of Transylvania. He subsequently sought to play the Ottomans and the Holy Roman Empire against one another to strengthen Transylvania's position.
In 1572, the throne of the Polish–Lithuanian Commonwealth, at the time the largest and one of the most populous states in Europe, was vacated when King Sigismund II Augustus died without heirs. The Sejm was given the power to elect a new king, and in the 1573 Polish–Lithuanian royal election chose Henry of France; Henry soon ascended the French throne and forfeited the Polish one by returning to France. Báthory decided to enter into the election; in the meantime he had to defeat another attempt by Bekes to challenge his authority in Transylvania, which he did by defeating Bekes at the Battle of Kerelőszentpál.
On 12 December 1575, after an interregnum of roughly one and a half years, primate of Poland Jakub Uchański, representing a pro-Habsburg faction, declared Emperor Maximilian II as the new monarch. However, chancellor Jan Zamoyski and other opponents of Habsburgs persuaded many of the lesser nobility to demand a "Piast king", a Polish king. After a heated discussion, it was decided that Anna Jagiellon, sister of the former King Sigismund II Augustus, should be elected king and marry Stephen Báthory. In January 1576 Báthory passed the mantle of voivode of Transylvania to his brother Christopher Báthory and departed for Poland. On 1 May 1576 Báthory married Anna and was crowned king of Poland and grand duke of Lithuania. After being chosen as king in the 1576 Polish–Lithuanian royal election, Báthory also began using the title of the prince of Transylvania.
Báthory's position was at first extremely difficult, as there was still some opposition to his election. Emperor Maximilian, insisting on his earlier election, fostered internal opposition and prepared to enforce his claim by military action. At first the representatives of the Grand Duchy of Lithuania refused to recognize Báthory as grand duke, and demanded concessions - that he return the estates of his wife Anne to the Lithuanian treasury, hold Sejm conventions in both Lithuania and Poland, and reserve the highest governmental official offices in Lithuania for Lithuanians. He accepted the conditions. In June Báthory was recognized as Grand Duke of Lithuania, Duke of Ruthenia and Samogitia.
With Lithuania secure, the other major region refusing to recognize his election was Prussia. Maximilian's sudden death improved Báthory's situation, but the city of Danzig (Gdańsk) still refused to recognize his election without significant concessions. The Hanseatic League city, bolstered by its immense wealth, fortifications, and the secret support of Maximilian, had supported the Emperor's election and decided not to recognize Báthory as legitimate ruler. The resulting conflict became known as the Danzig rebellion. The Danzig army was utterly defeated in a field battle on 17 April 1577, but since Báthory's armies were unable to take the city itself, the prolonged Siege of Danzig was eventually lifted and a compromise was reached: in exchange for some of Danzig's demands being favorably reviewed, the city recognised Báthory as ruler of Poland and paid the sum of 200,000 zlotys in gold as compensation. Tying up the administration of the Commonwealth's northern provinces, in February 1578 he acknowledged George Frederick as the ruler of the Duchy of Prussia, receiving his feudal tribute.
After securing control over the Commonwealth, Báthory had a chance to devote himself to strengthening his authority, in which he was supported by his chancellor Jan Zamoyski, who would soon become one of the king's most trusted advisers. Báthory reorganised the judiciary by formation of legal tribunals (the Crown Tribunal in 1578 and the Lithuanian Tribunal in 1581). While this somewhat weakened the royal position, it was of little concern to Báthory, as the loss of power was not significant in the short term, and he was more concerned with the hereditary Hungarian throne. In exchange, the Sejm allowed him to raise taxes and push a number of reforms strengthening the military, including the establishment of the "piechota wybraniecka", an infantry formation composed of peasants. Many of his projects aimed to modernize the Commonwealth army, reforming it in a model of Hungarian troops of Transylvania. He also founded the Academy of Vilnius, the third university in the Commonwealth, transforming what had been a Jesuit college into a major university. He founded several other Jesuit colleges, and was active in propagating Catholicism, while at the same time being respectful of the Commonwealth policy of religious tolerance, issuing a number of decrees offering protection to Polish Jews, and denouncing any religious violence.
In external relations, Báthory sought peace through strong alliances. Though remaining distrustful of the Habsburgs, he maintained the tradition of good relations that the Commonwealth enjoyed with its Western neighbor and confirmed past treaties between the Commonwealth and Holy Roman Empire with diplomatic missions received by Maximilian's successor, Rudolf II. The troublesome south-eastern border with the Ottoman Empire was temporarily quelled by truces signed in July 1577 and April 1579. The Sejm of January 1578 gathered in Warsaw was persuaded to grant Báthory subsidies for the inevitable war against Muscovy.
A number of his trusted advisers were Hungarian, and he remained interested in Hungarian politics. Báthory wished to recreate his native country as an independent, strong power, but the unfavorable international situation did not allow him to significantly advance any of his plans in that area. In addition to Hungarian, he was well versed in Latin, and spoke Italian and German; he never learned the Polish language.
In his personal life, he was described as rather frugal in his personal expenditures, with hunting and reading as his favorite pastimes.
Before Báthory's election to the throne of the Commonwealth, Ivan the Terrible of Russia had begun encroaching on its sphere of interest in the northeast, eventually invading the Commonwealth borderlands in Livonia; the conflict would grow to involve a number of nearby powers (besides Russia and Poland-Lithuania, also Sweden, the Kingdom of Livonia and Denmark-Norway). Each of them was vying for control of Livonia, and the resulting conflict, lasting for several years, became known as the Livonian War. By 1577, Ivan was in control of most of the disputed territory, but his conquest was short-lived. In 1578, Commonwealth forces scored a number of victories in Livonia and began pushing Ivan's forces back; this marked the turning point in the war. Báthory, together with his chancellor Zamoyski, led the army of the Commonwealth in a series of decisive campaigns, taking Polotsk in 1579 and Velikiye Luki in 1580.
In 1581, Stephen penetrated once again into Russia and, on 22 August, laid siege to the city of Pskov. While the city held, on 13 December 1581 Ivan the Terrible began negotiations that concluded with the Truce of Jam Zapolski on 15 January 1582. The treaty was favorable to the Commonwealth, as Ivan ceded Polatsk, Veliz and most of the Duchy of Livonia in exchange for regaining Velikiye Luki and Nevel.
In 1584, Báthory allowed Zamoyski to execute Samuel Zborowski, whose death sentence for treason and murder had been pending for roughly a decade. This political conflict between Báthory and the Zborowski family, framed as a clash between the monarch and the nobility, would be a major recurring controversy in internal Polish politics for many years. In external politics, Báthory was considering another war with Russia, but his plans were delayed due to the lack of support from the Sejm, which refused to pass the requested tax increases.
Báthory's health had been declining for several years. He died on 12 December 1586. He had no legitimate children, though contemporary rumours suggested he might have had several illegitimate children. None of these rumours have been confirmed by modern historians. His death was followed by an interregnum of one year. Maximilian II's son, Archduke Maximilian III, was elected king but was contested by the Swedish Sigismund III Vasa, who defeated Maximilian at the Battle of Byczyna and succeeded as ruler of the Commonwealth.
Báthory actively promoted his own legend, sponsoring a number of works about his life and achievements, from historical treatises to poetry. In his lifetime, he was featured in the works of Jan Kochanowski, Mikołaj Sęp Szarzyński and many others. He became a recurring character in Polish poetry and literature and featured as a central figure in poems, novels and drama by Jakub Jasiński, Józef Ignacy Kraszewski, Julian Ursyn Niemcewicz, Henryk Rzewuski and others. He has been a subject of numerous paintings, both during his life and posthumously. Among the painters who took him as a subject were Jan Matejko and Stanisław Wyspiański.
A statue of Báthory by Giovanni Ferrari was raised in 1789 in Padua, Italy, sponsored by the last king of the Commonwealth, Stanisław August Poniatowski. Other monuments to him include one in the Łazienki Palace (1795 by Andrzej Le Brun) and one in Sniatyn (1904, destroyed in 1939). He was a patron of the Vilnius University (then known as the Stefan Batory University) and several units in the Polish Army from 1919 to 1939. His name was borne by two 20th-century passenger ships of the Polish Merchant Navy, the MS Batory and TSS Stefan Batory. In modern Poland, he is the namesake of the Batory Steelmill, a nongovernmental Stefan Batory Foundation, the Polish 9th Armored Cavalry Brigade, and numerous Polish streets and schools. One of the districts of the town of Chorzów is named after him.
Immediately after his death, he was not fondly remembered in the Commonwealth. Many nobles took his behavior in the Zborowski affair and his domestic policies as indicating an interest in curtailing the nobility's Golden Freedoms and establishing an absolute monarchy. His contemporaries were also rankled by his favoritism toward Hungarians over nationals of the Commonwealth. He was also remembered, more trivially, for his Hungarian-style cap and saber (szabla "batorówka").
His later resurgence in Polish memory and historiography can be traced to the 19th-century era of partitions of Poland, when the Polish state lost its independence. He was remembered for his military triumphs and praised as an effective ruler by many, including John Baptist Albertrandi, Jerzy Samuel Bandtkie, Michał Bobrzyński, Józef Szujski and others. Though some historians like Tadeusz Korzon, Joachim Lelewel and Jędrzej Moraczewski remained more reserved, in 1887, Wincenty Zakrzewski noted that Báthory is "the darling of both the Polish public opinion and Polish historians". During the interwar period in the Second Polish Republic he was a cult figure, often compared - with the government's approval - to the contemporary dictator of Poland, Józef Piłsudski. After the Second World War, in the communist People's Republic of Poland, he became more of a controversial figure, with historians more ready to question his internal politics and attachment to Hungary. Nonetheless his good image remained intact, reinforced by the positive views of a popular Polish historian of that period, Paweł Jasienica.
Subud
Subud, an acronym of Susila Budhi Dharma, is an international spiritual movement that began in Indonesia in the 1920s, founded by Muhammad Subuh Sumohadiwidjojo (1901–1987). The basis of Subud is a spiritual exercise called the "latihan kejiwaan", which was said by Muhammad Subuh to represent guidance from "the Power of God" or "the Great Life Force". He claimed that Subud was not a new teaching or religion, and recommended that Subud members practice an established religion; he left the choice of religion up to the individual. Some members have converted to Islam; others have found that their faith in and practice of Christianity or Judaism, for example, has deepened after practising the "latihan". There are Subud groups in about 83 countries, with a worldwide membership of about 10,000.
The name "Subud" is an acronym that stands for three Javanese words, Susila Budhi Dharma, which are derived from the Sanskrit terms "suśīla" (good-tempered), buddhi, and dharma.
The meaning depends on the context in which they are being used. The original Sanskrit root words are defined differently than Pak Subuh indicates:
Sanskrit
Pak Subuh gives the following definitions:
Muhammad Subuh Sumohadiwidjojo explained in talks to Subud members, beginning in the 1940s, that during 1925 he was taking a late-night walk when he had an unexpected and unusual experience. He said he found himself enveloped in a brilliant light, and looked up to see what seemed like the sun falling directly onto his body, and he thought that he was having a heart attack. He said he went directly home, lay down on his bed, and prepared to die with the feeling that maybe it was his time, and that he could not fight it, so he surrendered himself to God.
According to the story, however, instead of dying he was moved from within to stand up and perform movements similar to his normal Muslim prayer routine. It seemed that he was not moving through his own volition; but was being guided by what he interpreted as the power of God. This same kind of experience reportedly happened to him for a few hours each night over a period of about 1000 days during which he slept little but was able to continue working full-time. He said he experienced a kind of "inner teaching" whereby he was given to understand a variety of things spontaneously.
As these experiences proceeded, Pak Subuh explained, he gained spontaneous insight into people and situations that he had not possessed before. Around 1933, as he reported, he received that if other people were physically near him while he was in a state of "latihan", then the experience would begin in them also. While still in his early thirties, Pak Subuh's reputation as someone with spiritual insight apparently grew, and people went to him to be 'opened'. They in turn could open others, and this is how Subud eventually spread around the world.
In Jakarta, Husein Rofé, an English linguist who had been living in Indonesia since 1950, met Pak Subuh. Rofé had been searching for a spiritual path and became the first non-Indonesian to be opened.
Subud moved outside of Indonesia when Rofé attended a religious congress in Japan in 1954. Subud first spread internationally into Japan, followed by Hong Kong and Cyprus. In 1957, Rofé (who was then in London) suggested that Pak Subuh visit Britain. Pak Subuh accepted the invitation and visited the home of John G. Bennett in Coombe Springs. It was at this time that many UK followers of George Gurdjieff were initiated into Subud, including Bennett himself, though he later left the group. Over the next 14 months Pak Subuh visited many countries before returning to Indonesia.
The Subud symbol was envisioned by Pak Subuh in 1959. The design consists of seven concentric circles and seven spokes, which, in traditional Javanese mysticism, represent seven levels of life forces as well as the Great Life Force that connects them. Each circle grows wider, the further out from the center, and each spoke narrows as it approaches the center. The space between the circles remains constant.
The symbol is often printed in black and white when color printing is not available. When colors are used, usually the circles and spokes are gold and the background is dark blue to black. However, the symbol is also sometimes blue on white or white on blue. The World Subud Association has registered this design, as well as the name "Subud", as a trade, service or collective membership mark in many countries.
The core of Subud is the "latihan" experience. Pak Subuh gives the following descriptions of "Subud":
The central practice of Subud is the "latihan kejiwaan" (literally "spiritual exercise" or "training of the spirit") or simply "the latihan". This exercise is not thought about, learned or trained for; it is unique for each person and the ability to "receive" it is passed on by being in the presence of another practicing member at the "opening" (see below). About twice a week, Subud members go to a local center to participate in a group latihan, men and women separately. The experience takes place in a room or a hall with open space. After a period of sitting quietly, the members are typically asked to stand and relax by a "helper" (see below), who then announces the start of the exercise.
In the practice of the exercise, members are typically advised to follow "what arises from within", not expecting anything in advance. One is recommended not to focus on any image or recite any mantra, nor to mix the exercise with other activities like meditation or use of drugs, but simply to intend to surrender to the Divine or the transcendent good or the will of God. (The term "God" is used here with a broad and inclusive intention. An individual is at liberty to substitute interpretations that they feel more in tune with.) One is not to pay attention to others in the room, each of whom is doing his or her own latihan. During the exercise, practitioners may find that, in terms of physical and emotional expression, they involuntarily move, make sounds, walk around, dance, jump, skip, laugh, cry or whatever. The experience varies greatly for different people, but the practitioner is always wholly conscious throughout and free to stop the exercise at any time.
Many Subud members believe that this experience, apparently arising from within each person, provides them with something of what they currently need in life. For some, the latihan may appear to initially involve a "purification", which possibly permits subsequently deeper experience. Members may describe their latihan as leaving them feeling "cleansed", "centered", "at peace", or "energized". The latihan is sometimes said to "work" 24 hours a day, not only when one is explicitly "doing" it. Supposedly, the regular practice of the latihan will enable people to experience positive development in various aspects of their daily life and being. The official website talks of "a deepening of the natural connection with wisdom, one's higher self, the divine, or God, depending on one's preferred terminology". (see links)
Although the latihan can be practised alone, members are advised to participate regularly, ideally twice a week, in a group latihan. When a member has enough experience to reliably sense the appropriate time to finish his or her latihan session, he or she may add perhaps one more weekly session of the latihan at home.
While the suggestions of Subud's founder are held as valuable by many members, there is no requirement to believe anything, and the latihan is open to individuals of all faiths - or none. Subud officially endorses no doctrine regarding the latihan's nature or benefits.
The "opening" refers to a person's first latihan, which is specially arranged to pass on the "contact", metaphorically resembling a candle flame that lights a new candle with no difference in quality of the flame. Only after the formal opening process, in most cases, is a person able to receive for himself or herself, and is then welcome to participate in the group latihan. In the opening, the person is accompanied by one or more experienced members called "helpers", and is asked to simply stand and relax with the helpers standing nearby. A simple statement or agreed set of "opening words" is read by one of the helpers that acknowledges the person's wish to receive the contact. The helpers then begin the exercise as they would normally do. The contact is passed on to the new member without effort or intention on the part of anyone present. This is the moment of the person's first connection with the latihan kejiwaan of Subud.
Testing is a distinct variety of the latihan directed toward receiving guidance or insight on a particular issue. Some question or request for clarification is acknowledged, and then the exercise is performed with openness to the issue. The original word for testing used by Muhammad Subuh was "terimah," which is Indonesian for "receiving". Many people who have been practicing the latihan for some time claim to be able to recognize indications or intuitions "from their inner feeling" in response to questions that are put forward.
Such indications may take various forms, including sounds, visions, vibrations and/or spontaneous physical movements similar to, though perhaps more intense than, those experienced in the usual latihan. However, it appears that such indications often defy intellectual analysis and that the supposed guidance can be obscured or biased by the mental or emotional attitudes of those present. Testing is generally viewed as an instrument for helping to clarify issues in the present, but may lead to confusion if treated as a kind of fortune-telling. Nevertheless, many Subud members claim to benefit from testing in terms of resolving issues.
Testing is normally used to help select helpers, and often committee members, throughout the World Subud Association. Pak Subuh's book "Susila Budhi Dharma" cites examples of situations in which testing may be useful, including self-training in putting any benefits of the latihan into practice. (Throughout Muhammad Subuh's book "Susila Budhi Dharma", which was written in 1952, testing is always referred to as "feeling" or "receiving". The first time "testing" was called by that name was in 1957 by John G. Bennett.)
Individual Subud members often voluntarily engage in occasional fasting as Pak Subuh recommended. Each year, some members fast at the same time as the Muslim fast of Ramadan which Pak Subuh, himself a Muslim, claimed to be suitable for non-Muslims. Others fast during Lent or simply on a regular, private basis. In this context, fasting is regarded by many Subud members as spiritually edifying, although its practice is not expected.
Pak Subuh provided advice and guidance in his talks to provide direction to members as their latihan deepens. Although in general there are no rules in the practice of the latihan, non-members may not attend the latihan exercise without first receiving the contact referred to above, known as their opening.
Subud's founder wanted the latihan to be accessible to people of all cultures, faiths and ethnicities. Respect for the diversity of personal backgrounds and the uniqueness of each individual, along with a general absence of "thou shalt nots", are aspects of the organization that have been attractive to many members.
Members who wish to take on organizational responsibility in Subud can volunteer as a committee member or as a helper. Each responsibility can be performed at the local, regional, national, and international levels. Members often move from one responsibility to another, as needed.
The broadest organizational responsibility rests with the World Subud Association, which organizes a World Congress every four years and consists of the Subud World Council, Subud representatives from each country, and individual members who wish to participate, although only representatives can vote. The headquarters of the international organization moves to a different country every four years.
Each level of the association has members called "helpers" whose role is to coordinate the timing of group latihan, witness the opening of new members, speak to those interested in the latihan, be available to discuss problems relating to the latihan, and sometimes attend to the latihan needs of isolated or indisposed Subud members. Helpers are usually selected from members who are willing to perform the duties, and selection generally occurs through testing. In no way does selection mean that a person is more spiritually advanced than a member who is not a helper.
Helpers exist at the local, regional (in some countries), national and international levels. Helpers' geographical status relates to the regional or national supportive duties they are expected to provide – otherwise, there are no geographical restrictions on where a helper is considered to be a helper. A local helper from London who travels to Jakarta, for example, will be seen as a helper there, and can do testing or participate in a new member's opening in the same way as any Indonesian helper.
There are normally 18 international helpers—nine men and nine women. Three men and three women are assigned to each of the three areas in Subud:
The international helpers are members of the World Subud Council. They serve on a voluntary basis for a four-year term, which runs from World Congress to World Congress. There is no distinction in rank between local, national, or international helpers. Nor is there a difference in status between helpers, committee or members. Being a helper is seen not as a talent but as a service role.
Ibu Siti Rahayu Wiryohudoyo is Pak Subuh's eldest daughter. In a talk given on 5 March 2010 to a National Gathering in Semarang, Indonesia, Ibu Siti Rahayu explains how she came to be appointed "spiritual advisor" by the Subud International Congress.
Most Subud groups have a committee, typically including a chairperson, vice-chair, treasurer and secretary. This committee is responsible for making sure there is a place to do group latihan, communications, budgets, and supporting the mutual efforts of members at the local group. A similar structure functions at the regional (in certain countries), national, zonal and international levels.
The international executive is the International Subud Committee (ISC). Apart from ensuring communication, publishing, budgeting, archives and support of affiliates, it organizes a World Congress every four years. The ISC chairperson sits on the World Subud Council.
For purposes of a practical organizational structure, the Subud association is divided into nine multinational zones, more or less as follows:
Each zone has four representatives who are the voting members on the World Subud Council. Like helpers, they serve as volunteers for a four-year term. They are selected at Zone Meetings.
The chairperson of the World Subud Association serves a four-year term from one World Congress to the next and is also the chairperson of the World Subud Council. The World Subud Council is responsible for ensuring that decisions made at World Congress are carried through.
Subud affiliates (sometimes called ‘wings’) are subsidiary organizations that focus on specific projects at a national or international level. They are technically independent organizations but have overlapping boards of trustees. They include:
Some chairpersons of these affiliates also sit on the World Subud Council and serve a four-year term.
In addition to the above affiliates, a foundation – the Muhammad Subuh Foundation (MSF) – has been set up, whose main work is helping groups acquire their own latihan premises.
Informal networks and interest groups initiated by members include a Peace Network, a Spiritual Support Network (Yahoo group) and several Facebook groups.
When Subud first spread outside Indonesia, Pak Subuh talked mainly about the spiritual exercise. He started to encourage Subud members to engage in enterprises and donate a proportion of profits to welfare projects and to maintaining the Subud organisation. He explained that the fact of the latihan "bringing to life" the physical body indicates that worship need not be viewed as narrowly as prayer in places of worship; that people's ordinary lives, when following and guided by the Power of God, are ongoing worship, such that there is a dynamic interplay between "material" life and "spiritual" life. Therefore, his encouragement for Subud members to engage in enterprise is seen in the context of putting the latihan into practice.
Membership is open to any person over 17 years of age, irrespective of the person's religion or lack of religion. (As Pak Subuh saw it, the latihan is for "all of mankind.") The exception is that someone suffering from a serious mental illness may not be initiated as a member.
There is normally a waiting period of up to three months before a person may be opened. During this period, the enquirer is expected to meet a few times with the local helpers so that he or she can have questions answered and doubts clarified.
There is no membership fee, but most Subud members contribute, for example, to the rent or upkeep of premises where they meet.
Stolen Generations
The Stolen Generations (also known as Stolen Children) were children of Australian Aboriginal and Torres Strait Islander descent who were removed from their families by Australian federal and state government agencies and church missions, under acts of their respective parliaments. The removals of those referred to as "half-caste" children were conducted in the period between approximately 1905 and 1967, although in some places mixed-race children were still being taken into the 1970s.
Official government estimates are that in certain regions between one in ten and one in three Indigenous Australian children were forcibly taken from their families and communities between 1910 and 1970.
Numerous 19th and early 20th-century contemporaneous documents indicate that the policy of removing mixed-race Aboriginal children from their mothers related to an assumption that the Aboriginal peoples were dying off. Given their catastrophic population decline after white contact, whites assumed that the full-blood tribal Aboriginal population would be unable to sustain itself, and was doomed to extinction. The idea expressed by A. O. Neville, the Chief Protector of Aborigines for Western Australia, and others as late as 1930 was that mixed-race children could be trained to work in white society, and over generations would marry white and be assimilated into the society.
Some European Australians considered any proliferation of mixed-descent children (labelled "half-castes", "crossbreeds", "quadroons", and "octoroons", terms now considered derogatory to Indigenous Australians) to be a threat to the stability of the prevailing culture, or to a perceived racial or cultural "heritage". The Northern Territory Chief Protector of Aborigines, Dr. Cecil Cook, argued that "everything necessary [must be done] to convert the half-caste into a white citizen".
In the Northern Territory, the segregation of Indigenous Australians of mixed descent from "full-blood" Indigenous people began with the government removing children of mixed descent from their communities and placing them in church-run missions, and later creating segregated reserves and compounds to hold all Indigenous Australians. This was a response to public concern over the increase in the number of mixed-descent children and sexual exploitation of young Aboriginal women by non-Indigenous men, as well as fears among non-Indigenous people of being outnumbered by a mixed-descent population.
Under the "Northern Territory Aboriginals Act 1910", the Chief Protector of Aborigines was appointed the "legal guardian of every Aboriginal and every half-caste child up to the age of 18 years", thus providing the legal basis for enforcing segregation. After the Commonwealth took control of the Territory, under the "Aboriginals Ordinance 1918", the Chief Protector was given total control of all Indigenous women regardless of their age, unless married to a man who was "substantially of European origin", and his approval was required for any marriage of an Indigenous woman to a non-Indigenous man.
The "Victorian Aboriginal Protection Act 1869" included the earliest legislation to authorise child removal from Aboriginal parents. The Central Board for the Protection of Aborigines had been advocating such powers since 1860. Passage of the Act gave the colony of Victoria a wide suite of powers over Aboriginal and "half-caste" persons, including the forcible removal of children, especially "at-risk" girls. Through the late 19th and early 20th century, similar policies and legislation were adopted by other states and territories, such as the "Aboriginals Protection and Restriction of the Sale of Opium Act 1897" (Qld), the "Aboriginals Ordinance 1918" (NT), the "Aborigines Act 1934" (SA), and the "1936 Native Administration Act" (WA).
As a result of such legislation, states arranged widespread removal of (primarily) mixed-race children from their Aboriginal mothers. In addition, appointed Aboriginal protectors in each state exercised wide-ranging guardianship powers over Aboriginal people up to the age of 16 or 21, often determining where they could live or work. Policemen or other agents of the state (some designated as "Aboriginal Protection Officers") were given the power to locate and transfer babies and children of mixed descent from their mothers, families, and communities into institutions for care. In these Australian states and territories, institutions (both government and missionary) for half-caste children were established in the early decades of the 20th century to care for and educate the mixed-race children taken from their families. Examples of such institutions include Moore River Native Settlement in Western Australia, Doomadgee Aboriginal Mission in Queensland, Ebenezer Mission in Victoria, and Wellington Valley Mission in New South Wales, as well as Catholic missions such as Beagle Bay and Garden Point.
The exact number of children removed is unknown. Estimates of numbers have been widely disputed. The "Bringing Them Home" report says that "at least 100,000" children were removed from their parents. This figure was estimated by multiplying the Aboriginal population in 1994 (303,000) by the report's maximum estimate of "one in three" Aboriginal persons separated from their families. The report stated that "between one in three and one in ten" children were separated from their families. Given differing populations over a long period of time, different policies at different times in different states (which also resulted in different definitions of target children), and incomplete records, accurate figures are difficult to establish. The academic Robert Manne has stated that the lower-end figure of one in 10 is more likely; he estimates that between 20,000 and 25,000 Aboriginal children were removed over six decades, based on a survey of self-identified Indigenous people conducted by the Australian Bureau of Statistics (ABS). According to the "Bringing Them Home" report:
The report closely examined the distinctions between "forcible removal", "removal under threat or duress", "official deception", "uninformed voluntary release", and "voluntary release". The evidence indicated that in numerous cases, children were brutally and forcibly removed from their parent or parents, possibly even from the hospital shortly after birth, when identified as mixed-race babies. Aboriginal Protection Officers often made the judgement to remove certain children. In some cases, families were required to sign legal documents to relinquish care to the state. In Western Australia, the "Aborigines Act 1905" removed the legal guardianship of Aboriginal parents. It made all their children legal wards of the state, so the government did not require parental permission to relocate the mixed-race children to institutions.
In 1915, in New South Wales, the "Aborigines Protection Amending Act 1915" gave the Aborigines' Protection Board authority to remove Aboriginal children "without having to establish in court that they were neglected." At the time, some members of Parliament objected to the NSW amendment; one member stated it enabled the Board to "steal the child away from its parents." At least two members argued that the amendment would result in children being subjected to unpaid labour (at institutions or farms) tantamount to "slavery". Writing in the 21st century, Professor Peter Read said that Board members, in recording reasons for removal of children, noted simply "For being Aboriginal." But the number of files bearing such a comment appears to be only one or two, with two others noted only as "Aboriginal".
In 1909, the Protector of Aborigines in South Australia, William Garnet South, reportedly "lobbied for the power to remove Aboriginal children without a court hearing because the courts sometimes refused to accept that the children were neglected or destitute". South argued that "all children of mixed descent should be treated as neglected". His lobbying reportedly played a part in the enactment of the "Aborigines Act 1911." This designated his position as the legal guardian of every Aboriginal child in South Australia, not only the so-called "half-castes".
The "Bringing Them Home" report identified instances of official misrepresentation and deception, such as when caring and able parents were incorrectly described by Aboriginal Protection Officers as not being able to properly provide for their children. In other instances, parents were told by government officials that their child or children had died, even though this was not the case. One first-hand account referring to events in 1935 stated:
I was at the post office with my Mum and Auntie [and cousin]. They put us in the police ute and said they were taking us to Broome. They put the mums in there as well. But when we'd gone [about ] they stopped, and threw the mothers out of the car. We jumped on our mothers' backs, crying, trying not to be left behind. But the policemen pulled us off and threw us back in the car. They pushed the mothers away and drove off, while our mothers were chasing the car, running and crying after us. We were screaming in the back of that car. When we got to Broome they put me and my cousin in the Broome lock-up. We were only ten years old. We were in the lock-up for two days waiting for the boat to Perth.
The report discovered that removed children were, in most cases, placed into institutional facilities operated by religious or charitable organisations. A significant number, particularly females, were "fostered" out. Children taken to such institutions were trained to be assimilated to Anglo-Australian culture. Policies included punishment for speaking their local Indigenous languages. The intention was to educate them for a different future and to prevent their being socialised in Aboriginal cultures. The boys were generally trained as agricultural labourers and the girls as domestic servants; these were the chief occupations of many Europeans at the time in the largely rural areas outside cities.
A common aspect of the removals was the failure by these institutions to keep records of the actual parentage of the child, or such details as the date or place of birth. As is stated in the report:
the physical infrastructure of missions, government institutions and children's homes was often very poor and resources were insufficient to improve them or to keep the children adequately clothed, fed and sheltered.
The children were taken into care purportedly to protect them from neglect and abuse. However, the report said that, among the 502 inquiry witnesses, 17% of female witnesses and 7.7% of male witnesses reported having suffered a sexual assault while in an institution, at work, or while living with a foster or adoptive family.
Documentary evidence, such as newspaper articles and reports to parliamentary committees, suggest a range of rationales. Apparent motivations included the belief that the Aboriginal people would die out, given their catastrophic population decline after white contact, the belief that they were heathens and were better off in non-indigenous households, and the belief that full-blooded Aboriginal people resented miscegenation and the mixed-race children fathered and abandoned by white men.
The stated aim of the "resocialisation" program was to improve the integration of Aboriginal people into modern [European-Australian] society; however, a recent study conducted in Melbourne reported that there was no tangible improvement in the social position of "removed" Aboriginal people as compared to "non-removed". Particularly in the areas of employment and post-secondary education, the removed children had about the same results as those who were not removed. In the early decades of the program, post-secondary education was limited for most Australians, but the removed children lagged behind their white contemporaries as educational opportunities improved.
The study indicated that removed Aboriginal people were less likely to have completed a secondary education, three times as likely to have acquired a police record, and were twice as likely to use illicit drugs as were Aboriginal people who grew up in their ethnic community. The only notable advantage "removed" Aboriginal people achieved was a higher average income. The report noted this was likely due to the increased urbanisation of removed individuals, and greater access to welfare payments than for Aboriginal people living in remote communities. There seemed to be little evidence that removed mixed-race Aboriginal people had been successful in gaining better work even in urbanised areas.
By around the age of 18, the children were released from government control. In cases where their files were available, individuals were sometimes allowed to view their own files. According to the testimony of one Aboriginal person:
I was requested to attend at the Sunshine Welfare Offices, where they formerly (sic) discharged me from State ward ship. It took the Senior Welfare Officer a mere 20 minutes to come clean, and tell me everything that my heart had always wanted to know...that I was of "Aboriginal descent", that I had a Natural mother, father, three brothers and a sister, who were alive...He placed in front of me 368 pages of my file, together with letters, photos and birthday cards. He informed me that my surname would change back to my Mother's maiden name of Angus.
The "Bringing Them Home" report condemned the policy of disconnecting children from their cultural heritage. One witness said to the commission:
I've got everything that could be reasonably expected: a good home environment, education, stuff like that, but that's all material stuff. It's all the non-material stuff that I didn't have — the lineage... You know, you've just come out of nowhere; there you are.
As of 2015, many of the recommendations of "Bringing Them Home" had yet to be fulfilled. In 2017, 35% of all children in out-of-home care in Australia identified as Aboriginal, an increase from 20% in 1997, when "Bringing Them Home" was published.
A 2019 study by the Australian Institute of Health and Welfare (AIHW) found that children living in households with members of the Stolen Generations are more likely "to experience a range of adverse outcomes", including poor health, especially mental health, missing school and living in poverty. There are high incidences of anxiety, depression, PTSD and suicide, along with alcohol abuse, among the Stolen Generations, with this resulting in unstable parenting and family situations.
Historian Professor Peter Read, then at the Australian National University, was the first to use the phrase "stolen generation". He published a magazine article on the topic with this title, based on his research. He expanded the article into a book, "The Stolen Generations" (1981). Widespread awareness of the Stolen Generations, and the practices that created them, grew in the late 1980s through the efforts of Aboriginal and white activists, artists, and musicians (Archie Roach's "Took the Children Away" and Midnight Oil's "The Dead Heart" being examples of the latter). The "Mabo v Queensland (No 2)" case (commonly known as the "Mabo case") attracted great media and public attention to itself and to all issues related to the government treatment of Aboriginal people and Torres Strait Islanders in Australia, most notably the Stolen Generations.
In early 1995, Rob Riley, an activist with the Aboriginal Legal Service, published "Telling Our Story." It described the large-scale negative effects of past government policies that resulted in the removal of thousands of mixed-race Aboriginal children from their families and their being reared in a variety of conditions in missions, orphanages, reserves, and white foster homes.
The Australian Human Rights and Equal Opportunity Commission's "National Inquiry into the Separation of Aboriginal and Torres Strait Islander Children from Their Families" commenced in May 1995, presided over by the Commission's president Sir Ronald Wilson and its Aboriginal and Torres Strait Islander Social Justice Commissioner Mick Dodson. During the ensuing 17 months, the Inquiry visited every state and Territory in Australia, heard testimony from 535 Aboriginal Australians, and received submissions of evidence from more than 600 more. In April 1997, the Commission released its official "Bringing Them Home" report.
Between the commissioning of the National Inquiry and the release of the final report in 1997, the government of John Howard had replaced the Paul Keating government. At the Australian Reconciliation Convention in May 1997, Howard was quoted as saying: "Australians of this generation should not be required to accept guilt and blame for past actions and policies."
Following publication of the report, the parliament of the Northern Territory and the state parliaments of Victoria, South Australia, and New South Wales passed formal apologies to the Aboriginal people affected. On 26 May 1998, the first "National Sorry Day" was held; reconciliation events were held nationally, and attended by a total of more than one million people. As public pressure continued to increase on the government, Howard drafted a motion of "deep and sincere regret over the removal of Aboriginal children from their parents", which was passed by the federal parliament in August 1999. Howard said that the Stolen Generation represented "the most blemished chapter in the history of this country."
Activists took the issue of the Stolen Generations to the United Nations Commission on Human Rights. At its hearing on this subject in July 2000, the Commission on Human Rights strongly criticised the Howard government for its handling of issues related to the Stolen Generations. The UN Committee on the Elimination of Racial Discrimination concluded its discussion of Australia's 12th report on its actions by acknowledging "the measures taken to facilitate family reunion and to improve counselling and family support services for the victims", but expressed concern:
that the Commonwealth Government does not support a formal national apology and that it considers inappropriate the provision of monetary compensation for those forcibly and unjustifiably separated from their families, on the grounds that such practices were sanctioned by law at the time and were intended to "assist the people whom they affected". The Committee recommended "that the State party consider the need to address appropriately the extraordinary harm inflicted by these racially discriminatory practices."
Activists highlighted the Stolen Generations and related Aboriginal issues during the Sydney 2000 Summer Olympics. They set up a large "Aboriginal Tent City" on the grounds of Sydney University to bring attention to Aboriginal issues in general. Cathy Freeman, an Aboriginal athlete, was chosen to light the Olympic flame and went on to win the gold medal in the 400-metre sprint. In interviews, she said that her own grandmother was a victim of forced removal. The internationally successful rock group Midnight Oil attracted worldwide media interest by performing at the Olympic closing ceremony in black sweatsuits with the word "SORRY" emblazoned across them.
In 2000, Phillip Knightley summed up the Stolen Generations in these terms:
This cannot be over-emphasized—the Australian government literally kidnapped these children from their parents as a matter of policy. White welfare officers, often supported by police, would descend on Aboriginal camps, round up all the children, separate the ones with light-coloured skin, bundle them into trucks and take them away. If their parents protested they were held at bay by police.
According to the archaeologist and writer Josephine Flood, "The well-meaning but ill-conceived policy of forced assimilation of mixed-race Aborigines is now universally condemned for the trauma and loss of language and culture it brought to the stolen children and their families."
One of the recommendations of the 1997 Bringing Them Home report was for Australian parliaments to offer an official apology. A decade later, on 13 February 2008, Prime Minister Kevin Rudd presented an apology to Indigenous Australians as a motion to be voted on by the house. The apology text was as follows:
The text of the apology did not refer to compensation to Aboriginal people generally or to members of the Stolen Generations specifically. Rudd followed the apology with a 20-minute speech to the house about the need for this action. The government's apology and his speech were widely applauded among both Indigenous Australians and the non-Indigenous general public.
Opposition leader Brendan Nelson also delivered a 20-minute speech. He endorsed the apology, but in his speech Nelson referred to the "under-policing" of child welfare in Aboriginal communities, as well as a host of social ills blighting the lives of Aboriginal people. His speech was considered controversial and received mixed reactions. Thousands of people who had gathered in public spaces around Australia to hear the apology turned their backs on the screens that broadcast Nelson speaking. In Perth, people booed and jeered until the screen was switched off. In Parliament House's Great Hall, elements of the audience began a slow clap, with some finally turning their backs.
The apology was unanimously adopted by the House of Representatives, although six members of Nelson's opposition caucus left the House in protest at the apology. Later that day, the Senate considered a motion for an identical apology, which was also passed unanimously. Beforehand, the Leader of the Greens, Senator Bob Brown, attempted to amend the motion to include words committing parliament to offering compensation to those who suffered loss under past Indigenous policies, but was opposed by all the other parties.
The legal circumstances regarding the Stolen Generations remain unclear. Although some compensation claims are pending, a court cannot rule on behalf of plaintiffs simply because they were removed, because, at the time, such removals were authorised under Australian law. Australian federal and state governments' statute law and associated regulations provided for the removal from their birth families and communities of known mixed-race Aboriginal children, or those who visibly appeared mixed.
Compensation claims have been heard by the NSW Supreme Court's Court of Appeal in "Williams v The Minister Aboriginal Land Rights Act 1983 and New South Wales" [2000] NSWCA 255 and the Australian Federal Court in "Cubillo v Commonwealth of Australia" [2000] FCA 1084. In "Williams", an individual (rather than a group of plaintiffs) made claims in negligence arising from having been placed under the control of the Aborigines Welfare Board pursuant to s 7(2) of the "Aborigines Welfare Act 1909" shortly after her birth, and was placed by the Board with the United Aborigines Mission at its Aborigines Children Home at Bomaderry near Nowra, NSW. The trial judge found that there was no duty of care and therefore that an action in negligence could not succeed. This was upheld by the NSW Court of Appeal in 2000.
In relation to whether the action in NSW courts was limited by the passage of time, the Court of Appeal, reversing Studert J, extended the limitation period for the non-equitable claims by about three decades pursuant to s 60G of the Limitation Act 1969 (NSW): Williams v Minister, Aboriginal Land Rights Act 1983 (1994) 35 NSWLR 497.
The apology is not expected to have any legal effect on claims for compensation.
The word "stolen" is used here to refer to the Aboriginal children having been taken away from their families. It has been in use for this since the early 20th century. For instance, Patrick McGarry, a member of the Parliament of New South Wales, objected to the "Aborigines Protection Amending Act 1915" which authorised the Aborigines' Protection Board to remove Aboriginal children from their parents without having to establish cause. McGarry described the policy as "steal[ing] the child away from its parents".
In 1924, the "Adelaide" "Sun" wrote: "The word 'stole' may sound a bit far-fetched but by the time we have told the story of the heart-broken Aboriginal mother we are sure the word will not be considered out of place."
In most jurisdictions, Indigenous Australians were put under the authority of a Protector, effectively being made wards of the State. The protection was done through each jurisdiction's Aboriginal Protection Board; in Victoria and Western Australia these boards were also responsible for applying what were known as "Half-Caste Acts".
More recent usage has developed since Peter Read's publication of "The Stolen Generations: The Removal of Aboriginal Children in New South Wales 1883 to 1969" (1981), which examined the history of these government actions. The 1997 publication of the government's "Bringing Them Home – Report of the National Inquiry into the Separation of Aboriginal and Torres Strait Islander Children from Their Families" heightened awareness of the Stolen Generations. The acceptance of the term in Australia is illustrated by the 2008 formal apology to the Stolen Generations, led by Prime Minister Kevin Rudd and passed by both houses of the Parliament of Australia. Previous apologies had been offered by State and Territory governments in the period 1997–2001.
There is some opposition to the concept of the term "Stolen Generations". Former Prime Minister John Howard did not believe the government should apologise to the Australian Aboriginal peoples. Then Minister for Aboriginal and Torres Strait Islander Affairs John Herron disputed usage of the term in April 2000. Others who disputed the use of the term include Peter Howson, Minister for Aboriginal Affairs from 1971 to 1972, and Keith Windschuttle, an historian who argues that some of the abuses towards Australian Aboriginal peoples have been exaggerated and in some cases invented.
Septimius Severus
Septimius Severus (Lucius Septimius Severus Eusebes Pertinax; 11 April 145 – 4 February 211) was Roman emperor from 193 to 211. He was born in Leptis Magna in the Roman province of Africa. As a young man he advanced through the customary succession of offices under the reigns of Marcus Aurelius and Commodus. Severus seized power after the death of Emperor Pertinax in 193 during the Year of the Five Emperors.
After deposing and killing the incumbent emperor Didius Julianus, Severus fought his rival claimants, the Roman generals Pescennius Niger and Clodius Albinus. Niger was defeated in 194 at the Battle of Issus in Cilicia. Later that year Severus waged a short punitive campaign beyond the eastern frontier, annexing the Kingdom of Osroene as a new province. Severus defeated Albinus three years later at the Battle of Lugdunum in Gaul.
After consolidating his rule over the western provinces, Severus waged another brief, more successful war in the east against the Parthian Empire, sacking their capital Ctesiphon in 197 and expanding the eastern frontier to the Tigris. He then enlarged and fortified the "Limes Arabicus" in Arabia Petraea. In 202, he campaigned in Africa and Mauretania against the Garamantes, capturing their capital Garama and expanding the "Limes Tripolitanus" along the southern desert frontier of the empire. He proclaimed as Augusti (co-emperors) his elder son Caracalla in 198 and his younger son Geta in 209, both born of his second wife Julia Domna.
Severus travelled to Britain in 208, strengthening Hadrian's Wall and reoccupying the Antonine Wall. In AD 209 he invaded Caledonia (modern Scotland) with an army of 50,000 men but his ambitions were cut short when he fell fatally ill of an infectious disease in late 210. He died in early 211 at Eboracum (today York, England), and was succeeded by his sons, thus founding the Severan dynasty. It was the last dynasty of the Roman Empire before the Crisis of the Third Century.
Born on 11 April 145 at Leptis Magna (in present-day Libya) as the son of Publius Septimius Geta and Fulvia Pia, Septimius Severus came from a wealthy and distinguished family of equestrian rank. He had Italian Roman ancestry on his mother's side, and was descended from Punic forebears on his father's side.
Severus' father, an obscure provincial, held no major political status, but he had two cousins, Publius Septimius Aper and Gaius Septimius Severus, who served as consuls under the emperor Antoninus Pius . His mother's ancestors had moved from Italy to North Africa; they belonged to the "gens" Fulvia, an Italian patrician family that originated in Tusculum. Septimius Severus had two siblings: an elder brother, Publius Septimius Geta; and a younger sister, Septimia Octavilla. Severus's maternal cousin was the praetorian prefect and consul Gaius Fulvius Plautianus.
Septimius Severus grew up in Leptis Magna. He spoke the local Punic language fluently, but he was also educated in Latin and Greek, which he spoke with a slight accent. Little else is known of the young Severus' education but, according to Cassius Dio, the boy had been eager for more education than he actually received. Presumably Severus received lessons in oratory: at the age of 17 he gave his first public speech.
Severus sought a public career in Rome in around 162. At the recommendation of his relative Gaius Septimius Severus, Emperor Marcus Aurelius () granted him entry into the senatorial ranks. Membership in the senatorial order was a prerequisite to attain positions within the "cursus honorum" and to gain entry into the Roman Senate. Nevertheless, it appears that Severus' career during the 160s met with some difficulties. It is likely that he served as a "vigintivir" in Rome, overseeing road maintenance in or near the city, and he may have appeared in court as an advocate. At the time of Marcus Aurelius he was the State Attorney ("Advocatus fisci"). However, he omitted the military tribunate from the "cursus honorum" and had to delay his quaestorship until he had reached the required minimum age of 25. To make matters worse, the Antonine Plague swept through the capital in 166.
With his career at a halt, Severus decided to temporarily return to Leptis, where the climate was healthier. According to the "Historia Augusta", a usually unreliable source, he was prosecuted for adultery during this time but the case was ultimately dismissed. At the end of 169 Severus was of the required age to become a quaestor and journeyed back to Rome. On 5 December, he took office and was officially enrolled in the Roman Senate. Between 170 and 180 his activities went largely unrecorded, in spite of the fact that he occupied an impressive number of posts in quick succession. The Antonine Plague had thinned the senatorial ranks and, with capable men now in short supply, Severus' career advanced more steadily than it otherwise might have.
The sudden death of his father necessitated another return to Leptis Magna to settle family affairs. Before he was able to leave Africa, Mauri tribesmen invaded southern Spain. Control of the province was handed over to the Emperor, while the Senate gained temporary control of Sardinia as compensation. Thus, Septimius Severus spent the remainder of his second term as quaestor on the island of Sardinia.
In 173, Severus' kinsman Gaius Septimius Severus was appointed proconsul of the Province of Africa. The elder Severus chose his cousin as one of his two "legati pro praetore", a senior military appointment. Following the end of this term, Septimius Severus returned to Rome, taking up office as tribune of the plebs, a senior legislative position, with the distinction of being the "candidatus" of the emperor.
About 175, Septimius Severus, in his early thirties at the time, contracted his first marriage, to Paccia Marciana, a woman from Leptis Magna. He probably met her during his tenure as legate under his uncle. Marciana's name suggests Punic or Libyan origin, but nothing else is known of her. Septimius Severus does not mention her in his autobiography, though he commemorated her with statues when he became Emperor. The unreliable "Historia Augusta" claims that Marciana and Severus had two daughters, but no other attestation of them has survived. It appears that the marriage produced no surviving children, despite lasting for more than ten years.
Marciana died of natural causes around 186. Septimius Severus, now in his forties, childless and eager to remarry, began enquiring into the horoscopes of prospective brides. The "Historia Augusta" relates that he heard of a woman in Syria of whom it had been foretold that she would marry a king, and so Severus sought her as his wife. This woman was an Emesan Syrian named Julia Domna. Her father, Julius Bassianus, descended from the Arab Emesan dynasty and served as a high priest to the local cult of the sun god Elagabal. Domna's older sister, Julia Maesa, would become the grandmother of the future emperors Elagabalus and Alexander Severus.
Bassianus accepted Severus' marriage proposal in early 187, and in the summer the couple married in Lugdunum (modern-day Lyon, France), of which Severus was the governor. The marriage proved happy, and Severus cherished Julia and her political opinions. Julia built "the most splendid reputation" by applying herself to letters and philosophy. They had two sons, Lucius Septimius Bassianus (later nicknamed Caracalla, born 4 April 188 in Lugdunum) and Publius Septimius Geta (born 7 March 189 in Rome).
In 191, on the advice of Quintus Aemilius Laetus, prefect of the Praetorian Guard, Commodus appointed Severus as governor of Pannonia Superior. Commodus was assassinated the following year. Pertinax was acclaimed emperor, but he was then killed by the Praetorian Guard in early 193. In response to the murder of Pertinax, Severus's legion "XIV Gemina" proclaimed him Emperor at Carnuntum. Nearby legions, such as "X Gemina" at Vindobona, soon followed suit. Having assembled an army, Severus hurried to Italy.
Pertinax's successor in Rome, Didius Julianus, had bought the emperorship in an auction. Julianus was condemned to death by the Senate and killed. Severus took possession of Rome without opposition. He executed Pertinax's murderers and dismissed the rest of the Praetorian Guard, filling its ranks with loyal troops from his own legions.
The legions of Syria had proclaimed Pescennius Niger emperor. At the same time Severus felt it reasonable to offer Clodius Albinus, the powerful governor of Britannia, who had probably supported Didius against him, the rank of Caesar, which implied some claim to succession. With his rear safe, he moved to the East and crushed Niger's forces at the Battle of Issus (194). While campaigning against Byzantium, he ordered that the tomb of his fellow-Carthaginian Hannibal be covered with fine marble.
He devoted the following year to suppressing Mesopotamia and other Parthian vassals who had backed Niger. Afterwards Severus declared his son Caracalla as his successor, which caused Albinus to be hailed emperor by his troops and to invade Gallia. After a short stay in Rome, Severus moved north to meet him. On 19 February 197 at the Battle of Lugdunum, with an army of about 75,000 men, mostly composed of Pannonian, Moesian and Dacian legions and a large number of auxiliaries, Severus defeated and killed Clodius Albinus, securing his full control over the empire.
In early 197 Severus departed Rome and travelled to the east by sea. He embarked at Brundisium and probably landed at the port of Aegeae in Cilicia, travelling to Syria by land. He immediately gathered his army and crossed the Euphrates. Abgar IX, titular King of Osroene but essentially only the ruler of Edessa since the annexation of his kingdom as a Roman province, handed over his children as hostages and assisted Severus' expedition by providing archers. King Khosrov I of Armenia also sent hostages, money and gifts.
Severus travelled on to Nisibis, which his general Julius Laetus had prevented from falling into enemy hands. Afterwards Severus returned to Syria to plan a more ambitious campaign. The following year he led another, more successful campaign against the Parthian Empire, reportedly in retaliation for the support it had given to Pescennius Niger. His legions sacked the Parthian royal city of Ctesiphon and he annexed the northern half of Mesopotamia to the empire, taking the title "Parthicus Maximus", following the example of Trajan. However, he was unable to capture the fortress of Hatra even after two lengthy sieges, just like Trajan, who had tried nearly a century before. During his time in the east, though, he also expanded the "Limes Arabicus", building new fortifications in the Arabian Desert from Basie to Dumatha.
Severus' relations with the Senate were never good. He was unpopular with them from the outset, having seized power with the help of the military, and he returned the sentiment. Severus ordered the execution of a large number of Senators on charges of corruption or conspiracy against him and replaced them with his favourites. Although his actions turned Rome more into a military dictatorship, he was popular with the citizens of Rome, having stamped out the rampant corruption of Commodus's reign. When he returned from his victory over the Parthians, he erected the Arch of Septimius Severus in Rome.
According to Cassius Dio, however, after 197 Severus fell heavily under the influence of his Praetorian Prefect, Gaius Fulvius Plautianus, who came to have almost total control of the imperial administration. Plautianus's daughter, Fulvia Plautilla, was married to Severus's son, Caracalla. Plautianus's excessive power came to an end in 204, when he was denounced by the Emperor's dying brother. In January 205 Caracalla accused Plautianus of plotting to kill him and Severus. The powerful prefect was executed while he was trying to defend his case in front of the two emperors. One of the two following "praefecti" was the famous jurist Papinian. Executions of senators did not stop: Cassius Dio records that many of them were put to death, some after being formally tried.
Upon his arrival at Rome in 193, Severus discharged the Praetorian Guard, which had murdered Pertinax and had then auctioned the Roman Empire to Didius Julianus. Its members were stripped of their ceremonial armour and forbidden to come within a hundred miles of the city on pain of death. Severus replaced the old guard with 10 new cohorts recruited from veterans of his Danubian legions.
Around 197 he increased the number of legions from 30 to 33 with the introduction of three new legions: I, II, and III "Parthica". He garrisoned Legio II Parthica at Albanum, only a short distance from Rome. He gave his soldiers a donative of a thousand "sesterces" (250 "denarii") each, and raised the annual wage for a soldier in the legions from 300 to 400 "denarii".
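As a quick cross-check of these figures: the stated donative of 1,000 sesterces matches 250 denarii at the standard rate of four sesterces to the denarius (the conversion rate is an assumption, not stated in the text), and the wage rise from 300 to 400 denarii works out to a one-third increase:

```python
# Cross-check of the pay figures quoted above.
# Assumed conversion: 4 sesterces = 1 denarius (not stated in the text).
SESTERCES_PER_DENARIUS = 4

donative_sesterces = 1000
donative_denarii = donative_sesterces // SESTERCES_PER_DENARIUS

old_wage, new_wage = 300, 400  # annual legionary pay in denarii
raise_pct = 100 * (new_wage - old_wage) / old_wage

print(donative_denarii)     # 250, matching the text
print(round(raise_pct, 1))  # 33.3 (percent increase)
```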
Severus was the first Roman emperor to station some of the imperial army in Italy. He realized that Rome needed a military central reserve with the capability to be sent anywhere.
At the beginning of Severus' reign, Trajan's policy toward the Christians was still in force: Christians were only to be punished if they refused to worship the emperor and the gods, but they were not to be sought out. Persecution was therefore inconsistent, local, and sporadic. Faced with internal dissidence and external threats, Severus felt the need to promote religious harmony through syncretism. He may have issued an edict that punished conversion to Judaism and Christianity.
A number of persecutions of Christians occurred in the Roman Empire during his reign and are traditionally attributed to Severus by the early Christian community. This is based on the decree mentioned in the "Historia Augusta", an unreliable mix of fact and fiction. Early church historian Eusebius described Severus as a persecutor. The Christian apologist Tertullian stated that Severus was well disposed towards Christians, employed a Christian as his personal physician and had personally intervened to save several high-born Christians known to him from the mob. Eusebius' description of Severus as a persecutor likely derives merely from the fact that numerous persecutions occurred during his reign, including those known in the "Roman Martyrology" as the martyrs of Madauros, Charalambos and Perpetua and Felicity in Roman-ruled Africa. These were probably the result of local persecutions rather than empire-wide actions or decrees by Severus.
In late 202 Severus launched a campaign in the province of Africa. The legate of Legio III Augusta, Quintus Anicius Faustus, had been fighting against the Garamantes along the "Limes Tripolitanus" for five years. He captured several settlements such as Cydamus, Gholaia, and Garbia, as well as their capital Garama, far to the south of Leptis Magna. The province of Numidia was also enlarged: the empire annexed the settlements of Vescera, Castellum Dimmidi, Gemellae, Thabudeos and Thubunae. By 203 the entire southern frontier of Roman Africa had been dramatically expanded and re-fortified. Desert nomads could no longer safely raid the region's interior and escape back into the Sahara.
In 208 Severus travelled to Britain with the intention of conquering Caledonia. Modern archaeological discoveries illuminate the scope and direction of his northern campaign. Severus probably arrived in Britain with an army of over 40,000 men, given that some of the camps constructed during his campaign could house this number.
He strengthened Hadrian's Wall and reconquered the Southern Uplands up to the Antonine Wall, which was also enhanced. Severus built a camp south of the Antonine Wall at Trimontium, probably assembling his forces there. Severus then thrust north with his army across the wall into Caledonian territory. Retracing the steps of Agricola of over a century before, Severus rebuilt and garrisoned many abandoned Roman forts along the east coast, such as Carpow.
He was supported and supplied by a strong naval force.
Around this time Severus' wife, Julia Domna, reportedly criticised the sexual morals of the Caledonian women. The wife of Caledonian chief Argentocoxos replied: "We fulfill the demands of nature in a much better way than do you Roman women; for we consort openly with the best men, whereas you let yourselves be debauched in secret by the vilest".
Cassius Dio's account of the invasion reads:
By 210 Severus' campaigning had made significant gains, despite Caledonian guerrilla tactics and purportedly heavy Roman casualties. The Caledonians sued for peace, which Severus granted on condition they relinquish control of the Central Lowlands. This is evidenced by extensive Severan-era fortifications in the Central Lowlands. The Caledonians, short on supplies and feeling that their position was desperate, revolted later that year with the Maeatae. Severus prepared for another protracted campaign within Caledonia. He was now intent on exterminating the Caledonians, telling his soldiers: "Let no-one escape sheer destruction, no-one our hands, not even the babe in the womb of the mother, if it be male; let it nevertheless not escape sheer destruction."
Severus' campaign was cut short when he fell ill. He withdrew to Eboracum (York) and died there in 211. Although his son Caracalla continued campaigning the following year, he soon settled for peace. The Romans never campaigned deep into Caledonia again. Shortly after this the frontier was permanently withdrawn south to Hadrian's Wall.
Severus is famously said to have given the advice to his sons: "Be harmonious, enrich the soldiers, scorn all others" before he died on 4 February 211. On his death, Severus was deified by the Senate and succeeded by his sons, Caracalla and Geta, who were advised by his wife Julia Domna. Severus was buried in the Mausoleum of Hadrian in Rome. His remains are now lost.
Though his military expenditure was costly to the empire, Severus was a strong and able ruler. The Roman Empire reached its greatest extent under his reign, spanning over 5 million square kilometres.
According to Gibbon, "his daring ambition was never diverted from its steady course by the allurements of pleasure, the apprehension of danger, or the feelings of humanity." His enlargement of the Limes Tripolitanus secured Africa, the agricultural base of the empire where he was born. His victory over the Parthian Empire was for a time decisive, securing Nisibis and Singara for the empire and establishing a "status quo" of Roman dominance in the region until 251. His policy of an expanded and better-rewarded army was criticised by his contemporaries Cassius Dio and Herodianus: in particular, they pointed out the increasing burden, in the form of taxes and services, the civilian population had to bear to maintain the new and better paid army. The large and ongoing increase in military expenditure caused problems for all of his successors.
To maintain his enlarged military, he debased the Roman currency. Upon his accession he decreased the silver purity of the denarius from 81.5% to 78.5%, although the silver weight actually increased, rising from 2.40 grams to 2.46 grams. Nevertheless, the following year he debased the denarius again because of rising military expenditures. The silver purity decreased from 78.5% to 64.5% – the silver weight dropping from 2.46 grams to 1.98 grams. In 196 he reduced the purity and silver weight of the denarius again, to 54% and 1.82 grams respectively. Severus' currency debasement was the largest since the reign of Nero, compromising the long-term strength of the economy.
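The purity and silver-weight figures above also imply the total mass of the coin at each stage (silver weight divided by purity). A minimal sketch; the stage labels are informal glosses of the timeline, not terms from the text:

```python
# Implied total coin mass at each stage of the debasement,
# derived as silver weight / silver purity from the figures above.
# (stage, silver purity, grams of silver per denarius)
stages = [
    ("before accession", 0.815, 2.40),
    ("on accession",     0.785, 2.46),
    ("following year",   0.645, 1.98),
    ("in 196",           0.540, 1.82),
]

for label, purity, silver_g in stages:
    coin_mass = silver_g / purity
    print(f"{label}: {coin_mass:.2f} g coin, {silver_g:.2f} g silver")
```

The sketch makes the pattern visible: the initial debasement kept the silver content nearly constant by striking a heavier coin, while the later steps cut the silver content itself.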
Severus was also distinguished for his buildings. Apart from the triumphal arch in the Roman Forum carrying his full name, he also built the Septizodium in Rome. He enriched his native city of Leptis Magna, including commissioning a triumphal arch on the occasion of his visit of 203. The greater part of the Flavian Palace overlooking the Circus Maximus was undertaken in his reign.
San Francisco Giants
The San Francisco Giants are an American professional baseball team based in San Francisco, California. The Giants compete in Major League Baseball (MLB) as a member club of the National League (NL) West division. Founded in 1883 as the New York Gothams, and renamed three years later the New York Giants, the team eventually moved to San Francisco in 1958.
As one of the longest-established and most successful professional baseball teams, the franchise has won the most games of any team in the history of American baseball. The team was the first major league team based in New York City, most memorably playing at the legendary Polo Grounds. They have won 23 NL pennants and have played in 20 World Series competitions – both NL records. The Giants' eight World Series championships rank second in the National League and fifth overall (the New York Yankees are first with 27, then the St. Louis Cardinals (the National League record-holders) with 11, and the Oakland Athletics and the Boston Red Sox both with 9). The Giants have played in the World Series 20 times – 14 times in New York, six in San Francisco – but boycotted the event in 1904.
Playing as the New York Giants, they won 14 pennants and five World Series championships behind managers such as John McGraw and Bill Terry and players such as Christy Mathewson, Carl Hubbell, Mel Ott, Bobby Thomson, and Willie Mays. The Giants' franchise has the most Hall of Fame players in all of professional baseball. The Giants' rivalry with the Los Angeles Dodgers is one of the longest-standing and biggest rivalries in American sports. The teams began their rivalry as the New York Giants and Brooklyn Dodgers, respectively, before both franchises moved west for the 1958 season.
The Giants have won six National League pennants and three World Series championships since relocating to San Francisco. Those three championships came in 2010, 2012, and most recently in 2014, when they defeated the Kansas City Royals four games to three.
The Giants were the only major professional sports team based in the City and County of San Francisco, following the San Francisco 49ers' relocation to Santa Clara in 2014, until the Golden State Warriors moved to the Chase Center in 2019.
From 1883 to 2019, the Giants' overall win–loss record was 11,165–9,687 (a winning percentage of 0.535).
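As a sanity check, the stated winning percentage follows directly from the record:

```python
# Winning percentage implied by the quoted all-time record.
wins, losses = 11165, 9687
pct = wins / (wins + losses)
print(f"{pct:.3f}")  # 0.535
```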
The Giants originated in New York City as the New York Gothams in 1883 and were known as the New York Giants from 1885 until the team relocated to San Francisco after the 1957 season. During most of their 75 seasons in New York City, the Giants played home games at various incarnations of the Polo Grounds in Upper Manhattan.
Numerous inductees of the National Baseball Hall of Fame and Museum played for the New York Giants, including John McGraw, Mel Ott, Bill Terry, Willie Mays, Monte Irvin, and Travis Jackson. During the club's tenure in New York, they produced five of the franchise's eight World Series wins (1905, 1921, 1922, 1933, 1954) and 17 of its 23 National League pennants. Famous moments in the Giants' New York history include the 1922 World Series, in which the Giants swept the Yankees in four games, the 1951 home run by New York Giants outfielder and third baseman Bobby Thomson known as the "Shot Heard 'Round the World", and the defensive feat by Mays during Game 1 of the 1954 World Series known as "the Catch".
The Giants had intense rivalries with their fellow New York teams, the New York Yankees and the Brooklyn Dodgers. The Giants faced the Yankees in six World Series and played the league rival Dodgers multiple times per season. Games between any two of these three teams were known collectively as the Subway Series. The Dodgers-Giants rivalry continues, as both teams moved to California after the 1957 season, with the Dodgers relocating to Los Angeles. The New York Giants of the National Football League are named after the team.
The Giants, along with their rival Los Angeles Dodgers, became the first Major League Baseball teams to ever play on the west coast. On April 15, 1958, the Giants played their first game in San Francisco, defeating the former Brooklyn and now Los Angeles Dodgers, 8–0. The Giants played for two seasons at Seals Stadium before moving to Candlestick Park in 1960. The Giants played at Candlestick Park until 1999, before opening Pacific Bell Park (now known as Oracle Park) in 2000, where the Giants currently play.
The Giants were unable to sustain success in their first 50 years in San Francisco. They made nine playoff appearances and won three NL pennants between 1958 and 2009. The Giants lost the 1962 World Series in seven games to the New York Yankees. The Giants were swept in the 1989 World Series by their cross-town rival Oakland Athletics, a series best known for the 1989 Loma Prieta earthquake causing a 10-day delay between Games 2 and 3. The Giants also lost the 2002 World Series to the Anaheim Angels. One of the team's biggest highlights during this time was the 2001 season, in which OF Barry Bonds hit 73 home runs, breaking the record for most home runs in a season. In 2007, Bonds would surpass Hank Aaron's career record of 755 home runs. Bonds finished his career with 762 home runs (586 hit with the Giants), still the MLB record.
The Giants won three World Series championships in 2010, 2012, and 2014, giving the team eight total World Series titles, including the five won as the New York Giants.
Players inducted into the National Baseball Hall of Fame and Museum as San Francisco Giants include 1B Orlando Cepeda, P Juan Marichal, 1B Willie McCovey, and P Gaylord Perry.
The Giants' rivalry with the Los Angeles Dodgers dates back to when the two teams were based in New York, as does their rivalry with the New York Yankees. The Dodgers–Giants rivalry is one of the longest rivalries in sports history. Their rivalry with the Oakland Athletics dates back to when the Giants were in New York and the A's were in Philadelphia and played each other in the 1905, 1911, and 1913 World Series; it was renewed in 1968 when the Athletics moved from Kansas City, and the teams again played each other in the earthquake-interrupted 1989 Bay Bridge World Series. The 2010 NLCS inaugurated a Giants rivalry with the Philadelphia Phillies after confrontations between Jonathan Sánchez and Chase Utley, and between Ramón Ramírez and Shane Victorino. However, with the Phillies dropping off as one of the premier teams of the National League, this rivalry has died down since 2010 and 2011. Another rivalry that has intensified recently is with the St. Louis Cardinals, whom the team has faced four times in the NLCS.
The rivalry between the New York Giants and Chicago Cubs in the early 20th century was once regarded as one of the most heated in baseball, with Merkle's Boner leading to a 1908 season-ending matchup in New York of particular note. That historical rivalry was revisited when the Giants beat the Cubs in the 1989 NL playoffs, in their tiebreaker game in Chicago at the end of the 1998 season, and on June 6, 2012 in a "Turn Back The Century" game in which both teams wore replica 1912 uniforms.
The Giants-Dodgers rivalry is one of the greatest and longest-standing rivalries in team sports, and has been regarded as the most intense in American baseball.
The Giants-Dodgers feud began in the late 19th century when both clubs were based in New York City, with the Dodgers based in Brooklyn and the Giants playing at the Polo Grounds in upper Manhattan. After the 1957 season, Dodgers owner Walter O'Malley decided to move the team to Los Angeles primarily for financial reasons. Along the way, he managed to convince Giants owner Horace Stoneham (who was considering moving his team to Minnesota) to preserve the rivalry by taking his team to San Francisco as well. New York baseball fans were stunned and heartbroken by the move. Given that the cities of Los Angeles and San Francisco have long been competitors in economic, cultural and political arenas, their new California venues became fertile ground for transplantation of the ancient rivalry.
Both teams' having endured for over a century while leaping across an entire continent, as well as the rivalry's growth from cross-city to cross-state, have led to its being considered one of the greatest in sports history.
The Giants-Dodgers rivalry has been marked by the Giants' slightly better success. While the Giants have more total wins, head-to-head wins, and World Series titles in their franchise histories, the Dodgers have won the National League West 10 more times than the Giants since the start of division play in 1969. Both teams have made the postseason as a National League wild card twice. The Giants won their first world championship in California in 2010, while the Dodgers won their last world title in 1988. As of the end of the 2019 baseball season, the Los Angeles Dodgers lead the San Francisco Giants in California World Series triumphs, 5–3, whereas in 20th century New York, the Giants led the Dodgers in World Series championships, 5–1. The combined franchise histories give the Giants an 8–6 edge in MLB championships, overall.
A geographic rivalry with the cross-Bay American League Athletics greatly increased with the 1989 World Series, nicknamed the "Battle of the Bay", which Oakland swept (and which was interrupted by the Loma Prieta earthquake moments before the scheduled start of Game 3 in San Francisco). In addition, the introduction of interleague play in 1997 has pitted the two teams against each other for usually six games every season since 1997, three in each city (but only four in 2013, two in each city). Before 1997, they played each other only in Cactus League spring training. Their interleague play wins and losses (63–57 in favor of the A's) have been fairly evenly divided despite differences in league, style of play, stadium, payroll, fan base stereotypes, media coverage and World Series records, all of which have heightened the rivalry in recent years. The intensity of the rivalry and how it is understood varies among Bay Area fans. A's fans generally view the Giants as a hated rival, while Giants fans generally view the A's as a friendly rival much lower on the scale. This is most likely due to the A's lack of a historical rival, while the Giants have their heated rivalry with the Dodgers. Some Bay Area fans are fans of both teams. The "split hats" that feature the logos of both teams best embodies the shared fan base. Other Bay Area fans view the competition between the two teams as a "friendly rivalry", with little actual hatred compared to similar ones such as the Subway Series (New York Mets vs. New York Yankees), the Red Line Series (Chicago Cubs vs. Chicago White Sox) and the Freeway Series (Los Angeles Dodgers vs. Los Angeles Angels of Anaheim).
The Giants and A's enjoyed a limited rivalry at the start of the 20th century before the Yankees began to dominate after the acquisition of Babe Ruth in 1920, when the Giants were in New York and the A's were in Philadelphia. The teams were managed by legendary leaders John McGraw and Connie Mack, who were considered not only friendly rivals but the premier managers during that era, especially in view of their longevity (Mack for 50 years, McGraw for 30), since both were majority owners. Each team played in five of the first 15 World Series (tying them with the Red Sox and Cubs for most World Series appearances during that time period). As the New York Giants and the Philadelphia A's, they met in three World Series, with the Giants winning in 1905 and the A's in 1911 and 1913. After becoming the San Francisco Giants and Oakland A's, they met in a fourth Series in 1989, resulting in the A's last world championship (as of 2018).
Though in different leagues, the Giants have also been historical rivals of the Yankees, starting in New York before the Giants moved to the West Coast. Before the institution of interleague play in 1997, the two teams had little opportunity to play each other except in seven World Series: 1921, 1922, 1923, 1936, 1937, 1951, and 1962, with the Yankees winning the last five of the seven Series. The teams have met five times in regular season interleague play: in 2002 at the old Yankee Stadium, in 2007 at Oracle Park (then known as AT&T Park), in 2013 and 2016 at the current Yankee Stadium, and in 2019 at Oracle Park. The teams' next regular season meetings will occur in 2022.
In a September 2013 meeting, Yankees 3B Alex Rodriguez hit a grand slam, breaking Lou Gehrig's grand slam record.
In his July 4, 1939 farewell speech ending with the renowned "Today, I consider myself the luckiest man on the face of the earth", Yankee slugger Lou Gehrig, who played in 2,130 consecutive games, declared that the Giants were a team he "would give his right arm to beat, and vice versa."
As of 2012, the Major League Baseball Hall of Fame has inducted 66 representatives of the Giants (55 players and 11 managers) into the Hall of Fame, more than any other team in the history of baseball.
The following inducted members of the Hall of Fame played or managed for the Giants, but either played for the Giants and were inducted as a manager having never managed the Giants, or managed the Giants and were inducted as a player having never played for the Giants:
Broadcasters Russ Hodges, Lon Simmons, and Jon Miller are permanently honored in the Hall's "Scribes & Mikemen" exhibit as a result of winning the Ford C. Frick Award in 1980, 2004, and 2010 respectively. As with all Frick Award winners, none are officially recognized as an inducted member of the Hall of Fame.
The Giants Wall of Fame recognizes retired players whose records stand highest among their teammates on the basis of longevity and achievements.
Those honored have played a minimum of nine seasons for the San Francisco Giants, or five seasons with at least one All-Star selection as a Giant.
The Giants have retired 11 numbers in the history of the franchise, most recently Barry Bonds' number 25 in 2018.
Of the Giants whose numbers have been retired, all but Bonds have been elected to the National Baseball Hall of Fame. In 1944, Carl Hubbell (#11) became the first National Leaguer to have his number retired by his team. Bill Terry (#3), Mel Ott (#4), and Hubbell played and/or managed their entire careers for the New York Giants. Willie Mays (#24) began his career in New York, moving with the Giants to San Francisco in 1958; he did not play in most of 1952 and all of 1953 due to his service in the Korean War. Mathewson and McGraw are honored by the Giants, but played in an era before uniform numbers became standard in baseball.
The Giants announced that they will retire Will Clark's #22 on July 11, 2020.
John McGraw (3B, 1902–06; manager, 1902–32) and Christy Mathewson (P, 1900–16), who were members of the New York Giants before the introduction of uniform numbers, have the letters "NY" displayed in place of a number.
Broadcasters Lon Simmons (1958–73, 1976–78, 1996–2002 & 2006), Russ Hodges (1949–70), and Jon Miller (1997–current) are each represented by an old-style radio microphone displayed in place of a number.
The Giants present the Willie Mac Award annually to the player that best exemplifies the spirit and leadership shown by Willie McCovey throughout his career.
The Giants have had a number of captains over the years:
All-time regular season record: 11,165–9,687 (.535) (through 2019 season)
The San Francisco Giants farm system consists of eight minor league affiliates.
Giants' television telecasts are split between NBC-owned KNTV (broadcast) and NBC Sports Bay Area (cable). KNTV's broadcast contract with the Giants began in 2008, one year after the team and KTVU mutually ended a relationship that dated to 1961. Jon Miller regularly calls the action on KNTV, while the announcing team for NBCSBA telecasts is Mike Krukow and Duane Kuiper, affectionately known as "Kruk and Kuip" (pronounced "Kruke" and "Kype"). During the 2016 season, the Giants had an average 4.71 rating and 117,000 viewers on primetime TV broadcasts.
The Giants' flagship radio station is KNBR (680 AM). KNBR's owner, Cumulus Media, is a limited partner in San Francisco Baseball Associates LP, the owner of the team. Jon Miller and Dave Flemming are the regular play-by-play announcers. In addition to KNBR, the Giants can be heard throughout Northern California and parts of Nevada, Oregon, and Hawaii on the Giants Radio Network. When games are televised on KNTV, Kuiper replaces Miller on the radio, and Miller goes to television. Erwin Higueros and Tito Fuentes handle Spanish-language radio broadcasts on KXZM (93.7 FM).
On May 28, 2006, Flemming called the 715th career home run of Barry Bonds, which moved Bonds into second on the all-time home run list. Unfortunately, the power from Flemming's microphone to the transmitter cut off while the ball was in flight, so the radio audience heard only crowd noise. Greg Papa took over the broadcast and apologized to listeners. Kuiper's TV call was submitted to the Baseball Hall of Fame as an artifact, instead of the usual radio call.
First used for Giants radio broadcasts on KSFO, the team's fight song "Bye, Bye Baby!" is currently played following any Giants home run. The song is played in the stadium, and an instrumental version is played on telecasts when the inning in which the home run was hit concludes. The title and chorus "Bye bye baby!" come from famed former Giants broadcaster Russ Hodges, whose home run call it was.
Following a Giants home win, Tony Bennett's "I Left My Heart in San Francisco" is played in Oracle Park in celebration.
If the Giants are leading after the 8th inning, they play Journey's "When the Lights Go Down in the City". If they are trailing, they play Journey's "Don't Stop Believin'". | https://en.wikipedia.org/wiki?curid=28416 |
San Diego Padres
The San Diego Padres are an American professional baseball team based in San Diego, California. They compete in Major League Baseball (MLB) as a member club of the National League (NL) West division. Founded in 1969, the Padres have won two NL pennants (1984 and 1998), losing the World Series both years. As of 2018, they have had 14 winning seasons in franchise history. The Padres are one of two MLB teams (the other being the Los Angeles Angels) in California to originate from that state: the Athletics were originally from Philadelphia (and moved to the state from Kansas City), and the Dodgers and Giants originated in the New York City boroughs of Brooklyn and Manhattan, respectively. The Padres are the only MLB team that does not share its city with another franchise in the four major American professional sports leagues, and, following the relocation of the Chargers to Los Angeles in 2017, the only major professional sports franchise located in San Diego. They are also the only MLB franchise never to have thrown a no-hitter, having gone 8,020 games without one, a major league record to begin a franchise.
The Padres adopted their name from the Pacific Coast League team that arrived in San Diego in 1936. That minor league franchise won the PCL title in 1937, led by 18-year-old Ted Williams, the future Hall-of-Famer who was a native of San Diego. The team's name, Spanish for "fathers", refers to the Spanish Franciscan friars who founded San Diego in 1769.
In 1969, the Padres joined the ranks of Major League Baseball as one of four new expansion teams, along with the Montreal Expos (now the Washington Nationals), the Kansas City Royals, and the Seattle Pilots (now the Milwaukee Brewers). Their original owner was C. Arnholt Smith, a prominent San Diego businessman and former owner of the PCL Padres whose interests included banking, tuna fishing, hotels, real estate and an airline. Despite initial excitement, the guidance of longtime baseball executives Eddie Leishman and Buzzie Bavasi, and a new playing field, the team struggled; the Padres finished in last place in each of their first six seasons in the NL West, losing 100 games or more four times. One of the few bright spots during the early years was first baseman and slugger Nate Colbert, an expansion draftee from the Houston Astros and still the Padres' career leader in home runs.
The team's fortunes gradually improved as they won five National League West titles and reached the World Series twice, in 1984 and in 1998, but lost both times. The Padres' main draw during the 1980s and 1990s was Tony Gwynn, who won eight league batting titles. They moved into their current stadium, Petco Park, in 2004.
As of 2019, the Padres are the only team in MLB yet to throw a no-hitter. On September 5, 1997, Andy Ashby took a no-hitter into the 9th inning, which is as close as the team has come to achieving this feat.
The team has played its spring training games at the Peoria Sports Complex in Peoria, Arizona since 1994. They share the stadium with the Seattle Mariners.
From 1969 to 1993, the Padres held spring training in Yuma, Arizona at Desert Sun Stadium. Due to the short driving distance and direct highway route (170 miles, all on Interstate 8), Yuma was very popular with Padres fans, and many fans would travel by car from San Diego for spring training games. The move from Yuma to Peoria was very controversial, but was defended by the team as a reflection on the low quality of facilities in Yuma and the long travel necessary to play against other Arizona-based spring training teams (whose sites were all in the Phoenix and Tucson areas, both rather far from Yuma).
Throughout the team's history, the San Diego Padres have used multiple logo, uniform, and color combinations. Their first logo depicted a friar swinging a bat, with "Padres" written at the top, standing in a sun-like figure ringed by the words "San Diego Padres". The "Swinging Friar" has appeared on the uniform on and off ever since. Although it is no longer the primary logo, it remains the mascot of the team and is now utilized as an alternate logo and on the uniform sleeve.
In 1985, the Padres switched to a script-like logo in which "Padres" was written sloping upward; this script would remain associated with the team for years afterward. The team's colors were changed to brown and orange and remained so through the 1990 season.
In 1989, the Padres took the scripted Padres logo that was used from 1985 to 1988 and put it in a gray ring that read "San Diego Baseball Club" with a striped center. In 1991, the color of the ring was changed to silver, and the Padres script was changed from brown to blue. The logo only lasted one year, as the Padres changed their logo for the third time in three years, again by switching colors of the ring. The logo became a white ring with fewer stripes in the center and a darker blue Padres script with orange shadows. In 1991, the team's colors were also changed, to a combination of orange and navy blue.
For the 2002 season, the Padres removed the stripes off the home jersey and went with a white home jersey with a cleaner look. The pinstripe jerseys were worn as alternate jerseys on certain occasions throughout the 2002 season. The Padres kept this design for two seasons until their 2004 season, in which they moved into their new ballpark.
The logo was completely changed when the team changed stadiums between the 2003 and 2004 seasons, with the new logo looking similar to home plate with "San Diego" written in sand font at the top right corner and the Padres new script written completely across the center. Waves finished the bottom of the plate. Navy remained but a sandy beige replaced orange as a secondary color. The team's colors were also changed, to navy blue and sand brown. For the next seven seasons the Padres were the only team in Major League Baseball that did not have a gray jersey, with the team typically playing in either blue or sand jerseys on the road and white or blue jerseys at home. In 2009, the "San Diego" was removed from the top right corner of the logo and two years later, the away uniform changed from sand to gray.
For the 2012 season, the Padres unveiled a new primary logo, featuring the cap logo inside a navy blue circle with the words "San Diego Padres Baseball Club" adorning the outer circle. The "swinging friar" logo was recolored to the current colors of navy blue and white. Another secondary logo features the Padres script carried over from the previous year's primary logo below the depiction of Petco Park in sand and above the year of the team's first season (EST. 1969). Until 2015, the blue and sand version was used on the home uniform, while the blue and white version was used on the away and alternate uniforms.
In the 2016 season, the Padres wore a navy blue and yellow color scheme, similar to the one used on the 2016 Major League Baseball All-Star Game logo. To coincide with the change, the Padres added a new brown and yellow alternate uniform to be worn mostly during Friday home games.
For the 2017 season, the Padres revealed a new color scheme and new jerseys for the second straight year. The yellow was scrapped from the home uniform and the team reverted to a navy blue-and-white combo. The word "Padres" returned to the front of the home uniform, but with a new script, while the script on the road uniform reverted to the "San Diego" wordmark style it used from 2004–11. Despite this major change, the brown and yellow alternate uniform from the previous set was retained.
The club announced on January 25, 2019 that the original brown and gold colors would return for the 2020 Major League Baseball season. The new uniform designs featuring the brown and gold colors were officially unveiled on November 9. The team featured brown and gold on each of the three unveiled jerseys, including the return of pinstripes to the Padre home jersey for the first time since 2001 and a non-gray road jersey for the first time since 2010.
Starting in 1996, the Padres became the first national sports team to hold an annual military appreciation event. In 2000, the Padres began wearing camouflage jerseys to honor the military; the jersey has gone through seven different versions since. Starting in 2008, the Padres began wearing camouflage jerseys for every Sunday home game. They also wear these uniforms on Memorial Day, Independence Day, and Labor Day. For 2011, the Padres changed the camouflage to a more modern "digital" design, using the MARPAT pattern after receiving permission from then-Commandant Conway, and dropped the green from the lettering and logo of the jersey. Green was replaced by a sand-olive color (also in the cap worn with the jersey). For 2016, to coincide with hosting the 2016 Major League Baseball All-Star Game, the Padres changed the camouflage jersey once again, this time to navy blue; this design was only worn for one season, as for 2017 the Padres switched the camouflage jersey to a Marine pattern, which was used through 2019. For 2020, the Padres will begin using two different camouflage jersey colors: green and sand-olive, both with the current "Padres" wordmark. Since 1995, Marine recruits from the nearby Marine Corps Recruit Depot have often visited the games en masse during Military Appreciation Day, in uniform, often filling entire sections in the upper deck of Petco Park. When they are present, the team commemorates this with a special Fourth Inning Stretch featuring the Marine Hymn. Through April 2005, over 60,000 Marine recruits had been hosted by the Padres. This is part of an extensive military outreach program, which also includes a series of Military Appreciation Night games, and game tapes mailed to deployed United States Navy ships of the Pacific Fleet for onboard viewing (a large portion of the Pacific Fleet is homeported in San Diego).
The San Diego area is home to a number of military installations, including several Navy and Coast Guard bases centered on San Diego Bay, Marine Corps Air Station Miramar (former home of the "Top Gun" training program), and the Marine Corps training ground at Camp Pendleton. Civilians employed at those bases account for around 5% of the county's working population.
The "Swinging Friar" is currently the mascot of the team. Some in the past have confused The Famous Chicken as the mascot of the Padres. Although he does make appearances occasionally at San Diego sporting events, he has never been the official mascot of any San Diego sports team.
The following elected members of the Baseball Hall of Fame played and/or managed for the Padres.
The Padres have retired six numbers. Five were in honor of Padre players and one was Jackie Robinson's number 42, which was retired by all of Major League Baseball. The retired numbers are displayed on the upper deck facade behind home plate.
The Padres also have a "star on the wall" in honor of broadcaster Jerry Coleman, in reference to his trademark phrase "Oh Doctor! You can hang a star on that baby!" Nearby the initials of the late owner Ray Kroc are also displayed. Both the star and the initials are painted in gold on the front of the pressbox down the right field line accompanied by the name of the person in white. Kroc was honored in 1984, Coleman in 2001.
The following 14 people have been inducted into the San Diego Padres Hall of Fame since it was founded in 1999.
Gwynn, Winfield, Fingers, Gossage, Randy Jones, and Graig Nettles (3B, 1984–1987) are members of the San Diego Hall of Champions, which is open to athletes native to the San Diego area (such as Nettles) as well as to those who played for San Diego teams (such as Gwynn).
The San Diego Padres farm system consists of eight minor league affiliates.
Padres' games are currently televised by Fox Sports San Diego. Don Orsillo is the play-by-play announcer, with Mark Grant as color analyst and either Julie Alexandria, Ron Zinter, or Bob Scanlan as field reporter. Mike Pomeranz hosts the "Padres Live" pre- and post-game show along with Mark Sweeney.
As of the 2018 season, Padres radio broadcasts in English are carried by KWFN "97.3 The Fan", after having previously been carried by sister station 94.9 KBZT upon the acquisition of the radio rights by Entercom in 2017. Ted Leitner is the primary play-by-play announcer, with Jesse Agler working the middle innings of each game and Bob Scanlan serving as color analyst. The games are also broadcast in Spanish on XEMO-AM, "La Poderosa 860 AM", with Eduardo Ortega, Carlos Hernández and Pedro Gutiérrez announcing. Padres games were also aired from 2006 to 2010 on XHPRS-FM 105.7.
Spanish-language telecasts of Sunday games are seen on XHAS-TDT channel 33. Until September 2007, Friday and Saturday games were seen in Spanish on KBOP-CA channel 43, until that station changed to an all-infomercial format. This makes XHAS-TDT the only over-the-air television station carrying Padres baseball. English-language Padres over-the-air broadcasts aired through the years on XETV-TV 6, KCST-TV 39, KUSI-TV 51, KFMB-TV 8 and KSWB-TV 69.
John DeMott was the Padres' first public address announcer when the team began in 1969. By the late 1970s Bruce Binkowski had taken over as PA announcer, and became the longest-serving public address announcer in the team's history, remaining until the end of the 1999 season. First DeMott and then Binkowski were also responsible for PA announcing duties for the San Diego Chargers and the San Diego State University Aztecs, both of which were joint tenants at Qualcomm Stadium with the Padres until the Padres moved into Petco Park. From Petco Park's opening in 2004 until 2013, the PA announcer was Frank Anthony, a radio host with 105.7 XHPRS-FM. On April 19, 2014, Alex Miniak was announced as the new public address announcer for the San Diego Padres. Miniak was formerly the PA announcer for the New Hampshire Fisher Cats, the Double-A affiliate of the Toronto Blue Jays.
The San Diego Padres were first portrayed in the 1979 NBC made-for-TV film "The Kid from Left Field," starring Gary Coleman as Jackie Robinson "J.R." Cooper, a youngster who is passionate about baseball and puts his knowledge to good use when he becomes the manager of the Padres and helps lead them to the World Series.
In 2016, the San Diego Padres were portrayed once again in the one-season Fox television series "Pitch", starring Kylie Bunbury as Ginny Baker, the first female to play in Major League Baseball.
The San Diego Padres established The Padres Scholars program, the first of its kind among professional sports teams. Originally each Padres scholar was selected as a seventh grader and received a $5,000 scholarship after graduation from high school to go towards higher education. The program has reached 389 students since its establishment in 1995. Over the past few years the program has undergone a few changes to make it more effective from an educational standpoint, focusing on creating a close relationship between the chosen scholars and the team. As of 2011, three high school seniors are chosen each year to receive a $30,000 scholarship awarded over the course of their higher education. Maintaining this prestigious award is conditional on keeping in contact with the Padres and providing proof of good academic standing.
The San Diego Padres are the sponsors of and heavily involved in most aspects of the Sports Business Management MBA degree program offered in conjunction with San Diego State University's College of Business Administration. SDSU's Sports MBA is the only program of its kind created in partnership with a professional sports franchise. The curriculum focuses on the entire sports business industry, not just baseball. The program includes an internship. Members of Padres senior management regularly participate, including work with the development and continued coordination of SDSU's International Case Competition, which annually attracts participation from top business schools. | https://en.wikipedia.org/wiki?curid=28417 |
Sinclair QL
The Sinclair QL (for "Quantum leap") microcomputer is a personal computer launched by Sinclair Research in 1984, as an upper-end counterpart to the Sinclair ZX Spectrum. The QL was aimed at serious home users and the professional and executive markets, from small and medium-sized businesses to higher educational establishments, but failed to achieve commercial success.
Based on a Motorola 68008 processor clocked at 7.5 MHz, the QL included 128 KB of RAM, which was officially expandable to 640 KB and in practice, 896 KB. It could be connected to a monitor or TV for display. Two built-in Microdrive tape-loop cartridge drives provided mass storage, in place of the more expensive floppy disk drives found on similar systems of the era. Microdrives had been introduced for the Sinclair ZX Spectrum in July 1983, although the QL used a different logical tape format. Interfaces included an expansion slot, ROM cartridge socket, dual RS-232 ports, proprietary QLAN local area network ports, dual joystick ports and an external Microdrive bus. Two video modes were available, 256×256 pixels with 8 RGB colours and per-pixel flashing, or 512×256 pixels with four colours: black, red, green and white. The supported colours could be stippled in 2×2 blocks to simulate up to 256 colours, an effect which did not copy reliably on a TV, especially over an RF connection. Both screen modes used a 32 KB framebuffer in main memory. The hardware was capable of switching between two different areas of memory for the frame buffer, thus allowing double buffering. However, this would have used 64 KB of the standard machine's 128 KB of RAM and there was no support for this feature in the QL's original firmware. The alternative and much improved operating system Minerva does provide full support for the second frame buffer. When connected to a normally-adjusted TV or monitor, the QL's video output would overscan horizontally. This was reputed to have been due to the timing constants in the ZX8301 chip being optimised for the flat-screen CRT display originally intended for the QL.
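As a quick sanity check (a sketch, not part of the original text), both screen modes do fit the 32 KB framebuffer described above: mode 8 packs 4 bits per pixel (3 RGB bits plus a per-pixel flash bit) at 256×256, and mode 4 packs 2 bits per pixel at 512×256.

```python
# Illustrative sketch: both QL screen modes require the same 32 KB
# packed-pixel framebuffer.
def framebuffer_bytes(width, height, bits_per_pixel):
    """Size in bytes of a packed-pixel framebuffer."""
    return width * height * bits_per_pixel // 8

mode8 = framebuffer_bytes(256, 256, 4)  # 8 colours + flash bit = 4 bpp
mode4 = framebuffer_bytes(512, 256, 2)  # 4 colours = 2 bpp
assert mode8 == mode4 == 32 * 1024      # both modes: 32 KB
```

This also makes clear why double buffering was so costly: two such buffers would consume 64 KB, half of the standard machine's 128 KB of RAM.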
Internally, the QL comprised the CPU, two ULAs (ZX8301 and ZX8302) and an Intel 8049 microcontroller known as the IPC, or "Intelligent Peripheral Controller". The ZX8301, or "Master Chip", implemented the video display generator and also provided DRAM refresh. The ZX8302, or "Peripheral Chip", interfaced to the RS-232 ports (transmit only), Microdrives, QLAN ports, real-time clock and the 8049 via a synchronous serial link. The 8049, included at a late stage in the QL's design (the ZX8302 was originally intended to perform its functions), ran at 11 MHz and acted as a keyboard/joystick interface, RS-232 receive buffer and audio generator.
QDOS, a pre-emptive multitasking operating system primarily designed by Tony Tebby, was included on ROM, as was an advanced structured BASIC interpreter, named SuperBASIC designed by Jan Jones, which was also used as the command-line interpreter. The QL was bundled with an office suite, consisting of a word processor, spreadsheet, database, and business graphics written by Psion.
Physically, the QL was the same black colour as the preceding ZX81 and Sinclair ZX Spectrum models, but introduced a new angular styling theme and keyboard design which would later be seen in the ZX Spectrum+.
The QL used British Telecom type 631W plugs of similar design to British telephone sockets for serial cables except for QLs built by Samsung for export markets, which had DE-9 sockets. Joysticks connected to the QL with similar type 630W plugs.
The QL was originally conceived in 1981 under the code-name "ZX83", as a portable computer for business users, with a built-in ultra-thin flat-screen CRT display similar to the later TV80 pocket TV, printer and modem. As development progressed it eventually became clear that the portability features were over-ambitious and the specification was reduced to a conventional desktop configuration.
The electronics were primarily designed by David Karlin, who joined Sinclair Research in summer 1982. The industrial design was done by Rick Dickinson, who had already designed the ZX81 and ZX Spectrum range of products. Sinclair had commissioned GST Computer Systems to produce the operating system for the machine, but before launch switched to Domesdos, developed by Tony Tebby as an in-house alternative. GST's OS, designed by Tim Ward, was later made available as 68K/OS, in the form of an add-on ROM card.
The tools developed by GST for the QL would later be used on the Atari ST, where GST object format became standard.
The QL was designed to be more powerful than the IBM Personal Computer, and comparable to Apple's Macintosh. While the CPU clock speed was comparable to that of the Macintosh, and the later Atari ST and Amiga, the 8-bit data bus and cycle stealing of the ZX8301 gate array limited the QL's performance. At the time of the rushed launch, on 12 January 1984, the QL was far from ready for production; no complete working prototype existed. Although Sinclair started taking orders immediately, promising delivery within 28 days, first customer deliveries only started, slowly, in April. This provoked much criticism of the company and attracted the attention of the Advertising Standards Authority.
Due to its premature launch, the QL was plagued by a number of problems from the start. Early production QLs were shipped with preliminary versions of firmware containing numerous bugs, mainly in SuperBASIC. Part of the firmware was held on an external 16 KB ROM cartridge also known as the "kludge" or "dongle", until the QL was redesigned to accommodate the necessary 48 KB of ROM internally, instead of the 32 KB initially specified. The QL also suffered from reliability problems with its Microdrives. These problems were later rectified by Sinclair engineers (especially on Samsung-produced models), as well as by aftermarket firms such as Adman Services and TF Services, to the point where several QL users report the Samsung Microdrives in particular working perfectly even after almost 17 years of service; but the fixes came much too late to redeem the negative image the problems had already created.
Although the computer was hyped as being advanced for its time, and relatively cheap, it failed to sell well, and UK production was suspended in 1985, due to lack of demand. After Amstrad acquired Sinclair's computer products lines in April 1986, the QL was officially discontinued. Apart from its reliability issues, the target business market was becoming wedded to the IBM PC platform, whilst the majority of ZX Spectrum owners were uninterested in upgrading to a machine which had a minimal library of games. Sinclair's persistence with the non-standard Microdrive and uncomfortable keyboard did not endear it to the business market; coupled with the machine's resemblance to a ZX Spectrum+, they led many to perceive the QL as something akin to a toy. Software publishers were also reluctant to support the QL due to the necessity of using Microdrive cartridges as a distribution medium.
The QL's CPU, ZX8301 and ZX8302 ASICs and Microdrives formed the basis of International Computers Limited's (ICL's) One Per Desk (OPD) - also marketed by British Telecom as the Merlin Tonto and by Telecom Australia as the Computerphone. The result of a three-year collaboration between Sinclair Research, ICL and British Telecom, the OPD had the addition of a telephone handset at one end of the keyboard, and rudimentary Computer-Telephony Integration (CTI) software.
This machine interested a number of high-profile business customers, including certain divisions of the former UK Customs and Excise Department, but its success was generally limited.
In the late 1980s, OPDs were used in bingo halls to run a country-wide networked bingo game.
Linus Torvalds has attributed his eventually developing the Linux kernel, likewise having pre-emptive multitasking, in part to having owned a Sinclair QL in the 1980s. Because of the lack of support, particularly in Finland, Torvalds became used to writing his own software rather than relying on programs written by others. His frustration with the Sinclair led, years later, to his purchasing a more standard IBM PC compatible on which he would develop Linux.
After Amstrad abandoned the QL in 1986, several companies previously involved in the QL peripherals market stepped in to fill the void. These included CST and DanSoft, creators of the Thor line of compatible systems; Miracle Systems, creator of the Gold Card and Super Gold Card processor/memory upgrade cards and the QXL PC-based hardware emulator; and Qubbesoft, with the Aurora, the first replacement QL mainboard, with enhanced graphics modes.
In the late 1990s, two partly QL-compatible motherboards named Q40 and Q60 (collectively referred to as Qx0) were designed by Peter Graf and marketed by D&D Systems. The Q40 and Q60, based on the Motorola 68040 and 68060 CPUs respectively, were much more powerful than the original QL and have the ability among other things (such as multimedia, high resolution graphics, Ethernet networking etc.) to run the Linux operating system.
In 2013, Peter Graf announced that he was working on the Q68, an FPGA-based QL-compatible single-board computer. The Q68 was first presented to the public in April 2014 and became available in autumn 2017. It is produced and marketed by Derek Stewart (formerly of D&D Systems).
Hardware add-ons are still being produced for the original QL, including new developments like the QL-SD (designed by Peter Graf) and reengineered or even expanded 1990s designs such as QubIDE interfaces (by José Leandro Novellón) and the Trump, Gold & Super Gold Cards (by Tetroid).
RWAP Software supplies various hardware and software upgrades and spare parts.
Patched or reengineered versions of QDOS were produced, most notably Minerva which gradually evolved into a completely rewritten operating system, offering improved speed, with multitasking SuperBASIC interpreters. Tony Tebby went on to produce another updated operating system, SMSQ/E, which has continued to be developed for the Sinclair QL and emulators, offering many more features.
Quite a few emulators and virtual QLs have become available over time, of which QPC2 (Windows), SMSQmulator (Java) and ZEsarUX (Windows/Mac/Linux) are actively maintained. Several distributions of emulators, applications and information have been produced, of which Black Phoenix and QL/E are the most actively maintained. | https://en.wikipedia.org/wiki?curid=28418 |
Specific heat capacity
The specific heat capacity of a substance is the heat capacity of a sample of the substance divided by the mass of the sample. Informally, it is the amount of energy that must be added, in the form of heat, to one unit of mass of the substance in order to cause an increase of one unit in its temperature. The SI unit of specific heat is joule per kelvin and kilogram, J/(K kg).
For example, at a temperature of 25 °C (the specific heat capacity can vary with the temperature), the heat required to raise the temperature of 1 kg of water by 1 K (equivalent to 1 °C) is 4184 joules, meaning that the specific heat of water is 4184 J/(K kg).
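The relation behind this example is Q = m·c·ΔT. A minimal sketch (the function name is illustrative, not from the text):

```python
# Hypothetical helper illustrating Q = m * c * dT.
def heat_required(mass_kg, specific_heat, delta_t):
    """Heat in joules needed to raise `mass_kg` of a substance by
    `delta_t` kelvin, given its specific heat in J/(K kg)."""
    return mass_kg * specific_heat * delta_t

# 1 kg of water (c ~ 4184 J/(K kg)) raised by 1 K requires 4184 J:
assert heat_required(1.0, 4184.0, 1.0) == 4184.0
```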
The specific heat often varies with temperature, and is different for each state of matter. Liquid water has one of the highest specific heats among common substances, about 4182 J/(K kg) at 20 °C; but that of ice just below 0 °C is only 2093 J/(K kg). The specific heats of iron, granite, and hydrogen gas are about 449, 790, and 14300 J/(K kg), respectively. While the substance is undergoing a phase transition, such as melting or boiling, its specific heat is technically infinite, because the heat goes into changing its state rather than raising its temperature.
The specific heat of a substance, especially a gas, may be significantly higher when it is allowed to expand as it is heated (specific heat "at constant pressure") than when it is heated in a closed vessel that prevents expansion (specific heat "at constant volume"). These two values are usually denoted by c_p and c_v, respectively; their quotient γ = c_p/c_v is the heat capacity ratio.
In some contexts, however, the term "specific heat capacity" (or "specific heat") may refer to the ratio between the specific heats of a substance at a given temperature and of a reference substance at a reference temperature, such as water at 15 °C; much in the fashion of specific gravity.
Specific heat relates to other intensive measures of heat capacity with other denominators. If the amount of substance is measured as a number of moles, one gets the molar heat capacity instead (whose SI unit is joule per kelvin per mole, J/(K mol)). If the amount is taken to be the volume of the sample (as is sometimes done in engineering), one gets the volumetric heat capacity (whose SI unit is joule per kelvin per cubic meter, J/K/m3).
One of the first scientists to use the concept was Joseph Black, 18th-century medical doctor and professor of Medicine at Glasgow University. He measured the specific heat of many substances, using the term "capacity for heat".
The specific heat capacity of a substance, usually denoted by c, is the heat capacity C of a sample of the substance, divided by the mass m of the sample:

    c = C/m = (1/m) · (dQ/dT)

where dQ represents the amount of heat needed to uniformly raise the temperature of the sample by a small increment dT.
Like the heat capacity of an object, the specific heat of a substance may vary, sometimes substantially, depending on the starting temperature T of the sample and the pressure p applied to it. Therefore, it should be considered a function c(p, T) of those two variables.
These parameters are usually specified when giving the specific heat of a substance. For example, "Water (liquid): c_p = 4185.5 J/K/kg (15 °C, 101.325 kPa)". When not specified, published values of the specific heat c generally are valid for some standard conditions for temperature and pressure.
However, the dependency of c on starting temperature and pressure can often be ignored in practical contexts, e.g. when working in narrow ranges of those variables. In those contexts one usually omits the qualifier (p, T), and approximates the specific heat by a constant c suitable for those ranges.
Specific heat is an intensive property of a substance, an intrinsic characteristic that does not depend on the size or shape of the amount in consideration. (The qualifier "specific" in front of an extensive property often indicates an intensive property derived from it.)
The injection of heat energy into a substance, besides raising its temperature, usually causes an increase in its volume and/or its pressure, depending on how the sample is confined. The choice made about the latter affects the measured specific heat, even for the same starting pressure p and starting temperature T. Two particular choices are widely used:
The value of c_v is usually less than the value of c_p. This difference is particularly notable in gases, where values at constant pressure are typically 30% to 66.7% greater than those at constant volume. Hence the heat capacity ratio of gases is typically between 1.3 and 1.67.
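For an ideal gas, that range follows from the equipartition theorem: a molecule with f degrees of freedom has c_v = (f/2)·R/M and c_p = c_v + R/M, so the ratio is γ = 1 + 2/f. A small sketch under that ideal-gas assumption (not a claim from the text):

```python
# Illustrative ideal-gas sketch: the heat capacity ratio gamma = c_p/c_v
# depends only on the molecular degrees of freedom f, because
# c_p - c_v = R/M for an ideal gas.
def heat_capacity_ratio(dof):
    """gamma = 1 + 2/f for an ideal gas with f degrees of freedom."""
    return 1.0 + 2.0 / dof

assert abs(heat_capacity_ratio(3) - 5.0 / 3.0) < 1e-12  # monatomic, ~1.67
assert abs(heat_capacity_ratio(5) - 7.0 / 5.0) < 1e-12  # diatomic, 1.4
```

The two assertions reproduce the endpoints of the 1.3 to 1.67 range quoted above (polyatomic gases with more active degrees of freedom fall toward the lower end).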
The specific heat can be defined and measured for gases, liquids, and solids of fairly general composition and molecular structure. These include gas mixtures, solutions and alloys, or heterogenous materials such as milk, sand, granite, and concrete, if considered at a sufficiently large scale.
The specific heat can be defined also for materials that change state or composition as the temperature and pressure change, as long as the changes are reversible and gradual. Thus, for example, the concepts are definable for a gas or liquid that dissociates as the temperature increases, as long as the products of the dissociation promptly and completely recombine when it drops.
The specific heat is not meaningful if the substance undergoes irreversible chemical changes, or if there is a phase change, such as melting or boiling, at a sharp temperature within the range of temperatures spanned by the measurement.
The specific heat of a substance is typically determined according to the definition; namely, by measuring the heat capacity of a sample of the substance, usually with a calorimeter, and dividing by the sample's mass. Several techniques can be applied for estimating the heat capacity of a substance, such as fast differential scanning calorimetry.
The specific heat of gases can be measured at constant volume, by enclosing the sample in a rigid container. On the other hand, measuring the specific heat at constant volume can be prohibitively difficult for liquids and solids, since one often would need impractical pressures in order to prevent the expansion that would be caused by even small increases in temperature. Instead, the common practice is to measure the specific heat at constant pressure (allowing the material to expand or contract as it wishes), determine separately the coefficient of thermal expansion and the compressibility of the material, and compute the specific heat at constant volume from these data according to the laws of thermodynamics.
The SI unit for specific heat is joule per kelvin per kilogram (J/K/kg, J/(kg K), J K−1 kg−1, etc.). Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same as joule per degree Celsius per kilogram (J/°C/kg). Sometimes the gram is used instead of kilogram for the unit of mass: 1 J/K/kg = 0.001 J/K/g.
The specific heat of a substance (per unit of mass) has dimension L2·Θ−1·T−2, or (L/T)2/Θ. Therefore, the SI unit J/K/kg is equivalent to metre squared per second squared per kelvin (m2 K−1 s−2).
Professionals in construction, civil engineering, chemical engineering, and other technical disciplines, especially in the United States, may use the so-called English Engineering units, that include the Imperial pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or Rankine (°F = 5/9 K, about 0.555556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.06 J), as the unit of heat.
In those contexts, the unit of specific heat is BTU/°F/lb ≈ 4186.8 J/K/kg. The BTU was originally defined so that the average specific heat of water would be 1 BTU/°F/lb.
In chemistry, heat amounts were often measured in calories. Confusingly, two units with that name, denoted "cal" or "Cal", have been commonly used to measure amounts of heat:
While these units are still used in some contexts (such as kilogram calorie in nutrition), their use is now deprecated in technical and scientific fields. When heat is measured in these units, the unit of specific heat is usually
In either unit, the specific heat of water is approximately 1. The combinations cal/°C/kg = 4.184 J/K/kg and kcal/°C/g = 4,184,000 J/K/kg do not seem to be widely used.
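The unit relations quoted in this section can be checked numerically; the following sketch uses only the conversion constants given in the text:

```python
# Numeric check of the heat-capacity unit relations quoted in the text.
J_PER_CAL = 4.184        # small (gram) calorie, in joules
J_PER_BTU = 1055.06      # British thermal unit, in joules (approximate)
KG_PER_LB = 0.45359237   # avoirdupois pound, in kilograms
K_PER_DEG_F = 5.0 / 9.0  # one Fahrenheit (or Rankine) degree, in kelvins

# 1 cal/°C/g expressed in J/K/kg (water's specific heat is about 1 in these units):
cal_per_C_per_g = J_PER_CAL / 1e-3
# 1 BTU/°F/lb expressed in J/K/kg:
btu_per_F_per_lb = J_PER_BTU / (KG_PER_LB * K_PER_DEG_F)

print(cal_per_C_per_g)   # 4184.0
print(btu_per_F_per_lb)  # ≈ 4186.8
```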
The temperature of a sample of a substance reflects the average kinetic energy of its constituent particles (atoms or molecules) relative to its center of mass. However, not all energy provided to a sample of a substance goes into raising its temperature, as described by the equipartition theorem.
Quantum mechanics predicts that, at room temperature and ordinary pressures, an isolated atom in a gas cannot store any significant amount of energy except in the form of kinetic energy. Thus, heat capacity per mole is the same for all monoatomic gases (such as the noble gases). More precisely, formula_27 ≈ 12.5 J/K/mol and formula_28 ≈ 21 J/K/mol, where formula_29 ≈ 8.31446 J/K/mol is the ideal gas constant (the product of the Boltzmann constant, which converts from the kelvin microscopic energy unit to the macroscopic energy unit "joule", and the Avogadro number).
Therefore, the specific heat (per unit of mass, not per mole) of a monoatomic gas will be inversely proportional to its (adimensional) atomic weight formula_30. That is, approximately,
For the noble gases, from helium to xenon, these computed values are
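These values follow from cv = (3/2)R divided by the molar mass; a quick computation (the standard atomic weights below are assumed values, for illustration):

```python
# Specific heat at constant volume of a monoatomic ideal gas:
# cv = (3/2) R / M, where M = A/1000 kg/mol is the molar mass.
R = 8.31446  # J/(K·mol), molar gas constant

def cv_monoatomic(A):
    """cv in J/K/kg for a monoatomic gas of (dimensionless) atomic weight A."""
    return 1.5 * R / (A * 1e-3)

# Standard atomic weights (assumed values):
for gas, A in [("He", 4.003), ("Ne", 20.18), ("Ar", 39.95),
               ("Kr", 83.80), ("Xe", 131.29)]:
    # He ≈ 3116, Ne ≈ 618, Ar ≈ 312, Kr ≈ 149, Xe ≈ 95 J/K/kg
    print(f"{gas}: cv ≈ {cv_monoatomic(A):.0f} J/K/kg")
```

The inverse proportionality to atomic weight is apparent: xenon, about 33 times heavier than helium, has a specific heat about 33 times smaller.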
On the other hand, a polyatomic gas molecule (consisting of two or more atoms bound together) can store heat energy in other forms besides its kinetic energy. These forms include rotation of the molecule, and vibration of the atoms relative to its center of mass.
These extra degrees of freedom or "modes" contribute to the specific heat of the substance. Namely, when heat energy is injected into a gas with polyatomic molecules, only part of it will go into increasing their kinetic energy, and hence the temperature; the rest will go into those other degrees of freedom. In order to achieve the same increase in temperature, more heat energy will have to be provided to a mole of that substance than to a mole of a monoatomic gas. Therefore, the specific heat of a polyatomic gas depends not only on its molecular mass, but also on the number of degrees of freedom that the molecules have.
Quantum mechanics further says that each rotational or vibrational mode can only take or lose energy in certain discrete amounts (quanta). Depending on the temperature, the average heat energy per molecule may be too small compared to the quanta needed to activate some of those degrees of freedom. Those modes are said to be "frozen out". In that case, the specific heat of the substance is going to increase with temperature, sometimes in a step-like fashion, as more modes become unfrozen and start absorbing part of the input heat energy.
For example, the molar heat capacity of nitrogen at constant volume is formula_34 20.6 J/K/mol (at 15 °C, 1 atm), which is 2.49 formula_35. That is the value expected from theory if each molecule had 5 degrees of freedom. These turn out to be three degrees of the molecule's velocity vector, plus two degrees from its rotation about an axis through the center of mass and perpendicular to the line of the two atoms. Because of those two extra degrees of freedom, the specific heat formula_2 of nitrogen (736 J/K/kg) is greater than that of a hypothetical monoatomic gas with the same molecular mass of 28 (445 J/K/kg), by a factor of 5/3.
This value for the specific heat of nitrogen is practically constant from below −150 °C to about 300 °C. In that temperature range, the two additional degrees of freedom that correspond to vibrations of the atoms, stretching and compressing the bond, are still "frozen out". At about that temperature, those modes begin to "un-freeze", and as a result formula_2 starts to increase rapidly at first, then more slowly as it tends to another constant value. It is 35.5 J/K/mol at 1500 °C, 36.9 at 2500 °C, and 37.5 at 3500 °C. The last value corresponds almost exactly to the predicted value for 7 degrees of freedom per molecule.
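The equipartition counting used above can be sketched numerically, assuming a molar heat capacity of (f/2)R for f active degrees of freedom (and cp = cv + R for an ideal gas):

```python
R = 8.31446  # J/(K·mol), molar gas constant

def cv_molar(f):
    """Molar heat capacity at constant volume with f active degrees of freedom."""
    return f / 2 * R

# Room temperature: 3 translational + 2 rotational degrees of freedom.
print(cv_molar(5))      # ≈ 20.79 J/K/mol, close to the measured 20.6
# High temperature: bond vibration unfrozen adds 2 more (kinetic + potential).
print(cv_molar(7))      # ≈ 29.10 J/K/mol
print(cv_molar(7) + R)  # ≈ 37.42 J/K/mol at constant pressure
```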
In theory, the specific heat of a substance can also be derived from its abstract thermodynamic modeling by an equation of state and an internal energy function.
To apply the theory, one considers a sample of the substance (solid, liquid, or gas) for which the specific heat can be defined; in particular, one that has homogeneous composition and fixed mass formula_6. Assume that the evolution of the system is always slow enough for the internal pressure formula_39 and temperature formula_10 to be considered uniform throughout. The pressure formula_39 would be equal to the pressure applied to it by the enclosure or some surrounding fluid, such as air.
The state of the material can then be specified by three parameters: its temperature formula_10, the pressure formula_39, and its specific volume formula_44, where formula_45 is the volume of the sample. (This quantity is the reciprocal formula_46 of the material's density formula_47.) Like formula_10 and formula_39, the specific volume formula_50 is an intensive property of the material and its state, that does not depend on the amount of substance in the sample.
Those variables are not independent. The allowed states are defined by an equation of state relating those three variables: formula_51 The function formula_52 depends on the material under consideration. The specific internal energy of the sample, per unit of mass, will then be another function formula_53 of these state variables, also specific to the material. The total internal energy in the sample then will be formula_54.
For some simple materials, like an ideal gas, one can derive from basic theory the equation of state formula_55 and even the specific internal energy formula_56. In general, these functions must be determined experimentally for each substance.
The absolute value of this quantity is undefined, and (for the purposes of thermodynamics) the state of "zero internal energy" can be chosen arbitrarily. However, by the law of conservation of energy, any infinitesimal increase formula_57 in the total internal energy formula_58 must be matched by the net flow of heat energy formula_59 into the sample, plus any net mechanical energy provided to it by enclosure or surrounding medium on it. The latter is formula_60, where formula_61 is the change in the sample's volume in that infinitesimal step. Therefore
hence
If the volume of the sample (hence the specific volume of the material) is kept constant during the injection of the heat amount formula_59, then the term formula_65 is zero (no mechanical work is done). Then, dividing by formula_9,
where formula_68 is the change in temperature that resulted from the heat input. The left-hand side is the specific heat at constant volume formula_2 of the material.
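In standard notation, the chain of equalities sketched above reads as follows (a reconstruction of the display equations, which did not survive extraction; M is the sample's mass, u its specific internal energy, ν its specific volume, P the pressure):

```latex
\mathrm{d}U = \delta Q - P\,\mathrm{d}V
\quad\Longrightarrow\quad
\delta Q = M\,\mathrm{d}u + P M\,\mathrm{d}\nu .
% At constant (specific) volume, d\nu = 0; dividing by M\,dT gives
c_V = \left.\frac{\delta Q}{M\,\mathrm{d}T}\right|_{\nu}
    = \left(\frac{\partial u}{\partial T}\right)_{\nu} .
```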
For the heat capacity at constant pressure, it is useful to define the specific enthalpy of the system as the sum formula_70. An infinitesimal change in the specific enthalpy will then be
therefore
If the pressure is kept constant, the second term on the left-hand side is zero, and
The left-hand side is the specific heat at constant pressure formula_1 of the material.
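Written out with the same reconstruction conventions as for the constant-volume case (h denoting the specific enthalpy), the constant-pressure argument above is:

```latex
h = u + P\nu ,
\qquad
\mathrm{d}h = \mathrm{d}u + P\,\mathrm{d}\nu + \nu\,\mathrm{d}P
            = \frac{\delta Q}{M} + \nu\,\mathrm{d}P .
% At constant pressure, dP = 0; dividing by dT gives
c_P = \left.\frac{\delta Q}{M\,\mathrm{d}T}\right|_{P}
    = \left(\frac{\partial h}{\partial T}\right)_{P} .
```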
In general, the infinitesimal quantities formula_75 are constrained by the equation of state and the specific internal energy function. Namely,
Here formula_77 denotes the (partial) derivative of the state equation formula_52 with respect to its formula_10 argument, keeping the other two arguments fixed, evaluated at the state formula_80 in question. The other partial derivatives are defined in the same way. These two equations on the four infinitesimal increments normally constrain them to a two-dimensional linear subspace of possible infinitesimal state changes, that depends on the material and on the state. The constant-volume and constant-pressure changes are only two particular directions in this space.
This analysis also holds no matter how the energy increment formula_59 is injected into the sample (by heat conduction, irradiation, electromagnetic induction, radioactive decay, etc.).
For any specific volume formula_50, let formula_83 denote the function that describes how the pressure varies with the temperature formula_10, as allowed by the equation of state, when the specific volume of the material is forcefully kept constant at formula_50. Analogously, for any pressure formula_39, let formula_87 be the function that describes how the specific volume varies with the temperature, when the pressure is kept constant at formula_39. Namely, those functions are such that
for any values of formula_91. In other words, the graphs of formula_83 and formula_87 are slices of the surface defined by the state equation, cut by planes of constant formula_50 and constant formula_39, respectively.
Then, from the fundamental thermodynamic relation it follows that
This equation can be rewritten as
where
both depending on the state formula_80.
The heat capacity ratio, or adiabatic index, is the ratio formula_101 of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.
The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number greater than that of iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3"R" = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, "R" is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.
For an ideal gas, evaluating the partial derivatives above according to the equation of state, where "R" is the gas constant:
Substituting
this equation reduces simply to Mayer's relation:
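Mayer's relation, cp − cv = R per mole (equivalently R/M per unit of mass), can be checked against the monoatomic figures quoted earlier in this article (a numeric sketch):

```python
# Mayer's relation for an ideal gas: cp - cv = R (per mole).
R = 8.31446  # J/(K·mol), molar gas constant

cv = 1.5 * R  # monoatomic ideal gas at constant volume, ≈ 12.5 J/K/mol
cp = 2.5 * R  # same gas at constant pressure, ≈ 20.8 J/K/mol
print(cp - cv)  # equals R exactly for the ideal gas
```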
The difference in heat capacities as defined by the above Mayer relation is exact only for an ideal gas, and would be different for any real gas. | https://en.wikipedia.org/wiki?curid=28420 |
Slingshot
A slingshot (US) or catapult (UK), ging (primarily Australian and New Zealand), shanghai (Australian and New Zealand) or kettie (South Africa) is normally a small hand-powered projectile weapon. The classic form consists of a Y-shaped frame held in the off hand (nondominant hand), with two natural-rubber strips attached to the uprights. The other ends of the strips lead back to a pocket that holds the projectile. The dominant hand grasps the pocket and draws it back to the desired extent to provide power for the projectile—up to a full span of the arm with sufficiently long bands.
Slingshots depend on strong elastic materials, typically vulcanized natural rubber or the equivalent, and thus date no earlier than the invention of vulcanized rubber by Charles Goodyear in 1839 (patented in 1844). By 1860, this "new engine" had already established a reputation for juvenile use in vandalism. For much of their early history, slingshots were a "do-it-yourself" item, typically made from a forked branch to form the "Y"-shaped handle, with rubber strips sliced from items such as inner tubes or other sources of good vulcanized rubber, and firing suitably sized stones.
While early slingshots were most associated with young vandals, they were also capable hunting arms in the hands of a skilled user. Firing projectiles such as lead musket balls, buckshot, steel ball bearings, air gun pellets, or small nails, the slingshot was capable of taking game such as quail, pheasant, rabbit, dove, and squirrel. Placing multiple balls in the pouch produces a shotgun effect (though not a very accurate one), such as firing a dozen BBs at a time for hunting small birds. With the addition of a suitable rest, the slingshot can also be used to shoot arrows, allowing the hunting of medium-sized game at short ranges.
While commercially made slingshots date from at least 1918, with the introduction of the Zip-Zip, a cast iron model, it was not until the post-World War II years that slingshots saw a surge in popularity and legitimacy. They were still primarily a home-built proposition; a 1946 "Popular Science" article details a slingshot builder and hunter using home-built slingshots made from forked dogwood sticks to take small game at ranges of up to with No. 0 lead buckshot ( diameter).
The Wham-O company, founded in 1948, was named after their first product, the Wham-O slingshot. It was made of ash wood and used flat rubber bands. The Wham-O was suitable for hunting with a draw weight of up to , and was available with an arrow rest.
The 1940s also saw the creation of the National Slingshot Association, headquartered in San Marino, California, which organised slingshot clubs and competitions nationwide. Despite the slingshot's reputation as a tool of juvenile delinquents, the NSA reported that 80% of slingshot sales were to men over 30 years old, many of them professionals. John Milligan, a part-time manufacturer of the aluminium-framed John Milligan Special, a hunting slingshot, reported that about a third of his customers were physicians.
The middle 1950s saw two major innovations in slingshot manufacture, typified by the Wrist-Rocket which was produced by the Saunders Archery Co. of Columbus, Nebraska. The Wrist-Rocket was made from bent aluminum alloy rods that formed not only the handle and fork, but also a brace that extended backwards over the wrist, and provided support on the forearm to counter the torque of the bands. The Wrist-Rocket also used surgical rubber tubing rather than flat bands, attached to the backwards-facing fork ends by sliding the tubing ends over the tips of the forks, where it was held by friction or adhered with the addition of liquid rosin.
The early production of the Wrist-Rocket slingshot was a joint effort between Saunders Archery Co., who came up with the trademark and developed the automated forming machinery, and Mark Ellenburg who came up with the basic design. A few years later Mark Ellenburg split away forming his own company called Tru-mark Manufacturing Company. Today Saunders Archery is still a major innovator in the slingshot industry with its line of flatband slingshots which use locking clips for band attachment and tuning.
Slingshots are also occasionally used in angling to disperse bait into the water over a wide area, so that multiple fish are attracted near the angler's fishing rod.
A home-made derivative of a slingshot also exists, consisting of a rubber balloon cut in half and tied to a tubular object such as the neck of a plastic bottle, or a small pipe. The projectile is inserted through the tube and into the cut balloon, and the user stretches the balloon to launch the projectile. These so-called "balloon guns" are sometimes made as a substitute to ordinary slingshot, and are often used to create the "shotgun" effect with multiple projectiles fired at once.
The world record for the most energetic shot with a handheld slingshot was 135 joules (99.57 foot-pounds). It was shot with a forward-extended slingshot, also known as a "starship", which achieves more power by increasing draw length.
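The record figure can be put in context with elementary kinematics; the projectile mass below is a hypothetical value for illustration, not from the record itself:

```python
import math

# Context for the 135 J record shot quoted above.
E = 135.0              # joules
FTLB_PER_J = 0.737562  # foot-pounds per joule

print(E * FTLB_PER_J)  # ≈ 99.57 ft·lb, the quoted conversion

# Launch speed of a hypothetical 20 g steel ball carrying that energy,
# from E = (1/2) m v^2 (the mass is an assumption, not from the text):
m = 0.020              # kg
v = math.sqrt(2 * E / m)
print(v)               # ≈ 116 m/s
```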
Slingshots have been used as military weapons, but primarily by guerrilla forces due to the primitive resources and technology required to construct one. Such guerrilla groups included the Irish Republican Army; prior to the 2003 invasion of Iraq, Saddam Hussein released a propaganda video demonstrating slingshots as a possible insurgency weapon for use against invading forces.
Slingshots have also been used by the military to launch unmanned aerial vehicles (UAVs). Two crew members form the fork, with an elastic cord stretched between them to provide power to launch the small aircraft.
During the Battle of Marawi, soldiers of the Philippine Army's elite Scout Rangers were observed using slingshots with grenades as improvised mortars against Maute and Abu Sayyaf terrorists.
Competitive slingshot shooting is quite popular in Spain, Italy, and China.
One of the dangers inherent in slingshots is the high probability that the bands will fail. Most bands are made from latex, which degrades with time and use, causing the bands to eventually fail under load. Failures at the pouch end are safest, as they result in the band rebounding away from the user. Failures at the fork end, however, send the band back towards the shooter's face, which can cause eye and facial injuries. One method to minimize the chance of a fork end failure is to utilize a tapered band, thinner at the pouch end, and thicker and stronger at the fork end. Designs that use loose parts at the fork are the most dangerous, as they can result in those parts being propelled back towards the shooter's face, such as the ball attachment used in the recalled Daisy "Natural" line of slingshots (see image). The band could slip out of the slot in which it rested, and the hard ball in the tube resulted in cases of blindness and broken teeth. Daisy models using plain tubular bands were not covered in the recall, because the elastic tubing does not cause severe injuries upon failure. Another major danger is fork breakage: some commercial slingshots made from cheap zinc alloy may break and severely injure the shooter's eyes and face.
Many jurisdictions prohibit the use of arm braced slingshots. For example, New York law 265.01 defines it as a Class-4 misdemeanor.
The slingshot is heavily featured in the popular gaming franchise "Angry Birds", used as the primary launching device for shooting birds at enemy pigs.
Bart Simpson is often depicted using a slingshot in many of his pranks.
The slingshot is the signature weapon of protagonist Jimmy Hopkins in the video game "Bully". A basic slingshot is available at the beginning of the game; a more advanced slingshot with an ergonomic grip and "scope" can also be unlocked fairly early in the game. | https://en.wikipedia.org/wiki?curid=28422 |
Starship Troopers
Starship Troopers is a military science fiction novel by American writer Robert A. Heinlein. Written in a few weeks in reaction to the U.S. suspending nuclear tests, the story was first published as a two-part serial in "The Magazine of Fantasy & Science Fiction" as "Starship Soldier", and published as a book by G. P. Putnam's Sons in December 1959.
The story is set in a future society ruled by a human interstellar government dominated by a military elite, referred to as the "Terran Federation". The first-person narrative follows Juan "Johnny" Rico through his military service in the Mobile Infantry. Rico progresses from recruit to officer against the backdrop of an interstellar war between humans and an alien species known as "Arachnids" or "Bugs". Interspersed with the primary plot are classroom scenes in which Rico and others discuss philosophical and moral issues, including aspects of suffrage, civic virtue, juvenile delinquency, and war; these discussions have been described as expounding Heinlein's own political views. "Starship Troopers" has been identified with a tradition of militarism in U.S. science fiction, and draws parallels between the conflict between humans and the Bugs, and the Cold War. A coming-of-age novel, "Starship Troopers" also critiques U.S. society of the 1950s, argues that a lack of discipline had led to a moral decline, and advocates corporal and capital punishment.
"Starship Troopers" brought to an end Heinlein's series of juvenile novels. It became one of his best-selling books, and is considered his most widely known work. It won the Hugo Award for Best Novel in 1960, and garnered praise from reviewers for its scenes of training and combat and its visualization of a future military. It also became enormously controversial because of the political views it seemed to support. Reviewers were strongly critical of the book's intentional glorification of the military, an aspect described as propaganda and likened to recruitment. The ideology of militarism and the fact that only military veterans had the right to vote in the novel's fictional society led to it being frequently described as fascist. Others disagree, arguing that Heinlein was only exploring the idea of limiting the right to vote to a certain group of people. Heinlein's depiction of gender has also been questioned, while reviewers have said that the terms used to describe the aliens were akin to racial epithets.
Despite the controversy, "Starship Troopers" had wide influence both within and outside science fiction. Ken MacLeod stated that "the political strand in [science fiction] can be described as a dialogue with Heinlein". Science fiction critic Darko Suvin wrote that "Starship Troopers" is the "ancestral text of U.S. science fiction militarism" and that it shaped the debate about the role of the military in society for many years. The novel has been credited with popularizing the idea of powered armor, which has since become a recurring feature in science fiction books and films, as well as an object of scientific research. Heinlein's depiction of a futuristic military was also influential. Later science fiction books, such as Joe Haldeman's 1974 anti-war novel "The Forever War", have been described as reactions to "Starship Troopers". The story has been adapted several times, including in a 1997 film version directed by Paul Verhoeven that sought to satirize what the director saw as the fascist aspects of the novel.
Robert Heinlein was among the best-selling science fiction authors of the 1940s and 1950s, along with Isaac Asimov and Arthur C. Clarke; they were known as the "big three" that dominated U.S. science fiction. In contrast to the others, Heinlein firmly endorsed the anti-communist sentiment of the Cold War era in his writing. Heinlein served in the U.S. Navy for five years after graduating from the United States Naval Academy in 1929. His experience in the military profoundly influenced his fiction. At some point between 1958 and 1959, Heinlein put aside the novel that would become "Stranger in a Strange Land" and wrote "Starship Troopers". His motivation arose partially from his anger at U.S. President Dwight Eisenhower's decision to suspend U.S. nuclear tests, and the Soviet tests that occurred soon afterward. Writing in his 1980 volume "Expanded Universe", Heinlein would say that the publication of a newspaper advertisement placed by the National Committee for a Sane Nuclear Policy on April 5, 1958, calling for a unilateral suspension of nuclear weapons testing by the United States sparked his desire to write "Starship Troopers". Heinlein and his wife Virginia created the "Patrick Henry League" in an attempt to create support for the U.S. nuclear testing program. Heinlein stated that he used the novel to clarify his military and political views.
As was the case with many of Heinlein's books, "Starship Troopers" was completed in a few weeks. It was originally written as a juvenile novel for New York publishing house Scribner; Heinlein had previously had success with this format, having written several such novels published by Scribner. The manuscript was rejected, prompting Heinlein to end his association with the publisher completely, and resume writing books with adult themes. Scholars have suggested that Scribner's rejection was based on ideological objections to the content of the novel, particularly its treatment of military conflict.
"The Magazine of Fantasy & Science Fiction" first published "Starship Troopers" in October and November 1959 as a two-part serial titled "Starship Soldier". A senior editor at Putnam's, Peter Israel, purchased the manuscript and approved revisions that made it more marketable to adults. Asked whether it was aimed at children or adults, he said at a sales conference "Let's let the readers decide who likes it." The novel was eventually published by G. P. Putnam's Sons.
Set approximately 700 years from the present day, the human society in "Starship Troopers" is ruled by the Terran Federation, a form of world government dominated by a military elite. The society is depicted as affluent, and futuristic technology shown as coexisting with educational methods from the 20th century. The rights of a full citizen, to vote and hold public office, are not universally guaranteed, but must be earned through Federal Service. Those who do not perform this service, which usually takes the form of military service, retain the rights of free speech and assembly, but cannot vote or hold public office. People of either gender above the age of 18 are permitted to enlist. Those who leave before completing their service do not receive the vote. Important government jobs are reserved for federal service veterans. This structure arose "ad hoc" after the collapse of the "20th century Western democracies", driven in part by an inability to control crime and juvenile delinquency, particularly in North America, and a war between an alliance of the US, the UK and Russia against the "Chinese Hegemony".
Two extraterrestrial civilizations are also depicted. The "Pseudo-Arachnids" or "Bugs" are shown as communal beings originating from the planet of Klendathu. They have multiple castes: workers, soldiers, brains, and queens, similar to ants and termites. The soldiers are the only ones who fight, and are unable to surrender in battle. The "Skinnies" are depicted as less communal than the Arachnids but more so than human beings. The events of the novel take place during an interstellar war between the Terran Federation and the Arachnids. At the beginning of the story, Earth is not at war, but war has been declared by the time Rico has completed his training. The Skinnies are initially allies of the Pseudo-Arachnids, but switch to being allies of the humans midway through the novel. Faster-than-light travel exists in this future: spacecraft operate under the "Cherenkov drive", and can travel "Sol to Capella, forty-six lightyears, in under six weeks".
"Starship Troopers" is narrated by the main protagonist Juan "Johnny" Rico, a member of the "Mobile Infantry". It is one of the few Heinlein novels which intersperses his typical linear narrative structure with a series of flashbacks. These flashbacks are frequently to Rico's History and Moral Philosophy course in school, in which the teacher discusses the history of the structure of their society. Rico is depicted as a man of Filipino ancestry, although there has been disagreement on this matter among fans. He is from a wealthy family, whose members had never served in the army. Rico's ancestry is depicted to be a thing of no consequence; the society he lives in appears to have abandoned racial and gender-based prejudice.
The novel opens with Rico aboard the corvette transport "Rodger Young" (named after Medal of Honor recipient Rodger Wilton Young), serving with the platoon known as "Rasczak's Roughnecks". The platoon carries out a raid against a planetary colony held by Skinnies. The raid is relatively brief: the platoon lands on the planet, destroys its targets, and retreats, suffering two casualties in the process. One of them, Dizzy Flores, dies while returning to orbit. The narrative then flashes back to Rico's graduation from high school. Rico and his best friend Carl are considering joining the Federal Service after graduation; Rico is hesitant, partly due to his father's attitude towards the military. Rico makes his decision after discovering that his classmate Carmen Ibañez also intends to enlist.
Rico's choice is taken poorly by his parents, and he leaves with a sense of estrangement. He is assigned to the Mobile Infantry, and moves to Camp Arthur Currie (named for Arthur Currie, who rose through the ranks to general in WWI) on the Canadian prairie for his training under Sergeant Charles Zim. The training is extremely demanding. Rico receives combat training of all types, including simulated fights in armored suits. A fellow recruit is court-martialled, flogged, and dismissed for striking a drill instructor who was also his company commander. Jean V. Dubois, who taught Rico's History and Moral Philosophy in school, sends Rico a letter, revealing that he is a Mobile Infantry veteran himself. The letter helps Rico stay motivated enough not to resign. Rico himself is given five lashes for firing a rocket during a drill with armored suits and simulated nuclear weapons without checking to see that no friendlies were within the blast zone, which in combat would have resulted in the death of a fellow soldier. Another recruit, who murdered a baby girl after deserting the army, is hanged by his battalion after his arrest by civilian police. Eventually, after further training at another camp near Vancouver, Rico graduates with 187 others, of the 2,009 who had begun training in that regiment.
The "Bug War" has escalated from minor incidents to a full-scale war during Rico's training. An Arachnid attack that annihilates the city of Buenos Aires alerts civilians to the situation; Rico's mother is killed in the attack. Rico participates in the Battle of Klendathu, an attack on the Arachnids' home world, which turns into a disastrous defeat for the Terran Federation. Rico's ship, the "Valley Forge", is destroyed, and his unit is decimated; he is reassigned to the Roughnecks on board the "Rodger Young", led by Lieutenant Rasczak and Sergeant Jelal. The unit carries out several raids, and Rico is promoted to corporal by Jelal after Rasczak dies in combat.
One of his comrades in the Roughnecks suggests that Rico go to officer training school and try to become an officer. Rico ends up going to see Jelal, and finds that Jelal already had the paperwork ready. Rico enters Officer Candidate School for a second course of training, including further courses in "History and Moral Philosophy". En route from the Roughnecks to the school, Rico encounters his father, who has also enlisted and is now a corporal, and the two reconcile. He is also visited in school by Carmen, now an ensign and ship's pilot officer in the Navy, and the two discuss their friend Carl, who had been killed earlier in the war.
Rico is commissioned a temporary third lieutenant for his final test: a posting to a combat unit. Under the tutelage of his company commander, Captain Blackstone, and with the aid of his platoon sergeant, Fleet Sergeant Zim (his boot camp drill instructor, reassigned from Camp Currie), Rico commands a platoon during "Operation Royalty", a raid to capture members of the Arachnid brain caste and queens. Rico then returns to the officer school to graduate.
The novel ends with him holding the rank of Second Lieutenant, in command of his old platoon in the "Rodger Young", with his father as his platoon sergeant. The platoon has been renamed "Rico's Roughnecks", and is about to participate in an attack on Klendathu.
Commentators have written that "Starship Troopers" is not driven by its plot, though it contains scenes of military combat. Instead, much of the novel is given over to a discussion of ideas. In particular, the discussion of political views is a recurring feature of what scholar Jeffrey Cass described as an "ideologically intense" book. A 1997 review in "Salon" categorized it as a "philosophical novel". Critics have debated to what extent the novel promotes Heinlein's own political views. Some contend that the novel maintains a sense of irony that allows readers to draw their own conclusions; others argue that Heinlein is sermonizing throughout the book, and that its purpose is to expound Heinlein's militaristic philosophy.
"Starship Troopers" has been identified as being a part of a tradition in U.S. science fiction that assumes that violent conflict and the militarization of society are inevitable and necessary. Although the Mobile Infantry, the unit to which Rico is assigned, is seen as a lowly post by the characters in the story, the novel itself suggests that it is the heart of the army and the most honorable unit in it. In a commentary written in 1980, Heinlein agreed that "Starship Troopers" "glorifies the military ... Specifically the P.B.I., the Poor Bloody Infantry, the mudfoot who places his frail body between his loved home and the war's desolation – but is rarely appreciated... he has the toughest job of all and should be honored." The story is based on the social Darwinist idea of society as a struggle for survival based on military strength. It suggests that some conflicts must be resolved by force: one of the lessons Rico is repeatedly taught is that violence can be an effective method of settling conflict. These suggestions derive in part from Heinlein's view that in the 1950s the U.S. government was being too conciliatory in its dealings with communist China and the Soviet Union.
Heinlein draws an analogy between the human society in the novel, which is well-to-do but needs to be vigilant against the imperialist threat of the Arachnids, and U.S. society of the 1950s. Reviewers have suggested that the Arachnids are Heinlein's analogue for communists. Traits used to support this include the communal nature of the Arachnids, which makes them capable of a much higher degree of coordination than the humans. Bug society is once explicitly described as communist, and is moreover depicted as communist by nature; this has been read as implying that those with a different political ideology are analogous to alien beings. The related motifs of alien invasion, patriotism, and personal sacrifice during war, are present, as are other aspects of U.S. popular culture of the 1950s. Commentators have argued that Heinlein's portrayal of aliens, as well as being a reference to people in communist countries, invokes the trope of a return to the frontier. The concept of the frontier includes a social-Darwinist argument of constantly fighting for survival, even at the expense of indigenous people or, in the case of "Starship Troopers", of aliens. Heinlein suggests that without territorial expansion involving violent conquest of other races, humans would be destroyed. Scholar Jamie King has stated that Heinlein does not address the question of what the military government and Federal Service would do in peacetime, and argues that Heinlein has set up a society designed to be continuously at war, and to keep expanding its territory.
"Starship Troopers" has been referred to as a "bildungsroman" or "coming-of-age" story for Rico, as he matures through his tenure in the infantry. His training, both at boot camp and at officer candidate school, involves learning the value of militarism, thus inviting the reader to learn it as well. This is especially true of the parts of his training that involve indoctrination, such as the claim by one of his instructors that rule by military veterans is the ideal form of government, because only they understand how to put collective well-being above the individual. The story traces Rico's transformation from a boy into a soldier, while exploring issues of identity and motivation, and traces his overall moral and social development, in a manner identified by commentators as similar to many stories about German soldiers in World War I. Scholar Howard Franklin has likened Rico's transformation to a narrative common within stories with military themes: that of a sloppy and unfit civilian being knocked into shape by tough officers, whose training is "calculated sadism" but who are depicted as fundamentally being on the right side. The letter Rico receives from Dubois, partly responsible for Rico "crossing the hump" in his training, is shown as a turning point in his development. The classroom scenes embedded in the story serve to explain Rico's adventures, and highlight his reactions to events around him. A notable example is the execution Rico is forced to witness after a deserter from his unit murders a young girl; Rico is uncertain of his own reaction until he remembers a lecture by Dubois in which the latter argues that "moral sense" derives entirely from the will to survive. The concept of the American frontier is also related to the coming-of-age theme.
Young protagonists across Heinlein's novels attain manhood by confronting a hostile "wilderness" in space; coming-of-age in a military, alien context is a common theme in Heinlein's earlier works as well. Rico's coming-of-age has also been described as being related to his relationship with his father; the journey "outward" through the novel also contains a search for Rico's childhood and a reunion with his estranged parent.
"Starship Troopers" also critiques U.S. society of the 1950s, suggesting that it had led young people to be spoiled and undisciplined. These beliefs are expressed through the classroom lectures of Dubois, Rico's teacher for History and Moral Philosophy. Dubois praises flogging and other types of corporal punishment as a means of addressing juvenile crimes. It has been suggested that Heinlein endorsed this view, although the fact that Dubois also compares raising children to training a puppy has been used to argue that Heinlein was making use of irony. The story is strongly in favor of corporal punishment and capital punishment, as a means of correcting juvenile delinquents, part of a trend in science fiction which examines technology and outer space in an innovative manner, but is reactionary with respect to human relationships. As with other books by Heinlein, traditional schools are denigrated, while learning "on the spot" is extolled: Rico is able to master the things required of him in military training without undue difficulty.
Dubois also ridicules the idea of inalienable rights, such as "Life, Liberty and the pursuit of Happiness", arguing that people only have the rights that they are willing to fight and die for to protect. The novel appeals to scientific authority to justify this position; Dubois repeatedly states that his argument is mathematically demonstrable, statements which have led scholars to label the novel "hard science fiction", despite its social and political themes. The "moral decline" caused by this situation is depicted as having caused a global war between an alliance of the U.S., Britain, and Russia against the "Chinese Hegemony" in the year 1987. Despite the alliance between the U.S. and Russia, this war has been described as demonstrating Heinlein's anti-communist beliefs, which saw "swarming hordes" of Chinese as a bigger threat. The novel draws some comparisons between the Chinese and the Arachnids, and suggests that the lessons of one war could be applied to the other.
To Heinlein's surprise, "Starship Troopers" won the Hugo Award for Best Novel in 1960. It has been acknowledged as one of the best-known and most influential works of science fiction. The novel is considered a landmark for the genre, having been described by a 1960 review as one of the ten best genre books of 1959, in a 2009 review as a key science fiction novel of the 1950s, and as the best-known example of military science fiction. It was also a personal landmark for Heinlein; it was one of his best-selling books, and is his best-known novel. The novel has been described as marking Heinlein's transition from writing juvenile fiction to a "more mature phase" as an author. Reviewing the book with others written for children, Floyd C. Gale of "Galaxy Science Fiction" wrote in 1960 that "Heinlein has penned a juvenile that "really" is not. This is a new and bitter and disillusioned Heinlein". Rating it 2.5 stars out of five for children, 4.5 stars for adults, and "?" for civilians, he believed that the novel would be "of exceptional interest to veterans with battle experience ... but youngsters will find it melancholy and verbose". Conversely, Michael Moorcock described it as Heinlein's last "straight" science fiction, before he turned to more serious writing such as "Stranger in a Strange Land".
By 1980, twenty years after its release, "Starship Troopers" had been translated into eleven languages and was still selling strongly. Heinlein nevertheless complained that, despite this success, almost all the mail he received about it was negative and he only heard about it "when someone wants to chew me out". The novel is highly contentious. Controversy surrounded its praise of the military, its approval of violence, and its implication that militarism is superior to traditional democracy, to the extent that it has frequently been described as fascist. Heinlein's peers were among those who argued over the book; a comparison between a quote in "Starship Troopers" that "the noblest fate that a man can endure is to place his own mortal body between his loved home and war's desolation" and the anti-war poem "Dulce et Decorum Est" by Wilfred Owen began a two-year discussion in the "Proceedings of the Institute for Twenty-First Century Studies" from 1959 to 1961, with James Blish, Poul Anderson, Philip José Farmer, Anthony Boucher, John Brunner, and Brian Aldiss among those debating the quality of "Starship Troopers"'s writing, philosophy, and morality.
The writing in "Starship Troopers" has received varied responses, with the scenes of military training and combat receiving praise. In a 2009 retrospective, Jo Walton wrote that "Starship Troopers" was "military SF done extremely well". She went on to argue that "Heinlein was absolutely at his peak when he wrote this in 1959. He had so much technical stylistic mastery of the craft of writing science fiction that he could [tell the story] 'backwards and in high heels' and get away with it." Others referred to it as very readable, and found the military scenes compelling. Heinlein's descriptions of training and boot camp in the novel, based on his own experiences in the military, have been described as being rendered with remarkable skill. A 1960 review in the "New York Herald Tribune" praised the "brilliantly written" passages describing infantry combat, and also called attention to the discussion of weapons and armor, which, according to other reviewers, demonstrated Heinlein's "undiminished talent for invention". Scholar George Slusser described the book in 1986 as the "ultimately convincing space-war epic", praising in particular the "precisely imagined" weapons and tactics, while a 1979 science fiction encyclopedia referred to it as the "slickest" of Heinlein's juvenile books.
Criticism of the style of the book has centered on its political aspects. Heinlein's discussions of his political beliefs were criticized as "didactic", and the novel was derided for "exposition [that was] inserted in large indigestible chunks". Author Ken MacLeod's 2003 analysis of the political nature of "Starship Troopers" stated that it was "a book where civics infodumps and accounts of brutal boot-camp training far outweigh the thin and tensionless combat scenes". Author John Brunner compared it to a "Victorian children's book", while the "Science Fiction Handbook" published in 2009 said that the novel provided "compelling images of a futuristic military" and that it raised important questions, even for those who disagree with its political ideology. However, it stated that the story was weak as a tale of an alien encounter, as it did not explore alien society in any detail, but presented the Arachnids as nameless and faceless creatures that wished to destroy humanity. Boucher, founder of "The Magazine of Fantasy & Science Fiction", remarked in 1960 that Heinlein had "forgotten to insert a story". A 1979 summary said that though Heinlein's vision might verge on fascism, his tightly controlled narrative made his ideology seem "vibrantly appealing".
"Starship Troopers" is generally considered to promote militarism, the glorification of war and of the military. Scholar Bruce Franklin referred to it in 1980 as a "bugle-blowing, drum-beating glorification" of military service, and wrote that militarism and imperialism were the explicit message of the book. Science fiction writer Dean McLaughlin called it "a book-length recruiting poster". In 1968 science fiction critic Alexei Panshin called "Starship Troopers" a militaristic polemic and compared it to a recruiting film, stating that it "purports to show the life of a typical soldier, with a soundtrack commentary by earnest sincere Private Jones who interprets what we see for us." Panshin stated that there was no "sustained human conflict" in the book: instead, "All the soldiers we see are tough, smart, competent, cleancut, clean-shaven, and noble." Panshin, a veteran of the peacetime military, argued that Heinlein glossed over the reality of military life, and that the Terran Federation-Arachnid conflict existed simply because, "Starship troopers are not half so glorious sitting on their butts polishing their weapons for the tenth time for lack of anything else to do." Literature scholar George Slusser, in describing the novel as "wrong-headed and retrogressive", argued that calling its ideology militarism or imperialism was inadequate, as these descriptions suggested an economic motive. Slusser instead says that Heinlein advocates for a complete "technological subjugation of nature", of which the Arachnids are a symbol, and that this subjugation itself is depicted as a sign of human advancement.
A 1997 review in "Salon" stated that the novel could almost be described as propaganda, and was terrifying as a result, particularly in its belief that the boot camp had to be an ingredient of any civilization. This was described as a highly unusual utopian vision. Moorcock stated that the lessons Rico learns in boot camp are that "wars are inevitable, [and] that the army is always right". In discussing the book's utility in classroom discussions of forms of government, Alan Myers stated that its depiction of the military was of an "unashamedly Earth-chauvinist nature". In the words of science fiction scholar Darko Suvin, "Starship Troopers" was an "unsubtle but powerful black-and-white paean to combat life", and an example of agitprop in favor of military values.
Other writers defended Heinlein. George Price argued that "[Heinlein] implies, first, that war is something endured, not enjoyed, and second, that war is so unpleasant, so desolate, that it must at all costs be kept away from one's home." Poul Anderson also defended some of the novel's positions, arguing "Heinlein has recognized the problem of selective versus nonselective franchise, and his proposed solution does merit discussion." Heinlein was also criticized for the lack of conscription in "Starship Troopers"; the military draft was still in effect in the U.S. when he wrote the novel.
The society within the book has frequently been described as fascist. According to the 2009 "Science Fiction Handbook", it had the effect of giving Heinlein a reputation as a "fanatical warmongering fascist". Scholar Jeffrey Cass has referred to the setting of the book as "unremittingly grim fascism". He has stated that the novel made an analogy between its military conflict and those of the U.S. after World War II, and that it justified U.S. imperialism in the name of fighting another form of imperialism. Jasper Goss has referred to it as "crypto-fascist". Suvin compares Heinlein's suggestion that "all wars arise from population pressure" to the Nazi concept of "Lebensraum" or "living space" for a superior society that was used to justify territorial expansion.
Some reviewers have suggested that Heinlein was simply discussing the merits of a selective versus a nonselective franchise. Heinlein made a similar claim, over two decades after "Starship Troopers"'s publication, in his "Expanded Universe" and further claimed that 95 percent of "veterans" were not military personnel but members of the civil service. Heinlein's own description has been disputed, even among the book's defenders. Heinlein scholar James Gifford has argued that a number of quotes within the novel suggest that the characters within the book assume that the Federal Service is largely military. For instance, when Rico tells his father that he is interested in Federal Service, his father immediately explains his belief that Federal Service is a bad idea because there is no war in progress, indicating that he sees Federal Service as military in nature. Gifford states that although Heinlein's intentions may have been that Federal Service be 95 percent non-military, in relation to the actual contents of the book, Heinlein "is wrong on this point. Flatly so."
Dennis Showalter, writing in 1975, defended "Starship Troopers", stating that the society depicted in it did not contain many elements of fascism. He argues that the novel does not include the outright opposition to bolshevism and liberalism that would be expected in a fascist society. Others have responded by saying Showalter's argument is based on a literal reading of the novel, and that the story glorifies militarism to a large extent. Ken MacLeod argues that the book does not actually advocate fascism because anybody capable of understanding the oath of Federal Service is able to enlist and thereby obtain political power. MacLeod states that Heinlein's books are consistently liberal, but cover a spectrum from democratic to elitist forms of liberalism, "Starship Troopers" being on the latter end of the spectrum. It has been argued that Heinlein's militarism is more libertarian than fascist, and that this trend is also present in Heinlein's other popular books of the period, such as "Stranger in a Strange Land" (1961) and "The Moon is a Harsh Mistress" (1966). This period of Heinlein's writing has received more critical attention than any other, though he continued to write into the 1980s.
The setting of the book has been described as dystopian, but it is presented by Heinlein as utopian; its leaders are shown as good and wise, and the population as free and prosperous. Slusser wrote in 1987 that "Starship Troopers" depicts a world that is "hell for human beings", but nonetheless celebrates the ideology of its fictional society. The rulers are claimed to be the best in history, because they understand that human nature is to fight for power through the use of force. The suggestion of utopia is not explored in depth, as the lives of those outside the military are not shown in any detail. The novel suggests that the militarist philosophy espoused by many of the characters has a mathematical backing, though reviewers have commented that Heinlein does not present any basis for this.
Writers such as Robert A. W. Lowndes, Farmer, and Michael Moorcock have criticized the novel for being a hypothetical utopia, in the sense that while Heinlein's ideas sound plausible, they have never been put to the test. Moorcock wrote an essay entitled "Starship Stormtroopers" in which he attacked Heinlein and other writers over similar "Utopian fiction". Lowndes accused Heinlein of using straw man arguments, "countering ingenuous half-truths with brilliant half-truths". Lowndes further argued that the Terran Federation could never be as idealistic as Heinlein portrays it to be because he never properly addressed "whether or not [non-citizens] have at least as full a measure of civil redress against official injustice as we have today". Farmer agreed, arguing that a "world ruled by veterans would be as mismanaged, graft-ridden, and insane as one ruled by men who had never gotten near the odor of blood and guts".
Authors and commentators have stated that the manner in which the extraterrestrial beings are portrayed in "Starship Troopers" has racist aspects, arguing that the nicknames "Bugs" and "Skinnies" carry racial overtones. John Brunner compared them to calling Koreans "gooks". Slusser argued that the term "Bugs" was an "abusive and biologically inaccurate" word that justified the violence against alien beings, a tendency which, according to Slusser, the book shared with other commercially successful science fiction.
Some of Heinlein's other works have also been described as racist, though Franklin argues that this was not unique to Heinlein, and that he was less racist than the U.S. government of the time. Heinlein's early novel "Sixth Column" was called a "racist paean" to a white resistance movement against an Asian horde derived from the Yellow Peril. In 1978, Moorcock wrote that "Starship Troopers" "set the pattern for Heinlein's more ambitious paternalistic, xenophobic" stories. Robert Lowndes argues that the war between the Terrans and the Arachnids is not about a quest for racial purity, but rather an extension of Heinlein's belief that man is a wild animal. According to this theory, if man lacks a moral compass beyond the will to survive, and he was confronted by another species with a similar lack of morality, then the only possible moral result would be warfare.
The fact that all pilots in the novel are women (in contrast to the infantry, which is entirely male) has been cited as evidence of progressive gender politics within the story, although the idea expressed by Rico that women are the motivation for men to fight in the military is a counter-example to this. A 1996 science fiction encyclopedia said that like much of Heinlein's fiction, "Starship Troopers" exemplified "macho male culture". The prosthetically enhanced soldiers in the novel, all of whom are men, have been described as an example of the "hyper-masculinity" brought on by the proximity of these men to technology. The story portrays the Arachnids as so alien that the only response to them can be war. Feminist scholars have described this reaction as a "conventionally masculinist" one. Steffen Hantke has described the mechanized suits in the novel, which make the wearer resemble a "steel gorilla," as defining masculinity as "something intensely physical, based on animal power, instinct, and aggression". He calls this form of masculinity "all body, so to speak, and no brain". Thus, in Hantke's reading, "Starship Troopers" expresses fears of how masculinity may be preserved in an environment of high technology. This fear is exacerbated by the motifs of pregnancy and birth that Heinlein uses when describing how the soldiers in suits are dropped from spaceships piloted by women. Though Rico says he finds women "marvelous", he shows no desire for sexual activity; the war seems to have subsumed sex in this respect. A 1979 summary argued that despite the gestures towards women's equality, women in the story were still objects, to be protected, and to fight wars over.
Heinlein's books, and "Starship Troopers" in particular, had an enormous impact on political science fiction, to the extent that author Ken MacLeod has stated that "the political strand in [science fiction] can be described as a dialogue with Heinlein," although many participants in this dialogue disagree with Heinlein. Science fiction critic Darko Suvin states that "Starship Troopers" is the "ancestral text of U.S. science fiction militarism" and that it shaped the debate about the role of the military in society for many years.
As well as his political views, Heinlein's ideas about a futuristic military as depicted in the novel were deeply influential among films, books, and television shows in later years. Roger Beaumont has suggested that "Starship Troopers" may some day be considered a manual for extraterrestrial warfare. Suvin refers to Juan Rico as the "archetypal Space Soldier". "Starship Troopers" included concepts in military engineering which have since been widely used in other fiction, and which have occasionally been paralleled by scientific research. The novel has been cited as the source of the idea of powered armor exoskeletons, which Heinlein describes in great detail. Such suits became a staple of military science fiction. Franchises that have employed this technology include "Halo", "Elysium", "District 9", "Iron Man", and "Edge of Tomorrow". During the shooting of the classic science fiction film "Aliens", director James Cameron required the actors playing space marines to read "Starship Troopers" to understand their part, and also cited it as an influence for the space drop, terms like "bug hunt", and the cargo-loader exoskeleton.
"Starship Troopers" had a direct influence on many later science fiction stories. John Steakley's 1984 novel "Armor" was, according to the author, born out of frustration with the small amount of actual combat in "Starship Troopers" and because he wanted this aspect developed further. The 1988 Gainax OVA series "Gunbuster" has plot elements similar to Heinlein's novel, depicting humanity arrayed against an alien military. Scholars have identified elements of Heinlein's influence in "Ender's Game", by Orson Scott Card, as well. Hantke, in particular, compares the battle room in "Ender's Game" to Heinlein's prosthetic suits, stating that they both regulate but also enhance human agency. Suvin suggests parallels between the plots of the two novels, with human society in both stories at war against insect-like aliens, but states that the story of Ender Wiggin takes a very different direction, as Ender regrets his genocidal actions and dedicates his efforts to protecting his erstwhile targets.
Conversely, Joe Haldeman's 1974 anti-war, Hugo- and Nebula-winning science fiction novel "The Forever War" is popularly thought to be a direct reply to "Starship Troopers", and though Haldeman has stated that it is actually a result of his personal experiences in the Vietnam War, he has admitted to being influenced by "Starship Troopers". Haldeman said that he disagreed with "Starship Troopers" because it "glorifies war", but added that "it's a very well-crafted novel, and I believe Heinlein was honest with it". "The Forever War" contains several parallels to "Starship Troopers", including its setting. Commentators have described it as a reaction to Heinlein's novel, a suggestion Haldeman denies; the two novels are very different in terms of their attitude towards the military. "The Forever War" does not depict war as a noble pursuit, with the sides clearly defined as good and evil; instead, the novel explores the dehumanizing effect of war, influenced by the real world context of the Vietnam War. Haldeman received a letter from Heinlein, congratulating him on his Nebula Award, which "meant more than the award itself". According to author Spider Robinson, Heinlein approached Haldeman at the awards banquet and said the book "may be the best future war story I've ever read!"
Harry Harrison's 1965 novel "Bill, the Galactic Hero" has also been described as a reaction to "Starship Troopers", while Gordon R. Dickson's 1961 novel "Naked to the Stars" has been called "an obvious rejoinder" to "Starship Troopers". "Ring of Swords", written by Eleanor Arnason in 1993, also depicts a war between two highly aggressive species, of which humans are one. It deliberately inverts several aspects of "Starship Troopers": the story is told from the point of view of diplomats seeking to prevent war, rather than soldiers fighting it, and the conflict is the result of the two species being extremely similar, rather than different.
The film rights to the novel were licensed in the 1990s, several years after Heinlein's death. The project was originally entitled "Bug Hunt at Outpost Nine", and had been in production before the producers bought the rights to "Starship Troopers". The film was directed by Paul Verhoeven (who found the book too boring to finish) and released in 1997. The screenplay, by Ed Neumeier, shared character names and some plot details with the novel. The film contained several elements that differed from the book, including a military that is completely integrated with respect to sex. It had the stated intention of treating its material in an ironic or sarcastic manner, to undermine the political ideology of the novel. The mechanized suits that featured prominently in the novel were absent from the film, due to budget constraints.
The film utilized fascist imagery throughout, including portraying the Terran Federation's personnel wearing uniforms strongly reminiscent of those worn by the SS, the Nazi paramilitary. Verhoeven stated in 1997 that the first scene of the film—an advertisement for the Mobile Infantry—was adapted shot-for-shot from a scene in Leni Riefenstahl's "Triumph of the Will" (1935), specifically an outdoor rally for the Reichsarbeitsdienst. Other references to Nazism include the Albert Speer-style architecture and the propagandistic dialogue ("Violence is the supreme authority!"). According to Verhoeven, the references to Nazism reflected his own experience in the Nazi-occupied Netherlands during World War II.
The film reignited the debate over the nature of the Terran society in Heinlein's world, and several critics accused Verhoeven of creating a fascist universe. Others, including Verhoeven himself, have stated that the film was intended to be ironic, and to critique fascism. The film has also been described as criticizing the jingoism of U.S. foreign policy, the military-industrial complex, and the society in the film, which elevates violence over sensitivity. It received several negative reviews, with reviewers suggesting that it was unsophisticated and targeted a juvenile audience, although some scholars and critics have supported its description as satirical. The absence of the powered armor technology drew criticism from fans. The success of the film's endeavor to critique the ideology of the novel has been disputed.
Four sequels, "" (2004), "" (2008), "" (2012) and "" (2017), were released as straight-to-DVD films. In December 2011, Neal H. Moritz, producer of films such as "The Fast and the Furious" series and "I Am Legend", announced plans for a remake of the film that he claimed would be more faithful to the source material. In 2016, Mark Swift and Damian Shannon were reported to be writing the film. Commentators have suggested that a reboot would be as controversial as the original book.
Dark Horse Comics, Mongoose Publishing and Markosia have held licenses to produce comic books based on "Starship Troopers", written by authors including Warren Ellis, Gordon Rennie and Tony Lee. From October to December 1988, Sunrise and Bandai Visual produced a six-episode Japanese original video animation, locally titled "Uchū no Senshi" and based on "Starship Troopers", with mobile infantry power armor designs by Kazutaka Miyatake. Avalon Hill published "Robert Heinlein's Starship Troopers" in 1976, a map-and-counter board wargame featuring a number of scenarios as written in the novel. In 1998, Mythic Entertainment released "". The web-based interactive game, in which players battled each other in overhead space combat and could assume either Klendathu or Federation roles, was developed alongside the film adaptation. "" was released by Mongoose Publishing in 2005, a miniature wargame which used material from the novel, film, and animated TV series. Spectre Media released "Starship Troopers: Invasion Mobile Infantry", a game for PCs, in 2012.
Notes
Bibliography
Telephone switchboard
Throughout the 20th century, telephone switchboards were devices used to connect circuits of telephones to establish telephone calls between users and/or other switchboards. The switchboard was an essential component of a manual telephone exchange, and was operated by switchboard operators who used electrical cords or switches to establish the connections.
The electromechanical automatic telephone exchange, invented by Almon Strowger in 1888, gradually replaced manual switchboards in central telephone exchanges around the world. In 1919, the Bell System in Canada also adopted automatic switching as its future technology, after years of reliance on manual systems.
Nevertheless, many manual branch exchanges remained operational into the second half of the 20th century in many enterprises. Later electronic devices and computer technology gave the operator access to an abundance of features. A private branch exchange (PBX) in a business usually has an attendant console for the operator, or an auto-attendant, which bypasses the operator entirely.
Following the invention of the telephone in 1876, the first telephones were rented in pairs which were limited to conversation between the parties operating those two instruments. The use of a central exchange was soon found to be even more advantageous than in telegraphy. In January 1878 the Boston Telephone Dispatch company had started hiring boys as telephone operators. Boys had been very successful as telegraphy operators, but their attitude, lack of patience, and behavior was unacceptable for live telephone contact, so the company began hiring women operators instead. Thus, on September 1, 1878, Boston Telephone Dispatch hired Emma Nutt as the first woman operator. Small towns typically had the switchboard installed in the operator's home so that he or she could answer calls on a 24-hour basis. In 1894, New England Telephone and Telegraph Company installed the first battery-operated switchboard on January 9 in Lexington, Massachusetts.
Early switchboards in large cities usually were mounted floor to ceiling in order to allow the operators to reach all the lines in the exchange. The operators were boys who would use a ladder to connect to the higher jacks. Late in the 1890s this measure failed to keep up with the increasing number of lines, and Milo G. Kellogg devised the Divided Multiple Switchboard for operators to work together, with a team on the "A board" and another on the "B". These operators were almost always women until the early 1970s, when men were once again hired. Cord switchboards were often referred to as "cordboards" by telephone company personnel. Conversion to panel switches and other automated switching systems first eliminated the "B" operator and then, usually years later, the "A". Rural and suburban switchboards for the most part remained small and simple. In many cases, customers came to know their operator by name.
As telephone exchanges converted to automatic (dial) service, switchboards continued to serve specialized purposes. Before the advent of direct-dialed long-distance calls, a subscriber would need to contact the long-distance operator in order to place a toll call. In large cities, there was often a special number, such as 112, which would ring the long-distance operator directly. Elsewhere, the subscriber would ask the local operator to ring the long-distance operator.
The long-distance operator would record the name and city of the person to be called, and the operator would advise the calling party to hang up and wait for the call to be completed. Each toll center had only a limited number of trunks to distant cities, and if those circuits were busy, the operator would try alternate routings through intermediate cities. The operator would plug into a trunk for the destination city, and the inward operator would answer. The inward operator would obtain the number from the local information operator, and ring the call. Once the called party answered, the originating operator would advise him or her to stand by for the calling party, whom she'd then ring back, and record the starting time, once the conversation began.
In the 1940s, with the advent of dial pulse and multi-frequency operator dialing, the operator would plug into a tandem trunk and dial the NPA (area code) and operator code for the information operator in the distant city. For instance, the New York City information operator was 212-131. If the customer knew the number, and the point was direct-dialable, the operator would dial the call. If the distant city did not have dialable numbers, the operator would dial the code for the inward operator serving the called party, and ask her to ring the number.
In the 1960s, once most phone subscribers had direct long-distance dialing, a single type of operator began to serve both the local and long-distance functions. A customer might call to request a collect call, a call billed to a third number, or a person-to-person call. All toll calls from coin phones required operator assistance. The operator was also available to help complete a local or long-distance number which did not complete. For example, if a customer encountered a reorder tone (a fast busy signal), it could indicate "all circuits busy," or a problem in the destination exchange. The operator might be able to use a different routing to complete the call. If the operator could not get through by dialing the number, she could call the inward operator in the destination city, and ask her to try the number, or to test a line to see if it was busy or out of order.
Cord switchboards used for these purposes were replaced in the 1970s and 1980s by TSPS and similar systems, which greatly reduced operator involvement in calls. The customer would, instead of simply dialing "0" for the operator, dial 0+NPA+7digits, after which an operator would answer and provide the desired service (coin collection, obtaining acceptance on a collect call, etc.), and then release the call to be automatically handled by the TSPS.
Before the late 1970s and early 1980s, it was common for many smaller cities to have their own operators. An NPA (area code) would usually have its largest city as its primary toll center, with smaller toll centers serving the secondary cities scattered throughout the NPA. TSPS allowed telephone companies to close smaller toll centers and consolidate operator services in regional centers which might be hundreds of miles from the subscriber.
The switchboard is usually designed to accommodate the operator, who sits facing it. It has a high back panel, which consists of rows of female jacks, each jack designated and wired as a local extension of the switchboard (which serves an individual subscriber) or as an incoming or outgoing trunk line. The jack is also associated with a lamp.
On the table or desk area in front of the operator are columns of 3-position toggle switches termed "keys", lamps, and cords. Each column consists of a front key and a rear key, a front lamp and a rear lamp, followed by a front cord and a rear cord, making up together a cord circuit. The front key is the "talk" key allowing the operator to speak with that particular cord pair. The rear key on older "manual" boards and PBXs is used to ring a telephone physically. On newer boards, the back key is used to collect (retrieve) money from coin telephones. Each of the keys has three positions: back, normal and forward. When a key is in the normal position an electrical talk path connects the front and rear cords. A key in the forward position (front key) connects the operator to the cord pair, and a key in the back position sends a ring signal out on the cord (on older manual exchanges). Each cord has a three-wire TRS phone connector: tip and ring for testing, ringing and voice; and a sleeve wire for busy signals.
When a call is received, a jack lamp lights on the back panel and the operator responds by placing the rear cord into the corresponding jack and throwing the front key forward. The operator then converses with the caller, who informs the operator to whom he or she would like to speak. If it is another extension, the operator places the front cord in the associated jack and pulls the front key backwards to ring the called party. After connecting, the operator leaves both cords "up" with the keys in the normal position so the parties can converse. The supervision lamps light to alert the operator when the parties finish their conversation and go on-hook. Either party could "flash" the operator's supervision lamps by depressing their switch hook for a second and releasing it, in case they needed assistance with a problem. When the operator pulls down a cord, a pulley weight behind the switchboard pulls it down to prevent it from tangling.
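The call-handling sequence described above can be summarized as a simple state machine for a single cord circuit. The following Python sketch is purely illustrative: the state and event names are invented for this example and are not telephone-industry terminology.

```python
from enum import Enum, auto

class CordCircuit(Enum):
    IDLE = auto()         # cords down, jack lamps dark
    ANSWERING = auto()    # rear cord in caller's jack, front key thrown forward
    RINGING = auto()      # front cord in called jack, key pulled back to ring
    CONNECTED = auto()    # both keys in the normal position, parties talking
    SUPERVISING = auto()  # supervision lamps lit, parties have gone on-hook

def next_state(state: CordCircuit, event: str) -> CordCircuit:
    """Hypothetical model of the manual-board call flow; unknown
    events leave the circuit in its current state."""
    transitions = {
        (CordCircuit.IDLE, "jack_lamp_lit"): CordCircuit.ANSWERING,
        (CordCircuit.ANSWERING, "number_obtained"): CordCircuit.RINGING,
        (CordCircuit.RINGING, "called_party_answers"): CordCircuit.CONNECTED,
        (CordCircuit.CONNECTED, "parties_hang_up"): CordCircuit.SUPERVISING,
        (CordCircuit.SUPERVISING, "cords_pulled_down"): CordCircuit.IDLE,
    }
    return transitions.get((state, event), state)
```

Walking the circuit through one complete call returns it to the idle state, mirroring the pulley weight retracting the cords at the end of a conversation.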
On a trunk, on-hook and off-hook signals must pass in both directions. In a one-way trunk, the originating or A board sends a short for off-hook, and an open for on-hook, while the terminating or B board sends normal polarity or reverse polarity. This "reverse battery" signaling was carried over to later automatic exchanges.
Space exploration
Space exploration is the use of astronomy and space technology to explore outer space. While the exploration of space is carried out mainly by astronomers with telescopes, its physical exploration is conducted both by unmanned robotic space probes and by human spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science.
While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.
The early era of space exploration was driven by a "Space Race" between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union's Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard "Vostok 1") in 1961, the first spacewalk (by Alexei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station ("Salyut 1") in 1971.
After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).
With the substantial completion of the ISS following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020, was judged inadequately funded and unrealistic by an expert review panel reporting in 2009.
The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions to beyond LEO, such as Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.
In the 2000s, the People's Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 21st century.
From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).
The first telescope was said to be invented in 1608 in the Netherlands by an eyeglass maker named Hans Lippershey. The Orbiting Astronomical Observatory 2 was the first space telescope, launched on December 7, 1968. As of February 2, 2019, there were 3,891 confirmed exoplanets discovered. The Milky Way is estimated to contain 100–400 billion stars and more than 100 billion planets. There are at least 2 trillion galaxies in the observable universe. GN-z11 is the most distant known object from Earth, reported as 32 billion light-years away in comoving distance.
In 1949, the Bumper-WAC reached an altitude of , becoming the first human-made object to enter space, according to NASA, although V-2 Rocket MW 18014 crossed the Kármán line earlier, in 1944.
The first successful orbital launch was of the Soviet uncrewed "Sputnik 1" ("Satellite 1") mission on 4 October 1957. The satellite weighed about , and is believed to have orbited Earth at a height of about . It had two radio transmitters (20 and 40 MHz), which emitted "beeps" that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. "Sputnik 1" was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.
The first successful human spaceflight was "Vostok 1" ("East 1"), carrying 27-year-old Soviet cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin's flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.
The first artificial object to reach another celestial body was Luna 2 reaching the Moon in 1959. The first soft landing on another celestial body was performed by Luna 9 landing on the Moon on February 3, 1966. Luna 10 became the first artificial satellite of the Moon, entering Moon Orbit on April 3, 1966.
The first crewed landing on another celestial body was performed by Apollo 11 on July 20, 1969, landing on the Moon. There have been a total of six crewed spacecraft landings on the Moon, from the first in 1969 to the last human landing in 1972.
The first interplanetary flyby was the 1961 Venera 1 flyby of Venus, though the 1962 Mariner 2 was the first flyby of Venus to return data (closest approach 34,773 kilometers). Pioneer 6 was the first satellite to orbit the Sun, launched on December 16, 1965. The other planets were first flown by in 1965 for Mars by Mariner 4, 1973 for Jupiter by "Pioneer 10", 1974 for Mercury by Mariner 10, 1979 for Saturn by "Pioneer 11", 1986 for Uranus by "Voyager 2", 1989 for Neptune by "Voyager 2". In 2015, the dwarf planets Ceres and Pluto were orbited by "Dawn" and passed by "New Horizons", respectively. This accounts for flybys of each of the eight planets in our Solar System, the Sun, the Moon and Ceres & Pluto (2 of the 5 recognized dwarf planets).
The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7, which returned data to Earth for 23 minutes from Venus. In 1975 the Venera 9 was the first to return images from the surface of another planet, returning images from Venus. In 1971 the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Later much longer duration surface missions were achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission. Venus and Mars are the only planets other than Earth on which humans have conducted surface missions, using unmanned robotic spacecraft.
Salyut 1 was the first space station of any kind, launched into low Earth orbit by the Soviet Union on April 19, 1971. The International Space Station is currently the only fully functional space station, and has been continuously inhabited since the year 2000.
"Voyager 1" became the first human-made object to leave our Solar System into interstellar space on August 25, 2012. The probe passed the heliopause at 121 AU to enter interstellar space.
In 1970, the Apollo 13 flight passed the far side of the Moon at an altitude of above the lunar surface, and 400,171 km (248,655 mi) from Earth, marking the record for the farthest humans have ever traveled from Earth.
"Voyager 1" is currently at a distance of (21.708 billion kilometers; 13.489 billion miles) from Earth as of January 1, 2019. It is the most distant human-made object from Earth.
GN-z11 is the most distant known object from Earth, with a light-travel distance of 13.4 billion light-years, corresponding to a present comoving distance of about 32 billion light-years.
The dream of stepping into the outer reaches of Earth's atmosphere was driven by the fiction of Jules Verne and H. G. Wells, and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961 when the Soviet Union launched the first man into space, the United States declared itself to be in a "Space Race" with the Soviets.
Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.
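The theoretical foundation laid by these pioneers is captured by Tsiolkovsky's rocket equation, which relates a rocket's achievable change in velocity to its exhaust velocity and mass ratio. A brief illustrative calculation follows; the specific-impulse and mass figures are arbitrary examples chosen for the sketch, not data for any particular rocket.

```python
import math

def tsiolkovsky_delta_v(isp_s: float, m0: float, mf: float) -> float:
    """Ideal rocket equation: delta-v = g0 * Isp * ln(m0 / mf),
    where m0 is the fully fuelled mass and mf the empty mass."""
    g0 = 9.80665  # standard gravity, m/s^2
    return g0 * isp_s * math.log(m0 / mf)

# Illustrative figures: specific impulse 250 s, 13 t fuelled, 4 t empty.
dv = tsiolkovsky_delta_v(250.0, 13000.0, 4000.0)  # roughly 2.9 km/s
```

The logarithmic dependence on the mass ratio is what drove early designers toward staged rockets: no single stage with realistic structural mass can reach orbital velocity on chemical propellants alone.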
Wernher von Braun was the lead rocket engineer for Nazi Germany's World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development ("Operation Paperclip"). He acquired American citizenship and led the team that developed and launched "Explorer 1", the first American satellite. Von Braun later led the team at NASA's Marshall Space Flight Center which developed the Saturn V moon rocket.
Initially the race for space was often led by Sergei Korolev, whose legacy includes both the R7 and Soyuz—which remain in service to this day. Korolev was the mastermind behind the first satellite, first man (and first woman) in orbit and first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.
Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight ("Vostok 1") alongside Sergey Korolev. After Korolev's death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.
Other key people:
Starting in the mid-20th century, probes and then human missions were sent into Earth orbit, and then on to the Moon. Probes were also sent throughout the known Solar System, and into solar orbit. Unmanned spacecraft had been sent into orbit around Saturn, Jupiter, Mars, Venus, and Mercury by the 21st century, and the most distant active spacecraft, Voyager 1 and Voyager 2, have traveled more than 100 times the Earth–Sun distance. Their instruments have remained operational long enough that both are thought to have left the Sun's heliosphere, a sort of bubble of particles blown into the galaxy by the solar wind.
The Sun is a major focus of space exploration. Being above the atmosphere in particular and Earth's magnetic field gives access to the solar wind and to infrared and ultraviolet radiation that cannot reach Earth's surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched, and still others have had solar observation as a secondary objective. Parker Solar Probe, launched in 2018, will approach the Sun to within one-eighth of the orbital radius of Mercury.
Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and "MESSENGER" missions have been the only missions that have made close observations of Mercury. "MESSENGER" entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).
A third mission to Mercury, scheduled to arrive in 2025, BepiColombo is to include two probes. BepiColombo is a joint mission between Japan and the European Space Agency. "MESSENGER" and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10's flybys.
Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first flyby was the 1961 Venera 1, though the 1962 Mariner 2 was the first flyby to successfully return data. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9 some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.
Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.
For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States' first artificial satellite, "Explorer 1". These belts contain radiation trapped by Earth's magnetic fields, which currently renders construction of habitable space stations above 1000 km impractical.
Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth's atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.
The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.
In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon's surface; just four months later, "Surveyor 1" marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.
Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the sixth landing and the most recent human visit there. Artemis 2 will fly by the Moon in 2022. Robotic missions are still pursued vigorously.
The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.
The exploration of Mars has come at a considerable financial cost with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, with some failing before they even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of "The Great Galactic Ghoul" which subsists on a diet of Mars probes. This phenomenon is also informally known as the "Mars Curse".
In contrast to overall high failure rates in the exploration of Mars, India has become the first country to achieve success on its maiden attempt. India's Mars Orbiter Mission (MOM) is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of 450 crore. The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The uncrewed exploratory probe has been named "Hope Probe" and will be sent to Mars to study its atmosphere in detail.
SpaceX CEO Elon Musk hopes that the SpaceX Starship will explore Mars.
The Russian space mission Fobos-Grunt, which launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit. It was intended to explore Phobos from Martian orbit and to study whether the moons of Mars, or at least Phobos, could serve as a "trans-shipment point" for spaceships traveling to Mars.
The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been "flybys", in which detailed observations are taken without the probe landing or entering orbit; such as in Pioneer and Voyager programs. The "Galileo" and "Juno" spacecraft are the only spacecraft to have entered the planet's orbit. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is precluded.
Reaching Jupiter from Earth requires a delta-v of 9.2 km/s, which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit. Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.
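The energy cost described above can be approximated from first principles with a Hohmann transfer calculation. The sketch below computes only the heliocentric speed gain needed to enter an idealized Earth-to-Jupiter transfer ellipse, ignoring planetary gravity wells and launch from Earth's surface, so it yields a somewhat lower value (roughly 8.8 km/s) than the 9.2 km/s quoted above; the orbital radii are treated as circular, another simplification.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def hohmann_departure_dv(r1: float, r2: float) -> float:
    """Heliocentric speed gain at radius r1 needed to enter a Hohmann
    transfer ellipse whose far end reaches radius r2 (vis-viva equation)."""
    v_circ = math.sqrt(MU_SUN / r1)               # circular orbital speed at r1
    a = (r1 + r2) / 2.0                           # semi-major axis of transfer
    v_transfer = math.sqrt(MU_SUN * (2.0 / r1 - 1.0 / a))
    return v_transfer - v_circ

dv = hohmann_departure_dv(1.0 * AU, 5.2 * AU)     # Earth to Jupiter, ~8.8 km/s
```

The same function with a smaller inner radius shows why Mercury, despite its proximity, is expensive to reach: the spacecraft must shed most of Earth's orbital velocity rather than add to it.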
Jupiter has 79 known moons, many of which remain poorly characterized.
Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission ("Cassini–Huygens") planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by "Pioneer 11", in 1980 by "Voyager 1", in 1982 by "Voyager 2" and an orbital mission by the "Cassini" spacecraft, which lasted from 2004 until 2017.
Saturn has at least 62 known moons, although the exact number is debatable since Saturn's rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan holds the distinction of being the only object in the Outer Solar System that has been explored with a lander, the "Huygens" probe deployed by the "Cassini" spacecraft.
The exploration of Uranus has been entirely through the "Voyager 2" spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. "Voyager 2" studied the planet's unique atmosphere and magnetosphere. "Voyager 2" also examined its ring system and the moons of Uranus including all five of the previously known moons, while discovering an additional ten previously unknown moons.
Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet's unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the Moons of Uranus, including evidence that Miranda had been unusually geologically active.
The exploration of Neptune began with the 25 August 1989 "Voyager 2" flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.
Although the extremely uniform appearance of Uranus during "Voyager 2"s visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter's Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h. "Voyager 2" also examined Neptune's ring and moon system. It discovered complete rings and additional partial ring "arcs" around Neptune. In addition to examining Neptune's three previously known moons, "Voyager 2" also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from "Voyager 2" supported the view that Neptune's largest moon, Triton, is a captured Kuiper belt object.
The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). "Voyager 1" could have visited Pluto, but controllers opted instead for a close flyby of Saturn's moon Titan, resulting in a trajectory incompatible with a Pluto flyby. "Voyager 2" never had a plausible trajectory for reaching Pluto.
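The trip-time constraint described above can be illustrated with a rough back-of-envelope calculation. This is only a sketch: the 39 AU mean distance is a round figure for Pluto's average orbital distance, and the cruise speeds are illustrative values, not data from any actual mission profile.

```python
# Rough illustration of why reaching Pluto demands high velocity.
# Assumed round figures: Pluto's mean distance ~39 AU; 1 AU = 1.496e8 km.
AU_KM = 1.496e8
distance_km = 39 * AU_KM            # roughly 5.8 billion km
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for speed_kms in (8, 14, 16):       # hypothetical straight-line cruise speeds, km/s
    years = distance_km / speed_kms / SECONDS_PER_YEAR
    print(f"{speed_kms} km/s -> about {years:.1f} years")
# 8 km/s  -> about 23.1 years
# 14 km/s -> about 13.2 years
# 16 km/s -> about 11.6 years
```

Even this simplified constant-speed estimate shows why a flyby mission like "New Horizons" needed both a fast launch and a Jupiter gravity assist to keep the trip under a decade.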
Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as "plutinos"). After an intense political battle, a mission to Pluto dubbed "New Horizons" was granted funding from the United States government in 2003. "New Horizons" was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.
Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery.
Several asteroids have now been visited by probes, the first of which was "Galileo", which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to "Galileo"'s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the "NEAR Shoemaker" probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA's "Dawn" spacecraft, launched in 2007.
Although many comets have been studied from Earth, sometimes with centuries' worth of observations, only a few comets have been closely visited. In 1985, the "International Cometary Explorer" conducted the first comet flyby (21P/Giacobini–Zinner) before joining the Halley Armada studying that famous comet. The "Deep Impact" probe smashed into 9P/Tempel to learn more about its structure and composition, and the "Stardust" mission returned samples of another comet's tail. The "Philae" lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader "Rosetta" mission.
"Hayabusa" was a robotic spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, "Hayabusa" studied the asteroid's shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid twice to collect samples. The spacecraft returned to Earth on 13 June 2010.
Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space. Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.
Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion. The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.
Breakthrough Starshot is a research and engineering project by the Breakthrough Initiatives to develop a proof-of-concept fleet of light sail spacecraft named "StarChip", to be capable of making the journey to the Alpha Centauri star system 4.37 light-years away. It was founded in 2016 by Yuri Milner, Stephen Hawking, and Mark Zuckerberg.
An article in science magazine "Nature" suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars. In order to make such an approach viable, three requirements need to be fulfilled: first, "a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit"; second, "extending flight duration and distance capability to ever-increasing ranges out to Mars"; and finally, "developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin." Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them without great risk to radiation exposure.
The James Webb Space Telescope (JWST or "Webb") is a space telescope that is planned to be the successor to the Hubble Space Telescope. The JWST will provide greatly improved resolution and sensitivity over the Hubble, and will enable a broad range of investigations across the fields of astronomy and cosmology, including observing some of the most distant events and objects in the universe, such as the formation of the first galaxies. Other goals include understanding the formation of stars and planets, and direct imaging of exoplanets and novas.
The primary mirror of the JWST, the Optical Telescope Element, is composed of 18 hexagonal mirror segments made of gold-plated beryllium which combine to create a 6.5-meter-diameter mirror, much larger than the Hubble's mirror. Unlike the Hubble, which observes in the near ultraviolet, visible, and near infrared (0.1 to 1 μm) spectra, the JWST will observe in a lower frequency range, from long-wavelength visible light through mid-infrared (0.6 to 27 μm), which will allow it to observe high-redshift objects that are too old and too distant for the Hubble to observe. The telescope must be kept very cold in order to observe in the infrared without interference, so it will be deployed in space near the Sun–Earth L2 Lagrangian point, and a large sunshield made of silicon- and aluminum-coated Kapton will keep its mirror and instruments below 50 K.
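The redshift point can be made concrete with the standard wavelength-stretch relation λ_obs = λ_emit × (1 + z). The sketch below uses an illustrative rest wavelength of 0.5 μm (visible light); the 0.6–27 μm band is the JWST range quoted above.

```python
# Cosmological redshift stretches wavelengths: lambda_obs = lambda_emit * (1 + z).
# A galaxy emitting visible light (0.5 um) at z = 10 is observed at 5.5 um --
# outside Hubble's 0.1-1 um range, but well inside JWST's 0.6-27 um band.
def observed_wavelength(rest_um, z):
    """Wavelength (micrometres) at which light emitted at rest_um arrives from redshift z."""
    return rest_um * (1 + z)

rest = 0.5                          # illustrative rest-frame visible wavelength, um
for z in (0, 1, 10):
    lam = observed_wavelength(rest, z)
    in_jwst_band = 0.6 <= lam <= 27  # JWST band quoted in the text
    print(f"z={z}: observed {lam:.1f} um, within JWST band: {in_jwst_band}")
```

This is why an infrared telescope, rather than a bigger optical one, is needed to see the formation of the first galaxies.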
The Artemis program is an ongoing crewed spaceflight program carried out by NASA, U.S. commercial spaceflight companies, and international partners such as ESA, with the goal of landing "the first woman and the next man" on the Moon, specifically at the lunar south pole region by 2024. Artemis would be the next step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for private companies to build a lunar economy, and eventually sending humans to Mars.
In 2017, the lunar campaign was authorized by Space Policy Directive 1, utilizing various ongoing spacecraft programs such as Orion, the Lunar Gateway, Commercial Lunar Payload Services, and adding an undeveloped crewed lander. The Space Launch System will serve as the primary launch vehicle for Orion, while commercial launch vehicles are planned for use to launch various other elements of the campaign. NASA requested $1.6 billion in additional funding for Artemis for fiscal year 2020, while the Senate Appropriations Committee requested from NASA a five-year budget profile which is needed for evaluation and approval by Congress.
The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs often showed ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program. It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars' worth of minerals and metals; such expeditions could generate significant revenue. In addition, it has been argued that space exploration programs help inspire youth to study science and engineering. Space exploration also gives scientists the ability to perform experiments in other settings and expand humanity's knowledge.
Another claim is that space exploration is a necessity to mankind and that staying on Earth will lead to extinction. Some of the reasons are lack of natural resources, comets, nuclear war, and worldwide epidemic. Stephen Hawking, renowned British theoretical physicist, said that "I don't think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I'm an optimist. We will reach out to the stars." Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph "Interplanetary Flight". He argued that humanity's choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.
NASA has produced a series of public service announcement videos supporting the concept of space exploration.
Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is "a good investment", compared to 21% who did not.
"Spaceflight" is the use of space technology to achieve the flight of spacecraft into and through outer space.
Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.
A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
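The velocities a launch must achieve can be sketched with the textbook formulas of astrodynamics: circular orbital speed v = √(GM/r) and escape speed v = √(2GM/r). The 400 km altitude below is an arbitrary example of a low Earth orbit, not a value from the text.

```python
import math

# Circular orbital speed v = sqrt(GM/r) and escape speed v = sqrt(2GM/r).
GM_EARTH = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6            # mean Earth radius, m

r_leo = R_EARTH + 400e3      # example: a 400 km low Earth orbit
v_circular = math.sqrt(GM_EARTH / r_leo)        # speed needed to stay in orbit
v_escape = math.sqrt(2 * GM_EARTH / R_EARTH)    # speed needed to leave Earth entirely

print(f"circular orbit at 400 km: {v_circular / 1000:.2f} km/s")
print(f"escape from the surface:  {v_escape / 1000:.2f} km/s")
# circular orbit at 400 km: 7.67 km/s
# escape from the surface:  11.19 km/s
```

The gap between roughly 7.7 km/s for orbit and 11.2 km/s for escape is why missions bound for other planets need substantially more powerful launches than satellites bound for low Earth orbit.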
Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.
Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.
Private spaceflight companies such as SpaceX and Blue Origin, and commercial space stations such as Axiom Station and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.
Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology. It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: έξω, "exo", "outside"). The term "xenobiology" has been used as well, but this is technically incorrect because the term means "biology of the foreigners". Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth. In the Solar System some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.
To date, the longest human occupation of space is the International Space Station, which has been continuously occupied since November 2000. Valeri Polyakov's record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. The health effects of space have been well documented through years of research conducted in the field of aerospace medicine. Analog environments similar to those one may experience in space travel (like deep sea submarines) have been used in this research to further explore the relationship between isolation and extreme environments. It is imperative that the health of the crew be maintained, as any deviation from baseline may compromise the integrity of the mission as well as the safety of the crew, which is why astronauts must endure rigorous medical screenings and tests before embarking on any mission. However, it does not take long for the environmental dynamics of spaceflight to take their toll on the human body; for example, space motion sickness (SMS), a condition which affects the neurovestibular system and culminates in mild to severe signs and symptoms such as vertigo, dizziness, fatigue, nausea, and disorientation, plagues almost all space travelers within their first few days in orbit. Space travel can also have a profound impact on the psyche of crew members, as delineated in anecdotal writings composed after their retirement. Space travel can adversely affect the body's natural biological clock (circadian rhythm) and sleep patterns, causing sleep deprivation and fatigue, and can disrupt social interaction; consequently, residing in a low Earth orbit (LEO) environment for a prolonged period can result in both mental and physical exhaustion. Long-term stays in space also reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.
The lack of gravity causes fluid to rise upward which can cause pressure to build up in the eye, resulting in vision problems; the loss of bone minerals and densities; cardiovascular deconditioning; and decreased endurance and muscle mass.
Radiation is perhaps the most insidious health hazard to space travelers, as it is invisible to the naked eye and can cause cancer. Spacecraft are no longer protected from the Sun's radiation once they are positioned above the Earth's magnetic field, and the danger of radiation is even more potent in deep space. The hazards of radiation can be ameliorated through protective shielding on the spacecraft, alerts, and dosimetry.
Fortunately, with new and rapidly evolving technological advancements, those in Mission Control are able to monitor the health of their astronauts more closely utilizing telemedicine. One may not be able to completely evade the physiological effects of space flight, but they can be mitigated. For example, medical systems aboard space vessels such as the International Space Station (ISS) are well equipped and designed to counteract the effects of lack of gravity and weightlessness; on-board treadmills can help prevent muscle loss and reduce the risk of developing premature osteoporosis. Additionally, a crew medical officer is appointed for each ISS mission and a flight surgeon is available 24/7 via the ISS Mission Control Center located in Houston, Texas. Although the interactions are intended to take place in real time, communications between the space and terrestrial crew may become delayed - sometimes by as much as 20 minutes - as their distance from each other increases when the spacecraft moves further out of LEO; because of this the crew are trained and need to be prepared to respond to any medical emergencies that may arise on the vessel as the ground crew are hundreds of miles away. As one can see, travelling and possibly living in space poses many challenges. Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a "stepping stone" to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.
Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which has been ratified by all spacefaring nations.
Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.
The first woman ever to enter space was Valentina Tereshkova. She flew in 1963, but it was not until the 1980s that another woman entered space. At the time, all astronauts were required to be military test pilots, a career closed to women, which is one reason for the delay. After the rule changed, Svetlana Savitskaya, also from the Soviet Union, became the second woman in space. Sally Ride followed as the next woman in space and the first to fly through the United States program.
Since then, eleven other countries have flown women astronauts, though changes in national space programs to admit women have been slow.
The first all-female spacewalk occurred in 2019, carried out by Christina Koch and Jessica Meir; both had previously participated in separate spacewalks with NASA. NASA plans to land the first woman on the Moon in 2024.
Despite these developments, women are still underrepresented among astronauts and especially cosmonauts. Issues that block potential applicants from the programs and limit the space missions they are able to fly include, for example:
Additionally, women have been treated in discriminatory ways; Sally Ride, for example, was scrutinized more closely than her male counterparts and asked sexist questions by the press.
Artistry in and from space ranges from signals and captured or arranged material, like Yuri Gagarin's selfie in space or the image "The Blue Marble", through drawings, such as the first made in space by cosmonaut and artist Alexei Leonov, and music videos, like Chris Hadfield's cover of "Space Oddity" aboard the ISS, to permanent installations on celestial bodies such as on the Moon.
Outline of space science
The following outline is provided as an overview of and topical guide to space science:
Space science encompasses all of the scientific disciplines that involve space exploration and study natural phenomena and physical bodies occurring in outer space, such as space medicine and astrobiology.
See astronomical object for a list of specific types of entities which scientists study. See Earth's location in the universe for an orientation.
Astronautics – science and engineering of spacefaring and spaceflight, a subset of Aerospace engineering (which includes atmospheric flight)
Shepherd Neame Brewery
Shepherd Neame is an English independent brewery founded in 1698 in Faversham, Kent, and family-owned since 1864. The brewery produces a range of cask ales and filtered beers. Production is around 210,000 brewers' barrels a year. It owns 328 pubs and hotels, predominantly in Kent, London and South East England. The company exports to more than 35 countries including India, Sweden, Italy, Brazil and Canada.
The Neame family were relative latecomers in the overall development of the Shepherd Neame Brewery but, as substantial property owners in the district, Charles Neame of Harefield Court and John Neame of Selling Court were acknowledged to be among the most valuable hop growers in East Kent. Theo Barker explains in the official account of the brewery that it all began with a Captain Richard Marsh, who in 1678 is recorded in the Faversham "Wardmote Books" as contributing by far the largest of the ‘Brewers Fines’ made at that date.
Shepherd Neame as such is reported as having been established in 1698, in an advertisement of the "Kentish Gazette" for 11 April 1865. Richard Marsh lived until 1727 when his Brewery was bequeathed to his widow, and then to his daughter, who sold the property on to Samuel Shepherd around 1741. Samuel Shepherd was from Deal, Kent. He had an interest in malting when he moved to Faversham around 1730 and had established himself as a Brewer of Malt by 1734. Shepherd expanded on his interest, through acquiring a number of public houses, but it was his son, Julius Shepherd, who extended this trend still further upon his inheritance of the Brewery in 1770, when the company held four such outlets. In 1789, he set about modernising the process of malt grinding and pumping, which had been previously worked with the employment of horses, by introducing what was reputed to be the first steam engine (Boulton and Watt) to be used for this purpose outside London, and was then able to describe his business as the "Faversham Steam Brewery".
Henry, his second son, born in 1780, continued the family tradition, and raised his son of the same name into the business. It was this Henry Shepherd (1816~77) who was to be the last of the Shepherds actively involved in the Company. The death of Henry senior at the age of eighty-two occurred in 1862 and although his own son was not a businessman of the same determination, the firm's expansion continued adequately with John Mares, who had come to the financial assistance of the Shepherd Brewery during the recession of the mid-1840s and continued as the impetus behind "Shepherd and Mares" until Percy Beale Neame joined the Brewery in 1864. Mares had seen the potential of the Brewery's growth with the arrival of the long delayed railway service in 1858. He pressed the firm to actively prepare for such growth. Horse-drawn drays were used to carry the Brewery's ales throughout Kent, and malts were imported by barge at Faversham Creek at its own wharf which was also used as the means to deliver its product to London, until the 1850s when steamboats were beginning to prove more expeditious to the task. The railways soon even outpaced and replaced the steamboats.
Mares' unexpected death at the age of 45 in 1864 placed Percy Neame, at the age of 28, as the stronger partner with Henry Shepherd, and with the challenge left to him in Mares' successful expansion programme he brought the Faversham Brewery well into the Neame family's dominion.
Shepherd Neame have embraced 21st-century brewing techniques, for instance using PDX Reactor Technology for the heat treatment of wort, rather than the traditional method, using a calandria. This has led to a reduction in energy consumption of 50%.
Along with the Three Tuns Brewery in Shropshire, Shepherd Neame claims to be the oldest brewery in Great Britain. Three Tuns was licensed in 1642, 56 years earlier than Shepherd Neame. However, there is evidence that brewing has taken place on the Shepherd Neame site since at least 1573, over a century before the establishment of the current brewery.
Shepherd Neame has been making beer in Faversham, Kent, for more than 300 years. It claims to use traditional methods and 100% natural ingredients. The brewery uses chalk-filtered mineral water from the brewery's own artesian well, deep below the brewery, and 93% of the hops used in its beers are grown in Kent. Centuries of brewing experience have been passed down to the current team of brewers, who still use many traditional methods, including handcrafting beer in the UK's last remaining unlined solid oak mash tuns.
The beer is named after the Supermarine Spitfire aircraft designed by R. J. Mitchell. Winner of a Gold Medal and Best Strong Cask-Conditioned Beer of the World at the Brewing Industry International Awards, Spitfire has Protected Geographical Indication, the same regional produce protection afforded to Champagne and Parma Ham.
Shepherd Neame has created a range of innovative limited edition beers under its No.18 Yard Brewhouse brand, using experimental brewing ingredients:
Shepherd Neame originally adopted the Faversham Steam Brewery moniker in the late 18th century when it became one of the first steam-powered breweries outside London. The brewery bought a five horse power engine from steam pioneers Boulton and Watt which powered all processes on site, pairing the revolutionary machinery with the finest local ingredients to create exceptional beers. It has now revived the title to reflect the traditional provenance of the Whitstable Bay collection.
See also Whitstable Bay Black Oyster Stout and Whitstable Bay Blonde Lager under Keg.
Shepherd Neame produces brewery-conditioned draught beers which are brewed in exactly the same way as traditional, cask beers but filtered before being packaged into pressurised kegs. This ensures consistency of taste, and is the preferred option in bars where there is limited, or no, cellar space.
In addition to the bottled versions of some of its most popular beers such as Spitfire and Bishops Finger, Shepherd Neame also produces some beers in bottle only.
1698 Bottle Conditioned Kentish Strong Ale (bottle 6.5% abv). First brewed in 1998 to celebrate Shepherd Neame's tercentenary, 1698 is thrice hopped and bottle conditioned. A silver medal winner in the Taste of Britain Awards, 1698 has been included in the International Beer Challenge's World's Top 50 Beers and has won a Gold Award from the British Bottlers' Institute. It has Protected Geographical Indication, the same regional produce protection afforded to Champagne and Parma Ham.
The brewery also produces a range of lagers, mainly under licence, such as Samuel Adams Boston Lager, Holsten Export, Oranjeboom and Kingfisher, but also Hürlimann "Sternbrau Lager: Export Bier" [330 ml or 500 ml Bottle, 500 ml can, or draft keg; 4.8% ABV]. Shepherd Neame manufactures this beer in Britain and exports it to Europe. A bock style beer is also brewed. These are served on draught in the brewery's pubs and receive more frontage than non-brewery brands.
The brewery owns around 330 pubs and establishments, mostly in Kent, but extending across the South East of England. These are predominantly tenanted public houses situated in towns and villages. The brewery also manages its own chain of hotels, including The Royal Albion in Broadstairs and The George Hotel in Cranbrook, Kent. The brewery's own brands are typically given prominence in terms of frontage with extensive branding. All fonts and pumps bear the distinctive logos and branding, glasses are branded and bar runners that advertise the house beers are commonplace.
Saint
A saint is a person who is recognized as having an exceptional degree of holiness or likeness or closeness to God. However, the use of the term "saint" depends on the context and denomination. In Catholic, Eastern Orthodox, Anglican, Oriental Orthodox, and Lutheran doctrine, all of their faithful deceased in Heaven are considered to be saints, but some are considered worthy of greater honor or emulation; official ecclesiastical recognition, and consequently veneration, is given to some saints through the process of canonization in the Catholic Church or glorification in the Eastern Orthodox Church.
While the English word "saint" originated in Christianity, historians of religion now use the appellation "in a more general way to refer to the state of special holiness that many religions attribute to certain people", with the Jewish tzadik, the Islamic walī, the Hindu rishi or Sikh guru, the Shintoist kami, and the Buddhist arhat or bodhisattva also being referred to as saints. Depending on the religion, saints are recognized either by official ecclesiastical declaration, as in the Catholic faith, or by popular acclamation (see folk saint).
The English word "saint" comes from the Latin "sanctus". The word translated in Greek is "ἅγιος" ("hagios"), which means "holy". The word ἅγιος appears 229 times in the Greek New Testament, and its English translation 60 times in the corresponding text of the King James Version of the Bible.
The word "sanctus" was originally a technical one in ancient Roman religion, but due to its "globalized" use in Christianity the modern word "saint" in English and its equivalent in Romance languages is now also used as a translation of comparable terms for persons "worthy of veneration for their holiness or sanctity" in other religions.
Many religions also use similar concepts (but different terminology) to venerate persons worthy of some honor. Author John A. Coleman, SJ, of the Graduate Theological Union, Berkeley, California, wrote that saints across various cultures and religions have the following family resemblances:
The anthropologist Lawrence Babb in an article about Indian guru Sathya Sai Baba asks the question "Who is a saint?", and responds by saying that in the symbolic infrastructure of some religions, there is the image of a certain extraordinary spiritual king's "miraculous powers", to whom frequently a certain moral presence is attributed. These saintly figures, he asserts, are "the focal points of spiritual force-fields". They exert "powerful attractive influence on followers but touch the inner lives of others in transforming ways as well".
According to the Catholic Church, a "saint" is anyone in heaven, whether recognized on Earth or not, who form the "great cloud of witnesses" (Hebrews 12:1). These "may include our own mothers, grandmothers or other loved ones (cf. 2 Tim 1:5)" who may have not always lived perfect lives but "amid their faults and failings they kept moving forward and proved pleasing to the Lord". The title "Saint" denotes a person who has been formally canonized, that is, officially and authoritatively declared a saint, by the Church as holder of the Keys of the Kingdom of Heaven, and is therefore believed to be in Heaven by the grace of God. There are many persons that the Church believes to be in Heaven who have not been formally canonized and who are otherwise titled "saints" because of the fame of their holiness. Sometimes the word "saint" also denotes living Christians.
According to the Catechism of the Catholic Church Chapter 2, Article 1, 61, "The patriarchs, prophets, and certain other Old Testament figures have been and always will be honored as saints in all the church's liturgical traditions."
In his book "Saint of the Day", editor Leonard Foley, OFM, says this: the "[Saints'] surrender to God's love was so generous an approach to the total surrender of Jesus that the Church recognizes them as heroes and heroines worthy to be held up for our inspiration. They remind us that the Church is holy, can never stop being holy and is called to show the holiness of God by living the life of Christ."
The Catholic Church teaches that it does not "make" or "create" saints, but rather recognizes them. Proofs of heroic virtue required in the process of beatification will serve to illustrate in detail the general principles exposed above upon proof of their "holiness" or likeness to God.
On 3 January 993, Pope John XV became the first pope to proclaim a person a "saint" from outside the diocese of Rome: on the petition of the German ruler, he had canonized Bishop Ulrich of Augsburg. Before that time, the popular "cults", or venerations, of saints had been local and spontaneous and were confirmed by the local bishop. Pope John XVIII subsequently permitted a cult of five Polish martyrs. Pope Benedict VIII later declared the Armenian hermit Symeon to be a saint, but it was not until the pontificate of Pope Innocent III that the Popes reserved to themselves the exclusive authority to canonize saints, so that local bishops needed the confirmation of the Pope. Walter of Pontoise was the last person in Western Europe to be canonized by an authority other than the Pope: Hugh de Boves, the Archbishop of Rouen, canonized him in 1153. Thenceforth a decree of Pope Alexander III in 1170 reserved the prerogative of canonization to the Pope, insofar as the Latin Church was concerned.
One source claims that "there are over 10,000 named saints and beatified people from history, the Roman Martyrology and Orthodox sources, but no definitive head count".
Alban Butler published "Lives of the Saints" in 1756, including a total of 1,486 saints. The latest revision of this book, edited by Herbert Thurston and Donald Attwater, contains the lives of 2,565 saints. Monsignor Robert Sarno, an official of the Congregation for the Causes of Saints of the Holy See, expressed that it is impossible to give an exact number of saints.
The veneration of saints, in Latin "cultus", or the "cult of the Saints", describes a particular popular devotion or entrustment of one's self to a particular saint or group of saints. Although the term "worship" is sometimes used, it is only used with the older English connotation of honoring or respecting ("dulia") a person. According to the Church, Divine worship is in the strict sense reserved only to God ("latria") and never to the saints. One is permitted to ask the saints to intercede or pray to God for persons still on Earth, just as one can ask someone on Earth to pray for him.
A saint may be designated as a patron saint of a particular cause, profession, or locale, or invoked as a protector against specific illnesses or disasters, sometimes by popular custom and sometimes by official declarations of the Church. Saints are not believed to have power of their own, but only that granted by God. Relics of saints are respected, or "venerated", similar to the veneration of holy images and icons. The practice in past centuries of venerating relics of saints with the intention of obtaining healing from God through their intercession is taken from the early Church. For example, an American deacon claimed in 2000 that St John Henry Cardinal Newman (then blessed) interceded with God to cure him of a physical illness. The deacon, Jack Sullivan, asserted that after addressing Newman he was cured of spinal stenosis in a matter of hours. In 2009, a panel of theologians concluded that Sullivan's recovery was the result of his prayer to Newman. According to the Church, to be deemed a miracle, "a medical recovery must be instantaneous, not attributable to treatment, [and] disappear for good".
Once a person has been canonized, the deceased body of the saint is considered holy as a relic. The remains of saints are called holy relics and are usually used in churches. Saints' personal belongings may also be used as relics. Some of the saints have a special symbol by tradition, e.g., Saint Lawrence, deacon and martyr, is identified by a gridiron because he is believed to have been burned to death on one. This symbol is found, for instance, in the Canadian heraldry of the office responsible for the St. Lawrence Seaway.
Formal canonization is a lengthy process, often taking many years or even centuries. There are four major steps to becoming a saint. The first is an investigation of the candidate's life by an expert, after which an official report on the candidate is submitted to the bishop of the pertinent diocese and more study is undertaken. The information is then sent to the Congregation for the Causes of Saints of the Holy See for evaluation at the universal level of the Church. If the application is approved, the candidate may be granted the title "Venerable" (the second step). Further investigation (the third step) may lead to the candidate's beatification with the title "Blessed", which is elevation to the class of the "Beati". Next, proof of at least two miracles obtained from God through the intercession of the candidate is required for formal canonization as a saint; these miracles must be posthumous. Finally, once all of these procedures are complete, the Pope may canonize the candidate as a saint for veneration by the universal Church.
In the Eastern Orthodox Church a saint is defined as anyone who is in heaven, whether recognized here on earth or not. By this definition, Adam and Eve, Moses, and the various prophets, though not the angels and archangels, are all given the title of "Saint". Sainthood in the Orthodox Church does not necessarily reflect a moral model, but the communion with God: there are countless examples of people who lived in great sin and became saints by humility and repentance, such as Mary of Egypt, Moses the Ethiopian, and Dysmas, the repentant thief who was crucified. Therefore, a more complete Eastern Orthodox definition of what a saint is has to do with the way that saints, through their humility and their love of humankind, saved inside them the entire Church, and loved all people.
Orthodox belief considers that God reveals saints through answered prayers and other miracles. Saints are usually recognized by a local community, often by people who directly knew them. As their popularity grows they are often then recognized by the entire church. The word "canonization" means that a Christian has been found worthy to have his name placed in the canon (official list) of saints of the Church. The formal process of recognition involves deliberation by a synod of bishops. The Orthodox Church does not require the manifestation of miracles; what is required is evidence of a virtuous life.
If the ecclesiastical review is successful, this is followed by a service of Glorification in which the Saint is given a day on the church calendar to be celebrated by the entire church. This does not, however, make the person a saint; the person already was a saint and the Church ultimately recognized it.
As a general rule only clergy will touch relics in order to move them or carry them in procession, however, in veneration the faithful will kiss the relic to show love and respect toward the saint. The altar in an Orthodox church usually contains relics of saints, often of martyrs. Church interiors are covered with the Icons of saints. When an Orthodox Christian venerates icons of a saint he is venerating the image of God which he sees in the saint.
Because the Church shows no true distinction between the living and the dead (the saints are considered to be alive in Heaven), saints are referred to as if they were still alive. Saints are venerated but not worshiped. They are believed to be able to intercede for salvation and help mankind either through direct communion with God or by personal intervention.
In the Eastern Orthodox Church, the title Ὅσιος, "Hosios" (f. Ὁσία "Hosia") is also used. This is a title attributed to saints who had lived a monastic or eremitic life, and it is equal to the more usual title of "Saint".
The Oriental Orthodox churches ‒ the Armenian Apostolic Church, the Coptic Orthodox Church of Alexandria, the Tewahedo Church, the Malankara Orthodox Syrian Church, and the Syriac Orthodox Church ‒ follow a canonization process unique to each church. The Coptic Orthodox Church of Alexandria, for example, has the requirement that at least 50 years must pass following a prospective saint's death before the Coptic Orthodox Church's pope can canonize the saint.
In the Anglican Communion and the Continuing Anglican movement, the title of Saint refers to a person who has been elevated by popular opinion as a pious and holy person. The saints are seen as models of holiness to be imitated, and as a 'cloud of witnesses' that strengthen and encourage the believer during his or her spiritual journey (cf. Hebrews 12:1). The saints are seen as elder brothers and sisters in Christ. Official Anglican creeds recognise the existence of the saints in heaven.
In high-church contexts, such as Anglo-Catholicism, a saint is generally one to whom has been attributed (and who has generally demonstrated) a high level of holiness and sanctity. In this use, a saint is therefore not merely a believer, but one who has been transformed by virtue. In Catholicism, a saint is a special sign of God's activity. The veneration of saints is sometimes misunderstood to be worship, in which case it is derisively termed "hagiolatry".
So far as invocation of the saints is concerned, one of the Church of England's Articles of Religion "Of Purgatory" condemns "the Romish Doctrine concerning...(the) Invocation of Saints" as "a fond thing vainly invented, and grounded upon no warranty of Scripture, but rather repugnant to the Word of God". Anglo-Catholics in Anglican provinces using the Articles often make a distinction between a "Romish" and a "Patristic" doctrine concerning the invocation of saints, permitting the latter in accordance with Article XXII. Indeed, the theologian E.J. Bicknell stated that the Anglican view acknowledges that the term "invocation may mean either of two things: the simple request to a saint for his prayers (intercession), 'ora pro nobis,' or a request for some particular benefit. In medieval times the saints had come to be regarded as themselves the authors of blessings. Such a view was condemned but the former was affirmed."
Some Anglicans and Anglican churches, particularly Anglo-Catholics, personally ask prayers of the saints. However, such a practice is seldom found in any official Anglican liturgy. Unusual examples of it are found in The Korean Liturgy 1938, the liturgy of the Diocese of Guiana 1959 and The Melanesian English Prayer Book.
Anglicans believe that the only effective Mediator between the believer and God the Father, in terms of redemption and salvation, is God the Son, Jesus Christ. Historical Anglicanism has drawn a distinction between the intercession of the saints and the invocation of the saints. The former was generally accepted in Anglican doctrine, while the latter was generally rejected. There are some, however, in Anglicanism, who do beseech the saints' intercession. Those who beseech the saints to intercede on their behalf make a distinction between "mediator" and "intercessor", and claim that asking for the prayers of the saints is no different in kind than asking for the prayers of living Christians. Anglican Catholics understand sainthood in a more Catholic or Orthodox way, often praying for intercessions from the saints and celebrating their feast days.
According to the Church of England, a saint is one who is sanctified, as it translates in the Authorised King James Version (1611) 2 Chronicles 6:41:
Now therefore arise, O God, into thy resting place, thou, and the ark of thy strength: let thy priests, O God, be clothed with salvation, and let thy saints rejoice in goodness.
In the Lutheran Church, all Christians, whether in heaven or on earth, are regarded as saints. However, the church still recognizes and honors specific saints, including some of those recognized by the Catholic Church, but in a qualified way: according to the Augsburg Confession, the term "saint" is used in the manner of the Catholic Church only insofar as to denote a person who received exceptional grace, was sustained by faith, and whose good works are to be an example to any Christian. Traditional Lutheran belief holds that prayers "to" the saints are prohibited, as they are not mediators of redemption. But Lutherans do believe that saints pray for the Christian Church in general. Philip Melanchthon, the author of the Apology of the Augsburg Confession, approved honoring the saints by saying they are honored in three ways: by thanking God for them, by letting their example strengthen the believer's faith, and by imitating their faith and other virtues.
The Lutheran Churches also have liturgical calendars in which they honor individuals as saints.
The intercession of saints was criticized in the "Augsburg Confession". This criticism was rebutted by the Catholic side in the "Confutatio Augustana", which in turn was rebutted by the Lutheran side in the "Apology to the Augsburg Confession".
While Methodists as a whole do not venerate saints, they do honor and admire them. Methodists believe that all Christians are "saints", but mainly use the term to refer to biblical figures, Christian leaders, and martyrs of the faith. Many Methodist churches are named after saints—such as the Twelve Apostles, John Wesley, etc.—although most are named after geographical locations associated with an early circuit or prominent location. Methodist congregations observe All Saints' Day. Many encourage the study of saints, that is, the biographies of holy people.
The 14th Article of Religion in the United Methodist "Book of Discipline" states:
The Romish doctrine concerning purgatory, pardon, worshiping, and adoration, as well of images as of relics, and also invocation of saints, is a fond thing, vainly invented, and grounded upon no warrant of Scripture, but repugnant to the Word of God.
In many Protestant churches, the word "saint" is used more generally to refer to anyone who is a Christian. This is similar in usage to Paul's numerous references in the New Testament of the Bible. In this sense, anyone who is within the Body of Christ (i.e., a professing Christian) is a "saint" because of their relationship with Christ Jesus. Many Protestants consider intercessory prayers to the saints to be idolatry, since an application of divine worship that should be given only to God himself is being given to other believers, dead or alive.
Within some Protestant traditions, "saint" is also used to refer to any born-again Christian. Many emphasize the traditional New Testament meaning of the word, preferring to write "saint" to refer to any believer, in continuity with the doctrine of the priesthood of all believers.
The beliefs within The Church of Jesus Christ of Latter-day Saints (LDS Church) with regard to saints are similar but not quite the same as the Protestant tradition. In the New Testament, saints are all those who have entered into the Christian covenant of baptism. The qualification "latter-day" refers to the doctrine that members are living in the "latter days", before the Second Coming of Christ, and is used to distinguish the members of the church, which considers itself the restoration of the ancient Christian church. Members are therefore often referred to as "Latter-day Saints" or "LDS", and among themselves as "saints".
The use of the term "saint" is not exclusive to Christianity. In many religions, there are people who have been recognized within their tradition as having fulfilled the highest aspirations of religious teaching. In English, the term saint is often used to translate this idea from many world religions. The Jewish "hasid" or "tsaddiq", the Islamic "qidees", the Zoroastrian "fravashi", the Hindu "rsi" or "guru", the Buddhist "arahant" or "bodhisattva", the Daoist "shengren", the Shinto "kami", and others have all been referred to as saints.
Cuban Santería, Haitian Vodou, Trinidad Orisha-Shango, Brazilian Umbanda, Candomblé, and other similar syncretist religions adopted the Catholic saints, or at least the images of the saints, and applied their own spirits/deities to them. They are worshiped in churches (where they appear as saints) and in religious festivals, where they appear as the deities. The name "santería" was originally a pejorative term for those whose worship of saints deviated from Catholic norms.
Buddhists in both the Theravada and Mahayana traditions hold the "Arhats" in special esteem, as well as highly developed Bodhisattvas.
Tibetan Buddhists hold the "tulkus" (reincarnates of deceased eminent practitioners) as living saints on earth.
Hindu saints are those recognized by Hindus as showing a great degree of holiness and sanctity. Hinduism has a long tradition of stories and poetry about saints. There is no formal canonization process in Hinduism, but over time, many men and women have reached the status of saints among their followers and among Hindus in general. Unlike in Christianity, Hinduism does not canonize people as saints after death, but they can be accepted as saints during their lifetime. Hindu saints have often renounced the world, and are variously called gurus, sadhus, rishis, devarishis, rajarshis, saptarishis, brahmarshis, swamis, pundits, purohits, pujaris, acharyas, pravaras, yogis, yoginis, and other names.
Some Hindu saints are given god-like status, being seen as incarnations of Vishnu, Shiva, Devi, and other aspects of the Divine—this can happen during their lifetimes, or sometimes many years after their deaths. This explains another common name for Hindu saints: godmen.
Islam has had a rich history of veneration of saints (often called "wali", which literally means "Friend [of God]"), which has declined in some parts of the Islamic world in the twentieth century due to the influence of the various streams of Salafism. In Sunni Islam, the veneration of saints became a very common form of devotion early on, and saints came to be defined in the eighth century as a group of "special people chosen by God and endowed with exceptional gifts, such as the ability to work miracles." The classical Sunni scholars came to recognize and honor these individuals as venerable people who were both "loved by God and developed a close relationship of love to Him." "Belief in the miracles of saints ("karāmāt al-awliyāʾ") ... [became a] requirement in Sunni Islam [during the classical period]," with even medieval critics of the ubiquitous practice of grave visitation like Ibn Taymiyyah emphatically declaring: "The miracles of saints are absolutely true and correct, and acknowledged by all Muslim scholars. The Quran has pointed to it in different places, and the sayings of the Prophet have mentioned it, and whoever denies the miraculous power of saints are innovators or following innovators." The vast majority of saints venerated in the classical Sunni world were the Sufis, who were all Sunni mystics belonging to one of the four orthodox legal schools of Sunni law.
Veneration of saints eventually became one of the most widespread Sunni practices for more than a millennium, before it was opposed in the twentieth century by the Salafi movement, whose various streams regard it as "being both un-Islamic and backwards ... rather than the integral part of Islam which they were for over a millennium." In a manner similar to the Protestant Reformation, the specific traditional practices which Salafism has tried to curtail in both Sunni and Shia contexts include those of the veneration of saints, visiting their graves, seeking their intercession, and honoring their relics. As Christopher Taylor has remarked: "[Throughout Islamic history] a vital dimension of Islamic piety was the veneration of Muslim saints…. [Due, however to] certain strains of thought within the Islamic tradition itself, particularly pronounced in the nineteenth and the twentieth centuries ... [some modern day] Muslims have either resisted acknowledging the existence of Muslim saints altogether or have viewed their presence and veneration as unacceptable deviations."
The term "Tzadik" ("righteous"), and its associated meanings, developed in rabbinic thought from its Talmudic contrast with "Hasid" ("pious"), to its exploration in ethical literature, and its esoteric spiritualisation in Kabbalah. In Hasidic Judaism, the institution of the Tzadik assumed central importance, combining former elite mysticism with social movement for the first time.
The concept of "sant" or "bhagat" is found in North Indian religious thought including Sikhism, most notably in the Guru Granth Sahib. Figures such as Kabir, Ravidas, Namdev, and others are known as "Sants" or "Bhagats". The term "Sant" is applied in the Sikh and related communities to beings that have attained enlightenment through God realization and spiritual union with God via repeatedly reciting the name of God (Naam Japo). Countless names of God exist. In Sikhism, "Naam" (spiritual internalization of God's name) is commonly attained through the name of Waheguru, which translates to "Wondrous Guru".
Sikhs are encouraged to follow the congregation of a Sant (Sadh Sangat), or "The Company of the Holy". "Sants" grace the Sadh Sangat with knowledge of the Divine God and of how to take greater steps towards obtaining spiritual enlightenment through "Naam". "Sants" are to be distinguished from "Gurus" (such as Guru Nanak), who compiled the path to God enlightenment in the Sri Guru Granth Sahib. Gurus are the physical incarnation of God upon Earth. Sikhism states, however, that any beings that have become one with God are considered synonymous with God. As such, the fully realized Sant, Guru, and God are considered one.
Simple harmonic motion
In mechanics and physics, simple harmonic motion is a special type of periodic motion in which the restoring force on the moving object is directly proportional to, and opposite in direction to, the object's displacement vector. It results in an oscillation which, if uninhibited by friction or any other dissipation of energy, continues indefinitely.
Simple harmonic motion can serve as a mathematical model for a variety of motions, but is typified by the oscillation of a mass on a spring when it is subject to the linear elastic restoring force given by Hooke's law. The motion is sinusoidal in time and demonstrates a single resonant frequency. Other phenomena can be modeled by simple harmonic motion, including the motion of a simple pendulum, although for it to be an accurate model, the net force on the object at the end of the pendulum must be proportional to the displacement (and even so, it is only a good approximation when the angle of the swing is small; see small-angle approximation). Simple harmonic motion can also be used to model molecular vibration.
Simple harmonic motion provides a basis for the characterization of more complicated periodic motion through the techniques of Fourier analysis.
The motion of a particle moving along a straight line with an acceleration whose direction is always towards a fixed point on the line and whose magnitude is proportional to the distance from the fixed point is called simple harmonic motion [SHM].
In the diagram, a simple harmonic oscillator, consisting of a weight attached to one end of a spring, is shown. The other end of the spring is connected to a rigid support such as a wall. If the system is left at rest at the equilibrium position then there is no net force acting on the mass. However, if the mass is displaced from the equilibrium position, the spring exerts a restoring elastic force that obeys Hooke's law.
Mathematically, the restoring force is given by

F = −kx,

where F is the restoring elastic force exerted by the spring (in SI units: N), k is the spring constant (N·m⁻¹), and x is the displacement from the equilibrium position (m).
For any simple mechanical harmonic oscillator:
Once the mass is displaced from its equilibrium position, it experiences a net restoring force. As a result, it accelerates and starts going back to the equilibrium position. As the mass moves closer to the equilibrium position, the restoring force decreases. At the equilibrium position, the net restoring force vanishes. However, at x = 0 the mass has momentum because of the acceleration that the restoring force has imparted. Therefore, the mass continues past the equilibrium position, compressing the spring. A net restoring force then slows it down until its velocity reaches zero, whereupon it is accelerated back to the equilibrium position again.
As long as the system has no energy loss, the mass continues to oscillate. Thus simple harmonic motion is a type of periodic motion.
Note that if the real-space and phase-space plots are not co-linear, the phase-space motion becomes elliptical. The area enclosed depends on the amplitude and the maximum momentum.
In Newtonian mechanics, for one-dimensional simple harmonic motion, the equation of motion, which is a second-order linear ordinary differential equation with constant coefficients, can be obtained by means of Newton's 2nd law and Hooke's law for a mass on a spring:

F = ma = m(d²x/dt²) = −kx,

where m is the inertial mass of the oscillating body, x is its displacement from the equilibrium (or mean) position, and k is a constant (the spring constant for a mass on a spring).

Therefore,

d²x/dt² = −(k/m)x.

Solving the differential equation above produces a solution that is a sinusoidal function:

x(t) = c₁cos(ωt) + c₂sin(ωt), where ω = √(k/m),

and c₁ and c₂ are constants determined by the initial conditions. This equation can also be written in the form:

x(t) = A cos(ωt − φ),

where

A = √(c₁² + c₂²) and tan φ = c₂/c₁,

and, since ω = 2πf, where T = 1/f is the time period,

T = 2π√(m/k).

These equations demonstrate that the simple harmonic motion is isochronous: the period and frequency are independent of the amplitude and the initial phase of the motion.
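As a quick numerical sanity check, the closed-form solution can be verified against the equation of motion. This is a minimal sketch; the mass, spring constant, amplitude, and phase below are arbitrary illustrative values, not taken from the text:

```python
import math

# Illustrative values (assumptions, not from the text): 0.5 kg mass, 200 N/m spring.
m, k = 0.5, 200.0
omega = math.sqrt(k / m)        # angular frequency, omega = sqrt(k/m)
T = 2 * math.pi / omega         # period, T = 2*pi/omega
A, phi = 0.1, 0.0               # amplitude (m) and phase (rad)

def x(t):
    """Closed-form SHM solution x(t) = A*cos(omega*t - phi)."""
    return A * math.cos(omega * t - phi)

# The solution satisfies d^2x/dt^2 = -(k/m)*x (checked by a central difference).
t, h = 0.3, 1e-5
accel = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
assert abs(accel + (k / m) * x(t)) < 1e-3

# The motion repeats after one period T, whatever the amplitude A.
assert abs(x(t + T) - x(t)) < 1e-9
```

The finite-difference check is a stand-in for symbolic differentiation: it confirms term by term that the sinusoid's acceleration is −(k/m) times its displacement.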
Substituting ω² with k/m, the kinetic energy K of the system at time t is

K(t) = (1/2)mv²(t) = (1/2)mω²A²sin²(ωt − φ) = (1/2)kA²sin²(ωt − φ),

and the potential energy is

U(t) = (1/2)kx²(t) = (1/2)kA²cos²(ωt − φ).

In the absence of friction and other energy loss, the total mechanical energy has a constant value

E = K + U = (1/2)kA².
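The constancy of the total energy is easy to confirm numerically, since sin² + cos² = 1 at every instant. A short sketch with assumed values for m, k, A, and φ:

```python
import math

# Assumed illustrative parameters (not from the text).
m, k = 2.0, 50.0
omega = math.sqrt(k / m)
A, phi = 0.3, 0.7

def kinetic(t):
    # K(t) = (1/2)*k*A^2*sin^2(omega*t - phi)
    return 0.5 * k * A**2 * math.sin(omega * t - phi)**2

def potential(t):
    # U(t) = (1/2)*k*A^2*cos^2(omega*t - phi)
    return 0.5 * k * A**2 * math.cos(omega * t - phi)**2

E = 0.5 * k * A**2   # total mechanical energy, approx. 2.25 J here
# K + U equals E at every sampled instant:
for t in (0.0, 0.1, 0.5, 1.3, 2.7):
    assert abs(kinetic(t) + potential(t) - E) < 1e-12
```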
The following physical systems are some examples of simple harmonic oscillators.
A mass m attached to a spring of spring constant k exhibits simple harmonic motion in closed space. The equation for the period,

T = 2π√(m/k),

shows that the period of oscillation is independent of both the amplitude and the gravitational acceleration, though in practice the amplitude should be small. The above equation is also valid when an additional constant force is applied to the mass; that is, an additional constant force cannot change the period of oscillation.
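The claims that neither the amplitude nor a constant extra force affects the period can be checked by direct numerical integration. This is a sketch under assumed parameters, using a simple semi-implicit Euler integrator (a constant force only shifts the equilibrium to x = F0/k):

```python
import math

def measured_period(m, k, F0=0.0, x0=0.1, dt=1e-5):
    """Integrate m*x'' = -k*x + F0 with semi-implicit Euler and return the
    time between successive downward crossings of the equilibrium x = F0/k."""
    xeq = F0 / k
    x, v, t = xeq + x0, 0.0, 0.0
    crossings, prev = [], x0
    while len(crossings) < 2:
        v += ((-k * x + F0) / m) * dt   # acceleration from spring + constant force
        x += v * dt
        t += dt
        cur = x - xeq
        if prev > 0 >= cur:             # downward crossing: happens once per period
            crossings.append(t)
        prev = cur
    return crossings[1] - crossings[0]

T_formula = 2 * math.pi * math.sqrt(1.0 / 100.0)          # T = 2*pi*sqrt(m/k)
assert abs(measured_period(1.0, 100.0) - T_formula) < 1e-3
# Doubling the amplitude does not change the period:
assert abs(measured_period(1.0, 100.0, x0=0.2) - T_formula) < 1e-3
# Neither does adding a constant force (it only shifts the equilibrium):
assert abs(measured_period(1.0, 100.0, F0=5.0) - T_formula) < 1e-3
```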
Simple harmonic motion can be considered the one-dimensional projection of uniform circular motion. If an object moves with angular speed ω around a circle of radius r centered at the origin of the xy-plane, then its motion along each coordinate is simple harmonic motion with amplitude r and angular frequency ω.
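A short check that each coordinate of uniform circular motion is sinusoidal while the point itself stays on the circle; r and ω below are arbitrary assumed values:

```python
import math

r, omega = 2.0, 3.0   # assumed radius (m) and angular speed (rad/s)
for t in (0.0, 0.25, 1.0, 2.5):
    x = r * math.cos(omega * t)   # x-projection: SHM with amplitude r
    y = r * math.sin(omega * t)   # y-projection: SHM a quarter-period out of phase
    # The point itself remains on the circle of radius r:
    assert abs(math.hypot(x, y) - r) < 1e-12
```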
In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length ℓ with gravitational acceleration g is given by

T = 2π√(ℓ/g).

This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity, g; therefore a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of g varies slightly over the surface of the Earth, the time period varies slightly from place to place and also with height above sea level.
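The Earth-versus-Moon comparison follows directly from the period formula. A minimal sketch, assuming the usual approximate field strengths (about 9.81 m/s² on Earth, about 1.62 m/s² on the Moon) and an arbitrary 1 m pendulum:

```python
import math

def pendulum_period(length, g):
    """Small-angle pendulum period T = 2*pi*sqrt(length/g)."""
    return 2 * math.pi * math.sqrt(length / g)

# Assumed gravitational field strengths (m/s^2); length is an arbitrary 1 m.
T_earth = pendulum_period(1.0, 9.81)
T_moon = pendulum_period(1.0, 1.62)
assert T_moon > T_earth   # the same pendulum swings more slowly on the Moon
```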
This approximation is accurate only for small angles because the expression for the angular acceleration is proportional to the sine of the displacement angle:

−mgℓ sin θ = Iα,

where I is the moment of inertia. When θ is small, sin θ ≈ θ, and therefore the expression becomes

−mgℓθ = Iα,

which makes the angular acceleration directly proportional to θ (and opposite in sign), satisfying the definition of simple harmonic motion.
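How small is "small"? Since sin θ ≈ θ − θ³/6, the relative error of the approximation grows roughly like θ²/6, which the following sketch tabulates for a few angles:

```python
import math

def small_angle_error(deg):
    """Relative error of approximating sin(theta) by theta."""
    theta = math.radians(deg)
    return abs(math.sin(theta) - theta) / math.sin(theta)

# The error grows roughly like theta^2 / 6:
for deg in (1, 5, 10, 20, 45):
    print(f"{deg:>3} deg: relative error {small_angle_error(deg):.4%}")
```

At a few degrees the error is a small fraction of a percent, so the SHM model of the pendulum is excellent for gentle swings but degrades noticeably by 45 degrees.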
A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion. The linear motion can take various forms depending on the shape of the slot, but the basic yoke with a constant rotation speed produces a linear motion that is simple harmonic in form.
Syracuse, Sicily
Syracuse is a historic city on the island of Sicily, the capital of the Italian province of Syracuse. The city is notable for its rich Greek and Roman history, culture, amphitheatres, architecture, and as the birthplace of the preeminent mathematician and engineer Archimedes. This 2,700-year-old city played a key role in ancient times, when it was one of the major powers of the Mediterranean world. Syracuse is located in the southeast corner of the island of Sicily, next to the Gulf of Syracuse beside the Ionian Sea.
The city was founded by Ancient Greek Corinthians and Teneans and became a very powerful city-state. Syracuse was allied with Sparta and Corinth and exerted influence over the entirety of Magna Graecia, of which it was the most important city. Described by Cicero as "the greatest Greek city and the most beautiful of them all", it equaled Athens in size during the fifth century BC. It later became part of the Roman Republic and the Byzantine Empire. Under Emperor Constans II, it served as the capital of the Byzantine Empire (663–669). Palermo later overtook it in importance, as the capital of the Kingdom of Sicily. Eventually the kingdom would be united with the Kingdom of Naples to form the Two Sicilies until the Italian unification of 1860.
In the modern day, the city is listed by UNESCO as a World Heritage Site along with the Necropolis of Pantalica. In the central area, the city itself has a population of around 125,000 people. Syracuse is mentioned in the Bible in the Acts of the Apostles (28:12), as Paul stayed there. The patron saint of the city is Saint Lucy; she was born in Syracuse and her feast day, Saint Lucy's Day, is celebrated on 13 December.
Syracuse and its surrounding area have been inhabited since ancient times, as shown by the findings in the villages of Stentinello, Ognina, Plemmirio, Matrensa, Cozzo Pantano and "Thapsos", which already had a relationship with Mycenaean Greece.
Syracuse was founded in 734 or 733 BC by Greek settlers from Corinth and Tenea, led by the "oecist" (colonizer) Archias. There are many attested variants of the name of the city, including "Syrakousai", "Syrakosai" and "Syrakō". A possible origin of the city's name was given by Vibius Sequester, citing first Stephanus Byzantius, to the effect that there was a Syracusian marsh called "Syrako", and secondly Marcian's "Periegesis", wherein Archias gave the city the name of a nearby marsh; hence one gets "Syrako" (and thereby "Syrakousai" and other variants) for the name of Syracuse, a name also attested by Epicharmus. The settlement of Syracuse was a planned event, as a strong central leader, Archias the aristocrat, laid out how property would be divided up among the settlers, as well as plans for how the streets of the settlement should be arranged and how wide they should be. The nucleus of the ancient city was the small island of Ortygia. The settlers found the land fertile and the native tribes reasonably well-disposed to their presence. The city grew and prospered, and for some time stood as the most powerful Greek city anywhere in the Mediterranean. Colonies were founded at Akrai (664 BC), Kasmenai (643 BC), Akrillai (7th century BC), Helorus (7th century BC) and Kamarina (598 BC).
The descendants of the first colonists, called "Gamoroi", held power until they were expelled by the "Killichiroi", the lower class of the city. The former, however, returned to power in 485 BC, thanks to the help of Gelo, ruler of Gela. Gelo himself became the despot of the city, and moved many inhabitants of Gela, Kamarina and Megara to Syracuse, building the new quarters of Tyche and Neapolis outside the walls. His program of new constructions included a new theatre, designed by Damocopos, which gave the city a flourishing cultural life: this in turn attracted such personalities as Aeschylus, Ario of Methymna and Eumelos of Corinth. The enlarged power of Syracuse made the clash with the Carthaginians, who ruled western Sicily, unavoidable. In the Battle of Himera, Gelo, who had allied with Theron of Agrigento, decisively defeated the African force led by Hamilcar. A temple dedicated to Athena (on the site of today's Cathedral) was erected in the city to commemorate the event.
Syracuse grew considerably during this time. Its walls came to encircle the city in the fifth century, but as early as the 470s BC the inhabitants had started building outside them. The total population of its territory numbered approximately 250,000 in 415 BC, and the population of the city itself was probably similar to that of Athens.
Gelo was succeeded by his brother Hiero, who fought against the Etruscans at Cumae in 474 BC. His rule was eulogized by poets like Simonides of Ceos, Bacchylides and Pindar, who visited his court. A democratic regime was introduced by Thrasybulos (467 BC). The city continued to expand in Sicily, fighting against the rebellious Siculi, and on the Tyrrhenian Sea, making expeditions up to Corsica and Elba. In the late 5th century BC, Syracuse found itself at war with Athens, which sought more resources to fight the Peloponnesian War. The Syracusans enlisted the aid of a general from Sparta, Athens' foe in the war, to defeat the Athenians, destroy their ships, and leave them to starve on the island (see Sicilian Expedition). In 401 BC, Syracuse contributed a force of 300 hoplites and a general to Cyrus the Younger's Army of the Ten Thousand.
Then in the early 4th century BC, the tyrant Dionysius the Elder was again at war against Carthage and, although losing Gela and Camarina, kept Carthage from capturing the whole of Sicily. After the end of the conflict Dionysius built a massive fortress on Ortygia and 22 km-long walls around all of Syracuse. Another period of expansion saw the destruction of Naxos, Catania and Lentini; then Syracuse entered again in war against Carthage (397 BC). After various changes of fortune, the Carthaginians managed to besiege Syracuse itself, but were eventually pushed back by a pestilence. A treaty in 392 BC allowed Syracuse to enlarge further its possessions, founding the cities of Adranon, Tyndarion and Tauromenos, and conquering Rhegion on the continent. In the Adriatic, to facilitate trade, Dionysius the Elder founded Ancona, Adria and Issa. Apart from his battle deeds, Dionysius was famous as a patron of art, and Plato himself visited Syracuse several times.
His successor was Dionysius the Younger, who was however expelled by Dion in 356 BC. But the latter's despotic rule led in turn to his expulsion, and Dionysius reclaimed his throne in 347 BC. Dionysius was besieged in Syracuse by the Syracusan general Hicetas in 344 BC. The following year the Corinthian Timoleon installed a democratic regime in the city after he exiled Dionysius and defeated Hicetas. The long series of internal struggles had weakened Syracuse's power on the island, and Timoleon tried to remedy this, defeating the Carthaginians in the Battle of the Crimissus (339 BC).
After Timoleon's death the struggle among the city's parties restarted and ended with the rise of another tyrant, Agathocles, who seized power with a coup in 317 BC. He resumed the war against Carthage, with alternate fortunes. He was besieged in Syracuse by the Carthaginians in 311 BC, but he escaped from the city with a small fleet. He scored a moral success, bringing the war to the Carthaginians' native African soil and inflicting heavy losses on the enemy. The defenders of Syracuse destroyed the Carthaginian army which besieged them. However, Agathocles was eventually defeated in Africa as well. The war ended with another treaty of peace which did not prevent the Carthaginians from interfering in the politics of Syracuse after the death of Agathocles (289 BC). They laid siege to Syracuse for the fourth and last time in 278 BC. They retreated at the arrival of king Pyrrhus of Epirus, whom Syracuse had asked for help. After a brief period under the rule of Epirus, Hiero II seized power in 275 BC.
Hiero inaugurated a period of 50 years of peace and prosperity, in which Syracuse became one of the most renowned capitals of Antiquity. He issued the so-called "Lex Hieronica", which was later adopted by the Romans for their administration of Sicily; he also had the theatre enlarged and a new immense altar, the "Hiero's Ara", built. Under his rule lived the most famous Syracusan, the mathematician and natural philosopher Archimedes. Among his many inventions were various military engines including the claw of Archimedes, later used to resist the Roman siege of 214–212 BC. Literary figures included Theocritus and others.
Hiero's successor, the young Hieronymus (ruled from 215 BC), broke the alliance with the Romans after their defeat at the Battle of Cannae and accepted Carthage's support. The Romans, led by consul Marcus Claudius Marcellus, besieged the city in 214 BC. The city held out for three years, but fell in 212 BC. The successes of the Syracusans in repelling the Roman siege had made them overconfident. In 212 BC, the Romans received information that the city's inhabitants were to participate in the annual festival to their goddess Artemis. A small party of Roman soldiers approached the city under the cover of night and managed to scale the walls to get into the outer city and with reinforcements soon took control, killing Archimedes in the process, but the main fortress remained firm. After an eight-month siege and with parleys in progress, an Iberian captain named Moeriscus is believed to have let the Romans in near the Fountains of Arethusa. On the agreed signal, during a diversionary attack, he opened the gate. After setting guards on the houses of the pro-Roman faction, Marcellus gave Syracuse to plunder.
Though declining slowly through the years, Syracuse maintained the status of capital of the Roman government of Sicily and seat of the praetor. It remained an important port for trade between the Eastern and the Western parts of the Empire. Christianity spread in the city through the efforts of Paul of Tarsus and Saint Marziano, the first bishop of the city, who made it one of the main centres of proselytism in the West. In the age of Christian persecutions massive catacombs were carved, whose size is second only to those of Rome.
After a period of Vandal rule (469–477), Syracuse and the island passed to Odoacer (476–491) and then Theodoric the Great (491–526), before being recovered for the Byzantine Empire by Belisarius (31 December 535). From 663 to 668 Syracuse was the seat of Emperor Constans II, as well as a capital of the Roman Empire and metropolis of the whole Sicilian Church.
The city was besieged by the Aghlabids for almost a year in 827–828, but Byzantine reinforcements prevented its fall. It remained the center of Byzantine resistance to the gradual Muslim conquest of Sicily until it fell to the Aghlabids after another siege on 20/21 May 878. During the two centuries of Muslim rule, the capital of the Emirate of Sicily was moved from Syracuse to Palermo. The Cathedral was converted into a mosque and the quarter on the Ortygia island was gradually rebuilt along Islamic styles. The city, nevertheless, maintained important trade relationships, and housed a relatively flourishing cultural and artistic life: several Arab poets, including Ibn Hamdis, the most important Sicilian Arab poet of the 12th century, flourished in the city.
In 1038, the Byzantine general George Maniakes reconquered the city, sending the relics of St. Lucy to Constantinople. The eponymous castle on the cape of Ortygia bears his name, although it was built under the Hohenstaufen rule. In 1085 the Normans entered Syracuse, one of the last Arab strongholds, after a summer-long siege by Roger I of Sicily and his son Jordan of Hauteville, who was given the city as count. New quarters were built, and the cathedral was restored, as well as other churches.
In 1194, Emperor Henry VI occupied the Sicilian kingdom, including Syracuse. After a short period of Genoese rule (1205–1220) under the notorious admiral and pirate Alamanno da Costa, which favoured a rise of trades, royal authority was re-asserted in the city by Frederick II. He began the construction of the Castello Maniace, the Bishops' Palace and the Bellomo Palace. Frederick's death brought a period of unrest and feudal anarchy. In the War of the Sicilian Vespers between the Angevin and Aragonese dynasties for control of Sicily, Syracuse sided with the Aragonese and expelled the Angevins in 1298, receiving from the Spanish sovereigns great privileges in reward. The preeminence of baronial families is also shown by the construction of the palaces of Abela, Chiaramonte, Nava and Montalto.
The city was struck by two ruinous earthquakes in 1542 and 1693, and a plague in 1729. The 17th century destruction changed the appearance of Syracuse forever, as well as the entire Val di Noto, whose cities were rebuilt along the typical lines of Sicilian Baroque, considered one of the most typical expressions of the architecture of Southern Italy. The spread of cholera in 1837 led to a revolt against the Bourbon government. As punishment, the provincial capital seat was moved to Noto, but the unrest had not been fully quelled, and the Siracusani took part in the Sicilian revolution of 1848.
After the Unification of Italy of 1865, Syracuse regained its status as provincial capital. In 1870 the walls were demolished and a bridge connecting the mainland to Ortygia island was built. In the following year a railway link was constructed.
Both Allied and German bombings in 1943 caused heavy destruction during World War II. "Operation Husky", the codename for the Allied invasion of Sicily, was launched on the night of 9–10 July 1943 with British forces attacking the southeast of the island. The plan was for the British 5th Infantry Division, part of General Sir Bernard Montgomery's Eighth Army, to capture Syracuse on the first day of the invasion. This part of the operation went completely according to plan, and British forces captured Syracuse on the first night of the operation. The port was then used as a base for the British Royal Navy. To the west of the city is a Commonwealth War Graves cemetery where about 1,000 men are buried. After the end of the war the northern quarters of Syracuse experienced a heavy, often chaotic, expansion, favoured by the quick process of industrialization.
Syracuse today has about 125,000 inhabitants and numerous attractions for the visitor interested in historical sites (such as the Ear of Dionysius). A process of recovering and restoring the historical centre has been ongoing since the 1990s. Nearby places of note include Catania, Noto, Modica and Ragusa.
Syracuse experiences a hot-summer Mediterranean climate (Köppen climate classification "Csa") with mild, wet winters and warm to hot, dry summers. Snow is infrequent but not unknown; the last heavy snowfall in the city occurred in December 2014. Frosts are very rare, the last occurring in December 2014 with a low of 0 °C.
In 2016, there were 122,051 people residing in Syracuse, located in the province of Syracuse, Sicily, of whom 48.7% were male and 51.3% were female. Minors (children ages 18 and younger) totalled 18.9 percent of the population, compared to pensioners, who numbered 16.9 percent. This compares with the Italian average of 18.1 percent (minors) and 19.9 percent (pensioners). The average age of a Syracuse resident is 40, compared to the Italian average of 42. In the five years between 2002 and 2007, the population of Syracuse declined by 0.5 percent, while Italy as a whole grew by 3.6 percent. The reason for the decline is a population flight to the suburbs and to northern Italy. The current birth rate of Syracuse is 9.75 births per 1,000 inhabitants, compared to the Italian average of 9.45 births.
97.9% of the population was of Italian descent. The largest immigrant group came from other European nations (particularly Poland and the United Kingdom), at 0.6%, followed by North Africa (mostly Tunisians) at 0.5% and South Asia at 0.4%.
Since 2005, the entire city of Syracuse, along with the Necropolis of Pantalica, which falls within the province of Syracuse, has been listed as a World Heritage Site by UNESCO. This programme aims to catalogue, name and conserve sites of outstanding cultural or natural importance to the common heritage of humanity. The deciding committee which evaluates potential candidates described its reasons for choosing Syracuse: "monuments and archeological sites situated in Syracuse are the finest example of outstanding architectural creation spanning several cultural aspects; Greek, Roman and Baroque", adding that Ancient Syracuse was "directly linked to events, ideas and literary works of outstanding universal significance".
Syracuse is home to association football club A.S.D. Città di Siracusa, the latest reincarnation of several clubs dating back to 1924. The common feature is the azure shirts, hence the nickname "Azzurri". Siracusa play at the Stadio Nicola De Simone, which has an approximate capacity of 5,000–6,000.
Sorting algorithm
In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most frequently used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output. More formally, the output of any sorting algorithm must satisfy two conditions: the output is in nondecreasing order (each element is no smaller than the previous element according to the desired order), and the output is a permutation (a reordering, retaining all of the original elements) of the input.
Further, the input data is often stored in an array, which allows random access, rather than a list, which only allows sequential access; though many algorithms can be applied to either type of data after suitable modification.
Sorting algorithms are often referred to by a word followed by "sort", and grammatically they are used in English as noun phrases; for example, in the sentence "it is inefficient to use insertion sort on large lists", the phrase "insertion sort" refers to the insertion sort sorting algorithm.
From the beginning of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. Among the authors of early sorting algorithms around 1951 was Betty Holberton (née Snyder), who worked on ENIAC and UNIVAC. Bubble sort was analyzed as early as 1956. Comparison sorting algorithms have a fundamental requirement of Ω("n" log "n") comparisons (some input sequences will require a multiple of "n" log "n" comparisons); algorithms not based on comparisons, such as counting sort, can have better performance. Asymptotically optimal algorithms have been known since the mid-20th century—useful new algorithms are still being invented, with the now widely used Timsort dating to 2002, and the library sort being first published in 2006.
Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide and conquer algorithms, data structures such as heaps and binary trees, randomized algorithms, best, worst and average case analysis, time–space tradeoffs, and upper and lower bounds.
Sorting small arrays optimally (in the fewest comparisons and swaps) or fast (i.e., taking machine-specific details into account) is still an open research problem, with solutions known only for very small arrays (<20 elements). Similarly, optimal (by various definitions) sorting on a parallel machine is an open research topic.
Sorting algorithms are often classified by:
Stable sort algorithms sort repeated elements in the same order that they appear in the input. When sorting some kinds of data, only part of the data is examined when determining the sort order. For example, in the card sorting example to the right, the cards are being sorted by their rank, and their suit is being ignored. This allows the possibility of multiple different correctly sorted versions of the original list. Stable sorting algorithms choose one of these, according to the following rule: if two items compare as equal, like the two 5 cards, then their relative order will be preserved, so that if one came before the other in the input, it will also come before the other in the output.
Stability is important for the following reason: say that student records consisting of name and class section are sorted dynamically on a web page, first by name, then by class section in a second operation. If a stable sorting algorithm is used in both cases, the sort-by-class-section operation will not change the name order; with an unstable sort, it could be that sorting by section shuffles the name order. Using a stable sort, users can choose to sort by section and then by name, by first sorting using name and then sort again using section, resulting in the name order being preserved. (Some spreadsheet programs obey this behavior: sorting by name, then by section yields an alphabetical list of students by section.)
More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the "key". In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if whenever there are two records R and S with the same key, and R appears before S in the original list, then R will always appear before S in the sorted list.
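This definition can be illustrated with Python's built-in sorted(), which is documented to be stable; the (rank, suit) records below are illustrative values, not from the source:

```python
# Records are (rank, suit) tuples; the sort key is the rank alone,
# so the two rank-5 cards compare as equal.
records = [(5, "hearts"), (4, "clubs"), (5, "spades"), (2, "diamonds")]

by_rank = sorted(records, key=lambda card: card[0])

# Stability: "hearts" came before "spades" in the input, so the
# rank-5 hearts card still precedes the rank-5 spades card.
print(by_rank)  # [(2, 'diamonds'), (4, 'clubs'), (5, 'hearts'), (5, 'spades')]
```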
When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. Stability is also not an issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original input list as a tie-breaker. Remembering this order, however, may require additional time and space.
One application for stable sorting algorithms is sorting a list using a primary and secondary key. For example, suppose we wish to sort a hand of cards such that the suits are in the order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are sorted by rank. This can be done by first sorting the cards by rank (using any sort), and then doing a stable sort by suit:
Within each suit, the stable sort preserves the ordering by rank that was already done. This idea can be extended to any number of keys and is utilised by radix sort. The same effect can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g., compares first by suit, and then compares by rank if the suits are the same.
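Both approaches — two stable passes versus one pass with a lexicographic key — can be sketched in Python; the suit ordering and example hand here are assumptions for illustration:

```python
suit_order = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}
hand = [(9, "hearts"), (2, "spades"), (9, "clubs"), (5, "hearts")]

# Two passes with a stable sort: secondary key (rank) first, then
# primary key (suit); the suit pass preserves the rank ordering.
by_rank = sorted(hand, key=lambda card: card[0])
two_pass = sorted(by_rank, key=lambda card: suit_order[card[1]])

# One pass with a lexicographic (suit, rank) key.
one_pass = sorted(hand, key=lambda card: (suit_order[card[1]], card[0]))

assert two_pass == one_pass
```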
In this table, "n" is the number of records to be sorted. The columns "Average" and "Worst" give the time complexity in each case, under the assumption that the length of each key is constant, and that therefore all comparisons, swaps, and other needed operations can proceed in constant time. "Memory" denotes the amount of auxiliary storage needed beyond that used by the list itself, under the same assumption. The run times and the memory requirements listed below should be understood to be inside big O notation, hence the base of the logarithms does not matter; the notation log "n" means log2 "n".
Below is a table of comparison sorts. A comparison sort cannot perform better than O("n" log "n").
The following table describes integer sorting algorithms and other sorting algorithms that are not comparison sorts. As such, they are not limited to Ω("n" log "n"). Complexities below assume "n" items to be sorted, with keys of size "k", digit size "d", and "r" the range of numbers to be sorted. Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that "n" ≪ 2^"k", where ≪ means "much less than". In the unit-cost random access machine model, algorithms with running time of "n"·"k"/"d", such as radix sort, still take time proportional to Θ("n" log "n"), because "n" is limited to be not more than 2^("k"/"d"), and a larger number of elements to sort would require a bigger "k" in order to store them in the memory.
Samplesort can be used to parallelize any of the non-comparison sorts, by efficiently distributing data into several buckets and then passing down sorting to several processors, with no need to merge as buckets are already sorted between each other.
Some algorithms are slow compared to those discussed above, such as the bogosort with unbounded run time and the stooge sort which has "O"("n"2.7) run time. These sorts are usually described for educational purposes to demonstrate how the run time of algorithms is estimated. The following table describes some sorting algorithms that are impractical for real-life use in traditional software contexts due to extremely poor performance or specialized hardware requirements.
Theoretical computer scientists have detailed other sorting algorithms that provide better than "O"("n" log "n") time complexity assuming additional constraints, including:
While there are a large number of sorting algorithms, in practical implementations a few algorithms predominate. Insertion sort is widely used for small data sets, while for large data sets an asymptotically efficient sort is used, primarily heap sort, merge sort, or quicksort. Efficient implementations generally use a hybrid algorithm, combining an asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom of a recursion. Highly tuned implementations use more sophisticated variants, such as Timsort (merge sort, insertion sort, and additional logic), used in Android, Java, and Python, and introsort (quicksort and heap sort), used (in variant forms) in some C++ sort implementations and in .NET.
For more restricted data, such as numbers in a fixed interval, distribution sorts such as counting sort or radix sort are widely used. Bubble sort and variants are rarely used in practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intuitively generally use insertion sorts for small sets. For larger sets, people often first bucket, such as by initial letter, and multiple bucketing allows practical sorting of very large sets. Often space is relatively cheap, such as by spreading objects out on the floor or over a large area, but operations are expensive, particularly moving an object a large distance – locality of reference is important. Merge sorts are also practical for physical objects, particularly as two hands can be used, one for each list to merge, while other algorithms, such as heap sort or quick sort, are poorly suited for human use. Other algorithms, such as library sort, a variant of insertion sort that leaves spaces, are also practical for physical use.
Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.
"Insertion sort" is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list similar to how we put money in our wallet. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shellsort (see below) is a variant of insertion sort that is more efficient for larger lists.
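A minimal in-place sketch of insertion sort in Python (function name and test values are illustrative):

```python
def insertion_sort(items):
    """Sort a list in place by inserting each element into the
    already-sorted prefix to its left."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements one position to the right to open a slot.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items
```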
"Selection sort" is an in-place comparison sort. It has O("n"2) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than "n" swaps, and thus is useful where swapping is very expensive.
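The steps above can be sketched in Python; note that the swap count never exceeds "n" − 1:

```python
def selection_sort(items):
    """Repeatedly find the minimum of the unsorted suffix and swap it
    into place; performs at most n - 1 swaps in total."""
    for i in range(len(items) - 1):
        min_index = i
        for j in range(i + 1, len(items)):
            if items[j] < items[min_index]:
                min_index = j
        items[i], items[min_index] = items[min_index], items[i]
    return items
```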
Practical general sorting algorithms are almost always based on an algorithm with average time complexity (and generally worst-case complexity) O("n" log "n"), of which the most common are heap sort, merge sort, and quicksort. Each has advantages and drawbacks, with the most significant being that simple implementation of merge sort uses O("n") additional space, and simple implementation of quicksort has O("n"2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency on real-world data various modifications are used. First, the overhead of these algorithms becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted data or almost sorted data – these are common in real-world data, and can be sorted in O("n") time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heap sort).
"Merge sort" takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O("n" log "n"). It is also easily applied to lists, not only arrays, as it only requires sequential access, not random access. However, it has additional O("n") space complexity, and involves a large number of copies in simple implementations.
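As a sketch, here is a top-down recursive variant in Python (the paragraph above describes the equivalent bottom-up formulation); the O("n") extra space shows up as the slices and the merged list:

```python
def merge_sort(items):
    """Return a new sorted list; O(n log n) worst case, O(n) extra space."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the merge stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```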
Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python and Java (as of JDK7). Merge sort itself is the standard routine in Perl, among others, and has been used in Java at least since 2000 in JDK1.3.
"Heapsort" is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log "n") time, instead of O("n") for a linear scan as in simple selection sort. This allows Heapsort to run in O("n" log "n") time, and this is also the worst case complexity.
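The heap mechanics can be sketched in Python with an explicit sift-down, rather than the standard-library heapq module, to make the O(log "n") step visible:

```python
def heapsort(items):
    """In-place heapsort: build a max-heap, then repeatedly move the
    root (largest element) to the end of the shrinking unsorted region."""
    def sift_down(start, end):
        # Restore the heap property for the subtree rooted at `start`,
        # considering only indices up to `end` (inclusive).
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and items[child] < items[child + 1]:
                child += 1  # pick the larger child
            if items[root] < items[child]:
                items[root], items[child] = items[child], items[root]
                root = child
            else:
                return

    n = len(items)
    for start in range(n // 2 - 1, -1, -1):  # heapify in O(n)
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        items[0], items[end] = items[end], items[0]
        sift_down(0, end - 1)
    return items
```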
"Quicksort" is a divide and conquer algorithm which relies on a "partition" operation: to partition an array, an element called a "pivot" is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields average time complexity of O("n" log "n"), with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log "n") space usage, quicksort is one of the most popular sorting algorithms and is available in many standard programming libraries.
The important caveat about quicksort is that its worst-case performance is O("n"2); while this is rare, in naive implementations (choosing the first or last element as pivot) this occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O("n"2) performance, but good choice of pivots yields O("n" log "n") performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot then the algorithm works in O("n" log "n"). Finding the median, such as by the median of medians selection algorithm is however an O("n") operation on unsorted lists and therefore exacts significant overhead with sorting. In practice choosing a random pivot almost certainly yields O("n" log "n") performance.
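A sketch in Python using a random pivot, which sidesteps the sorted-input worst case described above (the partition scheme here is the Lomuto variant, chosen for brevity):

```python
import random

def quicksort(items, lo=0, hi=None):
    """In-place quicksort with a random pivot, making the O(n^2) worst
    case vanishingly unlikely even on already-sorted input."""
    if hi is None:
        hi = len(items) - 1
    if lo >= hi:
        return items
    # Lomuto partition around a randomly chosen pivot.
    pivot_index = random.randint(lo, hi)
    items[pivot_index], items[hi] = items[hi], items[pivot_index]
    pivot = items[hi]
    store = lo
    for i in range(lo, hi):
        if items[i] < pivot:
            items[i], items[store] = items[store], items[i]
            store += 1
    items[store], items[hi] = items[hi], items[store]
    # Recursively sort the lesser and greater sublists.
    quicksort(items, lo, store - 1)
    quicksort(items, store + 1, hi)
    return items
```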
"Shellsort" was invented by Donald Shell in 1959. It improves upon insertion sort by moving out-of-order elements more than one position at a time. The concept behind Shellsort is that insertion sort performs in O("nk") time, where "k" is the greatest distance between two out-of-place elements. This means that generally it performs in "O"("n"2), but for data that is mostly sorted, with only a few elements out of place, it performs faster. So, by first sorting elements far away, and progressively shrinking the gap between the elements to sort, the final sort computes much faster. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
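A sketch in Python using Shell's original gap sequence ("n"/2, "n"/4, ..., 1); other gap sequences give better worst-case bounds:

```python
def shellsort(items):
    """Gap-insertion sort: run insertion sort over elements a `gap`
    apart, then shrink the gap until it reaches 1 (plain insertion sort
    on nearly-sorted data)."""
    gap = len(items) // 2
    while gap > 0:
        for i in range(gap, len(items)):
            current = items[i]
            j = i
            # Shift gap-spaced larger elements rightward.
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2
    return items
```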
The worst-case time complexity of Shellsort is an open problem and depends on the gap sequence used, with known complexities ranging from "O"("n"2) to "O"("n"4/3) and Θ("n" log2 "n"). This, combined with the fact that Shellsort is in-place, only needs a relatively small amount of code, and does not require use of the call stack, makes it useful in situations where memory is at a premium, such as in embedded systems and operating system kernels.
Bubble sort, and variants such as the cocktail sort, are simple, highly inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of analysis, but they are rarely used in practice.
"Bubble sort" is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm's average time and worst-case performance is O("n"2), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if any number of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble sort's exchange will get them in order on the first pass, the second pass will find all elements in order, so the sort will take only 2"n" time.
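The pass structure above can be sketched in Python; the early exit when a pass makes no swaps is what gives the fast behaviour on nearly-sorted input:

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs,
    until a full pass makes no swaps."""
    n = len(items)
    while True:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:
            return items
        n -= 1  # the largest remaining element has settled at the end
```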
"Comb sort" is a relatively simple sorting algorithm based on bubble sort and originally designed by Włodzimierz Dobosiewicz in 1980. It was later rediscovered and popularized by Stephen Lacey and Richard Box with a "Byte" Magazine article published in April 1991. The basic idea is to eliminate "turtles", or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. ("Rabbits", large values around the beginning of the list, do not pose a problem in bubble sort.) It accomplishes this by initially swapping elements that are a certain distance from one another in the array, rather than only swapping elements if they are adjacent to one another, and then shrinking the chosen distance until it is operating as a normal bubble sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that swaps elements spaced a certain distance away from one another, comb sort can be thought of as the same generalization applied to bubble sort.
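A sketch in Python; the shrink factor of 1.3 is the commonly used empirical value, not mandated by the algorithm:

```python
def comb_sort(items):
    """Bubble sort with a shrinking comparison gap, which moves
    'turtles' (small values near the end) forward quickly."""
    gap = len(items)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))  # shrink toward a plain bubble sort
        swapped = False
        for i in range(len(items) - gap):
            if items[i] > items[i + gap]:
                items[i], items[i + gap] = items[i + gap], items[i]
                swapped = True
    return items
```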
"Distribution sort" refers to any sorting algorithm where data is distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.
Counting sort is applicable when each input is known to belong to a particular set, "S", of possibilities. The algorithm runs in O(|"S"| + "n") time and O(|"S"|) memory where "n" is the length of the input. It works by creating an integer array of size |"S"| and using the "i"th bin to count the occurrences of the "i"th member of "S" in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used because "S" needs to be reasonably small for the algorithm to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as "n" increases. It also can be modified to provide stable behavior.
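The counting procedure can be sketched in Python (an illustrative sketch; the `universe` parameter, which lists the members of "S" in the desired output order, is an assumption of this sketch rather than part of the algorithm's usual statement):

```python
def counting_sort(items, universe):
    """Sort items drawn from the known set `universe` in O(|S| + n) time."""
    index = {value: i for i, value in enumerate(universe)}
    counts = [0] * len(universe)          # one bin per member of S
    for item in items:
        counts[index[item]] += 1          # count occurrences of each member
    result = []
    for value, count in zip(universe, counts):
        result.extend([value] * count)    # emit each member as many times as it appeared
    return result
```

Because the output is rebuilt from the counts rather than by comparing elements, the running time depends on |"S"| and "n" but not on the initial order of the input.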
Bucket sort is a divide and conquer sorting algorithm that generalizes counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across all buckets.
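A minimal Python sketch of bucket sort, assuming the inputs are floats uniformly distributed in [0, 1) (the `num_buckets` parameter is a choice of this sketch):

```python
def bucket_sort(values, num_buckets=10):
    """Distribute values in [0, 1) into evenly spaced buckets, sort each, concatenate."""
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        buckets[int(v * num_buckets)].append(v)  # value's magnitude picks its bucket
    result = []
    for b in buckets:
        result.extend(sorted(b))  # any algorithm may sort the individual buckets
    return result
```

When the values really are evenly distributed, each bucket holds only a few elements, so the per-bucket sorts are cheap.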
"Radix sort" is an algorithm that sorts numbers by processing individual digits. "n" numbers consisting of "k" digits each are sorted in O("n" · "k") time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving their relative order using a stable sort. Then it sorts them by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves performance of radix sort significantly.
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".
Another technique for overcoming the memory-size problem is using external sorting, for example one of the ways is to combine two algorithms in a way that takes advantage of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted using an efficient algorithm (such as quicksort), and the results merged using a "k"-way merge similar to that used in mergesort. This is faster than performing either mergesort or quicksort over the entire list.
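The chunk-then-merge scheme can be sketched in Python (a hedged sketch: in a real external sort the sorted runs would live in files on disk, whereas here in-memory lists stand in for them):

```python
import heapq

def external_sort(chunks):
    """Sort each RAM-sized chunk independently, then k-way merge the sorted runs."""
    runs = [sorted(chunk) for chunk in chunks]  # each chunk sorted in memory, e.g. by quicksort
    return list(heapq.merge(*runs))             # k-way merge of the runs, as in mergesort
```

`heapq.merge` consumes the runs lazily, which matters in the real external case: only the head of each run needs to be in memory at once.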
Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Related problems include partial sorting (sorting only the "k" smallest elements of a list, or alternatively computing the "k" smallest elements, but unordered) and selection (computing the "k"th smallest element). These can be solved inefficiently by a total sort, but more efficient algorithms exist, often derived by generalizing a sorting algorithm. The most notable example is quickselect, which is related to quicksort. Conversely, some sorting algorithms can be derived by repeated application of a selection algorithm; quicksort and quickselect can be seen as the same pivoting move, differing only in whether one recurses on both sides (quicksort, divide and conquer) or one side (quickselect, decrease and conquer).
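The quicksort/quickselect relationship can be illustrated in Python (a sketch, not an optimized implementation; `k` is 0-based here):

```python
def quickselect(items, k):
    """Return the k-th smallest element using quicksort's pivoting move."""
    pivot = items[len(items) // 2]
    lows = [x for x in items if x < pivot]
    highs = [x for x in items if x > pivot]
    pivots = [x for x in items if x == pivot]
    if k < len(lows):
        return quickselect(lows, k)            # recurse on one side only
    if k < len(lows) + len(pivots):
        return pivot                           # the pivot itself is the answer
    return quickselect(highs, k - len(lows) - len(pivots))
```

Quicksort would recurse on both `lows` and `highs`; quickselect descends into only the side containing the "k"th element, which is why it is decrease-and-conquer rather than divide-and-conquer.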
A kind of opposite of a sorting algorithm is a shuffling algorithm. These are fundamentally different because they require a source of random numbers. Shuffling can also be implemented by a sorting algorithm, namely by a random sort: assigning a random number to each element of the list and then sorting based on the random numbers. This is generally not done in practice, however, and there is a well-known simple and efficient algorithm for shuffling: the Fisher–Yates shuffle.
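The Fisher–Yates shuffle can be sketched in Python (an illustrative sketch of the standard algorithm):

```python
import random

def fisher_yates_shuffle(items):
    """Shuffle a list in place, producing each permutation with equal probability."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)  # choose uniformly from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items
```

Unlike the random-sort approach, this runs in a single O("n") pass and needs only one random draw per element.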
Syracuse, New York
Syracuse () is a city in and the county seat of Onondaga County, New York, United States. It is the fifth-most populous city in the state of New York following New York City, Buffalo, Rochester, and Yonkers.
At the 2010 census, the city population was 145,252, and its metropolitan area had a population of 662,577. It is the economic and educational hub of Central New York, a region with over one million inhabitants. Syracuse is also well-provided with convention sites, with a downtown convention complex. Syracuse was named after the classical Greek city Syracuse ("Siracusa" in Italian), a city on the eastern coast of the Italian island of Sicily.
The city has functioned as a major crossroads over the last two centuries, first between the Erie Canal and its branch canals, then of the railway network. Today, Syracuse is at the intersection of Interstates 81 and 90. Its airport is the largest in the region. Syracuse is home to Syracuse University, a major research university; Le Moyne College, a Jesuit liberal arts college; SUNY Upstate Medical University, a public medical school; and SUNY College of Environmental Science & Forestry, a public university focusing on forestry, the environment, and natural resources.
French missionaries were the first Europeans to come to this area, arriving to work with the Native Americans in the 1600s. At the invitation of the Onondaga Nation, one of the five nations of the Iroquois Confederacy, a group of Jesuit priests, soldiers, and coureurs des bois (including Pierre Esprit Radisson) set up a mission, known as Sainte Marie among the Iroquois, or Ste. Marie de Gannentaha, on the northeast shore of Onondaga Lake.
Jesuit missionaries reported salty brine springs around the southern end of what they referred to as "Salt Lake", known today as Onondaga Lake in honor of the historic tribe. French fur traders established trade throughout the New York area among the Iroquois. Dutch and English colonists also were traders, and the English nominally claimed the area, from their upstate base at Albany, New York. During the American Revolutionary War, the highly decentralized Iroquois divided into groups and bands that supported the British, and two tribes that supported the American-born rebels, or patriots.
Settlers came into central and western New York from eastern parts of the state and New England after the American Revolutionary War and various treaties with and land sales by Native American tribes. The subsequent designation of this area by the state of New York as the Onondaga Salt Springs Reservation provided the basis for commercial salt production. Such production took place from the late 1700s through the early 1900s. Brine from wells that tapped into halite (common salt) beds in the Salina shale near Tully, New York, 15 miles south of the city, was developed in the 19th century. It is the north-flowing brine from Tully that is the source of salt for the "salty springs" found along the shoreline of Onondaga Lake. The rapid development of this industry in the 18th and 19th centuries led to the nicknaming of this area as "The Salt City".
The original settlement of Syracuse was a conglomeration of several small towns and villages and was not recognized with a post office by the United States Government. Establishing the post office was delayed because the settlement did not have a name. Joshua Forman wanted to name the village Corinth. When John Wilkinson applied for a post office in that name in 1820, it was denied because the same name was already in use in Saratoga County, New York. Having read a poetical description of Syracuse, Sicily (Siracusa), Wilkinson saw similarities to the lake and salt springs of this area, which had both "salt and freshwater mingling together". On February 4, 1820, Wilkinson proposed the name "Syracuse" to a group of fellow townsmen; it became the name of the village and the new post office.
The first Solvay Process Company plant in the United States was erected on the southwestern shore of Onondaga Lake in 1884. The village was called Solvay to commemorate the inventor, Ernest Solvay. In 1861, he developed the ammonia-soda process for the manufacture of soda ash (anhydrous sodium carbonate) from brine wells dug in the southern end of Tully valley (as a source of sodium chloride) and limestone (as a source of calcium carbonate). The process was an improvement over the earlier Leblanc process. The Syracuse Solvay plant was the incubator for a large chemical industry complex owned by Allied Signal in Syracuse. While this industry stimulated development and provided many jobs in Syracuse, it left Onondaga Lake as the most polluted in the nation.
The salt industry declined after the Civil War, but a new manufacturing industry arose in its place. Throughout the late 1800s and early 1900s, numerous businesses and stores were established, including the Franklin Automobile Company, which produced the first air-cooled engine in the world; the Century Motor Vehicle Company; the Smith Corona company; and the Craftsman Workshops, the center of Gustav Stickley's handmade furniture empire.
The Geneva Medical College was founded in 1834. It is now known as Upstate Medical University, one of four medical colleges in the State University of New York system, and one of only five medical schools in the state north of New York City.
On March 24, 1870, Syracuse University was founded. The State of New York granted the new university its own charter, independent of Genesee College, which had unsuccessfully tried to move to Syracuse the year before. The university was founded as coeducational. President Peck stated at the opening ceremonies, "The conditions of admission shall be equal to all persons... there shall be no invidious discrimination here against woman... brains and heart shall have a fair chance... " Syracuse implemented this policy and attracted a high proportion of women students. In the College of Liberal Arts, the ratio between male and female students during the 19th century was approximately even. The College of Fine Arts was predominantly female, and a low ratio of women enrolled in the College of Medicine and the College of Law.
The first New York State Fair was held in Syracuse in 1841. Between 1842 and 1889, the Fair was held among 11 New York cities before finding a permanent home in Syracuse. It has been an annual event since then, except between 1942 and 1947, when the grounds were used as a military base during World War II.
Amid the racial violence occurring across the country during the 1919 Red Summer, a violent riot broke out on July 31, 1919, between white and black workers at the Syracuse Globe Malleable Iron Works.
World War II stimulated significant industrial expansion in the area: specialty steel, fasteners, and custom machining. After the war, two of the Big Three automobile manufacturers (General Motors and Chrysler) had major operations in the area. Syracuse was also headquarters for Carrier Corporation, and Crouse-Hinds manufactured traffic signals in the city. General Electric, with its headquarters in Schenectady to the east, had its main television manufacturing plant at Electronics Parkway in Syracuse.
The manufacturing industry in Syracuse began to falter in the 1970s, as industry restructured nationwide. Many small businesses failed during this time, which contributed to the already increasing unemployment rate. Rockwell International moved its factory outside New York state. General Electric moved its television manufacturing operations to Suffolk, Virginia, and later offshore to Asia. The Carrier Corporation moved its headquarters out of Syracuse, relocated its manufacturing operations out of state, and outsourced some of its production to Asian facilities. Although the city population has declined since 1950, the Syracuse metropolitan area population has remained fairly stable, growing by 2.5 percent since 1970. While this growth rate is greater than much of Upstate New York, it is far below the national average during that period.
Syracuse is located at (43.046899, -76.144423), about east of Rochester, east of Buffalo, and west of the state capital Albany. It is also roughly the halfway point between New York City and Toronto, about from each: Toronto to the northwest and New York City to the southeast.
According to the United States Census Bureau, the city has a total area of , of which is land and (2.15%) water.
The city developed at the northeast corner of the Finger Lakes region. The city has many neighborhoods that were originally independent villages, which joined the city over the years. Although the central part of Syracuse is flat, many of its neighborhoods are on small hills such as University Hill and Tipperary Hill. Land to the north of Syracuse is generally flat, while land to the south is hilly.
About 27 percent of Syracuse's land area is covered by 890,000 trees — a higher percentage than in Albany, Rochester or Buffalo. The Labor Day Storm of 1998 was a derecho that destroyed approximately 30,000 trees. The sugar maple accounts for 14.2 percent of Syracuse's trees, followed by the Northern white cedar (9.8 percent) and the European buckthorn (6.8 percent). The most common street tree is the Norway maple (24.3 percent), followed by the honey locust (9.3 percent).
The densest tree cover in Syracuse is in the two Valley neighborhoods, where 46.6 percent of the land is covered by trees. The lowest tree cover percentage is found in the densely developed downtown, which has only 4.6 percent trees.
Syracuse's main water source is Skaneateles Lake, one of the country's cleanest lakes, located southwest of the city. Water from nearby Onondaga Lake is not drinkable due to industrial dumping that spanned many decades, leaving the lake heavily polluted. Incoming water is left unfiltered, and chlorine is added to prevent bacterial growth. Most of the environmental work to achieve lake cleanup was scheduled to be completed by 2016; however, Honeywell, the company tasked with the cleanup, announced the project's completion in late 2017. For periods of drought, there is also a backup line that uses water from Lake Ontario.
Onondaga Creek, a waterway that runs through downtown, flows northward through the city. The Onondaga Creekwalk borders it, connecting the Lakefront, Inner Harbor, Franklin Square and Armory Square neighborhoods. The creek continues through the Valley and ultimately to the Onondaga Nation. The creek is navigable but it can be a challenge. Its channelized nature speeds up its flow, particularly in the spring, when it may be dangerous. After some youngsters drowned in the creek, some residential areas fenced off the creek in their neighborhoods.
Syracuse has a humid continental climate ("Dfb") and is known for its high snowfall. On average, Syracuse receives the most annual snow of any metropolitan area in the United States. Syracuse usually wins the Golden Snowball Award among Upstate cities. Its record seasonal (July 1 to June 30 of the following year) snowfall so far is during the winter of 1992–93, while the snowiest calendar month was January 2004, with accumulated. The high snowfall is a result of the city receiving both heavy snow from the lake effect of nearby Lake Ontario (of the Great Lakes) and nor'easter snow from storms driven from the Atlantic Ocean. Snow most often falls in small (about ), almost daily doses, over a period of several days. Larger snowfalls do occur occasionally, and even more so in the northern suburbs.
The Blizzard of 1993 was described as the Storm of the Century. Some fell on the city within 48 hours, with falling within the first 24 hours. Syracuse received more snow than any other city in the country during this storm, which shattered a total of eight local records, including the most snow in a single snowstorm. A second notable snowfall was the Blizzard of 1966, with . The Blizzard of '58 occurred on February 16–17 across Oswego and Onondaga counties. This storm was classified as a blizzard due to the high winds, blowing snow, and cold; of snow was measured at Syracuse and drifts reached in Oswego County. (See Thirtieth Publication of the Oswego County Historical Society, (1969) and The Climate and Snow Climatology of Oswego N.Y., (1971))
Syracuse on average receives an annual precipitation of , with the months of July through September being the wettest in terms of total precipitation, while precipitation occurs on more days each month during the snow season.
The normal monthly mean temperature ranges from in January to in July. The record high of was recorded on July 9, 1936, and the record low of has occurred three times since 1942, the last being February 18, 1979.
In the early 21st century, previous heat records have been broken in the city. For example, the summers of 2005 and 2012 are, respectively, the hottest and fourth-hottest summers on record. Additionally, 2017 and 2018 saw consecutive monthly high temperature records broken in February, with highs of 71 degrees on February 24, 2017, and 75 degrees on February 21, 2018, in addition to four 60-degree days in a row. The latter was the warmest winter day on record.
As of the census of 2010, there were 145,170 people, 57,355 households, and 28,455 families residing in the city. The racial makeup of the city was 56.0% White, 29.5% African American, 1.1% Native American, 5.5% Asian, 0.03% Pacific Islander, 2.7% from other races, and 5.1% from two or more races. Hispanic or Latino of any race were 8.3% of the population.
The largest ancestries include African (29.5%), Irish (12.4%), Italian (12.3%), German (9.6%), English (4.5%), and Polish (3.6%). Non-Hispanic Whites were 52.8% of the population in 2010, down from 87.2% in 1970. Suburbanization drew residents out of the city, even as new immigrant and migrant groups arrived.
There were 57,355 households out of which 29% had children under the age of 18 living with them, 9.3% were married couples living together, 20.8% had a female householder with no husband present, and 50.4% were non-families. 38.4% of all households were made up of individuals and 10.4% had someone living alone who was 65 years of age or older. The average household size was 2.31 and the average family size was 3.14.
In the city, the population was spread out with 19% under the age of 15, 23% from 15 to 24, 25.6% from 25 to 44, 21.7% from 45 to 64, and 10.5% who were 65 years of age or older. The median age was 29.6 years. For every 100 females, there were 91 males. For every 100 females age 18 and over, there were 87.89 males.
According to the 2014 estimates from the American Community Survey, the median income for a household in the city was $31,566, and the median income for a family was $38,794. Males had a median income of $39,537 versus $33,983 for females. The per capita income for the city was $19,283. About 28.2% of families and 35.1% of the population were below the poverty line, including 50% of those under age 18 and 16.7% of those age 65 and over.
As of 2017, the United States Census Bureau indicated an estimated population of 143,396.
According to the 2010 United States Census, the population ages 16 and older commuted to work as follows:
Syracuse ranks 50th in the United States for transit ridership and 12th for most pedestrian commuters. Each day, 38,332 people commute into Onondaga County from the four adjoining counties (2006).
Christianity: Most Christians in Syracuse are Catholic, reflecting the influence of 19th and early 20th-century immigration patterns, when numerous Irish, German, Italian and eastern European Catholics settled in the city. The city has the Roman Catholic Cathedral of the Immaculate Conception. Syracuse is also home to the combined novitiate of the United States Northeast (UNE) and Maryland Provinces of the Society of Jesus (Jesuits). The historic Basilica of the Sacred Heart of Jesus is located near downtown (Roman Catholic, with Mass offered in English and Polish).
Another major historic church is the Episcopal St. Paul's Cathedral. Both cathedrals are located at Columbus Circle. They represent their respective dioceses, the Diocese of Syracuse (Roman Catholic) and the Diocese of Central New York (Episcopal).
The Assembly of God, the American Baptist Churches of the USA, the Southern Baptist Convention, and the United Church of Christ are other Protestant denominations, and they have their state offices in the Greater Syracuse area. The dozens of churches in Syracuse include Eastern Orthodox, Jehovah's Witness, Christian Science, Reformed Presbyterian, and Metaphysical Christian.
Buddhism: Buddhism is represented by the Zen Center of Syracuse on the Seneca Turnpike; as well as a center on Park Street, on the city's Northside.
Hinduism: Hindu houses of worship include the Hindu Mandir of Central New York in Liverpool.
Islam: The Islamic Society of Central New York Mosque is located on Comstock Avenue and Muhammad's Study Group on West Kennedy Street.
Judaism: Several synagogues are located in the Syracuse metropolitan area, including Beth Shalom-Chevra Chas, Temple Adath Yeshurun, Shaarei Torah Orthodox Congregation of Syracuse, and Temple Concord, considered the ninth-oldest Jewish house of worship in the United States.
Sikhism: The gurdwara is at the Sikh Foundation of Syracuse, in Liverpool.
Unitarian Universalism: Syracuse has two Unitarian Universalist societies: the May Memorial Unitarian Society and the First Unitarian Universalist Society of Syracuse.
Formerly a manufacturing center, Syracuse's economy has faced challenges over the past decades as industrial jobs have left the area. The number of local and state government jobs also has been declining for several years. Syracuse's top employers now are primarily in higher education, research, health care, and services; some high-tech manufacturing remains. University Hill is Syracuse's fastest growing neighborhood, fueled by expansions by Syracuse University and Upstate Medical University (a division of the State University of New York), as well as dozens of small medical office complexes.
Top employers in the Syracuse region and the size of their workforce include the following:
Bristol-Myers Squibb, founded by alumni of nearby Hamilton College, has a complex in East Syracuse.
Syracuse's unemployment rate in August 2017 was 4.6 percent, comparable to the national rate of 4.5 percent.
Since 1927 the State Tower Building has been the tallest in Syracuse.
The City of Syracuse officially recognizes 26 neighborhoods within its boundaries. Some of these have small additional neighborhoods and districts inside of them. In addition, Syracuse also owns and operates Syracuse Hancock International Airport on the territory of four towns north of the city.
Syracuse's neighborhoods reflect the historically ethnic and multicultural population. Traditionally, Irish, Polish and Ukrainian Americans settled on its west side; Jewish Americans on its east side; German and Italian Americans on the north side; and African-Americans on its south side.
In addition to the dominant Destiny USA shopping mall in Syracuse's Lakefront neighborhood, many of the city's more traditional neighborhoods continue to have active business districts:
Residents are assigned to schools in the Syracuse City School District. Syracuse City School District consists of 34 schools and 4 alternative education programs. In the 2014–2015 school year, the K-12 enrollment was 20,084. 15% of students were classified as English Language Learners, 20% as students with disabilities, and 77% as economically disadvantaged. The drop-out rate was 6%. Syracuse City School District is collaborating with Say Yes to Education with the goal of every public school student graduating from high school with the preparation and support to attain, afford, and complete a college or other postsecondary education. It is also one of the "Big 5," the five New York State school districts with populations over 125,000. "Big 5" school budgets are approved annually by the Board of Education and city government, as opposed to voters in an annual vote.
One of Syracuse's major research universities is Syracuse University, located on University Hill. It had an enrollment of 22,484 for the 2017–2018 academic year.
Immediately adjacent to Syracuse University are two doctoral-degree granting State University (SUNY) schools, the SUNY College of Environmental Science and Forestry and SUNY Upstate Medical University. Both institutions have long-standing ties to Syracuse University. SUNY Upstate Medical University is also one of Syracuse's major research universities and is one of only about 125 academic medical centers in the country. It is the region's largest employer.
Also serving Syracuse are Le Moyne College on the city's eastern border, and Onondaga Community College, which has its main campus in the adjacent Town of Onondaga and has two smaller campuses downtown and in Liverpool. A branch of SUNY's Empire State College is in downtown Syracuse, along with a campus of the nationwide Bryant & Stratton College. There are also the Pomeroy College of Nursing at Crouse Hospital and St. Joseph's College of Nursing.
Other colleges and universities in the area include Cornell University and Ithaca College in Ithaca, Hamilton College in Clinton, Oswego State College in Oswego, SUNY Cortland in Cortland, Morrisville State College in Morrisville, Colgate University in Hamilton, Cazenovia College in Cazenovia, Wells College in Aurora, and both Utica College and SUNY Institute of Technology in Utica.
Onondaga County Public Library (OCPL) operates Syracuse's public libraries. The system includes the Central Library, ten city libraries, and 21 independent libraries in suburban Onondaga County. A library card from any OCPL library will work at any of the other OCPL libraries.
Live jazz music is the centerpiece of two annual outdoor festivals in Syracuse, the Syracuse Jazz Festival and the CNY Jazz Arts Foundation's Jazz In The Square Festival, and also features at the annual Polish Festival. Performers in the last five years have included Chuck Mangione, Joshua Redman, Smokey Robinson, Branford Marsalis, The Bad Plus, Randy Brecker, Stanley Clarke, Jimmy Heath, Terrence Blanchard, Slide Hampton, Bobby Watson, Dr. John, and Aretha Franklin. The Polish Festival has hosted Grammy winners Jimmy Sturr and his Orchestra, Polish music legends Stan Borys and Irena Jarocka, Grammy nominee Lenny Goumulka, LynnMarie, Dennis Polisky & The Maestro's Men, Jerry Darlak and the Buffalo Touch, and The John Gora Band.
Syracuse was home to the 75-member Syracuse Symphony Orchestra (SSO), founded in 1961. The SSO's former Music Directors include Daniel Hege, Frederik Prausnitz and Kazuyoshi Akiyama. The orchestra performed over 200 concerts annually for an audience of over 250,000. The SSO filed for Chapter 7 Bankruptcy in 2011 and was replaced by the Syracuse Symphoria in 2013.
The Clinton String Quartet has been active for over 15 years and is based in the Syracuse area. All four members were also members of the Syracuse Symphony Orchestra.
The Syracuse Friends of Chamber Music for more than a half century have presented a series of concerts by various chamber ensembles.
The Society for New Music, founded in 1971, is the oldest new music organization in the state outside of New York City, and the only year-round new music group in upstate New York. The Society commissions at least one new work each year from a regional composer, awards the annual Brian Israel Prize to a promising composer under 30 years of age, and produces the weekly "Fresh Ink" radio broadcast for WCNY-FM.
The Syracuse Opera Company is a professional company that generally performs three operas each season. Founded in 1963 as the Opera Chorus of the Syracuse Symphony Orchestra, it became independent in 1973. In addition to full performances, it offers several free outdoor concerts each year in Armory Square, Thornden Park, and elsewhere. The company has an annual budget of US$1 million and is the only professional opera company in upstate New York.
The Syracuse Shakespeare Festival is a charitable, educational, not-for-profit corporation dedicated to performing the works of William Shakespeare. It was founded in 2002 and is best known for its annual free Shakespeare-in-the-Park program at the Thornden Park Amphitheatre that has attracted more than 12,000 people since its inception.
Syracuse Stage presents experimental and creative theater; a number of its productions have been world premieres and have moved to Broadway. The venue was designed by its most famous former artistic director Arthur Storch. Its artistic director is Robert Hupp.
The Red House Arts Center, which opened in 2004, is a small theater housed in a converted hotel. It offers performances by local, national, and international artists, hosts regular exhibits in its art gallery, and screens independent films.
Syracuse is also known for a large contemporary music scene, particularly in the heavy metal, hardcore, ska, and punk rock genres.
The City of Syracuse maintains over 170 parks, fields, and recreation areas, totaling over . Burnet Park includes the first public golf course in the United States (1901) and Rosamond Gifford Zoo. Other major parks include Thornden Park, Schiller Park, Sunnycrest Park, Onondaga Park and Kirk Park. There are 12 public pools, two public ice rinks, and two public nine-hole golf courses in the city.
Right outside the city proper, along the east side and north end of Onondaga Lake, is Onondaga Lake Park. The adjacent Onondaga Lake Parkway is closed to vehicular traffic several hours on Sundays during the summer months, so it can be used for walking, running, biking, and rollerblading. During the holiday season, the park hosts Lights on the Lake, a drive-through light show.
Syracuse is served by the Central New York Regional Transportation Authority, or Centro. Centro operates bus service in Syracuse and its suburbs, as well as to outlying metropolitan area cities such as Auburn, Fulton, and Oswego.
Proposed public transit projects
In 2005, local millionaire Tom McDonald proposed an aerial tramway system, called Salt City Aerial Transit (S.C.A.T.), to link the university to the transportation center. The first segment from Syracuse University to downtown was estimated to cost $5 million, which McDonald planned to raise himself. Due to perceived low operating costs, the system was envisioned as running continuously.
The Pyramid Companies have also proposed a monorail linking Syracuse University to Hancock International Airport via downtown, their proposed Destiny USA, the William F. Walsh Regional Transportation Center, and their proposed Destiny Technology Park. The cost of such a line has been estimated at $750 million.
The city is served by Amtrak's Empire Service, Lake Shore Limited, and Maple Leaf lines. Amtrak's station is part of the William F. Walsh Regional Transportation Center.
The Empire Service runs twice daily in each direction between Niagara Falls, NY, and New York Penn Station, with major stops in Buffalo, Rochester, Syracuse, Utica, and Albany along the way. The Maple Leaf runs once daily in each direction and follows the same route as the Empire Service; however, instead of terminating in Niagara Falls, it continues on to Toronto.
The Lake Shore Limited runs once daily in each direction between Chicago and Boston or New York (via two sections east of Albany). It follows the same route as the Empire Service and Maple Leaf between New York City and Buffalo-Depew, where it diverges and continues on through Cleveland and Toledo to Chicago.
A regional commuter rail service, OnTrack, was active from 1994 until it was discontinued in 2007 due to low ridership. Its sole route connected the Carousel Center to southern Syracuse, often extending to Jamesville in the summer.
Greyhound Lines, Megabus, OurBus and Trailways provide long-distance bus service to destinations including New York City, Boston, Buffalo, Albany, and Toronto. Greyhound, Megabus, and Trailways use the William F. Walsh Regional Transportation Center in the northern area of the city, while OurBus stops near the campus of Syracuse University.
Syracuse is served by the Syracuse Hancock International Airport in nearby Salina, near Mattydale. The airport is served by six major airlines, which provide non-stop flights to major airline hubs and business centers such as Atlanta, Boston, Charlotte, Chicago, Denver, Detroit, Ft. Lauderdale, New York City, Orlando, Philadelphia, Tampa, and Washington, DC. Cargo carriers FedEx and UPS also serve the airport. New York City can be reached by air in under an hour.
Four Interstate Highways run through the Syracuse area:
Two US Highways run through the Syracuse area:
New York State Route Expressways:
New York State Routes
Public services such as garbage pickup, street plowing, sewage, street, and park maintenance, and traffic maintenance are provided by the Department of Public Works (DPW).
The Syracuse water system, one of the few water systems built and operated before federal funding existed, was constructed mainly to support the industries around Syracuse. Construction began in 1868.
In 2015, the city experienced an average of at least one water main break per day. Between 2005 and 2015, the city suffered 2,000 water main breaks. Mayor Stephanie Miner estimated the cost to fix the city's water infrastructure at $1 billion over a 10-15 year period. On February 25, 2015, Miner testified before a joint hearing of the state Assembly Ways and Means Committee and state Senate Finance Committee that the 2014 polar vortex contributed to the increase in Syracuse's water main breaks.
On March 3, the 100th water main break in Syracuse in 2015 occurred on James Street. Early in 2015, Miner lobbied the state for funding to fix the city's aging water system. New York Governor Andrew Cuomo declined to help, stating that the city should improve its economy and increase tax revenues, which would enable the city to fund their own water pipe repairs.
The city is headed by an elected mayor who is limited to two four-year terms. On November 7, 2017, Ben Walsh was elected mayor. He took office in January 2018 as the first independent mayor of Syracuse in over 100 years; the last, Louis Will, was elected in 1913. The previous mayor was former Common Councilor At-Large Stephanie Miner, who was elected on November 3, 2009, as the first female mayor of Syracuse. Miner was preceded by former Syracuse Common Council President Matthew Driscoll, who first assumed the position in 2001 after the former mayor, Roy Bernardi, resigned upon his appointment by President George W. Bush to a position in the Department of Housing and Urban Development. After serving the remaining term, Driscoll was re-elected that year, and again in 2005.
The legislative branch of Syracuse is the Syracuse Common Council. It consists of a president and nine members:
The Onondaga County Supreme and County Court is the trial court of general jurisdiction for Syracuse. It is also the administrative court for the Fifth District of the New York State Unified Court System. Judges for these courts are elected at-large.
The U.S. District Court for the Northern District of New York also holds court in downtown Syracuse at the James Hanley Federal Building.
The Syracuse Police Department (SPD) is the principal law enforcement agency of the city of Syracuse, New York. For 2017–18, the police department budget was $48.5 million. Effective December 3, 2018, Kenton Buckner is the city's new Chief of Police. Police headquarters is in the John C. Dillon Public Safety Building at 511 South State Street.
Established in 2011, SPD operates a network of 140 surveillance cameras called the Criminal Observation and Protection System (COPS). Between 2011 and 2014 more than 40 utility pole mounted cameras were installed, mainly in the Southwest and Northeast neighborhoods. The cameras were funded by federal, state, and private grants. In Summer 2014, 10 cameras were approved for installation in Downtown Syracuse, the first area chosen for reasons other than high levels of violent crime. Live monitoring of Clinton Square for suspicious people during events and festivals was planned, although police agreed to a prohibition on the use of cameras to monitor protests. Twenty-five additional cameras were planned to be installed in 2016.
In spring 2017 the surveillance system was augmented with the installation of ShotSpotter gunshot detection sensors. Syracuse Mayor Stephanie Miner cited increasing public acceptance of police cameras and lower technology costs as factors in the decision.
The Syracuse Fire Department (SFD) protects the City of Syracuse from fires and other dangers. The Department provides multiple services in addition to fire-related calls: multi-county regional HAZ-MAT response, first response to medical and trauma calls, unmanned aerial vehicle (drone) capabilities, and teams experienced in high-angle rope, swift water, and confined space rescues. The Chief of Fire is Michael J. Monds. SFD headquarters is in the John C. Dillon Public Safety Building at 511 South State Street. The Department has a Class 1 rating from the Insurance Services Office. This is the best rating obtainable and has a direct effect on the fire insurance of any property within the city. The SFD currently operates out of 11 fire stations, organized into three districts (akin to battalions), located throughout the city. The SFD maintains nine engine companies (operating nine corresponding mini units), five truck companies, one heavy rescue company, a manpower-squad company, and several special and support units. The department also provides primary response coverage and ARFF coverage to the Syracuse Clarence E. Hancock International Airport (station 4).
Syracuse has one major daily morning newspaper, "The Post-Standard". Until 2001, Syracuse also had an evening paper, "The Herald-Journal". Besides a Syracuse/Onondaga County edition, "The Post-Standard" publishes three additional editions: Cayuga, Madison, and Oswego for the other three counties of the metropolitan area, plus an additional edition on Sundays. It has six news bureaus throughout Central New York, as well as one in Albany (state capital) and Washington, DC.
Before the merger with the evening paper, the "Post-Standard" was named among the "10 best newspapers in America with a circulation of under 100,000" by Al Neuharth of USA Today (run by a competing organization). Since the merger, circulation has increased to over 120,000. Even outside of its four-county delivery area, the paper is available in many convenience stores and supermarkets from the Canada–US border to the New York–Pennsylvania border. The newspaper partly caters to this audience as well, covering many stories from the Ithaca, Utica, and Watertown areas. Since opening a new printing press in 2002, the paper calls itself "America's Most Colorful Newspaper," as almost every page contains color.
"The Daily Orange", the newspaper of Syracuse University and SUNY ESF students, is read by over 20,000 people daily, and is widely distributed in the University Hill neighborhood and Armory Square. "The Dolphin", the weekly student newspaper of Le Moyne College is also available, read mainly by Le Moyne students.
There are other popular free newspapers, including "Eagle Newspaper"'s downtown edition, the "City Eagle", and "Table Hopping", which focuses on the restaurant and entertainment scene. Additionally, there's a weekly newspaper, CNY Vision, that publishes news and information focusing on the local African American community.
Syracuse has seven full-power broadcast television stations and one major low-power station:
*NBC 3, CBS 5, and CW 6 are all owned and operated by Sinclair Media under the name CNY-Central
Additionally, networks such as Cornerstone Television channel 11 & 22, Univision, and MTV2 are broadcast by low-power television stations.
Syracuse University's student-run TV station is CitrusTV. CitrusTV programming is broadcast on the university campus on the Orange Television Network. The station also provides content to Spectrum Sports. Online, CitrusTV programs can be found on CitrusTV.net and the Post-Standard's Syracuse.com.
Syracuse's cable television provider is Charter Spectrum (Charter Communications acquired Time Warner Cable in 2016), which, as a part of its regular and digital offerings, provides a 24-hour local news channel (Spectrum News Central New York), public access channel, and an additional PBS channel. Several suburbs also have access to Verizon Fios for cable television.
Dish Network and DirecTV also provide local satellite television subscribers with local broadcast stations.
Professional teams in Syracuse include:
College teams in Syracuse include:
Syracuse University sports are by far the most attended sporting events in the Syracuse area. Basketball games often draw over 30,000 fans, and football games over 40,000. The university has produced dozens of famous professional players since starting an athletics program in the late nineteenth century, including all-time greats Jim Brown, Larry Csonka, and Dave Bing. Both teams play in the Carrier Dome.
In June 2018, it was abruptly announced that the Syracuse Silver Knights would move to Utica, NY, to play as Utica City FC in the Adirondack Bank Center.
Syracuse's sister cities are:
Sleep apnea
Sleep apnea, also spelled sleep apnoea, is a sleep disorder where a person has pauses in breathing or periods of shallow breathing during sleep. Each pause can last for a few seconds to a few minutes and they happen many times a night. In the most common form, this follows loud snoring. There may be a choking or snorting sound as breathing resumes. Because the disorder disrupts normal sleep, those affected may experience sleepiness or feel tired during the day. In children it may cause hyperactivity or problems in school.
Sleep apnea may be either obstructive sleep apnea (OSA) in which breathing is interrupted by a blockage of air flow, central sleep apnea (CSA) in which regular unconscious breath simply stops, or a combination of the two. Obstructive (OSA) is the most common form. Risk factors for OSA include being overweight, a family history of the condition, allergies, a small breathing airway, and enlarged tonsils. Some people with sleep apnea are unaware they have the condition. In many cases it is first observed by a family member. Sleep apnea is often diagnosed with an overnight sleep study. For a diagnosis of sleep apnea, more than five episodes per hour must occur.
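The "more than five episodes per hour" threshold above is the basis of the Apnea–Hypopnea Index (AHI) reported by sleep studies. As a rough illustration, a minimal sketch of the calculation follows; the mild/moderate/severe bands used here are the commonly cited clinical cut-offs and are an assumption, not figures taken from this text:

```python
# Sketch: computing an Apnea-Hypopnea Index (AHI) from a sleep study.
# The >5 events/hour diagnostic threshold is stated in the text; the
# severity bands (5-15 mild, 15-30 moderate, >30 severe) are assumed
# common clinical cut-offs, not taken from this article.

def apnea_hypopnea_index(num_events, hours_of_sleep):
    """Apneas plus hypopneas per hour of sleep."""
    if hours_of_sleep <= 0:
        raise ValueError("hours_of_sleep must be positive")
    return num_events / hours_of_sleep

def classify(ahi):
    """Map an AHI value to an (assumed) severity band."""
    if ahi <= 5:
        return "normal"    # five or fewer events/hour: below threshold
    elif ahi <= 15:
        return "mild"
    elif ahi <= 30:
        return "moderate"
    return "severe"

# Example: 48 events over 6 hours of sleep gives an AHI of 8.0
ahi = apnea_hypopnea_index(48, 6)
print(ahi, classify(ahi))
```

In practice the index is computed by polysomnography software from scored events, but the arithmetic is exactly this simple ratio.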
Treatment may include lifestyle changes, mouthpieces, breathing devices, and surgery. Lifestyle changes may include avoiding alcohol, losing weight, stopping smoking, and sleeping on one's side. Breathing devices include the use of a CPAP machine. Without treatment, sleep apnea may increase the risk of heart attack, stroke, diabetes, heart failure, irregular heartbeat, obesity, and motor vehicle collisions.
OSA affects 1 to 6% of adults and 2% of children. It affects males about twice as often as females. While people of any age can be affected, it occurs most commonly among those 55 to 60 years old. CSA affects less than 1% of people. A type of CSA was described in the German myth of Ondine's curse, in which a person would forget to breathe while asleep.
People with sleep apnea have problems with excessive daytime sleepiness (EDS), impaired alertness, and vision problems. OSA may increase risk for driving accidents and work-related accidents. If OSA is not treated, people are at increased risk of other health problems, such as diabetes. Death could occur from untreated OSA due to lack of oxygen to the body.
Due to the disruption in daytime cognitive state, behavioral effects may be present. These can include moodiness, belligerence, as well as a decrease in attentiveness and energy. These effects may become intractable, leading to depression.
There is evidence that the risk of diabetes among those with moderate or severe sleep apnea is higher. There is increasing evidence that sleep apnea may lead to liver function impairment, particularly fatty liver diseases (see steatosis). Finally, because there are many factors that could lead to some of the effects previously listed, some people are not aware that they have sleep apnea and are either misdiagnosed or ignore the symptoms altogether.
Sleep apnea can affect people regardless of sex, race, or age. However, risk factors include:
Alcohol, sedatives and tranquilizers may also promote sleep apnea by relaxing throat muscles. People who smoke tobacco have sleep apnea at three times the rate of people who have never done so.
Central sleep apnea is more often associated with any of the following risk factors:
High blood pressure is very common in people with sleep apnea.
When breathing is paused, carbon dioxide builds up in the bloodstream. Chemoreceptors in the blood stream note the high carbon dioxide levels. The brain is signaled to awaken the person, which clears the airway and allows breathing to resume. Breathing normally will restore oxygen levels and the person will fall asleep again. This carbon dioxide build-up may be due to the decrease of output of the brainstem regulating the chest wall or pharyngeal muscles, which causes the pharynx to collapse. People with sleep apnea experience reduced or no slow-wave sleep and spend less time in REM sleep.
Despite broad medical consensus on these definitions, the variety of apneic events (e.g., hypopnea vs. apnea, central vs. obstructive), the variability of patients' physiologies, and the inherent shortcomings and variability of equipment and methods leave this field subject to debate.
Within this context, the definition of an event depends on several factors (e.g., the patient's age); several, sometimes conflicting, guidelines account for this variability through multi-criteria decision rules.
Oximetry, which may be performed over one or several nights in a person's home, is a simpler, but less reliable alternative to a polysomnography. The test is recommended only when requested by a physician and should not be used to test those without symptoms. Home oximetry may be effective in guiding prescription for automatically self-adjusting continuous positive airway pressure.
There are three types of sleep apnea. OSA accounts for 84%, CSA for 0.4%, and 15% of cases are mixed.
Obstructive sleep apnea (OSA) is the most common category of sleep-disordered breathing. The muscle tone of the body ordinarily relaxes during sleep, and at the level of the throat, the human airway is composed of collapsible walls of soft tissue that can obstruct breathing. Mild occasional sleep apnea, such as many people experience during an upper respiratory infection, may not be significant, but chronic severe obstructive sleep apnea requires treatment to prevent low blood oxygen (hypoxemia), sleep deprivation, and other complications.
Individuals with low muscle tone, soft tissue around the airway (e.g., because of obesity), or structural features that give rise to a narrowed airway are at high risk for obstructive sleep apnea. The elderly are more likely to have OSA than young people, and men more likely than women or children, though the condition is not uncommon in the latter two groups.
The risk of OSA rises with increasing body weight, active smoking and age. In addition, patients with diabetes or "borderline" diabetes have up to three times the risk of having OSA.
Common symptoms include loud snoring, restless sleep, and sleepiness during the daytime. Diagnostic tests include home oximetry or polysomnography in a sleep clinic.
Some treatments involve lifestyle changes, such as avoiding alcohol or muscle relaxants, losing weight, and quitting smoking. Many people benefit from sleeping at a 30-degree elevation of the upper body or higher, as if in a recliner. Doing so helps prevent the gravitational collapse of the airway. Lateral positions (sleeping on a side), as opposed to supine positions (sleeping on the back), are also recommended as a treatment for sleep apnea, largely because the gravitational component is smaller in the lateral position. Some people benefit from various kinds of oral appliances such as the Mandibular advancement splint to keep the airway open during sleep. Continuous positive airway pressure (CPAP) is the most effective treatment for severe obstructive sleep apnea, but oral appliances are considered a first-line approach equal to CPAP for mild to moderate sleep apnea, according to the AASM parameters of care. There are also surgical procedures to remove and tighten tissue and widen the airway.
Snoring is a common finding in people with this syndrome. Snoring is the turbulent sound of air moving through the back of the mouth, nose, and throat. Although not everyone who snores is experiencing difficulty breathing, snoring in combination with other risk factors has been found to be highly predictive of OSA. The loudness of the snoring is not indicative of the severity of obstruction, however. If the upper airways are tremendously obstructed, there may not be enough air movement to make much sound. Even the loudest snoring does not mean that an individual has sleep apnea syndrome. The sign that is most suggestive of sleep apneas occurs when snoring "stops".
Up to 78% of genes associated with habitual snoring also increase the risk for OSA.
Other indicators include (but are not limited to): hypersomnolence, obesity (BMI >30), large neck circumference, enlarged tonsils and large tongue volume, micrognathia, morning headaches, irritability/mood swings/depression, learning and/or memory difficulties, and sexual dysfunction.
The term "sleep-disordered breathing" is commonly used in the U.S. to describe the full range of breathing problems during sleep in which not enough air reaches the lungs (hypopnea and apnea). Sleep-disordered breathing is associated with an increased risk of cardiovascular disease, stroke, high blood pressure, arrhythmias, diabetes, and sleep deprived driving accidents. When high blood pressure is caused by OSA, it is distinctive in that, unlike most cases of high blood pressure (so-called essential hypertension), the readings do "not" drop significantly when the individual is sleeping. Stroke is associated with obstructive sleep apnea.
Research has shown that people with OSA exhibit tissue loss in brain regions that help store memory, linking OSA with memory loss. Using magnetic resonance imaging (MRI), scientists discovered that people with sleep apnea have mammillary bodies that are about 20 percent smaller, particularly on the left side. One of the key investigators hypothesized that repeated drops in oxygen lead to the brain injury.
Obstructive sleep apnea is associated with problems in daytime functioning, such as daytime sleepiness, motor vehicle crashes, psychological problems, decreased cognitive functioning, and reduced quality of life. Other associated problems include cerebrovascular diseases (hypertension, coronary artery disease, and stroke) and diabetes. These problems could be, at least in part, caused by risk factors of OSA.
In pure central sleep apnea or Cheyne–Stokes respiration, the brain's respiratory control centers are imbalanced during sleep. Blood levels of carbon dioxide, and the neurological feedback mechanism that monitors them, do not react quickly enough to maintain an even respiratory rate, with the entire system cycling between apnea and hyperpnea, even during wakefulness. The sleeper stops breathing and then starts again. There is no effort made to breathe during the pause in breathing: there are no chest movements and no struggling. After the episode of apnea, breathing may be faster (hyperpnea) for a period of time, a compensatory mechanism to blow off retained waste gases and absorb more oxygen.
While sleeping, a normal individual is "at rest" as far as cardiovascular workload is concerned. Breathing is regular in a healthy person during sleep, and oxygen levels and carbon dioxide levels in the bloodstream stay fairly constant. Any sudden drop in oxygen or excess of carbon dioxide (even if tiny) strongly stimulates the brain's respiratory centers to breathe.
In central sleep apnea (CSA), the basic neurological controls for breathing rate malfunction and fail to give the signal to inhale, causing the individual to miss one or more cycles of breathing. If the pause in breathing is long enough, the percentage of oxygen in the circulation will drop to a lower than normal level (hypoxaemia) and the concentration of carbon dioxide will build to a higher than normal level (hypercapnia). In turn, these conditions of hypoxia and hypercapnia will trigger "additional" effects on the body. Brain cells need constant oxygen to live, and if the level of blood oxygen goes low enough for long enough, the consequences of brain damage and even death will occur. However, central sleep apnea is more often a chronic condition that causes much milder effects than sudden death. The exact effects of the condition will depend on how severe the apnea is and on the individual characteristics of the person having the apnea. Several examples are discussed below, and more about the nature of the condition is presented in the section on Clinical Details.
In any person, hypoxia and hypercapnia have certain common effects on the body. The heart rate will increase, unless there are such severe co-existing problems with the heart muscle itself or the autonomic nervous system that makes this compensatory increase impossible. The more translucent areas of the body will show a bluish or dusky cast from cyanosis, which is the change in hue that occurs owing to lack of oxygen in the blood ("turning blue"). Overdoses of drugs that are respiratory depressants (such as heroin, and other opiates) kill by damping the activity of the brain's respiratory control centers. In central sleep apnea, the effects of sleep "alone" can remove the brain's mandate for the body to breathe.
Some people with sleep apnea have a combination of both types; its prevalence ranges from 0.56% to 18%. The condition is generally detected when obstructive sleep apnea is treated with CPAP and central sleep apnea emerges. The exact mechanism of the loss of central respiratory drive during sleep in OSA is unknown but is most likely related to incorrect settings of the CPAP treatment and other medical conditions the person has.
The treatment of obstructive sleep apnea is different than that of central sleep apnea. Treatment often starts with behavioral therapy. Many people are told to avoid alcohol, sleeping pills, and other sedatives, which can relax throat muscles, contributing to the collapse of the airway at night.
For moderate to severe sleep apnea, the most common treatment is the use of a continuous positive airway pressure (CPAP) or automatic positive airway pressure (APAP) device. These splint the person's airway open during sleep by means of pressurized air. The person typically wears a plastic facial mask, which is connected by a flexible tube to a small bedside CPAP machine.
With proper use, CPAP improves outcomes. Whether or not it decreases the risk of death or heart disease is controversial, with some reviews finding benefit and others not. This variation across studies might be driven by low rates of compliance: analyses of those who use CPAP for at least four hours a night suggest a decrease in cardiovascular events. Evidence suggests that CPAP may improve sensitivity to insulin, blood pressure, and sleepiness. Long-term compliance, however, is an issue, with more than half of people not appropriately using the device.
Although CPAP therapy is effective in reducing apneas and less expensive than other treatments, some people find it uncomfortable. Some complain of feeling trapped, having chest discomfort, and skin or nose irritation. Other side effects may include dry mouth, dry nose, nosebleeds, sore lips and gums.
Excess body weight is thought to be an important cause of sleep apnea. People who are overweight have more tissue in the back of the throat, which can restrict the airway, especially during sleep. In weight-loss studies of overweight individuals, those who lose weight show reduced apnea frequencies and an improved Apnoea–Hypopnoea Index (AHI). Weight loss sufficient to relieve obesity hypoventilation syndrome (OHS) must be 25–30% of body weight, a result that is difficult to achieve and maintain without bariatric surgery.
Several surgical procedures (sleep surgery) are used to treat sleep apnea, although they are normally a third line of treatment for those who reject or are not helped by CPAP treatment or dental appliances. Surgical treatment for obstructive sleep apnea needs to be individualized to address all anatomical areas of obstruction.
Often, correction of the nasal passages needs to be performed in addition to correction of the oropharynx passage. Septoplasty and turbinate surgery may improve the nasal airway.
Tonsillectomy and uvulopalatopharyngoplasty (UPPP or UP3) are available to address pharyngeal obstruction.
The "Pillar" device is a treatment for snoring and obstructive sleep apnea; it is thin, narrow strips of polyester. Three strips are inserted into the roof of the mouth (the soft palate) using a modified syringe and local anesthetic, in order to stiffen the soft palate. This procedure addresses one of the most common causes of snoring and sleep apnea — vibration or collapse of the soft palate. It was approved by the FDA for snoring in 2002 and for obstructive sleep apnea in 2004. A 2013 meta-analysis found that "the Pillar implant has a moderate effect on snoring and mild-to-moderate obstructive sleep apnea" and that more studies with high level of evidence were needed to arrive at a definite conclusion; it also found that the polyester strips work their way out of the soft palate in about 10% of the people in whom they are implanted.
Base-of-tongue advancement by means of advancing the genial tubercle of the mandible, tongue suspension, or hyoid suspension (aka hyoid myotomy and suspension or hyoid advancement) may help with the lower pharynx.
Other surgical options attempt to shrink or stiffen excess tissue in the mouth or throat; these procedures are performed in a doctor's office or a hospital. Small injections or other treatments, sometimes in a series, are used for shrinkage, while the insertion of a small piece of stiff plastic is used when the goal is to stiffen tissues.
Maxillomandibular advancement (MMA) is considered the most effective surgery for people with sleep apnea, because it increases the posterior airway space (PAS). However, health professionals are often unsure as to who should be referred for surgery and when to do so: some factors in referral may include failed use of CPAP or device use; anatomy which favors rather than impedes surgery; or significant craniofacial abnormalities which hinder device use.
Several inpatient and outpatient procedures use sedation. Many drugs and agents used during surgery to relieve pain and to depress consciousness remain in the body at low amounts for hours or even days afterwards. In an individual with either central, obstructive or mixed sleep apnea, these low doses may be enough to cause life-threatening irregularities in breathing or collapses in a patient's airways. Use of analgesics and sedatives in these patients postoperatively should therefore be minimized or avoided.
Surgery on the mouth and throat, as well as dental surgery and procedures, can result in postoperative swelling of the lining of the mouth and other areas that affect the airway. Even when the surgical procedure is designed to improve the airway, such as tonsillectomy and adenoidectomy or tongue reduction, swelling may negate some of the effects in the immediate postoperative period. Once the swelling resolves and the palate becomes tightened by postoperative scarring, however, the full benefit of the surgery may be noticed.
A person with sleep apnea undergoing any medical treatment must make sure his or her doctor and anesthetist are informed about the sleep apnea. Alternative and emergency procedures may be necessary to maintain the airway of sleep apnea patients.
Diaphragm pacing, which involves the rhythmic application of electrical impulses to the diaphragm, has been used to treat central sleep apnea.
In April 2014 the U.S. Food and Drug Administration granted pre-market approval for use of an upper airway stimulation system in people who cannot use a continuous positive airway pressure device. The Inspire Upper Airway Stimulation system senses respiration and applies mild electrical stimulation during inspiration, which pushes the tongue slightly forward to open the airway.
There is currently insufficient evidence to recommend any medication for OSA. Limited evidence suggests that acetazolamide "may be considered" for the treatment of central sleep apnea, and that zolpidem and triazolam may be considered as well, but "only if the patient does not have underlying risk factors for respiratory depression". Low doses of oxygen are also used as a treatment for hypoxia but are discouraged due to side effects.
An oral appliance, often referred to as a mandibular advancement splint, is a custom-made mouthpiece that shifts the lower jaw forward and opens the bite slightly, opening up the airway. These devices can be fabricated by a general dentist. Oral appliance therapy (OAT) is usually successful in patients with mild to moderate obstructive sleep apnea. While CPAP is more effective for sleep apnea than oral appliances, oral appliances do improve sleepiness and quality of life and are often better tolerated than CPAP.
Nasal EPAP is a bandage-like device placed over the nostrils that uses a person's own breathing to create positive airway pressure to prevent obstructed breathing.
Oral pressure therapy uses a device that creates a vacuum in the mouth, pulling the soft palate tissue forward. It has been found useful in about 25 to 37% of people.
The Wisconsin Sleep Cohort Study estimated in 1993 that roughly one in every 15 Americans was affected by at least moderate sleep apnea. It also estimated that in middle-age as many as nine percent of women and 24 percent of men were affected, undiagnosed and untreated.
The costs of untreated sleep apnea reach further than just health issues. It is estimated that in the U.S., the average untreated sleep apnea patient's annual health care costs $1,336 more than an individual without sleep apnea. This may cause $3.4 billion/year in additional medical costs. Whether medical cost savings occur with treatment of sleep apnea remains to be determined.
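As a rough consistency check on the figures above, the implied number of untreated patients can be computed from the quoted per-patient and national totals (the arithmetic below is illustrative only; the dollar figures are from the text, the computation is not):

```python
# Back-of-envelope check: how many untreated patients are implied by
# $3.4 billion/year of extra costs at $1,336 extra per patient per year?
per_patient_extra = 1336        # USD of additional annual cost per untreated patient
total_extra = 3.4e9             # USD per year of additional medical costs, as quoted
implied_patients = total_extra / per_patient_extra
print(f"{implied_patients / 1e6:.2f} million patients")  # ≈ 2.54 million
```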
A type of CSA was described in the German myth of Ondine's curse, in which the person, when asleep, would forget to breathe. The clinical picture of this condition has long been recognized as a character trait, without an understanding of the disease process. The term "Pickwickian syndrome", sometimes used for the syndrome, was coined by the famous early 20th-century physician William Osler, who must have been a reader of Charles Dickens. The description of Joe, "the fat boy", in Dickens's novel "The Pickwick Papers" is an accurate clinical picture of an adult with obstructive sleep apnea syndrome.
The early reports of obstructive sleep apnea in the medical literature described individuals who were very severely affected, often presenting with severe hypoxemia, hypercapnia and congestive heart failure.
The management of obstructive sleep apnea was improved with the introduction of continuous positive airway pressure (CPAP), first described in 1981 by Colin Sullivan and associates in Sydney, Australia. The first models were bulky and noisy, but the design was rapidly improved and by the late 1980s CPAP was widely adopted. The availability of an effective treatment stimulated an aggressive search for affected individuals and led to the establishment of hundreds of specialized clinics dedicated to the diagnosis and treatment of sleep disorders. Though many types of sleep problems are recognized, the vast majority of patients attending these centers have sleep-disordered breathing. Sleep apnea awareness day is April 18 in recognition of Colin Sullivan.
South African English
South African English (SAfrE, SAfrEng, SAE, en-ZA) is the set of English dialects native to South Africans.
British colonists first colonised the South African region in 1795, when they established a military holding operation at the Cape Colony. The goal of this first endeavour was to gain control of a key Cape sea route, not to establish a permanent settler colony. The first major influx of English speakers arrived in 1820. About 5000 British settlers, mostly rural or working class, settled in the eastern Cape. Though the British were a minority colonist group (the Dutch had been in the region since 1652, when traders from the Dutch East India Company developed an outpost), the Cape Colony governor, Lord Charles Somerset, declared English an official language in 1822. To spread the influence of English in the colony, officials began to recruit British schoolmasters and Scottish clergy to occupy positions in the education and church systems. Another group of English speakers arrived from Britain in the 1840s and 1850s, along with the Natal settlers. These individuals were largely "standard speakers" like retired military personnel and aristocrats. A third wave of English settlers arrived between 1875 and 1904, and brought with them a diverse variety of English dialects. These last two waves did not have as large an influence on South African English (SAE), for "the seeds of development were already sown in 1820". However, the Natal wave brought nostalgia for British customs and helped to define the idea of a "standard" variety that resembled Southern British English.
When the Union of South Africa was formed in 1910, English and Dutch were the official state languages, although Afrikaans effectively replaced Dutch in 1925. After 1994, these two languages along with nine other Southern Bantu languages achieved equal official status.
SAE is an extraterritorial (ET) variety of English, or a language variety that has been transported outside its mainland home. More specifically, SAE is a Southern hemisphere ET originating from later English colonisation in the 18th and 19th centuries (Zimbabwean, Australian, and New Zealand English are also Southern hemisphere ET varieties). SAE resembles British English more closely than it does American English due to the close ties that South African colonies maintained with the mainland in the 19th and 20th centuries. However, with the increasing influence of American pop-culture around the world via modes of contact like television, American English has become more familiar in South Africa. Indeed, some American lexical items are becoming alternatives to comparable British terms.
Several South African English varieties have emerged, accompanied by varying levels of perceived social prestige. Roger Lass describes White South African English as a system of three sub-varieties spoken primarily by White South Africans, called "The Great Trichotomy" (a term first used to categorise Australian English varieties and subsequently applied to South African English). In this classification, the "Cultivated" variety closely approximates England's standard Received Pronunciation and is associated with the upper class; the "General" variety is a social indicator of the middle class and is the common tongue; and the "Broad" variety is most associated with the working class, low socioeconomic status, and little education. These three sub-varieties have also been called "Conservative SAE", "Respectable SAE", and "Extreme SAE", respectively. Broad White SAE closely approximates the second-language variety of (Afrikaans-speaking) Afrikaners called Afrikaans English. This variety has been stigmatised by middle and upper class SAE speakers and is considered a vernacular form of SAE.
Black South African English, or BSAE, is spoken by individuals whose first language is an indigenous African tongue. BSAE is considered a "new" English because it has emerged through the education system among second-language speakers in places where English is not the majority language. At least two sociolinguistic variants have been definitively studied on a post-creole continuum for the second-language Black South African English spoken by most Black South Africans: a high-end, prestigious "acrolect" and a more middle-ranging, mainstream "mesolect". The "basilect" variety is less similar to the colonial language (natively-spoken English), while the "mesolect" is somewhat more so. Historically, BSAE has been considered a "non-standard" variety of English, inappropriate for formal contexts and influenced by indigenous African languages.
According to the Central Statistical Services, as of 1994 about 7 million black people spoke English in South Africa. BSAE originated in the South African school system, when the 1953 Bantu Education Act mandated the use of native African languages in the classroom. When this law was established, most of the native English-speaking teachers were removed from schools. This limited the exposure that black students received to standard varieties of English. As a result, the English spoken in black schools developed distinctive patterns of pronunciation and syntax, leading to the formation of BSAE. Some of these characteristic features can be linked to the mother tongues of the early BSAE speakers. The policy of mother tongue promotion in schools ultimately failed, and in 1979, the Department of Bantu Education allowed schools to choose their own language of instruction. English was largely the language of choice, because it was viewed as a key tool of social and economic advancement.
Indian South African English (ISAE) is a sub-variety that developed among the descendants of Indian immigrants to South Africa. The Apartheid policy, in effect from 1948 to 1991, prevented Indian children from publicly interacting with people of English heritage. This separation caused an Indian variety to develop independently from White South African English, though with phonological and lexical features still fitting under the South African English umbrella. Indian South African English includes a "basilect", "mesolect", and "acrolect". These terms describe varieties of a given language on a spectrum of similarity to the colonial version of that language: the "acrolect" being the most similar. Today, basilect speakers are generally older non-native speakers with little education; acrolect speakers closely resemble colonial native English speakers, with a few phonetic/syntactic exceptions; and mesolect speakers fall somewhere in-between.
ISAE resembles Indian English in some respects, possibly because the varieties contain speakers with shared mother tongues or because early English teachers were brought to South Africa from India, or both. Four prominent education-related lexical features shared by ISAE and Indian English are: "tuition(s)," which means "extra lessons outside school that one pays for"; "further studies", which means "higher education"; "alphabets", which means "the alphabet, letters of the alphabet"; and "by-heart", which means "to learn off by heart"; these items show the influence of Indian English teachers in South Africa. Phonologically, ISAE also shares several similarities with Indian English, though certain common features are decreasing in the South African variety. For instance, consonant retroflexion in phonemes like /ḍ/ and strong aspiration in consonant production (common in North Indian English) are present in both varieties, but declining in ISAE. Syllable-timed rhythm, instead of stress-timed rhythm, is still a prominent feature in both varieties, especially in more colloquial sub-varieties.
Another variety of South African English is Cape Flats English, originally and best associated with inner-city Cape Coloured speakers.
In 1913, Charles Pettman created the first South African English dictionary, entitled "Africanderisms". This work sought to identify Afrikaans terms that were emerging in the English language in South Africa. In 1924, the Oxford University Press published its first version of a South African English dictionary, "The South African Pocket Oxford Dictionary." Subsequent editions of this dictionary have tried to take a "broad editorial approach" in including vocabulary terms native to South Africa, though the extent of this inclusion has been contested. Rhodes University (South Africa) and Oxford University (Great Britain) worked together to produce the 1978 "Dictionary of South African English," which adopted a more conservative approach in its inclusion of terms. This dictionary did include, for the first time, what the dictionary writers deemed "the jargon of townships", or vocabulary terms found in Black journalism and literary circles. Dictionaries specialising in scientific jargon, such as the common names of South African plants, also emerged in the twentieth century. However, these works still often relied on Latin terminology and European pronunciation systems. As of 1992, Rajend Mesthrie had produced the only available dictionary of South African Indian English.
SAE includes lexical items borrowed from other South African Languages. The following list provides a sample of some of these terms:
SAE also contains several lexical items that demonstrate the British influence on this variety:
A range of SAE expressions have been borrowed from other South African languages, or are uniquely used in this variety of English. Some common expressions include:
(Pharos, 2014.) "Come with?" is also encountered in areas of the Upper Midwest of the United States, which had a large number of Scandinavian, Dutch and German immigrants who, when speaking English, translated equivalent phrases directly from their own languages.
The South African National Census of 2011 found a total of 4,892,623 speakers of English as a first language, making up 9.6% of the national population. The provinces with significant English-speaking populations were the Western Cape (20.2% of the provincial population), Gauteng (13.3%) and KwaZulu-Natal (13.2%).
English was spoken across all ethnic groups in South Africa. The breakdown of English-speakers according to the conventional racial classifications used by Statistics South Africa is described in the following table.
The following examples of South African accents were obtained from George Mason University:
Speech processing
Speech processing is the study of speech signals and of methods for processing them. The signals are usually handled in a digital representation, so speech processing can be regarded as a special case of digital signal processing applied to speech signals. Aspects of speech processing include the acquisition, manipulation, storage, transfer and output of speech signals. Processing speech input is the domain of speech recognition, and producing speech output is the domain of speech synthesis.
Early attempts at speech processing and recognition were primarily focused on understanding a handful of simple phonetic elements such as vowels. In 1952, three researchers at Bell Labs, Stephen Balashek, R. Biddulph, and K. H. Davis, developed a system that could recognize digits spoken by a single speaker.
Linear predictive coding (LPC), a speech processing algorithm, was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. Further developments in LPC technology were made by Bishnu S. Atal and Manfred R. Schroeder at Bell Labs during the 1970s. LPC was the basis for voice-over-IP (VoIP) technology, as well as speech synthesizer chips, such as the Texas Instruments LPC Speech Chips used in the Speak & Spell toys from 1978.
One of the first commercially available speech recognition products was Dragon Dictate, released in 1990. In 1992, technology developed by Lawrence Rabiner and others at Bell Labs was used by AT&T in their Voice Recognition Call Processing service to route calls without a human operator. By this point, the vocabulary of these systems was larger than the average human vocabulary.
By the early 2000s, the dominant speech processing strategy started to shift away from Hidden Markov Models towards more modern neural networks and deep learning.
Dynamic time warping (DTW) is an algorithm for measuring similarity between two temporal sequences, which may vary in speed. In general, DTW calculates an optimal match between two given sequences (e.g. time series) subject to certain restrictions and rules. The optimal match is the one that satisfies all the restrictions and rules and has the minimal cost, where the cost is computed as the sum of absolute differences, for each matched pair of indices, between their values.
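The cumulative-cost recurrence described above can be sketched in a few lines of Python (a minimal illustration, not an optimised implementation; the example sequences are invented, and absolute difference is used as the local cost as stated in the text):

```python
# Minimal dynamic time warping sketch: cost of the optimal alignment
# between two 1-D sequences, using absolute difference as local cost.
def dtw_cost(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # allowed step patterns: match, insertion, deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

# The second sequence is a slowed-down copy of the first; DTW aligns them
# with zero cost, whereas a point-by-point comparison would not.
print(dtw_cost([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0
```

This is why DTW was useful for early speech recognition: two utterances of the same word spoken at different speeds can still be matched at low cost.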
A hidden Markov model can be represented as the simplest dynamic Bayesian network. The goal of the algorithm is to estimate a hidden variable x(t) given a list of observations y(t). By applying the Markov property, the conditional probability distribution of the hidden variable "x"("t") at time "t", given the values of the hidden variable "x" at all times, depends "only" on the value of the hidden variable "x"("t" − 1). Similarly, the value of the observed variable "y"("t") only depends on the value of the hidden variable "x"("t") (both at time "t").
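The Markov property stated above is what makes efficient inference possible: to decode the most likely hidden state sequence, each time step only needs the previous step's best scores. A minimal Viterbi sketch follows (the two-state "voiced"/"unvoiced" model and all its probabilities are invented for illustration, not taken from the text):

```python
# Viterbi decoding for a small HMM: exploits the Markov property so that
# each step consults only the previous step's best path probabilities.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # only V[t-1] is consulted: the Markov property in action
            prev, p = max(
                ((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda x: x[1],
            )
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # trace back from the best final state to recover the full path
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("voiced", "unvoiced")                      # hypothetical hidden states
start_p = {"voiced": 0.6, "unvoiced": 0.4}
trans_p = {"voiced": {"voiced": 0.7, "unvoiced": 0.3},
           "unvoiced": {"voiced": 0.4, "unvoiced": 0.6}}
emit_p = {"voiced": {"high": 0.8, "low": 0.2},       # observed energy level
          "unvoiced": {"high": 0.1, "low": 0.9}}
print(viterbi(["high", "high", "low"], states, start_p, trans_p, emit_p))
# ['voiced', 'voiced', 'unvoiced']
```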
An artificial neural network (ANN) is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs.
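The description above, in which each neuron outputs a non-linear function of the weighted sum of its inputs, can be sketched directly (the weights, bias values, and inputs below are arbitrary illustrative numbers, not from any trained model):

```python
import math

# One artificial neuron: a non-linear function of the weighted sum of inputs.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # logistic (sigmoid) non-linearity

# A tiny two-layer network: two hidden neurons feeding one output neuron.
# Each connection carries a real-valued signal, as described above.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.2], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -1.0)

print(tiny_network([1.0, 2.0]))   # a real number in (0, 1)
```

In speech applications the inputs would be acoustic features rather than two hand-picked numbers, and the weights would be learned from data, but the per-neuron computation is the same.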
Swahili language
Swahili, also known by its native name , is a Bantu language and the first language of the Swahili people. It is a lingua franca of the African Great Lakes region and other parts of East and Southern Africa, including Kenya, Tanzania, Uganda, Rwanda, Burundi, some parts of Malawi, Somalia, Zambia, Mozambique and the Democratic Republic of the Congo (DRC). Comorian, spoken in the Comoros Islands, is sometimes considered a dialect of Swahili, although other authorities consider it a distinct language.
The exact number of Swahili speakers, be they native or second-language speakers, is unknown and a matter of debate. Various estimates have been put forward, varying widely from 100 million to 150 million. Swahili serves as a national language of the DRC, Kenya, Tanzania, and Uganda. Shikomor, an official language in Comoros and also spoken in Mayotte, is related to Swahili. Swahili is also one of the working languages of the African Union and officially recognised as a lingua franca of the East African Community. In 2018, South Africa legalized the teaching of Swahili in South African schools as an optional subject to begin in 2020.
Swahili is a Bantu language of the Sabaki branch. In Guthrie's geographic classification, Swahili is in Bantu zone G, whereas the other Sabaki languages are in zone E70, commonly under the name "Nyika." Historical linguists do not consider the Arabic influence on Swahili to be significant enough to classify it as a mixed language, since Arabic influence is limited to lexical items, most of which have only been borrowed after 1500, while the grammatical and syntactic structure of the language is typically Bantu.
The Swahili language dates its origin to the Bantu people of the coast of East Africa. Much of Swahili's Bantu vocabulary has cognates in the Pokomo, Taita and Mijikenda languages and, to a lesser extent, other East African Bantu languages. Although sources differ, it is often stated that approximately 30% of the Swahili vocabulary is derived from Arabic, Persian, Hindustani, Portuguese, and Malay, with Arabic contributing the majority of the foreign loanwords. It is also important to bear in mind that such sources carry a strong colonial bias, suggesting that Kiswahili evolved as a kind of creole completed by the coming of Arab conquest, rather than as a fully formed language that borrowed words through trade and conquest along the coast. This assumption, questionably, further implied that the Arab invasion along the eastern coast of Africa was necessary to Swahili's formation as a language. In the text "Early Swahili History Reconsidered", Thomas Spear noted that Swahili retained a tremendous amount of grammar, vocabulary, and sounds inherited from the Sabaki language. In fact, in counts of daily vocabulary using lists of one hundred words, 72–91% were inherited from the Sabaki language (which is reported as a parent language), 4–17% were loanwords from other African languages, and only 2–8% came from non-African languages, with Arabic loanwords constituting just a fraction of that 2–8%. What also remained unconsidered was that a good number of the borrowed terms had native equivalents. The preferred use of Arabic loanwords is prevalent along the coast, where natives, in a cultural show of proximity to, or descent from, Arab culture, would rather express themselves in loanword terms, whereas natives in the interior tend to use the native equivalents. Swahili was originally written in the Arabic script.
The earliest known documents written in Swahili are letters written in Kilwa in 1711 in the Arabic script that were sent to the Portuguese of Mozambique and their local allies. The original letters are preserved in the Historical Archives of Goa, India.
Its name comes from Arabic: "sāħil" = "coast", broken plural "sawāħil" = "coasts", "sawāħilï" = "of coasts".
Various colonial powers that ruled on the coast of East Africa played a role in the growth and spread of Swahili. With the arrival of the Arabs in East Africa, they used Swahili as a language of trade as well as for teaching Islam to the local Bantu peoples. This resulted in Swahili first being written in the Arabic alphabet. Later contact with the Portuguese increased the vocabulary of the Swahili language. The language was formalised at an institutional level when the Germans took over after the Berlin Conference. Seeing that there was already a widespread language, the Germans formalised it as the official language to be used in schools, government, trade and the court system; the Swahili word for school, "shule", derives from German "Schule". With the Germans controlling the major Swahili-speaking region in East Africa, they changed the alphabet from Arabic to Latin. After the First World War, Britain took over German East Africa, where they found Swahili rooted in most areas, not just the coastal regions. The British decided to formalise it as the language to be used across the East African region (although in British East Africa [Kenya and Uganda] most areas used English and various Nilotic and other Bantu languages, while Swahili was mostly restricted to the coast). In June 1928, an inter-territorial conference attended by representatives of Kenya, Tanganyika, Uganda, and Zanzibar took place in Mombasa. The Zanzibar dialect was chosen as standard Swahili for those areas, and the standard orthography for Swahili was adopted.
Swahili has become a second language spoken by tens of millions in three African Great Lakes countries (Kenya, Uganda, and Tanzania), where it is an official or national language, while being the first language of many people in Tanzania, especially in the coastal regions of Tanga, Pwani, Dar es Salaam, Mtwara and Lindi. In the inner regions of Tanzania, Swahili is spoken with an accent influenced by local languages and dialects, and as a first language for most people born in the cities, while being spoken as a second language in rural areas. Swahili and closely related languages are spoken by relatively small numbers of people in Burundi, Comoros, Malawi, Mozambique, Zambia and Rwanda. The language was still understood in the southern ports of the Red Sea in the 20th century. Swahili speakers may number 120 to 150 million in total.
Swahili is among the first languages in Africa for which language technology applications have been developed. Arvi Hurskainen is one of the early developers. The applications include a spelling checker, part-of-speech tagging, a language learning software, an analysed Swahili text corpus of 25 million words, an electronic dictionary, and machine translation between Swahili and English. The development of language technology also strengthens the position of Swahili as a modern medium of communication.
The widespread use of Swahili as a national language in Tanzania came after Tanganyika gained independence in 1961 and the government decided that it would be used as a language to unify the new nation. This saw the use of Swahili at all levels of government, trade, art, and schools, in which primary school children are taught in Swahili before switching to English in secondary schools (although Swahili is still taught as an independent subject). In 1985, with the 8–4–4 system of education, Swahili was made a compulsory subject in all Kenyan schools.
After the unification of Tanganyika and Zanzibar in 1964, "Taasisi ya Uchunguzi wa Kiswahili" (TUKI, Institute of Swahili Research) was created from the Interterritorial Language Committee. In 1970 TUKI was merged with the University of Dar es Salaam, while "Baraza la Kiswahili la Taifa" (BAKITA) was formed. BAKITA is an organisation dedicated to the development and advocacy of Swahili as a means of national integration in Tanzania. Key activities mandated for the organisation include creating a healthy atmosphere for the development of Swahili, encouraging use of the language in government and business functions, coordinating the activities of other organisations involved with Swahili, and standardising the language. Although other bodies and agencies can propose new vocabulary, BAKITA is the only organisation that can approve its usage in the Swahili language.
In Kenya, "Chama cha Kiswahili cha Taifa" (CHAKITA) was established in 1998 to research and propose means by which Kiswahili could be integrated as a national language.
Swahili played a major role in spreading both Christianity and Islam in East Africa. From their arrival in East Africa, Arabs brought Islam and set up Madrasas, where they used Swahili to teach Islam to the natives. As the Arab presence grew, more and more natives were converted to Islam and were taught using the Swahili language.
From the arrival of Europeans in East Africa, Christianity was introduced in East Africa. While the Arabs were mostly based in the coastal areas, European missionaries went further inland spreading Christianity. But since the first missionary posts in East Africa were in the coastal areas, missionaries picked up Swahili and used it to spread Christianity since it had a lot of similarities with many of the other indigenous languages in the region.
During the struggle for Tanganyika independence, the Tanganyika African National Union used Swahili as language of mass organisation and political movement. This included publishing pamphlets and radio broadcasts to rally the people to fight for independence. After independence, Swahili was adopted as the national language of the nation. Till this day, Tanzanians carry a sense of pride when it comes to Swahili especially when it is used to unite over 120 tribes across Tanzania. Swahili was used to strengthen solidarity among the people and a sense of togetherness and for that Swahili remains a key identity of the Tanzanian people.
Standard Swahili has five vowel phonemes: , , , , and . Vowels are never reduced, regardless of stress. Swahili vowels can be long; these are written as two vowels (example: "kondoo", meaning "sheep"). This is due to a historical process in which /l/ was deleted between two instances of the same vowel ("kondoo" was originally pronounced "kondolo", a form which survives in certain dialects). However, these long vowels are not considered to be phonemic. A similar process exists in Zulu.
Some dialects of Swahili may also have the aspirated phonemes though they are unmarked in Swahili's orthography. Multiple studies favour classifying prenasalization as consonant clusters, not as separate phonemes. The /r/ phoneme is realised as either a short trill [] or more commonly as a single tap [] by most speakers. In some Arabic loans (nouns, verbs, adjectives), emphasis or intensity is expressed by reproducing the original emphatic consonants and the uvular , or lengthening a vowel, where aspiration would be used in inherited Bantu words.
Swahili is now written in the Latin alphabet. There are a few digraphs for native sounds, "ch", "sh", "ng" and "ny"; "q" and "x" are not used, "c" is not used apart from unassimilated English loans and, occasionally, as a substitute for "k" in advertisements. There are also several digraphs for Arabic sounds, which many speakers outside of ethnic Swahili areas have trouble differentiating.
The language used to be written in the Arabic script. Unlike adaptations of the Arabic script for other languages, relatively little accommodation was made for Swahili. There were also differences in orthographic conventions between cities and authors and over the centuries, some quite precise but others different enough to cause difficulties with intelligibility.
Several Swahili consonants do not have equivalents in Arabic, and for them, often no special letters were created unlike, for example, Urdu script. Instead, the closest Arabic sound is substituted. Not only did that mean that one letter often stands for more than one sound, but also writers made different choices of which consonant to substitute. Here are some of the equivalents between Arabic Swahili and Roman Swahili:
That was the general situation, but conventions from Urdu were adopted by some authors so as to distinguish aspiration and from : 'gazelle', 'roof'. Although it is not found in Standard Swahili today, there is a distinction between dental and alveolar consonants in some dialects, which is reflected in some orthographies, for example in ' 'to meet' vs. ' 'to be satisfied'. A "k" with the dots of "y", , was used for "ch" in some conventions; "ky" being historically and even contemporaneously a more accurate transcription than Roman "ch". In Mombasa, it was common to use the Arabic emphatics for Cw, for example in ' (standard ') 'we' and ' (standard ') 'head'.
Particles such as ' are joined to the following noun, and possessives such as ' and ' are joined to the preceding noun, but verbs are written as two words, with the subject and tense–aspect–mood morphemes separated from the object and root, as in ' "he who told me".
Swahili nouns are separable into classes, which are roughly analogous to genders in other languages. For example, just as the suffix "-o" in Spanish marks masculine objects and "-a" marks feminine ones, so, in Swahili, prefixes mark groups of similar objects: "m-" marks single human beings ("mtoto" 'child'), "wa-" marks multiple humans ("watoto" 'children'), "u-" marks abstract nouns ("utoto" 'childhood'), and so on. Similar prefixes must be used on verbs and particles in agreement with the governing noun in a phrase. This is a characteristic feature of all the Bantu languages of sub-Saharan Africa, and traces of it are also found in the other Niger-Congo languages of West Africa.
The "ki-/vi-" class historically consisted of two separate genders, artefacts (Bantu class 7/8, utensils and hand tools mostly) and diminutives (Bantu class 12/13), which were conflated at a stage ancestral to Swahili. Examples of the former are "kisu" "knife", "kiti" "chair" (from "mti" "tree, wood"), "chombo" "vessel" (a contraction of "ki-ombo"). Examples of the latter are "kitoto" "infant", from "mtoto" "child"; "kitawi" "frond", from "tawi" "branch"; and "chumba" ("ki-umba") "room", from "nyumba" "house". It is the diminutive sense that has been furthest extended. An extension common to diminutives in many languages is "approximation" and "resemblance" (having a 'little bit' of some characteristic, like "-y" or "-ish" in English). For example, there is "kijani" "green", from "jani" "leaf" (compare English 'leafy'), "kichaka" "bush" from "chaka" "clump", and "kivuli" "shadow" from "uvuli" "shade". A 'little bit' of a verb would be an instance of an action, and such "instantiations" (usually not very active ones) are found: "kifo" "death", from the verb "-fa" "to die"; "kiota" "nest" from "-ota" "to brood"; "chakula" "food" from "kula" "to eat"; "kivuko" "a ford, a pass" from "-vuka" "to cross"; and "kilimia" "the Pleiades", from "-limia" "to farm with", from its role in guiding planting. A resemblance, or being a bit like something, implies marginal status in a category, so things that are marginal examples of their class may take the "ki-/vi-" prefixes. One example is "chura" ("ki-ura") "frog", which is only half terrestrial and therefore is marginal as an animal. This extension may account for disabilities as well: "kilema" "a cripple", "kipofu" "a blind person", "kiziwi" "a deaf person". Finally, diminutives often denote contempt, and contempt is sometimes expressed against things that are dangerous. 
This might be the historical explanation for "kifaru" "rhinoceros", "kingugwa" "spotted hyena", and "kiboko" "hippopotamus" (perhaps originally meaning "stubby legs").
Another class with broad semantic extension is the "m-/mi-" class (Bantu classes 3/4). This is often called the 'tree' class, because "mti, miti" "tree(s)" is the prototypical example. However, it seems to cover vital entities neither human nor typical animals: trees and other plants, such as "mwitu" 'forest' and "mtama" 'millet' (and from there, things made from plants, like "mkeka" 'mat'); supernatural and natural forces, such as "mwezi" 'moon', "mlima" 'mountain', "mto" 'river'; active things, such as "moto" 'fire', including active body parts ("moyo" 'heart', "mkono" 'hand, arm'); and human groups, which are vital but not themselves human, such as "mji" 'village', and, by analogy, "mzinga" 'beehive/cannon'. From the central idea of "tree", which is thin, tall, and spreading, comes an extension to other long or extended things or parts of things, such as "mwavuli" 'umbrella', "moshi" 'smoke', "msumari" 'nail'; and from activity there even come active instantiations of verbs, such as "mfuo" "metal forging", from "-fua" "to forge", or "mlio" "a sound", from "-lia" "to make a sound". Words may be connected to their class by more than one metaphor. For example, "mkono" is an active body part, and "mto" is an active natural force, but they are also both long and thin. Things with a trajectory, such as "mpaka" 'border' and "mwendo" 'journey', are classified with long thin things, as in many other languages with noun classes. This may be further extended to anything dealing with time, such as "mwaka" 'year' and perhaps "mshahara" 'wages'. Animals exceptional in some way and so not easily fitting in the other classes may be placed in this class.
The other classes have foundations that may at first seem similarly counterintuitive. In short,
Swahili phrases agree with nouns in a system of concord, but if the noun refers to a human, they agree with noun classes 1–2 regardless of the class the noun belongs to. Verbs agree with the noun class of their subjects and objects; adjectives, prepositions and demonstratives agree with the noun class of their nouns. In Standard Swahili ("Kiswahili sanifu"), based on the dialect spoken in Zanzibar, the system is rather complex; however, it is drastically simplified in many local variants where Swahili is not a native language, such as in Nairobi. In non-native Swahili, concord reflects only animacy: human subjects and objects trigger "a-, wa-" and "m-, wa-" in verbal concord, while non-human subjects and objects of whatever class trigger "i-, zi-". Infinitives vary between standard "ku-" and reduced "i-". ("Of" is animate "wa" and inanimate "ya, za".)
In Standard Swahili, human subjects and objects of whatever class trigger animacy concord in "a-, wa-" and "m-, wa-," and non-human subjects and objects trigger a variety of gender-concord prefixes.
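The animacy-only concord of non-native varieties described above can be sketched as a toy rule. This is an illustration only: the pairing of singular/plural within each prefix pair ("a-"/"wa-", "i-"/"zi-", "m-"/"wa-") is an assumption made for the sketch, and the function names are hypothetical.

```python
# Toy sketch of simplified, animacy-only concord in non-native Swahili.
# Assumption: within each prefix pair, the first member is singular and
# the second plural; this is not stated explicitly in the text above.

def subject_concord(human, plural):
    """Verbal subject prefix under animacy-only concord."""
    if human:
        return "wa-" if plural else "a-"   # humans: classes 1-2 agreement
    return "zi-" if plural else "i-"       # non-humans of whatever class

def object_concord(human, plural):
    """Verbal object prefix under animacy-only concord."""
    if human:
        return "wa-" if plural else "m-"
    return "zi-" if plural else "i-"

# Example: a singular human subject vs. a plural non-human subject.
print(subject_concord(human=True, plural=False))   # a-
print(subject_concord(human=False, plural=True))   # zi-
```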
This list is based on "Swahili and Sabaki: a linguistic history".
Modern standard Swahili is based on "Kiunguja," the dialect spoken in Zanzibar Town, but there are numerous dialects of Swahili, some of which are mutually unintelligible, such as the following:
Maho (2009) considers these to be distinct languages:
The rest of the dialects are divided by him into two groups:
Maho includes the various Comorian dialects as a third group. Most other authorities consider Comorian to be a Sabaki language, distinct from Swahili.
In Somalia, where the Afroasiatic Somali language predominates, a variant of Swahili referred to as Chimwiini (also known as Chimbalazi) is spoken along the Benadir coast by the Bravanese people. Another Swahili dialect known as Kibajuni also serves as the mother tongue of the Bajuni minority ethnic group, which lives in the tiny Bajuni Islands as well as the southern Kismayo region.
In Oman, there are an estimated 22,000 people who speak Swahili. Most are descendants of those repatriated after the fall of the Sultanate of Zanzibar.
Summary offence
A summary offence is a crime in some common law jurisdictions that can be proceeded against summarily, without the right to a jury trial and/or indictment (required for an indictable offence).
In Canada, summary offences are referred to as summary conviction offences. As in other jurisdictions, summary conviction offences are considered less serious than indictable offences because they are punishable by shorter prison sentences and smaller fines. These offences appear both in the federal laws of Canada and in the legislation of Canada's provinces and territories. For summary conviction offences that fall under the jurisdiction of the federal government (which includes all criminal law), section 787 of the Criminal Code specifies that, unless another punishment is provided for by law, the maximum penalty for a summary conviction offence is a sentence of 2 years less a day of imprisonment, a fine of $5,000 or both.
As a matter of practical effect, some common differences between summary conviction and indictable offences are provided below.
In Hong Kong, trials for summary offences are heard in one of the territory's Magistrates' Courts.
In relation to England and Wales, the expression "summary trial" means a trial in the magistrates' court. In such proceedings there is no jury; the appointed judge, or a panel of three lay magistrates, decides the guilt or innocence of the accused. Each summary offence is specified by statute which describes the (usually minor) offence and the judge to hear it. A summary procedure can result in a summary conviction. A "summary offence" is one which, if charged to an adult, can only be tried by summary procedure. Similar procedures are also used in Scotland.
Certain offences that may be tried in a Crown Court (by jury) may be required to be tried summarily if the value involved is small; such offences are still considered either way offences, so are not thereby "summary offences" in the meaning of that term defined by statute. Contrariwise, certain summary offences may in certain circumstances be tried on indictment along with other offences that are themselves indictable; they do not thereby become "indictable offences" or "either way offences" but remain "summary offences", though tried by jury.
Sir William Blackstone, in his Commentaries on the Laws of England (1765–1769), described summary offences thus:
In the United Kingdom, trials for summary offences are heard in one of a number of types of lower court. For England and Wales this is the Magistrates' Court. In Scotland, it is the Sheriff Court or Justice of the peace court, depending on the offence (the latter being primarily for the most minor of offences). Northern Ireland has its own Magistrates' Court system.
In the United States, "there are certain minor or petty offenses that may be proceeded against summarily, and without a jury". These include criminal citations. Any crime punishable by more than six months of imprisonment must have some means for a jury trial. Some states, such as California, provide that all common law crimes and misdemeanors require a jury trial. Some states provide that in all offenses the defendant may demand a jury trial.
Contempt of court is considered a prerogative of the court, as "the requirement of a jury does not apply to 'contempts committed in disobedience of any lawful writ, process, order, rule, decree, or command entered in any suit or action brought or prosecuted in the name of, or on behalf of, the United States'". There have been criticisms of the practice. In particular, Supreme Court Justice Hugo Black wrote in a 1964 dissent, "It is high time, in my judgment, to wipe out root and branch the judge-invented and judge-maintained notion that judges can try criminal contempt cases without a jury."
Perjury
Perjury is the intentional act of swearing a false oath or falsifying an affirmation to tell the truth, whether spoken or in writing, concerning matters material to an official proceeding. In some jurisdictions, contrary to popular misconception, no crime has occurred when a false statement is (intentionally or unintentionally) made while under oath or subject to penalty. Instead, criminal culpability attaches only at the instant the declarant falsely asserts the truth of statements (made or to be made) that are material to the outcome of the proceeding. For example, it is not perjury to lie about one's age except if age is a fact material to influencing the legal result, such as eligibility for old age retirement benefits or whether a person was of an age to have legal capacity.
Perjury is considered a serious offense, as it can be used to usurp the power of the courts, resulting in miscarriages of justice. In the United States, for example, the general perjury statute under federal law classifies perjury as a felony and provides for a prison sentence of up to five years. The California Penal Code allows for perjury to be a capital offense in cases causing wrongful execution. Perjury which caused the wrongful execution of another or in the pursuit of causing the wrongful execution of another is respectively construed as murder or attempted murder, and is normally itself punishable by execution in countries that retain the death penalty. Perjury is considered a felony in most U.S. states as well as most Australian states. In Queensland, under Section 124 of the Queensland Criminal Code Act 1899, perjury is punishable by up to life in prison if it is committed to procure an innocent person for a crime that is punishable by life in prison. However, prosecutions for perjury are rare. In some countries such as France and Italy, suspects cannot be heard under oath or affirmation and so cannot commit perjury, regardless of what they say during their trial.
The rules for perjury also apply when a person has made a statement "under penalty of perjury" even if the person has not been sworn or affirmed as a witness before an appropriate official. An example is the US income tax return, which, by law, must be signed as true and correct under penalty of perjury. Federal tax law provides criminal penalties of up to three years in prison for violation of the tax return perjury statute.
Statements that entail an "interpretation" of fact are not perjury because people often draw inaccurate conclusions unwittingly or make honest mistakes without the intent to deceive. Individuals may have honest but mistaken beliefs about certain facts or their recollection may be inaccurate, or may have a different perception of what is the accurate way to state the truth. Like most other crimes in the common law system, to be convicted of perjury one must have had the "intention" ("mens rea") to commit the act and to have "actually committed" the act ("actus reus"). Further, statements that "are facts" cannot be considered perjury, even if they might arguably constitute an omission, and it is not perjury to lie about matters that are immaterial to the legal proceeding.
In the United States, Kenya, Scotland and several other English-speaking Commonwealth nations, subornation of perjury, which is attempting to induce another person to commit perjury, is itself a crime.
The offence of perjury is codified by section 132 of the Criminal Code. It is defined by section 131, which provides:
As to corroboration, see section 133.
Mode of trial and sentence
Every one who commits perjury is guilty of an indictable offence and liable to imprisonment for a term not exceeding fourteen years.
A person who, before the Court of Justice of the European Union, swears anything which he knows to be false or does not believe to be true is, whatever his nationality, guilty of perjury. Proceedings for this offence may be taken in any place in the State and the offence may for all incidental purposes be treated as having been committed in that place.
Perjury is a statutory offence in England and Wales. It is created by section 1(1) of the Perjury Act 1911. Section 1 of that Act reads:
The words omitted from section 1(1) were repealed by section 1(2) of the Criminal Justice Act 1948.
A person guilty of an offence under section 11(1) of the European Communities Act 1972 (i.e. perjury before the Court of Justice of the European Union) may be proceeded against and punished in England and Wales as for an offence under section 1(1).
Section 1(4) has effect in relation to proceedings in the Court of Justice of the European Union as it has effect in relation to a judicial proceeding in a tribunal of a foreign state.
Section 1(4) applies in relation to proceedings before a relevant convention court under the European Patent Convention as it applies to a judicial proceeding in a tribunal of a foreign state.
A statement made on oath by a witness outside the United Kingdom and given in evidence through a live television link by virtue of section 32 of the Criminal Justice Act 1988 must be treated for the purposes of section 1 as having been made in the proceedings in which it is given in evidence.
Section 1 applies in relation to a person acting as an intermediary as it applies in relation to a person lawfully sworn as an interpreter in a judicial proceeding; and for this purpose, where a person acts as an intermediary in any proceeding which is not a judicial proceeding for the purposes of section 1, that proceeding must be taken to be part of the judicial proceeding in which the witness's evidence is given.
Where any statement made by a person on oath in any proceeding which is not a judicial proceeding for the purposes of section 1 is received in evidence in pursuance of a special measures direction, that proceeding must be taken for the purposes of section 1 to be part of the judicial proceeding in which the statement is so received in evidence.
The definition in section 1(2) is not "comprehensive".
The book "Archbold" says that it appears to be immaterial whether the court before which the statement is made has jurisdiction in the particular cause in which the statement is made, because there is no express requirement in the Act that the court be one of "competent jurisdiction" and because the definition in section 1(2) does not appear to require this by implication either.
The actus reus of perjury might be considered to be the making of a statement, whether true or false, on oath in a judicial proceeding, where the person knows the statement to be false or believes it to be false.
Perjury is a conduct crime.
Perjury is triable only on indictment.
A person convicted of perjury is liable to imprisonment for a term not exceeding seven years, or to a fine, or to both.
The following cases are relevant:
See also the Crown Prosecution Service sentencing manual.
In Anglo-Saxon legal procedure, the offence of perjury could be committed only by jurors and by compurgators. When, with time, witnesses began to appear in court, they were not so treated despite the fact that their functions were akin to those of modern witnesses. This was because their role was not yet differentiated from that of the juror, and so false evidence by witnesses was not made a crime. Even in the 14th century, when witnesses started appearing before the jury to testify, perjury by them was not made a punishable offence. The maxim then was that every witness's evidence on oath was true. Perjury by witnesses began to be punished before the end of the 15th century by the Star Chamber.
The immunity enjoyed by witnesses began also to be whittled down or interfered with by the Parliament in England in 1540 with subornation of perjury and, in 1562, with perjury proper. The punishment for the offence then was in the nature of monetary penalty, recoverable in a civil action and not by penal sanction. In 1613, the Star Chamber declared perjury by a witness to be a punishable offence at common law.
Prior to the 1911 Act, perjury was governed by section 3 of the Maintenance and Embracery Act 1540 5 Eliz 1 c. 9 (An Act for the Punyshement of suche persones as shall procure or comit any wyllful Perjurye; repealed 1967) and the Perjury Act 1728.
The requirement that the statement be material can be traced back to and has been credited to Edward Coke, who said:
Perjury is a statutory offence in Northern Ireland. It is created by article 3(1) of the Perjury (Northern Ireland) Order 1979 (S.I. 1979/1714 (N.I. 19)). This replaces the Perjury Act (Northern Ireland) 1946 (c. 13) (N.I.).
Perjury operates in American law as an inherited principle of the common law of England, which defined the act as the "willful and corrupt giving, upon a lawful oath, or in any form allowed by law to be substituted for an oath, in a judicial proceeding or course of justice, of a false testimony material to the issue or matter of inquiry".
William Blackstone touched on the subject in his "Commentaries on the Laws of England", establishing perjury as "a crime committed when a lawful oath is administered, in some judicial proceeding, to a person who swears willfully, absolutely, and falsely, in a matter material to the issue or point in question". The punishment for perjury under the common law has varied from death to banishment and has included such grotesque penalties as severing the tongue of the perjurer. The definitional structure of perjury provides an important framework for legal proceedings, as the component parts of this definition have permeated jurisdictional lines, finding a home in American legal constructs. As such, the main tenets of perjury (mens rea, a lawful oath, occurrence during a judicial proceeding, and false testimony) have remained necessary pieces of perjury's definition in the United States.
Perjury's current position in the American legal system takes the form of state and federal statutes. Most notably, the United States Code prohibits perjury, which is defined in two senses for federal purposes as someone who:
The above statute provides for a fine and/or up to five years in prison as punishment. Within federal jurisdiction, statements made in two broad categories of judicial proceedings may qualify as perjurious: 1) Federal official proceedings, and 2) Federal Court or Grand Jury proceedings. A third type of perjury entails the procurement of perjurious statements from another person. More generally, the statement must occur in the "course of justice," but this definition leaves room open for interpretation.
One particularly precarious aspect of the phrasing is that it entails knowledge of the accused person's perception of the truthful nature of events and not necessarily the actual truth of those events. It is important to note the distinction here, between giving a false statement under oath and merely misstating a fact accidentally, but the distinction can be especially difficult to discern in court of law.
The development of perjury law in the United States centers on "United States v. Dunnigan", a seminal case that set out the parameters of perjury within United States law. The court uses the Dunnigan-based legal standard to determine if an accused person: "testifying under oath or affirmation violates this section if she gives false testimony concerning a material matter with the willful intent to provide false testimony, rather than as a result of confusion, mistake, or faulty memory." However, a defendant shown to be willfully ignorant may in fact be eligible for perjury prosecution.
The "Dunnigan" distinction is important for the relation between two component parts of perjury's definition: in willfully giving a false statement, a person must understand that she is giving a false statement to be considered a perjurer under the "Dunnigan" framework. Deliberation on the part of the defendant is required for a statement to constitute perjury. Jurisprudential developments in the American law of perjury have revolved around the facilitation of "perjury prosecutions and thereby enhance the reliability of testimony before federal courts and grand juries".
With that goal in mind, Congress has sometimes expanded the grounds on which an individual may be prosecuted for perjury, with section 1623 of the United States Code recognizing the utterance of two mutually incompatible statements as grounds for perjury indictment even if neither can unequivocally be proven false. However, the two statements must be so mutually incompatible that at least one must necessarily be false; it is irrelevant whether the false statement can be specifically identified from among the two. It thus falls on the government to show that a defendant (a) knowingly made a (b) false (c) material statement (d) under oath (e) in a legal proceeding. The proceedings can be ancillary to normal court proceedings, and thus, even such menial interactions as bail hearings can qualify as protected proceedings under this statute.
Willfulness is an element of the offense. The mere existence of two mutually exclusive factual statements is not sufficient to prove perjury; the prosecutor nonetheless has the duty to plead and prove that the statement was willfully made. Mere contradiction will not sustain the charge; there must be strong corroborative evidence of the contradiction.
One significant legal distinction lies in the specific realm of knowledge necessarily possessed by a defendant for her statements to be properly called perjury. Though the defendant must knowingly render a false statement in a legal proceeding or under federal jurisdiction, the defendant need not know that they are speaking under such conditions for the statement to constitute perjury. All tenets of perjury qualification persist: the "knowingly" aspect of telling the false statement simply does not apply to the defendant's knowledge about the person whose deception is intended.
The evolution of United States perjury law has experienced the most debate with regards to the materiality requirement. Fundamentally, statements that are literally true cannot provide the basis for a perjury charge (as they do not meet the falsehood requirement) just as answers to truly ambiguous statements cannot constitute perjury. However, such fundamental truths of perjury law become muddled when discerning the materiality of a given statement and the way in which it was material to the given case. In "United States v. Brown", the court defined material statements as those with "a natural tendency to influence, or is capable of influencing, the decision of the decision-making body to be addressed," such as a jury or grand jury.
While courts have specifically made clear certain instances that have succeeded or failed to meet the nebulous threshold for materiality, the topic remains unresolved in large part, except in certain legal areas where intent manifests itself in an abundantly clear fashion, such as with the so-called perjury trap, a specific situation in which a prosecutor calls a person to testify before a grand jury with the intent of drawing a perjurious statement from the person being questioned.
Despite a tendency of US perjury law toward broad prosecutory power under perjury statutes, American perjury law has afforded potential defendants a new form of defense not found in the British Common Law. This defense requires that an individual admit to making a perjurious statement during that same proceeding and recanting the statement. Though this defensive loophole slightly narrows the types of cases which may be prosecuted for perjury, the effect of this statutory defense is to promote a truthful retelling of facts by witnesses, thus helping to ensure the reliability of American court proceedings just as broadened perjury statutes aimed to do.
Subornation of perjury stands as a subset of US perjury laws and prohibits an individual from inducing another to commit perjury. Subornation of perjury entails equivalent possible punishments as perjury on the federal level. The crime requires an extra level of satisfactory proof, as prosecutors must show not only that perjury occurred but also that the defendant positively induced said perjury. Furthermore, the inducing defendant must know that the suborned statement is a false, perjurious statement.
Notable people who have been accused of perjury include:
Phosphate
In chemistry, a phosphate is an anion, salt, functional group or ester derived from a phosphoric acid. It most commonly means orthophosphate, a derivative of orthophosphoric acid, H₃PO₄.
The phosphate or orthophosphate ion PO₄³⁻ is derived from phosphoric acid by the removal of three protons H⁺. Removal of one or two protons gives the dihydrogen phosphate ion H₂PO₄⁻ and the hydrogen phosphate ion HPO₄²⁻, respectively. These names are also used for salts of those anions, such as ammonium dihydrogen phosphate and trisodium phosphate.
In organic chemistry, phosphate or orthophosphate is an organophosphate, an ester of orthophosphoric acid of the form PO(OR)₃ where one or more hydrogen atoms are replaced by organic groups. An example is trimethyl phosphate, (CH₃O)₃PO. The term also refers to the trivalent functional group in such esters.
Orthophosphates are especially important among the various phosphates because of their key roles in biochemistry, biogeochemistry, and ecology, and their economic importance for agriculture and industry. The addition and removal of phosphate groups (phosphorylation and dephosphorylation) are key steps in cell metabolism.
Orthophosphates can condense to form pyrophosphates.
The phosphate ion PO₄³⁻ has a molar mass of 94.97 g/mol, and consists of a central phosphorus atom surrounded by four oxygen atoms in a tetrahedral arrangement. It is the conjugate base of the hydrogen phosphate ion HPO₄²⁻, which in turn is the conjugate base of the dihydrogen phosphate ion H₂PO₄⁻, which in turn is the conjugate base of orthophosphoric acid, H₃PO₄.
Many phosphates are not soluble in water at standard temperature and pressure. The sodium, potassium, rubidium, caesium, and ammonium phosphates are all water-soluble. Most other phosphates are only slightly soluble or are insoluble in water. As a rule, the hydrogen and dihydrogen phosphates are slightly more soluble than the corresponding phosphates.
In water solution, orthophosphoric acid and its three derived anions coexist according to the dissociation and recombination equilibria below:

H₃PO₄ ⇌ H₂PO₄⁻ + H⁺   (p"K""a1" ≈ 2.14)
H₂PO₄⁻ ⇌ HPO₄²⁻ + H⁺   (p"K""a2" ≈ 7.20)
HPO₄²⁻ ⇌ PO₄³⁻ + H⁺   (p"K""a3" ≈ 12.37)

Values are at 25 °C and 0 ionic strength.
The p"K""a" values are the pH values where the concentration of each species is equal to that of its conjugate base. At pH 1 or lower, the phosphoric acid is practically undissociated. Around pH 4.7 (mid-way between the first two p"K""a" values) the dihydrogen phosphate ion, H₂PO₄⁻, is practically the only species present. Around pH 9.8 (mid-way between the second and third p"K""a" values) the monohydrogen phosphate ion, HPO₄²⁻, is the only species present. At pH 13 or higher, the acid is completely dissociated as the phosphate ion, PO₄³⁻.
This means that salts of the mono- and di-phosphate ions can be selectively crystallised from aqueous solution by setting the pH value to either 4.7 or 9.8.
In effect, H₃PO₄, H₂PO₄⁻ and HPO₄²⁻ behave as separate weak acids because the successive p"K""a" values differ by more than 4.
Phosphate can form many polymeric ions such as pyrophosphate, P₂O₇⁴⁻, and triphosphate, P₃O₁₀⁵⁻. The various metaphosphate ions (which are usually long linear polymers) have an empirical formula of PO₃⁻ and are found in many compounds.
In biological systems, phosphorus can be found as free phosphate anions in solution (inorganic phosphate) or bound to organic molecules as various organophosphates.
Inorganic phosphate is generally denoted Pi and at physiological (homeostatic) pH primarily consists of a mixture of HPO₄²⁻ and H₂PO₄⁻ ions. At a neutral pH, as in the cytosol (pH = 7.0), the concentrations of orthophosphoric acid and its three anions have the ratios
[H₂PO₄⁻]/[H₃PO₄] ≈ 7.2×10⁴, [HPO₄²⁻]/[H₂PO₄⁻] ≈ 0.63, [PO₄³⁻]/[HPO₄²⁻] ≈ 4.3×10⁻⁶.
Thus, only H₂PO₄⁻ and HPO₄²⁻ ions are present in significant amounts in the cytosol (62% H₂PO₄⁻, 38% HPO₄²⁻). In extracellular fluid (pH = 7.4), this proportion is inverted (61% HPO₄²⁻, 39% H₂PO₄⁻).
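The speciation figures above follow directly from the acid dissociation constants. The following sketch (assuming the standard p"K""a" values 2.14, 7.20 and 12.37 at 25 °C and zero ionic strength) computes the fraction of each orthophosphate species at a given pH:

```python
# Orthophosphate speciation vs. pH, assuming pKa values 2.14, 7.20, 12.37.

def phosphate_fractions(pH, pKas=(2.14, 7.20, 12.37)):
    """Fractions of [H3PO4, H2PO4(-), HPO4(2-), PO4(3-)] at a given pH."""
    # Each deprotonation step scales the relative abundance by
    # 10**(pH - pKa) (Henderson-Hasselbalch applied successively).
    rel = [1.0]
    for pKa in pKas:
        rel.append(rel[-1] * 10 ** (pH - pKa))
    total = sum(rel)
    return [r / total for r in rel]

for pH in (1.0, 4.7, 7.0, 7.4, 9.8, 13.0):
    h3po4, h2po4, hpo4, po4 = phosphate_fractions(pH)
    print(f"pH {pH:>4}: H3PO4 {h3po4:6.1%}  H2PO4- {h2po4:6.1%}  "
          f"HPO4 2- {hpo4:6.1%}  PO4 3- {po4:6.1%}")
```

At pH 7.0 this gives roughly 61–62% H₂PO₄⁻ and 38–39% HPO₄²⁻, matching the cytosolic mixture above, and it reproduces the inversion of that proportion at pH 7.4.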
Inorganic phosphate can also be present as the pyrophosphate anion, P₂O₇⁴⁻, which can give orthophosphate by hydrolysis:

P₂O₇⁴⁻ + H₂O ⇌ 2 HPO₄²⁻
Organic phosphates are commonly found in the form of esters as nucleotides (e.g. AMP, ADP, and ATP) and in DNA and RNA. Free orthophosphate anions can be released by the hydrolysis of the phosphoanhydride bonds in ATP or ADP. These phosphorylation and dephosphorylation reactions are the immediate storage and source of energy for many metabolic processes. ATP and ADP are often referred to as high-energy phosphates, as are the phosphagens in muscle tissue. Similar reactions exist for the other nucleoside diphosphates and triphosphates.
An important occurrence of phosphates in biological systems is as the structural material of bone and teeth. These structures are made of crystalline calcium phosphate in the form of hydroxyapatite. The hard dense enamel of mammalian teeth consists of fluoroapatite, a hydroxy calcium phosphate where some of the hydroxyl groups have been replaced by fluoride ions.
Orthophosphate salts of sodium and potassium are common agents for the preparation of buffer solutions for animal cells.
Plants take up phosphorus through two pathways: the arbuscular mycorrhizal pathway and the direct uptake pathway.
Phosphates are the naturally occurring form of the element phosphorus, found in many phosphate minerals. In mineralogy and geology, phosphate refers to a rock or ore containing phosphate ions. Inorganic phosphates are mined to obtain phosphorus for use in agriculture and industry.
The largest global producer and exporter of phosphates is Morocco. Within North America, the largest deposits lie in the Bone Valley region of central Florida, the Soda Springs region of southeastern Idaho, and the coast of North Carolina. Smaller deposits are located in Montana, Tennessee, Georgia, and South Carolina. The small island nation of Nauru and its neighbor Banaba Island, which used to have massive phosphate deposits of the best quality, have been mined excessively. Rock phosphate can also be found in Egypt, Israel, Western Sahara, Navassa Island, Tunisia, Togo, and Jordan, countries that have large phosphate-mining industries.
Phosphorite mines are primarily found in:
In 2007, at the then-current rate of consumption, the supply of phosphorus was estimated to run out in 345 years. However, some scientists thought that a "peak phosphorus" would occur within 30 years, and Dana Cordell from the Institute for Sustainable Futures said that at "current rates, reserves will be depleted in the next 50 to 100 years". Reserves refer to the amount assumed recoverable at current market prices; in 2012, the USGS estimated 71 billion tons of world reserves, while 0.19 billion tons were mined globally in 2011. Phosphorus comprises 0.1% by mass of the average rock (for perspective, its typical concentration in vegetation is 0.03% to 0.2%), so there are quadrillions of tons of phosphorus in Earth's 3×10¹⁹-ton crust, albeit at predominantly lower concentrations than in the deposits counted as reserves, which are inventoried and cheaper to extract. If the phosphate minerals in phosphate rock are assumed to be hydroxyapatite and fluoroapatite, these minerals contain roughly 18.5% phosphorus by weight, and if phosphate rock contains around 20% of such minerals, the average phosphate rock has roughly 3.7% phosphorus by weight.
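The 18.5% and 3.7% figures above are simple stoichiometry. The sketch below (using assumed standard atomic masses, which are not from the article) reproduces them for hydroxyapatite, Ca₅(PO₄)₃OH, along with the 94.97 g/mol molar mass of the phosphate ion quoted earlier in the article:

```python
# Back-of-envelope stoichiometry check; atomic masses (g/mol) are assumed
# standard values, not taken from the article.
ATOMIC_MASS = {"Ca": 40.078, "P": 30.974, "O": 15.999, "H": 1.008}

def molar_mass(formula):
    """formula given as {element: count} -> molar mass in g/mol."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

phosphate_ion = molar_mass({"P": 1, "O": 4})                      # PO4(3-)
hydroxyapatite = molar_mass({"Ca": 5, "P": 3, "O": 13, "H": 1})   # Ca5(PO4)3OH

p_in_apatite = 3 * ATOMIC_MASS["P"] / hydroxyapatite  # phosphorus weight fraction
p_in_rock = 0.20 * p_in_apatite                       # rock that is ~20% apatite

print(f"phosphate ion molar mass: {phosphate_ion:.2f} g/mol")  # 94.97
print(f"P in hydroxyapatite:      {p_in_apatite:.1%}")         # ~18.5%
print(f"P in average rock ore:    {p_in_rock:.1%}")            # ~3.7%
```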
Some phosphate rock deposits, such as Mulberry in Florida, are notable for their inclusion of significant quantities of radioactive uranium isotopes. This syndrome is noteworthy because radioactivity can be released into surface waters in the process of application of the resultant phosphate fertilizer (e.g. in many tobacco farming operations in the southeast US).
In December 2012, Cominco Resources announced an updated JORC compliant resource of their Hinda project in Congo-Brazzaville of 531 Mt, making it the largest measured and indicated phosphate deposit in the world.
The three principal phosphate producer countries (China, Morocco and the United States) account for about 70% of world production.
In ecological terms, because of its important role in biological systems, phosphate is a highly sought after resource. Once used, it is often a limiting nutrient in environments, and its availability may govern the rate of growth of organisms. This is generally true of freshwater environments, whereas nitrogen is more often the limiting nutrient in marine (seawater) environments. Addition of high levels of phosphate to environments and to micro-environments in which it is typically rare can have significant ecological consequences. For example, blooms in the populations of some organisms at the expense of others, and the collapse of populations deprived of resources such as oxygen (see eutrophication) can occur. In the context of pollution, phosphates are one component of total dissolved solids, a major indicator of water quality, but not all phosphorus is in a molecular form that algae can break down and consume.
Calcium hydroxyapatite and calcite precipitates can be found around bacteria in alluvial topsoil. Because clay minerals promote biomineralization, the presence of bacteria and clay minerals results in calcium hydroxyapatite and calcite precipitates.
Phosphate deposits can contain significant amounts of naturally occurring heavy metals. Mining operations processing phosphate rock can leave tailings piles containing elevated levels of cadmium, lead, nickel, copper, chromium, and uranium. Unless carefully managed, these waste products can leach heavy metals into groundwater or nearby estuaries. Uptake of these substances by plants and marine life can lead to concentration of toxic heavy metals in food products. | https://en.wikipedia.org/wiki?curid=23690 |
Prime number theorem
In number theory, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann (in particular, the Riemann zeta function).
The first such distribution found is π(N) ~ N/log(N), where π(N) is the prime-counting function and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1/log(N). Consequently, a random integer with at most 2n digits (for large enough n) is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime (log(10^1000) ≈ 2302.6), whereas among positive integers of at most 2000 digits, about one in 4600 is prime (log(10^2000) ≈ 4605.2). In other words, the average gap between consecutive prime numbers among the first N integers is roughly log(N).
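The quality of this approximation can be checked numerically with a simple sieve. A sketch (the cutoff 10^6 is chosen only to keep the run fast):

```python
import math

def prime_count(n):
    """Count primes <= n with a Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return sum(sieve)

N = 10**6
pi = prime_count(N)        # 78498 primes below one million
approx = N / math.log(N)   # ≈ 72382
print(pi, round(approx), pi / approx)  # the ratio ≈ 1.08 tends to 1 as N grows
```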
Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x/log x is a good approximation to π(x) (where log here means the natural logarithm), in the sense that the limit of the "quotient" of the two functions π(x) and x/log x as x increases without bound is 1:
lim_{x→∞} π(x) / (x/log x) = 1,
known as the asymptotic law of distribution of prime numbers. Using asymptotic notation this result can be restated as
π(x) ~ x/log x.
This notation (and the theorem) does "not" say anything about the limit of the "difference" of the two functions as x increases without bound. Instead, the theorem states that x/log x approximates π(x) in the sense that the relative error of this approximation approaches 0 as x increases without bound.
The prime number theorem is equivalent to the statement that the nth prime number p_n satisfies
p_n ~ n log n,
the asymptotic notation meaning, again, that the relative error of this approximation approaches 0 as n increases without bound. For example, the 2×10^17th prime number is 8512677386048191063, and (2×10^17)log(2×10^17) rounds to 7967418752291744388, a relative error of about 6.4%.
As outlined below, the prime number theorem is also equivalent to
lim_{x→∞} ϑ(x)/x = lim_{x→∞} ψ(x)/x = 1,
where ϑ and ψ are the first and the second Chebyshev functions respectively.
Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A log a + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B = −1.08366. Carl Friedrich Gauss considered the same question at age 15 or 16 "in the year 1792 or 1793", according to his own recollection in 1849. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x/log(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.
In two papers from 1848 and 1850, the Russian mathematician Pafnuty Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for the use of the zeta function ζ(s), for real values of the argument "s", as in works of Leonhard Euler, as early as 1737. Chebyshev's papers predated Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit as x goes to infinity of π(x)/(x/log x) exists at all, then it is necessarily equal to one. He was able to prove unconditionally that this ratio is bounded above and below by two explicitly given constants near 1, for all sufficiently large x. Although Chebyshev's paper did not prove the Prime Number Theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2.
An important paper concerning the distribution of prime numbers was Riemann's 1859 memoir "On the Number of Primes Less Than a Given Magnitude", the only paper he ever wrote on the subject. Riemann introduced new ideas into the subject, the chief of them being that the distribution of prime numbers is intimately connected with the zeros of the analytically extended Riemann zeta function of a complex variable. In particular, it is in this paper of Riemann that the idea to apply methods of complex analysis to the study of the real function π(x) originates. Extending the ideas of Riemann, two proofs of the asymptotic law of the distribution of prime numbers were obtained independently by Jacques Hadamard and Charles Jean de la Vallée Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is non-zero for all complex values of the variable s that have the form s = 1 + it with t > 0.
During the 20th century, the theorem of Hadamard and de la Vallée Poussin also became known as the Prime Number Theorem. Several different proofs of it were found, including the "elementary" proofs of Atle Selberg and Paul Erdős (1949). While the original proofs of Hadamard and de la Vallée Poussin are long and elaborate, later proofs introduced various simplifications through the use of Tauberian theorems but remained difficult to digest. A short proof was discovered in 1980 by American mathematician Donald J. Newman. Newman's proof is arguably the simplest known proof of the theorem, although it is non-elementary in the sense that it uses Cauchy's integral theorem from complex analysis.
Here is a sketch of the proof referred to in one of Terence Tao's lectures. Like most proofs of the PNT, it starts out by reformulating the problem in terms of a less intuitive, but better-behaved, prime-counting function. The idea is to count the primes (or a related set such as the set of prime powers) with "weights" to arrive at a function with smoother asymptotic behavior. The most common such generalized counting function is the Chebyshev function ψ(x), defined by
ψ(x) = Σ_{p^k ≤ x} log p,
where the sum runs over all prime powers p^k not exceeding x.
This is sometimes written as
ψ(x) = Σ_{n ≤ x} Λ(n),
where Λ(n) is the von Mangoldt function, namely
Λ(n) = log p if n = p^k for some prime p and integer k ≥ 1, and Λ(n) = 0 otherwise.
It is now relatively easy to check that the PNT is equivalent to the claim that
lim_{x→∞} ψ(x)/x = 1.
Indeed, this follows from the easy estimates
ψ(x) = Σ_{p ≤ x} ⌊log x / log p⌋ log p ≤ Σ_{p ≤ x} log x = π(x) log x
and (using big O notation) for any ε > 0,
ψ(x) ≥ Σ_{x^(1−ε) ≤ p ≤ x} log p ≥ (1 − ε) log x · (π(x) + O(x^(1−ε))).
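The Chebyshev function and the PNT-equivalent claim ψ(x)/x → 1 are easy to illustrate numerically. A sketch, computing ψ directly from its definition (the cutoff 10^5 keeps the run fast):

```python
import math

def chebyshev_psi(x):
    """Chebyshev function psi(x): sum of log p over all prime powers p^k <= x."""
    # smallest-prime-factor sieve
    spf = list(range(x + 1))
    for p in range(2, int(x**0.5) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    total = 0.0
    for n in range(2, x + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:                           # n = p^k, a prime power
            total += math.log(p)
    return total

x = 10**5
print(chebyshev_psi(x) / x)   # close to 1, as the equivalent form of the PNT states
```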
The next step is to find a useful representation for ψ(x). Let ζ(s) be the Riemann zeta function. It can be shown that ζ(s) is related to the von Mangoldt function Λ(n), and hence to ψ(x), via the relation
−ζ′(s)/ζ(s) = Σ_{n=1}^∞ Λ(n) n^{−s} for Re(s) > 1.
A delicate analysis of this equation and related properties of the zeta function, using the Mellin transform and Perron's formula, shows that for non-integer x the equation
ψ(x) = x − Σ_ρ x^ρ/ρ − log(2π)
holds, where the sum is over all zeros ρ (trivial and nontrivial) of the zeta function. This striking formula is one of the so-called explicit formulas of number theory, and is already suggestive of the result we wish to prove, since the term x (claimed to be the correct asymptotic order of ψ(x)) appears on the right-hand side, followed by (presumably) lower-order asymptotic terms.
The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately: their contribution to the sum is
Σ_{n=1}^∞ x^{−2n}/(−2n) = (1/2) log(1 − x^{−2}),
which vanishes for large x. The nontrivial zeros, namely those in the critical strip 0 ≤ Re(s) ≤ 1, can potentially be of an asymptotic order comparable to the main term x if Re(ρ) = 1, so we need to show that all zeros have real part strictly less than 1.
To do this, we take for granted that ζ(s) is meromorphic in the half-plane Re(s) > 0, and is analytic there except for a simple pole at s = 1, and that there is a product formula
ζ(s) = Π_p (1 − p^{−s})^{−1}
for Re(s) > 1. This product formula follows from the existence of unique prime factorization of integers, and shows that ζ(s) is never zero in this region, so that its logarithm is defined there and
log ζ(s) = −Σ_p log(1 − p^{−s}) = Σ_{p,n} p^{−ns}/n.
Write s = x + iy; then
|ζ(x + iy)| = exp(Σ_{p,n} cos(ny log p) / (n p^{nx})).
Now observe the identity
3 + 4 cos φ + cos 2φ = 2(1 + cos φ)² ≥ 0,
so that
|ζ(x)³ ζ(x + iy)⁴ ζ(x + 2iy)| ≥ 1
for all x > 1. Suppose now that ζ(1 + iy) = 0. Certainly y is not zero, since ζ(s) has a simple pole at s = 1. Suppose that x > 1 and let x tend to 1 from above. Since ζ(s) has a simple pole at s = 1 and ζ(x + 2iy) stays analytic, the left hand side in the previous inequality tends to 0, a contradiction.
Finally, we can conclude that the PNT is heuristically true. To rigorously complete the proof there are still serious technicalities to overcome, due to the fact that the summation over zeta zeros in the explicit formula for does not converge absolutely but only conditionally and in a "principal value" sense. There are several ways around this problem but many of them require rather delicate complex-analytic estimates. Edwards's book provides the details. Another method is to use Ikehara's Tauberian theorem, though this theorem is itself quite hard to prove. D. J. Newman observed that the full strength of Ikehara's theorem is not needed for the prime number theorem, and one can get away with a special case that is much easier to prove.
In a handwritten note on a reprint of his 1838 paper "", which he mailed to Gauss, Dirichlet conjectured (under a slightly different form appealing to a series rather than an integral) that an even better approximation to π(x) is given by the offset logarithmic integral function Li(x), defined by
Li(x) = ∫_2^x dt / log t.
Indeed, this integral is strongly suggestive of the notion that the "density" of primes around t should be 1/log t. This function is related to the logarithm by the asymptotic expansion
Li(x) ~ (x/log x) Σ_{k=0}^∞ k!/(log x)^k = x/log x + x/(log x)² + 2x/(log x)³ + ⋯
So, the prime number theorem can also be written as π(x) ~ Li(x). In fact, in another paper in 1899 de la Vallée Poussin proved that
π(x) = Li(x) + O(x e^{−a√(log x)}) as x → ∞
for some positive constant a, where O(...) is the big O notation. This has been improved to
π(x) = Li(x) + O(x exp(−A (log x)^{3/5} / (log log x)^{1/5})).
In 2016, Trudgian proved an explicit upper bound for the difference between π(x) and li(x):
|π(x) − li(x)| ≤ 0.2795 (x/(log x)^{3/4}) exp(−√(log x / 6.455))
for x ≥ 229.
Because of the connection between the Riemann zeta function and π(x), the Riemann hypothesis has considerable importance in number theory: if established, it would yield a far better estimate of the error involved in the prime number theorem than is available today. More specifically, Helge von Koch showed in 1901 that if the Riemann hypothesis is true, the error term in the above relation can be improved to
π(x) = Li(x) + O(√x log x)
(this last estimate is in fact equivalent to the Riemann hypothesis). The constant involved in the big O notation was estimated in 1976 by Lowell Schoenfeld: assuming the Riemann hypothesis,
|π(x) − li(x)| < (√x log x) / (8π)
for all x ≥ 2657. He also derived a similar bound for the Chebyshev prime-counting function ψ:
|ψ(x) − x| < (√x (log x)²) / (8π)
for all x ≥ 73.2. This latter bound has been shown to express a variance to mean power law (when regarded as a random function over the integers), 1/f noise, and to also correspond to the Tweedie compound Poisson distribution. Parenthetically, the Tweedie distributions represent a family of scale invariant distributions that serve as foci of convergence for a generalization of the central limit theorem.
The logarithmic integral Li(x) is larger than π(x) for "small" values of x. This is because it is (in some sense) counting not primes, but prime powers, where a power p^n of a prime p is counted as 1/n of a prime. This suggests that Li(x) should usually be larger than π(x) by roughly Li(√x)/2, and in particular should always be larger than π(x). However, in 1914, J. E. Littlewood proved that π(x) − Li(x) changes sign infinitely often.
The first value of x where π(x) exceeds Li(x) is probably around x = 10^316; see the article on Skewes' number for more details. (On the other hand, the offset logarithmic integral Li(x) is smaller than π(x) already for x = 2; indeed, Li(2) = 0, while π(2) = 1.)
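The offset logarithmic integral can be evaluated by straightforward numerical quadrature and compared with the prime count. A sketch using composite Simpson's rule (π(10^6) = 78498 is a well-known value):

```python
import math

def offset_li(x, steps=400000):
    """Numerically evaluate Li(x) = integral from 2 to x of dt/log t (Simpson's rule)."""
    if steps % 2:
        steps += 1
    a, b = 2.0, float(x)
    h = (b - a) / steps
    f = lambda t: 1.0 / math.log(t)
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, steps, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, steps, 2))
    return s * h / 3.0

li_1e6 = offset_li(10**6)
# Li(10^6) ≈ 78626.5, far closer to π(10^6) = 78498 than 10^6/log(10^6) ≈ 72382,
# and slightly larger than π(10^6), as the text describes for "small" x.
print(li_1e6)
```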
In the first half of the twentieth century, some mathematicians (notably G. H. Hardy) believed that there exists a hierarchy of proof methods in mathematics depending on what sorts of numbers (integers, reals, complex) a proof requires, and that the prime number theorem (PNT) is a "deep" theorem by virtue of requiring complex analysis. This belief was somewhat shaken by a proof of the PNT based on Wiener's tauberian theorem, though this could be set aside if Wiener's theorem were deemed to have a "depth" equivalent to that of complex variable methods.
In March 1948, Atle Selberg established, by "elementary" means, the asymptotic formula
ϑ(x) log x + Σ_{p ≤ x} log p · ϑ(x/p) = 2x log x + O(x),
where
ϑ(x) = Σ_{p ≤ x} log p
for primes p. By July of that year, Selberg and Paul Erdős had each obtained elementary proofs of the PNT, both using Selberg's asymptotic formula as a starting point. These proofs effectively laid to rest the notion that the PNT was "deep", and showed that technically "elementary" methods were more powerful than had been believed to be the case. On the history of the elementary proofs of the PNT, including the Erdős–Selberg priority dispute, see an article by Dorian Goldfeld.
There is some debate about the significance of Erdős and Selberg's result. There is no rigorous and widely accepted definition of the notion of elementary proof in number theory, so it is not clear exactly in what sense their proof is "elementary". Although it does not use complex analysis, it is in fact much more technical than the standard proof of PNT. One possible definition of an "elementary" proof is "one that can be carried out in first order Peano arithmetic." There are number-theoretic statements (for example, the Paris–Harrington theorem) provable using second order but not first order methods, but such theorems are rare to date. Erdős and Selberg's proof can certainly be formalized in Peano arithmetic, and in 1994, Charalambos Cornaros and Costas Dimitracopoulos proved that their proof can be formalized in a very weak fragment of PA, namely IΔ0 + exp. However, this does not address the question of whether or not the standard proof of PNT can be formalized in PA.
In 2005, Avigad "et al." employed the Isabelle theorem prover to devise a computer-verified variant of the Erdős–Selberg proof of the PNT. This was the first machine-verified proof of the PNT. Avigad chose to formalize the Erdős–Selberg proof rather than an analytic one because while Isabelle's library at the time could implement the notions of limit, derivative, and transcendental function, it had almost no theory of integration to speak of.
In 2009, John Harrison employed HOL Light to formalize a proof employing complex analysis. By developing the necessary analytic machinery, including the Cauchy integral formula, Harrison was able to formalize "a direct, modern and elegant proof instead of the more involved 'elementary' Erdős–Selberg argument".
Let π_{d,a}(x) denote the number of primes in the arithmetic progression a, a + d, a + 2d, a + 3d, ... that are less than x. Dirichlet and Legendre conjectured, and de la Vallée Poussin proved, that, if a and d are coprime, then
π_{d,a}(x) ~ Li(x) / φ(d),
where φ is Euler's totient function. In other words, the primes are distributed evenly among the residue classes [a] modulo d with gcd(a, d) = 1. This is stronger than Dirichlet's theorem on arithmetic progressions (which only states that there is an infinity of primes in each class) and can be proved using methods similar to those used by Newman for his proof of the prime number theorem.
The Siegel–Walfisz theorem gives a good estimate for the distribution of primes in residue classes.
Although we have in particular
π_{4,1}(x) ~ π_{4,3}(x),
empirically the primes congruent to 3 are more numerous and are nearly always ahead in this "prime number race"; the first reversal occurs at x = 26861. However Littlewood showed in 1914 that there are infinitely many sign changes for the function
π_{4,1}(x) − π_{4,3}(x),
so the lead in the race switches back and forth infinitely many times. The phenomenon that π_{4,3}(x) is ahead most of the time is called Chebyshev's bias. The prime number race generalizes to other moduli and is the subject of much research; Pál Turán asked whether it is always the case that π_{c,a}(x) and π_{c,b}(x) change places when a and b are coprime to c. Granville and Martin give a thorough exposition and survey.
The prime number theorem is an "asymptotic" result. It gives an ineffective bound on π(x) as a direct consequence of the definition of the limit: for all ε > 0, there is an S such that for all x > S,
(1 − ε) x/log x < π(x) < (1 + ε) x/log x.
However, better bounds on π(x) are known, for instance Pierre Dusart's
(x/log x)(1 + 1/log x) < π(x) < (x/log x)(1 + 1/log x + 2.51/(log x)²).
The first inequality holds for all x ≥ 599 and the second one for x ≥ 355991.
A weaker but sometimes useful bound for x ≥ 55 is
x/(log x + 2) < π(x) < x/(log x − 4).
In Pierre Dusart's thesis there are stronger versions of this type of inequality that are valid for larger x. Later in 2010, Dusart proved:
x/(log x − 1) ≤ π(x) for x ≥ 5393, and
π(x) ≤ x/(log x − 1.1) for x ≥ 60184.
The proof by de la Vallée Poussin implies the following.
For every ε > 0, there is an S such that for all x > S,
x/(log x − (1 − ε)) < π(x) < x/(log x − (1 + ε)).
As a consequence of the prime number theorem, one gets an asymptotic expression for the nth prime number, denoted by p_n:
p_n ~ n log n.
A better approximation is
p_n/n = log n + log log n − 1 + (log log n − 2)/log n − ((log log n)² − 6 log log n + 11)/(2 (log n)²) + o(1/(log n)²).
Again considering the 2×10^17th prime number 8512677386048191063, this gives an estimate of 8512681315554715386; the first 5 digits match and the relative error is about 0.00005%.
Rosser's theorem states that
p_n > n log n.
This can be improved by the following pair of bounds, valid for n ≥ 6:
n (log n + log log n − 1) < p_n < n (log n + log log n).
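These bounds are easy to verify numerically against a table of primes. A sketch (the range n ≤ 10000 is illustrative; the bounds themselves hold for all n ≥ 6):

```python
import math

def primes_up_to(n):
    """Return the list of primes <= n (Sieve of Eratosthenes)."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(120000)    # comfortably past the 10000th prime, 104729
for n in range(6, 10001):
    p_n = primes[n - 1]          # p_1 = 2, so the nth prime is primes[n-1]
    lo = n * (math.log(n) + math.log(math.log(n)) - 1)
    hi = n * (math.log(n) + math.log(math.log(n)))
    assert lo < p_n < hi, (n, p_n)
print("bounds hold for 6 <= n <= 10000")
```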
The table compares exact values of π(x) to the two approximations x/log x and Li(x). The last column, x/π(x), is the average prime gap below x.
The value for π(10^24) was originally computed assuming the Riemann hypothesis; it has since been verified unconditionally.
There is an analogue of the prime number theorem that describes the "distribution" of irreducible polynomials over a finite field; the form it takes is strikingly similar to the case of the classical prime number theorem.
To state it precisely, let F = F_q be the finite field with q elements, for some fixed q, and let N_n be the number of monic "irreducible" polynomials over F whose degree is equal to n. That is, we are looking at polynomials with coefficients chosen from F, which cannot be written as products of polynomials of smaller degree. In this setting, these polynomials play the role of the prime numbers, since all other monic polynomials are built up of products of them. One can then prove that
N_n ~ q^n / n.
If we make the substitution x = q^n, then the right hand side is just
x / log_q x,
which makes the analogy clearer. Since there are precisely q^n monic polynomials of degree n (including the reducible ones), this can be rephrased as follows: if a monic polynomial of degree n is selected randomly, then the probability of it being irreducible is about 1/n.
One can even prove an analogue of the Riemann hypothesis, namely that
N_n = q^n/n + O(q^{n/2}/n).
The proofs of these statements are far simpler than in the classical case. It involves a short, combinatorial argument, summarised as follows: every element of the degree n extension of F is a root of some irreducible polynomial whose degree d divides n; by counting these roots in two different ways one establishes that
q^n = Σ_{d|n} d N_d,
where the sum is over all divisors d of n. Möbius inversion then yields
N_n = (1/n) Σ_{d|n} μ(n/d) q^d,
where μ is the Möbius function. (This formula was known to Gauss.) The main term occurs for d = n, and it is not difficult to bound the remaining terms. The "Riemann hypothesis" statement depends on the fact that the largest proper divisor of n can be no larger than n/2. | https://en.wikipedia.org/wiki?curid=23692 |
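The Möbius-inversion count of irreducible polynomials is easy to implement and check. A sketch (for q = 2, n = 3, the two irreducible cubics over F_2 are x³ + x + 1 and x³ + x² + 1):

```python
def mobius(k):
    """Möbius function via trial factorization (fine for small k)."""
    result, d = 1, 2
    while d * d <= k:
        if k % d == 0:
            k //= d
            if k % d == 0:       # squared prime factor => mu = 0
                return 0
            result = -result
        d += 1
    if k > 1:                    # one remaining prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def num_irreducible(q, n):
    """Number of monic irreducible degree-n polynomials over F_q:
       N_n = (1/n) * sum over d | n of mu(n/d) * q^d."""
    return sum(mobius(n // d) * q**d for d in divisors(n)) // n

print([num_irreducible(2, n) for n in range(1, 6)])   # [2, 1, 2, 3, 6]

# Consistency with the root-counting identity q^n = sum over d | n of d * N_d:
for n in range(1, 11):
    assert sum(d * num_irreducible(2, d) for d in divisors(n)) == 2**n
```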
Conflict of laws
Conflict of laws (sometimes called private international law) concerns relations across different legal jurisdictions between natural persons, companies, corporations and other legal entities, their legal obligations and the appropriate forum and procedure for resolving disputes between them. Conflict of laws especially affects private international law, but may also affect domestic legal disputes, e.g. the determination of which state's law applies in the United States, or where a contract makes incompatible reference to more than one legal framework.
Courts faced with a choice of law issue have a two-stage process:
In divorce cases, when a court is attempting to distribute marital property, if the divorcing couple is local and the property is local, then the court applies its domestic law "lex fori". The case becomes more complicated if foreign elements are thrown into the mix, such as when the place of marriage is different from the territory where divorce was filed; when the parties' nationalities and residences do not match; when there is property in a foreign jurisdiction; or when the parties have changed residence several times during the marriage.
Whereas commercial agreements or prenuptial agreements generally do not require legal formalities to be observed, when married couples enter a property agreement (agreement for the division of property at the termination of the marriage), stringent requirements are imposed, including notarization, witnesses, special acknowledgment forms. In some countries, these must be filed (or docketed) with a domestic court, and the terms must be "so ordered" by a judge. This is done in order to ensure that no undue influence or oppression has been exerted by one spouse against the other. Upon presenting a property agreement between spouses to a court of divorce, that court will generally assure itself of the following factors: signatures, legal formalities, intent, later intent, free will, lack of oppression, reasonableness and fairness, consideration, performance, reliance, later repudiation in writing or by conduct, and whichever other concepts of contractual bargaining apply in the context.
Many contracts and other forms of legally binding agreement include a jurisdiction or arbitration clause specifying the parties' choice of venue for any litigation (called a forum selection clause). In the EU, this is governed by the Rome I Regulation. Choice of law clauses may specify which laws the court or tribunal should apply to each aspect of the dispute. This matches the substantive policy of freedom of contract and will be determined by the law of the state where the choice of law clause confers its competence. Oxford Professor Adrian Briggs suggests that this is doctrinally problematic as it is emblematic of 'pulling oneself up by the bootstraps'.
Judges have accepted that the principle of party autonomy allows the parties to select the law most appropriate to their transaction. This judicial acceptance of subjective intent excludes the traditional reliance on objective connecting factors; it also harms consumers as vendors often impose one-sided contractual terms selecting a venue far from the buyer's home or workplace. Contractual clauses relating to consumers, employees, and insurance beneficiaries are regulated under additional terms set out in Rome I, which may modify the contractual terms imposed by vendors.
To apply one national legal system as against another may never be an entirely satisfactory approach. The parties' interests may always be better protected by applying a law conceived with international realities in mind. The Hague Conference on Private International Law is a treaty organization that oversees conventions designed to develop a uniform system. The deliberations of the conference have recently been the subject of controversy over the extent of cross-border jurisdiction on electronic commerce and defamation issues. There is a general recognition that there is a need for an international law of contracts: for example, many nations have ratified the "Vienna Convention on the International Sale of Goods", the "Rome Convention on the Law Applicable to Contractual Obligations" offers less specialized uniformity, and there is support for the "UNIDROIT Principles of International Commercial Contracts", a private restatement, all of which represent continuing efforts to produce international standards as the Internet and other technologies encourage ever more interstate commerce.
Other branches of the law are less well served and the dominant trend remains the role of the forum law rather than a supranational system for conflict purposes. Even the EU, which has institutions capable of creating uniform rules with direct effect, has failed to produce a universal system for the common market. Nevertheless, the Treaty of Amsterdam does confer authority on the community's institutions to legislate by Council Regulation in this area with supranational effect. Article 177 would give the Court of Justice jurisdiction to interpret and apply their principles so, if the political will arises, uniformity may gradually emerge in letter. Whether the domestic courts of the Member States would be consistent in applying those letters is speculative. | https://en.wikipedia.org/wiki?curid=23693 |
Timeline of programming languages
This is a record of historically important programming languages, by decade. | https://en.wikipedia.org/wiki?curid=23696 |
International Fixed Calendar
The International Fixed Calendar (also known as the Cotsworth plan, the Cotsworth calendar and the Eastman plan) is a solar calendar proposal for calendar reform designed by Moses B. Cotsworth, who presented it in 1902. It divides the solar year into 13 months of 28 days each. It is therefore a perennial calendar, with every date fixed to the same weekday every year. Though it was never officially adopted in any country, entrepreneur George Eastman adopted it for use in his Eastman Kodak Company, where it was used from 1928 to 1989.
It is sometimes also called "the" 13-month calendar or "the" equal-month calendar, but there are multiple alternative calendar designs that these descriptive labels apply to as well.
The calendar year has 13 months with 28 days each, divided into exactly 4 weeks (13 × 28 = 364). An extra day added as a holiday at the end of the year (after December 28, i.e. equal to December 31 on the Gregorian calendar), sometimes called "Year Day", does not belong to any week and brings the total to 365 days. Each year coincides with the corresponding Gregorian year, so January 1 in the Cotsworth calendar always falls on Gregorian January 1. Twelve months are named and ordered the same as those of the Gregorian calendar, except that the extra month is inserted between June and July, and called "Sol". Situated in mid-summer (from the point of view of its Northern Hemisphere authors) and including the mid-year "solstice", the name of the new month was chosen in homage to the sun.
Leap years in the International Fixed Calendar contain 366 days, and its occurrence follows the Gregorian rule. There is a leap year in every year whose number is divisible by 4, but not if the year number is divisible by 100, unless it is also divisible by 400. So although the year 2000 was a leap year, the years 1700, 1800, and 1900 were common years. The International Fixed Calendar inserts the extra day in leap years as June 29, between Saturday June 28 and Sunday Sol 1.
Each month begins on a Sunday, and ends on a Saturday; consequently, every year begins on Sunday. Neither Year Day nor Leap Day are considered to be part of any week; they are preceded by a Saturday and are followed by a Sunday.
All the months look like this:
The following shows how the 13 months and extra days of the International Fixed Calendar occur in relation to the dates of the Gregorian calendar:
*These Gregorian dates between March and June are a day earlier in a Gregorian leap year. March in the Fixed Calendar always has a fixed number of days (28), and includes the Gregorian February 29 (on Gregorian leap years).
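The mapping described above (13 fixed months of 28 days, Year Day after December 28, Leap Day as June 29 in leap years) can be sketched as a small conversion routine; the function name is illustrative, and the day numbering follows the scheme in the text:

```python
import datetime

IFC_MONTHS = ["January", "February", "March", "April", "May", "June", "Sol",
              "July", "August", "September", "October", "November", "December"]

def gregorian_to_ifc(year, month, day):
    """Convert a Gregorian date to an (IFC month name, day) pair per the scheme above.
    Year Day and Leap Day belong to no month and are returned by name."""
    d = datetime.date(year, month, day).timetuple().tm_yday  # 1-based day of year
    leap = datetime.date(year, 12, 31).timetuple().tm_yday == 366
    if leap:
        if d == 169:             # the day after IFC June 28 in a leap year
            return ("Leap Day", None)
        if d > 169:
            d -= 1               # Leap Day sits outside the week/month cycle
    if d == 365:
        return ("Year Day", None)
    return (IFC_MONTHS[(d - 1) // 28], (d - 1) % 28 + 1)

print(gregorian_to_ifc(2019, 1, 1))    # ('January', 1)
print(gregorian_to_ifc(2019, 12, 31))  # ('Year Day', None)
print(gregorian_to_ifc(2020, 6, 17))   # ('Leap Day', None) — day 169 of 2020
print(gregorian_to_ifc(2020, 2, 29))   # ('March', 4): IFC March absorbs Feb 29
```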
Lunisolar calendars, with fixed weekdays, existed in many ancient cultures, with certain holidays always falling on the same dates of the month and days of the week.
The simple idea of a 13-month perennial calendar has been around since at least the middle of the 18th century. Versions of the idea differ mainly on how the months are named, and the treatment of the extra day in leap year.
The "Georgian calendar" was proposed in 1745 by the Rev. Hugh Jones, an American colonist from Maryland writing under the pen name Hirossa Ap-Iccim. The author named the plan, and the thirteenth month, after King George II of Great Britain. The 365th day each year was to be set aside as Christmas. The treatment of leap year varied from the Gregorian rule, however, and the year would begin closer to the winter solstice. In a later version of the plan, published in 1753, the 13 months were all renamed for Christian saints.
In 1849 the French philosopher Auguste Comte (1798–1857) proposed the 13-month "Positivist Calendar", naming the months: Moses, Homer, Aristotle, Archimedes, Caesar, St. Paul, Charlemagne, Dante, Gutenberg, Shakespeare, Descartes, Frederic and Bichat. The days of the year were likewise dedicated to "saints" in the Positivist Religion of Humanity. Positivist weeks, months, and years begin with Monday instead of Sunday. Comte also reset the year number, beginning the era of his calendar (year 1) with the Gregorian year 1789. For the extra days of the year not belonging to any week or month, Comte followed the pattern of Ap-Iccim (Jones), ending each year with a festival on the 365th day, followed by a subsequent feast day occurring only in leap years.
Whether Moses Cotsworth was familiar with the 13-month plans that preceded his International Fixed Calendar is not known. He did follow Ap-Iccim (Jones) in designating the 365th day of the year as Christmas. His suggestion was that this last day of the year should be designated a Sunday, and hence, because the following day would be New Year's Day and a Sunday also, he called it a Double Sunday. Since Cotsworth's goal was a simplified, more "rational" calendar for business and industry, he would carry over all the features of the Gregorian calendar consistent with this goal, including the traditional month names, the week beginning on Sunday (still traditional in the US, though most other countries and the ISO (International Organization for Standardization) week standard start the week on Monday), and the Gregorian leap-year rule.
To promote Cotsworth's calendar reform the International Fixed Calendar League was founded in 1923, just after the plan was selected by the League of Nations as the best of 130 calendar proposals put forward. Sir Sandford Fleming, the inventor and driving force behind worldwide adoption of standard time, became the first president of the IFCL. The League opened offices in London and later in Rochester, New York. George Eastman, of the Eastman Kodak Company, became a fervent supporter of the IFC, and instituted its use at Kodak. The International Fixed Calendar League ceased operations shortly after the calendar plan failed to win final approval of the League of Nations in 1937.
The several advantages of the International Fixed Calendar are mainly related to its organization. | https://en.wikipedia.org/wiki?curid=23698 |
Potential energy
In physics, potential energy is the energy held by an object because of its position relative to other objects, stresses within itself, its electric charge, or other factors.
Common types of potential energy include the gravitational potential energy of an object that depends on its mass and its distance from the center of mass of another object, the elastic potential energy of an extended spring, and the electric potential energy of an electric charge in an electric field. The unit for energy in the International System of Units (SI) is the joule, which has the symbol J.
The term "potential energy" was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to Greek philosopher Aristotle's concept of potentiality.
Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, that are called "conservative forces", can be represented at every point in space by vectors expressed as gradients of a certain scalar function called "potential".
Since the work of potential forces acting on a body that moves from a start to an end position is determined only by these two positions, and does not depend on the trajectory of the body, there is a function known as "potential" that can be evaluated at the two positions to determine this work.
There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration.
Forces derivable from a potential are also called conservative forces. The work done by a conservative force is
where formula_2 is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are "PE", "U", "V", and "Ep".
Potential energy is the energy an object has by virtue of its position relative to other objects. It is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field as potential energy. If the external force is removed, the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing the body to fall.
Consider a ball whose mass is m and whose height is h. The acceleration g of free fall is approximately constant, so the weight force of the ball mg is constant. Force × displacement gives the work done, which is equal to the gravitational potential energy, thus
The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position.
Potential energy is closely linked with forces. If the work done by a force on a body that moves from "A" to "B" does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from "A" assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field.
If the work for an applied force is independent of the path, then the work done by the force is evaluated at the start and end of the trajectory of the point of application. This means that there is a function "U"(x), called a "potential," that can be evaluated at the two points xA and xB to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is
where "C" is the trajectory taken from A to B. Because the work done is independent of the path taken, then this expression is true for any trajectory, "C", from A to B.
The function "U"(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces.
In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve "C" takes a special form if the force F is related to a scalar field φ(x) so that
In this case, work along the curve is given by
which can be evaluated using the gradient theorem to obtain
This shows that when forces are derivable from a scalar field, the work of those forces along a curve "C" is computed by evaluating the scalar field at the start point "A" and the end point "B" of the curve. This means the work integral does not depend on the path between "A" and "B" and is said to be independent of the path.
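This path independence is easy to check numerically. The sketch below is a hypothetical example, assuming a uniform gravitational force on a 1 kg mass near Earth's surface; it integrates F · dr along two very different curves between the same endpoints and obtains the same work:

```python
import numpy as np

def work_along_path(force, path, n=20_000):
    """Numerically evaluate the line integral W = ∫ F · dr along `path`,
    where path(t) maps t in [0, 1] to a point in 3-D space."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])   # sampled positions
    dr = np.diff(pts, axis=0)                # displacement steps
    mid = 0.5 * (pts[:-1] + pts[1:])         # midpoints at which F is sampled
    F = np.array([force(p) for p in mid])
    return float(np.sum(F * dr))             # Σ F · dr

# Uniform gravity on a 1 kg mass: F = (0, 0, -mg), derivable from φ = -mgz.
g = 9.8
force = lambda p: np.array([0.0, 0.0, -1.0 * g])

# Two different paths from A = (0, 0, 0) to B = (1, 0, 2):
straight = lambda t: np.array([t, 0.0, 2.0 * t])
wiggly   = lambda t: np.array([t, np.sin(8 * np.pi * t), 2.0 * t])

W1 = work_along_path(force, straight)
W2 = work_along_path(force, wiggly)
# Both equal -mg·Δz = -19.6 J: the work depends only on the endpoints.
print(W1, W2)
```

The wiggly path wanders sideways, but the sideways motion is perpendicular to the force and contributes no work, so both integrals agree.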
Potential energy "U"=-φ(x) is traditionally defined as the negative of this scalar field so that work by the force field decreases potential energy, that is
In this case, the application of the del operator to the work function yields,
and the force F is said to be "derivable from a potential." This also necessarily implies that F must be a conservative vector field. The potential "U" defines a force F at every point x in space, so the set of forces is called a force field.
Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve γ(t)=r(t) from γ(a)=A to γ(b)=B, and computing,
For the force field F, let v= dr/dt, then the gradient theorem yields,
The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is
Examples of work that can be computed from potential functions are gravity and spring forces.
For small height changes, gravitational potential energy can be computed using
where m is the mass in kilograms, g is the local gravitational acceleration (about 9.8 metres per second squared near Earth's surface), h is the height above a reference level in metres, and U is the energy in joules.
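As a minimal illustration of U = mgh (the mass and height here are arbitrary example values):

```python
def gravitational_pe(mass_kg, height_m, g=9.8):
    """U = m g h, valid for small height changes near Earth's surface."""
    return mass_kg * g * height_m

# A 2 kg object lifted 1.5 m gains roughly 29.4 J of potential energy.
print(gravitational_pe(2.0, 1.5))  # ≈ 29.4
```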
In classical physics, gravity exerts a constant downward force F=(0, 0, "Fz") on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory r(t) = ("x"(t), "y"(t), "z"(t)), such as the track of a roller coaster is calculated using its velocity, v=("v"x, "v"y, "v"z), to obtain
where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve r(t).
A horizontal spring exerts a force F = (−"kx", 0, 0) that is proportional to its deformation in the axial or "x" direction. The work of this spring on a body moving along the space curve s("t") = ("x"("t"), "y"("t"), "z"("t")), is calculated using its velocity, v = ("v"x, "v"y, "v"z), to obtain
For convenience, suppose that contact with the spring occurs at "t" = 0; then the integral of the product of the distance "x" and the "x"-velocity, "xvx", is "x"2/2.
The function
is called the potential energy of a linear spring.
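A small sketch of this result, U = ½kx², with an assumed stiffness value:

```python
def spring_pe(k, x):
    """Elastic potential energy of a linear spring: U = 0.5 * k * x**2,
    where k is the stiffness (N/m) and x the deformation (m)."""
    return 0.5 * k * x**2

# A 200 N/m spring stretched 0.1 m stores about 1 J; compression by the
# same amount stores the same energy, since U depends on x squared.
print(spring_pe(200.0, 0.1), spring_pe(200.0, -0.1))
```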
Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy.
The gravitational potential function, also known as gravitational potential energy, is:
The negative sign follows the convention that work is gained from a loss of potential energy.
The gravitational force between two bodies of mass "M" and "m" separated by a distance "r" is given by Newton's law
where formula_19 is a vector of length 1 pointing from "M" to "m" and "G" is the gravitational constant.
Let the mass "m" move at the velocity v then the work of gravity on this mass as it moves from position r(t1) to r(t2) is given by
The position and velocity of the mass "m" are given by
where e"r" and e"t" are the radial and tangential unit vectors directed relative to the vector from "M" to "m". Use this to simplify the formula for work of gravity to,
This calculation uses the fact that
The electrostatic force exerted by a charge "Q" on another charge "q" separated by a distance "r" is given by Coulomb's Law
where formula_19 is a vector of length 1 pointing from "Q" to "q" and "ε"0 is the vacuum permittivity. This may also be written using the Coulomb constant "k"e = 1/(4π"ε"0).
The work "W" required to move "q" from "A" to any point "B" in the electrostatic force field is given by the potential function
The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience.
Typically the potential energy of a system depends on the "relative" positions of its components only, so the reference state can also be expressed in terms of relative positions.
Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount.
Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact.
The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. "Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail.
The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant ("standard gravity"). In this case, a simple expression for gravitational potential energy can be derived using the "W" = "Fd" equation for work, and the equation
The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied by the vertical distance it is moved (remember "W = Fd"). The upward force required while moving at a constant velocity is equal to the weight, "mg", of an object, so the work done in lifting it through a height "h" is the product "mgh". Thus, when accounting only for mass, gravity, and altitude, the equation is:
where "U" is the potential energy of the object relative to its being on the Earth's surface, "m" is the mass of the object, "g" is the acceleration due to gravity, and "h" is the altitude of the object. If "m" is expressed in kilograms, "g" in m/s2 and "h" in metres then "U" will be calculated in joules.
Hence, the potential difference is
However, over large variations in distance, the approximation that "g" is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance "r" between the two bodies. Using that definition, the gravitational potential energy of a system of masses "m"1 and "M"2 at a distance "r" using gravitational constant "G" is
where "K" is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that "K"=0 (i.e. in relation to a point at infinity) makes calculations simpler, albeit at the cost of making "U" negative; for why this is physically reasonable, see below.
Given this formula for "U", the total potential energy of a system of "n" bodies is found by summing, for all formula_31 pairs of two bodies, the potential energy of the system of those two bodies.
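The two-body formula with the K = 0 (zero at infinity) convention, and the pairwise sum over n bodies, can be sketched as follows; the masses and positions are arbitrary illustrative values, not data from the text:

```python
import itertools
import math

G = 6.674e-11  # gravitational constant, N·m²/kg²

def two_body_pe(m1, m2, r):
    """U = -G m1 m2 / r, with U -> 0 as r -> infinity (K = 0)."""
    return -G * m1 * m2 / r

def system_pe(masses, positions):
    """Total PE of n bodies: sum the two-body term over all n(n-1)/2 pairs."""
    total = 0.0
    for (m1, p1), (m2, p2) in itertools.combinations(zip(masses, positions), 2):
        total += two_body_pe(m1, m2, math.dist(p1, p2))
    return total

# Three 1000 kg masses on the corners of a right triangle (example values):
masses = [1000.0, 1000.0, 1000.0]
positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(system_pe(masses, positions))  # negative, as the convention requires
```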
Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity.
therefore,
As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite "r" over another, there seem to be only two reasonable choices for the distance at which "U" becomes zero: formula_34 and formula_35. The choice of formula_36 at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative.
The singularity at formula_34 in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with formula_36 for formula_34, would result in potential energy being positive, but infinitely large for all nonzero values of "r", and would make calculations involving sums or differences of potential energies beyond what is possible with the real number system. Since physicists abhor infinities in their calculations, and "r" is always non-zero in practice, the choice of formula_36 at infinity is by far the more preferable choice, even if the idea of negative energy in a gravity well appears to be peculiar at first.
The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this.
Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction.
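The energy balance of such a scheme follows directly from E = ρVgh. The figures below are illustrative assumptions, not the actual Dinorwig specifications:

```python
# Energy stored by pumping water uphill: E = rho * V * g * h.
# All figures below are illustrative assumptions, not actual
# Dinorwig specifications.
rho = 1000.0            # density of water, kg/m³
V = 1.0e6               # pumped volume, m³ (assumed)
g = 9.8                 # gravitational acceleration, m/s²
h = 500.0               # mean head between the lakes, m (assumed)
round_trip_eff = 0.75   # typical pumped-storage efficiency (assumed)

stored_pe = rho * V * g * h               # gravitational PE, in joules
recoverable = stored_pe * round_trip_eff  # after pumping/generation losses
print(round(stored_pe / 3.6e12, 2), "GWh stored")  # ≈ 1.36 GWh
```

The efficiency factor reflects the frictional and conversion losses mentioned above: only part of the stored potential energy comes back as electricity.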
Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism, and in the counterweights that help lift elevators, cranes, and sash windows.
Roller coasters are an entertaining way to utilize potential energy: chains are used to move a car up an incline, building up gravitational potential energy that is then converted into kinetic energy as the car descends.
Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid, is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES).
Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. For example, when a fuel is burned the chemical energy is converted to heat, as it is when food is digested and metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions.
The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc.
An object can have potential energy by virtue of its electric charge and the several forces related to its presence. There are two main types of this kind of potential energy: electrostatic potential energy and electrodynamic potential energy (also sometimes called magnetic potential energy).
Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge "Q" on another charge "q" which is given by
where formula_19 is a vector of length 1 pointing from "Q" to "q" and "ε"0 is the vacuum permittivity. This may also be written using the Coulomb constant "k"e = 1/(4π"ε"0).
If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby.
The work "W" required to move "q" from "A" to any point "B" in the electrostatic force field is given by
typically given in joules (J). A related quantity called "electric potential" (commonly denoted with a "V" for voltage) is equal to the electric potential energy per unit charge.
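A brief sketch of the Coulomb energy and the per-unit-charge potential; the charge values are arbitrary examples:

```python
import math

EPS0 = 8.854e-12               # vacuum permittivity, F/m
K_E = 1 / (4 * math.pi * EPS0) # Coulomb constant, about 8.99e9 N·m²/C²

def electrostatic_pe(Q, q, r):
    """U = Q q / (4 pi eps0 r); zero reference at infinite separation."""
    return K_E * Q * q / r

def electric_potential(Q, r):
    """Electric potential V = U/q per unit test charge, in volts."""
    return K_E * Q / r

# Two like 1 µC charges 0.1 m apart have positive PE (work was needed
# to bring them together); opposite charges give a negative value.
print(electrostatic_pe(1e-6, 1e-6, 0.1))   # ≈ 0.09 J
print(electrostatic_pe(1e-6, -1e-6, 0.1))  # ≈ -0.09 J
```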
The energy of a magnetic moment formula_44 in an externally produced magnetic B-field has potential energy
The magnetization in a field is
where the integral can be taken over all space or, equivalently, over the region where the magnetization is nonzero.
Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when its field is opposite to the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart.
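The compass behaviour can be checked against U = −m · B; the moment and field values below are assumed purely for illustration:

```python
import numpy as np

def dipole_energy(m, B):
    """Potential energy of a magnetic moment m in field B: U = -m · B."""
    return -float(np.dot(m, B))

B = np.array([0.0, 0.0, 5e-5])           # roughly Earth-strength field, tesla
m_aligned = np.array([0.0, 0.0, 2.0])    # moment in A·m² (assumed value)
m_opposed = np.array([0.0, 0.0, -2.0])   # same magnitude, anti-aligned

# Aligned with the field is the energy minimum; anti-aligned is the
# maximum, which is why a free compass needle swings into alignment.
print(dipole_energy(m_aligned, B) < dipole_energy(m_opposed, B))  # True
```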
Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Weak nuclear forces provide the potential energy for certain kinds of radioactive decay, such as beta decay.
Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions (the heat and radiation have the missing mass, but it often escapes from the system, where it is not measured). The energy from the Sun is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million tonnes of solar matter per second into electromagnetic energy, which is radiated into space.
Potential energy is closely linked with forces. If the work done by a force on a body that moves from "A" to "B" does not depend on the path between these points, then the work of this force measured from "A" assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field.
For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by formula_47 or formula_48, corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass "M" and "m" separated by a distance "r" is
The gravitational potential (specific energy) of the two bodies is
where formula_51 is the reduced mass.
The work done against gravity by moving an infinitesimal mass from point A with formula_52 to point B with formula_53 is formula_54 and the work done going back the other way is formula_55 so that the total work done in moving from A to B and returning to A is
If the potential is redefined at A to be formula_57 and the potential at B to be formula_58, where formula_59 is a constant (i.e. formula_59 can be any number, positive or negative, but it must be the same at A as it is at B) then the work done going from A to B is
as before.
In practical terms, this means that one can set the zero of formula_62 and formula_47 anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section).
A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field.
Pyramid
A pyramid (from "") is a structure whose outer surfaces are triangular and converge to a single point at the top, making the shape roughly a pyramid in the geometric sense. The base of a pyramid can be trilateral, quadrilateral, or of any polygon shape. As such, a pyramid has at least three outer triangular surfaces (at least four faces including the base). The square pyramid, with a square base and four triangular outer surfaces, is a common version.
A pyramid's design, with the majority of the weight closer to the ground, and with the pyramidion at the apex, means that less material higher up on the pyramid will be pushing down from above. This distribution of weight allowed early civilizations to create stable monumental structures.
Civilizations in many parts of the world have built pyramids. The largest pyramid by volume is the Great Pyramid of Cholula, in the Mexican state of Puebla. For thousands of years, the largest structures on Earth were pyramids—first the Red Pyramid in the Dahshur Necropolis and then the Great Pyramid of Khufu, both in Egypt—the latter is the only one of the Seven Wonders of the Ancient World still remaining.
The Mesopotamians built the earliest pyramidal structures, called "ziggurats". In ancient times, these were brightly painted in gold/bronze. Since they were constructed of sun-dried mud-brick, little remains of them. Ziggurats were built by the Sumerians, Babylonians, Elamites, Akkadians, and Assyrians for local religions. Each ziggurat was part of a temple complex which included other buildings. The precursors of the ziggurat were raised platforms that date from the Ubaid period during the fourth millennium BC. The earliest ziggurats began near the end of the Early Dynastic Period. The latest Mesopotamian ziggurats date from the 6th century BC.
Built in receding tiers upon a rectangular, oval, or square platform, the ziggurat was a pyramidal structure with a flat top. Sun-baked bricks made up the core of the ziggurat with facings of fired bricks on the outside. The facings were often glazed in different colors and may have had astrological significance. Kings sometimes had their names engraved on these glazed bricks. The number of tiers ranged from two to seven. It is assumed that they had shrines at the top, but there is no archaeological evidence for this and the only textual evidence is from Herodotus. Access to the shrine would have been by a series of ramps on one side of the ziggurat or by a spiral ramp from base to summit.
The most famous pyramids are the Egyptian pyramids — huge structures built of brick or stone, some of which are among the world's largest constructions. They are shaped as a reference to the rays of the sun. Most pyramids had a polished, highly reflective white limestone surface, to give them a shining appearance when viewed from a distance. The capstone was usually made of hard stone – granite or basalt – and could be plated with gold, silver, or electrum, and would also be highly reflective. The ancient Egyptians built pyramids from around 2700 BC until around 1700 BC. The first pyramid was erected during the Third Dynasty by the Pharaoh Djoser and his architect Imhotep. This step pyramid consisted of six stacked mastabas. The largest Egyptian pyramids are those at the Giza pyramid complex.
The age of the pyramids reached its zenith at Giza in 2575–2150 BC. Ancient Egyptian pyramids were in most cases placed west of the river Nile because the divine pharaoh's soul was meant to join with the sun during its descent before continuing with the sun in its eternal round. As of 2008, some 135 pyramids have been discovered in Egypt. The Great Pyramid of Giza is the largest in Egypt and one of the largest in the world. At 481 feet (147 m), it was the tallest building in the world until Lincoln Cathedral was finished in 1311 AD. The base is over 52,600 square metres (566,000 sq ft) in area. The Great Pyramid of Giza is one of the Seven Wonders of the Ancient World. It is the only one to survive into modern times. The Ancient Egyptians covered the faces of pyramids with polished white limestone, containing great quantities of fossilized seashells. Many of the facing stones have fallen or have been removed and used for construction in Cairo.
Most pyramids are located near Cairo, with only one royal pyramid being located south of Cairo, at the Abydos temple complex. The pyramid at Abydos, Egypt, was commissioned by Ahmose I, who founded the 18th Dynasty and the New Kingdom. The building of pyramids began in the Third Dynasty with the reign of King Djoser. Early kings such as Snefru built several pyramids, with subsequent kings adding to the number of pyramids until the end of the Middle Kingdom.
The last king to build royal pyramids was Ahmose, with later kings hiding their tombs in the hills, such as those in the Valley of the Kings in Luxor's West Bank. In Medinat Habu, or Deir el-Medina, smaller pyramids were built by individuals. Smaller pyramids with steeper sides were also built by the Nubians who ruled Egypt in the Late Period.
While pyramids are associated with Egypt, the nation of Sudan has 220 extant pyramids, the most numerous in the world.
Nubian pyramids were constructed (roughly 240 of them) at three sites in Sudan to serve as tombs for the kings and queens of Napata and Meroë. The pyramids of Kush, also known as Nubian Pyramids, have different characteristics than the pyramids of Egypt. The Nubian pyramids were constructed at a steeper angle than Egyptian ones. Pyramids were still being built in Sudan as late as 200 AD.
One of the unique structures of Igbo culture was the Nsude Pyramids, at the Nigerian town of Nsude, northern Igboland. Ten pyramidal structures were built of clay/mud. The first base section was 60 ft in circumference and 3 ft in height. The next stack was 45 ft in circumference. Circular stacks continued until the top was reached. The structures were temples for the god Ala, who was believed to reside at the top. A stick was placed at the top to represent the god's residence. The structures were laid in groups of five parallel to each other. Because they were built of clay/mud like the Deffufa of Nubia, time has taken its toll, requiring periodic reconstruction.
Pausanias (2nd century AD) mentions two buildings resembling pyramids, one, 19 kilometres (12 mi) southwest of the still standing structure at Hellenikon, a common tomb for soldiers who died in a legendary struggle for the throne of Argos and another which he was told was the tomb of Argives killed in a battle around 669/8 BC. Neither of these still survive and there is no evidence that they resembled Egyptian pyramids.
There are also at least two surviving pyramid-like structures still available to study, one at Hellenikon and the other at Ligourio/Ligurio, a village near the ancient theatre of Epidaurus. These buildings were not constructed in the same manner as the pyramids in Egypt. They do have inwardly sloping walls, but otherwise there is no obvious resemblance to Egyptian pyramids. They had large central rooms (unlike Egyptian pyramids), and the Hellenikon structure is rectangular rather than square, which means that the sides could not have met at a point. The stone used to build these structures was limestone quarried locally and was cut to fit, not into freestanding blocks like the Great Pyramid of Giza.
The dating of these structures has been based on pot shards excavated from the floor and the grounds, with the latest dates from scientific dating estimated around the 5th and 4th centuries. Normally this technique is used for dating pottery, but here researchers have used it to try to date stone flakes from the walls of the structures. This has created some debate about whether or not these structures are actually older than those of Egypt, which is part of the Black Athena controversy.
Mary Lefkowitz has criticised this research. She suggests that some of the research was done not to determine the reliability of the dating method, as was suggested, but to back up an assumption of age and to make certain points about pyramids and Greek civilization. She notes not only that the results are not very precise, but also that other structures mentioned in the research are not in fact pyramids, e.g. a tomb alleged to be that of Amphion and Zethus near Thebes, and a structure at Stylidha (Thessaly) which is just a long wall. She also notes the possibility that the stones that were dated might have been recycled from earlier constructions, and that earlier research from the 1930s, confirmed in the 1980s by Fracchia, was ignored. She argues that the researchers undertook their work using a novel and previously untested methodology in order to confirm a predetermined theory about the age of these structures.
Liritzis responded in a journal article published in 2011, stating that Lefkowitz failed to understand and misinterpreted the methodology.
The Pyramids of Güímar are six rectangular pyramid-shaped, terraced structures built from lava stone without the use of mortar. They are located in the district of Chacona, part of the town of Güímar on the island of Tenerife in the Canary Islands. The structures have been dated to the 19th century, and their original function is explained as a byproduct of contemporary agricultural techniques.
Autochthonous Guanche traditions as well as surviving images indicate that similar structures (also known as "Morras", "Majanos", "Molleros", or "Paredones") could once have been found in many locations on the island. However, over time they were dismantled and used as a cheap building material. In Güímar itself there were nine pyramids, only six of which survive.
There are many square flat-topped mound tombs in China. The First Emperor, Qin Shi Huang (who unified the seven pre-imperial kingdoms circa 221 BC), was buried under a large mound outside modern-day Xi'an. In the following centuries about a dozen more Han Dynasty royals were also buried under flat-topped pyramidal earthworks.
A number of Mesoamerican cultures also built pyramid-shaped structures. Mesoamerican pyramids were usually stepped, with temples on top, more similar to the Mesopotamian ziggurat than the Egyptian pyramid.
The largest pyramid by volume is the Great Pyramid of Cholula, in the Mexican state of Puebla. Constructed from the 3rd century BC to the 9th century AD, this pyramid is considered the largest monument ever constructed anywhere in the world, and is still being excavated. The third largest pyramid in the world, the Pyramid of the Sun, at Teotihuacan is also located in Mexico. There is an unusual pyramid with a circular plan at the site of Cuicuilco, now inside Mexico City and mostly covered with lava from an eruption of the Xitle Volcano in the 1st century BC. There are several circular stepped pyramids called Guachimontones in Teuchitlán, Jalisco as well.
Pyramids in Mexico were often used as places of human sacrifice. Estimates of the number sacrificed for the re-consecration of the Great Pyramid of Tenochtitlan in 1487 vary widely: according to Michael Harner, "one source states 20,000, another 72,344, and several give 80,400".
Many pre-Columbian Native American societies of ancient North America built large pyramidal earth structures known as platform mounds. Among the largest and best-known of these structures is Monks Mound at the site of Cahokia in what became Illinois, completed around 1100 AD, which has a base larger than that of the Great Pyramid at Giza. Many of the mounds underwent multiple episodes of mound construction at periodic intervals, some becoming quite large. They are believed to have played a central role in the mound-building peoples' religious life and documented uses include semi-public chief's house platforms, public temple platforms, mortuary platforms, charnel house platforms, earth lodge/town house platforms, residence platforms, square ground and rotunda platforms, and dance platforms. Cultures who built substructure mounds include the Troyville culture, Coles Creek culture, Plaquemine culture and Mississippian cultures.
The 27-metre-high Pyramid of Cestius was built by the end of the 1st century BC and still exists today, close to the Porta San Paolo. Another one, named "Meta Romuli", standing in the "Ager Vaticanus" (today's Borgo), was destroyed at the end of the 15th century.
Pyramids have occasionally been used in Christian architecture of the feudal era, e.g. as the tower of Oviedo's Gothic Cathedral of San Salvador.
Many giant granite temple pyramids were made in South India during the Chola Empire, many of which are still in religious use today. Examples of such pyramid temples include the Brihadisvara Temple at Thanjavur, the Brihadisvara Temple at Gangaikonda Cholapuram and the Airavatesvara Temple at Darasuram. However, the temple pyramid with the largest area is the Ranganathaswamy Temple in Srirangam, Tamil Nadu. The Thanjavur temple was built by Raja Raja Chola in the 11th century. The Brihadisvara Temple was declared a World Heritage Site by UNESCO in 1987; the Temple of Gangaikondacholapuram and the Airavatesvara Temple at Darasuram were added as extensions to the site in 2004.
Alongside menhirs, stone tables, and stone statues, Austronesian megalithic culture in Indonesia also featured earth and stone step pyramid structures called "punden berundak", as discovered at the Pangguyangan site near Cisolok and at Cipari near Kuningan. The construction of stone pyramids is based on the native belief that mountains and high places are the abode of ancestral spirits.
The step pyramid is the basic design of the 8th-century Borobudur Buddhist monument in Central Java. However, the later temples built in Java were influenced by Indian Hindu architecture, as displayed by the towering spires of the Prambanan temple. The 15th century in Java, during the late Majapahit period, saw the revival of Austronesian indigenous elements, as displayed by the Sukuh temple, which somewhat resembles a Mesoamerican pyramid, and by the stepped pyramids of Mount Penanggungan.
Andean cultures had used pyramids in various architectural structures such as the ones in Caral, Túcume and Chavín de Huantar.
With the Egyptian Revival movement of the nineteenth and early twentieth centuries, pyramids became more common in funerary architecture. This style was especially popular with tycoons in the US. Hunt's Tomb in Phoenix, Arizona and the Schoenhofen Pyramid Mausoleum in Chicago are notable examples, and Henry Bergh, Charles Debrille Poston and many others were also buried in pyramid-shaped mausoleums. People in Europe adopted this style as well; one of them was Branislav Nušić, who was buried in such a tomb. Even today some people build pyramid tombs for themselves: Nicolas Cage bought a pyramid tomb for himself in a famed New Orleans graveyard. | https://en.wikipedia.org/wiki?curid=23704 |
Predestination
Predestination, in Christian theology, is the doctrine that all events have been willed by God, usually with reference to the eventual fate of the individual soul. Explanations of predestination often seek to address the "paradox of free will", whereby God's omniscience seems incompatible with human free will. In this usage, predestination can be regarded as a form of religious determinism, and usually of predeterminism, also known as theological determinism.
There is some disagreement among scholars regarding the views on predestination of first-century AD Judaism, out of which Christianity came. Josephus wrote during the first century that the three main Jewish sects differed on this question. He held that the Essenes and Pharisees believed God's providence orders all human events, but that the Pharisees still maintained that people are able to choose between right and wrong. He wrote that the Sadducees did not have a doctrine of providence.
Biblical scholar N. T. Wright argues that Josephus's portrayal of these groups is incorrect, and that the Jewish debates referenced by Josephus should be seen as having to do with God's work to liberate Israel rather than philosophical questions about predestination. Wright asserts that Essenes were content to wait for God to liberate Israel while Pharisees believed Jews needed to act in cooperation with God. John Barclay responded that Josephus's description was an over-simplification and there were likely to be complex differences between these groups which may have been similar to those described by Josephus. Francis Watson has also argued on the basis of 4 Ezra, a document dated to the first century AD, that Jewish beliefs in predestination are primarily concerned with God's choice to save some individual Jews.
In the New Testament, Romans 8–11 presents a statement on predestination. In Romans 8:28–30, Paul writes,
Biblical scholars have interpreted this passage in several ways. Many say this only has to do with service, and is not about salvation. The Catholic biblical commentator Brendan Byrne wrote that the predestination mentioned in this passage should be interpreted as applied to the Christian community corporately rather than individuals. Another Catholic commentator, Joseph Fitzmyer, wrote that this passage teaches that God has predestined the salvation of all humans. Douglas Moo, a Protestant biblical interpreter, reads the passage as teaching that God has predestined a certain set of people to salvation, and predestined the remainder of humanity to reprobation (damnation). Similarly, Wright's interpretation is that in this passage Paul teaches that God will save those whom he has chosen, but Wright also emphasizes that Paul does not intend to suggest that God has eliminated human free will or responsibility. Instead, Wright asserts, Paul is saying that God's will works through that of humans to accomplish salvation.
Origen, writing in the third century, taught that God's providence extends to every individual. He believed God's predestination was based on God's foreknowledge of every individual's merits, whether in their current life or a previous life.
Later in the fourth and fifth centuries, Augustine of Hippo (354–430) also taught that God orders all things while preserving human freedom. Prior to 396, Augustine believed that predestination was based on God's foreknowledge of whether individuals would believe, that God's grace was "a reward for human assent". Later, in response to Pelagius, Augustine said that the sin of pride consists in assuming that "we are the ones who choose God or that God chooses us (in his foreknowledge) because of something worthy in us", and argued that it is God's grace that causes the individual act of faith. Scholars are divided over whether Augustine's teaching implies double predestination, or the belief that God chooses some people for damnation as well as some for salvation. Catholic scholars tend to deny that he held such a view while some Protestants and secular scholars affirm that Augustine did believe in double predestination.
Augustine's position raised objections. Julian of Eclanum expressed the view that Augustine was bringing Manichean thoughts into the church. For Vincent of Lérins, this was a disturbing innovation. This new tension eventually became obvious with the confrontation between Augustine and Pelagius culminating in condemnation of Pelagianism (as interpreted by Augustine) at the Council of Ephesus in 431. Pelagius denied Augustine's view of predestination in order to affirm that salvation is achieved by an act of free will.
The Council of Arles in the late fifth century condemned the position "that some have been condemned to death, others have been predestined to life", though this may seem to follow from Augustine's teaching. The Second Council of Orange in 529 also condemned the position that "some have been truly predestined to evil by divine power".
In the eighth century, John of Damascus emphasized the freedom of the human will in his doctrine of predestination, and argued that acts arising from peoples' wills are not part of God's providence at all. Damascene teaches that people's good actions are done in cooperation with God, but are not caused by him.
Gottschalk of Orbais, a ninth-century Saxon monk, argued that God predestines some people to hell as well as predestining some to heaven, a view known as double predestination. He was condemned by several synods, but his views remained popular. Irish theologian John Scottus Eriugena wrote a refutation of Gottschalk. Eriugena abandoned Augustine's teaching on predestination. He wrote that God's predestination should be equated with his foreknowledge of people's choices.
In the thirteenth century, Thomas Aquinas taught that God predestines certain people to the beatific vision based solely on his own goodness rather than that of creatures. Aquinas also believed that people are free in their choices, fully cause their own sin, and are solely responsible for it. According to Aquinas, there are several ways in which God wills actions. He directly wills the good, indirectly wills evil consequences of good things, and only permits evil. Aquinas held that in permitting evil, God does not will it to be done or not to be done.
In the fourteenth century, William of Ockham taught that God does not cause human choices and equated predestination with divine foreknowledge. Though Ockham taught that God predestines based on people's foreseen works, he maintained that God's will was not constrained to do this.
John Calvin rejected the idea that God permits rather than actively decrees the damnation of sinners, as well as other evil. Calvin did not believe God to be guilty of sin, but rather he considered God inflicting sin upon his creations to be an unfathomable mystery (it would seem that God was simultaneously willing and not willing sin to befall humans). Though he maintained God's predestination applies to damnation as well as salvation, he taught that the damnation of the damned is caused by their sin, but that the salvation of the saved is solely caused by God. Other Protestant Reformers, including Huldrych Zwingli, also held double predestinarian views.
The Eastern Orthodox view was summarized by Bishop Theophan the Recluse in response to the question, "What is the relationship between the Divine provision and our free will?"
Catholicism teaches the doctrine of predestination, while rejecting the classical Calvinist view known as "double predestination". This means that while it is held that those whom God has elected to eternal life will infallibly attain it, and are therefore said to be predestined to salvation by God, those who perish are not predestined to damnation. According to the Catholic Church, God predestines no one to go to hell; for this, a willful turning away from God (a mortal sin) is necessary, and persistence in it until the end. Catholicism has generally discouraged human attempts to guess or predict the Divine Will. The "Catholic Encyclopedia" entry on predestination says:
Pope John Paul II wrote:
The Catechism of the Catholic Church says, "To God, all moments of time are present in their immediacy. When therefore he establishes his eternal plan of 'predestination', he includes in it each person's free response to his grace."
Catholics do not believe that any hints or evidence of the predestined status of individuals is available to humans, and predestination generally plays little or no part in Catholic teaching to the faithful, being a topic addressed in a professional theological context only.
Augustine of Hippo laid the foundation for much of the later Catholic teaching on predestination. His teachings on grace and free will were largely adopted by the Second Council of Orange (529), whose decrees were directed against the Semipelagians. Augustine wrote,
Augustine also teaches that people have free will. For example, in "On Grace and Free Will", (see especially chapters II–IV) Augustine states that "He [God] has revealed to us, through His Holy Scriptures, that there is in man a free choice of will," and that "God's precepts themselves would be of no use to a man unless he had free choice of will, so that by performing them he might obtain the promised rewards." (chap. II)
Thomas Aquinas' views concerning predestination are largely in agreement with Augustine and can be summarized by many of his writings in his "Summa Theologiæ":
This table summarizes the classical views of three different Protestant beliefs.
Lutherans historically hold to unconditional election to salvation. However, some do not believe that certain people are predestined to salvation, but rather that salvation is predestined for those who seek God. Lutherans believe Christians should be assured that they are among the predestined. However, they disagree with those who make predestination the source of salvation rather than Christ's suffering, death, and resurrection. Unlike some Calvinists, Lutherans do not believe in a predestination to damnation. Instead, Lutherans teach that eternal damnation is a result of the unbeliever's sins, rejection of the forgiveness of sins, and unbelief.
Martin Luther's attitude towards predestination is set out in his "On the Bondage of the Will", published in 1525. This publication by Luther was in response to the published treatise by Desiderius Erasmus in 1524 known as "On Free Will". Luther based his views on Ephesians 2:8–10, which says:
The Belgic Confession of 1561 affirmed that God "delivers and preserves" from perdition "all whom he, in his eternal and unchangeable council, of mere goodness hath elected in Christ Jesus our Lord, without respect to their works" (Article XVI).
Calvinists believe that God picked those whom he will save and bring with him to Heaven before the world was created. They also believe that those people God does not save will go to Hell. John Calvin thought people who were saved could never lose their salvation, and that the "elect" (those God saved) would know they were saved because of their actions.
In this common, loose sense of the term, to affirm or to deny predestination has particular reference to the Calvinist doctrine of unconditional election. In the Calvinist interpretation of the Bible, this doctrine normally has only pastoral value related to the assurance of salvation and the absolution of salvation by grace alone. However, the philosophical implications of the doctrine of election and predestination are sometimes discussed beyond these systematic bounds. Under the topic of the doctrine of God (theology proper), the predestinating decision of God cannot be contingent upon anything outside of himself, because all other things are dependent upon him for existence and meaning. Under the topic of the doctrines of salvation (soteriology), the predestinating decision of God is made from God's knowledge of his own will (Romans 9:15), and is therefore not contingent upon human decisions (rather, free human decisions are outworkings of the decision of God, which sets the total reality within which those decisions are made in exhaustive detail: that is, nothing left to chance). Calvinists do not pretend to understand how this works; but they are insistent that the Scriptures teach both the sovereign control of God and the responsibility and freedom of human decisions.
Calvinist groups use the term Hyper-Calvinism to describe Calvinistic systems that assert without qualification that God's intention to destroy some is equal to his intention to save others. Some forms of Hyper-Calvinism have racial implications, as when the Dutch Calvinist theologian Franciscus Gomarus argued that Jews, because of their refusal to worship Jesus Christ, were members of the non-elect, as also argued by John Calvin himself, based on I John 2:22–23 in the New Testament. Some Dutch settlers in South Africa argued that black people were sons of Ham, whom Noah had cursed to be slaves, according to Genesis 9:18–19, or drew analogies between them and the Canaanites, suggesting a "chosen people" ideology similar to that espoused by proponents of the Jewish nation. This justified racial hierarchy on earth, as well as racial segregation of congregations, but did not exclude blacks from being part of the elect. Other Calvinists vigorously objected to these arguments (see Afrikaner Calvinism).
Expressed sympathetically, the Calvinist doctrine is that God has mercy or withholds it, with particular consciousness of who are to be the recipients of mercy in Christ. Therefore, the particular persons are chosen, out of the total number of human beings, who will be rescued from enslavement to sin and the fear of death, and from punishment due to sin, to dwell forever in his presence. Those who are being saved are assured through the gifts of faith, the sacraments, and communion with God through prayer and increase of good works, that their reconciliation with him through Christ is settled by the sovereign determination of God's will. God also has particular consciousness of those who are passed over by his selection, who are without excuse for their rebellion against him, and will be judged for their sins.
Calvinists typically divide on the issue of predestination into infralapsarians (sometimes called 'sublapsarians') and supralapsarians. Infralapsarians interpret the biblical election of God to highlight his love (1 John 4:8; Ephesians 1:4b–5a) and chose his elect considering the situation after the Fall, while supralapsarians interpret biblical election to highlight God's sovereignty (Romans 9:16) and that the Fall was ordained by God's decree of election. In infralapsarianism, election is God's response to the Fall, while in supralapsarianism the Fall is part of God's plan for election. In spite of the division, many Calvinist theologians would consider the debate surrounding the infra- and supralapsarian positions one in which scant Scriptural evidence can be mustered in either direction, and that, at any rate, has little effect on the overall doctrine.
Some Calvinists decline to describe the eternal decree of God in terms of a sequence of events or thoughts, and many caution against the simplifications involved in describing any action of God in speculative terms. Most make distinctions between the positive manner in which God chooses some to be recipients of grace, and the manner in which grace is consciously withheld so that some are destined for everlasting punishments.
Debate concerning predestination according to the common usage concerns the destiny of the damned: whether God is just if that destiny is settled prior to the existence of any actual volition of the individual, and whether the individual is in any meaningful sense responsible for his destiny if it is settled by the eternal action of God.
Arminians hold that God does not predetermine, but instead infallibly knows who will believe and perseveringly be saved. This view is known as conditional election, because it states that election is conditional on the one who wills to have faith in God for salvation. Although God knows from the beginning of the world who will go where, the choice is still with the individual. The Dutch Calvinist theologian Franciscus Gomarus strongly opposed the views of Jacobus Arminius with his doctrine of supralapsarian predestination.
The Church of Jesus Christ of Latter-day Saints (LDS Church) rejects the doctrine of predestination, but does believe in foreordination. Foreordination, an important doctrine of the LDS Church, teaches that during the pre-mortal existence, God selected ("foreordained") particular people to fulfill certain missions ("callings") during their mortal lives. For example, prophets were foreordained to be the Lord's servants (see Jeremiah 1:5), all who receive the priesthood were foreordained to that calling, and Jesus was foreordained to enact the atonement.
The LDS Church teaches the doctrine of moral agency, the ability to choose and act for oneself, and decide whether to accept Christ's atonement.
Conditional election is the belief that God chooses for eternal salvation those whom he foresees will have faith in Christ. This belief emphasizes the importance of a person's free will. The counter-view is known as unconditional election, and is the belief that God chooses whomever he will, based solely on his purposes and apart from an individual's free will. It has long been an issue in Calvinist–Arminian debate. An alternative viewpoint is Corporate election, which distinguishes God's election and predestination for corporate entities such as the community "in Christ," and individuals who can benefit from that community's election and predestination so long as they continue belonging to that community.
Infralapsarianism (also called sublapsarianism) holds that predestination logically coincides with the preordination of Man's fall into sin. That is, God predestined sinful men for salvation. Therefore, according to this view, God is the ultimate cause, but not the proximate source or "author" of sin. Infralapsarians often emphasize a difference between God's decree (which is inviolable and inscrutable), and his revealed will (against which man is disobedient). Proponents also typically emphasize the grace and mercy of God toward all men, although teaching also that only some are predestined for salvation.
In common English parlance, the doctrine of predestination often has particular reference to the doctrines of Calvinism. The version of predestination espoused by John Calvin, after whom Calvinism is named, is sometimes referred to as "double predestination" because in it God predestines some people for salvation (i.e. unconditional election) and some for condemnation (i.e. reprobation), which results from allowing the individual's own sins to condemn them. Calvin himself defines predestination as "the eternal decree of God, by which he determined with himself whatever he wished to happen with regard to every man. Not all are created on equal terms, but some are preordained to eternal life, others to eternal damnation; and, accordingly, as each has been created for one or other of these ends, we say that he has been predestined to life or to death."
On the spectrum of beliefs concerning predestination, Calvinism is the strongest form among Christians. It teaches that God's predestining decision is based on the knowledge of his own will rather than foreknowledge, concerning every particular person and event; and, God continually acts with entire freedom, in order to bring about his will in completeness, but in such a way that the freedom of the creature is not violated, "but rather, established".
Calvinists who hold the infralapsarian view of predestination usually prefer that term to "sublapsarianism," perhaps with the intent of blocking the inference that they believe predestination is on the basis of foreknowledge ("sublapsarian" meaning, assuming the fall into sin). The different terminology has the benefit of distinguishing the Calvinist double predestination version of infralapsarianism from Lutheranism's view that predestination is a mystery, which forbids the unprofitable intrusion of prying minds since God only reveals partial knowledge to the human race.
Supralapsarianism is the doctrine that God's decree of predestination for salvation and reprobation logically precedes his preordination of the human race's fall into sin. That is, God decided to save, and to damn; he then determined the means by which that would be made possible. It is a matter of controversy whether or not Calvin himself held this view, but most scholars link him with the infralapsarian position. It is known, however, that Calvin's successor in Geneva, Theodore Beza, held to the supralapsarian view.
Double predestination, or the double decree, is the doctrine that God actively reprobates, or decrees damnation of some, as well as salvation for those whom he has elected. Augustine made statements that on their own seem to teach such a doctrine, but in the context of his other writings it is not clear whether he held it. Augustine's doctrine of predestination does seem to imply a double predestinarian view. Gottschalk of Orbais taught it more explicitly in the ninth century, and Gregory of Rimini in the fourteenth. During the Protestant Reformation John Calvin also held double predestinarian views. John Calvin states: "By predestination we mean the eternal decree of God, by which he determined with himself whatever he wished to happen with regard to every man. All are not created on equal terms, but some are preordained to eternal life, others to eternal damnation; and, accordingly, as each has been created for one or other of these ends, we say that he has been predestinated to life or to death."
Open theism advocates the non-traditional Arminian view of election that predestination is corporate. In corporate election, God does not choose which individuals he will save prior to creation, but rather God chooses the church as a whole. Or put differently, God chooses what type of individuals he will save. Another way the New Testament puts this is to say that God chose the church in Christ (Eph. 1:4). In other words, God chose from all eternity to save all those who would be found in Christ, by faith in God. This choosing is not primarily about salvation from eternal destruction either but is about God's chosen agency in the world. Thus individuals have full freedom in terms of whether they become members of the church or not. Corporate election is thus consistent with the open view's position on God's omniscience, which states that God's foreknowledge does not determine the outcomes of individual free will.
Middle Knowledge is a concept that was developed by Jesuit theologian Luis de Molina, and exists under a doctrine called Molinism. It attempts to deal with the topic of predestination by reconciling God's sovereign providence with the notion of libertarian free will. The concept of Middle Knowledge holds that God has a knowledge of true pre-volitional counterfactuals for all free creatures. That is, what any individual creature with a free will (e.g. a human) would do under any given circumstance. God's knowledge of counterfactuals is reasoned to occur logically prior to his divine creative decree (that is, prior to creation), and after his knowledge of necessary truths. Thus, Middle Knowledge holds that before the world was created, God knew what every existing creature capable of libertarian freedom (e.g. every individual human) would freely choose to do in all possible circumstances. It then holds that based on this information, God elected from a number of these possible worlds, the world most consistent with his ultimate will, which is the actual world that we live in.
For example: if Free Creature A were placed in Circumstance B, God, via his Middle Knowledge, would know that Free Creature A would freely choose option Y over option Z.
Based on this Middle Knowledge, God has the ability to actualise a world in which A is placed in a circumstance in which he freely chooses to do what is consistent with God's ultimate will. If God determined that the world most suited to his purposes is a world in which A would freely choose Y instead of Z, God can actualise a world in which Free Creature A finds himself in Circumstance B.
In this way, Middle Knowledge is thought of by its proponents to be consistent with any theological doctrines that assert God as having divine providence and man having a libertarian freedom (e.g. Calvinism, Catholicism, Lutheranism), and to offer a potential solution to the concerns that God's providence somehow nullifies man from having true liberty in his choices. | https://en.wikipedia.org/wiki?curid=23705 |
Primitive notion
In mathematics, logic, philosophy, and formal systems, a primitive notion is a concept that is not defined in terms of previously defined concepts. It is often motivated informally, usually by an appeal to intuition and everyday experience. In an axiomatic theory, relations between primitive notions are restricted by axioms. Some authors refer to the latter as "defining" primitive notions by one or more axioms, but this can be misleading. Formal theories cannot dispense with primitive notions, under pain of infinite regress (per the regress problem).
For example, in contemporary geometry, "point", "line", and "contains" are some primitive notions. Instead of attempting to define them, their interplay is ruled (in Hilbert's axiom system) by axioms like "For every two points there exists a line that contains them both".
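The Hilbert-style treatment just described can be sketched in a proof assistant. The following Lean 4 fragment is an illustration only: the names `Point`, `Line`, `Contains`, and `line_through` are chosen for this sketch, not drawn from any standard library. The primitives are introduced as axioms, so they carry no definitions, only the constraints the axioms impose:

```lean
-- Primitive notions: introduced without definitions.
axiom Point : Type
axiom Line : Type
axiom Contains : Line → Point → Prop

-- An axiom restricts how the primitives relate, without defining them:
-- "For every two points there exists a line that contains them both."
axiom line_through (p q : Point) :
  ∃ l : Line, Contains l p ∧ Contains l q
```

Any structure supplying two types and a relation satisfying `line_through` is a model of this tiny theory; nothing in the theory says what a point "is".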
Alfred Tarski explained the role of primitive notions as follows:
An inevitable regress to primitive notions in the theory of knowledge was explained by Gilbert de B. Robinson:
The necessity for primitive notions is illustrated in several axiomatic foundations in mathematics: | https://en.wikipedia.org/wiki?curid=23706 |