Nial
Nial (from "Nested Interactive Array Language") is a high-level array programming language developed from about 1981 by Mike Jenkins of Queen's University, Kingston, Ontario, Canada. Jenkins co-created the Jenkins–Traub algorithm.
Nial combines a functional programming notation for arrays based on an array theory developed by Trenchard More with structured programming concepts for numeric, character and symbolic data.
It is most often used for prototyping and artificial intelligence.
In 1982, Jenkins formed a company (Nial Systems Ltd) to market the language and the Q'Nial implementation of Nial. As of 2014, the company website supports an Open Source project for the Q'Nial software with the binary and source available for download. Its license is derived from Artistic License 1.0, the only differences being the preamble, the definition of "Copyright Holder" (which is changed from "whoever is named in the copyright or copyrights for the package" to "NIAL Systems Limited"), and an instance of "whoever" (which is changed to "whomever").
Version 4 of Nial used a generalized and expressive array theory, but Version 6 sacrificed some of the generality of the functional model and modified the array theory. Only Version 6 is available now.
Nial defines all its data types as nested rectangular arrays. Atomic values such as integers, booleans and characters are treated as solitary arrays, that is, arrays containing a single member. Arrays themselves can contain other arrays to form arbitrarily deep structures. Nial also provides records, which are defined as non-homogeneous array structures.
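For example (an illustrative sketch of standard Q'Nial behaviour), the literal [5, [6, 7]] denotes a two-element array whose second element is itself an array, while the atom 5 on its own already behaves as a solitary array.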
Functions in Nial are called operations. From the Nial manual: "An operation is a functional object that is given an argument array and returns a result array. The process of executing an operation by giving it an argument value is called an operation call or an operation application."
Nial, like other APL-derived languages, unifies binary operators and operations, so the notations below all have the same meaning.
Note: sum is the same as +.
Binary operation: 2 + 3, or 2 sum 3
Array notation: + [2,3], or sum [2,3]
Strand notation: + 2 3, or sum 2 3
Grouped notation: + (2 3), or sum (2 3)
Nial also uses transformers, which are higher-order functions: they use their argument operation to construct a new, modified operation.
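For instance (a sketch of standard behaviour), each is a transformer: each sum [[1, 2], [3, 4]] applies sum to every item of the argument, yielding 3 7.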
An atlas in Nial is an operation made up of an array of component operations. When an atlas is applied to a value, each element of the atlas is applied in turn to the value to provide an end result. This supports a point-free (variable-free) style of definition. Atlases are also used by the transformers. In the example 'inner [+,*]', the list '[+,*]' is an atlas.
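As a sketch of standard behaviour, applying the atlas [sum,product] to the pair 2 3 applies each component operation in turn, yielding the array 5 6.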
For example, count 6 generates the array 1 2 3 4 5 6.
Arrays can also be written as literals, for example [5, 6, 7, 8] in array notation or 5 6 7 8 in strand notation.
shape gives an array's dimensions, and reshape can be used to rearrange an array to new dimensions.
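A minimal sketch, assuming the standard count and reshape operations:
shape (2 3 reshape count 6)
This returns 2 3, since reshape has arranged the six items of count 6 into a 2-by-3 table.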
Definitions are of the form: <name> is <expression>
fact is recur [ 0 =, 1 first, pass, product, -1 +]
rev is reshape [ shape, across [pass, pass, converse append ] ]
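As a usage sketch (assuming the two definitions above): fact 4 evaluates to 24, and rev 1 2 3 evaluates to 3 2 1.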
Contrast this with APL.
Checking the divisibility of A by B:
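A plausible definition, offered as a sketch (it assumes the standard mod operation and is not necessarily the original example):
is_divisible is 0 = mod
Applied to a pair A B, this tests whether A mod B equals 0.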
Defining the is_prime filter:
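A sketch consistent with the explanation in the following lines (again an assumption, using the standard sum, eachright, pass and count operations):
is_prime is 2 = sum eachright is_divisible [pass, count]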
count generates the array [1..N], and pass (the identity operation) stands for N itself.
eachright applies is_divisible (pass, element) to each element of the count-generated array.
Thus this transforms the count-generated array into an array where numbers that can divide N are replaced by '1' and others by '0'. Hence if the number N is prime, sum [transformed array] must be 2 (itself and 1).
Now all that remains is to generate another array using count N, and filter all that are not prime.
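As a sketch, such a filter could be written:
primes is sublist [ each is_prime, pass ] count
Here count N builds the array 1..N, each is_prime marks the primes in it, and sublist keeps only the marked items.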
A quicksort can then be defined using the fork and link transformers:
quicksort is fork [ >= [1 first, tally],
 pass,
 link [
      quicksort sublist [ < [pass, first], pass ],
      sublist [ match [pass, first], pass ],
      quicksort sublist [ > [pass, first], pass ]
 ]
]
Using it:
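quicksort [5, 8, 7, 4, 3]
3 4 5 7 8
(a sketch of an interactive session; the second line is the sorted result as Q'Nial would display it)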
Niels Henrik Abel
Niels Henrik Abel (5 August 1802 – 6 April 1829) was a Norwegian mathematician who made pioneering contributions in a variety of fields. His most famous single result is the first complete proof demonstrating the impossibility of solving the general quintic equation in radicals. This question was one of the outstanding open problems of his day and had been unresolved for over 250 years. He was also an innovator in the field of elliptic functions and a discoverer of Abelian functions. He made his discoveries while living in poverty and died at the age of 26 from tuberculosis.
Most of his work was done in six or seven years of his working life. Regarding Abel, the French mathematician Charles Hermite said: "Abel has left mathematicians enough to keep them busy for five hundred years." Another French mathematician, Adrien-Marie Legendre, said: "Quelle tête celle du jeune Norvégien!" ("What a head the young Norwegian has!").
The Abel Prize in mathematics, originally proposed in 1899 to complement the Nobel Prizes, is named in his honour.
Niels Henrik Abel was born in Nedstrand, Norway, as the second child of the pastor Søren Georg Abel and Anne Marie Simonsen. When Niels Henrik Abel was born, the family was living at a rectory on Finnøy. Much suggests that Niels Henrik was born in the neighboring parish, as his parents were guests of the bailiff in Nedstrand in July / August of his year of birth.
Niels Henrik Abel's father, Søren Georg Abel, had a degree in theology and philosophy and served as pastor at Finnøy. Søren's father, Niels's grandfather, Hans Mathias Abel, was also a pastor, at Gjerstad Church near the town of Risør. Søren had spent his childhood at Gjerstad, and had also served as chaplain there; and after his father's death in 1804, Søren was appointed pastor at Gjerstad and the family moved there. The Abel family originated in Schleswig and came to Norway in the 17th century.
Anne Marie Simonsen was from Risør; her father, Niels Henrik Saxild Simonsen, was a tradesman and merchant ship-owner, and said to be the richest person in Risør. Anne Marie had grown up with two stepmothers, in relatively luxurious surroundings. At Gjerstad rectory, she enjoyed arranging balls and social gatherings. Much suggests she was early on an alcoholic and took little interest in the upbringing of the children. Niels Henrik and his brothers were given their schooling by their father, with handwritten books to read. An addition table in a book of mathematics reads: 1+0=0.
With Norwegian independence and the first election held in Norway, in 1814, Søren Abel was elected as a representative to the Storting. Meetings of the Storting were held until 1866 in the main hall of the Cathedral School in Christiania (now known as Oslo). Almost certainly, this is how he came into contact with the school, and he decided that his eldest son, Hans Mathias, should start there the following year. However, when the time for his departure approached, Hans was so saddened and depressed over having to leave home that his father did not dare send him away. He decided to send Niels instead.
In 1815, Niels Abel entered the Cathedral School at the age of 13. His elder brother Hans joined him there a year later. They shared rooms and had classes together. Hans got better grades than Niels; however, a new mathematics teacher, Bernt Michael Holmboe, was appointed in 1818. He gave the students mathematical tasks to do at home. He saw Niels Henrik's talent in mathematics, and encouraged him to study the subject to an advanced level. He even gave Niels private lessons after school.
In 1818, Søren Abel had a public theological argument with the theologian Stener Johannes Stenersen regarding his catechism from 1806. The argument was well covered in the press, and Søren was given the nickname "Abel Treating" (Norwegian: "Abel Spandabel"). Niels' reaction to the quarrel was said to have been "excessive gaiety". At the same time, Søren almost faced impeachment after insulting Carsten Anker, the host of the Norwegian Constituent Assembly, and in September 1818 he returned to Gjerstad with his political career in ruins. He began drinking heavily and died only two years later, in 1820, aged 48.
Bernt Michael Holmboe supported Niels Henrik Abel with a scholarship to remain at the school and raised money from his friends to enable him to study at the Royal Frederick University.
When Abel entered the university in 1821, he was already the most knowledgeable mathematician in Norway. Holmboe had nothing more he could teach him and Abel had studied all the latest mathematical literature in the university library. During that time, Abel started working on the quintic equation in radicals. Mathematicians had been looking for a solution to this problem for over 250 years. In 1821, Abel thought he had found the solution. The two professors of mathematics in Christiania, Søren Rasmussen and Christopher Hansteen, found no errors in Abel's formulas, and sent the work on to the leading mathematician in the Nordic countries, Carl Ferdinand Degen in Copenhagen. He too found no faults but still doubted that the solution, which so many outstanding mathematicians had sought for so long, could really have been found by an unknown student in far-off Christiania. Degen noted, however, Abel's unusually sharp mind, and believed that such a talented young man should not waste his abilities on such a "sterile object" as the fifth degree equation, but rather on elliptic functions and transcendence; for then, wrote Degen, he would "discover Magellanian thoroughfares to large portions of a vast analytical ocean". Degen asked Abel to give a numerical example of his method. While trying to provide an example, Abel found a mistake in his paper. This led to a discovery in 1823 that a solution to a fifth- or higher-degree equation was impossible.
Abel graduated in 1822. His performance was exceptionally good in mathematics and average in other subjects.
After he graduated, professors from the university supported Abel financially, and Professor Christopher Hansteen let him live in a room in the attic of his home. Abel would later view Mrs. Hansteen as his second mother. While living there, Abel helped his younger brother, Peder Abel, through the examen artium. He also helped his sister Elisabeth find work in the town.
In early 1823, Niels Abel published his first article in "Magazin for Naturvidenskaberne", Norway's first scientific journal, which had been co-founded by Professor Hansteen. Abel published several articles, but the journal soon realized that this was not material for the common reader. In 1823, Abel also wrote a paper in French: "a general representation of the possibility to integrate all differential formulas" (Norwegian: "en alminnelig Fremstilling af Muligheten at integrere alle mulige Differential-Formler"). He applied for funds at the university to publish it. However, the work was lost while being reviewed, never to be found thereafter.
In mid-1823, Professor Rasmussen gave Abel a gift of 100 speciedaler so he could travel to Copenhagen and visit Ferdinand Degen and other mathematicians there. While in Copenhagen, Abel did some work on Fermat's Last Theorem. Abel's uncle, Peder Mandrup Tuxen, lived at the naval base in Christianshavn, Copenhagen, and at a ball there Niels Abel met Christine Kemp, his future fiancée. In 1824, Christine moved to Son, Norway to work as a governess and the couple got engaged over Christmas.
After returning from Copenhagen, Abel applied for a government scholarship in order to visit top mathematicians in Germany and France, but he was instead granted 200 speciedaler yearly for two years, to stay in Christiania and study German and French. In the next two years, he was promised a scholarship of 600 speciedaler yearly and he would then be permitted to travel abroad. While studying these languages, Abel published his first notable work in 1824, "Mémoire sur les équations algébriques où on démontre l'impossibilité de la résolution de l'équation générale du cinquième degré" (Memoir on algebraic equations, in which the impossibility of solving the general equation of the fifth degree is proven). For, in 1823, Abel had at last proved the impossibility of solving the quintic equation in radicals (now referred to as the Abel–Ruffini theorem). However, this paper was in an abstruse and difficult form, in part because he had restricted himself to only six pages, in order to save money on printing. A more detailed proof was published in 1826 in the first volume of "Crelle's Journal".
In 1825, Abel wrote a personal letter to King Carl Johan of Norway/Sweden requesting permission to travel abroad. He was granted this permission, and in September 1825 he left Christiania together with four friends from university (Christian P. B. Boeck, Balthazar M. Keilhau, Nicolay B. Møller and Otto Tank). These four friends were traveling to Berlin and to the Alps to study geology. Abel wanted to follow them to Copenhagen and from there make his way to Göttingen. The terms of his scholarship stipulated that he was to visit Gauss in Göttingen and then continue to Paris. However, when he got as far as Copenhagen he changed his plans. He wanted to follow his friends to Berlin instead, intending to visit Göttingen and Paris afterwards.
On the way, he visited the astronomer Heinrich Christian Schumacher in Altona, now a district of Hamburg. He then spent four months in Berlin, where he became well acquainted with August Leopold Crelle, who was then about to publish his mathematical journal, "Journal für die reine und angewandte Mathematik". This project was warmly encouraged by Abel, who contributed much to the success of the venture. Abel contributed seven articles to it in its first year.
From Berlin, Abel followed his friends onward toward the Alps. He went to Leipzig and Freiberg to visit Georg Amadeus Carl Friedrich Naumann and his brother, the mathematician August Naumann. In Freiberg, Abel did research in the theory of functions, particularly elliptic, hyperelliptic, and a new class now known as abelian functions.
From Freiberg they went on to Dresden, Prague, Vienna, Trieste, Venice, Verona, Bolzano, Innsbruck, Luzern and Basel. From July 1826 Abel traveled on his own from Basel to Paris. Abel had sent most of his work to Berlin to be published in Crelle's Journal, but he had saved what he regarded as his most important work for the French Academy of Sciences, a theorem on addition of algebraic differentials. With the help of a painter, Johan Gørbitz, he found an apartment in Paris and continued his work on the theorem. He finished in October 1826, and submitted it to the academy. It was to be reviewed by Augustin-Louis Cauchy. Abel's work was scarcely known in Paris, and his modesty restrained him from proclaiming his research. The theorem was put aside and forgotten until his death.
Abel's limited finances finally compelled him to abandon his tour in January 1827. He returned to Berlin and was offered a position as editor of Crelle's Journal, but declined. By May 1827 he was back in Norway. His tour abroad was viewed as a failure: he had not visited Gauss in Göttingen and he had not published anything in Paris. His scholarship was therefore not renewed, and he had to take up a private loan from Norges Bank of 200 speciedaler. He never repaid this loan. He also started tutoring. He continued to send most of his work to Crelle's Journal. But in mid-1828, in rivalry with Carl Jacobi, he published an important work on elliptic functions in "Astronomische Nachrichten" in Altona.
While in Paris, Abel contracted tuberculosis. At Christmas 1828, he traveled by sled to Froland, Norway to visit his fiancée. He became seriously ill on the journey; and, although a temporary improvement allowed the couple to enjoy the holiday together, he died relatively soon after on 6 April 1829, just two days before a letter arrived from August Crelle. Crelle had been searching for a new job for Abel in Berlin and had actually managed to have him appointed as a Professor at the University of Berlin. Crelle wrote to Abel to tell him, but the good news came too late.
Abel showed that there is no general algebraic solution for the roots of a quintic equation, or any general polynomial equation of degree greater than four, in terms of explicit algebraic operations. To do this, he invented (independently of Galois) a branch of mathematics known as group theory, which is invaluable not only in many areas of mathematics, but for much of physics as well. Abel sent a paper on the unsolvability of the quintic equation to Carl Friedrich Gauss, who proceeded to discard without a glance what he believed to be the worthless work of a crank.
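In modern notation, the Abel–Ruffini theorem that this work established can be stated as follows: for the general fifth-degree equation

$$a_5 x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0, \qquad a_5 \neq 0,$$

there is no formula for the roots built from the coefficients using only addition, subtraction, multiplication, division and the extraction of roots. Particular quintics may still be solvable in radicals; the theorem rules out only a single formula covering the general case.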
As a 16-year-old, Abel gave a rigorous proof of the binomial theorem valid for all numbers, extending Euler's result which had held only for rationals. Abel wrote a fundamental work on the theory of elliptic integrals, containing the foundations of the theory of elliptic functions.
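The statement in question is the binomial series: for an arbitrary exponent m,

$$(1+x)^m = \sum_{k=0}^{\infty} \binom{m}{k} x^k = 1 + mx + \frac{m(m-1)}{2!}x^2 + \cdots, \qquad |x| < 1,$$

where Abel supplied a rigorous convergence proof valid beyond the positive-integer case.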
While travelling to Paris he published a paper revealing the double periodicity of elliptic functions, which Adrien-Marie Legendre later described to Augustin-Louis Cauchy as "a monument more lasting than bronze" (borrowing a famous phrase from the Roman poet Horace). The paper was, however, misplaced by Cauchy.
Abel famously said of Carl Friedrich Gauss's writing style: "He is like the fox, who effaces his tracks in the sand with his tail." Gauss replied: "No self-respecting architect leaves the scaffolding in place after completing his building."
Under Abel's guidance, the prevailing obscurities of analysis began to be cleared, new fields were entered upon and the study of functions so advanced as to provide mathematicians with numerous ramifications along which progress could be made. His works, the greater part of which originally appeared in "Crelle's Journal", were edited by Bernt Michael Holmboe and published in 1839 by the Norwegian government, and a more complete edition by Ludwig Sylow and Sophus Lie was published in 1881. The adjective "abelian", derived from his name, has become so commonplace in mathematical writing that it is conventionally spelled with a lower-case initial "a" (e.g., abelian group, abelian category, and abelian variety).
On 6 April 1929, four Norwegian stamps were issued for the centenary of Abel's death. His portrait appears on the 500-kroner banknote (version V) issued during 1978–1985. On 5 June 2002, four Norwegian stamps were issued in honour of Abel, two months before the bicentenary of his birth. There is also a 20-kroner coin issued by Norway in his honour. A statue of Abel stands in Oslo, and the crater Abel on the Moon is named after him. In 2002, the Abel Prize was established in his memory.
Mathematician Felix Klein also wrote admiringly about Abel.
Nationality
Nationality is a legal relationship between an individual person and a state. Nationality affords the state jurisdiction over the person and affords the person the protection of the state. What these rights and duties are varies from state to state. This relationship generally enables intervention by a State to provide help and protection to its nationals when they are harmed by other States.
By custom and international conventions, it is the right of each state to determine who its nationals are. Such determinations are part of nationality law. In some cases, determinations of nationality are also governed by public international law—for example, by treaties on statelessness and the European Convention on Nationality.
Nationality differs technically and legally from citizenship, which is a different legal relationship between a person and a country. The noun "national" can include both citizens and non-citizens. The most common distinguishing feature of citizenship is that citizens have the right to participate in the political life of the state, such as by voting or standing for election. However, in most modern countries all nationals are citizens of the state, and full citizens are always nationals of the state.
In older texts the word "nationality", rather than "ethnicity", is often used to refer to an ethnic group (a group of people who share a common ethnic identity, language, culture, lineage, history, and so forth). This older meaning of "nationality" is not defined by political borders or passport ownership and includes nations that lack an independent state (such as the Arameans, Scots, Welsh, English, Andalusians, Basques, Catalans, Kurds, Kabyles, Baloch, Berbers, Bosniaks, Kashmiris,
Palestinians, Sindhi, Tamils, Hmong, Inuit, Copts, Māori, Sikhs, Wakhi, Székelys, Xhosas and Zulus).
Individuals may also be considered nationals of groups with autonomous status that have ceded some power to a larger government.
Nationality is the status that allows a nation to grant rights to the subject and to impose obligations upon the subject. In most cases, no rights or obligations are automatically attached to this status, although the status is a necessary precondition for any rights and obligations created by the state.
In European law, nationality is the status or relationship that gives a nation the right to protect a person from other nations. Diplomatic and consular protection are dependent upon this relationship between the person and the state. A person's status as being the national of a country is used to resolve the conflict of laws.
Within the broad limits imposed by a few treaties and international law, states may freely define who is and is not a national. However, since the "Nottebohm" case, other states are only required to respect a state's claim to protect an alleged national if the nationality is based on a genuine social bond. In the case of dual nationality, states may determine the most effective nationality for a person, in order to determine which state's laws are most relevant. There are also limits on removing a person's status as a national. Article 15 of the Universal Declaration of Human Rights states that "Everyone has the right to a nationality" and that "No one shall be arbitrarily deprived of his nationality nor denied the right to change his nationality."
Nationals normally have the right to enter or return to the country they belong to. Passports are issued to nationals of a state, rather than only to citizens, because the passport is the travel document used to enter the country. However, nationals may not have the right of abode (the right to live permanently) in the countries that grant them passports.
Conceptually, citizenship is focused on the internal political life of the state and nationality is a matter of international dealings.
In the modern era, the concept of full citizenship encompasses not only active political rights, but full civil rights and social rights. Nationality is a necessary but not sufficient condition to exercise full political rights within a state or other polity. Nationality is required for full citizenship, and some people have no nationality in international law. A person who is denied full citizenship or nationality is commonly called a stateless person.
Historically, the most significant difference between a national and a citizen is that the citizen has the right to vote for elected officials, and to be elected. This distinction between full citizenship and other, lesser relationships goes back to antiquity. Until the 19th and 20th centuries, it was typical for only a small percentage of people who belonged to a city or state to be full citizens. In the past, most people were excluded from citizenship on the basis of sex, socioeconomic class, ethnicity, religion, and other factors. However, they held a legal relationship with their government akin to the modern concept of nationality.
United States nationality law defines some persons born in some U.S. outlying possessions as U.S. nationals but not citizens. British nationality law defines six classes of British national, among which "British citizen" is one class (having the right of abode in the United Kingdom, along with some "British subjects"). Similarly, in the Republic of China, commonly known as Taiwan, the status of national without household registration applies to people who have Republic of China nationality, but do not have an automatic entitlement to enter or reside in the Taiwan Area, and do not qualify for civic rights and duties there. Under the nationality laws of Mexico, Colombia, and some other Latin American countries, nationals do not become citizens until they turn 18. Israeli law distinguishes nationality from citizenship. The nationality of an Arab citizen of Israel is "Arab", not Israeli, while the nationality of a Jewish citizen is "Jewish" not Israeli.
Nationality is sometimes used simply as an alternative word for ethnicity or national origin, just as some people assume that citizenship and nationality are identical. In some countries, the cognate word for "nationality" in local language may be understood as a synonym of ethnicity or as an identifier of cultural and family-based self-determination, rather than on relations with a state or current government. For example, some Kurds say that they have Kurdish nationality, even though there is no Kurdish sovereign state at this time in history.
In the context of former Soviet Union and former Socialist Federal Republic of Yugoslavia, "nationality" is often used as translation of the Russian "nacional'nost' " and Serbo-Croatian "narodnost", which were the terms used in those countries for ethnic groups and local affiliations within the member states of the federation. In the Soviet Union, more than 100 such groups were formally recognized. Membership in these groups was identified on Soviet internal passports, and recorded in censuses in both the USSR and Yugoslavia. In the early years of the Soviet Union's existence, ethnicity was usually determined by the person's native language, and sometimes through religion or cultural factors, such as clothing. Children born after the revolution were categorized according to their parents' recorded ethnicities. Many of these ethnic groups are still recognized by modern Russia and other countries.
Similarly, the term "nationalities of China" refers to ethnic and cultural groups in China. Spain is one nation, made up of nationalities, which are not politically recognized as nations (states) but can be considered smaller nations within the Spanish nation. Spanish law recognizes the autonomous communities of Andalusia, Aragon, the Balearic Islands, the Canary Islands, Catalonia, Valencia, Galicia and the Basque Country as "nationalities" ("nacionalidades").
In 2013, the Supreme Court of Israel unanimously affirmed the position that "citizenship" (e.g. Israeli) is separate from "le'om" ("nationality" or "ethnic affiliation"; e.g. Jewish, Arab, Druze, Circassian), and that the existence of a unique "Israeli" "le'om" has not been proven. Israel recognizes more than 130 "le'umim" in total.
National identity is a person's subjective sense of belonging to one state or to one nation. A person may be a national of a state, in the sense of being its citizen, without subjectively or emotionally feeling a part of that state; for example, many migrants in Europe identify with their ancestral and/or religious background rather than with the state of which they are citizens. Conversely, a person may feel that he or she belongs to one state without having any legal relationship to it. For example, children who were brought to the U.S. illegally when quite young and grew up there with little contact with their native country and its culture often have a national identity of feeling American, despite legally being nationals of a different country.
Dual nationality is when a single person has a formal relationship with two separate, sovereign states. This might occur, for example, if a person's parents are nationals of separate countries, and the mother's country claims all offspring of its female nationals as its own nationals, while the father's country claims all offspring of its male nationals.
Nationality, with its historical origins in allegiance to a sovereign monarch, was seen originally as a permanent, inherent, unchangeable condition, and later, when a change of allegiance was permitted, as a strictly exclusive relationship, so that becoming a national of one state required rejecting the previous state.
Dual nationality was considered a problem that caused conflict between states and sometimes imposed mutually exclusive requirements on affected people, such as simultaneously serving in two countries' military forces. Through the middle of the 20th century, many international agreements were focused on reducing the possibility of dual nationality. Since then, many accords recognizing and regulating dual nationality have been formed.
Statelessness is the condition in which an individual has no formal or protective relationship with any state. This might occur, for example, if a person's parents are nationals of separate countries, and the mother's country rejects all offspring of mothers married to foreign fathers, but the father's country rejects all offspring born to foreign mothers. Although this person may have an emotional national identity, he or she may not legally be the national of any state.
Another stateless situation arises when a person holds a travel document (passport) which recognizes the bearer as having the nationality of a "state" which is not internationally recognized, has no entry in the International Organization for Standardization's country list, is not a member of the United Nations, etc. In the current era, persons native to Taiwan who hold Republic of China passports are one example.
Some countries (such as Kuwait, the UAE and Saudi Arabia) can also revoke a person's citizenship, typically on grounds of fraud or security concerns. Statelessness can also arise when a child is abandoned at birth and the parents' whereabouts are unknown.
The following list includes states in which parents are able to confer nationality on their children or spouses.
Nereid
In Greek mythology, the Nereids ("Nereides", sg. "Nereis") are sea nymphs (female spirits of sea waters), the 50 daughters of Nereus and Doris, and sisters to their brother Nerites. They often accompany Poseidon, the god of the sea, and can be friendly and helpful to sailors (such as the Argonauts in their search for the Golden Fleece).
Nereids are particularly associated with the Aegean Sea, where they dwelt with their father Nereus in the depths within a golden palace. The most notable of them are Thetis, wife of Peleus and mother of Achilles; Amphitrite, wife of Poseidon and mother of Triton; and Galatea, the vain love interest of the Cyclops Polyphemus. They symbolized everything that is beautiful and kind about the sea. Their melodious voices sang as they danced around their father. They are represented as very beautiful girls, crowned with branches of red coral and dressed in white silk robes trimmed with gold, but who went barefoot. They were part of Poseidon's entourage and carried his trident.
In Homer's "Iliad" XVIII, when Thetis cries out in sympathy for the grief of Achilles for the slain Patroclus, her sisters appear. The Nereid Opis is mentioned in Virgil's "Aeneid". She is called by the goddess Diana to avenge the death of the Amazon-like female warrior Camilla. Diana gives Opis magical weapons for revenge on Camilla's killer, the Etruscan Arruns. Opis sees and laments Camilla's death, and kills Arruns with an arrow in revenge as directed by Diana.
In modern Greek folklore, the term "nereid" ("neráida") has come to be used for all nymphs, fairies, or mermaids, not merely nymphs of the sea.
Nereid, a moon of the planet Neptune, is named after the Nereids. Nereid Lake in Antarctica is named after the nymphs.
This list is collated from four sources: Homer's "Iliad", Hesiod's "Theogony", the "Bibliotheca" of Pseudo-Apollodorus and the "Fabulae" of Hyginus. Because of this, the total number of names goes beyond fifty.
Netball
Netball is a ball sport played by two teams of seven players. Netball is most popular in many Commonwealth nations, specifically in schools, and is predominantly played by women. According to the INF, netball is played by more than 20 million people in more than 80 countries. Major domestic leagues in the sport include the Netball Superleague in Great Britain, Suncorp Super Netball in Australia and the ANZ Premiership in New Zealand. Four major competitions take place internationally: the quadrennial World Netball Championships, the Commonwealth Games, and the yearly Quad Series and Fast5 Series. In 1995, netball became an International Olympic Committee recognised sport, but it has not been played at the Olympics.
Games are played on a rectangular court with raised goal rings at each end. Each team attempts to score goals by passing a ball down the court and shooting it through its goal ring. Players are assigned specific positions, which define their roles within the team and restrict their movement to certain areas of the court. During general play, a player with the ball can hold on to it for only three seconds before shooting for a goal or passing to another player. The winning team is the one that scores the most goals. Netball games are 60 minutes long. Variations have been developed to increase the game's pace and appeal to a wider audience.
Its development, derived from early versions of basketball, began in England in the 1890s. By 1960, international playing rules had been standardised for the game, and the International Federation of Netball and Women's Basketball (later renamed the International Netball Federation (INF)) was formed. As of 2019, the INF comprises more than 70 national teams organized into five global regions.
Netball emerged from early versions of basketball and evolved into its own sport as the number of women participating in sports increased. Basketball was invented in 1891 by James Naismith in the United States. The game was initially played indoors between two teams of nine players, using an association football that was thrown into closed-end peach baskets. Naismith's game spread quickly across the United States and variations of the rules soon emerged. Physical education instructor Senda Berenson developed modified rules for women in 1892; these eventually gave rise to women's basketball. Around this time separate intercollegiate rules were developed for men and women. The various basketball rules converged into a universal set in the United States.
Martina Bergman-Österberg introduced a version of basketball in 1893 to her female students at the Physical Training College in Hampstead, London. The rules of the game were modified at the college over several years: the game moved outdoors and was played on grass; the baskets were replaced by rings that had nets; and in 1897 and 1899, rules from women's basketball in the United States were incorporated. Österberg's new sport acquired the name "net ball". The first codified rules of netball were published in 1901 by the Ling Association, later the Physical Education Association of the United Kingdom. From England, netball spread to other countries in the British Empire. Variations of the rules and even names for the sport arose in different areas: "women's (outdoor) basketball" arrived in Australia around 1900 and in New Zealand from 1906, while "netball" was being played in Jamaican schools by 1909.
From the start, it was considered socially appropriate for women to play netball; netball's restricted movement appealed to contemporary notions of women's participation in sports, and the sport was distinct from potential rival male sports. Netball became a popular women's sport in countries where it was introduced and spread rapidly through school systems. School leagues and domestic competitions emerged during the first half of the 20th century, and in 1924 the first national governing body was established in New Zealand. International competition was initially hampered by a lack of funds and varying rules in different countries. Australia hosted New Zealand in the first international game of netball in Melbourne on 20 August 1938; Australia won 40–11. Efforts began in 1957 to standardise netball rules globally: by 1960 international playing rules had been standardised, and the International Federation of Netball and Women's Basketball, later the International Netball Federation (INF), was formed to administer the sport worldwide.
Representatives from England, Australia, New Zealand, South Africa, and the West Indies were part of a 1960 meeting in Sri Lanka that standardised the rules for the game. The game spread to other African countries in the 1970s. South Africa was prohibited from competing internationally from 1969 to 1994 due to apartheid. In the United States, netball's popularity also increased during the 1970s, particularly in the New York area, and the United States of America Netball Association was created in 1992. The game also became popular in the Pacific Island nations of the Cook Islands, Fiji and Samoa during the 1970s. Netball Singapore was created in 1962, and the Malaysian Netball Association was created in 1978.
In Australia, the term "women's basketball" was used to refer to both netball and basketball. During the 1950s and 1960s, a movement arose to change the Australian name of the game from "women's basketball" to "netball" in order to avoid confusion between the two sports. The Australian Basketball Union offered to pay the costs involved to alter the name, but the netball organisation rejected the change. In 1970, the Council of the All Australia Netball Association officially changed the name to "netball" in Australia.
In 1963, the first international tournament was held in Eastbourne, England. Originally called the World Tournament, it later became known as the World Netball Championships. Following the first tournament, one of the organisers, Miss R. Harris, declared,
The World Netball Championships have been held every four years since then. The World Youth Netball Championships started in Canberra in 1988, and have been held roughly every four years since. In 1995, the International Olympic Committee recognized the International Federation of Netball Associations. Three years later netball debuted at the 1998 Commonwealth Games in Kuala Lumpur. Other international competitions also emerged in the late 20th century, including the Nations Cup and the Asian Netball Championship.
As of 2006, the IFNA recognises only women's netball. Men's netball teams exist in some areas but attract less attention from sponsors and spectators. Men's netball started to become popular in Australia during the 1980s, and the first men's championship was held in 1985. In 2004, New Zealand and Fiji sent teams to compete in the Australian Mixed and Men's National Championships. By 2006, mixed netball teams in Australia had as many male participants as rugby union. Other countries with men's national teams include Canada, Fiji, Jamaica, Kenya, Pakistan and the United Arab Emirates. Unlike women's netball at elite and national levels, men's and mixed gender teams are largely self-funded.
An all-transgender netball team from Indonesia competed at the 1994 Gay Games in New York City. The team had been the Indonesian national champions. At the 2000 Gay Games VI in Sydney, netball and volleyball were the two sports with the highest rates of transgender athletes participating. There were eight teams of indigenous players, with seven identifying as transgender. They came from places like Palm Island in northern Queensland, Samoa, Tonga and Papua New Guinea. Teams with transgender players were allowed to participate in several divisions including men's, mixed and transgender; they were not allowed to compete against the cisgender women's teams.
The objective of a game is to score more goals than the opposition. Goals are scored when a team member positioned in the attacking shooting circle shoots the ball through the goal ring. The goal rings are 380 mm (15 in) in diameter and sit atop 3.05 m (10 ft) high goal posts that have no backboards. A 4.9 m (16 ft) radius semi-circular "shooting circle" is an area at each end of the court. The goal posts are located within the shooting circle. Each team defends one shooting circle and attacks the other. The netball court is 30.5 m (100 ft) long, 15.25 m (50 ft) wide, and divided lengthwise into thirds. The ball is usually made of leather or rubber, measures 680–710 mm (27–28 in) in circumference (about 220 mm in diameter), and weighs 397–454 g (14–16 oz). A normal game consists of four 15-minute quarters and can be played outdoors or in a covered stadium.
Each team is allowed seven players on the court. Each player is assigned a specific position, which limits their movement to a certain area of the court. A "bib" worn by each player contains a one- or two-letter abbreviation indicating this position. Only two positions are permitted in the attacking shooting circle, and only those players can therefore shoot for a goal. Similarly, only two positions are permitted in the defensive shooting circle, where they try to prevent the opposition from shooting goals. Other players are restricted to two thirds of the court, with the exception of the Centre, who may move anywhere on the court except the shooting circles.
At the beginning of every quarter and after a goal has been scored, play starts with a player in the centre position passing the ball from the centre of the court. These "centre passes" alternate between the teams, regardless of which team scored the last goal. When the umpire blows the whistle to restart play, four players from each team can move into the centre third to receive the pass. The centre pass must be caught or touched in the centre third. The ball is then moved up and down the court through passing and must be touched by a player in each adjacent third of the court. Players can hold the ball for only three seconds at any time, and must release it before the foot they were standing on when they caught it touches the ground again. Contact between players is only permitted if it does not impede an opponent or the general play. When defending a pass or shot, players must be at least 0.9 m (3 ft) away from the player with the ball. If illegal contact is made, the player who made contact cannot participate in play until the player taking the penalty has passed or shot the ball. If the ball is held in two hands and either dropped or a shot at goal is missed, the same player cannot be the first to touch it unless it first rebounds off the goal.
Indoor netball is a variation of netball, played exclusively indoors, in which the playing court is often surrounded on each side and overhead by a net. The net prevents the ball from leaving the court, permitting faster play by reducing playing stoppages.
Different forms of indoor netball exist. In a seven-per-side version called "action netball", seven players per team play with rules similar to netball. However, a game is split into 15-minute halves with a three-minute break in between. This version is played in Australia, New Zealand, South Africa and England.
A six-per-side version of the sport is also played in New Zealand. Two Centres per team can play in the whole court except the shooting circles; the remaining attacking and defending players are each restricted to one half of the court, including the shooting circles. The attacking and Centre players may shoot from outside the shooting circle for a two-point goal.
A five-per-side game is also common in indoor netball. Players can move throughout the court, with the exception of the shooting circles, which are restricted to certain attacking or defending players.
Fast5 (originally called Fastnet) is a variation on the rules of netball designed to make games faster and more television-friendly. The World Netball Series promotes it to raise the sport's profile and attract more spectators and greater sponsorship. The game is much shorter, with each quarter lasting only six minutes and only a two-minute break between quarters. The coaches can give instructions from the sideline during play, and unlimited substitutions are allowed. Like six-per-side indoor netball, attacking players may shoot two-point goals from outside the shooting circle. Each team can separately nominate one "power play" quarter, in which each goal scored by that team is worth double points and the centre pass is taken by the team that conceded the goal.
Netball has been adapted in several ways to meet children's needs. The rules for children are similar to those for adults, but various aspects of the game (such as the length of each quarter, goal height, and ball size) are modified.
Fun Net is a version of netball developed by Netball Australia for five- to seven-year-olds. It aims to improve basic netball skills using games and activities. The Fun Net program runs for 8–16 weeks. There are no winners or losers. The goal posts are lower than the standard height, and a smaller ball is used.
Netball Australia also runs a modified game called Netta aimed at 8- to 11-year-olds. The goal height and ball size are the same as for adults, but players rotate positions during the game, permitting each player to play each position. Netta was created to develop passing and catching skills. Its rules permit six seconds between catching and passing the ball, instead of the three seconds permitted in the adult game. Most players under 11 play this version at netball clubs.
A version called High Five Netball is promoted by the All England Netball Association. It is aimed at 9- to 11-year-old girls and includes only five positions. The players swap positions during the game. When a player is not on the court, she is expected to help the game in some other way, such as being the timekeeper or scorekeeper. High Five Netball has four six-minute quarters.
The recognised international governing body of netball is the International Federation of Netball Associations (IFNA), based in Manchester, England. Founded in 1960, the organisation was initially called the International Federation of Netball and Women's Basketball. The IFNA is responsible for compiling world rankings for national teams, maintaining the rules for netball and organising several major international competitions.
As of July 2019, the IFNA has 53 full and 19 associate national members in five regions. Each region has an IFNA regional federation.
The IFNA is affiliated with the General Association of International Sports Federations, the International World Games Association and the Association of IOC Recognised International Sports Federations. It is also a signatory to the World Anti-Doping Code.
Netball is a popular participant sport in countries of the Commonwealth of Nations. Non-Commonwealth entities with full IFNA membership include Switzerland, Taiwan, Thailand, Argentina, Bermuda, the Cayman Islands and the United States, along with former Commonwealth members Zimbabwe, Ireland and Hong Kong. According to the IFNA, over 20 million people play netball in more than 80 countries. International tournaments are held among countries in each of the five IFNA regions, either annually or every four years. School leagues and national club competitions have been organised in England, Australia, New Zealand and Jamaica since the early twentieth century. Franchise-based netball leagues did not emerge until the late 1990s. These competitions sought to increase the profile of the sport in their respective countries. Despite widespread local interest, participation was largely amateur.
Netball was first included in the 1998 Commonwealth Games and has been a fixture ever since; it is currently one of the "core" sports that must be contested at each edition of the Games.
The major international tournament in Africa is organised by the Confederation of African Netball Associations, which invites teams from Botswana, Namibia, Zambia, Malawi, South Africa, Lesotho, Swaziland, Zimbabwe and the Seychelles to take part. The tournament is hosted by a country within the region; senior and under 21 teams compete. The tournament has served as a qualifier for the World Championships. South Africa launched a new domestic competition in 2011 called Netball Grand Series. It features eight regional teams from South Africa and is aimed at increasing the amount of playing time for players. It runs for 17 weeks and replaces the National Netball League, which was played over only two weeks. According to Proteas captain Elsje Jordaan, it was hoped that the competition would create an opportunity for players to become professional.
The American Federation of Netball Associations (AFNA) hosts two tournaments each year: the Caribbean Netball Association (CNA) Under 16 Championship and the AFNA Senior Championship. The CNA championship involves two divisions of teams from the Caribbean islands. In 2010 five teams competed in two rounds of round robin matches in the Championship Division, while four teams competed in the Developmental Division. Jamaica, which has lost only once in the tournament, decided not to play the 2011 tournament. The AFNA Senior Championship includes Canada and the US along with the Caribbean nations. The tournament serves as a qualifier for the World Championship. Jamaica, with its high ranking, does not have to qualify; this leaves two spots to the other teams in the tournament.
The Asian Netball Championship is held every four years. The seventh Asian games were held in 2009 and featured Singapore, Thailand, Maldives, Taiwan, Malaysia, Sri Lanka, Hong Kong, India and Pakistan. There is also an Asian Youth Netball Championship for girls under 21 years of age, the seventh of which was held in 2010.
The major netball competition in Europe is the Netball Superleague, which features nine teams from England, Wales and Scotland. The league was created in 2005. Matches are broadcast on Sky Sports.
Netball has been featured at the Pacific Games, a multi-sport event with participation from 22 countries from around the South Pacific. The event is held every four years and has 12 required sports; the host country chooses the other four. Netball is not a required sport and has missed selection, particularly when former French or American territories host the games.
The ANZ Championship was a Trans-Tasman competition held between 2008 and 2016 that was broadcast on television in both New Zealand and Australia. It was contested among ten teams from Australia and New Zealand. It began in April 2008, succeeding Australia's Commonwealth Bank Trophy and New Zealand's National Bank Cup as the pre-eminent netball league in those countries. The competition was held annually between April and July, consisting of 69 matches played over 17 weeks. The ANZ Championship saw netball become a semi-professional sport in both countries, with increased media coverage and player salaries. The competition was replaced by new leagues in 2017, the Suncorp Super Netball (Australia) and ANZ Premiership (New Zealand).
There are four major international netball competitions; the Netball World Cup, Netball at the Commonwealth Games, Netball Quad Series and Fast5 Netball World Series.
Netball's most important competition is the Netball World Cup (formerly the World Netball Championships), held every four years. It was first held in 1963 at the Chelsea College of Physical Education at Eastbourne, England, with eleven nations competing. Since its inception the competition has been dominated primarily by the Australian and New Zealand teams, which hold ten and four titles, respectively. Trinidad and Tobago is the only other team to win a championship title. That title, won in 1979, was shared with New Zealand and Australia; all three teams finished with equal points at the end of the round robin, and there were no finals.
The Fast5 Series is a competition among the top six national netball teams, as ranked by the INF World Rankings. It is organised by the INF in conjunction with the national governing bodies of the six competing nations, UK Sport, and the host city's local council. The All England Netball Association covers air travel, accommodation, food and local travel expenses for all teams, while the respective netball governing bodies cover player allowances. It is held over three days, with each team playing each other once during the first two days in a round-robin format. The four highest-scoring teams advance to the semi-finals; the winners face each other in the Grand Final. The competition features modified fastnet rules and has been likened to Twenty20 cricket and rugby sevens. A new format featuring shorter matches with modified rules was designed to make the game more appealing to spectators and television audiences. The World Netball Series was held annually in England from 2009 to 2011.
Netball gained Olympic recognition in 1995 after 20 years of lobbying. Although it has never been played at the Summer Olympics, politicians and administrators have been campaigning to have it included in the near future. Its absence from the Olympics has been seen by the netball community as a hindrance in the global growth of the game by limiting access to media attention and funding sources. Some funding sources became available with recognition in 1995, including the International Olympic Committee, national Olympic committees, national sport organisations, and state and federal governments.
One study found that over 14 weeks of play, about 5% of players sustained an injury. The most common injuries are to the ankle (usually lateral ligament strain and, less often, an ankle fracture). Knee injuries were less common and included anterior cruciate ligament injuries. The main cause of these injuries is believed to be incorrect landing technique. One study identified failure to warm up as a risk factor. Hypermobility (having a range of motion beyond normal limits) has been associated with injuries in one small study. Higher grade players, in both senior and junior competitions, are more susceptible to injuries than lower grade players, due to the high intensity and rapid pace of the game.
In October 2005, Australian captain Liz Ellis tore her ACL in a match against New Zealand. This injury ruled her out of the 2006 Melbourne Commonwealth Games. In October 2014, Casey Kopua ruptured the patella tendon in her left knee, which resulted in her missing up to six months of netball.
Njörðr
In Norse mythology, Njörðr is a god among the Vanir. Njörðr, father of the deities Freyr and Freyja by his unnamed sister, was in an ill-fated marriage with the goddess Skaði; he lives in Nóatún and is associated with the sea, seafaring, wind, fishing, wealth, and crop fertility.
Njörðr is attested in the "Poetic Edda", compiled in the 13th century from earlier traditional sources; in the "Prose Edda", written in the 13th century by Snorri Sturluson; in euhemerized form as a beloved mythological early king of Sweden in "Heimskringla", also written by Snorri Sturluson in the 13th century; as one of three gods invoked in the 14th-century "Hauksbók" ring oath; and in numerous Scandinavian place names. Veneration of Njörðr survived into 18th- or 19th-century Norwegian folk practice, in which the god is recorded as Njor and thanked for a bountiful catch of fish.
Njörðr has been the subject of considerable scholarly discourse and theory, often connecting him with the figure of the much earlier attested Germanic goddess Nerthus and the hero Hadingus, and theorizing about his formerly more prominent place in Norse paganism, suggested by the appearance of his name in numerous place names. "Njörðr" is sometimes anglicized as Njord, Njoerd, or Njorth.
The name "Njörðr" corresponds to that of the older Germanic fertility goddess "Nerthus", and both derive from the Proto-Germanic "*Nerþuz". The original meaning of the name is contested, but it may be related to the Irish word "nert" which means "force" and "power". It has been suggested that the change of sex from the female "Nerthus" to the male "Njörðr" is due to the fact that feminine nouns with u-stems disappeared early in Germanic language while the masculine nouns with u-stems prevailed. However, other scholars hold the change to be based not on grammatical gender but on the evolution of religious beliefs; that *Nerþuz and Njörðr appear as different genders because they are to be considered separate beings. The name "Njörðr" may be related to the name of the Norse goddess Njörun.
Njörðr's name appears in various place names in Scandinavia, such as "Nærdhæwi" (now Nalavi, Närke), "Njærdhavi" (now Mjärdevi, Linköping; both using the religious term vé), "Nærdhælunda" (now Närlunda, Helsingborg), "Nierdhatunum" (now Närtuna, Uppland) in Sweden, Njarðvík in southwest Iceland, Njarðarlög and Njarðey (now Nærøy) in Norway. Njörðr's name also appears in a word for sponge: "Njarðarvöttr" (Old Norse "Njörðr's glove"). Additionally, in Old Icelandic translations of Classical mythology the Roman god Saturn's name is glossed as "Njörðr."
Njörðr is described as a future survivor of Ragnarök in stanza 39 of the poem "Vafþrúðnismál". In the poem, the god Odin, disguised as "Gagnráðr", faces off with the wise jötunn Vafþrúðnir in a battle of wits. While stating that Vafþrúðnir knows all the fates of the gods, Odin asks him from where Njörðr came to the sons of the Æsir, noting that Njörðr rules over a great many temples and hörgrs (a type of Germanic altar), and adding that Njörðr was not raised among the Æsir. In response, Vafþrúðnir says:
In stanza 16 of the poem "Grímnismál", Njörðr is described as having a hall in Nóatún made for himself. The stanza describes Njörðr as a "prince of men" who is "lacking in malice" and "rules over the high-timbered temple". In stanza 43, the creation of the god Freyr's ship Skíðblaðnir is recounted, and Freyr is cited as the son of Njörðr. In the prose introduction to the poem "Skírnismál", Freyr is mentioned as the son of Njörðr, and stanza 2 cites the goddess Skaði as the mother of Freyr. Further in the poem, Njörðr is again mentioned as the father of Freyr in stanzas 38, 39, and 41.
In the late flyting poem "Lokasenna", an exchange between Njörðr and Loki occurs in stanzas 33, 34, 35, and 36. After Loki has an exchange with the goddess Freyja, in stanza 33 Njörðr states:
Loki responds in stanza 34, stating that "from here you were sent east as hostage to the gods" (a reference to the Æsir–Vanir War) and that "the daughters of Hymir used you as a pisspot, and pissed in your mouth." In stanza 35, Njörðr responds that:
Loki tells Njörðr to "stop" and "keep some moderation," and that he "won't keep it a secret any longer" that Njörðr's son Freyr was produced with his unnamed sister, "though you'd expect him to be worse than he is." The god Tyr then interjects and the flyting continues in turn.
Njörðr is referenced in stanza 22 of the poem "Þrymskviða", where he is referred to as the father of the goddess Freyja. In the poem, the jötunn Þrymr mistakenly thinks that he will be receiving the goddess Freyja as his bride, and while telling his fellow jötunn to spread straw on the benches in preparation for the arrival of Freyja, he refers to her as the daughter of Njörðr of Nóatún. Towards the end of the poem "Sólarljóð", Njörðr is cited as having nine daughters. Two of the names of these daughters are given; the eldest Ráðveig and the youngest Kreppvör.
Njörðr is also mentioned in the "Prose Edda" books "Gylfaginning" and "Skáldskaparmál".
In the "Prose Edda", Njörðr is introduced in chapter 23 of the book "Gylfaginning". In this chapter, Njörðr is described by the enthroned figure of High as living in the heavens at Nóatún, but also as ruling over the movement of the winds, having the ability to calm both sea and fire, and that he is to be invoked in seafaring and fishing. High continues that Njörðr is very wealthy and prosperous, and that he can also grant wealth in land and valuables to those who request his aid. Njörðr originates from Vanaheimr and is devoid of Æsir stock, and he is described as having been traded with Hœnir in hostage exchange with between the Æsir and Vanir.
High further states that Njörðr's wife is Skaði, that she is the daughter of the jötunn Þjazi, and recounts a tale involving the two. High recalls that Skaði wanted to live in the home once owned by her father called Þrymheimr ("Thunder Home"). However, Njörðr wanted to live nearer to the sea. Subsequently, the two made an agreement that they would spend nine nights in Þrymheimr and then the next three nights in Nóatún (or nine winters in Þrymheimr and another nine in Nóatún according to the "Codex Regius" manuscript). However, when Njörðr returned from the mountains to Nóatún, he says:
Skaði then responds:
High states that afterward Skaði went back up to the mountains to Þrymheimr, and recites a stanza in which Skaði skis around, hunts animals with a bow, and lives in her father's old house. Chapter 24 begins, which describes Njörðr as the father of two beautiful and powerful children: Freyr and Freyja. In chapter 37, after Freyr has spotted the beautiful jötunn Gerðr, he becomes overcome with sorrow, and refuses to sleep, drink, or talk. Njörðr then sends for Skírnir to find out whom Freyr seems to be so angry at, and, not looking forward to being treated roughly, Skírnir reluctantly goes to Freyr.
Njörðr is introduced in "Skáldskaparmál" within a list of 12 Æsir attending a banquet held for Ægir. Further in "Skáldskaparmál", the skaldic god Bragi recounds the death of Skaði's father Þjazi by the Æsir. As one of the three acts of reparation performed by the Æsir for Þjazi's death, Skaði was allowed by the Æsir to choose a husband from amongst them, but given the stipulation that she may not see any part of them but their feet when making the selection. Expecting to choose the god Baldr by the beauty of the feet she selects, Skaði instead finds that she has picked Njörðr.
In chapter 6, a list of kennings is provided for Njörðr: "God of chariots," "Descendant of Vanir," "a Van," father of Freyr and Freyja, and "the giving God." This is followed by an excerpt from a composition by the 11th century skald Þórðr Sjáreksson, explained as containing a reference to Skaði leaving Njörðr:
Chapter 7 follows and provides various kennings for Freyr, including referring to him as the son of Njörðr. This is followed by an excerpt from a work by the 10th-century skald Egill Skallagrímsson that references Njörðr (here anglicized as "Niord"):
In chapter 20, "daughter of Njörðr" is given as a kenning for Freyja. In chapter 33, Njörðr is cited among the gods attending a banquet held by Ægir. In chapter 37, Freyja is again referred to as Njörðr's daughter in a verse by the 12th century skald Einarr Skúlason. In chapter 75, Njörðr is included in a list of the Æsir. Additionally, "Njörðr" is used in kennings for "warrior" or "warriors" various times in "Skáldskaparmál".
Njörðr appears in or is mentioned in three Kings' sagas collected in "Heimskringla": "Ynglinga saga", the "Saga of Hákon the Good" and the "Saga of Harald Graycloak". In chapter 4 of "Ynglinga saga", Njörðr is introduced in connection with the Æsir–Vanir War. When the two sides became tired of war, they came to a peace agreement and exchanged hostages. For their part, the Vanir send to the Æsir their most "outstanding men": Njörðr, described as wealthy, and Freyr, described as his son, in exchange for the Æsir's Hœnir. Additionally, the Æsir send Mímir in exchange for the wise Kvasir.
Further into chapter 4, Odin appoints Njörðr and Freyr as priests of sacrificial offerings, and they became gods among the Æsir. Freyja is introduced as a daughter of Njörðr, and as the priestess at the sacrifices. In the saga, Njörðr is described as having once wed his unnamed sister while he was still among the Vanir, and the couple produced their children Freyr and Freyja from this union, though this custom was forbidden among the Æsir.
Chapter 5 relates that Odin gave all of his temple priests dwelling places and good estates, in Njörðr's case being Nóatún. Chapter 8 states that Njörðr married a woman named Skaði, though she would not have intercourse with him. Skaði then marries Odin, and the two had numerous sons.
In chapter 9, Odin dies and Njörðr takes over as ruler of the Swedes, and he continues the sacrifices. The Swedes recognize him as their king, and pay him tribute. Njörðr's rule is marked with peace and many great crops, so much so that the Swedes believed that Njörðr held power over the crops and over the prosperity of mankind. During his rule, most of the Æsir die, their bodies are burned, and sacrifices are made by men to them. Njörðr has himself "marked for" Odin and he dies in his bed. Njörðr's body is burnt by the Swedes, and they weep heavily at his tomb. After Njörðr's reign, his son Freyr replaces him, and he is greatly loved and "blessed by good seasons like his father."
In chapter 14 of "Saga of Hákon the Good" a description of the pagan Germanic custom of Yule is given. Part of the description includes a series of toasts. The toasts begin with Odin's toasts, described as for victory and power for the king, followed by Njörðr and Freyr's toast, intended for good harvests and peace. Following this, a beaker is drank for the king, and then a toast is given for departed kin. Chapter 28 quotes verse where the kenning "Njörðr-of-roller-horses" is used for "sailor". In the "Saga of Harald Graycloak", a stanza is given of a poem entitled "Vellekla" ("Lack of Gold") by the 10th century Icelandic skald Einarr skálaglamm that mentions Njörðr in a kenning for "warrior."
In chapter 80 of the 13th century Icelandic saga "Egils saga", Egill Skallagrímsson composes a poem in praise of Arinbjörn ("Arinbjarnarkviða"). In stanza 17, Egill writes that all others watch in marvel how Arinbjörn gives out wealth, as he has been so endowed by the gods Freyr and Njörðr.
Veneration of Njörðr survived into 18th or 19th century Norwegian folk practice, as recorded in a tale collected by Halldor O. Opedal from an informant in Odda, Hordaland, Norway. The informant comments on a family tradition in which the god is thanked for a bountiful catch of fish.
Scholar Georges Dumézil further cites various tales of "havmennesker" (Norwegian "sea people") who govern sea weather and wealth or, in some tales, give magic boats, and proposes that they are historically connected to Njörðr.
Njörðr is often identified with the goddess Nerthus, whose reverence by various Germanic tribes is described by the Roman historian Tacitus in his 1st-century CE work "Germania". The connection between the two is due to the linguistic relationship between "Njörðr" and the reconstructed "*Nerþuz", "Nerthus" being the feminine, Latinized form of what "Njörðr" would have looked like around 1 CE. This has led to theories about the relation of the two, including that Njörðr may have once been a hermaphroditic god or, generally considered more likely, that the name may indicate an otherwise unattested divine brother and sister pair such as Freyr and Freyja. Consequently, Nerthus has been identified with Njörðr's unnamed sister with whom he had Freyja and Freyr, as mentioned in "Lokasenna".
In Saami mythology, Bieka-Galles (or Biega-, Biegga-Galles, depending on dialect; "The Old Man of the Winds") is a deity who rules over rain and wind, and is the subject of boat and wooden shovel (or, rather, oar) offerings. Due to similarities between descriptions of Njörðr in "Gylfaginning" and descriptions of Bieka-Galles in 18th century missionary reports, Axel Olrik identified this deity as the result of influence from the seafaring North Germanic peoples on the landbound Saami.
Parallels have been pointed out between Njörðr and the figure of Hadingus, attested in book I of Saxo Grammaticus' 13th century work "Gesta Danorum". Some of these similarities include that, in parallel to Skaði and Njörðr in "Skáldskaparmál", Hadingus is chosen by his wife Ragnhild after she selects him from other men at a banquet by his lower legs, and that, in parallel to Skaði and Njörðr in "Gylfaginning", Hadingus complains in verse of his displeasure at his life away from the sea and how he is disturbed by the howls of wolves, while his wife Ragnhild complains of life at the shore and states her annoyance at the screeching sea birds. Georges Dumézil theorized that in the tale Hadingus passes through all three functions of his trifunctional hypothesis, before ending as an Odinic hero, paralleling Njörðr's passing from the Vanir to the Æsir in the Æsir–Vanir War.
In stanza 8 of the poem "Fjölsvinnsmál", Svafrþorinn is stated as the father of Menglöð by an unnamed mother, Menglöð being the woman whom the hero Svipdagr seeks. Menglöð has often been theorized as the goddess Freyja, and according to this theory, Svafrþorinn would therefore be Njörðr. The theory is complicated by the etymology of the name "Svafrþorinn" ("þorinn" meaning "brave" and "svafr" meaning "gossip", or possibly connected to "sofa" "sleep"), which Rudolf Simek says makes little sense when attempting to connect it to Njörðr.
Njörðr has been the subject of a number of artistic depictions. Depictions include "Freyr und Gerda; Skade und Niurd" (drawing, 1883) by K. Ehrenberg, "Njörðr" (1893) by Carl Frederick von Saltza, "Skadi" (1901) by E. Doepler d. J., and "Njörd's desire of the Sea" (1908) by W. G. Collingwood.
Njörðr is one of the incarnated gods in the New Zealand comedy/drama "The Almighty Johnsons". The part of "Johan Johnson/Njörðr" is played by Stuart Devenie. | https://en.wikipedia.org/wiki?curid=21594 |
Niger–Congo languages
The Niger–Congo languages are the world's third largest language family in terms of number of speakers and Africa's largest in terms of geographical area, number of speakers, and number of distinct languages. It is generally considered to be the world's largest language family in terms of number of distinct languages, just ahead of Austronesian, although this is complicated by the ambiguity about what constitutes a distinct language; the number of named Niger–Congo languages listed by "Ethnologue" is 1,540.
It is the third-largest language family in the world by number of native speakers, comprising around 700 million people as of 2015. Within Niger–Congo, the Bantu languages alone account for 350 million people (2015), or half the total Niger–Congo speaking population. The most widely spoken Niger–Congo languages by number of native speakers are Yoruba, Igbo, Fula and Zulu. The most widely spoken by total number of speakers is Swahili, which is used as a lingua franca in parts of eastern and southeastern Africa.
While the ultimate genetic unity of the core of Niger–Congo (called Atlantic–Congo) is widely accepted, the internal cladistic structure is not well established. Other primary branches may include Dogon, Mande, Ijo, Katla and Rashad. The connection of the Mande languages especially has never been demonstrated, and without them the validity of the Niger–Congo family as a whole (as opposed to Atlantic–Congo or a similar subfamily) has not been established.
One of the most distinctive characteristics common to Atlantic–Congo languages is the use of a noun-class system, which is essentially a gender system with multiple genders.
The language family most likely originated in or near the area where these languages were spoken prior to Bantu expansion (i.e. West Africa or Central Africa). Its expansion may have been associated with the expansion of Sahel agriculture in the African Neolithic period, following the desiccation of the Sahara in c. 3500 BCE.
According to Roger Blench (2004), all specialists in Niger–Congo languages believe the languages to have a common origin, rather than merely constituting a typological classification, for reasons including their shared noun-class system, shared verbal extensions and shared basic lexicon. Similar classifications to Niger–Congo have been made ever since Diedrich Westermann in 1922. Joseph Greenberg continued that tradition, making it the starting point for modern linguistic classification in Africa, with some of his most notable publications going to press starting in the 1960s. However, there has been active debate for many decades over the appropriate subclassifications of the languages in this language family, which is a key tool used in localising a language's place of origin. No definitive "Proto-Niger–Congo" lexicon or grammar has been developed for the language family as a whole.
An important unresolved issue in determining the time and place where the Niger–Congo languages originated, and their range prior to recorded history, is this language family's relationship to the Kordofanian languages, now spoken in the Nuba mountains of Sudan. The Nuba mountains are not contiguous with the remainder of the Niger–Congo-speaking region and lie at the northeasternmost extent of the current Niger–Congo linguistic area. The current prevailing linguistic view is that Kordofanian languages are part of the Niger–Congo language family and that, of the many languages now spoken in that region, they may have been the first to be spoken there. The evidence is insufficient to determine whether this outlier group of Niger–Congo speakers represents a prehistoric range of a Niger–Congo linguistic region that has since contracted as other languages intruded, or whether it instead represents a group of Niger–Congo speakers who migrated to the area at some point in prehistory and formed an isolated linguistic community from the beginning.
There is more agreement regarding the place of origin of Benue–Congo, the largest subfamily of the group. Within Benue–Congo, the place of origin of the Bantu languages, as well as the time at which they started to expand, is known with great specificity. Blench (2004), relying particularly on prior work by Kay Williamson and P. De Wolf, argued that Benue–Congo probably originated at the confluence of the Benue and Niger Rivers in central Nigeria. These estimates of the place of origin of the Benue–Congo language family do not fix a date for the start of that expansion, other than that it must have been sufficiently prior to the Bantu expansion to allow for the diversification of the languages within this family that includes Bantu.
The classification of the relatively divergent family of the Ubangian languages, centred in the Central African Republic, as part of the Niger–Congo language family is disputed. Ubangian was grouped with Niger–Congo by Greenberg (1963), and later authorities concurred, but it was questioned by Dimmendaal (2008).
The Bantu expansion, beginning around 1000 BC, swept across much of Central and Southern Africa, leading to the extinction of many of the indigenous Pygmy and Bushman (Khoisan) populations there.
The following is an overview of the language groups usually included in Niger–Congo. The genetic relationship of some branches is not universally accepted, and the cladistic connection between those who are accepted as related may also be unclear.
The core phylum of the Niger–Congo group is the Atlantic–Congo family. The non-Atlantic–Congo languages within Niger–Congo are grouped as Dogon, Mande, Ijo (sometimes with Defaka as Ijoid), Katla and Rashad.
Atlantic–Congo combines the Atlantic languages, which do not form one branch, and Volta–Congo. It comprises more than 80% of the Niger–Congo speaking population, or close to 600 million people (2015).
The proposed Savannas group combines Adamawa, Ubangian and Gur. Outside of the Savannas group, Volta–Congo comprises Kru, Kwa (or "West Kwa"), Volta–Niger (also "East Kwa" or "West Benue–Congo") and Benue–Congo (or "East Benue–Congo"). Volta–Niger includes the two largest languages of Nigeria, Yoruba and Igbo. Benue–Congo includes the Southern Bantoid group, which is dominated by the Bantu languages, which account for 350 million people (2015), or half the total Niger–Congo speaking population.
The strict genetic unity of any of these subgroups may themselves be under dispute. For example, Roger Blench (2012) argued that Adamawa, Ubangian, Kwa, Bantoid, and Bantu are not coherent groups.
"Glottolog" 3.4 (2019) does not accept that the Kordofanian branches (Lafofa, Talodi and Heiban) or the difficult-to-classify Laal language have been demonstrated to be Atlantic–Congo languages. It otherwise accepts the family but not its inclusion within a broader Niger–Congo. Glottolog also considers Ijoid, Mande, and Dogon to be independent language phyla that have not been demonstrated to be related to each other.
The Atlantic–Congo group is characterised by the noun class systems of its languages. Atlantic–Congo largely corresponds to Mukarovsky's "Western Nigritic" phylum.
The polyphyletic Atlantic group comprises about 35 million speakers as of 2016, mostly speakers of Fula and Wolof. Atlantic is not considered to constitute a valid genetic group.
The putative Niger–Congo languages outside of the Atlantic–Congo family are centred in the upper Senegal and Niger river basins, south and west of Timbuktu (Mande, Dogon), the Niger Delta (Ijoid), and far to the east in south-central Sudan, around the Nuba Mountains (the Kordofanian families). They account for a total population of about 100 million (2015), mostly Mandé and Ijaw.
The various Kordofanian languages are spoken in south-central Sudan, around the Nuba Mountains. "Kordofanian" is a geographic grouping, not a genetic one, named for the Kordofan region. These are minor languages, spoken by a total of about 100,000 people according to 1980s estimates. Katla and Rashad languages show isoglosses with Benue-Congo that the other families lack.
The endangered or extinct Laal, Mpre and Jalaa languages are often assigned to Niger–Congo.
Niger–Congo as it is known today was only gradually recognized as a linguistic unit. In early classifications of the languages of Africa, one of the principal criteria used to distinguish different groupings was the languages' use of prefixes to classify nouns, or the lack thereof. A major advance came with the work of Sigismund Wilhelm Koelle, who in his 1854 "Polyglotta Africana" attempted a careful classification, the groupings of which in quite a number of cases correspond to modern groupings. An early sketch of the extent of Niger–Congo as one language family can be found in Koelle's observation, echoed in Bleek (1856), that the Atlantic languages used prefixes just like many Southern African languages. Subsequent work of Bleek, and some decades later the comparative work of Meinhof, solidly established Bantu as a linguistic unit.
In many cases, wider classifications employed a blend of typological and racial criteria. Thus, Friedrich Müller, in his ambitious classification (1876–88), separated the 'Negro' and Bantu languages. Likewise, the Africanist Karl Richard Lepsius considered Bantu to be of African origin, and many 'Mixed Negro languages' as products of an encounter between Bantu and intruding Asiatic languages.
In this period a relation between Bantu and languages with Bantu-like (but less complete) noun class systems began to emerge. Some authors saw the latter as languages which had not yet completely evolved to full Bantu status, whereas others regarded them as languages which had partly lost original features still found in Bantu. The Bantuist Meinhof made a major distinction between Bantu and a 'Semi-Bantu' group which according to him was originally of the unrelated Sudanic stock.
Westermann, a pupil of Meinhof, set out to establish the internal classification of the then Sudanic languages. In a 1911 work he established a basic division between 'East' and 'West'. A historical reconstruction of West Sudanic was published in 1927, and in his 1935 'Charakter und Einteilung der Sudansprachen' he conclusively established the relationship between Bantu and West Sudanic.
Joseph Greenberg took Westermann's work as a starting-point for his own classification. In a series of articles published between 1949 and 1954, he argued that Westermann's 'West Sudanic' and Bantu formed a single genetic family, which he named Niger–Congo; that Bantu constituted a subgroup of the Benue–Congo branch; that Adamawa–Eastern, previously not considered to be related, was another member of this family; and that Fula belonged to the West Atlantic languages. Just before these articles were collected in final book form ("The Languages of Africa") in 1963, he amended his classification by adding Kordofanian as a branch co-ordinate with Niger–Congo as a whole; consequently, he renamed the family "Congo–Kordofanian", later "Niger–Kordofanian". Greenberg's work on African languages, though initially greeted with scepticism, became the prevailing view among scholars.
Bennett and Sterk (1977) presented an internal reclassification based on lexicostatistics that laid the foundation for the regrouping in Bendor-Samuel (1989). Kordofanian was presented as one of several primary branches rather than being coordinate to the family as a whole, prompting re-introduction of the term "Niger–Congo", which is in current use among linguists. Many classifications continue to place Kordofanian as the most distant branch, but mainly due to negative evidence (fewer lexical correspondences), rather than positive evidence that the other languages form a valid genealogical group. Likewise, Mande is often assumed to be the second-most distant branch based on its lack of the noun-class system prototypical of the Niger–Congo family. Other branches lacking any trace of the noun-class system are Dogon and Ijaw, whereas the Talodi branch of Kordofanian does have cognate noun classes, suggesting that Kordofanian is also not a unitary group.
"Glottolog" (2013) accepts the core with noun-class systems, the Atlantic–Congo languages, apart from the recent inclusion of some of the Kordofanian groups, but not Niger–Congo as a whole. They list the following as separate families: Atlantic–Congo, Mande, Dogon, Ijoid, Lafofa, Katla–Tima, Heiban, Talodi, and Rashad.
Oxford Handbooks Online (2016) has indicated that the continuing reassessment of Niger-Congo's "internal structure is due largely to the preliminary nature of Greenberg’s classification, explicitly based as it was on a methodology that doesn’t produce proofs for genetic affiliations between languages but rather aims at identifying “likely candidates.”...The ongoing descriptive and documentary work on individual languages and their varieties, greatly expanding our knowledge on formerly little-known linguistic regions, is helping to identify clusters and units that allow for the application of the historical-comparative method. Only the reconstruction of lower-level units, instead of “big picture” contributions based on mass comparison, can help to verify (or disprove) our present concept of Niger-Congo as a genetic grouping consisting of Benue-Congo plus Volta-Niger, Kwa, Adamawa plus Gur, Kru, the so-called Kordofanian languages, and probably the language groups traditionally classified as Atlantic."
The coherence of Niger-Congo as a language phylum is supported by Grollemund, et al. (2016), using computational phylogenetic methods. The East/West Volta-Congo division, West/East Benue-Congo division, and North/South Bantoid division are not supported, whereas a Bantoid group consisting of Ekoid, Bendi, Dakoid, Jukunoid, Tivoid, Mambiloid, Beboid, Mamfe, Tikar, Grassfields, and Bantu is supported.
The Automated Similarity Judgment Program (ASJP) also groups many Niger-Congo branches together.
Proto-Niger–Congo (or Proto-Atlantic–Congo) has not been reconstructed, and few of the demonstrably coherent branches of it have been either. The major success has been several reconstructions of Proto-Bantu, which has consequently had an outsize influence on conceptions of what Proto-Niger–Congo may have been like. The only stage higher than Proto-Bantu that has been reconstructed is a pilot project by Stewart, who since the 1970s has reconstructed the common ancestor of the Potou–Tano and Bantu languages, without so far considering the hundreds of other languages which presumably descend from that same ancestor. Konstantin Pozdniakov has reconstructed the numeral system.
Over the years, several linguists have suggested a link between Niger–Congo and Nilo-Saharan, probably starting with Westermann's comparative work on the "Sudanic" family in which 'Eastern Sudanic' (now classified as Nilo-Saharan) and 'Western Sudanic' (now classified as Niger–Congo) were united. Gregersen (1972) proposed that Niger–Congo and Nilo-Saharan be united into a larger phylum, which he termed "Kongo–Saharan". His evidence was mainly based on the uncertainty in the classification of Songhay, morphological resemblances, and lexical similarities. A more recent proponent was Roger Blench (1995), who puts forward phonological, morphological and lexical evidence for uniting Niger–Congo and Nilo-Saharan in a "Niger–Saharan" phylum, with special affinity between Niger–Congo and Central Sudanic. However, fifteen years later his views had changed, with Blench (2011) proposing instead that the noun-classifier system of Central Sudanic, commonly reflected in a tripartite general–singulative–plurative number system, triggered the development or elaboration of the noun-class system of the Atlantic–Congo languages, with tripartite number marking surviving in the Plateau and Gur languages of Niger–Congo, and the lexical similarities being due to loans.
Niger–Congo languages have a clear preference for open syllables of the type CV (Consonant Vowel). The typical word structure of Proto-Niger–Congo (though it has not been reconstructed) is thought to have been CVCV, a structure still attested in, for example, Bantu, Mande and Ijoid – in many other branches this structure has been reduced through phonological change. Verbs are composed of a root followed by one or more extensional suffixes. Nouns consist of a root originally preceded by a noun class prefix of (C)V- shape which is often eroded by phonological change.
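As a rough illustration of this phonotactic preference, the following sketch (in Python) tests whether a word parses as a sequence of open CV syllables; the consonant and vowel classes are placeholder ASCII sets chosen for the example, not the inventory of any particular Niger–Congo language.
import re
# Hypothetical check that a word is a string of open CV syllables (CV, CVCV, ...).
# The consonant and vowel classes are illustrative placeholders only.
CV_WORD = re.compile(r"^(?:[bdfgklmnpstvwyz][aeiou])+$")
for word in ["tuka", "bila", "tukr"]:
    print(word, bool(CV_WORD.match(word)))  # tuka True, bila True, tukr False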
Several branches of Niger–Congo have a regular phonological contrast between two classes of consonants. Pending more clarity as to the precise nature of this contrast, it is commonly characterized as a contrast between fortis and lenis consonants.
Many Niger–Congo languages' vowel harmony is based on the [ATR] (advanced tongue root) feature. In this type of vowel harmony, the position of the root of the tongue (advanced or retracted) is the phonetic basis for the distinction between two harmonizing sets of vowels. In its fullest form, this type involves two classes, each of five vowels: a [+ATR] set /i, e, ə, o, u/ and a [−ATR] set /ɪ, ɛ, a, ɔ, ʊ/.
The roots are then divided into [+ATR] and [−ATR] categories. This feature is lexically assigned to roots, because nothing within a normal root determines its [ATR] value.
There are two types of [ATR] vowel harmony controllers in Niger–Congo. The first controller is the root. When a root contains a [+ATR] or [−ATR] vowel, that value applies to the rest of the word, crossing morpheme boundaries. For example, suffixes in Wolof assimilate to the [ATR] value of the root to which they attach, so that a given suffix surfaces in two alternating forms depending on the root.
Furthermore, the directionality of assimilation in [ATR] root-controlled vowel harmony need not be specified. The root features [+ATR] and [−ATR] spread left and/or right as needed, so that no vowel would lack a specification and be ill-formed.
Unlike in the root-controlled harmony system, where the two [ATR] values behave symmetrically, a large number of Niger–Congo languages exhibit a pattern where the [+ATR] value is more active or dominant than the [−ATR] value. This results in the second vowel harmony controller being the [+ATR] value. If there is even one vowel that is [+ATR] in the whole word, then the rest of the vowels harmonize with that feature. However, if there is no vowel that is [+ATR], the vowels appear in their underlying form. This form of vowel harmony control is best exhibited in West African languages. For example, in Nawuri, the diminutive suffix /-bi/ will cause the underlying [−ATR] vowels in a word to become phonetically [+ATR].
There are two types of vowels which affect the harmony process. These are known as neutral or opaque vowels. Neutral vowels do not harmonize to the [ATR] value of the word, and instead maintain their own [ATR] value. The vowels that follow them, however, will receive the [ATR] value of the root. Opaque vowels maintain their own [ATR] value as well, but they affect the harmony process behind them. All of the vowels following an opaque vowel will harmonize with the [ATR] value of the opaque vowel instead of the [ATR] vowel of the root.
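The two controller types described above lend themselves to a small computational sketch. The Python fragment below uses the ten-vowel pairings given earlier, but the word forms and function names are invented for illustration, and the sketch deliberately omits neutral and opaque vowels.
# Illustrative sketch of [ATR] vowel harmony; the vowel pairings follow the
# ten-vowel inventory above, and the example forms are hypothetical.
TO_PLUS_ATR = {"ɪ": "i", "ɛ": "e", "a": "ə", "ɔ": "o", "ʊ": "u"}
TO_MINUS_ATR = {plus: minus for minus, plus in TO_PLUS_ATR.items()}
PLUS_ATR_VOWELS = set(TO_PLUS_ATR.values())

def harmonize_root_controlled(root, suffix):
    """Root-controlled harmony: suffix vowels copy the root's [ATR] value."""
    root_is_plus = any(ch in PLUS_ATR_VOWELS for ch in root)
    mapping = TO_PLUS_ATR if root_is_plus else TO_MINUS_ATR
    return root + "".join(mapping.get(ch, ch) for ch in suffix)

def harmonize_dominant(word):
    """Dominant [+ATR] harmony: one [+ATR] vowel anywhere shifts every
    harmonizing vowel in the word to [+ATR]; otherwise nothing changes."""
    if any(ch in PLUS_ATR_VOWELS for ch in word):
        return "".join(TO_PLUS_ATR.get(ch, ch) for ch in word)
    return word

print(harmonize_root_controlled("bit", "mɪ"))  # [+ATR] root: suffix surfaces as "mi"
print(harmonize_root_controlled("bɪt", "mi"))  # [−ATR] root: suffix surfaces as "mɪ"
print(harmonize_dominant("tʊkʊbi"))            # a [+ATR] suffix /-bi/ dominates: "tukubi"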
The vowel inventory described above is that of a ten-vowel language, in which all the vowels of the language participate in the harmony system, producing five harmonic pairs. Vowel inventories of this type are still found in some branches of Niger–Congo, for example in the Ghana Togo Mountain languages. However, this is the rarer inventory, as oftentimes one or more vowels are not part of a harmonic pair. This has resulted in seven- and nine-vowel systems being the more common systems. The majority of languages with [ATR]-controlled vowel harmony have either seven or nine vowel phonemes, with the most common non-participatory vowel being /a/. It has been asserted that this is because vowel quality differences in the mid-central region, where /ə/, the counterpart of /a/, is found, are difficult to perceive. Another possible reason for the non-participatory status of /a/ is the articulatory difficulty of advancing the tongue root when the tongue body is low in order to produce a low [+ATR] vowel. Therefore, the vowel inventory for nine-vowel languages is generally /i, ɪ, e, ɛ, ɔ, o, ʊ, u/ plus the unpaired /a/.
And seven-vowel languages have one of two inventories: /i, ɪ, ɛ, a, ɔ, ʊ, u/, with the [ATR] contrast confined to the high vowels, or /i, e, ɛ, a, ɔ, o, u/, with the contrast confined to the mid vowels.
Note that in the nine-vowel language, the missing vowel is, in fact, [ə], [a]'s counterpart, as would be expected.
The fact that ten vowels have been reconstructed for proto-Ijoid has led to the hypothesis that the original vowel inventory of Niger–Congo was a full ten-vowel system. On the other hand, Stewart, in recent comparative work, reconstructs a seven-vowel system for his proto-Potou-Akanic-Bantu.
Several scholars have documented a contrast between oral and nasal vowels in Niger–Congo. In his reconstruction of proto-Volta–Congo, Steward (1976) postulates that nasal consonants have originated under the influence of nasal vowels; this hypothesis is supported by the fact that there are several Niger–Congo languages that have been analysed as lacking nasal consonants altogether. Languages like this have nasal vowels accompanied with complementary distribution between oral and nasal consonants before oral and nasal vowels. Subsequent loss of the nasal/oral contrast in vowels may result in nasal consonants becoming part of the phoneme inventory. In all cases reported to date, the bilabial /m/ is the first nasal consonant to be phonologized. Niger–Congo thus invalidates two common assumptions about nasals: that all languages have at least one primary nasal consonant, and that if a language has only one primary nasal consonant it is /n/.
Niger–Congo languages commonly show fewer nasalized than oral vowels. Kasem, a language with a ten-vowel system employing ATR vowel harmony, has seven nasalized vowels. Similarly, Yoruba has seven oral vowels and only five nasal ones. However, the language of Zialo has a nasal equivalent for each of its seven oral vowels.
The large majority of present-day Niger–Congo languages are tonal. A typical Niger–Congo tone system involves two or three contrastive level tones. Four-level systems are less widespread, and five-level systems are rare. Only a few Niger–Congo languages are non-tonal; Swahili is perhaps the best known, but within the Atlantic branch some others are found. Proto-Niger–Congo is thought to have been a tone language with two contrastive levels. Synchronic and comparative-historical studies of tone systems show that such a basic system can easily develop more tonal contrasts under the influence of depressor consonants or through the introduction of a downstep. Languages which have more tonal levels tend to use tone more for lexical and less for grammatical contrasts.
Niger–Congo languages are known for their system of noun classification, traces of which can be found in every branch of the family but Mande, Ijoid, Dogon, and the Katla and Rashad branches of Kordofanian. These noun-classification systems are somewhat analogous to grammatical gender in other languages, but there are often a fairly large number of classes (often 10 or more), and the classes may be male human/female human/animate/inanimate, or even completely gender-unrelated categories such as places, plants, abstracts, and groups of objects. For example, in Bantu, the Swahili language is called "Kiswahili," while the Swahili people are "Waswahili." Likewise, in Ubangian, the Zande language is called "Pazande," while the Zande people are called "Azande."
In the Bantu languages, where noun classification is particularly elaborate, it typically appears as prefixes, with verbs and adjectives marked according to the class of the noun they refer to. For example, in Swahili, "watu wazuri wataenda" is 'good "(zuri)" people "(tu)" will go "(ta-enda)"'.
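As a rough model of how such concord propagates, the sketch below stamps a noun-class prefix onto a noun, an adjective and a future-tense verb. It is a deliberate simplification: real Swahili distinguishes nominal, adjectival and verbal agreement markers (which coincide only in some classes), and the function name and class table are hypothetical.
# Heavily simplified model of Bantu-style noun-class concord; limited to
# classes where the nominal, adjectival and verbal markers happen to coincide.
CLASS_PREFIX = {
    2: "wa",  # plural human class, e.g. wa-tu 'people'
    7: "ki",  # e.g. ki-tu 'thing'
    8: "vi",  # plural of class 7, e.g. vi-tu 'things'
}

def concord_phrase(noun_class, noun_stem, adj_stem, verb_stem):
    """Build noun + adjective + future-tense verb, all agreeing in class."""
    p = CLASS_PREFIX[noun_class]
    return f"{p}{noun_stem} {p}{adj_stem} {p}ta{verb_stem}"

# Reproduces the example from the text: 'good people will go'.
print(concord_phrase(2, "tu", "zuri", "enda"))  # watu wazuri wataenda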
The same Atlantic–Congo languages which have noun classes also have a set of verb applicatives and other verbal extensions, such as the reciprocal suffix "-na" (Swahili "penda" 'to love', "pendana" 'to love each other'; also applicative "pendea" 'to love for' and causative "pendeza" 'to please').
A subject–verb–object word order is quite widespread among today's Niger–Congo languages, but SOV is found in branches as divergent as Mande, Ijoid and Dogon. As a result, there has been quite some debate as to the basic word order of Niger–Congo.
Whereas Claudi (1993) argues for SVO on the basis of existing SVO > SOV grammaticalization paths, Gensler (1997) points out that the notion of 'basic word order' is problematic as it excludes structures with, for example, auxiliaries. However, the structure SC-OC-VbStem (Subject concord, Object concord, Verb stem) found in the "verbal complex" of the SVO Bantu languages suggests an earlier SOV pattern (where the subject and object were at least represented by pronouns).
Noun phrases in most Niger–Congo languages are characteristically "noun-initial", with adjectives, numerals, demonstratives and genitives all coming after the noun. The major exceptions are found in the western areas where verb-final word order predominates and genitives precede nouns, though other modifiers still come afterwards. Degree words almost always follow adjectives, and except in verb-final languages adpositions are prepositional.
The verb-final languages of the Mende region have two quite unusual word order characteristics. Although verbs follow their direct objects, oblique adpositional phrases (like "in the house", "with timber") typically come after the verb, creating a SOVX word order. Also noteworthy in these languages is the prevalence of internally headed and correlative relative clauses, in both of which the head occurs "inside" the relative clause rather than the main clause. | https://en.wikipedia.org/wiki?curid=21601 |
Napo River
The Napo River () is a tributary to the Amazon River that rises in Ecuador on the flanks of the east Andean volcanoes of Antisana, Sinchulawa and Cotopaxi.
The total length is . The river drains an area of . The mean annual discharge is per second.
Before it reaches the plains it receives a great number of small streams from impenetrable, saturated and much-broken mountainous districts, where the dense and varied vegetation seems to fight for every piece of ground. The river is one of the principal physical features of Ecuador. From the north it is joined by the Coca River, which has its sources in the gorges of the Cayambe volcano on the equator, and by the powerful Aguarico River, which has its headwaters between Cayambe and the Colombian frontier.
From the west, it receives a secondary tributary, the Curaray, from the Andean slopes between Cotopaxi and the Tungurahua volcano. From its Coca branch to the mouth of the Curaray the Napo is full of snags and shelving sandbanks, and throws out numerous channels among jungle-tangled islands, which in the wet season are flooded, giving the river an immense width. From the Coca to the Amazon it runs through a forested plain where not a hill is visible from the river, its uniformly level banks being interrupted only by swamps and lagoons.
From the Amazon the Napo is navigable for river craft up to its Curaray branch, a distance of about , and perhaps a bit further; thence, by painful canoe navigation, its upper waters may be ascended as far as Santa Rosa, the usual point of embarkation for any venturesome traveller who descends from the Quito tableland. The Coca river may be penetrated as far up as its middle course, where it is jammed between two mountain walls, in a deep canyon, along which it dashes over high falls and numerous reefs. This is the stream made famous by the expedition of Gonzalo Pizarro. | https://en.wikipedia.org/wiki?curid=21606 |
Nine-ball
Nine-ball (sometimes written 9-ball) is a discipline of the cue sport pool. The game is traceable to origins in the 1920s in the United States. It is played on a rectangular billiard table with a pocket at each of the four corners and in the middle of each long side. Using a cue stick, players must strike the white cue ball to pocket nine colored billiard balls in ascending numerical order. An individual game (or rack) is won by the player pocketing the 9 ball. Matches are usually played as a race to a set number of racks, with the player who reaches the set number winning the match.
The game is currently governed by the World Pool-Billiard Association (WPA), with multiple regional tours. The most prestigious nine-ball tournaments are the WPA World Nine-ball Championship and the U.S. Open Nine-ball Championship. Notable players in the game include Efren Reyes, Francisco Bustamante, Thorsten Hohmann, Earl Strickland, and Shane Van Boening. The game is often associated with hustling and gambling, with tournaments often having a "buy-in" amount to become a participant. The sport has featured in popular culture, notably in the 1961 film "The Hustler" and its 1986 sequel "The Color of Money".
Nine-ball has been played with varied rules, with games such as ten-ball, seven-ball and three-ball being derived from it. While usually a singles sport, the game can be played in doubles, with the players taking alternate shots. Examples of tournaments featuring doubles include the World Cup of Pool, the World Team Championship and the Mosconi Cup.
The game was established in America by 1920, although the exact origins are unknown. Nine-ball is played with the same equipment as eight-ball and other pool games.
The game of nine-ball is played on a billiard table with six pockets and with ten balls. The cue ball, which is usually a solid shade of white (but may be spotted in some tournaments), is struck to hit the other balls on the table. The remaining balls are numbered 1 through 9, each a distinct color, with the 9-ball being striped yellow and white. The aim of the game is to hit the lowest numbered ball on the table (often referred to as the object ball) and to pocket balls in succession, eventually pocketing the nine-ball. As long as the lowest numbered ball on the table is hit first, the player may continue to shoot provided a ball is pocketed, in any of the six pockets, on each shot. A shot where the player hits the object ball and thereby pockets another ball is sometimes called a combination. The winner is the player who pockets the nine-ball, even if doing so by a combination shot.
Each rack begins with the object balls placed in a rack and one player playing a break shot. The object balls are placed in a diamond-shaped configuration, with the 1-ball positioned at the front on the foot spot, and the 9-ball placed in the center. The rack used to position the balls may be either triangle-shaped, as is used for eight-ball and other pool games, or a specific diamond-shaped rack that holds only nine balls may be used. Racks are usually made of wood or plastic. A template that lies on the table during the break has also come into use.
The break consists of hitting the 1-ball, with the attempt to pocket any ball. If the nine-ball is successfully potted, the player automatically wins the rack. This is sometimes known as a golden break. Additional rules exist in some tournaments, such as requiring a number of balls to reach the cushions, and the break may alternate between players or go to whoever won the preceding rack. The break is often the most crucial shot in nine-ball, as it is possible to win a rack without the opponent having a single shot. This is often called a break and run, or running the rack. Earl Strickland holds the record for break and runs, after he successfully ran 11 consecutive racks in a tournament in 1996. The first break of a match is sometimes decided by a flip of a coin, but often by playing a lag, with both players playing a cue ball down the table, the player whose ball comes to rest closest to the top rail winning the initial break.
After the break, if no balls were pocketed, the opponent has the option to continue the rack as usual, or to play a push out. The rules for a push out are different from those of a regular shot, as the shot does not need to hit a rail or ball. Any balls pocketed are returned to the table, including the nine-ball. After the push out, the breaking player has the option to play the shot that has been left, or to force the opponent to play on from that location. In early versions of nine-ball the push out could be called at any time during the game, but it is now available only for the shot after the break. The ideal position to leave the balls in after a push out is a shot that the player believes they can pocket, but that their opponent would struggle with.
If a player fails to pot a ball on a shot, or commits a foul, then their opponent plays the next shot. A foul can involve not making first contact with the lowest numbered ball, pocketing the cue ball, or failing to send any ball to a rail after contact with the object ball. A foul for any reason offers the opponent ball in hand, which means they can place the cue ball at any location on the table. A player making three successive fouls (for any reason) awards that rack to the opponent. Unlike some other cue sports, such as snooker, players are allowed to jump the cue ball over other balls. However, if any ball leaves the table at the end of a shot, it is counted as a foul. Jumping is common in nine-ball, and players often have a dedicated jump cue.
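The shot and foul rules above amount to a small decision procedure, which the sketch below encodes in Python as a scoring aid might. The shot record and all names are invented for illustration, and tournament rule sets add further conditions (push outs, the three-foul loss, and so on).
# Minimal, hypothetical sketch of adjudicating a single nine-ball shot.
def adjudicate_shot(lowest_ball, first_contact, pocketed, cue_pocketed,
                    ball_reached_rail):
    """Return (legal, rack_won, shooter_continues) for one shot."""
    # Fouls: wrong first contact, scratching the cue ball, or nothing
    # pocketed and no ball driven to a rail after contact.
    foul = (first_contact != lowest_ball
            or cue_pocketed
            or (not pocketed and not ball_reached_rail))
    if foul:
        return False, False, False        # opponent gets ball in hand
    if 9 in pocketed:
        return True, True, False          # 9-ball legally pocketed wins the rack
    return True, False, bool(pocketed)    # any other pot: shooter stays at the table

# Lowest ball is the 2; shooter hits it first and combos the 9 in: rack won.
print(adjudicate_shot(2, first_contact=2, pocketed=[9],
                      cue_pocketed=False, ball_reached_rail=True))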
As of the 2000s, the rules have been somewhat in flux in certain contexts, especially in Europe. The European Pocket Billiard Federation (EPBF), the WPA affiliate in Europe, has instituted a requirement on the Euro Tour that the break shot be taken from a "break box", a rectangular box smaller than the regular nine-ball breaking area. While pocketing the money ball on the break is still possible, it is much more difficult with the break box. This was later used in the annual international Mosconi Cup tournaments. Another Mosconi Cup rule change in 2007 called for racking such that the 9-ball rather than the 1-ball is on the foot spot, which further curbs overpowered break-off shots.
The general rules of the game are fairly consistent and usually do not stray too far from the earliest format set by the Billiard Congress of America (BCA). These later formed the basis of the standardized WPA rules, which the BCA follows as a member, although amateur league play may be governed by similar but slightly different rules promulgated by the American Poolplayers Association (APA) and other organizations.
Nine-ball events worldwide are run at the highest level by the WPA. The WPA World Nine-ball Championship has events for men, women and junior players. Events are generally open to any player who can pay the entry fee; however, some events require qualification. The WPA hosts a world ranking schedule based on WPA events, with other ranking systems also operated by the APA and the EPBF. Other major events held by the WPA include the U.S. Open Nine-ball Championship, China Open and Turning Stone Classic. In addition, Matchroom Sport runs major events such as the Mosconi Cup, World Cup of Pool and World Pool Masters.
Outside those events held on a worldwide basis, nine-ball is played in continental tour series. Events are held on series such as the Diamond Pool Tour, Asian Tour and Euro Tour.
Several games have been derived from nine-ball. Six-ball is essentially identical to nine-ball but with three fewer balls, which are racked in a three-row triangle with the money ball placed in the center of the back row. According to Rudolph Wanderone Jr., the game arose in early 20th century billiard halls; halls charged for matches by the 15-ball rack rather than by table, so players of nine-ball had six balls left over. For this reason, the game is often played with the balls numbered between 10 and 15, with the 15-ball as the money ball.
Seven-ball is also similar, though it differs in two key ways: the game uses only seven object balls, which are racked in a hexagon, and players are restricted to pockets on their designated side of the table. William D. Clayton is credited with the game's invention in the early 1980s. While not a common game, it was featured on television broadcaster ESPN's "Sudden Death Seven-ball", which aired in the early 2000s.
The most common derivative game is ten-ball. The game is a more stringent variant, using ten balls, in which all pocketed balls must be called. Unlike in nine-ball, the money ball cannot be pocketed on the break for an instant win. Due to its more challenging nature, and the fact that there is no publicly known technique for reliably pocketing specific object balls on the break shot, there have been suggestions within the professional circuit that ten-ball should replace nine-ball as the pro game of choice, especially since the rise of the nine-ball soft break, which is still legal in most international and non-European competition. Ten-ball has its own world championship, known as the WPA World Ten-ball Championship.
The sport has featured in popular culture, most notably in the 1959 novel "The Hustler" and its 1961 film adaptation, and in the 1984 sequel novel "The Color of Money" and its 1986 film adaptation.
Nostradamus
Michel de Nostredame (depending on the source, 14 or 21 December 1503 – 1 or 2 July 1566), usually Latinised as Nostradamus, was a French astrologer, physician and reputed seer, who is best known for his book "Les Prophéties", a collection of 942 poetic quatrains allegedly predicting future events. The book was first published in 1555 and has rarely been out of print since his death.
Nostradamus's family was originally Jewish, but had converted to Catholic Christianity before he was born. He studied at the University of Avignon, but was forced to leave after just over a year when the university closed due to an outbreak of the plague. He worked as an apothecary for several years before entering the University of Montpellier, hoping to earn a doctorate, but was almost immediately expelled after his work as an apothecary (a manual trade forbidden by university statutes) was discovered. He first married in 1531, but his wife and two children died in 1534 during another plague outbreak. He fought alongside doctors against the plague before remarrying to Anne Ponsarde, with whom he had six children. He wrote an almanac for 1550 and, as a result of its success, continued writing them for future years as he began working as an astrologer for various wealthy patrons. Catherine de' Medici became one of his foremost supporters. His "Les Prophéties", published in 1555, relied heavily on historical and literary precedent, and initially received mixed reception. He suffered from severe gout toward the end of his life, which eventually developed into edema. He died on 2 July 1566. Many popular authors have retold apocryphal legends about his life.
In the years since the publication of his "Les Prophéties", Nostradamus has attracted many supporters, who, along with much of the popular press, credit him with having accurately predicted many major world events. Most academic sources reject the notion that Nostradamus had any genuine supernatural prophetic abilities and maintain that the associations made between world events and Nostradamus's quatrains are the result of misinterpretations or mistranslations (sometimes deliberate). These academics argue that Nostradamus's predictions are characteristically vague, meaning they could be applied to virtually anything, and are useless for determining whether their author had any real prophetic powers. They also point out that English translations of his quatrains are almost always of extremely poor quality, based on later manuscripts, produced by authors with little knowledge of sixteenth-century French, and often deliberately mistranslated to make the prophecies fit whatever events the translator believed they were supposed to have predicted.
Nostradamus was born on either 14 or 21 December 1503 in Saint-Rémy-de-Provence, Provence, France, where his claimed birthplace still exists, and baptized Michel. He was one of at least nine children of notary Jaume (or Jacques) de Nostredame and Reynière, granddaughter of Pierre de Saint-Rémy who worked as a physician in Saint-Rémy. Jaume's family had originally been Jewish, but his father, Cresquas, a grain and money dealer based in Avignon, had converted to Catholicism around 1459–60, taking the Christian name "Pierre" and the surname "Nostredame" (Our Lady), the saint on whose day his conversion was solemnised. The earliest ancestor who can be identified on the paternal side is Astruge of Carcassonne, who died about 1420. Michel's known siblings included Delphine, Jean (c. 1507–77), Pierre, Hector, Louis, Bertrand, Jean II (born 1522) and Antoine (born 1523).
Little else is known about his childhood, although there is a persistent tradition that he was educated by his maternal great-grandfather Jean de St. Rémy — a tradition which is somewhat undermined by the fact that the latter disappears from the historical record after 1504 when the child was only one year old.
At the age of 14 Nostradamus entered the University of Avignon to study for his baccalaureate. After little more than a year (when he would have studied the regular trivium of grammar, rhetoric and logic rather than the later quadrivium of geometry, arithmetic, music, and astronomy/astrology), he was forced to leave Avignon when the university closed its doors during an outbreak of the plague. After leaving Avignon, Nostradamus, by his own account, traveled the countryside for eight years from 1521 researching herbal remedies. In 1529, after some years as an apothecary, he entered the University of Montpellier to study for a doctorate in medicine. He was expelled shortly afterwards by the student "procurator", Guillaume Rondelet, when it was discovered that he had been an apothecary, a "manual trade" expressly banned by the university statutes, and had been slandering doctors. The expulsion document, "BIU Montpellier, Register S 2 folio 87", still exists in the faculty library. However, some of his publishers and correspondents would later call him "Doctor". After his expulsion, Nostradamus continued working, presumably still as an apothecary, and became famous for creating a "rose pill" that purportedly protected against the plague.
In 1531 Nostradamus was invited by Jules-César Scaliger, a leading Renaissance scholar, to come to Agen. There he married a woman of uncertain name (possibly Henriette d'Encausse), who bore him two children. In 1534 his wife and children died, presumably from the plague. After their deaths, he continued to travel, passing through France and possibly Italy.
On his return in 1545, he assisted the prominent physician Louis Serre in his fight against a major plague outbreak in Marseille, and then tackled further outbreaks of disease on his own in Salon-de-Provence and in the regional capital, Aix-en-Provence. Finally, in 1547, he settled in Salon-de-Provence in the house which exists today, where he married a rich widow named Anne Ponsarde, with whom he had six children—three daughters and three sons. Between 1556 and 1567 he and his wife acquired a one-thirteenth share in a huge canal project, organised by Adam de Craponne, to create the Canal de Craponne to irrigate the largely waterless Salon-de-Provence and the nearby Désert de la Crau from the river Durance.
After another visit to Italy, Nostradamus began to move away from medicine and toward the "occult", although evidence suggests that he remained a Catholic and was opposed to the Protestant Reformation. But it seems he could have dabbled in horoscopes, necromancy, scrying, and good luck charms such as the hawthorn rod. Following popular trends, he wrote an almanac for 1550, for the first time in print Latinising his name to Nostradamus. He was so encouraged by the almanac's success that he decided to write one or more annually. Taken together, they are known to have contained at least 6,338 prophecies, as well as at least eleven annual calendars, all of them starting on 1 January and not, as is sometimes supposed, in March. It was mainly in response to the almanacs that the nobility and other prominent persons from far away soon started asking for horoscopes and "psychic" advice from him, though he generally expected his clients to supply the birth charts on which these would be based, rather than calculating them himself as a professional astrologer would have done. When obliged to attempt this himself on the basis of the published tables of the day, he frequently made errors and failed to adjust the figures for his clients' place or time of birth.
He then began his project of writing a book of one thousand mainly French quatrains, which constitute the largely undated prophecies for which he is most famous today. Feeling vulnerable to opposition on religious grounds, however, he devised a method of obscuring his meaning by using "Virgilianised" syntax, word games and a mixture of other languages such as Greek, Italian, Latin, and Provençal. For technical reasons connected with their publication in three installments (the publisher of the third and last installment seems to have been unwilling to start it in the middle of a "Century," or book of 100 verses), the last fifty-eight quatrains of the seventh "Century" have not survived in any extant edition.
The quatrains, published in a book titled "Les Prophéties" (The Prophecies), received a mixed reaction when they were published. Some people thought Nostradamus was a servant of evil, a fake, or insane, while many of the elite evidently thought otherwise. Catherine de' Medici, wife of King Henry II of France, was one of Nostradamus's greatest admirers. After reading his almanacs for 1555, which hinted at unnamed threats to the royal family, she summoned him to Paris to explain them and to draw up horoscopes for her children. At the time, he feared that he would be beheaded, but by the time of his death in 1566, Queen Catherine had made him Counselor and Physician-in-Ordinary to her son, the young King Charles IX of France.
Some accounts of Nostradamus's life state that he was afraid of being persecuted for heresy by the Inquisition, but neither prophecy nor astrology fell in this bracket, and he would have been in danger only if he had practised magic to support them. In 1538 he came into conflict with the Church in Agen after an Inquisitor visited the area looking for anti-Catholic views. His brief imprisonment at Marignane in late 1561 was solely because he had violated a recent royal decree by publishing his 1562 almanac without the prior permission of a bishop.
By 1566, Nostradamus's gout, which had plagued him painfully for many years and made movement very difficult, turned into edema. In late June he summoned his lawyer to draw up an extensive will bequeathing his property plus 3,444 crowns (around US$300,000 today), minus a few debts, to his wife pending her remarriage, in trust for her sons pending their twenty-fifth birthdays and her daughters pending their marriages. This was followed by a much shorter codicil. On the evening of 1 July, he is alleged to have told his secretary Jean de Chavigny, "You will not find me alive at sunrise." The next morning he was reportedly found dead, lying on the floor next to his bed and a bench (Presage 141 [originally 152] "for November 1567", as posthumously edited by Chavigny to fit what happened). He was buried in the local Franciscan chapel in Salon (part of it now incorporated into the restaurant "La Brocherie") but re-interred during the French Revolution in the Collégiale Saint-Laurent, where his tomb remains to this day.
In "The Prophecies" Nostradamus compiled his collection of major, long-term predictions. The first installment was published in 1555 and contained 353 quatrains. The third edition, with three hundred new quatrains, was reportedly printed in 1558, but now survives as only part of the omnibus edition that was published after his death in 1568. This version contains one unrhymed and 941 rhymed quatrains, grouped into nine sets of 100 and one of 42, called "Centuries".
Given printing practices at the time (which included type-setting from dictation), no two editions turned out to be identical, and it is relatively rare to find even two copies that are exactly the same. Certainly there is no warrant for assuming—as would-be "code-breakers" are prone to do—that either the spellings or the punctuation of any edition are Nostradamus's originals.
The "Almanacs", by far the most popular of his works, were published annually from 1550 until his death. He often published two or three in a year, entitled either "Almanachs" (detailed predictions), "Prognostications" or "Presages" (more generalised predictions).
Nostradamus was not only a diviner but also a professional healer. It is known that he wrote at least two books on medical science. One was an extremely free translation (or rather a paraphrase) of "The Protreptic" of Galen ("Paraphrase de C. GALIEN, sus l'Exhortation de Menodote aux estudes des bonnes Artz, mesmement Medicine"), and in his so-called "Traité des fardemens" (basically a medical cookbook containing, once again, materials borrowed mainly from others), he included a description of the methods he used to treat the plague, including bloodletting, none of which apparently worked. The same book also describes the preparation of cosmetics.
A manuscript normally known as the "Orus Apollo" also exists in the Lyon municipal library, where upwards of 2,000 original documents relating to Nostradamus are stored under the aegis of Michel Chomarat. It is a purported translation of an ancient Greek work on Egyptian hieroglyphs based on later Latin versions, all of them unfortunately ignorant of the true meanings of the ancient Egyptian script, which was not correctly deciphered until Jean-François Champollion's work in the 19th century.
Since his death, only the "Prophecies" have continued to be popular, and extraordinarily so. Over two hundred editions of them have appeared in that time, together with over 2,000 commentaries. Their persistence in popular culture seems to be partly because their vagueness and lack of dating make it easy to quote them selectively after every major dramatic event and retrospectively claim them as "hits".
Nostradamus claimed to base his published predictions on judicial astrology—the astrological 'judgment', or assessment, of the 'quality' (and thus potential) of events such as births, weddings, coronations etc.—but was heavily criticised by professional astrologers of the day such as Laurens Videl for incompetence and for assuming that "comparative horoscopy" (the comparison of future planetary configurations with those accompanying known past events) could actually predict what would happen in the future.
Research suggests that much of his prophetic work paraphrases collections of ancient end-of-the-world prophecies (mainly Bible-based), supplemented with references to historical events and anthologies of omen reports, and then projects those into the future in part with the aid of comparative horoscopy. Hence the many predictions involving ancient figures such as Sulla, Gaius Marius, Nero, and others, as well as his descriptions of "battles in the clouds" and "frogs falling from the sky". Astrology itself is mentioned only twice in Nostradamus's "Preface" and 41 times in the "Centuries" themselves, but more frequently in his dedicatory "Letter to King Henry II". In the last quatrain of his sixth "century" he specifically attacks astrologers.
His historical sources include easily identifiable passages from Livy, Suetonius' "The Twelve Caesars", Plutarch and other classical historians, as well as from medieval chroniclers such as Geoffrey of Villehardouin and Jean Froissart. Many of his astrological references are taken almost word for word from Richard Roussat's "Livre de l'estat et mutation des temps" of 1549–50.
One of his major prophetic sources was evidently the "Mirabilis Liber" of 1522, which contained a range of prophecies by Pseudo-Methodius, the Tiburtine Sibyl, Joachim of Fiore, Savonarola and others (his "Preface" contains 24 biblical quotations, all but two in the order used by Savonarola). This book had enjoyed considerable success in the 1520s, when it went through half a dozen editions, but did not sustain its influence, perhaps owing to its mostly Latin text, Gothic script and many difficult abbreviations. Nostradamus was one of the first to re-paraphrase these prophecies in French, which may explain why they are credited to him. Modern views of plagiarism did not apply in the 16th century; authors frequently copied and paraphrased passages without acknowledgement, especially from the classics. The latest research suggests that he may in fact have used bibliomancy for this—randomly selecting a book of history or prophecy and taking his cue from whatever page it happened to fall open at.
Further material was gleaned from the "De honesta disciplina" of 1504 by Petrus Crinitus, which included extracts from Michael Psellos's "De daemonibus", and the "De Mysteriis Aegyptiorum" ("Concerning the mysteries of Egypt"), a book on Chaldean and Assyrian magic by Iamblichus, a 4th-century Neo-Platonist. Latin versions of both had recently been published in Lyon, and extracts from both are paraphrased (in the second case almost literally) in his first two verses, the first of which is appended to this article. While it is true that Nostradamus claimed in 1555 to have burned all of the occult works in his library, no one can say exactly what books were destroyed in this fire.
Only in the 17th century did people start to notice his reliance on earlier, mainly classical sources.
Nostradamus's reliance on historical precedent is reflected in the fact that he explicitly rejected the label "prophet" (i.e. a person having prophetic powers of his own) on several occasions.
Given this reliance on literary sources, it is unlikely that Nostradamus used any particular methods for entering a trance state, other than contemplation, meditation and incubation. His sole description of this process is contained in "letter 41" of his collected Latin correspondence. The popular legend that he attempted the ancient methods of flame gazing, water gazing or both simultaneously is based on a naive reading of his first two verses, which merely liken his efforts to those of the Delphic and Branchidic oracles. The first of these is reproduced at the bottom of this article and the second can be seen in facsimiles of the original editions. In his dedication to King Henry II, Nostradamus describes "emptying my soul, mind and heart of all care, worry and unease through mental calm and tranquility", but his frequent references to the "bronze tripod" of the Delphic rite are usually preceded by the words "as though".
Most of the quatrains deal with disasters, such as plagues, earthquakes, wars, floods, invasions, murders, droughts, and battles—all undated and based on foreshadowings by the "Mirabilis Liber". Some quatrains cover these disasters in overall terms; others concern a single person or small group of people. Some cover a single town, others several towns in several countries. A major, underlying theme is an impending invasion of Europe by Muslim forces from farther east and south headed by the expected Antichrist, directly reflecting the then-current Ottoman invasions and the earlier Saracen equivalents, as well as the prior expectations of the "Mirabilis Liber". All of this is presented in the context of the supposedly imminent end of the world—even though this is not in fact mentioned—a conviction that sparked numerous collections of end-time prophecies at the time, including an unpublished collection by Christopher Columbus. Views on Nostradamus have varied widely throughout history. Academic views such as those of Jacques Halbronn regard Nostradamus's "Prophecies" as antedated forgeries written by later hands with a political axe to grind.
Many of Nostradamus's supporters believe his prophecies are genuine. Owing to the subjective nature of these interpretations, however, no two of them completely agree on what Nostradamus predicted, whether for the past or for the future. Many supporters nevertheless agree, for example, that he predicted the Great Fire of London, the French Revolution, the rises of Napoleon and Adolf Hitler, both world wars, and the nuclear destruction of Hiroshima and Nagasaki. Popular authors frequently claim that he predicted whatever major event had just happened at the time of each book's publication, such as the Apollo moon landings in 1969, the Space Shuttle "Challenger" disaster in 1986, the death of Diana, Princess of Wales in 1997, and the September 11 attacks on the World Trade Center in 2001. This 'movable feast' aspect appears to be characteristic of the genre.
Possibly the first of these books to become popular in English was Henry C. Roberts' "The Complete Prophecies of Nostradamus" of 1947, reprinted at least seven times during the next forty years, which contained both transcriptions and translations, with brief commentaries. This was followed in 1961 (reprinted in 1982) by Edgar Leoni's "Nostradamus and His Prophecies". After that came Erika Cheetham's "The Prophecies of Nostradamus", incorporating a reprint of the posthumous 1568 edition, which was reprinted, revised and republished several times from 1973 onwards, latterly as "The Final Prophecies of Nostradamus". This served as the basis for the documentary "The Man Who Saw Tomorrow" and both did indeed mention possible generalised future attacks on New York (via nuclear weapons), though not specifically on the World Trade Center or on any particular date.
A two-part translation of Jean-Charles de Fontbrune's "Nostradamus: historien et prophète" was published in 1980, and John Hogue has published a number of books on Nostradamus from about 1987, including "Nostradamus and the Millennium: Predictions of the Future", "Nostradamus: The Complete Prophecies" (1999) and "Nostradamus: A Life and Myth" (2003). In 1992 one commentator who claimed to be able to contact Nostradamus under hypnosis even had him "interpreting" his own verse X.6 (a prediction specifically about floods in southern France around the city of Nîmes and people taking refuge in its "collosse", or Colosseum, a Roman amphitheatre now known as the "Arènes") as a prediction of an undated "attack on the Pentagon", despite the historical seer's clear statement in his dedicatory letter to King Henry II that his prophecies were about Europe, North Africa and part of Asia Minor.
With the exception of Roberts, these books and their many popular imitators were almost unanimous not merely about Nostradamus's powers of prophecy but also in inventing intriguing aspects of his purported biography: that he had been a descendant of the Israelite tribe of Issachar; he had been educated by his grandfathers, who had both been physicians to the court of Good King René of Provence; he had attended Montpellier University in 1525 to gain his first degree; after returning there in 1529, he had successfully taken his medical doctorate; he had gone on to lecture in the Medical Faculty there, until his views became too unpopular; he had supported the heliocentric view of the universe; he had travelled to the Habsburg Netherlands, where he had composed prophecies at the abbey of Orval; in the course of his travels, he had performed a variety of prodigies, including identifying a future pope, Sixtus V, who was then only a seminary monk; he had successfully cured the plague at Aix-en-Provence and elsewhere; he had engaged in scrying, using either a magic mirror or a bowl of water; he had been joined by his secretary Chavigny at Easter 1554; having published the first installment of his "Prophéties", he had been summoned by Queen Catherine de' Medici to Paris in 1556 to discuss with her his prophecy at quatrain I.35 that her husband King Henri II would be killed in a duel; he had examined the royal children at Blois; he had bequeathed to his son a "lost book" of his own prophetic paintings; he had been buried standing up; and he had been found, when dug up at the French Revolution, to be wearing a medallion bearing the exact date of his disinterment. The medallion story was first recorded by Samuel Pepys as early as 1667, long before the French Revolution. Pepys records in his celebrated diary a legend that, before his death, Nostradamus made the townsfolk swear that his grave would never be disturbed; but that 60 years later his body was exhumed, whereupon a brass plaque was found on his chest correctly stating the date and time when his grave would be opened and cursing the exhumers.
In 2000, Li Hongzhi claimed that the 1999 prophecy at X.72 was a prediction of the persecution of Falun Gong in China, which began in July 1999, leading to increased interest in Nostradamus among Falun Gong members.
From the 1980s onward, however, an academic reaction set in, especially in France. The publication in 1983 of Nostradamus's private correspondence and, during succeeding years, of the original editions of 1555 and 1557 discovered by Chomarat and Benazra, together with the unearthing of much original archival material, revealed that much that was claimed about Nostradamus did not fit the documented facts. The academics showed that not one of the claims just listed was backed up by any known contemporary documentary evidence. Most of them had evidently been based on unsourced rumours relayed as fact by much later commentators, such as Jaubert (1656), Guynaud (1693) and Bareste (1840), on modern misunderstandings of the 16th-century French texts, or on pure invention. Even the often-advanced suggestion that quatrain I.35 had successfully prophesied King Henry II's death did not actually appear in print for the first time until 1614, 55 years after the event.
Skeptics such as James Randi suggest that his reputation as a prophet is largely manufactured by modern-day supporters who fit his words to events that have either already occurred or are so imminent as to be inevitable, a process sometimes known as "retroactive clairvoyance" (postdiction). No Nostradamus quatrain is known to have been interpreted as predicting a specific event before it occurred, other than in vague, general terms that could equally apply to any number of other events. This even applies to quatrains that contain specific dates, such as III.77, which predicts "in 1727, in October, the king of Persia [shall be] captured by those of Egypt"—a prophecy that has, as ever, been interpreted retrospectively in the light of later events, in this case as though it presaged the known peace treaty between the Ottoman Empire and Persia of that year; Egypt was also an important Ottoman territory at this time. Similarly, Nostradamus's notorious "1999" prophecy at X.72 (see Nostradamus in popular culture) describes no event that commentators have succeeded in identifying either before or since, other than by twisting the words to fit whichever of the many contradictory happenings they claim as "hits". Moreover, no quatrain suggests, as is often claimed by books and films on the alleged Mayan Prophecy, that the world would end in December 2012. In his preface to the "Prophecies", Nostradamus himself stated that his prophecies extend 'from now to the year 3797'—an extraordinary date which, given that the preface was written in 1555, may have more than a little to do with the fact that 2242 (3797–1555) had recently been proposed by his major astrological source Richard Roussat as a possible date for the end of the world.
Additionally, scholars have pointed out that almost all English translations of Nostradamus's quatrains are of extremely poor quality, seem to display little or no knowledge of 16th-century French, are tendentious, and are sometimes intentionally altered in order to make them fit whatever events the translator believed they were supposed to refer to (or vice versa). None of them were based on the original editions: Roberts had based his writings on that of 1672, Cheetham and Hogue on the posthumous edition of 1568. Even Leoni accepted on page 115 that he had never seen an original edition, and on earlier pages, he indicated that much of his biographical material was unsourced.
None of this research and criticism was originally known to most of the English-language commentators, by dint of the dates when they were writing and, to some extent, the language in which it was written. Hogue was in a position to take advantage of it, but it was only in 2003 that he accepted that some of his earlier biographical material had in fact been apocryphal. Meanwhile, some of the more recent sources listed (Lemesurier, Gruber, Wilson) have been particularly scathing about later attempts by some lesser-known authors and Internet enthusiasts to extract alleged hidden meanings from the texts, whether with the aid of anagrams, numerical codes, graphs or otherwise.
The prophecies retold and expanded by Nostradamus figured largely in popular culture in the 20th and 21st centuries. As well as being the subject of hundreds of books (both fiction and nonfiction), Nostradamus's life has been depicted in several films and videos, and his life and writings continue to be a subject of media interest.
There have also been several well-known Internet hoaxes, where quatrains in the style of Nostradamus have been circulated by e-mail as the real thing. The best-known examples concern the collapse of the World Trade Center in the 11 September attacks.
With the arrival of the year 2012, Nostradamus's prophecies started to be co-opted (especially by the History Channel) as evidence suggesting that the end of the world was imminent, notwithstanding the fact that his book never mentions the end of the world, let alone the year 2012.
List of multi-level marketing companies
This is a list of companies which use multi-level marketing (also known as network marketing, direct selling, referral marketing, and pyramid selling) for most of their sales.
Noah Webster
Noah Webster Jr. (October 16, 1758 – May 28, 1843) was an American lexicographer, textbook pioneer, English-language spelling reformer, political writer, editor, and prolific author. He has been called the "Father of American Scholarship and Education". His "Blue-backed Speller" books taught five generations of American children how to spell and read. Webster's name has become synonymous with "dictionary" in the United States, especially the modern Merriam-Webster dictionary that was first published in 1828 as "An American Dictionary of the English Language".
Born in West Hartford, Connecticut, Webster graduated from Yale College in 1778. He passed the bar examination after studying law under Oliver Ellsworth and others, but was unable to find work as a lawyer. He found some financial success by opening a private school and writing a series of educational books, including the "Blue-Backed Speller." A strong supporter of the American Revolution and the ratification of the United States Constitution, Webster later criticized American society for being in need of an intellectual foundation. He believed that American nationalism was superior to that of Europe because American values were superior.
In 1793, Alexander Hamilton recruited Webster to move to New York City and become an editor for a Federalist Party newspaper. He became a prolific author, publishing newspaper articles, political essays, and textbooks. He returned to Connecticut in 1798 and served in the Connecticut House of Representatives. Webster founded the Connecticut Society for the Abolition of Slavery in 1791 but later became somewhat disillusioned with the abolitionist movement.
In 1806, Webster published his first dictionary, "A Compendious Dictionary of the English Language". The following year, he started working on an expanded and comprehensive dictionary, finally publishing it in 1828. He was very influential in popularizing certain spellings in the United States. He was also influential in establishing the Copyright Act of 1831, the first major statutory revision of U.S. copyright law. While working on a second volume of his dictionary, Webster died in 1843, and the rights to the dictionary were acquired by George and Charles Merriam.
Webster was born in the Western Division of Hartford (which became West Hartford, Connecticut) to an established family. His father Noah Webster Sr. (1722–1813) was a descendant of Connecticut Governor John Webster; his mother Mercy (Steele) Webster (1727–1794) was a descendant of Governor William Bradford of Plymouth Colony. His father was primarily a farmer, though he was also deacon of the local Congregational church, captain of the town's militia, and a founder of a local book society (a precursor to the public library). After American independence, he was appointed a justice of the peace.
Webster's father never attended college, but he was intellectually curious and prized education. Webster's mother spent long hours teaching her children spelling, mathematics, and music. At age six, Webster began attending a dilapidated one-room primary school built by West Hartford's Ecclesiastical Society. Years later, he described the teachers as the "dregs of humanity" and complained that the instruction was mainly in religion. Webster's experiences there motivated him to improve the educational experience of future generations.
When Webster was fourteen, his church pastor began tutoring him in Latin and Greek to prepare him for entering Yale College. Webster enrolled at Yale just before his 16th birthday, studying during his senior year with Ezra Stiles, Yale's president. His four years at Yale overlapped the American Revolutionary War and, because of food shortages and threatened British invasions, many of his classes had to be held in other towns. Webster served in the Connecticut Militia. His father had mortgaged the farm to send Webster to Yale, but he was now on his own and had nothing more to do with his family.
Webster lacked career plans after graduating from Yale in 1778, later writing that a liberal arts education "disqualifies a man for business". He taught school briefly in Glastonbury, but the working conditions were harsh and the pay low. He quit to study law. While studying law under future U.S. Supreme Court Chief Justice Oliver Ellsworth, Webster also taught full-time in Hartford—which was grueling, and ultimately impossible to continue. He quit his legal studies for a year and lapsed into a depression; he then found another practicing attorney to tutor him, and completed his studies and passed the bar examination in 1781. As the Revolutionary War was still going on, he could not find work as a lawyer. He received a master's degree from Yale by giving an oral dissertation to the Yale graduating class. Later that year, he opened a small private school in western Connecticut that was a success. Nevertheless, he soon closed it and left town, probably because of a failed romance. Turning to literary work as a way to overcome his losses and channel his ambitions, he began writing a series of well-received articles for a prominent New England newspaper justifying and praising the American Revolution and arguing that the separation from Britain would be a permanent state of affairs. He then founded a private school catering to wealthy parents in Goshen, New York and, by 1785, he had written his speller, a grammar book and a reader for elementary schools. Proceeds from continuing sales of the popular blue-backed speller enabled Webster to spend many years working on his famous dictionary.
Webster was by nature a revolutionary, seeking American independence from the cultural thralldom to Europe. To replace it, he sought to create a utopian America, cleansed of luxury and ostentation and the champion of freedom. By 1781, Webster had an expansive view of the new nation. American nationalism was superior to that of Europe because American values were superior, he claimed.
Webster dedicated his "Speller" and "Dictionary" to providing an intellectual foundation for American nationalism. From 1787 to 1789, Webster was an outspoken supporter of the new Constitution. In October 1787, he wrote a pamphlet entitled "An Examination into the Leading Principles of the Federal Constitution Proposed by the Late Convention Held at Philadelphia," published under the pen name "A Citizen of America." The pamphlet was influential, particularly outside New York State.
In terms of political theory, he de-emphasized virtue (a core value of republicanism) and emphasized widespread ownership of property (a key element of Federalism). He was one of the few Americans who paid much attention to French theorist Jean-Jacques Rousseau. It was not Rousseau's politics but his ideas on pedagogy in "Emile" (1762) that influenced Webster in adjusting his "Speller" to the stages of a child's development.
Webster married well and had joined the elite in Hartford but did not have much money. In 1793, Alexander Hamilton lent him $1,500 to move to New York City to edit the leading Federalist Party newspaper. In December, he founded New York's first daily newspaper "American Minerva" (later known as the "Commercial Advertiser"), which he edited for four years, writing the equivalent of 20 volumes of articles and editorials. He also published the semi-weekly publication "The Herald, A Gazette for the country" (later known as "The New York Spectator").
As a Federalist spokesman, he defended the administrations of George Washington and John Adams, especially their policy of neutrality between Britain and France, and he especially criticized the excesses of the French Revolution and its Reign of Terror. When French ambassador Citizen Genêt set up a network of pro-Jacobin "Democratic-Republican Societies" that entered American politics and attacked President Washington, he condemned them. He later defended Jay's Treaty between the United States and Britain. As a result, he was repeatedly denounced by the Jeffersonian Republicans as "a pusillanimous, half-begotten, self-dubbed patriot," "an incurable lunatic," and "a deceitful newsmonger ... Pedagogue and Quack."
Webster was elected a Fellow of the American Academy of Arts and Sciences in 1799.
For decades, he was one of the most prolific authors in the new nation, publishing textbooks, political essays, a report on infectious diseases, and newspaper articles for his Federalist party. He wrote so much that a modern bibliography of his published works required 655 pages. He moved back to New Haven in 1798; he was elected as a Federalist to the Connecticut House of Representatives in 1800 and 1802–1807.
The Copyright Act of 1831 was the first major statutory revision of U.S. copyright law, a result of intensive lobbying by Noah Webster and his agents in Congress. Webster also played a critical role lobbying individual states throughout the country during the 1780s to pass the first American copyright laws, which were expected to have distinct nationalistic implications for the infant nation.
As a teacher, he had come to dislike American elementary schools. They could be overcrowded, with up to seventy children of all ages crammed into one-room schoolhouses. They had poor, underpaid staff, no desks, and unsatisfactory textbooks that came from England. Webster thought that Americans should learn from American books, so he began writing the three-volume compendium "A Grammatical Institute of the English Language". The work consisted of a speller (published in 1783), a grammar (published in 1784), and a reader (published in 1785). His goal was to provide a uniquely American approach to training children. His most important improvement, he claimed, was to rescue "our native tongue" from "the clamour of pedantry" that surrounded English grammar and pronunciation. He complained that the English language had been corrupted by the British aristocracy, which set its own standard for proper spelling and pronunciation. Webster rejected the notion that the study of Greek and Latin must precede the study of English grammar. The appropriate standard for the American language, argued Webster, was "the same republican principles as American civil and ecclesiastical constitutions." This meant that the people-at-large must control the language; popular sovereignty in government must be accompanied by popular usage in language.
The "Speller" was arranged so that it could be easily taught to students, and it progressed by age. From his own experiences as a teacher, Webster thought that the "Speller" should be simple and gave an orderly presentation of words and the rules of spelling and pronunciation. He believed that students learned most readily when he broke a complex problem into its component parts and had each pupil master one part before moving to the next. Ellis argues that Webster anticipated some of the insights currently associated with Jean Piaget's theory of cognitive development. Webster said that children pass through distinctive learning phases in which they master increasingly complex or abstract tasks. Therefore, teachers must not try to teach a three-year-old how to read; they could not do it until age five. He organized his speller accordingly, beginning with the alphabet and moving systematically through the different sounds of vowels and consonants, then syllables, then simple words, then more complex words, then sentences.
The speller was originally titled "The First Part of the Grammatical Institute of the English Language". Over the course of 385 editions in his lifetime, the title was changed in 1786 to "The American Spelling Book", and again in 1829 to "The Elementary Spelling Book". Most people called it the "Blue-Backed Speller" because of its blue cover and, for the next one hundred years, Webster's book taught children how to read, spell, and pronounce words. It was the most popular American book of its time; by 1837, it had sold 15 million copies, and some 60 million by 1890—reaching the majority of young students in the nation's first century. Its royalty of a half-cent per copy was enough to sustain Webster in his other endeavors. It also helped create the popular contests known as spelling bees.
As time went on, Webster changed the spellings in the book to more phonetic ones. Most of them already existed as alternative spellings. He chose spellings such as "defense", "color", and "traveler", and changed the "re" to "er" in words such as "center". He also changed "tongue" to the older spelling "tung", but this did not catch on.
Part three of his "Grammatical Institute" (1785) was a reader designed to uplift the mind and "diffuse the principles of virtue and patriotism."
"In the choice of pieces," he explained, "I have not been inattentive to the political interests of America. Several of those masterly addresses of Congress, written at the commencement of the late Revolution, contain such noble, just, and independent sentiments of liberty and patriotism, that I cannot help wishing to transfuse them into the breasts of the rising generation."
Students received the usual quota of Plutarch, Shakespeare, Swift, and Addison, as well as such Americans as Joel Barlow's "Vision of Columbus", Timothy Dwight's "Conquest of Canaan", and John Trumbull's poem "M'Fingal." He included excerpts from Tom Paine's "The Crisis" and an essay by Thomas Day calling for the abolition of slavery in accord with the Declaration of Independence.
Webster's "Speller" was entirely secular by design. It ended with two pages of important dates in American history, beginning with Columbus's discovery of America in 1492 and ending with the battle of Yorktown in 1781. There was no mention of God, the Bible, or sacred events. "Let sacred things be appropriated for sacred purposes," wrote Webster. As Ellis explains, "Webster began to construct a secular catechism to the nation-state. Here was the first appearance of 'civics' in American schoolbooks. In this sense, Webster's speller becoming what was to be the secular successor to "The New England Primer" with its explicitly biblical injunctions." Later in life, Webster became intensely religious and added religious themes. However, after 1840, Webster's books lost market share to the "McGuffey Eclectic Readers" of William Holmes McGuffey, which sold over 120 million copies.
Vincent P. Bynack (1984) examines Webster in relation to his commitment to the idea of a unified American national culture that would stave off the decline of republican virtues and solidarity. Webster acquired his perspective on language from such theorists as Maupertuis, Michaelis, and Herder. In their works he found the belief that a nation's linguistic forms and the thoughts correlated with them shaped individuals' behavior. Thus, the etymological clarification and reform of American English promised to improve citizens' manners and thereby preserve republican purity and social stability. This presupposition animated Webster's "Speller" and "Grammar".
In 1806, Webster published his first dictionary, "A Compendious Dictionary of the English Language". In 1807 Webster began compiling an expanded and fully comprehensive dictionary, "An American Dictionary of the English Language;" it took twenty-six years to complete. To evaluate the etymology of words, Webster learned twenty-eight languages, including Old English, Gothic, German, Greek, Latin, Italian, Spanish, French, Dutch, Welsh, Russian, Hebrew, Aramaic, Persian, Arabic, and Sanskrit. Webster hoped to standardize American speech, since Americans in different parts of the country spelled, pronounced, and used English words differently.
Webster completed his dictionary during his year abroad in January 1825 in a boarding house in Cambridge, England. His book contained seventy thousand words, of which twelve thousand had never appeared in a published dictionary before. As a spelling reformer, Webster preferred spellings that matched pronunciation better. In "A Companion to the American Revolution" (2008), John Algeo notes: "It is often assumed that characteristically American spellings were invented by Noah Webster. He was very influential in popularizing certain spellings in America, but he did not originate them. Rather ... he chose already existing options such as "center, color" and "check" on such grounds as simplicity, analogy or etymology." He also added American words, like "skunk", that did not appear in British dictionaries. At the age of seventy, Webster published his dictionary in 1828, registering the copyright on April 14.
Though it now has an honored place in the history of American English, Webster's first dictionary only sold 2,500 copies. He was forced to mortgage his home to develop a second edition, and his life from then on was plagued with debt.
In 1840, the second edition was published in two volumes. On May 28, 1843, a few days after he had completed revising an appendix to the second edition, and with much of his efforts with the dictionary still unrecognized, Noah Webster died. The rights to his dictionary were acquired by George and Charles Merriam in 1843 from Webster's estate and all contemporary Merriam-Webster dictionaries trace their lineage to that of Webster, although many others have adopted his name, attempting to share in the popularity.
Lepore (2008) demonstrates Webster's paradoxical ideas about language and politics and shows why Webster's endeavors were at first so poorly received. Culturally conservative Federalists denounced the work as radical—too inclusive in its lexicon and even bordering on vulgar. Meanwhile, Webster's old foes the Republicans attacked the man, labeling him mad for such an undertaking.
Scholars have long seen Webster's 1844 dictionary to be an important resource for reading poet Emily Dickinson's life and work; she once commented that the "Lexicon" was her "only companion" for years. One biographer said, "The dictionary was no mere reference book to her; she read it as a priest his breviary—over and over, page by page, with utter absorption."
Nathan Austin has explored the intersection of lexicographical and poetic practices in American literature, and attempts to map out a "lexical poetics" using Webster's definitions as his base. Poets mined his dictionaries, often drawing upon the lexicography in order to express word play. Austin explicates key definitions from both the "Compendious" (1806) and "American" (1828) dictionaries, and finds a range of themes such as the politics of "American" versus "British" English and issues of national identity and independent culture. Austin argues that Webster's dictionaries helped redefine Americanism in an era of highly flexible cultural identity. Webster himself saw the dictionaries as a nationalizing device to separate America from Britain, calling his project a "federal language", with competing forces towards regularity on the one hand and innovation on the other. Austin suggests that the contradictions of Webster's lexicography were part of a larger play between liberty and order within American intellectual discourse, with some pulled toward Europe and the past, and others pulled toward America and the new future.
In 1850 Blackie and Son in Glasgow published the first general dictionary of English that relied heavily upon pictorial illustrations integrated with the text. Its "The Imperial Dictionary, English, Technological, and Scientific, Adapted to the Present State of Literature, Science, and Art; On the Basis of Webster's English Dictionary" used Webster's dictionary for most of its text, adding technical words that went with illustrations of machinery.
Webster in early life was something of a freethinker, but in 1808 he became a convert to Calvinistic orthodoxy, and thereafter became a devout Congregationalist who preached the need to Christianize the nation. Webster grew increasingly authoritarian and elitist, fighting against the prevailing grain of Jacksonian Democracy. Webster viewed language as a tool to control unruly thoughts. His "American Dictionary" emphasized the virtues of social control over human passions and individualism, submission to authority, and fear of God; they were necessary for the maintenance of the American social order. As he grew older, Webster's attitudes changed from those of an optimistic revolutionary in the 1780s to those of a pessimistic critic of man and society by the 1820s.
His 1828 "American Dictionary" contained the greatest number of Biblical definitions given in any reference volume. Webster said of education, "Education is useless without the Bible. The Bible was America's basic text book in all fields. God's Word, contained in the Bible, has furnished all necessary rules to direct our conduct." Webster released his own edition of the Bible in 1833, called the Common Version. He used the King James Version (KJV) as a base and consulted the Hebrew and Greek along with various other versions and commentaries. Webster molded the KJV to correct grammar, replaced words that were no longer used, and did away with words and phrases that could be seen as offensive.
In 1834, he published "Value of the Bible and Excellence of the Christian Religion", an apologetic book in defense of the Bible and Christianity itself.
Webster helped found the Connecticut Society for the Abolition of Slavery in 1791, but by the 1830s rejected the new tone among abolitionists that emphasized that Americans who tolerated slavery were themselves sinners. In 1837, Webster warned his daughter Eliza about her fervent support of the abolitionist cause. Webster wrote, "slavery is a great sin and a general calamity—but it is not "our" sin, though it may prove to be a terrible calamity to us in the north. But we cannot legally interfere with the South on this subject." He added, "To come north to preach and thus disturb "our" peace, when we can legally do nothing to effect this object, is, in my view, highly criminal and the preachers of abolitionism deserve the penitentiary."
Noah Webster married Rebecca Greenleaf (1766–1847) on October 26, 1789, in New Haven, Connecticut. They had eight children.
He moved to Amherst, Massachusetts in 1812, where he helped to found Amherst College. In 1822 the family moved back to New Haven, where Webster was awarded an honorary degree from Yale the following year. He is buried in New Haven's Grove Street Cemetery.
Stonewall riots
The Stonewall riots (also referred to as the Stonewall uprising or the Stonewall rebellion) were a series of spontaneous, violent demonstrations by members of the gay (LGBT) community in response to a police raid that began in the early morning hours of June 28, 1969, at the Stonewall Inn in the Greenwich Village neighborhood of Manhattan, New York City. Patrons of the Stonewall, other Village lesbian and gay bars, and neighborhood street people fought back when the police became violent. The riots are widely considered to constitute one of the most important events leading to the gay liberation movement and the modern fight for LGBT rights in the United States.
Gay Americans in the 1950s and 1960s faced an anti-gay legal system. Early homosexual groups in the U.S. sought to prove that gay people could be assimilated into society, and they favored non-confrontational education for homosexuals and heterosexuals alike. The last years of the 1960s, however, were contentious, as many social/political movements were active, including the civil rights movement, the counterculture of the 1960s, and the anti-Vietnam War movement. These influences, along with the liberal environment of Greenwich Village, served as catalysts for the Stonewall riots.
Very few establishments welcomed gay people in the 1950s and 1960s. Those that did were often bars, although bar owners and managers were rarely gay. At the time, the Stonewall Inn was owned by the Mafia. It catered to an assortment of patrons and was known to be popular among the poorest and most marginalized people in the gay community: butch lesbians, effeminate young men, drag queens, male prostitutes, transgender people, and homeless youth. While police raids on gay bars were routine in the 1960s, officers quickly lost control of the situation at the Stonewall Inn on June 28. Tensions between New York City police and gay residents of Greenwich Village erupted into more protests the next evening, and again several nights later. Within weeks, Village residents organized into activist groups to concentrate efforts on establishing places for gay men and lesbians to be open about their sexual orientation without fear of being arrested.
After the Stonewall riots, gay men and lesbians in New York City faced gender, race, class, and generational obstacles to becoming a cohesive community. Within six months, two gay activist organizations were formed in New York, concentrating on confrontational tactics, and three newspapers were established to promote rights for gay men and lesbians. A year after the uprising, to mark the anniversary on June 28, 1970, the first gay pride marches took place in New York, Los Angeles, and San Francisco. The anniversary of the riots was also commemorated in Chicago and similar marches were organized in other cities. Within a few years, gay rights organizations were founded across the U.S. and the world. The Stonewall National Monument was established at the site in 2016.
Today, LGBT Pride events are held annually throughout the world toward the end of June to mark the Stonewall riots. Stonewall 50 – WorldPride NYC 2019 commemorated the 50th anniversary of the Stonewall uprising, with city officials estimating 5 million attendees in Manhattan. On June 6, 2019, New York City Police Commissioner James P. O'Neill rendered a formal apology on behalf of the New York Police Department for the actions of its officers at Stonewall in 1969.
Following the social upheaval of World War II, many people in the United States felt a fervent desire to "restore the prewar social order and hold off the forces of change", according to historian Barry Adam. Spurred by the national emphasis on anti-communism, Senator Joseph McCarthy conducted hearings searching for communists in the U.S. government, the U.S. Army, and other government-funded agencies and institutions, leading to a national paranoia. Anarchists, communists, and other people deemed un-American and subversive were considered security risks. Gay men and lesbians were included in this list by the U.S. State Department on the theory that they were susceptible to blackmail. In 1950, a Senate investigation chaired by Clyde R. Hoey noted in a report, "It is generally believed that those who engage in overt acts of perversion lack the emotional stability of normal persons", and said all of the government's intelligence agencies "are in complete agreement that sex perverts in Government constitute security risks". Between 1947 and 1950, 1,700 federal job applications were denied, 4,380 people were discharged from the military, and 420 were fired from their government jobs for being suspected homosexuals.
Throughout the 1950s and 1960s, the U.S. Federal Bureau of Investigation (FBI) and police departments kept lists of known homosexuals, their favored establishments, and friends; the U.S. Post Office kept track of addresses where material pertaining to homosexuality was mailed. State and local governments followed suit: bars catering to gay men and lesbians were shut down, and their customers were arrested and exposed in newspapers. Cities performed "sweeps" to rid neighborhoods, parks, bars, and beaches of gay people. They outlawed the wearing of opposite-gender clothing, and universities expelled instructors suspected of being homosexual.
In 1952, the American Psychiatric Association listed homosexuality in the "Diagnostic and Statistical Manual" ("DSM") as a mental disorder. A large-scale study of homosexuality in 1962 was used to justify inclusion of the disorder as a supposed pathological hidden fear of the opposite sex caused by traumatic parent–child relationships. This view was widely influential in the medical profession. In 1956, however, the psychologist Evelyn Hooker performed a study that compared the happiness and well-adjusted nature of self-identified homosexual men with heterosexual men and found no difference. Her study stunned the medical community and made her a hero to many gay men and lesbians, but homosexuality remained in the "DSM" until 1974.
In response to this trend, two organizations formed independently of each other to advance the cause of gay men and lesbians and provide social opportunities where they could socialize without fear of being arrested. Los Angeles area homosexuals created the Mattachine Society in 1950, in the home of communist activist Harry Hay. Their objectives were to unify homosexuals, educate them, provide leadership, and assist "sexual deviants" with legal troubles. Facing enormous opposition to their radical approach, in 1953 the Mattachine shifted their focus to assimilation and respectability. They reasoned that they would change more minds about homosexuality by proving that gay men and lesbians were normal people, no different from heterosexuals. Soon after, several women in San Francisco met in their living rooms to form the Daughters of Bilitis (DOB) for lesbians. Although the eight women who created the DOB initially came together to be able to have a safe place to dance, as the DOB grew they developed similar goals to the Mattachine, and urged their members to assimilate into general society.
One of the first challenges to government repression came in 1953. An organization named ONE, Inc. published a magazine called "ONE". The U.S. Postal Service refused to mail its August issue, which concerned homosexual people in heterosexual marriages, on the grounds that the material was obscene despite it being covered in brown paper wrapping. The case eventually went to the Supreme Court, which in 1958 ruled that ONE, Inc. could mail its materials through the Postal Service.
Homophile organizations—as homosexual groups self-identified in this era—grew in number and spread to the East Coast. Gradually, members of these organizations grew bolder. Frank Kameny founded the Mattachine of Washington, D.C. He had been fired from the U.S. Army Map Service for being a homosexual, and sued unsuccessfully to be reinstated. Kameny wrote that homosexuals were no different from heterosexuals, often aiming his efforts at mental health professionals, some of whom attended Mattachine and DOB meetings telling members they were abnormal.
In 1965, news of Cuban prison work camps for homosexuals inspired Mattachine New York and D.C. to organize protests at the United Nations and the White House. Similar demonstrations were subsequently held at other government buildings. The purpose was to protest the treatment of gay people in Cuba and U.S. employment discrimination. These pickets shocked many gay people, and upset some of the leadership of Mattachine and the DOB. At the same time, demonstrations in the civil rights movement and opposition to the Vietnam War all grew in prominence, frequency, and severity throughout the 1960s, as did their confrontations with police forces.
On the outer fringes of the few small gay communities were people who challenged gender expectations. They were effeminate men and masculine women, or people who dressed and lived in contrast to their gender assigned at birth, either part or full-time. Contemporaneous nomenclature classified them as transvestites, and they were the most visible representatives of sexual minorities. They belied the carefully crafted image portrayed by the Mattachine Society and DOB that asserted homosexuals were respectable, normal people. The Mattachine and DOB considered the trials of being arrested for wearing clothing of the opposite gender as a parallel to the struggles of homophile organizations: similar but distinctly separate.
Gay, lesbian, bisexual, and transgender people staged a small riot at the Cooper Do-nuts cafe in Los Angeles in 1959 in response to police harassment. In a larger 1966 event in San Francisco, drag queens, hustlers, and trans women were sitting in Compton's Cafeteria when the police arrived to arrest people appearing to be physically male who were dressed as women. A riot ensued, with the cafeteria patrons slinging cups, plates, and saucers, and breaking the plexiglass windows in the front of the restaurant, and returning several days later to smash the windows again after they were replaced. Professor Susan Stryker classifies the Compton's Cafeteria riot as an "act of anti-transgender discrimination, rather than an act of discrimination against sexual orientation" and connects the uprising to the issues of gender, race, and class that were being downplayed by homophile organizations. It marked the beginning of transgender activism in San Francisco.
The Manhattan neighborhoods of Greenwich Village and Harlem were home to sizable gay and lesbian populations after World War I, when people who had served in the military took advantage of the opportunity to settle in larger cities. The enclaves of gay men and lesbians, described by a newspaper story as "short-haired women and long-haired men", developed a distinct subculture through the following two decades. Prohibition inadvertently benefited gay establishments, as drinking alcohol was pushed underground along with other behaviors considered immoral. New York City passed laws against homosexuality in public and private businesses, but because alcohol was in high demand, speakeasies and impromptu drinking establishments were so numerous and temporary that authorities were unable to police them all. However, police raids continued, resulting in the closure of iconic establishments such as Eve's Hangout in 1926.
The social repression of the 1950s resulted in a cultural revolution in Greenwich Village. A cohort of poets, later named the Beat poets, wrote about the evils of the social organization at the time, glorifying anarchy, drugs, and hedonistic pleasures over unquestioning social compliance, consumerism, and closed-mindedness. Of them, Allen Ginsberg and William S. Burroughs—both Greenwich Village residents—also wrote bluntly and honestly about homosexuality. Their writings attracted sympathetic liberal-minded people, as well as homosexuals looking for a community.
By the early 1960s, a campaign to rid New York City of gay bars was in full effect by order of Mayor Robert F. Wagner, Jr., who was concerned about the image of the city in preparation for the 1964 World's Fair. The city revoked the liquor licenses of the bars, and undercover police officers worked to entrap as many homosexual men as possible. Entrapment usually involved an undercover officer finding a man in a bar or public park and engaging him in conversation; if the conversation headed toward the possibility that they might leave together, or if the officer bought the man a drink, the man was arrested for solicitation. One story in the "New York Post" described an arrest in a gym locker room, where an officer grabbed his own crotch and moaned; a man who asked him if he was all right was arrested. Few lawyers would defend cases as undesirable as these, and some of those lawyers kicked back their fees to the arresting officer.
The Mattachine Society succeeded in getting newly elected mayor John Lindsay to end the campaign of police entrapment in New York City. They had a more difficult time with the New York State Liquor Authority (SLA). While no laws prohibited serving homosexuals, courts allowed the SLA discretion in approving and revoking liquor licenses for businesses that might become "disorderly". Despite the high population of gay men and lesbians who called Greenwich Village home, very few places existed, other than bars, where they were able to congregate openly without being harassed or arrested. In 1966 the New York Mattachine held a "sip-in" at a Greenwich Village bar named Julius, which was frequented by gay men, to illustrate the discrimination homosexuals faced.
None of the bars frequented by gay men and lesbians were owned by gay people. Almost all of them were owned and controlled by organized crime, which treated the regulars poorly, watered down the liquor, and overcharged for drinks. However, the owners also paid off police to prevent frequent raids.
The Stonewall Inn, located at 51 and 53 Christopher Street, along with several other establishments in the city, was owned by the Genovese crime family. In 1966, three members of the Mafia invested $3,500 to turn the Stonewall Inn into a gay bar, after it had been a restaurant and a nightclub for heterosexuals. Once a week a police officer would collect envelopes of cash as a payoff known as "gayola", as the Stonewall Inn had no liquor license. It had no running water behind the bar—dirty glasses were run through tubs of water and immediately reused. There were no fire exits, and the toilets overflowed consistently. Though the bar was not used for prostitution, drug sales and other "cash transactions" took place. It was the only bar for gay men in New York City where dancing was allowed; dancing was its main draw since its re-opening as a gay club.
Visitors to the Stonewall Inn in 1969 were greeted by a bouncer who inspected them through a peephole in the door. The legal drinking age was 18, and to avoid unwittingly letting in undercover police (who were called "Lily Law", "Alice Blue Gown", or "Betty Badge"), visitors would have to be known by the doorman, or look gay. The entrance fee on weekends was $3, for which the customer received two tickets that could be exchanged for two drinks. Patrons were required to sign their names in a book to prove that the bar was a private "bottle club", but rarely signed their real names. There were two dance floors in the Stonewall; the interior was painted black, making it very dark inside, with pulsing gel lights or black lights. If police were spotted, regular white lights were turned on, signaling that everyone should stop dancing or touching. In the rear of the bar was a smaller room frequented by "queens"; it was one of two bars where effeminate men who wore makeup and teased their hair (though dressed in men's clothing) could go. Only a few transvestites, or men in full drag, were allowed in by the bouncers. The customers were "98 percent male" but a few lesbians sometimes came to the bar. Younger homeless adolescent males, who slept in nearby Christopher Park, would often try to get in so customers would buy them drinks. The age of the clientele ranged between the upper teens and early thirties, and the racial mix was evenly distributed among white, black, and Hispanic patrons. Because of its even mix of people, its location, and the attraction of dancing, the Stonewall Inn was known by many as ""the" gay bar in the city".
Police raids on gay bars were frequent—occurring on average once a month for each bar. Many bars kept extra liquor in a secret panel behind the bar, or in a car down the block, to facilitate resuming business as quickly as possible if alcohol was seized. Bar management usually knew about raids beforehand due to police tip-offs, and raids occurred early enough in the evening that business could commence after the police had finished. During a typical raid, the lights were turned on, and customers were lined up and their identification cards checked. Those without identification or dressed in full drag were arrested; others were allowed to leave. Some of the men, including those in drag, used their draft cards as identification. Women were required to wear three pieces of feminine clothing, and would be arrested if found not wearing them. Employees and management of the bars were also typically arrested. The period immediately before June 28, 1969, was marked by frequent raids of local bars—including a raid at the Stonewall Inn on the Tuesday before the riots—and the closing of the Checkerboard, the Tele-Star, and two other clubs in Greenwich Village.
At 1:20 a.m. on Saturday, June 28, 1969, four plainclothes policemen in dark suits, two patrol officers in uniform, and Detective Charles Smythe and Deputy Inspector Seymour Pine arrived at the Stonewall Inn's double doors and announced "Police! We're taking the place!" Stonewall employees do not recall being tipped off that a raid was to occur that night, as was the custom. According to Duberman (p. 194), there was a rumor that one might happen, but since it was much later than raids generally took place, Stonewall management thought the tip was inaccurate.
Historian David Carter presents information indicating that the Mafia owners of the Stonewall and the manager were blackmailing wealthier customers, particularly those who worked in the Financial District. They appeared to be making more money from extortion than they were from liquor sales in the bar. Carter deduces that when the police were unable to receive kickbacks from blackmail and the theft of negotiable bonds (facilitated by pressuring gay Wall Street customers), they decided to close the Stonewall Inn permanently. Two undercover policewomen and two undercover policemen had entered the bar earlier that evening to gather visual evidence, as the Public Morals Squad waited outside for the signal. Once inside, they called for backup from the Sixth Precinct using the bar's pay telephone. The music was turned off and the main lights were turned on. Approximately 205 people were in the bar that night. Patrons who had never experienced a police raid were confused. A few who realized what was happening began to run for doors and windows in the bathrooms, but police barred the doors. Michael Fader remembered,
"Things happened so fast you kind of got caught not knowing. All of a sudden there were police there and we were told to all get in lines and to have our identification ready to be led out of the bar."
The raid did not go as planned. Standard procedure was to line up the patrons, check their identification, and have female police officers take customers dressed as women to the bathroom to verify their sex, upon which any people appearing to be physically male and dressed as women would be arrested. Those dressed as women that night refused to go with the officers. Men in line began to refuse to produce their identification. The police decided to take everyone present to the police station, after separating those cross-dressing in a room in the back of the bar. Maria Ritter, then known as male to her family, recalled, "My biggest fear was that I would get arrested. My second biggest fear was that my picture would be in a newspaper or on a television report in my mother's dress!" Both patrons and police recalled that a sense of discomfort spread very quickly, spurred by police who began to assault some of the lesbians by "feeling some of them up inappropriately" while frisking them.
The police were to transport the bar's alcohol in patrol wagons. Twenty-eight cases of beer and nineteen bottles of hard liquor were seized, but the patrol wagons had not yet arrived, so patrons were required to wait in line for about 15 minutes. Those who were not arrested were released from the front door, but they did not leave quickly as usual. Instead, they stopped outside and a crowd began to grow and watch. Within minutes, between 100 and 150 people had congregated outside, some after they were released from inside the Stonewall, and some after noticing the police cars and the crowd. Although the police forcefully pushed or kicked some patrons out of the bar, some customers released by the police performed for the crowd by posing and saluting the police in an exaggerated fashion. The crowd's applause encouraged them further: "Wrists were limp, hair was primped, and reactions to the applause were classic."
When the first patrol wagon arrived, Inspector Pine recalled that the crowd—most of whom were homosexual—had grown to at least ten times the number of people who were arrested, and they all became very quiet. Confusion over radio communication delayed the arrival of a second wagon. The police began escorting Mafia members into the first wagon, to the cheers of the bystanders. Next, regular employees were loaded into the wagon. A bystander shouted, "Gay power!", someone began singing "We Shall Overcome", and the crowd reacted with amusement and general good humor mixed with "growing and intensive hostility". An officer shoved a transvestite, who responded by hitting him on the head with her purse as the crowd began to boo. Author Edmund White, who had been passing by, recalled, "Everyone's restless, angry, and high-spirited. No one has a slogan, no one even has an attitude, but something's brewing." Pennies, then beer bottles, were thrown at the wagon as a rumor spread through the crowd that patrons still inside the bar were being beaten.
A scuffle broke out when a woman in handcuffs was escorted from the door of the bar to the waiting police wagon several times. She escaped repeatedly and fought with four of the police, swearing and shouting, for about ten minutes. Described as "a typical New York butch" and "a dyke–stone butch", she had been hit on the head by an officer with a baton for, as one witness claimed, complaining that her handcuffs were too tight. Bystanders recalled that the woman, whose identity remains unknown (Stormé DeLarverie has been identified by some, including herself, as the woman, but accounts vary), sparked the crowd to fight when she looked at bystanders and shouted, "Why don't you guys do something?" After an officer picked her up and heaved her into the back of the wagon, the crowd became a mob and went "berserk": "It was at that moment that the scene became explosive."
The police tried to restrain some of the crowd, knocking a few people down, which incited bystanders even more. Some of those handcuffed in the wagon escaped when police left them unattended (deliberately, according to some witnesses). As the crowd tried to overturn the police wagon, two police cars and the wagon—with a few slashed tires—left immediately, with Inspector Pine urging them to return as soon as possible. The commotion attracted more people who learned what was happening. Someone in the crowd declared that the bar had been raided because "they didn't pay off the cops", to which someone else yelled "Let's pay them off!" Coins sailed through the air towards the police as the crowd shouted "Pigs!" and "Faggot cops!" Beer cans were thrown and the police lashed out, dispersing some of the crowd who found a construction site nearby with stacks of bricks. The police, outnumbered by between 500 and 600 people, grabbed several people, including folk singer and mentor of Bob Dylan, Dave Van Ronk—who had been attracted to the revolt from a bar two doors away from the Stonewall. Though Van Ronk was not gay, he had experienced police violence when he participated in antiwar demonstrations: "As far as I was concerned, anybody who'd stand against the cops was all right with me, and that's why I stayed in... Every time you turned around the cops were pulling some outrage or another." Van Ronk was one of thirteen arrested that night. Ten police officers—including two policewomen—barricaded themselves, Van Ronk, Howard Smith (a column writer for "The Village Voice"), and several handcuffed detainees inside the Stonewall Inn for their own safety.
Multiple accounts of the riot assert that there was no pre-existing organization or apparent cause for the demonstration; what ensued was spontaneous. Michael Fader explained,
We all had a collective feeling like we'd had enough of this kind of shit. It wasn't anything tangible anybody said to anyone else, it was just kind of like everything over the years had come to a head on that one particular night in the one particular place, and it was not an organized demonstration... Everyone in the crowd felt that we were never going to go back. It was like the last straw. It was time to reclaim something that had always been taken from us... All kinds of people, all different reasons, but mostly it was total outrage, anger, sorrow, everything combined, and everything just kind of ran its course. It was the police who were doing most of the destruction. We were really trying to get back in and break free. And we felt that we had freedom at last, or freedom to at least show that we demanded freedom. We weren't going to be walking meekly in the night and letting them shove us around—it's like standing your ground for the first time and in a really strong way, and that's what caught the police by surprise. There was something in the air, freedom a long time overdue, and we're going to fight for it. It took different forms, but the bottom line was, we weren't going to go away. And we didn't.
The only known photograph taken during the first night of the riots shows the homeless youth who slept in nearby Christopher Park, scuffling with police. The Mattachine Society newsletter a month later offered its explanation of why the riots occurred: "It catered largely to a group of people who are not welcome in, or cannot afford, other places of homosexual social gathering... The Stonewall became home to these kids. When it was raided, they fought for it. That, and the fact that they had nothing to lose other than the most tolerant and broadminded gay place in town, explains why."
Garbage cans, garbage, bottles, rocks, and bricks were hurled at the building, breaking the windows. Witnesses attest that "flame queens", hustlers, and gay "street kids"—the most outcast people in the gay community—were responsible for the first volley of projectiles, as well as the uprooting of a parking meter used as a battering ram on the doors of the Stonewall Inn. Sylvia Rivera, a self-identified street queen remembered:
You've been treating us like shit all these years? Uh-uh. Now it's our turn!... It was one of the greatest moments in my life.
The mob lit garbage on fire and stuffed it through the broken windows as the police grabbed a fire hose. Because it had no water pressure, the hose was ineffective in dispersing the crowd, and seemed only to encourage them.
The Tactical Patrol Force (TPF) of the New York City Police Department arrived to free the police trapped inside the Stonewall. One officer's eye was cut, and a few others were bruised from being struck by flying debris. Bob Kohler, who was walking his dog by the Stonewall that night, saw the TPF arrive: "I had been in enough riots to know the fun was over... The cops were totally humiliated. This never, ever happened. They were angrier than I guess they had ever been, because everybody else had rioted... but the fairies were not supposed to riot... no group had ever forced cops to retreat before, so the anger was just enormous. I mean, they wanted to kill." With larger numbers, police detained anyone they could and put them in patrol wagons to go to jail, though Inspector Pine recalled, "Fights erupted with the transvestites, who wouldn't go into the patrol wagon." His recollection was corroborated by another witness across the street who said, "All I could see about who was fighting was that it was transvestites and they were fighting furiously."
The TPF formed a phalanx and attempted to clear the streets by marching slowly and pushing the crowd back. The mob openly mocked the police. The crowd cheered, started impromptu kick lines, and sang to the tune of Ta-ra-ra Boom-de-ay: "We are the Stonewall girls/ We wear our hair in curls/ We don't wear underwear/ We show our pubic hair." Lucian Truscott reported in "The Village Voice": "A stagnant situation there brought on some gay tomfoolery in the form of a chorus line facing the line of helmeted and club-carrying cops. Just as the line got into a full kick routine, the TPF advanced again and cleared the crowd of screaming gay power[-]ites down Christopher to Seventh Avenue." One participant who had been in the Stonewall during the raid recalled, "The police rushed us, and that's when I realized this is not a good thing to do, because they got me in the back with a nightstick." Another account stated, "I just can't ever get that one sight out of my mind. The cops with the [nightsticks] and the kick line on the other side. It was the most amazing thing... And all the sudden that kick line, which I guess was a spoof on the machismo... I think that's when I felt rage. Because people were getting smashed with bats. And for what? A kick line."
Craig Rodwell, owner of the Oscar Wilde Memorial Bookshop, reported watching police chase participants through the crooked streets, only to see them appear around the next corner behind the police. Members of the mob stopped cars, overturning one of them to block Christopher Street. Jack Nichols and Lige Clarke, in their column printed in "Screw", declared that "massive crowds of angry protesters chased [the police] for blocks screaming, 'Catch them!' "
By 4:00 a.m., the streets had nearly been cleared. Many people sat on stoops or gathered nearby in Christopher Park throughout the morning, dazed in disbelief at what had transpired. Many witnesses remembered the surreal and eerie quiet that descended upon Christopher Street, though there continued to be "electricity in the air". One commented: "There was a certain beauty in the aftermath of the riot... It was obvious, at least to me, that a lot of people really were gay and, you know, this was our street." Thirteen people had been arrested. Some in the crowd were hospitalized, and four police officers were injured. Almost everything in the Stonewall Inn was broken. Inspector Pine had intended to close and dismantle the Stonewall Inn that night. Pay phones, toilets, mirrors, jukeboxes, and cigarette machines were all smashed, possibly in the riot and possibly by the police.
During the siege of the Stonewall, Craig Rodwell called "The New York Times", the "New York Post", and the "Daily News" to inform them what was happening. All three papers covered the riots; the "Daily News" placed coverage on the front page. News of the riot spread quickly throughout Greenwich Village, fueled by rumors that it had been organized by the Students for a Democratic Society, the Black Panthers, or triggered by "a homosexual police officer whose roommate went dancing at the Stonewall against the officer's wishes". All day Saturday, June 28, people came to stare at the burned and blackened Stonewall Inn. Graffiti appeared on the walls of the bar, declaring "Drag power", "They invaded our rights", "Support gay power", and "Legalize gay bars", along with accusations of police looting, and—regarding the status of the bar—"We are open."
The next night, rioting again surrounded Christopher Street; participants remember differently which night was more frantic or violent. Many of the same people who had been there the previous evening returned—hustlers, street youths, and "queens"—but they were joined by "police provocateurs", curious bystanders, and even tourists. Remarkable to many was the sudden exhibition of homosexual affection in public, as described by one witness: "From going to places where you had to knock on a door and speak to someone through a peephole in order to get in. We were just out. We were in the streets."
Thousands of people had gathered in front of the Stonewall, which had opened again, choking Christopher Street until the crowd spilled into adjoining blocks. The throng surrounded buses and cars, harassing the occupants unless they either admitted they were gay or indicated their support for the demonstrators. Sylvia Rivera saw a friend of hers jump on a nearby car trying to drive through; the crowd rocked the car back and forth, terrifying its occupants. Another of Rivera's friends, Marsha P. Johnson, an African-American street queen, climbed a lamppost and dropped a heavy bag onto the hood of a police car, shattering the windshield. As on the previous evening, fires were started in garbage cans throughout the neighborhood. More than a hundred police were present from the Fourth, Fifth, Sixth, and Ninth Precincts, but after 2:00 a.m. the TPF arrived again. Kick lines and police chases waxed and waned; when police captured demonstrators, whom the majority of witnesses described as "sissies" or "swishes", the crowd surged to recapture them. Street battling ensued again until 4:00 a.m.
Beat poet and longtime Greenwich Village resident Allen Ginsberg lived on Christopher Street, and happened upon the jubilant chaos. After he learned of the riot that had occurred the previous evening, he stated, "Gay power! Isn't that great!... It's about time we did something to assert ourselves", and visited the open Stonewall Inn for the first time. While walking home, he declared to Lucian Truscott, "You know, the guys there were so beautiful—they've lost that wounded look that fags all had 10 years ago."
Activity in Greenwich Village was sporadic on Monday and Tuesday, partly due to rain. Police and Village residents had a few altercations, as both groups antagonized each other. Craig Rodwell and his partner Fred Sargeant took the opportunity the morning after the first riot to print and distribute 5,000 leaflets, one of them reading: "Get the Mafia and the Cops out of Gay Bars." The leaflets called for gay people to own their own establishments, for a boycott of the Stonewall and other Mafia-owned bars, and for public pressure on the mayor's office to investigate the "intolerable situation".
Not everyone in the gay community considered the revolt a positive development. To many older homosexuals and many members of the Mattachine Society who had worked throughout the 1960s to promote homosexuals as no different from heterosexuals, the display of violence and effeminate behavior was embarrassing. Randy Wicker, who had marched in the first gay picket lines before the White House in 1965, said the "screaming queens forming chorus lines and kicking went against everything that I wanted people to think about homosexuals... that we were a bunch of drag queens in the Village acting disorderly and tacky and cheap." Others found the closing of the Stonewall Inn, termed a "sleaze joint", as advantageous to the Village.
On Wednesday, however, "The Village Voice" ran reports of the riots, written by Howard Smith and Lucian Truscott, that included unflattering descriptions of the events and its participants: "forces of faggotry", "limp wrists", and "Sunday fag follies". A mob descended upon Christopher Street once again and threatened to burn down the offices of "The Village Voice". Also in the mob of between 500 and 1,000 were other groups that had had unsuccessful confrontations with the police, and were curious how the police were defeated in this situation. Another explosive street battle took place, with injuries to demonstrators and police alike, local shops getting looted (apparently caused by nongay protesters), and arrests of five people. The incidents on Wednesday night lasted about an hour, and were summarized by one witness: "The word is out. Christopher Street shall be liberated. The fags have had it with oppression."
The feeling of urgency spread throughout Greenwich Village, even to people who had not witnessed the riots. Many who were moved by the rebellion attended organizational meetings, sensing an opportunity to take action. On July 4, 1969, the Mattachine Society performed its annual picketing in front of Independence Hall in Philadelphia, called the Annual Reminder. Organizers Craig Rodwell, Frank Kameny, Randy Wicker, Barbara Gittings, and Kay Lahusen, who had all participated for several years, took a bus along with other picketers from New York City to Philadelphia. Since 1965, the pickets had been very controlled: women wore skirts and men wore suits and ties, and all marched quietly in organized lines. This year Rodwell remembered feeling restricted by the rules Kameny had set. When two women spontaneously held hands, Kameny broke them apart, saying, "None of that! None of that!" Rodwell, however, convinced about ten couples to hold hands. The hand-holding couples made Kameny furious, but they earned more press attention than all of the previous marches. Participant Lilli Vincenz remembered, "It was clear that things were changing. People who had felt oppressed now felt empowered." Rodwell returned to New York City determined to change the established quiet, meek ways of trying to get attention. One of his first priorities was planning Christopher Street Liberation Day.
Although the Mattachine Society had existed since the 1950s, many of their methods now seemed too mild for people who had witnessed or been inspired by the riots. Mattachine recognized the shift in attitudes in a story from their newsletter entitled, "The Hairpin Drop Heard Around the World." When a Mattachine officer suggested an "amicable and sweet" candlelight vigil demonstration, a man in the audience fumed and shouted, "Sweet! "Bullshit!" That's the role society has been forcing these queens to play." With a flyer announcing: "Do You Think Homosexuals Are Revolting? You Bet Your Sweet Ass We Are!", the Gay Liberation Front (GLF) was soon formed, the first gay organization to use "gay" in its name. Previous organizations such as the Mattachine Society, the Daughters of Bilitis, and various homophile groups had masked their purpose by deliberately choosing obscure names.
The rise of militancy became apparent to Frank Kameny and Barbara Gittings—who had worked in homophile organizations for years and were both very public about their roles—when they attended a GLF meeting to see the new group. A young GLF member demanded to know who they were and what their credentials were. Gittings, nonplussed, stammered, "I'm gay. That's why I'm here." The GLF borrowed tactics from and aligned themselves with black and antiwar demonstrators with the ideal that they "could work to restructure American society". They took on causes of the Black Panthers, marching to the Women's House of Detention in support of Afeni Shakur, and other radical New Left causes. Four months after the group formed, however, it disbanded when members were unable to agree on operating procedure.
Within six months of the Stonewall riots, activists started a citywide newspaper called "Gay"; they considered it necessary because the most liberal publication in the city—"The Village Voice"—refused to print the word "gay" in GLF advertisements seeking new members and volunteers. Two other newspapers were initiated within a six-week period: "Come Out!" and "Gay Power"; the readership of these three periodicals quickly climbed to between 20,000 and 25,000.
GLF members organized several same-sex dances, but GLF meetings were chaotic. When Bob Kohler asked for clothes and money to help the homeless youth who had participated in the riots, many of whom slept in Christopher Park or Sheridan Square, the response was a discussion on the downfall of capitalism. In late December 1969, several people who had visited GLF meetings and left out of frustration formed the Gay Activists Alliance (GAA). The GAA was to be entirely focused on gay issues, and more orderly. Their constitution started, "We as liberated homosexual activists demand the freedom for expression of our dignity and value as human beings." The GAA developed and perfected a confrontational tactic called a zap, where they would catch a politician off guard during a public relations opportunity, and force him or her to acknowledge gay and lesbian rights. City councilmen were zapped, and Mayor John Lindsay was zapped several times—once on television when GAA members made up the majority of the audience.
Raids on gay bars did not stop after the Stonewall riots. In March 1970, Deputy Inspector Seymour Pine raided the Zodiac and 17 Barrow Street. An after-hours gay club with no liquor or occupancy licenses called The Snake Pit was soon raided, and 167 people were arrested. One of them was Diego Viñales, an Argentinian national so frightened that he might be deported as a homosexual that he tried to escape the police precinct by jumping out of a two-story window, impaling himself on a spike fence. "The New York Daily News" printed a graphic photo of the young man's impalement on the front page. GAA members organized a march from Christopher Park to the Sixth Precinct in which hundreds of gay men, lesbians, and liberal sympathizers peacefully confronted the TPF. They also sponsored a letter-writing campaign to Mayor Lindsay in which the Greenwich Village Democratic Party and Congressman Ed Koch sent pleas to end raids on gay bars in the city.
The Stonewall Inn lasted only a few weeks after the riot. By October 1969 it was up for rent. Village residents surmised it was too notorious a location, and Rodwell's boycott discouraged business.
Christopher Street Liberation Day on June 28, 1970 marked the first anniversary of the Stonewall riots with an assembly on Christopher Street; with simultaneous Gay Pride marches in Los Angeles and Chicago, these were the first Gay Pride marches in U.S. history. The next year, Gay Pride marches took place in Boston, Dallas, Milwaukee, London, Paris, West Berlin, and Stockholm. The march in New York covered 51 blocks, from Christopher Street to Central Park. The march took less than half the scheduled time due to excitement, but also due to wariness about walking through the city with gay banners and signs. Although the parade permit was delivered only two hours before the start of the march, the marchers encountered little resistance from onlookers. "The New York Times" reported (on the front page) that the marchers took up the entire street for about 15 city blocks. Reporting by "The Village Voice" was positive, describing "the out-front resistance that grew out of the police raid on the Stonewall Inn one year ago".
By 1972, the participating cities included Atlanta, Buffalo, Detroit, Washington, D.C., Miami, Minneapolis, and Philadelphia, as well as San Francisco.
Frank Kameny soon realized the pivotal change brought by the Stonewall riots. An organizer of gay activism in the 1950s, he was used to persuasion, trying to convince heterosexuals that gay people were no different than they were. When he and other people marched in front of the White House, the State Department, and Independence Hall only five years earlier, their objective was to look as if they could work for the U.S. government. Ten people marched with Kameny then, and they alerted no press to their intentions. Although he was stunned by the upheaval by participants in the Annual Reminder in 1969, he later observed, "By the time of Stonewall, we had fifty to sixty gay groups in the country. A year later there was at least fifteen hundred. By two years later, to the extent that a count could be made, it was twenty-five hundred."
Similar to Kameny's regret at his own reaction to the shift in attitudes after the riots, Randy Wicker came to describe his embarrassment as "one of the greatest mistakes of his life". The image of gay people retaliating against police, after so many years of allowing such treatment to go unchallenged, "stirred an unexpected spirit among many homosexuals". Kay Lahusen, who photographed the marches in 1965, stated, "Up to 1969, this movement was generally called the homosexual or homophile movement... Many new activists consider the Stonewall uprising the birth of the gay liberation movement. Certainly it was the birth of gay pride on a massive scale." David Carter, in his article "What made Stonewall different", explained that even though there were several uprisings before Stonewall, the reason Stonewall was so historical was that thousands of people were involved, the riot lasted a long time (six days), it was the first to get major media coverage, and it sparked the formation of many gay rights groups.
Within two years of the Stonewall riots there were gay rights groups in every major American city, as well as Canada, Australia, and Western Europe. People who joined activist organizations after the riots had very little in common other than their same-sex attraction. Many who arrived at GLF or GAA meetings were taken aback by the number of gay people in one place. Race, class, ideology, and gender became frequent obstacles in the years after the riots. This was illustrated during the 1973 Stonewall rally when, moments after Barbara Gittings exuberantly praised the diversity of the crowd, feminist activist Jean O'Leary protested what she perceived as the mocking of women by cross-dressers and drag queens in attendance. During a speech by O'Leary, in which she claimed that drag queens made fun of women for entertainment value and profit, Sylvia Rivera and Lee Brewster jumped on the stage and shouted "You go to bars because of what drag queens did for you, and "these bitches" tell us to quit being ourselves!" Both the drag queens and lesbian feminists in attendance left in disgust.
O'Leary also worked in the early 1970s to exclude transgender people from gay rights issues because she felt that rights for transgender people would be too difficult to attain. Sylvia Rivera left New York City in the mid-1970s, relocating to upstate New York, but later returned to the city in the mid-1990s to advocate for homeless members of the gay community. The initial disagreements between participants in the movements, however, often evolved after further reflection. O'Leary later regretted her stance against the drag queens attending in 1973: "Looking back, I find this so embarrassing because my views have changed so much since then. I would never pick on a transvestite now." "It was horrible. How could I work to exclude transvestites and at the same time criticize the feminists who were doing their best back in those days to exclude lesbians?"
O'Leary was referring to the Lavender Menace, a description by second wave feminist Betty Friedan for attempts by members of the National Organization for Women (NOW) to distance themselves from the perception of NOW as a haven for lesbians. As part of this process, Rita Mae Brown and other lesbians who had been active in NOW were forced out. They staged a protest in 1970 at the Second Congress to Unite Women, and earned the support of many NOW members, finally gaining full acceptance in 1971.
The growth of lesbian feminism in the 1970s at times so conflicted with the gay liberation movement that some lesbians refused to work with gay men. Many lesbians found men's attitudes patriarchal and chauvinistic, and saw in gay men the same misguided notions about women as they saw in heterosexual men. The issues most important to gay men—entrapment and public solicitation—were not shared by lesbians. In 1977 a Lesbian Pride Rally was organized as an alternative to sharing gay men's issues, especially what Adrienne Rich termed "the violent, self-destructive world of the gay bars". Veteran gay activist Barbara Gittings chose to work in the gay rights movement, explaining "It's a matter of where does it hurt the most? For me it hurts the most not in the female arena, but the gay arena."
Throughout the 1970s, gay activism had significant successes. One of the first and most important was the "zap" in May 1970 by the Los Angeles GLF at a convention of the American Psychiatric Association (APA). At a conference on behavior modification, during a film demonstrating the use of electroshock therapy to decrease same-sex attraction, Morris Kight and GLF members in the audience interrupted the film with shouts of "Torture!" and "Barbarism!" They took over the microphone to announce that medical professionals who prescribed such therapy for their homosexual patients were complicit in torturing them. Although 20 psychiatrists in attendance left, the GLF spent the hour following the zap with those remaining, trying to convince them that homosexual people were not mentally ill. When the APA invited gay activists to speak to the group in 1972, activists brought John E. Fryer, a gay psychiatrist who wore a mask, because he felt his practice was in danger. In December 1973—in large part due to the efforts of gay activists—the APA voted unanimously to remove homosexuality from the "Diagnostic and Statistical Manual".
Gay men and lesbians came together to work in grassroots political organizations responding to organized resistance in 1977. A coalition of conservatives named Save Our Children staged a campaign to repeal a civil rights ordinance in Dade County, Florida. Save Our Children was successful enough to influence similar repeals in several American cities in 1978. However, the same year a campaign in California called the Briggs Initiative, designed to force the dismissal of homosexual public school employees, was defeated. Reaction to the influence of Save Our Children and the Briggs Initiative in the gay community was so significant that it has been called the second Stonewall for many activists, marking their initiation into political participation. The subsequent 1979 National March on Washington for Lesbian and Gay Rights was timed to coincide with the ten-year anniversary of the Stonewall riots.
The Stonewall riots marked such a significant turning point that many aspects of prior gay and lesbian culture, such as bar culture formed from decades of shame and secrecy, were forcefully ignored and denied. Historian Martin Duberman writes, "The decades preceding Stonewall... continue to be regarded by most gay men and lesbians as some vast neolithic wasteland." Sociologist Barry Adam notes, "Every social movement must choose at some point what to retain and what to reject out of its past. What traits are the results of oppression and what are healthy and authentic?" In conjunction with the growing feminist movement of the early 1970s, roles of butch and femme that developed in lesbian bars in the 1950s and 1960s were rejected, because as one writer put it: "all role playing is sick." Lesbian feminists considered the butch roles as archaic imitations of masculine behavior. Some women, according to Lillian Faderman, were eager to shed the roles they felt forced into playing. The roles returned for some women in the 1980s, although they allowed for more flexibility than before Stonewall.
Author Michael Bronski highlights the "attack on pre-Stonewall culture", particularly gay pulp fiction for men, where the themes often reflected self-hatred or ambivalence about being gay. Many books ended unsatisfactorily and drastically, often with suicide, and writers portrayed their gay characters as alcoholics or deeply unhappy. These books, which he describes as "an enormous and cohesive literature by and for gay men", have not been reissued and are lost to later generations. Dismissing the reason simply as political correctness, Bronski writes, "gay liberation was a youth movement whose sense of history was defined to a large degree by rejection of the past."
The riots spawned from a bar raid became a literal example of gay men and lesbians fighting back, and a symbolic call to arms for many people. Historian David Carter remarks in his book about the Stonewall riots that the bar itself was a complex business that represented a community center, an opportunity for the Mafia to blackmail its own customers, a home, and a place of "exploitation and degradation". The true legacy of the Stonewall riots, Carter insists, is the "ongoing struggle for lesbian, gay, bisexual, and transgender equality". Historian Nicholas Edsall writes,
Stonewall has been compared to any number of acts of radical protest and defiance in American history from the Boston Tea Party on. But the best and certainly a more nearly contemporary analogy is with Rosa Parks' refusal to move to the back of the bus in Montgomery, Alabama, in December 1955, which sparked the modern civil rights movement. Within months after Stonewall radical gay liberation groups and newsletters sprang up in cities and on college campuses across America and then across all of northern Europe as well.
Before the rebellion at the Stonewall Inn, homosexuals were, as historians Dudley Clendinen and Adam Nagourney write,
a secret legion of people, known of but discounted, ignored, laughed at or despised. And like the holders of a secret, they had an advantage which was a disadvantage, too, and which was true of no other minority group in the United States. They were invisible. Unlike African Americans, women, Native Americans, Jews, the Irish, Italians, Asians, Hispanics, or any other cultural group which struggled for respect and equal rights, homosexuals had no physical or cultural markings, no language or dialect which could identify them to each other, or to anyone else... But that night, for the first time, the usual acquiescence turned into violent resistance... From that night the lives of millions of gay men and lesbians, and the attitude toward them of the larger culture in which they lived, began to change rapidly. People began to appear in public as homosexuals, demanding respect.
Historian Lillian Faderman calls the riots the "shot heard round the world", explaining, "The Stonewall Rebellion was crucial because it sounded the rally for that movement. It became an emblem of gay and lesbian power. By calling on the dramatic tactic of violent protest that was being used by other oppressed groups, the events at the Stonewall implied that homosexuals had as much reason to be disaffected as they."
Joan Nestle co-founded the Lesbian Herstory Archives in 1974, and credits "its creation to that night and the courage that found its voice in the streets." Cautious, however, not to attribute the start of gay activism to the Stonewall riots, Nestle writes,
I certainly don't see gay and lesbian history starting with Stonewall... and I don't see resistance starting with Stonewall. What I do see is a historical coming together of forces, and the sixties changed how human beings endured things in this society and what they refused to endure... Certainly something special happened on that night in 1969, and we've made it more special in our need to have what I call a point of origin... it's more complex than saying that it all started with Stonewall.
The events of the early morning of June 28, 1969 were not the first instances of gay men and lesbians fighting back against police in New York City and elsewhere. Not only had the Mattachine Society been active in major cities such as Los Angeles and Chicago, but similarly marginalized people started the riot at Compton's Cafeteria in 1966, and another riot responded to a raid on Los Angeles' Black Cat Tavern in 1967. However, several circumstances were in place that made the Stonewall riots memorable. The location of the Lower Manhattan raid was a factor: it was across the street from "The Village Voice" offices, and the narrow crooked streets gave the rioters advantage over the police. Many of the participants and residents of Greenwich Village were involved in political organizations that were effectively able to mobilize a large and cohesive gay community in the weeks and months after the rebellion. The most significant facet of the Stonewall riots, however, was the commemoration of them in Christopher Street Liberation Day, which grew into the annual Gay Pride events around the world.
Stonewall (officially Stonewall Equality Limited) is an LGBT rights charity in the United Kingdom, founded in 1989, and named after the Stonewall Inn because of the Stonewall riots. The Stonewall Awards is an annual event the charity has held since 2006 to recognize people who have affected the lives of British lesbian, gay, and bisexual people.
The middle of the 1990s was marked by the inclusion of bisexuals as a represented group within the gay community, when they successfully sought to be included on the platform of the 1993 March on Washington for Lesbian, Gay and Bi Equal Rights and Liberation. Transgender people also asked to be included, but were not, though trans-inclusive language was added to the march's list of demands. The transgender community continued to find itself simultaneously welcome and at odds with the gay community as attitudes about non-binary gender discrimination and pansexual orientation developed and came increasingly into conflict. In 1994, New York City celebrated "Stonewall 25" with a march that went past the United Nations Headquarters and into Central Park. Estimates put the attendance at 1.1 million people. Sylvia Rivera led an alternate march in New York City in 1994 to protest the exclusion of transgender people from the events. Attendance at LGBT Pride events has grown substantially over the decades. Most large cities around the world now have some kind of Pride demonstration. Pride events in some cities mark the largest annual celebration of any kind. The growing trend towards commercializing marches into parades—with events receiving corporate sponsorship—has caused concern about taking away the autonomy of the original grassroots demonstrations that put inexpensive activism in the hands of individuals.
A "Stonewall Shabbat Seder" was first held at B'nai Jeshurun, a synagogue on New York's Upper West Side, in 1995.
President Barack Obama declared June 2009 Lesbian, Gay, Bisexual, and Transgender Pride Month, citing the riots as a reason to "commit to achieving equal justice under law for LGBT Americans". The year marked the 40th anniversary of the riots, giving journalists and activists cause to reflect on progress made since 1969. Frank Rich noted in "The New York Times" that no federal legislation exists to protect the rights of gay Americans. An editorial in the "Washington Blade" compared the scruffy, violent activism during and following the Stonewall riots to the lackluster response to failed promises given by President Obama; feeling ignored, wealthy LGBT activists reacted by promising to give less money to Democratic causes. Two years later, the Stonewall Inn served as a rallying point for celebrations after the New York State Senate voted to pass same-sex marriage. The act was signed into law by Governor Andrew Cuomo on June 24, 2011. Individual states continue to battle with homophobia. The Missouri Senate passed a measure, characterized by its supporters as a religious freedom bill, that could change the state's constitution despite Democrats' objections and their 39-hour filibuster. The bill would protect "certain religious organizations and individuals from being penalized by the state because of their sincere religious beliefs or practices concerning marriage between two persons of the same sex", in effect permitting discrimination against homosexual patrons.
Obama also referenced the Stonewall riots in a call for full equality during his second inaugural address on January 21, 2013:
We, the people, declare today that the most evident of truths—that all of us are created equal—is the star that guides us still; just as it guided our forebears through Seneca Falls, and Selma, and Stonewall... Our journey is not complete until our gay brothers and sisters are treated like anyone else under the law—for if we are truly created equal, then surely the love we commit to one another must be equal as well.
This was a historic moment, being the first time that a president mentioned gay rights or the word "gay" in an inaugural address.
In 2014, a marker dedicated to the Stonewall riots was included in the Legacy Walk, an outdoor public display in Chicago celebrating LGBT history and people.
Throughout June 2019, Stonewall 50 – WorldPride NYC 2019, produced by Heritage of Pride in partnership with the I Love New York program's LGBT division, was held in New York to commemorate the 50th anniversary of the Stonewall uprising. The final official estimate counted 5 million visitors in Manhattan alone, making it the largest LGBTQ celebration in history. June is traditionally Pride month in New York City and worldwide, and the events were held under the auspices of the annual NYC Pride March. An apology from New York City Police Commissioner James P. O'Neill on June 6, 2019, coincided with WorldPride being celebrated in New York City. O'Neill apologized on behalf of the NYPD for the actions of its officers at the Stonewall uprising in 1969.
The official 50th commemoration of the Stonewall Uprising took place on June 28 on Christopher Street in front of the Stonewall Inn. The commemoration was themed as a rally, in reference to the original rallies in front of the Stonewall Inn in 1969. Speakers at the event included Mayor Bill de Blasio, Senator Kirsten Gillibrand, Congressman Jerry Nadler, American activist Emma Gonzalez, and global activist Rémy Bonny.
In 2019, Paris, France officially named a square in the Marais as Place des Émeutes-de-Stonewall.
In 2018, 49 years after the uprising, Stonewall Day was announced as a commemoration day by Pride Live, a social advocacy and community engagement organization. The second Stonewall Day was held on Friday, June 28, 2019, outside the Stonewall Inn. During this event, Pride Live introduced their Stonewall Ambassadors program, to raise awareness for the 50th anniversary of the Stonewall Riots. Those appearing at the event included: Geena Rocero, First Lady of New York City Chirlane McCray, Josephine Skriver, Wilson Cruz, Ryan Jamaal Swain, Angelica Ross, Donatella Versace, Conchita Wurst, Bob the Drag Queen, Whoopi Goldberg, and Lady Gaga, with performances by Alex Newell and Alicia Keys.
In June 1999, the U.S. Department of the Interior designated 51 and 53 Christopher Street and the surrounding area in Greenwich Village to be on the National Register of Historic Places, the first of significance to the lesbian, gay, bisexual and transgender community. In a dedication ceremony, Assistant Secretary of the Department of the Interior John Berry stated, "Let it forever be remembered that here—on this spot—men and women stood proud, they stood fast, so that we may be who we are, we may work where we will, live where we choose and love whom our hearts desire." The Stonewall Inn was itself named a National Historic Landmark in February 2000.
In May 2015, the New York City Landmarks Preservation Commission announced it would officially consider designating the Stonewall Inn as a landmark, making it the first city location to be considered based on its LGBT cultural significance alone. On June 23, 2015, the New York City Landmarks Preservation Commission unanimously approved the designation of the Stonewall Inn as a city landmark, making it the first landmark honored for its role in the fight for gay rights.
On June 24, 2016, President Obama announced the establishment of the Stonewall National Monument site to be administered by the National Park Service. The designation, which followed transfer of city parkland to the federal government, protects Christopher Park and adjacent areas totaling more than seven acres; the Stonewall Inn is within the boundaries of the monument but remains privately owned. The National Park Foundation formed a new nonprofit organization to raise funds for a ranger station and interpretive exhibits for the monument.
No newsreel or TV footage was taken of the riots and scant home movies and photographs exist, but those that do have been used in documentaries.
Sheffer stroke
In Boolean functions and propositional calculus, the Sheffer stroke denotes a logical operation that is equivalent to the negation of the conjunction operation, expressed in ordinary language as "not both". It is also called nand ("not and") or the alternative denial, since it says in effect that at least one of its operands is false. In digital electronics, it corresponds to the NAND gate. It is named after Henry M. Sheffer and written as ↑ or as | (but not as ||, often used to represent disjunction). In Bocheński notation it can be written as D"pq".
Its dual is the NOR operator (also known as the Peirce arrow or Quine dagger). Like its dual, NAND can be used by itself, without any other logical operator, to constitute a logical formal system (making NAND functionally complete). This property makes the NAND gate crucial to modern digital electronics, including its use in computer processor design.
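Functional completeness can be checked concretely. As a rough illustration (the helper names below are arbitrary, not part of any standard library), the following Python sketch derives NOT, AND, and OR from NAND alone and verifies the derivations exhaustively:

from itertools import product

def nand(p: bool, q: bool) -> bool:
    """Sheffer stroke / NAND: false only when both inputs are true."""
    return not (p and q)

# Standard derivations using NAND as the sole primitive:
def not_(p):
    return nand(p, p)                     # p | p

def and_(p, q):
    return nand(nand(p, q), nand(p, q))   # (p | q) | (p | q)

def or_(p, q):
    return nand(nand(p, p), nand(q, q))   # (p | p) | (q | q)

# Exhaustively verify the derivations over both truth values.
for p, q in product((True, False), repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
print("NOT, AND and OR all recovered from NAND alone")

Since NOT, AND, and OR together can express any Boolean function, this check illustrates why the stroke by itself is functionally complete.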
The NAND operation is a logical operation on two logical values. It produces a value of true if, and only if, at least one of the propositions is false.
The truth table of p ↑ q (also written as p | q) is as follows:

p      q      p ↑ q
T      T      F
T      F      T
F      T      T
F      F      T

A formal system can be based entirely on the Sheffer stroke. Its well-formed formulas ("wff"s) are built from propositional letters using only the stroke and parentheses; all instances of Nicod's single axiom are axioms, and substitution together with Nicod's rule of detachment (from "U" and (U | (V | W)), infer "W")
are inference rules.
Since the only connective of this logic is |, the symbol | could be discarded altogether, leaving only the parentheses to group the letters. A pair of parentheses must always enclose a pair of "wff"s. Examples of theorems in this simplified notation are
The notation can be simplified further, by letting
for any "U". This simplification causes the need to change some rules:
The result is a parenthetical version of the Peirce existential graphs.
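To make the parenthesis-only notation concrete, here is a minimal Python sketch (assuming the grammar just described; the function names are invented for illustration) that parses and evaluates such formulas, treating each pair of parentheses as the omitted stroke between the two enclosed wffs:

def nand(p: bool, q: bool) -> bool:
    return not (p and q)

def eval_paren(formula: str, env: dict) -> bool:
    # Grammar: wff ::= letter | "(" wff wff ")"
    # A pair of parentheses always encloses a pair of wffs and
    # denotes the (omitted) stroke between them.
    def parse(i):
        if formula[i] == "(":
            left, i = parse(i + 1)
            right, i = parse(i)
            assert formula[i] == ")", "unbalanced parentheses"
            return nand(left, right), i + 1
        return env[formula[i]], i + 1
    value, end = parse(0)
    assert end == len(formula), "trailing input"
    return value

# (p(pp)) reads as p | (p | p), i.e. p NAND (NOT p), a tautology:
print(eval_paren("(p(pp))", {"p": True}))    # True
print(eval_paren("(p(pp))", {"p": False}))   # True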
Another way to simplify the notation is to eliminate parentheses by using Polish Notation. For example, the earlier examples with only parentheses could be rewritten using only strokes as follows
This follows the same rules as the parenthesis version, with the opening parenthesis replaced with a Sheffer stroke and the (redundant) closing parenthesis removed.
Or one could omit both parentheses "and" strokes and allow the order of the arguments to determine the order of function application; for example, one can apply the function from right to left (reverse Polish notation; any other unambiguous convention based on ordering would do).
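Assuming the right-to-left convention just described (the code below is an illustrative sketch, not part of the original article), a bare sequence of truth values can be evaluated by folding the implicit stroke over it from the right:

def nand(p: bool, q: bool) -> bool:
    return not (p and q)

def eval_right_to_left(values):
    # A bare sequence of truth values under an implicit stroke,
    # associated to the right: [p, q, r] is read as p | (q | r).
    acc = values[-1]
    for v in reversed(values[:-1]):
        acc = nand(v, acc)
    return acc

print(eval_right_to_left([True, True]))         # True | True -> False
print(eval_right_to_left([True, True, True]))   # True | (True | True) -> True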
Stalactite
A stalactite (from the Greek "stalasso" (σταλάσσω), "to drip", and meaning "that which drips") is a type of formation that hangs from the ceiling of caves, hot springs, or manmade structures such as bridges and mines. Any material that is soluble, that can be deposited as a colloid or carried in suspension, or that is capable of being melted, may form a stalactite. Stalactites may be composed of lava, minerals, mud, peat, pitch, sand, sinter, and amberat (crystallized urine of pack rats). A stalactite is not necessarily a speleothem, though speleothems are the most common form of stalactite because of the abundance of limestone caves.
The corresponding formation on the floor of the cave is known as a stalagmite. Mnemonics have been developed for which word refers to which type of formation; one is that "stalactite" has a C for "ceiling", and "stalagmite" has a G for "ground".
The most common stalactites are speleothems, which occur in limestone caves. They form through deposition of calcium carbonate and other minerals, which is precipitated from mineralized water solutions. Limestone is the chief form of calcium carbonate rock; it is dissolved by water that contains carbon dioxide, forming a calcium bicarbonate solution in caverns. The chemical formula for this reaction is:

CaCO3(s) + H2O(l) + CO2(aq) → Ca(HCO3)2(aq)
This solution travels through the rock until it reaches an edge, and if this is on the roof of a cave it will drip down. When the solution comes into contact with air, the chemical reaction that created it is reversed and particles of calcium carbonate are deposited. The reversed reaction is:

Ca(HCO3)2(aq) → CaCO3(s) + H2O(l) + CO2(aq)
An average growth rate is about 0.13 mm (0.005 inches) a year. The quickest growing stalactites are those formed by a constant supply of slowly dripping water rich in calcium carbonate (CaCO3) and carbon dioxide (CO2), which can grow at up to 3 mm (0.12 inches) per year. The drip rate must be slow enough to allow the CO2 to degas from the solution into the cave atmosphere, resulting in deposition of CaCO3 on the stalactite. If the drip rate is too fast, the solution, still carrying most of the CaCO3, falls to the cave floor, where degassing occurs and CaCO3 is deposited as a stalagmite.
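For a sense of scale, a back-of-the-envelope estimate (a sketch, treating the approximate rates above as assumptions rather than measured values) shows how long a modest 10 cm stalactite takes to form at each rate:

# Rough time estimates for a stalactite to reach 100 mm (10 cm),
# using the approximate growth rates mentioned above.
TARGET_MM = 100.0
rates_mm_per_year = {
    "average seepage": 0.13,   # assumed average rate
    "ideal slow drip": 3.0,    # assumed fast rate
}
for label, rate in rates_mm_per_year.items():
    years = TARGET_MM / rate
    print(f"{label}: roughly {years:,.0f} years to reach {TARGET_MM:.0f} mm")
# average seepage: roughly 769 years; ideal slow drip: roughly 33 years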
All limestone stalactites begin with a single mineral-laden drop of water. When the drop falls, it deposits the thinnest ring of calcite. Each subsequent drop that forms and falls deposits another calcite ring. Eventually, these rings form a very narrow (≈4 to 5 mm diameter), hollow tube commonly known as a "soda straw" stalactite. Soda straws can grow quite long, but are very fragile. If they become plugged by debris, water begins flowing over the outside, depositing more calcite and creating the more familiar cone-shaped stalactite. The same water drops that fall from the tip of a stalactite deposit more calcite on the floor below, eventually resulting in a rounded or cone-shaped stalagmite. Unlike stalactites, stalagmites never start out as hollow "soda straws". Given enough time, these formations can meet and fuse to create pillars of calcium carbonate known as "columns".
Stalactite formation generally begins over a large area, with multiple paths for the mineral rich water to flow. As minerals are dissolved in one channel slightly more than other competing channels, the dominant channel begins to draw more and more of the available water, which speeds its growth, ultimately resulting in all other channels being choked off. This is one reason why formations tend to have minimum distances from one another. The larger the formation, the greater the interformation distance.
Another type of stalactite is formed in lava tubes while lava is still active inside. The mechanism of formation is the deposition of material on the ceilings of caves; however, with lava stalactites, formation happens very quickly, in only a matter of hours, days, or weeks, whereas limestone stalactites may take up to thousands of years. A key difference with lava stalactites is that once the lava has ceased flowing, so too will the stalactites cease to grow. This means that if the stalactite were to be broken it would never grow back.
The generic term "lavacicle" has been applied to lava stalactites and stalagmites indiscriminately and evolved from the word icicle.
Like limestone stalactites, they can leave lava drips on the floor that turn into lava stalagmites and may eventually fuse with the corresponding stalactite to form a column.
Shark tooth stalactites
The shark tooth stalactite is broad and tapering in appearance. It may begin as a small driblet of lava from a semi-solid ceiling, but then grows by accreting layers as successive flows of lava rise and fall in the lava tube, coating and recoating the stalactite with more material. They can vary from a few millimeters to over a meter in length.
Splash stalactites
As lava flows through a tube, material will be splashed up on the ceiling and ooze back down, hardening into a stalactite. This type of formation results in an irregularly-shaped stalactite, looking somewhat like stretched taffy. Often they may be of a different color than the original lava that formed the cave.
Tubular lava stalactites
When the roof of a lava tube is cooling, a skin will form that traps semi-molten material inside. Trapped gases force lava to extrude out through small openings that result in hollow, tubular stalactites analogous to the soda straws formed as depositional speleothems in solution caves. The longest known is almost 2 meters in length. These are common in Hawaiian lava tubes and are often associated with a drip stalagmite that forms below as material is carried through the tubular stalactite and piles up on the floor beneath. Sometimes the tubular form collapses near the distal end, most likely when the pressure of escaping gases decreased and still-molten portions of the stalactites deflated and cooled.
Often these tubular stalactites will acquire a twisted, vermiform appearance as bits of lava crystallize and force the flow in different directions. These tubular lava helictites may also be influenced by air currents through a tube and point downwind.
A common stalactite found seasonally or year round in many caves is the ice stalactite, commonly referred to as an icicle, especially on the surface. Water seepage from the surface will penetrate into a cave and if temperatures are below freezing the water will form stalactites. They may also form by the freezing of water vapor. Similar to lava stalactites, ice stalactites form very quickly within hours or days. Unlike lava stalactites however, they may grow back as long as water and temperatures are suitable.
Ice stalactites can also form under sea ice when saline water is introduced to ocean water. These specific stalactites are referred to as brinicles.
Ice stalactites may also form corresponding stalagmites below them and given time may grow together to form an ice column.
Stalactites can also form on concrete, and on plumbing where there is a slow leak and calcium, magnesium or other ions in the water supply, although they form much more rapidly there than in the natural cave environment. These secondary deposits, such as stalactites, stalagmites, flowstone and others, which are derived from the lime, mortar or other calcareous material in concrete, outside of the "cave" environment, cannot be classified as "speleothems" due to the definition of the term. The term "calthemite" is used to encompass the secondary deposits which mimic the shapes and forms of speleothems outside the cave environment.
The way stalactites form on concrete is due to different chemistry than those that form naturally in limestone caves, and is a result of the presence of calcium oxide in cement. Concrete is made from aggregate, sand and cement. When water is added to the mix, the calcium oxide in the cement reacts with water to form calcium hydroxide (Ca(OH)2). The chemical formula for this is:
CaO(s) + H2O(l) → Ca(OH)2(aq)
Over time, any rainwater that penetrates cracks in set (hard) concrete will carry any free calcium hydroxide in solution to the edge of the concrete. Stalactites can form when the solution emerges on the underside of the concrete structure where it is suspended in the air, for example, on a ceiling or a beam. When the solution comes into contact with air on the underside of the concrete structure, another chemical reaction takes place: the solution reacts with carbon dioxide in the air and precipitates calcium carbonate.
Ca(OH)2(aq) + CO2(g) → CaCO3(s) + H2O(l)
When this solution drops down it leaves behind particles of calcium carbonate and over time these form into a stalactite. They are normally a few centimeters long and narrow in diameter. The growth rate of stalactites is significantly influenced by the continuity of the supply of saturated solution and by the drip rate. A straw shaped stalactite which has formed under a concrete structure can grow as much as 2 mm per day in length, when the drip rate is approximately 11 minutes between drops. Changes in leachate solution pH can facilitate additional chemical reactions, which may also influence calthemite stalactite growth rates.
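As a quick illustration using the figures above: at one drop every 11 minutes there are about 1440 / 11 ≈ 131 drops per day, so 2 mm of daily growth corresponds to roughly 0.015 mm of new straw length deposited per drop.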
The White Chamber in the Jeita Grotto's upper cavern in Lebanon contains a limestone stalactite which is accessible to visitors and is claimed to be the longest stalactite in the world. Another such claim is made for a limestone stalactite that hangs in the Chamber of Rarities in the Gruta Rei do Mato (Sete Lagoas, Minas Gerais, Brazil). However, vertical cavers have often encountered longer stalactites while exploring. One of the longest stalactites viewable by the general public is in Pol an Ionain (Doolin Cave), County Clare, Ireland, in a karst region known as The Burren; what makes it more impressive is the fact that the stalactite is held in place by a remarkably small section of calcite.
Stalactites are first mentioned (though not by name) by the Roman natural historian Pliny in a text which also mentions stalagmites and columns and refers to their creation by the dripping of water. The term "stalactite" was coined in the 17th century by the Danish physician Ole Worm, who derived the Latin word from the Greek word σταλακτός (stalaktos, "dripping") and the Greek suffix -ίτης (-ites, connected with or belonging to). | https://en.wikipedia.org/wiki?curid=29390 |
Strangers in Paradise
Strangers in Paradise is a long-running, mostly self-published black-and-white comic book that was written and drawn by Terry Moore. Essentially the story of a love triangle between two women and one man, "Strangers in Paradise" is a slice-of-life dramedy that veered into the crime genre.
The first issue was published January 1, 1993. The series reached its planned conclusion in 2007 with issue #90 of volume 3. A follow-up novel was announced at Comic-Con International 2012.
Terry Moore stated that "I started out wanting to do a newspaper strip, and tried one idea after another before I realised I hated the gag-a-day life and really wanted to try a story instead." The story he chose to tell turned out to be "Strangers in Paradise", or "this story about 2 girls and a guy who gets to know them" (from Moore's introduction to "The Collected Strangers in Paradise, Volume One"), which used characters he had developed during his time on the gag-a-day circuit. For example, Katchoo appears as a "happy-go-lucky wood nymph" in an early strip by Moore about an enchanted forest. These strips were collected into two trade paperbacks, but they did not include three issues. Because of this, the entire run was later published in one large paperback edition entitled "The Complete Paradise Too". This volume can be considered the true origin of Katchoo, Francine and the "Strangers in Paradise" universe.
"SiP", as it is commonly known, began as a three-issue mini-series published by Antarctic Press in 1993, which focused entirely on the relationship between the three main characters and Francine's unfaithful boyfriend. This is now known as "Volume 1.” Thirteen issues were published under Moore's own "Abstract Studio" imprint, and these make up "Volume 2.” This is where the "thriller" plot was introduced. The series moved to Image Comics' Homage imprint for the start of "Volume 3,” but after eight issues moved back to Abstract Studio, where it continued with the same numbering. Volume 3 concluded at issue #90, released June 6, 2007.
Moore revived the series as "Strangers in Paradise XXV" in 2018 for the 25th anniversary. The new miniseries included characters and elements from Moore's other works, "Echo", "Rachel Rising", and "Motor Girl".
The story primarily concerns the difficult relationship between two women, Helen Francine Peters (referred to as Francine throughout the series) and Katina Marie ("Katchoo") Choovanski, and their friend David Qin. Francine considers Katchoo her best friend; Katchoo is in love with Francine. David is in love with Katchoo (a relationship which Katchoo herself is deeply conflicted over).
The love triangle (which later expands into a love rectangle with the introduction of Casey Bullock, who marries Francine's ex-boyfriend Freddie Femur and later divorces him, in order to pursue both David and Katchoo) alternates with the mystery and intrigue regarding Katchoo's past as an underage lesbian hooker and the Parker Crime Syndicate. Run by David's lesbian sister Darcy, the "Parker Girls" work for the shadowy 'Big Six' organization, an international crime syndicate with influence over the world of politics. "Parker Girls" are highly trained women used by organized crime to control, manipulate, spy upon, and ultimately kill men and women in positions of power and authority, for the Big Six.
The series received the Eisner Award for Best Serialized Story in 1996 for "I Dream of You" as well as the National Cartoonists Society Reuben Award for Best Comic Book in 2003. It also won the GLAAD Award for Best Comic Book in 2001.
"Strangers in Paradise" has been collected into a series of full-size trade paperbacks, hardback collections, and smaller format paperback collections. These reprints collect the issues into different sets.
The full-size paperback collections to date are:
The hardback collections to date are:
The "pocket book" collections to date are:
Other books to date are:
Two limited edition statuettes of Katchoo were produced by Clayburn Moore as the first in a planned series of three statues based around the series. In the first she is standing in a skimpy black dress, and in the second she is reclining in a bath wearing her leather jacket and holding a drink and a gun.
In 2009, Shocker Toys released a Katchoo figure as part of the first series of its "Indie Spotlight" line.
In 1996 a series of trading cards was released by Comic Images, consisting of a 90-card base set plus extra collector's cards, such as the 500 'autograph cards' that featured Terry Moore's signature and information on the creation of "SiP". These extra cards were inserted randomly into packs. Also produced was a matching "SiP" binder, which came with 12 9-pocket sleeves to hold the cards.
Advertised on the official "SiP" website are character pin badges representing Francine, Katchoo and David. There is also a black tote bag featuring the "Strangers in Paradise" logo and a tumbler decorated with colour panels from the series, in addition to a postcard set and two T-shirts, although several of these items are listed as 'sold out', and are hard to come by elsewhere.
On September 13, 2017, Angela Robinson and Moore announced they were developing a film adaptation. In November 2017, Moore was working on a script for it. IMG Global Media is backing the project and Robinson will direct. | https://en.wikipedia.org/wiki?curid=29391 |
Summer
Summer is the hottest of the four temperate seasons, falling after spring and before autumn. At or around the summer solstice (about 3 days before Midsummer Day), the earliest sunrise and latest sunset occur, the days are longest and the nights are shortest, with day length decreasing as the season progresses after the solstice. The date of the beginning of summer varies according to climate, tradition, and culture. When it is summer in the Northern Hemisphere, it is winter in the Southern Hemisphere, and vice versa.
From an astronomical view, the equinoxes and solstices would be the middle of the respective seasons, but sometimes astronomical summer is defined as starting at the solstice, the time of maximal insolation, often identified with the 21st day of June or December. A variable seasonal lag means that the meteorological centre of the season, which is based on average temperature patterns, occurs several weeks after the time of maximal insolation. The meteorological convention is to define summer as comprising the months of June, July, and August in the northern hemisphere and the months of December, January, and February in the southern hemisphere. Under meteorological definitions, all seasons are arbitrarily set to start at the beginning of a calendar month and end at the end of a month. This meteorological definition of summer also aligns with the commonly viewed notion of summer as the season with the longest (and warmest) days of the year, in which daylight predominates. The meteorological reckoning of seasons is used in Australia, Austria, Denmark, Russia and Japan. It is also used by many in the United Kingdom and in Canada. In Ireland, the summer months according to the national meteorological service, Met Éireann, are June, July and August. However, according to the Irish Calendar, summer begins on 1 May and ends on 1 August. School textbooks in Ireland follow the cultural norm of summer commencing on 1 May rather than the meteorological definition of 1 June.
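As an illustration of the meteorological convention just described, the following minimal Python sketch maps a month number to its season; the function name and hemisphere handling are the example's own choices, not part of any meteorological standard:

def meteorological_season(month, southern_hemisphere=False):
    # Map a month number (1-12) to its meteorological season.
    # June-August is northern-hemisphere summer; December-February is
    # southern-hemisphere summer, per the convention described above.
    seasons = ["winter", "spring", "summer", "autumn"]
    index = (month % 12) // 3        # Dec/Jan/Feb -> 0, Mar/Apr/May -> 1, ...
    if southern_hemisphere:
        index = (index + 2) % 4      # the hemispheres are offset by two seasons
    return seasons[index]

assert meteorological_season(7) == "summer"                            # July, north
assert meteorological_season(1, southern_hemisphere=True) == "summer"  # January, south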
Days continue to lengthen from equinox to solstice and summer days progressively shorten after the solstice, so meteorological summer encompasses the build-up to the longest day and a diminishing thereafter, with summer having many more hours of daylight than spring. Reckoning by hours of daylight alone, summer solstice marks the midpoint, not the beginning, of the seasons. Midsummer takes place over the shortest night of the year, which is the summer solstice, or on a nearby date that varies with tradition.
Where a seasonal lag of half a season or more is common, reckoning based on astronomical markers is shifted half a season. By this method, in North America, summer is the period from the summer solstice (usually 20 or 21 June in the Northern Hemisphere) to the autumn equinox.
Reckoning by cultural festivals, the summer season in the United States is traditionally regarded as beginning on Memorial Day weekend (the last weekend in May) and ending on Labor Day (the first Monday in September), more closely in line with the meteorological definition for the parts of the country that have four-season weather. The similar Canadian tradition starts summer on Victoria Day one week prior (although summer conditions vary widely across Canada's expansive territory) and ends, as in the United States, on Labour Day.
In Chinese astronomy, summer starts on or around 5 May, with the "jiéqì" (solar term) known as lìxià (立夏), i.e. "establishment of summer", and it ends on or around 6 August.
In southern and southeast Asia, where the monsoon occurs, summer is more generally defined as lasting from March, April, May and June, the warmest time of the year, ending with the onset of the monsoon rains.
Because the temperature lag is shorter in the oceanic temperate southern hemisphere, most countries in this region use the meteorological definition with summer starting on 1 December and ending on the last day of February.
Summer is traditionally associated with hot or warm weather. In the Mediterranean regions, it is also associated with dry weather, while in other places (particularly in Eastern Asia because of the monsoon) it is associated with rainy weather. The wet season is the main period of vegetation growth within the savanna climate regime. Where the wet season is associated with a seasonal shift in the prevailing winds, it is known as a monsoon.
In the northern Atlantic Ocean, a distinct tropical cyclone season occurs from 1 June to 30 November. The statistical peak of the Atlantic hurricane season is 10 September. The Northeast Pacific Ocean has a broader period of activity, but in a similar timeframe to the Atlantic. The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November. In the Southern Hemisphere, the tropical cyclone season runs from 1 November until the end of April with peaks in mid-February to early March.
Thunderstorm season in the United States and Canada runs in the spring through summer but sometimes can run as late as October or even November in the fall. These storms can produce hail, strong winds and tornadoes, usually during the afternoon and evening.
Schools and universities typically have a summer break to take advantage of the warmer weather and longer days. In almost all countries, children are out of school during this time of year for summer break, although dates vary. In the United States, public schools usually end in late May, around Memorial Day weekend, while colleges finish in early May. Public school traditionally resumes near Labor Day, while higher institutions often resume in mid-August. In England and Wales, school ends in mid-July and resumes again in early September; in Scotland, the summer holiday begins in late June and ends in mid- to late-August. Similarly, in Canada the summer holiday starts on the last or second-last Friday in June and ends in late August or on the first Tuesday of September, with the exception of when that date falls before Labour Day, in which case it ends on the second Tuesday of the month. In Russia the summer holiday begins at the end of May and ends on 31 August.
In the Southern Hemisphere, school summer holiday dates include the major holidays of Christmas and New Year's Day. School summer holidays in Australia, New Zealand and South Africa begin in early December and end in early February, with dates varying between states. In South Africa, the new school-year usually starts during the second week of January, thus aligning the academic year with the Calendar year. In India, school ends in late April and resumes in early or mid-June. In Cameroon and Nigeria, schools usually finish for summer vacation in mid-July, and resume in the later weeks of September or the first week of October.
A wide range of public holidays fall during summer, including:
People generally take advantage of the high temperatures by spending more time outdoors during summer. Activities such as travelling to the beach and picnics occur during the summer months. Sports such as soccer, basketball, football, volleyball, skateboarding, baseball, softball, cricket, tennis and golf are played. Water sports also occur. These include water skiing, wakeboarding, swimming, surfing, tubing and water polo. The modern Olympics have been held during the summer months every four years since 1896. The 2000 Summer Olympics, in Sydney, however, were held during the Australian Spring.
Summer is normally a low point in television viewing, and television schedules generally reflect this by not scheduling new episodes of their most popular shows between the end of May sweeps and the beginning of the television season in September, instead scheduling low-cost reality television shows and burning off commitments to already-cancelled series. There is an exception to this with children's television. Many television shows made for, and popular with, children are released during the summer months, especially on children's cable channels such as the Disney Channel in the United States, as children are off school. Disney Channel, for example, ends its pre-school programming earlier in the day for older school-age children in the summer months while it reverts to the original scheduling as the new school year begins. Conversely, the music and film industries generally experience higher returns during the summer than at other times of the year and market their summer hits accordingly. Summer is also the most popular season for animated movies to be released theatrically.
With most school-age children and college students (except those attending summer school and summer camp) on summer vacation during the summer months, especially in the United States, travel and vacationing traditionally peak during the summer, with the volume of travel in a typical summer weekend rivalled only by Thanksgiving. Teenagers and college students often take summer jobs in industries that cater to recreation. Business activity for the recreation, tourism, restaurant, and retail industries peaks during the summer months as well as the holiday season. | https://en.wikipedia.org/wiki?curid=29392 |
Screwdriver (cocktail)
A screwdriver is a popular alcoholic highball drink made with orange juice and vodka. While the basic drink is simply the two ingredients, there are many variations. Many of the variations have different names in different parts of the world.
The screwdriver is mentioned in 1944: "A Screwdriver—a drink compounded of vodka and orange juice and supposedly invented by interned American fliers"; and in 1949: "the latest Yankee concoction of vodka and orange juice, called a 'screwdriver'".
A screwdriver with two parts sloe gin, filled with orange juice, is a "Slow (Sloe) Screw".
A screwdriver with two parts sloe gin and one part Southern Comfort, filled with orange juice, is a "Slow Comfortable Screw".
A screwdriver with one part sloe gin, one part Southern Comfort and one part Galliano, filled with orange juice, is a "Slow Comfortable Screw Up Against The Wall".
A screwdriver with one part sloe gin, one part Southern Comfort, one part Galliano and one part tequila, filled with orange juice, is a "Slow Comfortable Screw Up Against The Wall Mexican Style".
A screwdriver with one part sloe gin, one part Southern Comfort, one part Galliano and one part peach schnapps, filled with orange juice, is a "Slow Comfortable Screw Up Against a Fuzzy Wall".
A screwdriver with one part sloe gin, one part Southern Comfort, one part Galliano, one part peach schnapps and one part sparkling rosé, filled with orange juice, is a "Slow Comfortable Screw Up Against a Fuzzy Pink Wall".
A screwdriver with two parts vodka, four parts orange juice, and one part Galliano is a Harvey Wallbanger.
A screwdriver with equal parts vanilla vodka and Blue Curaçao topped with lemon-lime soda is a "Sonic Screwdriver". This cocktail is named after the Sonic Screwdriver from science fiction TV show, Doctor Who. The blue color of the cocktail resembles the blue parts of the prop from the show.
A screwdriver with one part Fireball and four parts orange juice is a "Burning Screwdriver".
A shot of vodka with a slice of orange is a "Cordless Screwdriver".
A screwdriver with half orange juice and half 7-up as mix is a "Screwup".
A "Virgin Screwdriver" is a mocktail (non-alcoholic version), usually made with orange juice and tonic water.
A screwdriver with apple juice instead of orange juice is an "Anita Bryant Cocktail". Bryant was an American singer and spokeswoman for the Florida Citrus Commission during the 1960s and 1970s. Starting in 1977, she became an anti-gay rights activist.
Because Bryant promoted orange juice, gay bars and gay-rights groups across the U.S. retaliated by boycotting the product. Gay bars across North America stopped serving screwdrivers and invented this cocktail to replace it. The sales and proceeds of the cocktail went to gay rights activists and helped fund their work against Bryant. The campaign was ultimately successful as Bryant's activism damaged her musical and business career. Her contract with the Florida Citrus Commission was left to expire in 1980 after they stated she was "worn out" as a spokesperson. After that, gay bars started selling screwdrivers again. | https://en.wikipedia.org/wiki?curid=29396 |
Single-stage-to-orbit
A single-stage-to-orbit (or SSTO) vehicle reaches orbit from the surface of a body using only propellants and fluids and without expending tanks, engines, or other major hardware. The term usually, but not exclusively, refers to reusable vehicles. To date, no Earth-launched SSTO launch vehicles have ever been flown; orbital launches from Earth have been performed by either fully or partially expendable multi-stage rockets.
The main projected advantage of the SSTO concept is elimination of the hardware replacement inherent in expendable launch systems. However, the non-recurring costs associated with design, development, research and engineering (DDR&E) of reusable SSTO systems are much higher than expendable systems due to the substantial technical challenges of SSTO, assuming that those technical issues can in fact be solved.
It is considered to be marginally possible to launch a single-stage-to-orbit chemically-fueled spacecraft from Earth. The principal complicating factors for SSTO from Earth are: the high orbital velocity required, roughly 7.8 kilometres per second; the need to overcome Earth's gravity, especially in the early stages of flight; and flight within Earth's atmosphere, which limits speed in the early stages of flight and influences engine performance.
Advances in rocketry in the 21st century have resulted in a substantial fall in the cost to launch a kilogram of payload to either low Earth orbit or the International Space Station, reducing the main projected advantage of the SSTO concept.
Notable single stage to orbit concepts include Skylon, the DC-X, the Lockheed Martin X-33, and the Roton SSTO. However, despite showing some promise, none of them has come close to achieving orbit yet due to problems with finding a sufficiently efficient propulsion system.
Single-stage-to-orbit is much easier to achieve on extraterrestrial bodies that have weaker gravitational fields and lower atmospheric pressure than Earth, such as the Moon and Mars, and has been achieved from the Moon by both the Apollo program's Lunar Module and several robotic spacecraft of the Soviet Luna program.
Before the second half of the twentieth century, very little research was conducted into space travel. During the 1960s, some of the first concept designs for this kind of craft began to emerge.
One of the earliest SSTO concepts was the expendable One stage Orbital Space Truck (OOST) proposed by Philip Bono, an engineer for Douglas Aircraft Company. A reusable version named ROOST was also proposed.
Another early SSTO concept was a reusable launch vehicle named NEXUS which was proposed by Krafft Arnold Ehricke in the early 1960s. It was one of the largest spacecraft ever conceptualized with a diameter of over 50 metres and the capability to lift up to 2000 short tons into Earth orbit, intended for missions to further out locations in the solar system such as Mars. The North American Air Augmented VTOVL from 1963 was a similarly large craft which would have used ramjets to decrease the liftoff mass of the vehicle by removing the need for large amounts of liquid oxygen while traveling through the atmosphere.
From 1965, Robert Salkeld investigated various single stage to orbit winged spaceplane concepts. He proposed a vehicle which would burn hydrocarbon fuel while in the atmosphere and then switch to hydrogen fuel to increase efficiency once in space.
Further examples of Bono's early concepts (prior to the 1990s) which were never constructed include:
This was not technically single stage since it dropped some of its initial hydrogen tanks, but it came very close.
Around 1985 the NASP project was intended to launch a scramjet vehicle into orbit, but funding was stopped and the project cancelled. At around the same time, the HOTOL tried to use precooled jet engine technology, but failed to show significant advantages over rocket technology.
The DC-X, short for Delta Clipper Experimental, was an uncrewed one-third scale vertical takeoff and landing demonstrator for a proposed SSTO. It is one of only a few prototype SSTO vehicles ever built. Several other prototypes were intended, including the DC-X2 (a half-scale prototype) and the DC-Y, a full-scale vehicle which would be capable of single stage insertion into orbit. Neither of these were built, but the project was taken over by NASA in 1995, and they built the DC-XA, an upgraded one-third scale prototype. This vehicle was lost when it landed with only three of its four landing pads deployed, which caused it to tip over on its side and explode. The project has not been continued since.
From 1999 to 2001 Rotary Rocket attempted to build a SSTO vehicle called the Roton. It received a large amount of media attention and a working sub-scale prototype was completed, but the design was largely impractical.
There have been various approaches to SSTO, including pure rockets that are launched and land vertically, air-breathing scramjet-powered vehicles that are launched and land horizontally, nuclear-powered vehicles, and even jet-engine-powered vehicles that can fly into orbit and return landing like an airliner, completely intact.
For rocket-powered SSTO, the main challenge is achieving a high enough mass-ratio to carry sufficient propellant to achieve orbit, plus a meaningful payload weight. One possibility is to give the rocket an initial speed with a space gun, as planned in the Quicklaunch project.
For air-breathing SSTO, the main challenge is system complexity and associated research and development costs, material science, and construction techniques necessary for surviving sustained high-speed flight within the atmosphere, "and" achieving a high enough mass-ratio to carry sufficient propellant to achieve orbit, plus a meaningful payload weight. Air-breathing designs typically fly at supersonic or hypersonic speeds, and usually include a rocket engine for the final burn for orbit.
Whether rocket-powered or air-breathing, a reusable vehicle must be rugged enough to survive multiple round trips into space without adding excessive weight or maintenance. In addition a reusable vehicle must be able to reenter without damage, and land safely.
While single-stage rockets were once thought to be beyond reach, advances in materials technology and construction techniques have shown them to be possible. For example, calculations show that the Titan II first stage, launched on its own, would have a 25-to-1 ratio of fuel to vehicle hardware.
It has a sufficiently efficient engine to achieve orbit, but without carrying much payload.
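As a rough check of that claim under assumed numbers (a 25-to-1 propellant-to-hardware ratio implies a mass ratio of about 26, and a vacuum specific impulse near 300 s is plausible for the Titan II's hypergolic propellants), the Tsiolkovsky rocket equation gives
$\Delta v = I_{sp}\, g_0 \ln 26 \approx 300 \times 9.81 \times 3.26 \approx 9.6\ \mathrm{km/s}$,
which is on the order of orbital velocity plus typical gravity and drag losses.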
Hydrogen might seem the obvious fuel for SSTO vehicles. When burned with oxygen, hydrogen gives the highest specific impulse of any commonly used fuel: around 450 seconds, compared with up to 350 seconds for kerosene.
Hydrogen has the following advantages:
However, hydrogen also has these disadvantages:
These issues can be dealt with, but at extra cost.
While kerosene tanks can be 1% of the weight of their contents, hydrogen tanks often must weigh 10% of their contents. This is because of both the low density and the additional insulation required to minimize boiloff (a problem which does not occur with kerosene and many other fuels). The low density of hydrogen further affects the design of the rest of the vehicle: pumps and pipework need to be much larger in order to pump the fuel to the engine. The end result is the thrust/weight ratio of hydrogen-fueled engines is 30–50% lower than comparable engines using denser fuels.
This inefficiency indirectly affects gravity losses as well; the vehicle has to hold itself up on rocket power until it reaches orbit. The lower excess thrust of the hydrogen engines due to the lower thrust/weight ratio means that the vehicle must ascend more steeply, and so less thrust acts horizontally. Less horizontal thrust results in taking longer to reach orbit, which increases gravity losses. While this loss may not appear large, the mass ratio to delta-v curve is very steep to reach orbit in a single stage, and this makes a 10% difference to the mass ratio on top of the tankage and pump savings.
The overall effect is that there is surprisingly little difference in overall performance between SSTOs that use hydrogen and those that use denser fuels, except that hydrogen vehicles may be rather more expensive to develop and buy. Careful studies have shown that some dense fuels (for example liquid propane) exceed the performance of hydrogen fuel when used in an SSTO launch vehicle by 10% for the same dry weight.
In the 1960s Philip Bono investigated single-stage, VTVL tripropellant rockets, and showed that it could improve payload size by around 30%.
Operational experience with the DC-X experimental rocket has caused a number of SSTO advocates to reconsider hydrogen as a satisfactory fuel. The late Max Hunter, while employing hydrogen fuel in the DC-X, often said that he thought the first successful orbital SSTO would more likely be fueled by propane.
Some SSTO concepts use the same engine for all altitudes, which is a problem for traditional engines with a bell-shaped nozzle. Depending on the atmospheric pressure, different bell shapes are optimal. Engines operating in the lower atmosphere have shorter bells than those designed to work in vacuum. Having a bell that is only optimal at a single altitude lowers the overall engine efficiency.
One possible solution would be to use an aerospike engine, which can be effective in a wide range of ambient pressures. In fact, a linear aerospike engine was to be used in the X-33 design.
Other solutions involve using multiple engines and other altitude adapting designs such as double-mu bells or extensible bell sections.
Still, at very high altitudes, the extremely large engine bells tend to expand the exhaust gases down to near vacuum pressures. As a result, these engine bells are counterproductive due to their excess weight. Some SSTO concepts use very high pressure engines which permit high expansion ratios to be used from ground level. This gives good performance, negating the need for more complex solutions.
Some designs for SSTO attempt to use airbreathing jet engines that collect oxidizer and reaction mass from the atmosphere to reduce the take-off weight of the vehicle.
Some of the issues with this approach are:
Thus with for example scramjet designs (e.g. X-43) the mass budgets do not seem to close for orbital launch.
Similar issues occur with single-stage vehicles attempting to carry conventional jet engines to orbit—the weight of the jet engines is not compensated sufficiently by the reduction in propellant.
On the other hand, LACE-like precooled airbreathing designs such as the Skylon spaceplane (and ATREX) which transition to rocket thrust at rather lower speeds (Mach 5.5) do seem to give, on paper at least, an improved orbital mass fraction over pure rockets (even multistage rockets) sufficiently to hold out the possibility of full reusability with better payload fraction.
It is important to note that mass fraction is an important concept in the engineering of a rocket. However, mass fraction may have little to do with the costs of a rocket, as the costs of fuel are very small when compared to the costs of the engineering program as a whole. As a result, a cheap rocket with a poor mass fraction may be able to deliver more payload to orbit with a given amount of money than a more complicated, more efficient rocket.
Many vehicles are only narrowly suborbital, so practically anything that gives a relatively small delta-v increase can be helpful, and outside assistance for a vehicle is therefore desirable.
Proposed launch assists include:
And on-orbit resources such as:
Due to weight issues such as shielding, many nuclear propulsion systems are unable to lift their own weight, and hence are unsuitable for launching to orbit. However, some designs such as the Orion project and some nuclear thermal designs do have a thrust to weight ratio in excess of 1, enabling them to lift off. Clearly, one of the main issues with nuclear propulsion would be safety, both during a launch for the passengers, but also in case of a failure during launch. No current program is attempting nuclear propulsion from Earth's surface.
Because they can be more energetic than the potential energy that chemical fuel allows for, some laser or microwave powered rocket concepts have the potential to launch vehicles into orbit, single stage. In practice, such systems are not feasible with current technology.
The design space constraints of SSTO vehicles were described by rocket design engineer Robert Truax.
The Tsiolkovsky rocket equation expresses the maximum change in velocity any single rocket stage can achieve:

$\Delta v = v_e \ln \lambda$

where $\Delta v$ is the maximum change in velocity, $v_e$ is the effective exhaust velocity of the propellant (equal to $I_{sp}\, g_0$), and $\lambda$ is the mass ratio of the stage.
The mass ratio of a vehicle is defined as the ratio of the initial vehicle mass when fully loaded with propellants, $m_0$, to the final vehicle mass, $m_f$, after the burn:

$\lambda = \frac{m_0}{m_f}$
The propellant mass fraction ($\zeta$) of a vehicle can be expressed solely as a function of the mass ratio:

$\zeta = \frac{m_0 - m_f}{m_0} = 1 - \frac{1}{\lambda}$
The structural coefficient ($\epsilon$) is a critical parameter in SSTO vehicle design. Structural efficiency of a vehicle is maximized as the structural coefficient approaches zero. The structural coefficient is defined as the ratio of the structural mass $m_s$ to the combined structural and propellant mass:

$\epsilon = \frac{m_s}{m_s + m_p}$
The overall structural mass fraction $m_s / m_0$ can be expressed in terms of the structural coefficient and the propellant mass fraction:

$\frac{m_s}{m_0} = \frac{\epsilon}{1 - \epsilon}\,\zeta$
An additional expression for the overall structural mass fraction can be found by noting that the payload mass fraction $\pi = m_{pl}/m_0$, the propellant mass fraction and the structural mass fraction sum to one:

$\frac{m_s}{m_0} = 1 - \pi - \zeta$
Equating the two expressions for the structural mass fraction and solving for the initial vehicle mass yields:

$m_0 = \frac{(1 - \epsilon)\, m_{pl}}{e^{-\Delta v / v_e} - \epsilon}$

where $e^{-\Delta v / v_e} = 1/\lambda = 1 - \zeta$ follows from the rocket equation.
This expression shows how the size of a SSTO vehicle is dependent on its structural efficiency. Given a mission profile (the required $\Delta v$) and propellant type (which sets $v_e$), the size of a vehicle increases with an increasing structural coefficient. This growth factor sensitivity is shown parametrically for both SSTO and two-stage-to-orbit (TSTO) vehicles for a standard LEO mission. The curves vertically asymptote at the maximum structural coefficient limit where mission criteria can no longer be met.
In comparison to a non-optimized TSTO vehicle using restricted staging, a SSTO rocket launching an identical payload mass and using the same propellants will always require a substantially smaller structural coefficient to achieve the same delta-v. Given that current materials technology places a lower limit of approximately 0.1 on the smallest structural coefficients attainable, reusable SSTO vehicles are typically an impractical choice even when using the highest performance propellants available.
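A minimal numerical sketch of the growth-factor relation derived above; the delta-v, specific impulse, and payload figures are illustrative assumptions rather than data for any particular vehicle:

import math

def ssto_gross_mass(payload_kg, delta_v, isp, eps, g0=9.80665):
    # Gross (initial) mass from m0 = (1 - eps) * m_pl / (exp(-dv/ve) - eps),
    # where ve = Isp * g0. Returns None when eps exceeds the asymptotic
    # limit and the mission can no longer be met.
    ve = isp * g0
    denom = math.exp(-delta_v / ve) - eps
    if denom <= 0:
        return None
    return (1 - eps) * payload_kg / denom

# Assumed mission: ~9.2 km/s effective delta-v to LEO, hydrogen/oxygen
# Isp of 450 s, 1000 kg payload.
for eps in (0.08, 0.10, 0.12):
    m0 = ssto_gross_mass(1000, 9200, 450, eps)
    print(eps, "infeasible" if m0 is None else f"{m0:,.0f} kg gross")

Under these assumptions the gross mass grows roughly tenfold as the structural coefficient rises from 0.08 to 0.12, illustrating how steeply SSTO sizing responds to structural efficiency.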
It is easier to achieve SSTO from a body with lower gravitational pull than Earth, such as the Moon or Mars. The Apollo Lunar Module ascended from the lunar surface to lunar orbit in a single stage.
A detailed study into SSTO vehicles was prepared by Chrysler Corporation's Space Division in 1970–1971 under NASA contract NAS8-26341. Their proposal (Shuttle SERV) was an enormous vehicle with a very large payload capacity, utilizing jet engines for (vertical) landing. While the technical problems seemed to be solvable, the USAF required a winged design that led to the Shuttle as we know it today.
The uncrewed DC-X technology demonstrator, originally developed by McDonnell Douglas for the Strategic Defense Initiative (SDI) program office, was an attempt to build a vehicle that could lead to an SSTO vehicle. The one-third-size test craft was operated and maintained by a small team of three people based out of a trailer, and the craft was once relaunched less than 24 hours after landing. Although the test program was not without mishap (including a minor explosion), the DC-X demonstrated that the maintenance aspects of the concept were sound. That project was cancelled when it landed with three of four legs deployed, tipped over, and exploded on the fourth flight after transferring management from the Strategic Defense Initiative Organization to NASA.
The Aquarius Launch Vehicle was designed to bring bulk materials to orbit as cheaply as possible.
Current and previous SSTO projects include the Japanese Kankoh-maru project, ARCA Haas 2C, and the Indian Avatar spaceplane.
The British Government partnered with the ESA in 2010 to promote a single-stage to orbit spaceplane concept called Skylon. This design was pioneered by Reaction Engines Limited (REL), a company founded by Alan Bond after HOTOL was canceled. The Skylon spaceplane has been positively received by the British government and the British Interplanetary Society. Following a successful propulsion system test that was audited by ESA's propulsion division in mid-2012, REL announced that it would begin a three-and-a-half-year project to develop and build a test jig of the Sabre engine to prove the engine's performance across its air-breathing and rocket modes. In November 2012, it was announced that a key test of the engine precooler had been successfully completed, and that ESA had verified the precooler's design. The project's development is now allowed to advance to its next phase, which involves the construction and testing of a full-scale prototype engine.
Many studies have shown that regardless of selected technology, the most effective cost reduction technique is economies of scale. Merely launching a large total number reduces the manufacturing costs per vehicle, similar to how the mass production of automobiles brought about great increases in affordability.
Using this concept, some aerospace analysts believe the way to lower launch costs is the exact opposite of SSTO. Whereas reusable SSTOs would reduce per launch costs by making a reusable high-tech vehicle that launches frequently with low maintenance, the "mass production" approach views the technical advances as a source of the cost problem in the first place. By simply building and launching large quantities of rockets, and hence launching a large volume of payload, costs can be brought down. This approach was attempted in the late 1970s and early 1980s by the West German company OTRAG, whose rocket was launched from a site in what is now the Democratic Republic of the Congo.
This is somewhat similar to the approach some previous systems have taken, using simple engine systems with "low-tech" fuels, as the Russian and Chinese space programs still do.
An alternative to scale is to make the discarded stages practically reusable: this is the goal of the SpaceX reusable launch system development program and their Falcon 9, Falcon Heavy, and Starship. A similar approach is being pursued by Blue Origin, using New Glenn. | https://en.wikipedia.org/wiki?curid=29398 |
Structural biology
Structural biology is a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules (especially proteins, made up of amino acids; RNA or DNA, made up of nucleotides; and membranes, made up of lipids), how they acquire the structures they have, and how alterations in their structures affect their function. This subject is of great interest to biologists because macromolecules carry out most of the functions of cells, and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the "tertiary structure" of molecules, depends in a complicated way on each molecule's basic composition, or "primary structure."
Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include X-ray crystallography, nuclear magnetic resonance (NMR) spectroscopy, and cryogenic electron microscopy (cryo-EM), among others.
Most often researchers use them to study the "native states" of macromolecules. But variations on these methods are also used to watch nascent or denatured molecules assume or reassume their native states. See protein folding.
A third approach that structural biologists take to understanding structure is bioinformatics to look for patterns among the diverse sequences that give rise to particular shapes. Researchers often can deduce aspects of the structure of integral membrane proteins based on the membrane topology predicted by hydrophobicity analysis. See protein structure prediction.
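A minimal sketch of such a hydrophobicity analysis, assuming a sliding-window average of Kyte-Doolittle hydropathy values; the residue subset, window length, and sequence are illustrative choices rather than a prescribed protocol:

# Sliding-window hydropathy in miniature: sustained stretches of strongly
# positive average hydropathy are the classic signal of a candidate
# transmembrane segment.
KD = {  # a few representative Kyte-Doolittle hydropathy values
    'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'A': 1.8, 'G': -0.4,
    'S': -0.8, 'T': -0.7, 'K': -3.9, 'R': -4.5, 'D': -3.5, 'E': -3.5,
}

def hydropathy(seq, window=7):
    # Average hydropathy over each length-`window` slice of the sequence.
    return [sum(KD[aa] for aa in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

seq = "KDEGSAILLVVAFIGTSKDE"  # hypothetical: polar ends, hydrophobic middle
print(["%.1f" % h for h in hydropathy(seq)])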
In the past few years it has become possible for highly accurate physical molecular models to complement the "in silico" study of biological structures. Examples of these models can be found in the Protein Data Bank.
Computational techniques like Molecular Dynamics simulations can be used in conjunction with empirical structure determination strategies to extend and study protein structure, conformation and function. | https://en.wikipedia.org/wiki?curid=29400 |
Sikhs
Sikhs are people associated with Sikhism, a monotheistic religion that originated in the 15th century, in the Punjab region in the northern part of the Indian subcontinent, based on the revelation of Guru Nanak. The term "Sikh" has its origin in the Sanskrit word शिष्य ("shishya"), meaning a "disciple" or a "student". According to Article I of the "Sikh Rehat Maryada" (Sikh code of conduct), a Sikh is: Any human being who faithfully believes in One Immortal Being; ten Gurus, from Guru Nanak to Guru Gobind Singh; Guru Granth Sahib; the teachings of the ten Gurus and the baptism bequeathed by the tenth Guru. Male Sikhs generally have "Singh" ("lion") as their middle or last name, though not all Singhs are Sikhs. Likewise, female Sikhs have "Kaur" ("princess") as their middle or last name. Sikhs who have undergone the "Khanḍe-kī-Pahul" ("baptism by Khanda") may also be recognized by the five Ks: "kesh", uncut hair, which is kept covered usually by a turban; "kara", an iron or steel bracelet; "kirpan", a dagger-like sword tucked into a "gatra" strap or a "kamal kasar" belt; "kachera", a cotton undergarment; and "kanga", a small wooden comb.
The Punjab region of the Indian subcontinent has been the historic homeland of the Sikhs, having even been ruled by the Sikhs for significant parts of the 18th and 19th centuries. Today, the Punjab state in northwest India has a majority Sikh population, and sizeable communities of Sikhs exist around the world. Many countries, such as the United Kingdom, recognize Sikhs as a designated religion on their censuses, and, as of 2020, Sikhs are considered as a separate ethnic group in the United States.
Guru Nanak (1469–1539), the founder of Sikhism, was born to Mehta Kalu and Mata Tripta in the village of Talwandi, present-day Nankana Sahib, near Lahore. Throughout his life, Guru Nanak was a religious leader and social reformer. However, Sikh political history may be said to begin in 1606 with the death of the fifth Sikh guru, Guru Arjan Dev. Religious practices were formalised by Guru Gobind Singh on 30 March 1699, when the Guru initiated five people from a variety of social backgrounds, known as the "Panj Piare" ("beloved five") to form a collective body of initiated Sikhs, known as the "Khalsa" ("pure").
During the rule of the Mughal Empire in India (1556–1707), several Sikh gurus were killed by the Mughals for opposing their persecution of minority religious communities, including Sikhs. The Sikhs subsequently militarized to oppose Mughal rule.
After defeating the Afghans and Mughals, sovereign states called Misls were formed under Jassa Singh Ahluwalia. The Confederacy of these states would be unified and transformed into the Sikh Empire under Maharaja Ranjit Singh Bahadur. This era would be characterised by religious tolerance and pluralism, including Christians, Muslims, and Hindus in positions of power. Its secular administration implemented military, economic, and governmental reforms. The empire is considered the zenith of political Sikhism, encompassing Kashmir, Ladakh, and Peshawar. Hari Singh Nalwa, the commander-in-chief of the Sikh Khalsa Army in the North West Frontier, expanded the confederacy to the Khyber Pass.
After the annexation of the Sikh kingdom by the British, the latter began recruiting from that area after recognizing the martial qualities of the Sikhs and Punjabis in general. During the 1857 Indian mutiny, the Sikhs stayed loyal to the British, resulting in heavy recruitment from Punjab to the colonial army for the next 90 years of the British Raj. The distinct turban that differentiates a Sikh from other turban wearers is a relic of the rules of the British Indian Army. The British colonial rule saw the emergence of many reform movements in India, including Punjab, such as the formation of the First and Second Singh Sabha in 1873 and 1879 respectively. The Sikh leaders of the Singh Sabha worked to offer a clear definition of Sikh identity and tried to purify Sikh belief and practice.
The later years of British colonial rule saw the emergence of the Akali movement to bring reform in the gurdwaras during the early 1920s. The movement led to the introduction of "Sikh Gurdwara Bill" in 1925, which placed all the historical Sikh shrines in India under the control of the Shiromani Gurdwara Parbandhak Committee.
The months leading up to the 1947 partition of India were marked by conflict in the Punjab between Sikhs and Muslims. This caused the religious migration of Punjabi Sikhs and Hindus from West Punjab to the east (modern India), mirroring a simultaneous religious migration of Punjabi Muslims from East Punjab to the west (modern Pakistan).
The 1960s saw growing animosity between Sikhs and Hindus in India, with the Sikhs demanding the creation of a Punjabi state on a linguistic basis similar to other states in India. This was promised to Sikh leader Master Tara Singh by Jawaharlal Nehru, in return for Sikh political support during negotiations for Indian independence. Although the Sikhs obtained the Punjab, they lost Hindi-speaking areas to Himachal Pradesh, Haryana, and Rajasthan. In 1966, on the first of November, Chandigarh was made a union territory and the capital of Punjab and Haryana.
Sikh leader Jarnail Singh Bhindranwale triggered violence in the Punjab, resulting in then-prime minister Indira Gandhi ordering an operation to remove Bhindranwale from the Golden Temple in Operation Blue Star. This would subsequently lead to Gandhi's assassination by her Sikh bodyguards. Her assassination would be followed by an explosion of violence against Sikh communities and the killing of thousands of Sikhs throughout India. Since 1984, relations between Sikhs and Hindus have moved toward a rapprochement aided by economic prosperity. However, a 2002 claim by the Hindu right-wing Rashtriya Swayamsevak Sangh (RSS) that "Sikhs are Hindus" disturbed Sikh sensibilities.
During the day of Vaisakhi in 1999, Sikhs worldwide celebrated the 300th anniversary of the creation of the Khalsa. Canada Post honoured Sikh Canadians with a commemorative stamp in conjunction with the anniversary. Likewise, on 9 April 1999 Indian president K. R. Narayanan issued a stamp commemorating the 300th anniversary of the Khalsa as well.
In 2004, Manmohan Singh became the first Sikh Prime Minister of India, and first Sikh Head of government in the world.
From the Guru Granth Sahib,
The five Ks ("panj kakaar") are five articles of faith which all baptized ("Amritdhari") Sikhs are obliged to wear. The symbols represent the ideals of Sikhism: honesty, equality, fidelity, meditating on Waheguru, and never bowing to tyranny.
The five symbols are: "kesh" (uncut hair), "kara" (an iron or steel bracelet), "kirpan" (a sword), "kachera" (a cotton undergarment), and "kanga" (a small wooden comb).
The Sikhs have a number of musical instruments, including the rebab, dilruba, taus, jori, and sarinda. Playing the sarangi was encouraged by Guru Hargobind. The rebab was played by Bhai Mardana as he accompanied Guru Nanak on his journeys. The jori and sarinda were introduced to Sikh devotional music by Guru Arjan. The "taus" (Persian for "peacock") was designed by Guru Hargobind, who supposedly heard a peacock singing and wanted to create an instrument mimicking its sounds. The dilruba was designed by Guru Gobind Singh at the request of his followers, who wanted a smaller instrument than the taus. After Japji Sahib, all of the shabad in the Guru Granth Sahib were composed as raags. This type of singing is known as Gurmat Sangeet.
When they marched into battle, the Sikhs would play a "ranjit nagara" ("victory drum") to boost morale. Nagaras (usually two to three feet in diameter, although some were up to five feet in diameter) are played with two sticks. The beat of the large drums, and the raising of the Nishan Sahib, meant that the Singhs were on their way.
Numbering about 30 million worldwide, Sikhs make up 0.39% of the world population, approximately 83% of whom live in India. About 76% of all Sikhs live in the north Indian state of Punjab, forming the majority (about two-thirds) of the population. Substantial communities of Sikhs live in the Indian states or union territories of Chandigarh, where they form 13.11% of the population, Haryana (over 1.2 million), Rajasthan, West Bengal, Uttar Pradesh, Delhi, Maharashtra, Uttarakhand, Madhya Pradesh, Assam and Jammu and Kashmir. Another substantial community of Sikhs exists in the Canadian province of British Columbia, numbering 0.2 million or 5% of the total population; outside India, it is the only province or state in the world with Sikhism as the second most followed religion among the population.
Sikh migration from British India began in earnest during the second half of the 19th century, when the British completed their annexation of the Punjab. The British Raj recruited Sikhs for the Indian Civil Service (particularly the British Indian Army), which led to Sikh migration throughout India and the British Empire. During the Raj, semiskilled Sikh artisans were transported from the Punjab to British East Africa to help build railroads. Sikhs emigrated from India after World War II, most going to the United Kingdom but many to North America. Some Sikhs who had settled in eastern Africa were expelled by Ugandan dictator Idi Amin in 1972. Economics is a major factor in Sikh migration, and significant communities exist in the United Kingdom, the United States, Malaysia, East Africa, Australia, Singapore and Thailand. Due to this, Canada is the country that has the highest number of Sikhs in proportion to the population in the world at 1.4 per cent of Canada's total population.
After the Partition of India in 1947, many Sikhs from what would become the Punjab of Pakistan migrated to India as well as to Afghanistan in fear of persecution. Afghanistan was home to hundreds of thousands of Sikhs and Hindus as of the 1970s, but due to the wars in Afghanistan by the 2010s the vast majority of Afghan Sikhs had migrated to India, Pakistan or the west.
Although the rate of Sikh migration from the Punjab has remained high, traditional patterns of Sikh migration favouring English-speaking countries (particularly the United Kingdom) have changed during the past decade due to stricter immigration laws. Moliner (2006) wrote that as a consequence of Sikh migration to the UK becoming "virtually impossible since the late 1970s," migration patterns evolved to continental Europe. Italy is a rapidly growing destination for Sikh migration, with Reggio Emilia and Vicenza having significant Sikh population clusters. Italian Sikhs are generally involved in agriculture, agricultural processing, the manufacture of machine tools, and horticulture.
Johnson and Barrett (2004) estimate that the global Sikh population increases annually by 392,633 (1.7% per year, based on 2004 figures); this percentage includes births, deaths, and conversions. Primarily for socio-economic reasons, Indian Sikhs have the lowest adjusted growth rate of any major religious group in India, at 16.9 percent per decade (estimated from 1991 to 2001). The Sikh population has the lowest gender balance in India, with only 903 women per 1,000 men according to the 2011 Indian census.
Sikhism has never actively sought converts, and Sikhs have therefore remained a relatively homogeneous ethnic group. The religion does not advocate discrimination against any caste or creed, yet social stratification exists in the Sikh community despite Guru Nanak's calls in the Guru Granth Sahib for treating everyone equally. As such, Sikhs comprise a number of sub-ethnic groups.
Like Guru Nanak, the other Sikh Gurus denounced the hierarchy of the caste system, even though they all came from a single caste, the Khatris. An order of Punjabi Sikhs, the Nihang or Akalis, was formed during Ranjit Singh's time. Under their leader, Akali Phula Singh, they won many battles for the Sikh Confederacy during the early 19th century.
Over 60% of Sikhs belong to the Jat (or Jatt) caste, traditionally agrarian in occupation. Of these, most descend from the Shudra sub-caste of Jatts. Despite being very small in numbers, the mercantile Khatri and Arora castes also wield considerable influence within the Sikh community. Other common Sikh castes include Ahluwalias (brewers), "Kambojs" or "Kambos" (rural caste), "Ramgarhias" (artisans), Rajputs (kshatriyas), "Sainis" (kshatriyas), "Rai" Sikh (rural caste), "Labanas" (merchants), and "Kumhars", as well as the two Dalit castes known in Sikh terminology as the "Mazhabi" and the "Ravidasias".
Some Sikhs belonging to the dominant landowning castes have not shed all their prejudices against the Dalits. Dalits were allowed entry into village gurdwaras, but in some gurdwaras they were not permitted to cook or serve "langar" (the communal meal). Therefore, wherever they could mobilise resources, the Sikh Dalits of Punjab have tried to construct their own gurdwaras and other local-level institutions in order to attain a certain degree of cultural autonomy. In 1953, the Sikh leader and activist Master Tara Singh succeeded in persuading the Indian government to include Sikh castes of the converted untouchables in the list of scheduled castes. In the Shiromani Gurdwara Prabandhak Committee, 20 of the 140 seats are reserved for low-caste Sikhs.
Other castes (over 1,000 members) include the Arain, Bhatra, Bairagi, Bania, Basith, Bawaria, Bazigar, Bhabra, Chamar, Chhimba (cotton farmers), Darzi, Dhobi, Gujar, Jhinwar, Kahar, Kalal, Kumhar, Lohar, Mahtam, Megh, Mirasi, Mochi, Mohyal, Nai, Ramgarhia, Sansi, Sudh, Tarkhan, and Kashyap.
The 3HO organisation claims to have inspired a moderate growth in non-Indian adherents of Sikhism. In 1998, an estimated 7,800 3HO Sikhs, known colloquially as "gora" Sikhs, were mainly centred around Española, New Mexico, and Los Angeles, California.
During the late 19th and early 20th centuries, Sikhs began to emigrate to East Africa, the Far East, Canada, the United States and the United Kingdom. In 1907 the Khalsa Diwan Society was established in Vancouver, and four years later the first gurdwara was established in London. In 1912 the first gurdwara in the United States was founded in Stockton, California. There was a large Sikh immigration to Canada. While Sikhs were temporarily disenfranchised several decades ago, 17 of the 338 Canadian legislators are currently Sikhs, a share disproportionately higher than their share of the total Canadian population. Because Sikhs wear turbans and keep beards (among other physical similarities to Middle Eastern men), Sikh men in Western countries have been mistaken for Muslims, Arabs or Afghans since the September 11 attacks and the Iraq War. Several days after the 9/11 attacks, the Sikh-American gas station owner Balbir Singh Sodhi was murdered in Arizona by a man who took Sodhi to be a member of al-Qaeda, the first recorded hate crime in America motivated by 9/11. CNN later reported an increase in hate crimes against Sikh men in the US and the UK after the 9/11 attacks.
In an attempt to foster Sikh leaders in the Western world, youth initiatives by a number of organisations exist. The Sikh Youth Alliance of North America sponsors an annual Sikh Youth Symposium, a public-speaking and debate competition held in gurdwaras throughout the US and Canada.
The Sikh diaspora has been most successful in North America, and UK Sikhs have the highest percentage of home ownership (82%) of any religious community. UK Sikhs are the second-wealthiest religious group in the UK (after the Jewish community) by median total household wealth.
In May 2019, the UK government exempted the kirpan from its list of banned knives, passing an amendment that allows Sikhs in the country to carry kirpans and use them during religious and cultural functions. The bill had been amended the previous year to ensure that it would not impact the right of the British Sikh community to possess and supply kirpans, or religious swords. Similarly, the Sikh American Legal Defense and Education Fund overturned a 1925 Oregon law banning the wearing of turbans by teachers and government officials.
Sikh girls in the UK have also been among the victims of Muslim grooming gangs.
Historically, most Indians have been farmers and 66 per cent of the Indian population are engaged in agriculture. Indian Sikhs are employed in agriculture to a lesser extent; India's 2001 census found 39 per cent of the working population of the Punjab employed in this sector. According to the Swedish political scientist Ishtiaq Ahmad, a factor in the success of the Indian green revolution was the "Sikh cultivator, often the Jat, whose courage, perseverance, spirit of enterprise and muscle prowess proved crucial." However, not all aspects of the green revolution were beneficial. Indian physicist Vandana Shiva wrote that the green revolution made the "negative and destructive impacts of science [i.e. the green revolution] on nature and society" invisible, and was a catalyst for Punjabi Sikh and Hindu tensions despite a growth in material wealth.
Manmohan Singh is an Indian economist, academic, and politician who served as the 13th Prime Minister of India from 2004 to 2014. The first Sikh in office, Singh was also the first prime minister since Jawaharlal Nehru to be re-elected after completing a full five-year term.
In the United States, the former US Ambassador to the United Nations and former governor of South Carolina, Nikki Haley, was born and raised as a Sikh, but converted to Christianity after her marriage. She still actively attends both Sikh and Christian services.
Notable Sikhs in science include nuclear scientist Piara Singh Gill, who worked on the Manhattan Project; fibre-optics pioneer Narinder Singh Kapany; and physicist, science writer and broadcaster Simon Singh.
In business, the UK-based clothing retailers New Look and the Thai-based JASPAL were founded by Sikhs. India's largest pharmaceutical company, Ranbaxy Laboratories, is headed by Sikhs. In Singapore, Kartar Singh Thakral expanded his family's trading business, Thakral Holdings, and is Singapore's 25th-richest person. Sikh Bob Singh Dhillon is the first Indo-Canadian billionaire.
In sports, Sikhs include England cricketer Monty Panesar; former 400-metre runner Milkha Singh; Indian wrestler and actor Dara Singh; former Indian hockey team captains Ajitpal Singh and Balbir Singh Sr.; former Indian cricket captain Bishen Singh Bedi; Harbhajan Singh, India's most successful off-spin bowler; Yuvraj Singh, World Cup-winning all-rounder; Maninder Singh, World Cup-winning off spinner; and Navjot Singh Sidhu, former Indian cricketer turned politician.
Sikhs in Bollywood, in the arts in general, include poet and lyricist Rajkavi Inderjeet Singh Tulsi; Gulzar; Jagjit Singh; Dharmendra; Sunny Deol; writer Khushwant Singh; actresses Neetu Singh, Simran Judge, Poonam Dhillon, Mahi Gill, Esha Deol, Parminder Nagra, Gul Panag, Mona Singh, Namrata Singh Gujral; and directors Gurinder Chadha and Parminder Gill.
According to a 1994 estimate, Punjabis (Sikhs and non-Sikhs) comprised 10 to 15% of all ranks in the Indian Army. The Indian government does not release religious or ethnic origins of the military personnel, but a 1991 report by Tim McGirk estimated that 20% of Indian Army officers were Sikhs. Together with the Gurkhas recruited from Nepal, the Maratha Light Infantry from Maharashtra and the Jat Regiment, the Sikhs are one of the few communities to have exclusive regiments in the Indian Army. The Sikh Regiment is one of the most-decorated regiments in the army, with 73 Battle Honours, 14 Victoria Crosses, 21 first-class Indian Orders of Merit (equivalent to the Victoria Cross), 15 Theatre Honours, 5 COAS Unit Citations, two Param Vir Chakras, 14 Maha Vir Chakras, 5 Kirti Chakras, 67 Vir Chakras, and 1,596 other awards. The highest-ranking general in the history of the Indian Air Force is a Punjabi Sikh, Marshal of the Air Force Arjan Singh. Plans by the United Kingdom Ministry of Defence for a Sikh infantry regiment were scrapped in June 2007.
Sikhs supported the British during the Indian Rebellion of 1857. By the beginning of World War I, Sikhs in the British Indian Army totaled over 100,000 (20 per cent of the force). Until 1945 fourteen Victoria Crosses (VC) were awarded to Sikhs, a per-capita regimental record. In 2002 the names of all Sikh VC and George Cross recipients were inscribed on the monument of the Memorial Gates on Constitution Hill, next to Buckingham Palace. Chanan Singh Dhillon was instrumental in campaigning for the memorial.
During World War I, Sikh battalions fought in Egypt, Palestine, Mesopotamia, Gallipoli and France. Six battalions of the Sikh Regiment were raised during World War II, serving in the Second Battle of El Alamein, the Burma and Italian campaigns and in Iraq and receiving 27 battle honours. Around the world, Sikhs are commemorated in Commonwealth cemeteries.
The Khalistan movement is a Sikh separatist movement, which seeks to create a separate country called Khalistān ("The Land of the Khalsa") in the Punjab region of South Asia to serve as a homeland for Sikhs. The territorial definition of the proposed country Khalistan encompasses both Punjab, India, and Punjab, Pakistan, and includes parts of Haryana, Himachal Pradesh, Jammu and Kashmir, and Rajasthan.
The Khalistan movement began as an expatriate venture. In 1971, the first explicit call for Khalistan was made in an advertisement published in the "New York Times" by Jagjit Singh Chohan, an expatriate. By proclaiming the formation of Khalistan, he was able to collect millions of dollars from the Sikh diaspora. On 12 April 1980 he declared the formation of the "National Council of Khalistan" at Anandpur Sahib, naming himself President of the Council and Balbir Singh Sandhu its Secretary General. In May 1980, Jagjit Singh Chohan travelled to London and announced the formation of Khalistan. A similar announcement was made in Amritsar by Balbir Singh Sandhu, who released stamps and currency of Khalistan. The inaction of the authorities in Amritsar and elsewhere was decried by the Akali Dal, headed by the Sikh leader Harchand Singh Longowal, as a political stunt by the Congress(I) party of Indira Gandhi.
With the financial and political support of the Sikh diaspora, the movement flourished in the Indian state of Punjab, which has a Sikh-majority population, and reached its zenith in the late 1970s and 1980s, when the secessionist movement caused large-scale violence among the local population.
Operation Blue Star was an Indian military operation carried out between 1 and 8 June 1984, ordered by Prime Minister Indira Gandhi to remove the militant religious leader Jarnail Singh Bhindranwale and his armed followers from the buildings of the Harmandir Sahib complex in Amritsar, Punjab. In July 1983, the Sikh political party Akali Dal's president Harchand Singh Longowal had invited Bhindranwale to take up residence in the Golden Temple complex to evade arrest. Bhindranwale later made the sacred temple complex an armoury and headquarters. In the violent events leading up to Operation Blue Star, following the inception of the Akali Dharm Yudh Morcha, militants had killed 165 Hindus and Nirankaris; 39 Sikhs opposed to Bhindranwale were also killed. The total number of deaths was 410 in violent incidents and riots, while 1,180 people were injured. Casualty figures for the Army were 83 dead and 249 injured. According to the official estimate presented by the Indian government, 1,592 people were apprehended, and there were 493 combined militant and civilian casualties. High civilian casualties were attributed to militants using pilgrims trapped inside the temple as human shields.
The assassination of Prime Minister Indira Gandhi and the bombing of an Air India plane, killing 328 passengers, were both carried out by Sikhs in the aftermath. Various pro-Khalistan outfits have been involved in a separatist movement against the Government of India ever since. There are claims of funding from Sikhs outside India intended to attract young people into these pro-Khalistan militant groups.
In January 1986, the Golden Temple was occupied by militants belonging to All India Sikh Students Federation and Damdami Taksal. On 26 January 1986, a gathering known as the Sarbat Khalsa (a de facto parliament) passed a resolution ("gurmattā") favouring the creation of Khalistan. Subsequently, a number of rebel militant groups in favour of Khalistan waged a major insurgency against the government of India. Indian security forces suppressed the insurgency in the early 1990s, but Sikh political groups such as the Khalsa Raj Party and SAD (A) continued to pursue an independent Khalistan through non-violent means. Pro-Khalistan organisations such as Dal Khalsa (International) are also active outside India, supported by a section of the Sikh diaspora.
In the 1990s the insurgency petered out, and the movement failed to reach its objective for multiple reasons, including a heavy police crackdown on separatists, divisions among the Sikhs, and loss of support from the Sikh population.
Sikh art and culture are nearly synonymous with that of the Punjab, and Sikhs are easily recognised by their distinctive turban (Dastar). The Punjab has been called India's melting pot, due to the confluence of invading cultures from the rivers from which the region gets its name. Sikh culture is therefore a synthesis of cultures. Sikhism has forged a unique architecture, which S. S. Bhatti described as "inspired by Guru Nanak's creative mysticism" and "is a mute harbinger of holistic humanism based on pragmatic spirituality".
During the Mughal and Afghan persecution of the Sikhs during the 17th and 18th centuries, the latter were concerned with preserving their religion and gave little thought to art and culture. With the rise of Ranjit Singh and the Sikh Raj in Lahore and Delhi, there was a change in the landscape of art and culture in the Punjab; Hindus and Sikhs could build decorated shrines without the fear of destruction or looting.
The Sikh Confederacy was the catalyst for a uniquely Sikh form of expression, with Ranjit Singh commissioning forts, palaces, bungas (residential places) and colleges in a Sikh style. Sikh architecture is characterised by gilded fluted domes, cupolas, kiosks, stone lanterns, ornate balusters and square roofs. A pinnacle of Sikh style is Harmandir Sahib (also known as the Golden Temple) in Amritsar.
Sikh culture is influenced by militaristic motifs (with the Khanda the most obvious), and most Sikh artifacts—except for the relics of the Gurus—have a military theme. This theme is evident in the Sikh festivals of Hola Mohalla and Vaisakhi, which feature marching and displays of valor.
Although the art and culture of the Sikh diaspora have merged with that of other Indo-immigrant groups into categories like "British Asian", "Indo-Canadian" and "Desi-Culture", a minor cultural phenomenon which can be described as "political Sikh" has arisen. The art of diaspora Sikhs like Amarjeet Kaur Nandhra and Amrit and Rabindra Kaur Singh (the "Singh Twins") is influenced by their Sikhism and current affairs in the Punjab.
Bhangra and Giddha are two forms of Punjabi folk dancing which have been adapted and pioneered by Sikhs. Punjabi Sikhs have championed these forms of expression worldwide, resulting in Sikh culture becoming linked to Bhangra (although "Bhangra is not a Sikh institution but a Punjabi one").
Sikh painting is a direct offshoot of the Kangra school of painting. In 1810, Ranjeet Singh (1780–1839) occupied Kangra Fort and appointed Sardar Desa Singh Majithia his governor of the Punjab hills. In 1813 the Sikh army occupied Guler State, and Raja Bhup Singh became a vassal of the Sikhs. With the Sikh kingdom of Lahore becoming the paramount power, some of the Pahari painters from Guler migrated to Lahore for the patronage of Maharaja Ranjeet Singh and his Sardars.
The Sikh school adapted Kangra painting to Sikh needs and ideals. Its main subjects are the ten Sikh gurus and stories from Guru Nanak's Janamsakhis. The tenth Guru, Gobind Singh, left a deep impression on the followers of the new faith because of his courage and sacrifices. Hunting scenes and portraits are also common in Sikh painting. | https://en.wikipedia.org/wiki?curid=29405 |
Superworld
Superworld is a superhero-themed role-playing game published by Chaosium in 1983. Written by "Basic Role-Playing" and "RuneQuest" author Steve Perrin, "Superworld" began as one third of the "Worlds of Wonder" product, which also included a generic fantasy setting, "Magic World", and a generic science fiction setting, "Future World", all using the same core "Basic Role-Playing" rules. Only "Superworld" became a game in its own right.
"Superworld" is based on the traditional Chaosium "Basic Role-Playing" system augmented by super-powers.
Seven characteristics (Strength, Constitution, Size, Intelligence, Power, Dexterity, Appearance) are rolled with dice (2d6+6, rather than the 3d6 used for many other "Basic Role-Playing" games). The sum of these characteristics gives a total of Hero Points used to buy super powers.
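To make the character-generation arithmetic concrete, the following minimal Python sketch rolls the seven characteristics and totals them into a Hero Point budget. It assumes only what the text above states (each characteristic is 2d6+6 and their sum forms the budget); the function names and output format are illustrative, not part of the published rules.

import random

# The seven Superworld characteristics named above.
CHARACTERISTICS = ["Strength", "Constitution", "Size", "Intelligence",
                   "Power", "Dexterity", "Appearance"]

def roll_2d6_plus_6():
    # Roll 2d6+6, yielding a value from 8 to 18.
    return random.randint(1, 6) + random.randint(1, 6) + 6

def generate_character():
    # Roll all seven characteristics; their sum is the Hero Point budget.
    stats = {name: roll_2d6_plus_6() for name in CHARACTERISTICS}
    hero_points = sum(stats.values())
    return stats, hero_points

stats, hero_points = generate_character()
for name, value in stats.items():
    print(f"{name:12} {value}")
print("Hero Points:", hero_points)  # always between 56 and 126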
The super powers system follows the "Champions" model of powers that are described by their effects. For example, one does not buy "Laser Vision", but the effect "Energy Blast" and specifies that it is a laser emitted by the hero's eyes. Each effect can be modified by Advantages (less energy expenditure, for example) or Disadvantages (reduced number of uses, for example) which increase or reduce the cost of a power.
Hero Points can also be used to buy skills or increase characteristics. It is possible to get more Hero Points for character creation by choosing Disabilities for the character, such as Public Identity, Vulnerability to a Substance, Psychological Problems, etc. More Hero Points would be awarded for experience at the end of a game session.
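The point economy described in the last two paragraphs can be sketched as simple bookkeeping, as below. Every number in this sketch (the base cost, the scaling applied by Advantages and Disadvantages, the points granted for a Disability) is an invented placeholder showing only the shape of the mechanic; the actual Superworld cost tables are not reproduced here.

from dataclasses import dataclass

@dataclass
class Power:
    # An effect-based power, e.g. "Energy Blast" described as laser vision.
    effect: str
    base_cost: int          # placeholder: real costs come from the rulebook
    advantages: int = 0     # each Advantage raises the cost
    disadvantages: int = 0  # each Disadvantage lowers it

    def cost(self):
        # Invented scaling: +25% per Advantage, -25% per Disadvantage.
        scale = 1 + 0.25 * (self.advantages - self.disadvantages)
        return max(1, round(self.base_cost * scale))

budget = 90   # Hero Points rolled from characteristics
budget += 10  # placeholder bonus for taking a Disability such as Public Identity
laser_eyes = Power("Energy Blast (laser, from the eyes)", base_cost=30,
                   disadvantages=1)  # e.g. reduced number of uses
budget -= laser_eyes.cost()
print(f"Spent {laser_eyes.cost()} Hero Points; {budget} remain")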
The system functions in the same way as the other "Basic Role-Playing" games, by rolling percentile dice against skills. Rolls well under the number needed can produce increased effect as Specials (equivalent to Impales in "RuneQuest") or Criticals, and very high rolls can cause critical failures (Fumbles). Combat rules have many options and take into account three types of energy for damage: Kinetic, Electric, and Radiation.
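A rough sketch of such a percentile check follows. The success band (rolling at or under the skill) comes from the text; the Special, Critical, and Fumble thresholds used here (one-fifth and one-twentieth of the skill, and rolls of 96-00) are assumptions borrowed from common "Basic Role-Playing" practice rather than the exact Superworld tables.

import random

def skill_check(skill):
    # Percentile skill check with assumed BRP-style thresholds.
    roll = random.randint(1, 100)
    if roll <= max(1, skill // 20):
        return f"Critical ({roll} vs {skill})"
    if roll <= max(1, skill // 5):
        return f"Special ({roll} vs {skill})"  # analogous to a RuneQuest Impale
    if roll <= skill:
        return f"Success ({roll} vs {skill})"
    if roll >= 96:
        return f"Fumble ({roll} vs {skill})"   # critical failure
    return f"Failure ({roll} vs {skill})"

for _ in range(5):
    print(skill_check(65))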
The game box contains three rules booklets, a booklet of character sheets, a booklet of tables for the Gamemaster, a page of cardboard figure silhouettes to be cut out, and a set of 6-, 8-, and 20-sided dice. Printings from 1984 also contain a 4-page booklet of errata.
(1984) Scenario. Author: Ken Rolston. Set in a high school, and designed for teenage characters. It comes with six young pregenerated heroes, or lets players use their own. Beginning with the funeral of one of their friends, it sets the heroes on the track of a drug distribution network in their school, directed by a villain called Dr. Drugs.
It also includes rules for the creation and management of adolescent characters that have just discovered their powers, and a plan of Warren G. Harding High School, though the scenario recommends substituting the school in which the GM and the players studied.
(1985) Rules supplement.
Many authors: Stephen R. Marsh, Stephen Perrin, Ian Lee Starcher, Anthony Affronti, Jimmy Akin II, William A Barton, Norman Doege, Bruce Dresselhaus, Ray Greer, Zoran Kovacich, George MacDonald, Steve Maurer, Sandy Petersen, Wayne Shaw, John Sullivan—most are listed because they provided one or more optional rules.
Includes:
(1984) Scenario / Campaign.
Authors: Stephen Perrin, Yurek Chodak, Donald Harrington, Charles Huber.
A linked collection of three scenarios based on the members of the criminal organization HAVOC. All the characters are presented with characteristics for use with three different systems, "Superworld", "Champions" and "Villains & Vigilantes". Each may be played separately, or as part of a campaign.
Steve Marsh reviewed "Superworld" in "Ares Magazine" #17 and commented that "The game is anything but chaotic, but should create change in any gaming group that sees it. It is well done, and worth the price."
Crede Lambard reviewed "Superworld" in "Space Gamer" No. 70. Lambard commented that ""Superworld" is very good. I doubt that it will ever supplant "Champions", but it certainly supplements it . . . especially now that both Hero Games and Chaosium are putting out adventures with stats for both games."
The "Wild Cards" series of science fiction books came from a "Superworld" campaign gamemastered by George R. R. Martin, and played in by other science fiction writers. | https://en.wikipedia.org/wiki?curid=29407 |
Samuel Taylor Coleridge
Samuel Taylor Coleridge (21 October 1772 – 25 July 1834) was an English poet, literary critic, philosopher and theologian who, with his friend William Wordsworth, was a founder of the Romantic Movement in England and a member of the Lake Poets. He also shared volumes and collaborated with Charles Lamb, Robert Southey, and Charles Lloyd. He wrote the poems "The Rime of the Ancient Mariner" and "Kubla Khan", as well as the major prose work "Biographia Literaria". His critical work, especially on William Shakespeare, was highly influential, and he helped introduce German idealist philosophy to English-speaking culture. Coleridge coined many familiar words and phrases, including suspension of disbelief. He had a major influence on Ralph Waldo Emerson and American transcendentalism.
Throughout his adult life Coleridge had crippling bouts of anxiety and depression; it has been speculated that he had bipolar disorder, which had not been defined during his lifetime. He was physically unhealthy, which may have stemmed from a bout of rheumatic fever and other childhood illnesses. He was treated for these conditions with laudanum, which fostered a lifelong opium addiction.
Coleridge was born on 21 October 1772 in the town of Ottery St Mary in Devon, England. Samuel's father was the Reverend John Coleridge (1718–1781), the well-respected vicar of St Mary's Church, Ottery St Mary and was headmaster of the King's School, a free grammar school established by King Henry VIII (1509–1547) in the town. He had previously been master of Hugh Squier's School in South Molton, Devon, and lecturer of nearby Molland.
John Coleridge had three children by his first wife. Samuel was the youngest of ten by the Reverend Mr. Coleridge's second wife, Anne Bowden (1726–1809), probably the daughter of John Bowden, Mayor of South Molton, Devon, in 1726. Coleridge suggests that he "took no pleasure in boyish sports" but instead read "incessantly" and played by himself. After John Coleridge died in 1781, 8-year-old Samuel was sent to Christ's Hospital, a charity school which was founded in the 16th century in Greyfriars, London, where he remained throughout his childhood, studying and writing poetry. At that school Coleridge became friends with Charles Lamb, a schoolmate, and studied the works of Virgil and William Lisle Bowles.
In one of a series of autobiographical letters written to Thomas Poole, Coleridge wrote: "At six years old I remember to have read "Belisarius", "Robinson Crusoe", and "Philip Quarll" – and then I found the "Arabian Nights' Entertainments" – one tale of which (the tale of a man who was compelled to seek for a pure virgin) made so deep an impression on me (I had read it in the evening while my mother was mending stockings) that I was haunted by spectres whenever I was in the dark – and I distinctly remember the anxious and fearful eagerness with which I used to watch the window in which the books lay – and whenever the sun lay upon them, I would seize it, carry it by the wall, and bask, and read."
Coleridge seems to have appreciated his teacher, as he wrote in recollections of his school days in "Biographia Literaria":
I enjoyed the inestimable advantage of a very sensible, though at the same time, a very severe master [...] At the same time that we were studying the Greek Tragic Poets, he made us read Shakespeare and Milton as lessons: and they were the lessons too, which required most time and trouble to bring up, so as to escape his censure. I learnt from him, that Poetry, even that of the loftiest, and, seemingly, that of the wildest odes, had a logic of its own, as severe as that of science; and more difficult, because more subtle, more complex, and dependent on more, and more fugitive causes. [...] In our own English compositions (at least for the last three years of our school education) he showed no mercy to phrase, metaphor, or image, unsupported by a sound sense, or where the same sense might have been conveyed with equal force and dignity in plainer words... In fancy I can almost hear him now, exclaiming "Harp? Harp? Lyre? Pen and ink, boy, you mean! Muse, boy, Muse? your Nurse's daughter, you mean! Pierian spring? Oh aye! the cloister-pump, I suppose!" [...] Be this as it may, there was one custom of our master's, which I cannot pass over in silence, because I think it ... worthy of imitation. He would often permit our theme exercises, ... to accumulate, till each lad had four or five to be looked over. Then placing the whole number abreast on his desk, he would ask the writer, why this or that sentence might not have found as appropriate a place under this or that other thesis: and if no satisfying answer could be returned, and two faults of the same kind were found in one exercise, the irrevocable verdict followed, the exercise was torn up, and another on the same subject to be produced, in addition to the tasks of the day.
He later wrote of his loneliness at school in the poem "Frost at Midnight":
"With unclosed lids, already had I dreamt/Of my sweet birthplace."
From 1791 until 1794, Coleridge attended Jesus College, Cambridge. In 1792, he won the Browne Gold Medal for an ode that he wrote on the slave trade. In December 1793, he left the college and enlisted in the 15th (The King's) Light Dragoons using the false name "Silas Tomkyn Comberbache", perhaps because of debt or because the girl that he loved, Mary Evans, had rejected him. His brothers arranged for his discharge a few months later under the reason of "insanity" and he was readmitted to Jesus College, though he would never receive a degree from the university.
At Jesus College, Coleridge was introduced to political and theological ideas then considered radical, including those of the poet Robert Southey with whom he collaborated on the play "The Fall of Robespierre". Coleridge joined Southey in a plan, later abandoned, to found a utopian commune-like society, called Pantisocracy, in the wilderness of Pennsylvania. In 1795, the two friends married sisters Sara and Edith Fricker, in St Mary Redcliffe, Bristol, but Coleridge's marriage with Sara proved unhappy. He grew to detest his wife, whom he married mainly because of social constraints. Following the birth of their fourth child, he eventually separated from her.
A third sister, Mary, had already married a third poet, Robert Lovell, and both became partners in Pantisocracy. Lovell also introduced Coleridge and Southey to their future patron, Joseph Cottle, but died of a fever in April 1796. Coleridge was with him at his death.
In 1796 he released his first volume of poems entitled "Poems on various subjects", which also included four poems by Charles Lamb as well as a collaboration with Robert Southey and a work suggested by his and Lamb's schoolfriend Robert Favell. Among the poems were "Religious Musings", "Monody on the Death of Chatterton" and an early version of "The Eolian Harp" entitled "Effusion 35". A second edition was printed in 1797, this time including an appendix of works by Lamb and Charles Lloyd, a young poet to whom Coleridge had become a private tutor.
In 1796 he also privately printed "Sonnets from Various Authors", including sonnets by Lamb, Lloyd, Southey and himself as well as older poets such as William Lisle Bowles.
Coleridge made plans to establish a journal, "The Watchman", to be printed every eight days to avoid a weekly newspaper tax. The first issue of the short-lived journal was published in March 1796. It had ceased publication by May of that year.
The years 1797 and 1798, during which he lived in what is now known as Coleridge Cottage, in Nether Stowey, Somerset, were among the most fruitful of Coleridge's life. In 1795, Coleridge met poet William Wordsworth and his sister Dorothy. (Wordsworth, having visited him and being enchanted by the surroundings, rented Alfoxton Park, a little over three miles [5 km] away.) Besides "The Rime of the Ancient Mariner", Coleridge composed the symbolic poem "Kubla Khan", written—Coleridge himself claimed—as a result of an opium dream, in "a kind of a reverie"; and the first part of the narrative poem "Christabel". The writing of "Kubla Khan", written about the Mongol emperor Kublai Khan and his legendary palace at Xanadu, was said to have been interrupted by the arrival of a "Person from Porlock" – an event that has been embellished upon in such varied contexts as science fiction and Nabokov's "Lolita". During this period, he also produced his much-praised "conversation poems" "This Lime-Tree Bower My Prison", "Frost at Midnight", and "The Nightingale".
In 1798, Coleridge and Wordsworth published a joint volume of poetry, "Lyrical Ballads", which proved to be the starting point for the English romantic age. Wordsworth may have contributed more poems, but the real star of the collection was Coleridge's first version of "The Rime of the Ancient Mariner". It was the longest work and drew more praise and attention than anything else in the volume. In the spring Coleridge temporarily took over for Rev. Joshua Toulmin at Taunton's Mary Street Unitarian Chapel while Rev. Toulmin grieved over the drowning death of his daughter Jane. Poetically commenting on Toulmin's strength, Coleridge wrote in a 1798 letter to John Prior Estlin, "I walked into Taunton (eleven miles) and back again, and performed the divine services for Dr. Toulmin. I suppose you must have heard that his daughter, (Jane, on 15 April 1798) in a melancholy derangement, suffered herself to be swallowed up by the tide on the sea-coast between Sidmouth and Bere (Beer). These events cut cruelly into the hearts of old men: but the good Dr. Toulmin bears it like the true practical Christian, – there is indeed a tear in his eye, but that eye is lifted up to the Heavenly Father."
Coleridge also worked briefly in Shropshire, where he came in December 1797 as locum to its local Unitarian minister, Dr Rowe, in their church in the High Street at Shrewsbury. He is said to have read his "Rime of the Ancient Mariner" at a literary evening in Mardol. He was then contemplating a career in the ministry, and gave a probationary sermon in High Street church on Sunday, 14 January 1798. William Hazlitt, a Unitarian minister's son, was in the congregation, having walked from Wem to hear him. Coleridge later visited Hazlitt and his father at Wem but within a day or two of preaching he received a letter from Josiah Wedgwood II, who had offered to help him out of financial difficulties with an annuity of £150 (approximately £13,000 in today's money) per year on condition he give up his ministerial career. Coleridge accepted this, to the disappointment of Hazlitt who hoped to have him as a neighbour in Shropshire.
On 16 September 1798, Coleridge and the Wordsworths left for a stay in Germany; Coleridge soon went his own way and spent much of his time in university towns. In February 1799 he enrolled at the University of Göttingen, where he attended lectures by Johann Friedrich Blumenbach and Johann Gottfried Eichhorn. During this period, he became interested in German philosophy, especially the transcendental idealism and critical philosophy of Immanuel Kant, and in the literary criticism of the 18th-century dramatist Gotthold Lessing. Coleridge studied German and, after his return to England, translated the dramatic trilogy "Wallenstein" by the German Classical poet Friedrich Schiller into English. He continued to pioneer these ideas through his own critical writings for the rest of his life (sometimes without attribution), although they were unfamiliar and difficult for a culture dominated by empiricism.
In 1799, Coleridge and the Wordsworths stayed at Thomas Hutchinson's farm on the River Tees at Sockburn, near Darlington.
It was at Sockburn that Coleridge wrote his ballad-poem "Love", addressed to Sara Hutchinson. The knight mentioned is the mailed figure on the Conyers tomb in ruined Sockburn church. The figure has a wyvern at his feet, a reference to the Sockburn Worm slain by Sir John Conyers (and a possible source for Lewis Carroll's "Jabberwocky"). The worm was supposedly buried under the rock in the nearby pasture; this was the 'greystone' of Coleridge's first draft, later transformed into a 'mount'. The poem was a direct inspiration for John Keats' famous poem "La Belle Dame Sans Merci".
Coleridge's early intellectual debts, besides German idealists like Kant and critics like Lessing, were first to William Godwin's "Political Justice", especially during his Pantisocratic period, and to David Hartley's "Observations on Man", which is the source of the psychology which is found in "Frost at Midnight". Hartley argued that one becomes aware of sensory events as impressions, and that "ideas" are derived by noticing similarities and differences between impressions and then by naming them. Connections resulting from the coincidence of impressions create linkages, so that the occurrence of one impression triggers those links and calls up the memory of those ideas with which it is associated (See Dorothy Emmet, "Coleridge and Philosophy").
Coleridge was critical of the literary taste of his contemporaries, and a literary conservative insofar as he feared that the lack of taste in the ever-growing mass of literate people would mean a continued desecration of literature itself.
In 1800, he returned to England and shortly thereafter settled with his family and friends in Greta Hall at Keswick in the Lake District of Cumberland to be near Grasmere, where Wordsworth had moved. He was a houseguest of the Wordsworths' for eighteen months, but was a difficult houseguest, as his dependency on laudanum grew and his frequent nightmares would wake the children. He was also a fussy eater, to the frustration of Dorothy Wordsworth, who had to cook for him. For example, not content with salt, Coleridge sprinkled cayenne pepper on his eggs, which he ate from a teacup. His marital problems, nightmares, illnesses, increased opium dependency, tensions with Wordsworth, and a lack of confidence in his poetic powers fuelled the composition of "Dejection: An Ode" and an intensification of his philosophical studies.
In 1802, Coleridge took a nine-day walking holiday in the fells of the Lake District. Coleridge is credited with the first recorded descent of Scafell to Mickledore via Broad Stand, although this was more due to his getting lost than a keenness for mountaineering.
In 1804, he travelled to Sicily and Malta, working for a time as Acting Public Secretary of Malta under the Civil Commissioner, Alexander Ball, a task he performed quite successfully. He lived in San Anton Palace in the village of Attard. He gave this up and returned to England in 1806. Dorothy Wordsworth was shocked at his condition upon his return. From 1807 to 1808, Coleridge returned to Malta and then travelled in Sicily and Italy, in the hope that leaving Britain's damp climate would improve his health and thus enable him to reduce his consumption of opium. Thomas De Quincey alleges in his "Recollections of the Lakes and the Lake Poets" that it was during this period that Coleridge became a full-blown opium addict, using the drug as a substitute for the lost vigour and creativity of his youth. It has been suggested that this reflects De Quincey's own experiences more than Coleridge's.
His opium addiction (he was using as much as two quarts of laudanum a week) now began to take over his life: he separated from his wife Sara in 1808, quarrelled with Wordsworth in 1810, lost part of his annuity in 1811, and put himself under the care of Dr. Daniel in 1814. His addiction caused severe constipation, which required regular and humiliating enemas.
In 1809, Coleridge made his second attempt to become a newspaper publisher with the publication of the journal entitled "The Friend". It was a weekly publication that, in Coleridge's typically ambitious style, was written, edited, and published almost entirely single-handedly. Given that Coleridge tended to be highly disorganised and had no head for business, the publication was probably doomed from the start. Coleridge financed the journal by selling over five hundred subscriptions, over two dozen of which were sold to members of Parliament, but in late 1809, publication was crippled by a financial crisis and Coleridge was obliged to approach "Conversation Sharp", Tom Poole and one or two other wealthy friends for an emergency loan to continue. "The Friend" was an eclectic publication that drew upon every corner of Coleridge's remarkably diverse knowledge of law, philosophy, morals, politics, history, and literary criticism. Although it was often turgid, rambling, and inaccessible to most readers, it ran for 25 issues and was republished in book form a number of times. Years after its initial publication, a revised and expanded edition of "The Friend", with added philosophical content including his 'Essays on the Principles of Method', became a highly influential work and its effect was felt on writers and philosophers from John Stuart Mill to Ralph Waldo Emerson.
Between 1810 and 1820, Coleridge gave a series of lectures in London and Bristol – those on Shakespeare renewed interest in the playwright as a model for contemporary writers. Much of Coleridge's reputation as a literary critic is founded on the lectures that he undertook in the winter of 1810–11, which were sponsored by the Philosophical Institution and given at Scot's Corporation Hall off Fetter Lane, Fleet Street. These lectures were heralded in the prospectus as "A Course of Lectures on Shakespeare and Milton, in Illustration of the Principles of Poetry." Coleridge's ill-health, opium-addiction problems, and somewhat unstable personality meant that all his lectures were plagued with problems of delays and a general irregularity of quality from one lecture to the next. As a result of these factors, Coleridge often failed to prepare anything but the loosest set of notes for his lectures and regularly entered into extremely long digressions which his audiences found difficult to follow. However, it was the lecture on "Hamlet" given on 2 January 1812 that was considered the best and has influenced "Hamlet" studies ever since. Before Coleridge, "Hamlet" was often denigrated and belittled by critics from Voltaire to Dr. Johnson. Coleridge rescued the play's reputation, and his thoughts on it are often still published as supplements to the text.
In 1812 he allowed Robert Southey to make use of extracts from his vast number of private notebooks in their collaboration "Omniana; Or, Horae Otiosiores".
In August 1814, Coleridge was approached by Lord Byron's publisher, John Murray, about the possibility of translating Goethe's classic "Faust" (1808). Coleridge was regarded by many as the greatest living writer on the demonic and he accepted the commission, only to abandon work on it after six weeks. Until recently, scholars were in agreement that Coleridge never returned to the project, despite Goethe's own belief in the 1820s that he had in fact completed a long translation of the work. In September 2007, Oxford University Press sparked a heated scholarly controversy by publishing an English translation of Goethe's work that purported to be Coleridge's long-lost masterpiece (the text in question first appeared anonymously in 1821).
Between 1814 and 1816, Coleridge lived in Calne, Wiltshire and seemed able to focus on his work and manage his addiction, drafting "Biographia Literaria". He rented rooms from a local surgeon, Mr Page, on Church Street, just opposite the entrance to the churchyard. A blue plaque marks the property today.
In April 1816, Coleridge, with his addiction worsening, his spirits depressed, and his family alienated, took residence in the Highgate homes, then just north of London, of the physician James Gillman, first at South Grove and later at the nearby 3 The Grove. It is unclear whether his growing use of opium (and the brandy in which it was dissolved) was a symptom or a cause of his growing depression. Gillman was partially successful in controlling the poet's addiction. Coleridge remained in Highgate for the rest of his life, and the house became a place of literary pilgrimage for writers including Carlyle and Emerson.
In Gillman's home, Coleridge finished his major prose work, the "Biographia Literaria" (mostly drafted in 1815, and finished in 1817), a volume composed of 23 chapters of autobiographical notes and dissertations on various subjects, including some incisive literary theory and criticism. He composed a considerable amount of poetry, of variable quality. He published other writings while he was living at the Gillman homes, notably the "Lay Sermons" of 1816 and 1817, "Sibylline Leaves" (1817), "Hush" (1820), "Aids to Reflection" (1825), and "On the Constitution of the Church and State" (1830). He also produced essays published shortly after his death, such as "Essay on Faith" (1838) and "Confessions of an Inquiring Spirit" (1840). A number of his followers were central to the Oxford Movement, and his religious writings profoundly shaped Anglicanism in the mid-nineteenth century.
Coleridge also worked extensively on the various manuscripts which form his "Opus Maximum", a work which was in part intended as a post-Kantian work of philosophical synthesis. The work was never published in his lifetime, and has frequently been seen as evidence for his tendency to conceive grand projects which he then had difficulty in carrying through to completion. But while he frequently berated himself for his "indolence", the long list of his published works calls this myth into question. Critics are divided on whether the "Opus Maximum", first published in 2002, successfully resolved the philosophical issues he had been exploring for most of his adult life.
Coleridge died in Highgate, London on 25 July 1834 as a result of heart failure compounded by an unknown lung disorder, possibly linked to his use of opium. Coleridge had spent 18 years under the roof of the Gillman family, who built an addition onto their home to accommodate the poet.
"Faith may be defined as fidelity to our own being, so far as such being is not and cannot become an object of the senses; and hence, by clear inference or implication to being generally, as far as the same is not the object of the senses; and again to whatever is affirmed or understood as the condition, or concomitant, or consequence of the same. This will be best explained by an instance or example. That I am conscious of something within me peremptorily commanding me to do unto others as I would they should do unto me; in other words a categorical (that is, primary and unconditional) imperative; that the maxim ("regula maxima", or supreme rule) of my actions, both inward and outward, should be such as I could, without any contradiction arising therefrom, will to be the law of all moral and rational beings." – "Essay on Faith"
Carlyle described him at Highgate: "Coleridge sat on the brow of Highgate Hill, in those years, looking down on London and its smoke-tumult, like a sage escaped from the inanity of life's battle ... The practical intellects of the world did not much heed him, or carelessly reckoned him a metaphysical dreamer: but to the rising spirits of the young generation he had this dusky sublime character; and sat there as a kind of "Magus", girt in mystery and enigma; his Dodona oak-grove (Mr. Gilman's house at Highgate) whispering strange things, uncertain whether oracles or jargon."
Coleridge is buried in the aisle of St. Michael's Parish Church in Highgate, London. He was originally buried at Old Highgate Chapel but was re-interred in St. Michael's in 1961. Coleridge could see the red door of the then new church from his last residence across the green, where he lived with a doctor he had hoped might cure him (in a house owned today by Kate Moss). When it was discovered Coleridge's vault had become derelict, the coffins – Coleridge's and those of his wife, daughter, son-in-law, and grandson – were moved to St. Michael's after an international fundraising appeal.
Drew Clode, a member of St. Michael's stewardship committee, states, "they put the coffins in a convenient space which was dry and secure, and quite suitable, bricked them up and forgot about them". A recent excavation revealed the coffins were not in the location most believed, the far corner of the crypt, but actually below a memorial slab in the nave inscribed with: "Beneath this stone lies the body of Samuel Taylor Coleridge".
St. Michael's plans to restore the crypt and allow public access. Says vicar Kunle Ayodeji of the plans: "... we hope that the whole crypt can be cleared as a space for meetings and other uses, which would also allow access to Coleridge's cellar."
Coleridge is one of the most important figures in English poetry. His poems directly and deeply influenced all the major poets of the age. He was known by his contemporaries as a meticulous craftsman who was more rigorous in his careful reworking of his poems than any other poet, and Southey and Wordsworth were dependent on his professional advice. His influence on Wordsworth is particularly important because many critics have credited Coleridge with the very idea of "Conversational Poetry". The idea of utilising common, everyday language to express profound poetic images and ideas for which Wordsworth became so famous may have originated almost entirely in Coleridge’s mind. It is difficult to imagine Wordsworth’s great poems, "The Excursion" or "The Prelude", ever having been written without the direct influence of Coleridge’s originality.
As important as Coleridge was to poetry as a poet, he was equally important to poetry as a critic. His philosophy of poetry, which he developed over many years, has been deeply influential in the field of literary criticism. This influence can be seen in such critics as A. O. Lovejoy and I. A. Richards.
Coleridge is arguably best known for his longer poems, particularly "The Rime of the Ancient Mariner" and "Christabel". Even those who have never read the "Rime" have come under its influence: its words have given the English language the metaphor of an albatross around one's neck, the quotation of "water, water everywhere, nor any drop to drink" (almost always rendered as "but not a drop to drink"), and the phrase "a sadder and a wiser man" (usually rendered as "a sadder but wiser man"). The phrase "All creatures great and small" may have been inspired by "The Rime": "He prayeth best, who loveth best;/ All things both great and small;/ For the dear God who loveth us;/ He made and loveth all." "Christabel" is known for its musical rhythm, language, and its Gothic tale.
"Kubla Khan", or, "A Vision in a Dream, A Fragment", although shorter, is also widely known. Both "Kubla Khan" and "Christabel" have an additional "Romantic" aura because they were never finished. Stopford Brooke characterised both poems as having no rival due to their "exquisite metrical movement" and "imaginative phrasing."
Eight of Coleridge's poems are now often discussed as a group entitled "Conversation poems". The term itself was coined in 1928 by George McLean Harper, who borrowed the subtitle of "The Nightingale: A Conversation Poem" (1798) to describe the seven other poems as well. The poems are considered by many critics to be among Coleridge's finest verses; thus Harold Bloom has written, "With "Dejection", "The Ancient Mariner", and "Kubla Khan", "Frost at Midnight" shows Coleridge at his most impressive." They are also among his most influential poems, as discussed further below.
Harper himself considered that the eight poems represented a form of blank verse that is "...more fluent and easy than Milton's, or any that had been written since Milton". In 2006 Robert Koelzer wrote about another aspect of this apparent "easiness", noting that Conversation poems such as "... Coleridge's "The Eolian Harp" and "The Nightingale" maintain a middle register of speech, employing an idiomatic language that is capable of being construed as un-symbolic and un-musical: language that lets itself be taken as 'merely talk' rather than rapturous 'song'."
The last ten lines of "Frost at Midnight" were chosen by Harper as the "best example of the peculiar kind of blank verse Coleridge had evolved, as natural-seeming as prose, but as exquisitely artistic as the most complicated sonnet." The speaker of the poem is addressing his infant son, asleep by his side:
Therefore all seasons shall be sweet to thee,
Whether the summer clothe the general earth
With greenness, or the redbreast sit and sing
Betwixt the tufts of snow on the bare branch
Of mossy apple-tree, while the nigh thatch
Smokes in the sun-thaw; whether the eave-drops fall
Heard only in the trances of the blast,
Or if the secret ministry of frost
Shall hang them up in silent icicles,
Quietly shining to the quiet Moon.
In 1965, M. H. Abrams wrote a broad description that applies to the Conversation poems: "The speaker begins with a description of the landscape; an aspect or change of aspect in the landscape evokes a varied but integral process of memory, thought, anticipation, and feeling which remains closely intervolved with the outer scene. In the course of this meditation the lyric speaker achieves an insight, faces up to a tragic loss, comes to a moral decision, or resolves an emotional problem. Often the poem rounds itself to end where it began, at the outer scene, but with an altered mood and deepened understanding which is the result of the intervening meditation." In fact, Abrams was describing both the Conversation poems and later poems influenced by them. Abrams' essay has been called a "touchstone of literary criticism". As Paul Magnuson described it in 2002, "Abrams credited Coleridge with originating what Abrams called the 'greater Romantic lyric', a genre that began with Coleridge's 'Conversation' poems, and included Wordsworth's "Tintern Abbey", Shelley's "Stanzas Written in Dejection" and Keats's "Ode to a Nightingale", and was a major influence on more modern lyrics by Matthew Arnold, Walt Whitman, Wallace Stevens, and W. H. Auden."
In addition to his poetry, Coleridge also wrote influential pieces of literary criticism including "Biographia Literaria", a collection of his thoughts and opinions on literature which he published in 1817. The work delivered both biographical explanations of the author's life as well as his impressions on literature. The collection also contained an analysis of a broad range of philosophical principles of literature ranging from Aristotle to Immanuel Kant and Schelling and applied them to the poetry of peers such as William Wordsworth. Coleridge's explanation of metaphysical principles were popular topics of discourse in academic communities throughout the 19th and 20th centuries, and T.S. Eliot stated that he believed that Coleridge was "perhaps the greatest of English critics, and in a sense the last." Eliot suggests that Coleridge displayed "natural abilities" far greater than his contemporaries, dissecting literature and applying philosophical principles of metaphysics in a way that brought the subject of his criticisms away from the text and into a world of logical analysis mixed with emotion. However, Eliot also criticises Coleridge for allowing his emotion to play a role in the metaphysical process, believing that critics should not have emotions that are not provoked by the work being studied. Hugh Kenner in "Historical Fictions", discusses Norman Fruman's "Coleridge, the Damaged Archangel" and suggests that the term "criticism" is too often applied to "Biographia Literaria", which both he and Fruman describe as having failed to explain or help the reader understand works of art. To Kenner, Coleridge's attempt to discuss complex philosophical concepts without describing the rational process behind them displays a lack of critical thinking that makes the volume more of a biography than a work of criticism.
In "Biographia Literaria" and his poetry, symbols are not merely "objective correlatives" to Coleridge, but instruments for making the universe and personal experience intelligible and spiritually covalent. To Coleridge, the "cinque spotted spider," making its way upstream "by fits and starts," [Biographia Literaria] is not merely a comment on the intermittent nature of creativity, imagination, or spiritual progress, but the journey and destination of his life. The spider's five legs represent the central problem that Coleridge lived to resolve, the conflict between Aristotelian logic and Christian philosophy. Two legs of the spider represent the "me-not me" of thesis and antithesis, the idea that a thing cannot be itself and its opposite simultaneously, the basis of the clockwork Newtonian world view that Coleridge rejected. The remaining three legs—exothesis, mesothesis and synthesis or the Holy trinity—represent the idea that things can diverge without being contradictory. Taken together, the five legs—with synthesis in the center, form the Holy Cross of Ramist logic. The cinque-spotted spider is Coleridge's emblem of holism, the quest and substance of Coleridge's thought and spiritual life.
Coleridge wrote reviews of Ann Radcliffe's books and "The Mad Monk", among others. He comments in his reviews: "Situations of torment, and images of naked horror, are easily conceived; and a writer in whose works they abound, deserves our gratitude almost equally with him who should drag us by way of sport through a military hospital, or force us to sit at the dissecting-table of a natural philosopher. To trace the nice boundaries, beyond which terror and sympathy are deserted by the pleasurable emotions, – to reach those limits, yet never to pass them, hic labor, hic opus est." and "The horrible and the preternatural have usually seized on the popular taste, at the rise and decline of literature. Most powerful stimulants, they can never be required except by the torpor of an unawakened, or the languor of an exhausted, appetite... We trust, however, that satiety will banish what good sense should have prevented; and that, wearied with fiends, incomprehensible characters, with shrieks, murders, and subterraneous dungeons, the public will learn, by the multitude of the manufacturers, with how little expense of thought or imagination this species of composition is manufactured."
However, Coleridge used these elements in poems such as "The Rime of the Ancient Mariner" (1798), "Christabel" and "Kubla Khan" (published in 1816, but known in manuscript form before then) and certainly influenced other poets and writers of the time. Poems like these both drew inspiration from and helped to inflame the craze for Gothic romance. Coleridge also made considerable use of Gothic elements in his commercially successful play "Remorse".
Mary Shelley, who knew Coleridge well, mentions "The Rime of the Ancient Mariner" twice directly in "Frankenstein", and some of the descriptions in the novel echo it indirectly. Although William Godwin, her father, disagreed with Coleridge on some important issues, he respected his opinions and Coleridge often visited the Godwins. Mary Shelley later recalled hiding behind the sofa and hearing his voice chanting "The Rime of the Ancient Mariner".
C. S. Lewis also mentions him in "The Screwtape Letters" (as a poor example of prayer, one which the devils should encourage).
Although his father was an Anglican vicar, Coleridge worked as a Unitarian preacher between 1796 and 1797. He eventually returned to the Church of England in 1814. His most noteworthy writings on religion are "Lay Sermons" (1817), "Aids to Reflection" (1825) and "The Constitution of Church and State" (1830).
Despite being mostly remembered today for his poetry and literary criticism, Coleridge was also (perhaps in his own eyes primarily) a theologian. His writings include discussions of the status of scripture, the doctrines of the Fall, justification and sanctification, and the personality and infinity of God. A key figure in the Anglican theology of his day, his writings are still regularly referred to by contemporary Anglican theologians. F. D. Maurice, F. J. A. Hort, F. W. Robertson, B. F. Westcott, John Oman and Thomas Erskine (once called the "Scottish Coleridge") were all influenced by him.
Coleridge was also a profound political thinker. While he began his life as a political radical and an enthusiast for the French Revolution, over the years Coleridge developed a more conservative view of society, somewhat in the manner of Burke. Although seen as cowardly treachery by the next generation of Romantic poets, Coleridge's later thought became a fruitful source for the evolving radicalism of J. S. Mill. Mill found three aspects of Coleridge's thought especially illuminating:
The current standard edition is "The Collected Works of Samuel Taylor Coleridge," edited by Kathleen Coburn and many others from 1969 to 2002. This collection appeared across 16 volumes as Bollingen Series 75, published variously by Princeton University Press and Routledge & Kegan Paul. The set is broken down as follows into further parts, resulting in a total of 34 separate printed volumes:
In addition, Coleridge's letters are available in: "The Collected Letters of Samuel Taylor Coleridge" (1956–71), ed. Earl Leslie Griggs, 6 vols. (Oxford: Clarendon Press).
Spica
Spica, designated α Virginis (Latinised to Alpha Virginis, abbreviated Alpha Vir, α Vir), is the brightest object in the constellation Virgo and one of the 20 brightest stars in the night sky. Analysis of its parallax shows that it is located 250 ± 10 light years from the Sun. It is a spectroscopic binary star and rotating ellipsoidal variable; a system whose two stars are so close together they are egg-shaped rather than spherical, and can only be separated by their spectra. The primary is a blue giant and a variable star of the Beta Cephei type.
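The quoted distance follows from the standard parallax relation d[pc] = 1/p[arcsec]. A minimal Python sketch (the ~13 mas parallax here is back-solved from the stated 250 ly; it is an illustrative value, not a figure taken from this article):

LY_PER_PARSEC = 3.2616

def parallax_to_light_years(parallax_mas):
    """Distance in light years from an annual parallax in milliarcseconds."""
    parsecs = 1000.0 / parallax_mas  # d[pc] = 1 / p[arcsec]
    return parsecs * LY_PER_PARSEC

print(round(parallax_to_light_years(13.06)))  # ~250 ly, matching the quoted value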
Spica, along with Arcturus and Denebola or Regulus depending on the source, is part of the Spring Triangle asterism, and by extension, also of the Great Diamond together with the star Cor Caroli.
As one of the nearest massive binary star systems to the Sun, Spica has been the subject of many observational studies.
Spica is believed to be the star that gave Hipparchus the data that led him to discover the precession of the equinoxes. A temple to Menat (an early Hathor) at Thebes was oriented with reference to Spica when it was built in 3200 BC, and, over time, precession slowly but noticeably changed Spica's location relative to the temple. Nicolaus Copernicus made many observations of Spica with his home-made triquetrum for his researches on precession.
Spica is 2.06 degrees from the ecliptic and can be occulted by the Moon and sometimes by planets. The last planetary occultation of Spica occurred when Venus passed in front of the star (as seen from Earth) on November 10, 1783. The next occultation will occur on September 2, 2197, when Venus again passes in front of Spica. The Sun passes a little more than 2° north of Spica around October 16 every year, and the star's heliacal rising occurs about two weeks later. Every 8 years, Venus passes Spica around the time of the star's heliacal rising, as in 2009 when it passed 3.5° north of the star on November 3.
A method of finding Spica is to follow the arc of the handle of the Big Dipper (or Plough) to Arcturus, and then continue on the same angular distance to Spica. This can be recalled by the mnemonic phrase, "arc to Arcturus and spike to Spica."
Spica is a close binary star whose components orbit each other every four days. They are close enough together that they cannot be resolved as two stars through a telescope. The changes in the orbital motion of this pair result in a Doppler shift in the absorption lines of their respective spectra, making them a double-lined spectroscopic binary. Initially, the orbital parameters for this system were inferred using spectroscopic measurements. Between 1966 and 1970, the Narrabri Stellar Intensity Interferometer was used to observe the pair and to directly measure the orbital characteristics and the angular diameter of the primary, which was found to be , and the angular size of the semi-major axis of the orbit was found to be only slightly larger at .
Spica is a rotating ellipsoidal variable, which is a non-eclipsing close binary star system where the stars are mutually distorted through their gravitational interaction. This effect causes the apparent magnitude of the star system to vary by 0.03 over an interval that matches the orbital period. This slight dip in magnitude is barely noticeable visually. Both stars rotate faster than their mutual orbital period. This lack of synchronization and the high ellipticity of their orbit may indicate that this is a young star system. Over time, the mutual tidal interaction of the pair may lead to rotational synchronization and orbit circularization.
Spica is a polarimetric variable, first discovered to be such in 2016. The majority of the polarimetric signal is the result of the reflection of the light from one star off the other (and vice versa). The two stars in Spica were the first ever to have their reflectivity (or geometric albedo) measured. The geometric albedos of Spica A and B are, respectively, 3.61 percent and 1.36 percent, values that are low compared to planets.
The MK spectral classification of Spica is typically considered to be an early B-type main sequence star. Individual spectral types for the two components are difficult to assign accurately, especially for the secondary due to the Struve–Sahade effect. The Bright Star Catalogue derived a spectral class of B1 III-IV for the primary and B2V for the secondary, but later studies have given various different values.
The primary star has a stellar classification of B1 III–IV. The luminosity class matches the spectrum of a star that is midway between a subgiant and a giant star, and it is no longer a main-sequence star. The evolutionary stage has been calculated to be near or slightly past the end of the main sequence phase. This is a massive star with more than 10 times the mass of the Sun and seven times the Sun's radius. The bolometric luminosity of the primary is about 20,500 times that of the Sun, and nine times the luminosity of its companion. The primary is one of the nearest stars to the Sun that has enough mass to end its life in a Type II supernova explosion.
The primary is classified as a Beta Cephei variable star that varies in brightness over a 0.1738-day period. The spectrum shows a radial velocity variation with the same period, indicating that the surface of the star is regularly pulsating outward and then contracting. This star is rotating rapidly, with a rotational velocity of 199 km/s along the equator.
The secondary member of this system is one of the few stars whose spectrum is affected by the Struve–Sahade effect. This is an anomalous change in the strength of the spectral lines over the course of an orbit, where the lines become weaker as the star is moving away from the observer. It may be caused by a strong stellar wind from the primary scattering the light from secondary when it is receding. This star is smaller than the primary, with about 7 times the mass of the Sun and 3.6 times the Sun's radius. Its stellar classification is B2 V, making this a main-sequence star.
"α Virginis" (Latinised to "Alpha Virginis") is the system's Bayer designation.
The traditional name "Spica" derives from Latin "spīca virginis" "the virgin's ear of [wheat] grain". It was also anglicized as "Virgin's Spike".
Johann Bayer cited the name "Arista".
Other traditional names are "Azimech" , from Arabic السماك الأعزل "al-simāk al-ʼaʽzal" 'the unarmed "simāk"' (of unknown meaning, cf. Eta Boötis); "Alarph", Arabic for 'the grape-gatherer' or 'gleaner', and "Sumbalet" ("Sombalet", "Sembalet" and variants), from Arabic سنبلة "sunbulah" "ear of grain".
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included "Spica" for this star. It is now so entered in the IAU Catalog of Star Names.
In Chinese, 角宿 ("Jiǎo Xiù"), meaning "Horn (asterism)", refers to an asterism consisting of Spica and ζ Virginis. Consequently, the Chinese name for Spica is 角宿一 ("Jiǎo Xiù yī", "the First Star of Horn").
In Hindu astronomy, Spica corresponds to the Nakshatra "Chitrā".
Both American ships USS "Spica" (AK-16) and USNS "Spica" (T-AFS-9) were named after this star while USS "Azimech" (AK-124), a "Crater"-class cargo ship, was given one of the star's medieval names.
A blue star represents Spica on the flag of the Brazilian state of Pará. Spica is also the star representing Pará on the Brazilian flag.
A South Korean girl group was named after the star.
"Spica" is also the title of a Vocaloid song sung by Hatsune Miku.
In a non-canonical chapter of "Re:Zero − Starting Life in Another World", Subaru had a daughter with Rem named Spica.
Spica is the pseudonym of Lili in the children's manga series, "Zodiac P.I."
In his "Three Books of Occult Philosophy", Cornelius Agrippa attributes Spica's kabbalistic symbol to Hermes Trismegistus. | https://en.wikipedia.org/wiki?curid=29409 |
Stuart Little
Stuart Little is a 1945 American children's novel by E. B. White, his first book for children, and is widely recognized as a classic in children's literature. "Stuart Little" was illustrated by the subsequently award-winning artist Garth Williams, also his first work for children. It is a realistic fantasy about a mouse-like human boy named Stuart Little. According to the first chapter, he "looked very much like a mouse in every way".
In a letter White wrote in response to inquiries from readers, he described how he came to conceive of Stuart Little: "many years ago I went to bed one night in a railway sleeping car, and during the night I dreamed about a tiny boy who acted rather like a rat. That's how the story of Stuart Little got started". He had the dream in the spring of 1926, while sleeping on a train on his way back to New York from a visit to the Shenandoah Valley. Biographer Michael Sims wrote that Stuart "arrived in [White's] mind in a direct shipment from the subconscious." White typed up a few stories about Stuart, which he told to his 18 nieces and nephews when they asked him to tell them a story. In 1935, White's wife Katharine showed these stories to Clarence Day, then a regular contributor to "The New Yorker". Day liked the stories and encouraged White not to neglect them, but neither Oxford University Press nor Viking Press was interested in the stories, and White did not immediately develop them further.
In the fall of 1938, as his wife wrote her annual collection of children's book reviews for "The New Yorker", White wrote a few paragraphs in his "One Man's Meat" column in "Harper's Magazine" about writing children's books. Anne Carroll Moore, the head children's librarian at the New York Public Library, read this column and responded by encouraging him to write a children's book that would "make the library lions roar". White's editor at Harper, who had heard about the Stuart stories from Katharine, asked to see them, and by March 1939 was intent on publishing them. Around that time, White wrote to James Thurber that he was "about half done" with the book; however, he made little progress with it until the winter of 1944–1945, when he completed it. The book ends with Stuart setting off once more in his car to continue his search for Margalo.
This story follows the life of Stuart Little, a mouse born into the Little family.
Lucien Agosta, in his overview of the critical reception of the book, notes that "Critical reactions to "Stuart Little" have varied from disapprobation to unqualified admiration since the book was published in 1945, though generally it has been well received." Anne Carroll Moore, who had initially encouraged White to write the book, was critical of it when she read a proof of it. She wrote letters to White; his wife, Katharine; and Ursula Nordstrom, the children's editor at Harper's, advising that the book not be published.
Malcolm Cowley, who reviewed the book for "The New York Times", wrote, "Mr. White has a tendency to write amusing scenes instead of telling a story. To say that "Stuart Little" is one of the best children's books published this year is very modest praise for a writer of his talent." The book has become a children's classic, and is widely read by children and used by teachers. White received the Laura Ingalls Wilder Medal in 1970 for "Stuart Little" and "Charlotte's Web".
Actress Julie Harris narrated an unabridged adaptation on LP in two volumes for Pathways Of Sound (POS 1036 and 1037). The complete recording was later released on audio cassette by Bantam Audio and on CD by Listening Library, and is now available from Audible.com.
The book was very loosely adapted into a 1999 film of the same name, which combined live action with computer animation. One difference between the film and the book is that Stuart is adopted rather than born into the family. There is also a plot line about him finding his real parents (who were killed in an accident when Stuart was a baby). A 2002 sequel to the first film, "Stuart Little 2", more closely follows the plot of the book. A third film, "", was released direct-to-video in 2006. This film was entirely computer-animated, and its plot was not derived from the book. All three films also leave out the book's episode in which Stuart serves as a one-day substitute teacher in a schoolhouse.
All three films feature Hugh Laurie as Mr. Little, Geena Davis as Mrs. Little, and Michael J. Fox as the voice of Stuart Little.
In 2015, it was announced that a remake of the "Stuart Little" film is in the works at Sony Pictures Entertainment and Red Wagon Entertainment. The movie will remain a hybrid of live action and computer animation. Douglas Wick, the producer of the original films, will produce the remake.
"The World of Stuart Little," a 1966 episode of NBC's "Children's Theater", narrated by Johnny Carson, won a Peabody Award and was nominated for an Emmy. An animated television series, "", (based on the film adaptations) was produced for HBO Family and aired for 13 episodes in 2003.
Three video games based on the film adaptations of the same name have been produced. "Stuart Little: The Journey Home", which was released only for the Game Boy Color in 2001, is based on the 1999 film. A game based on "Stuart Little 2" was released for the PlayStation, Game Boy Advance and Microsoft Windows in 2002. A third game, "Stuart Little 3: Big Photo Adventure", was released exclusively for the PlayStation 2 in 2005.
Statite
A statite (a portmanteau of "static" and "satellite") is a hypothetical type of artificial satellite that employs a solar sail to continuously modify its orbit in ways that gravity alone would not allow. Typically, a statite would use the solar sail to "hover" in a location that would not otherwise be available as a stable geosynchronous orbit. Statites have been proposed that would remain in fixed locations high over Earth's poles, using reflected sunlight to counteract the gravity pulling them down. Statites might also employ their sails to change the shape or velocity of more conventional orbits, depending upon the purpose of the particular statite.
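Because both radiation pressure and solar gravity fall off as the inverse square of distance from the Sun, the hover condition reduces to a critical areal density that is independent of where the statite sits. A rough Python sketch of that balance, using standard solar constants (an illustration, not a published design figure):

from math import pi

L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg
G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8    # speed of light, m/s

def critical_loading(reflectivity=1.0):
    """Areal density (kg/m^2) at which sail thrust exactly balances solar gravity.

    Thrust per area on a Sun-facing sail: (1 + reflectivity) * I / c with
    irradiance I = L / (4 * pi * r^2); weight per area: sigma * G * M / r^2.
    The r^2 factors cancel, leaving a distance-independent loading.
    """
    return (1.0 + reflectivity) * L_SUN / (4.0 * pi * G * M_SUN * C)

print(critical_loading())  # ~0.0015 kg/m^2, i.e. about 1.5 g/m^2

Under these assumptions the entire craft, sail plus payload, must come in under roughly 1.5 g/m², far below what current sail films and hardware achieve, which suggests one reason no statite has flown.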
The concept of the statite was invented independently (and approximately simultaneously) by Robert L. Forward (who coined the term "statite") and Colin McInnes, who used the term "halo orbit" (not to be confused with the type of halo orbit invented by Robert Farquhar). Subsequently, the terms "non-Keplerian orbit" and "artificial Lagrange point" have been used as a generalization of the above terms.
No statites have been deployed to date, as solar sail technology remains in its infancy. NASA's cancelled "Sunjammer" solar sail mission had the stated objective of flying to an artificial Lagrange point near the Earth/Sun L1 point, to demonstrate the feasibility of the Geostorm geomagnetic storm warning mission concept proposed by NOAA's Patricia Mulligan.
Solar sail
Solar sails (also called light sails or photon sails) are a method of spacecraft propulsion using radiation pressure exerted by sunlight on large mirrors. Based on the physics, a number of spaceflight missions to test solar propulsion and navigation have been proposed since the 1980s.
A useful analogy to solar sailing may be a sailing boat; the light exerting a force on the mirrors is akin to a sail being blown by the wind. High-energy laser beams could be used as an alternative light source to exert much greater force than would be possible using sunlight, a concept known as beam sailing. Solar sail craft offer the possibility of low-cost operations combined with long operating lifetimes. Since they have few moving parts and use no propellant, they can potentially be used numerous times for delivery of payloads.
Solar sails use a phenomenon that has a proven, measured effect on astrodynamics. Solar pressure affects all spacecraft, whether in interplanetary space or in orbit around a planet or small body. A typical spacecraft going to Mars, for example, will be displaced thousands of kilometers by solar pressure, so the effects must be accounted for in trajectory planning, which has been done since the time of the earliest interplanetary spacecraft of the 1960s. Solar pressure also affects the orientation of a spacecraft, a factor that must be included in spacecraft design.
The total force exerted on an 800 by 800 meter solar sail, for example, is about 5 newtons at Earth's distance from the Sun, making it a low-thrust propulsion system, similar to spacecraft propelled by electric engines, but as it uses no propellant, that force is exerted almost constantly and the collective effect over time is great enough to be considered a potential manner of propelling spacecraft.
Johannes Kepler observed that comet tails point away from the Sun and suggested that the Sun caused the effect. In a letter to Galileo in 1610, he wrote, "Provide ships or sails adapted to the heavenly breezes, and there will be some who will brave even that void." He might have had the comet tail phenomenon in mind when he wrote those words, although his publications on comet tails came several years later.
James Clerk Maxwell, in 1861–1864, published his theory of electromagnetic fields and radiation, which shows that light has momentum and thus can exert pressure on objects. Maxwell's equations provide the theoretical foundation for sailing with light pressure. So by 1864, the physics community and beyond knew sunlight carried momentum that would exert a pressure on objects.
Jules Verne, in "From the Earth to the Moon", published in 1865, wrote "there will some day appear velocities far greater than these [of the planets and the projectile], of which light or electricity will probably be the mechanical agent ... we shall one day travel to the moon, the planets, and the stars." This is possibly the first published recognition that light could move ships through space.
Pyotr Lebedev was first to successfully demonstrate light pressure, which he did in 1899 with a torsional balance; Ernest Nichols and Gordon Hull conducted a similar independent experiment in 1901 using a Nichols radiometer.
Svante Arrhenius predicted in 1908 the possibility of solar radiation pressure distributing life spores across interstellar distances, providing one means to explain the concept of panspermia. He apparently was the first scientist to state that light could move objects between stars.
Konstantin Tsiolkovsky first proposed using the pressure of sunlight to propel spacecraft through space and suggested, "using tremendous mirrors of very thin sheets to utilize the pressure of sunlight to attain cosmic velocities".
Friedrich Zander (Tsander) published a technical paper in 1925 that included technical analysis of solar sailing. Zander wrote of "applying small forces" using "light pressure or transmission of light energy to distances by means of very thin mirrors".
JBS Haldane speculated in 1927 about the invention of tubular spaceships that would take humanity to space and how "wings of metallic foil of a square kilometre or more in area are spread out to catch the Sun's radiation pressure".
J. D. Bernal wrote in 1929, "A form of space sailing might be developed which used the repulsive effect of the Sun's rays instead of wind. A space vessel spreading its large, metallic wings, acres in extent, to the full, might be blown to the limit of Neptune's orbit. Then, to increase its speed, it would tack, close-hauled, down the gravitational field, spreading full sail again as it rushed past the Sun."
The first formal technology and design effort for a solar sail began in 1976 at Jet Propulsion Laboratory for a proposed mission to rendezvous with Halley's Comet.
Many people believe that spacecraft using solar sails are pushed by the solar wind just as sailboats and sailing ships are pushed by the wind across the waters on Earth. In fact, solar radiation exerts a pressure on the sail due to reflection and to the small fraction of light that is absorbed.
The momentum of a photon or an entire flux is given by Einstein's relation:

p = E/c

where p is the momentum, E is the energy (of the photon or flux), and c is the speed of light. Specifically, the momentum of a photon depends on its wavelength: p = h/λ, where h is the Planck constant.
Solar radiation pressure can be related to the irradiance (solar constant) value of 1361 W/m2 at 1 AU (Earth–Sun distance), as revised in 2011: perfect absorbance gives 4.54 μN per square metre (4.54 μPa) in the direction of the incident beam, and perfect reflectance gives 9.08 μN per square metre (9.08 μPa) in the direction normal to the sail surface.
An ideal sail is flat and has 100% specular reflection. An actual sail will have an overall efficiency of about 90%, about 8.17 μN/m2, due to curvature (billow), wrinkles, absorbance, re-radiation from front and back, non-specular effects, and other factors.
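As an arithmetic check (not part of the original text), these pressure figures follow directly from dividing the solar constant by the speed of light:

SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU, 2011 revision
C = 2.998e8              # speed of light, m/s

p_absorb  = SOLAR_CONSTANT / C  # perfectly absorbing sail
p_reflect = 2.0 * p_absorb      # ideal 100% specular reflector (momentum reversed)
p_actual  = 0.9 * p_reflect     # ~90% overall efficiency of a real sail

print(round(p_absorb * 1e6, 2))   # 4.54 uN/m^2
print(round(p_reflect * 1e6, 2))  # 9.08 uN/m^2
print(round(p_actual * 1e6, 2))   # 8.17 uN/m^2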
The force on a sail and the actual acceleration of the craft vary by the inverse square of distance from the Sun (unless extremely close to the Sun), and by the square of the cosine of the angle between the sail force vector and the radial from the Sun, so for an ideal sail

F = F0 cos² θ / R²

where R is distance from the Sun in AU. An actual square sail can be modeled as:

F = F0 (0.349 + 0.662 cos 2θ − 0.011 cos 4θ) / R²

Note that the force and acceleration approach zero generally around θ = 60° rather than 90° as one might expect with an ideal sail.
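Evaluating the two models side by side shows this behavior. A short Python sketch (the ideal law is standard physics; the polynomial is the square-sail fit quoted above):

from math import cos, radians

def ideal_force(theta_deg, r_au=1.0):
    """Ideal flat sail, relative to F0: cos^2(theta) / R^2."""
    return cos(radians(theta_deg)) ** 2 / r_au ** 2

def square_sail_force(theta_deg, r_au=1.0):
    """Empirical square-sail model, relative to F0, including billow and
    wrinkle losses. Clamped at zero: the fit goes negative past ~60 degrees,
    where the thrust is effectively nil."""
    t = radians(theta_deg)
    f = 0.349 + 0.662 * cos(2 * t) - 0.011 * cos(4 * t)
    return max(0.0, f) / r_au ** 2

for angle in (0, 30, 60, 90):
    print(angle, round(ideal_force(angle), 3), round(square_sail_force(angle), 3))
# At 60 degrees the ideal model still gives 0.25 while the
# square-sail model gives only ~0.02, i.e. essentially zero thrust.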
If some of the energy is absorbed, the absorbed energy will heat the sail, which re-radiates that energy from the front and rear surfaces, depending on the emissivity of those two surfaces.
Solar wind, the flux of charged particles blown out from the Sun, exerts a nominal dynamic pressure of about 3 to 4 nPa, three orders of magnitude less than solar radiation pressure on a reflective sail.
Sail loading (areal density) is an important parameter, which is the total mass divided by the sail area, expressed in g/m2. It is represented by the Greek letter σ.
A sail craft has a characteristic acceleration, ac, which it would experience at 1 AU when facing the Sun. Note this value accounts for both the incident and reflected momentums. Using the value from above of 9.08 μN per square metre of radiation pressure at 1 AU, ac is related to areal density by: ac = 9.08 / σ mm/s2, with σ in g/m2.
Assuming 90% efficiency, ac = 8.17 / σ mm/s2
The lightness number, λ, is the dimensionless ratio of maximum vehicle acceleration divided by the Sun's local gravity. Using the values at 1 AU: λ = ac / 5.93, with ac in mm/s2, since the Sun's gravitational acceleration at 1 AU is about 5.93 mm/s2.
The lightness number is also independent of distance from the Sun because both gravity and light pressure fall off as the inverse square of the distance from the Sun. Therefore, this number defines the types of orbit maneuvers that are possible for a given vessel.
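The two relations combine directly. A small Python sketch (the 0.02 g/m2 loading used as a check is the lithium-sail figure discussed later in the materials section):

G_SUN_1AU = 5.93  # Sun's gravitational acceleration at 1 AU, mm/s^2

def char_accel(sigma_g_m2, efficiency=0.9):
    """Characteristic acceleration ac in mm/s^2 for sail loading sigma in g/m^2."""
    return efficiency * 9.08 / sigma_g_m2  # i.e. 8.17 / sigma at 90% efficiency

def lightness_number(sigma_g_m2, efficiency=0.9):
    """Ratio of ac to local solar gravity; independent of solar distance."""
    return char_accel(sigma_g_m2, efficiency) / G_SUN_1AU

print(round(char_accel(0.02)))        # ~409 mm/s^2, close to the ~400 quoted below
print(round(lightness_number(0.02)))  # ~69, close to the lightness number of 67 quoted below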
The table presents some example values. Payloads are not included. The first two are from the detailed design effort at JPL in the 1970s. The third, the lattice sailer, might represent about the best possible performance level. The dimensions for square and lattice sails are edges. The dimension for heliogyro is blade tip to blade tip.
An active attitude control system (ACS) is essential for a sail craft to achieve and maintain a desired orientation. The required sail orientation changes slowly (often less than 1 degree per day) in interplanetary space, but much more rapidly in a planetary orbit. The ACS must be capable of meeting these orientation requirements. Attitude control is achieved by a relative shift between the craft's center of pressure and its center of mass. This can be achieved with control vanes, movement of individual sails, movement of a control mass, or altering reflectivity.
Holding a constant attitude requires that the ACS maintain a net torque of zero on the craft. The total force and torque on a sail, or set of sails, is not constant along a trajectory. The force changes with solar distance and sail angle, which changes the billow in the sail and deflects some elements of the supporting structure, resulting in changes in the sail force and torque.
Sail temperature also changes with solar distance and sail angle, which changes sail dimensions. The radiant heat from the sail changes the temperature of the supporting structure. Both factors affect total force and torque.
To hold the desired attitude the ACS must compensate for all of these changes.
In Earth orbit, solar pressure and drag pressure are typically equal at an altitude of about 800 km, which means that a sail craft would have to operate above that altitude. Sail craft must operate in orbits where their turn rates are compatible with the orbits, which is generally a concern only for spinning disk configurations.
Sail operating temperatures are a function of solar distance, sail angle, reflectivity, and front and back emissivities. A sail can be used only where its temperature is kept within its material limits. Generally, a sail can be used rather close to the Sun, around 0.25 AU, or even closer if carefully designed for those conditions.
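A minimal radiative-balance sketch illustrates this dependence. The balance itself is the standard Stefan–Boltzmann one, not a formula from this article; the reflectivity and back-face emissivity are the aluminium/chromium values given later in the materials section, and the aluminium front-face emissivity of 0.05 is an assumed value:

SIGMA_SB = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
I_1AU    = 1361.0   # solar constant, W/m^2

def sail_temperature(r_au, reflectivity=0.89, eps_front=0.05, eps_back=0.65):
    """Equilibrium temperature (K) of a Sun-facing sail: absorbed flux is
    re-radiated from both faces. Ignores sail angle and thin-film effects."""
    absorbed = (1.0 - reflectivity) * I_1AU / r_au ** 2
    return (absorbed / ((eps_front + eps_back) * SIGMA_SB)) ** 0.25

print(round(sail_temperature(1.0)))   # ~248 K at 1 AU
print(round(sail_temperature(0.25)))  # ~496 K at 0.25 AU; tolerability depends on the film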
Potential applications for sail craft range throughout the Solar System, from near the Sun to the comet clouds beyond Neptune. The craft can make outbound voyages to deliver loads or to take up station keeping at the destination. They can be used to haul cargo and possibly also used for human travel.
For trips within the inner Solar System, they can deliver loads and then return to Earth for subsequent voyages, operating as an interplanetary shuttle. For Mars in particular, the craft could provide economical means of routinely supplying operations on the planet according to Jerome Wright, "The cost of launching the necessary conventional propellants from Earth are enormous for manned missions. Use of sailing ships could potentially save more than $10 billion in mission costs."
Solar sail craft can approach the Sun to deliver observation payloads or to take up station keeping orbits. They can operate at 0.25 AU or closer. They can reach high orbital inclinations, including polar.
Solar sails can travel to and from all of the inner planets. Trips to Mercury and Venus are for rendezvous and orbit entry for the payload. Trips to Mars could be either for rendezvous or swing-by with release of the payload for aerodynamic braking.
Minimum transfer times to the outer planets benefit from using an indirect transfer (solar swing-by). However, this method results in high arrival speeds. Slower transfers have lower arrival speeds.
The minimum transfer time to Jupiter for "ac" of 1 mm/s2 with no departure velocity relative to Earth is 2 years when using an indirect transfer (solar swing-by). The arrival speed ("V"∞) is close to 17 km/s. For Saturn, the minimum trip time is 3.3 years, with an arrival speed of nearly 19 km/s.
The Sun's inner gravitational focus point lies at a minimum distance of 550 AU from the Sun. It is the nearest point at which light from distant objects is focused by gravity as a result of passing by the Sun, so the region of deep space on the far side of the Sun is brought to a focus there, with the Sun effectively serving as a very large telescope objective lens.
It has been proposed that an inflated sail, made of beryllium, that starts at 0.05 AU from the Sun would gain an initial acceleration of 36.4 m/s2, and reach a speed of 0.00264c (about 950 km/s) in less than a day. Such proximity to the Sun could prove to be impractical in the near term due to the structural degradation of beryllium at high temperatures, diffusion of hydrogen at high temperatures as well as an electrostatic gradient, generated by the ionization of beryllium from the solar wind, posing a burst risk. A revised perihelion of 0.1 AU would reduce the aforementioned temperature and solar flux exposure.
Such a sail would take "Two and a half years to reach the heliopause, six and a half years to reach the Sun’s inner gravitational focus, with arrival at the inner Oort Cloud in no more than thirty years." "Such a mission could perform useful astrophysical observations en route, explore gravitational focusing techniques, and image Oort Cloud objects while exploring particles and fields in that region that are of galactic rather than solar origin."
Robert L. Forward has commented that a solar sail could be used to modify the orbit of a satellite about the Earth. In the limit, a sail could be used to "hover" a satellite above one pole of the Earth. Spacecraft fitted with solar sails could also be placed in close orbits such that they are stationary with respect to either the Sun or the Earth, a type of satellite named by Forward a "statite". This is possible because the propulsion provided by the sail offsets the gravitational attraction of the Sun. Such an orbit could be useful for studying the properties of the Sun for long durations. Likewise a solar sail-equipped spacecraft could also remain on station nearly above the polar solar terminator of a planet such as the Earth by tilting the sail at the appropriate angle needed to counteract the planet's gravity.
In his book "The Case for Mars", Robert Zubrin points out that the reflected sunlight from a large statite, placed near the polar terminator of the planet Mars, could be focused on one of the Martian polar ice caps to significantly warm the planet's atmosphere. Such a statite could be made from asteroid material.
The MESSENGER probe orbiting Mercury used light pressure on its solar panels to perform fine trajectory corrections on the way to Mercury. By changing the angle of the solar panels relative to the Sun, the amount of solar radiation pressure was varied to adjust the spacecraft trajectory more delicately than possible with thrusters. Minor errors are greatly amplified by gravity assist maneuvers, so using radiation pressure to make very small corrections saved large amounts of propellant.
In the 1970s, Robert Forward proposed two beam-powered propulsion schemes using either lasers or masers to push giant sails to a significant fraction of the speed of light.
In the science fiction novel "Rocheworld", Forward described a light sail propelled by super lasers. As the starship neared its destination, the outer portion of the sail would detach. The outer sail would then refocus and reflect the lasers back onto a smaller, inner sail. This would provide braking thrust to stop the ship in the destination star system.
Both methods pose monumental engineering challenges. The lasers would have to operate for years continuously at gigawatt strength. Forward's solution to this requires enormous solar panel arrays to be built at or near the planet Mercury. A planet-sized mirror or Fresnel lens would need to be located at several dozen astronomical units from the Sun to keep the lasers focused on the sail. The giant braking sail would have to act as a precision mirror to focus the braking beam onto the inner "deceleration" sail.
A potentially easier approach would be to use a maser to drive a "solar sail" composed of a mesh of wires with the same spacing as the wavelength of the microwaves directed at the sail, since the manipulation of microwave radiation is somewhat easier than the manipulation of visible light. The hypothetical "Starwisp" interstellar probe design would use microwaves, rather than visible light, to push it. Masers spread out more rapidly than optical lasers owing to their longer wavelength, and so would not have as great an effective range.
Masers could also be used to power a painted solar sail, a conventional sail coated with a layer of chemicals designed to evaporate when struck by microwave radiation. The momentum generated by this evaporation could significantly increase the thrust generated by solar sails, as a form of lightweight ablative laser propulsion.
To further focus the energy on a distant solar sail, Forward proposed a lens designed as a large zone plate. This would be placed at a location between the laser or maser and the spacecraft.
Another more physically realistic approach would be to use the light from the Sun to accelerate. The ship would first drop into an orbit making a close pass to the Sun, to maximize the solar energy input on the sail, then it would begin to accelerate away from the system using the light from the Sun. Acceleration will drop approximately as the inverse square of the distance from the Sun, and beyond some distance, the ship would no longer receive enough light to accelerate it significantly, but would maintain the final velocity attained. When nearing the target star, the ship could turn its sails toward it and begin to use the outward pressure of the destination star to decelerate. Rockets could augment the solar thrust.
Similar solar sailing launch and capture were suggested for directed panspermia to expand life in other solar systems. Velocities of 0.05% of the speed of light could be obtained by solar sails carrying 10 kg payloads, using thin solar sail vehicles with effective areal densities of 0.1 g/m2, sails of 0.1 µm thickness, and sizes on the order of one square kilometer. Alternatively, swarms of 1 mm capsules could be launched on solar sails with radii of 42 cm, each carrying 10,000 capsules of a hundred million extremophile microorganisms to seed life in diverse target environments.
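The quoted speed is consistent with a simple escape integral: with both thrust and solar gravity falling off as 1/r², a sail released at distance r0 coasts out to v∞ = √(2 (a0 − g0) r0). A rough Python check (not from the article), assuming release from 1 AU with the 0.1 g/m2 loading given above:

from math import sqrt

AU = 1.496e11  # m
C  = 2.998e8   # m/s

a0 = 8.17 / 0.1 * 1e-3  # characteristic acceleration for 0.1 g/m^2 loading, m/s^2
g0 = 5.93e-3            # solar gravitational acceleration at 1 AU, m/s^2

v_inf = sqrt(2.0 * (a0 - g0) * AU)  # both accelerations scale as 1/r^2
print(round(v_inf / 1000), "km/s")          # ~150 km/s
print(round(100 * v_inf / C, 3), "% of c")  # ~0.05%, matching the figure above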
Theoretical studies suggest relativistic speeds if the solar sail harnesses a supernova.
Small solar sails have been proposed to accelerate the deorbiting of small artificial satellites from Earth orbits. Satellites in low Earth orbit can use a combination of solar pressure on the sail and increased atmospheric drag to accelerate satellite reentry. A de-orbit sail developed at Cranfield University is part of the UK satellite TechDemoSat-1, launched in 2014, and is expected to be deployed at the end of the satellite's five-year useful life. The sail's purpose is to bring the satellite out of orbit over a period of about 25 years. In July 2015, a British 3U CubeSat called DeorbitSail was launched into space with the purpose of testing a 16 m2 deorbit structure, but it ultimately failed to deploy. There is also a student 2U CubeSat mission called PW-Sat2, planned for launch in 2017, that will test a 4 m2 deorbit sail. In June 2017 a second British 3U CubeSat called InflateSail deployed a 10 m2 deorbit sail at an altitude of .
In June 2017 the 3U CubeSat URSAMAIOR was launched into low Earth orbit to test the deorbiting system ARTICA developed by Spacemind. The device, which occupies only 0.4 U of the CubeSat, is designed to deploy a sail of 2.1 m2 to deorbit the satellite at the end of its operational life.
IKAROS, launched in 2010, was the first practical solar sail vehicle. As of 2015, it was still under thrust, proving the practicality of a solar sail for long-duration missions. It is spin-deployed, with tip-masses in the corners of its square sail. The sail is made of thin polyimide film, coated with evaporated aluminium. It steers with electrically-controlled liquid crystal panels. The sail slowly spins, and these panels turn on and off to control the attitude of the vehicle. When on, they diffuse light, reducing the momentum transfer to that part of the sail. When off, the sail reflects more light, transferring more momentum. In that way, they turn the sail. Thin-film solar cells are also integrated into the sail, powering the spacecraft. The design is very reliable, because spin deployment, which is preferable for large sails, simplified the mechanisms to unfold the sail and the LCD panels have no moving parts.
Parachutes have very low mass, but a parachute is not a workable configuration for a solar sail. Analysis shows that a parachute configuration would collapse from the forces exerted by shroud lines, since radiation pressure does not behave like aerodynamic pressure, and would not act to keep the parachute open.
The highest thrust-to-mass designs for ground-assembled deploy-able structures are square sails with the masts and guy lines on the dark side of the sail. Usually there are four masts that spread the corners of the sail, and a mast in the center to hold guy-wires. One of the largest advantages is that there are no hot spots in the rigging from wrinkling or bagging, and the sail protects the structure from the Sun. This form can, therefore, go close to the Sun for maximum thrust. Most designs steer with small moving sails on the ends of the spars.
In the 1970s JPL studied many rotating blade and ring sails for a mission to rendezvous with Halley's Comet. The intention was to stiffen the structures using angular momentum, eliminating the need for struts, and saving mass. In all cases, surprisingly large amounts of tensile strength were needed to cope with dynamic loads. Weaker sails would ripple or oscillate when the sail's attitude changed, and the oscillations would add and cause structural failure. The difference in the thrust-to-mass ratio between practical designs was almost nil, and the static designs were easier to control.
JPL's reference design was called the "heliogyro". It had plastic-film blades deployed from rollers and held out by centrifugal forces as it rotated. The spacecraft's attitude and direction were to be completely controlled by changing the angle of the blades in various ways, similar to the cyclic and collective pitch of a helicopter. Although the design had no mass advantage over a square sail, it remained attractive because the method of deploying the sail was simpler than a strut-based design.
Heliogyro design is similar to the blades on a helicopter. The design is faster to manufacture due to lightweight centrifugal stiffening of sails. Also, they are highly efficient in cost and velocity because the blades are lightweight and long. Unlike the square and spinning disk designs, heliogyro is easier to deploy because the blades are compacted on a reel. The blades roll out when they are deploying after the ejection from the spacecraft. As the heliogyro travels through space the system spins around because of the centrifugal acceleration. Finally, payloads for the space flights are placed in the center of gravity to even out the distribution of weight to ensure stable flight.
JPL also investigated "ring sails" (Spinning Disk Sail in the above diagram), panels attached to the edge of a rotating spacecraft. The panels would have slight gaps, about one to five percent of the total area. Lines would connect the edge of one sail to the other. Masses in the middles of these lines would pull the sails taut against the coning caused by the radiation pressure. JPL researchers said that this might be an attractive sail design for large manned structures. The inner ring, in particular, might be made to have artificial gravity roughly equal to the gravity on the surface of Mars.
A solar sail can serve a dual function as a high-gain antenna. Designs differ, but most modify the metalization pattern to create a holographic monochromatic lens or mirror in the radio frequencies of interest, including visible light.
Pekka Janhunen from FMI has invented a type of solar sail called the electric solar wind sail. Mechanically it has little in common with the traditional solar sail design. The sails are replaced with straightened conducting tethers (wires) placed radially around the host ship. The wires are electrically charged to create an electric field around the wires. The electric field extends a few tens of metres into the plasma of the surrounding solar wind. The solar electrons are reflected by the electric field (like the photons on a traditional solar sail). The radius of the sail is from the electric field rather than the actual wire itself, making the sail lighter. The craft can also be steered by regulating the electric charge of the wires. A practical electric sail would have 50–100 straightened wires with a length of about 20 km each.
Electric solar wind sails can adjust their electrostatic fields and sail attitudes.
A magnetic sail would also employ the solar wind. However, the magnetic field deflects the electrically charged particles in the wind. It uses wire loops, and runs a static current through them instead of applying a static voltage.
All these designs maneuver, though the mechanisms are different.
Magnetic sails bend the path of the charged protons that are in the solar wind. By changing the sails' attitudes, and the size of the magnetic fields, they can change the amount and direction of the thrust.
The most common material in current designs is a thin layer of aluminum coating on a polymer (plastic) sheet, such as aluminized 2 µm Kapton film. The polymer provides mechanical support as well as flexibility, while the thin metal layer provides the reflectivity. Such material resists the heat of a pass close to the Sun and still remains reasonably strong. The aluminum reflecting film is on the Sun side. The sails of "Cosmos 1" were made of aluminized PET film (Mylar).
Eric Drexler developed a concept for a sail in which the polymer was removed. He proposed very high thrust-to-mass solar sails, and made prototypes of the sail material. His sail would use panels of thin aluminium film (30 to 100 nanometres thick) supported by a tensile structure. The sail would rotate and would have to be continually under thrust. He made and handled samples of the film in the laboratory, but the material was too delicate to survive folding, launch, and deployment. The design planned to rely on space-based production of the film panels, joining them to a deploy-able tension structure. Sails in this class would offer high area per unit mass and hence accelerations up to "fifty times higher" than designs based on deploy-able plastic films.
The material developed for the Drexler solar sail was a thin aluminium film with a baseline thickness of 0.1 µm, to be fabricated by vapor deposition in a space-based system. Drexler used a similar process to prepare films on the ground. As anticipated, these films demonstrated adequate strength and robustness for handling in the laboratory and for use in space, but not for folding, launch, and deployment.
Research by Geoffrey Landis in 1998–1999, funded by the NASA Institute for Advanced Concepts, showed that various materials such as alumina for laser lightsails and carbon fiber for microwave pushed lightsails were superior sail materials to the previously standard aluminium or Kapton films.
In 2000, Energy Science Laboratories developed a new carbon fiber material that might be useful for solar sails. The material is over 200 times thicker than conventional solar sail designs, but it is so porous that it has the same mass. The rigidity and durability of this material could make solar sails that are significantly sturdier than plastic films. The material could self-deploy and should withstand higher temperatures.
There has been some theoretical speculation about using molecular manufacturing techniques to create advanced, strong, hyper-light sail material, based on nanotube mesh weaves, where the weave "spaces" are less than half the wavelength of light impinging on the sail. While such materials have so far only been produced in laboratory conditions, and the means for manufacturing such material on an industrial scale are not yet available, such materials could mass less than 0.1 g/m2, making them lighter than any current sail material by a factor of at least 30. For comparison, 5-micrometre-thick Mylar sail material masses 7 g/m2, aluminized Kapton films have a mass as much as 12 g/m2, and Energy Science Laboratories' new carbon fiber material masses 3 g/m2.
The least dense metal is lithium, about 5 times less dense than aluminium. Fresh, unoxidized surfaces are reflective. At a thickness of 20 nm, lithium has an area density of 0.011 g/m2. A high-performance sail could be made of lithium alone at 20 nm (no emission layer). It would have to be fabricated in space and not used to approach the Sun. In the limit, a sail craft might be constructed with a total areal density of around 0.02 g/m2, giving it a lightness number of 67 and ac of about 400 mm/s2. Magnesium and beryllium are also potential materials for high-performance sails. These 3 metals can be alloyed with each other and with aluminium.
Aluminium is the common choice for the reflection layer. It typically has a thickness of at least 20 nm, with a reflectivity of 0.88 to 0.90. Chromium is a good choice for the emission layer on the face away from the Sun. It can readily provide emissivity values of 0.63 to 0.73 for thicknesses from 5 to 20 nm on plastic film. Usable emissivity values are empirical because thin-film effects dominate; bulk emissivity values do not hold up in these cases because material thickness is much thinner than the emitted wavelengths.
Sails are fabricated on Earth on long tables where ribbons are unrolled and joined to create the sails. Sail material needs to have as little mass as possible, because the craft must be carried into orbit by a launch vehicle such as the shuttle. Thus, these sails are packed, launched, and unfurled in space.
In the future, fabrication could take place in orbit inside large frames that support the sail. This would result in lower mass sails and elimination of the risk of deployment failure.
Sailing operations are simplest in interplanetary orbits, where altitude changes are done at low rates. For outward bound trajectories, the sail force vector is oriented forward of the Sun line, which increases orbital energy and angular momentum, resulting in the craft moving farther from the Sun. For inward trajectories, the sail force vector is oriented behind the Sun line, which decreases orbital energy and angular momentum, resulting in the craft moving in toward the Sun. It is worth noting that only the Sun's gravity pulls the craft toward the Sun—there is no analog to a sailboat's tacking to windward. To change orbital inclination, the force vector is turned out of the plane of the velocity vector.
In orbits around planets or other bodies, the sail is oriented so that its force vector has a component along the velocity vector, either in the direction of motion for an outward spiral, or against the direction of motion for an inward spiral.
Trajectory optimizations can often require intervals of reduced or zero thrust. This can be achieved by rolling the craft around the Sun line with the sail set at an appropriate angle to reduce or remove the thrust.
A close solar passage can be used to increase a craft's energy. The increased radiation pressure combines with the efficacy of being deep in the Sun's gravity well to substantially increase the energy for runs to the outer Solar System. The optimal approach to the Sun is done by increasing the orbital eccentricity while keeping the energy level as high as practical. The minimum approach distance is a function of sail angle, thermal properties of the sail and other structure, load effects on structure, and sail optical characteristics (reflectivity and emissivity). A close passage can result in substantial optical degradation. Required turn rates can increase substantially for a close passage. A sail craft arriving at a star can use a close passage to reduce energy, which also applies to a sail craft on a return trip from the outer Solar System.
A lunar swing-by can have important benefits for trajectories leaving from or arriving at Earth. This can reduce trip times, especially in cases where the sail is heavily loaded. A swing-by can also be used to obtain favorable departure or arrival directions relative to Earth.
A planetary swing-by could also be employed similar to what is done with coasting spacecraft, but good alignments might not exist due to the requirements for overall optimization of the trajectory.
The following table lists some example concepts using beamed laser propulsion as proposed by the physicist Robert L. Forward:
Ref:
Both the Mariner 10 mission, which flew by the planets Mercury and Venus, and the MESSENGER mission to Mercury demonstrated the use of solar pressure as a method of attitude control in order to conserve attitude-control propellant.
Hayabusa also used solar pressure on its solar paddles as a method of attitude control to compensate for its broken reaction wheels and chemical thrusters.
MTSAT-1R (Multi-Functional Transport Satellite)'s solar sail counteracts the torque produced by sunlight pressure on the solar array. The trim tab on the solar array makes small adjustments to the torque balance.
NASA has successfully tested deployment technologies on small scale sails in vacuum chambers.
On February 4, 1993, the Znamya 2, a 20-meter wide aluminized-mylar reflector, was successfully deployed from the Russian Mir space station. Although the deployment succeeded, propulsion was not demonstrated. A second test, Znamya 2.5, failed to deploy properly.
In 1999, a full-scale deployment of a solar sail was tested on the ground at DLR/ESA in Cologne.
A joint private project between Planetary Society, Cosmos Studios and Russian Academy of Science in 2001 made a suborbital prototype test, which failed because of rocket failure.
A 15-meter-diameter solar sail (SSP, solar sail sub payload, "soraseiru sabupeiro-do") was launched together with ASTRO-F on a M-V rocket on February 21, 2006, and made it to orbit. It deployed from the stage, but opened incompletely.
On August 9, 2004, the Japanese ISAS successfully deployed two prototype solar sails from a sounding rocket. A clover-shaped sail was deployed at 122 km altitude and a fan-shaped sail was deployed at 169 km altitude. Both sails used 7.5-micrometer film. The experiment purely tested the deployment mechanisms, not propulsion.
On 21 May 2010, Japan Aerospace Exploration Agency (JAXA) launched the world's first interplanetary solar sail spacecraft "IKAROS" ("Interplanetary Kite-craft Accelerated by Radiation Of the Sun") to Venus. Using a new solar-photon propulsion method, it was the first true solar sail spacecraft fully propelled by sunlight, and was the first spacecraft to succeed in solar sail flight.
JAXA successfully tested IKAROS in 2010. The goal was to deploy and control the sail and, for the first time, to determine the minute orbit perturbations caused by light pressure. Orbit determination was done by the nearby AKATSUKI probe from which IKAROS detached after both had been brought into a transfer orbit to Venus. The total effect over the six month flight was 100 m/s.
Until 2010, no solar sails had been successfully used in space as primary propulsion systems. On 21 May 2010, the Japan Aerospace Exploration Agency (JAXA) launched the IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) spacecraft, which deployed a 200 m2 polyimide experimental solar sail on June 10. In July, the next phase of the demonstration of acceleration by radiation began. On 9 July 2010, it was verified that IKAROS had collected radiation from the Sun and begun photon acceleration, based on newly calculated range-and-range-rate (RARR) orbit determination together with Doppler-based measurements of the craft's acceleration relative to the Earth that had been collected since before sail deployment. The data showed that IKAROS appears to have been solar-sailing since 3 June when it deployed the sail.
IKAROS has a diagonal spinning square sail 14×14 m (196 m2) made of a thin sheet of polyimide. The polyimide sheet had a mass of about 10 grams per square metre. A thin-film solar array is embedded in the sail. Eight LCD panels are embedded in the sail, whose reflectance can be adjusted for attitude control. IKAROS spent six months traveling to Venus, and then began a three-year journey to the far side of the Sun.
A team from the NASA Marshall Space Flight Center (Marshall), along with a team from the NASA Ames Research Center, developed a solar sail mission called NanoSail-D, which was lost in a launch failure aboard a Falcon 1 rocket on 3 August 2008. The second backup version, NanoSail-D2, also sometimes called simply NanoSail-D, was launched with FASTSAT on a Minotaur IV on November 19, 2010, becoming NASA's first solar sail deployed in low earth orbit. The objectives of the mission were to test sail deployment technologies, and to gather data about the use of solar sails as a simple, "passive" means of de-orbiting dead satellites and space debris. The NanoSail-D structure was made of aluminium and plastic, with the spacecraft massing less than . The sail has about of light-catching surface. After some initial problems with deployment, the solar sail was deployed and over the course of its 240-day mission reportedly produced a "wealth of data" concerning the use of solar sails as passive deorbit devices.
NASA launched the second NanoSail-D unit stowed inside the FASTSAT satellite on the Minotaur IV on November 19, 2010. The ejection date from the FASTSAT microsatellite was planned for December 6, 2010, but deployment only occurred on January 20, 2011.
On June 21, 2005, a joint private project between the Planetary Society, Cosmos Studios and the Russian Academy of Sciences launched a prototype sail "Cosmos 1" from a submarine in the Barents Sea, but the Volna rocket failed and the spacecraft did not reach orbit. They intended to use the sail to gradually raise the spacecraft to a higher Earth orbit over a mission duration of one month. The launch attempt sparked public interest, according to Louis Friedman. Despite the failed launch, The Planetary Society received applause for its efforts from the space community, and the attempt rekindled interest in solar sail technology.
On Carl Sagan's 75th birthday (November 9, 2009) the Planetary Society announced plans to make three further attempts, dubbed LightSail-1, -2, and -3. The new design will use a 32 m2 Mylar sail, deployed in four triangular segments like NanoSail-D. The launch configuration is a 3U CubeSat format, and as of 2015, it was scheduled as a secondary payload for a 2016 launch on the first SpaceX Falcon Heavy launch.
"LightSail-1" was launched on 20 May 2015. The purpose of the test was to allow a full checkout of the satellite's systems in advance of LightSail-2. Its deployment orbit was not high enough to escape Earth's atmospheric drag and demonstrate true solar sailing.
"LightSail-2" was launched on 25 June 2019, and deployed into a much higher low-Earth orbit. Its solar sails were deployed on 23 July 2019.
Despite the losses of "Cosmos 1" and NanoSail-D (which were due to failure of their launchers), scientists and engineers around the world remain encouraged and continue to work on solar sails. While most direct applications created so far intend to use the sails as inexpensive modes of cargo transport, some scientists are investigating the possibility of using solar sails as a means of transporting humans. This goal is strongly related to the management of very large (i.e. well above 1 km2) surfaces in space and the sail making advancements. Development of solar sails for manned space flight is still in its infancy.
A technology demonstration sail craft, dubbed "Sunjammer", was in development with the intent to prove the viability and value of sailing technology. "Sunjammer" had a square sail, 124 feet (38 meters) wide on each side (total area 13,000 sq ft or 1,208 sq m). It would have traveled from the Sun–Earth L1 Lagrangian point, 900,000 miles (1.5 million km) from Earth, to a distance of 1,864,114 miles (3 million km). The demonstration was expected to launch on a Falcon 9 in January 2015. It would have been a secondary payload, released after the placement of the DSCOVR climate satellite at the L1 point. Citing a lack of confidence in the ability of its contractor L'Garde to deliver, the mission was cancelled in October 2014.
The European Space Agency (ESA) has proposed a deorbit sail, named "Gossamer", that would be used to accelerate the deorbiting of small artificial satellites from low Earth orbits. Once deployed, the sail would use a combination of solar pressure on the sail and increased atmospheric drag to accelerate satellite reentry.
The Near-Earth Asteroid Scout (NEA Scout) is a mission being jointly developed by NASA's Marshall Space Flight Center (MSFC) and the Jet Propulsion Laboratory (JPL), consisting of a controllable low-cost CubeSat solar sail spacecraft capable of encountering near-Earth asteroids (NEA). Four booms would deploy, unfurling the aluminized polyimide solar sail. In 2015, NASA announced it had selected NEA Scout to launch as one of several secondary payloads aboard Artemis 1, the first flight of the agency's heavy-lift SLS launch vehicle.
OKEANOS (Oversize Kite-craft for Exploration and AstroNautics in the Outer Solar System) is a proposed mission concept by Japan's JAXA to Jupiter's Trojan asteroids using a hybrid solar sail for propulsion; the sail is covered with thin solar panels to power an ion engine. "In-situ" analysis of the collected samples would be performed either by direct contact or using a lander carrying a high-resolution mass spectrometer. A lander and a sample return to Earth are options under study. The OKEANOS Jupiter Trojan Asteroid Explorer is a finalist for Japan's ISAS 2nd Large-class mission, to be launched in the late 2020s.
The well-funded Breakthrough Starshot project, announced on April 12, 2016, aims to develop a fleet of 1,000 light-sail nanocraft carrying miniature cameras, propelled by ground-based lasers, and to send them to Alpha Centauri at 20% of the speed of light. The trip would take 20 years.
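As a rough check on that trip time (an illustrative estimate assuming the commonly cited distance to Alpha Centauri of about 4.37 light-years and a constant cruise speed), travel time is simply distance over velocity:

$$t \approx \frac{d}{v} = \frac{4.37\ \text{ly}}{0.2\,c} \approx 22\ \text{years},$$

broadly consistent with the quoted 20-year figure once the brief acceleration phase and rounding are allowed for.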
In August 2019, NASA awarded the "Solar Cruiser" team $400,000 for nine-month mission concept studies. The spacecraft would have a solar sail and would orbit the Sun in a polar orbit, while the coronagraph instrument would enable simultaneous measurements of the Sun's magnetic field structure and velocity of coronal mass ejections. If selected for development, it would launch in 2024.
A similar technology appeared in the "Star Trek: Deep Space Nine" episode "Explorers". In the episode, lightships are described as an ancient technology used by Bajorans to travel beyond their solar system, using light from the Bajoran sun and specially constructed sails to propel them through space.
A space sail is used in the novel "Planet of the Apes". | https://en.wikipedia.org/wiki?curid=29420 |
Sabellianism
In Christianity, Sabellianism is the Eastern Church heresy equivalent to the historic Western Patripassianism; both are forms of theological modalism. Sabellianism is the belief that the Father, Son, and Holy Spirit are three different "modes" or "aspects" of God, as opposed to the Trinitarian view of three distinct persons within the Godhead. The term "Sabellianism" comes from Sabellius, a theologian and priest of the 3rd century. None of his writings have survived, and so all that is known about him comes from his opponents. All evidence shows that Sabellius held Jesus to be deity while denying the plurality of persons in God, holding a belief similar to modalistic monarchianism. Modalistic monarchianism has been generally understood to have arisen during the second and third centuries, and to have been regarded as heresy after the fourth, although this is disputed by some.
Sabellianism has been rejected by the majority of Christian churches in favour of Trinitarianism, which was eventually defined as three distinct, co-equal, co-eternal Persons of One Substance by the Athanasian Creed, probably dating from the late 5th or early 6th century. The Greek term "homoousian" or "consubstantial" had been used before its adoption by the First Council of Nicaea. The Gnostics were the first to use the word "homoousios"; before the Gnostics there is no trace at all of its existence. The early church theologians were probably made aware of this concept, and thus of the doctrine of emanation, taught by the Gnostics. In Gnostic texts the word "homoousios" is used in several senses.
It has been noted that this Greek term "homoousian" ("same being" or "consubstantial"), which Athanasius of Alexandria favoured, was also a term reportedly used by Sabellius—a term that many who held with Athanasius were uneasy about. Their objection to the term "homoousian" was that it was considered to be un-Scriptural, suspicious, and "of a Sabellian tendency." This was because Sabellius also considered the Father and the Son to be "one substance," meaning that, to Sabellius, the Father and Son were one essential person, though operating as different manifestations or modes. Athanasius' use of the word is intended to affirm that while the Father and Son are eternally distinct in a truly personal manner (i.e. with mutual love John 3:35, 14:31), both are nevertheless One Being, Essence, Nature, or Substance, having One personal Spirit.
Modalism has been mainly associated with Sabellius, who taught a form of it in Rome in the 3rd century. It had come to him via the teachings of Noetus and Praxeas. Noetus was excommunicated from the Church after being examined by a council, and Praxeas is said to have recanted his modalistic views in writing, teaching again his former faith. Sabellius likewise was excommunicated by a council in Alexandria, and after complaint of this was made to Rome, a second council assembled there and ruled not only against Sabellianism but also against Arianism and Tritheism, while affirming a "Divine Triad" as the catholic understanding of the "Divine Monarchy".
Hippolytus of Rome knew Sabellius personally, writing in "Refutation of All Heresies" how he and others had admonished him. He knew Sabellius opposed Trinitarian theology, yet he called Modalistic Monarchianism the heresy of Noetus, not that of Sabellius. Sabellianism was embraced by Christians in Cyrenaica, to whom Dionysius, Patriarch of Alexandria (who was instrumental in the excommunication of Sabellius in Alexandria), wrote letters arguing against this belief. Hippolytus himself perceived modalism as a new and peculiar idea which was covertly gaining a following:
Some others are secretly introducing another doctrine, who have become disciples of one Noetus, who was a native of Smyrna, (and) lived not very long ago. This person was greatly puffed up and inflated with pride, being inspired by the conceit of a strange spirit.

There has appeared one, Noetus by name, and by birth a native of Smyrna. This person introduced a heresy from the tenets of Heraclitus. Now a certain man called Epigonus becomes his minister and pupil, and this person during his sojourn at Rome disseminated his godless opinion. But Cleomenes, who had become his disciple, an alien both in way of life and habits from the Church, was wont to corroborate the (Noetian) doctrine.

But in like manner, also, Noetus, being by birth a native of Smyrna, and a fellow addicted to reckless babbling, as well as crafty withal, introduced (among us) this heresy which originated from one Epigonus. It reached Rome, and was adopted by Cleomenes, and so has continued to this day among his successors.
Tertullian also perceived modalism as entering the Church from without as a new idea, opposed to the doctrine which had been received through succession. After setting forth his understanding of the manner of faith which had been received by the Church, he describes how the "simple", who always constitute the majority of believers, are often startled at the idea that the One God exists in three, and were opposed to his understanding of "the rule of faith." Proponents of Tertullian argue that he described the "simple", rather than those who opposed him, as the majority; this is contended from Tertullian's argument that they were putting forth ideas of their own which had not been taught to them by their elders:
We, however, as we indeed always have done (and more especially since we have been better instructed by the Paraclete, who leads men indeed into all truth), believe that there is one only God, but under the following dispensation, or οἰκονομία, as it is called, that this one only God has also a Son, His Word, who proceeded from Himself, by whom all things were made, and without whom nothing was made. Him we believe to have been sent by the Father into the Virgin, and to have been born of her—being both Man and God, the Son of Man and the Son of God, and to have been called by the name of Jesus Christ; we believe Him to have suffered, died, and been buried, according to the Scriptures, and, after He had been raised again by the Father and taken back to heaven, to be sitting at the right hand of the Father, and that He will come to judge the quick and the dead; who sent also from heaven from the Father, according to His own promise, the Holy Ghost, the Paraclete, the sanctifier of the faith of those who believe in the Father, and in the Son, and in the Holy Ghost. That this rule of faith has come down to us from the beginning of the gospel, even before any of the older heretics, much more before Praxeas, a pretender of yesterday, will be apparent both from the lateness of date which marks all heresies, and also from the absolutely novel character of our new-fangled Praxeas. In this principle also we must henceforth find a presumption of equal force against all heresies whatsoever—that whatever is first is true, whereas that is spurious which is later in date.
The simple, indeed, (I will not call them unwise and unlearned,) who always constitute the majority of believers, are startled at the dispensation (of the Three in One), on the ground that their very rule of faith withdraws them from the world’s plurality of gods to the one only true God; not understanding that, although He is the one only God, He must yet be believed in with His own οἰκονομία . The numerical order and distribution of the Trinity they assume to be a division of the Unity; whereas the Unity which derives the Trinity out of its own self is so far from being destroyed, that it is actually supported by it. They are constantly throwing out against us that we are preachers of two gods and three gods, while they take to themselves pre-eminently the credit of being worshippers of the One God; just as if the Unity itself with irrational deductions did not produce heresy, and the Trinity rationally considered constitute the truth.
According to modalism and Sabellianism, God is said to be only one person who reveals himself in different ways called "modes", "faces", "aspects", "roles" or "masks" (Greek πρόσωπα "prosopa"; Latin "personae") of the One God, as perceived by "the believer", rather than "three co-eternal persons" within "the Godhead", or a "co-equal Trinity". Modalists note that the only number expressly and repeatedly ascribed to God in the Old Testament is "One," do not accept interpreting this number as denoting union (i.e. Gen 2:24) when it is applied to God, and dispute the meaning or validity of related New Testament passages cited by Trinitarians. The Comma Johanneum, which is generally regarded as a spurious text in First John (1 John 5:7) known primarily from the King James Version and some versions of the Textus Receptus, but not included in modern critical texts, is an instance (the only one expressly stated) of the word "Three" describing God. Many modalists point out the lack of the word "Trinity" in any canonical scripture.
Passages such as Deut 6:4-5; Deut 32:12; 2Kings 19:15-19; Job 6:10; Job 31:13-15; Psalm 71:22; Psalm 83:16,18; Is 42:8; Is 45:5-7; Is 48:2,9,11-13; Mal 2:8,10; Matt 19:17; Romans 3:30; 2Cor 11:2-3; Gal 3:20; and Jude 1:25 are referenced by modalists as affirming that the Being of the One God is solidly single, and although known in several modes, precludes any concept of divine co-existence. Hippolytus described similar reasoning by Noetus and his followers saying: Now they seek to exhibit the foundation for their dogma by citing the word in the law, “I am the God of your fathers: ye shall have no other gods beside me;” and again in another passage, “I am the first,” He saith, “and the last; and beside me there is none other.” Thus they say they prove that God is one... And we cannot express ourselves otherwise, he says; for the apostle also acknowledges one God, when he says, “Whose are the fathers, (and) of whom as concerning the flesh Christ came, who is over all, God blessed for ever.”
Oneness Pentecostals, an identifier used by some modern modalists, claim that Colossians 1:12-20 refers to Christ's relationship with the Father in the sense of different roles of God:
giving thanks to the Father, who has qualified you to share in the inheritance of the saints in light. He has delivered us from the domain of darkness and transferred us to the kingdom of his beloved Son, in whom we have redemption, the forgiveness of sins. He is the image of the invisible God, the firstborn of all creation. For by him all things were created, in heaven and on earth, visible and invisible, whether thrones or dominions or rulers or authorities; all things were created through him and for him. And he is before all things, and in him all things hold together. And he is the head of the body, the church. He is the beginning, the firstborn from the dead, that in everything he might be preeminent. For in him all the fullness of God was pleased to dwell, and through him to reconcile to himself all things, whether on earth or in heaven, making peace by the blood of his cross.
Oneness Pentecostals also cite Christ's response to Philip's query on who the Father was, in John 14:10, to support this assertion: "Believest thou not that I am in the Father, and the Father in me? the words that I speak unto you I speak not of myself: but the Father that dwelleth in me, he doeth the works."
Trinitarian Christians hold that verses such as Colossians 1:12-20 remove all reasonable doubt that scripture teaches the Son, Who IS the Word of God (i.e. John 1:1-3), is literally "living," and literally Creator of everything together with God the Father and the Spirit of God. In the Trinitarian view, the above usage not only takes John 14:10 out of its immediate context, but is also resolutely contrary to the congruence of the Gospel of John as a whole, and strongly suspected of begging the question in interpretation. Trinitarians understand John 14:10 as informed by parallel verses such as John 1:14 and John 1:18, and as affirming the eternal union of the Son with His Father.
Many doctrinal exchanges between modalists and Trinitarians are similar to the above. Passages such as Gen 1:26-27; Gen 16:11-13; Gen 32:24,30; Judg 6:11-16; Is 48:16; Zech 2:8-9; Matt 3:16-17; Mark 13:32; Luke 12:10; John 5:18-27; John 14:26-28; John 15:26; John 16:13-16; John 17:5,20-24; Acts 1:6-9; and Heb 1:1-3,8-10 are referenced by Trinitarians as affirming that the Being of the One God is an eternal, personal, and mutually indwelling communion of Father [God], Son [the Word of God], and Holy Spirit [the Spirit of God]. Addressing the fact that the word "Trinity" does not occur in scripture, Trinitarians attest that extra-biblical doctrinal language often summarizes our understanding of scripture in a clear and concise manner (other examples being the words "modalism", "mode", and "role"), and that use of such language does not of itself demonstrate accuracy or inaccuracy. Further, the accusatory implication that the word "Trinity" gained common use apart from careful and pious fidelity to scripture may be associated with ad hominem argumentation. Hippolytus described his own response to Noetus' doctrine, claiming the truth to be more evident than either of the two mutually opposed views of Arianism and Sabellianism: In this way, then, they choose to set forth these things, and they make use only of one class of passages; just in the same one-sided manner that Theodotus employed when he sought to prove that Christ was a mere man. But neither has the one party nor the other understood the matter rightly, as the Scriptures themselves confute their senselessness, and attest the truth. See, brethren, what a rash and audacious dogma they have introduced... For who will not say that there is one God? Yet he will not on that account deny the economy [i.e., the number and disposition of persons in the Trinity]. The proper way, therefore, to deal with the question is first of all to refute the interpretation put upon these passages by these men, and then to explain their real meaning.
Tertullian said of Praxeas' followers: For, confuted on all sides on the distinction between the Father and the Son, which we maintain without destroying their inseparable union... they endeavour to interpret this distinction in a way which shall nevertheless tally with their own opinions: so that, all in one Person, they distinguish two, Father and Son, understanding the Son to be flesh, that is man, that is Jesus; and the Father to be spirit, that is God, that is Christ. Thus they, while contending that the Father and the Son are one and the same, do in fact begin by dividing them rather than uniting them.
A comparison of the above statement by Tertullian with the following example statement made by Oneness Pentecostals today is striking: "Jesus is the Son of God according to the flesh... and the very God Himself according to the Spirit..."
The form of the Lord's Name appearing in verse nineteen of the Great Commission, Matthew 28:16-20, has also historically been spoken during Christian baptism, Trinitarian Christians believing the three distinct, albeit co-inherent, persons of the Holy Trinity received witness by Jesus' baptism. Many modalists do not use this form as the Lord's Name. It is also suggested by some modern Oneness Pentecostal critics that Matthew 28:19 is not part of the original text, because Eusebius of Caesarea quoted it by saying "In my name", and in that source there was no mention of baptism in the verse. Eusebius did, however, quote the "trinitarian" formula in his later writings (Conybeare, "Hibbert Journal" i (1902-3), page 102). Matthew 28:19 is also quoted in the Didache (Didache 7:1), which dates to the late 1st or early 2nd century, and in the Diatessaron (Diatessaron 55:5-7), a mid-2nd-century harmony of the Synoptic Gospels. "Shem-Tob's Hebrew Gospel of Matthew" (George Howard), written during the 14th century, also has no reference to baptism or a "trinitarian" formula in Matthew 28:19. However, it is also true that no Greek manuscript of the Gospel of Matthew has ever been found which does not contain Matthew 28:19. The earliest extant copies of Matthew's Gospel date to the 3rd century, and they contain Matthew 28:19. Therefore, scholars generally agree that Matthew 28:19 is likely part of the original Gospel of Matthew, though a minority disputes this.
In passages of scripture such as Matthew 3:16-17 where the Father, Son, and Holy Spirit are separated in the text and witness, modalists view this phenomenon as confirming God's omnipresence, and His ability to manifest himself as he pleases. Oneness Pentecostals and Modalists attempt to dispute the traditional doctrine of eternal co-existent union, while affirming the Christian doctrine of God taking on flesh as Jesus Christ. Like Trinitarians, Oneness adherents attest that Jesus Christ is fully God and fully man. However, Trinitarians believe that the "Word of God," the eternal second Person of the Trinity, was manifest as the Son of God by taking humanity to Himself and by glorifying that Humanity to equality with God through His resurrection, in eternal union with His own Divinity. In contrast, Oneness adherents hold that the One and Only true God—Who manifests Himself in any way He chooses, including as Father, Son and Holy Spirit (though not choosing to do so in an eternally simultaneous manner)—became man in the temporary role of Son. Many Oneness Pentecostals have also placed a strongly Nestorian distinction between Jesus' humanity and Divinity as in the example compared with Tertullian's statement above.
Oneness Pentecostals and other modalists are regarded by Roman Catholic, Greek Orthodox, and most other mainstream Christians as heretical for denying the literal existence of God's Beloved Son from Heaven, including His eternal Being and personal communion with the Father as High Priest, Mediator, Intercessor and Advocate; rejecting the direct succession of apostolic gifts and authority through the ordination of the Christian bishops; rejecting the identity of mainstream Christians as the God-begotten Body and Church which Christ founded; and rejecting the affirmations of the ecumenical councils such as the Councils of Nicaea and Constantinople, including the Holy Trinity. For mainstream Christendom, these rejections are similar to those of Unitarianism, in that they primarily result from Christological heresy. While many Unitarians are Arians, modalists differentiate themselves from Arian or Semi-Arian Unitarians by affirming Christ's full Godhead, whereas both the Arian and Semi-Arian views assert Christ as not of one substance (Greek: οὐσία) with, and therefore also not equal with, God the Father. Dionysius, bishop of Rome, set forth the understanding of traditional Christianity concerning both Arianism and Sabellianism in "Against the Sabellians", ca. AD 262. Like Hippolytus, he explained that the two errors lie at opposite extremes in seeking to understand the Son of God: Arianism overstressing that the Son is distinct from the Father, and Sabellianism overstressing that the Son is equal to the Father. He also repudiated the idea of three Gods as error. While Arianism and Sabellianism may appear to be diametrically opposed, the former claiming Christ to be created and the latter claiming Christ is God, both in common deny the Trinitarian belief that Christ is God Eternal in His Humanity, and that this is the very basis of man's hope of salvation. "One, not by conversion of the Godhead into flesh, but by taking of the manhood into God."
Hippolytus' account of the excommunication of Noetus is as follows: When the blessed presbyters heard this, they summoned him before the Church, and examined him. But he denied at first that he held such opinions. Afterwards, however, taking shelter among some, and having gathered round him some others who had embraced the same error, he wished thereafter to uphold his dogma openly as correct. And the blessed presbyters called him again before them, and examined him. But he stood out against them, saying, “What evil, then, am I doing in glorifying Christ?” And the presbyters replied to him, “We too know in truth one God; we know Christ; we know that the Son suffered even as He suffered, and died even as He died, and rose again on the third day, and is at the right hand of the Father, and cometh to judge the living and the dead. And these things which we have learned we allege.” Then, after examining him, they expelled him from the Church. And he was carried to such a pitch of pride, that he established a school.
Today's Oneness Pentecostal organisations left their original organization when a council of Pentecostal leaders officially adopted Trinitarianism, and have since established schools.
Epiphanius ("Haeres." 62), writing about 375, notes that the adherents of Sabellius were still to be found in great numbers, both in Mesopotamia and at Rome. The First Council of Constantinople in 381 (canon VII) and the Third Council of Constantinople in 680 (canon XCV) declared the baptism of Sabellius to be invalid, which indicates that Sabellianism was still extant.
The chief critics of Sabellianism were Tertullian and Hippolytus. In his work "Adversus Praxeas", Chapter I, Tertullian wrote "By this Praxeas did a twofold service for the devil at Rome: he drove away prophecy, and he brought in heresy; he put to flight the Paraclete, and he crucified the Father." Likewise Hippolytus wrote, "Do you see, he says, how the Scriptures proclaim one God? And as this is clearly exhibited, and these passages are testimonies to it, I am under necessity, he says, since one is acknowledged, to make this One the subject of suffering. For Christ was God, and suffered on account of us, being Himself the Father, that He might be able also to save us... See, brethren, what a rash and audacious dogma they have introduced, when they say without shame, the Father is Himself Christ, Himself the Son, Himself was born, Himself suffered, Himself raised Himself. But it is not so." From these notions came the pejorative term "Patripassianism" for the movement, from the Latin words "pater" for "father", and "passus" from the verb "to suffer" because it implied that the Father suffered on the Cross.
It is important to note that the only extant sources for our understanding of Sabellianism come from its detractors. Scholars today are not in agreement as to what exactly Sabellius or Praxeas taught. It is easy to suppose that Tertullian and Hippolytus at least at times misrepresented the opinions of their opponents.
The Greek Orthodox teach that God is not of a substance that is comprehensible, since God the Father has no origin and is eternal and infinite. Thus it is improper to speak of things as "physical" and "metaphysical"; rather it is correct to speak of things as "created" and "uncreated." God the Father is the origin and source of the Trinity, of Whom the Son is begotten and the Spirit proceeding, all Three being Uncreated. Therefore, the consciousness of God is not obtainable to created beings either in this life or the next (see apophatism). Through co-operation with the Holy Spirit (called theosis), Mankind can become good (God-like), not becoming uncreated, but partaker of His divine energies. From such a perspective Mankind can be reconciled from the Knowledge of Good and the Knowledge of Evil he obtained in the Garden of Eden (see the Fall of Man), his created substance thus partaking of Uncreated God through the indwelling Presence of the eternally incarnate Son of God and His Father by the Spirit.
At the Arroyo Seco World Wide Camp Meeting, near Los Angeles, in 1913, Canadian evangelist R.E. McAlister stated at a baptismal service that the apostles had baptized in the name of Jesus only and not in the triune Name of Father, Son, and Holy Spirit. Later that night, John G. Schaeppe, a German immigrant, had a vision of Jesus and woke up the camp shouting that the name of Jesus needed to be glorified. From that point, Frank J. Ewart began requiring that anyone baptized using the Trinitarian formula needed to be rebaptized in the name of Jesus “only.” Support for this position began to spread, along with a belief in one Person in the Godhead, acting in different modes or offices.
The General Council of the Assemblies of God convened in St. Louis, Missouri, in October 1916 to confirm its belief in Trinitarian orthodoxy. The Oneness camp faced a majority that required them either to accept the Trinitarian baptismal formula and the orthodox doctrine of the Trinity or to remove themselves from the denomination. In the end, about a quarter of the ministers withdrew.
Oneness Pentecostalism teaches that God is one Person, and that the Father (a spirit) is united with Jesus (a man) as the Son of God. However, Oneness Pentecostalism differs somewhat by rejecting sequential modalism, and by its full acceptance of the begotten humanity of the Son: the Son, not eternally begotten, was the man Jesus, who was born, crucified, and risen, and not the deity. This directly opposes the pre-existence of the Son as a pre-existent mode, which Sabellianism generally does not oppose.
Oneness Pentecostals believe that Jesus was "Son" only when he became flesh on earth, but was the Father before being made man. They refer to the Father as the "Spirit" and the Son as the "Flesh", but they believe that Jesus and the Father are one essential Person, though operating as different "manifestations" or "modes". Oneness Pentecostals reject the Trinity doctrine, viewing it as pagan and nonscriptural, and hold to the Jesus' Name doctrine with respect to baptisms. They are often referred to as "Modalists" or "Jesus Only". Oneness Pentecostalism can be compared to Sabellianism, or can be described as holding to a form of Sabellianism, as both are nontrinitarian and both believe that Jesus was "Almighty God in the Flesh", but the two do not identify completely with each other.
It cannot be known with certainty whether Sabellius taught Modalism exactly as it is taught today as Oneness doctrine, since only a few fragments of his writings are extant; all we have of his teachings comes through the writings of his detractors.
Excerpts demonstrating some of the known doctrinal characteristics of the ancient Sabellians may be seen to compare with the doctrines of the modern Oneness movement.
While Oneness Pentecostals seek to differentiate themselves from ancient Sabellianism, modern theologians such as James R. White and Robert Morey see no significant difference between the ancient heresy of Sabellianism and current Oneness doctrine. This is based on the denial by Oneness Pentecostals of the Trinity, especially of the Divinity and Eternality of the Son of God, based upon a denial of the distinction between the Father, Son, and Holy Spirit. Sabellianism, Patripassianism, Modalistic Monarchianism, functionalism, Jesus Only, Father Only, and Oneness Pentecostalism are viewed by these theologians as being derived from a Platonic doctrine that God was an indivisible Monad and could not be differentiated as distinct Persons. | https://en.wikipedia.org/wiki?curid=29425 |
Sino-Indian War
The Sino-Indian War, also known as the Indo-China War and Sino-Indian Border Conflict, was a war between China and India that occurred in 1962. A disputed Himalayan border was the main cause of the war. There had been a series of violent border skirmishes between the two countries after the 1959 Tibetan uprising, when India granted asylum to the Dalai Lama. India initiated a defensive Forward Policy from 1960 to hinder Chinese military patrols and logistics, in which it placed outposts along the border, including several north of the McMahon Line, the eastern portion of the Line of Actual Control proclaimed by Chinese Premier Zhou Enlai in 1959.
Chinese military action grew increasingly aggressive after India rejected proposed Chinese diplomatic settlements throughout 1960–1962, with China re-commencing previously banned "forward patrols" in Ladakh from 30 April 1962. China finally abandoned all attempts at peaceful resolution on 20 October 1962, invading disputed territory along the 3,225-kilometre (2,000-mile) Himalayan border in Ladakh and across the McMahon Line. Chinese troops advanced over Indian forces in both theatres, capturing Rezang La in Chushul in the western theatre, as well as Tawang in the eastern theatre. The war ended when China declared a ceasefire on 20 November 1962, and simultaneously announced its withdrawal to its claimed "Line of Actual Control".
Much of the fighting took place in harsh mountain conditions, entailing large-scale combat at altitudes of over 4,000 metres (14,000 feet). The Sino-Indian War was also notable for the lack of deployment of naval and aerial assets by either China or India.
As the Sino-Soviet split heated up, Moscow made a major effort to support India, especially with the sale of advanced MiG fighter-aircraft. The United States and Britain refused to sell advanced weaponry to India, causing it to turn to the Soviet Union.
This was the first war between India and China. Following the end of the war, both sides kept forward armed positions and a number of small clashes broke out, but no large-scale fighting ensued.
China and India shared a long border, sectioned into three stretches by Nepal, Sikkim (then an Indian protectorate), and Bhutan, which follows the Himalayas between Burma and what was then West Pakistan. A number of disputed regions lie along this border. At its western end is the Aksai Chin region, an area the size of Switzerland that sits between the Chinese autonomous region of Xinjiang and Tibet (which China declared as an autonomous region in 1965). The eastern border, between Burma and Bhutan, comprises the present Indian state of Arunachal Pradesh (formerly the North-East Frontier Agency). Both of these regions were overrun by China in the 1962 conflict.
Most combat took place at high elevations. The Aksai Chin region is a desert of salt flats around 5,000 metres (16,000 feet) above sea level, and Arunachal Pradesh is mountainous with a number of peaks exceeding 7,000 metres (23,000 feet). The Chinese Army had possession of one of the highest ridges in the regions. The high altitude and freezing conditions also caused logistical and welfare difficulties; in past similar conflicts (such as the Italian Campaign of World War I) harsh conditions have caused more casualties than have enemy actions. The Sino-Indian War was no different, with many troops on both sides succumbing to the freezing cold temperatures.
The main cause of the war was a dispute over the sovereignty of the widely separated Aksai Chin and Arunachal Pradesh border regions. Aksai Chin, claimed by India to belong to Kashmir and by China to be part of Xinjiang, contains an important road link that connects the Chinese regions of Tibet and Xinjiang. China's construction of this road was one of the triggers of the conflict.
The western portion of the Sino-Indian boundary originated in 1834, with the conquest of Ladakh by the armies of Raja Gulab Singh (Dogra) under the suzerainty of the Sikh Empire. Following an unsuccessful campaign into Tibet, Gulab Singh and the Tibetans signed a treaty in 1842 agreeing to stick to the "old, established frontiers", which were left unspecified. The British defeat of the Sikhs in 1846 resulted in the transfer of the Jammu and Kashmir region, including Ladakh, to the British, who then installed Gulab Singh as the Maharaja under their suzerainty. British commissioners contacted Chinese officials to negotiate the border, but the Chinese did not show any interest. The British boundary commissioners fixed the southern end of the boundary at Pangong Lake, but regarded the area north of it up to the Karakoram Pass as "terra incognita".
The Maharaja of Kashmir and his officials were keenly aware of the trade routes from Ladakh. Starting from Leh, there were two main routes into Central Asia: one passed through the Karakoram Pass to Shahidulla at the foot of the Kunlun Mountains and went on to Yarkand through the Kilian and Sanju passes; the other went east via the Chang Chenmo Valley, passed the Lingzi Tang Plains in the Aksai Chin region, and followed the course of the Karakash River to join the first route at Shahidulla. The Maharaja regarded Shahidulla as his northern outpost, in effect treating the Kunlun mountains as the boundary of his domains. His British suzerains were sceptical of such an extended boundary because Shahidulla was 79 miles away from the Karakoram pass and the intervening area was uninhabited. Nevertheless, the Maharaja was allowed to treat Shahidulla as his outpost for more than 20 years.
Chinese Turkestan regarded the "northern branch" of the Kunlun range with the Kilian and Sanju passes as its southern boundary. Thus the Maharaja's claim was uncontested. After the 1862 Dungan Revolt, which saw the expulsion of the Chinese from Turkestan, the Maharaja of Kashmir constructed a small fort at Shahidulla in 1864. The fort was most likely supplied from Khotan, whose ruler was now independent and on friendly terms with Kashmir. When the Khotanese ruler was deposed by the Kashgaria strongman Yakub Beg, the Maharaja was forced to abandon his post in 1867. It was then occupied by Yakub Beg's forces until the end of the Dungan Revolt.
In the intervening period, W. H. Johnson of the Survey of India was commissioned to survey the Aksai Chin region. In the course of his work, he was "invited" by the Khotanese ruler to visit his capital. After returning, Johnson noted that Khotan's border was at Brinjga, in the Kunlun mountains, and that the entire Karakash Valley was within the territory of Kashmir. The boundary of Kashmir that he drew, stretching from Sanju Pass to the eastern edge of the Chang Chenmo Valley along the Kunlun mountains, is referred to as the "Johnson Line" (or "Ardagh-Johnson Line").
After the Chinese reconquered Turkestan in 1878, renaming it Xinjiang, they again reverted to their traditional boundary. By now, the Russian Empire was entrenched in Central Asia, and the British were anxious to avoid a common border with the Russians. After creating the Wakhan corridor as the buffer in the northwest of Kashmir, they wanted the Chinese to fill out the "no man's land" between the Karakoram and Kunlun ranges. Under British (and possibly Russian) encouragement, the Chinese occupied the area up to the Yarkand River valley (called Raskam), including Shahidulla, by 1890. They also erected a boundary pillar at the Karakoram pass by about 1892. These efforts appear to have been half-hearted. A map provided in 1893 by Hung Ta-chen, a senior Chinese official at St. Petersburg, showed the boundary of Xinjiang up to Raskam. In the east, it was similar to the Johnson line, placing Aksai Chin in Kashmir territory.
By 1892, the British settled on the policy that their preferred boundary for Kashmir was the "Indus watershed", i.e., the water-parting from which waters flow into the Indus river system on one side and into the Tarim basin on the other. In the north, this water-parting was along the Karakoram range. In the east, it was more complicated because the Chip Chap River, Galwan River and the Chang Chenmo River flow into the Indus whereas the Karakash River flows into the Tarim basin. A boundary alignment along this water-parting was defined by the Viceroy Lord Elgin and communicated to London. The British government in due course proposed it to China via its envoy Sir Claude MacDonald in 1899. This boundary, which came to be called the Macartney–MacDonald Line, ceded to China the Aksai Chin plains in the northeast, and the Trans-Karakoram Tract in the north. In return, the British wanted China to cede its 'shadowy suzerainty' on Hunza.
In 1911 the Xinhai Revolution resulted in power shifts in China, and by the end of World War I, the British officially used the Johnson Line. They took no steps to establish outposts or assert control on the ground. According to Neville Maxwell, the British had used as many as 11 different boundary lines in the region, as their claims shifted with the political situation. From 1917 to 1933, the "Postal Atlas of China", published by the Government of China in Peking had shown the boundary in Aksai Chin as per the Johnson line, which runs along the Kunlun mountains. The "Peking University Atlas", published in 1925, also put the Aksai Chin in India. Upon independence in 1947, the government of India used the Johnson Line as the basis for its official boundary in the west, which included the Aksai Chin. On 1 July 1954, India's first Prime Minister Jawaharlal Nehru definitively stated the Indian position, claiming that Aksai Chin had been part of the Indian Ladakh region for centuries, and that the border (as defined by the Johnson Line) was non-negotiable. According to George N. Patterson, when the Indian government finally produced a report detailing the alleged proof of India's claims to the disputed area, "the quality of the Indian evidence was very poor, including some very dubious sources indeed".
In 1956–57, China constructed a road through Aksai Chin, connecting Xinjiang and Tibet, which ran south of the Johnson Line in many places. Aksai Chin was easily accessible to the Chinese, but access from India, which meant negotiating the Karakoram mountains, was much more difficult. The road appeared on Chinese maps published in 1958.
In 1826, British India gained a common border with China after the British wrested control of Manipur and Assam from the Burmese, following the First Anglo-Burmese War of 1824–1826. In 1847, Major J. Jenkins, agent for the North East Frontier, reported that Tawang was part of Tibet. In 1872, four monastic officials from Tibet arrived in Tawang and supervised a boundary settlement with Major R. Graham, NEFA official, which included the Tawang Tract as part of Tibet. Thus, in the last half of the 19th century, it was clear that the British treated the Tawang Tract as part of Tibet. This boundary was confirmed in a 1 June 1912 note from the British General Staff in India, stating that the "present boundary (demarcated) is south of Tawang, running westwards along the foothills from near Udalguri, Darrang to the southern Bhutanese border and Tezpur claimed by China." A 1908 map of The Province of Eastern Bengal and Assam prepared for the Foreign Department of the Government of India showed the international boundary from Bhutan continuing to the Baroi River, following the Himalayan foothill alignment.

In 1913, representatives of the UK, China and Tibet attended a conference in Simla regarding the borders between Tibet, China and British India. Whilst all three representatives initialed the agreement, Beijing later objected to the proposed boundary between the regions of Outer Tibet and Inner Tibet, and did not ratify it. The details of the Indo-Tibetan boundary were not revealed to China at the time. The foreign secretary of the British Indian government, Henry McMahon, who had drawn up the proposal, decided to bypass the Chinese (although instructed not to by his superiors) and settle the border bilaterally by negotiating directly with Tibet. According to later Indian claims, this border was intended to run through the highest ridges of the Himalayas, as the areas south of the Himalayas were traditionally Indian. The McMahon Line lay south of the boundary India claims. India's government held the view that the Himalayas were the ancient boundaries of the Indian subcontinent, and thus should be the modern boundaries of India, while it is the position of the Chinese government that the disputed area in the Himalayas has been geographically and culturally part of Tibet since ancient times.
Months after the Simla agreement, China set up boundary markers south of the McMahon Line. T. O'Callaghan, an official in the Eastern Sector of the North East Frontier, relocated all these markers to a location slightly south of the McMahon Line, and then visited Rima to confirm with Tibetan officials that there was no Chinese influence in the area. The British-run Government of India initially rejected the Simla Agreement as incompatible with the Anglo-Russian Convention of 1907, which stipulated that neither party was to negotiate with Tibet "except through the intermediary of the Chinese government". The British and Russians cancelled the 1907 agreement by joint consent in 1921. It was not until the late 1930s that the British started to use the McMahon Line on official maps of the region.
China took the position that the Tibetan government should not have been allowed to make such a treaty, rejecting Tibet's claims of independent rule. For its part, Tibet did not object to any section of the McMahon Line excepting the demarcation of the trading town of Tawang, which the Line placed under British-Indian jurisdiction. Up until World War II, Tibetan officials were allowed to administer Tawang with complete authority. Due to the increased threat of Japanese and Chinese expansion during this period, British Indian troops secured the town as part of the defence of India's eastern border.
In the 1950s, India began patrolling the region. It found that, at multiple locations, the highest ridges actually fell north of the McMahon Line. Given India's historic position that the original intent of the line was to separate the two nations by the highest mountains in the world, in these locations India extended its forward posts northward to the ridges, regarding this move as compliant with the original border proposal, although the Simla Convention did not explicitly state this intention.
The 1940s saw huge change with the Partition of India in 1947 (resulting in the establishment of the two new states of India and Pakistan), and the establishment of the People's Republic of China (PRC) after the Chinese Civil War in 1949. One of the most basic policies for the new Indian government was that of maintaining cordial relations with China, reviving its ancient friendly ties. India was among the first nations to grant diplomatic recognition to the newly created PRC.
At the time, Chinese officials issued no condemnation of Nehru's claims and made no opposition to Nehru's open declarations of control over Aksai Chin. In 1956, Chinese Premier Zhou Enlai stated that he had no claims over Indian-controlled territory. He later argued that Aksai Chin was already under Chinese jurisdiction and that the McCartney-MacDonald Line was the line China could accept. Zhou later argued that as the boundary was undemarcated and had never been defined by treaty between any Chinese or Indian government, the Indian government could not unilaterally define Aksai Chin's borders.
In 1950, the Chinese People's Liberation Army took control of Tibet, which all Chinese governments regarded as still part of China. Later the Chinese extended their influence by building a road in 1956–57 and placing border posts in Aksai Chin. India found out after the road was completed, protested against these moves and decided to look for a diplomatic solution to ensure a stable Sino-Indian border. To resolve any doubts about the Indian position, Prime Minister Jawaharlal Nehru declared in parliament that India regarded the McMahon Line as its official border. The Chinese expressed no concern at this statement, and in 1961 and 1962, the government of China asserted that there were no frontier issues to be taken up with India.
In 1954, Prime Minister Nehru wrote a memo calling for India's borders to be clearly defined and demarcated; in line with previous Indian philosophy, Indian maps showed a border that, in some places, lay north of the McMahon Line. Chinese Premier Zhou Enlai, in November 1956, again repeated Chinese assurances that the People's Republic had no claims on Indian territory, although official Chinese maps showed territory claimed by India as Chinese. CIA documents created at the time revealed that Nehru had ignored Burmese premier Ba Swe when he warned Nehru to be cautious when dealing with Zhou. They also allege that Zhou purposefully told Nehru that there were no border issues with India.
In 1954, China and India negotiated the Five Principles of Peaceful Coexistence, by which the two nations agreed to abide in settling their disputes. India presented a frontier map which was accepted by China, and the slogan "Hindi-Chini bhai-bhai" (Indians and Chinese are brothers) was popular then. In 1958, Nehru had privately told G. Parthasarathi, the Indian envoy to China, not to trust the Chinese at all and to send all communications directly to him, bypassing the Defence Minister V. K. Krishna Menon, since his communist background clouded his thinking about China. According to Georgia Tech scholar John W. Garver, Nehru's policy on Tibet was to create a strong Sino-Indian partnership which would be catalysed through agreement and compromise on Tibet. Garver believes that Nehru's previous actions had given him confidence that China would be ready to form an "Asian Axis" with India.
This apparent progress in relations suffered a major setback when, in 1959, Nehru accommodated the Tibetan religious leader at the time, the 14th Dalai Lama, who fled Lhasa after a failed Tibetan uprising against Chinese rule. The Chairman of the Communist Party of China, Mao Zedong, was enraged and asked the Xinhua News Agency to produce reports on Indian expansionists operating in Tibet.
Border incidents continued through this period. In August 1959, the People's Liberation Army took an Indian prisoner at Longju, which had an ambiguous position relative to the McMahon Line, and two months later in Aksai Chin, a clash at Kongka Pass led to the death of nine Indian frontier policemen.
On 2 October, Soviet Premier Nikita Khrushchev defended Nehru in a meeting with Chairman Mao. This action reinforced China's impression that the Soviet Union, the United States and India all had expansionist designs on China. The People's Liberation Army went so far as to prepare a self-defence counterattack plan. Negotiations were restarted between the nations, but no progress was made.
As a consequence of their non-recognition of the McMahon Line, China's maps showed both the North East Frontier Area (NEFA) and Aksai Chin to be Chinese territory. In 1960, Zhou Enlai unofficially suggested that India drop its claims to Aksai Chin in return for a Chinese withdrawal of claims over NEFA. Adhering to his stated position, Nehru believed that China did not have a legitimate claim over either of these territories, and thus was not ready to concede them. This adamant stance was perceived in China as Indian opposition to Chinese rule in Tibet. Nehru declined to conduct any negotiations on the boundary until Chinese troops withdrew from Aksai Chin, a position supported by the international community. India produced numerous reports on the negotiations, and translated Chinese reports into English to help inform the international debate. China believed that India was simply securing its claim lines in order to continue its "grand plans in Tibet". India's stance that China withdraw from Aksai Chin caused continual deterioration of the diplomatic situation to the point that internal forces were pressuring Nehru to take a military stance against China.
In 1960, based on an agreement between Nehru and Zhou Enlai, officials from India and China held discussions in order to settle the boundary dispute. China and India disagreed on the major watershed that defined the boundary in the western sector. The Chinese statements with respect to their border claims often misrepresented the cited sources. The failure of these negotiations was compounded by successful Chinese border agreements with Nepal (Sino-Nepalese Treaty of Peace and Friendship) and Burma in the same year.
At the beginning of 1961, Nehru appointed General B. M. Kaul as army Chief of General Staff, but he refused to increase military spending and prepare for a possible war. According to James Barnard Calvin of the U.S. Navy, in 1959, India started sending Indian troops and border patrols into disputed areas. This program created both border skirmishes and deteriorating relations between India and China. The aim of this policy was to create outposts behind advancing Chinese troops to interdict their supplies, forcing them north of the disputed line. There were eventually 60 such outposts, including 43 north of the McMahon Line, to which India claimed sovereignty. China viewed this as further confirmation of Indian expansionist plans directed towards Tibet. According to the Indian official history, implementation of the Forward Policy was intended to provide evidence of Indian occupation in the previously unoccupied region through which Chinese troops had been advancing. Kaul was confident, through contact with Indian Intelligence and CIA information, that China would not react with force. Indeed, at first the PLA simply withdrew, but eventually Chinese forces began to counter-encircle the Indian positions, which clearly encroached north of the McMahon Line. This led to a tit-for-tat Indian reaction, with each force attempting to outmanoeuvre the other. Despite the escalating nature of the dispute, the two forces withheld from engaging each other directly.
Chinese attention was diverted for a time by the military activity of the Nationalists on Taiwan, but on 23 June the U.S. assured China that a Nationalist invasion would not be permitted. China's heavy artillery facing Taiwan could then be moved to Tibet. It took China six to eight months to gather the resources needed for the war, according to Anil Athale, author of the official Indian history. The Chinese sent a large quantity of non-military supplies to Tibet through the Indian port of Calcutta.
Various border conflicts and "military incidents" between India and China flared up throughout the summer and autumn of 1962. In May, the Indian Air Force was told not to plan for close air support, although it was assessed as being a feasible way to counter the unfavourable ratio of Chinese to Indian troops. In June, a skirmish caused the deaths of dozens of Chinese troops. The Indian Intelligence Bureau received information about a Chinese buildup along the border which could be a precursor to war.
During June–July 1962, Indian military planners began advocating "probing actions" against the Chinese and accordingly moved mountain troops forward to cut off Chinese supply lines. According to Patterson, the Indian motives were threefold.
On 10 July 1962, 350 Chinese troops surrounded an Indian post in Chushul (north of the McMahon Line) but withdrew after a heated argument via loudspeaker. On 22 July, the Forward Policy was extended to allow Indian troops to push back Chinese troops already established in disputed territory. Whereas Indian troops were previously ordered to fire only in self-defence, all post commanders were now given discretion to open fire upon Chinese forces if threatened. In August, the Chinese military improved its combat readiness along the McMahon Line and began stockpiling ammunition, weapons and fuel.
Given his foreknowledge of the coming Cuban Missile Crisis, Mao Zedong was able to persuade Nikita Khrushchev, First Secretary of the Communist Party of the Soviet Union, to reverse the Soviet policy of backing India, at least temporarily. In mid-October, the Communist organ "Pravda" encouraged peace between India and China. When the Cuban Missile Crisis ended and Mao's rhetoric changed, the Soviet Union reversed course.
In June 1962, Indian forces established an outpost at Dhola, on the southern slopes of the Thag La Ridge. Dhola lay north of the McMahon Line but south of the ridges along which India interpreted the McMahon Line to run. In August, China issued diplomatic protests and began occupying positions at the top of Thag La. On 8 September, a 60-strong PLA unit descended to the south side of the ridge and occupied positions that dominated one of the Indian posts at Dhola. Fire was not exchanged, but Nehru said to the media that the Indian Army had instructions to "free our territory" and the troops had been given discretion to use force. On 11 September, it was decided that "all forward posts and patrols were given permission to fire on any armed Chinese who entered Indian territory".
The operation to occupy Thag La was flawed in that Nehru's directives were unclear, and it got underway very slowly as a result. In addition, each man had to carry a heavy load over the long trek, which severely slowed the advance. By the time the Indian battalion reached the point of conflict, Chinese units controlled both banks of the Namka Chu River. On 20 September, Chinese troops threw grenades at Indian troops and a firefight developed, triggering a long series of skirmishes for the rest of September.
Some Indian troops, including Brigadier Dalvi, who commanded the forces at Thag La, were also concerned that the territory they were fighting for was not strictly territory that "we should have been convinced was ours". According to Neville Maxwell, even members of the Indian defence ministry had categorical concerns about the validity of the fighting at Thag La.
On 4 October, Kaul assigned some troops to secure regions south of the Thag La Ridge. Kaul decided to first secure Yumtso La, a strategically important position, before re-entering the lost Dhola post. Kaul had by then realised that the attack would be desperate, and the Indian government tried to stop an escalation into all-out war. Indian troops marching to Thag La suffered in conditions they had not previously experienced; two Gurkha soldiers died of pulmonary edema.
On 10 October, an Indian Rajput patrol of 50 troops bound for Yumtso La was met by an emplaced Chinese position of some 1,000 soldiers. The Indian troops were in no position for battle, as Yumtso La was 16,000 feet (4,900 m) above sea level and Kaul had not planned on artillery support for the troops. The Chinese troops opened fire on the Indians in the belief that the Indians were north of the McMahon Line. The Indians were surrounded by Chinese positions, which used mortar fire, but managed to hold off the first Chinese assault, inflicting heavy casualties.
At this point, the Indian troops were in a position to push the Chinese back with mortar and machine gun fire, but Brigadier Dalvi opted not to fire, as it would mean decimating the Rajputs who were still in the area of the Chinese regrouping. They helplessly watched the Chinese ready themselves for a second assault. During the second Chinese assault, the Indians, realising the situation was hopeless, began their retreat. The Indian patrol suffered 25 casualties, and the Chinese 33. The Chinese troops held their fire as the Indians retreated, and then buried the Indian dead with military honours, as witnessed by the retreating soldiers. This was the first occurrence of heavy fighting in the war.
This attack had grave implications for India and Nehru tried to solve the issue, but by 18 October, it was clear that the Chinese were preparing for an attack, with a massive troop buildup. A long line of mules and porters had also been observed supporting the buildup and reinforcement of positions south of the Thag La Ridge.
Two of the major factors leading up to China's eventual conflicts with Indian troops were India's stance on the disputed borders and perceived Indian subversion in Tibet. There was "a perceived need to punish and end perceived Indian efforts to undermine Chinese control of Tibet, Indian efforts which were perceived as having the objective of restoring the pre-1949 status quo ante of Tibet". The other was "a perceived need to punish and end perceived Indian aggression against Chinese territory along the border". John W. Garver argues that the first perception was incorrect based on the state of the Indian military and polity in the 1960s. It was, nevertheless, a major reason for China's going to war. He argues that while Chinese perceptions of Indian border actions were "substantially accurate", Chinese perceptions of the supposed Indian policy towards Tibet were "substantially inaccurate".
The CIA's declassified POLO documents reveal contemporary American analysis of Chinese motives during the war. According to these documents, "Chinese apparently were motivated to attack by one primary consideration — their determination to retain the ground on which PLA forces stood in 1962 and to punish the Indians for trying to take that ground". In general terms, they tried to show the Indians once and for all that China would not acquiesce in a military "reoccupation" policy. Secondary reasons for the attack were to damage Nehru's prestige by exposing Indian weakness and to expose as traitorous Khrushchev's policy of supporting Nehru against a Communist country.
Another factor which might have affected China's decision for war with India was a perceived need to stop a Soviet-U.S.-India encirclement and isolation of China. India's relations with the Soviet Union and United States were both strong at this time, but the Soviets (and Americans) were preoccupied by the Cuban Missile Crisis and would not interfere with the Sino-Indian War. P. B. Sinha suggests that China waited until October to attack because the timing of the war paralleled American actions over Cuba, so as to avoid any chance of American or Soviet involvement. Although the American buildup of forces around Cuba occurred on the same day as the first major clash at Dhola, and China's buildup between 10 and 20 October appeared to coincide exactly with the United States' establishment of a blockade against Cuba which began on 20 October, the Chinese probably prepared their operation before they could have anticipated what would happen in Cuba. Another explanation is that the confrontation in the Taiwan Strait had eased by then.
Garver argues that the Chinese correctly assessed Indian border policies, particularly the Forward Policy, as attempts at incremental seizure of Chinese-controlled territory. On Tibet, Garver argues that one of the major factors leading to China's decision for war with India was a common tendency of humans "to attribute others' behavior to interior motivations, while attributing their own behavior to situational factors". Studies from China published in the 1990s confirmed that the root cause for China going to war with India was the perceived Indian aggression in Tibet, with the Forward Policy simply catalysing the Chinese reaction.
Neville Maxwell and Allen Whiting argue that the Chinese leadership believed they were defending territory that was legitimately Chinese, and which was already under de facto Chinese occupation prior to Indian advances, and regarded the Forward Policy as an Indian attempt at creeping annexation. Mao Zedong himself compared the Forward Policy to a strategic advance in Chinese chess:
India claims that the motive for the Forward Policy was to cut off the supply routes for Chinese troops posted in NEFA and Aksai Chin. According to the official Indian history, the Forward Policy was continued because of its initial success, as it claimed that Chinese troops withdrew when they encountered areas already occupied by Indian troops. It also claimed that the Forward Policy was succeeding in cutting off the supply lines of Chinese troops who had advanced south of the McMahon Line, though there was no evidence of such an advance before the 1962 war. The Forward Policy rested on the assumption that Chinese forces "were not likely to use force against any of our posts, even if they were in a position to do so". No serious re-appraisal of this policy took place even when Chinese forces ceased withdrawing. Nehru's confidence was probably justified given the difficulty China faced in supplying the area across high-altitude terrain more than 5000 km (3000 miles) from the more populated areas of China.
Chinese policy toward India, therefore, operated on two seemingly contradictory assumptions in the first half of 1961. On the one hand, the Chinese leaders continued to entertain a hope, although a shrinking one, that some opening for talks would appear. On the other hand, they read Indian statements and actions as clear signs that Nehru wanted to talk only about a Chinese withdrawal. Regarding the hope, they were willing to negotiate and tried to prod Nehru into a similar attitude. Regarding Indian intentions, they began to act politically and to build a rationale based on the assumption that Nehru already had become a lackey of imperialism; for this reason he opposed border talks.
Krishna Menon is reported to have said that when he arrived in Geneva on 6 June 1961 for an international conference on Laos, Chinese officials in Chen Yi's delegation indicated that Chen might be interested in discussing the border dispute with him. At several private meetings with Menon, Chen avoided any discussion of the dispute, and Menon surmised that the Chinese wanted him to broach the matter first. He did not, as he was under instructions from Nehru to avoid taking the initiative, leaving the Chinese with the impression that Nehru was unwilling to show any flexibility.
In September, the Chinese took a step toward criticising Nehru openly in their commentary. After citing Indonesian and Burmese press criticism of Nehru by name, the Chinese critiqued his moderate remarks on colonialism (People's Daily Editorial, 9 September): "Somebody at the Non-Aligned Nations Conference advanced the argument that the era of classical colonialism is gone and dead...contrary to facts." This was a distortion of Nehru's remarks but appeared close enough to be credible. On the same day, Chen Yi referred to Nehru by implication at the Bulgarian embassy reception: "Those who attempted to deny history, ignore reality, and distort the truth and who attempted to divert the Conference from its important object have failed to gain support and were isolated." On 10 September, they dropped all circumlocutions and criticised him by name in a China Youth article and NCNA report—the first time in almost two years that they had commented extensively on the Prime Minister.
By early 1962, the Chinese leadership began to believe that India's intentions were to launch a massive attack against Chinese troops, and that the Indian leadership wanted a war. In 1961, the Indian army had been sent into Goa, a small region without any other international borders apart from the Indian one, after Portugal refused to surrender the exclave colony to the Indian Union. Although this action met little to no international protest or opposition, China saw it as an example of India's expansionist nature, especially in light of heated rhetoric from Indian politicians. India's Home Minister declared, "If the Chinese will not vacate the areas occupied by it, India will have to repeat what it did in Goa. India will certainly drive out the Chinese forces", while another member of the Indian Congress Party pronounced, "India will take steps to end [Chinese] aggression on Indian soil just as it ended Portuguese aggression in Goa". By mid-1962, it was apparent to the Chinese leadership that negotiations had failed to make any progress, and the Forward Policy was increasingly perceived as a grave threat as Delhi increasingly sent probes deeper into border areas and cut off Chinese supply lines. Foreign Minister Marshal Chen Yi commented at one high-level meeting, "Nehru's forward policy is a knife. He wants to put it in our heart. We cannot close our eyes and await death." The Chinese leadership believed that their restraint on the issue was being perceived by India as weakness, leading to continued provocations, and that a major counterblow was needed to stop perceived Indian aggression.
Xu Yan, prominent Chinese military historian and professor at the PLA's National Defense University, gives an account of the Chinese leadership's decision to go to war. By late September 1962, the Chinese leadership had begun to reconsider their policy of "armed coexistence", which had failed to address their concerns with the forward policy and Tibet, and consider a large, decisive strike. On 22 September 1962, the "People's Daily" published an article which claimed that "the Chinese people were burning with 'great indignation' over the Indian actions on the border and that New Delhi could not 'now say that warning was not served in advance'."
The Indian side was confident that war would not be triggered and made few preparations. India had only two divisions of troops in the region of the conflict. In August 1962, Brigadier D. K. Palit claimed that a war with China in the near future could be ruled out. Even in September 1962, when Indian troops were ordered to "expel the Chinese" from Thag La, Maj. General J. S. Dhillon expressed the opinion that "experience in Ladakh had shown that a few rounds fired at the Chinese would cause them to run away." Because of this, the Indian army was completely unprepared when the attack at Yumtso La occurred.
Recently declassified CIA documents compiled at the time reveal that India's estimates of Chinese capabilities led it to neglect its military in favour of economic growth. It is claimed that if a more military-minded man had been in Nehru's place, India would have been more likely to be ready for the threat of a counter-attack from China.
On 6 October 1962, the Chinese leadership convened. Lin Biao reported that PLA intelligence units had determined that Indian units might assault Chinese positions at Thag La on 10 October (Operation Leghorn). The Chinese leadership and the Central Military Council decided upon war, resolving to launch a large-scale attack to punish perceived military aggression from India. In Beijing, a larger meeting of the Chinese military was convened to plan for the coming conflict.
Mao and the Chinese leadership issued a directive laying out the objectives for the war. A main assault would be launched in the eastern sector, coordinated with a smaller assault in the western sector. All Indian troops within China's claimed territories in the eastern sector would be expelled, and the war would be ended with a unilateral Chinese ceasefire and withdrawal, followed by a return to the negotiating table. India led the Non-Aligned Movement, Nehru enjoyed international prestige, and China, with a larger military, would be portrayed as an aggressor; Mao said that a well-fought war "will guarantee at least thirty years of peace" with India and judged that the benefits would offset the costs.
China also reportedly bought a significant amount of Indian rupee currency from Hong Kong, supposedly to distribute amongst its soldiers in preparation for the war.
On 8 October, additional veteran and elite divisions were ordered to prepare to move into Tibet from the Chengdu and Lanzhou military regions.
On 12 October, Nehru declared that he had ordered the Indian army to "clear Indian territory in the NEFA of Chinese invaders" and personally met with Kaul, issuing instructions to him.
On 14 October, an editorial in the "People's Daily" issued China's final warning to India: "So it seems that Mr. Nehru has made up his mind to attack the Chinese frontier guards on an even bigger scale. ... It is high time to shout to Mr. Nehru that the heroic Chinese troops, with the glorious tradition of resisting foreign aggression, can never be cleared by anyone from their own territory ... If there are still some maniacs who are reckless enough to ignore our well-intentioned advice and insist on having another try, well, let them do so. History will pronounce its inexorable verdict ... At this critical moment ... we still want to appeal once more to Mr. Nehru: better rein in at the edge of the precipice and do not use the lives of Indian troops as stakes in your gamble."
Marshal Liu Bocheng headed a group to determine the strategy for the war. He concluded that the opposing Indian troops were among India's best, and that victory would require deploying crack troops and relying on force concentration. On 16 October, this war plan was approved, and on 18 October the Politburo gave final approval for a "self-defensive counter-attack", scheduled for 20 October.
On 20 October 1962, the Chinese People's Liberation Army launched two attacks, 1000 kilometres (600 miles) apart. In the western theatre, the PLA sought to expel Indian forces from the Chip Chap valley in Aksai Chin while in the eastern theatre, the PLA sought to capture both banks of the Namka Chu river. Some skirmishes also took place at the Nathula Pass, which is in the Indian state of Sikkim (an Indian protectorate at that time). Gurkha rifles travelling north were targeted by Chinese artillery fire. After four days of fierce fighting, the three regiments of Chinese troops succeeded in securing a substantial portion of the disputed territory.
Chinese troops launched an attack on the southern banks of the Namka Chu River on 20 October. The Indian forces were undermanned, with only an understrength battalion to support them, while the Chinese troops had three regiments positioned on the north side of the river. The Indians expected Chinese forces to cross via one of five bridges over the river and defended those crossings. Instead, the PLA bypassed the defenders by fording the river, which was shallow at that time of year. They formed up into battalions on the Indian-held south side of the river under cover of darkness, with each battalion assigned against a separate group of Rajputs.
At 5:14 am, Chinese mortars opened fire on the Indian positions. Simultaneously, the Chinese cut the Indian telephone lines, preventing the defenders from making contact with their headquarters. At about 6:30 am, the Chinese infantry launched a surprise attack from the rear and forced the Indians to leave their trenches.
The Chinese overwhelmed the Indian troops in a series of flanking manoeuvres south of the McMahon Line and prompted their withdrawal from Namka Chu. Fearful of continued losses, Indian troops retreated into Bhutan. Chinese forces respected the border and did not pursue. Chinese forces now held all of the territory that was under dispute at the time of the Thag La confrontation, but they continued to advance into the rest of NEFA.
On 22 October, at 12:15 am, PLA mortars fired on Walong, on the McMahon line. Flares launched by Indian troops the next day revealed numerous Chinese milling around the valley. The Indians tried to use their mortars against the Chinese but the PLA responded by lighting a bush fire, causing confusion among the Indians. Some 400 Chinese troops attacked the Indian position. The initial Chinese assault was halted by accurate Indian mortar fire. The Chinese were then reinforced and launched a second assault. The Indians managed to hold them back for four hours, but the Chinese used weight of numbers to break through. Most Indian forces were withdrawn to established positions in Walong, while a company supported by mortars and medium machine guns remained to cover the retreat.
Elsewhere, Chinese troops launched a three-pronged attack on Tawang, which the Indians evacuated without any resistance.
Over the following days, there were clashes between Indian and Chinese patrols at Walong as the Chinese rushed in reinforcements. On 25 October, the Chinese made a probe, which was met with resistance from the 4th Sikhs. The following day, a patrol from the 4th Sikhs was encircled; after it failed to break out, another Indian unit flanked the Chinese, allowing the Sikhs to break free.
On the Aksai Chin front, China already controlled most of the disputed territory. Chinese forces quickly swept the region of any remaining Indian troops. Late on 19 October, Chinese troops launched a number of attacks throughout the western theatre. By 22 October, all posts north of Chushul had been cleared.
On 20 October, the Chinese easily took the Chip Chap Valley, Galwan Valley, and Pangong Lake. Many outposts and garrisons along the Western front were unable to defend against the surrounding Chinese troops. Most Indian troops positioned in these posts offered resistance but were either killed or taken prisoner. Indian support for these outposts was not forthcoming, as evidenced by the Galwan post, which had been surrounded by enemy forces in August, yet no attempt was made to relieve the besieged garrison. Following the 20 October attack, nothing was heard from Galwan.
On 24 October, Indian forces fought hard to hold the Rezang La Ridge, in order to prevent a nearby airstrip from falling.
After realising the magnitude of the attack, the Indian Western Command withdrew many of the isolated outposts to the south-east. Daulet Beg Oldi was also evacuated, but it was south of the Chinese claim line and was not approached by Chinese forces. Indian troops were withdrawn in order to consolidate and regroup in the event that China probed south of their claim line.
By 24 October, the PLA had entered territory previously administered by India to give the PRC a diplomatically strong position over India. The majority of Chinese forces had advanced sixteen kilometres (10 miles) south of the control line prior to the conflict. Four days of fighting were followed by a three-week lull. Zhou ordered the troops to stop advancing as he attempted to negotiate with Nehru. The Indian forces had retreated into more heavily fortified positions around Se La and Bomdi La which would be difficult to assault. Zhou sent Nehru a letter, proposing
Nehru's 27 October reply expressed interest in the restoration of peace and friendly relations and suggested a return to the "boundary prior to 8 September 1962". He firmly objected to a mutual twenty-kilometre (12-mile) withdrawal after "40 or 60 kilometres (25 or 40 miles) of blatant military aggression", wanting instead the creation of a larger immediate buffer zone to resist the possibility of a repeat offensive. Zhou's 4 November reply repeated his 1959 offer to return to the McMahon Line in NEFA and to the MacDonald Line in Aksai Chin that China traditionally claimed. Facing Chinese forces maintaining themselves on Indian soil and trying to avoid political pressure, the Indian parliament announced a national emergency and passed a resolution stating its intent to "drive out the aggressors from the sacred soil of India". The United States and the United Kingdom supported India's response. The Soviet Union was preoccupied with the Cuban Missile Crisis and did not offer the support it had provided in previous years. With the backing of other great powers, a 14 November letter from Nehru to Zhou once again rejected his proposal.
Neither side declared war, used their air force, or fully broke off diplomatic relations, but the conflict is commonly referred to as a war. This war coincided with the Cuban Missile Crisis and was viewed by the western nations at the time as another act of aggression by the Communist bloc.
According to Calvin, the Chinese side evidently wanted a diplomatic resolution and discontinuation of the conflict.
After Zhou received Nehru's letter rejecting his proposal, fighting resumed in the eastern theatre on 14 November (Nehru's birthday) with an Indian attack on Walong, claimed by China, launched from the defensive position of Se La, which inflicted heavy casualties on the Chinese. The Chinese resumed military activity in Aksai Chin and NEFA hours after the Walong battle.
In the eastern theatre, the PLA attacked Indian forces near Se La and Bomdi La on 17 November. These positions were defended by the Indian 4th Infantry Division. Instead of attacking by road as expected, PLA forces approached via a mountain trail, and their attack cut off a main road and isolated 10,000 Indian troops.
Se La occupied high ground, and rather than assault this commanding position, the Chinese captured Thembang, which was a supply route to Se La.
In the western theatre, PLA forces launched a heavy infantry attack on 18 November near Chushul. Their attack started at 4:35 am; despite mist covering most of the region, at 5:45 am the Chinese troops advanced to attack two platoons of Indian troops at Gurung Hill.
The Indians did not know what was happening, as communications were dead. When a patrol was sent out, the Chinese attacked in greater numbers. Indian artillery could not hold off the superior Chinese forces. By 9:00 am, Chinese forces were attacking Gurung Hill directly, and Indian commanders withdrew from the area and from the connecting Spangur Gap.
The Chinese had simultaneously been attacking Rezang La, which was held by 123 Indian troops. At 5:05 am, Chinese troops boldly launched their attack. Chinese medium machine gun fire pierced the Indian tactical defences.
At 6:55 am, as the sun rose, the Chinese attack on the 8th platoon began in waves. Fighting continued for the next hour, until the Chinese signalled that they had destroyed the 7th platoon. The Indians tried to use their light machine guns against the Chinese medium machine guns, but after 10 minutes the battle was over. Logistical inadequacy once again hurt the Indian troops. The Chinese gave the fallen Indian troops a respectful military funeral. The battles also saw the death of Major Shaitan Singh of the Kumaon Regiment, who had been instrumental in the first battle of Rezang La. The surviving Indian troops were forced to withdraw to high mountain positions. Indian sources held that their troops were only just coming to grips with mountain combat and belatedly called for more troops. The Chinese then declared a ceasefire, ending the bloodshed.
Indian forces suffered heavy casualties, with the bodies of dead Indian troops being found in the ice, frozen with weapons in hand. Chinese forces also suffered heavy casualties, especially at Rezang La. This signalled the end of the war in Aksai Chin, as China had reached its claim line; many Indian troops were ordered to withdraw from the area. China claimed that the Indian troops had wanted to fight on until the bitter end. The war ended with their withdrawal, which limited the number of casualties.
The PLA penetrated close to the outskirts of Tezpur, Assam, a major frontier town nearly fifty kilometres (30 miles) from the Assam-North-East Frontier Agency border. The local government ordered the evacuation of the civilians in Tezpur to the south of the Brahmaputra River, all prisons were thrown open, and government officials who stayed behind destroyed Tezpur's currency reserves in anticipation of a Chinese advance.
China had reached its claim lines so the PLA did not advance farther, and on 19 November, it declared a unilateral cease-fire. Zhou Enlai declared a unilateral ceasefire to start on midnight, 21 November. Zhou's ceasefire declaration stated,
Zhou had first given the ceasefire announcement to the Indian chargé d'affaires on 19 November (before India's request for United States air support), but New Delhi did not receive it until 24 hours later. The American aircraft carrier that had been dispatched was ordered back after the ceasefire, and American intervention on India's side in the war was thus avoided. Retreating Indian troops, who had not come into contact with anyone who knew of the ceasefire, and Chinese troops in NEFA and Aksai Chin were involved in some minor battles, but for the most part the ceasefire signalled an end to the fighting. The United States Air Force flew in supplies to India in November 1962, but neither side wished to continue hostilities.
Toward the end of the war, India increased its support for Tibetan refugees and revolutionaries, some of whom had settled in India, since they were fighting a common enemy in the region. The Nehru administration ordered the raising of an elite Indian-trained "Tibetan Armed Force" composed of Tibetan refugees.
The Chinese military action has been viewed by the United States as part of the PRC's policy of using aggressive wars to settle its border disputes and to distract both its own population and international opinion from its internal issues. According to James Calvin from the United States Marine Corps, western nations at the time viewed China as an aggressor during the China–India border war and saw the war as part of a monolithic communist objective of establishing a world dictatorship of the proletariat. This view was reinforced by Mao Zedong's pronouncement that "The way to world conquest lies through Havana, Accra, and Calcutta". Calvin believes that Chinese actions show a "pattern of conservative aims and limited objectives, rather than expansionism", and blames this particular conflict on India's provocations towards China. Calvin also notes that China has historically been determined to gain control over regions to which it has a "traditional claim", which underlay the dispute over NEFA and Aksai Chin, and indeed Tibet. Calvin's assumption, based on the history of the Cold War and the domino effect, was that China might ultimately try to regain control of everything it considers "traditionally Chinese", which in its view includes the entirety of South East Asia.
The Kennedy administration was disturbed by what they considered "blatant Chinese communist aggression against India". In a May 1963 National Security Council meeting, contingency planning on the part of the United States in the event of another Chinese attack on India was discussed. Defense Secretary Robert McNamara and General Maxwell Taylor advised the president to use nuclear weapons should the Americans intervene in such a situation. McNamara stated "Before any substantial commitment to defend India against China is given, we should recognise that in order to carry out that commitment against any substantial Chinese attack, we would have to use nuclear weapons. Any large Chinese Communist attack on any part of that area would require the use of nuclear weapons by the U.S., and this is to be preferred over the introduction of large numbers of U.S. soldiers." After hearing this and listening to two other advisers, Kennedy stated "We should defend India, and therefore we will defend India." It remains unclear whether his aides were trying to dissuade the President from considering any measure with regard to India by immediately raising the stakes to an unacceptable level, and it is equally unclear whether Kennedy was thinking of conventional or nuclear means when he gave his reply. By 1964, China had developed its own nuclear weapon, which would likely have caused any American nuclear policy in defense of India to be reviewed. The Johnson Administration considered and then rejected giving nuclear weapons technology to the Indians. India developed its own nuclear weapon by 1974, within 10 years of the Chinese.
The United States was unequivocal in its recognition of the Indian boundary claims in the eastern sector, while not supporting the claims of either side in the western sector. Britain, on the other hand, agreed with the Indian position completely, with the foreign secretary stating, 'we have taken the view of the government of India on the present frontiers and the disputed territories belong to India.'
The non-aligned nations remained mostly uninvolved, and only the United Arab Republic openly supported India. Six of the non-aligned nations (Egypt, Burma, Cambodia, Sri Lanka, Ghana and Indonesia) met in Colombo on 10 December 1962. The resulting proposals stipulated a Chinese withdrawal of 20 km (12 miles) from the customary lines without any reciprocal withdrawal on India's behalf. The failure of these six nations to unequivocally condemn China deeply disappointed India.
In 1972, Chinese Premier Zhou explained the Chinese point of view to President Nixon of the US. As for the causes of the war, Zhou asserted that China did not try to expel Indian troops from south of the McMahon line and that three open warning telegrams were sent to Nehru before the war. Indian patrols south of the McMahon line were expelled and suffered casualties in the Chinese attack. Zhou also told Nixon that Chairman Mao ordered the troops to return to show good faith. The Indian government maintains that the Chinese military could not advance further south due to logistical problems and the cut-off of resource supplies.
While Western nations did not view Chinese actions favourably, because of fear of the Chinese and competitiveness, Pakistan, which had had a turbulent relationship with India ever since the partition of India, improved its relations with China after the war. Prior to the war, Pakistan also shared a disputed boundary with China and had proposed to India that the two countries adopt a common defence against "northern" enemies (i.e. China), which India rejected. China and Pakistan took steps to peacefully negotiate their shared boundaries, beginning on 13 October 1962 and concluding in December of that year. Pakistan also expressed fear that the huge amounts of Western military aid directed to India would allow it to threaten Pakistan's security in future conflicts. Mohammed Ali, External Affairs Minister of Pakistan, declared that massive Western aid to India in the Sino-Indian dispute would be considered an unfriendly act towards Pakistan. As a result, Pakistan made efforts to improve its relations with China. The following year, China and Pakistan peacefully settled disputes on their shared border and negotiated the China-Pakistan Border Treaty in 1963, as well as trade, commercial, and barter treaties. On 2 March 1963, Pakistan conceded its northern claim line in Pakistani-controlled Kashmir to China in favour of a more southerly boundary along the Karakoram Range. The border treaty largely set the border along the MacCartney-Macdonald Line. India's military failure against China emboldened Pakistan to initiate the Second Kashmir War with India, which effectively ended in a stalemate; Calvin states that the Sino-Indian War had caused the previously passive Indian government to take a stand on actively modernising India's military. China offered diplomatic support to Pakistan in that war but did not offer military support. In January 1966, China condemned the Tashkent Agreement between India and Pakistan as a Soviet-US plot in the region. In the Indo-Pakistani War of 1971, Pakistan expected China to provide military support, but it was left alone as India successfully helped the rebels in East Pakistan to found the new nation-state of Bangladesh.
During the conflict, Nehru wrote two letters to U.S. President John F. Kennedy, asking for 12 squadrons of fighter jets and a modern radar system. These jets were seen as necessary to beef up Indian air strength so that air-to-air combat could be initiated safely from the Indian perspective (bombing troops was seen as unwise for fear of Chinese retaliatory action). Nehru also asked that these aircraft be manned by American pilots until Indian airmen were trained to replace them. These requests were rejected by the Kennedy Administration (which was involved in the Cuban Missile Crisis during most of the Sino-Indian War). The U.S. nonetheless provided non-combat assistance to Indian forces and planned to send the carrier USS "Kitty Hawk" to the Bay of Bengal to support India in case of an air war.
As the Sino-Soviet split heated up, Moscow made a major effort to support India, especially with the sale of advanced MiG warplanes. The U.S. and Britain refused to sell these advanced weapons, so India turned to the USSR. India and the USSR reached an agreement in August 1962 (before the Cuban Missile Crisis) for the immediate purchase of twelve MiG-21s, as well as for Soviet technical assistance in the manufacture of these aircraft in India. According to P.R. Chari, "The intended Indian production of these relatively sophisticated aircraft could only have incensed Peking so soon after the withdrawal of Soviet technicians from China." In 1964, further Indian requests for American jets were rejected; however, Moscow offered loans, low prices and technical help in upgrading India's armaments industry, and by 1964 India was a major purchaser of Soviet arms. According to Indian diplomat G. Parthasarathy, "only after we got nothing from the US did arms supplies from the Soviet Union to India commence." India's favoured relationship with Moscow continued into the 1980s but ended after the collapse of Soviet Communism in 1991.
In 1962, President of Pakistan Ayub Khan made clear to India that Indian troops could safely be transferred from the Pakistan frontier to the Himalayas.
According to China's official military history, the war achieved China's policy objectives of securing its borders in the western sector, as China retained de facto control of Aksai Chin. After the war, India abandoned the Forward Policy, and the de facto borders stabilised along the Line of Actual Control.
According to James Calvin of the Marine Corps Command and Staff College, even though China won a military victory, it lost in terms of its international image. China's first nuclear weapon test in October 1964 and its support of Pakistan in the 1965 India-Pakistan War tended to confirm the American view of communist world objectives, including Chinese influence over Pakistan.
Lora Saalman opined, in a study of Chinese military publications, that while the war led to much blame and debate and ultimately spurred India's military modernisation, the war is now treated by Chinese analysts as basic reportage of facts, with relatively diminished interest.
The aftermath of the war saw sweeping changes in the Indian military to prepare it for similar conflicts in the future, and placed pressure on Indian prime minister Jawaharlal Nehru, who was seen as responsible for failing to anticipate the Chinese attack on India. Indians reacted with a surge in patriotism, and memorials were erected for many of the Indian troops who died in the war. Arguably, the main lessons India learned from the war were the need to strengthen its own defences and a shift away from Nehru's foreign policy with China, which had been based on his stated concept of "brotherhood". Because of India's inability to anticipate Chinese aggression, Prime Minister Nehru faced harsh criticism from government officials for having promoted pacifist relations with China. Indian President Radhakrishnan said that Nehru's government was naive and negligent about preparations, and Nehru admitted his failings. According to Inder Malhotra, a former editor of "The Times of India" and a commentator for "The Indian Express", Indian politicians invested more effort in removing Defence Minister Krishna Menon than in actually waging war. Krishna Menon's favoritism weakened the Indian Army, and national morale dimmed. The public saw the war as a political and military debacle. On American advice (from the American envoy John Kenneth Galbraith, who made and ran American policy on the war while all other top policy makers in the US were absorbed in the coincident Cuban Missile Crisis), India refrained from using the Indian air force to beat back the Chinese advances, although this was not necessarily the best choice available. The CIA later revealed that at that time the Chinese had neither the fuel nor runways long enough to use their air force effectively in Tibet. Indians in general became highly sceptical of China and its military. Many Indians viewed the war as a betrayal of India's attempts at establishing a long-standing peace with China and started to question the once popular slogan "Hindi-Chini bhai-bhai" (meaning "Indians and Chinese are brothers"). The war also put an end to Nehru's earlier hopes that India and China would form a strong Asian axis to counteract the increasing influence of the Cold War bloc superpowers.
The unpreparedness of the army was blamed on Defence Minister Menon, who resigned his government post to make way for someone who might modernise India's military further. India's policy of weaponisation via indigenous sources and self-sufficiency was thus cemented. Sensing a weakened army, Pakistan, a close ally of China, began a policy of provocation against India by infiltrating Jammu and Kashmir, ultimately triggering the Second Kashmir War with India in 1965 and the Indo-Pakistani War of 1971. The 1965 attack was successfully stopped and a ceasefire was negotiated under international pressure. In the Indo-Pakistani War of 1971, India won a clear victory, resulting in the liberation of Bangladesh (formerly East Pakistan).
As a result of the war, the Indian government commissioned an investigation, resulting in the classified Henderson Brooks–Bhagat Report on the causes of the war and the reasons for failure. India's performance in high-altitude combat in 1962 led to an overhaul of the Indian Army in terms of doctrine, training, organisation and equipment. Neville Maxwell claimed that India's role in international affairs was also greatly reduced after the war and that India's standing in the non-aligned movement suffered. The Indian government has attempted to keep the Henderson Brooks–Bhagat Report secret for decades, although portions of it have recently been leaked by Neville Maxwell.
According to James Calvin, an analyst from the U.S. Navy, India gained many benefits from the 1962 conflict. The war united the country as never before. India got 32,000 square miles (8.3 million hectares, 83,000 km2) of disputed territory, even if it felt that NEFA was hers all along. The new Indian republic had avoided international alignments; by asking for help during the war, India demonstrated its willingness to accept military aid from several sectors. Finally, India recognised the serious weaknesses in its army. It would more than double its military manpower in the next two years, and it would work hard to resolve the military's training and logistics problems, later becoming the second-largest army in the world. India's efforts to improve its military posture significantly enhanced its army's capabilities and preparedness.
Soon after the end of the war, the Indian government passed the Defence of India Act in December 1962, permitting the "apprehension and detention in custody of any person [suspected] of being of hostile origin." The broad language of the act allowed for the arrest of any person simply for having a Chinese surname, Chinese ancestry or a Chinese spouse. The Indian government incarcerated thousands of Chinese-Indians in an internment camp in Deoli, Rajasthan, where they were held for years without trial. The last internees were not released until 1967. Thousands more Chinese-Indians were forcibly deported or coerced to leave India. Nearly all internees had their properties sold off or looted. Even after their release, the Chinese Indians faced many restrictions in their freedom. They could not travel freely until the mid-1990s.
India also reported some military conflicts with China after the 1962 war. In late 1967, there were two incidents in which the two countries exchanged fire in Sikkim. The first was dubbed the "Nathu La incident", and the other the "Chola incident", in which advancing Chinese forces were forced to withdraw from Sikkim, then a protectorate of India and later a state of India after annexation in 1975. In the 1987 Sino-Indian skirmish, both sides showed military restraint and the confrontation was bloodless. In 2017, the two countries were once again involved in a military standoff, in which several troops were injured. In 2020, soldiers were killed in skirmishes for the first time since the war ended.
In 1993 and 1996, the two sides signed the Sino-Indian Bilateral Peace and Tranquility Accords, agreements to maintain peace and tranquility along the Line of Actual Control (LAC). Ten meetings of a Sino-Indian Joint Working Group (SIJWG) and five of an expert group have taken place to determine where the LAC lies, but little progress has occurred.
On 20 November 2006, Indian politicians from Arunachal Pradesh expressed their concern over Chinese military modernisation and appealed to parliament to take a harder stance on the PRC following a military buildup on the border similar to that in 1962. Additionally, China's military aid to Pakistan is also a matter of concern to the Indian public, as India and Pakistan have engaged in various wars.
On 6 July 2006, the historic Silk Road route through the Nathu La pass was reopened. Both sides have agreed to resolve the issues by peaceful means.
In October 2011, it was stated that India and China would formulate a border mechanism to handle differing perceptions of the LAC and resume bilateral army exercises between the Indian and Chinese armies from early 2012.
Simple module
In mathematics, specifically in ring theory, the simple modules over a ring "R" are the (left or right) modules over "R" that are non-zero and have no non-zero proper submodules. Equivalently, a module "M" is simple if and only if every cyclic submodule generated by a non-zero element of "M" equals "M". Simple modules form building blocks for the modules of finite length, and they are analogous to the simple groups in group theory.
In this article, all modules will be assumed to be right unital modules over a ring "R".
Z-modules are the same as abelian groups, so a simple Z-module is an abelian group which has no non-zero proper subgroups. These are the cyclic groups of prime order.
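As a worked check (a standard computation added here for illustration, not drawn from the source): the submodules of the Z-module Z/"n"Z correspond to the divisors of "n", so the module is simple exactly when "n" is prime. In LaTeX notation:
d \mid n \;\longleftrightarrow\; d\mathbb{Z}/n\mathbb{Z} \subseteq \mathbb{Z}/n\mathbb{Z}, \qquad n = p \text{ prime} \implies \text{the only submodules are } 0 \text{ and } \mathbb{Z}/p\mathbb{Z}.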
If "I" is a right ideal of "R", then "I" is simple as a right module if and only if "I" is a minimal non-zero right ideal: If "M" is a non-zero proper submodule of "I", then it is also a right ideal, so "I" is not minimal. Conversely, if "I" is not minimal, then there is a non-zero right ideal "J" properly contained in "I". "J" is a right submodule of "I", so "I" is not simple.
If "I" is a right ideal of "R", then the quotient module "R"/"I" is simple if and only if "I" is a maximal right ideal: If "M" is a non-zero proper submodule of "R"/"I", then the preimage of "M" under the quotient map is a right ideal which is not equal to "R" and which properly contains "I". Therefore, "I" is not maximal. Conversely, if "I" is not maximal, then there is a right ideal "J" properly containing "I". The quotient map has a non-zero kernel which is not equal to , and therefore is not simple.
Every simple "R"-module is isomorphic to a quotient "R"/"m" where "m" is a maximal right ideal of "R". By the above paragraph, any quotient "R"/"m" is a simple module. Conversely, suppose that "M" is a simple "R"-module. Then, for any non-zero element "x" of "M", the cyclic submodule "xR" must equal "M". Fix such an "x". The statement that "xR" = "M" is equivalent to the surjectivity of the homomorphism that sends "r" to "xr". The kernel of this homomorphism is a right ideal "I" of "R", and a standard theorem states that "M" is isomorphic to "R"/"I". By the above paragraph, we find that "I" is a maximal right ideal. Therefore, "M" is isomorphic to a quotient of "R" by a maximal right ideal.
If "k" is a field and "G" is a group, then a group representation of "G" is a left module over the group ring "k"["G]" (for details, see the main page on this relationship). The simple "k[G]" modules are also known as irreducible representations. A major aim of representation theory is to understand the irreducible representations of groups.
The simple modules are precisely the modules of length 1; this is a reformulation of the definition.
Every simple module is indecomposable, but the converse is in general not true.
Every simple module is cyclic; that is, it is generated by one element.
Not every module has a simple submodule; consider for instance the Z-module Z in light of the first example above.
Let "M" and "N" be (left or right) modules over the same ring, and let be a module homomorphism. If "M" is simple, then "f" is either the zero homomorphism or injective because the kernel of "f" is a submodule of "M". If "N" is simple, then "f" is either the zero homomorphism or surjective because the image of "f" is a submodule of "N". If "M" = "N", then "f" is an endomorphism of "M", and if "M" is simple, then the prior two statements imply that "f" is either the zero homomorphism or an isomorphism. Consequently, the endomorphism ring of any simple module is a division ring. This result is known as Schur's lemma.
The converse of Schur's lemma is not true in general. For example, the Z-module Q is not simple, but its endomorphism ring is isomorphic to the field Q.
If "M" is a module which has a non-zero proper submodule "N", then there is a short exact sequence
A common approach to proving a fact about "M" is to show that the fact is true for the center term of a short exact sequence when it is true for the left and right terms, so one proves the fact for "M" by proving it for "N" and "M"/"N". If "N" has a non-zero proper submodule, then this process can be repeated. This produces a chain of submodules "M" ⊃ "M"1 ⊃ "M"2 ⊃ ... .
In order to prove the fact this way, one needs conditions on this sequence and on the modules "M""i"/"M""i" + 1. One particularly useful condition is that the length of the sequence is finite and each quotient module "M""i"/"M""i" + 1 is simple. In this case the sequence is called a composition series for "M". In order to prove a statement inductively using composition series, the statement is first proved for simple modules, which form the base case of the induction, and then the statement is proved to remain true under an extension of a module by a simple module. For example, the Fitting lemma shows that the endomorphism ring of a finite length indecomposable module is a local ring, so that the strong Krull-Schmidt theorem holds and the category of finite length modules is a Krull-Schmidt category.
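For instance (a standard example, not taken from the source), the Z-module Z/12Z has the composition series
0 \subset 6\mathbb{Z}/12\mathbb{Z} \subset 2\mathbb{Z}/12\mathbb{Z} \subset \mathbb{Z}/12\mathbb{Z},
whose successive quotients are the simple modules Z/2, Z/3 and Z/2; the Jordan–Hölder theorem below guarantees that any other composition series of Z/12Z yields the same multiset of simple factors.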
The Jordan–Hölder theorem and the Schreier refinement theorem describe the relationships amongst all composition series of a single module. The Grothendieck group ignores the order in a composition series and views every finite length module as a formal sum of simple modules. Over semisimple rings, this is no loss as every module is a semisimple module and so a direct sum of simple modules. Ordinary character theory provides better arithmetic control, and uses simple C["G"]-modules to understand the structure of finite groups "G". Modular representation theory uses Brauer characters to view modules as formal sums of simple modules, but is also interested in how those simple modules are joined together within composition series. This is formalized by studying the Ext functor and describing the module category in various ways including quivers (whose nodes are the simple modules and whose edges are composition series of non-semisimple modules of length 2) and Auslander–Reiten theory where the associated graph has a vertex for every indecomposable module.
An important advance in the theory of simple modules was the Jacobson density theorem. The Jacobson density theorem states: Let "U" be a simple right "R"-module and write "D" = End("U") for its endomorphism ring, a division ring by Schur's lemma. Then for every "D"-linearly independent set of elements "x"1, ..., "x""n" of "U" and every choice of elements "y"1, ..., "y""n" of "U", there is an element "r" of "R" with "x""i""r" = "y""i" for all "i"; in other words, "R" acts densely on "U" as a "D"-space.
In particular, any primitive ring may be viewed as (that is, isomorphic to) a ring of "D"-linear operators on some "D"-space.
A consequence of the Jacobson density theorem is Wedderburn's theorem; namely that any right artinian simple ring is isomorphic to a full matrix ring of "n"-by-"n" matrices over a division ring for some "n". This can also be established as a corollary of the Artin–Wedderburn theorem.
Sonar
Sonar (sound navigation and ranging) is a technique that uses sound propagation (usually underwater, as in submarine navigation) to navigate, communicate with or detect objects on or under the surface of the water, such as other vessels. Two types of technology share the name "sonar": "passive" sonar is essentially listening for the sound made by vessels; "active" sonar is emitting pulses of sounds and listening for echoes. Sonar may be used as a means of acoustic location and of measurement of the echo characteristics of "targets" in the water. Acoustic location in air was used before the introduction of radar. Sonar may also be used for robot navigation, and SODAR (an upward-looking in-air sonar) is used for atmospheric investigations. The term "sonar" is also used for the equipment used to generate and receive the sound. The acoustic frequencies used in sonar systems vary from very low (infrasonic) to extremely high (ultrasonic). The study of underwater sound is known as underwater acoustics or hydroacoustics.
The first recorded use of the technique was by Leonardo da Vinci in 1490 who used a tube inserted into the water to detect vessels by ear. It was developed during World War I to counter the growing threat of submarine warfare, with an operational passive sonar system in use by 1918. Modern active sonar systems use an acoustic transducer to generate a sound wave which is reflected from target objects.
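A minimal sketch of the echo-ranging principle in Python (illustrative only; the nominal sound speed and the function name are assumptions, not details from the source — real systems correct the sound speed for temperature, salinity and depth):
def echo_range(round_trip_seconds, sound_speed_mps=1500.0):
    # The pulse travels out to the target and back, so halve the path.
    return sound_speed_mps * round_trip_seconds / 2.0
# Example: a 2-second echo delay implies a target roughly 1500 m away.
print(echo_range(2.0))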
Although some animals (dolphins, bats, some shrews, and others) have used sound for communication and object detection for millions of years, the use of sound by humans in the water was first recorded by Leonardo da Vinci in 1490: a tube inserted into the water was said to be used to detect vessels by placing an ear to the tube.
In the late 19th century an underwater bell was used as an ancillary to lighthouses or lightships to provide warning of hazards.
The use of sound to "echo-locate" underwater in the same way as bats use sound for aerial navigation seems to have been prompted by the "Titanic" disaster of 1912. The world's first patent for an underwater echo-ranging device was filed at the British Patent Office by the English meteorologist Lewis Fry Richardson a month after the sinking of the "Titanic", and the German physicist Alexander Behm obtained a patent for an echo sounder in 1913.
The Canadian engineer Reginald Fessenden, while working for the Submarine Signal Company in Boston, Massachusetts, built an experimental system beginning in 1912, a system later tested in Boston Harbor, and finally in 1914 from the U.S. Revenue Cutter "Miami" on the Grand Banks off Newfoundland. In that test, Fessenden demonstrated depth sounding, underwater communications (Morse code) and echo ranging (detecting an iceberg at range). The "Fessenden oscillator", operated at about 500 Hz frequency, was unable to determine the bearing of the iceberg due to the 3-metre wavelength and the small dimension of the transducer's radiating face (less than the wavelength in diameter). The ten Montreal-built British H-class submarines launched in 1915 were equipped with Fessenden oscillators.
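The quoted 3-metre wavelength follows from the standard relation between sound speed and frequency, assuming a nominal speed of sound in seawater of roughly 1500 m/s:
\lambda = \frac{c}{f} \approx \frac{1500\ \text{m/s}}{500\ \text{Hz}} = 3\ \text{m}.
A radiating face smaller than this wavelength cannot form a directional beam, which is why the oscillator could not determine bearing.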
During World War I the need to detect submarines prompted more research into the use of sound. The British made early use of underwater listening devices called hydrophones, while the French physicist Paul Langevin, working with a Russian immigrant electrical engineer Constantin Chilowsky, worked on the development of active sound devices for detecting submarines in 1915. Although piezoelectric and magnetostrictive transducers later superseded the electrostatic transducers they used, this work influenced future designs. Lightweight sound-sensitive plastic film and fibre optics have been used for hydrophones, while Terfenol-D and PMN (lead magnesium niobate) have been developed for projectors.
In 1916, under the British Board of Invention and Research, Canadian physicist Robert William Boyle took on the active sound detection project with A. B. Wood, producing a prototype for testing in mid-1917. This work, for the Anti-Submarine Division of the British Naval Staff, was undertaken in utmost secrecy, and used quartz piezoelectric crystals to produce the world's first practical underwater active sound detection apparatus. To maintain secrecy, no mention of sound experimentation or quartz was made – the word used to describe the early work ("supersonics") was changed to "ASD"ics, and the quartz material to "ASD"ivite: "ASD" for "Anti-Submarine Division", hence the British acronym "ASDIC". In 1939, in response to a question from the Oxford English Dictionary, the Admiralty made up the story that it stood for "Allied Submarine Detection Investigation Committee", and this is still widely believed, though no committee bearing this name has been found in the Admiralty archives.
By 1918, Britain and France had built prototype active systems. The British tested their ASDIC at sea in 1920 and started production in 1922. The 6th Destroyer Flotilla had ASDIC-equipped vessels in 1923. An anti-submarine school, HMS "Osprey", and a training flotilla of four vessels were established at Portland in 1924.
By the outbreak of World War II, the Royal Navy had five sets for different surface ship classes, and others for submarines, incorporated into a complete anti-submarine system. The effectiveness of early ASDIC was hampered by the use of the depth charge as an anti-submarine weapon. This required an attacking vessel to pass over a submerged contact before dropping charges over the stern, resulting in a loss of ASDIC contact in the moments leading up to attack. The hunter was effectively firing blind, during which time a submarine commander could take evasive action. This situation was remedied by using several ships cooperating and by the adoption of "ahead-throwing weapons", such as Hedgehogs and later Squids, which projected warheads at a target ahead of the attacker and still in ASDIC contact. Developments during the war resulted in British ASDIC sets that used several different shapes of beam, continuously covering blind spots. Later, acoustic torpedoes were used.
Early in World War II (September 1940), British ASDIC technology was transferred for free to the United States. Research on ASDIC and underwater sound was expanded in the UK and in the US. Many new types of military sound detection were developed. These included sonobuoys, first developed by the British in 1944 under the codename "High Tea", dipping/dunking sonar and mine-detection sonar. This work formed the basis for post-war developments related to countering the nuclear submarine.
During the 1930s American engineers developed their own underwater sound-detection technology, and important discoveries were made, such as the existence of thermoclines and their effects on sound waves. Americans began to use the term "SONAR" for their systems, coined by Frederick Hunt to be the equivalent of RADAR.
In 1917, the US Navy acquired J. Warren Horton's services for the first time. On leave from Bell Labs, he served the government as a technical expert, first at the experimental station at Nahant, Massachusetts, and later at US Naval Headquarters in London, England. At Nahant he applied the newly developed vacuum tube, then associated with the formative stages of the field of applied science now known as electronics, to the detection of underwater signals. As a result, the carbon button microphone, which had been used in earlier detection equipment, was replaced by the precursor of the modern hydrophone. Also during this period, he experimented with methods for towing detection, made possible by the increased sensitivity of his device. The principles are still used in modern towed sonar systems.
To meet the defense needs of Great Britain, he was sent to England to install in the Irish Sea bottom-mounted hydrophones connected to a shore listening post by submarine cable. While this equipment was being loaded on the cable-laying vessel, World War I ended and Horton returned home.
During World War II, he continued to develop sonar systems that could detect submarines, mines, and torpedoes. He published "Fundamentals of Sonar" in 1957 as chief research consultant at the US Navy Underwater Sound Laboratory. He held this position until 1959 when he became technical director, a position he held until mandatory retirement in 1963.
There was little progress in US sonar from 1915 to 1940. In 1940, US sonars typically consisted of a magnetostrictive transducer and an array of nickel tubes connected to a 1-foot-diameter steel plate attached back-to-back to a Rochelle salt crystal in a spherical housing. This assembly penetrated the ship hull and was manually rotated to the desired angle. The piezoelectric Rochelle salt crystal had better parameters, but the magnetostrictive unit was much more reliable. High losses to US merchant supply shipping early in World War II led to large-scale, high-priority US research in the field, pursuing both improvements in magnetostrictive transducer parameters and Rochelle salt reliability. Ammonium dihydrogen phosphate (ADP), a superior alternative, was found as a replacement for Rochelle salt; the first application was a replacement of the 24 kHz Rochelle-salt transducers. Within nine months, Rochelle salt was obsolete. The ADP manufacturing facility grew from a few dozen personnel in early 1940 to several thousand in 1942.
One of the earliest applications of ADP crystals was hydrophones for acoustic mines; the crystals were specified for a low-frequency cutoff at 5 Hz, the ability to withstand the mechanical shock of deployment from aircraft, and the ability to survive neighbouring mine explosions. One of the key features of ADP reliability is its zero aging characteristics; the crystal keeps its parameters even over prolonged storage.
Another application was for acoustic homing torpedoes. Two pairs of directional hydrophones were mounted on the torpedo nose, in the horizontal and vertical plane; the difference signals from the pairs were used to steer the torpedo left-right and up-down. A countermeasure was developed: the targeted submarine discharged an effervescent chemical, and the torpedo went after the noisier fizzy decoy. The counter-countermeasure was a torpedo with active sonar – a transducer was added to the torpedo nose, and the microphones listened for its reflected periodic tone bursts. The transducers comprised identical rectangular crystal plates arranged in diamond-shaped areas in staggered rows.
Passive sonar arrays for submarines were developed from ADP crystals. Several crystal assemblies were arranged in a steel tube, vacuum-filled with castor oil, and sealed. The tubes then were mounted in parallel arrays.
The standard US Navy scanning sonar at the end of World War II operated at 18 kHz, using an array of ADP crystals. Desired longer range, however, required use of lower frequencies. The required dimensions were too big for ADP crystals, so in the early 1950s magnetostrictive and barium titanate piezoelectric systems were developed, but these had problems achieving uniform impedance characteristics, and the beam pattern suffered. Barium titanate was then replaced with more stable lead zirconate titanate (PZT), and the frequency was lowered to 5 kHz. The US fleet used this material in the AN/SQS-23 sonar for several decades. The SQS-23 sonar first used magnetostrictive nickel transducers, but these weighed several tons, and nickel was expensive and considered a critical material; piezoelectric transducers were therefore substituted. The sonar was a large array of 432 individual transducers. At first, the transducers were unreliable, showing mechanical and electrical failures and deteriorating soon after installation; they were also produced by several vendors, had different designs, and their characteristics were different enough to impair the array's performance. The policy of allowing repair of individual transducers was then abandoned, and "expendable modular design" – sealed, non-repairable modules – was chosen instead, eliminating the problem with seals and other extraneous mechanical parts.
The Imperial Japanese Navy at the onset of World War II used projectors based on quartz. These were big and heavy, especially if designed for lower frequencies; the one for the Type 91 set, operating at 9 kHz, had a diameter of 30 inches and was driven by an oscillator with 5 kW power and 7 kV of output amplitude. The Type 93 projectors consisted of solid sandwiches of quartz, assembled into spherical cast iron bodies. The Type 93 sonars were later replaced with Type 3, which followed German design and used magnetostrictive projectors; the projectors consisted of two identical, independent rectangular units in a rectangular cast iron body. The exposed area was half the wavelength wide and three wavelengths high. The magnetostrictive cores were made from 4 mm stampings of nickel, and later of an iron-aluminium alloy with aluminium content between 12.7% and 12.9%. Power was provided at 2 kW and 3.8 kV, with polarization from a 20 V, 8 A DC source.
The passive hydrophones of the Imperial Japanese Navy were based on moving-coil design, Rochelle salt piezo transducers, and carbon microphones.
Magnetostrictive transducers were pursued after World War II as an alternative to piezoelectric ones. Nickel scroll-wound ring transducers were used for high-power low-frequency operations, and were probably the largest individual sonar transducers ever. The advantage of metals is their high tensile strength and low input electrical impedance, but they have electrical losses and a lower coupling coefficient than PZT, whose tensile strength can be increased by prestressing. Other materials were also tried: nonmetallic ferrites were promising for their low electrical conductivity, resulting in low eddy current losses, and Metglas offered a high coupling coefficient, but both were inferior to PZT overall. In the 1970s, compounds of rare earths and iron were discovered with superior magnetomechanical properties, namely the Terfenol-D alloy. This made possible new designs, e.g. a hybrid magnetostrictive-piezoelectric transducer. The most recent of these improved magnetostrictive materials is Galfenol.
Other types of transducers include variable-reluctance (or moving-armature, or electromagnetic) transducers, where magnetic force acts on the surfaces of gaps, and moving coil (or electrodynamic) transducers, similar to conventional speakers; the latter are used in underwater sound calibration, due to their very low resonance frequencies and flat broadband characteristics above them.
Active sonar uses a sound transmitter (or projector) and a receiver. When the two are in the same place it is monostatic operation. When the transmitter and receiver are separated it is bistatic operation. When more transmitters (or more receivers) are used, again spatially separated, it is multistatic operation. Most sonars are used monostatically with the same array often being used for transmission and reception. Active sonobuoy fields may be operated multistatically.
Active sonar creates a pulse of sound, often called a "ping", and then listens for reflections (echo) of the pulse. This pulse of sound is generally created electronically using a sonar projector consisting of a signal generator, power amplifier and electro-acoustic transducer/array. A transducer is a device that can transmit and receive acoustic signals ("pings"). A beamformer is usually employed to concentrate the acoustic power into a beam, which may be swept to cover the required search angles. Generally, the electro-acoustic transducers are of the Tonpilz type and their design may be optimised to achieve maximum efficiency over the widest bandwidth, in order to optimise performance of the overall system. Occasionally, the acoustic pulse may be created by other means, e.g. chemically using explosives, airguns or plasma sound sources.
To measure the distance to an object, the time from transmission of a pulse to reception is measured and converted into a range using the known speed of sound. To measure the bearing, several hydrophones are used, and the set measures the relative arrival time to each, or an array of hydrophones is used, measuring the relative amplitude in beams formed through a process called beamforming. Use of an array narrows the spatial response, so to provide wide cover multibeam systems are used. The target signal (if present) together with noise is then passed through various forms of signal processing, which for simple sonars may be just energy measurement. It is then presented to some form of decision device that calls the output either the required signal or noise. This decision device may be an operator with headphones or a display, or in more sophisticated sonars this function may be carried out by software. Further processes may be carried out to classify the target and localise it, as well as measuring its velocity.
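As an illustration of bearing estimation from relative arrival times, the following is a minimal Python sketch for a single pair of hydrophones under a plane-wave assumption; the names and values are purely illustrative, not any particular sonar's implementation.

```python
import math

SPEED_OF_SOUND = 1500.0  # m/s, a typical average for sea water

def bearing_from_delay(delay_s: float, spacing_m: float) -> float:
    """Bearing (degrees from broadside) of a plane wave arriving at two
    hydrophones spaced spacing_m apart, from the difference in arrival
    time. Assumes |SPEED_OF_SOUND * delay_s| <= spacing_m."""
    return math.degrees(math.asin(SPEED_OF_SOUND * delay_s / spacing_m))

# A 0.1 ms relative delay across a 0.5 m pair puts the arrival about
# 17.5 degrees off broadside.
print(bearing_from_delay(1e-4, 0.5))  # ~17.46
```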
The pulse may be at constant frequency or a chirp of changing frequency (to allow pulse compression on reception). Simple sonars generally use the former with a filter wide enough to cover possible Doppler changes due to target movement, while more complex ones generally include the latter technique. Since digital processing became available pulse compression has usually been implemented using digital correlation techniques. Military sonars often have multiple beams to provide all-round cover while simple ones only cover a narrow arc, although the beam may be rotated, relatively slowly, by mechanical scanning.
Particularly when single frequency transmissions are used, the Doppler effect can be used to measure the radial speed of a target. The difference in frequency between the transmitted and received signal is measured and converted into a velocity. Since Doppler shifts can be introduced by either receiver or target motion, allowance has to be made for the radial speed of the searching platform.
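A minimal sketch of this conversion for a monostatic active sonar follows; the factor of two reflects the two-way Doppler shift, and own-ship motion is assumed to be already compensated. The numbers are hypothetical.

```python
SPEED_OF_SOUND = 1500.0  # m/s, a typical average for sea water

def radial_speed(f_transmit_hz: float, f_receive_hz: float) -> float:
    """Radial speed of a target from the Doppler shift of a monostatic
    active sonar: the echo is shifted by roughly 2*v/c, so
    v = c * (f_rx - f_tx) / (2 * f_tx). Positive means closing."""
    return SPEED_OF_SOUND * (f_receive_hz - f_transmit_hz) / (2.0 * f_transmit_hz)

# A 10 kHz ping returned at 10,050 Hz implies ~3.75 m/s closing speed.
print(radial_speed(10_000.0, 10_050.0))  # 3.75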
One useful small sonar is similar in appearance to a waterproof flashlight. The head is pointed into the water, a button is pressed, and the device displays the distance to the target. Another variant is a "fishfinder" that shows a small display with shoals of fish. Some civilian sonars (which are not designed for stealth) approach active military sonars in capability, with three-dimensional displays of the area near the boat.
When active sonar is used to measure the distance from the transducer to the bottom, it is known as echo sounding. Similar methods may be used looking upward for wave measurement.
Active sonar is also used to measure distance through water between two sonar transducers or a combination of a hydrophone (underwater acoustic microphone) and projector (underwater acoustic speaker). When a hydrophone/transducer receives a specific interrogation signal it responds by transmitting a specific reply signal. To measure distance, one transducer/projector transmits an interrogation signal and measures the time between this transmission and the receipt of the other transducer/hydrophone reply. The time difference, scaled by the speed of sound through water and divided by two, is the distance between the two platforms. This technique, when used with multiple transducers/hydrophones/projectors, can calculate the relative positions of static and moving objects in water.
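As a rough illustration, here is a minimal Python sketch of this two-way interrogation/reply ranging; the 0.1 s turnaround delay is a hypothetical value for a transponder's internal reply latency.

```python
SPEED_OF_SOUND = 1500.0  # m/s, a typical average for sea water

def transponder_range(elapsed_s: float, turnaround_s: float = 0.0) -> float:
    """One-way distance from an interrogation/reply exchange.
    elapsed_s is the time from sending the interrogation to receiving
    the reply; turnaround_s is the transponder's (assumed known)
    internal reply delay."""
    return SPEED_OF_SOUND * (elapsed_s - turnaround_s) / 2.0

# 2.1 s elapsed with a 0.1 s turnaround gives 1500 m between platforms.
print(transponder_range(2.1, 0.1))  # 1500.0
```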
In combat situations, an active pulse can be detected by an enemy and will reveal a submarine's position at twice the maximum distance that the submarine can itself detect a contact, and give clues as to the submarine's identity based on the characteristics of the outgoing ping. For these reasons, active sonar is not frequently used by military submarines.
A very directional, but low-efficiency, type of sonar (used by fisheries, the military, and for port security) makes use of the complex nonlinear propagation properties of water ("non-linear sonar"); the virtual transducer so formed is known as a "parametric array".
Project Artemis was an experimental research and development project in the late 1950s to mid 1960s to examine acoustic propagation and signal processing for a low-frequency active sonar system that might be used for ocean surveillance. A secondary objective was examination of engineering problems of fixed active bottom systems. The receiving array was located on the slope of Plantagenet Bank off Bermuda. The active source array was deployed from the converted World War II tanker USNS "Mission Capistrano". Elements of Artemis were used experimentally after the main experiment was terminated.
A transponder is an active sonar device that receives a specific stimulus and immediately (or with a delay) retransmits the received signal or a predetermined one. Transponders can be used to remotely activate or recover subsea equipment.
A sonar target is small relative to the sphere, centred around the emitter, on which it is located. Therefore, the power of the reflected signal is very low, several orders of magnitude less than the original signal. Even if the reflected signal was of the same power, the following example (using hypothetical values) shows the problem: Suppose a sonar system is capable of emitting a 10,000 W/m2 signal at 1 m, and detecting a 0.001 W/m2 signal. At 100 m the signal will be 1 W/m2 (due to the inverse-square law). If the entire signal is reflected from a 10 m2 target, it will be at 0.001 W/m2 when it reaches the emitter, i.e. just detectable. However, the original signal will remain above 0.001 W/m2 until 3000 m. Any 10 m2 target between 100 and 3000 m using a similar or better system would be able to detect the pulse, but would not be detected by the emitter. The detectors must be very sensitive to pick up the echoes. Since the original signal is much more powerful, it can be detected many times further than twice the range of the sonar (as in the example).
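The arithmetic of this example can be checked with a short sketch. The relations below simply restate the article's idealized assumptions (full reflection, inverse-square spreading, intensities referred to 1 m) rather than a realistic sonar model.

```python
# Reproduce the article's hypothetical numbers.
SOURCE = 10_000.0   # W/m2 at 1 m reference distance
TARGET_AREA = 10.0  # m2
THRESHOLD = 0.001   # W/m2, minimum detectable intensity

def intensity(r_m: float) -> float:
    """Direct-path intensity at range r, by the inverse-square law."""
    return SOURCE / r_m**2

def echo_at_source(r_m: float) -> float:
    """Echo intensity back at the emitter: the intensity at the target,
    captured over its area, spread back over the return path."""
    return intensity(r_m) * TARGET_AREA / r_m**2

print(intensity(100.0))              # 1.0 W/m2 at the target
print(echo_at_source(100.0))         # 0.001 W/m2 -- just detectable
print(intensity(3162.0) >= THRESHOLD)  # True: the ping itself carries to ~3000 m
```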
Active sonar has two performance limitations: noise and reverberation. In general, one or the other of these will dominate, so the two effects can initially be considered separately.
In noise-limited conditions at initial detection: SL - 2PL + TS - (NL - AG) = DT,
where SL is the source level, PL is the propagation loss (sometimes referred to as transmission loss), TS is the target strength, NL is the noise level, AG is the array gain of the receiving array (sometimes approximated by its directivity index) and DT is the detection threshold.
In reverberation-limited conditions at initial detection (neglecting array gain): SL - 2PL + TS = RL + DT,
where RL is the reverberation level, and the other factors are as before.
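A minimal sketch evaluating these relations in Python follows; the equation forms are as given above, and the dB values in the example are hypothetical, purely for illustration.

```python
def signal_excess_noise(sl, pl, ts, nl, ag, dt):
    """Noise-limited case: detection is expected when
    SL - 2PL + TS - (NL - AG) >= DT (all terms in dB)."""
    return sl - 2 * pl + ts - (nl - ag) - dt

def signal_excess_reverb(sl, pl, ts, rl, dt):
    """Reverberation-limited case (neglecting array gain):
    SL - 2PL + TS >= RL + DT."""
    return sl - 2 * pl + ts - rl - dt

# Hypothetical dB values: a positive signal excess means detection.
print(signal_excess_noise(220, 80, 15, 65, 20, 10))  # 20 -> detect
print(signal_excess_reverb(220, 80, 15, 60, 10))     # 5 -> detect
```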
Passive sonar listens without transmitting. It is often employed in military settings, although it is also used in science applications, "e.g.", detecting fish for presence/absence studies in various aquatic environments – see also passive acoustics and passive radar. In the very broadest usage, this term can encompass virtually any analytical technique involving remotely generated sound, though it is usually restricted to techniques applied in an aquatic environment.
Passive sonar has a wide variety of techniques for identifying the source of a detected sound. For example, U.S. vessels usually operate 60 Hz alternating current power systems. If transformers or generators are mounted without proper vibration insulation from the hull or become flooded, the 60 Hz sound from the windings can be emitted from the submarine or ship. This can help to identify its nationality, as all European submarines and nearly every other nation's submarines have 50 Hz power systems. Intermittent sound sources (such as a wrench being dropped), called "transients", may also be detectable to passive sonar. Until fairly recently, an experienced, trained operator identified signals, but now computers may do this.
Passive sonar systems may have large sonic databases, but the sonar operator usually finally classifies the signals manually. A computer system frequently uses these databases to identify classes of ships, actions (i.e. the speed of a ship, or the type of weapon released), and even particular ships.
Passive sonar on vehicles is usually severely limited because of noise generated by the vehicle. For this reason, many submarines operate nuclear reactors that can be cooled without pumps, using silent convection, or fuel cells or batteries, which can also run silently. Vehicles' propellers are also designed and precisely machined to emit minimal noise. High-speed propellers often create tiny bubbles in the water, and this cavitation has a distinct sound.
The sonar hydrophones may be towed behind the ship or submarine in order to reduce the effect of noise generated by the watercraft itself. Towed units also combat the thermocline, as the unit may be towed above or below the thermocline.
The display of most passive sonars used to be a two-dimensional waterfall display. The horizontal direction of the display is bearing. The vertical is frequency, or sometimes time. Another display technique is to color-code frequency-time information for bearing. More recent displays are generated by computers, and mimic radar-type plan position indicator displays.
Unlike active sonar, only one-way propagation is involved. Because of the different signal processing used, the minimum detectable signal-to-noise ratio will be different. The equation for determining the performance of a passive sonar is SL - PL = NL - AG + DT,
where SL is the source level, PL is the propagation loss, NL is the noise level, AG is the array gain and DT is the detection threshold. The figure of merit of a passive sonar is FOM = SL - (NL - AG) - DT, the maximum one-way propagation loss at which detection can still be expected.
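A minimal sketch of this passive-sonar bookkeeping, using the equation forms given above; the dB values are hypothetical.

```python
def passive_signal_excess(sl, pl, nl, ag, dt):
    """Passive sonar equation: detection expected when
    SL - PL - (NL - AG) >= DT (one-way propagation, all in dB)."""
    return sl - pl - (nl - ag) - dt

def figure_of_merit(sl, nl, ag, dt):
    """FOM = SL - (NL - AG) - DT: the largest one-way propagation
    loss at which detection is still expected."""
    return sl - (nl - ag) - dt

# Hypothetical dB values: a 140 dB source, 60 dB noise, 15 dB array
# gain and a 10 dB threshold allow up to 85 dB of propagation loss.
print(figure_of_merit(140, 60, 15, 10))            # 85
print(passive_signal_excess(140, 80, 60, 15, 10))  # 5 -> detect
```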
The detection, classification and localisation performance of a sonar depends on the environment and the receiving equipment, as well as the transmitting equipment in an active sonar or the target radiated noise in a passive sonar.
Sonar operation is affected by variations in sound speed, particularly in the vertical plane. Sound travels more slowly in fresh water than in sea water, though the difference is small. The speed is determined by the water's bulk modulus and mass density. The bulk modulus is affected by temperature, dissolved impurities (usually salinity), and pressure. The density effect is small. The speed of sound (in feet per second) is approximately: 4388 + (11.25 × temperature in °F) + (0.0182 × depth in feet) + salinity (in parts per thousand).
This empirically derived approximation equation is reasonably accurate for normal temperatures, concentrations of salinity and the range of most ocean depths. Ocean temperature varies with depth, but at between 30 and 100 meters there is often a marked change, called the thermocline, dividing the warmer surface water from the cold, still waters that make up the rest of the ocean. This can frustrate sonar, because a sound originating on one side of the thermocline tends to be bent, or refracted, through the thermocline. The thermocline may be present in shallower coastal waters. However, wave action will often mix the water column and eliminate the thermocline. Water pressure also affects sound propagation: higher pressure increases the sound speed, which causes the sound waves to refract away from the area of higher sound speed. The mathematical model of refraction is called Snell's law.
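A minimal Python sketch of the approximation above; the coefficients are as reconstructed in the formula given earlier and should be treated as illustrative, valid only within the stated normal ranges.

```python
def sound_speed_fps(temp_f: float, depth_ft: float, salinity_ppt: float) -> float:
    """Empirical approximation to the speed of sound in sea water, in
    feet per second (coefficients as quoted above; valid only for
    normal ocean temperatures, salinities and depths)."""
    return 4388.0 + 11.25 * temp_f + 0.0182 * depth_ft + salinity_ppt

# 50 F surface water at 35 ppt salinity: ~4985.5 ft/s (~1519 m/s).
print(sound_speed_fps(50.0, 0.0, 35.0))
```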
If the sound source is deep and the conditions are right, propagation may occur in the 'deep sound channel'. This provides extremely low propagation loss to a receiver in the channel. This is because of sound trapping in the channel with no losses at the boundaries. Similar propagation can occur in the 'surface duct' under suitable conditions. However, in this case there are reflection losses at the surface.
In shallow water propagation is generally by repeated reflection at the surface and bottom, where considerable losses can occur.
Sound propagation is affected by absorption in the water itself as well as at the surface and bottom. This absorption depends upon frequency, with several different mechanisms in sea water. Long-range sonar uses low frequencies to minimise absorption effects.
The sea contains many sources of noise that interfere with the desired target echo or signature. The main noise sources are waves and shipping. The motion of the receiver through the water can also cause speed-dependent low frequency noise.
When active sonar is used, scattering occurs from small objects in the sea as well as from the bottom and surface. This can be a major source of interference. This acoustic scattering is analogous to the scattering of the light from a car's headlights in fog: a high-intensity pencil beam will penetrate the fog to some extent, but broader-beam headlights emit much light in unwanted directions, much of which is scattered back to the observer, overwhelming that reflected from the target ("white-out"). For analogous reasons active sonar needs to transmit in a narrow beam to minimize scattering.
The scattering of sonar from objects (mines, pipelines, zooplankton, geological features, fish etc.) is how active sonar detects them, but this ability can be masked by strong scattering from false targets, or 'clutter'. Where they occur (under breaking waves; in ship wakes; in gas emitted from seabed seeps and leaks etc.), gas bubbles are powerful sources of clutter, and can readily hide targets. TWIPS (Twin Inverted Pulse Sonar) is currently the only sonar that can overcome this clutter problem. This is important as many recent conflicts have occurred in coastal waters, and the inability to detect whether mines are present or not presents hazards and delays to military vessels, and also to aid convoys and merchant shipping trying to support the region long after the conflict has ceased.
The sound "reflection" characteristics of the target of an active sonar, such as a submarine, are known as its target strength. A complication is that echoes are also obtained from other objects in the sea such as whales, wakes, schools of fish and rocks.
Passive sonar detects the target's "radiated" noise characteristics. The radiated spectrum comprises a continuous spectrum of noise with peaks at certain frequencies which can be used for classification.
"Active" (powered) countermeasures may be launched by a submarine under attack to raise the noise level, provide a large false target, and obscure the signature of the submarine itself.
"Passive" (i.e., non-powered) countermeasures include:
Modern naval warfare makes extensive use of both passive and active sonar from water-borne vessels, aircraft and fixed installations. Although active sonar was used by surface craft in World War II, submarines avoided the use of active sonar due to the potential for revealing their presence and position to enemy forces. However, the advent of modern signal-processing enabled the use of passive sonar as a primary means for search and detection operations. In 1987 a division of Japanese company Toshiba reportedly sold machinery to the Soviet Union that allowed their submarine propeller blades to be milled so that they became radically quieter, making the newer generation of submarines more difficult to detect.
The use of active sonar by a submarine to determine bearing is extremely rare and will not necessarily give high quality bearing or range information to the submarine's fire control team. However, the use of active sonar on surface ships is very common, and it is used by submarines when the tactical situation dictates that it is more important to determine the position of a hostile submarine than to conceal their own position. With surface ships, it might be assumed that the threat is already tracking the ship with satellite data, as any vessel around the emitting sonar will detect the emission. Having heard the signal, it is easy to identify the sonar equipment used (usually with its frequency) and its position (with the sound wave's energy). Active sonar is similar to radar in that, while it allows detection of targets at a certain range, it also enables the emitter to be detected at a far greater range, which is undesirable.
Since active sonar reveals the presence and position of the operator, and does not allow exact classification of targets, it is used by fast (planes, helicopters) and by noisy platforms (most surface ships) but rarely by submarines. When active sonar is used by surface ships or submarines, it is typically activated very briefly at intermittent periods to minimize the risk of detection. Consequently, active sonar is normally considered a backup to passive sonar. In aircraft, active sonar is used in the form of disposable sonobuoys that are dropped in the aircraft's patrol area or in the vicinity of possible enemy sonar contacts.
Passive sonar has several advantages, most importantly that it is silent. If the target radiated noise level is high enough, it can have a greater range than active sonar, and allows the target to be identified. Since any motorized object makes some noise, it may in principle be detected, depending on the level of noise emitted and the ambient noise level in the area, as well as the technology used. To simplify, passive sonar "sees" around the ship using it. On a submarine, nose-mounted passive sonar detects in directions of about 270°, centered on the ship's alignment, the hull-mounted array in about 160° on each side, and the towed array over a full 360°. The invisible areas are due to the ship's own interference. Once a signal is detected in a certain direction (which means that something makes sound in that direction; this is called broadband detection) it is possible to zoom in and analyze the signal received (narrowband analysis). This is generally done using a Fourier transform to show the different frequencies making up the sound. Since every engine makes a specific sound, it is straightforward to identify the object. Databases of unique engine sounds are part of what is known as "acoustic intelligence" or ACINT.
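A toy illustration of narrowband analysis, assuming NumPy is available: a synthetic 60 Hz machinery line buried in broadband noise is recovered as the largest FFT peak. The signal parameters are invented for the example.

```python
import numpy as np

# Synthetic broadband noise plus a 60 Hz tonal (e.g. power-plant hum),
# sampled at 1 kHz for 10 seconds.
fs = 1000.0
t = np.arange(0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 60.0 * t) + 2.0 * rng.standard_normal(t.size)

# Narrowband analysis: the spectrum concentrates the tonal into one bin.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])  # ~60.0 Hz (skipping the DC bin)
```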
Another use of passive sonar is to determine the target's trajectory. This process is called target motion analysis (TMA), and the resultant "solution" is the target's range, course, and speed. TMA is done by marking from which direction the sound comes at different times, and comparing the motion with that of the operator's own ship. Changes in relative motion are analyzed using standard geometrical techniques along with some assumptions about limiting cases.
Passive sonar is stealthy and very useful. However, it requires high-tech electronic components and is costly. It is generally deployed on expensive ships in the form of arrays to enhance detection. Surface ships use it to good effect; it is used even better by submarines, and it is also used by airplanes and helicopters, mostly for a "surprise effect", since submarines can hide under thermal layers. If a submarine's commander believes he is alone, he may bring his boat closer to the surface and be easier to detect, or go deeper and faster, and thus make more sound.
Examples of sonar applications in military use are given below. Many of the civil uses given in the following section may also be applicable to naval use.
Until recently, ship sonars were usually hull-mounted arrays, either amidships or at the bow. It was soon found after their initial use that a means of reducing flow noise was required. The first domes were made of canvas on a framework; then steel ones were used. Now domes are usually made of reinforced plastic or pressurized rubber. Such sonars are primarily active in operation. An example of a conventional hull-mounted sonar is the SQS-56.
Because of the problems of ship noise, towed sonars are also used. These also have the advantage of being able to be placed deeper in the water. However, there are limitations on their use in shallow water. These are called towed arrays (linear) or variable depth sonars (VDS) with 2/3D arrays. A problem is that the winches required to deploy/recover these are large and expensive. VDS sets are primarily active in operation while towed arrays are passive.
An example of a modern active-passive ship towed sonar is Sonar 2087 made by Thales Underwater Systems.
Modern torpedoes are generally fitted with an active/passive sonar. This may be used to home directly on the target, but wake homing torpedoes are also used. An early example of an acoustic homer was the Mark 37 torpedo.
Torpedo countermeasures can be towed or free. An early example was the German "Sieglinde" device while the "Bold" was a chemical device. A widely used US device was the towed AN/SLQ-25 Nixie while the mobile submarine simulator (MOSS) was a free device. A modern alternative to the Nixie system is the UK Royal Navy S2170 Surface Ship Torpedo Defence system.
Mines may be fitted with a sonar to detect, localize and recognize the required target. An example is the CAPTOR mine.
Mine countermeasure (MCM) sonar, sometimes called "mine and obstacle avoidance sonar (MOAS)", is a specialized type of sonar used for detecting small objects. Most MCM sonars are hull mounted but a few types are VDS design. An example of a hull mounted MCM sonar is the Type 2193 while the SQQ-32 mine-hunting sonar and Type 2093 systems are VDS designs.
Submarines rely on sonar to a greater extent than surface ships as they cannot use radar at depth. The sonar arrays may be hull mounted or towed.
Helicopters can be used for antisubmarine warfare by deploying fields of active-passive sonobuoys or can operate dipping sonar, such as the AQS-13. Fixed wing aircraft can also deploy sonobuoys and have greater endurance and capacity to deploy them. Processing from the sonobuoys or dipping sonar can be on the aircraft or on ship. Dipping sonar has the advantage of being deployable to depths appropriate to daily conditions. Helicopters have also been used for mine countermeasure missions using towed sonars such as the AQS-20A.
Dedicated sonars can be fitted to ships and submarines for underwater communication.
The United States began a system of passive, fixed ocean surveillance systems in 1950 with the classified name Sound Surveillance System (SOSUS), with American Telephone and Telegraph Company (AT&T), with its Bell Laboratories research and Western Electric manufacturing entities, being contracted for development and installation. The systems exploited the deep sound (SOFAR) channel and were based on an AT&T sound spectrograph, which converted sound into a visual spectrogram representing a time–frequency analysis of sound that was developed for speech analysis and modified to analyze low-frequency underwater sounds. That process was Low Frequency Analysis and Recording, and the equipment was termed the Low Frequency Analyzer and Recorder, both with the acronym LOFAR. LOFAR research was termed "Jezebel" and led to usage in air and surface systems, particularly sonobuoys using the process and sometimes using "Jezebel" in their name. The proposed system offered such promise of long-range submarine detection that the Navy ordered immediate moves for implementation.
Following the installation of a test array and then a full-scale, forty-element prototype operational array in 1951, systems were installed in the Atlantic and then the Pacific through 1958 under the unclassified name "Project Caesar". The original systems were terminated at classified shore stations designated Naval Facility (NAVFAC), explained as engaging in "ocean research" to cover their classified mission. The system was upgraded multiple times with more advanced cable, allowing the arrays to be installed in ocean basins, and with upgraded processing. The shore stations were eliminated in a process of consolidation and rerouting of the arrays to central processing centers into the 1990s. In 1985, with new mobile arrays and other systems becoming operational, the collective system name was changed to Integrated Undersea Surveillance System (IUSS). In 1991, the mission of the system was declassified; the year before, IUSS insignia had been authorized for wear. Access was granted to some systems for scientific research.
A similar system is believed to have been operated by the Soviet Union.
Sonar can be used to detect frogmen and other scuba divers. This can be applicable around ships or at entrances to ports. Active sonar can also be used as a deterrent and/or disablement mechanism. One such device is the Cerberus system.
Limpet mine imaging sonar (LIMIS) is a hand-held or ROV-mounted imaging sonar designed for patrol divers (combat frogmen or clearance divers) to look for limpet mines in low visibility water.
The LUIS is another imaging sonar for use by a diver.
Integrated navigation sonar system (INSS) is a small flashlight-shaped handheld sonar for divers that displays range.
This is a sonar designed to detect and locate transmissions from hostile active sonars. An example is the Type 2082, fitted on British submarines.
Fishing is an important industry that is seeing growing demand, but world catch tonnage is falling as a result of serious resource problems. The industry faces a future of continuing worldwide consolidation until a point of sustainability can be reached. However, the consolidation of the fishing fleets is driving increased demand for sophisticated fish-finding electronics such as sensors, sounders and sonars. Historically, fishermen have used many different techniques to find and harvest fish. However, acoustic technology has been one of the most important driving forces behind the development of the modern commercial fisheries.
Sound waves travel differently through fish than through water because a fish's air-filled swim bladder has a different density than seawater. This density difference allows the detection of schools of fish by using reflected sound. Acoustic technology is especially well suited for underwater applications since sound travels farther and faster underwater than in air. Today, commercial fishing vessels rely almost completely on acoustic sonar and sounders to detect fish. Fishermen also use active sonar and echo sounder technology to determine water depth, bottom contour, and bottom composition.
Companies such as eSonar, Raymarine, Marport Canada, Wesmar, Furuno, Krupp, and Simrad make a variety of sonar and acoustic instruments for the deep sea commercial fishing industry. For example, net sensors take various underwater measurements and transmit the information back to a receiver on board a vessel. Each sensor is equipped with one or more acoustic transducers depending on its specific function. Data is transmitted from the sensors using wireless acoustic telemetry and is received by a hull mounted hydrophone. The analog signals are decoded and converted by a digital acoustic receiver into data which is transmitted to a bridge computer for graphical display on a high resolution monitor.
Echo sounding is a process used to determine the depth of water beneath ships and boats. A type of active sonar, echo sounding is the transmission of an acoustic pulse directly downwards to the seabed and the measurement of the time between transmission and the return of the echo, after it has hit the bottom and bounced back to its ship of origin. The acoustic pulse is emitted by a transducer, which receives the return echo as well. The depth measurement is calculated by multiplying the speed of sound in water (averaging 1,500 meters per second) by half the time between emission and echo return.
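A minimal sketch of this calculation, using the average speed quoted above:

```python
SPEED_OF_SOUND = 1500.0  # m/s, the average quoted above

def depth_m(echo_time_s: float) -> float:
    """Water depth from an echo sounder: half the round-trip time
    multiplied by the speed of sound."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# An echo returning after 0.2 s indicates roughly 150 m of water.
print(depth_m(0.2))  # 150.0
```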
The value of underwater acoustics to the fishing industry has led to the development of other acoustic instruments that operate in a similar fashion to echo-sounders but, because their function is slightly different from the initial model of the echo-sounder, have been given different terms.
The net sounder is an echo sounder with a transducer mounted on the headline of the net rather than on the bottom of the vessel. Nevertheless, to accommodate the distance from the transducer to the display unit, which is much greater than in a normal echo-sounder, several refinements have to be made. Two main types are available. The first is the cable type in which the signals are sent along a cable. In this case there has to be the provision of a cable drum on which to haul, shoot and stow the cable during the different phases of the operation. The second type is the cable-less net-sounder – such as Marport's Trawl Explorer – in which the signals are sent acoustically between the net and hull mounted receiver-hydrophone on the vessel. In this case no cable drum is required but sophisticated electronics are needed at the transducer and receiver.
The display on a net sounder shows the distance of the net from the bottom (or the surface), rather than the depth of water as with the echo-sounder's hull-mounted transducer. With the transducer fixed to the headline of the net, the footrope can usually be seen, which gives an indication of net performance. Any fish passing into the net can also be seen, allowing fine adjustments to be made to catch the most fish possible. In other fisheries, where the amount of fish in the net is important, catch sensor transducers are mounted at various positions on the cod-end of the net. As the cod-end fills up, these catch sensor transducers are triggered one by one, and this information is transmitted acoustically to display monitors on the bridge of the vessel. The skipper can then decide when to haul the net.
Modern versions of the net sounder, using multiple element transducers, function more like a sonar than an echo sounder and show slices of the area in front of the net and not merely the vertical view that the initial net sounders used.
The sonar is an echo-sounder with a directional capability that can show fish or other objects around the vessel.
Small sonars have been fitted to remotely operated vehicles (ROVs) and unmanned underwater vehicles (UUVs) to allow their operation in murky conditions. These sonars are used for looking ahead of the vehicle. The Long-Term Mine Reconnaissance System is a UUV for MCM purposes.
Sonars which act as beacons are fitted to aircraft to allow their location in the event of a crash in the sea. Short and long baseline (LBL) sonars may be used for carrying out the localization.
In 2013 an inventor in the United States unveiled a "spider-sense" bodysuit, equipped with ultrasonic sensors and haptic feedback systems, which alerts the wearer of incoming threats, allowing them to respond to attackers even when blindfolded.
Active sonar techniques can detect fish and other marine and aquatic life and estimate their individual sizes or total biomass. As the sound pulse travels through water it encounters objects that are of different density or acoustic characteristics than the surrounding medium, such as fish, and these reflect sound back toward the sound source. These echoes provide information on fish size, location, abundance and behavior. Data is usually processed and analysed using a variety of software such as "Echoview".
An upward-looking echo sounder mounted on the bottom or on a platform may be used to make measurements of wave height and period. From this, statistics of the surface conditions at a location can be derived.
Special short range sonars have been developed to allow measurements of water velocity.
Sonars have been developed that can be used to characterise the sea bottom into, for example, mud, sand, and gravel. Relatively simple sonars such as echo sounders can be promoted to seafloor classification systems via add-on modules, converting echo parameters into sediment type. Different algorithms exist, but they are all based on changes in the energy or shape of the reflected sounder pings. Advanced substrate classification analysis can be achieved using calibrated (scientific) echosounders and parametric or fuzzy-logic analysis of the acoustic data.
Side-scan sonars can be used to derive maps of seafloor topography (bathymetry) by moving the sonar across it just above the bottom. Low frequency sonars such as GLORIA have been used for continental shelf wide surveys while high frequency sonars are used for more detailed surveys of smaller areas.
Powerful low frequency echo-sounders have been developed for providing profiles of the upper layers of the ocean bottom.
Gas bubbles can leak from the seabed, or close to it, from multiple sources, and can be detected by both passive and active sonar. Natural seeps of methane and carbon dioxide occur. Gas pipelines can leak, and it is important to be able to detect whether leakage occurs from Carbon Capture and Storage Facilities (CCSFs; e.g. depleted oil wells into which extracted atmospheric carbon is stored). Quantification of the amount of gas leaking is difficult, and although estimates can be made using active and passive sonar, it is important to question their accuracy because of the assumptions inherent in making such estimations from sonar data.
Various synthetic aperture sonars have been built in the laboratory and some have entered use in mine-hunting and search systems. An explanation of their operation is given in synthetic aperture sonar.
Parametric sources use the non-linearity of water to generate the difference frequency between two high frequencies. A virtual end-fire array is formed. Such a projector has advantages of broad bandwidth, narrow beamwidth, and when fully developed and carefully measured it has no obvious sidelobes: see Parametric array. Its major disadvantage is very low efficiency of only a few percent. P.J. Westervelt summarizes the trends involved.
Use of both passive and active sonar has been proposed for various extraterrestrial uses. An example of the use of active sonar is in determining the depth of hydrocarbon seas on Titan; an example of the use of passive sonar is in the detection of methanefalls on Titan.
It has been noted that proposals which suggest the use of sonar without taking proper account of the differences between the Earthly (atmosphere, ocean, mineral) environments and the extraterrestrial ones can lead to erroneous values.
Research has shown that use of active sonar can lead to mass strandings of marine mammals. Beaked whales, the most common casualty of the strandings, have been shown to be highly sensitive to mid-frequency active sonar. Other marine mammals such as the blue whale also flee away from the source of the sonar, while naval activity was suggested to be the most probable cause of a mass stranding of dolphins. The US Navy, which part-funded some of the studies, said that the findings only showed behavioural responses to sonar, not actual harm, but they "will evaluate the effectiveness of [their] marine mammal protective measures in light of new research findings". A 2008 US Supreme Court ruling on the use of sonar by the US Navy noted that there had been no cases where sonar had been conclusively shown to have harmed or killed a marine mammal.
Some marine animals, such as whales and dolphins, use echolocation systems, sometimes called "biosonar" to locate predators and prey. Research on the effects of sonar on blue whales in the Southern California Bight shows that mid-frequency sonar use disrupts the whales' feeding behavior. This indicates that sonar-induced disruption of feeding and displacement from high-quality prey patches could have significant and previously undocumented impacts on baleen whale foraging ecology, individual fitness and population health.
A review of evidence on the mass strandings of beaked whales linked to naval exercises where sonar was used was published in 2019. It concluded that the effects of mid-frequency active sonar are strongest on Cuvier's beaked whales but vary among individuals or populations. The review suggested the strength of response of individual animals may depend on whether they had prior exposure to sonar, and that symptoms of decompression sickness have been found in stranded whales that may be a result of such response to sonar. It noted that in the Canary Islands, where multiple strandings had been previously reported, no more mass strandings had occurred once naval exercises during which sonar was used were banned in the area, and recommended that the ban be extended to other areas where mass strandings continue to occur.
High-intensity sonar sounds can create a small temporary shift in the hearing threshold of some fish.
The frequencies of sonars range from infrasonic to above a megahertz. Generally, the lower frequencies have longer range, while the higher frequencies offer better resolution, and smaller size for a given directionality.
To achieve reasonable directionality, frequencies below 1 kHz generally require large size, usually achieved as towed arrays.
Low frequency sonars are loosely defined as 1–5 kHz, although some navies also regard 5–7 kHz as low frequency. Medium frequency is defined as 5–15 kHz. Another style of division considers low frequency to be under 1 kHz, and medium frequency between 1 and 10 kHz.
American World War II era sonars operated at a relatively high frequency of 20–30 kHz, to achieve directionality with reasonably small transducers, with typical maximum operational range of 2500 yd. Postwar sonars used lower frequencies to achieve longer range; e.g. SQS-4 operated at 10 kHz with range up to 5000 yd. SQS-26 and SQS-53 operated at 3 kHz with range up to 20,000 yd; their domes were approximately the size of a 60 ft personnel boat, an upper size limit for conventional hull sonars. Achieving larger sizes by a conformal sonar array spread over the hull has not been effective so far; for lower frequencies, linear or towed arrays are therefore used.
Japanese WW2 sonars operated at a range of frequencies. The Type 91, with a 30-inch quartz projector, worked at 9 kHz. The Type 93, with smaller quartz projectors, operated at 17.5 kHz (model 5 at 16 or 19 kHz magnetostrictive) at powers between 1.7 and 2.5 kilowatts, with a range of up to 6 km. The later Type 3, with German-design magnetostrictive transducers, operated at 13, 14.5, 16, or 20 kHz (by model), using twin transducers (except model 1, which had three single ones), at 0.2 to 2.5 kilowatts. The simple type used 14.5 kHz magnetostrictive transducers at 0.25 kW, driven by capacitive discharge instead of oscillators, with a range of up to 2.5 km.
The sonar's resolution is angular; objects further apart are imaged with lower resolutions than nearby ones.
Another source lists ranges and resolutions vs frequencies for sidescan sonars. 30 kHz provides low resolution with a range of 1000–6000 m, 100 kHz gives medium resolution at 500–1000 m, 300 kHz gives high resolution at 150–500 m, and 600 kHz gives high resolution at 75–150 m. Longer range sonars are more adversely affected by inhomogeneities of the water. Some environments, typically shallow waters near the coasts, have complicated terrain with many features; higher frequencies become necessary there.
Skylab
Skylab was the first United States space station, launched by NASA, occupied for about 24 weeks between May 1973 and February 1974. It was operated by three separate three-astronaut crews: SL-2, SL-3 and SL-4. Major operations included an orbital workshop, a solar observatory, Earth observation, and hundreds of experiments.
Unable to be re-boosted by the Space Shuttle, which was not ready until the early 1980s, Skylab's orbit decayed and it burned up in the atmosphere on July 11, 1979, over the Indian Ocean.
Skylab was the only space station operated exclusively by the United States. A permanent US station was planned starting in 1969, but funding for this was canceled and replaced with US participation in an International Space Station in 1993.
With an Apollo command and service module (CSM) attached, Skylab included a workshop, a solar observatory, and several hundred life science and physical science experiments. It was launched uncrewed into low Earth orbit by a Saturn V rocket modified into the Saturn INT-21, with the S-IVB third stage not available for propulsion because the orbital workshop was built out of it. This was the final flight for the rocket more commonly known for carrying the crewed Apollo Moon landing missions. Three subsequent missions delivered three-astronaut crews in the Apollo CSM launched by the smaller Saturn IB rocket. For the final two crewed missions to Skylab, NASA assembled a backup Apollo CSM/Saturn IB in case an in-orbit rescue mission was needed, but this vehicle was never flown. The station was damaged during launch when the micrometeoroid shield tore away from the workshop, taking one of the main solar panel arrays with it and jamming the other main array. This deprived Skylab of most of its electrical power and also removed protection from intense solar heating, threatening to make it unusable. The first crew deployed a replacement heat shade and freed the jammed solar panels to save Skylab. This was the first time that a repair of this magnitude was performed in space.
Skylab included the Apollo Telescope Mount (a multi-spectral solar observatory), a multiple docking adapter with two docking ports, an airlock module with extravehicular activity (EVA) hatches, and the orbital workshop, the main habitable space inside Skylab. Electrical power came from solar arrays and fuel cells in the docked Apollo CSM. The rear of the station included a large waste tank, propellant tanks for maneuvering jets, and a heat radiator. Astronauts conducted numerous experiments aboard Skylab during its operational life. The telescope significantly advanced solar science, and observation of the Sun was unprecedented. Astronauts took thousands of photographs of Earth, and the Earth Resources Experiment Package (EREP) viewed Earth with sensors that recorded data in the visible, infrared, and microwave spectral regions. The record for human time spent in orbit was extended beyond the 23 days set by the Soyuz 11 crew aboard Salyut 1 to 84 days by the Skylab 4 crew.
Later plans to reuse Skylab were stymied by delays in development of the Space Shuttle, and Skylab's decaying orbit could not be stopped. Skylab's atmospheric reentry began on July 11, 1979, amid worldwide media attention. Before re-entry, NASA ground controllers tried to adjust Skylab's orbit to minimize the risk of debris landing in populated areas, targeting the south Indian Ocean, which was partially successful. Debris showered Western Australia, and recovered pieces indicated that the station had disintegrated lower than expected. As the Skylab program drew to a close, NASA's focus had shifted to the development of the Space Shuttle. NASA space station and laboratory projects included Spacelab, Shuttle-"Mir", and Space Station "Freedom", which was merged into the International Space Station.
Rocket engineer Wernher von Braun, science fiction writer Arthur C. Clarke, and other early advocates of crewed space travel, expected until the 1960s that a space station would be an important early step in space exploration. Von Braun participated in the publishing of a series of influential articles in "Collier's" magazine from 1952 to 1954, titled "Man Will Conquer Space Soon!". He envisioned a large, circular station 250 feet (75m) in diameter that would rotate to generate artificial gravity and require a fleet of 7,000-ton (6,500-metric ton) space shuttles for construction in orbit. The 80 men aboard the station would include astronomers operating a telescope, meteorologists to forecast the weather, and soldiers to conduct surveillance. Von Braun expected that future expeditions to the Moon and Mars would leave from the station.
The development of the transistor, the solar cell, and telemetry led in the 1950s and early 1960s to uncrewed satellites that could take photographs of weather patterns or enemy nuclear weapons and send them to Earth. A large station was no longer necessary for such purposes, and the United States Apollo program to send men to the Moon chose a mission mode that would not need in-orbit assembly. A smaller station that a single rocket could launch retained value, however, for scientific purposes.
In 1959, von Braun, head of the Development Operations Division at the Army Ballistic Missile Agency, submitted his final Project Horizon plans to the U.S. Army. The overall goal of Horizon was to place men on the Moon, a mission that would soon be taken over by the rapidly forming NASA. Although concentrating on the Moon missions, von Braun also detailed an orbiting laboratory built out of a Horizon upper stage, an idea used for Skylab. A number of NASA centers studied various space station designs in the early 1960s. Studies generally looked at platforms launched by the Saturn V, followed up by crews launched on Saturn IB using an Apollo command and service module, or a Gemini capsule on a Titan II-C, the latter being much less expensive in the case where cargo was not needed. Proposals ranged from an Apollo-based station with two to three men, or a small "canister" for four men with Gemini capsules resupplying it, to a large, rotating station with 24 men and an operating lifetime of about five years. A proposal to study the use of a Saturn S-IVB as a crewed space laboratory was documented in 1962 by the Douglas Aircraft Company.
The Department of Defense (DoD) and NASA cooperated closely in many areas of space. In September 1963, NASA and the DoD agreed to cooperate in building a space station. The DoD wanted its own crewed facility, however, and in December it announced Manned Orbital Laboratory (MOL), a small space station primarily intended for photo reconnaissance using large telescopes directed by a two-person crew. The station was the same diameter as a Titan II upper stage, and would be launched with the crew riding atop in a modified Gemini capsule with a hatch cut into the heat shield on the bottom of the capsule. MOL competed for funding with a NASA station for the next five years and politicians and other officials often suggested that NASA participate in MOL or use the DoD design. The military project led to changes to the NASA plans so that they would resemble MOL less.
NASA management was concerned about losing the 400,000 workers involved in Apollo after landing on the Moon in 1969. A reason von Braun, head of NASA's Marshall Space Flight Center during the 1960s, advocated for a smaller station after his large one was not built was that he wished to provide his employees with work beyond developing the Saturn rockets, which would be completed relatively early during Project Apollo. NASA set up the "Apollo Logistic Support System Office", originally intended to study various ways to modify the Apollo hardware for scientific missions. The office initially proposed a number of projects for direct scientific study, including an extended-stay lunar mission which required two Saturn V launchers, a "lunar truck" based on the Lunar Module (LEM), a large crewed solar telescope using a LEM as its crew quarters, and small space stations using a variety of LEM or CSM-based hardware. Although it did not look at the space station specifically, over the next two years the office would become increasingly dedicated to this role. In August 1965, the office was renamed, becoming the "Apollo Applications Program" (AAP).
As part of their general work, in August 1964 the Manned Spacecraft Center (MSC) presented studies on an expendable lab known as "Apollo "X"", short for "Apollo Extension System". "Apollo X" would have replaced the LEM carried on the top of the S-IVB stage with a small space station slightly larger than the CSM's service area, containing supplies and experiments for missions between 15 and 45 days' duration. Using this study as a baseline, a number of different mission profiles were looked at over the next six months.
In November 1964, von Braun proposed a more ambitious plan to build a much larger station built from the S-II second stage of a Saturn V. His design replaced the S-IVB third stage with an aeroshell, primarily as an adapter for the CSM on top. Inside the shell was a cylindrical equipment section. On reaching orbit, the S-II second stage would be vented to remove any remaining hydrogen fuel, then the equipment section would be slid into it via a large inspection hatch. This became known as a "wet workshop" concept, because of the conversion of an active fuel tank. The station filled the entire interior of the S-II stage's hydrogen tank, with the equipment section forming a "spine" and living quarters located between it and the walls of the booster. This would have resulted in a very large living area. Power was to be provided by solar cells lining the outside of the S-II stage.
One problem with this proposal was that it required a dedicated Saturn V launch to fly the station. At the time the design was being proposed, it was not known how many of the then-contracted Saturn Vs would be required to achieve a successful Moon landing. However, several planned Earth-orbit test missions for the LEM and CSM had been canceled, leaving a number of Saturn IBs free for use. Further work led to the idea of building a smaller "wet workshop" based on the S-IVB, launched as the second stage of a Saturn IB.
A number of S-IVB-based stations, with much in common with the Skylab design that eventually flew, were studied at MSC from mid-1965. An airlock would be attached to the hydrogen tank, in the area designed to hold the LEM, and a minimum amount of equipment would be installed in the tank itself in order to avoid taking up too much fuel volume. Floors of the station would be made from an open metal framework that allowed the fuel to flow through it. After launch, a follow-up mission flown on a Saturn IB would deliver additional equipment, including solar panels, an equipment section and docking adapter, and various experiments. Douglas Aircraft, builder of the S-IVB stage, was asked to prepare proposals along these lines. The company had for several years been proposing stations based on the S-IV stage, before it was replaced by the S-IVB.
On April 1, 1966, MSC sent out contracts to Douglas, Grumman, and McDonnell for the conversion of a S-IVB spent stage, under the name "Saturn S-IVB spent-stage experiment support module" (SSESM). In May, astronauts voiced concerns over the purging of the stage's hydrogen tank in space. Nevertheless, in late July it was announced that the Orbital Workshop would be launched as a part of Apollo mission AS-209, originally one of the Earth-orbit CSM test launches, followed by two Saturn I/CSM crew launches, AAP-1 and AAP-2.
MOL remained AAP's chief competitor for funds, although the two programs cooperated on technology. NASA considered flying experiments on MOL, or using its Titan IIIC booster instead of the much more expensive Saturn IB. The agency decided that the Air Force station was not large enough, and that converting Apollo hardware for use with Titan would be too slow and too expensive. The DoD later canceled MOL in June 1969.
Design work continued over the next two years, in an era of shrinking budgets. (NASA sought $450 million for Apollo Applications in fiscal year 1967, for example, but received $42 million.) In August 1967, the agency announced that the lunar mapping and base construction missions examined by the AAP were being canceled. Only the Earth-orbiting missions remained, namely the Orbital Workshop and Apollo Telescope Mount solar observatory.
The success of Apollo 8 in December 1968, launched on the third flight of a Saturn V, made it likely that one would be available to launch a dry workshop. Later, several Moon missions were canceled as well, originally to be Apollo missions 18 through 20. The cancellation of these missions freed up three Saturn V boosters for the AAP program. Although this would have allowed them to develop von Braun's original S-II based mission, by this time so much work had been done on the S-IVB-based design that work continued on this baseline. With the extra power available, the wet workshop was no longer needed; the S-IC and S-II lower stages could launch a "dry workshop", with its interior already prepared, directly into orbit.
A dry workshop simplified plans for the interior of the station. Industrial design firm Raymond Loewy/William Snaith recommended emphasizing habitability and comfort for the astronauts by providing a wardroom for meals and relaxation and a window to view Earth and space, although astronauts were dubious about the designers' focus on details such as color schemes. Habitability had not previously been an area of concern when building spacecraft due to their small size and brief mission durations, but the Skylab missions would last for months. NASA sent a scientist on Jacques Piccard's "Ben Franklin" submarine in the Gulf Stream in July and August 1969 to learn how six people would live in an enclosed space for four weeks.
Astronauts were uninterested in watching movies on a proposed entertainment center or in playing games, but they did want books and individual music choices. Food was also important; early Apollo crews complained about its quality, and a NASA volunteer found it intolerable to live on the Apollo food for four days on Earth. Its taste and composition were unpleasant, and it came in the form of cubes and squeeze tubes. Skylab food significantly improved on its predecessors by prioritizing edibility over scientific needs.
Each astronaut had a private sleeping area the size of a small walk-in closet, with a curtain, sleeping bag, and locker. Designers also added a shower and a toilet for comfort and to obtain precise urine and feces samples for examination on Earth.
Skylab did not have recycling systems such as conversion of urine to drinking water; it also did not dispose of waste by dumping it into space. The S-IVB's liquid oxygen tank below the OWS was used to store trash and waste water, passed through an airlock.
Rescuing astronauts from Skylab was possible in the most likely emergency circumstances. The crew could use the CSM to quickly return to Earth if the station suffered serious damage. If the CSM failed, the spacecraft and Saturn IB for the next Skylab mission would have been launched with two astronauts to retrieve the crew; given Skylab's ample supplies, its residents would have been able to wait up to several weeks for the rescue mission.
On August 8, 1969, the McDonnell Douglas Corporation received a contract for the conversion of two existing S-IVB stages to the Orbital Workshop configuration. One of the S-IV test stages was shipped to McDonnell Douglas for the construction of a mock-up in January 1970. The Orbital Workshop was renamed "Skylab" in February 1970 as a result of a NASA contest. The actual stage that flew was the upper stage of the AS-212 rocket (the S-IVB stage, S-IVB 212). The mission computer used aboard Skylab was the IBM System/4Pi TC-1, a relative of the AP-101 Space Shuttle computers. The Saturn V with serial number SA-513, originally produced for the Apollo program—before the cancellation of Apollo 18, 19, and 20—was repurposed and redesigned to launch Skylab. The Saturn V's third stage was removed and replaced with Skylab, but with the controlling Instrument Unit remaining in its standard position.
Skylab was launched on May 14, 1973 by the modified Saturn V. The launch is sometimes referred to as Skylab 1, or SL-1. Severe damage was sustained during launch and deployment, including the loss of the station's micrometeoroid shield/sun shade and one of its main solar panels. Debris from the lost micrometeoroid shield further complicated matters by becoming tangled in the remaining solar panel, preventing its full deployment and thus leaving the station with a huge power deficit.
Immediately following Skylab's launch, Pad A at Kennedy Space Center Launch Complex 39 was deactivated, and construction proceeded to modify it for the Space Shuttle program, originally targeting a maiden launch in March 1979. The crewed missions to Skylab would occur using a Saturn IB rocket from Launch Pad 39B.
SL-1 was the last uncrewed launch from LC-39A until February 19, 2017, when SpaceX CRS-10 was launched from there.
Three crewed missions, designated SL-2, SL-3 and SL-4, were made to Skylab in the Apollo command and service modules. The first crewed mission, SL-2, launched on May 25, 1973 atop a Saturn IB and involved extensive repairs to the station. The crew deployed a parasol-like sunshade through a small instrument port from the inside of the station, bringing station temperatures down to acceptable levels and preventing overheating that would have melted the plastic insulation inside the station and released poisonous gases. This solution was designed by NASA's "Mr. Fix It" Jack Kinzler, who won the NASA Distinguished Service Medal for his efforts. The crew conducted further repairs via two spacewalks (extra-vehicular activity, or EVA). The crew stayed in orbit with Skylab for 28 days. Two additional missions followed, with the launch dates of July 28, 1973 (SL-3) and November 16, 1973 (SL-4), and mission durations of 59 and 84 days, respectively. The last Skylab crew returned to Earth on February 8, 1974.
In addition to the three crewed missions, there was a rescue mission on standby that had a crew of two, but could take five back down.
Also of note was the three-man crew of the Skylab Medical Experiment Altitude Test, who spent 56 days in 1972 at low pressure on Earth to evaluate medical experiment equipment. This was a spaceflight analog test in full gravity, but Skylab hardware was tested and medical knowledge was gained.
Skylab orbited Earth 2,476 times during the 171 days and 13 hours of its occupation during the three crewed Skylab expeditions. Each of these missions extended the human spaceflight duration record, which stood at 23 days, set by the Soviet Soyuz 11 crew aboard the space station Salyut 1 on June 30, 1971. Skylab 2 lasted 28 days, Skylab 3 56 days, and Skylab 4 84 days. Astronauts performed ten spacewalks, totaling 42 hours and 16 minutes. Skylab logged about 2,000 hours of scientific and medical experiments, 127,000 frames of film of the Sun and 46,000 of Earth. Solar experiments included photographs of eight solar flares, and produced valuable results that scientists stated would have been impossible to obtain with uncrewed spacecraft. The existence of the Sun's coronal holes was confirmed because of these efforts. Many of the experiments conducted investigated the astronauts' adaptation to extended periods of microgravity.
A typical day began at 6 a.m. Central Time. Although the toilet was small and noisy, both veteran astronauts, who had endured earlier missions' rudimentary waste-collection systems, and rookies complimented it. The first crew enjoyed taking a shower once a week, but found drying themselves in weightlessness and vacuuming excess water difficult; later crews usually cleaned themselves daily with wet washcloths instead of using the shower. Astronauts also found that bending over in weightlessness to put on socks or tie shoelaces strained their stomach muscles.
Breakfast began at 7 am. Astronauts usually stood to eat, as sitting in microgravity also strained their stomach muscles. They reported that their food, although greatly improved from Apollo, was bland and repetitive, and weightlessness caused utensils, food containers, and bits of food to float away; also, gas in their drinking water contributed to flatulence. After breakfast and preparation of lunch came experiments, tests and repairs of spacecraft systems and, if possible, 90 minutes of physical exercise; the station had a bicycle and other equipment, and astronauts could jog around the water tank. After dinner, which was scheduled for 6 pm, crews performed household chores and prepared for the next day's experiments. Following the lengthy daily instructions (some of which were up to 15 meters long) sent via teleprinter, the crews were often busy enough to postpone sleep.
The station offered what a later study called "a highly satisfactory living and working environment for crews", with enough room for personal privacy. Although it had a dart set, playing cards, and other recreational equipment in addition to books and music players, the window with its view of Earth became the most popular way to relax in orbit.
About 80 experiments were named prior to launch, although they are also described as "almost 300 separate investigations". Experiments were divided into six broad categories:
Because the solar scientific airlock—one of two research airlocks—was unexpectedly occupied by the "Parasol" that replaced the missing meteorite shield, a few experiments were instead installed outside with the telescopes during space walks, or shifted to the Earth-facing scientific airlock.
Skylab 2 spent less time than planned on most experiments due to station repairs. On the other hand, Skylab 3 and Skylab 4 far exceeded the initial experiment plans, once the crews adjusted to the environment and established comfortable working relationships with ground control.
(Figure: overview of the major Skylab experiments.) Skylab 4 carried out several more experiments, such as observations of Comet Kohoutek.
Riccardo Giacconi shared the 2002 Nobel Prize in Physics for his pioneering work in X-ray astronomy, which included the study of X-ray emissions from the Sun carried out aboard Skylab.
Skylab had certain features to protect vulnerable technology from radiation. The window was vulnerable to darkening, and this darkening could affect experiment S190. As a result, a light shield that could be opened or shut was designed and installed on Skylab. To protect the wide variety of films used for experiments and astronaut photography, there were five film vaults. Four smaller film vaults were placed in the Multiple Docking Adapter, mainly because its structure could not carry the weight of a single larger film vault. The Orbital Workshop could handle a single larger vault, which is also more efficient for shielding. The large vault in the Orbital Workshop had an empty mass of 2,398 lb (1,088 kg); the four smaller vaults had a combined mass of 1,545 lb (701 kg). The primary construction material of all five vaults was aluminum. When Skylab re-entered, a 180 lb (82 kg) chunk of aluminum was found that was thought to be a door from one of the film vaults. The large film vault was one of the heaviest single pieces of Skylab to re-enter Earth's atmosphere.
A later example of a radiation vault is the Juno Radiation Vault for the Juno Jupiter orbiter, launched in 2011, which was designed to protect much of the uncrewed spacecraft's electronics, using 1 cm thick walls of titanium.
The Skylab film vault was used for storing film from various sources including the Apollo Telescope Mount solar instruments. Six ATM experiments used film to record data, and over the course of the missions over 150,000 successful exposures were recorded. The film canister had to be manually retrieved on crewed spacewalks to the instruments during the missions. The film canisters were returned to Earth aboard the Apollo capsules when each mission ended, and were among the heaviest items that had to be returned at the end of each mission. The heaviest canisters weighed 40 kg and could hold up to 16,000 frames of film.
There were two types of gyroscopes on Skylab. Control-moment gyroscopes (CMG) could physically move the station, and rate gyroscopes measured the rate of rotation to find its orientation. The CMG helped provide the fine pointing needed by the Apollo Telescope Mount, and to resist various forces that can change the station's orientation.
Some of the forces acting on Skylab that the pointing system needed to resist:
Skylab was the first large spacecraft to use big gyroscopes capable of controlling its attitude. The control could also be used to help point the instruments. The gyroscopes took about ten hours to spin up if they had been turned off. There was also a thruster system to control Skylab's attitude. There were nine rate-gyroscope sensors, three for each axis, which fed their output to the Skylab digital computer. Two of the three on each axis were active and their input was averaged, while the third was a backup. From NASA SP-400 "Skylab, Our First Space Station": "each Skylab control-moment gyroscope consisted of a motor-driven rotor, electronics assembly, and power inverter assembly"; the 21-inch diameter rotor rotated at approximately 8,950 revolutions per minute.
There were three control-moment gyroscopes on Skylab, but only two were required to maintain pointing. The control and sensor gyroscopes were part of a system that helped detect and control the orientation of the station in space. Other sensors that helped with this were a Sun tracker and a star tracker. The sensors fed data to the main computer, which could then use the control gyroscopes and/or the thruster system to keep Skylab pointed as desired.
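The two-active-plus-backup arrangement of the rate gyroscopes lends itself to a simple fault-masking scheme. The Python sketch below illustrates one plausible way such averaging-with-fallback logic could work; the disagreement threshold and the substitution rule are illustrative assumptions, not details documented for Skylab.

    def fused_rate(active_a, active_b, backup, max_disagreement=0.5):
        """Return one axis rotation rate (deg/s) from three rate-gyro
        readings: two active sensors and one backup.
        The 0.5 deg/s disagreement threshold is hypothetical."""
        if abs(active_a - active_b) <= max_disagreement:
            # Normal case: average the two active sensors.
            return (active_a + active_b) / 2.0
        # Active pair disagrees: substitute the backup for whichever
        # active reading lies farther from it.
        if abs(active_a - backup) <= abs(active_b - backup):
            return (active_a + backup) / 2.0
        return (active_b + backup) / 2.0

    print(fused_rate(0.10, 0.12, 0.11))  # healthy pair -> ~0.11
    print(fused_rate(0.10, 3.50, 0.11))  # faulty sensor masked -> ~0.105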
Skylab had a zero-gravity shower system in the work and experiment section of the Orbital Workshop, designed and built at the Manned Spacecraft Center. It had a cylindrical curtain that went from floor to ceiling and a vacuum system to suck away water. The floor of the shower had foot restraints.
To bathe, the user coupled a pressurized bottle of warmed water to the shower's plumbing, then stepped inside and secured the curtain. A push-button shower nozzle was connected by a stiff hose to the top of the shower. The system was designed for about 6 pints (2.8 liters) of water per shower, the water being drawn from the personal hygiene water tank. The use of both the liquid soap and water was carefully planned out, with enough soap and warm water for one shower per week per person.
The first astronaut to use the space shower was Paul J. Weitz on Skylab 2, the first crewed mission. A Skylab shower took about two and a half hours, including the time to set up the shower and dissipate the used water. Operating the shower followed a fixed sequence of steps.
One of the big concerns with bathing in space was control of water droplets, so that they did not float into the wrong area and cause an electrical short. The vacuum water system was thus integral to the shower. The vacuum fed a centrifugal separator, filter, and collection bag, allowing the system to vacuum up the fluids. Waste water was injected into a disposal bag, which was in turn put in the waste tank. The material for the shower enclosure was fireproof beta cloth wrapped around circular hoops, the top hoop being connected to the ceiling. The shower could be collapsed to the floor when not in use. Skylab also supplied astronauts with rayon terrycloth towels, which had color-coded stitching for each crew member. There were 420 towels on board Skylab initially.
A simulated Skylab shower was also used during the 56-day Earth-bound Skylab analog mission SMEAT; the crew used the shower after exercise and found it a positive experience.
There was a variety of hand-held and fixed experiments that used various types of film. In addition to the instruments in the ATM solar observatory, 35 and 70 mm film cameras were carried on board. A TV camera that recorded video electronically was also carried; its signals could be recorded on magnetic tape or transmitted to Earth by radio. The TV camera was not a digital camera of the type that became common in later decades, although Skylab did have a digital computer using microchips on board.
It was determined that film would fog due to radiation over the course of the mission. To prevent this, film was stored in the vaults.
Personal (hand-held) camera equipment:
Film for the DAC was contained in DAC Film Magazines, which contained up to 140 feet (42.7 m) of film. At 24 frames per second this was enough for 4 minutes of filming, with progressively longer film times with lower frame rates such as 16 minutes at 6 frames per second. The film had to be loaded or unloaded from the DAC in a photographic dark room.
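These run times follow directly from the magazine capacity. A quick check in Python, assuming standard 16 mm film at 40 frames per foot (the frame pitch is not stated in the text):

    FRAMES_PER_FOOT = 40   # assumed 16 mm film stock
    MAGAZINE_FEET = 140
    total_frames = FRAMES_PER_FOOT * MAGAZINE_FEET   # 5,600 frames
    for fps in (24, 6):
        minutes = total_frames / fps / 60
        print(f"{fps} frames/s -> {minutes:.1f} minutes")
    # 24 frames/s -> 3.9 minutes (the "4 minutes" above)
    #  6 frames/s -> 15.6 minutes (the "16 minutes" above)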
Experiment S190B was the Actron Earth Terrain Camera. The S190A was the "Multispectral Photographic Camera".
There was also a Polaroid SX-70 instant camera, and a pair of Leitz Trinovid 10 x 40 binoculars modified for use in space to aid in Earth observations.
The SX-70 was used by Dr. Garriott to take pictures of the Extreme Ultraviolet monitor, which provided a live video feed of the solar corona in ultraviolet light as observed by the Skylab solar observatory instruments located in the Apollo Telescope Mount.
Skylab was controlled in part by a digital computer system, and one of its main jobs was to control the pointing of the station; pointing was especially important for its solar power collection and observatory functions. The computer consisted of two actual computers, a primary and a secondary. The system ran several thousand words of code, which was also backed up on the Memory Load Unit (MLU). The two computers were linked to each other and various input and output items by the workshop computer interface. Operations could be switched from the primary to the backup, which were the same design, either automatically if errors were detected, by the Skylab crew, or from the ground.
The Skylab computer was a space-hardened and customized version of the TC-1, a version of the IBM System/4Pi, itself based on the IBM System/360. The TC-1 had a 16,000-word memory based on ferrite memory cores, while the MLU was a read-only tape drive that contained a backup of the main computer programs. The tape drive took 11 seconds to upload the backup copy of the software to a main computer. The TC-1 used 16-bit words, and its central processor came from the 4Pi computer. There was a 16k and an 8k version of the software program.
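Those figures imply the tape drive's effective transfer rate. A back-of-the-envelope calculation in Python, ignoring any tape formatting overhead (which the text does not describe):

    MEMORY_WORDS = 16_000
    BITS_PER_WORD = 16
    UPLOAD_SECONDS = 11
    rate = MEMORY_WORDS * BITS_PER_WORD / UPLOAD_SECONDS
    print(f"{rate:.0f} bits/s")   # ~23,273 bits/s, i.e. roughly 23 kbit/s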
The computer had a mass of 100 pounds (45.4 kg), and consumed about ten percent of the station's electrical power.
After launch, the computer was what ground controllers communicated with to control the station's orientation. When the sun shield was torn off, ground staff had to balance solar heating against electrical production. On March 6, 1978, the computer system was re-activated by NASA to control the re-entry.
The system had a user interface which consisted of a display, ten buttons, and a three-position switch. Because the numbers were in octal (base-8), it only had the digit keys zero to seven (8 keys); the other two keys were enter and clear. The display could show minutes and seconds counting down to orbital benchmarks, or it could display keystrokes when the interface was in use. The interface could be used to change the software program. The user interface was called the Digital Address System (DAS) and could send commands to the computer's command system, which could also receive commands from the ground.
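Because the DAS keypad offered only the digits 0 through 7, every value was keyed in and read out in octal. A small Python illustration of the base-8 arithmetic involved; the keystroke sequence shown is hypothetical:

    DIGIT_KEYS = "01234567"           # plus separate enter and clear keys
    entry = "377"                     # hypothetical keyed-in octal value
    assert all(key in DIGIT_KEYS for key in entry)
    value = int(entry, 8)             # octal 377 is decimal 255
    print(value, oct(value))          # 255 0o377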
For personal computing needs, Skylab crews were equipped with the then-new hand-held electronic scientific calculator, which replaced the slide rules used as the primary personal computing aid on earlier space missions. The model used was the Hewlett-Packard HP-35. Some slide rules continued in use aboard Skylab, and a circular slide rule was kept at the workstation.
The three crewed Skylab missions used only about 16.8 of the 24 man-months of oxygen, food, water, and other supplies stored aboard Skylab. A fourth crewed mission was under consideration, which would have used the launch vehicle kept on standby for the Skylab Rescue mission. This would have been a 20-day mission to boost Skylab to a higher altitude and do more scientific experiments. Another plan was to use a Teleoperator Retrieval System (TRS), launched aboard the Space Shuttle (then under development), to robotically re-boost the orbit. When Skylab 5 was cancelled, it was expected that Skylab would stay in orbit until the 1980s, long enough to overlap with the beginning of Shuttle launches. Other options for launching the TRS included the Titan III and Atlas-Agena. No option received the level of effort and funding needed for execution before Skylab's sooner-than-expected re-entry.
Though no one returned to Skylab after the end of the SL-4 mission in February 1974, the crew left a bag filled with supplies to welcome visitors, and left the hatch unlocked. Skylab's internal systems were evaluated and tested from the ground, and effort was put into plans for re-using it as late as 1978. NASA discouraged any discussion of additional visits due to the station's age, but in 1977 and 1978, when the agency still believed the Space Shuttle would be ready by 1979, it completed two studies on reusing the station. By September 1978, the agency believed Skylab was safe for crews, with all major systems intact and operational. It still had 180 man-days of water and 420 man-days of oxygen, and astronauts could refill both; the station could hold up to about 600 to 700 man-days of drinkable water and 420 man-days of food. Before SL-4 departed, its crew performed one more boost, running Skylab's thrusters for 3 minutes and adding 11 km to its orbital altitude; Skylab was left in a 433 by 455 km orbit. At this time, the NASA-accepted estimate for its re-entry was nine years.
The studies cited several benefits from reusing Skylab, which one called a resource worth "hundreds of millions of dollars" with "unique habitability provisions for long duration space flight". Because no more operational Saturn V rockets were available after the Apollo program, four to five shuttle flights and extensive space architecture would have been needed to build another station with as large a volume as Skylab's. Its ample size, much greater than that of the shuttle alone or even the shuttle plus Spacelab, was enough, with some modifications, for up to seven astronauts of both sexes, and for experiments needing a long duration in space; even a movie projector for recreation was possible.
Proponents of Skylab's reuse also said repairing and upgrading Skylab would provide information on the results of long-duration exposure to space for future stations. The most serious issue for reactivation was stationkeeping, as one of the station's gyroscopes had failed and the attitude control system needed refueling; fixing or replacing these would have required EVA. The station had not been designed for extensive resupply. However, although it was originally planned that Skylab crews would only perform limited maintenance, they successfully made major repairs during EVA, such as the SL-2 crew's deployment of the solar panel and the SL-4 crew's repair of the primary coolant loop. The SL-2 crew fixed one item during EVA by, reportedly, "hit[ting] it with [a] hammer".
Some studies also said, beyond the opportunity for space construction and maintenance experience, reactivating the station would free up shuttle flights for other uses, and reduce the need to modify the shuttle for long-duration missions. Even if the station were not crewed again, went one argument, it might serve as an experimental platform.
The reactivation would likely have occurred in four phases:
The first three phases would have required about $60 million in 1980s dollars, not including launch costs.
After the boost of about 11 km by SL-4's Apollo CSM before its departure in 1974, Skylab was left in a parking orbit of 433 by 455 km that was expected to last until at least the early 1980s, based on estimates of the 11-year sunspot cycle that began in 1976. NASA had first considered the potential risks of a space station reentry as early as 1962, but decided not to incorporate a retrorocket system in Skylab due to cost and acceptable risk.
The spent 49-ton Saturn V S-II stage which had launched Skylab in 1973 remained in orbit for almost two years, and made an uncontrolled reentry on January 11, 1975.
British mathematician Desmond King-Hele of the Royal Aircraft Establishment predicted in 1973 that Skylab would de-orbit and crash to Earth in 1979, sooner than NASA's forecast, because of increased solar activity. Greater-than-expected solar activity heated the outer layers of Earth's atmosphere and increased drag on Skylab. By late 1977, NORAD also forecast a reentry in mid-1979; a National Oceanic and Atmospheric Administration (NOAA) scientist criticized NASA for using an inaccurate model for the second most-intense sunspot cycle in a century, and for ignoring NOAA predictions published in 1976.
The reentry of the USSR's nuclear powered Cosmos 954 in January 1978, and the resulting radioactive debris fall in northern Canada, drew more attention to Skylab's orbit. Although Skylab did not contain radioactive materials, the State Department warned NASA about the potential diplomatic repercussions of station debris. Battelle Memorial Institute forecast that up to 25 tons of metal debris could land in 500 pieces over an area 4,000 miles long and 1,000 miles wide. The lead-lined film vault, for example, might land intact at 400 feet per second.
Ground controllers re-established contact with Skylab in March 1978 and recharged its batteries. Although NASA worked on plans to reboost Skylab with the Space Shuttle through 1978 and the TRS was almost complete, the agency gave up in December when it became clear that the shuttle would not be ready in time; its first flight, STS-1, did not occur until April 1981. Also rejected were proposals to launch the TRS using one or two uncrewed rockets or to attempt to destroy the station with missiles.
Skylab's demise in 1979 was an international media event, with T-shirts and hats with bullseyes and "Skylab Repellent" with a money-back guarantee, wagering on the time and place of re-entry, and nightly news reports. The "San Francisco Examiner" offered a $10,000 prize for the first piece of Skylab delivered to its offices; the competing "San Francisco Chronicle" offered $200,000 if a subscriber suffered personal or property damage. A Nebraska neighborhood painted a target so that the station would have "something to aim for", a resident said.
A report commissioned by NASA calculated that the odds were 1 in 152 of debris hitting any human, and odds of 1 in 7 of debris hitting a city of 100,000 people or more. Special teams were readied to head to any country hit by debris. The event caused so much panic in the Philippines that President Ferdinand Marcos appeared on national television to reassure the public.
A week before re-entry, NASA forecast that it would occur between July 10 and 14, with the 12th the most likely date, and the Royal Aircraft Establishment predicted the 14th. In the hours before the event, ground controllers adjusted Skylab's orientation to minimize the risk of re-entry onto a populated area. They aimed the station at a spot south-southeast of Cape Town, South Africa, and re-entry began at approximately 16:37 UTC, July 11, 1979. The Air Force provided data from a secret tracking system. The station did not burn up as fast as NASA expected. Due to a four-percent calculation error, debris landed east of Perth, Western Australia, and was found between Esperance, Western Australia and Rawlinna, from 31° to 34°S and 122° to 126°E, within a radius of about 130–150 km (81–93 miles) around Balladonia, Western Australia. Residents and an airline pilot saw dozens of colorful flares as large pieces broke up in the atmosphere; the debris landed in an almost unpopulated area, but the sightings still caused NASA to fear human injury or property damage. The Shire of Esperance light-heartedly fined NASA A$400 for littering; Scott Barley of Highway Radio raised the funds from his morning show listeners and paid the fine on behalf of NASA in April 2009.
Stan Thornton found 24 pieces of Skylab at his home in Esperance, and a Philadelphia businessman flew him, his parents, and his girlfriend to San Francisco, where he collected the "Examiner" prize. The Miss Universe 1979 pageant was scheduled for July 20, 1979 in Perth, and a large piece of Skylab debris was displayed on the stage. Analysis of the debris showed that the station had disintegrated at a much lower altitude than expected.
After the demise of Skylab, NASA focused on the reusable Spacelab module, an orbital workshop that could be deployed with the Space Shuttle and returned to Earth. The next major American space station project was Space Station Freedom, which was merged into the International Space Station program in 1993 and launched starting in 1998. Shuttle-Mir was another project, and led to the US funding Spektr, Priroda, and the Mir Docking Module in the 1990s.
There was a Skylab Rescue mission assembled for the second crewed mission to Skylab, but it was not needed. Another rescue mission was assembled for the last Skylab and was also on standby for ASTP. That launch stack might have been used for Skylab 5 (which would have been the fourth crewed Skylab mission), but this was cancelled and the SA-209 Saturn IB rocket was put on display at NASA Kennedy Space Center.
Launch vehicles:
Skylab 5 would have been a short 20-day mission to conduct more scientific experiments and use the Apollo's Service Propulsion System engine to boost Skylab into a higher orbit. Vance Brand (commander), William B. Lenoir (science pilot), and Don Lind (pilot) would have been the crew for this mission, with Brand and Lind being the prime crew for the Skylab Rescue flights. Brand and Lind also trained for a mission that would have aimed Skylab for a controlled deorbit.
The mission would have launched in April 1974 and supported later use by the Space Shuttle by boosting the station to higher orbit.
In addition to the flown Skylab space station, a second flight-quality backup Skylab space station had been built during the program. NASA considered using it for a second station in May 1973 or later, to be called Skylab B (S-IVB 515), but decided against it. Launching another Skylab with another Saturn V rocket would have been very costly, and it was decided to spend this money on the development of the Space Shuttle instead. The backup is on display at the National Air and Space Museum in Washington, D.C.
A full-size training mock-up once used for astronaut training is located at the Lyndon B. Johnson Space Center visitor's center in Houston, Texas. Another full-size training mock-up is at the U.S. Space & Rocket Center in Huntsville, Alabama. Originally displayed indoors, it was subsequently stored outdoors for several years to make room for other exhibits. To mark the 40th anniversary of the Skylab program, the Orbital Workshop portion of the trainer was restored and moved into the Davidson Center in 2013. NASA transferred the backup Skylab to the National Air and Space Museum in 1975. On display in the Museum's Space Hall since 1976, the orbital workshop has been slightly modified to permit viewers to walk through the living quarters.
The numerical identification of the crewed Skylab missions was the cause of some confusion. Originally, the uncrewed launch of Skylab and the three crewed missions to the station were numbered "SL-1" through "SL-4". During the preparations for the crewed missions, some documentation was created with a different scheme—"SLM-1" through "SLM-3"—for those missions only. William Pogue credits Pete Conrad with asking the Skylab program director which scheme should be used for the mission patches, and the astronauts were told to use 1-2-3, not 2-3-4. By the time NASA administrators tried to reverse this decision, it was too late, as all the in-flight clothing had already been manufactured and shipped with the 1-2-3 mission patches.
NASA Astronaut Group 4 and Group 6 were scientists recruited as astronauts. They and the scientific community hoped to have two on each Skylab mission, but Deke Slayton, director of flight crew operations, insisted that two trained pilots fly on each.
The "Skylab Medical Experiment Altitude Test" or SMEAT was a 56-day (8-week) Earth analog Skylab test. The test had a low-pressure high oxygen-percentage atmosphere but it operated under full gravity, as SMEAT was not in orbit. The test had a three-astronaut crew with Commander (Crippen), Science Pilot (Bobko), and Pilot (Thornton); there was a focus on medical studies and Thornton was an M.D. The crew lived and worked in the pressure chamber, converted to be like Skylab, from July 26 to September 20, 1972.
From 1966 to 1974, the Skylab program cost a total of $2.2 billion, equivalent to $10 billion in 2010 dollars. As its three three-person crews spent 510 total man-days in space, each man-day cost approximately $20 million, compared to $7.5 million for the International Space Station.
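The per-man-day figure can be reproduced from the numbers given. A one-line check in Python:

    total_cost_2010_dollars = 10e9    # program cost in 2010 dollars
    man_days = 510
    print(f"${total_cost_2010_dollars / man_days:,.0f} per man-day")
    # $19,607,843 per man-day, i.e. approximately $20 million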
The documentary "Searching for Skylab" was released online in March 2019. It was written and directed by Dwight Steven-Boniecki and was partly crowdfunded. | https://en.wikipedia.org/wiki?curid=29441 |
StrongARM
The StrongARM is a family of computer microprocessors developed by Digital Equipment Corporation and manufactured in the late 1990s that implemented the ARM v4 instruction set architecture. The design was sold to Intel in 1997, which continued to manufacture it before replacing it with the XScale in the early 2000s.
According to Allen Baum, the StrongARM traces its history to attempts to make a low-power version of the DEC Alpha, which DEC's engineers quickly concluded was not possible. They then became interested in designs dedicated to low-power applications, which led them to the ARM family. One of the few major users of the ARM for performance-related products at that time was Apple, whose Newton device was based on the ARM platform. DEC approached Apple wondering if they might be interested in a high-performance ARM, to which the Apple engineers replied "Phhht, yeah. You can't do it, but, yeah, if you could we'd use it."
The StrongARM was a collaborative project between DEC and Advanced RISC Machines to create a faster ARM microprocessor. The StrongARM was designed to address the upper-end of the low-power embedded market, where users needed more performance than the ARM could deliver while being able to accept more external support. Targets were devices such as newer personal digital assistants and set-top boxes.
Traditionally, the semiconductor division of DEC was located in Massachusetts. In order to gain access to the design talent in Silicon Valley, DEC opened a design center in Palo Alto, California. This design center was led by Dan Dobberpuhl and was the main design site for the StrongARM project. Another design site, in Austin, Texas, was created by ex-DEC designers returning from Apple Computer and Motorola. The project was set up in 1995, and quickly delivered its first design, the SA-110.
DEC agreed to sell StrongARM to Intel as part of a lawsuit settlement in 1997. Intel used the StrongARM to replace their ailing line of RISC processors, the i860 and i960.
When the semiconductor division of DEC was sold to Intel, many engineers from the Palo Alto design group moved to SiByte, a start-up company designing MIPS system-on-a-chip (SoC) products for the networking market. The Austin design group spun off to become Alchemy Semiconductor, another start-up company designing MIPS SoCs for the hand-held market. A new StrongARM core was developed by Intel and introduced in 2000 as the XScale.
The SA-110 was the first microprocessor in the StrongARM family. The first versions, operating at 100, 160, and 200 MHz, were announced on 5 February 1996. When announced, samples of these versions were available, with volume production slated for mid-1996. Faster 166 and 233 MHz versions were announced on 12 September 1996. Samples of these versions were available at announcement, with volume production slated for December 1996. Throughout 1996, the SA-110 was the highest performing microprocessor for portable devices. Towards the end of 1996 it was a leading CPU for internet/intranet appliances and thin client systems. The SA-110's first design win was the Apple MessagePad 2000. It was also used in a number of products including the Acorn Computers Risc PC and Eidos Optima video editing system. The SA-110's lead designers were Daniel W. Dobberpuhl, Gregory W. Hoeppner, Liam Madden, and Richard T. Witek.
The SA-110 had a simple microarchitecture. It was a scalar design that executed instructions in order with a five-stage classic RISC pipeline. The microprocessor was partitioned into several blocks: the IBOX, EBOX, IMMU, DMMU, BIU, WB and PLL. The IBOX contained hardware that operated in the first two stages of the pipeline, such as the program counter. It fetched, decoded and issued instructions; instruction fetch occurred during the first stage, decode and issue during the second. The IBOX decoded the more complex instructions in the ARM instruction set by translating them into sequences of simpler instructions. The IBOX also handled branch instructions. The SA-110 did not have branch prediction hardware, but had mechanisms for the speedy processing of branches.
Execution starts at stage three. The hardware that operates during this stage is contained in the EBOX, which comprises the register file, arithmetic logic unit (ALU), barrel shifter, multiplier and condition code logic. The register file had three read ports and two write ports. The ALU and barrel shifter executed instructions in a single cycle. The multiplier is not pipelined and has a latency of multiple cycles.
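The overlap of a scalar, in-order, five-stage pipeline can be visualized with a toy timing model. In the Python sketch below, the names of the last two stages ("memory" and "writeback") are assumptions borrowed from the classic RISC pipeline, since the text names only fetch, decode/issue, and execute, and the instruction stream is hypothetical:

    STAGES = ["fetch", "decode/issue", "execute", "memory", "writeback"]
    program = ["add", "sub", "ldr", "str"]   # hypothetical instruction stream

    # One instruction enters the pipeline per cycle; each occupies
    # successive stages on successive cycles.
    for cycle in range(len(program) + len(STAGES) - 1):
        active = [f"{ins}:{STAGES[cycle - i]}"
                  for i, ins in enumerate(program)
                  if 0 <= cycle - i < len(STAGES)]
        print(f"cycle {cycle}: " + ", ".join(active))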
The IMMU and DMMU are memory management units for instructions and data, respectively. Each MMU contained a 32-entry fully associative translation lookaside buffer (TLB) that could map 4 KB, 64 KB or 1 MB pages. The write buffer (WB) had eight 16-byte entries, enabling stores to be pipelined. The bus interface unit (BIU) provided the SA-110 with its external interface.
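The reach of each fully associative TLB depends on the page size in use. A quick Python calculation of the maximum address range that 32 entries can map at each supported page size:

    TLB_ENTRIES = 32
    for page_bytes in (4 * 1024, 64 * 1024, 1024 * 1024):
        coverage = TLB_ENTRIES * page_bytes
        print(f"{page_bytes // 1024:>5} KB pages -> {coverage // 1024} KB mapped")
    #     4 KB pages -> 128 KB mapped
    #    64 KB pages -> 2048 KB mapped (2 MB)
    #  1024 KB pages -> 32768 KB mapped (32 MB)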
The PLL generates the internal clock signal from an external 3.68 MHz clock signal. It was not designed by DEC, but was contracted to the Centre Suisse d'Electronique et de Microtechnique (CSEM) located in Neuchâtel, Switzerland.
The instruction cache and data cache each have a capacity of 16 KB and are 32-way set-associative and virtually addressed. The SA-110 was designed to be used with slow (and therefore low-cost) memory and therefore the high set associativity allows a higher hit rate than competing designs, and the use of virtual addresses allows memory to be simultaneously cached and uncached. The caches are responsible for most of the transistor count and they take up half the die area.
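The geometry of such a cache follows from its parameters. The Python sketch below assumes a 32-byte cache line, which the text does not state, so the resulting set count is illustrative:

    CACHE_BYTES = 16 * 1024
    WAYS = 32
    LINE_BYTES = 32   # assumption: the line size is not given in the text
    sets = CACHE_BYTES // (WAYS * LINE_BYTES)
    print(sets)       # 16 sets: the high associativity leaves only 4 index bits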
The SA-110 contained 2.5 million transistors and measured 7.8 mm by 6.4 mm (49.92 mm²). It was fabricated by DEC in its proprietary CMOS-6 process at its Fab 6 fab in Hudson, Massachusetts. CMOS-6 was DEC's sixth-generation complementary metal–oxide–semiconductor (CMOS) process, with a 0.35 µm feature size and a 0.25 µm effective channel length; as used for the SA-110, it provided only three levels of aluminium interconnect. The SA-110 used a power supply with a variable voltage of 1.2 to 2.2 volts (V) to enable designs to find a balance between power consumption and performance (higher voltages enable higher clock rates). It was packaged in a 144-pin thin quad flat pack (TQFP).
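The voltage/performance trade-off reflects the first-order CMOS dynamic-power relation P ≈ C·V²·f: a higher supply voltage permits a higher clock rate but raises the energy of every switching event quadratically. A brief illustration in Python; the capacitance and frequency used are arbitrary placeholders, not SA-110 data:

    def dynamic_power(c_farads, v_volts, f_hertz):
        """First-order CMOS switching power: P = C * V^2 * f."""
        return c_farads * v_volts**2 * f_hertz

    # Relative power per unit frequency across the SA-110 supply range:
    for v in (1.2, 1.7, 2.2):
        print(f"{v} V -> {(v / 1.2)**2:.2f}x the power of 1.2 V at the same clock")
    # 2.2 V dissipates ~3.36x the power of 1.2 V operation

    print(dynamic_power(1e-9, 2.2, 200e6))  # e.g. 1 nF at 2.2 V, 200 MHz -> ~0.97 W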
The SA-1100 was a derivative of the SA-110 developed by DEC. Announced in 1997, the SA-1100 was targeted for portable applications such as PDAs and differs from the SA-110 by providing a number of features that are desirable for such applications. To accommodate these features, the data cache was reduced in size to 8 KB.
The extra features are integrated memory, PCMCIA, and color LCD controllers connected to an on-die system bus, plus five serial I/O channels connected to a peripheral bus attached to the system bus. The memory controller supported FPM and EDO DRAM, SRAM, flash, and ROM. The PCMCIA controller supported two slots; the memory address and data bus were shared with the PCMCIA interface, so external glue logic was required. The serial I/O channels implemented a slave USB interface, an SDLC channel, two UARTs, an IrDA interface, an MCP, and a synchronous serial port.
The SA-1100 had a companion chip, the SA-1101, introduced by Intel on 7 October 1998. The SA-1101 provided additional peripherals to complement those integrated on the SA-1100, such as a video output port, two PS/2 ports, a USB controller, and a PCMCIA controller that replaced the one on the SA-1100. Design of the device was started by DEC, but was only partially complete when acquired by Intel, which had to finish the design. It was fabricated at DEC's former Hudson, Massachusetts fabrication plant, which had also been sold to Intel.
The SA-1100 contained 2.5 million transistors and measured 8.24 mm by 9.12 mm (75.15 mm²). It was fabricated in a 0.35 μm CMOS process with three levels of aluminium interconnect and was packaged in a 208-pin TQFP.
One of the early recipients of this processor was the ill-fated Psion netBook and its more consumer-oriented sibling, the Psion Series 7.
The SA-1110 was a derivative of the SA-110 developed by Intel. It was announced on 31 March 1999, positioned as an alternative to the SA-1100. At announcement, samples were set for June 1999 and volume production for later that year. Intel discontinued the SA-1110 in early 2003. The SA-1110 was available in 133 or 206 MHz versions. It differed from the SA-1100 by featuring support for 66 MHz (133 MHz version only) or 103 MHz (206 MHz version only) SDRAM. Its companion chip, which provided additional support for peripherals, was the SA-1111. The SA-1110 was packaged in a 256-pin micro ball grid array. It was used in mobile phones, personal digital assistants (PDAs) such as the Compaq (later HP) iPAQ and HP Jornada, the Sharp SL-5x00 Linux-based platforms, and the Simputer. It was also used to run the Intel Web Tablet, a tablet device considered potentially the first to introduce large-screen, portable web browsing; Intel dropped the product just prior to launch in 2001.
The SA-1500 was a derivative of the SA-110 developed by DEC initially targeted for set-top boxes. It was designed and manufactured in low volumes by DEC but was never put into production by Intel. The SA-1500 was available at 200 to 300 MHz. The SA-1500 featured an enhanced SA-110 core, an on-chip coprocessor called the "Attached Media Processor" (AMP), and an on-chip SDRAM and I/O bus controller. The SDRAM controller supported 100 MHz SDRAM, and the I/O controller implemented a 32-bit I/O bus that may run at frequencies up to 50 MHz for connecting to peripherals and the SA-1501 companion chip.
The AMP implemented a long instruction word instruction set containing instructions designed for multimedia, such as integer and floating-point multiply–accumulate and SIMD arithmetic. Each long instruction word is 64 bits wide and specifies an arithmetic operation and a branch or a load/store. Instructions operate on operands from a 64-entry 36-bit register file, and on a set of control registers. The AMP communicates with the SA-110 core via an on-chip bus and it shares the data cache with the SA-110. The AMP contained an ALU with a shifter, a branch unit, a load/store unit, a multiply–accumulate unit, and a single-precision floating-point unit. The AMP supported user-defined instructions via a 512-entry writable control store.
The SA-1501 companion chip provided additional video and audio processing capabilities and various I/O functions such as PS/2 ports, a parallel port, and interfaces for various peripherals.
The SA-1500 contained 3.3 million transistors and measured 60 mm². It was fabricated in a 0.28 µm CMOS process. It used a 1.5 to 2.0 V internal power supply and 3.3 V I/O, consuming less than 0.5 W at 100 MHz and 2.5 W at 300 MHz. It was packaged in a 240-pin metal quad flat package or a 256-ball plastic ball grid array.
The StrongARM latch is an electronic latch circuit topology first proposed by Toshiba engineers Tsuguo Kobayashi "et al." It gained significant attention after being used in StrongARM microprocessors, and is widely used as a sense amplifier, a comparator, or simply a robust latch with high sensitivity.
Shaul Mofaz
Lieutenant General Shaul Mofaz (born 4 November 1948) is an Israeli former soldier and politician. He joined the Israel Defense Forces in 1966 and served in the Paratroopers Brigade. He fought in the Six-Day War, Yom Kippur War, 1982 Lebanon War, and Operation Entebbe with the paratroopers and Sayeret Matkal, an elite special forces unit. In 1998 he became the sixteenth Chief of the General Staff of the IDF, serving until 2002. He is of Iranian Jewish ancestry.
After leaving the army, he entered politics. He was appointed Minister of Defense in 2002, holding the position until 2006 when he was elected to the Knesset on the Kadima list. He then served as Deputy Prime Minister and Minister of Transportation and Road Safety until 2009. After becoming Kadima leader in March 2012 he became Leader of the Opposition, before returning to the cabinet during a 70-day spell in which he served as Acting Prime Minister, Vice Prime Minister and Minister without Portfolio. Kadima was reduced to just two seats in the 2013 elections, and Mofaz retired from politics shortly before the 2015 elections.
Shaul Mofaz was born Shahrām Mofazzazkār on 4 November 1948 in Tehran, to Persian Jewish parents from Isfahan. Mofaz immigrated to Israel with his parents in 1957. Upon graduating from high school in 1966, he joined the Israel Defense Forces and served in the Paratroopers Brigade. He served in the Six-Day War, Yom Kippur War, 1982 Lebanon War, and Operation Entebbe with the paratroopers and Sayeret Matkal, an elite special forces unit.
Mofaz was then appointed an infantry brigade commander for the 1982 Lebanon War. Afterwards he attended the US Marine Corps Command and Staff College in Quantico, Virginia, United States. On his return he was briefly appointed commander of the Officers School, before returning to active service as commander of the 35th Paratroopers Brigade in 1986, and led its forces during Operation Law and Order.
Mofaz served in a series of senior military posts, having been promoted to the rank of Brigadier General (1988). In 1993 he was made commander of the IDF forces in the West Bank. In 1994, he was promoted to Major General, commanding the Southern Corps. His rapid rise continued; in 1997 Mofaz was appointed Deputy Chief of the General Staff and in 1998 he was appointed Chief of the General Staff.
His term as Chief of Staff was noted for financial and structural reforms of the Israeli Army, but the most significant event of his tenure was the eruption of the Second Intifada in September 2000. The tough tactics undertaken by Mofaz drew widespread concern from the international community but were broadly supported by the Israeli public. Controversy erupted over the offensive in Jenin, intermittent raids in the Gaza Strip, and the continued isolation of Yasser Arafat.
Mofaz foresaw the wave of violence coming as early as 1999 and prepared the IDF for intense guerrilla warfare in the territories. He fortified posts in the Gaza Strip and kept Israel Defense Forces casualties low. While he was known for claiming that "Israel has the most moral army in the world," he drew criticism from both Israeli and international human rights monitoring groups because of the methods he had undertaken, including using armored bulldozers to demolish 2,500 Palestinian civilian homes, displacing thousands, in order to create a security "buffer zone" along the Rafah border.
Following a government crisis in 2002, Shaul Mofaz was appointed Defense Minister by Ariel Sharon. Although he supported an agreement with the Palestinians, he was willing to make no compromise in the war against militant groups such as Hamas, Islamic Jihad, Tanzim, and Al-Aqsa Martyrs Brigades.
The fact that he had only recently left his position as IDF Chief of Staff prevented him from participating in the 2003 election (by which time Mofaz had joined Sharon's Likud). Nevertheless, Sharon reappointed him as Defense Minister in the new government.
On 21 November 2005, Mofaz rejected Sharon's invitation to join his new party, Kadima, and instead announced his candidacy for the leadership of Likud. But, on 11 December 2005, one day after he promised he would never leave the Likud, he withdrew from both the leadership race and the Likud to join Kadima.
Following the elections in late March 2006, Mofaz was moved from the position of Defense Minister and received the Transport ministry in the new Cabinet installed on 4 May 2006.
In 2008, with Israel's then prime minister, Ehud Olmert, being pressured to resign due to corruption charges, Mofaz announced that he would run for the leadership of the Kadima party.
On 5 August 2008, Mofaz officially entered the race to be leader of Kadima. That same day he received a blessing from Shas spiritual leader Rabbi Ovadia Yosef. On 17 September 2008, he lost the Kadima party election to Tzipi Livni for the posts of Prime Minister and leader of Kadima. Livni's narrow margin was 43.1% to Mofaz's 42.0%, or 431 votes, a huge difference from the 10-to-12-point exit-poll margins. She said the "national responsibility (bestowed) by the public brings me to approach this job with great reverence". Mofaz accepted the Kadima primary's result, despite his lawyer Yehuda Weinstein's advice to appeal, and telephoned Livni to congratulate her. Livni received 16,936 votes to Mofaz's 16,505. Public Security Minister Avi Dichter and Interior Minister Meir Sheetrit received 6.5% and 8.5% respectively.
Placed second on the Kadima list, Mofaz retained his seat in the 2009 elections, but lost his cabinet position after Likud formed the government.
On 27 March 2012, Shaul Mofaz won the Kadima party leadership primaries by a landslide, defeating party chairwoman Tzipi Livni. Mofaz became Vice Prime Minister as part of a deal for a national unity government with Binyamin Netanyahu, despite having said during the Kadima primaries that he would not join a government led by Netanyahu.
Mofaz left the government over Netanyahu's indecision on a draft reform law, warning that the prime minister was trying to patch together a majority for a vote that could plunge the region into war.
In the 2013 elections, Kadima, the ruling party just four years earlier, received 2% of the vote, barely crossing the threshold to enter the Knesset.
In the build-up to the 2015 elections, Kadima was not expected to pass the electoral threshold, which had been raised to 3.25%. Mofaz negotiated with the Zionist Union alliance to bring Kadima onto their slate, but ended negotiations when it became clear he would not be their candidate for Defense Minister. Immediately after Mofaz announced he was not joining the Zionist Union slate, it was announced that former Military Intelligence Directorate head Amos Yadlin had been appointed to the Zionist Union slate and would be their candidate for Defense Minister. Within a week of announcing that he was not running with the Zionist Union, Mofaz announced his retirement from politics.
A fictionalized version of Mofaz appeared in the 2008 drama film "Lemon Tree".
Stasi
The Ministry for State Security (Ministerium für Staatssicherheit, MfS) or State Security Service (Staatssicherheitsdienst, SSD), commonly known as the Stasi, was the official state security service of the German Democratic Republic (East Germany). It has been described as one of the most effective and repressive intelligence and secret police agencies ever to have existed. The Stasi was headquartered in East Berlin, with an extensive complex in Berlin-Lichtenberg and several smaller facilities throughout the city. The Stasi motto was "Schild und Schwert der Partei" (Shield and Sword of the Party), referring to the ruling Socialist Unity Party of Germany (Sozialistische Einheitspartei Deutschlands, SED) and also echoing a theme of the KGB, the Soviet counterpart and close partner, with respect to its own ruling party, the Communist Party of the Soviet Union (CPSU). Erich Mielke was the Stasi's longest-serving chief, in power for 32 years of the GDR's 40 years of existence.
One of its main tasks was spying on the population, primarily through a vast network of citizens turned informants, and fighting any opposition by overt and covert measures, including the hidden psychological destruction of dissidents (Zersetzung, literally meaning "decomposition"). It arrested 250,000 people as political prisoners during its existence. Its Main Directorate for Reconnaissance (Hauptverwaltung Aufklärung) was responsible both for espionage and for conducting covert operations in foreign countries. Under its long-time head Markus Wolf, this directorate gained a reputation as one of the most effective intelligence agencies of the Cold War. The Stasi also maintained contacts, and occasionally cooperated, with Western terrorists.
Numerous Stasi officials were prosecuted for their crimes after 1990. After German reunification, the surveillance files that the Stasi had maintained on millions of East Germans were opened, so that any citizen could inspect their personal file on request. These files are now maintained by the Stasi Records Agency.
The Stasi was founded on 8 February 1950. Wilhelm Zaisser was the first Minister of State Security of the GDR, and Erich Mielke was his deputy. Zaisser tried to depose SED General Secretary Walter Ulbricht after the June 1953 uprising, but was instead removed by Ulbricht and replaced with Ernst Wollweber. Following the June 1953 uprising, the Politbüro decided to downgrade the apparatus to a State Secretariat and incorporate it into the Ministry of the Interior under the leadership of Willi Stoph. The Minister of State Security simultaneously became a State Secretary of State Security. The Stasi held this status until November 1955, when it was restored to a ministry. Wollweber resigned in 1957 after clashes with Ulbricht and Erich Honecker, and was succeeded by his deputy, Erich Mielke.
In 1957, Markus Wolf became head of the Hauptverwaltung Aufklärung (HVA) (Main Reconnaissance Administration), the foreign intelligence section of the Stasi. As intelligence chief, Wolf achieved great success in penetrating the government, political and business circles of West Germany with spies. The most influential case was that of Günter Guillaume, which led to the downfall of West German Chancellor Willy Brandt in May 1974. In 1986, Wolf retired and was succeeded by Werner Grossmann.
Although Mielke's Stasi was superficially granted independence in 1957, the KGB continued until 1990 to maintain liaison officers in all eight main Stasi directorates, each with his own office inside the Stasi's Berlin compound, and in each of the fifteen Stasi district headquarters around East Germany. Collaboration was so close that the KGB invited the Stasi to establish operational bases in Moscow and Leningrad to monitor visiting East German tourists, and Mielke referred to Stasi officers as "Chekists of the Soviet Union". In 1978, Mielke formally granted KGB officers in East Germany the same rights and powers that they enjoyed in the Soviet Union.
The Ministry for State Security was organized according to the "line principle". A high-ranking official was in charge of a particular mission of the Ministry and headed a division in the Central Apparatus ("Zentrale"). A corresponding division was organized in each of the 15 District Departments for State Security ("Bezirksverwaltungen für Staatssicherheit"), one in the Berlin Capital Region and one in each of the 14 regional districts ("Bezirke"). At the local level the Stasi had Area Precincts for State Security ("Kreisdienststellen für Staatssicherheit"), one each for the 227 cities and municipal districts and the 11 city boroughs ("Stadtbezirke") of East Berlin. A single case officer held responsibility for the particular mission in each area precinct. The line principle meant that the case officers were subordinated to the specialized divisions at the district departments, the specialized divisions at the district departments were subordinated to the corresponding division in the central apparatus, and the whole line was under the direct command and control of the high-ranking Stasi officer in charge of the mission. The Stasi also fielded Location Detachments ("Objektdienststellen") at state-owned enterprises of high importance (such as the joint Soviet-East German Wismut uranium mining company). Shortly before the transformation of the Stasi into the Office of National Security, the Ministry had the following structure:
Minister for State Security
Policy Board ("Kollegium des MfS", comprising the Minister and his deputies)
Central Apparatus ("Zentrale")
Divisions directly subordinated to the Minister, Army General Erich Mielke ("Dem Minister für Staatssicherheit direkt unterstellte Diensteinheiten")
Divisions directly subordinated to the Deputy Minister, Colonel General Werner Großmann ("Dem Stellvertreter GO Großmann unterstellte Diensteinheiten") (his predecessor was the legendary Colonel General Markus Wolf)
Divisions directly subordinated to the Deputy Minister, Colonel General Rudi Mittig ("Dem Stellvertreter GO Mittig unterstellte Diensteinheiten")
Divisions directly subordinated to the Deputy Minister, Lieutenant General Gerhard Neiber ("Dem Stellvertreter GL Neiber unterstellte Diensteinheiten")
Divisions directly subordinated to the Deputy Minister, Lieutenant General Wolfgang Schwanitz ("Dem Stellvertreter GL Schwanitz unterstellte Diensteinheiten") (Schwanitz was later appointed chief of the Stasi's successor agency, the Office for National Security)
Selected Stasi departments:
Between 1950 and 1989, the Stasi employed a total of 274,000 people in its effort to root out the class enemy. In 1989, the Stasi employed 91,015 people full-time, including 2,000 fully employed unofficial collaborators, 13,073 soldiers and 2,232 officers of the GDR army, along with 173,081 unofficial informants inside the GDR and 1,553 informants in West Germany.
Regular commissioned Stasi officers were recruited from conscripts who had been honourably discharged from their 18 months' compulsory military service, had been members of the SED, had had a high level of participation in the Party's youth wing's activities, and had been Stasi informers during their military service. Candidates then had to be recommended by their military unit's political officers and Stasi agents, by the local chiefs of the District ("Bezirk") Stasi and Volkspolizei offices of the district in which they were permanently resident, and by the District Secretary of the SED. These candidates were then made to sit several tests and exams, which assessed their intellectual capacity to be an officer and their political reliability. University graduates who had completed their military service did not need to take these tests and exams. They then attended a two-year officer training programme at the Stasi college ("Hochschule") in Potsdam. Less mentally and academically endowed candidates were made ordinary technicians and attended a one-year technology-intensive course for non-commissioned officers.
By 1995, some 174,000 Stasi informants ("inoffizielle Mitarbeiter", IMs) had been identified, almost 2.5% of East Germany's population between the ages of 18 and 60; 10,000 IMs were under 18 years of age. From the volume of material destroyed in the final days of the regime, the office of the Federal Commissioner for the Stasi Records (BStU) believes that there could have been as many as 500,000 informers. A former Stasi colonel who served in the counterintelligence directorate estimated that the figure could be as high as 2 million if occasional informants were included. There is significant debate about how many IMs were actually employed.
Full-time officers were posted to all major industrial plants (the extensiveness of any surveillance largely depended on how valuable a product was to the economy) and one tenant in every apartment building was designated as a watchdog reporting to an area representative of the Volkspolizei (Vopo). Spies reported every relative or friend who stayed the night at another's apartment. Tiny holes were drilled in apartment and hotel room walls through which Stasi agents filmed citizens with special video cameras. Schools, universities, and hospitals were extensively infiltrated, as were organizations, such as computer clubs where teenagers exchanged Western video games.
The Stasi had formal categorizations of each type of informant, and had official guidelines on how to extract information from, and control, those with whom they came into contact. The roles of informants ranged from those already in some way involved in state security (such as the police and the armed services) to those in the dissident movements (such as in the arts and the Protestant Church). Information gathered about the latter groups was frequently used to divide or discredit members. Informants were made to feel important, given material or social incentives, and were imbued with a sense of adventure, and only around 7.7%, according to official figures, were coerced into cooperating. A significant proportion of those informing were members of the SED. Use of some form of blackmail was not uncommon. A large number of Stasi informants were tram conductors, janitors, doctors, nurses and teachers. Mielke believed that the best informants were those whose jobs entailed frequent contact with the public.
The Stasi's ranks swelled considerably after Eastern Bloc countries signed the 1975 Helsinki Accords, which GDR leader Erich Honecker viewed as a grave threat to his regime because they contained language binding signatories to respect "human and basic rights, including freedom of thought, conscience, religion, and conviction". The number of IMs peaked at around 180,000 in that year, having slowly risen from 20,000–30,000 in the early 1950s and reaching 100,000 for the first time in 1968, in response to "Ostpolitik" and protests worldwide. The Stasi also acted as a proxy for the KGB in conducting activities in other Eastern Bloc countries, such as Poland, where the Soviets were despised.
The Stasi infiltrated almost every aspect of GDR life. In the mid-1980s, a network of IMs began growing in both German states. By the time that East Germany collapsed in 1989, the Stasi employed 91,015 employees and 173,081 informants. About one out of every 63 East Germans collaborated with the Stasi. By at least one estimate, the Stasi maintained greater surveillance over its own people than any secret police force in history. The Stasi employed one secret policeman for every 166 East Germans. By comparison, the Gestapo deployed one secret policeman per 2,000 people. As ubiquitous as this was, the ratios swelled when informers were factored in: counting part-time informers, the Stasi had one agent per 6.5 people. This comparison led Nazi hunter Simon Wiesenthal to call the Stasi even more oppressive than the Gestapo. Stasi agents infiltrated and undermined West Germany's government and spy agencies.
In some cases, spouses even spied on each other. A high-profile example of this was peace activist Vera Lengsfeld, whose husband, Knud Wollenberger, was a Stasi informant.
The Stasi perfected the technique of psychological harassment of perceived enemies known as "Zersetzung", a term borrowed from chemistry which literally means "decomposition".
By the 1970s, the Stasi had decided that the methods of overt persecution that had been employed up to that time, such as arrest and torture, were too crude and obvious. It was realised that psychological harassment was far less likely to be recognised for what it was, so its victims, and their supporters, were less likely to be provoked into active resistance, given that they would often not be aware of the source of their problems, or even its exact nature. "Zersetzung" was designed to side-track and "switch off" perceived enemies so that they would lose the will to continue any "inappropriate" activities.
Tactics employed under "Zersetzung" generally involved the disruption of the victim's private or family life. This often included psychological attacks, such as breaking into homes and subtly manipulating the contents, in a form of gaslighting – moving furniture, altering the timing of an alarm, removing pictures from walls or replacing one variety of tea with another. Other practices included property damage, sabotage of cars, purposely incorrect medical treatment, smear campaigns including sending falsified compromising photos or documents to the victim's family, denunciation, provocation, psychological warfare, psychological subversion, wiretapping, bugging, mysterious phone calls or unnecessary deliveries, even including sending a vibrator to a target's wife. Usually, victims had no idea that the Stasi were responsible. Many thought that they were losing their minds, and mental breakdowns and suicide could result.
One great advantage of the harassment perpetrated under "Zersetzung" was that its subtle nature meant it could be plausibly denied. This was important given that the GDR was trying to improve its international standing during the 1970s and 80s, especially in conjunction with the "Ostpolitik" of West German Chancellor Willy Brandt, which greatly improved relations between the two German states.
After German reunification, revelations of the Stasi's international activities were publicized, such as its military training of the West German Red Army Faction.
Recruitment of informants became increasingly difficult towards the end of the GDR's existence, and after 1986 there was a negative turnover rate of IMs. This had a significant impact on the Stasi's ability to monitor the populace in a period of growing unrest, and knowledge of the Stasi's activities became more widespread. The Stasi had been tasked during this period with preventing the country's economic difficulties from becoming a political problem, through suppression of the very worst problems the state faced, but it failed to do so.
Stasi officers reportedly discussed re-branding East Germany as a democratic capitalist country to the West, one which in actuality would have been taken over by Stasi officers. The plan specified 2,587 OibE officers ("Offiziere im besonderen Einsatz", "officers on special assignment") who would have assumed power, as detailed in the Top Secret Document 0008-6/86 of 17 March 1986. According to Ion Mihai Pacepa, the chief intelligence officer of communist Romania, other communist intelligence services had similar plans. On 12 March 1990, "Der Spiegel" reported that the Stasi was indeed attempting to implement 0008-6/86. Pacepa has noted that the way KGB Colonel Vladimir Putin took over Russia resembles these plans. See Putinism.
On 7 November 1989, in response to the rapidly changing political and social situation in the GDR in late 1989, Erich Mielke resigned. On 17 November 1989, the Council of Ministers "(Ministerrat der DDR)" renamed the Stasi the "Office for National Security" "(Amt für Nationale Sicherheit" – AfNS), which was headed by "Generalleutnant" Wolfgang Schwanitz. On 8 December 1989, GDR Prime Minister Hans Modrow directed the dissolution of the AfNS, which was confirmed by a decision of the "Ministerrat" on 14 December 1989.
As part of this decision, the "Ministerrat" originally called for the evolution of the AfNS into two separate organizations: a new foreign intelligence service ("Nachrichtendienst der DDR") and an "Office for the Protection of the Constitution of the GDR" ("Verfassungsschutz der DDR"), along the lines of the West German "Bundesamt für Verfassungsschutz". However, the public reaction was extremely negative, and under pressure from the "Round Table" ("Runder Tisch"), the government dropped the creation of the "Verfassungsschutz der DDR" and directed the immediate dissolution of the AfNS on 13 January 1990. Certain functions of the AfNS reasonably related to law enforcement were handed over to the GDR Ministry of Internal Affairs, which also took guardianship of the remaining AfNS facilities.
When the parliament of Germany investigated public funds that disappeared after the Fall of the Berlin Wall, it found out that East Germany had transferred large amounts of money to Martin Schlaff through accounts in Vaduz, the capital of Liechtenstein, in return for goods "under Western embargo".
Moreover, high-ranking Stasi officers continued their post-GDR careers in management positions in Schlaff's group of companies. For example, in 1990, Herbert Kohler, Stasi commander in Dresden, transferred 170 million marks to Schlaff for "harddisks" and months later went to work for him.
The investigations concluded that "Schlaff's empire of companies played a crucial role" in the Stasi attempts to secure the financial future of Stasi agents and keep the intelligence network alive.
The "Stern" magazine noted that KGB officer (and future Russian President) Vladimir Putin worked with his Stasi colleagues in Dresden in 1989.
During the Peaceful Revolution of 1989, Stasi offices and prisons throughout the country were occupied by citizens, but not before the Stasi destroyed a number of documents (approximately 5%) consisting of, by one calculation, 1 billion sheets of paper.
With the fall of the GDR the Stasi was dissolved. Stasi employees began to destroy the extensive files and documents they held, by hand, by fire and with the use of shredders. When these activities became known, a protest began in front of the Stasi headquarters. On the evening of 15 January 1990, a large crowd formed outside the gates calling for a stop to the destruction of sensitive files. The building contained vast records of personal files, many of which would form important evidence in convicting those who had committed crimes for the Stasi. The protesters continued to grow in number until they were able to overcome the police and gain entry into the complex. Once inside, specific targets of the protesters' anger were portraits of Erich Honecker and Erich Mielke, which were trampled on or burnt. Among the protesters were former Stasi collaborators seeking to destroy incriminating documents.
With the German reunification on 3 October 1990, a new government agency was founded called the "Federal Commissioner for the Records of the State Security Service of the former German Democratic Republic" (), officially abbreviated "BStU". There was a debate about what should happen to the files, whether they should be opened to the people or kept closed.
Those who opposed opening the files cited privacy as a reason. They felt that the information in the files would lead to negative feelings about former Stasi members and, in turn, cause violence. Pastor Rainer Eppelmann, who became Minister of Defense and Disarmament after March 1990, felt that new political freedoms for former Stasi members would be jeopardized by acts of revenge. Prime Minister Lothar de Maizière even went so far as to predict murder. Opponents also argued against using the files to capture former Stasi members and prosecute them, on the grounds that not all former members were criminals and that they should not be punished solely for having been members. There were also some who believed that everyone was guilty of something. Peter-Michael Diestel, the Minister of Interior, opined that these files could not be used to determine innocence and guilt, claiming that "there were only two types of individuals who were truly innocent in this system, the newborn and the alcoholic". Others, such as West German Interior Minister Wolfgang Schäuble, believed in putting the Stasi behind them and working on German reunification.
Others argued that everyone should have the right to see their own file, and that the files should be opened to investigate former Stasi members and prosecute them, as well as to bar them from holding office. Opening the files would also help clear up some of the rumors circulating at the time. Some also believed that politicians involved with the Stasi should be investigated.
The fate of the files was finally decided under the Unification Treaty between the GDR and West Germany. This treaty took the Volkskammer law further and allowed more access and use of the files. Along with the decision to keep the files in a central location in the East, they also decided who could see and use the files, allowing people to see their own files.
In 1992, following a declassification ruling by the German government, the Stasi files were opened, leading people to look for their files. Timothy Garton Ash, an English historian, after reading his file, wrote "The File: A Personal History".
Between 1991 and 2011, around 2.75 million individuals, mostly GDR citizens, requested to see their own files. The ruling also gave people the ability to make duplicates of their documents. Another big issue was how the media could use and benefit from the documents. It was decided that the media could obtain files as long as they were depersonalized and not regarding an individual under the age of 18 or a former Stasi member. This ruling not only gave the media access to the files, but also gave schools access.
Alongside such groups, individuals dedicated to tracking down ex-Stasi members were also active. Many of these hunters succeeded in catching ex-Stasi; however, charges could not be brought for mere membership: the person in question had to have participated in an illegal act, not just be a registered Stasi member. Among the high-profile individuals who were arrested and tried were Erich Mielke, Third Minister of State Security of the GDR, and Erich Honecker, head of state of the GDR. Mielke was sentenced to six years in prison for the murder of two policemen in 1931. Honecker was charged with authorizing the killing of would-be escapees on the east-west frontier and the Berlin Wall. During his trial, he went through cancer treatment; because he was nearing death, he was allowed to spend his remaining time in freedom. He died in Chile in May 1994.
Reassembling the destroyed files has been relatively easy due to the number of archives and the failure of shredding machines (in some cases "shredding" meant tearing paper in two by hand, and such documents could be recovered easily). In 1995, the BStU began reassembling the shredded documents; 13 years later, the three dozen archivists assigned to the project had reassembled only 327 bags. They are now using computer-assisted data recovery to reassemble the remaining 16,000 bags, estimated at 45 million pages. It is estimated that this task may be completed at a cost of 30 million dollars.
The CIA acquired some Stasi records during the looting of the Stasi's archives. West Germany asked for their return and received some in April 2000. See also Rosenholz files.
There are a number of memorial sites and museums relating to the Stasi in former Stasi prisons and administration buildings. In addition, the offices of the Stasi Records Agency in Berlin, Dresden, Erfurt, Frankfurt-an-der-Oder and Halle (Saale) all have permanent and changing exhibitions relating to the activities of the Stasi in their region.
Memorial and Education Centre Andreasstrasse - a museum in Erfurt housed in a former Stasi remand prison. From 1952 until 1989, over 5,000 political prisoners were held on remand and interrogated in the Andreasstrasse prison, which was one of 17 Stasi remand prisons in the GDR. On 4 December 1989, local citizens occupied the prison and the neighbouring Stasi district headquarters to stop the mass destruction of Stasi files. It was the first time East Germans had undertaken such resistance against the Stasi, and it instigated the takeover of Stasi buildings throughout the country.
The Bautzner Strasse Memorial in Dresden - a Stasi remand prison and the Stasi's regional head office in Dresden. It was used as a prison by the Soviet occupying forces from 1945 to 1953, and from 1953 to 1989 by the Stasi. The Stasi held and interrogated between 12,000 and 15,000 people during the time they used the prison. The building was originally a 19th-century paper mill; it was converted into a block of flats in 1933 before being confiscated by the Soviet army in 1945. The Stasi prison and offices were occupied by local citizens on 5 December 1989, during a wave of such takeovers across the country. The museum and memorial site was opened to the public in 1994.
A memorial and museum at Collegienstraße 10 in Frankfurt-an-der-Oder, in a building that was used as a detention centre by the Gestapo, the Soviet occupying forces and the Stasi. The building housed the Stasi district offices and a remand prison from 1950 until 1969, after which the Volkspolizei used the prison. From 1950 to 1952 it was also an execution site, where 12 people sentenced to death were executed. The prison closed in 1990. It has been a cultural centre and a memorial to the victims of political tyranny since June 1994, managed by the Museum Viadrina.
A memorial and 'centre of encounter' in Gera, in a former remand prison, originally opened in 1874, that was used by the Gestapo from 1933 to 1945, by the Soviet occupying forces from 1945 to 1949, and by the Stasi from 1952 to 1989. The building also housed the district offices of the Stasi administration. Between 1952 and 1989 over 2,800 people were held in the prison on political grounds. The memorial site opened under the official name "Die Gedenk- und Begegnungsstätte im Torhaus der politischen Haftanstalt von 1933 bis 1945 und 1945 bis 1989" in November 2005.
The "Roter Ochse" (Red Ox) is a museum and memorial site at the prison at Am Kirchtor 20, Halle (Saale). Part of the prison, built in 1842, was used by the Stasi from 1950 until 1989, during which time over 9,000 political prisoners were held there. From 1954 it was mainly used for women prisoners. The name "Roter Ochse" is the informal name of the prison, possibly originating in the 19th century from the colour of the external walls. It still operates as a prison for young people. Since 1996, the building that was used as an interrogation centre by the Stasi and as an execution site by the Nazis has been a museum and memorial centre for victims of political persecution.
The memorial site at Moritzplatz in Magdeburg is a museum on the site of a former prison, built between 1873 and 1876, that was used by the Soviet administration from 1945 to 1949 and by the Stasi from 1958 until 1989 to hold political prisoners. Between 1950 and 1958 the Stasi shared another prison with the civil police, while the prison at Moritzplatz was used by the Volkspolizei from 1952 until 1958. Between 1945 and 1989, more than 10,000 political prisoners were held in the prison. The memorial site and museum was founded in December 1990.
Lindenstrasse Prison, Potsdam - the Soviet administration took over the prison in 1945, also using it to hold political prisoners on remand. The Stasi then used it as a remand prison, mainly for political prisoners, from 1952 until 1989. Over 6,000 people were held in the prison by the Stasi during that time. On 27 October 1989, all of the prison's political prisoners were freed under a nationwide amnesty. On 5 December 1989, the Stasi headquarters in Potsdam and the Lindenstrasse Prison were occupied by protesters. From January 1990 the building was used as offices for various citizens' initiatives and new political groups, such as the Neues Forum. The building was opened to the public from 20 January 1990 and people were taken on tours of the site. It officially became a memorial site in 1995.
Another former prison, in Mecklenburg-Vorpommern, closed in the early 1990s. The state took ownership of it in 1998, and the memorial site and museum were established in 1999. An extensive restoration of the site began in December 2018.
Former Stasi agent Matthias Warnig (codename "Arthur") is currently the CEO of Nord Stream.
German investigations have revealed that some of the key Gazprom Germania managers are former Stasi agents.
Former Stasi officers continue to be politically active via the "Gesellschaft zur Rechtlichen und Humanitären Unterstützung" (GRH, Society for Legal and Humanitarian Support). Former high-ranking officers and employees of the Stasi, including the last Stasi director, Wolfgang Schwanitz, make up the majority of the organization's members, and it receives support from the German Communist Party, among others.
Impetus for the establishment of the GRH was provided by the criminal charges filed against Stasi personnel in the early 1990s. The GRH, decrying the charges as "victor's justice", called for them to be dropped. Today the group provides an alternative, if somewhat utopian, voice in the public debate on the GDR legacy. It calls for the closure of the Berlin-Hohenschönhausen Memorial and can be a vocal presence at memorial services and public events. In March 2006 in Berlin, GRH members disrupted a museum event; a political scandal ensued when the Berlin Senator (Minister) of Culture refused to confront them.
Behind the scenes, the GRH also lobbies people and institutions promoting opposing viewpoints. For example, in March 2006, the Berlin Senator for Education received a letter from a GRH member and former Stasi officer attacking the Museum for promoting "falsehoods, anticommunist agitation and psychological terror against minors". Similar letters have also been received by schools organizing field trips to the museum. | https://en.wikipedia.org/wiki?curid=29452 |
Sandra Bullock
Sandra Annette Bullock (born July 26, 1964) is an American-German actress, producer, and philanthropist. She was the world's highest-paid actress in 2010 and 2014. In 2015, Bullock was chosen as "People"'s Most Beautiful Woman, and in 2010 she was included in "Time"'s list of the 100 most influential people in the world. Bullock is the recipient of several accolades, including an Academy Award and a Golden Globe Award.
After making her acting debut with a minor role in the thriller "Hangmen" (1987), Bullock received early attention for her supporting work in the action film "Demolition Man" (1993). Her breakthrough came in the action thriller "Speed" (1994). She established herself in the 1990s with leading roles in the romantic comedies "While You Were Sleeping" (1995) and "Hope Floats" (1998), and the thrillers "The Net" (1995) and "A Time to Kill" (1996). Bullock achieved further success in the following decades with the comedies "Miss Congeniality" (2000), "Two Weeks Notice" (2002), "The Proposal" (2009), "The Heat" (2013), and "Ocean's 8" (2018), the drama "Crash" (2004), and the thrillers "Premonition" (2007) and "Bird Box" (2018). Bullock was awarded the Academy Award for Best Actress and the Golden Globe Award for Best Actress in a Drama for portraying Leigh Anne Tuohy in the biographical drama "The Blind Side" (2009). She was nominated in the same categories for playing an astronaut stranded in space in the science fiction thriller "Gravity" (2013), which was her highest-grossing live-action release.
In addition to her acting career, Bullock is the founder of the production company Fortis Films. She has produced some of the films in which she has starred, including "Miss Congeniality 2: Armed and Fabulous" (2005) and "All About Steve" (2009). She was an executive producer of the ABC sitcom "George Lopez" (2002–2007) and made several appearances during its run.
Bullock was born in Arlington, Virginia, on July 26, 1964, the daughter of Helga Mathilde Meyer (1942–2000), an opera singer and voice teacher from Germany, and John W. Bullock (1925–2018), an Army employee and part-time voice coach from Birmingham, Alabama. Her father, who was in charge of the Army's Military Postal Service in Europe, was stationed in Nuremberg when he met her mother. Her parents married in Germany. Bullock's maternal grandfather was a German rocket scientist from Nuremberg. The family returned to Arlington, where her father worked with the Army Materiel Command before becoming a contractor for The Pentagon. Bullock has a younger sister, Gesine Bullock-Prado, who served as president of Bullock's production company Fortis Films.
For 12 years Bullock was raised in Nuremberg, Germany, and in Vienna and Salzburg, Austria, and grew up speaking German. She had a Waldorf education in Nuremberg. As a child, while her mother went on European opera tours, Bullock usually stayed with her aunt Christl and cousin Susanne, the latter of whom later married politician Peter Ramsauer. Bullock studied ballet and vocal arts as a child and frequently accompanied her mother, taking small parts in her opera productions. In Nuremberg, she sang in the opera's children's choir. Bullock has a scar above her left eye, caused by a fall into a creek when she was a child. While she maintains her American citizenship, Bullock applied for German citizenship in 2009.
Bullock attended Washington-Lee High School, where she was a cheerleader and performed in school theater productions. After graduating in 1982, she attended East Carolina University (ECU) in Greenville, North Carolina, where she received a BFA in Drama in 1987. While at ECU, she performed in multiple theater productions including "Peter Pan" and "Three Sisters". She then moved to Manhattan, New York, where she supported herself as a bartender, cocktail waitress, and coat checker while auditioning for roles.
While in New York, Bullock took acting classes with Sanford Meisner. She appeared in several student films, and later landed a role in the Off-Broadway play "No Time Flat". Director Alan J. Levi was impressed by Bullock's performance and offered her a part in the made-for-television film "Bionic Showdown: The Six Million Dollar Man and the Bionic Woman" (1989). This led to her being cast in a series of small roles in several independent films, as well as in the lead role of the short-lived NBC television version of the film "Working Girl" (1990). She went on to appear in several films, such as "Love Potion No. 9" (1992), "The Thing Called Love" (1993) and "Fire on the Amazon" (1993), before rising to early prominence with her supporting role in the sci-fi action film "Demolition Man" (1993).
Bullock's big breakthrough came in 1994, when she played Annie Porter, a passenger who ends up driving the bus, in the smash-hit blockbuster "Speed" alongside Keanu Reeves. She was required to read for "Speed" to ensure that there was the right chemistry between her and Reeves. She recalls that they had to do "all these really physical scenes together, rolling around on the floor and stuff." "Speed" garnered acclaim, with Rotten Tomatoes deeming it a "terrific popcorn thriller [with] outstanding performances from Keanu Reeves, Dennis Hopper, and Sandra Bullock". It took in US$350 million worldwide.
After the success of "Speed", Bullock established herself as a Hollywood leading actress. In the romantic comedy "While You Were Sleeping" (1995), she portrayed a lonely Chicago Transit Authority token collector who saves the life of a man. The film made US$182 million globally and received positive reviews, with Rotten Tomatoes' critical consensus reading: ""While You Were Sleeping" is built wholly from familiar ingredients, but assembled with such skill—and with such a charming performance from Sandra Bullock—that it gives formula a good name." She received her first Golden Globe Award nomination for Best Actress – Motion Picture Musical or Comedy. Also in 1995, Bullock starred in the thriller "The Net" as a computer programmer who stumbles upon a conspiracy that puts her life and the lives of those around her in great danger. Owen Gleiberman, writing for "Entertainment Weekly", complimented her performance, saying "Bullock pulls you into the movie. Her overripe smile and clear, imploring eyes are sometimes evocative of Julia Roberts". "The Net" made US$110.6 million.
In the crime drama "A Time to Kill" (1996), Bullock portrayed a member of the team defending a father on trial for murdering the two men who raped his young daughter, opposite Samuel L. Jackson, Matthew McConaughey and Kevin Spacey. She received an MTV Movie Award nomination for Best Breakthrough Performance. The film grossed US$152 million around the world. Bullock subsequently received US$11 million for the critically panned "Speed 2: Cruise Control" (1997), which she agreed to star in to secure financial backing for her next project, "Hope Floats" (1998); she has stated that she regrets making the sequel. In "Hope Floats" she starred as an unassuming housewife whose life is disrupted when her husband (played by Michael Paré) reveals his infidelity to her on a talk show. The film made US$81.4 million, and critic James Berardinelli remarked that her "undisputed strength lies in a blend of light drama and comedy".
Bullock starred in the comedy "Practical Magic" (1998) with Nicole Kidman, as two witch sisters who face a curse which threatens to prevent them from ever finding lasting love. While the film opened atop the chart on its North American opening weekend, it flopped at the box office. The same year she provided her voice as Miriam in the animated adventure film "The Prince of Egypt" and wrote, produced, and directed the short film "Making Sandwiches". Alongside Ben Affleck, she played a free-spirited drifter who begins to talk to a writer in the 1999 romantic comedy "Forces of Nature". The film was a commercial hit, grossing US$93 million worldwide, and "Boxoffice Magazine" remarked: "The combination of Affleck's deadpan by-the-book persona with the spontaneity of Bullock's character sparks with convincing chemistry, their diverse personalities causing both to grow and bring to the surface what each is running away from or can't admit."
Bullock took on the role of an FBI agent who must go undercover as a beauty pageant contestant in the comedy "Miss Congeniality" (2000). It was a financial success, grossing US$212 million worldwide, and earned Bullock another Golden Globe Award nomination for Best Actress – Motion Picture Musical or Comedy. Also in 2000, she played a newspaper columnist obliged to enter a rehabilitation program for alcoholism in the dramedy "28 Days"; it was a moderate commercial success with a global gross of US$62.1 million. Bullock produced the romantic comedy "Kate & Leopold", released in 2001, then starred in the psychological thriller "Murder by Numbers" (2002) as a seasoned homicide detective. Roger Ebert awarded the film three stars out of a possible four, stating: "Bullock does a good job here of working against her natural likability, creating a character you'd like to like, and could like, if she weren't so sad, strange and turned in upon herself."
Bullock teamed up with Hugh Grant for the romantic comedy "Two Weeks Notice" (2002) in which she starred as a lawyer who walks out on her boss. Liz Braun, of "Jam! Movies", found Bullock and Grant to be "perfectly paired", stating: "The script allows the two actors to be at their comedic best, even though the film as a whole is amateurish in many ways". "Two Weeks Notice" made US$199 million globally. She was presented with the Raul Julia Award for Excellence in 2002 for helping expand career openings for Hispanic talent in the media and entertainment industry as the executive producer of the sitcom "George Lopez" (2002–2007). She also made several appearances on the show as Accident Amy, an accident-prone employee at the factory Lopez's character manages.
As part of a large ensemble cast, Bullock played the wife of a district attorney in the drama "Crash" (2004), which won the Academy Award for Best Picture. She received positive reviews for her performance; some critics suggested that it was the best of her career. In 2005, she received a US$17.5 million salary for "Miss Congeniality 2: Armed and Fabulous" and was a co-recipient of the Women in Film Crystal Award. In the romantic drama "The Lake House" (2006), Bullock reunited with Keanu Reeves, although their characters were separated throughout the film; they were only on set together for two weeks during filming. The film had a negative critical response but made US$114.8 million. In 2006 Bullock also played Harper Lee in "Infamous", a drama based on George Plimpton's 1997 book, "Truman Capote: In Which Various Friends, Enemies, Acquaintances, and Detractors Recall His Turbulent Career".
Bullock headlined the supernatural thriller "Premonition" (2007) as a housewife who experiences the days surrounding her husband's death in non-chronological order. Despite negative reviews, several critics, including Rex Reed, commended Bullock for her performance and the film grossed US$84.1 million around the globe. In 2008 Bullock was announced as the face of the cosmetic brand Artistry.
Bullock had two record highs in 2009. The romantic comedy "The Proposal", with Ryan Reynolds, grossed US$317 million at the box office worldwide, making it her fourth-most successful picture to date. She received her third Golden Globe Award nomination for Best Performance by an Actress in a Motion Picture – Musical or Comedy. The drama "The Blind Side" opened at number two behind "The Twilight Saga: New Moon" with US$34.2 million, making it her second-highest opening weekend ever. "The Blind Side" grossed over US$309 million, making it her highest-grossing domestic film, her fourth-highest-grossing film worldwide, and the first film in history to pass the US$200 million mark with only one top-billed female star. Bullock had initially turned down the role of Leigh Anne Tuohy three times due to discomfort in portraying a devout Christian. She was awarded the Academy Award, Golden Globe Award, Screen Actors Guild Award and Critics' Choice Movie Award for Best Actress. "The Blind Side" also received an Academy Award for Best Picture nomination.
Winning the Oscar also gave Bullock a unique distinction: having won two Razzies the day before for her performance in "All About Steve" (2009), she is the only performer ever to have been named both best and worst for the same year. Following a two-year hiatus from the screen, Bullock starred alongside Tom Hanks as a widow of the September 11 attacks in the drama "Extremely Loud & Incredibly Close", a film adaptation of the novel of the same name. Despite mixed reviews, the film was nominated for numerous awards, including the Academy Award for Best Picture. Bullock was nominated for Best Actress in a Drama at the Teen Choice Awards.
In 2013, Bullock starred alongside Melissa McCarthy in the comedy "The Heat" as an FBI special agent who, along with a Boston city detective, must take down a mobster. It received positive reviews from critics and took in US$230 million at the box office worldwide. Bullock then played an astronaut stranded in space in the sci-fi thriller "Gravity", opposite George Clooney, which premiered at the 70th Venice Film Festival and was released on October 4, 2013, to coincide with the beginning of World Space Week. "Gravity" received universal acclaim from critics and a standing ovation in Venice. The film was called "the most realistic and beautifully choreographed film ever set in space", and some critics called Bullock's performance the best work of her career.
"Gravity" took in US$716 million at the box office worldwide and made it Bullock's second-most successful picture. For her role as Dr. Ryan Stone, Bullock was nominated for the Academy Award, Golden Globe Award, BAFTA Award, Screen Actors Guild Award, and Critics' Choice Movie Award for Best Actress.
In 2015, Bullock provided the voice of the villain in the animated film "Minions", which became her highest-grossing film to date with over US$1.1 billion worldwide. The same year she executive produced and starred in the drama "Our Brand Is Crisis", based on the 2005 documentary film of the same name by Rachel Boynton, playing a political consultant hired to help win a Bolivian presidential election. The film was a critical and commercial flop, and after its release she took another sabbatical from film. Bullock returned in "Ocean's 8" (2018), an all-female spin-off of the "Ocean's Eleven" franchise directed by Gary Ross. Bullock plays Debbie Ocean, the sister of Danny Ocean, who helps plan a sophisticated heist of the annual Met Gala in New York City. The film was a commercial success, grossing US$296.9 million globally.
Bullock played Malorie, a woman who must find a way to guide herself and her children to safety despite the potential threat from an unseen adversary, in the Netflix post-apocalyptic horror film "Bird Box" (2018), based on the novel of the same name. Bullock received universal acclaim for her work. "Variety" found her to be "wonderfully self-reliant" while "TheWrap" described her performance as "fascinating and terrifying to watch." Bullock's films have grossed over US$5.3 billion worldwide and her total domestic gross stands at over US$2.6 billion.
Since her acting debut, Bullock has been dubbed "America's sweetheart" in the media due to her "friendly and direct and so unpretentious" nature.
She was selected as one of "People" magazine's 50 Most Beautiful People in the world in 1996 and 1999 and was also ranked number 58 on "Empire" magazine's Top 100 Movie Stars of All Time list. On March 24, 2005, Bullock received a motion picture star on the Hollywood Walk of Fame at 6801 Hollywood Boulevard in Hollywood.
In 2010, "Time" magazine included Bullock in its annual "Time" 100 as one of the most influential people in the world. Bullock was selected by "People" magazine as its 2010 Woman of the Year and ranked number 12 on "People"s Most Beautiful 2011 list.
In September 2013, Bullock joined other Hollywood legends at the TCL Chinese Theatre on Hollywood Boulevard in making imprints of her hands and feet in the cement of the theater's forecourt. In November 2013 "The Hollywood Reporter" named Bullock among the most powerful women in entertainment, and she was also named "Entertainment Weekly"'s Entertainer of the Year for her success with "The Heat" and "Gravity".
Bullock ranked number two on the 2014 "Forbes" list of most powerful actresses and was honored with the Decade of Hotness Award by Spike Guys' Choice Awards. She was named the Most Beautiful Woman by "People" in 2015.
Bullock owns the production company Fortis Films. She was an executive producer of the "George Lopez" sitcom (co-produced with Robert Borden and Bruce Helford), which garnered a syndication deal of US$10 million. Bullock tried to produce a film based on F.X. Toole's short story "Million Dollar Baby" but could not interest the studios in a female boxing drama. The story was eventually adapted and directed by Clint Eastwood as the Oscar-winning film "Million Dollar Baby" (2004). Fortis Films also produced "All About Steve" which was released in September 2009. Her father, John Bullock, was the company's CEO and her sister, Gesine Bullock-Prado, is the former president.
In November 2006, Bullock founded an Austin, Texas, restaurant named Bess Bistro which was located on West 6th Street. She later opened another business, Walton's Fancy and Staple, across the street in a building she extensively renovated. Walton's is a bakery, upscale restaurant, and floral shop that also offers services including event planning. After almost nine years in business, Bess Bistro closed on September 20, 2015.
Bullock has been a public supporter of the American Red Cross and has donated US$1 million to the organization at least five times. Her first public donation of that amount was to the Red Cross's Liberty Disaster Relief Fund. Three years later, she sent money in response to the 2004 Indian Ocean earthquake and tsunamis. In 2010, she donated US$1 million to relief efforts in Haiti following the Haiti earthquake and again donated the same amount following the 2011 Tōhoku earthquake and tsunami. She donated US$1 million in 2017 to support Red Cross relief efforts for Hurricane Harvey in Texas.
Along with other stars, Bullock did a public service announcement urging people to sign a petition for clean-up efforts of the Deepwater Horizon oil spill in the Gulf of Mexico. Bullock backs the Texas non-profit organization The Kindred Life Foundation, Inc. (KLF) and in late 2008 joined other top celebrities in supporting the work of KLF's founder and CEO, Amos Ramirez. At a fundraising gala for the organization, Bullock said, "Amos has led many efforts across our nation that have helped families that are in need. Our country needs more organizations that are committed to the service that Kindred Life is."
In 2012, Bullock was inducted into the Warren Easton Hall of Fame for her donations to charities. She was honored in 2013 with the Favorite Humanitarian Award at the 39th People's Choice Awards for her contributions to New Orleans' Warren Easton High School, which was severely damaged by Hurricane Katrina.
Bullock was once engaged to actor Tate Donovan, whom she met while filming "Love Potion No. 9". Their relationship lasted three years. She previously dated football player Troy Aikman and actors Matthew McConaughey and Ryan Gosling.
Bullock married motorcycle builder and "Monster Garage" host Jesse James on July 16, 2005. They first met when Bullock arranged for her ten-year-old godson to meet James as a Christmas present. In November 2009, Bullock and James entered into a custody battle with James' second ex-wife, former adult film actress Janine Lindemulder, with whom James had a child. Bullock and James subsequently won full legal custody of James' five-year-old daughter. A scandal arose in March 2010 when several women claimed to have had affairs with James during his marriage to Bullock. Bullock canceled European promotional appearances for "The Blind Side" citing "unforeseen personal reasons".
On March 18, 2010, James responded to the rumors of infidelity by issuing a public apology to Bullock. He stated, "The vast majority of the allegations reported are untrue and unfounded ... beyond that, I will not dignify these private matters with any further public comment." James declared, "There is only one person to blame for this whole situation, and that is me." He asked that Bullock and their children one day "find it in their hearts to forgive me" for their "pain and embarrassment". James' publicist subsequently announced on March 30, 2010, that James had checked into a rehabilitation facility to "deal with personal issues" and save his relationship with Bullock. However, on April 28, 2010, it was reported that Bullock had filed for divorce on April 23 in Austin, Texas. Their divorce was finalized on June 28, 2010, with "conflict of personalities" cited as the reason.
Bullock announced on April 28, 2010, that she had proceeded with plans to adopt a son born in January 2010 in New Orleans, Louisiana. Bullock and James had begun an initial adoption process four months earlier. Bullock's son began living with them in January 2010, but they chose to keep the news private until after the Oscars in March 2010. However, given the couple's separation and then divorce, Bullock continued the adoption of her son as a single parent. Bullock announced in December 2015 that she had adopted a second child and appeared on the cover of "People" magazine with her then three-year-old new daughter.
On December 20, 2000, Bullock was involved in a private jet crash on a runway, from which she and the two crew members escaped uninjured. The pilots were unable to activate the runway lights during a night landing at Jackson Hole Airport because they were using out-of-date approach plates, but they continued the landing anyway. The aircraft landed in the airport's graded safety area between the runway and the parallel taxiway and hit a snowbank. The accident caused a separation of the nose cone and landing gear, partial separation of the right wing, and a bend in the left wing.
While Bullock was in Massachusetts on April 18, 2008, shooting the film "The Proposal", she and her then-husband Jesse James were in a vehicle that was hit head-on by a drunk driver. They were uninjured.
Beginning in 2002, Bullock was stalked across several states by a man named Thomas James Weldon. Bullock obtained a restraining order against him in 2003, which was renewed in 2006. After the restraining order expired and Weldon was released from a mental institution, he again traveled across several states to find Bullock; she then obtained another restraining order.
Bullock won a multimillion-dollar judgment against Benny Daneshjou, the builder of her Lake Austin, Texas, home in October 2004. The jury ruled that the house was uninhabitable. It has since been torn down and rebuilt. Daneshjou and his insurer later settled with Bullock for roughly half the awarded verdict.
On April 22, 2007, a woman named Marcia Diana Valentine was found lying outside James and Bullock's home in Orange County, California. When James confronted the woman, she ran to her car, got behind the wheel, and tried to run over him. She was said to be an obsessed fan of Bullock. Valentine was charged with one felony count each of aggravated assault and stalking. Bullock obtained a restraining order to bar Valentine from "contacting or coming near her home, family or work for three years". Valentine pleaded not guilty to charges of aggravated assault and stalking. She was subsequently convicted of stalking and sentenced to three years' probation.
Joshua James Corbett broke into Bullock's Los Angeles home in June 2014. Bullock locked herself in a room and dialed 911. Corbett pleaded no contest in 2017 and was sentenced to five years' probation for stalking Bullock and breaking into her residence. He was then subject to a ten-year protective order that required him to stay away from Bullock. After Corbett missed a court date the previous month, police officers went to his parents' residence on May 2, 2018, where he lived in a guest house, to arrest him. He refused to leave and threatened to shoot officers. A SWAT team was called and, after a five-hour standoff, they deployed gas canisters and entered the house where they found Corbett had committed suicide. Corbett's death was the result of "multiple incised wounds" according to the Los Angeles County coroner. | https://en.wikipedia.org/wiki?curid=29455 |
Smallfilms
Smallfilms is a British television production company that made animated TV programmes for children from 1959 until the 1980s. In 2014 the company began operating again, producing a new series of its most famous show, "The Clangers". It was originally a partnership between Oliver Postgate (writer, animator and narrator) and Peter Firmin (modelmaker and illustrator). Several very popular series of short films were made using stop-motion animation, including "Clangers", "Noggin the Nog" and "Ivor the Engine". Another Smallfilms production, "Bagpuss", came top of a BBC poll to find the favourite British children's programme of the 20th century.
In 1957, Postgate was appointed a stage manager with Associated-Rediffusion, the company that then held the commercial weekday television franchise for London. Attached to the children's programming section, he thought he could do better within the relatively low budgets of the then black-and-white television productions.
He wrote "Alexander the Mouse", a story about a mouse born to be king. Using an Irish-produced magnetic system—on which animated characters were magnetically attached to a painted background, then filmed using a 45 degree mirror—he persuaded Peter Firmin, who was then teaching at the Central School of Art, to create the painted backgrounds. Postgate later recalled that they broadcast around 26 of these programmes live-to-air, a task made harder by the production problems encountered by the use and restrictions of using magnets.
After the relative success of "Alexander the Mouse", Postgate agreed a deal to make his next series on film, for a budget of £175 per programme (a minuscule amount even at that time). Building a stop-motion animation table in his bedroom, he wrote the Chinese serial "The Journey of Master Ho", a formal Chinese epic about a small boy and a water-buffalo. It was intended for deaf children, which had the distinct advantage that the production required no soundtrack, reducing production costs. He engaged a painter to produce the backgrounds, but as the painter was classically Chinese-trained he produced them in three-quarter view, rather than in the conventional Egyptian full-view manner needed for flat animation under a camera. This made the Firmin-produced characters look as if they were short in one leg, but the success of the production provided the foundation for Postgate and Firmin to start their own company, solely producing animated children's television programmes, initially for ITV but soon afterward also for the BBC.
Postgate's initial BBC career was not solely concerned with Smallfilms. To gain experience, he accepted a contract as a television director in the BBC Children's Department in 1960, on a show entitled "Little Laura", another animated series made on film, written and drawn by V. H. Drummond. The series continued in production until 1962, with Postgate credited also as animator on the 1962 series. He also wrote serials for long-running BBC children's programmes "Blue Peter" and stories for "Vision On".
Setting up the business in a disused cowshed at Firmin's home in Blean near Canterbury, Kent, Postgate and Firmin made children's animation programmes, based on concepts that mostly originated from Postgate. Firmin did the artwork and built the models, whilst Postgate wrote the scripts, did the stop motion filming, and voiced many of the characters. "Smallfilms" was able to produce two minutes of film per day, ten times as much as a conventional animation studio, with Postgate moving the (originally cardboard) characters himself, and working his 16mm camera frame-by-frame with a home-made clicker. As Postgate voiced so many of the productions, including the WereBear story tapes, his distinctive voice became familiar to generations of children.
They began in 1959 with "Ivor the Engine", a series for ITV about a Welsh steam locomotive who wanted to sing in a choir, based on Postgate's wartime encounter with Welshman Denzyl Ellis, who was once a fireman on the Royal Scot. It was remade in colour for the BBC in the 1970s. This was followed, also in 1959, by "Noggin the Nog", their first production for the BBC, which established Smallfilms as a safe and reliable pair of hands to produce children's entertainment, in the days when the number of UK television channels was restricted to two.
In 2000, Postgate and his friend Loaf set up a small publishing company called The Dragons Friendly Society, to look after "Noggin the Nog", "Pogles' Wood" and "Pingwings".
After Postgate's death in December 2008, Smallfilms was inherited by his son Daniel. Universal took the distribution rights to the works of Smallfilms; this agreement does not include the materials published through The Dragons Friendly Society.
In 2014, Postgate's son, Daniel Postgate, collaborated with Peter Firmin on the production of a new series of "Clangers", with Daniel writing many of the episodes.
Postgate and Firmin recognised that their product was not sold to children, but to commissioning television executives. Postgate described in a later interview the then "gentlemanly and rather innocent" business of programme commissioning thus: "We would go to the BBC once a year, show them the films we'd made, and they would say: 'Yes, lovely, now what are you going to do next?' We would tell them, and they would say: 'That sounds fine, we'll mark it in for eighteen months from now,' and we would be given praise and encouragement and some money in advance, and we'd just go away and do it." The only occasion that this informal arrangement caused any real difficulty emerged in the 1965 series "The Pogles", which BBC management felt was too frightening for the intended audience, and led to their asking for a change of direction: resulting in a revised show, and a change of name to "Pogles' Wood".
Postgate had strict views regarding storylines, which perhaps limited the possibilities for series development. When asked if the "Clangers" adventures were quite surreal sometimes, Postgate replied: "They're surreal but logical. I have a strong prejudice against fantasy for its own sake. Once one gets to a point beyond where cause-and-effect mean anything at all, then science fiction becomes science nonsense. Everything that happened was strictly logical, according to the laws of physics which happened to apply in that part of the world."
In June 2015, the BBC's Mark Savage reported: "Firmin said the "Clangers" surrealism had led to accusations that Postgate was taking hallucinogenic drugs". Firmin told Savage: "People used to say, 'Ooh, what's Oliver on, with all of these weird ideas?' And we used to say, 'He's on cups of tea and biscuits.'"
The Smallfilms system was reliant on the company's two key employees, Postgate and Firmin, and was devoid of modern considerations and essentials, as Postgate pointed out: "[We were] excused the interference of educationalists, sociologists and other pseudo-scientists, which produces eventually a confection of formulae which have no integrity. No, the mainspring of what we did was because it was fun."
Recognising their commissioning audience, Smallfilms purposefully developed storylines that would engage both adults and children. While the storylines and production were remembered by children, the adult jokes, like those about the Welsh in "Ivor the Engine", or the fact that the Clangers swore occasionally, gave the films an instant parental engagement, and a later nostalgic revival amongst former children re-watching their favourite programmes.
From October 2008 until 2013, production company Coolabi held the merchandising and distribution rights to a number of the Smallfilms productions. Coolabi hoped to introduce "Bagpuss" to a new generation, saying that there was "significant potential to build on the affection in which this classic brand is held".
However, in the event it was Smallfilms itself that returned the classic shows to production, agreeing a deal with the BBC in 2014 to produce a further 52 episodes of "Clangers", as a third series of that show for broadcasting in 2015, which the company also pre-sold in the United States. | https://en.wikipedia.org/wiki?curid=29458 |
Sabotage
Sabotage is a deliberate action aimed at weakening a polity, effort, or organization through subversion, obstruction, disruption, or destruction. One who engages in sabotage is a "saboteur". Saboteurs typically try to conceal their identities because of the consequences of their actions and to avoid invoking legal and organizational requirements for addressing sabotage.
The English word derives from the French word "saboter", meaning to "bungle, botch, wreck or sabotage", and was originally used to refer to labour disputes, in which workers wearing wooden shoes called "sabots" interrupted production through different means. A popular but incorrect account of the origin of the term's present meaning is the story that poor workers in France would throw a wooden "sabot" into the machines to disrupt production.
One of the first appearances of "saboter" and "saboteur" in French literature is in the "Dictionnaire du Bas-Langage ou manières de parler usitées parmi le peuple" of D'Hautel, edited in 1808. In it the literal definition is to "make noise with sabots", as well as to "bungle, jostle, hustle, haste". The word "sabotage" only appears later.
The word "Sabotage" is found in 1873–1874 in the "Dictionnaire de la langue française" of Émile Littré. Here it is defined mainly as “ making sabots, sabot maker”.It is at the end of the 19th century that it really began to be used with the meaning of "deliberately and maliciously destroying property" or "working slower". In 1897, Émile Pouget, a famous syndicalist and anarchist wrote "action de saboter un travail" (action of sabotaging or bungling a work) in "Le Père Peinard" and in 1911 he also wrote a book entitled "Le Sabotage".
At the inception of the Industrial Revolution, skilled workers such as the Luddites (1811–1812) used sabotage as a means of negotiation in labor disputes.
Labor unions such as the Industrial Workers of the World (IWW) have advocated sabotage as a means of self-defense and direct action against unfair working conditions.
The IWW was shaped in part by the industrial unionism philosophy of Big Bill Haywood, and in 1910 Haywood was exposed to sabotage while touring Europe:
The experience that had the most lasting impact on Haywood was witnessing a general strike on the French railroads. Tired of waiting for parliament to act on their demands, railroad workers walked off their jobs all across the country. The French government responded by drafting the strikers into the army and then ordering them back to work. Undaunted, the workers carried their strike to the job. Suddenly, they could not seem to do anything right. Perishables sat for weeks, sidetracked and forgotten. Freight bound for Paris was misdirected to Lyon or Marseille instead. This tactic — the French called it "sabotage" — won the strikers their demands and impressed Bill Haywood.
For the IWW, sabotage's meaning expanded to include the original use of the term: any withdrawal of efficiency, including the slowdown, the strike, working to rule, or creative bungling of job assignments.
One of the most severe examples was at the construction site of the Robert-Bourassa Generating Station in 1974, in Québec, Canada, when workers used bulldozers to topple electric generators, damaged fuel tanks, and set buildings on fire. The project was delayed a year, and the direct cost of the damage was estimated at CAD $2 million. The causes were not clear, but three possible factors have been cited: inter-union rivalry, poor working conditions, and the perceived arrogance of American executives of the contractor, Bechtel Corporation.
Certain groups turn to destruction of property to stop environmental destruction or to make visible arguments against forms of modern technology they consider detrimental to the environment. The U.S. Federal Bureau of Investigation (FBI) and other law enforcement agencies use the term eco-terrorist when applied to damage of property. Proponents argue that since property cannot feel terror, damage to property is more accurately described as sabotage. Opponents, by contrast, point out that property owners and operators can indeed feel terror. The image of the monkey wrench thrown into the moving parts of a machine to stop it from working was popularized by Edward Abbey in the novel "The Monkey Wrench Gang" and has been adopted by eco-activists to describe destruction of earth damaging machinery.
From 1992 to late 2007, a radical environmental activist movement known as the Earth Liberation Front (ELF) engaged in a near-constant campaign of decentralized sabotage against construction projects near wild lands and against extractive industries such as logging, even burning down a ski resort in Vail, Colorado. ELF used sabotage tactics, often in loose coordination with other environmental activist movements, to physically delay or destroy threats to wild lands while the political will developed to protect the targeted areas.
In war, the word is used to describe the activity of an individual or group not associated with the military of the parties at war, such as a foreign agent or an indigenous supporter, in particular when actions result in the destruction or damaging of a productive or vital facility, such as equipment, factories, dams, public services, storage plants or logistic routes. Prime examples of such sabotage are the events of Black Tom and the Kingsland Explosion. Like spies, saboteurs who conduct a military operation in civilian clothes or enemy uniforms behind enemy lines are subject to prosecution and criminal penalties instead of detention as prisoners of war. It is common for a government in power during war or supporters of the war policy to use the term loosely against opponents of the war. Similarly, German nationalists spoke of a stab in the back having cost them the loss of World War I.
A modern form of sabotage is the distribution of software intended to damage specific industrial systems. For example, the U.S. Central Intelligence Agency (CIA) is alleged to have sabotaged a Siberian pipeline during the Cold War, using information from the Farewell Dossier. A more recent case may be the Stuxnet computer worm, which was designed to subtly infect and damage specific types of industrial equipment. Based on the equipment targeted and the location of infected machines, security experts believe it was an attack on the Iranian nuclear program by the United States, Israel or, according to the latest news, even Russia.
Sabotage, done well, is inherently difficult to detect and difficult to trace to its origin. During World War II, the U.S. Federal Bureau of Investigation (FBI) investigated 19,649 cases of sabotage and concluded the enemy had not caused any of them.
Sabotage in warfare, according to the Office of Strategic Services (OSS) manual, varies from highly technical "coup de main" acts that require detailed planning and specially trained operatives, to innumerable simple acts that ordinary citizen-saboteurs can perform. Simple sabotage is carried out in such a way as to involve a minimum danger of injury, detection, and reprisal. There are two main methods of sabotage: physical destruction and the "human element". While physical destruction as a method is self-explanatory, its targets are nuanced, reflecting objects to which the saboteur has normal and inconspicuous access in everyday life. The "human element" is based on universal opportunities to make faulty decisions, to adopt a non-cooperative attitude, and to induce others to follow suit.
There are many examples of physical sabotage in wartime. However, one of the most effective uses of sabotage is against organizations. The OSS manual provides numerous techniques under the title "General Interference with Organizations and Production":
From the section entitled, "General Devices for Lowering Morale and Creating Confusion" comes the following quintessential simple sabotage advice: "Act stupid."
The United States Office of Strategic Services, a predecessor of the CIA, noted specific value in committing simple sabotage against the enemy during wartime: "... slashing tires, draining fuel tanks, starting fires, starting arguments, acting stupidly, short-circuiting electric systems, abrading machine parts will waste materials, manpower, and time." To underline the importance of simple sabotage on a widespread scale, they wrote, "Widespread practice of simple sabotage will harass and demoralize enemy administrators and police." The OSS was also focused on the battle for hearts and minds during wartime: "the very practice of simple sabotage by natives in enemy or occupied territory may make these individuals identify themselves actively with the United Nations War effort, and encourage them to assist openly in periods of Allied invasion and occupation."
On 30 July 1916, the Black Tom explosion occurred when German agents set fire to a complex of warehouses and ships in Jersey City, New Jersey that held munitions, fuel, and explosives bound to aid the Allies in their fight.
On 11 January 1917, Fiodore Wozniak, using a rag saturated with phosphorus or an incendiary pencil supplied by German sabotage agents, set fire to his workbench at an ammunition assembly plant near Lyndhurst, New Jersey, causing a four-hour fire that destroyed half a million 3-inch explosive shells and the plant itself, with damages estimated at $17 million. Wozniak's involvement was not discovered until 1927.
On 12 February 1917, Bedouins allied with the British destroyed a Turkish railroad near the port of Wajh, derailing a Turkish locomotive. The Bedouins traveled by camel and used explosives to demolish a portion of track.
In Ireland, the Irish Republican Army (IRA) used sabotage against the British following the Easter 1916 uprising. The IRA compromised communication lines, lines of transportation, and fuel supplies. The IRA also employed passive sabotage, with dock and rail workers refusing to work on ships and rail cars used by the government. In 1920, agents of the IRA committed arson against at least fifteen British warehouses in Liverpool. The following year, the IRA again set fire to numerous British targets, including the Dublin Customs House, this time sabotaging most of Liverpool's firetrucks in the firehouses before lighting the matches.
Lieutenant Colonel George T. Rheam was a British soldier who, during World War II, ran Brickendonbury Manor, Station XVII of the Special Operations Executive (SOE), from October 1941 to June 1945, training specialists for the SOE. Rheam innovated many sabotage techniques, and is considered by M. R. D. Foot the "founder of modern industrial sabotage."
Sabotage training for the Allies consisted of teaching would-be saboteurs key components of working machinery to destroy.
"Saboteurs learned hundreds of small tricks to cause the Germans big trouble. The cables in a telephone junction box ... could be jumbled to make the wrong connections when numbers were dialed. A few ounces of plastique, properly placed, could bring down a bridge, cave in a mine shaft, or collapse the roof of a railroad tunnel."
The Polish Home Army (Armia Krajowa) was responsible for the greatest number of acts of sabotage in German-occupied Europe. It commanded the majority of resistance organizations in Poland, including even the National Forces (with the exception of the Military Organization Lizard Union) and the Polish Socialist Party – Freedom, Equality, Independence, and it coordinated and aided the Jewish Military Union, as well as, more reluctantly, the Jewish Combat Organization. The Home Army's sabotage operations, Operation Garland and Operation Ribbon, are just two examples. In all, the Home Army damaged 6,930 locomotives, set 443 rail transports on fire, damaged over 19,000 rail cars ("wagony"), and blew up 38 rail bridges, in addition to other attacks on the railroads. The Home Army was also responsible for 4,710 built-in flaws in parts for aircraft engines and 92,000 built-in flaws in artillery projectiles, among other examples of significant sabotage. In addition, over 25,000 acts of more minor sabotage were committed. The Home Army continued to fight against both the Germans and the Soviets; however, it aided the Western Allies by collecting constant and detailed information on German rail, wheeled, and horse transports. As for Stalin's proxies, their actions led to a great number of Polish and Jewish hostages, mostly civilians, being murdered in reprisal by the Germans. The Gwardia Ludowa destroyed around 200 German trains during the war, and indiscriminately threw hand grenades into places frequented by Germans.
The French Resistance ran an extremely effective sabotage campaign against the Germans during World War II. Receiving their sabotage orders through messages over the BBC radio or by aircraft, the French used both passive and active forms of sabotage. Passive forms included losing German shipments and allowing poor quality material to pass factory inspections. Many active sabotage attempts were against critical rail lines of transportation. German records count 1,429 instances of sabotage from French Resistance forces between January 1942 and February 1943. From January through March 1944, sabotage accounted for three times the number of locomotives damaged by Allied air power. See also Normandy Landings for more information about sabotage on D-Day.
During World War II, the Allies committed sabotage against the Peugeot truck factory. After repeated failures in Allied bombing attempts to hit the factory, a team of French Resistance fighters and Special Operations Executive (SOE) agents distracted the German guards with a game of soccer while part of their team entered the plant and destroyed machinery.
In December 1944, the Germans ran a false flag sabotage infiltration, Operation Greif, which was commanded by Waffen-SS commando Otto Skorzeny during the Battle of the Bulge. German commandos, wearing US Army uniforms, carrying US Army weapons, and using US Army vehicles, penetrated US lines to spread panic and confusion among US troops and to blow up bridges, ammunition dumps, and fuel stores and to disrupt the lines of communication. Many of the commandos were captured by the Americans. Because they were wearing US uniforms, a number of the Germans were executed as spies, either summarily or after military commissions.
From 1948 to 1960, the Malayan Communists committed numerous effective acts of sabotage against the British colonial authorities, first targeting railway bridges, then hitting larger targets such as military camps. Most of their efforts were centered on crippling Malaya's colonial economy and involved sabotage against trains, rubber trees, water pipes, and electric lines. The Communists' sabotage efforts were so successful that they caused a backlash among the Malayan population, who gradually withdrew support for the Communist movement as their livelihoods became threatened.
In Mandatory Palestine from 1945 to 1948, Jewish groups opposed British control. Though that control was to end according to the United Nations Partition Plan for Palestine in 1948, the groups used sabotage as an opposition tactic. The Haganah focused their efforts on camps used by the British to hold refugees and radar installations that could be used to detect illegal immigrant ships. The Stern Gang and the Irgun used terrorism and sabotage against the British government and against lines of communications. In November 1946, the Irgun and Stern Gang attacked a railroad twenty-one times in a three-week period, eventually causing shell-shocked Arab railway workers to strike. The 6th Airborne Division was called in to provide security as a means of ending the strike.
The Viet Cong used swimmer saboteurs often and effectively during the Vietnam War. Between 1969 and 1970, swimmer saboteurs sank, destroyed, or damaged 77 assets of the U.S. and its allies. Viet Cong swimmers were poorly equipped but well-trained and resourceful. They provided a low-cost, low-risk option with a high payoff: the potential losses from a failed mission were small compared with the possible gains from a successful one, making swimmer saboteurs an obvious choice.
On 1 January 1984, the Cuscatlán Bridge over the Lempa River in El Salvador, critical to the flow of commercial and military traffic, was destroyed by guerrilla forces, who used mortar fire to "scatter" the bridge's guards before demolishing it with explosives. The attack caused an estimated $3.7 million in required repairs and considerably affected Salvadoran commerce and security.
In 1982 in Honduras, a group of nine Salvadorans and Nicaraguans destroyed a main electrical power station, leaving the capital city Tegucigalpa without power for three days.
Some criminals have engaged in acts of sabotage for reasons of extortion. For example, Klaus-Peter Sabotta sabotaged German railway lines in the late 1990s in an attempt to extort DM10 million from the German railway operator Deutsche Bahn. He is now serving a sentence of life imprisonment. In 1989, ex-Scotland Yard detective Rodney Whitchelo was sentenced to 17 years in prison for spiking Heinz baby food products in supermarkets, in an extortion attempt on the food manufacturer.
The term political sabotage is sometimes used to define the acts of one political camp to disrupt, harass or damage the reputation of a political opponent, usually during an electoral campaign, such as during Watergate. Smear campaigns are a commonly used tactic. The term could also describe the actions and expenditures of private entities, corporations and organizations against democratically approved or enacted laws, policies and programs.
After the Cold War ended, the Mitrokhin Archives were declassified, which included detailed KGB plans of active measures to subvert politics in opposing nations.
Sabotage is a crucial tool of the successful coup d'état, which requires control of communications before, during, and after the coup is staged. Simple sabotage against physical communications platforms, using semi-skilled technicians or even those trained only for this task, could effectively silence the target government of the coup, leaving the information battle space open to the dominance of the coup's leaders. To underscore the effectiveness of sabotage: "A single cooperative technician will be able temporarily to put out of action a radio station which would otherwise require a full-scale assault."
Railroads, where strategically important to the regime the coup is against, are prime targets for sabotage: if a section of track is damaged, entire portions of the transportation network can be stopped until it is repaired.
A sabotage radio was a small two-way radio designed for use by resistance movements in World War II, and after the war often used by expeditions and similar parties.
Arquilla and Ronfeldt, in their work entitled "Networks and Netwars", differentiate their definition of "netwar" from a list of "trendy synonyms", including "cybotage", a portmanteau of the words "sabotage" and "cyber". They dub the practitioners of cybotage "cyboteurs" and note that while not all cybotage is netwar, some netwar is cybotage.
Counter-sabotage, defined by "Webster's Dictionary", is "counterintelligence designed to detect and counteract sabotage". The United States Department of Defense definition, found in the "Dictionary of Military and Associated Terms", is "action designed to detect and counteract sabotage. See also counterintelligence".
During World War II, British subject Eddie Chapman, trained by the Germans in sabotage, became a double agent for the British. The German Abwehr entrusted Chapman with destroying the British de Havilland Company's main plant, which manufactured the outstanding Mosquito light bomber, but required photographic proof from their agent to verify the mission's completion. A special unit of the Royal Engineers known as the Magic Gang covered the de Havilland plant with canvas panels and scattered papier-mâché furniture and chunks of masonry around three broken and burnt giant generators. Aerial photographs of the plant appeared to show a devastated factory and a successful sabotage mission, and Chapman, as a British sabotage double agent, fooled the Germans for the duration of the war.
In Japanese, the verb saboru (サボる) means to skip school or loaf on the job. | https://en.wikipedia.org/wiki?curid=29462 |
Scabbard
A scabbard is a sheath for holding a sword, knife, or other large blade. Rifles may also be stored in a scabbard by horse riders. Military cavalry and cowboys had scabbards for their saddle ring carbine rifles and lever action rifles on their horses for storage and protection. Scabbards have been made of many materials over the millennia, including leather, wood, and metals such as brass or steel.
Most commonly, sword scabbards were worn suspended from a sword belt or shoulder belt called a baldric.
Wooden scabbards were usually covered in fabric or leather; the leather versions also usually bore metal fittings for added protection and carrying ease. Japanese blades typically have their sharp cutting edge protected by a wooden scabbard called a saya. Many scabbards, such as those used by the Greeks and Romans, were small and light. They were designed for holding the sword rather than protecting it. All-metal scabbards were popular items for a display of wealth among elites in the European Iron Age, and were often intricately decorated. Little is known about the scabbards of the early Iron Age, due to their wooden construction. However, during the Middle and late Iron Ages, the scabbard became important, especially as a vehicle for decorative elaboration. After 200 BC, fully decorated scabbards became rare. A number of ancient scabbards have been recovered from weapons sacrifices, a few of which had a lining of fur on the inside. The fur was probably kept oily, keeping the blade free from rust. The fur would also allow a smoother, quicker draw.
Entirely metal scabbards became popular in Europe early in the 19th century and eventually superseded most other types. Metal was more durable than leather and could better withstand the rigours of field use, particularly among troops mounted on horseback. In addition, metal offered the ability to present a more military appearance, as well as the opportunity to display increased ornamentation. Nevertheless, leather scabbards never entirely lost favour among military users and were widely used as late as the American Civil War (1861–65).
Some military police forces, naval shore patrols, law enforcement and other groups used leather scabbards as a kind of truncheon.
Scabbards were historically, albeit rarely, worn across the back, but only by a handful of Celtic tribes, and only with very short lengths of sword. This is because drawing a long, sharp blade over one's shoulder and past one's head from a scabbard on the back is relatively awkward, especially in a hurry, and the length of the arm sets a hard upper limit on how long a blade can be drawn at all in this way. Sheathing the sword again is even harder since it has to be done effectively blind unless the scabbard is taken off first. Common depictions of long swords being drawn from the back are a modern invention, born from safety and convenience considerations on a film set and typically enabled by creative editing, and have enjoyed such great popularity in fiction and fantasy that they are widely and incorrectly believed to have been common in Medieval times. Some more well-known examples of this include the back scabbard depicted in the film "Braveheart" and the back scabbard seen in the video game series "The Legend of Zelda". There is some limited data from woodcuts and textual fragments that Mongol light horse archers, Chinese soldiers, Japanese Samurai and European Knights wore a slung baldric over the shoulder, allowing longer blades such as greatswords/zweihanders and nodachi/ōdachi to be strapped across the back, though these would have to be removed from the back before the sword could be unsheathed.
In "The Ancient Celts" by Barry Cunliffe, Cunliffe writes, "All these pieces of equipment [shields, spears, swords, mail] mentioned in the texts, are reflected in the archaeological record and in the surviving iconography, though it is sometimes possible to detect regional variations (page 94). Among the Parisii of Yorkshire, for example, the "...sword was sometimes worn across the back and therefore had to be drawn over the shoulder from behind the head."
The metal fitting where the blade enters the leather or metal scabbard is called the throat, which is often part of a larger scabbard mount, or locket, that bears a carrying ring or stud to facilitate wearing the sword. The blade's point in leather scabbards is usually protected by a metal tip, or chape, which, on both leather and metal scabbards, is often given further protection from wear by an extension called a drag, or shoe. | https://en.wikipedia.org/wiki?curid=29463 |
Spinel
Spinel is the magnesium/aluminium member of the larger spinel group of minerals. It has the formula MgAl2O4 in the cubic crystal system. Its name comes from the Latin word "spinella", a diminutive of "spine", in reference to its pointed crystals.
Spinel crystallizes in the isometric system; common crystal forms are octahedra, usually twinned. It has an imperfect octahedral cleavage and a conchoidal fracture. Its hardness is 8, its specific gravity is 3.5–4.1, and it is transparent to opaque with a vitreous to dull luster. It may be colorless, but is usually various shades of pink, rose, red, blue, green, yellow, brown, black, or (uncommonly) violet. There is a unique natural white spinel, now lost, that surfaced briefly in what is now Sri Lanka. Some spinels are among the most famous gemstones; among them are the Black Prince's Ruby and the "Timur ruby" in the British Crown Jewels, and the "Côte de Bretagne", formerly from the French Crown jewels. The Samarian Spinel is the largest known spinel in the world.
The transparent red spinels were called spinel-rubies or balas rubies. In the past, before the arrival of modern science, spinels and rubies were equally known as rubies. After the 18th century, the word "ruby" was used only for the red gem variety of the mineral corundum, and the word "spinel" came into use. "Balas" is derived from Balascia, the ancient name for Badakhshan, a region in central Asia situated in the upper valley of the Panj River, one of the principal tributaries of the Oxus River. Mines in the Gorno Badakhshan region of Tajikistan constituted for centuries the main source of red and pink spinels.
Spinel is found as a metamorphic mineral, and also as a primary mineral in rare mafic igneous rocks; in these igneous rocks, the magmas are relatively deficient in alkalis relative to aluminium, and aluminium oxide may form as the mineral corundum or may combine with magnesia to form spinel. This is why spinel and ruby are often found together. The spinel petrogenesis in mafic magmatic rocks is strongly debated, but certainly results from mafic magma interaction with more evolved magma or rock (e.g. gabbro, troctolite).
Spinel, (Mg,Fe)(Al,Cr)2O4, is common in peridotite in the uppermost Earth's mantle, between approximately 20 km and 120 km depth, possibly deeper depending on the chromium content. At significantly shallower depths, above the Moho, calcic plagioclase is the more stable aluminous mineral in peridotite, while garnet is the stable phase deeper in the mantle, below the spinel stability region.
Spinel, (Mg,Fe)Al2O4, is a common mineral in the Ca-Al-rich inclusions (CAIs) in some chondritic meteorites.
Spinel has long been found in the gemstone-bearing gravel of Sri Lanka, in the limestones of the Badakshan Province in modern-day Afghanistan and Tajikistan, and in those of Mogok in Myanmar. Over recent decades, gem-quality spinels have been found in the marbles of Lục Yên District (Vietnam), Mahenge and Matombo (Tanzania), and Tsavo (Kenya), and in the gravels of Tunduru (Tanzania) and Ilakaka (Madagascar).
Since 2000, spinels with an unusually vivid pink or blue color have been discovered in several locations around the world. Such "glowing" spinels are known from Mogok (Myanmar), the Mahenge plateau (Tanzania), Lục Yên District (Vietnam) and some other localities. In 2018, bright blue spinels were also reported in the southern part of Baffin Island (Canada). The pure blue coloration of spinel is caused by small additions of cobalt.
Synthetic spinel, accidentally produced in the middle of the 18th century, has been described more recently in scientific publications, in 2000 and 2004. By 2015, transparent spinel was being made in sheets and other shapes through sintering. Synthetic spinel, which looks like glass but has notably higher resistance to pressure, can also have applications in military and commercial use. | https://en.wikipedia.org/wiki?curid=29467
Speech recognition
Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the computer science, linguistics and computer engineering fields.
Some speech recognition systems require "training" (also called "enrollment") where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker independent" systems. Systems that use training are called "speaker dependent".
Speech recognition applications include voice user interfaces such as voice dialing (e.g. "call home"), call routing (e.g. "I would like to make a collect call"), domotic appliance control, search key words (e.g. find a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g. a radiology report), determining speaker characteristics, speech-to-text processing (e.g., word processors or emails), and aircraft (usually termed direct voice input).
The term "voice recognition" or "speaker identification" refers to identifying the speaker, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice or it can be used to authenticate or verify the identity of a speaker as part of a security process.
From the technology perspective, speech recognition has a long history with several waves of major innovations. Most recently, the field has benefited from advances in deep learning and big data. The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems.
The key areas of growth were: vocabulary size, speaker independence and processing speed.
Raj Reddy was the first person to take on continuous speech recognition as a graduate student at Stanford University in the late 1960s. Previous systems required users to pause after each word. Reddy's system issued spoken commands for playing chess.
Around this time Soviet researchers invented the dynamic time warping (DTW) algorithm and used it to create a recognizer capable of operating on a 200-word vocabulary. DTW processed speech by dividing it into short frames, e.g. 10 ms segments, and processing each frame as a single unit. Although DTW would be superseded by later algorithms, the technique carried on. Achieving speaker independence remained unsolved during this period.
During the late 1960s Leonard Baum developed the mathematics of Markov chains at the Institute for Defense Analyses. A decade later, at CMU, Raj Reddy's students James Baker and Janet M. Baker began using the Hidden Markov Model (HMM) for speech recognition. James Baker had learned about HMMs from a summer job at the Institute for Defense Analyses during his undergraduate education. The use of HMMs allowed researchers to combine different sources of knowledge, such as acoustics, language, and syntax, in a unified probabilistic model.
The 1980s also saw the introduction of the n-gram language model.
Much of the progress in the field is owed to the rapidly increasing capabilities of computers. At the end of the DARPA program in 1976, the best computer available to researchers was the PDP-10 with 4 MB of RAM. It could take up to 100 minutes to decode just 30 seconds of speech.
Two practical products were:
By this point, the vocabulary of the typical commercial speech recognition system was larger than the average human vocabulary. Raj Reddy's former student, Xuedong Huang, developed the Sphinx-II system at CMU. The Sphinx-II system was the first to do speaker-independent, large vocabulary, continuous speech recognition and it had the best performance in DARPA's 1992 evaluation. Handling continuous speech with a large vocabulary was a major milestone in the history of speech recognition. Huang went on to found the speech recognition group at Microsoft in 1993. Raj Reddy's student Kai-Fu Lee joined Apple where, in 1992, he helped develop a speech interface prototype for the Apple computer known as Casper.
Lernout & Hauspie, a Belgium-based speech recognition company, acquired several other companies, including Kurzweil Applied Intelligence in 1997 and Dragon Systems in 2000. The L&H speech technology was used in the Windows XP operating system. L&H was an industry leader until an accounting scandal brought an end to the company in 2001. The speech technology from L&H was bought by ScanSoft which became Nuance in 2005. Apple originally licensed software from Nuance to provide speech recognition capability to its digital assistant Siri.
In the 2000s DARPA sponsored two speech recognition programs: Effective Affordable Reusable Speech-to-Text (EARS) in 2002 and Global Autonomous Language Exploitation (GALE). Four teams participated in the EARS program: IBM, a team led by BBN with LIMSI and Univ. of Pittsburgh, Cambridge University, and a team composed of ICSI, SRI and University of Washington. EARS funded the collection of the Switchboard telephone speech corpus containing 260 hours of recorded conversations from over 500 speakers. The GALE program focused on Arabic and Mandarin broadcast news speech. Google's first effort at speech recognition came in 2007 after hiring some researchers from Nuance. The first product was GOOG-411, a telephone based directory service. The recordings from GOOG-411 produced valuable data that helped Google improve their recognition systems. Google Voice Search is now supported in over 30 languages.
In the United States, the National Security Agency has made use of a type of speech recognition for keyword spotting since at least 2006. This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of keywords. Recordings can be indexed, and analysts can run queries over the database to find conversations of interest. Some government research programs focused on intelligence applications of speech recognition, e.g. DARPA's EARS program and IARPA's Babel program.
In the early 2000s, speech recognition was still dominated by traditional approaches such as Hidden Markov Models combined with feedforward artificial neural networks.
Today, however, many aspects of speech recognition have been taken over by a deep learning method called Long short-term memory (LSTM), a recurrent neural network published by Sepp Hochreiter & Jürgen Schmidhuber in 1997. LSTM RNNs avoid the vanishing gradient problem and can learn "Very Deep Learning" tasks that require memories of events that happened thousands of discrete time steps ago, which is important for speech.
Around 2007, LSTM trained by Connectionist Temporal Classification (CTC) started to outperform traditional speech recognition in certain applications. In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to all smartphone users.
The use of deep feedforward (non-recurrent) networks for acoustic modeling was introduced during the later part of 2009 by Geoffrey Hinton and his students at the University of Toronto and by Li Deng and colleagues at Microsoft Research, initially in the collaborative work between Microsoft and the University of Toronto, which was subsequently expanded to include IBM and Google (hence "The shared views of four research groups" subtitle in their 2012 review paper). A Microsoft research executive called this innovation "the most dramatic change in accuracy since 1979". In contrast to the steady incremental improvements of the past few decades, the application of deep learning decreased word error rate by 30%. This innovation was quickly adopted across the field. Researchers have begun to use deep learning techniques for language modeling as well.
In the long history of speech recognition, both shallow and deep forms (e.g. recurrent nets) of artificial neural networks had been explored for many years during the 1980s and 1990s and a few years into the 2000s.
But these methods never won out over the non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. A number of key difficulties had been methodologically analyzed in the 1990s, including diminishing gradients and weak temporal correlation structure in the neural predictive models. All these difficulties were in addition to the lack of big training data and big computing power in those early days. Most speech recognition researchers who understood such barriers hence subsequently moved away from neural nets to pursue generative modeling approaches, until the recent resurgence of deep learning starting around 2009–2010 that overcame all these difficulties. Hinton et al. and Deng et al. reviewed part of this recent history about how their collaboration with each other, and then with colleagues across four groups (University of Toronto, Microsoft, Google, and IBM), ignited a renaissance of applications of deep feedforward neural networks to speech recognition.
By early 2010s "speech" recognition, also called voice recognition was clearly differentiated from "speaker" recognition, and speaker independence was considered a major breakthrough. Until then, systems required a "training" period. A 1987 ad for a doll had carried the tagline "Finally, the doll that understands you." – despite the fact that it was described as "which children could train to respond to their voice".
In 2017, Microsoft researchers reached a historic human-parity milestone of transcribing conversational telephony speech on the widely benchmarked Switchboard task. Multiple deep learning models were used to optimize speech recognition accuracy. The speech recognition word error rate was reported to be as low as that of 4 professional human transcribers working together on the same benchmark, which was funded by the IBM Watson speech team on the same task.
Both acoustic modeling and language modeling are important parts of modern statistically-based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modeling is also used in many other natural language processing applications such as document classification or statistical machine translation.
Modern general-purpose speech recognition systems are based on Hidden Markov Models. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. In a short time-scale (e.g., 10 milliseconds), speech can be approximated as a stationary process. Speech can be thought of as a Markov model for many stochastic purposes.
Another reason why HMMs are popular is because they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model would output a sequence of "n"-dimensional real-valued vectors (with "n" being a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech and decorrelating the spectrum using a cosine transform, then taking the first (most significant) coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, which will give a likelihood for each observed vector. Each word, or (for more general speech recognition systems), each phoneme, will have a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individual trained hidden Markov models for the separate words and phonemes.
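The front-end pipeline just described (a short-time Fourier transform, log compression, and a decorrelating cosine transform, keeping the leading coefficients) can be sketched in a few lines of Python. This is a minimal illustration only: the window length, hop size, and the choice of 13 coefficients are illustrative defaults rather than prescribed values, and practical systems also insert a mel-spaced filter bank before the logarithm.
import numpy as np
from scipy.fftpack import dct
def cepstral_features(signal, rate, win=0.025, hop=0.010, ncoef=13):
    # Crude cepstral features: windowed FFT -> log power -> DCT.
    n = int(win * rate)                                # samples per analysis window
    step = int(hop * rate)                             # samples between windows
    frames = []
    for start in range(0, len(signal) - n + 1, step):
        frame = signal[start:start + n] * np.hamming(n)
        power = np.abs(np.fft.rfft(frame)) ** 2        # short-time spectrum
        logspec = np.log(power + 1e-10)                # compress dynamic range
        cep = dct(logspec, type=2, norm='ortho')       # decorrelate (cosine transform)
        frames.append(cep[:ncoef])                     # keep the leading coefficients
    return np.array(frames)                            # one vector every `hop` seconds
Each row of the result is one of the n-dimensional real-valued vectors, emitted every 10 milliseconds, that the HMM models with its mixture-of-Gaussians output distributions.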
Described above are the core elements of the most common, HMM-based approach to speech recognition. Modern speech recognition systems use various combinations of a number of standard techniques in order to improve results over the basic approach described above. A typical large-vocabulary system would need context dependency for the phonemes (so phonemes with different left and right context have different realizations as HMM states); it would use cepstral normalization to normalize for different speaker and recording conditions; for further speaker normalization it might use vocal tract length normalization (VTLN) for male-female normalization and maximum likelihood linear regression (MLLR) for more general speaker adaptation. The features would have so-called delta and delta-delta coefficients to capture speech dynamics and in addition might use heteroscedastic linear discriminant analysis (HLDA); or might skip the delta and delta-delta coefficients and use splicing and an LDA-based projection followed perhaps by heteroscedastic linear discriminant analysis or a global semi-tied covariance transform (also known as maximum likelihood linear transform, or MLLT). Many systems use so-called discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of the training data. Examples are maximum mutual information (MMI), minimum classification error (MCE) and minimum phone error (MPE).
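Of these techniques, the delta coefficients are the simplest to illustrate: they are linear regressions over neighbouring frames. The following minimal sketch assumes the features are a NumPy array with one row per frame; the half-width N=2 is a common but not mandated choice.
import numpy as np
def deltas(feats, N=2):
    # First-order dynamic (delta) coefficients over a +/-N frame window.
    padded = np.pad(feats, ((N, N), (0, 0)), mode='edge')
    denom = 2 * sum(k * k for k in range(1, N + 1))
    return np.array([sum(k * (padded[t + N + k] - padded[t + N - k])
                         for k in range(1, N + 1)) / denom
                     for t in range(len(feats))])
# Delta-delta coefficients are simply the deltas of the delta stream:
# dd = deltas(deltas(feats))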
Decoding of the speech (the term for what happens when the system is presented with a new utterance and must compute the most likely source sentence) would probably use the Viterbi algorithm to find the best path, and here there is a choice between dynamically creating a combination hidden Markov model, which includes both the acoustic and language model information, and combining it statically beforehand (the finite state transducer, or FST, approach).
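For illustration, a minimal Viterbi search in the log domain is given below. The input matrices are hypothetical; a real decoder would interleave language-model scores, work over a composed (or dynamically expanded) model as described above, and prune its search beam.
import numpy as np
def viterbi(log_init, log_trans, log_emit):
    # log_init:  (S,)    log-probabilities of the initial states
    # log_trans: (S, S)  log-probabilities of state transitions
    # log_emit:  (T, S)  precomputed log-likelihoods of each observation
    T, S = log_emit.shape
    score = log_init + log_emit[0]             # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers for path recovery
    for t in range(1, T):
        cand = score[:, None] + log_trans      # cand[i, j]: leave state i, enter j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):              # follow backpointers in reverse
        path.append(int(back[t, path[-1]]))
    return path[::-1]                          # most likely state sequence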
A possible improvement to decoding is to keep a set of good candidates instead of just keeping the best candidate, and to use a better scoring function (rescoring) to rate these good candidates so that we may pick the best one according to this refined score. The set of candidates can be kept either as a list (the N-best list approach) or as a subset of the models (a lattice). Rescoring is usually done by trying to minimize the Bayes risk (or an approximation thereof): instead of taking the source sentence with maximal probability, we try to take the sentence that minimizes the expectation of a given loss function with regard to all possible transcriptions (i.e., we take the sentence that minimizes the average distance to other possible sentences weighted by their estimated probability). The loss function is usually the Levenshtein distance, though it can be a different distance for specific tasks; the set of possible transcriptions is, of course, pruned to maintain tractability. Efficient algorithms have been devised to rescore lattices represented as weighted finite state transducers with edit distances represented themselves as a finite state transducer verifying certain assumptions.
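The rescoring idea can be illustrated as follows, assuming a hypothetical N-best list of (word sequence, probability) pairs: picking the hypothesis with the lowest expected Levenshtein distance to the other candidates approximates the Bayes-risk minimization described above.
def levenshtein(a, b):
    # Edit distance between two word sequences, by dynamic programming.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]
def min_bayes_risk(nbest):
    # nbest: list of (word_list, probability) pairs from the decoder.
    # Return the hypothesis minimizing the expected edit distance.
    def risk(h):
        return sum(p * levenshtein(h, other) for other, p in nbest)
    return min((h for h, _ in nbest), key=risk)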
Dynamic time warping is an approach that was historically used for speech recognition but has now largely been displaced by the more successful HMM-based approach.
Dynamic time warping is an algorithm for measuring similarity between two sequences that may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and in another he or she was walking more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics; indeed, any data that can be turned into a linear representation can be analyzed with DTW.
A well-known application has been automatic speech recognition, to cope with different speaking speeds. In general, it is a method that allows a computer to find an optimal match between two given sequences (e.g., time series) with certain restrictions. That is, the sequences are "warped" non-linearly to match each other. This sequence alignment method is often used in the context of hidden Markov models.
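A minimal dynamic time warping sketch is shown below, assuming two sequences of NumPy feature vectors and a Euclidean local cost. Recognizers of the DTW era added slope constraints and pruning, and chose the stored reference template with the smallest cumulative cost.
import numpy as np
def dtw(x, y):
    # Cumulative cost of the best non-linear alignment of x against y.
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j],            # stretch x
                                 D[i, j - 1],            # stretch y
                                 D[i - 1, j - 1])        # step together
    return D[n, m]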
Neural networks emerged as an attractive acoustic modeling approach in ASR in the late 1980s. Since then, neural networks have been used in many aspects of speech recognition such as phoneme classification, phoneme classification through multi-objective evolutionary algorithms, isolated word recognition, audiovisual speech recognition, audiovisual speaker recognition and speaker adaptation.
Neural networks make fewer explicit assumptions about feature statistical properties than HMMs and have several qualities making them attractive recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks allow discriminative training in a natural and efficient manner. However, in spite of their effectiveness in classifying short-time units such as individual phonemes and isolated words, early neural networks were rarely successful for continuous recognition tasks because of their limited ability to model temporal dependencies.
One approach to this limitation was to use neural networks as a pre-processing step (e.g. feature transformation or dimensionality reduction) prior to HMM-based recognition. However, more recently, LSTM and related recurrent neural networks (RNNs) and Time Delay Neural Networks (TDNNs) have demonstrated improved performance in this area.
Deep Neural Networks and Denoising Autoencoders are also under investigation. A deep feedforward neural network (DNN) is an artificial neural network with multiple hidden layers of units between the input and output layers. Similar to shallow neural networks, DNNs can model complex non-linear relationships. DNN architectures generate compositional models, where extra layers enable composition of features from lower layers, giving a huge learning capacity and thus the potential of modeling complex patterns of speech data.
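As a minimal illustration of such an architecture, assuming NumPy arrays and hypothetical weight matrices, a deep feedforward network is just repeated affine maps with non-linearities; in the acoustic-modeling setting its output is a distribution over HMM states.
import numpy as np
def dnn_forward(x, layers):
    # layers: list of (W, b) pairs, one per hidden/output layer.
    for W, b in layers[:-1]:
        x = np.maximum(0.0, W @ x + b)   # hidden layers: affine map + ReLU
    W, b = layers[-1]
    logits = W @ x + b                   # output layer: one score per HMM state
    e = np.exp(logits - logits.max())
    return e / e.sum()                   # softmax over state posteriors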
A success of DNNs in large vocabulary speech recognition occurred in 2010 by industrial researchers, in collaboration with academic researchers, where large output layers of the DNN based on context dependent HMM states constructed by decision trees were adopted.
See comprehensive reviews of this development and of the state of the art as of October 2014 in the recent Springer book from Microsoft Research. See also the related background of automatic speech recognition and the impact of various machine learning paradigms, notably including deep learning, in recent overview articles.
One fundamental principle of deep learning is to do away with hand-crafted feature engineering and to use raw features. This principle was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features, showing its superiority over the Mel-Cepstral features which contain a few stages of fixed transformation from spectrograms.
The true "raw" features of speech, waveforms, have more recently been shown to produce excellent larger-scale speech recognition results.
Since 2014, there has been much research interest in "end-to-end" ASR. Traditional phonetic-based (i.e., all HMM-based model) approaches required separate components and training for the pronunciation, acoustic and language model. End-to-end models jointly learn all the components of the speech recognizer. This is valuable since it simplifies the training process and the deployment process. For example, an n-gram language model is required for all HMM-based systems, and a typical n-gram language model often takes several gigabytes of memory, making it impractical to deploy on mobile devices. Consequently, modern commercial ASR systems from Google and Apple (as of 2017) are deployed in the cloud and require a network connection, as opposed to running locally on the device.
The first attempt at end-to-end ASR was with Connectionist Temporal Classification (CTC)-based systems introduced by Alex Graves of Google DeepMind and Navdeep Jaitly of the University of Toronto in 2014. The model consisted of recurrent neural networks and a CTC layer. Jointly, the RNN-CTC model learns the pronunciation and acoustic model together; however, it is incapable of learning the language due to conditional independence assumptions similar to an HMM. Consequently, CTC models can directly learn to map speech acoustics to English characters, but the models make many common spelling mistakes and must rely on a separate language model to clean up the transcripts. Later, Baidu expanded on the work with extremely large datasets and demonstrated some commercial success in Chinese Mandarin and English. In 2016, the University of Oxford presented LipNet, the first end-to-end sentence-level lip reading model, using spatiotemporal convolutions coupled with an RNN-CTC architecture, surpassing human-level performance in a restricted grammar dataset. A large-scale CNN-RNN-CTC architecture was presented in 2018 by Google DeepMind achieving 6 times better performance than human experts.
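The CTC collapse rule can be shown in a few lines: take the most probable symbol in each frame, merge consecutive repeats, then drop the blanks. The greedy decoder below is a minimal sketch over a hypothetical per-frame probability matrix; production systems instead use beam search, usually with an external language model as noted above.
import numpy as np
def ctc_greedy_decode(probs, alphabet, blank=0):
    # probs: (T, V) per-frame symbol probabilities; alphabet maps index -> char.
    best = probs.argmax(axis=1)          # most likely symbol in each frame
    out, prev = [], blank
    for k in best:
        if k != prev and k != blank:     # merge repeats, then drop blanks
            out.append(alphabet[k])
        prev = k
    return ''.join(out)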
An alternative approach to CTC-based models are attention-based models. Attention-based ASR models were introduced simultaneously by Chan et al. of Carnegie Mellon University and Google Brain and Bahdanau et al. of the University of Montreal in 2016. The model named "Listen, Attend and Spell" (LAS) literally "listens" to the acoustic signal, pays "attention" to different parts of the signal and "spells" out the transcript one character at a time. Unlike CTC-based models, attention-based models do not have conditional-independence assumptions and can learn all the components of a speech recognizer including the pronunciation, acoustic and language model directly. This means that, during deployment, there is no need to carry around a language model, making it very practical for applications with limited memory. By the end of 2016, the attention-based models had seen considerable success, including outperforming the CTC models (with or without an external language model). Various extensions have been proposed since the original LAS model. Latent Sequence Decompositions (LSD) was proposed by Carnegie Mellon University, MIT and Google Brain to directly emit sub-word units which are more natural than English characters; the University of Oxford and Google DeepMind extended LAS to "Watch, Listen, Attend and Spell" (WLAS) to handle lip reading, surpassing human-level performance.
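The "attend" step can be illustrated with plain dot-product attention. This is a simplification for exposition only (LAS itself uses learned, content-based attention over a pyramidal recurrent encoder), assuming a decoder state vector and encoder outputs as NumPy arrays.
import numpy as np
def attend(query, keys, values):
    # Weight the encoder frames by their relevance to the decoder state,
    # then summarise them as a single context vector.
    scores = keys @ query                    # one relevance score per frame
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ values                  # probability-weighted summary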
Typically a manual control input, for example by means of a finger control on the steering-wheel, enables the speech recognition system and this is signalled to the driver by an audio prompt. Following the audio prompt, the system has a "listening window" during which it may accept a speech input for recognition.
Simple voice commands may be used to initiate phone calls, select radio stations or play music from a compatible smartphone, MP3 player or music-loaded flash drive. Voice recognition capabilities vary between car make and model. Some of the most recent car models offer natural-language speech recognition in place of a fixed set of commands, allowing the driver to use full sentences and common phrases. With such systems there is, therefore, no need for the user to memorize a set of fixed command words.
In the health care sector, speech recognition can be implemented in the front-end or back-end of the medical documentation process. Front-end speech recognition is where the provider dictates into a speech-recognition engine, the recognized words are displayed as they are spoken, and the dictator is responsible for editing and signing off on the document. Back-end or deferred speech recognition is where the provider dictates into a digital dictation system, the voice is routed through a speech-recognition machine, and the recognized draft document is routed along with the original voice file to the editor, where the draft is edited and the report finalized. Deferred speech recognition is widely used in the industry currently.
One of the major issues relating to the use of speech recognition in healthcare is that the American Recovery and Reinvestment Act of 2009 (ARRA) provides for substantial financial benefits to physicians who utilize an EMR according to "Meaningful Use" standards. These standards require that a substantial amount of data be maintained by the EMR (now more commonly referred to as an Electronic Health Record or EHR). The use of speech recognition is more naturally suited to the generation of narrative text, as part of a radiology/pathology interpretation, progress note or discharge summary: the ergonomic gains of using speech recognition to enter structured discrete data (e.g., numeric values or codes from a list or a controlled vocabulary) are relatively minimal for people who are sighted and who can operate a keyboard and mouse.
A more significant issue is that most EHRs have not been expressly tailored to take advantage of voice-recognition capabilities. A large part of the clinician's interaction with the EHR involves navigation through the user interface using menus, and tab/button clicks, and is heavily dependent on keyboard and mouse: voice-based navigation provides only modest ergonomic benefits. By contrast, many highly customized systems for radiology or pathology dictation implement voice "macros", where the use of certain phrases – e.g., "normal report", will automatically fill in a large number of default values and/or generate boilerplate, which will vary with the type of the exam – e.g., a chest X-ray vs. a gastrointestinal contrast series for a radiology system.
As an alternative to this navigation by hand, cascaded use of speech recognition and information extraction has been studied as a way to fill out a handover form for clinical proofing and sign-off. The results are encouraging, and the paper also makes the data, together with the related performance benchmarks and some processing software, available to the research and development community for studying clinical documentation and language processing.
Prolonged use of speech recognition software in conjunction with word processors has shown benefits to short-term-memory restrengthening in brain AVM patients who have been treated with resection. Further research needs to be conducted to determine cognitive benefits for individuals whose AVMs have been treated using radiologic techniques.
Substantial efforts have been devoted in the last decade to the test and evaluation of speech recognition in fighter aircraft. Of particular note have been the US program in speech recognition for the Advanced Fighter Technology Integration (AFTI)/F-16 aircraft (F-16 VISTA), the program in France for Mirage aircraft, and other programs in the UK dealing with a variety of aircraft platforms. In these programs, speech recognizers have been operated successfully in fighter aircraft, with applications including: setting radio frequencies, commanding an autopilot system, setting steer-point coordinates and weapons release parameters, and controlling flight displays.
Working with Swedish pilots flying in the JAS-39 Gripen cockpit, Englund (2004) found recognition deteriorated with increasing g-loads. The report also concluded that adaptation greatly improved the results in all cases and that the introduction of models for breathing was shown to improve recognition scores significantly. Contrary to what might have been expected, no effects of the broken English of the speakers were found. It was evident that spontaneous speech caused problems for the recognizer, as might have been expected. A restricted vocabulary, and above all, a proper syntax, could thus be expected to improve recognition accuracy substantially.
The Eurofighter Typhoon, currently in service with the UK RAF, employs a speaker-dependent system, requiring each pilot to create a template. The system is not used for any safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage, but is used for a wide range of other cockpit functions. Voice commands are confirmed by visual and/or aural feedback. The system is seen as a major design feature in the reduction of pilot workload, and even allows the pilot to assign targets to his aircraft with two simple voice commands or to any of his wingmen with only five commands.
Speaker-independent systems are also being developed and are under test for the F-35 Lightning II (JSF) and the Alenia Aermacchi M-346 Master lead-in fighter trainer. These systems have produced word accuracy scores in excess of 98%.
The problems of achieving high recognition accuracy under stress and noise pertain strongly to the helicopter environment as well as to the jet fighter environment. The acoustic noise problem is actually more severe in the helicopter environment, not only because of the high noise levels but also because the helicopter pilot, in general, does not wear a facemask, which would reduce acoustic noise in the microphone. Substantial test and evaluation programs have been carried out in the past decade in speech recognition systems applications in helicopters, notably by the U.S. Army Avionics Research and Development Activity (AVRADA) and by the Royal Aerospace Establishment (RAE) in the UK. Work in France has included speech recognition in the Puma helicopter. There has also been much useful work in Canada. Results have been encouraging, and voice applications have included: control of communication radios, setting of navigation systems, and control of an automated target handover system.
As in fighter applications, the overriding issue for voice in helicopters is the impact on pilot effectiveness. Encouraging results are reported for the AVRADA tests, although these represent only a feasibility demonstration in a test environment. Much remains to be done both in speech recognition and in overall speech technology in order to consistently achieve performance improvements in operational settings.
Training for air traffic controllers (ATC) represents an excellent application for speech recognition systems. Many ATC training systems currently require a person to act as a "pseudo-pilot", engaging in a voice dialog with the trainee controller, which simulates the dialog that the controller would have to conduct with pilots in a real ATC situation. Speech recognition and synthesis techniques offer the potential to eliminate the need for a person to act as a pseudo-pilot, thus reducing training and support personnel. In theory, air traffic controller tasks are also characterized by highly structured speech as the primary output of the controller, which should reduce the difficulty of the speech recognition task. In practice, this is rarely the case. The FAA document 7110.65 details the phrases that should be used by air traffic controllers. While this document gives fewer than 150 examples of such phrases, the number of phrases supported by one simulation vendor's speech recognition system is in excess of 500,000.
The USAF, USMC, US Army, US Navy, and FAA as well as a number of international ATC training organizations such as the Royal Australian Air Force and Civil Aviation Authorities in Italy, Brazil, and Canada are currently using ATC simulators with speech recognition from a number of different vendors.
ASR is now commonplace in the field of telephony and is becoming more widespread in the field of computer gaming and simulation. In telephony systems, ASR is now predominantly used in contact centers by integrating it with IVR systems. Despite the high level of integration with word processing in general personal computing, ASR in the field of document production has not seen the expected increases in use.
The improvement of mobile processor speeds has made speech recognition practical in smartphones. Speech is used mostly as a part of a user interface, for creating predefined or custom speech commands.
Speech recognition can be useful for learning a second language. It can teach proper pronunciation, in addition to helping a person develop fluency in their speaking skills.
Students who are blind (see Blindness and education) or have very low vision can benefit from using the technology to convey words and then hear the computer recite them, as well as use a computer by commanding with their voice, instead of having to look at the screen and keyboard.
Students who are physically disabled or suffer from repetitive strain injury or other injuries to the upper extremities can be relieved from having to worry about handwriting, typing, or working with a scribe on school assignments by using speech-to-text programs. They can also utilize speech recognition technology to freely enjoy searching the Internet or using a computer at home without having to physically operate a mouse and keyboard.
Speech recognition can allow students with learning disabilities to become better writers. By saying the words aloud, they can increase the fluidity of their writing, and be alleviated of concerns regarding spelling, punctuation, and other mechanics of writing. Also, see Learning disability.
Use of voice recognition software, in conjunction with a digital audio recorder and a personal computer running word-processing software, has proven to be positive for restoring damaged short-term-memory capacity in stroke and craniotomy individuals.
People with disabilities can benefit from speech recognition programs. For individuals who are deaf or hard of hearing, speech recognition software is used to automatically generate closed captioning of conversations such as discussions in conference rooms, classroom lectures, and religious services.
Speech recognition is also very useful for people who have difficulty using their hands, ranging from mild repetitive stress injuries to involved disabilities that preclude using conventional computer input devices. In fact, people who used the keyboard a lot and developed RSI became an urgent early market for speech recognition. Speech recognition is used in deaf telephony, such as voicemail to text, relay services, and captioned telephone. Individuals with learning disabilities who have problems with thought-to-paper communication (essentially they think of an idea but it is processed incorrectly, causing it to end up differently on paper) can possibly benefit from the software, but the technology is not bug proof. Speech-to-text can also be hard for intellectually disabled people, because it is rare that anyone takes the time to learn the technology in order to teach it to the person with the disability.
This type of technology can help those with dyslexia, but its usefulness for other disabilities is still in question. The effectiveness of the product is the main obstacle to its adoption: depending on how clearly a child says a word, the technology may register a different word and input the wrong one, giving the child extra work and requiring more time to fix the incorrect word.
The performance of speech recognition systems is usually evaluated in terms of accuracy and speed. Accuracy is usually rated with word error rate (WER), whereas speed is measured with the real time factor. Other measures of accuracy include Single Word Error Rate (SWER) and Command Success Rate (CSR).
Speech recognition by machine is a very complex problem, however. Vocalizations vary in terms of accent, pronunciation, articulation, roughness, nasality, pitch, volume, and speed. Speech is also distorted by background noise, echoes, and electrical characteristics. Accuracy of speech recognition may vary with the following:
With discontinuous speech, full sentences separated by silence are used; therefore, it becomes easier to recognize the speech, as it does with isolated speech.
With continuous speech, naturally spoken sentences are used; therefore, it becomes harder to recognize the speech than with either isolated or discontinuous speech.
Constraints are often represented by a grammar.
Speech recognition is a multi-leveled pattern recognition task.
e.g., known word pronunciations or legal word sequences, which can compensate for errors or uncertainties at a lower level;
For telephone speech, the sampling rate is 8,000 samples per second;
computed every 10 ms, with one 10 ms section called a frame;
Analysis of four-step neural network approaches can be explained with further information. Sound is produced by the vibration of air (or some other medium), which we register with our ears, while machines register it with receivers. A basic sound creates a wave that has two descriptions: amplitude (how strong it is) and frequency (how often it vibrates per second).
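As a concrete sketch of the notions above (a 10 ms frame at the 8,000 samples-per-second telephone rate, and a wave's amplitude and frequency), the following Python/NumPy example is illustrative only; the 440 Hz tone and all values are assumptions:

import numpy as np

# A hypothetical one-second 440 Hz tone sampled at the 8,000
# samples-per-second telephone rate mentioned above.
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
signal = 0.5 * np.sin(2 * np.pi * 440 * t)  # amplitude 0.5, frequency 440 Hz

# Slice the signal into the 10 ms sections ("frames") described above:
# 10 ms at 8,000 samples per second is 80 samples per frame.
frame_len = int(0.010 * sample_rate)
n_frames = len(signal) // frame_len
frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

# A magnitude spectrum per frame shows how strong (amplitude) each
# vibration rate (frequency) is within that frame.
spectra = np.abs(np.fft.rfft(frames, axis=1))
freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)

# With 80-sample frames the bins are 100 Hz apart, so the strongest
# bin of the first frame lands at 400 Hz, the bin nearest 440 Hz.
print(freqs[np.argmax(spectra[0])])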
Accuracy can be computed with the help of the word error rate (WER). The word error rate can be calculated by aligning the recognized word sequence with the reference word sequence using dynamic string alignment; this alignment is needed because the recognized and reference word sequences can differ in length.
Let S be the number of substitutions, D be the number of deletions, I be the number of insertions, and N be the number of words in the reference.
The formula to compute the word error rate (WER) is
WER = (S + D + I) / N
The word recognition rate (WRR) is computed from the word error rate (WER) by the formula
WRR = 1 - WER = (N - S - D - I) / N = (H - I) / N
Here H is the number of correctly recognized words, H = N - (S + D).
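A minimal implementation of this computation, using dynamic string alignment (word-level edit distance) as described above, might look like the following Python sketch; the example sentences are hypothetical:

def word_error_rate(reference, hypothesis):
    """WER = (S + D + I) / N, computed by dynamic string alignment."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + sub,  # substitution or match
                          d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1)        # insertion
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("hat" for "cat") and one deletion ("down") against
# a four-word reference: WER = (1 + 1 + 0) / 4 = 0.5.
print(word_error_rate("the cat sat down", "the hat sat"))

The alignment also resolves the length-mismatch problem noted above, since insertions and deletions are counted explicitly rather than by naive position-by-position comparison.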
Speech recognition can become a means of attack, theft, or accidental operation. For example, activation words like "Alexa" spoken in an audio or video broadcast can cause devices in homes and offices to start listening for input inappropriately, or possibly take an unwanted action. Voice-controlled devices are also accessible to visitors to the building, or even those outside the building if they can be heard inside. Attackers may be able to gain access to personal information, like calendar, address book contents, private messages, and documents. They may also be able to impersonate the user to send messages or make online purchases.
Two attacks have been demonstrated that use artificial sounds. One transmits ultrasound and attempts to send commands without nearby people noticing. The other adds small, inaudible distortions to other speech or music that are specially crafted to confuse the specific speech recognition system into recognizing music as speech, or to make what sounds like one command to a human sound like a different command to the system.
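The second style of attack can be sketched abstractly as a gradient-based adversarial perturbation. The function below is a generic, hypothetical illustration (the model, loss function, and step size are all placeholders; real published attacks against deployed recognizers are considerably more elaborate):

import torch

def craft_distortion(model, audio, wrong_target, loss_fn, epsilon=1e-3):
    """Add a small crafted distortion that nudges the recognizer's
    output toward an attacker-chosen transcription (an FGSM-style step)."""
    audio = audio.clone().detach().requires_grad_(True)
    loss = loss_fn(model(audio), wrong_target)  # loss w.r.t. the wrong label
    loss.backward()
    # Step against the gradient so the loss on the attacker's target
    # decreases; a small epsilon keeps the distortion near-inaudible.
    return (audio - epsilon * audio.grad.sign()).detach()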
Popular speech recognition conferences held each year or two include SpeechTEK and SpeechTEK Europe, ICASSP, Interspeech/Eurospeech, and the IEEE ASRU. Conferences in the field of natural language processing, such as ACL, NAACL, EMNLP, and HLT, are beginning to include papers on speech processing. Important journals include the IEEE Transactions on Speech and Audio Processing (later renamed IEEE Transactions on Audio, Speech and Language Processing and since Sept 2014 renamed IEEE/ACM Transactions on Audio, Speech and Language Processing—after merging with an ACM publication), Computer Speech and Language, and Speech Communication.
Books like "Fundamentals of Speech Recognition" by Lawrence Rabiner can be useful to acquire basic knowledge but may not be fully up to date (1993). Another good source can be "Statistical Methods for Speech Recognition" by Frederick Jelinek and "Spoken Language Processing (2001)" by Xuedong Huang etc., "Computer Speech", by Manfred R. Schroeder, second edition published in 2004, and "Speech Processing: A Dynamic and Optimization-Oriented Approach" published in 2003 by Li Deng and Doug O'Shaughnessey. The updated textbook "Speech and Language Processing" (2008) by Jurafsky and Martin presents the basics and the state of the art for ASR. Speaker recognition also uses the same features, most of the same front-end processing, and classification techniques as is done in speech recognition. A comprehensive textbook, "Fundamentals of Speaker Recognition" is an in depth source for up to date details on the theory and practice. A good insight into the techniques used in the best modern systems can be gained by paying attention to government sponsored evaluations such as those organised by DARPA (the largest speech recognition-related project ongoing as of 2007 is the GALE project, which involves both speech recognition and translation components).
A good and accessible introduction to speech recognition technology and its history is provided by the general audience book "The Voice in the Machine. Building Computers That Understand Speech" by Roberto Pieraccini (2012).
The most recent book on speech recognition is "Automatic Speech Recognition: A Deep Learning Approach" (Publisher: Springer) written by Microsoft researchers D. Yu and L. Deng and published near the end of 2014, with highly mathematically oriented technical detail on how deep learning methods are derived and implemented in modern speech recognition systems based on DNNs and related deep learning methods. A related book, published earlier in 2014, "Deep Learning: Methods and Applications" by L. Deng and D. Yu provides a less technical but more methodology-focused overview of DNN-based speech recognition during 2009–2014, placed within the more general context of deep learning applications including not only speech recognition but also image recognition, natural language processing, information retrieval, multimodal processing, and multitask learning.
In terms of freely available resources, Carnegie Mellon University's Sphinx toolkit is one place to start both to learn about speech recognition and to start experimenting. Another resource (free but copyrighted) is the HTK book (and the accompanying HTK toolkit). For more recent and state-of-the-art techniques, the Kaldi toolkit can be used. In 2017, Mozilla launched the open source project Common Voice to gather a big database of voices to help build the free speech recognition project DeepSpeech (available free on GitHub), using Google's open source platform TensorFlow.
Commercial cloud-based speech recognition APIs are broadly available from AWS, Azure, IBM, and GCP.
A demonstration of an on-line speech recognizer is available on Cobalt's webpage.
For more software resources, see List of speech recognition software. | https://en.wikipedia.org/wiki?curid=29468 |
Sapphire
Sapphire is a precious gemstone, a variety of the mineral corundum, consisting of aluminum oxide (Al2O3) with trace amounts of elements such as iron, titanium, chromium, vanadium, or magnesium. It is typically blue, but natural "fancy" sapphires also occur in yellow, purple, orange, and green colors; "parti sapphires" show two or more colors. Red corundum stones also occur, but are called rubies rather than sapphires. Pink-colored corundum may be classified either as ruby or as sapphire depending on locale.
Commonly, natural sapphires are cut and polished into gemstones and worn in jewelry. They also may be created synthetically in laboratories for industrial or decorative purposes in large crystal boules. Because of the remarkable hardness of sapphires – 9 on the Mohs scale (the third hardest mineral, after diamond at 10 and moissanite at 9.5) – sapphires are also used in some non-ornamental applications, such as infrared optical components, high-durability windows, wristwatch crystals and movement bearings, and very thin electronic wafers, which are used as the insulating substrates of special-purpose solid-state electronics such as integrated circuits and GaN-based blue LEDs.
Sapphire is the birthstone for September and the gem of the 45th anniversary. A sapphire jubilee occurs after 65 years.
Sapphire is one of the two gem-varieties of corundum, the other being ruby (defined as corundum in a shade of red). Although blue is the best-known sapphire color, they occur in other colors, including gray and black, and they can be colorless. A pinkish orange variety of sapphire is called padparadscha.
Significant sapphire deposits are found in Australia, Cambodia, Cameroon, China (Shandong), Colombia, Ethiopia, India (Kashmir), Kenya, Laos, Madagascar, Malawi, Mozambique, Myanmar (Burma), Nigeria, Rwanda, Sri Lanka, Tanzania, Thailand, United States (Montana) and Vietnam. Sapphire and rubies are often found in the same geographical settings, but they generally have different geological formations. For example, both ruby and sapphire are found in Myanmar's Mogok Stone Tract, but the rubies form in marble, while the sapphire forms in granitic pegmatites or corundum syenites.
Every sapphire mine produces a wide range of quality, and origin is not a guarantee of quality. For sapphire, Kashmir receives the highest premium, although Burma, Sri Lanka, and Madagascar also produce large quantities of fine quality gems.
The cost of natural sapphires varies depending on their color, clarity, size, cut, and overall quality. Sapphires that are completely untreated are worth far more than those that have been treated. Geographical origin also has a major impact on price. For most gems of one carat or more, an independent report from a respected laboratory such as American Gemological Laboratories (AGL), Gem Research Swisslab (GRS), GIA, Gübelin, Lotus Gemology, or SSEF, is often required by buyers before they will make a purchase.
Gemstone color can be described in terms of hue, saturation, and tone. Hue is commonly understood as the "color" of the gemstone. Saturation refers to the vividness or brightness of the hue, and tone is the lightness to darkness of the hue. Blue sapphire exists in various mixtures of its primary (blue) and secondary hues, various tonal levels (shades) and at various levels of saturation (vividness).
Blue sapphires are evaluated based upon the purity of their blue hue. Violet and green are the most common secondary hues found in blue sapphires. The highest prices are paid for gems that are pure blue and of vivid saturation. Gems that are of lower saturation, or are too dark or too light in tone, are of less value. However, color preferences are a matter of personal taste, like a flavor of ice cream.
The Logan sapphire in the National Museum of Natural History, in Washington, D.C., is one of the largest faceted gem-quality blue sapphires in existence. The 422.66-carat Siren of Serendip in the Houston Museum of Natural Science is another notable example of a Sri Lankan sapphire on public display.
Sapphires in colors other than blue are called "fancy" or "parti colored" sapphires.
Fancy sapphires are often found in yellow, orange, green, brown, purple and violet hues.
Particolored sapphires are those stones which exhibit two or more colors within a single stone. Australia is the largest source of particolored sapphires; they are not commonly used in mainstream jewelry and remain relatively unknown. Particolored sapphires cannot be created synthetically and only occur naturally.
Colorless sapphires have historically been used as diamond substitutes in jewelry.
Pink sapphires occur in shades from light to dark pink, and deepen in color as the quantity of chromium increases. The deeper the pink color, the higher their monetary value. In the United States, a minimum color saturation must be met to be called a ruby, otherwise the stone is referred to as a "pink sapphire".
"Padparadscha" is a delicate, light to medium toned, pink-orange to orange-pink hued corundum, originally found in Sri Lanka, but also found in deposits in Vietnam and parts of East Africa. Padparadscha sapphires are rare; the rarest of all is the totally natural variety, with no sign of artificial treatment.
The name is derived from the Sanskrit "padma ranga" (padma = lotus; ranga = color), a color akin to the lotus flower ("Nelumbo nucifera").
Among the fancy (non-blue) sapphires, natural padparadscha fetch the highest prices. Since 2001, more sapphires of this color have appeared on the market as a result of artificial lattice diffusion of beryllium.
A "star sapphire" is a type of sapphire that exhibits a star-like phenomenon known as asterism; red stones are known as "star rubies". Star sapphires contain intersecting needle-like inclusions following the underlying crystal structure that causes the appearance of a six-rayed "star"-shaped pattern when viewed with a single overhead light source. The inclusion is often the mineral rutile, a mineral composed primarily of titanium dioxide. The stones are cut "en cabochon", typically with the center of the star near the top of the dome. Occasionally, twelve-rayed stars are found, typically because two different sets of inclusions are found within the same stone, such as a combination of fine needles of rutile with small platelets of hematite; the first results in a whitish star and the second results in a golden-colored star. During crystallization, the two types of inclusions become preferentially oriented in different directions within the crystal, thereby forming two six-rayed stars that are superimposed upon each other to form a twelve-rayed star. Misshapen stars or 12-rayed stars may also form as a result of twinning.
The inclusions can alternatively produce a "cat's eye" effect if the girdle plane of the cabochon is oriented parallel to the crystal's c-axis rather than perpendicular to it. To get a cat's eye, the planes of exsolved inclusions must be extremely uniform and tightly packed. If the dome is oriented in between these two directions, an 'off-center' star will be visible, offset away from the high point of the dome.
At 1404.49 carats, the Star of Adam is claimed to be the largest blue star sapphire, but whenever such claims are made, one should be careful not to equate size with quality or value. The gem was mined in the city of Ratnapura, southern Sri Lanka. The Black Star of Queensland, the second largest star sapphire in the world, weighs 733 carats. The Star of India, mined in Sri Lanka and weighing 563.4 carats, is thought to be the third-largest star sapphire, and is currently on display at the American Museum of Natural History in New York City. The 182-carat Star of Bombay, mined in Sri Lanka and located in the National Museum of Natural History in Washington, D.C., is another example of a large blue star sapphire. The value of a star sapphire depends not only on the weight of the stone, but also the body color, visibility, and intensity of the asterism. A common mistake made by novices is to value stones with strong stars the highest. In fact, the color of the stone has more impact on the value than the visibility of the star. Since more transparent stones tend to have better colors, the most expensive star stones are semi-transparent "glass body" stones with vivid colors.
Large rubies and sapphires of poor transparency are frequently used with suspect appraisals that vastly overstate their value. This was the case of the "Life and Pride of America Star Sapphire". Circa 1985, Roy Whetstine claimed to have bought the 1905-ct stone for $10 at the Tucson gem show, but a reporter discovered that L.A. Ward of Fallbrook, CA, who appraised it at the price of $1200/ct, had appraised another stone of the exact same weight several years before Whetstine claimed to have found it.
Bangkok-based Lotus Gemology maintains an updated listing of world auction records of ruby, sapphire, and spinel. As of November 2019, no sapphire has ever sold at auction for more than $17,295,796.
A rare variety of natural sapphire, known as color-change sapphire, exhibits different colors in different light. Color change sapphires are blue in outdoor light and purple under incandescent indoor light, or green to gray-green in daylight and pink to reddish-violet in incandescent light. Color change sapphires come from a variety of locations, including Madagascar, Myanmar, Sri Lanka and Tanzania. Two types exist. The first features the chromium chromophore that creates the red color of ruby, combined with the iron + titanium chromophore that produces the blue color in sapphire. A more rare type, which comes from the Mogok area of Myanmar, features a vanadium chromophore, the same as is used in Verneuil synthetic color-change sapphire.
Virtually all gemstones that show the "alexandrite effect" (color change; a.k.a. 'metamerism') show similar absorption/transmission features in the visible spectrum. This is an absorption band in the yellow (~590 nm), along with valleys of transmission in the blue-green and red. Thus the color one sees depends on the spectral composition of the light source. Daylight is relatively balanced in its spectral power distribution (SPD) and since the human eye is most sensitive to green light, the balance is tipped to the green side. However incandescent light (including candle light) is heavily tilted to the red end of the spectrum, thus tipping the balance to red.
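A rough numeric sketch of this reasoning follows; the transmission curve and blackbody illuminants in this Python example are hypothetical stand-ins, not measured gemstone or lighting data:

import numpy as np

# Wavelength grid across the visible spectrum, in nanometers.
wl = np.linspace(380, 750, 371)

# A hypothetical alexandrite-like curve: absorption near 590 nm with
# transmission windows in the blue-green and the red.
transmission = (np.exp(-((wl - 480) / 40) ** 2)
                + np.exp(-((wl - 680) / 40) ** 2))

def blackbody_spd(wl_nm, temp_k):
    # Planck's law (up to a constant factor) as a stand-in illuminant SPD.
    wl_m = wl_nm * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (wl_m ** 5 * (np.exp(h * c / (wl_m * k * temp_k)) - 1.0))

for name, temp_k in [("daylight-like, 6500 K", 6500),
                     ("incandescent-like, 2856 K", 2856)]:
    out = blackbody_spd(wl, temp_k) * transmission  # light passing the stone
    blue_green = out[(wl > 440) & (wl < 520)].sum()
    red = out[(wl > 620) & (wl < 720)].sum()
    print(name, "red / blue-green =", red / blue_green)

The red-to-blue-green ratio of the transmitted light comes out far larger under the red-tilted incandescent source, matching the described shift toward red or purple under candle or incandescent light.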
Color-change sapphires colored by the Cr + Fe/Ti chromophores generally change from blue or violetish blue to violet or purple. Those colored by the V chromophore can show a more pronounced change, moving from blue-green to purple.
Certain synthetic color-change sapphires have a similar color change to the natural gemstone alexandrite and they are sometimes marketed as "alexandrium" or "synthetic alexandrite". However, the latter term is a misnomer: synthetic color-change sapphires are, technically, not synthetic alexandrites but rather alexandrite "simulants". This is because genuine alexandrite is a variety of chrysoberyl: not sapphire, but an entirely different mineral.
Rubies are corundum with a dominant red body color. This is generally caused by traces of chromium (Cr3+) substituting for the (Al3+) ion in the corundum structure. The color can be modified by both iron and trapped hole color centers.
Unlike localized ("intra-atomic") absorption of light which causes color for chromium and vanadium impurities, blue color in sapphires comes from intervalence charge transfer, which is the transfer of an electron from one transition-metal ion to another via the conduction or valence band. The iron can take the form Fe2+ or Fe3+, while titanium generally takes the form Ti4+. If Fe2+ and Ti4+ ions are substituted for Al3+, localized areas of charge imbalance are created. An electron transfer from Fe2+ and Ti4+ can cause a change in the valence state of both. Because of the valence change there is a specific change in energy for the electron, and electromagnetic energy is absorbed. The wavelength of the energy absorbed corresponds to yellow light. When this light is subtracted from incident white light, the complementary color blue results. Sometimes when atomic spacing is different in different directions there is resulting blue-green dichroism.
Purple sapphires contain trace amounts of chromium and iron plus titanium and come in a variety of shades. Corundum that contains extremely low levels of chromophores is near colorless. Completely colorless corundum generally does not exist in nature. If trace amounts of iron are present, a very pale yellow to green color may be seen. However, if both titanium and iron impurities are present together, and in the correct valence states, the result is a blue color.
Intervalence charge transfer is a process that produces a strong colored appearance at a low percentage of impurity. While at least 1% chromium must be present in corundum before the deep red ruby color is seen, sapphire blue is apparent with the presence of only 0.01% of titanium and iron.
The most complete extant description of the causes of color in corundum can be found in Chapter 4 of Ruby & Sapphire: A Gemologist's Guide (chapter authored by John Emmett, Emily Dubinsky and Richard Hughes).
Sapphires can be treated by several methods to enhance and improve their clarity and color. It is common practice to heat natural sapphires to improve or enhance their appearance. This is done by heating the sapphires in furnaces to high temperatures for several hours, or even weeks at a time. Different atmospheres may be used. Upon heating, the stone becomes more blue in color, but loses some of its rutile inclusions (silk). When high temperatures (1400 °C+) are used, exsolved rutile silk is dissolved and the stone becomes clear under magnification. The titanium from the rutile enters solid solution and, together with iron, creates the blue color. The inclusions in natural stones are easily seen with a jeweler's loupe. Evidence of sapphire and other gemstones being subjected to heating goes back at least to Roman times. Unheated natural stones are somewhat rare and will often be sold accompanied by a certificate from an independent gemological laboratory attesting to "no evidence of heat treatment".
Yogo sapphires do not need heat treating because their cornflower blue color is attractive out of the ground; they are generally free of inclusions and have high uniform clarity. When Intergem Limited began marketing the Yogo in the 1980s as the world's only guaranteed untreated sapphire, heat treatment was not commonly disclosed; by the late 1980s, heat treatment became a major issue. At that time, most of the world's sapphires were being heated to enhance their natural color. Intergem's marketing of guaranteed untreated Yogos set it against many in the gem industry. This issue appeared as a front-page story in the "Wall Street Journal" on 29 August 1984, in an article by Bill Richards, "Carats and Schticks: Sapphire Marketer Upsets The Gem Industry". However, the biggest problem the Yogo mine faced was not competition from heated sapphires, but the fact that the Yogo stones could never produce quantities of sapphire above one carat after faceting. As a result, it has remained a niche product, with a market that largely exists in the US.
Lattice ('bulk') diffusion treatments are used to add impurities to the sapphire to enhance color. This process was originally developed and patented by Linde Air division of Union Carbide and involved diffusing titanium into synthetic sapphire to even out the blue color. It was later applied to natural sapphire. Today, titanium diffusion often uses a synthetic colorless sapphire base. The color layer created by titanium diffusion is extremely thin (less than 0.5 mm). Thus repolishing can and does produce slight to significant loss of color. Chromium diffusion has been attempted, but was abandoned due to the slow diffusion rates of chromium in corundum.
In the year 2000, beryllium-diffused "padparadscha" colored sapphires entered the market. Typically, beryllium is diffused into a sapphire under very high heat, just below the melting point of the sapphire. Initially ("c." 2000) orange sapphires were created, although now the process has been advanced and many colors of sapphire are often treated with beryllium. Due to the small size of the beryllium ion, the color penetration is far greater than with titanium diffusion. In some cases, it may penetrate the entire stone. Beryllium-diffused orange sapphires may be difficult to detect, requiring advanced chemical analysis by gemological labs ("e.g.", Gübelin, SSEF, GIA, American Gemological Laboratories (AGL), Lotus Gemology).
According to United States Federal Trade Commission guidelines, disclosure is required of any mode of enhancement that has a significant effect on the gem's value.
There are several ways of treating sapphire. Heat-treatment in a reducing or oxidizing atmosphere (but without the use of any other added impurities) is commonly used to improve the color of sapphires, and this process is sometimes known as "heating only" in the gem trade. In contrast, however, heat treatment combined with the deliberate addition of certain specific impurities (e.g. beryllium, titanium, iron, chromium or nickel, which are absorbed into the crystal structure of the sapphire) is also commonly performed, and this process can be known as "diffusion" in the gem trade. However, despite what the terms "heating only" and "diffusion" might suggest, both of these categories of treatment actually involve diffusion processes.
The most complete extant description of corundum treatments can be found in Chapter 6 of Ruby & Sapphire: A Gemologist's Guide (chapter authored by John Emmett, Richard Hughes and Troy R. Douthit).
Sapphires are mined from alluvial deposits or from primary underground workings. Commercial mining locations for sapphire and ruby include (but are not limited to) the following countries: Afghanistan, Australia, Myanmar/Burma, Cambodia, China, Colombia, India, Kenya, Laos, Madagascar, Malawi, Nepal, Nigeria, Pakistan, Sri Lanka, Tajikistan, Tanzania, Thailand, United States, and Vietnam. Sapphires from different geographic locations may have different appearances or chemical-impurity concentrations, and tend to contain different types of microscopic inclusions. Because of this, sapphires can be divided into three broad categories: classic metamorphic, non-classic metamorphic or magmatic, and classic magmatic.
Sapphires from certain locations, or of certain categories, may be more commercially appealing than others, particularly classic metamorphic sapphires from Kashmir, Burma, or Sri Lanka that have not been subjected to heat-treatment.
The Logan sapphire, the Star of India, the Star of Adam and the Star of Bombay originate from Sri Lankan mines. Madagascar is the world leader in sapphire production (as of 2007), specifically its deposits in and around the town of Ilakaka. Prior to the opening of the Ilakaka mines, Australia was the largest producer of sapphires (such as in 1987). In 1991, a new source of sapphires was discovered in Andranondambo, southern Madagascar. That area was exploited for its sapphires starting in 1993, but it was practically abandoned just a few years later because of the difficulties in recovering sapphires from their bedrock.
In North America, sapphires have been mined mostly from deposits in Montana: fancies along the Missouri River near Helena, Montana, Dry Cottonwood Creek near Deer Lodge, Montana, and Rock Creek near Philipsburg, Montana. Fine blue Yogo sapphires are found at Yogo Gulch west of Lewistown, Montana. A few gem-grade sapphires and rubies have also been found in the area of Franklin, North Carolina.
The sapphire deposits of Kashmir are well known in the gem industry, although their peak production took place in a relatively short period at the end of the nineteenth and early twentieth centuries. They have a superior vivid blue hue, coupled with a mysterious and almost sleepy quality, described by some gem enthusiasts as "blue velvet". Kashmir origin contributes meaningfully to the value of a sapphire, and most corundum of Kashmir origin can be readily identified by its characteristic silky appearance and exceptional hue. The unique blue appears lustrous under any kind of light, unlike non-Kashmir sapphires, which may appear purplish or grayish in comparison. Sotheby's has been at the forefront in overseeing record-breaking sales of Kashmir sapphires worldwide. In October 2014, Sotheby's Hong Kong achieved consecutive per-carat price records for Kashmir sapphires – first with the 12.00 carat Cartier sapphire ring at US$193,975 per carat, then with a 17.16 carat sapphire at US$236,404, and again in June 2015 when the per-carat auction record was set at US$240,205. At present, the world record price-per-carat for sapphire at auction is held by a sapphire from Kashmir in a ring, which sold in October 2015 for approximately US$242,000 per carat (HK$52,280,000 in total, including buyer's premium, or more than US$6.74 million).
In 1902, the French chemist Auguste Verneuil announced a process for producing synthetic ruby crystals. In the flame-fusion (Verneuil) process, fine alumina powder is added to an oxyhydrogen flame, and this is directed downward against a ceramic pedestal. Following the successful synthesis of ruby, Verneuil focused his efforts on sapphire. Synthesis of blue sapphire came in 1909, after chemical analyses of sapphire suggested to Verneuil that iron and titanium were the cause of the blue color. Verneuil patented the process of producing synthetic blue sapphire in 1911.
The key to the process is that the alumina powder does not melt as it falls through the flame. Instead it forms a sinter cone on the pedestal. When the tip of that cone reaches the hottest part of the flame, the tip melts. Thus the crystal growth is started from a tiny point, ensuring minimal strain.
Next, more oxygen is added to the flame, causing it to burn slightly hotter. This expands the growing crystal laterally. At the same time, the pedestal is lowered at the same rate that the crystal grows vertically. The alumina in the flame is slowly deposited, creating a teardrop shaped "boule" of sapphire material. This step is continued until the desired size is reached, the flame is shut off and the crystal cools. The now elongated crystal contains a lot of strain due to the high thermal gradient between the flame and surrounding air. To release this strain, the now finger-shaped crystal will be tapped with a chisel to split it into two halves.
Due to the vertical layered growth of the crystal and the curved upper growth surface (which starts from a drop), the crystals will display curved growth lines following the top surface of the boule. This is in contrast to natural corundum crystals, which feature angular growth lines expanding from a single point and following the planar crystal faces.
Chemical dopants can be added to create artificial versions of ruby, and all the other natural colors of sapphire, and, in addition, other colors never seen in geological samples. Artificial sapphire material is identical to natural sapphire, except that it can be made without the flaws that are found in natural stones. The disadvantage of the Verneuil process is that the grown crystals have high internal strains. Many methods of manufacturing sapphire today are variations of the Czochralski process, which was invented in 1916 by Polish chemist Jan Czochralski. In this process, a tiny sapphire seed crystal is dipped into a crucible made of the precious metal iridium or of molybdenum, containing molten alumina, and then slowly withdrawn upward at a rate of 1 to 100 mm per hour. The alumina crystallizes on the end, creating long carrot-shaped boules of large size, up to 200 kg in mass.
Synthetic sapphire is also produced industrially from agglomerated aluminum oxide, sintered and fused (such as by hot isostatic pressing) in an inert atmosphere, yielding a transparent but slightly porous polycrystalline product.
In 2003, the world's production of synthetic sapphire was 250 tons (1.25 × 109 carats), mostly by the United States and Russia. The availability of cheap synthetic sapphire unlocked many industrial uses for this unique material.
Synthetic sapphire—sometimes referred to as "sapphire glass"—is commonly used as a window material, because it is both highly transparent to wavelengths of light between 150 nm (UV) and 5500 nm (IR) (the visible spectrum extends about 380 nm to 750 nm), and extraordinarily scratch-resistant.
The key benefits of sapphire windows are:
Some sapphire-glass windows are made from pure sapphire boules that have been grown in a specific crystal orientation, typically along the optical axis, the c-axis, for minimum birefringence for the application.
The boules are sliced up into the desired window thickness and finally polished to the desired surface finish. Sapphire optical windows can be polished to a wide range of surface finishes due to its crystal structure and its hardness. The surface finishes of optical windows are normally called out by the scratch-dig specifications in accordance with the globally adopted MIL-O-13830 specification.
The sapphire windows are used in both high pressure and vacuum chambers for spectroscopy, crystals in various watches, and windows in grocery store barcode scanners since the material's exceptional hardness and toughness makes it very resistant to scratching.
It is used for end windows on some high-powered laser tubes as its wide-band transparency and thermal conductivity allow it to handle very high power densities in the infra-red or UV spectrum without degrading due to heating.
Along with zirconia and aluminum oxynitride, synthetic sapphire is used for shatter resistant windows in armored vehicles and various military body armor suits, in association with composites.
One type of xenon arc lamp – originally called the "Cermax" and now known generically as the "ceramic body xenon lamp" – uses sapphire crystal output windows. This product tolerates higher thermal loads, and thus higher output powers, than conventional Xe lamps with pure silica windows.
Thin sapphire wafers were the first successful use of an insulating substrate upon which to deposit silicon to make the integrated circuits known as silicon on sapphire or "SOS"; now other substrates can also be used for the class of circuits known more generally as silicon on insulator. Besides its excellent electrical insulating properties, sapphire has high thermal conductivity. CMOS chips on sapphire are especially useful for high-power radio-frequency (RF) applications such as those found in cellular telephones, public-safety band radios, and satellite communication systems. "SOS" also allows for the monolithic integration of both digital and analog circuitry all on one IC chip, and the construction of extremely low power circuits.
In one process, after single crystal sapphire boules are grown, they are core-drilled into cylindrical rods, and wafers are then sliced from these cores.
Wafers of single-crystal sapphire are also used in the semiconductor industry as substrates for the growth of devices based on gallium nitride (GaN). The use of sapphire significantly reduces the cost, because it has about one-seventh the cost of germanium. Gallium nitride on sapphire is commonly used in blue light-emitting diodes (LEDs).
The first laser was made with a rod of synthetic ruby. Titanium-sapphire lasers are popular due to their relatively rare capacity to be tuned to various wavelengths in the red and near-infrared region of the electromagnetic spectrum. They can also be easily mode-locked. In these lasers a synthetically produced sapphire crystal with chromium or titanium impurities is irradiated with intense light from a special lamp, or another laser, to create stimulated emission.
Monocrystalline sapphire is fairly biocompatible, and the exceptionally low wear of sapphire–metal pairs has led to the introduction (in Ukraine) of sapphire monocrystals for hip joint endoprostheses.
Extensive tables listing over a hundred important and famous rubies and sapphires can be found in Chapter 10 of Ruby & Sapphire: A Gemologist's Guide. | https://en.wikipedia.org/wiki?curid=29469 |
Salvation
Salvation is being saved or protected from harm or being saved or delivered from a dire situation. In religion, salvation generally refers to the saving of the soul from sin and its consequences.
The academic study of salvation is called soteriology.
In religion, salvation is the saving of the soul from sin and its consequences. It may also be called deliverance or redemption from sin and its effects. Salvation is considered to be caused, depending on the religion or even denomination, either only by the grace of God (i.e. unmerited and unearned) or by faith or good deeds (works) or a combination thereof. Religions often emphasize that man is a sinner by nature and that the penalty of sin is death (physical death, spiritual death: spiritual separation from God and eternal punishment in hell).
In contemporary Judaism, redemption (Hebrew: "ge'ulah") refers to God redeeming the people of Israel from their various exiles. This includes the final redemption from the present exile.
Judaism holds that adherents do not need personal salvation as Christians believe. Jews do not subscribe to the doctrine of original sin. Instead, they place a high value on individual morality as defined in the law of God — embodied in what Jews know as the Torah or The Law, given to Moses by God on biblical Mount Sinai.
In Judaism, salvation is closely related to the idea of redemption, a saving from the states or circumstances that destroy the value of human existence. God, as the universal spirit and Creator of the World, is the source of all salvation for humanity, provided an individual honours God by observing his precepts. So redemption or salvation depends on the individual. Judaism stresses that salvation cannot be obtained through anyone else or by just invoking a deity or believing in any outside power or influence.
When examining Jewish intellectual sources throughout history, there is clearly a spectrum of opinions regarding death versus the afterlife. At the risk of over-simplification, one source says salvation can be achieved in the following manner: live a holy and righteous life dedicated to Yahweh, the God of Creation, and fast, worship, and celebrate during the appropriate holidays.
By origin and nature, Judaism is an ethnic religion. Therefore, salvation has been primarily conceived in terms of the destiny of Israel as the elect people of Yahweh (often referred to as “the Lord”), the God of Israel. In the biblical text of Psalms, there is a description of death, when people go into the earth or the "realm of the dead" and cannot praise God. The first reference to resurrection is collective in Ezekiel's vision of the dry bones, when all the Israelites in exile will be resurrected. There is a reference to individual resurrection in the Book of Daniel (165 BCE), the last book of the Hebrew Bible. It was not until the 2nd century BCE that there arose a belief in an afterlife, in which the dead would be resurrected and undergo divine judgment. Before that time, the individual had to be content that his posterity continued within the holy nation.
The salvation of the individual Jew was connected to the salvation of the entire people. This belief stemmed directly from the teachings of the Torah. In the Torah, God taught his people sanctification of the individual. However, he also expected them to function together (spiritually) and be accountable to one another. The concept of salvation was tied to that of restoration for Israel.
Christianity's primary premise is that the incarnation and death of Jesus Christ formed the climax of a divine plan for humanity's salvation. This plan was conceived by God consequent on the Fall of Adam, the progenitor of the human race, and it would be completed at the Last Judgment, when the Second Coming of Christ would mark the catastrophic end of the world.
For Christianity, salvation is only possible through Jesus Christ. Christians believe that Jesus' death on the cross was the once-for-all sacrifice that atoned for the sin of humanity.
The Christian religion, though not the exclusive possessor of the idea of redemption, has given to it a special definiteness and a dominant position. Taken in its widest sense, as deliverance from dangers and ills in general, most religions teach some form of it. It assumes an important position, however, only when the ills in question form part of a great system against which human power is helpless.
According to Christian belief, sin as the human predicament is considered to be universal. For example, the Apostle Paul declared everyone to be under sin—Jew and Gentile alike. Salvation is made possible by the life, death, and resurrection of Jesus, which in the context of salvation is referred to as the "atonement". Christian soteriology ranges from exclusive salvation to universal reconciliation concepts. While some of the differences are as widespread as Christianity itself, the overwhelming majority agrees that salvation is made possible by the work of Jesus Christ, the Son of God, dying on the cross.
Variant views on salvation are among the main fault lines dividing the various Christian denominations, both between Roman Catholicism and Protestantism and within Protestantism, notably in the Calvinist–Arminian debate, and the fault lines include conflicting definitions of depravity, predestination, atonement, but most pointedly justification.
Salvation, according to most denominations, is believed to be a process that begins when a person first becomes a Christian, continues through that person's life, and is completed when they stand before Christ in judgment. Therefore, according to Catholic apologist James Akin, the faithful Christian can say in faith and hope, "I "have been" saved; I "am being" saved; and I "will be" saved."
Christian salvation concepts are varied and complicated by certain theological concepts, traditional beliefs, and dogmas. Scripture is subject to individual and ecclesiastical interpretations.
The purpose of salvation is debated, but in general most Christian theologians agree that God devised and implemented his plan of salvation because he loves human beings and regards them as his children. Since human existence on Earth is said to be "given to sin", salvation also has connotations that deal with the liberation of human beings from sin, and from the suffering associated with the punishment of sin—i.e., "the wages of sin is death."
Christians believe that salvation depends on the grace of God. Stagg writes that a fact assumed throughout the Bible is that humanity is in "serious trouble from which we need deliverance…. The fact of sin as the human predicament is implied in the mission of Jesus, and it is explicitly affirmed in that connection". By its nature, salvation must answer to the plight of humankind as it actually is. Each individual's plight as sinner is the result of a fatal choice involving the whole person in bondage, guilt, estrangement, and death. Therefore, salvation must be concerned with the total person. "It must offer redemption from bondage, forgiveness for guilt, reconciliation for estrangement, renewal for the marred image of God".
According to doctrine of the Latter Day Saint movement, the plan of salvation is a plan that God created to save, redeem, and exalt humankind. The elements of this plan are drawn from various sources, including the Bible, Book of Mormon, Doctrine & Covenants, Pearl of Great Price, and numerous statements made by the leadership of The Church of Jesus Christ of Latter-day Saints (LDS Church). The first appearance of the graphical representation of the plan of salvation is in the 1952 missionary manual entitled "A Systematic Program for Teaching the Gospel."
In Islam, salvation refers to the eventual entrance to Paradise. Islam teaches that people who die disbelieving in God do not receive salvation. It also teaches that non-Muslims who die believing in God but disbelieving in his message (Islam) are left to his will. Those who die believing in the one God and his message (Islam) receive salvation.
Narrated Anas that Muhammad said,
Islam teaches that all who enter into Islam must remain so in order to receive salvation.
For those who have not been granted Islam or to whom the message has not been brought;
Belief in the “One God”, also known as the "Tawhid" (التَوْحيدْ) in Arabic, consists of two parts (or principles):
Islam also stresses that in order to gain salvation, one must also avoid sinning along with performing good deeds. Islam acknowledges the inclination of humanity towards sin. Therefore, Muslims are constantly commanded to seek God's forgiveness and repent. Islam teaches that no one can gain salvation simply by virtue of their belief or deeds, instead it is the Mercy of God, which merits them salvation. However, this repentance must not be used to sin any further. Islam teaches that God is Merciful.
Islam describes a true believer as having both love of God and fear of God. Islam also teaches that every person is responsible for their own sins. The Quran states:
Al-Agharr al-Muzani, a companion of Mohammad, reported that Ibn 'Umar stated to him that Mohammad said,
Sin in Islam is not a state, but an action (a bad deed); Islam teaches that a child is born sinless and, regardless of the beliefs of his parents, dies a Muslim; he enters heaven and does not enter hell.
There are acts of worship that Islam teaches to be mandatory. Islam is built on five principles. Narrated Ibn 'Umar that Muhammad said,
Not performing the mandatory acts of worship may deprive Muslims of the chance of salvation.
Hinduism, Buddhism, Jainism and Sikhism share certain key concepts, which are interpreted differently by different groups and individuals. In these religions one is not liberated from sin and its consequences, but from the "saṃsāra" (cycle of rebirth) perpetuated by passions and delusions and its resulting karma. They differ, however, on the exact nature of this liberation. Salvation is always self-attained in Dharmic traditions, and a more appropriate term would be "moksha" or "mukti", which mean liberation and release respectively. This state and the conditions considered necessary for its realization are described in early texts of Indian religion such as the Upanishads and the Pāli Canon, and later texts such as the Yoga Sutras of Patanjali and the Vedanta tradition. "Moksha" can be attained by sādhanā, literally "means of accomplishing something". It includes a variety of disciplines, such as yoga and meditation.
Nirvana is the profound peace of mind that is acquired with moksha (liberation). In Buddhism and Jainism, it is the state of being free from suffering. In Hindu philosophy, it is union with the Brahman (Supreme Being). The word literally means "blown out" (as in a candle) and refers, in the Buddhist context, to the blowing out of the fires of desire, aversion, and delusion, and the imperturbable stillness of mind acquired thereafter.
In Theravada Buddhism the emphasis is on one's own liberation from samsara. The Mahayana traditions emphasize the bodhisattva path, in which "each Buddha and Bodhisattva is a redeemer", assisting the Buddhist in seeking to achieve the redemptive state. The assistance rendered is a form of self-sacrifice on the part of the teachers, who would presumably be able to achieve total detachment from worldly concerns, but have instead chosen to remain engaged in the material world to the degree that this is necessary to assist others in achieving such detachment.
In Jainism, "salvation", "moksa" and "nirvana" are one and the same. When a soul ("atman") achieves moksa, it is released from the cycle of births and deaths, and achieves its pure self. It then becomes a "siddha" (literally means one who has accomplished his ultimate objective). Attaining Moksa requires annihilation of all "karmas", good and bad, because if karma is left, it must bear fruit. | https://en.wikipedia.org/wiki?curid=29473 |
Lockheed S-3 Viking
The Lockheed S-3 Viking is a 4-crew, twin-engine turbofan-powered jet aircraft that was used by the U.S. Navy (USN) primarily for anti-submarine warfare. In the late 1990s, the S-3B's mission focus shifted to surface warfare and aerial refueling. The Viking also provided electronic warfare and surface surveillance capabilities to a carrier battle group. A carrier-based, subsonic, all-weather, long-range, multi-mission aircraft, it carried automated weapon systems and was capable of extended missions with in-flight refueling. Because of its characteristic sound, it was nicknamed the "War Hoover" after the vacuum cleaner brand.
The S-3 was phased out from front-line fleet service aboard aircraft carriers in January 2009, with its missions taken over by aircraft like the P-3C Orion, P-8 Poseidon, Sikorsky SH-60 Seahawk and Boeing F/A-18E/F Super Hornet. Several aircraft were flown by Air Test and Evaluation Squadron Thirty (VX-30) at Naval Base Ventura County / NAS Point Mugu, California, for range clearance and surveillance operations on the NAVAIR Point Mugu Range until 2016 and one S-3 is operated by the National Aeronautics and Space Administration (NASA) at the NASA Glenn Research Center.
In the mid-1960s, the USN developed the VSX (Heavier-than-air, Anti-submarine, Experimental) requirement for a replacement for the piston-engined Grumman S-2 Tracker as an anti-submarine aircraft to fly off aircraft carriers. In August 1968, a team led by Lockheed and a Convair/Grumman team were asked to further develop their proposals to meet this requirement. Lockheed recognised that it had little experience in designing carrier based aircraft, so Ling-Temco-Vought (LTV) was brought into the team, being responsible for the folding wings and tail, the engine nacelles, and the landing gear, which was derived from LTV A-7 Corsair II (nose) and Vought F-8 Crusader (main). Sperry Univac Federal Systems was assigned the task of developing the aircraft's onboard computers which integrated input from sensors and sonobuoys.
On 4 August 1969, Lockheed's design was selected as the winner of the contest and eight prototypes, designated YS-3A, were ordered. The first prototype was flown on 21 January 1972 by military test pilot John Christiansen, and the S-3 entered service in 1974. During the production run from 1974 to 1978, a total of 186 S-3As were built. The majority of the surviving S-3As were later upgraded to the S-3B variant, with 16 aircraft converted into ES-3A Shadow electronic intelligence (ELINT) collection aircraft.
The S-3 is a conventional monoplane with a cantilever shoulder wing, very slightly swept with a leading edge angle of 15° and an almost straight trailing edge. Its two GE TF34 high-bypass turbofan engines, mounted in nacelles under the wings, provide excellent fuel efficiency, giving the Viking the required long range and endurance while maintaining docile engine-out characteristics.
The aircraft can seat 4 crew members (3 officers and 1 enlisted) with the pilot and copilot/tactical coordinator (COTAC) in the front of the cockpit and the tactical coordinator (TACCO) and sensor operator (SENSO) in the back. Entry is via a hatch/ladder folding down out of the lower starboard side of the fuselage behind the cockpit, between the front and rear seats. When the aircraft's anti-submarine warfare (ASW) role ended in the late 1990s, the enlisted SENSOs were removed from the crew. In tanker crew configuration, the S-3B typically flew with a pilot and co-pilot/COTAC. The wing is fitted with leading edge and Fowler flaps. Spoilers are fitted to both the upper and the lower surfaces of the wings. All control surfaces are actuated by dual hydraulically boosted irreversible systems. In the event of dual hydraulic failures, an Emergency Flight Control System (EFCS) permits manual control with greatly increased stick forces and reduced control authority.
Unlike many tactical jets which required ground service equipment, the S-3 was equipped with an auxiliary power unit (APU) and capable of unassisted starts. The aircraft's original APU could provide only minimal electric power and pressurized air for both aircraft cooling and for the engines' pneumatic starters. A newer, more powerful APU could provide full electrical service to the aircraft. The APU itself was started from a hydraulic accumulator by pulling a handle in the cockpit. The APU accumulator was fed from the primary hydraulic system, but could also be pumped up manually (with much effort) from the cockpit.
All crew members sit on forward-facing, upward-firing Douglas Escapac zero-zero ejection seats. In "group eject" mode, initiating ejection from either of the front seats ejects the entire crew in sequence, with the back seats ejecting 0.5 seconds before the front in order to provide safe separation. This prevented the pilots, who were more aware of what was happening outside the aircraft, from ejecting without the rest of the crew, or being forced to delay ejection to order the crew out in an emergency. Ejection from either rear seat would not eject the pilots, who had to initiate their own ejections; this prevented loss of the aircraft if a rear crewmember ejected prematurely, since if a pilot ejected prematurely the plane was lost anyway, and automatic ejection kept the crew from crashing with a pilot-less aircraft before they were aware of what had happened. The rear seats are capable of self ejection, and the ejection sequence includes a pyrotechnic charge that stows the rear keyboard trays out of the occupants' way immediately before ejection. Safe ejection requires the seats to be weighted in pairs; when flying with a single crewman in the back, the unoccupied seat is fitted with ballast.
At the time it entered the fleet, the S-3 introduced an unprecedented level of systems integration. Previous ASW aircraft like the Lockheed P-3 Orion and the S-3's predecessor, the Grumman S-2 Tracker, featured separate instrumentation and controls for each sensor system. Sensor operators often monitored paper traces, using mechanical calipers to make precise measurements and annotating data by writing on the scrolling paper. Beginning with the S-3, all sensor systems were integrated through a single General Purpose Digital Computer (GPDC). Each crew station had its own display; the co-pilot/COTAC, TACCO and SENSO displays were Multi-Purpose Displays (MPD) capable of displaying data from any of a number of systems. This new level of integration allowed the crew to consult with each other by examining the same data at multiple stations simultaneously, to manage workload by reassigning responsibility for a given sensor from one station to another, and to easily combine clues from each sensor to classify faint targets. Because of this, the 4-crew S-3 was considered roughly equivalent in capability to the much larger P-3 with its crew of 12.
The aircraft has two underwing hardpoints that can be used to carry fuel tanks, general purpose and cluster bombs, missiles, rockets, and storage pods. It also has four internal bomb bay stations that can be used to carry general-purpose bombs, aerial torpedoes, and special stores (B57 and B61 nuclear weapons). Fifty-nine sonobuoys are carried, as well as a dedicated Search and Rescue (SAR) chute. The S-3 is fitted with the ALE-39 countermeasure system and can carry up to 90 rounds of chaff, flares, and expendable jammers (or a combination of all) in three dispensers. A retractable magnetic anomaly detector (MAD) Boom is fitted in the tail.
In the late 1990s, the S-3B's role was changed from anti-submarine warfare (ASW) to anti-surface warfare (ASuW). At that time, the MAD Boom was removed, along with several hundred pounds of submarine detection electronics. With no remaining sonobuoy processing capability, most of the sonobuoy chutes were faired over with a blanking plate.
On 20 February 1974, the S-3A officially became operational with the Air Antisubmarine Squadron FORTY-ONE (VS-41), the "Shamrocks," at NAS North Island, California, which served as the initial S-3 Fleet Replacement Squadron (FRS) for both the Atlantic and Pacific Fleets until a separate Atlantic Fleet FRS, VS-27, was established in the 1980s. The first operational cruise of the S-3A took place in 1975 with the VS-21 "Fighting Redtails" aboard .
Starting in 1987, some S-3As were upgraded to S-3B standard with the addition of a number of new sensors, avionics, and weapons systems, including the capability to launch the AGM-84 Harpoon anti-ship missile. The S-3B could also be fitted with "buddy stores", external fuel tanks that allowed the Viking to refuel other aircraft. In July 1988, VS-30 became the first fleet squadron to receive the enhanced capability Harpoon/ISAR equipped S-3B, based at NAS Cecil Field in Jacksonville, Florida. 16 S-3As were converted to ES-3A Shadows for carrier-based electronic intelligence (ELINT) duties. Six aircraft, designated US-3A, were converted for a specialized utility and limited cargo Carrier onboard delivery (COD) requirement. Plans were also made to develop the KS-3A carrier-based tanker aircraft, but this program was ultimately cancelled after the conversion of just one early development S-3A.
With the collapse of the Soviet Union and the breakup of the Warsaw Pact, the Soviet-Russian submarine threat was perceived as much reduced, and the Vikings had the majority of their antisubmarine warfare equipment removed. The aircraft's mission subsequently changed to sea surface search, sea and ground attack, over-the-horizon targeting, and aircraft refueling. As a result, the S-3B after 1997 was typically crewed by one pilot and one copilot [NFO]; the additional seats in the S-3B could still support additional crew members for certain missions. To reflect these new missions the Viking squadrons were redesignated from "Air Antisubmarine Warfare Squadrons" to "Sea Control Squadrons."
Prior to the aircraft's retirement from front-line fleet use aboard US aircraft carriers, a number of upgrade programs were implemented. These include the Carrier Airborne Inertial Navigation System II (CAINS II) upgrade, which replaced older inertial navigation hardware with ring laser gyroscopes with a Honeywell EGI (Enhanced GPS Inertial Navigation System) and added digital electronic flight instruments (EFI). The Maverick Plus System (MPS) added the capability to employ the AGM-65E laser-guided or AGM-65F infrared-guided air-to-surface missile, and the AGM-84H/K Stand-off Land Attack Missile Expanded Response (SLAM/ER). The SLAM/ER is a GPS/inertial/infrared guided cruise missile derived from the AGM-84 Harpoon that can be controlled by the aircrew in the terminal phase of flight if an AWW-13 data link pod is carried by the aircraft.
The S-3B saw extensive service during the 1991 Gulf War, performing attack, tanker, and ELINT duties, and launching ADM-141 TALD decoys. This was the first time an S-3B was employed overland during an offensive air strike. The first such mission occurred when an aircraft from VS-24 attacked an Iraqi Silkworm missile site. The aircraft also participated in the Yugoslav wars in the 1990s and in Operation Enduring Freedom in 2001.
The first ES-3A was delivered in 1991, entering service after two years of testing. The Navy established two squadrons of eight ES-3A aircraft each, one in the Atlantic Fleet and one in the Pacific Fleet, to provide deploying carrier air wings with detachments of typically two aircraft, ten officers, and 55 enlisted aircrew, maintenance and support personnel (comprising and supporting four complete aircrews). The Pacific Fleet squadron, Fleet Air Reconnaissance Squadron FIVE (VQ-5), the "Sea Shadows," was originally based at the former NAS Agana, Guam, but later relocated to NAS North Island in San Diego, California, with the Pacific Fleet S-3 Viking squadrons when NAS Agana closed in 1995 as a result of a 1993 Base Realignment and Closure (BRAC) decision. The Atlantic Fleet squadron, the VQ-6 "Black Ravens," was originally based with all Atlantic Fleet S-3 Vikings at the former NAS Cecil Field in Jacksonville, Florida, but later moved east to NAS Jacksonville when NAS Cecil Field was closed in 1999 as a result of the same 1993 BRAC decision that closed NAS Agana.
The ES-3A operated primarily with carrier battle groups, providing organic 'Indications and Warning' support to the group and joint theater commanders. In addition to their warning and reconnaissance roles, and their extraordinarily stable handling characteristics and range, Shadows were a preferred recovery tanker (aircraft that provide refueling for returning aircraft). They averaged over 100 flight hours per month while deployed. Excessive utilization caused earlier than expected equipment replacement when Naval aviation funds were limited, making them an easy target for budget-driven decision makers. In 1999, both ES-3A squadrons and all 16 aircraft were decommissioned and the ES-3A inventory placed in Aerospace Maintenance and Regeneration Group (AMARG) storage at Davis-Monthan AFB, Arizona.
In March 2003, during Operation Iraqi Freedom, an S-3B Viking from Sea Control Squadron 38 (the "Red Griffins"), piloted by Richard McGrath Jr., successfully executed a time-sensitive strike, firing a laser-guided Maverick missile to neutralize a significant Iraqi naval and leadership target in the port city of Basra, Iraq. This was one of the few times in its operational history that the S-3B Viking was employed overland on an offensive combat air strike, and the first time it launched a laser-guided Maverick missile in combat.
On 1 May 2003, US President George W. Bush flew in the co-pilot seat of a VS-35 Viking from NAS North Island, California, to the aircraft carrier USS Abraham Lincoln off the California coast. There, he delivered his "Mission Accomplished" speech announcing the end of major combat in the 2003 invasion of Iraq. During the flight, the aircraft used the customary presidential callsign of "Navy One". The aircraft that President Bush flew in was retired shortly thereafter and on 15 July 2003 was accepted as an exhibit at the National Museum of Naval Aviation at NAS Pensacola, Florida.
Between July and December 2008 the VS-22 Checkmates, the last sea control squadron, operated a detachment of four S-3Bs from the Al Asad Airbase in Al Anbar Province, west of Baghdad. The planes were fitted with LANTIRN pods and they performed non-traditional intelligence, surveillance, and reconnaissance (NTISR). After more than 350 missions, the Checkmates returned to NAS Jacksonville, Florida, on 15 December 2008, prior to disestablishing on 29 January 2009.
Though a proposed airframe known as the Common Support Aircraft was once advanced as a successor to the S-3, E-2 and C-2, this plan failed to materialize. As the surviving S-3 airframes approached retirement, a Lockheed Martin full-scale fatigue test extended the service life of the aircraft by approximately 11,000 flight hours. This supported Navy plans to retire all Vikings from front-line fleet service by 2009 so that new strike fighter and multi-mission aircraft could be introduced to recapitalize the aging fleet inventory, with former Viking missions assumed by other fixed-wing and rotary-wing aircraft.
The final carrier based S-3B Squadron, VS-22 was decommissioned at NAS Jacksonville on 29 January 2009. Sea Control Wing Atlantic was decommissioned the following day on 30 January 2009, concurrent with the U.S. Navy retiring the last S-3B Viking from front-line Fleet service.
In June 2010 the first of three aircraft to patrol the Pacific Missile Test Center's range areas off California was reactivated and delivered. The jet's higher speed, 10-hour endurance, modern radar, and LANTIRN targeting pod allowed it to quickly confirm that the test range was clear of wayward ships and aircraft before tests commenced. These S-3Bs are flown by Air Test and Evaluation Squadron Thirty (VX-30) based at NAS Point Mugu, California. The NASA Glenn Research Center also acquired four S-3Bs in 2005. Since 2009, one of these aircraft (USN BuNo 160607) has also carried the civil registration N601NA and is used for various tests.
By late 2015, the U.S. Navy had three Vikings remaining operational in support roles. One was moved to The Boneyard in November 2015, and the final two were retired, one stored and the other transferred to NASA, on 11 January 2016, officially retiring the S-3 from Navy service.
Naval analysts have suggested returning the stored S-3s to service with the U.S. Navy to fill gaps left in the carrier air wing by the type's retirement. This is in response to the realization that the Chinese navy is producing new weapons that can threaten carriers from beyond the range at which their aircraft can strike. Against the DF-21D anti-ship ballistic missile, carrier-based F/A-18 Super Hornets and F-35C Lightning IIs have about half the required unrefueled strike range, so returning the S-3 to aerial tanking duties would extend their reach, as well as free up the Super Hornets currently forced to fill the tanker role. Against submarines armed with anti-ship cruise missiles like the Klub and YJ-18, the S-3 would restore area coverage for ASW duties. Bringing the S-3 out of retirement could at least be a stop-gap measure to increase the survivability and capabilities of aircraft carriers until new aircraft can be developed for these purposes.
In October 2013, the Republic of Korea Navy expressed an interest in acquiring up to 18 ex-USN S-3s to augment their fleet of 16 Lockheed P-3 Orion aircraft. In August 2015, a military program review group approved a proposal to incorporate 12 mothballed S-3s to perform ASW duties; the Viking plan would be sent to the Defense Acquisition Program Administration for further assessment before final approval by the national defense system committee. Although the planes are old, storage kept them serviceable, and using them is a cheaper way to fill the short-range airborne ASW gap left by the retirement of the S-2 Tracker than buying newer aircraft. Refurbished S-3s could be returned to use by 2019. In 2017, the Republic of Korea Navy canceled plans to purchase refurbished and upgraded Lockheed S-3 Viking aircraft for maritime patrol and anti-submarine duties, leaving offers by Airbus, Boeing, Lockheed Martin, and Saab on the table.
In April 2014, Lockheed Martin announced that they would offer refurbished and remanufactured S-3s, dubbed the C-3, as a replacement for the Northrop Grumman C-2A Greyhound for carrier onboard delivery. The requirement for 35 aircraft would be met from the 91 S-3s currently in storage. In February 2015, the Navy announced that the Bell Boeing V-22 Osprey had been selected to replace the C-2 for the COD mission.
Kaman SH-2 Seasprite
The Kaman SH-2 Seasprite is a ship-based helicopter originally developed and produced by American manufacturer Kaman Aircraft Corporation. It has been typically used as a compact and fast-moving rotorcraft for utility and anti-submarine warfare missions.
Development of the Seasprite had been initiated during the late 1950s in response to a request from the United States Navy, calling for a suitably fast and compact naval helicopter for utility missions. Kaman's submission, internally designated as the "K-20", was favourably evaluated, leading to the issuing of a contract for the construction of four prototypes and an initial batch of 12 production helicopters, designated as the "HU2K-1". Under the 1962 United States Tri-Service aircraft designation system, the HU2K was redesignated H-2, the HU2K-1 becoming the UH-2A. Beyond the U.S. Navy, the company had also made efforts to acquire other customers for export sales, in particular the Royal Canadian Navy; however, the initial interest of the Canadians was quelled as a result of Kaman's demand for price increases and the Seasprite performing below company projections during its sea trials. Due to its unsatisfactory performance, from 1968 onwards, the U.S. Navy's existing UH-2s were remanufactured from their originally-delivered single-engine arrangement to a more powerful twin-engine configuration.
In October 1970, the Seasprite was selected by the U.S. Navy as the platform for the interim Light Airborne Multi-Purpose System (LAMPS) helicopter, which resulted in greatly enhanced anti-submarine and anti-surface threat capabilities being developed and installed upon a new variant of the type, designated as the "SH-2D/F". Accordingly, during the 1970s and 1980s, the majority of the existing UH-2 helicopters were remanufactured into the improved SH-2F model. In this configuration, the Seasprite extended and increased shipboard sensor and weapon capabilities against several types of enemy threats, including submarines of all types, surface ships and patrol craft that may be armed with anti-ship missiles.
The Seasprite served for many decades with the U.S. Navy. Highlights of its service life included operations during the lengthy Vietnam War, in which the type was primarily used to rescue downed friendly aircrews within the theatre of operations, and its deployment during the Gulf War, where Seasprites conducted combat support and surface warfare operations against hostile Iraqi forces. In more routine operations, the Seasprite was operated in a number of roles, including anti-submarine warfare (ASW), search and rescue (SAR), utility and plane guard (the latter being performed when on attachment to aircraft carriers). The type was finally withdrawn in 2001 when the last examples of the final variant, known as the SH-2G Super Seasprite were retired. During the 1990s and 2000s, ex-U.S. Navy Seasprites were offered to various nations as a form of foreign aid, which typically met with mixed interest and a limited uptake.
In 1956, the U.S. Navy launched a new competition with the intent of meeting its requirements for a compact, all-weather multipurpose naval helicopter, encouraging private companies to submit their proposals. American manufacturer Kaman Aircraft Corporation decided to produce its own response; its submission, given the internal company designation "K-20", was a relatively conventional helicopter powered by a single General Electric T58-8F turboshaft engine driving a 44-foot four-bladed main rotor and a four-bladed tail rotor. Following an evaluation of the designs bid in response, the U.S. Navy selected the Kaman submission for further development. Accordingly, in late 1957, Kaman was promptly awarded a contract for the construction of four prototypes and an initial batch of 12 production helicopters, designated the "HU2K-1".
In 1960, the Royal Canadian Navy announced that the HU2K had been identified as the frontrunner for its own anti-submarine warfare helicopter requirement; this choice was confirmed when the Treasury Board of the Canadian government gave its approval for the initial procurement of 12 rotorcraft from Kaman at a price of $14.5 million. However, the Canadian purchase was disrupted by multiple factors, including Kaman's decision to abruptly raise the estimated price of the initial batch to $23 million; at the same time, there were concerns amongst officials that the manufacturer's projections of both the weight and performance criteria had been overly optimistic. In response, the Canadian Naval Board decided to hold off on issuing its approval to proceed with the HU2K purchase until after the US Navy had conducted sea trials with the type. During these sea trials, it was revealed that the HU2K was indeed overweight and underpowered; in light of this inferior performance, the HU2K was deemed incapable of meeting the Canadian requirements. Accordingly, during late 1961, the competing Sikorsky CH-124 Sea King was selected to fulfil the intended role instead.
Having been unable to achieve any follow-on orders for the type, Kaman decided in the late 1960s to terminate production following the completion of the delivery of 184 H-2s to the U.S. Navy. However, in 1971, production was restarted by Kaman in order to manufacture an improved variant of the helicopter, designated as the "SH-2F". A significant factor in the reopening of the production line was that the Navy's Sikorsky SH-60 Sea Hawk, which was both newer and more capable in anti-submarine operations, had been determined to be too large to allow it to be safely operated from the smaller flight decks present upon the older frigates then in service.
Upon the enactment of the 1962 United States Tri-Service aircraft designation system, the HU2K-1 was redesignated the "UH-2A", while the "HU2K-1U" model was redesignated the "UH-2B". During its service, the UH-2 Seasprite was subject to several modifications and improvements, such as the addition of fixtures for the mounting of external stores. Beginning in 1968, the Navy's remaining UH-2s were extensively remanufactured; perhaps the most extensive alteration was the replacement of their original single-engine arrangement with a more powerful twin-engine configuration.
In October 1970, the UH-2 was selected to be the platform to function as the interim Light Airborne Multi-Purpose System (LAMPS) helicopter. During the course of the 1960s, LAMPS had evolved out of an urgent requirement to develop a manned helicopter that would be capable of supporting a non-aviation vessel and serve as its tactical Anti-Submarine Warfare arm. Widely referred to as "LAMPS Mark I", the advanced sensors, processors, and display capabilities aboard the helicopter enabled such equipped ships to extend their situational awareness beyond the line-of-sight limitations that unavoidably hampered the performance of shipboard radars, as well as the short distances involved in the acoustic detection and prosecution of underwater threats associated with hull-mounted sonars. Those H-2s that were reconfigured to perform the LAMPS mission were accordingly re-designated as "SH-2D"s.
On 16 March 1971, the first SH-2D LAMPS prototype conducted its first flight. Beginning in 1973, production deliveries of the latest variant of the rotorcraft, designated as the "SH-2F", commenced. Amongst the features of the "SH-2F" model was the full suite of LAMPS I equipment, along with various other improvements, such as upgraded engines, an extended life main rotor, and an elevated take-off weight. During 1981, the Navy placed an order for 60 production SH-2Fs. From 1987 onwards, a total of 16 SH-2Fs were upgraded with a chin-mounted forward-looking infrared (FLIR) sensor, chaff/flare launchers, dual rear-mounted infrared countermeasures, and missile/mine detecting equipment.
Eventually, all but two H-2s that were then in the U.S. Navy inventory were remanufactured into the SH-2F configuration. The final production procurement of the SH-2F was in Fiscal Year 1986. The final six orders for production SH-2Fs were converted to the more extensive and newer SH-2G Super Seasprite variant.
In 1962, the initial UH-2 model commenced its operational service with the U.S. Navy. The U.S. Navy quickly determined that the helicopter's capabilities were greatly restricted by its single engine; thus, the service ordered Kaman to retrofit all of its Seasprites into a more capable twin-engine arrangement instead; when furnished with a pair of engines, the Seasprite was capable of attaining an airspeed of 130 knots and operating at a range of up to 411 nautical miles. The U.S. Navy would operate a total fleet of nearly 200 Seasprites to perform a variety of missions, ranging from anti-submarine warfare (ASW) operations, search and rescue (SAR) and utility transport. Under typical operational conditions, several UH-2s would be deployed upon each of the U.S. Navy's aircraft carriers in order to perform plane guard and SAR missions.
The UH-2 was introduced in time to see action in the Tonkin Gulf incident in August 1964. The Seasprite's principal contribution to what would escalate into the lengthy Vietnam War between the Soviet-backed North Vietnamese and the United States-backed South Vietnamese, was the retrieval of downed aircrews, both from the sea and from inside enemy territory. The type was increasingly relied upon to perform the retrieval mission as the conflict intensified, such as during Operation Rolling Thunder in 1965. During October 1966 alone, out of 269 downed pilots, helicopter-based SAR teams were recorded as having enabled the recovery of 103 men.
During the 1970s, the conversion of UH-2s to the SH-2 anti-submarine configuration provided the U.S. Navy with its first dedicated ASW helicopter capable of operating from vessels other than its aircraft carriers. The compact size of the SH-2 allowed the type to be operated from flight decks that were too small for the majority of helicopters; this factor would later play a role in the U.S. Navy's decision to acquire the improved SH-2F during the early 1980s.
The SH-2F fleet was utilized to enforce and support Operation Earnest Will in July 1987, Operation Praying Mantis in April 1988, and Operation Desert Storm during January 1991 in the Persian Gulf region. The countermeasures and additional equipment present upon the SH-2F allowed the type to conduct combat support and surface warfare missions within these hostile environments, which had an often-minimal submarine threat. In April 1994, the SH-2F was retired from active service with the U.S. Navy; the timing corresponded with the retirement of the last of the Vietnam-era Knox Class Frigates that were unable to accommodate the new and larger SH-60 Sea Hawks, which were used to replace the aging Seasprites.
In 1991, the U.S. Navy began to receive deliveries of the new SH-2G Super Seasprite; a total of 18 converted SH-2Fs and six new-built SH-2Gs were produced. These were assigned to Naval Reserve squadrons; the SH-2G entered service with HSL-84 in 1993. The SH-2 served in some 600 deployments and flew 1.5 million flight hours before the last of the type were finally retired in mid-2001.
The Royal New Zealand Navy (RNZN) replaced its Westland Wasps with an initial batch of four interim SH-2F Seasprites (formerly operated by the U.S. Navy), operated and maintained by a mix of Navy and Air Force personnel known as No. 3 Squadron RNZAF Naval Support Flight, to operate with the ANZAC class frigates until the fleet of five new SH-2G(NZ) Super Seasprites was delivered. In October 2005, the Navy air element was transferred to No. 6 Squadron RNZAF at RNZAF Base Auckland in Whenuapai. RNZN Seasprites have seen service in East Timor. Ten of the 11 SH-2G(A)s rejected by the Royal Australian Navy were purchased in 2014 to replace the five RNZN SH-2G(NZ) Seasprites, which had required either a mid-life upgrade (MLU) or replacement due to corrosion issues, maintenance problems and obsolescence. Kaman modified the ex-Australian aircraft and renamed them SH-2G(I), with the last one delivered to New Zealand in early 2016. Eight of the aircraft are flying, with the ninth and tenth retained as attrition airframes used for spares; the 11th aircraft is held by Kaman as a prototype and test aircraft. The five SH-2G(NZ)s have been sold to Peru. An SH-2F (ex-RNZN, NZ3442) is preserved in the Royal New Zealand Air Force Museum, donated to the museum by Kaman Aircraft Corporation after an accident while in service with the RNZN.
During the late 1990s, the United States decided to offer surplus U.S. Navy SH-2Fs as foreign aid to a number of overseas countries. Those offered the type included Greece, which was offered six, and Turkey, which was offered 14; both rejected the offer. Egypt opted to acquire four SH-2Fs under this aid program; they were mainly used for spares to support its existing fleet of ten SH-2Gs. Poland chose to acquire the later SH-2G variant.
Stop consonant
In phonetics, a stop, also known as a plosive or oral occlusive, is a consonant in which the vocal tract is blocked so that all airflow ceases.
The occlusion may be made with the tongue tip or blade ([t], [d]), tongue body ([k], [ɡ]), lips ([p], [b]), or glottis ([ʔ]). Stops contrast with nasals, where the vocal tract is blocked but airflow continues through the nose, as in [m] and [n], and with fricatives, where partial occlusion impedes but does not block airflow in the vocal tract.
The terms "stop, occlusive," and "plosive" are often used interchangeably. Linguists who distinguish them may not agree on the distinction being made. The terms refer to different features of the consonant. "Stop" refers to the airflow that is stopped. "Occlusive" refers to the articulation, which occludes (blocks) the vocal tract. "Plosive" refers to the release burst (plosion) of the consonant. Some object to the use of "plosive" for inaudibly released stops, which may then instead be called "applosives".
Either "occlusive" or "stop" may be used as a general term covering the other together with nasals. That is, 'occlusive' may be defined as oral occlusive (stops/plosives) plus nasal occlusives (nasals such as , ), or 'stop' may be defined as oral stops (plosives) plus nasal stops (nasals). Ladefoged and Maddieson (1996) prefer to restrict 'stop' to oral occlusives. They say,
In addition, they use "plosive" for a pulmonic stop; "stops" in their usage include ejective and implosive consonants.
If a term such as "plosive" is used for oral obstruents, and nasals are not called nasal stops, then a "stop" may mean the glottal stop; "plosive" may even mean non-glottal stop. In other cases, however, it may be the word "plosive" that is restricted to the glottal stop. Note that, generally speaking, stops do not have plosion (a release burst). In English, for example, there are stops with no audible release, such as the [p] in "apt". However, pulmonic stops do have plosion in other environments.
In Ancient Greek, the term for stop was ἄφωνον ("áphōnon"), which means "unpronounceable", "voiceless", or "silent", because stops could not be pronounced without a vowel. This term was calqued into Latin as "mūta", and from there borrowed into English as "mute". "Mute" was sometimes used instead for voiceless consonants, whether stops or fricatives, a usage that was later replaced with "surd", from Latin "surdus" ("deaf" or "silent"), a term still occasionally seen in the literature. For more information on the Ancient Greek terms, see Ancient Greek phonology.
A stop is typically analysed as having up to three phases: the approach (catch), during which the articulators come together; the hold (occlusion), during which airflow is blocked; and the release (burst), when the articulators part again.
Only the hold phase is requisite. A stop may lack an approach when it is preceded by a consonant that involves an occlusion at the same place of articulation, as in the [d] in "end" or "old". In many languages, such as Malay and Vietnamese, word-final stops lack a release burst, even when followed by a vowel, or have a nasal release. See no audible release.
Nasal occlusives are somewhat similar. In the catch and hold, airflow continues through the nose; in the release, there is no burst, and final nasals are typically unreleased across most languages.
In affricates, the catch and hold are those of a stop, but the release is that of a fricative. That is, affricates are stop–fricative contours.
All spoken natural languages in the world have stops, and most have at least the voiceless stops [p], [t], and [k]. However, there are exceptions: Colloquial Samoan lacks the coronal [t], and several North American languages, such as the northern Iroquoian and southern Iroquoian languages (i.e., Cherokee), lack the labial [p]. In fact, the labial [p] is the least stable of the voiceless stops in the languages of the world, as the unconditioned sound change [p] → [f] (→ [h] → Ø) is quite common in unrelated languages, having occurred in the history of Classical Japanese, Classical Arabic, and Proto-Celtic, for instance. Formal Samoan has only one word with velar [k]; colloquial Samoan conflates /t/ and /k/ to [k]. Ni‘ihau Hawaiian has [t] for /k/ to a greater extent than Standard Hawaiian, but neither distinguishes a /k/ from a /t/. It may be more accurate to say that Hawaiian and colloquial Samoan do not distinguish velar and coronal stops than to say they lack one or the other.
See Common occlusives for the distribution of both stops and nasals.
Voiced stops are pronounced with vibration of the vocal cords, voiceless stops without. Stops are commonly voiceless, and many languages, such as Mandarin Chinese and Hawaiian, have only voiceless stops. Others, such as most Australian languages, are indeterminate: stops may vary between voiced and voiceless without distinction.
In aspirated stops, the vocal cords (vocal folds) are abducted at the time of release. In a prevocalic aspirated stop (a stop followed by a vowel or sonorant), the time when the vocal cords begin to vibrate will be delayed until the vocal folds come together enough for voicing to begin, and will usually start with breathy voicing. The duration between the release of the stop and the voice onset is called the "voice onset time" (VOT) or the "aspiration interval". Highly aspirated stops have a long period of aspiration, so that there is a long period of voiceless airflow (a phonetic [h]) before the onset of the vowel. In tenuis stops, the vocal cords come together for voicing immediately following the release, and there is little or no aspiration (a voice onset time close to zero). In English, there may be a brief segment of breathy voice that identifies the stop as voiceless and not voiced. In voiced stops, the vocal folds are set for voice before the release, and often vibrate during the entire hold, and in English, the voicing after release is not breathy. A stop is called "fully voiced" if it is voiced during the entire occlusion. In English, however, initial voiced stops like or may have no voicing during the period of occlusion, or the voicing may start shortly before the release and continue after release, and word-final stops tend to be fully devoiced: in most dialects of English, the final /b/, /d/ and /g/ in words like "rib", "mad" and "dog" are fully devoiced. Initial voiceless stops, like the "p" in "pie", are aspirated, with a palpable puff of air upon release, whereas a stop after an "s", as in "spy", is tenuis (unaspirated). When spoken near a candle flame, the flame will flicker more after the words "par, tar," and "car" are articulated, compared with "spar, star," and "scar". In the common pronunciation of "papa", the initial "p" is aspirated whereas the medial "p" is not.
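Voice onset time lends itself to a simple numeric illustration. Below is a toy Python classifier; the millisecond thresholds are illustrative approximations, not fixed linguistic constants:

```python
def classify_stop_by_vot(vot_ms: float) -> str:
    """Crude laryngeal category from voice onset time (VOT).

    Negative VOT: voicing begins before the release (voiced stop).
    Near-zero VOT: voicing begins at the release (tenuis/unaspirated).
    Large positive VOT: a long voiceless interval follows the release (aspirated).
    """
    if vot_ms < 0:
        return "voiced"
    if vot_ms <= 30:  # illustrative cutoff
        return "tenuis (unaspirated)"
    return "aspirated"

# English "spy" vs "pie": roughly tenuis vs aspirated [p].
print(classify_stop_by_vot(10))  # tenuis (unaspirated)
print(classify_stop_by_vot(70))  # aspirated
```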
In a geminate or long consonant, the occlusion lasts longer than in simple consonants. In languages where stops are only distinguished by length (e.g., Arabic, Ilwana, Icelandic), the long stops may be held up to three times as long as the short stops. Italian is well known for its geminate stops, as the double "t" in the name "Vittoria" takes just as long to say as the "ct" does in English "Victoria". Japanese also prominently features geminate consonants, such as in the minimal pair 来た "kita" 'came' and 切った "kitta" 'cut'.
Note that there are many languages where the features voice, aspiration, and length reinforce each other, and in such cases it may be hard to determine which of these features predominates. In such cases, the term fortis is sometimes used for aspiration or gemination, whereas lenis is used for single, tenuis, or voiced stops. Be aware, however, that the terms "fortis" and "lenis" are poorly defined, and their meanings vary from source to source.
Simple nasals are differentiated from stops only by a lowered velum that allows the air to escape through the nose during the occlusion. Nasals are acoustically sonorants, as they have a non-turbulent airflow and are nearly always voiced, but they are articulatorily obstruents, as there is complete blockage of the oral cavity. The term occlusive may be used as a cover term for both nasals and stops.
A prenasalized stop starts out with a lowered velum that raises during the occlusion. The closest examples in English are consonant clusters such as the [nd] in "candy", but many languages have prenasalized stops that function phonologically as single consonants. Swahili is well known for having words beginning with prenasalized stops, as in "ndege" 'bird', and in many languages of the South Pacific, such as Fijian, these are even spelled with single letters: "b" [mb], "d" [nd].
A postnasalized stop begins with a raised velum that lowers during the occlusion. This causes an audible nasal "release", as in English "sudden". This could also be compared to the /dn/ cluster found in Russian and other Slavic languages, which can be seen in the name of the Dnieper River.
Note that the terms "prenasalization" and "postnasalization" are normally used only in languages where these sounds are phonemic: that is, not analyzed into sequences of stop plus nasal.
Stops may be made with more than one airstream mechanism. The normal mechanism is pulmonic egressive, that is, with air flowing outward from the lungs. All languages have pulmonic stops. Some languages have stops made with other mechanisms as well: ejective stops (glottalic egressive), implosive stops (glottalic ingressive), or click consonants (lingual ingressive).
A fortis stop (in the narrow sense) is produced with more muscular tension than a lenis stop (in the narrow sense). However, this is difficult to measure, and there is usually debate over the actual mechanism of alleged fortis or lenis consonants.
There are a series of stops in the Korean language, sometimes written with the IPA symbol for ejectives, which are produced using "stiff voice", meaning there is increased contraction of the glottis compared with the normal production of voiceless stops. The indirect evidence for stiff voice is in the following vowels, which have a higher fundamental frequency than those following other stops. The higher frequency is explained as a result of the glottis being tense. Other such phonation types include breathy voice, or murmur; slack voice; and creaky voice.
The following stops have been given dedicated symbols in the IPA.
Many subclassifications of stops are transcribed by adding a diacritic or modifier letter to the IPA symbols above.
Stayman convention
Stayman is a bidding convention in the card game contract bridge. It is used by a partnership to find a 4-4 or 5-3 trump fit in a major suit after making a one notrump (1NT) opening bid, and it has been adapted for use after a 2NT opening, a 1NT overcall, and many other natural notrump bids.
The convention is named for Sam Stayman, who wrote the first published description in 1945, but its inventors were two other players: the British expert Jack Marx in 1939, who published it only in 1946, and Stayman's regular partner George Rapée in 1944.
A game bid and made in a major suit (i.e. 4♥ or 4♠) scores better than a game contract bid and made in a minor suit (i.e. 5♣ or 5♦) or in notrump (i.e. 3NT). Also, the success rate for a game contract in a major suit when a partnership has a combined holding of 26 points and eight cards in the major is about 80%, whereas a game contract in 3NT with 26 high card points (HCP) has a success rate of only 60%, or 50% with 25 HCP; the success rate for a minor suit game contract when holding 26 points is about 30%.
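The scoring arithmetic behind this priority is easy to verify. A minimal Python sketch using standard duplicate scoring values for non-vulnerable, undoubled contracts (the function and its encoding are illustrative, not from any bridge software):

```python
# Duplicate bridge scoring for made contracts (non-vulnerable, undoubled).
TRICK_VALUE = {"clubs": 20, "diamonds": 20, "hearts": 30, "spades": 30}

def contract_score(strain: str, level: int) -> int:
    """Score for bidding and making exactly `level` odd tricks in `strain`.

    Only the 300-point game bonus is modelled; the 50-point partscore
    bonus is omitted since game contracts are being compared.
    """
    if strain == "notrump":
        trick_points = 40 + 30 * (level - 1)  # first trick 40, the rest 30
    else:
        trick_points = TRICK_VALUE[strain] * level
    game_bonus = 300 if trick_points >= 100 else 0
    return trick_points + game_bonus

print(contract_score("spades", 4))    # 420 -- major-suit game
print(contract_score("notrump", 3))   # 400 -- 3NT
print(contract_score("diamonds", 5))  # 400 -- minor-suit game, one trick higher
```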
Accordingly, partnership priority is to find an eight-card or better major suit fit when jointly holding sufficient values for a game contract. 5-3 and 6-2 fits are easy to find in basic methods, as responder can bid 3♥ or 3♠ over 1NT with a five-card or longer suit, and opener will not normally hold a five-card major for a 1NT opening. However, finding 4-4 fits presents a problem: the 2♥ and 2♠ responses cannot be used for this, as they are weak takeouts, sign-off bids.
After an opening bid or overcall of 1NT (2NT), responder bids an artificial 2♣ (3♣) to ask opener or overcaller if he holds a four- or five-card major suit; some partnership agreements may require the major to be headed by an honor of at least a specified rank, such as the queen. The artificial club bid typically promises four cards in at least one of the major suits (promissory Stayman) and, "in standard form", enough strength to continue bidding after partner's response (8 HCP for an invitational bid opposite a standard strong 1NT opening or overcall showing 15-17 HCP, 11 HCP opposite a weak notrump of 12-14 HCP, or 5 HCP to go to game opposite a standard 2NT showing 20-21 points). It also promises distribution that is not 4333. By invoking the Stayman convention, the responder takes control of the bidding, since the strength and distribution of the opener's hand is already known within a limited range. The opener rebids 2♦ to deny a four-card major, 2♥ to show four (or five) hearts, or 2♠ to show four (or five) spades while denying four hearts.
A notrump opener should have neither a suit longer than five cards nor more than one 5-card suit since an opening notrump bid shows a balanced hand. A notrump bidder who has at least four cards in each major suit normally responds in hearts, as this can still allow a spade fit to be found. Variant methods are to bid the longer or stronger major, with a preference given to spades, or to use 2NT to show both majors.
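Opener's side of this exchange is mechanical enough to state as code. A minimal Python sketch of the standard rebid scheme just described (the function name and hand encoding are illustrative):

```python
def stayman_rebid(hearts: int, spades: int) -> str:
    """Opener's rebid over a 2C Stayman inquiry, given major-suit lengths.

    A 1NT opener is balanced, so each major is at most five cards long.
    """
    if hearts >= 4:
        return "2H"  # with both majors, hearts first: a spade fit can still surface
    if spades >= 4:
        return "2S"
    return "2D"      # denies a four-card major

assert stayman_rebid(hearts=4, spades=4) == "2H"
assert stayman_rebid(hearts=2, spades=4) == "2S"
assert stayman_rebid(hearts=3, spades=3) == "2D"
```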
In the standard form of Stayman over 1NT, the responder has a number of options depending on his partner's answer; the principal ones are: with a four-card fit for the major shown by opener, a raise to the three level (invitational) or to game; without a fit, a rebid of 2NT (invitational) or 3NT (to play).
Over these bids, the notrump bidder (1) with a maximum hand (17 HCP) goes to game over an invitational bid, and (2) with four (or more) cards in each major suit corrects to the previously unbid major suit.
In the standard form of Stayman over 2NT, the responder has only two normal rebids: a raise of opener's major to game with a four-card fit, or 3NT without one.
In either case, a responder who rebids notrump over a response in a major suit promises four cards of the other major suit. Thus, a notrump opener who holds at least four cards in each major suit should "correct" by bidding the other major suit at the lowest level.
Of course, once a fit is found, a responder who has sufficient strength may also bid 4♣ (Gerber) or 4NT (Blackwood), or cue bid aces, depending upon partnership agreement, to explore slam in any of the above sequences. Some partnerships also admit responder's rebids of a major suit that the notrump bidder did not name.
A bid of 4♣ over an opening bid of 3NT may be either Stayman or Gerber, depending upon the partnership agreement.
If an adverse suit bid is inserted immediately after a 1NT opening, Stayman may be employed via a double (by partnership agreement) or a cue bid, depending on the strength of responder's hand. The cue bid is completely artificial and means nothing other than invoking Stayman. For example, if South opens 1NT and West overcalls 2♥, North, if he has adequate values, may call 3♥, invoking Stayman. South would then show his major or bid game in notrump. Alternatively, North, if his hand lacks the values for game in notrump, may double, which by partnership agreement employs Stayman; this keeps the Stayman bidding at the two level.
Partnerships who adopt Stayman without also using Jacoby transfers need to adjust their normal two-level responses to a 1NT opening, because the availability of the convention changes the meaning of those responses. When the notrump bidder's partner does not invoke Stayman but instead calls 2♥ or 2♠, it is a sign of relative weakness (since with 8 HCP or more, responder would have invoked Stayman). These bids are commonly referred to as "drop dead bids", as the opening notrump bidder is requested to withdraw from the auction. If opener has maximum values, a fit, and strong support, he may raise to the 3-level, but under no circumstances may he take any other action. This gives the partnership an advantage that a non-Stayman partnership does not enjoy. For example, responder may have no honors at all, a total of zero HCP, and his partner is likely to be set if he passes. A non-Stayman responder would have to pass, because to bid would provoke a rebid; a Stayman responder, however, can respond at the two level with a 6-card non-club suit, and with 3 HCP and a singleton can make a similar call on a 5-card non-club suit. This gives the partnership a better-than-even chance of making the contract, whereas without a response the contract would likely be set.
Similarly, a response of 2♦ indicates less than 8 HCP and should usually be passed. In rare cases, when the opener has maximum values and a fit in diamonds with at least two of the top three honors, he may raise diamonds, and responder may see a chance for game in notrump.
There are many variations on this basic theme, and partnership agreement may alter the details of its use. It is one of the most widely used conventions in bridge.
Some partnerships play that 2♣ Stayman does not absolutely promise a four-card major (non-promissory Stayman); for example, responder may have a short suit and wish to know whether opener has four-card cover in it, so as to play in notrump. If opener shows hearts initially, 2♠ can be used to find a fit in spades when the 2♣ bid does not promise a four-card major.
1NT - 2♣, 2♥ -
Alternatively 2♠ can be used for all hands with four spades but not four hearts, with either invitational or game values, while 3NT denies four spades.
Today, most players use Stayman in conjunction with Jacoby transfers. With Stayman in effect, the responder practically denies having a five-card major, as otherwise he would transfer to the major immediately. The only exception is when responder is 5-4 in the majors; in that case, he could use Stayman and, over a 2♦ response, bid the five-card major at the two level (weakness take-out / Garbage Stayman) or at the three level (forcing to game). However, the latter hand can also be bid by first using a transfer and then showing the second suit naturally. The Smolen convention provides an alternative method to show a five-card major and game-going values. A minor drawback of Jacoby transfers is that a 2♦ contract is not possible.
The Smolen convention is an adjunct to Stayman for situations in which the notrump opener has denied holding a four-card major and responder has a five-card major and a four-card major with game-going values.
If the notrump opener responds to the Stayman 2♣ asking bid with 2♦, denying a four-card major, responder initiates the Smolen transfer with a jump shift to three of his four-card major. The jump shift shows which is the four-card major and promises five in the other major. The notrump opener then bids four of the other major with three cards in the suit or 3NT with fewer than three.
Smolen may also be used when responder has a six-card major and a four-card major with game-going values; after the 2♦ negative response by opener, responder double jump shifts to four in the suit just below his six-card major and the notrump opener transfers to four of his partner's six-card major.
This convention allows a partnership to find a 5-3, 6-3 or 6-2 fit while ensuring that the notrump opener, who has the stronger hand, will be declarer.
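The two halves of the Smolen exchange can be sketched as one rule per seat. A toy Python model of the description above (the hand encodings and function names are illustrative):

```python
def smolen_responder_rebid(four_card_major: str) -> str:
    """Responder's rebid after 1NT - 2C; 2D with 5-4 majors and game values:
    jump-shift to three of the FOUR-card major, implying five of the other."""
    return "3H" if four_card_major == "hearts" else "3S"

def smolen_opener_rebid(jump: str, hearts: int, spades: int) -> str:
    """Opener places the contract: game in responder's five-card major
    with three-card support, otherwise 3NT."""
    if jump == "3H":                       # responder holds five spades
        return "4S" if spades >= 3 else "3NT"
    return "4H" if hearts >= 3 else "3NT"  # jump was 3S: five hearts

# Responder with five spades and four hearts jumps to 3H;
# opener with three spades raises to game in the 5-3 fit.
assert smolen_responder_rebid("hearts") == "3H"
assert smolen_opener_rebid("3H", hearts=2, spades=3) == "4S"
assert smolen_opener_rebid("3H", hearts=3, spades=2) == "3NT"
```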
"Garbage" Stayman (or "Weak Stayman" or "Rescue Stayman") and "Crawling" Stayman are adaptations of Stayman frequently used for damage control when holding a weak hand opposite a 1NT opening bid. For example, on the following hand.
Partner opens 1NT (15-17), and right hand opponent passes. Opponents have 23-25 HCP. Thus, 1NT is virtually certain to go down by at least three or four tricks. Indeed, in No-trumps, this dummy will be completely worthless.
In "Garbage Stayman", you bid 2 Stayman with this "garbage" hand rather than passing on the first round, and then "pass opener's response". If opener rebids a major suit you have found a 4-4 fit and ability to trump club losers. Likewise, a response of 2 guarantees no worse than a 5-2 fit in diamonds and, with a fifth trump, a potential additional ruff. Declarer can also reach dummy with ruffs and may then be able to take finesses or execute a squeeze that otherwise would not be possible. The result is a contract that will go down fewer tricks or may even make, rather than a contract that is virtually certain to go down at least three or four tricks. However the hand must be able to tolerate any rebid from opener.
"Crawling Stayman" is an optional extension of "Garbage Stayman" for situations in which the responder's diamond suit is short. In "Crawling Stayman", the responder rebids 2 over the Notrump bidder's 2 reply. This conventional bid shows a weak hand with at least four cards in each major suit, asking the Notrump bidder to choose between the major suits at the cheapest level by either passing the 2 bid or correcting to 2. The name "Crawling Stayman" comes from the fact that the bidding "crawls" at the slowest possible pace: (pass) – 1NT – (pass) – 2; (pass) – 2 – (pass) – 2; (pass) – 2; (pass) – pass – (pass).
Alternatively, responder's 2♥ and 2♠ bids after the 2♦ rebid can be weak sign-offs. This allows responder to effectively bid hands which are 5-4 in the majors, by looking first for a 4-4 fit and, if none is found, signing off in his 5-card suit.
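Responder's decision tree in these weak variants is small enough to spell out. A minimal Python sketch covering both treatments (the names, hand encoding and `crawling` switch are illustrative):

```python
def weak_stayman_plan(opener_rebid: str, hearts: int, spades: int,
                      crawling: bool = True) -> str:
    """Weak responder's action after 1NT - 2C with a shapely, near-worthless hand.

    Any major-suit rebid by opener has found a playable 4-4 fit: pass.
    Over 2D, plain Garbage Stayman passes (playable opposite responder's
    diamonds), while Crawling Stayman offers both majors with 2H.
    """
    if opener_rebid in ("2H", "2S"):
        return "pass"
    if crawling and hearts >= 4 and spades >= 4:
        return "2H"  # opener passes with hearts or corrects to 2S
    return "pass"

assert weak_stayman_plan("2D", hearts=4, spades=4) == "2H"
assert weak_stayman_plan("2D", hearts=4, spades=4, crawling=False) == "pass"
assert weak_stayman_plan("2S", hearts=4, spades=4) == "pass"
```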
"Garbage Stayman" is even more useful opposite a weak NT opening (12-14) as it occurs more frequently and can mitigate very expensive penalties if responder is weak. It is in frequent use in Acol.
"Garbage Stayman" and "Crawling Stayman" bids over a 2NT bid work the same way, but occur at the "three" level.
If Jacoby transfers are not played, there are two approaches to the situation when responder has a 5-card major but only invitational values. In the more common one, referred to as "non-forcing Stayman", in a sequence such as 1NT – 2♣; 2♦ – 2♠, responder's simple rebid of a major suit is invitational, showing 8-9 points and a 5-card spade suit. In the "forcing Stayman" variant, the bid is one-round forcing.
In the original Precision Club system, forcing and non-forcing Stayman are differentiated from the start: 2♣ by responder shows only invitational values (and the continuation is the same as in basic Stayman), while 2♦ is forcing to game (responder bids 2NT without majors).
This allows responder to find the exact shape of the 1NT opener's hand. It was developed for use with a weak 1NT opening. Relay bids over opener's rebids of 2♦, 2♥, 2♠, 2NT and 3♣ allow the shape to be defined further when attempting to find 5-3 major fits. The advantages are that responder's shape, which may be any distribution, is undisclosed, and that responder can locate suit shortages unsuitable for notrump. The disadvantage is that 2♣ cannot be used as a damage-control bid.
1NT – 2♣
Developed to be used in combination with the following other responses to 1NT: 2♦ and 2♥ Jacoby transfers to the majors; 2♠ range finder/transfer to the minors (opener's rebids: 2NT = 12-13 HCP, 3♣ = 14 HCP; responder passes or corrects to 3♣ or 3♦ as a sign-off if weak; after opener's 3♣ rebid responder bids 3♥ to show 4 hearts or 3♠ to show 4 spades, both game forcing; responder's rebid of 3NT denies a 4 card major); 2NT invitational hand with both 4 card majors (opener's rebids: no bid = no 4 card major 12-13 HCP, 3♣ = 4 hearts 12-13 HCP, 3♦ = 4 spades 12-13 HCP, 3♥ = 4 hearts 14 HCP, 3♠ = 4 spades 14 HCP, 3NT = 14 HCP with no 4 card major).
This allows responder to find the exact shape of a 1NT opener that may contain only a four-card major. It was developed for use with a weak 1NT opening. Relay bids over opener's rebids of 2♦, 2♥ and 2♠ allow the shape to be defined further when attempting to find 5-3 major fits. The advantages are that responder's shape, which may be any distribution, is undisclosed, and that responder can locate suit shortages unsuitable for notrump. It may also be used as a damage-control bid, and for both invitational and game-forcing hands.
1NT – 2♣
1NT – 3♣ weak sign off.
Opener's rebids of 2♦, 2♥ and 2♠ may all be passed if responder is weak.
Developed to be used in combination with the following other responses to 1NT: 2♦, 2♥ Jacoby transfers to majors; 2♠ five spades and four hearts, 10-11 HCP; 2NT invitational hand with 5-5 in the minors, 10-11 HCP.
This allows responder to find the exact shape of a 1NT opener that may contain a 5-card major. It was developed for use with a weak 1NT opening. Relay bids over opener's rebids of 2♦, 2♥ and 2♠ allow shape to be defined further when attempting to find 5-3 major fits. The advantages are that responder's shape, which may be any distribution, is undisclosed, and that responder is able to locate suit shortage holdings not suitable for no trumps. It may also be used as a damage control bid, and for both invitational and game forcing hands.
1NT – 2♣
Opener's rebids of 2♦, 2♥ and 2♠ may all be passed if responder is weak.
Developed to be used in combination with the following other responses to 1NT: 2♦, 2♥ Jacoby transfers to majors; 2♠ range finder/transfer to clubs; 2NT invitational hand with 5-5 in the minors, 10-11 HCP.
This allows responder to check for 5-3 major fits where it is possible that opener's 1NT or 2NT might include a five card major. As described by Australian Ron Klinger, it can be played with a weak or strong 1NT.
1NT - 2♣
1NT - 2♣, 2♦ OR 2NT
After a transfer, accept it with any 4333, bid 3NT with only two trumps, otherwise bid 4M.
1NT - 2♣, (2♦ OR 2NT) - 3♣ = Stayman
1NT - 2♣, (2♦ OR 2NT) - 3♦, 3♥
An alternative, simpler version of 5 card Stayman is:
1NT - 2♣
This structure permits use by weak hands with 5+ diamonds and 2+ cards in each major.
After 1NT - 2♣, 2♦
If responder has a five-card major, he begins with a transfer. After completion of the transfer, bidding the other major at the three level shows four cards in it and a game forcing hand, in line with the 1NT - 2♣, 2♦ structure above (1NT - 2♦, 2♥ - 2♠ = invitational 5-4).
Similarly after 2NT - 3♦, 3♥
A drawback of Five Card Major Stayman (particularly the simpler version) is that the weaker hand may become declarer in a 4-4 major fit.
Puppet Stayman is similar to Five Card Stayman. It is more complex but has the major advantage that the strong hand virtually always becomes declarer.
Puppet Stayman, initially developed by Neil Silverman and refined by Kit Woolsey and Steve Robinson in 1977-78, is a variation of the Stayman convention designed to find a 5-3 fit in a major, augmenting the search for a 4-4 major fit by standard Stayman. In 1977, Woolsey wrote that Puppet Stayman has several advantages over standard Stayman.
As in standard Stayman, Puppet Stayman begins with a 2♣ response to a 1NT opening and is at least game invitational; this asks opener to bid a 5-card major if he has one and otherwise to bid 2♦. Over a 2♦ response, rebids by responder are intended to disclose his distributional features in the majors as well as his strength. Woolsey described the original rebids in 1977 and a revised set in 1978.
Opener and responder continue the bidding having a clearer understanding of each other's distributional features and are better positioned to select the ultimate strain and level of the contract.
Many variations to the Puppet Stayman bidding structure have been devised since Woolsey's 1978 summary; partnership review and agreement on the preferred modern treatment is required.
Some no longer advocate use of Puppet Stayman over a 1NT opening, preferring to use the concept exclusively over a 2NT opening and reserving other Stayman variations and conventions, such as Jacoby Transfers and Smolen Transfers, for the search for major-suit fits after a 1NT opening.
Puppet Stayman is more commonly used after a 2NT opening than after a 1NT opening. Responses to a 2NT opening or very strong 2NT rebid (20-22 or 23-24):
Responder bids 3♣ seeking information about opener's major suit holding. Opener replies 3♦ with at least one four-card major, 3♥ or 3♠ with a five-card major, and 3NT with neither.
By this means all 5-3 and 4-4 major suit fits can be found.
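Opener's side of this schedule is mechanical enough to state as a rule. The following is a minimal illustrative sketch in Python (this edit's assumption, not standard bridge software; the function name and parameters are hypothetical) of opener's reply to a 3♣ Puppet Stayman inquiry over 2NT, using the reply schedule just described.

# Sketch of opener's reply to 3C Puppet Stayman over a 2NT opening, assuming
# the schedule above: 3H/3S with a five-card major, 3D with at least one
# four-card major, 3NT with no four- or five-card major.
def puppet_stayman_reply(hearts, spades):
    if hearts >= 5:
        return "3H"
    if spades >= 5:
        return "3S"
    if hearts == 4 or spades == 4:
        return "3D"    # responder then shows or denies his own four-card major
    return "3NT"

print(puppet_stayman_reply(hearts=4, spades=3))  # -> "3D"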
An alternative pattern frees up the 2NT-3♠ sequence as a slam try in the minors. To allow 3-5 spade fits to be found when responder holds 5 spades and 4 hearts, some of the responses change.
2♣ Checkback Stayman (or simply Checkback) is used after a 1NT rebid by opener rather than a 1NT opening. It is used to "check back" whether opener has major suit support, saying nothing additional about the club suit. It can find 3-5 fits, 4-4 fits (in Standard American) and 5-3 fits (in Acol), and also shows whether opener was maximum or minimum strength for his notrump bid. In five-card major systems, bidding Checkback implies that the responder has five cards in his major, and may have four in the other.
1m – 1M; 1NT – 2♣
The 2 is "Checkback Stayman". Responses by opener shows the following:
Partnership agreement is required on how to handle the case of holding four of the other major and three of partner's suit. One could agree to bid up the line, or support partner's suit first. If partner cannot support your first suit, he will invite with 2NT or bid game with 3NT and you will then correct to your other suit.
In Acol, if the opening bid was a major, opener can rebid his major after a Checkback inquiry to show that it has five cards rather than four and so find 5-3 fits. Moreover, 1M – 2m; 2NT – 3♣ can also be used as Checkback Stayman. It is useful also to include an indication of range, particularly if opener's 2NT rebid is forcing to game and shows a wide points range (15-19). This is achieved by using 3♦ for minimum hands and 3♥/3♠/3NT for maximum hands, or vice versa. After 3♦, responder can still bid 3♥/3♠ to look for a 5-3 fit.
New Minor Forcing is an alternative to Checkback Stayman where either 2♣ or 2♦, whichever is the unbid minor, is used as the checkback bid. It can be used by responder with invitational values or better to find three-card support for his major, or to find a 4-4 heart fit if holding five spades and four hearts; it also allows a return to the minor to play. | https://en.wikipedia.org/wiki?curid=29482
Saks Fifth Avenue
Saks Fifth Avenue is an American chain of luxury department stores owned, since 2013, by the oldest commercial corporation in North America, the Hudson's Bay Company. Its main flagship store is located on Fifth Avenue in Midtown Manhattan, New York City.
Saks Fifth Avenue is the successor of a business founded by Andrew Saks in 1867 and incorporated in New York in 1902 as Saks & Company. Saks died in 1912, and in 1923 Saks & Co. merged with Gimbel Brothers, Inc., headed by Horace Saks's cousin Bernard Gimbel, with Saks operating as a separate autonomous subsidiary. On September 15, 1924, Horace Saks and Bernard Gimbel opened Saks Fifth Avenue in New York City, with a full-block avenue frontage south of St. Patrick's Cathedral, facing what would become Rockefeller Center. The architects were Starrett & van Vleck, who developed a reticent, genteel Anglophile classicizing facade similar to their Gimbels Department Store in Pittsburgh (1914).
When Bernard's brother, Adam Gimbel, became president of Saks Fifth Avenue in 1926 after Horace Saks's sudden death, the company expanded, opening seasonal resort branches in Palm Beach, Florida, and Southampton, New York, in 1928. The first full-line year-round Saks store opened in Chicago in 1929, followed by another resort store in Miami Beach, Florida. In 1938, Saks expanded to the West Coast, opening in Beverly Hills, California. By the end of the 1930s, Saks Fifth Avenue had a total of 10 stores, including resort locations such as Sun Valley, Idaho, Mount Stowe, and Newport, Rhode Island. More full-line stores followed in Detroit, Michigan, in 1940 and Pittsburgh, Pennsylvania, in 1949. In Downtown Pittsburgh, the company moved to its own freestanding location approximately one block from its former home on the fourth floor of the downtown Gimbels flagship. The San Francisco location opened in 1952, competing locally with I. Magnin. BATUS Inc. acquired Gimbel Bros., Inc. and its Saks Fifth Avenue subsidiary in 1973 as part of its diversification strategy. More expansion followed from the 1960s through the 1990s, including the Midwest and the South, particularly Texas. In 1990, BATUS sold Saks to Investcorp S.A., which took Saks public in 1996 as Saks Holdings, Inc.
In 1990, the company launched "Saks Off 5th", an outlet store offshoot of the main brand, with 107 stores worldwide by 2016.
In 1998, Proffitt's, Inc., the parent company of Proffitt's and other department stores, acquired Saks Holdings, Inc. Upon completing the acquisition, Proffitt's, Inc. changed its name to Saks, Inc.
Since 2000 Saks has opened international locations in Saudi Arabia, United Arab Emirates, Bahrain, Kazakhstan, Canada, and Mexico City.
In August 2007, the United States Postal Service began an experimental program selling the "plus-four" ZIP Code extension to businesses. The first company to buy one was Saks Fifth Avenue, which received the ZIP Code 10022-7463 ("SHOE") for the eighth-floor shoe department in its flagship Fifth Avenue store.
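The vanity extension works because each digit on a standard telephone keypad carries a group of letters, so "SHOE" maps to 7463. The short Python sketch below illustrates the mapping (an illustration added here, not part of the USPS program; the names are hypothetical).

# Letters-to-digits mapping on the standard telephone keypad,
# showing why the extension 7463 spells "SHOE".
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def word_to_digits(word):
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.upper())

print(word_to_digits("SHOE"))  # -> 7463, the store's ZIP+4 extension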
During the 2007–2009 recession, Saks Fifth Avenue closed some stores and cut prices and profit margins, thus, according to Reuters, "training shoppers to expect discounts. It took three years before it could start selling at closer to full price". In the following years, the company closed stores in locations including Orange County (2010), Denver (2011), Pittsburgh (2012), Highland Park, Illinois (2012/13), and in June 2013 its last Dallas store, to implement the "strategy of employing our resources in our most productive locations".
As of 2013, the New York flagship store, whose real estate value was estimated between $800 million and over $1 billion at the time, generated around 20% of Saks' annual sales at $620 million, with other stores being less profitable according to analysts.
On July 29, 2013, the Hudson's Bay Company (HBC), owner of the competing chain Lord & Taylor, announced it would acquire Saks Fifth Avenue's parent company for US$2.9 billion. Plans called for up to seven Saks Fifth Avenue stores to open in major Canadian markets. Expansion into Canada was expected to compete with the Canadian Holt Renfrew chain and challenge Nordstrom's expansion into Canada, which began in summer 2014 with the opening of a Nordstrom store in Calgary. In January 2014, HBC announced the first Saks store in Canada would be located in its flagship Queen Street building in downtown Toronto, connected to the Toronto Eaton Centre via a sky bridge. The store opened in February 2016, with a second Toronto-area location in the Sherway Gardens shopping center opening in spring 2016. On February 22, 2018, Saks Fifth Avenue opened its third Canadian store in Calgary, Alberta.
In 2015, Saks began a $250 million, three-year restoration of its Fifth Avenue flagship store. In October 2015, Saks announced it would debut a new location in Greenwich, Connecticut, and in autumn 2015 it announced it would replace its existing store at the Houston Galleria with a new store.
On March 17, 2020, Saks temporarily closed its stores in response to the coronavirus pandemic.
In 2005, vendors filed complaints against Saks alleging unlawful chargebacks. The U.S. Securities and Exchange Commission (SEC) investigated the complaint for years and, according to the "New York Times", "exposed a tangle of illicit tactics that let Saks... keep money it owed to clothing makers", inflating Saks' yearly earnings by up to 43% and abusively collecting around $30 million from suppliers over seven years. Saks settled with the SEC in 2007, after firing at least three executives involved in the fraudulent activities.
In 2014, Saks fired transgender employee Leyth Jamal after she was allegedly "belittled by coworkers, forced to use the men's room and repeatedly referred to by male pronouns (he and him)". After Jamal filed a lawsuit for unfair dismissal, the company stated in a motion to dismiss that "it is well settled that transsexuals are not protected by Title VII of the Civil Rights Act of 1964." In a court filing, the United States Department of Justice rebuked Saks' argument, stating that "discrimination against an individual based on gender identity is discrimination because of sex." The Human Rights Campaign removed the company from its list of "allies" during the controversy. The lawsuit was later settled amicably, with undisclosed terms.
In 2017, following Hurricane Maria in Puerto Rico, Saks's San Juan store in the Mall of San Juan suffered major damage, along with its neighboring anchor store Nordstrom. Taubman Centers, the company that owns the mall, filed a lawsuit against Saks for failing to provide an estimated reopening date and failing to repair the damage after the hurricane, as required by a binding contract. Although Nordstrom reopened on November 9, 2018, Saks Fifth Avenue announced on October 30, 2018, that it would officially vacate the Mall of San Juan.
Saks-34th Street was a fashion-focused middle-market department store that was spun off from Saks & Company when that upscale retailer moved to a new store on New York's Fifth Avenue, a location that Saks Fifth Avenue maintains to this day. Saks-34th Street became a part of the New York division of Gimbels, and a sky bridge across 33rd Street connected the second floors of both flagship buildings. In the 1947 movie "Miracle on 34th Street", the facade of Saks-34th Street is shown in a scene that focuses on the Gimbels flagship store. Branch locations were opened around the greater New York area. After Gimbels decided to close the division, the first floor of the building was used as a Christmas-season annex for Gimbels before being sold to the E. J. Korvettes chain. After the demise of the Korvettes chain, the building was remodeled into the Herald Center. Today its primary tenant is H&M.
Saks Fifth Avenue at 9600 Wilshire Boulevard is a department store in Beverly Hills, California, and part of the Saks Fifth Avenue company. It was designed by the architectural firm Parkinson and Parkinson, with interiors by Paul R. Williams. The store opened in 1938. The exterior of the building was designed by the Parkinsons, with the interior completed by Williams in the Hollywood Regency style. David Gebhard and Robert Winter, writing in "Los Angeles: An Architectural Guide", described the building as having "enough curved surface to suggest that the thirties Streamline Moderne could be elegant". The store was expanded and redesigned by Williams in 1940 and 1948. It was immediately successful upon opening and would subsequently expand and come to employ 500 people.
Williams's designs for the store marked a departure from traditional department stores, reducing the emphasis on commerciality in a way that foreshadowed the rise of boutique stores in the 1980s and 1990s. Only a few examples of merchandise were displayed, in hidden recesses. The president of Saks Fifth Avenue, Adam Gimbel, said in an interview with the "Los Angeles Times" that "Each room attempts to create a mood which is in keeping with the merchandise sold there. For example, a Pompeian room done in cool green with appropriate frieze is used for beach and swimming pool costumes and a French provincial room houses informal sports and country clothes. The accessories are carried in an oval room done in a Regency spirit". The individual shopping areas of the store were semi-enclosed, which prevented distraction for customers. Williams created an interior reminiscent of his designs for luxurious private residences, with rooms lit by indirect lamps and footlights focused on the clothes. New departments for furs, corsets, gifts and debutante dresses were added in the 1940 expansion. The Terrace Restaurant, a rooftop restaurant run by Perino's, served customers for several years; it was expanded in the 1940s renovations to provide cover during inclement weather.
The store was featured in the 2005 film "Shopgirl". The story had originally been set in Neiman Marcus, but Saks Fifth Avenue lobbied the filmmakers to portray its store instead. | https://en.wikipedia.org/wiki?curid=29483
Seabee
United States Naval Construction Battalions, better known as the Navy Seabees, form the U.S. Naval Construction Force (NCF). The Seabee nickname is a heterograph of the first letters "C B" from the words Construction Battalion. Depending upon how the word is used "Seabee" can refer to one of three things: all enlisted personnel in the USN's occupational field 7 (OF-7), all officers and enlisted assigned to the Naval Construction Force (NCF), or Construction Battalions. Seabees serve outside the NCF as well. During WWII they served in both the Naval Combat Demolition Units and the Underwater Demolition Teams (UDTs). In addition, they served as elements of Cubs, Lions, Acorns and the United States Marine Corps.
They also provided the manpower for the top secret CWS Flame Tank Group. Today they have many special task assignments, starting with Camp David and the Naval Support Unit at the Department of State. Seabees serve under the Commanders of both the Atlantic and Pacific Naval Surface Forces, as well as on many base Public Works and USN diving commands.
Naval Construction Battalions were conceived as a replacement for civilian construction companies on contract to the Navy after the U.S. was attacked at Pearl Harbor. At that time civilian contractors had roughly 70,000 men working on U.S. bases overseas. International law made it illegal for civilian workers to resist an attack. To do so would classify them as guerrillas and could lead to summary execution. That is exactly what happened when the Japanese invaded Wake Island and would serve as the backstory to the WWII movie "The Fighting Seabees".
Adm. Moreell's concept for the CB was a USMC-trained battalion of construction tradesmen: a military equivalent of those civilian companies, capable of any type of construction, anywhere needed, under any conditions or circumstances. It was realized that CBs were flexible, adaptable and could be utilized in every theater of operations. The use of USMC organization allowed for smooth co-ordination, integration or interface between NCF and Marine Corps elements. Additionally, CBs could be deployed individually or in multiples as the project scope and scale dictated. What distinguishes Seabees from Combat Engineers are the skill sets; Combat Engineering is but a sub-set of the Seabee toolbox. They have a storied legacy of creative field ingenuity, stretching from Normandy and Okinawa to Iraq and Afghanistan. Adm. Ernest King wrote to the Seabees on their second anniversary, "Your ingenuity and fortitude have become a legend in the naval service."
Seabees believe that anything they are tasked with, they "Can Do" (the CB motto). They were unique at conception and remain unchanged from Adm. Moreell's model today. In the October 1944 issue of Flying magazine, the Seabees are described as "a phenomenon of World War II".
Since their creation, all Seabee advanced military training has been under USMC instruction. Even so, they always bring their toolbox. One of those tools is the ingenuity Admiral King referenced. They gained fame for their application of it during WWII; the UDTs and flamethrowing tanks are declassified top secret examples. Post-war they followed with more of the same for the CIA and State Department. Together with their USMC training and ability to appropriate anything, they provide the Navy an unconventional asset found nowhere else in the U.S. military.
CB Conceptual Formation
Pre-WWII, the concept pioneered in 1917 by the Twelfth Regiment had not been forgotten by the Navy's Civil Engineers. Planning at the Bureau of Yards and Docks (BuDocks) began providing for "Navy Construction Battalions" (CB) in contingency war plans. In 1934, Capt. Carl Carlson's version of the CB was approved by the Chief of Naval Operations.
In 1935, RADM. Norman Smith, head of BuDocks, selected Captain Walter Allen, War Plans Officer, to represent BuDocks on the War Plans Board. Capt. Allen presented the bureau's Construction Battalion concept and the Board included it in the Rainbow war plans. The Seabees named their first training center for Capt. Allen.
The proposal was criticized because the CBs would have a dual command: military control administered by fleet line officers, while construction operations would be administered by Civil Engineer Corps officers. Another issue was that there was no provision for the military organization or military training necessary to provide unit structure, discipline, and esprit de corps. In December 1937, RADM. Ben Moreell became BuDocks chief and the lead proponent of the CB proposal.
In 1941 civilian contractors were working on numerous projects for the Navy, and BuDocks decided to improve project oversight by creating "Headquarters Construction Companies". These companies would have 2 officers and 99 enlisted, but would do no actual construction. On October 31, 1941, RADM. Chester Nimitz, Chief of the Bureau of Navigation, authorized formation of the 1st Headquarters Construction Company. Recruitment began in November and boot training began December 7 at Naval Station Newport, Rhode Island. By December 16, four additional companies had been authorized, but Pearl Harbor had changed all the plans.
On December 28, 1941, RADM Moreell requested authority to commission three Naval Construction Battalions. His request was approved on January 5, 1942 by Admiral Nimitz. The 1st HQ Construction Company was used to commission the 1st Naval Construction Detachment, which was assigned to Operation Bobcat. They were sent to Bora Bora and are known in Seabee history as "Bobcats".
Concurrently, the other four requested HQ Construction Companies had been approved. BuDocks took Companies 2 & 3 to form the 1st Naval Construction Battalion at Charleston, South Carolina. HQ Companies 4 & 5 were used for the 2nd CB. All four companies deployed as independent units, and CBs 3, 4, & 5 were all deployed similarly. CB 6 was the first battalion to deploy at full complement to a single deployment site.
Before all this could happen, BuDocks had to address the dual command issue. Naval regulations stated that unit command was strictly limited to line officers. BuDocks deemed it essential that CBs be commanded by CEC officers trained in construction. The Bureau of Naval Personnel (BuPers) strongly opposed this. Adm. Moreell took the issue directly to the Secretary of the Navy, Frank Knox. On March 19, 1942, Knox gave the Civil Engineer Corps complete command of all naval construction units. Almost 11,400 men would become CEC during WWII, with 7,960 doing CB service. Two weeks prior, on March 5, all construction battalion personnel had been officially named "Seabees".
The first volunteers were construction tradesmen who were given advanced rank for their trade skills. This would result in their being the highest paid group in uniform. To recruit these men, age and physical standards were waived up to age 50. Until November 1942 the average recruit age was 37; even so, all received the same physical training. In December, FDR ordered the Selective Service System to provide CB recruits. Enlistees could request CB service with a written statement certifying that they were trade qualified. This lasted until October 1943, when voluntary enlistment in the Seabees ceased until December 1944. By war's end, 258,872 officers and enlisted had served in the Seabees. They never reached the Navy's authorized quota of 321,056.
In 1942, initial CB boot camp was Camp Allen, VA, which moved to Camp Bradford, then to Camp Peary, and finally to Camp Endicott, Rhode Island. CBs 1-5 were sent directly overseas for urgent projects. CBs that followed were sent to Advance Base Depots (ABDs) for deployment. Camp Rousseau at Port Hueneme became operational first and was the ABD for the Pacific. The Davisville ABD became operational in June, with NTC Camp Endicott commissioned that August. Other CB camps were Camp Parks, Livermore, CA; Camp Lee-Stephenson, Quoddy Village, Eastport, Maine; and Camp Holliday, Gulfport, MS.
CBs sent to the Pacific were attached to one of the four Amphibious Corps: I, III, and V were USMC, while the VII Amphibious Force was under General Douglas MacArthur, Supreme Commander.
Advance Bases
The Office of Naval Operations created a code identifying Advance Base (AB) construction as a numbered metaphor for the size/type of base. That code was also used to identify the "unit" that would be the administration for that base. These were Lion, Cub, Oak and Acorn, with a Lion being a main Fleet Base (numbered 1–6). Cubs were Secondary Fleet Bases one quarter the size of a Lion (numbered 1–12). Oak and Acorn were the names given to air installations, new or captured (airfield or airstrip). Cubs were quickly adopted as the favored type. The speed with which the Seabees were able to get a Cub operational led the Marines to consider them a tactical component. Camp Bedilion shared a common fence-line with Camp Rousseau at Port Hueneme, and was home to the Acorn Assembly and Training Detachment (AATD). As the war progressed, BuDocks realized that logistics required Advance Base Construction Depots (ABCDs), and CBs built seven. When the code was first created, BuDocks foresaw two CBs constructing a Lion; by 1944 an entire regiment was being used. The invasion of Okinawa took four Construction Brigades of 55,000 men. The Seabees built the infrastructure needed to take the war to Japan. By war's end, CBs had served on six continents and had constructed over 300 advance bases on as many islands. They built everything: airfields, airstrips, piers, wharves, breakwaters, PT & seaplane bases, bridges, roads, com-centers, fuel farms, hospitals, barracks and anything else.
Atlantic
In the Atlantic, the CBs' biggest job was the preparation for the Normandy landing. Months later, CBMUs 627, 628, and 629 were tasked to facilitate the crossing of the Rhine. For CBMU 629 it was front-line work.
USMC historian Gordon L. Rottman wrote that "one of the biggest contributions the Navy made to the Marine Corps during WWII was the creation of the Seabees". As part of that contribution, the Corps would be influential upon the CB organization and its history. After the experience of Guadalcanal, the Department of War decided that the Marines and Seabees would make all subsequent landings together. The order of battle would show the Seabees as being attached to the Marine Corps. That arrangement led to numerous good-natured claims by the Seabees that they had landed first, and to signs left on the beach saying "What took you so long?" The Seabees in the UDTs made the most of this.
When the first three battalions were formed, the Seabees did not have a fully functional base of their own. Upon leaving Navy boot camp, the first recruits were sent to National Youth Administration camps in Illinois, New Jersey, New York and Virginia to receive military training from the Marine Corps. The Marine Corps listed CBs on its Tables of Organization: "D-Series Division" for 1942, "E-Series Division" for 1943, and "Amphibious Corps" for 1944/45.
When the CBs were created, the Marine Corps wanted one for each of the three Marine Divisions, but was told no because of war priorities. That did not keep early Seabee units from having close contact with the Marine Corps.
The 1st Naval Construction Detachment (Bobcats), together with A Co. CB 3, was transferred to the Marines and redesignated 3rd Battalion, 22nd Marines. The Bobcats had deployed without receiving advanced military training; the 22nd Marines took care of that. The 4th Construction Detachment was attached to the 5th Marine Defense Battalion for two years.
By autumn, actual CBs, the 18th, 19th and 25th, had been transferred to the Corps as combat engineers. Each was attached to a composite engineer regiment and redesignated as the 3rd Bn of that regiment.
There were numerous USMC/Seabee pairings. The first one in combat was the 6th CB with the 1st Marine Division; the 18th CB was sent as their relief from the Fleet Marine Force depot at Norfolk. Many more would follow. The 6th Special CB was tasked to the 4th Marines Advance Depot in the Russells. In November, the 14th CB was tasked to the 2nd Raider Bn on Guadalcanal. Earlier, in June, the 24th CB was tasked to the 9th Marine Defense Bn on Rendova. The 33rd and 73rd CBs each had dets tasked to the 1st Pioneers as shore party for the 5th Marines on Peleliu; also attached was the 17th Special CB (colored). At Enogi Inlet on Munda, the 47th had a det support the 1st and 4th Marine Raiders. On Bougainville, the 3rd Marine Div. made the CO of the 71st CB shore party commander. The 71st was supported by dets from the 25th, 53rd, and 75th CBs. At Cape Torokina the 75th had 100 men volunteer to support the assault of the 3rd Marines. Also at Bougainville, the 53rd provided shore parties to the 2nd Raiders on Green Beach and the 3rd Raiders on Puruata Island. The 121st was formed at the CB Training Center of MTC Camp Lejeune as 3rd Bn, 20th Marines. They would be shore party to the 23rd Marines on Roi-Namur, Saipan, and Tinian.
In 1944 the Marine engineer regiments were inactivated. Even so, Marine Divisions still had CBs tasked to them. For Iwo Jima, the 133rd and 31st CBs were attached to the 4th and 5th Marine Divisions. The 133rd was tasked to the 23rd Marines as their shore party. The 31st CB was attached to the 5th Shore Party Regiment, with their demolitions men attached to the 5th Marine Div. The 8th Marine Field Depot was the shore party command echelon for Iwo Jima; they requested 26 heavy equipment operators from the 8th CB. Okinawa saw the 58th, 71st, 130th, and 145th CBs attached to the 6th, 2nd, and 1st Marine Divisions.
From Iwo Jima the 5th Marine Div. returned to Camp Tarawa to have the 116th CB attached. When Japan fell, the 116th CB was part of the occupation force. V-J Day found thousands of Japanese troops still in China, and the III Marine Amphibious Corps was sent there to get them home. The 33rd NCR was assigned to III Marine Amphib. Corps for this mission.
Seabee battalions were also tasked individually to the four Amphibious Corps. The 19th CB started out with I MAC prior to joining the 17th Marines. The 53rd CB was attached to I MAC as Naval Construction Battalion I M.A.C. When I MAC was redesignated III Amphibious Corps, the battalion became an element of the 1st Provisional Marine Brigade. For Guam, III Amphibious Corps had the 2nd Special CB, 25th and 53rd CBs. The CO of 3/19 Marines (25th CB) was shore party commander for the 3rd Marines on beaches Red 1 and Red 2; the 3rd Marines would award 25's shore party 17 Bronze Stars. V Amphibious Corps (VAC) had the 23rd Special and 62nd CBs on Iwo Jima. On Tinian the 6th Construction Brigade was attached to V Amphibious Corps.
When the war ended the Seabees had a unique standing with the U.S. Marine Corps. Seabee historian William Bradford Huie wrote that the two have a camaraderie unknown elsewhere in the U.S. military. Even though they are "Navy", the Seabees adopted USMC fatigues with a Seabee insignia in place of the EGA. A number of WWII CBs incorporated USMC insignia into theirs: CBs 5, 18, 19, 25, 31, 53, 71, 117 and the 6th Brigade. Admiral Moreell wrote that the Marines were the best fighting men in the Pacific. Even so, a leatherneck had to serve 90 days with the Seabees to qualify as a Junior Seabee.
see Notes
In early May 1943, a two-phase "Naval Demolition Project" was ordered by the Chief of Naval Operations "to meet a present and urgent requirement" for the invasion of Sicily. Phase one began at Amphibious Training Base (ATB) Solomons, Maryland, with the creation of Operational Naval Demolition Unit #1. Six officers led by Lt. Fred Wise, CEC, and eighteen enlisted reported from the Camp Peary dynamiting and demolition school. Seabees called them "Demolitioneers".
Naval Combat Demolition Units (NCDUs) consisted of one junior CEC officer and five enlisted, and were numbered 1–216. After that first group had been trained, Lt. Commander Draper Kauffman was selected to command the program. It had been set up in Camp Peary's "Area E" (explosives) at the dynamiting and demolition school. Between May and mid-July the first six NCDU classes graduated at Camp Peary. From there the program moved to Fort Pierce, where the first class began mid-July. Despite the move, Camp Peary remained Kauffman's primary recruit center: "He would go back to the dynamite school, assemble the [Seabees] in the auditorium and say, 'I need volunteers for hazardous, prolonged and distant duty.'" Fort Pierce had two Seabee units assigned, CBD 1011 and CBMU 570, which were tasked with the construction and maintenance of obstacles needed for demolitions training.
Thirty-four NCDUs were assigned to the invasion of Normandy. When the first 10 units arrived in England they had no commander, so Lt. Smith (CEC) assumed the role and split them into three groups to train with the 146th, 277th and 299th Combat Engineers. As more units arrived they were assigned to these groups, and each had 5 Army engineers attached. Group III (Lt. Smith) did research and development and is credited with developing the Hagensen Pack. NCDUs saw a 53 percent casualty rate at Normandy. Four from Utah Beach later took part in Operation Dragoon.
With Europe invaded, Admiral Turner requisitioned all available NCDUs from Fort Pierce for integration into the UDTs for the Pacific. That requisition order netted Admiral Turner 20 NCDUs that had received Presidential Unit Citations and another 11 that had received Navy Unit Commendations at Normandy. Before Normandy, 30 NCDUs had been sent to the Pacific while three had gone to the Mediterranean. NCDUs 1–10 were staged at Turner City, Florida Island, in the Solomons during January 1944. NCDU 1 had gone briefly to the Aleutians in 1943. NCDUs 4 and 5 were the first to see combat, with the 4th Marines at Green Island and Emirau Island. A few were temporarily attached to UDTs. Later, NCDUs 1–10 were combined to form Underwater Demolition Team Able; that team was eventually disbanded. NCDUs 2 and 3, plus 19, 20, 21 and 24, were assigned to MacArthur's 7th Amphibious Force and were the only NCDUs remaining at the war's end. The other men from Team Able were assigned to numeric UDTs.
see Notes
Prior to Operation Galvanic and Tarawa, V Amphibious Corps had identified coral as an issue for future amphibious operations. RADM. Kelly Turner, commander of V Amphibious Corps, had ordered a review to get a grip on the problem. VAC found that the only people having any applicable experience with the material were men in the Naval Construction Battalions. Lt. Thomas C. Crist of CB 10 was in Pearl Harbor from Canton Island, where he had been in charge of clearing coral heads; his being in Pearl Harbor was pivotal in UDT history. While there he learned of Adm. Turner's interest in coral blasting and met him. The Admiral tasked Lt. Crist with developing a method for blasting coral under combat conditions and putting together a team to do it. Lt. Crist started by getting men from CB 10. By December 1, 1943 he had close to 30 officers and 150 enlisted at Waipio Amphibious Operating Base on Oahu.
In November the Navy had a hard lesson with coral and tides at Tarawa. It prompted Adm. Turner to request the creation of nine Underwater Demolition Teams to address those issues: six teams for VAC in the Central Pacific, while the other three would go to III Amphibious Corps in the South Pacific. Adm. Turner chose the term "underwater" to distinguish them from the Fort Pierce program. UDTs 1 & 2 were formed from the 180 Seabees Lt. Crist had staged. Seabees made up the majority of the men in teams 1–9, 13 and 15. How many Seabees were in UDT 10 is not cited in the records, nor is anything stated for UDT 12. Seabees were roughly 20% of UDT 11.
UDT officers were mostly CEC. UDT 10 had 5 officers and 24 enlisted trained as the OSS Maritime Unit's Operational Swimmer Group II, but the OSS was not allowed to operate in the Pacific Theater. Adm. Nimitz needed swimmers and approved their transfer from the OSS to his control. The MU men brought with them the swimfins they had trained with, and the Seabees made them a part of UDT attire as quickly as the supply department could get them. In the Seabee-dominated teams, the next largest group of UDT volunteers came from the joint Army-Navy Scouts and Raiders school that was also at Fort Pierce. Additional volunteers came from the Navy's bomb disposal school, the Marine Corps and the U.S. Fleet.
The first team commanders were Cmdr. E.D. Brewster (CEC) of UDT 1 and Lt. Crist (CEC) of UDT 2. Both teams were "provisional", totaling the 180 men Lt. Crist had put together; seven different CBs made up UDT 2. They wore fatigues and life-vests and were expected to stay in boats like the NCDUs. However, at Kwajalein the Fort Pierce protocol was changed. Adm. Turner ordered daylight recon, and Ensign Lewis F. Luehrs and Seabee Chief Bill Acheson wore swim trunks under their fatigues. They stripped down and spent 45 minutes in the water in broad daylight. Still wet and in their trunks, they were taken directly to Adm. Turner to report. He concluded individual swimmers were the only way to get accurate intel on underwater obstacles, reporting as much to Adm. Nimitz. At Engebi, Cmdr. Brewster was wounded, and all of Ens. Luehrs's men wore trunks under their fatigues. The success of those UDT 1 Seabees not following Fort Pierce protocol rewrote the UDT mission model and training regimen. Ens. Luehrs and Chief Acheson were each awarded a Silver Star for their exploit, while unintentionally creating the UDT "naked warrior" image. Diving masks were not common in 1944, and some had tried using goggles at Kwajalein. They were a rare item in Hawaii, so Lt. Crist and CB Chief Howard Roeder had asked supply to get them. A fortuitously spotted magazine advertisement for diving masks solved the problem: a priority dispatch to the States appropriated the store's entire stock.
Adm. Turner also requested formation of a "Naval Combat Demolition Training & Experimental Base" at Kihei. It was approved, with the lessons of UDT 1 incorporated into the training, making it distinctly different from that at Fort Pierce. Lt. Crist was briefly the first training officer, until he was made commander of UDT 3. When UDT 3 returned from Leyte in November 1944, the team became school instructors and Lt. Crist was again OIC of training. Under Lt. Crist the training course was changed, with an emphasis on swimming and recon. Also covered were night ops, weapons, bivouacking and small unit tactics, along with coral and lava blasting. The team instructed until April 1945, when it was sent to Fort Pierce to instruct there. Lt. Crist was promoted to Lt. Cmdr. and returned to Hawaii. Team 3 would train teams 12–22. Teams 12, 13 and 14 all had men from Team Able. UDT 14 is called the first "all fleet team" even though Seabees from Team Able were attached and the commander and XO were both CEC (Ltjg. A.B. Onderdonk and Ltjg. C.E. Emery). UDT 15 was the last team formed of NCDUs. Teams 12, 13, 14, and 15 were sent to Iwo Jima; three teams would go back ashore on D-plus-2 to clear the water's edge for five days. After July 1944 new UDTs were USN only, with no Army or USMC personnel. In 1945 CBMU 570 was tasked to support UDT cold-water training at ATB Oceanside, CA.
On Guam, team 8 requested permission to build a base. It was approved by AdComPhibsPac, but disapproved by the Island Command. Team 8 turned to the CBs on the island to appropriate everything they needed. The coral paving was placed the night before Admiral Nimitz made an inspection. The Admiral gave the base and teams 8 & 10 a glowing review.
By V-J Day 34 teams had been formed.
Teams 1–21 saw actual deployment. The Seabees provided over half of the men in those teams. The Navy did not publicize the existence of the UDTs until after the war, and when it did, it gave credit to Lt. Cmdr. Kauffman and the Seabees. During WWII the Navy had neither a rating nor an insignia for the UDTs. Those men with the CB rating on their uniforms considered themselves Seabees that were doing underwater demolition. They did not call themselves "UDTs" or "Frogmen", but rather "Demolitioneers", reflecting where Lt. Cmdr. Kauffman had recruited them from: the CB dynamiting and demolition school.
UDT volunteers had to be of standard recruiting age; older Seabees could not volunteer. In preparation for the invasion of Japan the UDTs created a cold-water training center, and mid-year 1945 men had to pass a stricter physical. Team 9 lost 70% of its men to this change.
see Notes
In February 1942 CNO Admiral Harold Rainsford Stark recommended African Americans for ratings in the construction trades. In April the Navy announced it would enlist African Americans in the Seabees. Even so, there were just two CBs that were "colored" units, the 34th and 80th. Both had white Southern officers and black enlisted. Both battalions experienced problems with that arrangement that led to the replacement of the officers. The men of the 34th went on a hunger strike which made national news. The Commander of the 80th had 19 enlisted dishonorably discharged for sedition. The NAACP and Thurgood Marshall got 14 of those reversed. In 1943 the Navy drew up a proposal to raise the number of colored CBs to 5 and require that all non-rated men in the next 24 CBs be colored. The proposal was approved, but not acted on.
The lack of stevedores in combat zones was a huge issue for the Navy. Authorization for the formation of cargo-handling CBs, or "Special CBs", came in mid-September 1942. By war's end 41 Special CBs had been commissioned, of which 15 were "colored". They were the first fully integrated units in the U.S. Navy. V-J Day brought the decommissioning of all of them. The Special CBs were forerunners of today's Navy Cargo Handling Battalions of the Navy Expeditionary Logistics Support Group. The arrival of 15 colored Special CBs in Pearl Harbor made segregation an issue for the Navy. For some time the men slept in tents, but the disparity of treatment was obvious even to the Navy. The 14th Naval District felt they deserved proper shelter, with at least separate but equal barracks. Manana Barracks and Waiawa Gulch became the United States' largest colored military installation, with over 4,000 Seabee stevedores housed there. It was the site of racial strife to the point that the camp was fenced in and placed under armed guard. The Seabees would be trucked back and forth to the docks in cattle trucks. Two naval supply depots were located at Waiawa Gulch.
The 17th Special (colored) CB at Peleliu, 15–18 September 1944, is omitted from the USMC order of battle. On D-day at Peleliu, the 7th Marines were in a situation where they did not have enough men to man the lines and get the wounded to safety. Coming to their aid were two companies of the 16th Marine Field Depot (colored) and the 17th Special CB. The Japanese mounted a counter-attack at 0200 hours on D-day night. By the time it was over, nearly the entire 17th had volunteered to carry ammunition to the front lines on the stretchers they brought the wounded back on. They volunteered to man the line where the wounded had been, to man 37mm guns that had lost their crews, and for anything else the Marines needed. The 17th remained with the 7th Marines until the right flank had been secured on D-plus-3. According to the Military History Encyclopedia on the Web, "were it not for the Black Marine shore party, the counterattack on the 7th Marines would not have been repulsed".
A Construction Battalion Detachment (CBD) was formed by "screening Camp Peary and the NCF for geologists, petroleum engineers, oil drillers, tool pushers, roustabouts and roughnecks" and later designated 1058. Many additional enlisted and officers were chosen for their arctic experience with CB 12 and CB 66. The selected men were assembled at Camp Lee-Stephenson. In 1944, Congress earmarked $1,000,000 for Operation Pet 4 to determine if there was actually oil in NPR 4 (U.S. Navy Petroleum Reserve No. 4). NPR-4 had been created and placed in the oil reserve in 1923; today it is the National Petroleum Reserve in Alaska. The detachment's mission was oil exploration.
In 1944 the base camp was constructed at Point Barrow. Four D-8s with twenty sleds of supplies were prepped for the 330-mile trek to Umiat once the tundra had frozen. After those supplies were delivered, the Cats returned for the heavy well equipment. During the summer of 1945 a 1,816-foot wildcat well was drilled and designated Seabee #1 before being shut down by the cold. The well site was near four known seeps at Umiat in the very southeast of NPR 4. The rock in the area was from the Upper Cretaceous, and a stratum of it was named the "Seabee Formation". On the coast the Seabees drilled test holes at Cape Simpson and Point Barrow. Once the runways were completed, additional supplies were flown in. In March 1946 civilians took over the project; some had been members of CBD 1058 and had been hired immediately upon discharge for the same job they had performed for the Navy. The Navy drew upon the cold weather experience it gained from CBD 1058 and applied it in Operation Highjump and Operation Deep Freeze. Today Seabee #1 is a USGS monitoring well.
Land surveys
Twice the Seabees have been tasked with large-scale land surveys. The first was done by CBD 1058 for a proposed NPR 4 pipeline route to Fairbanks; the Trans-Alaska Pipeline follows a portion of their survey from roughly the Arctic Circle to Fairbanks. The second was done by a Seabee team from MCB 10, sent to Vietnam in 1956 to survey and map that country's entire road network. This work would be heavily drawn upon during the Vietnam War.
see Notes
On V-J Day CB 114 was in the Aleutians. In September 1945 the battalion sent a detachment to the USSR to build a Fleet Weather Central. It was located outside Petropavlovsk-Kamchatsky on the Kamchatka Peninsula and code-named TAMA. The original agreement gave the Seabees three weeks to complete the base; upon arrival the Russians told the Seabees they had 10 days, and were amazed that the Seabees did it. It was one of two such stations that Stalin agreed to. The other was near Khabarovsk, Siberia, in buildings provided by the Russians.
V-J Day led to Operation Beleaguer, for the repatriation of the remnants of the Japanese Army left in China. Part of the 33rd CB Regiment was tasked: CBs 83, 96, 122 and the 32nd Special. These units landed at Tsingtao and Tangku in November 1945, attached to the 6th Marine Division. CB 42 and A Co. of the 33rd Special landed at Shanghai, attached to Naval Advance Base Unit 13. With the war over, the ongoing discharge of eligible men left only enough for one CB and the two Special CBs; the men were consolidated in the 96th, with the other units decommissioned. In December the 96th started airfields at Tsingtao and Chinwangtao in support of III Marine Amphibious Corps operations. On 20 May 1946 orders were issued for CB III Marine Amphibious Corps to inactivate the 96th CB on 1 August. Prior to that, the 6th Marine Division had been renamed the 3rd Marine Brigade for a short period. The 96th CB was transferred to the 4th Marines, 1st Marine Division, and deactivated from them in August.
In early 1946 the 53rd NCB was deployed with Operation Crossroads for the nuclear testing at Bikini Atoll. The unit was assigned to Task Group 1.8 and designated TU 1.8.6. The 53rd's project list included observation, instrument and communication towers, radio beacons, seismic huts, photo reference crosses, and general base and recreational facilities, as well as dredging the lagoon. From March to May the battalion strength was 1,006, including stevedores. The numbers were then drawn down until August 3, when the battalion was decommissioned. The remaining men were transferred to CBD 1156, which was commissioned on Bikini; the TU 1.8.6 designation continued with them. The CBD remained at the atoll for nine days after the second nuclear test.
UDT 3 was designated TU 1.1.3 for the operation. On 27 April 1946, seven officers and 51 enlisted embarked on the USS Begor (APD-127) at CBC Port Hueneme for transit to Bikini. Their assignment was to retrieve water samples from ground zero of the Baker blast. In 1948, the displaced Bikinians put in a request that the U.S. Navy blast a channel to the island of Kili, where they had been relocated. The job was given to the Seabee detachment on Kwajalein, which requested that UDT 3 assist.
In January 1947, CBs 104 and 105 were reactivated. The 121st CB was decommissioned in December and redesignated CBD 1504. The 30th NCR was homeported on Guam, composed of CBDs 1501-13 and NCB 103. In 1949, the 103rd was made a Mobile Construction Battalion (MCB) while CBs 104 and 105 were made Amphibious Construction Battalions (ACBs). From 1949 until 1968 CBs were designated MCBs. In June 1950 the Naval Construction Force numbered roughly 2,800.
The outbreak of the Korean War led to a call-up of 10,000 from the Seabee Reserve. Seabees landed at Inchon during the assault, installing causeways while dealing with enormous tides and enemy fire. Their actions there and elsewhere underscored the necessity of having CBs. During that war the authorized size of a CB was 550 men. When the truce was declared there was no CB demobilization as there had been at the end of WWII.
During the Korean War, the U.S. realized the need for an air station in the region. Cubi Point in the Philippines was selected. Civilian contractors were approached for bids; after seeing the Zambales Mountains and the maze of jungle, they claimed it could not be done. The Navy then turned to the Seabees. The first to arrive was CBD 1802, to do the surveying. MCB 3 arrived on 2 October 1951 to get the project going and was joined by MCB 5 in November. Over the next five years, MCBs 2, 7, 9, 11 and CBD 1803 all contributed to the effort. They leveled a mountain to make way for the runway. NAS Cubi Point turned out to be one of the largest earth-moving projects in the world, equivalent to the construction of the Panama Canal: Seabees there moved millions of cubic yards of dry fill, plus another 15 million that was hydraulic fill. The $100 million facility was commissioned on 25 July 1956, and comprised an air station and an adjacent pier that was capable of docking the Navy's largest carriers. Adjusted for inflation, today's price tag for what the Seabees built at Cubi Point would be $906,871,323.53.
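The inflation adjustment quoted above is just a consumer-price-index ratio applied to the 1956 cost. The Python sketch below shows that arithmetic; the index values are illustrative assumptions of this edit, not figures from the article's source.

# Inflation adjustment as a CPI ratio: cost_then * (CPI_now / CPI_then).
# The CPI values below are assumed, illustrative numbers.
COST_1956 = 100_000_000      # reported cost of the Cubi Point facility
CPI_1956 = 27.2              # assumed annual-average CPI-U for 1956
CPI_NOW = 246.7              # assumed recent annual-average CPI-U

adjusted = COST_1956 * (CPI_NOW / CPI_1956)
print(f"${adjusted:,.2f}")   # about $907 million, the article's order of magnitude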
Seabee Teams
The first Seabees to be referred to as Seabee Teams were CBD 1802 and CBD 1803. They were followed by Detachments Able and Baker. Then someone in the U.S. State Department learned of these teams and had an idea for making "good use" of the Seabees in the Cold War. Teams could be sent as "U.S. Good Will Ambassadors" to third-world nations as a means to combat the spread of Communism and promote "Good Will", a military version of the Peace Corps. These 13-man teams would construct schools, drill wells or build clinics, creating a positive image of and rapport with the U.S. They were utilized by the United States Agency for International Development and were in Southeast Asia by the mid-1950s. In the early sixties the U.S. Army Special Forces were being sent into rural areas of South Vietnam to develop a self-defense force to counter the Communist threat, and making use of the Seabee teams at these same places made sense to the CIA. To start, twelve Seabee teams with Secret clearances were sent with the Army's Special Forces in the CIA-funded Civilian Irregular Defense Group (CIDG) program in the years 1963–1965. By 1965 the U.S. Army had enough engineers in theater to end Seabee involvement with Special Forces. At first the teams were called Seabee Technical Assistance Teams (STATs) and were restricted to two in theater at a time. Teams after STAT 1104 were renamed Seabee Teams, and by 1969 there were 17 in theater. As a military force, Seabee Teams received many awards for heroism. Teams were sent to other nations as well: the Royal Thai government requested STATs in 1963, and ever since the Seabees have continued to deploy teams.
Construction Civic Action Details or CCAD
CCADs or "See-Kads" are larger civic action units of 20–25 Seabees with the same purpose as Seabee Teams. The CCAD designation is not found in the record prior to 2013.
Operation Highjump
In December 1946, 166 Seabees sailed from Port Hueneme on the USS Yancey and USS Merrick, assigned to Operation Highjump. They were part of Admiral Richard E. Byrd's Antarctic expedition. The U.S. Navy was in charge, with "classified" orders "to do all it could to establish a basis for a (U.S.) land claim in Antarctica". The Navy sent the Seabees to do the job, starting with the construction of Little America IV as well as a runway for aerial mapping flights. This operation was vastly larger than the IGY Operation Deep Freeze that followed.
Operation Deep Freeze
In 1955, Seabees were assigned to Operation Deep Freeze, making Antarctica an annual deployment site. Their task was the construction and maintenance of scientific bases for the National Science Foundation. The first "wintering over" crew included 200 Seabees. They cleared an ice runway at McMurdo Sound for the advance party of Deep Freeze II to fly to South Pole Station. MCB 1 was assigned for Deep Freeze II.
Antarctica added to the Seabees' list of accomplishments.
see Notes
Photo: STAT 1104 in Port Hueneme. L-R standing: John Klepher, Dale Brakken, William Hoover (KIA), Ltjg. Peterlin, Cmdr. L.W. Eyman, Douglas Mattick, James Keenan, J.R. McCully, Marvin Shields (KIA); kneeling: Richard Supczak, F.J. Alexander Jr., James Wilson, Jack Allen. For their actions in the Battle of Dong Xoai, the 9-man team received the Navy Unit Commendation, a Medal of Honor, 2 Silver Stars, 6 Bronze Stars with Vs and 9 Purple Hearts. (USN)
Photo: Vietnam-era EO3 – EO1 collar devices.
Seabees deployed to Vietnam twice in the 1950s, first in June 1954 as elements of Operation Passage to Freedom and then two years later to map and survey the nation's roads. Seabee Teams 501 and 502 arrived on 25 January 1963 and are regarded as the first Seabees of the Vietnam War. They were sent to Dam Pau and Tri Ton to build camps for the Special Forces. In 1964 ACB 1 was the first CB in the theater. Beginning in 1965, Naval Construction Regiments deployed to the theater. Seabees supported the Marines at Khe Sanh and the Chu Lai combat base, in addition to building numerous aircraft-support facilities, roads, and bridges. They also worked with and taught construction skills to the Vietnamese. In June 1965, Construction Mechanic 3rd Class Marvin G. Shields of Seabee Team 1104 fought at the Battle of Dong Xoai. He was posthumously awarded the Medal of Honor and is the only Seabee to have received the medal. Seabee Teams continued to be deployed throughout the Vietnam War, often engaging enemy forces alongside Marines and Army Special Forces. Teams typically built schools and clinics, or drilled wells. In 1966 Seabees repaired the airfield at Khe Sanh in four days with 3,900 feet of 60-foot-wide aluminum matting; General Westmoreland "called it one of the most outstanding military engineering feats in Vietnam." MCB 4 had a det at Con Thien whose actions were a near repeat of Dong Xoai.
In 1968 the Marine Corps requested that the Navy make a name change to the CBs to reduce confusion: the Marines were using "MCB" for Marine Corps Base while the Navy was using "MCB" for Mobile Construction Battalions. The Navy added "Naval" to MCB, creating the NMCBs that now exist. During that year the 30th Naval Construction Regiment had five battalions in the Da Nang area and two at Chu Lai. The 32nd NCR had three battalions tasked near Phu Bai and one at Dong Ha. In May 1968 two reserve battalions, RNMCB 12 and 22, were activated, bringing the total number of battalions in Vietnam to 21. Both ACBs were in theater, as well as Construction Battalion Maintenance Units (CBMUs) 301 and 302. In 1968 NMCB 10 had an unusual tasking supporting the 101st Airborne. During 1969 the number of Seabees deployed topped out at 29,000; from there the drawdown began. The last battalion withdrew in late 1971, with the last Seabee teams out a year later. When it was over they had sent 137 Seabee teams, built 15 CB camps, and deployed 22 battalions. CBMU 302 became the largest CB ever, at over 1,400 men, and was homeported at Cam Ranh Bay. On 23 April 1975 it was announced that U.S. involvement in Vietnam was over; that day saw NMCB 4 start construction of a temporary camp for Operation New Life on Guam. In seven days 2,000 squad tents were put up, 3,500 when done.
During Vietnam the Seabees had a few uniform variations. One was the stenciling of unit numbers across the back of the M-65 field jacket. Another was the collar and cover devices for E-4 through E-6 enlisted: the Navy authorized that the "crow" be replaced by the rating insignia of each trade. Nametags were another; they started out white with a multicolored Seabee until 1968, when they followed the USMC OD green pattern. The NAVCATs became the only Seabees ever authorized to wear a shoulder patch.
NAVCATs: Naval Construction Action Teams
CBMU 302 had 23 NAVCATs in total, with 15 active at its peak; teams were numbered 1–23. They were Vice Admiral [[Elmo Zumwalt]]'s expansion of the Seabee Team concept, which he submitted in November 1968 to General [[Creighton Abrams]], commander of [[Military Assistance Command, Vietnam]].
Agent Orange
Many Seabees were exposed to the [[defoliant]] [[herbicide]] [[Agent Orange]] while in Vietnam. NCBC Gulfport was the largest storage depot in the United States for Agent Orange, from which it was shipped to Vietnam. In 1968 the NCBC received 68,000 [[barrel]]s to forward. Long-term barrel storage began in 1969 and lasted until 1977. The site covered 30 [[acre]]s and was still being cleaned up in 2013.
see Notes
[[Image:Tektite I exterior.jpg|right|thumb|Tektite I assembled by ACB 2 (NOAA)]]
In 1960 MCB 103 built a [[Project Mercury]] [[telemetry]] and [[Ground station|ground instrumentation station]] on Canton Island.
On 28 January 1969 a detachment of 50 men from [[Amphibious Construction Battalion 2]], plus 17 Seabee divers, began installation of the [[Tektite habitat]] in Great Lameshur Bay at [[Lameshur, U.S. Virgin Islands]]. The Tektite program was funded by [[NASA]] and was the first scientists-in-the-sea program sponsored by the U.S. government. The Seabees also constructed a 12-hut base camp that is used today as the Virgin Islands Environmental Resource Station (VIERS). The Tektite project was a product of the Cold War; it caused the U.S. Navy to realize the need for a permanent underwater construction capability, which led to the formation of the Seabee Underwater Construction Teams.
see Notes
As the [[Cold War]] wound down, new challenges and changes came for the Seabees, starting with the increased incidence of terrorism. This was in addition to ongoing Seabee support missions for USN/USMC bases worldwide, including Cold War facilities such as the [[UGM-27 Polaris|Polaris]] and [[UGM-73 Poseidon|Poseidon]] submarine bases at [[Holy Loch]] and [[Naval Station Rota, Spain|Rota]]. In 1971, the Seabees began their huge peacetime construction project on [[Naval Support Facility Diego Garcia|Diego Garcia]] in the [[Indian Ocean]]; it was completed in 1987 at a cost of $200 million, a figure the extended construction timeline makes difficult to adjust into today's dollars. The complex accommodates the Navy's largest ships and cargo planes, and the base served as a staging facility for Operations [[Desert Shield]] and [[Desert Storm]]. Seabee construction was also responsible for the upgrade and expansion of [[Naval Air Station Sigonella]], Sicily, making it a major base for the [[United States Sixth Fleet]].
There were combat-related assignments as well. In 1983, a truck bomb demolished the [[1983 Beirut barracks bombings|Marines' barracks in Beirut]], Lebanon. At [[Beirut International Airport]], [[Druze]] militia artillery harassed the Marines. After consultations, NMCB-1 in Rota sent a 70-man AirDet to construct secure bunkers for the Marines. EO2 Kirt May became the first Seabee since Vietnam to receive a [[Purple Heart]] while on the job.
CN Carmella Jones became the first female Seabee when she cross-rated to Equipment Operator during the summer of 1972.
[[Robert Stethem]] was executed by the Lebanese [[Shia Islam|Shia]] militia [[Hezbollah]] when they hijacked [[TWA Flight 847]] in 1985. SW2 Stethem was a Seabee [[Navy diver (United States Navy)|diver]] in UCT 1. The destroyer USS "Stethem" (DDG-63) was named for him. On 24 August 2010, on board USS "Stethem", SW2 Stethem was posthumously made an honorary [[Steelworker (United States Navy)|Master Chief Constructionman (CUCM)]] by the [[Master Chief Petty Officer of the Navy]] and awarded the Prisoner of War Medal.
During the [[Gulf War|Persian Gulf War]], more than 5,000 Seabees served in the Middle East. In August 1990 the First Marine Expeditionary Force (I MEF) was initially assigned NMCBs 4, 5, 7, and 40. The first Seabees in theater were a det from ACB 1, soon joined by a det from ACB 2. Shortly after them, CBUs 411 and 415 arrived in [[Saudi Arabia]]. In mid-September the air dets of the four CBs arrived to build airfields for Marine Air Groups (MAG) 11, 13, 16, and 25 of the [[3rd Marine Air Wing]]. NMCB 7 was the first battalion to arrive. Camps were constructed for both the 1st and 2nd Marine Divisions, as well as headquarters complexes for I and II MEF. Overall, in Saudi Arabia, Seabees built numerous camps and galleys. They laid millions of square feet of [[runways]] and [[airport aprons|aprons]], as well as over 200 [[helicopter|helo]] zones. They built and maintained two 500-bed Fleet Hospitals near [[Jubail|Al-Jubayl]]. The 3rd NCR was activated to provide a command echelon. NMCBs 24 and 74 were also deployed in support of the Marine Corps. A desert camp named "Camp Nomad" was constructed at Ras Al Mishab, near the Kuwaiti border, to support [[MAG 26]].
[[File:US Navy 030527-N-5362A-010 Engineering Aide 1st Class Scott Lyerla assigned to Naval Mobile Construction Battalion Fifteen (NMCB-15) helps to guard his convoy as it travels through Al Hillah in support of Operation Iraqi Freedo.jpg|thumb|NMCB 15 Seabee mans a machine gun while travelling through Al [[Hillah]], Iraq in May 2003.]]
Seabees were deployed in the invasion of Afghanistan in 2001 and Iraq in 2003. All active and reserve NMCBs and NCRs were sent to repair infrastructure in both countries. NMCB 133 deployed to FOB [[Camp Rhino]] and [[Kandahar Airfield]], where a detention facility was constructed. One of the Seabees' most visible tasks was the removal of statues of [[Saddam Hussein]] in Baghdad. In Afghanistan, the Seabees' main task was the construction of multiple [[forward operating base]]s.
Since 2002, Seabees have provided vital construction skills for civic action programs in the Philippines. Their efforts have had an effect in the southern Philippines, most notably near [[Abu Sayyaf]]'s jungle training area. Seabees work with the Army, Marines, and Air Force under Joint Special Operations Task Force – Philippines.
see Notes
[[File:US Navy 060821-N-7770P-002 A team of U.S. Navy Seabees assigned to Naval Mobile Construction Battalion Five (NMCB 5), attached to Combined Joint Task Force Horn of Africa (CJTF HOA), set up tents.jpg|thumb|NMCB 5 attached to [[Combined Joint Task Force – Horn of Africa]] set tents for displaced flood victims in [[Ethiopia]]. (2006)]]
At present, there are six active-duty Naval Mobile Construction Battalions (NMCBs) in the United States Navy, split between the Pacific Fleet and the Atlantic Fleet.
30th Naval Construction Regiment is headquartered on Guam; the homeport for the Pacific Fleet battalions is [[Naval Construction Battalion Center Port Hueneme]], California.
22nd Naval Construction Regiment is stationed at [[Naval Construction Battalion Center (Gulfport, Mississippi)]], the homeport of the Atlantic Fleet CBs.
NCF Reserve
From the 1960s through 1991, reserve battalions were designated "Reserve Naval Mobile Construction Battalions" (RNMCBs). After 1991 "Reserve" was dropped with the integration of reserve units within the NCF, making all battalions NMCBs.
Detachment: A construction crew that is "detached" from the battalion's "main body" deployment site. The size is determined by the project scale and completion date.
Battalion: The [[battalion]] is the basic NCF unit, with a HQ company plus four construction companies: A, B, C, and D. CBs are organized to function as independent, self-sufficient units.
Regiment: Naval Construction Regiments (NCRs) provide a higher echelon of command for three or four CBs operating in close proximity.
Division: The 1st Naval Construction Division was in service from August 2002 until May 2013, when it was decommissioned.
Naval Construction Groups: In 2013, the Seabee Readiness Groups (SRGs) were decommissioned and reorganized as Naval Construction Groups 1 and 2. They are regimental-level command groups tasked with administrative and operational control of CBs, as well as conducting pre-deployment training for all assigned units. Naval Construction Group 2 (NCG-2) is based at CBC Gulfport, and Naval Construction Group 1 (NCG-1) is at CBC Port Hueneme.
Seabee Engineering Reconnaissance Teams (SERTs)
[[File:US Navy 030412-N-1485H-009 Seabee Engineer Reconnaissance Team (SERT) reach their mission destination to determine if an old bridge can be used to support troop and convoy movements during an annual field exercise.jpg|thumb|Seabee Engineer Reconnaissance Team from NMCB 40 making an assessment of a bridge to determine its structural capacity to support movements during a field exercise.]]
Seabee Engineer Reconnaissance Teams are ten-person teams developed during Operation Iraqi Freedom (OIF). SERTs are divided into three elements: liaison, security, and reconnaissance. The liaison (LNO) element has a CEC officer and two communications petty officers who are responsible for the transfer of field assessments, intelligence, and command reach-back. The reconnaissance element has a CEC officer, who is the Officer-in-Charge (OIC), a [[Builder (US Navy)|BU]] or [[Steelworker (US Navy)|SW]] [[chief petty officer]] with bridge construction experience, and petty officers of OF-7 ratings. The unit has a [[corpsman]] or medically trained member, with the rest of the team selected for being the best of their trades in their battalion. All are qualified Seabee Combat Warfare Specialists. The UCTs proved the SERT concept was viable, leading to its adoption throughout OIF.
Amphibious Construction Battalions (PHIBCBs)
[[File:US Navy 030404-N-1050K-023 U.S. Navy Seabees assigned to Amphibious Construction Battalions One and Two prepare to place the next roadway section being used in the building of the Elevated Causeway System-Modular (ELCAS (M)) st.jpg|thumb|Seabees from ACBs 1 and 2 place a deck section in the assembly of the Elevated Causeway System-Modular (ELCAS (M)) at [[Camp Patriot]], [[Kuwait]] (April 2003).]]
ACBs (or PHIBCBs) were preceded by the pontoon assembly CBs formed during World War II. On 31 October 1950, MCBs 104 and 105 were re-designated ACB 1 and ACB 2 and assigned to Naval Beach Groups. ACBs report to [[Surface warfare|surface]] [[U.S. Navy type commands#Commander, Naval Surface Forces|TYCOMs]]. Additionally, in an ACB half the enlisted hold construction ratings while the other half hold fleet ratings.
Construction Battalion Maintenance Units:
When formed during World War II, these units had one-quarter the personnel of a CB. Their task was to assume maintenance of bases once CBs had completed construction. Today, CBMUs provide public works support at Naval Support Activities, Forward Operating Bases, and Fleet Hospital/Expeditionary Medical Facilities during wartime or contingency operations for a Marine Expeditionary Force (MEF), Marine Expeditionary Group (MEG), or NSW. They also provide disaster recovery support to Naval Regional Commanders in [[Contiguous United States|CONUS]].
Underwater Construction Team (UCT):
[[File:U.S. Sailors assigned to Construction Dive Detachment Alpha, Underwater Construction Team 2 dive over the remains of the battleship USS Arizona at Joint Base Pearl Harbor-Hickam, Hawaii, March 21, 2013 130321-N-WX059-135.jpg|thumb|Underwater Construction Team 2 along with divers of the [[National Park Service]] make dives to ascertain the condition and status of the battleship [[USS Arizona Memorial]] at Pearl Harbor in 2013]]
"NAVFAC Engineering & Expeditionary Warfare Center Ocean Facilities Department supports the Fleet through the support it gives the Underwater Construction Teams". UCTs deploy worldwide to conduct underwater construction, inspection, repair, and demolition operations of ocean facilities, to include repair of battle damage. They maintain a capability to support a [[Fleet Marine Force]] [[Amphibious warfare|amphibious assault]], subsequent combat service support ashore, and self-defense for their camp and facilities under construction. UCT1 is [[home port]]ed at Virginia Beach, Virginia, while UCT2 is at Port Hueneme, California.
Public Works: U.S. Naval Bases: These units are led by CEC officers, with enlisted Seabees manning the various crews. About one-third of new Seabees are assigned to Public Works Departments (PWD) at naval installations both within the United States and overseas. While stationed at a Public Works Department, a Seabee has the opportunity to get specialized training and extensive experience in one or more facets of their rating. Some bases have civilians who augment the Seabees, but the department is a military organization.
[[File:Swan Islands.jpeg|thumb|CIA runway by MCB 6 Det Alfa on Swan Island]]
see Notes
Naval Intelligence: NAVFACs
The Navy built 22 Naval Facilities (NAVFACs) for its [[Sound Surveillance System]] (SOSUS) to track Soviet submarines. They were in service from 1954 to 1979, with Seabees staffing the public works at each facility. In the 1980s, technology reduced the number of tracking stations to 11 with the advent of the Integrated Underwater Surveillance System (IUSS). The NAVFAC tracking facilities were finally undone by further advances in technology, the end of the Cold War, and the disclosures made to the Soviets by [[John Anthony Walker|John Walker]].
The Seabees have also been tasked with building naval communication facilities. One at [[Nea Makri]], Greece, was built by MCB 6 in 1962 and later upgraded by NMCB 133. [[United States Naval Communications Station Sidi Yahya El Gharb|Naval Communications Station Sidi Yahya]] is another, dating back to WWII; yet another is NavCommSta Guam, which started out on the island as the Joint Communications Agency (JCA) in 1945.
Camp David is officially known as [[Camp David|Naval Support Facility Thurmont]], because it is technically a [[military installation]]. The staffing is primarily provided by the CEC, Seabees, and Marines of the U.S. Navy and Marine Corps. "In the early 1950s, the first Seabee BUs, UTs and CEs took over routine maintenance and repairs of the base. Although there have been vast changes made at the Camp over the years, Seabees continue to staff base public works while keeping the [[Groundskeeping|grounds]] in an impeccable condition." Additional Naval rates were added to oversee base administrative functions. "Selectees undergo a single scope [[Background check|background investigation]] to determine if they are eligible for a [[Security clearance|Top Secret Sensitive Compartmentalized Information (TS/SCI) Yankee White (YW) clearance]]. All personnel assigned to duty in Presidential support activities are required to have a "Yankee White" clearance. The tour lasts 36 months." When the base has a larger construction project a regular Naval Construction Battalion will send a detachment to take care of the job. CBs 5 and [[Naval Mobile Construction Battalion 133|133]] have drawn these assignments.
[[File:Diplomatic Security photo.jpg|thumb|Naval Support Unit Seabees securing a diplomatic compound in Dec. 2010. (Dept. of State)]]
In 1964, at the height of the Cold War, Seabees were assigned to the State Department after listening devices were found in the [[Embassy of the United States in Moscow]]. Those initial Seabees were "Naval Mobile Construction Battalion FOUR, Detachment November". The U.S. had just constructed a new embassy in [[Warsaw]]; after what had been found in Moscow, Seabees were dispatched and found many "bugs" there also. This led to the creation of the Naval Support Unit in 1966, as well as the decision to make it permanent two years later. That year William Darrah, a Seabee of the support unit, was credited with saving the U.S. Embassy in [[Prague, Czechoslovakia]] from a potentially disastrous fire. In 1986, "as a result of reciprocal expulsions ordered by Washington and Moscow", Seabees were sent to "Moscow and Leningrad to help keep the embassy and the consulate functioning".
The Support Unit has a limited number of special billets for select NCOs, E-5 and above. These Seabees are assigned to the [[Department of State]] and attached to [[Diplomatic Security]]. Those chosen can be assigned to the [[Regional Security Officer]] of a specific embassy or be part of a team traveling from one embassy to the next. Duties include the installation of [[alarm systems]], [[CCTV cameras]], [[electromagnetic lock]]s, safes, and vehicle barriers, and securing compounds. They can also assist with [[security engineering]] in sweeping embassies (electronic counter-intelligence). They are tasked with new construction or renovations in security-sensitive areas and supervise private contractors in non-sensitive areas. Due to diplomatic protocol, Support Unit Seabees are required to wear civilian clothes most of the time they are on duty and receive a supplemental clothing allowance for this. Information regarding this assignment is scant, but State Department records from 1985 indicate Department security had 800 employees, plus 1,200 Marines and 115 Seabees. That Seabee number is roughly the same today.
see Notes
Combat Service Support Detachments (CSSD) have several hundred Seabees assigned in support of [[Naval Special Warfare]] (NSW) units based out of Coronado, CA, and Virginia Beach, VA. Seabees provide field support for power generation and distribution, logistical movement, vehicle repair, construction and maintenance of encampments, water purification, and facilities. Seabees assigned to support NSW receive extra training in first aid, small arms, driving, and specialized equipment, and are expected to qualify as Expeditionary Warfare Specialists. Seabees assigned to NSW are eligible to receive the following Naval Enlisted Classifications upon filling the requirements: 5306 – Naval Special Warfare (Combat Service Support) or 5307 – Naval Special Warfare (Combat Support). They can also apply for selection to support the NSW Development Group.
[[File:USMC barracks inspection.jpg|thumb|USMC barracks inspection during NMCB 74's [[military training]] at Camp Lejeune in March 1968]]
Trainees begin "A" School (trade school) upon completion of [[Recruit Training Command, Great Lakes, Illinois|boot]]: 4 weeks classroom, 8 weeks hands-on. From "A" School, trainees most often report to a NMCB or ACB. There recruits go through four-weeks of Expeditionary Combat Skills (ECS) which is also required for those who report to a [[Navy Expeditionary Combat Command]]. ECS is basic training in: map reading, combat first aid, recon, and other combat-related skills. Half of each course is spent on basic marksmanship to qualify with a [[M16 rifle#M16A3|M16]] rifle and the [[Beretta M9|M9 service pistol]]. Those posted to Alfa Company of a NMCB may be assigned to a crew-served weapon: [[Mk 19 grenade launcher|MK 19]] 40mm grenade launcher, the [[M2 Browning|.50-caliber machine gun]], or the [[M240 machine gun]]. Many reserve units still field the [[M60 machine gun]]. Seabees were last U.S. military to wear the [[U.S. Woodland]] camouflage uniform or the [[Desert Camouflage Uniform]]. They now have the Navy Working Uniform NWU Type III and use [[All-purpose Lightweight Individual Carrying Equipment|ALICE]] field gear. Some units, with the Marines, will use USMC-issue [[Improved Load Bearing Equipment]] (ILBE).
Current rates: The current ratings were adopted by the Navy in 1948.
The Seabee "constructionman" ranks of E-1 through E-3 are designated by sky-blue stripes on uniforms. The color was adopted in 1899 as a uniform trim color designating the [[Civil Engineer Corps]], but was later given up. Its continued use is a bit of Naval Heritage in the NCF.
At [[Uniformed_services_pay_grades_of_the_United_States#Enlisted_pay_grades|paygrade]] E-8, the Builder, Steelworker, and Engineering Aide rates combine into a single rate: Senior Chief Constructionman (CUCS). At the E-9 paygrade they are referred to as a Master Chief Constructionman (CUCM).
The remaining Seabee rates combine only at the E-9 paygrade:
[[Navy diver (United States Navy)|Diver]]: a qualification that the various rates can obtain, with three grades: Basic Underwater Construction Technician/NEC 5932 (2nd Class Diver), Advanced Underwater Construction Technician/NEC 5931 (1st Class Diver), and Master Underwater Construction Technician/NEC 5933 (Master Diver). Seabee divers are attached to five principal commands outside the NCF:
On 1 March 1942 RADM Moreell recommended that an insignia be created to promote esprit de corps in the new CBs and to identify their equipment, as the Air Corps did for its squadrons. It was not intended for uniforms. Frank J. Iafrate, a civilian file clerk at [[Quonset Point]] Advance Naval Base, Davisville, Rhode Island, created the original "Disney style" Seabee. In early 1942 his design was sent to RADM Moreell, who made a single request: that the letter Q (for Quonset Point) enclosing the Seabee be changed to a hawser rope, after which it would be officially adopted.
The Seabees had a second logo: a shirtless constructionman holding a sledgehammer, with a rifle strapped across his back, standing upon the words "Construimus Batuimus USN". The figure was on a shield with a blue field across the top and vertical red and white stripes. A small CEC logo sits to the left of the figure and a small anchor to the right. This logo was incorporated into many CB unit insignias.
During World War II, artists working for [[The Walt Disney Company|Disney's Insignia Department]] designed logos for about ten Seabee units, including the 60th, 78th, 112th, and 133rd NCBs. There are two Disney-published Seabee logos that are not identified with any unit. Disney did not create the original Seabee insignia.
The end of WWII brought the decommissioning of nearly all of the CBs. They had been in existence less than four years when this happened, and the Navy had not created a historical branch or central archive for the NCF, so there was no central repository for Seabee history. As time passed, first with Korea and then Vietnam, Construction Battalions were reactivated with no idea what their WWII insignia had been, so they made new ones.
[[File:Navy Seabee Combat Warfare Specialist Insignia.png|350px|right|thumb|SCW insignia: officer and enlisted]]
The military qualification badge for the Seabees is known as the Seabee combat warfare specialist insignia (SCW). It was created in 1993 for both officers and enlisted personnel. Only members attached to a qualifying NCF unit are eligible for the SCW pin. The qualifying units include: NMCBs, ACBs, NCF Support Units (NCFSU), UCTs, and NCRs.
[[File:Fleet Marine Force insignia authorized for US Naval personnel.jpg|350px|right|thumb|Fleet Marine Force insignia authorized for US Naval personnel: Officer, Enlisted, and Chaplain]]
The [[Fleet Marine Force Insignia]], or Fleet Marine Force pin (FMF pin), is issued to USN officers and enlisted personnel trained and qualified to support the U.S. Marine Corps. Seabees assigned to the Fleet Marine Force can earn the FMF pin. The pin comes in three classes: enlisted, officer, and chaplain. For requirements, see the Fleet Marine Force Warfare Specialist (EFMFWS) Program per [[OPNAV Instruction]] 1414.4B.
The Peltier Award is given to the "best of type" active-duty Naval Construction Battalion. It was instituted by Rear Admiral Eugene J. Peltier, CEC, a former head of the Bureau of Yards and Docks (1959–1962), and has been given annually since 1960.
[[File:US_Navy_050728-N-8268B-022_A_Logistical_Amphibious_Recovery_Craft_(LARC)_amphibious_vehicle_assigned_to_Beachmaster_Unit_One_(BMU-1)_launches_from_the_Military_Sealift_Command_(MSC)_sea_barge_heavy_lift_ship_SS_Cape_Mohican_(T.jpg|thumb|A Logistical Amphibious Recovery Craft (LARC) amphibious vehicle assigned to [[Beachmaster Unit One]] (BMU-1) launches from the Military Sealift Command (MSC) sea barge heavy lift ship SS Cape Mohican (T-AKR-5065)]]
see: [[Seabee (barge)]]
There were six "Seabee" ships built: the SS "Cape Mendocino" (T-AKR-5064), the and the . The other three of were operated by [[Lykes Brothers Steamship Company]] and were originally the SS Doctor Lykes, the SS Tillie Lykes, and the SS Almeria Lykes. The NCF primarily uses the Seabee barges . Barges with a 2.5' [[Draft (hull)|draft]] are loaded and floated to and from a mother container ship, facilitating loading and unloading of containerized cargo at sea. These ships have an elevator system for loading the barges out of the water at the stern onto the vessel. Loaded barges can then be moved toward the vessel's bow by means of a track to be stowed on one of three decks. Seabee barge carriers can store 38 barges, 12 each on the lower decks and 14 on the upper deck. The 38 barges can hold 160 containers. A barges measures 97'x35'. A barge carrier also has storage tanks of nearly 36000 m³(9,510,194 gal.) volume built in its sides and double hull, allowing it to be used also as a tanker. The ships were purchased by [[Military Sealift Command]].
[[File:Fighting_seabee_statue.jpg|right|thumb|The Fighting Seabee Statue at [[Quonset Point]], where the [[Seabee Museum and Memorial Park]] commemorates Camp Endicott which is on the [[National Register of Historic Places]] (U.S. Navy)]]
The U.S. Navy Seabee Museum is located outside the main gate of Naval Base Ventura County, Port Hueneme, California. In July 2011 the new facility opened with galleries, a grand hall, a theater, storage, and research areas.
The Seabee Heritage Center is the Atlantic Coast Annex of the Seabee Museum in Port Hueneme. It opened in 1995. Exhibits at the Gulfport Annex are provided by the Seabee Museum in Port Hueneme.
The [[Seabee Museum and Memorial Park]] in [[Davisville, Rhode Island]] was opened in the late 1990s. A Fighting Seabee Statue is located there.
Other U.S. military construction/engineering organizations:
WWII
Marine Corps, Seabees outside the NCF
NCDUs, Seabees outside the NCF
UDTs, Seabees outside the NCF
Seabee North Slope Oil Exploration 1944
Cold War: Korea – Seabee Teams
Cold War: Antarctica
Cold War: Vietnam
U.S. Navy BMR study guide 1963 U.S. Naval Communications Listening Station [[Nea Makri]], Greece.
Cold War: Tektite
Cold War: CIA
Iraq Afghanistan
Seabee insignia
Naval Support Unit
SEABEE Barge Carriers
[[Category:United States Navy]]
[[Category:Seabees]]
[[Category:Seabee units and formations]]
[[Category:Military engineering of the United States]]
[[Category:United States Navy ratings]]
[[Category:Military units and formations established in 1942]] | https://en.wikipedia.org/wiki?curid=29484 |
Skyscraper
A skyscraper is a continuously habitable high-rise building that has over 40 floors and is taller than roughly 150 m (490 ft). Historically, the term first referred to buildings with 10 to 20 floors in the 1880s; the definition shifted with advancing construction technology during the 20th century. Skyscrapers may host offices, hotels, residential spaces, and retail spaces. Buildings above a height of 300 m (984 ft) are termed supertall skyscrapers, while skyscrapers reaching beyond 600 m (1,969 ft) are classified as megatall skyscrapers.
One common feature of skyscrapers is a steel framework that supports curtain walls. These curtain walls either bear on the framework below or are suspended from the framework above, rather than resting on load-bearing walls of conventional construction. Some early skyscrapers have a steel frame that enables construction taller than is practical with load-bearing walls of reinforced concrete.
Modern skyscrapers' walls are not load-bearing, and most skyscrapers are characterised by large surface areas of windows made possible by steel frames and curtain walls. However, skyscrapers can have curtain walls that mimic conventional walls with a small surface area of windows. Modern skyscrapers often have a tubular structure, and are designed to act like a hollow cylinder to resist wind, seismic, and other lateral loads. To appear more slender, allow less wind exposure and transmit more daylight to the ground, many skyscrapers have a design with setbacks, which in some cases is also structurally required.
Only nine cities currently have more than 100 skyscrapers that are 150 m (490 ft) or taller: Hong Kong (355), New York City (284), Shenzhen (235), Dubai (199), Shanghai (163), Tokyo (155), Chongqing (127), Chicago (126), and Guangzhou (115).
The term "skyscraper" was first applied to buildings of steel framed construction of at least 10 storeys in the late 19th century, a result of public amazement at the tall buildings being built in major American cities like Chicago, New York City, Philadelphia, Detroit, and St. Louis. The first steel-frame skyscraper was the Home Insurance Building (originally 10 storeys with a height of ) in Chicago, Illinois in 1885. Some point to Philadelphia's 10-storey Jayne Building (1849–50) as a proto-skyscraper, or to New York's seven-floor Equitable Life Building (New York City), built in 1870, for its innovative use of a kind of skeletal frame, but such designation depends largely on what factors are chosen. Even the scholars making the argument find it to be purely academic.
The structural definition of the word "skyscraper" was refined later by architectural historians, based on engineering developments of the 1880s that had enabled construction of tall multi-storey buildings. This definition was based on the steel skeleton—as opposed to constructions of load-bearing masonry, which passed their practical limit in 1891 with Chicago's Monadnock Building.
The Council on Tall Buildings and Urban Habitat defines skyscrapers as buildings that reach or exceed 150 m (490 ft) in height. Others in the United States and Europe also draw the lower limit of a skyscraper at 150 m (490 ft).
The Emporis Standards Committee defines a high-rise building as "a multi-storey structure between 35–100 metres tall, or a building of unknown height from 12–39 floors", and a skyscraper as "a multi-storey building whose architectural height is at least 100 metres". Some structural engineers define a high-rise as any vertical construction for which wind is a more significant load factor than earthquake or weight. Note that this criterion fits not only high-rises but some other tall structures, such as towers.
The word "skyscraper" often carries a connotation of pride and achievement. The skyscraper, in name and social function, is a modern expression of the age-old symbol of the world center or "axis mundi": a pillar that connects earth to heaven and the four compass directions to one another.
The tallest building in ancient times was the Great Pyramid of Giza in ancient Egypt, built in the 26th century BC. It was not surpassed in height for thousands of years, until Lincoln Cathedral exceeded it from 1311 to 1549, when the cathedral's central spire collapsed. The latter in turn was not surpassed until the Washington Monument in 1884. However, being uninhabited, none of these structures complies with the modern definition of a skyscraper.
High-rise apartments flourished in classical antiquity. Ancient Roman insulae in imperial cities reached 10 or more storeys. Beginning with Augustus (r. 30 BC–14 AD), several emperors attempted to establish limits of 20–25 m for multi-storey buildings, but met with only limited success. Lower floors were typically occupied by shops or wealthy families, while the upper floors were rented to the lower classes. Surviving Oxyrhynchus Papyri indicate that seven-storey buildings existed in provincial towns, such as 3rd-century AD Hermopolis in Roman Egypt.
The skylines of many important medieval cities had large numbers of high-rise urban towers, built by the wealthy for defense and status. The residential towers of 12th-century Bologna numbered between 80 and 100 at a time, the tallest of which is the 97 m (318 ft) Asinelli Tower. A Florentine law of 1251 decreed that all urban buildings be immediately reduced to less than 26 m. Even medium-sized towns of the era are known to have had proliferations of towers, such as the 72 towers, up to 51 m in height, in San Gimignano.
The medieval Egyptian city of Fustat housed many high-rise residential buildings, which Al-Muqaddasi in the 10th century described as resembling minarets. Nasir Khusraw in the early 11th century described some of them rising up to 14 storeys, with roof gardens on the top floor complete with ox-drawn water wheels for irrigating them. Cairo in the 16th century had high-rise apartment buildings where the two lower floors were for commercial and storage purposes and the multiple storeys above them were rented out to tenants. An early example of a city consisting entirely of high-rise housing is the 16th-century city of Shibam in Yemen. Shibam was made up of over 500 tower houses, each one rising 5 to 11 storeys high, with each floor being an apartment occupied by a single family. The city was built in this way in order to protect it from Bedouin attacks. Shibam still has the tallest mudbrick buildings in the world, with many of them over 30 m (100 ft) high.
An early modern example of high-rise housing was in 17th-century Edinburgh, Scotland, where a defensive city wall defined the boundaries of the city. Due to the restricted land area available for development, the houses increased in height instead. Buildings of 11 storeys were common, and there are records of buildings as high as 14 storeys. Many of the stone-built structures can still be seen today in the old town of Edinburgh. The oldest iron framed building in the world, although only partially iron framed, is The Flaxmill (also locally known as the "Maltings"), in Shrewsbury, England. Built in 1797, it is seen as the "grandfather of skyscrapers", since its fireproof combination of cast iron columns and cast iron beams developed into the modern steel frame that made modern skyscrapers possible. In 2013 funding was confirmed to convert the derelict building into offices.
In 1857, Elisha Otis installed the first safety elevator, allowing convenient and safe passenger movement to upper floors, at the E.V. Haughwout Building in New York City. Otis later introduced the first commercial passenger elevators to the Equitable Life Building in 1870, considered by a portion of New Yorkers to be the first skyscraper. Another crucial development was the use of a steel frame instead of stone or brick, without which the walls on the lower floors of a tall building would be too thick to be practical. An early development in this area was Oriel Chambers in Liverpool, England, though it was only five floors high. Further developments led to what many individuals and organizations consider the world's first skyscraper, the ten-storey Home Insurance Building in Chicago, built in 1884–1885. While its original height of 42.1 m (138 ft) is not considered very impressive today, it was at that time. The building of tall buildings in the 1880s gave the skyscraper its first architectural movement, the Chicago School, which developed what has been called the Commercial Style.
The architect, Major William Le Baron Jenney, created a load-bearing structural frame. In this building, a steel frame supported the entire weight of the walls, instead of load-bearing walls carrying the weight of the building. This development led to the "Chicago skeleton" form of construction. In addition to the steel frame, the Home Insurance Building also utilized fireproofing, elevators, and electrical wiring, key elements in most skyscrapers today.
Burnham and Root's Rand McNally Building in Chicago, 1889, was the first all-steel framed skyscraper, while Louis Sullivan's Wainwright Building in St. Louis, Missouri, 1891, was the first steel-framed building with soaring vertical bands to emphasize the height of the building and is therefore considered to be the first early skyscraper.
In 1889, the Mole Antonelliana in Italy was 167 m (549 ft) tall.
Most early skyscrapers emerged in the land-strapped areas of Chicago and New York City toward the end of the 19th century. A land boom in Melbourne, Australia, between 1888 and 1891 spurred the creation of a significant number of early skyscrapers, though none of these were steel reinforced and few remain today. Height limits and fire restrictions were later introduced. London builders soon found building heights limited due to a complaint from Queen Victoria, and rules limiting height continued to exist with few exceptions.
Concerns about aesthetics and fire safety likewise hampered the development of skyscrapers across continental Europe for the first half of the twentieth century. Some notable exceptions are the 1898 Witte Huis "(White House)" in Rotterdam; the Royal Liver Building in Liverpool, completed in 1911; the 1924 Marx House in Düsseldorf, Germany; the Kungstornen "(Kings' Towers)" in Stockholm, Sweden, built 1924–25; the Edificio Telefónica in Madrid, Spain, built in 1929; the Boerentoren in Antwerp, Belgium, built in 1932; the Prudential Building in Warsaw, Poland, built in 1934; and the Torre Piacentini in Genoa, Italy, built in 1940.
After an early competition between Chicago and New York City for the world's tallest building, New York took the lead by 1895 with the completion of the American Surety Building, leaving New York with the title of the world's tallest building for many years.
Modern skyscrapers are built with steel or reinforced concrete frameworks and curtain walls of glass or polished stone. They use mechanical equipment such as water pumps and elevators. Since the 1960s, according to the CTBUH, the skyscraper has been reoriented away from being a symbol of North American corporate power toward communicating a city's or nation's place in the world.
Skyscraper construction entered a three-decades-long era of stagnation in 1930 due to the Great Depression and then World War II. Shortly after the war ended, the Soviet Union began construction on a series of skyscrapers in Moscow. Seven, dubbed the "Seven Sisters", were built between 1947 and 1953, and one, the main building of Moscow State University, was the tallest building in Europe for nearly four decades (1953–1990). Other skyscrapers in the style of Socialist Classicism were erected in East Germany (Frankfurter Tor), Poland (PKiN), Ukraine (Hotel Ukrayina), Latvia (Academy of Sciences) and other Eastern Bloc countries. Western European countries also began to permit taller skyscrapers during the years immediately following World War II. Early examples include the Edificio España (Spain) and the Torre Breda (Italy).
From the 1930s onward, skyscrapers began to appear in various cities in East and Southeast Asia as well as in Latin America. Finally, they also began to be constructed in cities of Africa, the Middle East, South Asia and Oceania (mainly Australia) from the late 1950s on.
Skyscraper projects after World War II typically rejected the classical designs of the early skyscrapers, instead embracing the uniform international style; many older skyscrapers were redesigned to suit contemporary tastes or even demolished—such as New York's Singer Building, once the world's tallest skyscraper.
German architect Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century. He conceived of the glass façade skyscraper and, along with Norwegian Fred Severud, designed the Seagram Building in 1958, a skyscraper that is often regarded as the pinnacle of modernist high-rise architecture.
Skyscraper construction surged throughout the 1960s. The impetus behind the upswing was a series of transformative innovations which made it possible for people to live and work in "cities in the sky".
In the early 1960s structural engineer Fazlur Rahman Khan, considered the "father of tubular designs" for high-rises, discovered that the dominant rigid steel frame structure was not the only system apt for tall buildings, marking a new era of skyscraper construction in terms of multiple structural systems. His central innovation in skyscraper design and construction was the concept of the "tube" structural system, including the "framed tube", "trussed tube", and "bundled tube". His "tube concept", using the entire exterior wall perimeter structure of a building to simulate a thin-walled tube, revolutionized tall building design. These systems allow greater economic efficiency, and also allow skyscrapers to take on various shapes, no longer needing to be rectangular and box-shaped. The first building to employ the tube structure was the DeWitt-Chestnut apartment building; it is considered to be a major development in modern architecture. These new designs opened an economic door for contractors, engineers, architects, and investors, providing vast amounts of real estate space on minimal plots of land. Over the next fifteen years, many towers were built by Fazlur Rahman Khan and the "Second Chicago School", including the hundred-storey John Hancock Center and the massive Willis Tower. Other pioneers of this field include Hal Iyengar, William LeMessurier, and Minoru Yamasaki, the architect of the World Trade Center.
Many buildings designed in the 1970s lacked a particular style and recalled ornamentation from buildings designed before the 1950s. These design plans ignored the environment and loaded structures with decorative elements and extravagant finishes. Fazlur Khan opposed this approach, considering the designs whimsical rather than rational and a waste of precious natural resources. Khan's work promoted structures integrated with architecture, using the least material and so producing the least carbon-emission impact on the environment. The next era of skyscrapers will focus on the environment, including the performance of structures, types of materials, construction practices, absolute minimal use of materials and natural resources, embodied energy within the structures, and, more importantly, a holistically integrated building-systems approach.
Modern building practices regarding supertall structures have led to the study of "vanity height". Vanity height, according to the CTBUH, is the distance between the highest occupiable floor and the building's architectural top (excluding antennae, flagpoles, or other functional extensions). Vanity height first appeared in New York City skyscrapers as early as the 1920s and 1930s, but supertall buildings have relied on such uninhabitable extensions for, on average, 30% of their height, raising potential definitional and sustainability issues. The current era of skyscrapers focuses on sustainability and the built and natural environments, including the performance of structures, types of materials, construction practices, absolute minimal use of materials and natural resources, energy within the structure, and a holistically integrated building-systems approach. LEED is a current green building standard.
Architecturally, with the movements of Postmodernism, New Urbanism, and New Classical Architecture established since the 1980s, a more classical approach returned to global skyscraper design and remains popular today. Examples are the Wells Fargo Center, NBC Tower, Parkview Square, 30 Park Place, the Messeturm, the iconic Petronas Towers, and the Jin Mao Tower.
Other contemporary styles and movements in skyscraper design include organic, sustainable, neo-futurist, structuralist, high-tech, deconstructivist, blob, digital, streamline, novelty, critical regionalist, vernacular, Neo Art Deco and neo-historist, also known as revivalist.
3 September is the global commemorative day for skyscrapers, called "Skyscraper Day".
New York City developers competed among themselves, with successively taller buildings claiming the title of "world's tallest" in the 1920s and early 1930s, culminating with the completion of the Chrysler Building in 1930 and the Empire State Building in 1931, the world's tallest building for forty years. The first completed World Trade Center tower became the world's tallest building in 1972, but it was overtaken by the Sears Tower (now Willis Tower) in Chicago within two years. The Sears Tower stood as the world's tallest building for 24 years, from 1974 until 1998, when it was edged out by the Petronas Twin Towers in Kuala Lumpur, which held the title for six years.
The design and construction of skyscrapers involves creating safe, habitable spaces in very tall buildings. The buildings must support their weight, resist wind and earthquakes, and protect occupants from fire. Yet they must also be conveniently accessible, even on the upper floors, and provide utilities and a comfortable climate for the occupants. The problems posed in skyscraper design are considered among the most complex encountered given the balances required between economics, engineering, and construction management.
One common feature of skyscrapers is a steel framework from which curtain walls are suspended, rather than load-bearing walls of conventional construction. Most skyscrapers have a steel frame that enables them to be built taller than would be practical with load-bearing walls of reinforced concrete. Skyscrapers usually have a particularly small surface area of what are conventionally thought of as walls; because the walls are not load-bearing, most skyscrapers are characterized by large surface areas of windows, made possible by the concepts of the steel frame and curtain wall. However, skyscrapers can also have curtain walls that mimic conventional walls with a small surface area of windows.
The concept of a skyscraper is a product of the industrialized age, made possible by cheap fossil-fuel-derived energy and industrially refined raw materials such as steel and concrete. The construction of skyscrapers was enabled by steel frame construction, which began to overtake brick and mortar construction at the end of the 19th century and finally surpassed it in the 20th century, together with reinforced concrete construction, as the price of steel decreased and labour costs increased.
The steel frames become inefficient and uneconomic for supertall buildings as usable floor space is reduced for progressively larger supporting columns. Since about 1960, tubular designs have been used for high rises. This reduces the usage of material (more efficient in economic terms – Willis Tower uses a third less steel than the Empire State Building) yet allows greater height. It allows fewer interior columns, and so creates more usable floor space. It further enables buildings to take on various shapes.
Elevators are characteristic of skyscrapers. In 1852 Elisha Otis invented the safety elevator, allowing convenient and safe passenger movement to upper floors. Another crucial development was the use of a steel frame instead of stone or brick, without which the walls on the lower floors of a tall building would be too thick to be practical. Today major manufacturers of elevators include Otis, ThyssenKrupp, Schindler, and KONE.
Advances in construction techniques have allowed skyscrapers to narrow in width, while increasing in height. Some of these new techniques include mass dampers to reduce vibrations and swaying, and gaps to allow air to pass through, reducing wind shear.
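The tuning idea behind a mass damper can be sketched with a simplified undamped model (a textbook idealization, not a design rule for any particular building; real dampers are detuned slightly and add damping). The damper's mass m and spring stiffness k are chosen so that its natural frequency matches the building's fundamental sway frequency:

f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}

Matching f to the sway frequency lets the damper oscillate out of phase with the building and absorb energy from the motion.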
Good structural design is important in most building design, but particularly for skyscrapers since even a small chance of catastrophic failure is unacceptable given the high price. This presents a paradox to civil engineers: the only way to assure a lack of failure is to test for all modes of failure, in both the laboratory and the real world. But the only way to know of all modes of failure is to learn from previous failures. Thus, no engineer can be absolutely sure that a given structure will resist all loadings that could cause failure, but can only have large enough margins of safety such that a failure is acceptably unlikely. When buildings do fail, engineers question whether the failure was due to some lack of foresight or due to some unknowable factor.
The load a skyscraper experiences is largely from the force of the building material itself. In most building designs, the weight of the structure is much larger than the weight of the material that it will support beyond its own weight. In technical terms, the dead load, the load of the structure, is larger than the live load, the weight of things in the structure (people, furniture, vehicles, etc.). As such, the amount of structural material required within the lower levels of a skyscraper will be much larger than the material required within higher levels. This is not always visually apparent. The Empire State Building's setbacks are actually a result of the building code at the time (1916 Zoning Resolution), and were not structurally required. On the other hand, John Hancock Center's shape is uniquely the result of how it supports loads. Vertical supports can come in several types, among which the most common for skyscrapers can be categorized as steel frames, concrete cores, tube within tube design, and shear walls.
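As a hedged illustration of why lower storeys need more structure (the symbols below are generic, not taken from any building code), the axial load N_k carried at storey k accumulates the dead loads D_i and live loads L_i of all storeys above it:

N_k = \sum_{i=k}^{n} \left( D_i + L_i \right), \qquad N_1 \approx n\,D \quad \text{when floors are uniform and } D \gg L,

so for an n-storey tower the ground-level structure carries roughly n times the per-floor dead load, which is why column cross-sections grow toward the base.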
The wind loading on a skyscraper is also considerable. In fact, the lateral wind load imposed on supertall structures is generally the governing factor in the structural design. Wind pressure increases with height, so for very tall buildings, the loads associated with wind are larger than dead or live loads.
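A common first approximation of this effect (one design-code idealization among several; the exponent \alpha depends on terrain, with \alpha \approx 1/7 often quoted for open country) combines a power-law wind profile with the dynamic-pressure relation:

v(z) = v_{\text{ref}} \left( \frac{z}{z_{\text{ref}}} \right)^{\alpha}, \qquad q(z) = \tfrac{1}{2} \rho \, v(z)^2,

so the wind pressure q grows roughly as z^{2\alpha} with height z, which is why lateral wind load tends to govern the design of the tallest buildings.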
Other vertical and horizontal loading factors come from varied, unpredictable sources, such as earthquakes.
By 1895, steel had replaced cast iron as skyscrapers' structural material. Its malleability allowed it to be formed into a variety of shapes, and it could be riveted, ensuring strong connections. The simplicity of a steel frame eliminated the inefficient central portion of a shear wall and consolidated support members in a much stronger fashion by allowing both horizontal and vertical supports throughout. Among steel's drawbacks is that as more material must be supported as height increases, the distance between supporting members must decrease, which in turn increases the amount of material that must be supported. This becomes inefficient and uneconomic for buildings above 40 storeys, as usable floor space is reduced by supporting columns and by the greater usage of steel.
A new structural system of framed tubes was developed by Fazlur Rahman Khan in 1963. The framed tube structure is defined as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation". Closely spaced interconnected exterior columns form the tube. Horizontal loads (primarily wind) are supported by the structure as a whole. Framed tubes allow fewer interior columns, and so create more usable floor space, and about half the exterior surface is available for windows. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. Tube structures cut down costs, at the same time allowing buildings to reach greater heights. Concrete tube-frame construction was first used in the DeWitt-Chestnut Apartment Building, completed in Chicago in 1963, and soon after in the John Hancock Center and World Trade Center.
The tubular systems are fundamental to tall building design. Most buildings over 40 storeys constructed since the 1960s use a tube design derived from Khan's structural engineering principles, examples including the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, and most other supertall skyscrapers since the 1960s. The strong influence of tube structure design is also evident in the current tallest skyscraper, the Burj Khalifa.
Khan pioneered several other variations of the tube structure design. One of these was the concept of X-bracing, or the trussed tube, first employed for the John Hancock Center. This concept reduced the lateral load on the building by transferring the load into the exterior columns. This allows for a reduced need for interior columns thus creating more floor space. This concept can be seen in the John Hancock Center, designed in 1965 and completed in 1969. One of the most famous buildings of the structural expressionist style, the skyscraper's distinctive X-bracing exterior is actually a hint that the structure's skin is indeed part of its 'tubular system'. This idea is one of the architectural techniques the building used to climb to record heights (the tubular system is essentially the spine that helps the building stand upright during wind and earthquake loads). This X-bracing allows for both higher performance from tall structures and the ability to open up the inside floorplan (and usable floor space) if the architect desires.
The John Hancock Center was far more efficient than earlier steel-frame structures. Where the Empire State Building (1931) required about 206 kilograms of steel per square metre and 28 Liberty Street (1961) required 275, the John Hancock Center required only 145. The trussed tube concept was applied to many later skyscrapers, including the Onterie Center, Citigroup Center and Bank of China Tower.
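Taking those figures at face value, the relative savings work out as

1 - \frac{145}{206} \approx 0.30, \qquad 1 - \frac{145}{275} \approx 0.47,

that is, roughly 30% less steel per square metre than the Empire State Building and nearly half that of 28 Liberty Street.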
An important variation on the tube frame is the bundled tube, which uses several interconnected tube frames. The Willis Tower in Chicago used this design, employing nine tubes of varying height to achieve its distinct appearance. The bundled tube structure meant that "buildings no longer need be boxlike in appearance: they could become sculpture."
The invention of the elevator was a precondition for the invention of skyscrapers, given that most people would not (or could not) climb more than a few flights of stairs at a time. The elevators in a skyscraper are not simply a necessary utility, like running water and electricity, but are in fact closely related to the design of the whole structure: a taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core, which contains the elevator shafts, becomes too big, it can reduce the profitability of the building. Architects must therefore balance the value gained by adding height against the value lost to the expanding service core.
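That balance can be made concrete with a minimal sketch, assuming invented numbers: the floor-plate size, shaft area, and service ratio below are illustrative assumptions, not drawn from any real building. Net rentable area is gross plate area minus a core that grows as more shafts are needed.

# Toy model of the elevator/core trade-off described above.
# All constants are illustrative assumptions, not real design values.
FLOOR_PLATE = 2000.0       # gross area per floor, m^2 (assumed)
SHAFT_AREA = 30.0          # plan area lost to one elevator shaft, m^2 (assumed)
FLOORS_PER_ELEVATOR = 12   # service ratio: one shaft per 12 floors (assumed)

def net_rentable_area(floors: int) -> float:
    """Every floor loses the full core footprint, and the core
    grows as more shafts are needed to serve more floors."""
    shafts = -(-floors // FLOORS_PER_ELEVATOR)   # ceiling division
    core = shafts * SHAFT_AREA
    return floors * (FLOOR_PLATE - core)

for floors in (10, 40, 70, 100):
    area = net_rentable_area(floors)
    print(f"{floors:3d} floors -> {area:9.0f} m^2 rentable "
          f"({area / (floors * FLOOR_PLATE):.1%} of gross)")

Run as written, the rentable share of gross area falls from 98.5% at 10 floors to 86.5% at 100, which is the diminishing return the paragraph describes.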
Many tall buildings use elevators in a non-standard configuration to reduce their footprint. Buildings such as the former World Trade Center Towers and Chicago's John Hancock Center use sky lobbies, where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space, however, and add to the amount of time spent commuting between floors.
Other buildings, such as the Petronas Towers, use double-deck elevators, allowing more people to fit in a single elevator, and reaching two floors at every stop. It is possible to use even more than two levels on an elevator, although this has never been done. The main problem with double-deck elevators is that they cause everyone in the elevator to stop when only people on one level need to get off at a given floor.
Buildings with sky lobbies include the World Trade Center, Petronas Twin Towers, Willis Tower and Taipei 101. The 44th-floor sky lobby of the John Hancock Center also featured the first high-rise indoor swimming pool, which remains the highest in America.
Skyscrapers are usually situated in city centers where the price of land is high. Constructing a skyscraper becomes justified when the price of land is so high that it makes economic sense to build upward, minimizing the cost of the land per unit of total floor area. The construction of skyscrapers is thus dictated by economics, which concentrates them in certain parts of a large city unless a building code restricts the height of buildings.
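A hedged worked example (the land price and floor-plate area are invented for illustration): if a plot costs P_land = $50 million and each floor plate has area A_floor = 2,000 m², the land cost attributed to each square metre of floor space is

\frac{P_{\text{land}}}{n \cdot A_{\text{floor}}} = \frac{50{,}000{,}000}{n \times 2000},

which is $5,000/m² for a 5-storey building (n = 5) but only $500/m² at 50 storeys, a tenfold reduction that can outweigh the higher per-floor cost of building tall.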
Because high land prices are critical to the economics of skyscrapers, they are characteristic of large cities and rarely seen in small ones. Usually only office, commercial, and hotel users can afford the rents in the city center, and thus most tenants of skyscrapers fall into these classes.
Today, skyscrapers are an increasingly common sight where land is expensive, as in the centers of big cities, because they provide such a high ratio of rentable floor space per unit area of land.
One problem with skyscrapers is car parking. In the largest cities most people commute via public transport, but in smaller cities many parking spaces are needed. Multi-storey car parks are impractical to build very tall, so parking demands a large land area.
There may be a correlation between skyscraper construction and great income inequality but this has not been conclusively proven.
The amount of steel, concrete, and glass needed to construct a single skyscraper is large, and these materials represent a great deal of embodied energy. Skyscrapers are thus energy-intensive buildings, but they also have long lifespans: the Empire State Building in New York City, completed in 1931, is still in active use.
Skyscrapers have considerable mass, which means that they must be built on a sturdier foundation than would be required for shorter, lighter buildings. Building materials must also be lifted to the top of a skyscraper during construction, requiring more energy than would be necessary at lower heights. Furthermore, a skyscraper consumes much electricity: potable and non-potable water must be pumped to the highest occupied floors, skyscrapers are usually designed to be mechanically ventilated, elevators are generally used instead of stairs, and natural lighting cannot reach rooms far from the windows or windowless spaces such as elevators, bathrooms, and stairwells.
Skyscrapers can be artificially lit, and their energy requirements can be covered by renewable energy or other low-emission electricity generation. Heating and cooling of skyscrapers can be efficient because of centralized HVAC systems, heat-radiation-blocking windows, and the building's small surface area relative to its volume. Leadership in Energy and Environmental Design (LEED) certification is available for skyscrapers: the Empire State Building, for example, received a gold LEED rating in September 2011 and is the tallest LEED-certified building in the United States, demonstrating that skyscrapers can be environmentally friendly. 30 St Mary Axe in London is another environmentally friendly skyscraper.
In the lower levels of a skyscraper, a larger percentage of the building's cross section must be devoted to structure and services than is required in lower buildings.
In low-rise structures, the support rooms (chillers, transformers, boilers, pumps, and air-handling units) can be put in basements or roof space, areas which have low rental value. There is, however, a limit to how far this plant can be located from the area it serves: the farther away it is, the larger the risers for ducts and pipes from the plant to the floors it serves, and the more floor area these risers take. In practice this means that in high-rise buildings this plant is located on 'plant levels' at intervals up the building.
At the beginning of the 20th century, New York City was a center for the Beaux-Arts architectural movement, attracting the talents of such great architects as Stanford White and Carrere and Hastings. As better construction and engineering technology became available over the course of the century, New York City and Chicago became the focal point of the competition for the tallest building in the world. Each city's striking skyline has been composed of numerous and varied skyscrapers, many of which are icons of 20th-century architecture.
Momentum in setting records passed from the United States to other nations with the opening of the Petronas Twin Towers in Kuala Lumpur, Malaysia, in 1998. The record for the world's tallest building has remained in Asia since the opening of Taipei 101 in Taipei, Taiwan, in 2004. A number of architectural records, including those of the world's tallest building and tallest free-standing structure, moved to the Middle East with the opening of the Burj Khalifa in Dubai, United Arab Emirates.
This geographical transition is accompanied by a change in approach to skyscraper design. For much of the twentieth century large buildings took the form of simple geometrical shapes. This reflected the "international style" or modernist philosophy shaped by Bauhaus architects early in the century. The last of these, the Willis Tower in Chicago and the World Trade Center towers in New York, erected in the 1970s, reflect that philosophy. Tastes shifted in the decade which followed, and new skyscrapers began to exhibit postmodernist influences. This approach to design avails itself of historical elements, often adapted and re-interpreted, in creating technologically modern structures. The Petronas Twin Towers recall Asian pagoda architecture and Islamic geometric principles. Taipei 101 likewise reflects the pagoda tradition as it incorporates ancient motifs such as the ruyi symbol. The Burj Khalifa draws inspiration from traditional Islamic art. Architects in recent years have sought to create structures that would not appear equally at home in any part of the world, but instead reflect the culture thriving where they stand.
The following list measures height to the roof. The more common gauge is the "highest architectural detail"; such a ranking would have included the Petronas Towers, built in 1996.
Proposals for buildings over one kilometre tall have been put forward, including the Burj Mubarak Al Kabir in Kuwait and the Azerbaijan Tower in Baku. Kilometre-plus structures present architectural challenges that may eventually place them in a new architectural category. The first building under construction that is planned to exceed one kilometre is the Jeddah Tower.
Several wooden skyscrapers have been designed and built. A 14-storey housing project in Bergen, Norway, known as 'Treet' or 'The Tree', became the world's tallest wooden apartment block when it was completed in late 2015. The Tree's record was eclipsed by Brock Commons, an 18-storey wooden dormitory at the University of British Columbia in Canada, when it was completed in September 2016.
A 40-storey residential building 'Trätoppen' has been proposed by architect Anders Berensson to be built in Stockholm, Sweden. Trätoppen would be the tallest building in Stockholm, though there are no immediate plans to begin construction. The tallest currently-planned wooden skyscraper is the 70-storey W350 Project in Tokyo, to be built by the Japanese wood products company Sumitomo Forestry Co. to celebrate its 350th anniversary in 2041. An 80-storey wooden skyscraper, the River Beech Tower, has been proposed by a team including architects Perkins + Will and the University of Cambridge. The River Beech Tower, on the banks of the Chicago River in Chicago, Illinois, would be 348 feet shorter than the W350 Project despite having 10 more storeys.
Wooden skyscrapers are estimated to be around a quarter of the weight of an equivalent reinforced-concrete structure and to reduce a building's carbon footprint by 60–75%. Buildings have been designed using cross-laminated timber (CLT), which gives higher rigidity and strength to wooden structures. CLT panels are prefabricated and can therefore speed up building time.
Source: https://en.wikipedia.org/wiki?curid=29485
Sagas of Icelanders
The sagas of Icelanders (Icelandic: Íslendingasögur), also known as family sagas, are one genre of Icelandic sagas. They are prose narratives based mostly on historical events that took place in Iceland in the ninth, tenth, and early eleventh centuries, during the so-called Saga Age. They are the best-known specimens of Icelandic literature.
They are focused on history, especially genealogical and family history. They reflect the struggle and conflict that arose within the societies of the early generations of Icelandic settlers.
Eventually many of these Icelandic sagas were recorded, mostly in the thirteenth and fourteenth centuries. The 'authors', or rather recorders, of these sagas are largely unknown. One saga, "Egils saga", is believed by some scholars to have been written by Snorri Sturluson, a descendant of the saga's hero, but this remains uncertain. The standard modern edition of the Icelandic sagas is known as Íslenzk fornrit.
Among the several literary reviews of the sagas is Sigurður Nordal's "Sagalitteraturen", which divides the sagas into five chronological groups distinguished by their state of literary development.
A small number of sagas are thought to have existed but are now lost. One example is the supposed "Gauks saga Trandilssonar".
Source: https://en.wikipedia.org/wiki?curid=29486
Staind
Staind is an American rock band formed in Springfield, Massachusetts, in 1995. The original lineup consisted of lead vocalist and rhythm guitarist Aaron Lewis, lead guitarist Mike Mushok, bassist and backing vocalist Johnny April, and drummer Jon Wysocki. The lineup has been stable apart from the 2011 departure of Wysocki, who was replaced by Sal Giancarelli. Staind has recorded seven studio albums: "Tormented" (1996), "Dysfunction" (1999), "Break the Cycle" (2001), "14 Shades of Grey" (2003), "Chapter V" (2005), "The Illusion of Progress" (2008), and "Staind" (2011). The band's activity became more sporadic after their self-titled release, with Lewis pursuing a solo country music career and Mushok subsequently joining the band Saint Asonia, but they have continued to tour on and off in the years since. In 2016, Lewis reiterated that the band had not broken up and might make another album, but that his focus at the time was on his solo career. The band reunited more permanently in 2019 for several shows, continuing with live appearances in 2020. Many of their singles have reached high positions on US rock and all-format charts, including "It's Been Awhile", "Fade", "Price to Play", "So Far Away", and "Right Here".
In 1993, vocalist Aaron Lewis and guitarist Mike Mushok met at a Christmas party in Springfield, Massachusetts. Mushok introduced drummer Jon Wysocki while Lewis brought in bassist Johnny April to form the band in 1995. Their first public performance was in February 1995, playing a heavy, dark, and introspective style of metal. Extensive touring in the Northeast helped Staind acquire a regional following over the next few years.
The band started covering Korn, Rage Against the Machine, Pearl Jam, Tool, and Alice in Chains, among others, and played at local clubs (most commonly Club Infinity) for a year and a half. Staind self-released their debut album, "Tormented", in November 1996, citing Tool, Faith No More, and Pantera as their influences. In October 1997, Staind acquired a concert slot with Limp Bizkit through Aaron Lewis's cousin Justin Cantor. Just prior to the performance, Limp Bizkit frontman Fred Durst was appalled by Staind's grotesque album cover and unsuccessfully attempted to remove them from the bill; Durst thought that Staind were Theistic Satanists. After being persuaded to let them perform, however, Durst was so impressed that he signed them to Flip Records by February 1998.
On April 13, 1999, Staind released their major label debut "Dysfunction" on Flip Records. The album, which was co-produced by Fred Durst and Terry Date (who also produced acts like Soundgarden, Deftones, and Pantera), received comparisons to alternative metal giants Tool and Korn. In particular, Aaron Lewis was lauded for his vocals, which were likened to those of Pearl Jam's Eddie Vedder.
The album's success came slowly: it reached the No. 1 spot on Billboard's Heatseekers chart almost six months after its debut. In the same week, the record jumped to No. 74 on Billboard's Top 200 Albums chart. The nine-track LP (with one hidden track, "Excess Baggage") produced three singles: "Just Go", "Mudshovel", and "Home". "Mudshovel" and "Home" both received radio play, cracking the Top 20 of Billboard's Modern Rock and Mainstream Rock charts. In promotion of "Dysfunction", Staind went on several tours, including the Family Values Tour with acts like Limp Bizkit and The Crystal Method, as well as opening for Sevendust's headlining tour.
Staind toured with Limp Bizkit for the Family Values Tour during the fall of 1999, where Aaron Lewis performed an early version of "Outside" with Fred Durst at the Mississippi Coast Coliseum. Staind released their third studio album, "Break the Cycle", on May 22, 2001. Propelled by the success of the first single, "It's Been Awhile", the album debuted at No. 1 on Billboard's Top 200 Album charts, selling 716,000 copies in its first week. The record's first-week sales were the second highest of any album that year, behind Creed's "Weathered".
"Break the Cycle" saw the band retaining the nu metal sound from their previous album. Despite this, the album saw the band going further into a post-grunge sound which is evident in the smash hit song "It's Been Awhile", and the song led critics to compare the band to several other post-grunge bands at the time. The record spawned the singles "It's Been Awhile" (which hit the Billboard Top 10), "Fade", "Outside", "For You", and the acoustic ballad "Epiphany". "It's Been Awhile" spent a total of 16 and 14 weeks on top of the modern and mainstream rock charts respectively, making it one of the highest joint numbers of all time. In 2001, "Break the Cycle" sold four million copies worldwide, making it one of the best selling albums that year. "Break the Cycle" would go on to sell seven million copies worldwide, making this Staind's bestselling album.
In early 2003, Staind embarked on a worldwide tour to promote the release of the follow-up to "Break the Cycle", "14 Shades of Grey", which sold two million copies and debuted at number 1 on the Billboard 200. The album saw a departure from their previous nu metal sound as it mostly contained a lighter and more melodic post-grunge sound. "14 Shades of Grey" produced two mainstream hits, "Price to Play" and "So Far Away", which spent 14 weeks on top of the rock chart. In addition, two other singles were released: "How About You" and "Zoe Jane". The band's appearance at the Reading Festival during their 2003 tour had another impromptu acoustic set, this time due to equipment failure. The singles "So Far Away" and "Price to Play" came with two unreleased tracks, "Novocaine" and "Let It Out", which were released for the special edition of the group's subsequent album "Chapter V", which came out in late 2005. In 2003, Staind unsuccessfully sued their logo designer Jon Stainbrook in New York Federal Court for attempting to re-use the logo he had sold to the band. They re-opened the case in mid-2005.
Staind's fifth album, titled "Chapter V", was released on August 9, 2005, and became their third consecutive album to top the "Billboard" 200. The album opened to sales of 185,000 and has since been certified platinum in the U.S. The first single, "Right Here", was the biggest success from the album, garnering much mainstream radio play and peaking at number 1 on the mainstream rock chart. "Falling" was released as the second single, followed by "Everything Changes" and "King of All Excuses". Staind went on the road when the album came out, doing live shows and promoting it for a full year, including participating in the Fall Brawl tour with P.O.D., Taproot, and Flyleaf; they also had a solo tour across Europe and a mini-promotional tour in Australia for the first time. Other live shows included a cover of Pantera's "This Love", a tribute to Dimebag Darrell. Staind appeared on "The Howard Stern Show" on August 10, 2005 to promote "Chapter V". They performed acoustic renditions of the single "Right Here" and Beetlejuice's song "This is Beetle". In early November 2005, Staind released the limited edition 2-CD/DVD set of "Chapter V". On September 6, 2006, they performed an acoustic show in the Hiro Ballroom, New York City, that was recorded for their singles collection. The band played sixteen songs, including three covers: Tool's "Sober", Pink Floyd's "Comfortably Numb", and Alice in Chains's "Nutshell".
The collection "" was released on November 14, 2006. It included all the band's singles, the three covers performed at the New York show, and a remastered version of "Come Again" from Staind's first independent release "Tormented".
On August 19, 2008, Staind released their sixth album, "The Illusion of Progress". Prior to the album's release, the track "This Is It" was available for download on the iTunes Store, as well as for "Rock Band". The album debuted at No. 3 on the US Billboard 200, No. 1 on the Top Modern Rock/Alternative Albums Chart, No. 1 on the Top Digital Albums Chart, and also No. 1 on the Top Internet Albums Chart, with first-week sales of 91,800 units. The first single on the album, "Believe", topped Billboard's Top 10 Modern Rock Tracks on September 5, 2008. The band also supported Nickelback on their 2008 European tour. The second single was "All I Want", and came out on November 24. The single also became Staind's 13th top 20 hit on the rock charts. In Europe, the second single was "The Way I Am", released on January 26, 2009. The final single released from the album, "This Is It", was sent to radio stations across the country on May 4, 2009. The track was also included on a successful compilation released in late June 2009. The same year, Staind embarked on a fall tour with the newly reunited Creed.
In March 2010, Aaron Lewis stated the band would start working on their seventh studio album by the end of the year. Lewis had finished recording his country music solo EP and had started a nonprofit organization to reopen his daughter's elementary school in Worthington, Massachusetts. Guitarist Mike Mushok stated in a Q&A session with fans that the band was looking to make a heavy record, but still "explore some of the things we did on the last record and take them somewhere new for us". In a webisode posted on the band's website, Lewis stated that eight songs were written and that "every one of them is as heavy or heavier than the heaviest song on the last record".
In December 2010, Staind posted three webisodes from the studio, which featured the band members discussing the writing and recording process of their new album. They announced that as of April 20, they had completed the recording of their seventh studio album and would release it later that year.
On May 20, 2011, Staind announced that original drummer Jon Wysocki had left the band. Drummer Will Hunt filled in for a few dates, while Wysocki's drum tech Sal Giancarelli filled in for the rest of the tour. Three days later, it was reported that Staind's new album would be a self-titled release. It was released on September 13, 2011. The first single, "Not Again", was released to active radio stations on July 18. The song "The Bottom" appeared on a soundtrack album. On June 30, Staind released a song called "Eyes Wide Open" from their new record; it would later be released on November 29 as the album's second single.
In November 2011, the band announced through their YouTube page that Sal Giancarelli was now an official member. The band continued to perform into 2012, embarking on an April and May tour with Godsmack and Halestorm, and they played the Uproar Festival in August and September with Shinedown and a number of other artists.
It was announced in July 2012 that the band would be taking a hiatus. In an interview with Billboard, Aaron Lewis stated, "We're not breaking up. We're not gonna stop making music. We're just going to take a little hiatus that really hasn't ever been taken in our career. We put out seven records in 14 years. We've been pretty busy." Lewis also had plans to release his first solo album, "The Road". During this time, Mike Mushok auditioned for, and was selected to play guitar in, former Metallica bassist Jason Newsted's new band Newsted. He featured on their debut album "Heavy Metal Music".
Staind played their first show in two years at the Welcome To Rockville Festival on April 27, 2014. They also played the Carolina Rebellion and Rock on the Range festivals in May 2014.
In late 2014, the band went on a hiatus. Aaron Lewis continued to play solo shows and work on his next solo album. He also confirmed that the hiatus would last "for a while". Mike Mushok teamed up with former Three Days Grace singer Adam Gontier, former Finger Eleven drummer Rich Beddoe, and Eye Empire bassist Corey Lowery to form Saint Asonia.
On August 4, 2017, the band performed for the first time since November 2014, an acoustic set at Aaron Lewis's 6th annual charity golf tournament and concert, when bassist Johnny April and drummer Sal Giancarelli joined Lewis and Mike Mushok to perform "Outside", "Something to Remind You", and "It's Been Awhile". Three days later, Lewis announced that Staind would never tour extensively again.
In April 2019, the band announced they would reform in September 2019 for some live performances. The band was scheduled to play at the Epicenter Festival on May 3, 2020, at Charlotte Motor Speedway.
Staind's lyrics cover depression, relationships, death, addiction, finding oneself, and betrayal, as well as Lewis's thoughts about becoming a father in the song "Zoe Jane" from "14 Shades of Grey" and reflections on his upbringing in "The Corner" from "The Illusion of Progress". Also from "14 Shades of Grey", the track "Layne" was written about Alice in Chains frontman Layne Staley in response to his death in 2002; the song is also about Staley's legacy and the effect his music had on the members of Staind, especially Aaron Lewis. Staind has been categorized as nu metal, alternative metal, heavy metal, hard rock, and post-grunge.
In 2001, "Rolling Stone" outlined the band's relationship to the nu metal label:
Staind's influences include Pantera, The Doors, Suicidal Tendencies, Kiss, Van Halen, Slayer, Led Zeppelin, Sepultura, Whitesnake, the Beatles, Alice in Chains, Faith No More, Deftones, Black Sabbath, Pearl Jam, Tool, Rage Against the Machine, Nirvana, Stone Temple Pilots, Helmet, James Taylor, Korn, and Crosby, Stills & Nash.
Current line-up
Former members
Studio albums
Source: https://en.wikipedia.org/wiki?curid=29489
Saddam Hussein
Saddam Hussein Abd al-Majid al-Tikriti (28 April 1937 – 30 December 2006) was the fifth President of Iraq, serving from 16 July 1979 until 9 April 2003. A leading member of the revolutionary Arab Socialist Ba'ath Party, and later of the Baghdad-based Ba'ath Party and its regional organization, the Iraqi Ba'ath Party (which espoused Ba'athism, a mix of Arab nationalism and socialism), Saddam played a key role in the 1968 coup (later referred to as the 17 July Revolution) that brought the party to power in Iraq.
As vice president under the ailing General Ahmed Hassan al-Bakr, and at a time when many groups were considered capable of overthrowing the government, Saddam created security forces through which he tightly controlled conflicts between the government and the armed forces. In the early 1970s, Saddam nationalized oil and foreign banks; the banking system eventually became insolvent, mostly due to the Iran–Iraq War, the Gulf War, and UN sanctions. Through the 1970s, Saddam cemented his authority over the apparatus of government as oil money helped Iraq's economy grow at a rapid pace. Positions of power in the country were mostly filled with Sunni Arabs, a minority that made up only a fifth of the population.
Saddam formally rose to power in 1979, although he had already been the "de facto" head of Iraq for several years. He suppressed several movements, particularly Shi'a and Kurdish movements which sought to overthrow the government or gain independence, respectively, and maintained power during the Iran–Iraq War and the Gulf War. Hussein's rule was a repressive dictatorship. The total number of Iraqis killed by the security services of Saddam's government in various purges and genocides is conservatively estimated to be 250,000. Saddam's invasions of Iran and Kuwait also resulted in hundreds of thousands of deaths.
In 2003, a coalition led by the United States invaded Iraq to depose Saddam, after U.S. President George W. Bush and British Prime Minister Tony Blair had erroneously accused him of possessing weapons of mass destruction and of having ties to al-Qaeda. Saddam's Ba'ath party was disbanded and the country's first-ever democratic elections were held. Following his capture on 13 December 2003, the trial of Saddam took place under the Iraqi Interim Government. On 5 November 2006, Saddam was convicted by an Iraqi court of crimes against humanity related to the 1982 killing of 148 Iraqi Shi'a and was sentenced to death by hanging. He was executed on 30 December 2006.
Before he was born, cancer killed both Saddam's father and brother. These deaths made Saddam's mother, Sabha, so depressed that she attempted to abort her pregnancy and commit suicide. When her son was born, Sabha "would have nothing to do with him," and Saddam was taken in by an uncle.
His mother remarried, and Saddam gained three half-brothers through this marriage. His stepfather, Ibrahim al-Hassan, treated Saddam harshly after his return. At about age 10, Saddam fled the family and returned to live in Baghdad with his uncle Kharaillah Talfah, who became a father figure to Saddam. Talfah, the father of Saddam's future wife, was a devout Sunni Muslim and a veteran of the 1941 Anglo-Iraqi War between Iraqi nationalists and the United Kingdom, which remained a major colonial power in the region. Talfah later became the mayor of Baghdad during Saddam's time in power, until his notorious corruption compelled Saddam to force him out of office.
Later in his life, relatives from his native Tikrit became some of his closest advisors and supporters. Under the guidance of his uncle, he attended a nationalistic high school in Baghdad. After secondary school, Saddam studied at an Iraqi law school for three years, dropping out in 1957 at the age of 20 to join the revolutionary pan-Arab Ba'ath Party, of which his uncle was a supporter. During this time, Saddam apparently supported himself as a secondary school teacher. Ba'athist ideology originated in Syria, and the Ba'ath Party had a large following there at the time, but in 1955 there were fewer than 300 Ba'ath Party members in Iraq; it is believed that Saddam's primary reason for joining the party, as opposed to the more established Iraqi nationalist parties, was his familial connection to Ahmed Hassan al-Bakr and other leading Ba'athists through his uncle.
Revolutionary sentiment was characteristic of the era in Iraq and throughout the Middle East. In Iraq progressives and socialists assailed traditional political elites (colonial-era bureaucrats and landowners, wealthy merchants and tribal chiefs, and monarchists). Moreover, the pan-Arab nationalism of Gamal Abdel Nasser in Egypt profoundly influenced young Ba'athists like Saddam. The rise of Nasser foreshadowed a wave of revolutions throughout the Middle East in the 1950s and 1960s, with the collapse of the monarchies of Iraq, Egypt, and Libya. Nasser inspired nationalists throughout the Middle East by fighting the British and the French during the Suez Crisis of 1956, modernizing Egypt, and uniting the Arab world politically.
In 1958, a year after Saddam had joined the Ba'ath party, army officers led by General Abd al-Karim Qasim overthrew Faisal II of Iraq in the 14 July Revolution.
Of the 16 members of Qasim's cabinet, 12 were Ba'ath Party members; however, the party turned against Qasim due to his refusal to join Gamal Abdel Nasser's United Arab Republic (UAR). To strengthen his own position within the government, Qasim created an alliance with the Iraqi Communist Party, which was opposed to any notion of pan-Arabism. Later that year, the Ba'ath Party leadership was planning to assassinate Qasim. Saddam was a leading member of the operation. At the time, the Ba'ath Party was more of an ideological experiment than a strong anti-government fighting machine. The majority of its members were either educated professionals or students, and Saddam fit the bill. The choice of Saddam was, according to journalist Con Coughlin, "hardly surprising." The idea of assassinating Qasim may have been Nasser's, and there is speculation that some of those who participated in the operation received training in Damascus, which was then part of the UAR. However, "no evidence has ever been produced to implicate Nasser directly in the plot." Saddam himself is not believed to have received any training outside of Iraq, as he was a late addition to the assassination team.
The assassins planned to ambush Qasim at Al-Rashid Street on 7 October 1959: one man was to kill those sitting at the back of the car, the rest killing those in front. During the ambush it is claimed that Saddam began shooting prematurely, which disorganised the whole operation. Qasim's chauffeur was killed, and Qasim was hit in the arm and shoulder. The assassins believed they had killed him and quickly retreated to their headquarters, but Qasim survived. At the time of the attack the Ba'ath Party had fewer than 1,000 members. Saddam's role in the failed assassination became a crucial part of his public image for decades. Kanan Makiya recounts:
The man and the myth merge in this episode. His biography—and Iraqi television, which stages the story ad nauseam—tells of his familiarity with guns from the age of ten; his fearlessness and loyalty to the party during the 1959 operation; his bravery in saving his comrades by commandeering a car at gunpoint; the bullet that was gouged out of his flesh under his direction in hiding; the iron discipline that led him to draw a gun on weaker comrades who would have dropped off a seriously wounded member of the hit team at a hospital; the calculating shrewdness that helped him save himself minutes before the police broke in leaving his wounded comrades behind; and finally the long trek of a wounded man from house to house, city to town, across the desert to refuge in Syria.
Some of the plotters (including Saddam) quickly managed to leave the country for Syria, the spiritual home of Ba'athist ideology. There Saddam was given full membership in the party by Michel Aflaq. Some members of the operation were arrested and taken into custody by the Iraqi government. At the show trial, six of the defendants were given death sentences; for unknown reasons the sentences were not carried out. Aflaq, the leader of the Ba'athist movement, organised the expulsion of leading Iraqi Ba'athist members, such as Fuad al-Rikabi, on the grounds that the party should not have initiated the attempt on Qasim's life. At the same time, Aflaq secured seats in the Iraqi Ba'ath leadership for his supporters, one of them being Saddam. Saddam moved from Syria to Egypt itself in February 1960, and he continued to live there until 1963, graduating from high school in 1961 and unsuccessfully pursuing a law degree.
Army officers with ties to the Ba'ath Party overthrew Qasim in the Ramadan Revolution coup of February 1963. Ba'athist leaders were appointed to the cabinet and Abdul Salam Arif became president. Arif dismissed and arrested the Ba'athist leaders later that year in the November 1963 Iraqi coup d'état. Being exiled in Egypt at the time, Saddam played no role in the 1963 coup or the brutal anti-communist purge that followed; although he returned to Iraq after the coup, Saddam remained "on the fringes of the newly installed Ba'thi administration and [had] to content himself with the minor position of a member of the Party's central bureau for peasants," in the words of Efraim Karsh and Inari Rautsi. Unlike during the Qasim years, Saddam remained in Iraq following Arif's anti-Ba'athist purge in November 1963, and became involved in planning to assassinate Arif. In marked contrast to Qasim, Saddam knew that he faced no death penalty from Arif's government and knowingly accepted the risk of being arrested rather than fleeing to Syria again. Saddam was arrested in October 1964 and served approximately two years in prison before escaping in 1966. In 1966, Ahmed Hassan al-Bakr appointed him Deputy Secretary of the Regional Command. Saddam, who would prove to be a skilled organiser, revitalised the party. He was elected to the Regional Command, as the story goes, with help from Michel Aflaq, the founder of Ba'athist thought. In September 1966, Saddam initiated an extraordinary challenge to Syrian domination of the Ba'ath Party in response to the Marxist takeover of the Syrian Ba'ath earlier that year, resulting in the Party's formalized split into two separate factions. Saddam then created a Ba'athist security service, which he alone controlled.
In July 1968, Saddam participated in a bloodless coup led by Ahmed Hassan al-Bakr that overthrew Abdul Rahman Arif, Salam Arif's brother and successor. While Saddam's role in the coup was not hugely significant (except in the official account), Saddam planned and carried out the subsequent purge of the non-Ba'athist faction led by Prime Minister Abd ar-Razzaq an-Naif, whose support had been essential to the coup's success. According to a semi-official biography, Saddam personally led Naif at gunpoint to the plane that escorted him out of Iraq. Arif was given refuge in London and then Istanbul. Al-Bakr was named president and Saddam was named his deputy, and deputy chairman of the Ba'athist Revolutionary Command Council. According to biographers, Saddam never forgot the tensions within the first Ba'athist government, which formed the basis for his measures to promote Ba'ath party unity as well as his resolve to maintain power and programs to ensure social stability. Although Saddam was al-Bakr's deputy, he was a strong behind-the-scenes party politician. Al-Bakr was the older and more prestigious of the two, but by 1969 Saddam clearly had become the moving force behind the party.
In the late 1960s and early 1970s, as vice chairman of the Revolutionary Command Council, formally al-Bakr's second-in-command, Saddam built a reputation as a progressive, effective politician. At this time, Saddam moved up the ranks in the new government by aiding attempts to strengthen and unify the Ba'ath party and taking a leading role in addressing the country's major domestic problems and expanding the party's following.
After the Ba'athists took power in 1968, Saddam focused on attaining stability in a nation riddled with profound tensions. Long before Saddam, Iraq had been split along social, ethnic, religious, and economic fault lines: Sunni versus Shi'ite, Arab versus Kurd, tribal chief versus urban merchant, nomad versus peasant. The desire for stable rule in a country rife with factionalism led Saddam to pursue both massive repression and the improvement of living standards.
Saddam actively fostered the modernization of the Iraqi economy along with the creation of a strong security apparatus to prevent coups within the power structure and insurrections apart from it. Ever concerned with broadening his base of support among the diverse elements of Iraqi society and mobilizing mass support, he closely followed the administration of state welfare and development programs.
At the center of this strategy was Iraq's oil. On 1 June 1972, Saddam oversaw the seizure of international oil interests, which, at the time, dominated the country's oil sector. A year later, world oil prices rose dramatically as a result of the 1973 energy crisis, and skyrocketing revenues enabled Saddam to expand his agenda.
Within just a few years, Iraq was providing social services that were unprecedented among Middle Eastern countries. Saddam established and controlled the "National Campaign for the Eradication of Illiteracy" and the campaign for "Compulsory Free Education in Iraq," and largely under his auspices, the government established universal free schooling up to the highest education levels; hundreds of thousands learned to read in the years following the initiation of the program. The government also supported families of soldiers, granted free hospitalization to everyone, and gave subsidies to farmers. Iraq created one of the most modernized public-health systems in the Middle East, earning Saddam an award from the United Nations Educational, Scientific and Cultural Organization (UNESCO).
With the help of increasing oil revenues, Saddam diversified the largely oil-based Iraqi economy. Saddam implemented a national infrastructure campaign that made great progress in building roads, promoting mining, and developing other industries. The campaign helped Iraq's energy industries. Electricity was brought to nearly every city in Iraq, and many outlying areas. Before the 1970s, most of Iraq's people lived in the countryside, and roughly two-thirds were peasants. This number decreased quickly during the 1970s as global oil prices helped revenues rise from less than half a billion dollars to tens of billions of dollars and the country invested in industrial expansion.
The oil revenue benefited Saddam politically. According to "The Economist", "Much as Adolf Hitler won early praise for galvanising German industry, ending mass unemployment and building autobahns, Saddam earned admiration abroad for his deeds. He had a good instinct for what the "Arab street" demanded, following the decline in Egyptian leadership brought about by the trauma of Israel's six-day victory in the 1967 war, the death of the pan-Arabist hero, Gamal Abdul Nasser, in 1970, and the "traitorous" drive by his successor, Anwar Sadat, to sue for peace with the Jewish state. Saddam's self-aggrandising propaganda, with himself posing as the defender of Arabism against Jewish or Persian intruders, was heavy-handed, but consistent as a drumbeat. It helped, of course, that his mukhabarat (secret police) put dozens of Arab news editors, writers and artists on the payroll."
In 1972, Saddam signed a 15-year Treaty of Friendship and Cooperation with the Soviet Union. According to historian Charles R. H. Tripp, the treaty upset "the U.S.-sponsored security system established as part of the Cold War in the Middle East. It appeared that any enemy of the Baghdad regime was a potential ally of the United States." In response, the U.S. covertly financed Kurdish rebels led by Mustafa Barzani during the Second Iraqi–Kurdish War; the Kurds were defeated in 1975, leading to the forcible relocation of hundreds of thousands of Kurdish civilians.
Saddam focused on fostering loyalty to the Ba'athists in the rural areas. After nationalizing foreign oil interests, Saddam supervised the modernization of the countryside, mechanizing agriculture on a large scale, and distributing land to peasant farmers. The Ba'athists established farm cooperatives and the government also doubled expenditures for agricultural development in 1974–1975. Saddam's welfare programs were part of a combination of "carrot and stick" tactics to enhance support for Saddam. The state-owned banks were put under his thumb. Lending was based on cronyism. Development went forward at such a fevered pitch that two million people from other Arab countries and even Yugoslavia worked in Iraq to meet the growing demand for labor.
In 1976, Saddam rose to the position of general in the Iraqi armed forces, and rapidly became the strongman of the government. As the ailing, elderly al-Bakr became unable to execute his duties, Saddam took on an increasingly prominent role as the face of the government both internally and externally. He soon became the architect of Iraq's foreign policy and represented the nation in all diplomatic situations. He was the "de facto" leader of Iraq some years before he formally came to power in 1979. He slowly began to consolidate his power over Iraq's government and the Ba'ath party. Relationships with fellow party members were carefully cultivated, and Saddam soon accumulated a powerful circle of support within the party.
In 1979, al-Bakr started to make treaties with Syria, also under Ba'athist leadership, that would lead to unification between the two countries. Syrian President Hafez al-Assad would become deputy leader in a union, and this would drive Saddam to obscurity. Saddam acted to secure his grip on power. He forced the ailing al-Bakr to resign on 16 July 1979, and formally assumed the presidency.
Saddam convened an assembly of Ba'ath party leaders on 22 July 1979. During the assembly, which he ordered videotaped, Saddam claimed to have found a fifth column within the Ba'ath Party and directed Muhyi Abdel-Hussein to read out a confession and the names of 68 alleged co-conspirators. These members were labelled "disloyal" and were removed from the room one by one and taken into custody. After the list was read, Saddam congratulated those still seated in the room for their past and future loyalty. The 68 people arrested at the meeting were subsequently tried together and found guilty of treason; 22 were sentenced to execution. Other high-ranking members of the party formed the firing squad. By 1 August 1979, hundreds of high-ranking Ba'ath party members had been executed.
Iraqi society is fissured along lines of language, religion, and ethnicity. The Ba'ath Party, secular by nature, adopted pan-Arab ideologies which were in turn problematic for significant parts of the population. Following the Iranian Revolution of 1979, Iraq faced the prospect of regime change from two Shi'ite factions (Dawa and SCIRI) which aspired to model Iraq on its neighbour Iran as a Shia theocracy. A separate threat to Iraq came from parts of the ethnic Kurdish population of northern Iraq which opposed being part of an Iraqi state and favoured independence (an ongoing ideology which had preceded Ba'ath Party rule). To alleviate the threat of revolution, Saddam afforded certain benefits to the potentially hostile population. Membership in the Ba'ath Party remained open to all Iraqi citizens regardless of background. However, repressive measures were taken against its opponents.
The major instruments for accomplishing this control were the paramilitary and police organizations. Beginning in 1974, Taha Yassin Ramadan (himself a Kurdish Ba'athist), a close associate of Saddam, commanded the People's Army, which had responsibility for internal security. As the Ba'ath Party's paramilitary, the People's Army acted as a counterweight against any coup attempts by the regular armed forces. In addition to the People's Army, the Department of General Intelligence was the most notorious arm of the state-security system, feared for its use of torture and assassination. Barzan Ibrahim al-Tikriti, Saddam's younger half-brother, commanded the Mukhabarat. Foreign observers believed that from 1982 this department operated both at home and abroad in its mission to seek out and eliminate Saddam's perceived opponents.
Saddam was notable for using terror against his own people. "The Economist" described Saddam as "one of the last of the 20th century's great dictators, but not the least in terms of egotism, or cruelty, or morbid will to power." Saddam's regime brought about the deaths of at least 250,000 Iraqis and committed war crimes in Iran, Kuwait, and Saudi Arabia. Human Rights Watch and Amnesty International issued regular reports of widespread imprisonment and torture.
As a sign of his consolidation of power, Saddam's personality cult pervaded Iraqi society. He had thousands of portraits, posters, statues and murals erected in his honor all over Iraq. His face could be seen on the sides of office buildings, schools, airports, and shops, as well as on Iraqi currency. Saddam's personality cult reflected his efforts to appeal to the various elements in Iraqi society. This was seen in his variety of apparel: he appeared in the costumes of the Bedouin, the traditional clothes of the Iraqi peasant (which he essentially wore during his childhood), and even Kurdish clothing, but also appeared in Western suits fitted by his favorite tailor, projecting the image of an urbane and modern leader. Sometimes he would also be portrayed as a devout Muslim, wearing full headdress and robe, praying toward Mecca.
He also conducted two show elections, in 1995 and 2002. In the 1995 referendum, conducted on 15 October, he reportedly received 99.96% of the votes in a 99.47% turnout, getting only 3,052 negative votes among an electorate of 8.4 million. In the 15 October 2002 referendum he officially achieved 100% of approval votes and 100% turnout, as the electoral commission reported the next day that every one of the 11,445,638 eligible voters cast a "Yes" vote for the president.
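Those reported 1995 figures are at least internally consistent (an illustrative check, not a calculation from the source):

\[ 0.9947 \times 8{,}400{,}000 \approx 8{,}355{,}000 \text{ ballots cast}, \qquad \frac{3{,}052}{8{,}355{,}000} \approx 0.04\%, \]

which matches the claimed 99.96% share of "Yes" votes.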
He erected statues around the country, which Iraqis toppled after his fall.
Iraq's relations with the Arab world have been extremely varied. Relations between Iraq and Egypt violently ruptured in 1977, when the two nations broke relations with each other following Iraq's criticism of Egyptian President Anwar Sadat's peace initiatives with Israel. In 1978, Baghdad hosted an Arab League summit that condemned and ostracized Egypt for accepting the Camp David Accords. However, Egypt's strong material and diplomatic support for Iraq in the war with Iran led to warmer relations and numerous contacts between senior officials, despite the continued absence of ambassadorial-level representation. Since 1983, Iraq has repeatedly called for restoration of Egypt's "natural role" among Arab countries.
Saddam developed a reputation for liking expensive goods, such as his diamond-coated Rolex wristwatch, and sent copies of them to his friends around the world. To his ally Kenneth Kaunda, Saddam once sent a Boeing 747 full of presents: rugs, televisions, ornaments.
Saddam enjoyed a close relationship with Russian intelligence agent Yevgeny Primakov that dated back to the 1960s; Primakov may have helped Saddam to stay in power in 1991.
Saddam visited only two Western countries. The first visit took place in December 1974, when the dictator of Spain, Francisco Franco, invited him to Madrid and he visited Granada, Córdoba and Toledo. In September 1975 he met with Prime Minister Jacques Chirac in Paris, France.
Several Iraqi leaders, Lebanese arms merchant Sarkis Soghanalian and others have claimed that Saddam financed Chirac's party. In 1991 Saddam threatened to expose those who had taken largesse from him: "From Mr. Chirac to Mr. Chevènement, politicians and economic leaders were in open competition to spend time with us and flatter us. We have now grasped the reality of the situation. If the trickery continues, we will be forced to unmask them, all of them, before the French public." France armed Saddam and it was Iraq's largest trade partner throughout Saddam's rule. Seized documents show how French officials and businessmen close to Chirac, including Charles Pasqua, his former interior minister, personally benefitted from the deals with Saddam.
Because Saddam Hussein rarely left Iraq, Tariq Aziz, one of Saddam's aides, traveled abroad extensively and represented Iraq at many diplomatic meetings. In foreign affairs, Saddam sought to have Iraq play a leading role in the Middle East. Iraq signed an aid pact with the Soviet Union in 1972, and arms were sent along with several thousand advisers. However, the 1978 crackdown on Iraqi Communists and a shift of trade toward the West strained Iraqi relations with the Soviet Union; Iraq then took on a more Western orientation until the Gulf War in 1991.
After the oil crisis of 1973, France had changed to a more pro-Arab policy and was accordingly rewarded by Saddam with closer ties. He made a state visit to France in 1975, cementing close ties with some French business and ruling political circles. In 1975 Saddam negotiated an accord with Iran that contained Iraqi concessions on border disputes. In return, Iran agreed to stop supporting opposition Kurds in Iraq. Saddam led Arab opposition to the Camp David Accords between Egypt and Israel (1979).
Saddam initiated Iraq's nuclear enrichment project in the 1980s, with French assistance. The first Iraqi nuclear reactor was named by the French "Osirak." Osirak was destroyed on 7 June 1981 by an Israeli air strike (Operation Opera).
Nearly from its founding as a modern state in 1920, Iraq has had to deal with Kurdish separatists in the northern part of the country. Saddam did negotiate an agreement in 1970 with separatist Kurdish leaders, giving them autonomy, but the agreement broke down. The result was brutal fighting between the government and Kurdish groups and even Iraqi bombing of Kurdish villages in Iran, which caused Iraqi relations with Iran to deteriorate. However, after Saddam had negotiated the 1975 treaty with Iran, the Shah withdrew support for the Kurds, who suffered a total defeat.
In early 1979, Iran's Shah Mohammad Reza Pahlavi was overthrown by the Islamic Revolution, thus giving way to an Islamic republic led by the Ayatollah Ruhollah Khomeini. The influence of revolutionary Shi'ite Islam grew apace in the region, particularly in countries with large Shi'ite populations, especially Iraq. Saddam feared that radical Islamic ideas—hostile to his secular rule—were rapidly spreading inside his country among the majority Shi'ite population.
There had also been bitter enmity between Saddam and Khomeini since the 1970s. Khomeini, having been exiled from Iran in 1964, took up residence in Iraq, at the Shi'ite holy city of An Najaf. There he involved himself with Iraqi Shi'ites and developed a strong, worldwide religious and political following against the Iranian Government, which Saddam tolerated. However, when Khomeini began to urge the Shi'ites there to overthrow Saddam, and under pressure from the Shah, who had agreed to a rapprochement between Iraq and Iran in 1975, Saddam agreed to expel Khomeini to France in 1978. This turned out to be a failure and a political catalyst, for in France Khomeini had access to more media connections and collaborated with a much larger Iranian expatriate community, which he used to his advantage.
After Khomeini gained power, skirmishes between Iraq and revolutionary Iran occurred for ten months over the sovereignty of the disputed Shatt al-Arab waterway, which divides the two countries. During this period, Saddam Hussein publicly maintained that it was in Iraq's interest not to engage with Iran, and that it was in the interests of both nations to maintain peaceful relations. However, in a private meeting with Salah Omar al-Ali, Iraq's permanent ambassador to the United Nations, he revealed that he intended to invade and occupy a large part of Iran within months. Later (probably to appeal for support from the United States and most Western nations), he would make toppling the Islamic government one of his intentions as well.
Iraq invaded Iran, first attacking Mehrabad Airport in Tehran and then entering the oil-rich Iranian region of Khuzestan, which also has a sizable Arab minority, on 22 September 1980, and declared it a new province of Iraq. With the support of the Arab states, the United States, and Europe, and heavily financed by the Arab states of the Persian Gulf, Saddam Hussein had become "the defender of the Arab world" against a revolutionary Iran. The only exception was the Soviet Union, which initially refused to supply Iraq on the basis of neutrality in the conflict, although in his memoirs Mikhail Gorbachev claimed that Leonid Brezhnev refused to aid Saddam out of fury at Saddam's treatment of Iraqi communists. Consequently, many viewed Iraq as "an agent of the civilized world". The blatant disregard of international law and violations of international borders were ignored; instead, Iraq received economic and military support from its allies, who overlooked Saddam's use of chemical warfare against the Kurds and the Iranians, in addition to Iraq's efforts to develop nuclear weapons.
In the first days of the war, there was heavy ground fighting around strategic ports as Iraq launched an attack on Khuzestan. After making some initial gains, Iraq's troops began to suffer losses from human wave attacks by Iran. By 1982, Iraq was on the defensive and looking for ways to end the war.
At this point, Saddam asked his ministers for candid advice. Health Minister Dr. Riyadh Ibrahim suggested that Saddam temporarily step down to promote peace negotiations. Initially, Saddam appeared to accept this suggestion as part of his show of cabinet democracy. A few weeks later, however, Dr. Ibrahim was sacked, held responsible for a fatal incident in an Iraqi hospital in which a patient died from intravenous administration of the wrong concentration of a potassium supplement.
Dr. Ibrahim was arrested a few days after he started his new life as a sacked minister. Before that arrest, he had publicly declared that he was "glad that he got away alive." Pieces of Ibrahim's dismembered body were delivered to his wife the next day.
Iraq quickly found itself bogged down in one of the longest and most destructive wars of attrition of the 20th century. During the war, Iraq used chemical weapons against Iranian forces fighting on the southern front and Kurdish separatists who were attempting to open up a northern front in Iraq with the help of Iran. These chemical weapons were developed by Iraq from materials and technology supplied primarily by West German companies as well as using dual-use technology imported following the Reagan administration's lifting of export restrictions. The United States also supplied Iraq with "satellite photos showing Iranian deployments." In a US bid to open full diplomatic relations with Iraq, the country was removed from the US list of State Sponsors of Terrorism. Ostensibly, this was because of improvement in the regime's record, although former United States Assistant Secretary of Defense Noel Koch later stated, "No one had any doubts about [the Iraqis'] continued involvement in terrorism ... The real reason was to help them succeed in the war against Iran." The Soviet Union, France, and China together accounted for over 90% of the value of Iraq's arms imports between 1980 and 1988.
Saddam reached out to other Arab governments for cash and political support during the war, particularly after Iraq's oil industry severely suffered at the hands of the Iranian navy in the Persian Gulf. Iraq successfully gained some military and financial aid, as well as diplomatic and moral support, from the Soviet Union, China, France, and the United States, which together feared the prospects of the expansion of revolutionary Iran's influence in the region. The Iranians, demanding that the international community should force Iraq to pay war reparations to Iran, refused any suggestions for a cease-fire. Despite several calls for a ceasefire by the United Nations Security Council, hostilities continued until 20 August 1988.
On 16 March 1988, the Kurdish town of Halabja was attacked with a mix of mustard gas and nerve agents, killing 5,000 civilians and maiming, disfiguring, or seriously debilitating 10,000 more (see Halabja poison gas attack). The attack occurred in conjunction with the 1988 al-Anfal campaign, designed to reassert central control over the mostly Kurdish population of areas of northern Iraq and defeat the Kurdish peshmerga rebel forces. The United States now maintains that Saddam ordered the attack to terrorize the Kurdish population in northern Iraq, but Saddam's regime claimed at the time that Iran was responsible, a claim that some, including the U.S., supported until several years later.
The bloody eight-year war ended in a stalemate. There were hundreds of thousands of casualties, with estimates of up to one million dead. Neither side had achieved what it had originally desired, and the borders were left nearly unchanged. The southern, oil-rich and prosperous Khuzestan and Basra area (the main focus of the war and the primary source of both countries' economies) was almost completely destroyed and was left at the pre-1979 border, while Iran managed to make some small gains on its borders in the northern Kurdish area. Both economies, previously healthy and expanding, were left in ruins.
Saddam borrowed tens of billions of dollars from other Arab states, and a few billion from elsewhere, during the 1980s to fight Iran, mainly to prevent the expansion of Shi'a radicalism. This proved to backfire both on Iraq and on the Arab states: Khomeini was widely perceived as a hero for defending Iran and sustaining the war with little foreign support against the heavily backed Iraq, and the war only boosted Islamic radicalism, not only within the Arab states but within Iraq itself, creating new tensions between the Sunni Ba'ath Party and the majority Shi'a population. Faced with rebuilding Iraq's infrastructure and internal resistance, Saddam desperately sought cash again, this time for postwar reconstruction.
The Al-Anfal Campaign was a genocidal campaign against the Kurdish people (and many others) in Kurdish regions of Iraq, led by the government of Saddam Hussein and headed by Ali Hassan al-Majid. The campaign takes its name from Surat al-Anfal in the Qur'an, which was used as a code name by the former Iraqi Ba'athist administration for a series of attacks against the "peshmerga" rebels and the mostly Kurdish civilian population of rural northern Iraq, conducted between 1986 and 1989 and culminating in 1988. The campaign also targeted Shabaks, Yazidis, Assyrians, Turkomans and Mandaeans, and many villages belonging to these ethnic groups were destroyed. Human Rights Watch estimates that between 50,000 and 100,000 people were killed. Some Kurdish sources put the number higher, estimating that 182,000 Kurds were killed.
The end of the war with Iran served to deepen latent tensions between Iraq and its wealthy neighbor Kuwait. Saddam urged the Kuwaitis to waive the Iraqi debt accumulated in the war, some $30 billion, but they refused.
Saddam pushed oil-exporting countries to raise oil prices by cutting back production; Kuwait, however, refused. In addition, it spearheaded the opposition within OPEC to the production cuts that Saddam had requested. Kuwait was pumping large amounts of oil, and thus keeping prices low, when Iraq needed to sell high-priced oil from its wells to pay off its huge debt.
Saddam had always argued that Kuwait was historically an integral part of Iraq, and that Kuwait had only come into being through the maneuverings of British imperialism; this echoed a belief that Iraqi nationalists had voiced for the past 50 years. This belief was one of the few articles of faith uniting the political scene in a nation rife with sharp social, ethnic, religious, and ideological divides.
The extent of Kuwaiti oil reserves also intensified tensions in the region. The oil reserves of Kuwait (with a population of 2 million, next to Iraq's 25 million) were roughly equal to those of Iraq. Taken together, Iraq and Kuwait sat on top of some 20 percent of the world's known oil reserves; by comparison, Saudi Arabia holds 25 percent.
Saddam complained to the U.S. State Department that Kuwait had slant drilled oil out of wells that Iraq considered to be within its disputed border with Kuwait. Saddam still had an experienced and well-equipped army, which he used to influence regional affairs. He later ordered troops to the Iraq–Kuwait border.
As Iraq-Kuwait relations rapidly deteriorated, Saddam was receiving conflicting information about how the U.S. would respond to the prospects of an invasion. For one, Washington had been taking measures to cultivate a constructive relationship with Iraq for roughly a decade. The Reagan administration gave Iraq roughly $4 billion in agricultural credits to bolster it against Iran. Saddam's Iraq became "the third-largest recipient of U.S. assistance."
Reacting to Western criticism, in April 1990 Saddam threatened to destroy half of Israel with chemical weapons if it moved against Iraq. In May 1990 he criticized U.S. support for Israel, warning that "the United States cannot maintain such a policy while professing friendship towards the Arabs." In July 1990 he threatened force against Kuwait and the UAE, saying, "The policies of some Arab rulers are American ... They are inspired by America to undermine Arab interests and security." The U.S. sent aircraft and combat ships to the Persian Gulf in response to these threats.
U.S. ambassador to Iraq April Glaspie met with Saddam in an emergency meeting on 25 July 1990, where the Iraqi leader attacked American policy with regard to Kuwait and the United Arab Emirates:
Glaspie replied:
Saddam stated that he would attempt last-ditch negotiations with the Kuwaitis but Iraq "would not accept death."
U.S. officials attempted to maintain a conciliatory line with Iraq, indicating that while George H. W. Bush and James Baker did not want force used, they would not take any position on the Iraq–Kuwait boundary dispute and did not want to become involved.
Later, Iraq and Kuwait met for a final negotiation session, which failed. Saddam then sent his troops into Kuwait. As tensions between Washington and Saddam began to escalate, the Soviet Union, under Mikhail Gorbachev, strengthened its military relationship with the Iraqi leader, providing him military advisers, arms and aid.
On 2 August 1990, Saddam invaded Kuwait, initially claiming assistance to "Kuwaiti revolutionaries," thus sparking an international crisis. On 4 August an Iraqi-backed "Provisional Government of Free Kuwait" was proclaimed, but a total lack of legitimacy and support for it led to an 8 August announcement of a "merger" of the two countries. On 28 August Kuwait formally became the 19th Governorate of Iraq. Just two years after the 1988 Iraq and Iran truce, "Saddam Hussein did what his Gulf patrons had earlier paid him to prevent." Having removed the threat of Iranian fundamentalism he "overran Kuwait and confronted his Gulf neighbors in the name of Arab nationalism and Islam."
When later asked why he invaded Kuwait, Saddam first claimed that it was because Kuwait was rightfully Iraq's 19th province, and then said, "When I get something into my head I act. That's just the way I am." After Saddam's seizure of Kuwait in August 1990, a UN coalition led by the United States drove Iraq's troops from Kuwait in February 1991. Saddam Hussein's ability to pursue such military aggression came from a "military machine paid for in large part by the tens of billions of dollars Kuwait and the Gulf states had poured into Iraq and the weapons and technology provided by the Soviet Union, Germany, and France."
Shortly before he invaded Kuwait, he shipped 100 new Mercedes 200 Series cars to top editors in Egypt and Jordan. Two days before the first attacks, Saddam reportedly offered Egypt's Hosni Mubarak 50 million dollars in cash, "ostensibly for grain."
U.S. President George H. W. Bush responded cautiously for the first several days. On one hand, Kuwait, prior to this point, had been a virulent enemy of Israel and was the Persian Gulf monarchy that had the most friendly relations with the Soviets. On the other hand, Washington foreign policymakers, along with Middle East experts, military critics, and firms heavily invested in the region, were extremely concerned with stability there. The invasion immediately triggered fears that the world's price of oil, and therefore control of the world economy, was at stake. Britain profited heavily from billions of dollars of Kuwaiti investments and bank deposits. Bush was perhaps swayed while meeting with British Prime Minister Margaret Thatcher, who happened to be in the U.S. at the time.
Cooperation between the United States and the Soviet Union made possible the passage of resolutions in the United Nations Security Council giving Iraq a deadline to leave Kuwait and approving the use of force if Saddam did not comply with the timetable. U.S. officials feared Iraqi retaliation against oil-rich Saudi Arabia, since the 1940s a close ally of Washington, for the Saudis' opposition to the invasion of Kuwait. Accordingly, the U.S. and a group of allies, including countries as diverse as Egypt, Syria and Czechoslovakia, deployed a massive number of troops along the Saudi border with Kuwait and Iraq in order to encircle the Iraqi army, the largest in the Middle East.
Saddam's officers looted Kuwait, stripping even the marble from its palaces to move it to Saddam's own palace.
During the period of negotiations and threats following the invasion, Saddam focused renewed attention on the Palestinian problem by promising to withdraw his forces from Kuwait if Israel would relinquish the occupied territories in the West Bank, the Golan Heights, and the Gaza Strip. Saddam's proposal further split the Arab world, pitting U.S.- and Western-supported Arab states against the Palestinians. The allies ultimately rejected any linkage between the Kuwait crisis and Palestinian issues.
Saddam ignored the Security Council deadline. Backed by the Security Council, a U.S.-led coalition launched round-the-clock missile and aerial attacks on Iraq, beginning 16 January 1991. Israel, though subjected to attack by Iraqi missiles, refrained from retaliating in order not to provoke Arab states into leaving the coalition. A ground force consisting largely of U.S. and British armoured and infantry divisions ejected Saddam's army from Kuwait in February 1991 and occupied the southern portion of Iraq as far as the Euphrates.
On 6 March 1991, Bush announced "What is at stake is more than one small country, it is a big idea—a new world order, where diverse nations are drawn together in common cause to achieve the universal aspirations of mankind: peace and security, freedom, and the rule of law."
In the end, the outnumbered and under-equipped Iraqi army proved unable to compete on the battlefield with the highly mobile coalition land forces and their overpowering air support. Some 175,000 Iraqis were taken prisoner, and casualties were estimated at over 85,000. As part of the cease-fire agreement, Iraq agreed to scrap all poison gas and germ weapons and to allow UN observers to inspect the sites. UN trade sanctions would remain in effect until Iraq complied with all terms. Saddam publicly claimed victory at the end of the war.
Iraq's ethnic and religious divisions, together with the brutality of the conflict that this had engendered, laid the groundwork for postwar rebellions. In the aftermath of the fighting, social and ethnic unrest among Shi'ite Muslims, Kurds, and dissident military units threatened the stability of Saddam's government. Uprisings erupted in the Kurdish north and Shi'a southern and central parts of Iraq, but were ruthlessly repressed. Uprisings in 1991 led to the death of 100,000–180,000 people, mostly civilians.
The United States, which had urged Iraqis to rise up against Saddam, did nothing to assist the rebellions. The Iranians, despite the widespread Shi'ite rebellions, had no interest in provoking another war, while Turkey opposed any prospect of Kurdish independence, and the Saudis and other conservative Arab states feared an Iran-style Shi'ite revolution. Saddam, having survived the immediate crisis in the wake of defeat, was left firmly in control of Iraq, although the country never recovered either economically or militarily from the Gulf War.
Saddam routinely cited his survival as "proof" that Iraq had in fact won the war against the U.S. This message earned Saddam a great deal of popularity in many sectors of the Arab world. John Esposito, however, claims that "Arabs and Muslims were pulled in two directions. That they rallied not so much to Saddam Hussein as to the bipolar nature of the confrontation (the West versus the Arab Muslim world) and the issues that Saddam proclaimed: Arab unity, self-sufficiency, and social justice." As a result, Saddam Hussein appealed to many people for the same reasons that attracted more and more followers to Islamic revivalism and also for the same reasons that fueled anti-Western feelings.
As one U.S. Muslim observer noted: "People forgot about Saddam's record and concentrated on America ... Saddam Hussein might be wrong, but it is not America who should correct him." A shift was, therefore, clearly visible among many Islamic movements in the post war period "from an initial Islamic ideological rejection of Saddam Hussein, the secular persecutor of Islamic movements, and his invasion of Kuwait to a more populist Arab nationalist, anti-imperialist support for Saddam (or more precisely those issues he represented or championed) and the condemnation of foreign intervention and occupation."
Saddam, therefore, increasingly portrayed himself as a devout Muslim, in an effort to co-opt the conservative religious segments of society. Some elements of Sharia law were re-introduced, and the ritual phrase "Allahu Akbar" ("God is great"), in Saddam's handwriting, was added to the national flag. Saddam also commissioned the production of a "Blood Qur'an," written using 27 litres of his own blood, to thank God for saving him from various dangers and conspiracies.
The United Nations sanctions placed upon Iraq when it invaded Kuwait were not lifted, blocking Iraqi oil exports. During the late 1990s, the UN considered relaxing the sanctions imposed because of the hardships suffered by ordinary Iraqis. Studies dispute the number of people who died in south and central Iraq during the years of the sanctions. On 9 December 1996, Saddam's government accepted the Oil-for-Food Programme that the UN had first offered in 1992.
Relations between the United States and Iraq remained tense following the Gulf War. The U.S. launched a missile attack aimed at Iraq's intelligence headquarters in Baghdad on 26 June 1993, citing evidence of repeated Iraqi violations of the "no-fly zones" imposed after the Gulf War and of incursions into Kuwait. U.S. officials continued to accuse Saddam of violating the terms of the Gulf War's cease-fire by developing weapons of mass destruction and other banned weaponry, and of violating the UN-imposed sanctions. Also during the 1990s, President Bill Clinton maintained sanctions and ordered air strikes in the "Iraqi no-fly zones" in the hope that Saddam would be overthrown by political enemies inside Iraq. Western charges of Iraqi resistance to UN access to suspected weapons sites were the pretext for crises between 1997 and 1998, culminating in intensive U.S. and British missile strikes on Iraq (Operation Desert Fox), 16–19 December 1998. After two years of intermittent activity, U.S. and British warplanes struck harder at sites near Baghdad in February 2001. Former CIA case officer Robert Baer reports that he "tried to assassinate" Saddam in 1995, amid "a decade-long effort to encourage a military coup in Iraq."
Saddam continued his involvement in politics abroad. Video tapes retrieved after the 2003 invasion show his intelligence chiefs meeting with Arab journalists, including a meeting with the former managing director of Al-Jazeera, Mohammed Jassem al-Ali, in 2000. In the video, Saddam's son Uday advised al-Ali about hires at Al-Jazeera: "During your last visit here along with your colleagues we talked about a number of issues, and it does appear that you indeed were listening to what I was saying since changes took place and new faces came on board such as that lad, Mansour." Al-Ali was later sacked by Al-Jazeera.
In 2002, Austrian prosecutors investigated the Saddam government's transactions with Fritz Edlinger, which possibly violated Austrian money-laundering and embargo regulations. Edlinger, general secretary of the Society for Austro-Arab Relations (GÖAB) and a former member of Socialist International's Middle East Committee, was an outspoken supporter of Saddam Hussein. In 2005, an Austrian journalist revealed that Edlinger's GÖAB had received $100,000 from an Iraqi front company as well as donations from Austrian companies soliciting business in Iraq.
In 2002, a resolution sponsored by the European Union was adopted by the Commission for Human Rights, which stated that there had been no improvement in the human rights crisis in Iraq. The statement condemned President Saddam Hussein's government for its "systematic, widespread and extremely grave violations of human rights and international humanitarian law." The resolution demanded that Iraq immediately put an end to its "summary and arbitrary executions ... the use of rape as a political tool and all enforced and involuntary disappearances."
Many members of the international community, especially the U.S., continued to view Saddam as a bellicose tyrant and a threat to the stability of the region. In his January 2002 State of the Union address to Congress, President George W. Bush spoke of an "axis of evil" consisting of Iran, North Korea, and Iraq. Moreover, Bush announced that he would possibly take action to topple the Iraqi government because of the threat of its weapons of mass destruction. Bush stated that "The Iraqi regime has plotted to develop anthrax, and nerve gas, and nuclear weapons for over a decade ... Iraq continues to flaunt its hostility toward America and to support terror."
After the passing of United Nations Security Council Resolution 1441, which demanded that Iraq give "immediate, unconditional and active cooperation" with UN and IAEA inspections, Saddam allowed U.N. weapons inspectors led by Hans Blix to return to Iraq. During the renewed inspections beginning in November 2002, Blix found no stockpiles of WMD and noted the "proactive" but not always "immediate" Iraqi cooperation as called for by UN Security Council Resolution 1441.
With war still looming, on 24 February 2003 Saddam Hussein took part in an interview with CBS News reporter Dan Rather. Talking for more than three hours, he denied possessing any weapons of mass destruction, or any other weapons prohibited by UN guidelines. He also expressed a wish to have a live televised debate with George W. Bush, which was declined. It was his first interview with a U.S. reporter in over a decade. CBS aired the taped interview later that week. Saddam Hussein later told an FBI interviewer that he had once left open the possibility that Iraq possessed weapons of mass destruction in order to appear strong against Iran.
The Iraqi government and military collapsed within three weeks of the beginning of the U.S.-led 2003 invasion of Iraq on 20 March. By the beginning of April, U.S.-led forces occupied much of Iraq. The resistance of the much-weakened Iraqi Army either crumbled or shifted to guerrilla tactics, and it appeared that Saddam had lost control of Iraq. He was last seen in a video which purported to show him in the Baghdad suburbs surrounded by supporters. When Baghdad fell to U.S.-led forces on 9 April, marked symbolically by the toppling of his statue, Saddam was nowhere to be found.
In April 2003, Saddam's whereabouts remained in question during the weeks following the fall of Baghdad and the conclusion of the major fighting of the war. Various sightings of Saddam were reported in the weeks following the war, but none was authenticated. At various times Saddam released audio tapes promoting popular resistance to his ousting.
Saddam was placed at the top of the "U.S. list of most-wanted Iraqis." In July 2003, his sons Uday and Qusay and 14-year-old grandson Mustapha were killed in a three-hour gunfight with U.S. forces.
On 13 December 2003, in Operation Red Dawn, Saddam Hussein was captured by American forces after being found hiding in a hole in the ground near a farmhouse in ad-Dawr, near Tikrit. Following his capture, Saddam was transported to a U.S. base near Tikrit, and later taken to the American base near Baghdad. Documents obtained and released by the National Security Archive detail FBI interviews and conversations with Hussein while he was in U.S. custody. On 14 December, U.S. administrator in Iraq Paul Bremer confirmed that Saddam Hussein had indeed been captured at a farmhouse in ad-Dawr near Tikrit. Bremer presented video footage of Saddam in custody.
Saddam was shown with a full beard and hair longer than his familiar appearance. He was described by U.S. officials as being in good health. Bremer reported plans to put Saddam on trial, but claimed that the details of such a trial had not yet been determined. Iraqis and Americans who spoke with Saddam after his capture generally reported that he remained self-assured, describing himself as a "firm, but just leader."
The British tabloid newspaper "The Sun" published a picture of Saddam wearing white briefs on its front page. Other photographs inside the paper showed Saddam washing his trousers, shuffling, and sleeping. The United States government stated that it considered the release of the pictures a violation of the Geneva Convention, and that it would investigate the photographs. During this period Saddam was interrogated by FBI agent George Piro.
The guards at the Baghdad detention facility called their prisoner "Vic," short for "Very Important Criminal," and let him plant a small garden near his cell. The nickname and the garden are among the details about the former Iraqi leader that emerged during a March 2008 tour of the Baghdad prison and the cell where Saddam slept, bathed, kept a journal and wrote poetry in the final days before his execution; he was concerned with securing his legacy and with how history would be told. The tour was conducted by U.S. Marine Maj. Gen. Doug Stone, overseer of detention operations for the U.S. military in Iraq at the time.
On 30 June 2004, Saddam Hussein, held in custody by U.S. forces at the U.S. base "Camp Cropper," was handed over, along with 11 other senior Ba'athist leaders, to the interim Iraqi government to stand trial for crimes against humanity and other offences.
A few weeks later, he was charged by the Iraqi Special Tribunal with crimes committed against residents of Dujail in 1982, following a failed assassination attempt against him. Specific charges included the murder of 148 people, torture of women and children and the illegal arrest of 399 others.
Among the many challenges of the trial were:
On 5 November 2006, Saddam Hussein was found guilty of crimes against humanity and sentenced to death by hanging. Saddam's half-brother, Barzan Ibrahim, and Awad Hamed al-Bandar, head of Iraq's Revolutionary Court in 1982, were convicted of similar charges. The verdict and sentencing were both appealed, but subsequently affirmed by Iraq's Supreme Court of Appeals.
Saddam was hanged on the first day of Eid ul-Adha, 30 December 2006, despite his wish to be executed by firing squad (which he argued was the lawful military capital punishment citing his military position as the commander-in-chief of the Iraqi military). The execution was carried out at Camp Justice, an Iraqi army base in Kadhimiya, a neighborhood of northeast Baghdad.
Saudi Arabia condemned Iraqi authorities for proceeding with the execution on a holy day. A presenter from the Al-Ikhbariya television station officially stated, "There is a feeling of surprise and disapproval that the verdict has been applied during the holy months and the first days of Eid al-Adha. Leaders of Islamic countries should show respect for this blessed occasion ... not demean it."
Video of the execution was recorded on a mobile phone and his captors could be heard insulting Saddam. The video was leaked to electronic media and posted on the Internet within hours, becoming the subject of global controversy. It was later claimed by the head guard at the tomb where his remains lay that Saddam's body had been stabbed six times after the execution. Saddam's demeanor while being led to the gallows has been discussed by two witnesses, Iraqi Judge Munir Haddad and Iraqi national security adviser Mowaffak al-Rubaie. The accounts of the two witnesses are contradictory as Haddad describes Saddam as being strong in his final moments whereas al-Rubaie says Saddam was clearly afraid.
Saddam's last words during the execution were, "May God's blessings be upon Muhammad and his household. And may God hasten their appearance and curse their enemies." Someone in the crowd then repeatedly chanted the name of the Iraqi Shiite cleric Moqtada al-Sadr. Saddam responded, "Do you consider this manhood?" The crowd shouted, "Go to hell." Saddam replied, "To the hell that is Iraq!?" Another onlooker asked those shouting to be quiet for the sake of God. Saddam then began reciting the final Muslim prayer, "I bear witness that there is no god but Allah and I testify that Mohammed is the Messenger of Allah." Someone in the crowd shouted, "The tyrant [dictator] has collapsed!" Saddam said, "May God's blessings be upon Mohammed and his household (family)." He recited the shahada one and a half times; as he was about to say "Mohammed" in the second recitation, the trapdoor opened, cutting him off mid-sentence. The rope broke his neck, and he died instantly.
Not long before the execution, Saddam's lawyers released his last letter.
A second unofficial video, apparently showing Saddam's body on a trolley, emerged several days later. It sparked speculation that the execution was carried out incorrectly as Saddam Hussein had a gaping hole in his neck.
Saddam was buried at his birthplace of Al-Awja in Tikrit, Iraq, on 31 December 2006. He was buried 3 km (2 mi) from his sons Uday and Qusay Hussein. His tomb was reported to have been destroyed in March 2015. Before it was destroyed, a Sunni tribal group reportedly removed his body to a secret location, fearful of what might happen.
In August 1995, Raghad and her husband Hussein Kamel al-Majid and Rana and her husband, Saddam Kamel al-Majid, defected to Jordan, taking their children with them. They returned to Iraq when they received assurances that Saddam would pardon them. Within three days of their return in February 1996, both of the Kamel brothers were attacked and killed in a gunfight with other clan members who considered them traitors.
In August 2003, Saddam's daughters Raghad and Rana received sanctuary in Amman, Jordan, where they are currently staying with their nine children. That month, they spoke with CNN and the Arab satellite station Al-Arabiya in Amman. When asked about her father, Raghad told CNN, "He was a very good father, loving, has a big heart." Asked if she wanted to give a message to her father, she said: "I love you and I miss you." Her sister Rana also remarked, "He had so many feelings and he was very tender with all of us."
In 2003, before the Iraq War, the CIA considered making a video in which Saddam would be shown having sex with a teenager, with the intention of discrediting him with his supporters.
In 1979, Rev. Jacob Yasso of Chaldean Sacred Heart Church congratulated Saddam Hussein on his presidency. In return, Rev. Yasso said, Saddam Hussein donated US$250,000 to his church, which is made up of at least 1,200 families of Middle Eastern descent. In 1980, Detroit Mayor Coleman Young allowed Rev. Yasso to present the key to the city of Detroit to Saddam Hussein. At the time, Saddam asked Rev. Yasso, "I heard there was a debt on your church. How much is it?" After the inquiry, Saddam donated another $200,000 to Chaldean Sacred Heart Church. Rev. Yasso said that Saddam made donations to Chaldean churches all over the world, and even went on record as saying, "He's very kind to Christians." | https://en.wikipedia.org/wiki?curid=29490 |
Oceania
Oceania is a geographic region that includes Australasia, Melanesia, Micronesia and Polynesia. Spanning the eastern and western hemispheres, Oceania has a land area of and a population of over 41 million. When compared to continents, the region of Oceania is the smallest in land area and the second smallest in population after Antarctica.
Oceania has a diverse mix of economies from the highly developed and globally competitive financial markets of Australia and New Zealand, which rank high in quality of life and human development index, to the much less developed economies that belong to countries such as Kiribati and Tuvalu, while also including medium-sized economies of Pacific islands such as Palau, Fiji and Tonga. The largest and most populous country in Oceania is Australia, with Sydney being the largest city of both Oceania and Australia.
The first settlers of Australia, New Guinea, and the large islands just to the east arrived more than 60,000 years ago. Oceania was first explored by Europeans from the 16th century onward. Portuguese navigators, between 1512 and 1526, reached the Tanimbar Islands, some of the Caroline Islands and west Papua New Guinea. On his first voyage in the 18th century, James Cook, who later arrived at the highly developed Hawaiian Islands, went to Tahiti and followed the east coast of Australia for the first time. The Pacific front saw major action during the Second World War, mainly between Allied powers the United States and Australia, and Axis power Japan.
The arrival of European settlers in subsequent centuries resulted in a significant alteration in the social and political landscape of Oceania. In more contemporary times there has been increasing discussion on national flags and a desire by some Oceanians to display their distinguishable and individualistic identity. The rock art of Australian Aborigines is the longest continuously practiced artistic tradition in the world. Puncak Jaya in Papua is the highest peak in Oceania at 4,884 metres. Most Oceanian countries have a parliamentary representative democratic multi-party system, with tourism being a large source of income for the Pacific Islands nations.
Definitions of Oceania vary; however, the islands at the geographic extremes of Oceania are generally considered to be the Bonin Islands, a politically integral part of Japan; Hawaii, a state of the United States; Clipperton Island, a possession of France; the Juan Fernández Islands, belonging to Chile; and Macquarie Island, belonging to Australia. (The United Nations has its own geopolitical definition of Oceania, but this consists of discrete political entities, and so excludes the Bonin Islands, Hawaii, Clipperton Island, and the Juan Fernández Islands, along with Easter Island.)
The geographer Conrad Malte-Brun coined the French term "Océanie" in 1812. "Océanie" derives from the Latin word "oceanus", and this in turn from the Greek word "ōkeanós", meaning "ocean". The term "Oceania" is used because, unlike the other continental groupings, it is the ocean that links the parts of the region together.
In some countries (such as Brazil), however, Oceania is still regarded as a continent (Portuguese: "continente") in the sense of "one of the parts of the world", and the concept of Australia as a continent does not exist.
Some geographers group the Australian continental plate with other islands in the Pacific into one "quasi-continent" called Oceania.
Indigenous Australians are the original inhabitants of the Australian continent and nearby islands, who migrated from Africa to Asia around 70,000 years ago and arrived in Australia around 50,000 years ago. They are believed to be among the earliest human migrations out of Africa. Although they likely migrated to Australia through Southeast Asia, they are not demonstrably related to any known Asian or Polynesian population. There is evidence of genetic and linguistic interchange between Australians in the far north and the Austronesian peoples of modern-day New Guinea and the islands, but this may be the result of recent trade and intermarriage.
They reached Tasmania approximately 40,000 years ago by migrating across a land bridge from the mainland that existed during the last ice age. It is believed that the first early human migration to Australia was achieved when this landmass formed part of the Sahul continent, connected to the island of New Guinea via a land bridge. The Torres Strait Islanders are indigenous to the Torres Strait Islands, which are at the northernmost tip of Queensland near Papua New Guinea. The earliest definite human remains found in Australia are that of Mungo Man, which have been dated at about 40,000 years old.
The original inhabitants of the group of islands now named Melanesia were likely the ancestors of the present-day Papuan-speaking people. Migrating from South-East Asia, they appear to have occupied these islands as far east as the main islands in the Solomon Islands archipelago, including Makira and possibly the smaller islands farther to the east.
Particularly along the north coast of New Guinea and in the islands north and east of New Guinea, the Austronesian people, who had migrated into the area somewhat more than 3,000 years ago, came into contact with these pre-existing populations of Papuan-speaking peoples. In the late 20th century, some scholars theorized a long period of interaction, which resulted in many complex changes in genetics, languages, and culture among the peoples.
Micronesia began to be settled several millennia ago, although there are competing theories about the origin and arrival of the first settlers. There are numerous difficulties with conducting archaeological excavations in the islands, due to their size, settlement patterns and storm damage. As a result, much evidence is based on linguistic analysis.
The earliest archaeological traces of civilization have been found on the island of Saipan, dated to 1500 BC or slightly before. The ancestors of the Micronesians settled there over 4,000 years ago. A decentralized chieftain-based system eventually evolved into a more centralized economic and religious culture centered on Yap and Pohnpei. The prehistories of many Micronesian islands such as Yap are not known very well.
The first people of the Northern Mariana Islands navigated to and discovered the islands at some point between 4000 BC and 2000 BC, arriving from South-East Asia. They became known as the Chamorros, and their language was named after them. The ancient Chamorro left a number of megalithic ruins, including Latte stones. The Refaluwasch, or Carolinian, people came to the Marianas in the 1800s from the Caroline Islands. Micronesian colonists gradually settled the Marshall Islands during the 2nd millennium BC, with inter-island navigation made possible using traditional stick charts.
By linguistic, archaeological and human genetic evidence, the Polynesian people are considered a subset of the sea-migrating Austronesian people; tracing Polynesian languages places their prehistoric origins in the Malay Archipelago and, ultimately, in Taiwan. Between about 3000 and 1000 BCE, speakers of Austronesian languages began spreading from Taiwan into Island South-East Asia and onward to the edges of western Micronesia and into Melanesia; these peoples are thought to have arrived in Taiwan through South China about 8,000 years ago.
In the archaeological record there are well-defined traces of this expansion which allow the path it took to be followed and dated with some certainty. It is thought that by roughly 1400 BC, "Lapita Peoples", so-named after their pottery tradition, appeared in the Bismarck Archipelago of north-west Melanesia.
Easter Islanders claimed that a chief, Hotu Matu'a, discovered the island in one or two large canoes with his wife and extended family. They are believed to have been Polynesian. Around 1200, Tahitian explorers discovered and began settling the area. This date range is based on glottochronological calculations and on three radiocarbon dates from charcoal that appears to have been produced during forest-clearance activities. Moreover, a recent study which included radiocarbon dates from what is thought to be very early material suggests that the island was discovered and settled as recently as 1200.
From 1527 to 1595 a number of other large Spanish expeditions crossed the Pacific Ocean, leading to European arrival in the Marshall Islands and Palau in the North Pacific, as well as in Tuvalu, the Marquesas, the Solomon Islands archipelago, the Cook Islands and the Admiralty Islands in the South Pacific.
In the quest for Terra Australis, Spanish explorations in the 17th century, such as the expedition led by the Portuguese navigator Pedro Fernandes de Queirós, reached the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after the navigator Luís Vaz de Torres. Willem Janszoon made the first completely documented European landing in Australia (1606), on the Cape York Peninsula. Abel Janszoon Tasman circumnavigated and landed on parts of the Australian continental coast and discovered Van Diemen's Land (now Tasmania) and New Zealand in 1642, and the Fiji islands. He was the first known European explorer to reach these islands.
On 23 April 1770, British explorer James Cook made his first recorded direct observation of indigenous Australians at Brush Island near Bawley Point. On 29 April, Cook and his crew made their first landfall on the mainland of the continent at a place now known as the Kurnell Peninsula. It was here that James Cook made first contact with an Aboriginal tribe known as the Gweagal. His expedition thus became the first recorded Europeans to encounter Australia's eastern coastline.
In 1789 the Mutiny on the Bounty against William Bligh led to several of the mutineers escaping the Royal Navy and settling on Pitcairn Islands, which later became a British colony. Britain also established colonies in Australia in 1788, New Zealand in 1840 and Fiji in 1872, with much of Oceania becoming part of the British Empire. The Gilbert Islands (now known as Kiribati) and the Ellice Islands (now known as Tuvalu) came under Britain's sphere of influence in the late 19th century.
French Catholic missionaries arrived on Tahiti in 1834; their expulsion in 1836 caused France to send a gunboat in 1838. In 1842, Tahiti and Tahuata were declared a French protectorate, to allow Catholic missionaries to work undisturbed. The capital, Papeete, was founded in 1843. On 24 September 1853, under orders from Napoleon III, Admiral Febvrier Despointes took formal possession of New Caledonia, and Port-de-France (Nouméa) was founded on 25 June 1854.
The Spanish explorer Alonso de Salazar landed in the Marshall Islands in 1529. They were named by Krusenstern after the English explorer John Marshall, who visited them together with Thomas Gilbert in 1788, en route from Botany Bay to Canton (two ships of the First Fleet). The Marshall Islands were claimed by Spain in 1874. Germany established colonies in New Guinea in 1884 and in Samoa in 1900. In 1905 the British government transferred some administrative responsibility over south-east New Guinea to Australia (which renamed the area the "Territory of Papua"), and in 1906 transferred all remaining responsibility to Australia. The United States also expanded into the Pacific, beginning with Baker Island and Howland Island in 1857, and with Hawaii becoming a U.S. territory in 1898. Disagreements between the US, Germany and UK over Samoa led to the Tripartite Convention of 1899.
One of the first land offensives in Oceania was the Occupation of German Samoa in August 1914 by New Zealand forces. The campaign to take Samoa ended without bloodshed after over 1,000 New Zealanders landed on the German colony. Australian forces attacked German New Guinea in September 1914. A company of Australians and a British warship besieged the Germans and their colonial subjects, ending with a German surrender.
The attack on Pearl Harbor, ordered by the Japanese Imperial General Headquarters, was a surprise military strike conducted by the Imperial Japanese Navy against the United States naval base at Pearl Harbor, Hawaii, on the morning of 7 December 1941. The attack led to the United States' entry into World War II. The Japanese subsequently invaded New Guinea, the Solomon Islands and other Pacific islands. The Japanese were turned back at the Battle of the Coral Sea and the Kokoda Track campaign before they were finally defeated in 1945. Some of the most prominent Oceanic battlegrounds were the Battle of Bita Paka, the Solomon Islands campaign, the air raids on Darwin, the Kokoda Track, and the Borneo campaign. The United States fought the Battle of Guam from 21 July to 10 August 1944, to recapture the island from Japanese military occupation.
Australia and New Zealand became dominions in the 20th century, adopting the Statute of Westminster Act in 1942 and 1947 respectively. In 1946, Polynesians were granted French citizenship and the islands' status was changed to an overseas territory; the islands' name was changed in 1957 to "Polynésie Française" (French Polynesia). Hawaii became a U.S. state in 1959. Fiji and Tonga became independent in 1970. On 1 May 1979, in recognition of the evolving political status of the Marshall Islands, the United States recognized the constitution of the Marshall Islands and the establishment of the Government of the Republic of the Marshall Islands. The South Pacific Forum was founded in 1971, which became the Pacific Islands Forum in 2000.
Oceania was originally conceived as the lands of the Pacific Ocean, stretching from the Strait of Malacca to the coast of the Americas. It comprised four regions: "Polynesia", "Micronesia", "Malaysia" (now called the Malay Archipelago), and "Melanesia". Today, parts of three geological continents are included in the term "Oceania": Eurasia, Australia, and Zealandia, as well as the non-continental volcanic islands of the Philippines, Wallacea, and the open Pacific.
Oceania extends to New Guinea in the west, the Bonin Islands in the northwest, the Hawaiian Islands in the northeast, Rapa Nui and Sala y Gómez Island in the east, and Macquarie Island in the south. Not included are the Pacific islands of Taiwan, the Ryukyu Islands, the Japanese archipelago, and the Maluku Islands, all on the margins of Asia, and the Aleutian Islands of North America. In its periphery, Oceania sprawls 28 degrees north to the Bonin Islands in the northern hemisphere, and 55 degrees south to Macquarie Island in the southern hemisphere.
Oceanian islands are of four basic types: continental islands, high islands, coral reefs and uplifted coral platforms. High islands are of volcanic origin, and many contain active volcanoes. Among these are Bougainville, Hawaii, and the Solomon Islands.
Oceania is one of eight terrestrial ecozones, which constitute the major ecological regions of the planet. Related to these concepts are Near Oceania, that part of western Island Melanesia which has been inhabited for tens of millennia, and Remote Oceania which is more recently settled. Although the majority of the Oceanian islands lie in the South Pacific, a few of them are not restricted to the Pacific Ocean – Kangaroo Island and Ashmore and Cartier Islands, for instance, are situated in the Southern Ocean and Indian Ocean, respectively, and Tasmania's west coast faces the Southern Ocean.
The coral reefs of the South Pacific are low-lying structures that have built up on basaltic lava flows under the ocean's surface. One of the most dramatic is the Great Barrier Reef off northeastern Australia with chains of reef patches. A second island type formed of coral is the uplifted coral platform, which is usually slightly larger than the low coral islands. Examples include Banaba (formerly Ocean Island) and Makatea in the Tuamotu group of French Polynesia.
Micronesia, which lies north of the equator and west of the International Date Line, includes the Mariana Islands in the northwest, the Caroline Islands in the center, the Marshall Islands to the east and the islands of Kiribati in the southeast.
Melanesia, to the southwest, includes New Guinea, the world's second largest island after Greenland and by far the largest of the Pacific islands. The other main Melanesian groups from north to south are the Bismarck Archipelago, the Solomon Islands archipelago, Santa Cruz, Vanuatu, Fiji and New Caledonia.
Polynesia, stretching from Hawaii in the north to New Zealand in the south, also encompasses Tuvalu, Tokelau, Samoa, Tonga and the Kermadec Islands to the west, the Cook Islands, Society Islands and Austral Islands in the center, and the Marquesas Islands, Tuamotu, Mangareva Islands, and Easter Island to the east.
Australasia comprises Australia, New Zealand, the island of New Guinea, and neighbouring islands in the Pacific Ocean. Along with India, most of Australasia lies on the Indo-Australian Plate, with Australasia occupying the plate's southern area. It is flanked by the Indian Ocean to the west and the Southern Ocean to the south.
The Pacific Plate, which makes up most of Oceania, is an oceanic tectonic plate that lies beneath the Pacific Ocean. At , it is the largest tectonic plate. The plate contains an interior hot spot that forms the Hawaiian Islands, and it is almost entirely oceanic crust. Its oldest crust, now disappearing through the plate-tectonic cycle, dates to the early Cretaceous (145 to 137 million years ago).
Australia, being part of the Indo-Australian plate, is the lowest, flattest, and oldest landmass on Earth and it has had a relatively stable geological history. Geological forces such as tectonic uplift of mountain ranges or clashes between tectonic plates occurred mainly in Australia's early history, when it was still a part of Gondwana. Australia is situated in the middle of the tectonic plate, and therefore currently has no active volcanism.
The geology of New Zealand is noted for its volcanic activity, earthquakes and geothermal areas because of its position on the boundary of the Australian and Pacific Plates. Much of the basement rock of New Zealand was once part of the supercontinent of Gondwana, along with South America, Africa, Madagascar, India, Antarctica and Australia. The rocks that now form the continent of Zealandia were nestled between Eastern Australia and Western Antarctica.
The Australia–New Zealand continental fragment of Gondwana split from the rest of Gondwana in the late Cretaceous (95–90 Ma). By 75 Ma, Zealandia was essentially separate from Australia and Antarctica, although only shallow seas might have separated Zealandia and Australia in the north. The Tasman Sea and part of Zealandia then locked together with Australia to form the Australian Plate (40 Ma), and a new plate boundary was created between the Australian Plate and the Pacific Plate.
Most islands in the Pacific are high islands (volcanic islands), such as Easter Island, American Samoa and Fiji, among others, with peaks up to 1300 m rising abruptly from the shore. The Northwestern Hawaiian Islands were formed approximately 7 to 30 million years ago, as shield volcanoes over the same volcanic hotspot that formed the Emperor Seamounts to the north and the Main Hawaiian Islands to the south. Hawaii's tallest mountain, Mauna Kea, is above mean sea level.
The most diverse country of Oceania when it comes to the environment is Australia, with tropical rainforests in the north-east, mountain ranges in the south-east, south-west and east, and dry desert in the centre. Desert or semi-arid land commonly known as the outback makes up by far the largest portion of land. The coastal uplands and a belt of Brigalow grasslands lie between the coast and the mountains, while inland of the dividing range are large areas of grassland. The northernmost point of the east coast is the tropical-rainforested Cape York Peninsula.
Prominent features of the Australian flora are adaptations to aridity and fire, which include scleromorphy and serotiny. These adaptations are common in species from the large and well-known families Proteaceae ("Banksia"), Myrtaceae ("Eucalyptus" – gum trees), and Fabaceae ("Acacia" – wattle). The flora of Fiji, the Solomon Islands, Vanuatu and New Caledonia is tropical dry forest, with tropical vegetation that includes palm trees, "Premna protrusa", "Psydrax odorata", "Gyrocarpus americanus" and "Derris trifoliata".
New Zealand's landscape ranges from the fjord-like sounds of the southwest to the tropical beaches of the far north. South Island is dominated by the Southern Alps. There are 18 peaks of more than 3000 metres (9800 ft) in the South Island. All summits over 2,900 m are within the Southern Alps, a chain that forms the backbone of the South Island; the highest peak of which is Aoraki / Mount Cook, at . Earthquakes are common, though usually not severe, averaging 3,000 per year. There is a wide variety of native trees, adapted to all the various micro-climates in New Zealand.
In Hawaii, one endemic plant, "Brighamia", now requires hand-pollination because its natural pollinator is presumed to be extinct. The two species of "Brighamia" – "B. rockii" and "B. insignis" – are represented in the wild by around 120 individual plants. To ensure these plants set seed, biologists rappel down cliffs to brush pollen onto their stigmas.
The aptly named Pacific kingfisher is found in the Pacific Islands, as are the red-vented bulbul, Polynesian starling, brown goshawk, Pacific swallow and the cardinal myzomela, among others. Birds breeding on Pitcairn include the fairy tern, common noddy and red-tailed tropicbird. The Pitcairn reed warbler, endemic to Pitcairn Island, was added to the endangered species list in 2008.
Native to Hawaii is the Hawaiian crow, which has been extinct in the wild since 2002. The brown tree snake is native to northern and eastern coasts of Australia, Papua New Guinea, Guam and Solomon Islands. Native to Australia, New Guinea and proximate islands are birds of paradise, honeyeaters, Australasian treecreeper, Australasian robin, kingfishers, butcherbirds and bowerbirds.
A unique feature of Australia's fauna is the relative scarcity of native placental mammals and the dominance of the marsupials – a group of mammals that raise their young in a pouch, including the macropods, possums and dasyuromorphs. The passerines of Australia, also known as songbirds or perching birds, include wrens, the magpie group, thornbills, corvids, pardalotes and lyrebirds. Predominant bird species in the country include the Australian magpie, the Australian raven, the pied currawong, the crested pigeon and the laughing kookaburra. The koala, emu, platypus and kangaroo are national animals of Australia, and the Tasmanian devil is also one of the country's well-known animals. The goanna is a predatory lizard native to the Australian mainland.
The birds of New Zealand evolved into an avifauna that included a large number of endemic species. As an island archipelago, New Zealand accumulated bird diversity, and when Captain James Cook arrived in the 1770s he noted that the bird song was deafening. The mix includes species with unusual biology, such as the kakapo, which is the world's only flightless, nocturnal, lek-breeding parrot, but also many species that are similar to those of neighboring land areas. Some of the better-known and more distinctive bird species in New Zealand are the kiwi, kea, takahe, kakapo, mohua, tui and the bellbird. The tuatara is a notable reptile endemic to New Zealand.
The Pacific Islands are dominated by tropical rainforest and tropical savanna climates. In the tropical and subtropical Pacific, the El Niño Southern Oscillation (ENSO) affects weather conditions. In the tropical western Pacific, the monsoon and the related wet season during the summer months contrast with dry winds in the winter, which blow over the ocean from the Asian landmass. November is the only month in which all the tropical cyclone basins are active.
To the southwest of the region, in the Australian landmass, the climate is mostly desert or semi-arid, with the southern coastal corners having a temperate climate, such as oceanic and humid subtropical climate in the east coast and Mediterranean climate in the west. The northern parts of the country have a tropical climate. Snow falls frequently on the highlands near the east coast, in the states of Victoria, New South Wales, Tasmania and in the Australian Capital Territory.
Most regions of New Zealand belong to the temperate zone with a maritime climate (Köppen climate classification: Cfb) characterised by four distinct seasons. Conditions vary from extremely wet on the West Coast of the South Island to almost semi-arid in Central Otago and subtropical in Northland. Snow falls in New Zealand's South Island and at higher altitudes in the North Island. It is extremely rare at sea level in the North Island.
Hawaii, although in the tropics, experiences many different climates, depending on latitude and geography. The island of Hawaii, for example, hosts four of the five Köppen climate groups (tropical, arid, temperate and polar) within a small area. The Hawaiian Islands receive most of their precipitation during the winter months (October to April). A few islands in the northwest, such as Guam, are susceptible to typhoons in the wet season.
The highest recorded temperature in Oceania occurred in Oodnadatta, South Australia (2 January 1960), where the temperature reached . The lowest temperature ever recorded in Oceania was , at Ranfurly in Otago in 1903, with a more recent temperature of recorded in 1995 in nearby Ophir. Pohnpei of the Senyavin Islands in Micronesia is the wettest settlement in Oceania, and one of the wettest places on earth, with annual recorded rainfall exceeding each year in certain mountainous locations. The Big Bog on the island of Maui is the wettest place, receiving an average each year.
The linked map below shows the exclusive economic zones (EEZs) of the islands of Oceania and neighbouring areas, as a guide to the following table (there are few land boundaries that can be drawn on a map of the Pacific at this scale).
The demographic table below shows the subregions and countries of geopolitical Oceania. The countries and territories in this table are categorised according to the scheme for geographic subregions used by the United Nations. The information shown follows sources in cross-referenced articles; where sources differ, provisos have been clearly indicated. These territories and regions are subject to various additional categorisations, depending on the source and purpose of each description.
The predominant religion in Oceania is Christianity (73%). A 2011 survey found that 92% in Melanesia, 93% in Micronesia and 96% in Polynesia described themselves as Christians. Traditional religions are often animist, and prevalent among traditional tribes is the belief in spirits ("masalai" in Tok Pisin) representing natural forces. In the 2018 census, 37% of New Zealanders affiliated themselves with Christianity and 48% declared no religion. In the 2016 Census, 52% of the Australian population declared some variety of Christianity and 30% stated "no religion".
In recent Australian and New Zealand censuses, large proportions of the population say they belong to "no religion" (which includes atheism, agnosticism, deism, secular humanism). In Tonga, everyday life is heavily influenced by Polynesian traditions and especially by the Christian faith. The Ahmadiyya mosque in Marshall Islands is the only mosque in Micronesia. Another one in Tuvalu belongs to the same sect. The Bahá'í House of Worship in Tiapapata, Samoa, is one of seven designations administered in the Bahá'í Faith.
Other religions in the region include Islam, Buddhism and Hinduism, which are prominent minority religions in Australia and New Zealand. Judaism, Sikhism and Jainism are also present. Sir Isaac Isaacs was the first Australian-born Governor-General of Australia and the first Jewish vice-regal representative in the British Empire. The Prince Philip Movement is followed around Yaohnanen village on the southern island of Tanna in Vanuatu.
Native languages of Oceania fall into three major geographic groups: the large Austronesian language family, the Papuan languages of New Guinea and nearby islands, and the Aboriginal Australian languages.
Colonial languages include English in Australia, New Zealand, Hawaii, and many other territories; French in New Caledonia, French Polynesia, Wallis and Futuna, and Vanuatu; Japanese in the Bonin Islands; and Spanish in the Galápagos Islands and on Easter Island. There are also creoles formed from the interaction of Malay or the colonial languages with indigenous languages, such as Tok Pisin, Bislama, Chavacano, various Malay trade and creole languages, Hawaiian Pidgin, Norfuk, and Pitkern. Contact between Austronesian and Papuan languages resulted in several instances of mixed languages such as Maisin.
Immigrants brought their own languages to the region, such as Mandarin, Italian, Arabic, Polish, Hindi, German, Spanish, Korean, Cantonese and Greek, among others, namely in Australia and New Zealand, or Fiji Hindi in Fiji.
The most multicultural areas in Oceania, which have a high degree of immigration, are Australia, New Zealand and Hawaii. Since 1945, more than 7 million people have settled in Australia. From the late 1970s, there was a significant increase in immigration from Asian and other non-European countries, making Australia a multicultural country.
Sydney is the most multicultural city in Oceania, having more than 250 different languages spoken with about 40 percent of residents speaking a language other than English at home. Furthermore, 36 percent of the population reported having been born overseas, with top countries being Italy, Lebanon, Vietnam and Iraq, among others. Melbourne is also fairly multicultural, having the largest Greek-speaking population outside of Europe, and the second largest Asian population in Australia after Sydney.
European migration to New Zealand provided a major influx following the signing of the Treaty of Waitangi in 1840. Subsequent immigration has been chiefly from the British Isles, but also from continental Europe, the Pacific, the Americas and Asia. Auckland is home to over half (51.6 percent) of New Zealand's overseas-born population, including 72 percent of the country's Pacific Island-born population, 64 percent of its Asian-born population, and 56 percent of its Middle Eastern- and African-born population.
Hawaii is a majority-minority state. Chinese workers on Western trading ships settled in Hawaii starting in 1789. In 1820, the first American missionaries arrived to preach Christianity and teach the Hawaiians Western ways. A large proportion of Hawaii's population now has Asian ancestry – especially Filipino, Japanese, Korean and Chinese. Many are descendants of immigrants brought to work on the sugarcane plantations in the mid-to-late 19th century. Almost 13,000 Portuguese immigrants had arrived by 1899; they also worked on the sugarcane plantations. Puerto Rican immigration to Hawaii began in 1899, when Puerto Rico's sugar industry was devastated by two hurricanes, causing a worldwide shortage of sugar and a huge demand for sugar from Hawaii.
Between 2001 and 2007 Australia's Pacific Solution policy transferred asylum seekers to several Pacific nations, including the Nauru detention centre. Australia, New Zealand and other nations took part in the Regional Assistance Mission to Solomon Islands between 2003 and 2017 after a request for aid.
Archaeology, linguistics, and existing genetic studies indicate that Oceania was settled by two major waves of migration. The first migration (of Australo-Melanesians) took place approximately 40 to 80 thousand years ago, and these migrants, Papuans, colonised much of Near Oceania. Approximately 3.5 thousand years ago, a second expansion of Austronesian speakers arrived in Near Oceania, and the descendants of these people spread to the far corners of the Pacific, colonising Remote Oceania.
Mitochondrial DNA (mtDNA) studies quantify the magnitude of the Austronesian expansion and demonstrate the homogenising effect of this expansion. With regard to Papuan influence, autochthonous haplogroups support the hypothesis of a long history in Near Oceania, with some lineages suggesting a time depth of 60 thousand years. Santa Cruz, a population located in Remote Oceania, is an anomaly, with extreme frequencies of autochthonous haplogroups of Near Oceanian origin.
Large areas of New Guinea are unexplored by scientists and anthropologists due to extensive forestation and mountainous terrain. Known indigenous tribes in Papua New Guinea have very little contact with local authorities aside from the authorities knowing who they are. Many remain preliterate, and, at the national or international level, the names of tribes and information about them are extremely hard to obtain. The Indonesian provinces of Papua and West Papua on the island of New Guinea are home to an estimated 44 uncontacted tribal groups.
Australia and New Zealand are the only developed nations in the region; the economy of Australia is by far the largest and most dominant in the region and one of the largest in the world. Australia's per-capita GDP is higher than that of the UK, Canada, Germany, and France in terms of purchasing power parity. New Zealand is also one of the most globalised economies and depends greatly on international trade.
The Australian Securities Exchange in Sydney is the largest stock exchange in Australia and in the South Pacific. New Zealand is the 53rd-largest national economy in the world measured by nominal gross domestic product (GDP) and 68th-largest in the world measured by purchasing power parity (PPP). In 2012, Australia was the 12th largest national economy by nominal GDP and the 19th-largest measured by PPP-adjusted GDP.
The Mercer Quality of Living Survey ranks Sydney tenth in the world in terms of quality of living, making it one of the most liveable cities. It is classified as an Alpha+ World City by GaWC. Melbourne also ranks highly on the world's most liveable city list, and is a leading financial centre in the Asia-Pacific region. Auckland and Wellington, in New Zealand, are frequently ranked among the world's most liveable cities, with Auckland ranked 3rd according to the Mercer Quality of Living Survey.
A majority of people living in Australia and, to a lesser extent, New Zealand work in the mining, electrical and manufacturing sectors. Australia has the largest manufacturing sector in the region, producing cars, electrical equipment, machinery and clothing.
The overwhelming majority of people living in the Pacific islands work in the service industry, which includes tourism, education and financial services. The smallest Pacific nations rely on trade with Australia, New Zealand and the United States for exporting goods and for accessing other products. Australia and New Zealand's trading arrangements are known as Closer Economic Relations. Australia and New Zealand, along with other countries, are members of Asia-Pacific Economic Cooperation (APEC) and the East Asia Summit (EAS), which may become trade blocs in the future, particularly the EAS.
The main produce from the Pacific is copra or coconut, but timber, beef, palm oil, cocoa, sugar and ginger are also commonly grown across the tropics of the Pacific. Fishing provides a major industry for many of the smaller nations in the Pacific, although many fishing areas are exploited by other larger countries, namely Japan. Natural resources, such as lead, zinc, nickel and gold, are mined in Australia and Solomon Islands. Oceania's largest export markets include Japan, China, the United States, India, South Korea and the European Union.
Endowed with forest, mineral, and fish resources, Fiji is one of the most developed of the Pacific island economies, though it remains a developing country with a large subsistence agriculture sector. Agriculture accounts for 18% of gross domestic product, although it employed some 70% of the workforce as of 2001. Sugar exports and the growing tourist industry are the major sources of foreign exchange. Sugar cane processing makes up one-third of industrial activity. Coconuts, ginger, and copra are also significant.
The history of Hawaii's economy can be traced through a succession of dominant industries: sandalwood, whaling, sugarcane, pineapple, the military, tourism and education. Hawaiian exports include food and clothing. These industries play a small role in the Hawaiian economy, due to the shipping distance to viable markets, such as the West Coast of the contiguous U.S. The state's food exports include coffee, macadamia nuts, pineapple, livestock, sugarcane and honey. Honolulu ranks high on world livability rankings, and has also been ranked as the 2nd-safest city in the U.S.
Tourists mostly come from Japan, the United Kingdom and the United States. Fiji currently draws almost half a million tourists each year, more than a quarter of them from Australia. Tourism has contributed $1 billion or more to Fiji's economy annually since 1995, though the Fijian government likely underestimates these figures because of the informal economy within the tourism industry.
Vanuatu is widely recognised as one of the premier vacation destinations for scuba divers wishing to explore coral reefs of the South Pacific region. Tourism has been promoted, in part, by Vanuatu being the site of several reality-TV shows. The ninth season of the reality TV series "Survivor" was filmed on Vanuatu, entitled "Survivor: Vanuatu – Islands of Fire". Two years later, Australia's "Celebrity Survivor" was filmed at the same location used by the US version.
Tourism in Australia is an important component of the Australian economy. In the financial year 2014/15, tourism represented 3% of Australia's GDP contributing A$47.5 billion to the national economy. In 2015, there were 7.4 million visitor arrivals. Popular Australian destinations include the Sydney Harbour (Sydney Opera House, Sydney Harbour Bridge, Royal Botanic Garden, etc.), Gold Coast (theme parks such as Warner Bros. Movie World, Dreamworld and Sea World), Walls of Jerusalem National Park and Mount Field National Park in Tasmania, Royal Exhibition Building in Melbourne, the Great Barrier Reef in Queensland, The Twelve Apostles in Victoria, Uluru (Ayers Rock) and the Australian outback.
Tourism in New Zealand contributed NZ$7.3 billion (or 4%) to the country's GDP in 2013, and directly supported 110,800 full-time equivalent jobs (nearly 6% of New Zealand's workforce). International tourist spending accounted for 16% of New Zealand's export earnings (nearly NZ$10 billion). International and domestic tourism contributes, in total, NZ$24 billion to New Zealand's economy every year. Tourism New Zealand, the country's official tourism agency, actively promotes the country as a destination worldwide. Milford Sound in the South Island is acclaimed as New Zealand's most famous tourist destination.
In 2003 alone, according to state government data, there were over 6.4 million visitors to the Hawaiian Islands, with expenditures of over $10.6 billion. Due to the mild year-round weather, tourist travel is popular throughout the year. In 2011, Hawaii saw increasing arrivals, with the share of foreign tourists from Canada, Australia and China rising 13%, 24% and 21% respectively from 2010.
Australia is a federal parliamentary constitutional monarchy with Elizabeth II at its apex as the Queen of Australia, a role that is distinct from her position as monarch of the other Commonwealth realms. The Queen is represented in Australia by the Governor-General at the federal level and by the Governors at the state level, who by convention act on the advice of her ministers. There are two major political groups that usually form government, federally and in the states: the Australian Labor Party and the Coalition which is a formal grouping of the Liberal Party and its minor partner, the National Party. Within Australian political culture, the Coalition is considered centre-right and the Labor Party is considered centre-left. The Australian Defence Force is by far the largest military force in Oceania.
New Zealand is a constitutional monarchy with a parliamentary democracy, although its constitution is not codified. Elizabeth II is the Queen of New Zealand and the head of state. The Queen is represented by the Governor-General, whom she appoints on the advice of the Prime Minister. The New Zealand Parliament holds legislative power and consists of the Queen and the House of Representatives. A parliamentary general election must be called no later than three years after the previous election. New Zealand is identified as one of the world's most stable and well-governed states, with high government transparency and among the lowest perceived levels of corruption.
In Samoan politics, the Prime Minister of Samoa is the head of government. The 1960 constitution, which formally came into force with independence from New Zealand in 1962, builds on the British pattern of parliamentary democracy, modified to take account of Samoan customs. The national government ("malo") generally controls the legislative assembly. Politics of Tonga takes place in a framework of a constitutional monarchy, whereby the King is the Head of State.
Fiji has a multiparty system with the Prime Minister of Fiji as head of government. The executive power is exercised by the government. Legislative power is vested in both the government and the Parliament of Fiji. Fiji's head of state is the President, who is elected by the Parliament of Fiji, after nomination by the Prime Minister or the Leader of the Opposition, for a three-year term.
In the politics of Papua New Guinea the Prime Minister is the head of government. In Kiribati, the President of Kiribati is the head of government, and of a multi-party system.
New Caledonia remains an integral part of the French Republic. Inhabitants of New Caledonia are French citizens and carry French passports. They take part in the legislative and presidential French elections. New Caledonia sends two representatives to the French National Assembly and two senators to the French Senate.
Hawaii is dominated by the Democratic Party. As codified in the Constitution of Hawaii, there are three branches of government: executive, legislative and judicial. The governor is elected statewide. The lieutenant governor acts as the Secretary of State. The governor and lieutenant governor oversee twenty agencies and departments from offices in the State Capitol.
Since 1788, the primary influence behind Australian culture has been Anglo-Celtic Western culture, with some Indigenous influences. The divergence and evolution that has occurred in the ensuing centuries has resulted in a distinctive Australian culture. Since the mid-20th century, American popular culture has strongly influenced Australia, particularly through television and cinema. Other cultural influences come from neighbouring Asian countries, and through large-scale immigration from non-English-speaking nations. "The Story of the Kelly Gang" (1906), the world's first feature-length film, spurred a boom in Australian cinema during the silent film era. The Australian Museum in Sydney and the National Gallery of Victoria in Melbourne are the oldest and largest museums in Oceania. Sydney's New Year's Eve celebrations are the largest in Oceania.
Australia is also known for its cafe and coffee culture in urban centres. Australia and New Zealand were responsible for the flat white coffee. Most Indigenous Australian tribal groups subsisted on a simple hunter-gatherer diet of native fauna and flora, otherwise called bush tucker. The first settlers introduced British food to the continent, much of which is now considered typical Australian food, such as the Sunday roast. Multicultural immigration transformed Australian cuisine; post-World War II European migrants, particularly from the Mediterranean, helped to build a thriving Australian coffee culture, and the influence of Asian cultures has led to Australian variants of their staple foods, such as the Chinese-inspired dim sim and Chiko Roll.
The music of Hawaii includes traditional and popular styles, ranging from native Hawaiian folk music to modern rock and hip hop. Hawaii's musical contributions to the music of the United States are out of proportion to the state's small size. Styles such as slack-key guitar are well known worldwide, while Hawaiian-tinged music is a frequent part of Hollywood soundtracks. Hawaii also made a major contribution to country music with the introduction of the steel guitar. The Hawaiian religion is polytheistic and animistic, with a belief in many deities and spirits, including the belief that spirits are found in non-human beings and objects such as animals, the waves, and the sky.
The cuisine of Hawaii is a fusion of many foods brought by immigrants to the Hawaiian Islands, including the earliest Polynesians and Native Hawaiian cuisine, and American, Chinese, Filipino, Japanese, Korean, Polynesian and Portuguese origins. Native Hawaiian musician and Hawaiian sovereignty activist Israel Kamakawiwoʻole, famous for his medley of "Somewhere Over the Rainbow/What a Wonderful World", was named "The Voice of Hawaii" by NPR in 2010 in its 50 great voices series.
The culture of New Zealand is a Western culture influenced by the cultural input of the indigenous Māori and the various waves of multi-ethnic migration which followed the British colonisation of New Zealand. Māori people constitute one of the major cultures of Polynesia. The country's culture has also been broadened by globalisation and immigration from the Pacific Islands, East Asia and South Asia. New Zealand marks two national days of remembrance, Waitangi Day and ANZAC Day, and also celebrates holidays during or close to the anniversaries of the founding dates of each province.
The New Zealand recording industry began to develop from 1940 onwards and many New Zealand musicians have obtained success in Britain and the United States. Some artists release Māori language songs and the Māori tradition-based art of "kapa haka" (song and dance) has made a resurgence. The country's diverse scenery and compact size, plus government incentives, have encouraged some producers to film big budget movies in New Zealand, including "Avatar", "The Lord of the Rings", "The Hobbit", "The Chronicles of Narnia", "King Kong" and "The Last Samurai".
The national cuisine has been described as Pacific Rim, incorporating the native Māori cuisine and diverse culinary traditions introduced by settlers and immigrants from Europe, Polynesia and Asia. New Zealand yields produce from land and sea – most crops and livestock, such as maize, potatoes and pigs, were gradually introduced by the early European settlers. Distinctive ingredients or dishes include lamb, salmon, koura (crayfish), dredge oysters, whitebait, paua (abalone), mussels, scallops, pipi and tuatua (both are types of New Zealand shellfish), kumara (sweet potato), kiwifruit, tamarillo and pavlova (considered a national dish).
The fa'a Samoa, or traditional Samoan way, remains a strong force in Samoan life and politics. Despite centuries of European influence, Samoa maintains its historical customs, social and political systems, and language. Cultural customs such as the Samoa 'ava ceremony are significant and solemn rituals at important occasions including the bestowal of "matai" chiefly titles. Items of great cultural value include the finely woven "'ie toga".
The Samoan word for dance is "siva", which consists of unique gentle movements of the body in time to music and which tell a story. Samoan male dances can be more snappy. The "sasa" is also a traditional dance where rows of dancers perform rapid synchronised movements in time to the rhythm of wooden drums "(pate)" or rolled mats. Another dance performed by males is called the "fa'ataupati" or the slap dance, creating rhythmic sounds by slapping different parts of the body. As with other Polynesian cultures (Hawaiian, Tahitian and Māori) with significant and unique tattoos, Samoans have two gender specific and culturally significant tattoos.
The artistic creations of native Oceanians vary greatly throughout the cultures and regions. The subject matter typically carries themes of fertility or the supernatural.
Art of Oceania properly encompasses the artistic traditions of the people indigenous to Australia and the Pacific Islands. Petroglyphs, tattooing, painting, wood carving, stone carving and textile work are common art forms. These early peoples lacked a writing system and made works on perishable materials, so few records of them exist from this time.
Indigenous Australian rock art is the oldest and richest unbroken tradition of art in the world, dating as far back as 60,000 years and spread across hundreds of thousands of sites. These rock paintings served several functions. Some were used in magic, others to increase animal populations for hunting, while some were simply for amusement. Sculpture in Oceania first appears on New Guinea as a series of stone figures found throughout the island, but mostly in mountainous highlands. Establishing a chronological timeframe for these pieces in most cases is difficult, but one has been dated to 1500 BC.
By 1500 BC the Lapita culture, descendants of the second wave, would begin to expand and spread into the more remote islands. At around the same time, art began to appear in New Guinea, including the earliest examples of sculpture in Oceania. Starting around 1100 AD, the people of Easter Island would begin construction of nearly 900 moai (large stone statues). At about 1200 AD, the people of Pohnpei, a Micronesian island, would embark on another megalithic construction, building Nan Madol, a city of artificial islands and a system of canals. Hawaiian art includes wood carvings, feather work, petroglyphs, bark cloth (called kapa in Hawaiian and tapa elsewhere in the Pacific) and tattoos. Native Hawaiians had neither metal nor woven cloth.
Rugby union is one of the region's most prominent sports, and is the national sport of New Zealand, Samoa, Fiji and Tonga. The most popular sport in Australia is cricket, the most popular sport among Australian women is netball, while Australian rules football is the most popular sport in terms of spectatorship and television ratings. Rugby is the most popular sport among New Zealanders. In Papua New Guinea, the most popular sport is rugby league.
Australian rules football is the national sport in Nauru and is the most popular football code in Australia in terms of attendance. It has a large following in Papua New Guinea, where it is the second most popular sport after rugby league. It attracts significant attention across New Zealand and the Pacific Islands. Fiji's sevens team is one of the most successful in the world, as is New Zealand's.
Currently Vanuatu is the only country in Oceania to call association football its national sport. However, it is also the most popular sport in Kiribati, Solomon Islands and Tuvalu, and has a significant (and growing) popularity in Australia. In 2006 Australia joined the Asian Football Confederation and qualified for the 2010, 2014 and 2018 World Cups as an Asian entrant.
Australia has hosted two Summer Olympics: Melbourne 1956 and Sydney 2000. Australia has also hosted five editions of the Commonwealth Games (Sydney 1938, Perth 1962, Brisbane 1982, Melbourne 2006, Gold Coast 2018), while New Zealand has hosted the Commonwealth Games three times: Auckland 1950, Christchurch 1974 and Auckland 1990. The Pacific Games (formerly known as the South Pacific Games) is a multi-sport event, much like the Olympics but on a much smaller scale, with participation exclusively from countries around the Pacific. It is held every four years and began in 1963.
Australia and New Zealand competed in the games for the first time in 2015. | https://en.wikipedia.org/wiki?curid=22621 |
Combined oral contraceptive pill
The combined oral contraceptive pill (COCP), often referred to as the birth control pill or colloquially as "the pill", is a type of birth control that is designed to be taken orally by women. It includes a combination of an estrogen (usually ethinylestradiol) and a progestogen (specifically a progestin). When taken correctly, it alters the menstrual cycle to eliminate ovulation and prevent pregnancy.
They were first approved for contraceptive use in the United States in 1960, and remain a very popular form of birth control. They are currently used by more than 100 million women worldwide and by almost 12 million women in the United States. From 2015 to 2017, 12.6% of women aged 15–49 in the US reported using oral contraception, making it the second most common method of contraception in this age range, with female sterilization being the most common method. Use varies widely by country, age, education, and marital status. One third of women aged 16–49 in the United Kingdom currently use either the combined pill or the progestogen-only pill (POP), compared with less than 3% of women in Japan (as of 1950–2014).
Two forms of combined oral contraceptives are on the World Health Organization's List of Essential Medicines, the most important medications needed in a basic health system. The pill was a catalyst for the sexual revolution.
Combined oral contraceptive pills are a type of oral medication that is designed to be taken every day, at the same time of day, in order to prevent pregnancy. There are many different formulations or brands, but the average pack is designed to be taken over a 28-day period, or cycle. For the first 21 days of the cycle, users take a daily pill that contains hormones (estrogen and progestogen). The last 7 days of the cycle are hormone free days. Some packets only contain 21 pills and users are then advised to take no pills for the following week. Other packets contain 7 additional placebo pills, or biologically inactive pills. Some newer formulations have 24 days of active hormone pills, followed by 4 days of placebo (examples include Yaz 28 and Loestrin 24 Fe) or even 84 days of active hormone pills, followed by 7 days of placebo pills (Seasonale). A woman on the pill will have a withdrawal bleed sometime during her placebo pill or no pill days, and is still protected from pregnancy during this time. Then after 28 days, or 91 days depending on which type a person is using, users start a new pack and a new cycle.
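A quick check of the regimen arithmetic described above (this simply restates the pack structures already named; no additional formulations are implied):

$$21 + 7 = 28, \qquad 24 + 4 = 28, \qquad 84 + 7 = 91$$

So the 21/7 and 24/4 packs both span a 28-day cycle, while the 84/7 pack spans a 91-day cycle, matching the "28 days, or 91 days" restart points mentioned above.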
If used exactly as instructed, the estimated risk of getting pregnant is 0.3%, or about 3 in 1000 women on COCPs will become pregnant within one year. However, typical use is often not exact due to timing errors, forgotten pills, or unwanted side effects. With typical use, the estimated risk of getting pregnant is about 9%, or about 9 in 100 women on COCPs will become pregnant in one year. The perfect use failure rate is based on a review of pregnancy rates in clinical trials, while the typical use failure rate is based on a weighted average of estimates from the 1995 and 2002 U.S. National Surveys of Family Growth (NSFG), corrected for underreporting of abortions.
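As a rough illustration of how these annual rates compound (a simplified model that assumes the failure probability $p$ is constant and independent from year to year, an assumption not made by the source itself), the probability of at least one pregnancy over $n$ years of use is

$$P_n = 1 - (1 - p)^n$$

With typical use ($p = 0.09$) this gives $1 - 0.91^3 \approx 0.25$, roughly a one-in-four chance over three years; with perfect use ($p = 0.003$) it gives $1 - 0.997^3 \approx 0.009$, or about 0.9%.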
Several factors account for typical use effectiveness being lower than perfect use effectiveness:
For instance, someone using oral forms of hormonal birth control might be given incorrect information by a health care provider as to the frequency of intake, forget to take the pill one day, or simply not go to the pharmacy on time to renew the prescription.
COCPs provide effective contraception from the very first pill if started within five days of the beginning of the menstrual cycle (within five days of the first day of menstruation). If started at any other time in the menstrual cycle, COCPs provide effective contraception only after 7 consecutive days use of active pills, so a backup method of contraception (such as condoms) must be used until active pills have been taken for 7 consecutive days. COCPs should be taken at approximately the same time every day.
The effectiveness of the combined oral contraceptive pill appears to be similar whether the active pills are taken continuously for prolonged periods of time or if they are taken for 21 active days and 7 days as placebo.
Contraceptive efficacy may be impaired by missed active pills, by interacting drugs that reduce contraceptive hormone levels, or by illness (such as vomiting or diarrhea) that prevents absorption of the pill.
In any of these instances, a backup method should be used until consistent use of active pills (for 7 consecutive days) has resumed, the interacting drug has been discontinued, or the illness has resolved.
According to CDC guidelines, a pill is only considered "missed" if 24 hours or more have passed since the last pill taken. If less than 24 hours have passed, the pill is considered "late".
The role of the placebo pills is two-fold: to allow the user to continue the routine of taking a pill every day and to simulate the average menstrual cycle. By continuing to take a pill every day, users remain in the daily habit even during the week without hormones. Failure to take pills during the placebo week does not impact the effectiveness of the pill, provided that daily ingestion of active pills is resumed at the end of the week.
The placebo, or hormone-free, week in the 28-day pill package simulates an average menstrual cycle, though the hormonal events during a pill cycle are significantly different from those of a normal ovulatory menstrual cycle. Because the pill suppresses ovulation (to be discussed more in the Mechanism of Action section), birth control users do not have true menstrual periods. Instead, it is the lack of hormones for a week that causes a withdrawal bleed. The withdrawal bleeding that occurs during the break from active pills has been thought to be reassuring, a physical confirmation of not being pregnant. The withdrawal bleeding is also predictable. Unexpected breakthrough bleeding can be a possible side effect of longer term active regimens.
Since it is not uncommon for menstruating women to become anemic, some placebo pills may contain an iron supplement. This replenishes iron stores that may become depleted during menstruation.
If the pill formulation is monophasic, meaning each hormonal pill contains a fixed dose of hormones, it is possible to skip withdrawal bleeding and still remain protected against conception by skipping the placebo pills altogether and starting directly with the next packet. Attempting this with bi- or tri-phasic pill formulations carries an increased risk of breakthrough bleeding and may be undesirable. It will not, however, increase the risk of getting pregnant.
Starting in 2003, women have also been able to use a three-month version of the pill. Similar to the effect of using a constant-dosage formulation and skipping the placebo weeks for three months, Seasonale gives the benefit of less frequent periods, at the potential drawback of breakthrough bleeding. Seasonique is another version in which the placebo week every three months is replaced with a week of low-dose estrogen.
A version of the combined pill has also been packaged to completely eliminate placebo pills and withdrawal bleeds. Marketed as Anya or Lybrel, this formulation has been studied: after seven months, 71% of users no longer had any breakthrough bleeding, the most common side effect of going longer periods of time without breaks from active pills.
While more research needs to be done to assess the long-term safety of using COCPs continuously, studies have shown there may be no difference in short-term adverse effects when comparing continuous use versus cyclic use of birth control pills.
The hormones in the pill have also been used to treat other medical conditions, such as polycystic ovary syndrome (PCOS), endometriosis, adenomyosis, acne, hirsutism, amenorrhea, menstrual cramps, menstrual migraines, menorrhagia (excessive menstrual bleeding), menstruation-related or fibroid-related anemia and dysmenorrhea (painful menstruation). Besides acne, no oral contraceptives have been approved by the U.S. FDA for the previously mentioned uses despite extensive use for these conditions.
PCOS, or polycystic ovary syndrome, is a syndrome that is caused by hormonal imbalances. Women with PCOS often have higher-than-normal levels of estrogen all the time because their hormonal cycles are not regular. Over time, high levels of uninhibited estrogen can lead to endometrial hyperplasia, or overgrowth of tissue in the uterus. This overgrowth is more likely to become cancerous than normal endometrial tissue. Thus, although the data vary, it is generally agreed by most gynecological societies that due to their high estrogen levels, women with PCOS are at higher risk for endometrial hyperplasia. To reduce this risk, it is often recommended that women with PCOS take hormonal contraceptives to regulate their hormones. Both COCPs and progestin-only methods are recommended. COCPs are preferred in women who also suffer from uncontrolled acne and symptoms of hirsutism, or male-patterned hair growth, because COCPs can help treat these symptoms.
For pelvic pain associated with endometriosis, COCPs are considered a first-line medical treatment, along with NSAIDs, GnRH agonists, and aromatase inhibitors. COCPs work to suppress the growth of the extra-uterine endometrial tissue, which lessens its inflammatory effects. COCPs, along with the other medical treatments listed above, do not eliminate the extra-uterine tissue growth; they only reduce the symptoms. Surgery is the only definitive treatment. Studies looking at rates of pelvic pain recurrence after surgery have shown that continuous use of COCPs is more effective at reducing the recurrence of pain than cyclic use.
Similar to endometriosis, adenomyosis is often treated with COCPs to suppress the growth of the endometrial tissue that has grown into the myometrium. Unlike in endometriosis, however, levonorgestrel-containing IUDs are more effective at reducing pelvic pain in adenomyosis than COCPs.
Combined oral contraceptives are sometimes prescribed as medication for mild or moderate acne, although none are approved by the U.S. FDA for that sole purpose. Four different oral contraceptives have been FDA approved to treat moderate acne if the person is at least 14 or 15 years old, has already begun menstruating, and needs contraception. These include Ortho Tri-Cyclen, Estrostep, Beyaz, and YAZ.
Although the pill is sometimes prescribed to induce menstruation on a regular schedule for women bothered by irregular menstrual cycles, it actually suppresses the normal menstrual cycle and then mimics a regular 28-day monthly cycle.
Women who are experiencing menstrual dysfunction due to female athlete triad are sometimes prescribed oral contraceptives as pills that can create menstrual bleeding cycles. However, the condition's underlying cause is energy deficiency and should be treated by correcting the imbalance between calories eaten and calories burned by exercise. Oral contraceptives should not be used as an initial treatment for female athlete triad.
While combined oral contraceptives are generally considered to be a relatively safe medication, they are contraindicated for people with certain medical conditions. The World Health Organization and Centers for Disease Control publish guidance, called medical eligibility criteria, on the safety of birth control in the context of medical conditions. Estrogen in high doses can increase a person's risk for blood clots. Current formulations of COCPs do not contain doses high enough to increase the absolute risk of thrombotic events in otherwise healthy people, but for people with any pre-existing medical condition that also increases their risk for blood clots, using COCPs is more dangerous. These conditions include, but are not limited to, high blood pressure, pre-existing cardiovascular disease (such as valvular heart disease or ischemic heart disease), history of thromboembolism or pulmonary embolism, cerebrovascular accident, migraine with aura, a familial tendency to form blood clots (such as familial factor V Leiden), and smoking over age 35.
COCPs are also contraindicated for people with advanced diabetes, liver tumors, hepatic adenoma or severe cirrhosis of the liver. COCPs are metabolized in the liver and thus liver disease can lead to reduced elimination of the medication. People with known or suspected breast cancer, endometrial cancer, or unexplained uterine bleeding should also not take COCPs to avoid health risks.
Women who are known to be pregnant should not take COCPs. Postpartum women who are breastfeeding are also advised not to start COCPs until 4 weeks after birth due to increased risk of blood clots. Severe hypercholesterolemia and hypertriglyceridemia are also currently contraindications, but the evidence showing that COCPs lead to worse outcomes in this population is weak. Obesity is not considered to be a contraindication to taking COCPs.
It is generally accepted that the health risks of oral contraceptives are lower than those from pregnancy and birth, and "the health benefits of any method of contraception are far greater than any risks from the method". Some organizations have argued that comparing a contraceptive method to no method (pregnancy) is not relevant—instead, the comparison of safety should be among available methods of contraception.
Different sources note different incidences of side effects. The most common side effect is breakthrough bleeding. A 1992 French review article said that as many as 50% of new first-time users discontinue the birth control pill before the end of the first year because of the annoyance of side effects such as breakthrough bleeding and amenorrhea. A 2001 study by the Kinsey Institute exploring predictors of discontinuation of oral contraceptives found that 47% of 79 women discontinued the pill. One 1994 study found that women using birth control pills blinked 32% more often than those not using the contraception.
On the other hand, the pills can sometimes improve conditions such as pelvic inflammatory disease, dysmenorrhea, premenstrual syndrome, and acne, reduce symptoms of endometriosis and polycystic ovary syndrome, and decrease the risk of anemia. Use of oral contraceptives also reduces lifetime risk of ovarian cancer.
Nausea, vomiting, headache, bloating, breast tenderness, swelling of the ankles/feet (fluid retention), or weight change may occur. Vaginal bleeding between periods (spotting) or missed/irregular periods may occur, especially during the first few months of use.
Combined oral contraceptives increase the risk of venous thromboembolism (including deep vein thrombosis (DVT) and pulmonary embolism (PE)).
COC pills with more than 50 µg of estrogen increase the risk of ischemic stroke and myocardial infarction but lower doses appear safe. These risks are greatest in women with additional risk factors, such as smoking (which increases risk substantially) and long-continued use of the pill, especially in women over 35 years of age.
The overall absolute risk of venous thrombosis per 100,000 woman-years in current use of combined oral contraceptives is approximately 60, compared with 30 in non-users. The risk of thromboembolism varies with different types of birth control pills; compared with combined oral contraceptives containing levonorgestrel (LNG), and with the same dose of estrogen and duration of use, the rate ratio of deep venous thrombosis for combined oral contraceptives with norethisterone is 0.98, with norgestimate 1.19, with desogestrel (DSG) 1.82, with gestodene 1.86, with drospirenone (DRSP) 1.64, and with cyproterone acetate 1.88. In comparison, venous thromboembolism occurs in 100–200 per 100,000 pregnant women every year.
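For perspective, a short worked comparison using only the figures quoted above (no additional data is implied):

$$\text{RR} = \frac{60/100{,}000}{30/100{,}000} = 2.0, \qquad \text{absolute excess} = 60 - 30 = 30 \text{ per } 100{,}000 \text{ woman-years}$$

That is, current users have roughly twice the baseline risk, an absolute excess of about 30 events per 100,000 woman-years, which is still well below the 100–200 per 100,000 annual rate cited for pregnancy.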
One study showed more than a 600% increased risk of blood clots for women taking COCPs with drospirenone compared with non-users, compared with 360% higher for women taking birth control pills containing levonorgestrel. The U.S. Food and Drug Administration (FDA) initiated studies evaluating the health of more than 800,000 women taking COCPs and found that the risk of VTE was 93% higher for women who had been taking drospirenone COCPs for 3 months or less and 290% higher for women taking drospirenone COCPs for 7–12 months, compared with women taking other types of oral contraceptives.
Based on these studies, in 2012 the FDA updated the label for drospirenone COCPs to include a warning that contraceptives with drospirenone may have a higher risk of dangerous blood clots.
A systematic review in 2010 did not support an increased overall cancer risk in users of combined oral contraceptive pills, but did find a slight increase in breast cancer risk among current users, which disappears 5–10 years after use has stopped.
COC use decreases the risk of ovarian cancer, endometrial cancer, and colorectal cancer. Two large cohort studies published in 2010 both found a significant reduction in adjusted relative risk of ovarian and endometrial cancer mortality in ever-users of OCs compared with never-users.
The use of oral contraceptives (birth control pills) for five years or more decreases the risk of ovarian cancer in later life by 50%. Combined oral contraceptive use reduces the risk of ovarian cancer by 40% and the risk of endometrial cancer by 50% compared with never users. The risk reduction increases with duration of use, with an 80% reduction in risk for both ovarian and endometrial cancer with use for more than 10 years. The risk reduction for both ovarian and endometrial cancer persists for at least 20 years.
A report by a 2005 International Agency for Research on Cancer (IARC) working group said COCs increase the risk of cancers of the breast (among current and recent users), cervix and liver (among populations at low risk of hepatitis B virus infection). A 2013 meta-analysis concluded that ever use of birth control pills is associated with a modest increase in the risk of breast cancer (relative risk 1.08) and a reduced risk of colorectal cancer (relative risk 0.86) and endometrial cancer (relative risk 0.57). Cervical cancer risk in those infected with human papilloma virus is increased. A similar small increase in breast cancer risk was seen in other meta-analyses.
A 2013 Cochrane systematic review found that studies of combination hormonal contraceptives showed no large difference in weight when compared with placebo or no intervention groups. The evidence was not strong enough to be certain that contraceptive methods do not cause some weight change, but no major effect was found. This review also found "that women did not stop using the pill or patch because of weight change."
COCPs may increase natural vaginal lubrication in some women; other women experience reductions in libido while on the pill, or decreased lubrication. Some researchers question a causal link between COCP use and decreased libido; a 2007 study of 1700 women found COCP users experienced no change in sexual satisfaction. A 2005 laboratory study of genital arousal tested fourteen women before and after they began taking COCPs. The study found that women experienced a significantly wider range of arousal responses after beginning pill use; decreases and increases in measures of arousal were equally common.
A 2006 study of 124 pre-menopausal women measured sex hormone binding globulin (SHBG) levels, including before and after discontinuation of the oral contraceptive pill. Women continuing use of oral contraceptives had SHBG levels four times higher than those who never used it, and levels remained elevated even in the group that had discontinued its use. | https://en.wikipedia.org/wiki?curid=22623 |
Organized crime
Organized crime is a category of transnational, national, or local groupings of highly centralized enterprises run by criminals to engage in illegal activity, most commonly for profit. Some criminal organizations, such as terrorist groups, are politically motivated. Sometimes criminal organizations force people to do business with them, such as when a gang extorts money from shopkeepers for "protection". Gangs may become disciplined enough to be considered "organized". A criminal organization or gang can also be referred to as a mafia, mob, ring, or syndicate; the network, subculture and community of criminals may be referred to as the underworld. European sociologists (e.g. Diego Gambetta) define a “mafia” as a type of organized crime group that specializes in the supply of extra-legal protection and quasi law enforcement. Gambetta's classic work on the original “Mafia”, or the Sicilian Mafia, generates an economic study of the mafia, which exerts great influence on studies of the Russian mafia, the Chinese triads, Hong Kong mafia and the Japanese yakuza.
Other organizations—including states, churches, militaries, police forces, and corporations—may sometimes use organized-crime methods to conduct their activities, but their powers derive from their status as formal social institutions. There is a tendency to distinguish organized crime from other forms of crime, such as white-collar crime, financial crimes, political crimes, war crime, state crimes, and treason. This distinction is not always apparent and academics continue to debate the matter. For example, in failed states that can no longer perform basic functions such as education, security, or governance (usually due to fractious violence or to extreme poverty), organized crime, governance and war sometimes complement each other. The term "oligarchy" has been used to describe democratic countries whose political, social and economic institutions come under the control of a few families and business oligarchs.
In the United States, the Organized Crime Control Act (1970) defines organized crime as "[t]he unlawful activities of [...] a highly organized, disciplined association [...]". Criminal activity as a structured process is referred to as racketeering. In the UK, police estimate that organized crime involves up to 38,000 people operating in 6,000 various groups. Historically, the largest organized crime force in the United States has been La Cosa Nostra (the Italian-American Mafia), but other transnational criminal organizations have also risen in prominence in recent decades. A 2012 article in a U.S. Department of Justice journal stated that: "Since the end of the Cold War, organized crime groups from Russia, China, Italy, Nigeria, and Japan have increased their international presence and worldwide networks or have become involved in more transnational criminal activities. Most of the world's major international organized crime groups are present in the United States." The U.S. Drug Enforcement Administration's 2017 "National Drug Threat Assessment" classified Mexican transnational criminal organizations (TCOs) as the "greatest criminal drug threat to the United States," citing their dominance "over large regions in Mexico used for the cultivation, production, importation, and transportation of illicit drugs" and identifying the Sinaloa, Jalisco New Generation, Juárez, Gulf, Los Zetas, and Beltrán-Leyva cartels as the six Mexican TCOs with the greatest influence in drug trafficking to the United States.
Various models have been proposed to describe the structure of criminal organizations.
Patron-client networks are defined by fluid interactions. They produce crime groups that operate as smaller units within the overall network, and as such tend towards valuing significant others, familiarity of social and economic environments, or tradition. These networks are usually composed of:
Bureaucratic/corporate organized crime groups are defined by the general rigidity of their internal structures. They focus more on how the operation works, succeeds, sustains itself or avoids retribution; they are generally typified by:
However, this model of operation has some flaws:
While bureaucratic operations emphasize business processes and strongly authoritarian hierarchies, these are based on enforcing power relationships rather than an overlying aim of protectionism, sustainability or growth.
A nationwide estimate of youth street gangs provided by Hannigan et al. marked an increase of 35% between 2002 and 2010. A distinctive gang culture underpins many, but not all, organized groups; this may develop through recruiting strategies, social learning processes in the corrective system experienced by youth, family or peer involvement in crime, and the coercive actions of criminal authority figures. The term “street gang” is commonly used interchangeably with “youth gang,” referring to neighborhood or street-based youth groups that meet “gang” criteria. Miller (1992) defines a street gang as “a self-formed association of peers, united by mutual interests, with identifiable leadership and internal organization, who act collectively or as individuals to achieve specific purposes, including the conduct of illegal activity and control of a particular territory, facility, or enterprise." Some reasons youth join gangs include to feel accepted, attain status, and increase their self-esteem. A sense of unity brings together many of the youth gangs that lack the family aspect at home.
"Zones of transition" are deteriorating neighborhoods with shifting populations. In such areas, conflict between groups, fighting, "turf wars", and theft promote solidarity and cohesion. Cohen (1955) argued that working-class teenagers joined gangs out of frustration at their inability to achieve the status and goals of the middle class. Cloward and Ohlin (1960) pointed to blocked opportunity, arguing that the unequal distribution of opportunities leads to different types of gangs (that is, some focused on robbery and property theft, some on fighting and conflict, and some retreatist gangs focused on drug taking). Spergel (1966) was one of the first criminologists to focus on "evidence-based practice" rather than intuition into gang life and culture. Participation in gang-related events during adolescence perpetuates a pattern of maltreatment on members' own children years later. Klein (1971), like Spergel, studied the effects of social workers' interventions on members; more interventions actually led to greater gang participation and solidarity and bonds between members. Downes and Rock (1988), drawing on Parker's analysis, applied strain theory, labeling theory (from experience with police and courts), control theory (involvement in trouble from early childhood and the eventual decision that the costs outweigh the benefits) and conflict theories. No ethnic group is more disposed to gang involvement than another; rather, it is the status of being marginalized, alienated or rejected that makes some groups more vulnerable to gang formation, and this would also be accounted for in the effect of social exclusion, especially in terms of recruitment and retention. Gangs may also be defined by age (typically youth) or peer group influences, and the permanence or consistency of their criminal activity. These groups also form their own symbolic identity or public representation recognizable by the community at large (including colors, symbols, patches, flags and tattoos).
Research has focused on whether gangs have formal structures, clear hierarchies and leadership in comparison with adult groups, and whether they are rational in pursuit of their goals, though positions on structures, hierarchies and defined roles are conflicting. Some have studied street gangs involved in drug dealing, finding that their structure and behavior had a degree of organizational rationality: members saw themselves as organized criminals, and gangs were formal-rational organizations with strong organizational structures, well-defined roles, and rules that guided members' behavior, as well as a specified and regular means of income (i.e., drugs). Padilla (1992) agreed with the two above. However, some have found these structures to be loose rather than well-defined and lacking persistent focus: there was relatively low cohesion, few shared goals and little organizational structure; shared norms, values and loyalties were low, structures "chaotic", with little role differentiation or clear division of labor. Similarly, the use of violence does not conform to the principles behind the protection rackets, political intimidation and drug trafficking activities employed by those adult groups. In many cases gang members graduate from youth gangs to highly developed OC groups, with some already in contact with such syndicates, and through this we see a greater propensity for imitation. Gangs and traditional criminal organizations cannot be universally linked (Decker, 1998); however, there are clear benefits to both the adult and youth organization through their association. In terms of structure, no single crime group is archetypal, though in most cases there are well-defined patterns of vertical integration (where criminal groups attempt to control the supply and demand), as is the case in arms, sex and drug trafficking.
The entrepreneurial model looks at either the individual criminal or a smaller group of organized criminals who capitalize on the more fluid 'group-association' of contemporary organized crime. This model conforms to social learning theory or differential association in that there are clear associations and interaction between criminals where knowledge may be shared or values enforced; however, it is argued that rational choice is not represented in this. The choice to commit a certain act, or to associate with other organized crime groups, may be seen as much more of an entrepreneurial decision, contributing to the continuation of a criminal enterprise by maximizing those aspects that protect or support their own individual gain. In this context, the role of risk is also easily understandable; however, it is debatable whether the underlying motivation should be seen as true entrepreneurship or as entrepreneurship arising from some social disadvantage.
The criminal organization, much in the same way as one would assess pleasure and pain, weighs such factors as legal, social and economic risk to determine potential profit and loss from certain criminal activities. This decision-making process rises from the entrepreneurial efforts of the group's members, their motivations and the environments in which they work. Opportunism is also a key factor – the organized criminal or criminal group is likely to frequently reorder the criminal associations they maintain, the types of crimes they perpetrate, and how they function in the public arena (recruitment, reputation, etc.) in order to ensure efficiency, capitalization and protection of their interests.
Culture and ethnicity provide an environment where trust and communication between criminals can be efficient and secure. This may ultimately lead to a competitive advantage for some groups; however, it is inaccurate to adopt this as the only determinant of classification in organized crime. This categorization includes the Sicilian Mafia, ’Ndrangheta, ethnic Chinese criminal groups, Japanese Yakuza (or Boryokudan), Colombian drug trafficking groups, Nigerian organized crime groups, Corsican mafia, Korean criminal groups and Jamaican posses. From this perspective, organized crime is not a modern phenomenon - the construction of 17th and 18th century crime gangs fulfill all the present day criteria of criminal organizations (in opposition to the Alien Conspiracy Theory). These roamed the rural borderlands of central Europe embarking on many of the same illegal activities associated with today's crime organizations, with the exception of money laundering.
When the French Revolution created strong nation states, the criminal gangs moved to other poorly controlled regions like the Balkans and Southern Italy, where the seeds were sown for the Sicilian Mafia - the linchpin of organized crime in the New World.
While most of the conceptual frameworks used to model organised crime emphasize the role of "actors" and/or "activities", computational approaches built on the foundations of data science and artificial intelligence focus on deriving new insights on organised crime from big data. For example, novel machine learning models have been applied to study and detect urban crime and online prostitution networks. Big data have also been used to develop online tools predicting the risk for an individual to be a victim of online sex trade or of getting drawn into online sex work. In addition, data from Twitter and Google Trends have been used to study the public perceptions of organised crime.
Organized crime groups provide a range of illegal services and goods. Organized crime often victimizes businesses through the use of extortion or theft and fraud activities like hijacking cargo trucks and ships, robbing goods, committing bankruptcy fraud (also known as "bust-out"), insurance fraud or stock fraud (inside trading). Organized crime groups also victimize individuals by car theft (either for dismantling at "chop shops" or for export), art theft, bank robbery, burglary, jewelry and gems theft and heists, shoplifting, computer hacking, credit card fraud, economic espionage, embezzlement, identity theft, and securities fraud ("pump and dump" scam). Some organized crime groups defraud national, state, or local governments by bid rigging public projects, counterfeiting money, smuggling or manufacturing untaxed alcohol (rum-running) or cigarettes (buttlegging), and providing immigrant workers to avoid taxes.
Organized crime groups seek out corrupt public officials in executive, law enforcement, and judicial roles so that their activities on the black market can avoid, or at least receive early warnings about, investigation and prosecution.
Activities of organized crime include loansharking of money at very high interest rates, assassination, blackmailing, bombings, bookmaking and illegal gambling, confidence tricks, copyright infringement, counterfeiting of intellectual property, fencing, kidnapping, prostitution, smuggling, drug trafficking, arms trafficking, oil smuggling, antiquities smuggling, organ trafficking, contract killing, identity document forgery, money laundering, bribery, seduction, electoral fraud, insurance fraud, point shaving, price fixing, illegal taxicab operation, illegal dumping of toxic waste, illegal trading of nuclear materials, military equipment smuggling, nuclear weapons smuggling, passport fraud, providing illegal immigration and cheap labor, people smuggling, trading in endangered species, and trafficking in human beings. Organized crime groups also do a range of business and labor racketeering activities, such as skimming casinos, insider trading, setting up monopolies in industries such as garbage collecting, construction and cement pouring, bid rigging, getting "no-show" and "no-work" jobs, political corruption and bullying.
The commission of violent crime may form part of a criminal organization's 'tools' used to achieve criminogenic goals (for example, its threatening, authoritative, coercive, terror-inducing, or rebellious role), due to psycho-social factors (cultural conflict, aggression, rebellion against authority, access to illicit substances, counter-cultural dynamic), or may, in and of itself, be crime rationally chosen by individual criminals and the groups they form. Assaults are used for coercive measures, to "rough up" debtors, competition or recruits, in the commission of robberies, in connection to other property offenses, and as an expression of counter-cultural authority; violence is normalized within criminal organizations (in direct opposition to mainstream society) and the locations they control. Whilst the intensity of violence is dependent on the types of crime the organization is involved in (as well as their organizational structure or cultural tradition) aggressive acts range on a spectrum from low-grade physical assaults to murder. Bodily harm and grievous bodily harm, within the context of organized crime, must be understood as indicators of intense social and cultural conflict, motivations contrary to the security of the public, and other psycho-social factors.
Murder has evolved from the honor and vengeance killings of the Yakuza or the Sicilian mafia, which placed great physical and symbolic importance on the act of murder, its purposes and consequences, to a much less discriminate form of expressing power, enforcing criminal authority, achieving retribution or eliminating competition. The role of the hit man has been generally consistent throughout the history of organized crime, whether due to the efficiency or expediency of hiring a professional assassin or the need to distance oneself from the commission of murderous acts (making it harder to prove liability). This may include the assassination of notable figures (public, private or criminal), once again depending on authority, retribution or competition. Revenge killings, armed robberies, violent disputes over controlled territories and offenses against members of the public must also be considered when looking at the dynamic between different criminal organizations and their (at times) conflicting needs.
In addition to what is considered traditional organized crime, involving direct crimes such as fraud, swindles, scams, racketeering and other Racketeer Influenced and Corrupt Organizations Act (RICO) predicate acts motivated by the accumulation of monetary gain, there is also non-traditional organized crime, which is engaged in for political or ideological gain or acceptance. Such crime groups are often labelled terrorist groups.
There is no universally agreed, legally binding, criminal law definition of terrorism. Common definitions of terrorism refer only to those violent acts which are intended to create fear (terror), are perpetrated for a religious, political or ideological goal, deliberately target or disregard the safety of non-combatants (e.g., neutral military personnel or civilians), and are committed by non-government agencies. Some definitions also include acts of unlawful violence and war, especially crimes against humanity (see the Nuremberg Trials, at which Allied authorities deemed the German Nazi Party, its paramilitary and police organizations, and numerous associations subsidiary to the Nazi Party to be "criminal organizations"). The use of similar tactics by criminal organizations for protection rackets or to enforce a code of silence is usually not labeled terrorism, though these same actions may be labeled terrorism when done by a politically motivated group.
Notable groups include Al-Qaeda, Animal Liberation Front, Army of God, Black Liberation Army, The Covenant, The Sword, and the Arm of the Lord, Earth Liberation Front, Irish Republican Army, Kurdistan Workers' Party, Lashkar e Toiba, May 19th Communist Organization, The Order, Revolutionary Armed Forces of Colombia, Symbionese Liberation Army, Taliban, United Freedom Front and Weather Underground.
Organized crime groups generate large amounts of money through activities such as drug trafficking, arms smuggling, extortion, theft, and financial crime. These illegally sourced assets are of little use unless the groups can disguise them and convert them into funds available for investment in legitimate enterprise. The methods they use for converting 'dirty' money into 'clean' assets encourage corruption. Organized crime groups need to hide the money's illegal origin; this allows for the expansion of OC groups, as the 'laundry' or 'wash cycle' operates to cover the money trail and convert proceeds of crime into usable assets. Money laundering harms international and domestic trade, banking reputations, effective government and the rule of law, because of the methods used to hide the proceeds of crime. These methods include, but are not limited to: buying easily transported valuables, transfer pricing, and using "underground banks." Launderers also co-mingle illegal money with business revenue in order to further mask their illicit funds. Accurate figures for the amounts of criminal proceeds laundered are almost impossible to calculate; rough estimates give a sense of the scale of the problem but not its true extent. A study by the United Nations Office on Drugs and Crime estimated that in 2009 money laundering equated to about 2.7% of global GDP, roughly 1.6 trillion US dollars. The Financial Action Task Force on Money Laundering (FATF), an intergovernmental body set up to combat money laundering, has stated that "A sustained effort between 1996 and 2000 by the Financial Action Task Force (FATF) to produce such estimates failed." However, assets seized through anti-money laundering efforts in 2001 amounted to $386 million. The rapid growth of money laundering is due to:
Money laundering is a three-stage process:
Means of money laundering:
The policy aim in this area is to make the financial markets transparent, and minimize the circulation of criminal money and its cost upon legitimate markets.
Counterfeiting money is another financial crime. It involves printing money illegally and then spending it as though it were genuine. Counterfeiting is not only a financial crime; it also covers manufacturing or distributing goods under a brand name without the brand owner's authorization. Counterfeiters benefit because consumers believe they are buying goods from companies they trust, when in reality they are buying low-quality counterfeit goods. In 2007, the OECD reported the scope of counterfeit products to include food, pharmaceuticals, pesticides, electrical components, tobacco and even household cleaning products, in addition to the usual films, music, literature, games and other electrical appliances, software and fashion. A number of qualitative changes have occurred in the trade of counterfeit products:
The economic effects of organized crime have been approached from a number of both theoretical and empirical positions, however the nature of such activity allows for misrepresentation. The level of taxation taken by a nation-state, rates of unemployment, mean household incomes and levels of satisfaction with government and other economic factors all contribute to the likelihood of criminals participating in tax evasion. As most organized crime is perpetrated in the liminal state between legitimate and illegitimate markets, these economic factors must be adjusted to ensure the optimal amount of taxation without promoting the practice of tax evasion. As with any other crime, technological advancements have made the commission of tax evasion easier, faster and more globalized. The ability of organized criminals to operate fraudulent financial accounts, utilize illicit offshore bank accounts, access tax havens or tax shelters, and operate goods-smuggling syndicates to evade importation taxes helps ensure financial sustainability, security from law enforcement, general anonymity and the continuation of their operations.
Identity theft is a form of fraud in which someone pretends to be someone else by assuming that person's identity, typically in order to access resources or obtain credit and other benefits in that person's name. Victims of identity theft (those whose identity has been assumed by the identity thief) can suffer adverse consequences if held accountable for the perpetrator's actions, as can organizations and individuals who are defrauded by the identity thief, and to that extent are also victims. Internet fraud refers to the actual use of Internet services to present fraudulent solicitations to prospective victims, to conduct fraudulent transactions, or to transmit the proceeds of fraud to financial institutions or to others connected with the scheme. In the context of organized crime, both may serve as means through which other criminal activity may be successfully perpetrated or as the primary goal themselves. Email fraud, advance-fee fraud, romance scams, employment scams, and other phishing scams are the most common and most widely used forms of identity theft, though with the advent of social networking, fake websites, accounts and other fraudulent or deceitful activity have become commonplace.
Copyright infringement is the unauthorized or prohibited use of works under copyright, infringing the copyright holder's exclusive rights, such as the right to reproduce or perform the copyrighted work, or to make derivative works. Whilst almost universally considered under civil procedure, the impact and intent of organized criminal operations in this area of crime has been the subject of much debate. Article 61 of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs) requires that signatory countries establish criminal procedures and penalties in cases of willful trademark counterfeiting or copyright piracy on a commercial scale. More recently, copyright holders have demanded that states provide criminal sanctions for all types of copyright infringement. Organized criminal groups capitalize on consumer complicity, advancements in security and anonymity technology, emerging markets and new methods of product transmission, and the consistency of these activities provides a stable financial basis for other areas of organized crime.
Cyberwarfare refers to politically motivated hacking to conduct sabotage and espionage. It is a form of information warfare sometimes seen as analogous to conventional warfare, although this analogy is controversial for both its accuracy and its political motivation. It has been defined as activities by a nation-state to penetrate another nation's computers or networks with the intention of causing civil damage or disruption. Moreover, it acts as the "fifth domain of warfare," and William J. Lynn, U.S. Deputy Secretary of Defense, states that "as a doctrinal matter, the Pentagon has formally recognized cyberspace as a new domain in warfare ... [which] has become just as critical to military operations as land, sea, air, and space." Cyber espionage is the practice of obtaining confidential, sensitive, proprietary or classified information from individuals, competitors, groups, or governments using illegal exploitation methods on the Internet, networks, software and/or computers. There is also a clear military, political, or economic motivation. Unsecured information may be intercepted and modified, making espionage possible internationally. The recently established Cyber Command is currently debating whether such activities as commercial espionage or theft of intellectual property are criminal activities or actual "breaches of national security." Furthermore, military activities that use computers and satellites for coordination are at risk of equipment disruption. Orders and communications can be intercepted or replaced. Power, water, fuel, communications, and transportation infrastructure all may be vulnerable to sabotage. According to Clarke, the civilian realm is also at risk, noting that security breaches have already gone beyond stolen credit card numbers, and that potential targets can also include the electric power grid, trains, or the stock market.
The term "computer virus" may be used as an overarching phrase to include all types of true viruses, malware, including computer worms, Trojan horses, most rootkits, spyware, dishonest adware and other malicious and unwanted software (though all are technically unique), and proves to be quite financially lucrative for criminal organizations, offering greater opportunities for fraud and extortion whilst increasing security, secrecy and anonymity. Worms may be utilized by organized crime groups to exploit security vulnerabilities (duplicating itself automatically across other computers a given network), while a Trojan horse is a program that appears harmless but hides malicious functions (such as retrieval of stored confidential data, corruption of information, or interception of transmissions). Worms and Trojan horses, like viruses, may harm a computer system's data or performance. Applying the Internet model of organized crime, the proliferation of computer viruses and other malicious software promotes a sense of detachment between the perpetrator (whether that be the criminal organization or another individual) and the victim; this may help to explain vast increases in cyber-crime such as these for the purpose of ideological crime or terrorism. In mid July 2010, security experts discovered a malicious software program that had infiltrated factory computers and had spread to plants around the world. It is considered "the first attack on critical industrial infrastructure that sits at the foundation of modern economies," notes the "New York Times."
Corporate crime refers to crimes committed either by a corporation (i.e., a business entity having a separate legal personality from the natural persons that manage its activities), or by individuals that may be identified with a corporation or other business entity (see vicarious liability and corporate liability). Corporate crimes are motivated either by the individual's or by the corporation's desire to increase profits. The cost of corporate crime to United States taxpayers is about $500 billion. Note that some forms of corporate corruption may not actually be criminal if they are not specifically illegal under a given system of laws. For example, some jurisdictions allow insider trading.
Labor racketeering, as defined by the United States Department of Labor, is the infiltrating, exploiting, or controlling of an employee benefit plan, union, employer entity, or workforce, carried out through illegal, violent, or fraudulent means for profit or personal benefit. Labor racketeering has developed since the 1930s, deeply affecting national and international construction, mining, energy production and transportation sectors. Activity has focused on the importation of cheap or unfree labor, involvement with union and public officials (political corruption), and counterfeiting.
Political corruption is the use of legislated powers by government officials for illegitimate private gain. Misuse of government power for other purposes, such as repression of political opponents and general police brutality, is not considered political corruption. Neither are illegal acts by private persons or corporations not directly involved with the government. An illegal act by an officeholder constitutes political corruption only if the act is directly related to their official duties. Forms of corruption vary, but include bribery, extortion, cronyism, nepotism, patronage, graft, and embezzlement. While corruption may facilitate criminal enterprise such as drug trafficking, money laundering, and human trafficking, it is not restricted to these activities. The activities that constitute illegal corruption differ depending on the country or jurisdiction. For instance, certain political funding practices that are legal in one place may be illegal in another. In some cases, government officials have broad or poorly defined powers, which make it difficult to distinguish between legal and illegal actions. Worldwide, bribery alone is estimated to involve over 1 trillion US dollars annually. A state of unrestrained political corruption is known as a kleptocracy, literally meaning "rule by thieves".
There are three major regions that center on drug trafficking, known as the Golden Triangle (Burma, Laos, Thailand), the Golden Crescent (Afghanistan) and Central and South America. There are suggestions that, due to the continuing decline in opium production in South East Asia, traffickers may begin to look to Afghanistan as a source of heroin.
The U.S. supply of heroin comes mainly from foreign sources, including Southeast Asia, Southwest Asia, and Latin America. Heroin comes in two forms: a chemical base form, which is brown, and a salt form, which is white. The former is mainly produced in Afghanistan and some south-west Asian countries, while the latter was historically produced only in south-east Asia but has since also come to be produced in Afghanistan. There is some unconfirmed suspicion that white heroin is also being produced in Iran and Pakistan. This area of heroin production is referred to as the Golden Crescent. Heroin is not the only drug used in these areas; the European market has shown signs of growing use of opioids on top of long-term heroin use.
Human trafficking for the purpose of sexual exploitation is a major cause of contemporary sexual slavery and is primarily for prostituting women and children into sex industries. Sexual slavery encompasses most, if not all, forms of forced prostitution. The terms "forced prostitution" or "enforced prostitution" appear in international and humanitarian conventions but have been insufficiently understood and inconsistently applied. "Forced prostitution" generally refers to conditions of control over a person who is coerced by another to engage in sexual activity. Official estimates of individuals in sexual slavery worldwide vary: in 2001, the International Organization for Migration estimated 400,000, the Federal Bureau of Investigation estimated 700,000 and UNICEF estimated 1.75 million. The most common destinations for victims of human trafficking are Thailand, Japan, Israel, Belgium, the Netherlands, Germany, Italy, Turkey and the United States, according to a report by UNODC.
"See Snakehead (gang), Coyotaje"
People smuggling is defined as "the facilitation, transportation, attempted transportation or illegal entry of a person or persons across an international border, in violation of one or more countries' laws, either clandestinely or through deception, such as the use of fraudulent documents". The term is understood as, and often used interchangeably with, migrant smuggling, which is defined by the United Nations Convention Against Transnational Organized Crime as "...the procurement, in order to obtain, directly or indirectly, a financial or other material benefit, of the illegal entry of a person into a state party of which the person is not a national". This practice has increased over the past few decades and today accounts for a significant portion of illegal immigration in countries around the world. People smuggling generally takes place with the consent of the person or persons being smuggled, and common reasons for individuals seeking to be smuggled include employment and economic opportunity, personal and/or familial betterment, and escape from persecution or conflict.
The number of slaves today is estimated at between 12 million and 27 million. This is probably the smallest proportion of slaves to the rest of the world's population in history. Most are debt slaves, largely in South Asia, who are under debt bondage incurred by lenders, sometimes even for generations. Human trafficking is the fastest-growing criminal industry and is predicted to eventually outgrow drug trafficking.
Today, crime is sometimes thought of as an urban phenomenon, but for most of human history it was the rural interfaces that encountered the majority of crimes (bearing in mind the fact that for most of human history, rural areas were the vast majority of inhabited places). For the most part, within a village, members kept crime at very low rates; however, outsiders such as pirates, highwaymen, and bandits attacked trade routes and roads, at times severely disrupting commerce, raising costs, insurance rates and prices to the consumer. According to criminologist Paul Lunde, "Piracy and banditry were to the pre-industrial world what organized crime is to modern society."
As Lunde states, "Barbarian conquerors, whether Vandals, Goths, the Norse, Turks or Mongols are not normally thought of as organized crime groups, yet they share many features associated with thriving criminal organizations. They were for the most part non-ideological, predominantly ethnically based, used violence and intimidation, and adhered to their own codes of law." In Ancient Rome, there was an infamous outlaw called Bulla Felix who organized and led a gang of up to six hundred bandits. Terrorism is linked to organized crime, but has political aims rather than solely financial ones, so there is overlap but separation between terrorism and organized crime.
A fence, or receiver (銷贓者), was a merchant who bought and sold stolen goods. Fences were part of the extensive network of accomplices in the criminal underground of Ming and Qing China. Their occupation entailed criminal activity, but as fences often acted as liaisons between the more respectable community and the underground criminals, they were seen as living a “precarious existence on the fringes of respectable society”.
A fence worked alongside bandits, but in a different line of work. The network of criminal accomplices that fences acquired was essential to ensuring both their safety and their success.
The path into the occupation of a fence stemmed in large degree from necessity. As most fences came from the ranks of poorer people, they often took whatever work they could – both legal and illegal.
Just as most bandits operated within their own communities, fences also worked within their own town or village. For example, in some satellite areas of the capital, military troops lived within or close to the commoner population, giving them the opportunity to conduct illegal trade with commoners.
In areas like Baoding and Hejian, local peasants and community members not only purchased military livestock such as horses and cattle, but also helped to hide the “stolen livestock from military allured by the profits”. Local peasants and community members became fences, hiding criminal activities from officials in exchange for products or money from these soldiers.
Most fences were not individuals who only bought and sold stolen goods to make a living. The majority of fences had other occupations within "polite" society, including laborer, coolie, and peddler. Such individuals often encountered criminals in markets in their line of work and, recognizing a potential avenue for an extra source of income, formed acquaintances and temporary associations with criminals for mutual aid and protection. In one example, the owner of a tea house overheard a conversation between Deng Yawen, a criminal, and others planning a robbery, and offered to help sell the loot in exchange for a share of the spoils.
At times, the robbers themselves filled the role of fences, selling to people they met on the road. This may actually have been preferable for robbers, in certain circumstances, because they would not have to pay the fence a portion of the spoils.
Butchers were also prime receivers for stolen animals because of the simple fact that owners could no longer recognize their livestock once butchers slaughtered them. Animals were very valuable commodities within Ming China, and a robber could potentially sustain a living from stealing livestock and selling them to butcher-fences.
Although fences worked with physical stolen property the vast majority of the time, those who also worked as itinerant barbers sold information as a good. Itinerant barbers often amassed important information and news as they traveled, and sold significant pieces of it, often to criminals in search of places to hide or individuals to rob. In this way, itinerant barbers also served as keepers of information that could be sold to members of the criminal underground as well as to powerful clients, performing the function of a spy.
Fences not only sold items such as jewelry and clothing but were also involved in trafficking the hostages whom bandits kidnapped. Women and children were the easiest and among the most common “objects” the fences sold. Most of the female hostages were sold to fences and then sold on as prostitutes, wives, or concubines. One example of human trafficking can be seen in Chen Akuei's gang, which abducted a servant girl and sold her to Lin Baimao, who in turn sold her as a wife for thirty pieces of silver. In contrast to women, who required beauty to sell for a high price, children were sold regardless of their physical appearance or family background. Children were often sold as servants or entertainers, while young girls were often sold as prostitutes.
Like merchants of honest goods, one of the most significant tools of a fence was their network of connections. As they were the middlemen between robbers and clients, fences needed to form and maintain connections in both “polite” society and among criminals. There were, however, a few exceptions in which members of the so-called “well-respected” society became receivers and harborers. They not only helped bandits to sell stolen goods but also acted as agents of bandits, collecting protection money from local merchants and residents. These "part-time" fences with high social status used their connections with bandits to gain social capital as well as wealth.
It was extremely important to their occupation that fences maintained a positive relationship with their customers, especially their richer gentry clients. When some members of the local elites joined the ranks of fences, they not only protected bandits in order to protect their own business interests, but actively took down any potential threats to their illegal profiting, even government officials. In the Zhejiang Province, the local elites not only got the provincial commissioner, Zhu Wan, dismissed from his office but also eventually “[drove] him to suicide”. This was possible because fences often had legal means of making a living alongside their illegal activities, and could threaten to turn bandits in to the authorities.
It was also essential for fences to maintain a relationship with bandits; yet it was just as true that bandits needed fences to make a living. As a result, fences often held the dominant position in these relationships, and, taking advantage of this dominance, they cheated bandits by manipulating the prices they paid for stolen property.
Aside from simply buying and selling stolen goods, fences often played additional roles in the criminal underground of Ming and Qing China. Because of the high floating population in public places such as inns and tea houses, these often became ideal places for bandits and gangs to gather to exchange information and plan their next crime. Harborers, people who provided safe houses for criminals, often played the role of receiving stolen goods from the criminals they harbored to sell to other customers. Safe houses included inns, tea houses, brothels, opium dens, and gambling parlors, and the employees or owners of such institutions often functioned as harborers as well as fences. These safe houses were located in places with high floating populations and people from all kinds of social backgrounds.
Brothels themselves helped bandits to hide and sell stolen goods because of a special Ming law that exempted brothels from being held responsible “for the criminal actions of their clients.” Even though the government required the owners of these establishments to report any suspicious activities, the lack of enforcement by the government itself, together with the fact that some owners were fences for the bandits, made brothels ideal safe houses for bandits and gangs.
Pawnshops were also often affiliated with fencing stolen goods. The owners or employees of such shops often paid bandits, who were frequently desperate for money, cash for stolen goods at prices a great deal below market value, and resold the goods to earn a profit.
Two different Ming laws, the "Da Ming Lü 大明律" and the "Da Gao 大诰," drafted by the Hongwu Emperor Zhu Yuanzhang, sentenced fences to different penalties based on the categories and prices of the products that were stolen.
In coastal regions, illegal trading with foreigners, as well as smuggling, became a huge concern for the government during the mid-to-late Ming era. In order to prohibit this crime, the government passed a law under which illegal smugglers who traded with foreigners without the consent of the government would be punished with exile to the border for military service.
In areas where military troops were stationed, stealing and selling military property would result in a more severe punishment. During the Jiaqing reign, a case was recorded of the stealing and selling of military horses. The emperor himself directed that the thieves who stole the horses, and the people who helped to sell them, be put in the cangue and sent to labor in a border military camp.
In the salt mines, the penalty for workers who stole salt, and for people who sold the stolen salt, was the most severe. Because salt was a very valuable commodity in Ming China, anyone who was arrested and found guilty of stealing and selling government salt was put to death.
During the Victorian era, criminals and gangs started to form organizations which would collectively become London's criminal underworld. Criminal societies in the underworld started to develop their own ranks and groups, which were sometimes called "families" and were often made up of the lower classes, operating through pickpocketing, prostitution, forgery and counterfeiting, commercial burglary and even money-laundering schemes. Also distinctive was the slang and argot used by Victorian criminal societies to distinguish each other. One of the most infamous crime bosses in the Victorian underworld was Adam Worth, who was nicknamed "the Napoleon of the criminal world" or "the Napoleon of Crime" and became the inspiration for the popular character of Professor Moriarty.
Organized crime in the United States first came to prominence in the Old West, and historians such as Brian J. Robb and Erin H. Turner traced the first organized crime syndicates to the Cochise Cowboy Gang and the Wild Bunch. The Cochise Cowboys, though loosely organized, were unique for their criminal operations along the Mexican border, in which they would steal and sell cattle as well as smuggle contraband goods between the countries. In the Old West there were other examples of gangs that operated in ways similar to an organized crime syndicate, such as the Innocents gang, the Jim Miller gang, the Soapy Smith gang, the Belle Starr gang, and the Bob Dozier gang.
Donald Cressey's Cosa Nostra model studied Mafia families exclusively, and this limits his broader findings. In his model, structures are formal and rational, with allocated tasks, limits on entrance, and rules established for organizational maintenance and sustainability. In this context there is a difference between organized and professional crime: there is a well-defined hierarchy of roles for leaders and members, underlying rules and specific goals that determine their behavior, and these are formed as a social system, one that was rationally designed to maximize profits and to provide forbidden goods. Albini, by contrast, saw organized criminal behavior as consisting of networks of patrons and clients, rather than rational hierarchies or secret societies.
The networks are characterized by a loose system of power relations. Each participant is interested in furthering his own welfare. Criminal entrepreneurs are the patrons and they exchange information with their clients in order to obtain their support. Clients include members of gangs, local and national politicians, government officials and people engaged in legitimate business. People in the network may not directly be part of the core criminal organization. Furthering the approach of both Cressey and Albini, Ianni and Ianni studied Italian-American crime syndicates in New York and other cities.
Kinship is seen as the basis of organized crime rather than the structures Cressey had identified; this includes fictive godparental and affinitive ties as well as those based on blood relations, and it is the impersonal actions, not the status or affiliations of their members, that define the group. Rules of conduct and behavioral aspects of power and networks and roles include the following:
Strong family ties are derived from the traditions of southern Italy, where family rather than the church or state is the basis of social order and morality.
One of the most important trends to emerge in criminological thinking about OC in recent years is the suggestion that it is not, in a formal sense, "organized" at all. Evidence includes a lack of centralized control, an absence of formal lines of communication, and a fragmented organizational structure. It is distinctively disorganized. For example, Seattle's crime network in the 1970s and 80s consisted of groups of businessmen, politicians and law enforcement officers. They all had links to a national network via Meyer Lansky, who was powerful, but there was no evidence that Lansky or anyone else exercised centralized control over them.
While some crime involved well-known criminal hierarchies in the city, criminal activity was not subject to central management by these hierarchies nor by other controlling groups, nor were activities limited to a finite number of objectives. The networks of criminals involved with the crimes did not exhibit organizational cohesion. Too much emphasis had been placed on the Mafia as controlling OC. The Mafia were certainly powerful but they "were part of a heterogeneous underworld, a network characterized by complex webs of relationships." OC groups were violent and aimed at making money but because of the lack of structure and fragmentation of objectives, they were "disorganized".
Further studies showed that neither bureaucracy nor kinship groups constitute the primary structure of organized crime; rather, groups operated as partnerships or a series of joint business ventures. Despite these conclusions, all researchers observed a degree of managerial activity among the groups they studied. All observed networks and a degree of persistence, and there may be utility in focusing on the identification of the organizing roles of people and events rather than on a group's structure. There may be three main approaches to understanding the organizations in terms of their roles as social systems:
Organized crime groups may be a combination of all three.
International consensus on defining organized crime has become important since the 1970s due to its increased prevalence and impact, e.g., the UN in 1976 and the EU in 1998. OC is "...the large scale and complex criminal activity carried on by groups of persons, however loosely or tightly organized for the enrichment of those participating at the expense of the community and its members. It is frequently accomplished through ruthless disregard of any law, including offenses against the person and frequently in connection with political corruption." (UN) "A criminal organization shall mean a lasting, structured association of two or more persons, acting in concert with a view to committing crimes or other offenses which are punishable by deprivation of liberty or a detention order of a maximum of at least four years or a more serious penalty, whether such crimes or offenses are an end in themselves or a means of obtaining material benefits and, if necessary, of improperly influencing the operation of public authorities." (EU) Not all groups exhibit the same characteristics of structure. However, violence, corruption, the pursuit of multiple enterprises and continuity form the essence of OC activity.
There are eleven characteristics from the European Commission and Europol pertinent to a working definition of organized crime. Six of those must be satisfied and the four in italics are mandatory. Summarized, they are:
with the Convention against Transnational Organized Crime (the "Palermo Convention") having a similar definition:
Others stress the importance of power, profit and perpetuity, defining organized criminal behavior as:
Definitions need to bring together the legal and social elements of OC. OC has widespread social, political and economic effects. It uses violence and corruption to achieve its ends: "OC [exists] when a group primarily focused on illegal profits systematically commit[s] crimes that adversely affect society and [is] capable of successfully shielding its activities, in particular by being willing to use physical violence or eliminate individuals by way of corruption."
It is a mistake to use the term "OC" as though it denotes a clear and well-defined phenomenon. The evidence regarding OC "shows a less well-organized, very diversified landscape of organizing criminals...the economic activities of these organizing criminals can be better described from the viewpoint of 'crime enterprises' than from a conceptually unclear framework such as 'OC'." Many of the definitions emphasize the 'group nature' of OC, the 'organization' of its members, its use of violence or corruption to achieve its goals, and its extra-jurisdictional character. OC may appear in many forms at different times and in different places. Due to the variety of definitions, there is "evident danger" in asking "what is OC?" and expecting a simple answer.
Some espouse that all organized crime operates at an international level, though there is currently no international court capable of trying offenses resulting from such activities (the International Criminal Court's remit extends only to dealing with people accused of offenses against humanity, e.g., genocide). If a network operates primarily from one jurisdiction and carries out its illicit operations there and in some other jurisdictions, it is 'international', though it may be appropriate to use the term 'transnational' only to label the activities of a major crime group that is centered in no one jurisdiction but operates in many. The understanding of organized crime has therefore progressed to combine internationalization and an understanding of social conflict into one of power, control, efficiency, risk and utility, all within the context of organizational theory. The accumulation of social, economic and political power has remained a core concern of all criminal organizations:
Contemporary organized crime may be very different from traditional Mafia style, particularly in terms of the distribution and centralization of power, authority structures and the concept of 'control' over one's territory and organization. There is a tendency away from centralization of power and reliance upon family ties towards a fragmentation of structures and informality of relationships in crime groups. Organized crime most typically flourishes when a central government and civil society is disorganized, weak, absent or untrustworthy.
This may occur in a society facing periods of political, economic or social turmoil or transition, such as a change of government or a period of rapid economic development, particularly if the society lacks strong and established institutions and the rule of law. The dissolution of the Soviet Union and the Revolutions of 1989 in Eastern Europe that saw the downfall of the Communist Bloc created a breeding ground for criminal organizations.
The newest growth sectors for organized crime are identity theft and online extortion. These activities are troubling because they discourage consumers from using the Internet for e-commerce. E-commerce was supposed to level the playing field between small and large businesses, but the growth of online organized crime is having the opposite effect; large businesses are able to afford more bandwidth (to resist denial-of-service attacks) and superior security. Furthermore, organized crime conducted over the Internet is much harder for police to trace (even though they increasingly deploy cybercops), since most police forces and law enforcement agencies operate within a local or national jurisdiction, while the Internet makes it easier for criminal organizations to operate across such boundaries without detection.
In the past, criminal organizations have naturally limited themselves by their need to expand, putting them in competition with each other. This competition, often leading to violence, uses valuable resources such as manpower (either killed or sent to prison), equipment and finances. In the United States, James "Whitey" Bulger, the Irish Mob boss of the Winter Hill Gang in Boston, turned informant for the Federal Bureau of Investigation (FBI). He used this position to eliminate competition and consolidate power within the city of Boston, which led to the imprisonment of several senior organized crime figures, including Gennaro Angiulo, underboss of the Patriarca crime family. Infighting sometimes occurs within an organization, as in the Castellammarese War of 1930–31 and the Boston Irish Mob Wars of the 1960s and 1970s.
Today, criminal organizations are increasingly working together, realizing that it is better to work in cooperation rather than in competition with each other (once again, consolidating power). This has led to the rise of global criminal organizations such as Mara Salvatrucha, the 18th Street gang, and Barrio Azteca. The American Mafia, in addition to having links with organized crime groups in Italy such as the Camorra, the 'Ndrangheta, the Sacra Corona Unita, and the Sicilian Mafia, has at various times done business with the Irish Mob, Jewish-American organized crime, the Japanese Yakuza, the Indian mafia, the Russian mafia, thieves in law and post-Soviet organized crime groups, the Chinese Triads, Chinese Tongs and Asian street gangs, motorcycle gangs, and numerous White, Black, and Hispanic prison and street gangs. The United Nations Office on Drugs and Crime estimated that organized crime groups held $322 billion in assets in 2005.
This rise in cooperation between criminal organizations has meant that law enforcement agencies are increasingly having to work together. The FBI operates an organized crime section from its headquarters in Washington, D.C. and is known to work with other national (e.g., Polizia di Stato, Russian Federal Security Service (FSB), and the Royal Canadian Mounted Police), federal (e.g., Bureau of Alcohol, Tobacco, Firearms, and Explosives, Drug Enforcement Administration, United States Marshals Service, Immigration and Customs Enforcement, United States Secret Service, US Diplomatic Security Service, United States Postal Inspection Service, U.S. Customs and Border Protection, United States Border Patrol, and the United States Coast Guard), state (e.g., Massachusetts State Police Special Investigation Unit, New Jersey State Police organized crime unit, Pennsylvania State Police organized crime unit, and the New York State Police Bureau of Criminal Investigation) and city (e.g., New York City Police Department Organized Crime Unit, Philadelphia Police Department Organized crime unit, Chicago Police Organized Crime Unit, and the Los Angeles Police Department Special Operations Division) law enforcement agencies.
Criminal psychology is defined as the study of the intentions, behaviors, and actions of a criminal or of someone who allows themselves to participate in criminal behavior. The goal is to understand what is going on in the criminal's head and to explain why they are doing what they are doing. This varies depending on whether the person is facing punishment for what they did, is roaming free, or is punishing themselves. Criminal psychologists are called to court to explain the inner workings of the criminal mind.
This theory treats all individuals as rational operators, committing criminal acts after consideration of all associated risks (detection and punishment) compared with the rewards of crime (personal, financial etc.). Little emphasis is placed on the offenders' emotional state. The role of criminal organizations in lowering the perception of risk and increasing the likelihood of personal benefit is prioritized by this approach, with the organization's structure, purpose, and activity being indicative of the rational choices made by criminals and their organizers.
This theory sees criminal behavior as reflective of an internal, individual calculation by the criminal that the benefits associated with offending (whether financial or otherwise) outweigh the perceived risks. The perceived strength, importance or infallibility of the criminal organization is directly proportional to the types of crime committed, their intensity and arguably the level of community response. The benefits of participating in organized crime (higher financial rewards, greater socioeconomic control and influence, protection of the family or significant others, perceived freedom from 'oppressive' laws or norms) contribute greatly to the psychology behind highly organized group offending.
Criminals learn through associations with one another. The success of organized crime groups is therefore dependent upon the strength of their communication, the enforcement of their value systems, and the recruitment and training processes employed to sustain, build or fill gaps in criminal operations. An understanding of this theory sees close associations between criminals, imitation of superiors, and understanding of value systems, processes and authority as the main drivers behind organized crime. Interpersonal relationships define the motivations the individual develops, with the effect of family or peer criminal activity being a strong predictor of inter-generational offending. This theory has also developed to include the strengths and weaknesses of reinforcement, which in the context of continuing criminal enterprises may be used to help understand propensities for certain crimes or victims, the level of integration into the mainstream culture and the likelihood of recidivism or success in rehabilitation.
Under this theory, organized crime exists because legitimate markets leave many customers and potential customers unsatisfied. High demand for a particular good or service (e.g., drugs, prostitution, arms, slaves), low levels of risk detection and high profits lead to a conducive environment for entrepreneurial criminal groups to enter the market and profit by supplying those goods and services. For success, there must be:
Under these conditions competition is discouraged, ensuring criminal monopolies sustain profits. Legal substitution of goods or services may (by increasing competition) force the dynamic of organized criminal operations to adjust, as will deterrence measures (reducing demand), and the restriction of resources (controlling the ability to supply or produce to supply).
Sutherland goes further to say that deviancy is contingent on conflicting groups within society, and that such groups struggle over the means to define what is criminal or deviant within society. Criminal organizations therefore gravitate around illegal avenues of production, profit-making, protectionism or social control and attempt (by increasing their operations or membership) to make these acceptable. This also explains the propensity of criminal organizations to develop protection rackets, to coerce through the use of violence, aggression and threatening behavior (at times termed 'terrorism'). Preoccupation with methods of accumulating profit highlights the lack of legitimate means to achieve economic or social advantage, as does the organization of white-collar crime or political corruption (though it is debatable whether these are based on wealth, power or both). The ability to affect social norms and practices through political and economic influence (and the enforcement or normalization of criminogenic needs) may be defined by differential association theory.
Social disorganization theory is intended to be applied to neighborhood-level street crime; thus the context of gang activity, loosely formed criminal associations or networks, socioeconomic demographic impacts, legitimate access to public resources, employment or education, and mobility give it relevance to organized crime. Where the upper and lower classes live in close proximity, this can result in feelings of anger, hostility, social injustice and frustration. Criminals experience poverty and witness affluence from which they are deprived and which is virtually impossible for them to attain through conventional means. The concept of neighborhood is central to this theory, as it defines the social learning, locus of control, cultural influences and access to social opportunity experienced by criminals and the groups they form. Fear of, or lack of trust in, mainstream authority may also be a key contributor to social disorganization; organized crime groups replicate such figures and thus ensure control over the counter-culture. This theory has tended to view violent or antisocial behavior by gangs as reflective of their social disorganization rather than as a product or tool of their organization.
Sociologist Robert K. Merton believed deviance depended on society's definition of success, and the desires of individuals to achieve success through socially defined avenues. Criminality becomes attractive when expectations of being able to fulfill goals (and therefore achieve success) by legitimate means cannot be met. Criminal organizations capitalize on such states of normlessness by imposing criminogenic needs and illicit avenues to achieve them. This has been used as the basis for numerous meta-theories of organized crime through its integration of social learning, cultural deviance, and criminogenic motivations. If crime is seen as a function of anomie, organized behavior produces stability, increases protection or security, and may be directly proportional to market forces as expressed by entrepreneurship- or risk-based approaches. It is the inadequate supply of legitimate opportunities that constrains the ability of individuals to pursue valued societal goals and reduces the likelihood that using legitimate opportunities will enable them to satisfy such goals (due to their position in society).
Criminals violate the law because they belong to a unique subculture - the counter-culture - their values and norms conflicting with those of the working-, middle- or upper-classes upon which criminal laws are based. This subculture shares an alternative lifestyle, language and culture, and is generally typified by being tough, taking care of their own affairs and rejecting government authority. Role models include drug dealers, thieves and pimps, as they have achieved success and wealth not otherwise available through socially-provided opportunities. It is through modeling organized crime as a counter-cultural avenue to success that such organizations are sustained.
The alien conspiracy theory and queer ladder of mobility theories state that ethnicity and 'outsider' status (immigrants, or those not within the dominant ethnocentric groups) and their influences are thought to dictate the prevalence of organized crime in society. The alien theory posits that the contemporary structures of organized crime gained prominence during the 1860s in Sicily and that elements of the Sicilian population are responsible for the foundation of most European and North American organized crime, made up of Italian-dominated crime families. Bell's theory of the 'queer ladder of mobility' hypothesized that 'ethnic succession' (the attainment of power and control by one more marginalized ethnic group over other less marginalized groups) occurs by promoting the perpetration of criminal activities within a disenfranchised or oppressed demographic. Whilst early organized crime was dominated by the Irish Mob (early 1800s), they were gradually supplanted by the Sicilian Mafia and Italian-American Mafia, the Aryan Brotherhood (1960s onward), the Colombian Medellin cartel and Cali cartel (mid-1970s–1990s), and more recently the Mexican Tijuana Cartel (late 1980s onward), the Mexican Los Zetas (late 1990s onward), the Russian Mafia (1988 onward), terrorism-related organized crime such as Al-Qaeda (1988 onward), the Taliban (1994 onward), and the Islamic State of Iraq and the Levant (ISIL) (2010s onward). Many argue this misinterprets and overstates the role of ethnicity in organized crime. A contradiction of this theory is that syndicates had developed long before large-scale Sicilian immigration in the 1860s, with these immigrants merely joining a widespread phenomenon of crime and corruption.
One Foot in the Grave
One Foot in the Grave is a British television sitcom written by David Renwick. There were six series and seven Christmas specials over a period of eleven years from early 1990 to late 2000. The first five series were broadcast between January 1990 and January 1995. For the next five years, the show appeared only as Christmas specials, followed by one final series in 2000.
The series features the exploits of Victor Meldrew, played by Richard Wilson, and his long-suffering wife, Margaret, played by Annette Crosbie. Wilson initially turned down the part of Meldrew, and David Renwick considered Les Dawson for the role until Wilson changed his mind. The programmes invariably deal with Meldrew's battle against the problems he creates for himself. Set in a typical suburb in southern England, the series sees Victor take involuntary early retirement; his various efforts to keep himself busy, while encountering misfortunes and misunderstandings, form the themes of the sitcom. Indoor scenes were filmed at BBC Television Centre, with most exterior scenes filmed on Tresillian Way in Walkford, Christchurch, Dorset. Despite its traditional production, the series subverts its domestic sitcom setting with elements of black humour and surrealism.
The series was occasionally the subject of controversy for some of its darker story elements, but nevertheless received a number of awards, including the 1992 BAFTA for Best Comedy. The programme came 80th in the British Film Institute's 100 Greatest British Television Programmes. The series, originally shown on BBC One, is now available on DVD and is regularly repeated in the United Kingdom. Four episodes were remade for BBC Radio 2. The series inspired a novel, published in 1992, featuring the most memorable moments from the first two series and the first Christmas special.
The series features the exploits and mishaps of irascible early retiree Victor Meldrew, who, after being made redundant from his job as a security guard, finds himself at war with the world and everything in it. Meldrew, cursed with misfortune and always complaining, is married to long-suffering wife Margaret, who is often left exasperated by his many misfortunes.
Amongst other witnesses to Victor's wrath are tactless family friend Jean Warboys and next-door couple Patrick (Victor's nemesis) and Pippa Trench. Patrick often discovers Victor in inexplicably bizarre or compromising situations, leading him to believe that he is insane. The Meldrews' neighbour on the other side, overly cheery charity worker Nick Swainey, also adds to Victor's frustration.
Although set in a traditional suburban setting, the show subverts this genre with a strong overtone of black comedy. Series One's "The Valley of Fear" is an episode which caused controversy when Victor finds a frozen cat in his freezer. Writer David Renwick also combined farce with elements of tragedy. For example, in the final episode, Victor is killed by a hit-and-run driver. Although there is no explicit reference to Victor and Margaret having had children, the episode "Timeless Time" contains a reference to someone named Stuart, the strong implication being that they once had a son who died as a child.
A number of episodes were also experimental in that they took place entirely in one setting. Such episodes include: Victor, Margaret and Mrs Warboys stuck in a traffic jam; Victor and Margaret in bed suffering insomnia; Victor left alone in the house waiting to see if he has to take part in jury service; Victor and Margaret having a long wait in their solicitor's waiting room; and Victor and Margaret trying to cope during a power cut on the hottest night of the year.
Despite Margaret's frequent exasperation with her husband's antics, the series shows that the couple have a deep affection for one another. This is demonstrated several times throughout the series.
Victor Meldrew (Richard Wilson) – Victor is the main protagonist of the sitcom and finds himself constantly battling against all that life throws at him as he becomes entangled in complicated misfortunes and farcical situations. Renwick once pointed out in an interview that the name "Victor" was ironic, since he almost always ends up a loser. From being buried alive to being prosecuted for attacking a feisty pit bull terrier with a collection of coconut meringues, Victor tries to adjust to life after an automatic security system made him redundant at the office where he worked as a security guard, but to no avail. He believes that everything is going wrong for him all the time and that he has the right to be upset because it is always someone else's fault. Victor does not see himself as retired and is always trying to find another job, but all his attempts end in failure. Victor is a tragic comedy character, and sympathy is directed towards him as he becomes embroiled in complex misunderstandings, bureaucratic vanity and, at times, sheer bad luck. The audience sees a philosophical ebb to his character, however, along with a degree of optimism. Yet his polite façade collapses when events get the better of him and a full verbal onslaught is forthcoming. "Victor-isms" include "I do not believe it!", "I don't believe it!", "Un-be-lievable!", "What in the name of bloody hell?" and "In the name of sanity!". Despite his grumpy demeanor, Victor is not totally devoid of compassion: in "Hearts of Darkness" he liberates elderly nursing home residents who were being mistreated by the staff, and in "Descent Into The Maelstrom" he calls the incident room number and gives the location of an emotionally disturbed girl who had abducted a baby and stolen Margaret's pearl earrings, which resulted in the girl being picked up by the police. However, because the girl was a friend of Margaret's and he knew she meant a lot to her, Victor never said anything. Victor has also shown a vast amount of loyalty to Margaret: throughout their 42 years of marriage, the thought of infidelity has never once occurred to him. In "Rearranging the Dust", Victor and Margaret recollect the days of their courtship at a party, after which Victor says "You were always my first choice", leaving Margaret stunned. In another episode, Margaret recounts the time Victor took her to the funfair and they ended up stuck in the hall of mirrors for over an hour; Victor said he didn't mind, as he was happy to stay there and look at all the reflections of her. Victor's greatest act of compassion came in the episode "The Wisdom of the Witch", in which he saves Patrick's life from his new secretary's psychopathic boyfriend by forcing the would-be murderer, along with himself, out of the window of the house in which they were trapped during a snowstorm.
Margaret Meldrew (née Pellow) (Annette Crosbie) – Victor's long-suffering, tolerant and kind-hearted wife. Margaret tries to maintain a degree of calmness and to rise above her husband's antics. However, she is often engulfed in these follies, mishaps and confusion and often vents her anger at Victor. In early episodes, her character acts more as a comic foil to Victor's misfortunes. Examples include fearfully asking if a cat found frozen in their freezer is definitely dead and mentioning a friend who died of a terminal illness. When Victor reminds her that the woman actually fell from a cliff, Margaret retorts she only did so because "she went to the seaside to convalesce".
In later episodes, Margaret develops into a more complex character. She is shown to be fiercely protective of her marriage to Victor, becoming easily suspicious and jealous; for example, she grows jealous of a Dutch marionette that Victor becomes occupied with repairing in the episode "Hole in the Sky", eventually leading her to destroy it. In "The Affair of the Hollow Lady", a greengrocer (played by Barbara Windsor) develops a soft spot for Victor and tries to convince Margaret that he has been unfaithful to her; in revenge, Margaret assaults her with a pair of boxing gloves. However, Margaret herself is shown to have contemplated infidelity with a man called Ben whom she met on holiday in the episode "Warm Champagne", though she decides against cheating on Victor. In this episode, she sums up her relationship with Victor by telling Ben, "He's the most sensitive person I've ever met and that's why I love him and why I constantly want to ram his head through a television screen." She also begins to develop a sense of cynicism, slowly coming to see the world the way her husband sees it. This is especially evident in "Things Aren't Simple Any More", in which she says that the world is "all speed and greed" and that "nobody does anything about anything". In "Rearranging the Dust", Margaret recounts the time she first chose Victor at a party and, during a power cut, "shared their bodies" in the garden; after this moment of passion, they went back inside, and when the lights came back on Margaret realised that she had "grabbed hold of the wrong person". Margaret's guarded demeanour seems to stem from a childhood incident. When she was five, she had two budgies; one day when she opened the door of their cage, one flew straight out and hit the window, killing itself, while the other stayed in the cage despite her best efforts to coax it out. The next day at school her teacher asked the class to write a story about something that had happened to them, so Margaret wrote about the budgies. The teacher made Margaret read it out loud in front of the whole class, and everyone laughed at her. She realised then that the teacher had done it deliberately to be cruel to her, and understood why the other budgie never wanted to leave its cage.
Margaret could be said to have a catchphrase: typically a long, exasperated utterance of the word "God", usually on realising the reasons behind one of Victor's mishaps. These mishaps are occasionally aided inadvertently by Margaret herself, such as when she leaves the phone off the hook or gives someone permission to enter the Meldrews' house while she is out. Margaret works at a florist's until series five, when she is made redundant after the shop goes under.
Jean Warboys (Doreen Mantle) – Mrs Warboys is a friend of Margaret (and a rather annoying one in Victor's eyes) who attaches herself to the Meldrews, accompanying them on many of their exploits. She was married to the unseen Chris until the fourth series, when he left her for the private detective she had hired because she believed he was having an affair; the couple divorced.
She often bears the brunt of Victor's temper, due to muddled misunderstandings and in part due to her aloof nature. One such occasion saw Victor asking her to pick up a suit of his from the dry-cleaners, only for her to return with a gorilla suit. On another occasion she persuaded Victor to take on a dog whose owner had just died; Victor spent time building a kennel in the garden, and when Mrs Warboys arrived with the dog she forgot to mention that it was stuffed - much to Victor and Margaret's consternation. She also won a competition in which the prize was either £500 or a life-size waxwork model of herself, to be delivered to the Meldrews' house; she chose the waxwork. As it turned out, she hated it as much as Victor and Margaret did, and the waxwork ended up in the dustbin.
Despite being friends, she has driven Margaret to distraction on several occasions, most notably in "Only a Story", when she stays with the Meldrews after her flat is flooded and enrages Margaret with her complaining and laziness. Jean is also shown as a somewhat absent-minded character: she keeps a pet cockatiel despite having a lifelong allergy to feathers. She often bores the Meldrews by showing them her complete collection of holiday pictures at the most unwelcome times. A running joke is her beating Victor at board games, including Trivial Pursuit and chess, while holding a conversation with someone else. Doreen Mantle described her character as "wanting to do the right thing but always finding out that it was the wrong thing". Victor's annoyance with her is often demonstrated by his shouting her name ("MRS WARBOYS!"), sometimes repeatedly, in an impatient tone.
Patrick Trench (Angus Deayton) – Patrick lives next door to Victor with his wife Pippa and often catches Victor engrossed in seemingly preposterous situations, all of which are perfectly innocuous in context. The couple's relationship with their neighbours begins badly after Victor mistakes Patrick and Pippa for distant relations when they arrive outside with three suitcases, not realising that they are his next-door neighbours returning from a lengthy holiday that had begun the day Victor and Margaret moved in. Victor subsequently invites the bemused pair to stay; this and later incidents cause Patrick to suspect that Victor is quite insane, possibly bordering on malicious.
However, Patrick's rift with Victor eventually transforms him into a rather cynical character (much like Victor), and he often responds in similarly vindictive ways as a means of trying to settle the score, for example by writing complaints and grievances on Post-it notes. This aspect of Patrick's character comes to a head in the episode "The Executioner's Song", in which his face temporarily morphs into Victor's as he looks into a mirror.
It is mentioned several times that Patrick would like to have children. After Pippa miscarries and Patrick is, so he claims, rendered infertile by a freak accident (for which he unfairly blames Victor), he adopts a dachshund called Denzil, which Pippa describes as his "baby substitute". Denzil frequently appears with Patrick through series 3–5. Despite their animosity towards each other, Victor ends up saving Patrick's life in "The Wisdom of the Witch".
Pippa Trench (née Croker) (Janine Duvitski) – Patrick's wife sought friendly relations with the Meldrews and, after a while, became good friends with Margaret. The two women usually attempt to get the men to make peace with each other at least once per series. Eventually Patrick proposes that the Trenches move house, but they soon realise that the Meldrew curse has followed them: Victor sent workmen to their home, thinking they were removal men who had initially come to the wrong house. They were in fact from a house clearance firm Margaret had employed to clear her late cousin Ursula's country mansion, and they consequently emptied Patrick and Pippa's house of all its furniture and sold it for a mere four hundred and seventy-five pounds. Pippa is slightly dim-witted (Victor once described her as a "gormless twerp" in an answering machine message, unaware she was listening); for example, she believes Victor has murdered an elderly blind man simply because the victim was found clutching a double-one domino and Victor had two pimples on his nose.
New neighbours Derek and Betty McVitie replaced the Trenches for the 1997 special "Endgame"; however, this turned out to be their only appearance in the series, and by the penultimate episode they were said to have emigrated, prompting Nick Swainey to leap straight in with an offer for their old house. Series six saw the Trenches return as prominent characters, albeit living in a house some distance from the Meldrews. Despite appearing in five out of six series and three Christmas specials, neither of the Trenches ever shares a scene with Mrs Warboys, and Pippa shares only one scene with Nick Swainey (in the episode "Who Will Buy?").
Nick Swainey (Owen Brenman) – The excessively cheerful and often oblivious Mr Swainey appeared in the first episode, encouraging Victor to join his OAPs' trip to Eastbourne and being greeted with Victor's trademark abuse. When the Meldrews move house, they discover he is their neighbour, living on the other side of the Meldrews from the Trenches. He remains continuously optimistic; even being told to "piss off" by Victor is laughed off. Despite this run-in he later befriends Victor, and they frequently chat in their gardens, where Victor is often surprised by Mr Swainey's activities, ranging from archery and preparing amateur dramatics props to bizarre games he arranges for his bedridden senile mother, whom the audience never actually sees. Despite his cheery demeanour, he does occasionally drop his guard, once displaying apparent depression at being nothing more than "an overgrown boy scout". Following his mother's death, he moves house near the end of the series, but only goes as far as the Trenches'/McVities' old house, claiming he had always wanted to live in an "end house" without leaving the area. This takes Victor by surprise; he does not learn where Mr Swainey has moved until, while he is reminiscing in the garden about his neighbour's departure, Mr Swainey suddenly appears from the other side.
Ronnie and Mildred (Gordon Peters and Barbara Ashcroft) – Ronnie and Mildred were a constantly cheerful but incredibly boring couple who provided yet another annoyance for the Meldrews, who dreaded any upcoming visit to them; Victor once said that he hoped they were both dead. In "The Worst Horror of All", when the couple attempted a surprise visit, the Meldrews hid in their house to give the impression they were away on holiday and then took the phone off the hook for several days afterwards, though these efforts to avoid them were in vain. The couple are referenced a number of times in the series for giving the Meldrews bizarre and always unwanted presents, seldom opened and usually involving a garish photograph of themselves. In the final series, however, it became clear that their cheerfulness was a façade when, in a particularly dark scene, Mildred hanged herself "during a game of Happy Families". The shot of Mildred's feet dangling outside the window is usually cut from pre-watershed screenings.
Alfred Meldrew (Richard Pearson) – Victor's absent-minded brother, who lives in New Zealand. In the episode "The Broken Reflection", he comes to visit after 25 years, to Victor's disdain. Alfred is an eccentric character, at one point walking around with his hat on fire, and he brings over his and Victor's great-grandfather's skull. He is also clumsy: he mistakes the tablecloth for a napkin and drops the entire contents of the table over the floor when he stands up, and he breaks a mirror in the middle of the night after mistaking his own reflection for a burglar. Victor starts to warm to Alfred towards the end of his visit, but Alfred leaves early the next day after finding an unpleasant message about him that Victor had accidentally recorded on a dictaphone. He is not seen again, but keeps in touch with the Meldrews; Victor is seen looking at some photographs Alfred has sent over in "The Trial".
Cousin Wilfred (John Rutland) – Mrs Warboys' cousin, who first appeared in the third series. In the final series the character returned, but the effects of a stroke had rendered him mute and forced him to "speak" with the aid of an electronic voice generator. His poor typing on the generator led to several misunderstandings, such as asking Victor for a "bra of soup" (as opposed to a "bar of soap") and describing a visit to his "brothel" (as opposed to his "brother").
Great Aunt Joyce and Uncle Dick – Unseen characters: an ageing and grim couple whom Victor and Margaret dread having anything to do with. Great Aunt Joyce is mentioned as having a glass eye and a habit of knitting bizarre items (such as six-fingered gloves) for Victor. Uncle Dick has a wooden arm; in the final Comic Relief episode (2001), it transpires that, after Dick went to hospital having tried to remove a kidney stone with a wire coat hanger, a nurse mistakenly placed a drip in the false arm for 18 hours.
Mimsy Berkovitz - Another unseen character, she is the local agony aunt, whom many of the characters turn to for advice. In the episode "The Secret of the Seven Sorcerers", Patrick is heard talking to her on the radio, seeking her advice on how to cope when Victor and Margaret invite him and Pippa around to dinner.
Mrs Birkett (Gabrielle Blunt) – An elderly neighbour. She accidentally gets trapped in the Meldrews' loft when Victor closes the trap door whilst she is up there looking for jumble that Margaret has prepared. She continues to be mentioned throughout the rest of the series, but is not seen again.
Martin Trout (Peter Cook) – A paparazzo in "One Foot in the Algarve". He manages to take a number of compromising photographs involving a high-ranking politician, comparing their potential impact to the Profumo affair. On his way to sell the images, he loses the roll of film whilst arguing with the Meldrews at a phone box and subsequently pursues them across the Algarve to retrieve it. He suffers a number of disasters, both related and unrelated to Victor and Margaret's own misfortunes, only to find that the film had fallen into the lining of his jacket and had been with him for much of his journey; he then loses it again in the door of the Meldrews' car. Retrieving the roll after a brief spell in hospital, Trout attempts to leave the Algarve in a taxi but is involved in a car crash.
The production of the show was in a conventional sitcom format, with episodes taped live in front of a studio audience, interposed with pre-filmed location material.
Most of the first five series of "One Foot in the Grave" were produced and directed by Susan Belbin, the exceptions being "Love and Death", which was partly directed by veteran sitcom director Sydney Lotterby and "Starbound", for which Gareth Gwenlan (who in fact had originally commissioned the series in 1989) stepped in to direct some sequences after Belbin was taken ill. Afterward, Belbin retired owing to ill-health, and the final series was produced by Jonathan P. Llewellyn and directed by Christine Gernon. Wilson and Renwick felt that Gernon's experience of working with Belbin on earlier series of "One Foot" as a production secretary and assistant, as well as other shows, meant that her style was similar to Belbin's, aiding the transition between directors.
"One Foot" used Bournemouth to film some exterior sequences because of its favourable climate, easy access to London and economical benefits relative to filming in the capital. After the first series was filmed, the house—near Pokesdown, Bournemouth—which had been used for the Meldrews' house in location sequences, changed hands and the new owners demanded nearly treble the usage fees that the previous owners had asked for. Rather than agree to this, the production team decided to find a new house and the first episode of the second series was rewritten to have the Meldrews' house destroyed in a fire (this was filmed on waste ground in Northcote Road, Springbourne). This also gave the opportunity for a new interior set to be designed, as Belbin had been unhappy with the original set designed for the series, which she felt was too restrictive to shoot in.
From series two, the exterior scenes of the Meldrews' home were filmed at Tresillian Way, Walkford, near New Milton in Hampshire. These later series make extensive use of specific street and garden locations in most episodes, particularly for scenes involving the Meldrews' neighbours. Most outside locations were filmed in and around Bournemouth and Christchurch, including Richmond Hill, Undercliff Drive and Boscombe Pier, Bournemouth Town Hall, Lansdowne College, Christchurch Hospital and the former Royal Victoria Hospital (Boscombe). Later episodes, such as "Hearts of Darkness", were filmed entirely on location. Victor's death at the hands of a hit-and-run driver in the final episode was filmed at Shawford railway station, Hampshire; fans left floral tributes at the site.
Over the show's history, it featured a number of notable comic actors in one-off roles. These included Susie Blake, John Bird, Tim Brooke-Taylor, Peter Cook, Diana Coupland, Phil Daniels, Edward de Souza, Hannah Gordon, Georgina Hale, Jimmy Jewel, Rula Lenska, Stephen Lewis, Brian Murphy, Christopher Ryan, Barbara Windsor, Joan Sims and Ray Winstone. Two of Angus Deayton's former "Radio Active" and "KYTV" co-stars, Geoffrey Perkins and Michael Fenton Stevens, were cast in separate episodes as, respectively, the brother and the brother-in-law of Deayton's character. A few actors little-known at the time also appeared in one-off roles before going on to greater fame, including Lucy Davis, Joanna Scanlan, Eamonn Walker and Arabella Weir.
The show was produced in a 4:3 aspect ratio from 1990 to 1997. Three years later, the show returned to television for its final series, which was produced in 16:9 widescreen. All episodes are in 576i standard definition.
The "One Foot in the Grave" theme song was written, composed and sung by Eric Idle. A longer version was produced for the special "One Foot in the Algarve", released as a single with five remixes and a karaoke version in November 1994. Idle included a live version of the song on his album "Eric Idle Sings Monty Python". It is preluded by a similar adaptation of "Bread of Heaven" to that used in the episode "The Beast in the Cage" by disgruntled car mechanics.
The title music on the TV series is accompanied at the beginning and end of each episode by footage of Galápagos tortoises.
The series also made extensive use of incidental music, composed by Ed Welch, which often hinted at a particular genre to fit the mood of the scenes, frequently incorporating well-known pieces of music such as "God Rest You Merry, Gentlemen" or the "Intermezzo" from Jean Sibelius' "Karelia Suite". In the Christmas special "Endgame", during Margaret's alleged death scene, a compilation of clips from past episodes is accompanied by the song "River Runs Deep" performed by J. J. Cale. The final episode ended with a montage of some of the mishaps Victor encountered, which were mentioned in the episode, backed by "End of the Line" by the Traveling Wilburys.
The programme received a number of prestigious awards. In 1992, it won a BAFTA as Best Comedy (Programme or Series). During its ten-year run, the series was nominated a further six times. Richard Wilson also won Best Light Entertainment Performance in 1992 and 1994 and Annette Crosbie was nominated for the same award in 1994.
The series also won the Best Television Sitcom in 1992 from the Royal Television Society and the British Comedy Award for Best Sitcom in 1992, 1995 and 2001.
In 2004, "One Foot in the Grave" came tenth in a BBC poll to find "Britain's Best Sitcom" with 31,410 votes. The programme also came 80th in the British Film Institute's 100 Greatest British Television Programmes
A number of complaints were made during the series' run about its depiction of animal deaths. For example, in the episode "The Valley of Fear", a dead cat is found in the Meldrews' freezer; in another, a tortoise is roasted in a brazier. However, this was later cited as a positive feature of the programme's daring scripts in "Britain's Best Sitcom" by its advocate Rowland Rivron. The programme was censured, however, for a scene in the episode "Hearts of Darkness" in which an elderly resident is abused in an old people's home; following complaints, the scene was slightly cut when the episode was repeated. In the DVD commentary for the episode, David Renwick stated his continued opposition to the cuts. Another controversial scene, in the episode "Tales of Terror", saw the Meldrews visit Ronnie and Mildred on the understanding that Mildred had gone upstairs during a game of Happy Families and not returned; Ronnie then shows her feet hanging outside the window, revealing that she has committed suicide. The Broadcasting Standards Commission received complaints about this scene.
When the final episode, "Things Aren't Simple Any More", originally aired on 20 November 2000 at 9pm, it coincided with the broadcast of the first jackpot win on the UK version of "Who Wants to Be a Millionaire?", which had been filmed the Sunday before transmission. ITV was accused of engineering this in order to damage the final episode's expected high ratings, but was later cleared by the Independent Television Commission.
Due to the series' popularity, people who constantly complain and are irritated by minor things are often compared to Victor Meldrew by the British media. Renwick disputes this usage, however, claiming that Victor's reactions are entirely in proportion to the things that happen to him.
Renwick integrated some of the plots and dialogue from the series into a novel, which was first published by BBC Books in 1992. Renwick also adapted four episodes for BBC Radio 2, which first aired between 21 January 1995 and 11 February 1995. The episodes are "Alive and Buried", "In Luton Airport, No One Can Hear You Scream", "Timeless Time" and "The Beast in the Cage". They are regularly repeated on the digital speech station BBC Radio 4 Extra and are available on audio CD.
Wilson dislikes saying his character's catchphrase ("I don't believe it!") and only performs the line for charity events, for a small fee. This became a joke in the actor's guest appearance as himself in the "Father Ted" episode "The Mainland", in which Ted annoys him by constantly repeating the catchphrase. The situation was conceived when "Father Ted" writers Graham Linehan and Arthur Mathews sat behind Wilson at a performance of "Le Cirque du Soleil" at the Royal Albert Hall. They considered how "tasteless and wrong" it would be to lean forward and yell the catchphrase at him every time an acrobat did a stunt, and then realised that this was exactly what their fictional priests would do. The joke was also played upon when Wilson made a guest appearance on the comedy TV quiz show "Shooting Stars", in which Vic Reeves and Bob Mortimer purposely misquoted his catchphrase by referring to him as "Richard 'I don't believe you' Wilson".
All six series and the specials were initially released on BBC Worldwide VHS tapes during the late 1990s and early 2000s. The Comic Relief shorts from 1993 and 2001 have not been released on DVD. A "One Foot in the Grave" Very Best Of DVD, featuring five of the greatest episodes, was released on 22 October 2001 in Region 2; a Very Best Of compilation was then released in Region 4 on 8 July 2004. Each series was gradually released on DVD in Region 2 between 2004 and 2006, with a complete series 1–6 box set towards the end of 2006. A slimmer series 1–6 box set was released in Region 2 in 2010: the first pressing packaged the individual series in 7mm cases (rather than the standard 14mm ones), and it was later re-released with the discs on trays that turn like the pages of a book, reducing the need to print covers for each series. | https://en.wikipedia.org/wiki?curid=22626 |
Ottoman Turks
The Ottoman Turks (or Osmanlı Turks) were the Turkish-speaking people of the Ottoman Empire (c. 1299–1922/1923). Reliable information about the early history of the Ottoman Turks remains scarce, but they take their Turkish name, "Osmanlı", from the house of Osman I (reigned 1299–1326), the founder of the dynasty that ruled the Ottoman Empire for its entire 624 years ("Osman" became corrupted in some European languages as "Ottoman"). Expanding from its base in Bithynia, the Ottoman principality began incorporating other Turkish-speaking Muslims and non-Turkish Christians. Crossing into Europe from the 1350s, coming to dominate the Mediterranean and capturing Constantinople (the capital city of the Byzantine Empire) in 1453, the Ottoman Turks blocked all major land routes between Asia and Europe; Western Europeans had to find other ways to trade with the East, and vice versa.
The "Ottomans" first became known to the West in the 14th century when they migrated westward into the Seljuk Empire in Anatolia. The Ottoman Turks created a beylik in Western Anatolia under Ertugrul, the capital of which was Söğüt in western Anatolia. Ertugrul, leader of the nomadic Kayı tribe, first established a principality as part of the decaying Seljuk empire. His son Osman expanded the principality; the empire and the people were named "Ottomans" by Europeans after him ("Ottoman" being a corruption of "Osman"). Osman's son Orhan expanded the growing Ottoman Empire, taking Nicaea (present-day İznik) and crossed the Dardanelles in 1362. The Ottoman Empire came into its own when Mehmed II captured the reduced Byzantine Empire's well-defended capital, Constantinople (present-day Istanbul), in 1453.
The Ottoman Empire came to rule much of the Balkans, the Caucasus, the Middle East (excluding Iran), and North Africa over the course of several centuries, with an advanced army and navy. The Empire lasted until the end of the First World War, when it was defeated by the Allies and partitioned. Following the successful Turkish War of Independence that ended with the Turkish national movement retaking most of the land lost to the Allies, the movement abolished the Ottoman sultanate on November 1, 1922 and proclaimed the Republic of Turkey on October 29, 1923. The movement nullified the Treaty of Sèvres and negotiated the significantly more favorable Treaty of Lausanne (1923), assuring recognition of modern Turkish national borders, termed "Misak-ı Milli" (National Pact).
Not all Ottomans were Muslims and not all Ottoman Muslims were Turks, but by 1923, the majority of people living within the borders of the new Turkish republic identified as Turks. Notable exceptions were the Kurds and the few remaining Armenians, Georgians and Greeks.
The conquest of Constantinople made the Ottomans the rulers of one of the most profitable empires of the age, connected to the flourishing Islamic cultures of the time and sitting at the crossroads of trade into Europe. The Ottomans made major developments in calligraphy, writing, law, architecture, and military science, and became the standard of opulence.
Because Islam is a monotheistic religion that focuses heavily on learning its central text, the Quran, and because Islamic culture has historically tended to discourage or prohibit figurative art, calligraphy became one of the foremost of the arts.
The early Yâkût period was supplanted in the late 15th century by a new style pioneered by Şeyh Hamdullah (1429–1520), which became the basis for Ottoman calligraphy, focusing on the Nesih version of the script, which became the standard for copying the Quran (see Islamic calligraphy).
The next great change in Ottoman calligraphy came from the style of Hâfiz Osman (1642–1698), whose rigorous and simplified style found favour with an empire at its peak of territorial extent and governmental burdens.
The late calligraphic style of the Ottomans was created by Mustafa Râkim (1757–1826) as an extension and reform of Osman's style, placing greater emphasis on technical perfection, which broadened the calligraphic art to encompass the sülüs script as well as the Nesih script.
Ottoman poetry included epic-length verse but is better known for shorter forms such as the gazel. For example, the epic poet Ahmedi (died 1412) is remembered for his "Alexander the Great". His contemporary Sheykhi wrote verses on love and romance. Yaziji-Oglu produced a religious epic on Mohammed's life, drawing from the stylistic advances of the previous generation and Ahmedi's epic forms.
By the 14th century, the Ottoman Empire's prosperity made manuscript works available to merchants and craftsmen, and produced a flowering of miniatures that depicted pageantry, daily life, commerce, cities and stories, and chronicled events.
By the late 18th century, European influences in painting were clear, with the introduction of oils, perspective, figurative paintings, use of anatomy and composition. | https://en.wikipedia.org/wiki?curid=22629 |
Object Management Group
The Object Management Group (OMG) is a computer industry standards consortium. OMG Task Forces develop enterprise integration standards for a range of technologies.
The goal of the OMG was a common, portable and interoperable object model, with methods and data, that would work across all types of development environments and platforms.
The group provides only specifications, not implementations. But before a specification can be accepted as a standard by the group, the members of the submitter team must guarantee that they will bring a conforming product to market within a year. This is an attempt to prevent unimplemented (and unimplementable) standards. Other private companies or open source groups are encouraged to produce conforming products and OMG is attempting to develop mechanisms to enforce true interoperability.
OMG hosts four technical meetings per year for its members and interested nonmembers. The Technical Meetings provide a neutral forum to discuss, develop and adopt standards that enable software interoperability.
Founded in 1989 by eleven companies (including Hewlett-Packard, IBM, Sun Microsystems, Apple Computer, American Airlines, iGrafx, and Data General), OMG's initial focus was to create a heterogeneous distributed object standard. The founding executive team included Christopher Stone and John Slitz. Current leadership includes Chairman and CEO Richard Soley, President and COO Bill Hoffman and Vice President and Technical Director Larry L. Johnson.
Since 2000, the group's international headquarters has been located in Needham, Massachusetts.
In 1997, the Unified Modeling Language (UML) was added to the list of OMG adopted technologies. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering.
In June 2005, the Business Process Management Initiative (BPMI.org) and OMG announced the merger of their respective Business Process Management (BPM) activities to form the Business Modeling and Integration Domain Task Force (BMI DTF).
In 2006 the Business Process Model and Notation (BPMN) was adopted as a standard by OMG. In 2007 the Business Motivation Model (BMM) was adopted as a standard by the OMG. The BMM is a metamodel that provides a vocabulary for corporate governance and strategic planning and is particularly relevant to businesses undertaking governance, regulatory compliance, business transformation and strategic planning activities.
In 2009 OMG, together with the Software Engineering Institute at Carnegie Mellon, launched the Consortium of IT Software Quality (CISQ). In 2011 OMG formed the Cloud Standards Customer Council. Founding sponsors included CA, IBM, Kaavo, Rackspace and Software AG. The CSCC is an OMG end-user advocacy group dedicated to accelerating cloud's successful adoption and drilling down into the standards, security and interoperability issues surrounding the transition to the cloud.
In September 2011, the OMG Board of Directors voted to adopt the Vector Signal and Image Processing Library (VSIPL) as the latest OMG specification. Work for adopting the specification was led by Mentor Graphics' Embedded Software Division, RunTime Computing Solutions, The Mitre Corporation as well as the High Performance Embedded Computing Software Initiative (HPEC-SI). VSIPL is an application programming interface (API). VSIPL and VSIPL++ contain functions used for common signal processing kernel and other computations. These functions include basic arithmetic, trigonometric, transcendental, signal processing, linear algebra, and image processing. The VSIPL family of libraries has been implemented by multiple vendors for a range of processor architectures, including x86, PowerPC, Cell, and NVIDIA GPUs. VSIPL and VSIPL++ are designed to maintain portability across a range of processor architectures. Additionally, VSIPL++ was designed from the start to include support for parallelism.
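To give a flavour of the operation categories listed above, here is a minimal sketch in Python using NumPy rather than VSIPL itself (VSIPL is a C API, and these are NumPy calls, not VSIPL functions); it merely illustrates the kinds of kernels the standard covers.

import numpy as np

# Illustration only: NumPy stand-ins for the operation categories that the
# VSIPL family standardizes across processor architectures.
x = np.linspace(0.0, 1.0, 8)

scaled = x * 2.0 + 1.0                               # basic arithmetic
grown = np.exp(x)                                    # transcendental functions
spectrum = np.fft.fft(x)                             # signal-processing transforms
a = np.array([[3.0, 1.0], [1.0, 2.0]])
solution = np.linalg.solve(a, np.array([9.0, 8.0]))  # linear algebra

print(spectrum[0].real)  # 4.0, the DC term (sum of the samples)
print(solution)          # [2. 3.]

The portability goal mentioned above means that a program written against such kernels can, in principle, be recompiled unchanged against any vendor's implementation for a given processor.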
In late 2012 and early 2013, the group's Board of Directors adopted the Automated Function Point (AFP) specification. The push for adoption was led by the Consortium for IT Software Quality (CISQ). AFP provides a standard for automating the popular function point measure according to the counting guidelines of the International Function Point Users Group (IFPUG).
On March 27, 2014, OMG announced it would be managing the newly formed Industrial Internet Consortium (IIC).
Of the many standards maintained by the OMG, 11 have been ratified as ISO standards. | https://en.wikipedia.org/wiki?curid=22637 |
Oxford English Dictionary
The Oxford English Dictionary (OED) is the principal historical dictionary of the English language, published by Oxford University Press (OUP). It traces the historical development of the English language, providing a comprehensive resource to scholars and academic researchers, as well as describing usage in its many variations throughout the world. The second edition, comprising 21,728 pages in 20 volumes, was published in 1989.
Work began on the dictionary in 1857, but it was only in 1884 that it began to be published in unbound fascicles as work continued on the project, under the name of "A New English Dictionary on Historical Principles; Founded Mainly on the Materials Collected by The Philological Society". In 1895, the title "The Oxford English Dictionary" was first used unofficially on the covers of the series, and in 1928 the full dictionary was republished in ten bound volumes. In 1933, the title "The Oxford English Dictionary" fully replaced the former name in all occurrences in its reprinting as twelve volumes with a one-volume supplement. More supplements came over the years until 1989, when the second edition was published. Since 2000, compilation of a third edition of the dictionary has been underway, approximately half of which is complete.
The first electronic version of the dictionary was made available in 1988. The online version has been available since 2000, and as of April 2014 was receiving over two million visits per month. The third edition of the dictionary will most likely only appear in electronic form; the Chief Executive of Oxford University Press has stated that it is unlikely that it will ever be printed.
As a historical dictionary, the "Oxford English Dictionary" features entries in which the earliest ascertainable recorded sense of a word, whether current or obsolete, is presented first, and each additional sense is presented in historical order according to the date of its earliest ascertainable recorded use. Following each definition are several brief illustrating quotations presented in chronological order from the earliest ascertainable use of the word in that sense to the last ascertainable use for an obsolete sense, to indicate both its life span and the time since its desuetude, or to a relatively recent use for current ones.
The format of the "OED"'s entries has influenced numerous other historical lexicography projects. The forerunners to the "OED", such as the early volumes of the "Deutsches Wörterbuch", had initially provided few quotations from a limited number of sources, whereas the "OED" editors preferred larger groups of quite short quotations from a wide selection of authors and publications. This influenced later volumes of this and other lexicographical works.
According to the publishers, it would take a single person 120 years to "key in" the 59 million words of the "OED" second edition, 60 years to proofread them, and 540 megabytes to store them electronically. As of 30 November 2005, the "Oxford English Dictionary" contained approximately 301,100 main entries. Supplementing the entry headwords, there are 157,000 bold-type combinations and derivatives; 169,000 italicized-bold phrases and combinations; 616,500 word-forms in total, including 137,000 pronunciations; 249,300 etymologies; 577,000 cross-references; and 2,412,400 usage quotations. The dictionary's latest, complete print edition (second edition, 1989) was printed in 20 volumes, comprising 291,500 entries in 21,730 pages. The longest entry in the "OED2" was for the verb "set", which required 60,000 words to describe some 430 senses. As entries began to be revised for the "OED3" in sequence starting from M, the longest entry became "make" in 2000, then "put" in 2007, then "run" in 2011.
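The publishers' figures invite a quick sanity check. The sketch below, in Python, turns the quoted totals into a daily keying rate and a bytes-per-word figure; the five-day working week is an assumption made here for illustration, not a sourced detail.

# Back-of-the-envelope check of the publishers' claims, assuming a
# five-day working week (an assumption, not a figure from the source).
words = 59_000_000            # words of text in the second edition
keying_years = 120            # claimed single-person keying time
workdays_per_year = 5 * 52    # assumed working days per year

words_per_day = words / (keying_years * workdays_per_year)
print(f"{words_per_day:,.0f} words keyed per working day")   # about 1,891

storage_bytes = 540 * 1024 * 1024                            # 540 megabytes
print(f"{storage_bytes / words:.1f} bytes per word")         # about 9.6

At roughly 1,900 words per working day, and under ten bytes per word of stored text, the quoted totals are at least internally consistent with one another.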
Despite its considerable size, the "OED" is neither the world's largest nor the earliest exhaustive dictionary of a language. An earlier large dictionary is the Grimm brothers' dictionary of the German language, begun in 1838 and completed in 1961. The first edition of the "Vocabolario degli Accademici della Crusca" is the first great dictionary devoted to a modern European language (Italian) and was published in 1612; the first edition of "Dictionnaire de l'Académie française" dates from 1694. The official dictionary of Spanish is the "Diccionario de la lengua española" (produced, edited, and published by the Real Academia Española), and its first edition was published in 1780. The Kangxi dictionary of Chinese was published in 1716.
The dictionary began as a Philological Society project of a small group of intellectuals in London (and unconnected to Oxford University): Richard Chenevix Trench, Herbert Coleridge, and Frederick Furnivall, who were dissatisfied with the existing English dictionaries. The society expressed interest in compiling a new dictionary as early as 1844, but it was not until June 1857 that they began by forming an "Unregistered Words Committee" to search for words that were unlisted or poorly defined in current dictionaries. In November, Trench's report was not a list of unregistered words; instead, it was the study "On Some Deficiencies in our English Dictionaries", which identified seven distinct shortcomings in contemporary dictionaries.
The society ultimately realized that the number of unlisted words would be far more than the number of words in the English dictionaries of the 19th century, and shifted their idea from covering only words that were not already in English dictionaries to a larger project. Trench suggested that a new, truly "comprehensive" dictionary was needed. On 7 January 1858, the society formally adopted the idea of a comprehensive new dictionary. Volunteer readers would be assigned particular books, copying passages illustrating word usage onto quotation slips. Later the same year, the society agreed to the project in principle, with the title "A New English Dictionary on Historical Principles" ("NED").
Richard Chenevix Trench (1807–1886) played the key role in the project's first months, but his appointment as Dean of Westminster meant that he could not give the dictionary project the time that it required. He withdrew and Herbert Coleridge became the first editor.
On 12 May 1860, Coleridge's dictionary plan was published and research was started. His house was the first editorial office. He arrayed 100,000 quotation slips in a 54 pigeon-hole grid. In April 1861, the group published the first sample pages; later that month, Coleridge died of tuberculosis, aged 30.
Thereupon Furnivall became editor; he was enthusiastic and knowledgeable, but temperamentally ill-suited for the work. Many volunteer readers eventually lost interest in the project, as Furnivall failed to keep them motivated. Furthermore, many of the slips were misplaced.
Furnivall believed that, since many printed texts from earlier centuries were not readily available, it would be impossible for volunteers to efficiently locate the quotations that the dictionary needed. As a result, he founded the Early English Text Society in 1864 and the Chaucer Society in 1868 to publish old manuscripts. Furnivall's preparatory efforts lasted 21 years and provided numerous texts for the use and enjoyment of the general public, as well as crucial sources for lexicographers, but they did not actually involve compiling a dictionary. Furnivall recruited more than 800 volunteers to read these texts and record quotations. While enthusiastic, the volunteers were not well trained and often made inconsistent and arbitrary selections. Ultimately, Furnivall handed over nearly two tons of quotation slips and other materials to his successor.
In the 1870s, Furnivall unsuccessfully attempted to recruit both Henry Sweet and Henry Nicol to succeed him. He then approached James Murray, who accepted the post of editor. In the late 1870s, Furnivall and Murray met with several publishers about publishing the dictionary. In 1878, Oxford University Press agreed with Murray to proceed with the massive project; the agreement was formalized the following year. Twenty years after its conception, the dictionary project finally had a publisher. It would take another 50 years to complete.
Late in his editorship, Murray learned that a prolific reader named W. C. Minor was a criminal lunatic. Minor was a Yale University-trained surgeon and military officer in the American Civil War, and was confined to Broadmoor Asylum for the Criminally Insane after killing a man in London. Minor invented his own quotation-tracking system, allowing him to submit slips on specific words in response to editors' requests. The story of Murray and Minor later served as the central focus of "The Surgeon of Crowthorne" (US title: "The Professor and the Madman"), a popular book about the creation of the "OED". This book was then the basis for the 2019 film "The Professor and the Madman", starring Mel Gibson and Sean Penn.
During the 1870s, the Philological Society was concerned with the process of publishing a dictionary with such an immense scope. They had pages printed by publishers, but no publication agreement was reached; both the Cambridge University Press and the Oxford University Press were approached. The OUP finally agreed in 1879 (after two years of negotiating by Sweet, Furnivall, and Murray) to publish the dictionary and to pay Murray, who was both the editor and the Philological Society president. The dictionary was to be published in fascicles at intervals, with the final form in four volumes totalling 6,400 pages. They hoped to finish the project in ten years.
Murray started the project, working in a corrugated iron outbuilding called the "Scriptorium", which was lined with wooden planks, book shelves, and 1,029 pigeon-holes for the quotation slips. He tracked and regathered Furnivall's collection of quotation slips, which were found to concentrate on rare, interesting words rather than common usages. For instance, there were ten times as many quotations for "abusion" as for "abuse". He appealed, through newspapers distributed to bookshops and libraries, for readers who would report "as many quotations as you can for ordinary words" and for words that were "rare, obsolete, old-fashioned, new, peculiar or used in a peculiar way". Murray had American philologist and liberal arts college professor Francis March manage the collection in North America; 1,000 quotation slips arrived daily at the Scriptorium and, by 1880, there were 2,500,000.
The first dictionary fascicle was published on 1 February 1884—twenty-three years after Coleridge's sample pages. The full title was "A New English Dictionary on Historical Principles; Founded Mainly on the Materials Collected by The Philological Society"; the 352-page volume, words from "A" to "Ant", cost 12s 6d. (or about $668.24 in 2013) The total sales were only 4,000 copies.
The OUP saw that it would take too long to complete the work with unrevised editorial arrangements. Accordingly, new assistants were hired and two new demands were made on Murray. The first was that he move from Mill Hill to Oxford, which he did in 1885. Murray had his Scriptorium re-erected on his new property.
Murray resisted the second demand: that if he could not meet schedule, he must hire a second, senior editor to work in parallel to him, outside his supervision, on words from elsewhere in the alphabet. Murray did not want to share the work, feeling that he would accelerate his work pace with experience. That turned out not to be so, and Philip Gell of the OUP forced the promotion of Murray's assistant Henry Bradley (hired by Murray in 1884), who worked independently in the British Museum in London beginning in 1888. In 1896, Bradley moved to Oxford University.
Gell continued harassing Murray and Bradley with his business concerns—containing costs and speeding production—to the point where the project's collapse seemed likely. Newspapers reported the harassment, particularly the "Saturday Review", and public opinion backed the editors. Gell was fired, and the university reversed his cost policies. If the editors felt that the dictionary would have to grow larger, it would; it was an important work, and worth the time and money to properly finish.
Neither Murray nor Bradley lived to see it. Murray died in 1915, having been responsible for words starting with "A–D", "H–K", "O–P", and "T", nearly half the finished dictionary; Bradley died in 1923, having completed "E–G", "L–M", "S–Sh", "St", and "W–We". By then, two additional editors had been promoted from assistant work to independent work, continuing without much trouble. William Craigie started in 1901 and was responsible for "N", "Q–R", "Si–Sq", "U–V", and "Wo–Wy." The OUP had previously thought London too far from Oxford but, after 1925, Craigie worked on the dictionary in Chicago, where he was a professor. The fourth editor was Charles Talbut Onions, who compiled the remaining ranges starting in 1914: "Su–Sz", "Wh–Wo", and "X–Z".
In 1919–1920, J. R. R. Tolkien was employed by the "OED", researching etymologies of the "Waggle" to "Warlock" range; later he parodied the principal editors as "The Four Wise Clerks of Oxenford" in the story "Farmer Giles of Ham".
By early 1894, a total of 11 fascicles had been published, or about one per year: four for "A–B", five for "C", and two for "E". Of these, eight were 352 pages long, while the last one in each group was shorter to end at the letter break (which eventually became a volume break). At this point, it was decided to publish the work in smaller and more frequent instalments; once every three months beginning in 1895 there would be a fascicle of 64 pages, priced at 2s 6d. If enough material was ready, 128 or even 192 pages would be published together. This pace was maintained until World War I forced reductions in staff. Each time enough consecutive pages were available, the same material was also published in the original larger fascicles. Also in 1895, the title "Oxford English Dictionary" was first used. It then appeared only on the outer covers of the fascicles; the original title was still the official one and was used everywhere else.
The 125th and last fascicle covered words from "Wise" to the end of "W" and was published on 19 April 1928, and the full dictionary in bound volumes followed immediately. William Shakespeare is the most-quoted writer in the completed dictionary, with "Hamlet" his most-quoted work. George Eliot (Mary Ann Evans) is the most-quoted female writer. Collectively, the Bible is the most-quoted work (in many translations); the most-quoted single work is "Cursor Mundi".
Additional material for a given letter range continued to be gathered after the corresponding fascicle was printed, with a view towards inclusion in a supplement or revised edition. A one-volume supplement of such material was published in 1933, with entries weighted towards the start of the alphabet, where the fascicles were decades old. The supplement included at least one word ("bondmaid") accidentally omitted when its slips were misplaced; many words and senses newly coined (famously "appendicitis", coined in 1886 and missing from the 1885 fascicle, which came to prominence when Edward VII's 1902 appendicitis postponed his coronation); and some previously excluded as too obscure (notoriously "radium", omitted in 1903, months before its discoverers Pierre and Marie Curie won the Nobel Prize in Physics). Also in 1933, the original fascicles of the entire dictionary were re-issued, bound into 12 volumes, under the title "The Oxford English Dictionary". This edition of 13 volumes, including the supplement, was subsequently reprinted in 1961 and 1970.
In 1933, Oxford had finally put the dictionary to rest; all work ended, and the quotation slips went into storage. However, the English language continued to change and, by the time 20 years had passed, the dictionary was outdated.
There were three possible ways to update it. The cheapest would have been to leave the existing work alone and simply compile a new supplement of perhaps one or two volumes; but then anyone looking for a word or sense and unsure of its age would have to look in three different places. The most convenient choice for the user would have been for the entire dictionary to be re-edited and retypeset, with each change included in its proper alphabetical place; but this would have been the most expensive option, with perhaps 15 volumes required to be produced. The OUP chose a middle approach: combining the new material with the existing supplement to form a larger replacement supplement.
Robert Burchfield was hired in 1957 to edit the second supplement; Onions turned 84 that year but was still able to make some contributions as well. The work on the supplement was expected to take about seven years. It actually took 29 years, by which time the new supplement "(OEDS)" had grown to four volumes, starting with "A", "H", "O", and "Sea". They were published in 1972, 1976, 1982, and 1986 respectively, bringing the complete dictionary to 16 volumes, or 17 counting the first supplement.
Burchfield emphasized the inclusion of modern-day language and, through the supplement, the dictionary was expanded to include a wealth of new words from the burgeoning fields of science and technology, as well as popular culture and colloquial speech. Burchfield said that he broadened the scope to include developments of the language in English-speaking regions beyond the United Kingdom, including North America, Australia, New Zealand, South Africa, India, Pakistan, and the Caribbean. Burchfield also removed, for unknown reasons, many entries that had been added to the 1933 supplement. In 2012, an analysis by lexicographer Sarah Ogilvie revealed that many of these entries were in fact foreign loanwords, despite Burchfield's claim that he included more such words. The proportion was estimated from a sample calculation to amount to 17% of the foreign loan words and words from regional forms of English. Some of these had only a single recorded usage, but many had multiple recorded citations, and it ran against what was thought to be the established "OED" editorial practice and a perception that he had opened up the dictionary to "World English".
This reprint was published in 1968 at $300. There were changes in the arrangement of the volumes; for example, volume 7 covered only N–Poy, with the remaining "P" entries transferred to volume 8.
By the time the new supplement was completed, it was clear that the full text of the dictionary would need to be computerized. Achieving this would require retyping it once, but thereafter it would always be accessible for computer searching – as well as for whatever new editions of the dictionary might be desired, starting with an integration of the supplementary volumes and the main text. Preparation for this process began in 1983, and editorial work started the following year under the administrative direction of Timothy J. Benbow, with John A. Simpson and Edmund S. C. Weiner as co-editors. In 2016, Simpson published his memoir chronicling his years at the OED. See "The Word Detective: Searching for the Meaning of It All at the Oxford English Dictionary – A Memoir." Basic Books, New York.
Thus began the "New Oxford English Dictionary (NOED)" project. In the United States, more than 120 typists of the International Computaprint Corporation (now Reed Tech) started keying in over 350,000,000 characters, their work checked by 55 proof-readers in England. Retyping the text alone was not sufficient; all the information represented by the complex typography of the original dictionary had to be retained, which was done by marking up the content in SGML. A specialized search engine and display software were also needed to access it. Under a 1985 agreement, some of this software work was done at the University of Waterloo, Canada, at the "Centre for the New Oxford English Dictionary", led by Frank Tompa and Gaston Gonnet; this search technology went on to become the basis for the Open Text Corporation. Computer hardware, database and other software, development managers, and programmers for the project were donated by the British subsidiary of IBM; the colour syntax-directed editor for the project, LEXX, was written by Mike Cowlishaw of IBM. The University of Waterloo, in Canada, volunteered to design the database. A. Walton Litz, an English professor at Princeton University who served on the Oxford University Press advisory council, was quoted in "Time" as saying "I've never been associated with a project, I've never even heard of a project, that was so incredibly complicated and that met every deadline."
By 1989, the "NOED" project had achieved its primary goals, and the editors, working online, had successfully combined the original text, Burchfield's supplement, and a small amount of newer material, into a single unified dictionary. The word "new" was again dropped from the name, and the second edition of the "OED," or the "OED2," was published. The first edition retronymically became the "OED1".
The "Oxford English Dictionary 2" was printed in 20 volumes. Up to a very late stage, all the volumes of the first edition were started on letter boundaries. For the second edition, there was no attempt to start them on letter boundaries, and they were made roughly equal in size. The 20 volumes started with "A", "B.B.C.", "Cham", "Creel", "Dvandva", "Follow", "Hat", "Interval", "Look", "Moul", "Ow", "Poise", "Quemadero", "Rob", "Ser", "Soot", "Su", "Thru", "Unemancipated", and "Wave".
The content of the "OED2" is mostly just a reorganization of the earlier corpus, but the retypesetting provided an opportunity for two long-needed format changes. The headword of each entry was no longer capitalized, allowing the user to readily see those words that actually require a capital letter. Murray had devised his own notation for pronunciation, there being no standard available at the time, whereas the "OED2" adopted the modern International Phonetic Alphabet. Unlike the earlier edition, all foreign alphabets except Greek were transliterated.
The British quiz show "Countdown" has awarded the leather-bound complete version to the champions of each series since its inception in 1982.
When the print version of the second edition was published in 1989, the response was enthusiastic. Author Anthony Burgess declared it "the greatest publishing event of the century", as quoted by the "Los Angeles Times". "Time" dubbed the book "a scholarly Everest", and Richard Boston, writing for "The Guardian", called it "one of the wonders of the world".
The supplements and their integration into the second edition were a great improvement to the "OED" as a whole, but it was recognized that most of the entries were still fundamentally unaltered from the first edition. Much of the information in the dictionary published in 1989 was already decades out of date, though the supplements had made good progress towards incorporating new vocabulary. Yet many definitions contained disproven scientific theories, outdated historical information, and moral values that were no longer widely accepted. Furthermore, the supplements had failed to recognize many words in the existing volumes as obsolete by the time of the second edition's publication, meaning that thousands of words were marked as current despite no recent evidence of their use.
Accordingly, it was recognized that work on a third edition would have to begin to rectify these problems. The first attempt to produce a new edition came with the "Oxford English Dictionary Additions Series," a new set of supplements to complement the "OED2" with the intention of producing a third edition from them. The previous supplements appeared in alphabetical installments, whereas the new series had a full A–Z range of entries within each individual volume, with a complete alphabetical index at the end of all words revised so far, each listed with the volume number which contained the revised entry.
However, in the end only three "Additions" volumes were published this way, two in 1993 and one in 1997, each containing about 3,000 new definitions. The possibilities of the World Wide Web and new computer technology in general meant that the processes of researching the dictionary and of publishing new and revised entries could be vastly improved. New text search databases offered far more material for the editors of the dictionary to work with, and with publication on the Web as a possibility, the editors could publish revised entries much more quickly and easily than ever before. A new approach was called for, and for this reason it was decided to embark on a new, complete revision of the dictionary.
Beginning with the launch of the first "OED Online" site in 2000, the editors of the dictionary began a major revision project to create a completely revised third edition of the dictionary ("OED3"), expected to be completed in 2037 with the projected cost of about £34 million.
Revisions were started at the letter "M", with new material appearing every three months on the "OED Online" website. The editors chose to start the revision project from the middle of the dictionary in order that the overall quality of entries be made more even, since the later entries in the "OED1" generally tended to be better than the earlier ones. However, in March 2008, the editors announced that they would alternate each quarter between moving forward in the alphabet as before and updating "key English words from across the alphabet, along with the other words which make up the alphabetical cluster surrounding them". With the relaunch of the "OED Online" website in December 2010, alphabetical revision was abandoned altogether.
The revision is expected roughly to double the dictionary in size. Apart from general updates to include information on new words and other changes in the language, the third edition brings many other improvements, including changes in formatting and stylistic conventions to make entries clearer to read and enable more thorough searches to be made by computer, more thorough etymological information, and a general change of focus away from individual words towards more general coverage of the language as a whole. While the original text drew its quotations mainly from literary sources such as novels, plays, and poetry, with additional material from newspapers and academic journals, the new edition will reference more kinds of material that were unavailable to the editors of previous editions, such as wills, inventories, account books, diaries, journals, and letters.
John Simpson was the first chief editor of the "OED3". He retired in 2013 and was replaced by Michael Proffitt, who is the eighth chief editor of the dictionary.
The production of the new edition exploits computer technology, particularly since the June 2005 inauguration of the "Perfect All-Singing All-Dancing Editorial and Notation Application", or "Pasadena". With this XML-based system, lexicographers can spend less effort on presentation issues such as the numbering of definitions. This system has also simplified the use of the quotations database, and enabled staff in New York to work directly on the dictionary in the same way as their Oxford-based counterparts.
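As an illustration of the sort of presentational bookkeeping such a system can absorb, the sketch below derives sense numbers at render time instead of storing them; this is a toy model of the general idea, not a description of Pasadena's actual internals.

```python
# Toy model: senses carry no stored numbers, so inserting or deleting
# a sense never forces a lexicographer to renumber anything by hand.
senses = [
    "A thing characteristic of its kind.",
    "A person or thing regarded as a model.",
]

def render(senses):
    # Numbering is computed at output time from list order alone.
    return "\n".join(f"{i}. {text}" for i, text in enumerate(senses, 1))

print(render(senses))
senses.insert(1, "An instance illustrating a rule.")  # editor adds a sense
print(render(senses))  # numbering adjusts automatically
```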
Other important computer uses include internet searches for evidence of current usage, and email submissions of quotations by readers and the general public.
"Wordhunt" was a 2005 appeal to the general public for help in providing citations for 50 selected recent words, and produced antedatings for many. The results were reported in a BBC TV series, "Balderdash and Piffle". The "OED"s readers contribute quotations: the department currently receives about 200,000 a year.
"OED" currently contains over 600,000 entries.They update the OED on a quarterly basis to make up for its Third Edition revising their existing entries and adding new words and senses.
More than 600 new words, senses, and subentries were added to the "OED" in December 2018, including "to drain the swamp", "TGIF", and "burkini". South African additions, like "eina", "dwaal", and "amakhosi", were also included. The phrase "taffety tarts" entered the "OED" for the first time.
In 1971, the 13-volume "OED1" (1933) was reprinted as a two-volume "Compact Edition", by photographically reducing each page to one-half its linear dimensions; each compact edition page held four "OED1" pages in a four-up ("4-up") format. The two volumes began at the letters "A" and "P"; the first supplement was at the second volume's end. The "Compact Edition" included, in a small slip-case drawer, a magnifying glass to help in reading reduced type. Many copies were inexpensively distributed through book clubs. In 1987, the second supplement was published as a third volume to the "Compact Edition".
In 1991, for the 20-volume "OED2" (1989), the compact edition format was re-sized to one-third of original linear dimensions, a nine-up ("9-up") format requiring greater magnification, but allowing publication of a single-volume dictionary. It was accompanied by a magnifying glass as before and "A User's Guide to the "Oxford English Dictionary"", by Donna Lee Berg. After these volumes were published, though, book club offers commonly continued to sell the two-volume 1971 "Compact Edition".
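The "4-up" and "9-up" page counts follow directly from the stated linear reductions: shrinking a page to 1/n of its linear dimensions fits n² original pages onto one compact page. A quick check:

```python
# Pages per compact page = (linear reduction factor) squared.
for edition, n in [("1971 Compact Edition", 2), ("1991 single-volume edition", 3)]:
    print(f"{edition}: 1/{n} linear size -> {n * n}-up")
# 1971 Compact Edition: 1/2 linear size -> 4-up
# 1991 single-volume edition: 1/3 linear size -> 9-up
```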
Once the text of the dictionary was digitized and online, it was also available to be published on CD-ROM. The text of the first edition was made available in 1987. Afterward, three versions of the second edition were issued. Version 1 (1992) was identical in content to the printed second edition, and the CD itself was not copy-protected. Version 2 (1999) included the "Oxford English Dictionary" "Additions" of 1993 and 1997.
Version 3.0 was released in 2002 with additional words from the "OED3" and software improvements. Version 3.1.1 (2007) added support for hard disk installation, so that the user does not have to insert the CD to use the dictionary. It has been reported that this version will work on operating systems other than Microsoft Windows, using emulation programs. Version 4.0 of the CD has been available since June 2009 and works with Windows 7 and Mac OS X (10.4 or later). This version uses the CD only for installation, after which the dictionary runs entirely from the hard drive.
On 14 March 2000, the "Oxford English Dictionary Online" ("OED Online") became available to subscribers. The online database contains the entire "OED2" and is updated quarterly with revisions that will be included in the "OED3" (see above). The online edition is the most up-to-date version of the dictionary available. The "OED" web site is not optimized for mobile devices, but the developers have stated that there are plans to provide an API that would enable developers to develop different interfaces for querying the "OED".
The price for an individual to use this edition is £195 or US$295 every year, even after a reduction in 2004; consequently, most subscribers are large organizations such as universities. Some public libraries and companies have subscribed, as well, including public libraries in the United Kingdom, where access is funded by the Arts Council, and public libraries in New Zealand. Individuals who belong to a library which subscribes to the service are able to use the service from their own home without charge.
The "OED"'s utility and renown as a historical dictionary have led to numerous offspring projects and other dictionaries bearing the Oxford name, though not all are directly related to the "OED" itself.
The "Shorter Oxford English Dictionary," originally started in 1902 and completed in 1933, is an abridgement of the full work that retains the historical focus, but does not include any words which were obsolete before 1700 except those used by Shakespeare, Milton, Spenser, and the King James Bible. A completely new edition was produced from the "OED2" and published in 1993, with revisions in 2002 and 2007.
The "Concise Oxford Dictionary" is a different work, which aims to cover current English only, without the historical focus. The original edition, mostly based on the "OED1", was edited by Francis George Fowler and Henry Watson Fowler and published in 1911, before the main work was completed. Revised editions appeared throughout the twentieth century to keep it up to date with changes in English usage.
"The Pocket Oxford Dictionary of Current English" was originally conceived by F. G. Fowler and H. W. Fowler to be compressed, compact, and concise. Its primary source is the Oxford English Dictionary, and it is nominally an abridgment of the Concise Oxford Dictionary. It was first published in 1924.
In 1998 the "New Oxford Dictionary of English" ("NODE") was published. While also aiming to cover current English, "NODE" was not based on the "OED". Instead, it was an entirely new dictionary produced with the aid of corpus linguistics. Once "NODE" was published, a similarly brand-new edition of the "Concise Oxford Dictionary" followed, this time based on an abridgement of "NODE" rather than the "OED"; "NODE" (under the new title of the "Oxford Dictionary of English", or "ODE") continues to be the principal source for Oxford's product line of current-English dictionaries, including the "New Oxford American Dictionary", with the "OED" now serving only as the basis for scholarly historical dictionaries.
The "OED" lists British headword spellings (e.g., "labour", "centre") with variants following ("labor", "center", etc.). For the suffix more commonly spelt "-ise" in British English, OUP policy dictates a preference for the spelling "-ize", e.g., "realize" vs. "realise" and "globalization" vs. "globalisation". The rationale is etymological, in that the English suffix is mainly derived from the Greek suffix "-ιζειν", ("-izein"), or the Latin "-izāre". However, "-ze" is also sometimes treated as an Americanism insofar as the "-ze" suffix has crept into words where it did not originally belong, as with "analyse" (British English), which is spelt "analyze" in American English.
British prime minister Stanley Baldwin described the "OED" as a "national treasure". Author Anu Garg, founder of Wordsmith.org, has called it a "lex icon". Tim Bray, co-creator of Extensible Markup Language (XML), credits the "OED" as a formative inspiration for that markup language.
However, despite, and at the same time precisely because of, its claims of authority, the dictionary has been criticized since at least the 1960s from various angles. It has become a target precisely because of its scope, its claims to authority, its British-centredness and relative neglect of World Englishes, its implied but not acknowledged focus on literary language and, above all, its influence. The "OED", as a commercial product, has always had to walk a thin line between PR, marketing and scholarship, and one can argue that its biggest problem is the critical uptake of the work by the interested public. In his review of the 1982 supplement, University of Oxford linguist Roy Harris writes that criticizing the "OED" is extremely difficult because "one is dealing not just with a dictionary but with a national institution", one that "has become, like the English monarchy, virtually immune from criticism in principle". He further notes that neologisms from respected "literary" authors such as Samuel Beckett and Virginia Woolf are included, whereas usage of words in newspapers or other less "respectable" sources holds less sway, even though they may be commonly used. He writes that the "OED"'s "[b]lack-and-white lexicography is also black-and-white in that it takes upon itself to pronounce authoritatively on the rights and wrongs of usage", faulting the dictionary's prescriptive rather than descriptive usage. To Harris, this prescriptive classification of certain usages as "erroneous" and the complete omission of various forms and usages cumulatively represent the "social bias[es]" of the (presumably well-educated and wealthy) compilers. However, the identification of "erroneous and catachrestic" usages is being removed from third edition entries, sometimes in favour of usage notes describing the attitudes to language which have previously led to these classifications.
Harris also faults the editors' "donnish conservatism" and their adherence to prudish Victorian morals, citing as an example the non-inclusion of various centuries-old "four-letter words" until 1972. However, no English dictionary included such words, for fear of possible prosecution under British obscenity laws, until after the conclusion of the "Lady Chatterley's Lover" obscenity trial in 1960. The first dictionary to include the word "fuck" was the "Penguin English Dictionary" of 1965. Joseph Wright's "English Dialect Dictionary" had included "shit" in 1905.
The "OED"s claims of authority have also been questioned by linguists such as Pius ten Hacken, who notes that the dictionary actively strives towards definitiveness and authority but can only achieve those goals in a limited sense, given the difficulties of defining the scope of what it includes.
Founding editor James Murray was also reluctant to include scientific terms, despite their documentation, unless he felt that they were widely enough used. In 1902, he declined to add the word "radium" to the dictionary. | https://en.wikipedia.org/wiki?curid=22641 |
Ottonian dynasty
The Ottonian dynasty was a Saxon dynasty of German monarchs (919–1024), named after three of its kings and Holy Roman Emperors named Otto, especially its first Emperor Otto I. It is also known as the Saxon dynasty after the family's origin in the German stem duchy of Saxony. The family itself is also sometimes known as the Liudolfings, after its earliest known member Count Liudolf (d. 866) and one of its primary leading-names. The Ottonian rulers succeeded the Germanic king Conrad I, the only East Frankish king between the end of the Carolingian dynasty and the beginning of the Ottonian line.
In the 9th century, the Saxon count Liudolf held large estates on the Leine river west of the Harz mountain range and in the adjacent Eichsfeld territory of Thuringia. His ancestors probably acted as "ministeriales" in the Saxon stem duchy, which had been incorporated into the Carolingian Empire after the Saxon Wars of Charlemagne. Liudolf married Oda, a member of the Frankish House of Billung. About 852 the couple, together with Bishop Altfrid of Hildesheim, founded Brunshausen Abbey, which, relocated to Gandersheim, rose to become the family monastery and burial ground.
Liudolf already held the high social position of a Saxon "dux", documented by the marriage of his daughter Liutgard to Louis the Younger, son of the Carolingian king Louis the German, in 869. Liudolf's sons Bruno and Otto the Illustrious ruled over large parts of Saxon Eastphalia; moreover, Otto acted as lay abbot of the Imperial abbey of Hersfeld, with large estates in Thuringia. He married Hedwiga, a daughter of the Babenberg duke Henry of Franconia. Otto possibly accompanied King Arnulf on his 894 campaign to Italy; the marriage of his daughter Oda with Zwentibold, Arnulf's illegitimate son, documents the efforts of the Carolingian ruler to win the mighty Saxon dynasty over as an ally. According to the Saxon chronicler Widukind of Corvey, Otto upon the death of the last Carolingian king Louis the Child in 911 was already a candidate for the East Frankish crown, which however passed to the Franconian duke Conrad I.
Upon Otto's death in 912, his son Henry the Fowler succeeded him as Duke of Saxony. Henry had married Matilda of Ringelheim, a descendant of the legendary Saxon ruler Widukind and heiress to extended estates in Westphalia.
The Ottonian rulers of East Francia, the German kingdom and the Holy Roman Empire were Henry I the Fowler (919–936), Otto I the Great (936–973), Otto II (973–983), Otto III (983–1002) and Henry II (1002–1024).
Although never Emperor, Henry the Fowler was arguably the founder of the imperial dynasty. While East Francia under the rule of the last Carolingian kings was ravaged by Hungarian invasions, he was chosen to be "primus inter pares" among the German dukes. Elected "Rex Francorum" in May 919, Henry abandoned the claim to dominate the whole disintegrating Carolingian Empire and, unlike his predecessor Conrad I, succeeded in gaining the support of the Franconian, Bavarian, Swabian and Lotharingian dukes. In 933 he led a German army to victory over the Hungarian forces at the Battle of Riade and campaigned against both the lands of the Polabian Slavs and the Duchy of Bohemia. Because he had amassed so much power through his conquests, he was able to transfer power to his second son Otto I.
Otto I, Duke of Saxony upon the death of his father in 936, was elected king within a few weeks. He continued the work of unifying all of the German tribes into a single kingdom, greatly expanding the powers of the king at the expense of the aristocracy. Through strategic marriages and personal appointments, he installed members of his own family in the kingdom's most important duchies. This, however, did not prevent his relatives from entering into civil war: both Otto's brother Duke Henry of Bavaria and his son Duke Liudolf of Swabia revolted against his rule. Otto was able to suppress their uprisings; in consequence, the various dukes, who had previously been co-equals with the king, were reduced to royal subjects under the king's authority. His decisive victory over the Magyars at the Battle of Lechfeld in 955 ended the Hungarian invasions of Europe and secured his hold over his kingdom.
The defeat of the pagan Magyars earned King Otto the reputation as the savior of Christendom and the epithet "the Great". He transformed the Church in Germany into a kind of proprietary church and a major royal power base, which he endowed richly and for whose institutions his family was responsible. By 961, Otto had conquered the Kingdom of Italy, a troublesome inheritance that none wanted, and extended his kingdom's borders to the north, east, and south. In control of much of central and southern Europe, the patronage of Otto and his immediate successors caused a limited cultural renaissance of the arts and architecture. He confirmed the 754 Donation of Pepin and, with recourse to the concept of "translatio imperii" in succession of Charlemagne, proceeded to Rome to have himself crowned Holy Roman Emperor by Pope John XII in 962. He even reached a settlement with the Byzantine emperor John I Tzimiskes by marrying his son and heir Otto II to John's niece Theophanu. In 968 he established the Archbishopric of Magdeburg at his long-time residence.
Co-ruler with his father since 961 and crowned emperor in 967, Otto II ascended the throne at the age of 18. By excluding the Bavarian line of Ottonians from the line of succession, he strengthened Imperial authority and secured his own son's succession to the Imperial throne. During his reign, Otto II attempted to annex the whole of Italy into the Empire, bringing him into conflict with the Byzantine emperor and with the Saracens of the Fatimid Caliphate. His campaign against the Saracens ended in 982 with a disastrous defeat at the Battle of Stilo. Moreover, in 983 Otto II experienced a Great Slav Rising against his rule.
Otto II died in 983 at the age of 28 after a ten-year reign. Succeeded by his three-year-old son Otto III as king, his sudden death plunged the Ottonian dynasty into crisis. During her regency for Otto III, the Byzantine princess Theophanu abandoned her late husband's imperialistic policy and devoted herself entirely to furthering her own agenda in Italy.
When Otto III came of age, he concentrated on securing the rule in the Italian domains, installing his confidants Bruno of Carinthia and Gerbert of Aurillac as Popes. In 1000 he made a pilgrimage to the Congress of Gniezno in Poland, establishing the Archdiocese of Gniezno and confirming the royal status of the Piast ruler Bolesław I the Brave. Expelled from Rome in 1001, Otto III died at age 21 the next year, without an opportunity to reconquer the city.
The childless Otto III was succeeded by Henry II, a son of Duke Henry II of Bavaria and his wife Gisela of Burgundy, thereby a member of the Bavarian line of the Ottonians. Duke of Bavaria since 995, he was crowned king on 7 June 1002. Henry II spent the first years of his rule consolidating his political power on the borders of the German kingdom. He waged several campaigns against Bolesław I of Poland and then moved successfully to Italy where he was crowned emperor by Pope Benedict VIII on 14 February 1014. He reinforced his rule by endowing and founding numerous dioceses, such as the Bishopric of Bamberg in 1007, intertwining the secular and ecclesiastical authority over the Empire. Henry II was canonised by Pope Eugene III in 1146.
As his marriage with Cunigunde of Luxembourg remained childless, the Ottonian dynasty became extinct with the death of Henry II in 1024. The crown passed to Conrad II of the Salian dynasty, great-grandson of Liutgarde, a daughter of Otto I, and the Salian duke Conrad the Red of Lorraine. When King Rudolph III of Burgundy died without heirs on 2 February 1032, Conrad II successfully claimed also this kingship on the basis of an inheritance Emperor Henry II had extorted from the former in 1006, having invaded Burgundy to enforce his claim after Rudolph attempted to renounce it in 1016.
Notes:
"For further detailed dynastic relationships, see also :Family tree of the German monarchs". | https://en.wikipedia.org/wiki?curid=22644 |
Orkney
Orkney, also known as the Orkney Islands, is an archipelago in the Northern Isles of Scotland, situated off the north coast of the island of Great Britain. Orkney is 10 miles (16 km) north of the coast of Caithness and has about 70 islands, of which 20 are inhabited. The largest island, Mainland, is often referred to as "the Mainland"; it is the sixth-largest Scottish island and the tenth-largest island in the British Isles. The largest settlement and administrative centre is Kirkwall.
Orkney is one of the 32 council areas of Scotland, a constituency of the Scottish Parliament, a lieutenancy area, and a historic county. The local council is Orkney Islands Council, one of only three councils in Scotland with a majority of elected members who are independents.
The islands have been inhabited since Mesolithic times, originally occupied by Mesolithic and Neolithic tribes and then by the Picts. Orkney was colonised and later annexed by Norway in 875 and settled by the Norse. The Scottish Parliament then annexed the earldom to the Scottish Crown in 1472, following the failed payment of a dowry for James III's bride Margaret of Denmark.
In addition to the Mainland, most of the remaining islands are in two groups, the North and South Isles, all of which have an underlying geological base of Old Red Sandstone. The climate is relatively mild and the soils are extremely fertile, most of the land being farmed. Agriculture is the most important sector of the economy. The significant wind and marine energy resources are of growing importance, and Orkney generates more than its total yearly electricity demand using renewables. The local people are known as Orcadians and have a distinctive dialect of the Scots language and a rich inheritance of folklore. Orkney contains some of the oldest and best-preserved Neolithic sites in Europe, and the "Heart of Neolithic Orkney" is a designated UNESCO World Heritage Site. There is an abundance of marine and avian wildlife.
Pytheas of Massilia visited Britain – probably sometime between 322 and 285 BC – and described it as triangular in shape, with a northern tip called "Orcas".
This may have referred to Dunnet Head, from which Orkney is visible. Writing in the 1st century AD, the Roman geographer Pomponius Mela called the islands the "Orcades", as did Tacitus in 98 AD, claiming that his father-in-law Agricola had "discovered and subjugated the Orcades hitherto unknown" (although both Mela and Pliny had previously referred to the islands).
Etymologists usually interpret the element "orc-" as a Pictish tribal name meaning "young pig" or "young boar". Speakers of Old Irish referred to the islands as "islands of the young pigs". The archipelago is known by cognate names in modern Welsh and modern Scottish Gaelic, the Gaelic form preserving a fossilized prepositional case ending. Some earlier sources alternately hypothesise that Orkney comes from the Latin "orca", whale. The Anglo-Saxon monk Bede also refers to the islands in his "Ecclesiastical History of the English People".
Norwegian settlers arriving from the late ninth century reinterpreted "orc" as the Old Norse "orkn" ("seal") and added "eyjar" ("islands") to the end, so the name became "Orkneyjar", "Seal Islands". The plural suffix was later dropped in English, leaving the modern name "Orkney". According to one account, Orkney was named after an earl called Orkan.
The Norse knew Mainland, Orkney as "Megenland" ("Mainland") or as "Hrossey" ("Horse Island"). The island is sometimes referred to as "Pomona" (or "Pomonia"), a name that stems from a 16th-century mistranslation by George Buchanan, which has rarely been used locally.
A charred hazelnut shell, recovered in 2007 during excavations in Tankerness on the Mainland, has been dated to 6820–6660 BC, indicating the presence of Mesolithic nomadic tribes. The earliest known permanent settlement is at Knap of Howar, a Neolithic farmstead on the island of Papa Westray, which dates from 3500 BC. The village of Skara Brae, Europe's best-preserved Neolithic settlement, is believed to have been inhabited from around 3100 BC. Other remains from that era include the Standing Stones of Stenness, the Maeshowe passage grave, the Ring of Brodgar and other standing stones. Many of the Neolithic settlements were abandoned around 2500 BC, possibly due to changes in the climate.
During the Bronze Age fewer large stone structures were built, although the great ceremonial circles continued in use, as metalworking was slowly introduced to Britain from Europe over a lengthy period. There are relatively few Orcadian sites dating from this era, although there is the impressive Plumcake Mound near the Ring of Brodgar and various island sites such as Tofts Ness on Sanday and the remains of two houses on Holm of Faray.
Excavations at Quanterness on the Mainland have revealed an Atlantic roundhouse built about 700 BC and similar finds have been made at Bu on the Mainland and Pierowall Quarry on Westray. The most impressive Iron Age structures of Orkney are the ruins of later round towers called "brochs" and their associated settlements such as the Broch of Burroughston and Broch of Gurness. The nature and origin of these buildings is a subject of ongoing debate. Other structures from this period include underground storehouses, and aisled roundhouses, the latter usually in association with earlier broch sites.
During the Roman invasion of Britain the "King of Orkney" was one of 11 British leaders who are said to have submitted to the Emperor Claudius in AD 43 at Colchester. After the Agricolan fleet had come and gone, possibly anchoring at Shapinsay, direct Roman influence seems to have been limited to trade rather than conquest.
However, Polemius Silvius wrote a list of Late Roman provinces, which Seeck appended to his edition of the Notitia Dignitatum. The list is famous because it names six provinces in Roman Britannia: the sixth is the dubious "Orcades provincia", whose possible real existence recent research has re-evaluated.
By the late Iron Age, Orkney was part of the Pictish kingdom, and although the archaeological remains from this period are less impressive there is every reason to suppose the fertile soils and rich seas of Orkney provided the Picts with a comfortable living. The Dalriadic Gaels began to influence the islands towards the close of the Pictish era, perhaps principally through the role of Celtic missionaries, as evidenced by several islands bearing the epithet "Papa" in commemoration of these preachers. However, before the Gaelic presence could establish itself the Picts were gradually dispossessed by the Norse from the late 8th century onwards. The nature of this transition is controversial, and theories range from peaceful integration to enslavement and genocide. It has been suggested that an assault by forces from Fortriu in 681 in which Orkney was "annihilated" may have led to a weakening of the local power base and helped the Norse come to prominence.
Both Orkney and Shetland saw a significant influx of Norwegian settlers during the late 8th and early 9th centuries. Vikings made the islands the headquarters of their pirate expeditions carried out against Norway and the coasts of mainland Scotland. In response, Norwegian king Harald Fairhair (Harald Hårfagre) annexed the Northern Isles, comprising Orkney and Shetland, in 875. (It is clear that this story, which appears in the "Orkneyinga Saga", is based on the later voyages of Magnus Barelegs and some scholars believe it to be apocryphal.) Rognvald Eysteinsson received Orkney and Shetland from Harald as an earldom as reparation for the death of his son in battle in Scotland, and then passed the earldom on to his brother Sigurd the Mighty.
However, Sigurd's line barely survived him and it was Torf-Einarr, Rognvald's son by a slave, who founded a dynasty that controlled the islands for centuries after his death. He was succeeded by his son Thorfinn Skull-splitter and during this time the deposed Norwegian King Eric Bloodaxe often used Orkney as a raiding base before being killed in 954. Thorfinn's death and presumed burial at the broch of Hoxa, on South Ronaldsay, led to a long period of dynastic strife.
Initially a pagan culture, detailed information about the coming of Christianity to the islands of Scotland during the Norse era is elusive. The "Orkneyinga Saga" suggests the islands were Christianised by Olaf Tryggvasson in 995 when he stopped at South Walls on his way from Ireland to Norway. The King summoned the "jarl" Sigurd the Stout and said, "I order you and all your subjects to be baptised. If you refuse, I'll have you killed on the spot and I swear I will ravage every island with fire and steel." Unsurprisingly, Sigurd agreed and the islands became Christian at a stroke, receiving their own bishop in the early 11th century.
Thorfinn the Mighty was a son of Sigurd and a grandson of King Máel Coluim mac Cináeda (Malcolm II of Scotland). Along with Sigurd's other sons he ruled Orkney during the first half of the 11th century and extended his authority over a small maritime empire stretching from Dublin to Shetland. Thorfinn died around 1065 and his sons Paul and Erlend succeeded him, fighting at the Battle of Stamford Bridge in 1066. Paul and Erlend quarreled as adults and this dispute carried on to the next generation. The martyrdom of Magnus Erlendsson, who was killed in April 1116 by his cousin Haakon Paulsson, resulted in the building of St. Magnus Cathedral, still today a dominating feature of Kirkwall.
Unusually, from c. 1100 onwards the Norse "jarls" owed allegiance both to Norway for Orkney and to the Scottish crown through their holdings as Earls of Caithness. In 1231 the line of Norse earls, unbroken since Rognvald, ended with Jon Haraldsson's murder in Thurso. The Earldom of Caithness was granted to Magnus, second son of the Earl of Angus, whom Haakon IV of Norway confirmed as Earl of Orkney in 1236. In 1290, the death of the child princess Margaret, Maid of Norway in Orkney, en route to mainland Scotland, created a disputed succession that led to the Wars of Scottish Independence. In 1379 the earldom passed to the Sinclair family, who were also barons of Roslin near Edinburgh.
Evidence of the Viking presence is widespread, and includes the settlement at the Brough of Birsay, the vast majority of place names, and the runic inscriptions at Maeshowe.
In 1468 Orkney was pledged by Christian I, in his capacity as King of Norway, as security against the payment of the dowry of his daughter Margaret, betrothed to James III of Scotland. However the money was never paid, and Orkney was absorbed by the Kingdom of Scotland in 1472.
The history of Orkney prior to this time is largely the history of the ruling aristocracy. From now on the ordinary people emerge with greater clarity. An influx of Scottish entrepreneurs helped to create a diverse and independent community that included farmers, fishermen and merchants who called themselves "comunitas Orcadie" and who proved themselves increasingly able to defend their rights against their feudal overlords.
From at least the 16th century, boats from mainland Scotland and the Netherlands dominated the local herring fishery. There is little evidence of an Orcadian fleet until the 19th century but it grew rapidly and 700 boats were involved by the 1840s with Stronsay and later Stromness becoming leading centres of development. White fish never became as dominant as in other Scottish ports.
In the 17th century, Orcadians formed the overwhelming majority of employees of the Hudson's Bay Company in Canada. The harsh winter weather of Orkney and the Orcadian reputation for sobriety and their boat handling skills made them ideal candidates for the rigours of the Canadian north. During this period, burning kelp briefly became a mainstay of the islands' economy. On Shapinsay, for example, large quantities of burned seaweed were produced each year to make soda ash, bringing in £20,000 to the local economy. The industry collapsed suddenly in 1830 after the removal of tariffs on imported alkali.
Agricultural improvements beginning in the 17th century resulted in the enclosure of the commons and ultimately, in the Victorian era, the emergence of large and well-managed farms using a five-shift rotation system and producing high-quality beef cattle.
During the 18th century Jacobite risings, Orkney was largely Jacobite in its sympathies. At the end of the 1715 rebellion, a large number of Jacobites who had fled north from mainland Scotland sought refuge in Orkney and were helped on to safety in Sweden. In 1745, the Jacobite lairds on the islands ensured that Orkney remained pro-Jacobite in outlook, and was a safe place to land supplies from Spain to aid their cause. Orkney was the last place in the British Isles that held out for the Jacobites and was not retaken by the British Government until 24 May 1746, over a month after the defeat of the main Jacobite army at Culloden.
Orkney was the site of a Royal Navy base at Scapa Flow, which played a major role in World War I and II. After the Armistice in 1918, the German High Seas Fleet was transferred in its entirety to Scapa Flow to await a decision on its future. The German sailors opened the sea-cocks and scuttled all the ships. Most ships were salvaged, but the remaining wrecks are now a favoured haunt of recreational divers. One month into World War II, a German U-boat sank the Royal Navy battleship HMS "Royal Oak" in Scapa Flow. As a result, barriers were built to close most of the access channels; these had the additional advantage of creating causeways enabling travellers to go from island to island by road instead of being obliged to rely on ferries. The causeways were constructed by Italian prisoners of war, who also constructed the ornate Italian Chapel.
The navy base became run down after the war, eventually closing in 1957. The problem of a declining population was significant in the post-war years, though in the last decades of the 20th century there was a recovery and life in Orkney focused on growing prosperity and the emergence of a relatively classless society. Orkney was rated as the best place to live in Scotland in both 2013 and 2014, and in 2019 the best place to live in the UK, according to the Halifax Quality of Life survey.
In the modern era, population peaked in the mid 19th century at just over 32,000 and declined for a century thereafter to a low of fewer than 18,000 in the 1970s. Declines were particularly significant in the outlying islands, some of which remain vulnerable to ongoing losses. Although Orkney is in many ways very distinct from the other islands and archipelagos of Scotland these trends are very similar to those experienced elsewhere. The archipelago's population grew by 11% in the decade to 2011 as recorded by the census. During the same period Scottish island populations as a whole grew by 4% to 103,702.
Orkney is separated from the mainland of Scotland by the Pentland Firth, a wide seaway between Brough Ness on the island of South Ronaldsay and Duncansby Head in Caithness. Orkney lies between 58°41′ and 59°24′ North, and 2°22′ and 3°26′ West, extending farther from northeast to southwest than from east to west.
Orkney is separated from the Shetland Islands, a group farther out, by a body of water called the Fair Isle Channel.
The islands are mainly low-lying except for some sharply rising sandstone hills on Mainland, Rousay and Hoy (where the tallest point in Orkney, Ward Hill, can be found) and rugged cliffs on some western coasts. Nearly all of the islands have lochs, but the watercourses are merely streams draining the high land. The coastlines are indented, and the islands themselves are divided from each other by straits generally called "sounds" or "firths".
The tidal currents, or "roosts" as some of them are called locally, off many of the isles are swift, with frequent whirlpools. The islands are notable for the absence of trees, which is partly accounted for by the strong winds.
Genetic studies have shown that 25% of the gene pool of Orkney derives from Norwegian ancestors who invaded the islands in the 9th century.
The Mainland is the largest island of Orkney. Both of Orkney's burghs, Kirkwall and Stromness, are on this island, which is also the heart of Orkney's transportation system, with ferry and air connections to the other islands and to the outside world. The island is more densely populated (75% of Orkney's population) than the other islands and has much fertile farmland. The Mainland is split into areas called East and West Mainland. These areas are determined by whether they lie East or West of Kirkwall. The bulk of the mainland lies West of Kirkwall, with comparatively little land lying East of Kirkwall.
West Mainland parishes are:
Stromness, Sandwick, Birsay, Harray, Stenness, Orphir, Evie, Rendall and Firth.
East Mainland Parishes are:
St Ola, Tankerness, St Andrews, Holm and Deerness.
The island is mostly low-lying (especially East Mainland) but with coastal cliffs to the north and west and two sizeable lochs: the Loch of Harray and the Loch of Stenness. The Mainland contains the remnants of numerous Neolithic, Pictish and Viking constructions. Four of the main Neolithic sites are included in the Heart of Neolithic Orkney World Heritage Site, inscribed in 1999.
The other islands in the group are classified as north or south of the Mainland. Exceptions are the remote islets of Sule Skerry and Sule Stack, which lie west of the archipelago, but form part of Orkney for local government purposes. In island names, the suffix "a" or "ay" represents the Norse "ey", meaning "island". Those described as "holms" are very small.
The northern group of islands is the most extensive and consists of a large number of moderately sized islands, linked to the Mainland by ferries and by air services. Farming, fishing and tourism are the main sources of income for most of the islands.
The most northerly is North Ronaldsay, which lies beyond its nearest neighbour, Sanday. To the west is Westray, which has a population of 550. It is connected by ferry and air to Papa Westray, also known as "Papay". Eday is at the centre of the North Isles. The centre of the island is moorland and the island's main industries have been peat extraction and limestone quarrying.
Rousay, Egilsay and Gairsay lie north of the west Mainland across the Eynhallow Sound. Rousay is well known for its ancient monuments, including the Midhowe chambered cairn, and Egilsay has the ruins of the only round-towered church in Orkney. Wyre to the south-east contains the site of Cubbie Roo's castle. Stronsay and Papa Stronsay lie much further to the east across the Stronsay Firth. Auskerry is south of Stronsay and has a population of only five. Shapinsay and its Balfour Castle are a short distance north of Kirkwall.
Other small uninhabited islands in the North Isles group include: Calf of Eday, Damsay, Eynhallow, Faray, Helliar Holm, Holm of Faray, Holm of Huip, Holm of Papa, Holm of Scockness, Kili Holm, Linga Holm, Muckle Green Holm, Rusk Holm and Sweyn Holm.
The southern group of islands surrounds Scapa Flow. Hoy is the second largest of the Orkney Isles and Ward Hill at its northern end is the highest elevation in the archipelago. The Old Man of Hoy is a well-known sea stack. Burray lies to the east of Scapa Flow and is linked by causeway to South Ronaldsay, which hosts two cultural events, the Festival of the Horse and the Boys' Ploughing Match, on the third Saturday in August. It is also the location of the Neolithic Tomb of the Eagles. Graemsay and Flotta are both linked by ferry to the Mainland and Hoy, and the latter is known for its large oil terminal. South Walls has a 19th-century Martello tower and is connected to Hoy by the Ayre. South Ronaldsay, Burray, Glimps Holm, and Lamb Holm are connected by road to the Mainland by the Churchill Barriers.
Uninhabited South Islands include: Calf of Flotta, Cava, Copinsay, Corn Holm, Fara, Glimps Holm, Hunda, Lamb Holm, Rysa Little, Switha and Swona. The Pentland Skerries lie further south, closer to the Scottish mainland.
The superficial rock of Orkney is almost entirely Old Red Sandstone, mostly of Middle Devonian age. As in the neighbouring mainland county of Caithness, this sandstone rests upon the metamorphic rocks of the Moine series, as may be seen on the Mainland, where a narrow strip is exposed between Stromness and Inganess, and again in the small island of Graemsay; they are represented by grey gneiss and granite.
The Middle Devonian is divided into three main groups. The lower part of the sequence, mostly Eifelian in age, is dominated by lacustrine beds of the lower and upper Stromness Flagstones that were deposited in Lake Orcadie. The later Rousay flagstone formation is found throughout much of the North and South Isles and East Mainland.
The Old Man of Hoy is formed from sandstone of the uppermost Eday group, which is very thick in places. It lies unconformably upon steeply inclined flagstones, the interpretation of which is a matter of continuing debate.
The Devonian and older rocks of Orkney are cut by a series of WSW-ENE to N-S trending faults, many of which were active during deposition of the Devonian sequences. A strong synclinal fold traverses Eday and Shapinsay, the axis trending north-south.
Middle Devonian basaltic volcanic rocks are found on western Hoy, on Deerness in eastern Mainland and on Shapinsay. Correlation between the Hoy volcanics and the other two exposures has been proposed, but differences in chemistry means this remains uncertain. Lamprophyre dykes of Late Permian age are found throughout Orkney.
Glacial striation and the presence of chalk and flint erratics that originated from the bed of the North Sea demonstrate the influence of ice action on the geomorphology of the islands. Boulder clay is also abundant and moraines cover substantial areas.
Orkney has a cool temperate climate that is remarkably mild and steady for such a northerly latitude, due to the influence of the Gulf Stream, with only a modest difference between average winter and summer temperatures.
Rainfall varies across the islands. Winds are a key feature of the climate and even in summer there are almost constant breezes. In winter, there are frequent strong winds, with an average of 52 hours of gales being recorded annually.
To tourists, one of the fascinations of the islands is their "nightless" summers. On the longest day, the sun rises at 04:00 and sets at 22:29 BST and complete darkness is unknown. This long twilight is known in the Northern Isles as the "simmer dim". Winter nights are long. On the shortest day the sun rises at 09:05 and sets at 15:16. At this time of year the aurora borealis can occasionally be seen on the northern horizon during moderate auroral activity.
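Those times imply daylight lengths that are easy to verify; a small Python check, assuming the quoted sunrise and sunset times:

```python
from datetime import datetime

def daylight(rise, set_):
    # Difference between two same-day clock times in HH:MM format.
    fmt = "%H:%M"
    return datetime.strptime(set_, fmt) - datetime.strptime(rise, fmt)

print("Longest day:", daylight("04:00", "22:29"))   # 18:29:00
print("Shortest day:", daylight("09:05", "15:16"))  # 6:11:00
```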
Representative climate averages for Orkney are those recorded at the weather station in Kirkwall, the largest settlement.
Orkney is represented in the House of Commons as part of the Orkney and Shetland constituency, which elects one Member of Parliament (MP), the current incumbent being Alistair Carmichael. This seat has been held by the Liberal Democrats or their predecessors the Liberal Party since 1950, longer than any other seat they hold in Great Britain.
In the Scottish Parliament the Orkney constituency elects one Member of the Scottish Parliament (MSP) by the first past the post system. The current MSP is Liam McArthur of the Liberal Democrats. Before McArthur the MSP was Jim Wallace, who was previously Deputy First Minister. Orkney is within the Highlands and Islands electoral region.
Orkney Islands Council consists of 21 members, 18 of whom are independent; that is, they do not stand as representatives of a political party. Two councillors are members of the indigenous Orkney Manifesto Group, and the remaining councillor represents the Green Party.
The Orkney Movement, a political party that supported devolution for Orkney from the rest of Scotland, contested the 1987 general election as the Orkney and Shetland Movement (a coalition of the Orkney movement and its equivalent for Shetland). The Scottish National Party chose not to contest the seat to give the movement a "free run". Their candidate, John Goodlad, came 4th with 3,095 votes, 14.5% of those cast, but the experiment has not been repeated.
In the 2014 Scottish independence referendum, 67.2% of voters in Orkney voted No to the question "Should Scotland be an independent country?", the highest proportion of No votes in any council area in Scotland. Turnout for the referendum in Orkney was 83.7%, with 10,004 votes cast against independence and 4,883 votes for.
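The reported percentage can be checked directly from the vote totals; a one-line sanity check in Python:

```python
no_votes, yes_votes = 10_004, 4_883
print(f"No share: {100 * no_votes / (no_votes + yes_votes):.1f}%")  # No share: 67.2%
```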
The soil of Orkney is generally very fertile and most of the land is taken up by farms, agriculture being by far the most important sector of the economy and providing employment for a quarter of the workforce. More than 90% of agricultural land is used for grazing for sheep and cattle, with cereal production utilising about 4% and woodland occupying only a very small area.
Fishing has declined in importance, but still employed 345 individuals in 2001, about 3.5% of the islands' economically active population, the modern industry concentrating on herring, white fish, lobsters, crabs and other shellfish, and salmon fish farming.
Today, the traditional sectors of the economy export beef, cheese, whisky, beer, fish and other seafood. In recent years there has been growth in other areas including tourism, food and beverage manufacture, jewellery, knitwear, and other crafts production, construction and oil transportation through the Flotta oil terminal. Retailing accounts for 17.5% of total employment, and public services also play a significant role, employing a third of the islands' workforce.
In 2007, of the 1,420 VAT registered enterprises 55% were in agriculture, forestry and fishing, 12% in manufacturing and construction, 12% in wholesale, retail and repairs, and 5% in hotels and restaurants. A further 5% were public service related. 55% of these businesses employ between 5 and 49 people.
Orkney has significant wind and marine energy resources, and renewable energy has recently come into prominence. Although Orkney is connected to the mainland, it generates over 100% of its net power from renewables. This comes mainly from wind turbines situated across Orkney.
The European Marine Energy Centre (EMEC) is a research facility operating a grid-connected wave test site at Billia Croo, off the west coast of the Orkney Mainland, and a tidal power test site in the Fall of Warness, off the northern island of Eday. At the official opening of the Eday project the site was described as "the first of its kind in the world set up to provide developers of wave and tidal energy devices with a purpose-built performance testing facility."
During 2007 Scottish and Southern Energy plc, in conjunction with the University of Strathclyde, began the implementation of a Regional Power Zone in the Orkney archipelago, involving "active network management" that will make better use of existing infrastructure and allow a further 15 MW of new "non-firm generation" output from renewables onto the network. 1.5 MW of polymer electrolyte membrane electrolysis forms a partial hydrogen economy for hydrogen vehicles and district heating, and grid batteries and electric vehicles also use local energy.
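A minimal sketch of the "non-firm generation" idea: such generators connect on the condition that they can be curtailed whenever live measurements show a network limit would otherwise be breached. All the figures below are invented for illustration; this is not a description of the actual SSE/Strathclyde scheme.

```python
# Illustrative active-network-management dispatch (invented numbers).
NETWORK_LIMIT_MW = 40.0  # assumed export limit on the constrained link

def dispatch(firm_mw, non_firm_offers_mw):
    """Accept non-firm output only up to the remaining network headroom."""
    headroom = max(0.0, NETWORK_LIMIT_MW - firm_mw)
    accepted = []
    for offer in non_firm_offers_mw:
        take = min(offer, headroom)  # curtail whatever exceeds headroom
        accepted.append(take)
        headroom -= take
    return accepted

# A windy period: firm generation already uses most of the limit, so
# later-priority non-firm generators are partially or fully curtailed.
print(dispatch(firm_mw=32.0, non_firm_offers_mw=[5.0, 5.0, 5.0]))
# -> [5.0, 3.0, 0.0]
```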
Highlands and Islands Airports Limited operates the main airport in Orkney, Kirkwall Airport. Loganair provides services to the Scottish mainland (Aberdeen, Edinburgh, Glasgow and Inverness), as well as to Sumburgh Airport in Shetland.
Within Orkney, the council operates airfields on most of the larger islands including Stronsay, Eday, North Ronaldsay, Westray, Papa Westray, and Sanday. The shortest scheduled air service in the world, between the islands of Westray and Papa Westray, is scheduled at two minutes' duration but can take less than one minute if the wind is in the right direction.
Ferries serve both to link Orkney to the rest of Scotland, and also to link together the various islands of the Orkney archipelago. Ferry services operate between Orkney and both the Scottish mainland and Shetland.
Inter-island ferry services connect all the inhabited islands to Orkney Mainland, and are operated by Orkney Ferries, a company owned by Orkney Islands Council.
Orkney has one of the highest uptakes of electric vehicles in the UK with more than 2% of the vehicles on the road being electric.
Orkney is served by a weekly local newspaper, "The Orcadian", published on Thursdays.
A local BBC radio station, BBC Radio Orkney, the local opt-out of BBC Radio Scotland, broadcasts twice daily, with local news and entertainment. Orkney also had a commercial radio station, The Superstation Orkney, which broadcast to Kirkwall and parts of the mainland and also to most of Caithness until its closure in November 2014. Moray Firth Radio broadcasts throughout Orkney on AM and from an FM transmitter just outside Thurso. The community radio station Caithness FM also broadcasts to Orkney.
Orkney is home to the Orkney Library and Archive, located in Kirkwall on the Orkney Mainland. The library service provides access to over 145,000 items, with a wide range of fiction and non-fiction titles available for loan as well as audiobooks, maps, eBooks, music CDs, and DVDs. Orkney Library and Archive also operates a free Mobile Library Service, carrying books and audiobooks suitable for all ages, that serves the rural parishes and islands of Orkney.
The islands are the home of several international festivals, including the Orkney International Science Festival in September, a folk festival in May, and the St Magnus International Arts Festival in June.
At the beginning of recorded history, the islands were inhabited by the Picts, whose language was Brythonic. The Ogham script on the Buckquoy spindle-whorl is cited as evidence for the pre-Norse existence of Old Irish in Orkney.
After the Norse occupation, the toponymy of Orkney became almost wholly West Norse. The Norse language changed into the local Norn, which lingered until the end of the 18th century, when it eventually died out. Norn was replaced by the Orcadian dialect of Insular Scots. This dialect is at a low ebb due to the pervasive influences of television, education, and the large number of incomers. However, attempts are being made by some writers and radio presenters to revitalise its use and the distinctive sing-song accent and many dialect words of Norse origin remain in use. The Orcadian word most frequently encountered by visitors is "peedie", meaning "small", which may be derived from the French "petit".
Orkney has a rich folklore, and many of the former tales concern trows, an Orcadian form of troll that draws on the islands' Scandinavian connections. Local customs in the past included marriage ceremonies at the Odin Stone that formed part of the Stones of Stenness.
King Lot in certain versions of the Arthurian legend (e.g., Malory) is ruler of Orkney. His sons Gawaine, Agravaine, Gareth, and Gaheris are major characters in the Matter of Britain.
The best known literary figures from modern Orkney are the poet Edwin Muir, the poet and novelist George Mackay Brown, and the novelist Eric Linklater.
An Orcadian is a native of Orkney, a term that reflects a strongly held identity with a tradition of understatement. Although the annexation of the earldom by Scotland took place over five centuries ago in 1472, some Orcadians regard themselves as Orcadians first and Scots second. However in response to the national identity question in the 2011 Scotland Census, self-reported levels of Scottish identity in Orkney were in line with the national average.
The Scottish mainland is often referred to as "Scotland" in Orkney, with "the mainland" referring to Mainland, Orkney. The archipelago also has a distinct culture: traditions of the Scottish Highlands such as tartan, clans and bagpipes are not indigenous to the islands. However, at least two tartans with Orkney connections have been registered, a tartan has been designed for Sanday by one of the island's residents, and there are pipe bands in Orkney.
Native Orcadians refer to the non-native residents of the islands as "ferry loupers" ("loup" meaning "jump" in the Scots language), a term that has been in use for close to two centuries.
Orkney has an abundance of wildlife, especially of grey and common seals and seabirds such as puffins, kittiwakes, tysties, ravens, and bonxies. Whales, dolphins, and otters are also seen around the coasts. Inland, the Orkney vole, a distinct subspecies of the common vole introduced by Neolithic humans, is endemic. There are five distinct varieties, found on the islands of Sanday, Westray, Rousay, South Ronaldsay, and the Mainland, all the more remarkable as the species is absent from mainland Britain.
The coastline is well known for its colourful flowers including sea aster, sea squill, sea thrift, common sea-lavender, bell and common heather. The Scottish primrose is found only on the coasts of Orkney and nearby Caithness and Sutherland. Although stands of trees are generally rare, a small forest named Happy Valley with 700 trees and lush gardens was created from a boggy hillside near Stenness during the second half of the 20th century.
The North Ronaldsay sheep is an unusual breed of domesticated animal, subsisting largely on a diet of seaweed, since they are confined to the foreshore for most of the year to conserve the limited grazing inland. The island was also a habitat for the Atlantic walrus until the mid-16th century.
The Orkney char ("Salvelinus inframundus") used to live in Heldale Water on Hoy. It has been considered locally extinct since 1908.
The introduction, just prior to 2015, of alien stoats, a natural predator of the common vole and thus of the Orkney vole, may be harming native bird populations.
There are 13 Special Protection Areas and 6 Special Areas of Conservation in Orkney. One of Scotland's 40 national scenic areas, the Hoy and West Mainland National Scenic Area, is also located in the islands. The seas to the northwest of Orkney are important for sand eels, which provide a food source for many species of fish, seabirds, seals, whales and dolphins, and are now protected as a Nature Conservation Marine Protected Area (NCMPA). | https://en.wikipedia.org/wiki?curid=22645 |
Hoy
Hoy (from Norse "Háey" meaning "high island") is an island in Orkney, Scotland, the second-largest in the archipelago after Mainland. A natural causeway, "the Ayre", links it to the much smaller South Walls; the two islands are treated as one entity by the UK census.
The Dwarfie Stane lies in the north of the Rackwick valley and dates back to around 5000 BCE. It is unique in northern Europe, bearing similarity to Neolithic or Bronze Age tombs around the Mediterranean. The tomb has a small rectangular entrance and cleft, hence its name. Discoveries have been made on the mainland of Orkney at the Ness of Brodgar that date back as early as 3510 BCE, with the first stone circle in the British Isles found there.
The two most northerly Martello Towers in the UK stand here, built in 1814 to defend merchant shipping in the natural harbour of Longhope against privateers commissioned by American president James Madison, who had declared war in 1812. They were arguably an effective deterrent, as there is no record of their ever being engaged in serious combat.
The main naval base for the British fleet in both the First and Second World Wars, Scapa Flow, was at Lyness in the southeast of the island.
During the early years of World War II, up to 12,000 personnel were based in and around Lyness to support the defences of the naval anchorage at Scapa Flow and the ships that used it. To support this huge population, hundreds of accommodation huts were built in a number of camps around Lyness. A large wharf was built (known as the Golden Wharf because of its huge cost) along with a series of piers and slipways. Offices, workshops, stores and recreational buildings were erected, including a cinema, a theatre and several churches. An earlier headquarters building was replaced in 1943 by an imposing concrete HQ and communications centre, also located high on Wee Fea, which now serves as a hotel.
Lyness Royal Naval Cemetery is situated around 1 km inland from the naval base and has an area of around 10,000 square metres (2.5 acres).
Although the population of Hoy is now only around 400, it was much larger in the past: in 1890 there were four schools and four churches on the island. Despite the larger population, there was no paved road between the north of the island and the southern tip adjoining South Walls, only a footpath. There was, however, an unsurfaced road between the two villages in the north of the isle, Rackwick and Moaness.
Hoy is probably most famous, though, for the Old Man of Hoy, a sea stack on the Rackwick coast formed from Old Red Sandstone. At 137 metres (449 feet) it is one of the tallest stacks in the United Kingdom. The Old Man is popular with climbers and was first climbed in 1966. Created by the erosion of a cliff through hydraulic action some time after 1750, the stack is no more than a few hundred years old; paintings from 1817 show it with an arch at the bottom which has since eroded away, and the stack may eventually collapse into the sea.
The dramatic coastline of Hoy can be seen by visitors travelling to Orkney by ferry from the Scottish mainland. It has some of the highest sea cliffs in the UK at St John's Head, which reach 350 m (1,150 ft).
The name Hoy comes from the Norse word "Háey" meaning "high island", so it is not surprising that Hoy is the most mountainous island in the Orkney archipelago. The highest point on the island (and indeed in the whole archipelago) is Ward Hill, which stands at 481 m (1,578 ft) in the north of the island. There is a trig point at the summit.
Hoy is part of the Hoy and West Mainland National Scenic Area, one of 40 in Scotland.
Orkney's only woodland is found on Hoy and is the most northerly woodland in the UK. Patches of woodland are scattered across the island. Most significantly, there is a remote possibility that the Orkney charr ("Salvelinus inframundus"), documented in 1908 at Heldale Water, remains locally extant.
There is evidence that there was at one point an airfield on Hoy, possibly due to its connections with the navy. There are two suggested sites, both on South Walls. One, at Snelsetter on the southern coast, opened in August 1934 and closed at the end of World War Two; it was used by military and civil aircraft and is now open land. The other, just east of the causeway that links Hoy and South Walls, opened in November 1972 and closed in 1993; it was used solely by civilian aircraft, was operated by the airline Loganair, and is also now open land. The first flight to the nearby island of Flotta, on 1 March 1977, was recorded as having landed at Hoy. Both airfields are now disused.
Orkney Ferries traverse the west of Scapa Flow with two routes:
A lifeboat has been stationed on Hoy since 1874. It was originally housed in a prominent stone building, built at a cost of £228, close to the west end of the causeway that links the two islands of Hoy and South Walls. The station was sited there so that the lifeboat could be dragged over wooden skids into the sea either in North Bay, giving access to Scapa Flow, or in Aith Hope, an offshoot of the notorious Pentland Firth to the south. The shed continued to serve as the base of the Longhope lifeboat until 1906, when it was replaced.
The lifeboat station that stands slightly to the south of the original, and which is now home to the Longhope Lifeboat Museum, cost £2,700 to build in 1906. It continued to serve as the base for the Longhope lifeboat until 1999. Whilst based at this station, on 17 March 1969 the lifeboat 'T.G.B.' (ON 962) capsized while on service to the Liberian vessel 'Irene', and her entire crew of eight lost their lives, an event known as the Longhope lifeboat disaster. In August of that year an Arun-class lifeboat, the Sir Max Aitken II, became the Longhope lifeboat. This class was designed to stay permanently afloat, and the decision was taken to move her to purpose-built moorings at Longhope pier. The lifeboats that have served here since have also been stationed at Longhope, including the current vessel, the Helen Comrie (a Tamar-class lifeboat), and her predecessor The Queen Mother, which was based here between 2004 and 2006. A station has been built where the lifeboat is moored at Longhope, which is also the main harbour for boats to and from the island.
In Norse mythology, Hoy hosted Hjaðningavíg, the never-ending battle between Heðin and Högni.
Hoy is an Important Bird Area.
The northern part of the island is an RSPB reserve due to its importance for birdlife, particularly great skuas and red-throated divers. It was sold to the RSPB by the Hoy Trust for a nominal amount.
"Anastrepta orcadensis", a liverwort also known as Orkney Notchwort, was first discovered on Ward Hill by William Jackson Hooker in 1808.
The northern and western parts of Hoy, along with much of the adjoining sea area, are designated as a Special Protection Area due to their importance for nine breeding bird species: arctic skua, fulmar, great black-backed gull, great skua, guillemot, black-legged kittiwake, peregrine falcon, puffin and red-throated diver. The area is important for its seabird assemblage, which regularly supports 120,000 individual seabirds during the breeding season.
Hoy is featured prominently in the 1984 video for "Here Comes The Rain Again" by Eurythmics.
Hoy also has a performing arts theatre, the Gable End Theatre, which opened in 2000 and has a capacity of 75. The theatre is managed by the Hoy and Walls drama community.
Some rather incongruous Art Deco structures nearby date from the wartime period. The Arts and Crafts architect William Lethaby rebuilt Melsetter House for the mountaineer Thomas Middlemore at the end of the nineteenth century, leaving untouched the adjacent barn, which is probably mid-18th-century. | https://en.wikipedia.org/wiki?curid=22647 |
Rousay
Rousay (from Old Norse "Hrólfsey", meaning "Rolf's Island") is a small, hilly island north of Mainland, the largest island in the Orkney Islands of Scotland. It has been nicknamed "Egypt of the north", due to its archaeological diversity and importance.
Like its neighbours Egilsay and Wyre, it can be reached by ro-ro ferry from Tingwall. This service is operated by Orkney Ferries, and can take up to 95 passengers (reduced to 50 in winter), and 10 cars. The ferry links the islands of Rousay, Egilsay, and Wyre with each other, and with the mainland of Orkney.
In the 2001 census, Rousay had a population of 212. Most employment is in farming, fishing or fish-farming; craft businesses and seasonal tourism-related work are present.
It is separated from mainland Orkney by Eynhallow Sound.
One road circles the island, about long, and most arable land lies in the few hundred yards between it and the coastline. With an area of , it is the fifth largest of the Orkney Islands.
Among several freshwater lochs on the island, the biggest is Muckle Water.
Rousay is a Site of Special Scientific Interest with notable cliff formations and wildflower colonies, and has an RSPB bird reserve. The hilliest Orkney island after Hoy, it offers good views of neighbouring islands from Blotchnifiold, and Keirfea or Knitchen (both over ).
Its natural environment and wildlife include seals and otters. Archaeological remains are present, especially a cluster of sites connected by a footpath near the western shore.
The first human settlement was the Neolithic site at Rinyo. Other remnants include Bronze Age burnt mounds, Iron Age crannogs and brochs (the highest density anywhere in Scotland: three within of coastline), Viking boat burials, the remains of a medieval church, and a stately home at Trumland.
Over 100 archaeological sites have been identified. Only a small fraction have been excavated and characterized. The most spectacular of the sites is the complex of Midhowe Broch and Midhowe Chambered Cairn. Blackhammer Chambered Cairn, Taversoe Tuick and Yarso are important tombs.
Rousay placenames reflect its Norse heritage. 'Hrólfs-øy' or 'Hrolfsey' was based on the male name 'Hrolf' (Rolf). Hugh Marwick's work showed the name developing from 'Rollesay' in the 14th century, through 'Rolsay' in the 15th, and 'Rowsay' in the early 16th, with the spelling 'Rousay' first recorded in 1549.
Most Rousay people earned their living from farming and/or fishing. In the 19th century, records reflect tradespeople supplying the needs of a rural community: blacksmiths and joiners, shoemakers and shopkeepers, with women making dresses and plaiting straw. Throughout the century, Rousay's landlords demanded high rents from crofters, many of whom became homeless in a series of clearances along the western coast, ordered by landowner George William Traill in the 1820s and 1830s.
Traill's nephew General Sir Frederick Traill-Burroughs inherited much of the island and bought more. Traill-Burroughs built a large house at Trumland, designed by David Bryce of Edinburgh. From 1870 to 1883, improvements transformed the island: Trumland pier, island schools, a public market, the first steamship service, a post office, and the first resident doctor. He was known locally as "the little general", as he was a man of short stature. The poet Edwin Muir recalled in a memoir of his childhood seeing the little general walking around his estates.
Rousay's population in the mid-19th century was over 900, but emigration following land clearances reduced that to 627 by 1900, and half a century later it had fallen to 342. Depopulation accelerated, and in the next twenty years the number fell to 181, its lowest ever. From the 1970s onward new families settled on Rousay: most came from the south, especially from England. The population is now over 200.
The Yetnasteen stone is said to have once been a giant who revives every New Year at midnight and visits the Loch of Scockness to drink.
A primary school enrolls 24 boys and girls aged 3 to 12. On completing primary education, children attend Kirkwall Grammar School or Stromness Academy.
Poet Pauline Stainer spent several years on the island, and in 1999 published a collection of her poems about Rousay, "Parable Island".
Robert C. Marwick (1922-2013) was a school teacher, headmaster and author born on Innister farm, in the Wasbister district. His publications about Rousay include "From My Rousay Schoolbag" (1995), "Rousay Roots" (1995) and "In Dreams We Moor" (2000).
Astronomer, musician and writer, John Vetterlein first came to Rousay in 1970 and moved there full-time in 1995. He established the small publishing house Spring Ast LIX in 1997, whose publications include: "Braes Woodland Diary - the First Ten Years" by Ann Chapman.
The actor Graham Fellows owns a disused church there, which he intended to turn into an "artists' refuge".
The late artist Margaret Gardiner spent a large part of her life there and in 1979 founded the Pier Arts Centre in Stromness.
Rousay is separated from the neighbouring island of Egilsay by Rousay Sound. The sound experiences strong tides, creating the perfect conditions for maerl beds to form. The maerl beds in turn provide a sheltered habitat for species such as peacock worms and various sponges, as well as small fish, shrimps, gobies and crabs. Since 2014 the sound, along with the neighbouring Wyre Sound (which separates Rousay from Wyre), has been designated as a Nature Conservation Marine Protected Area (NCMPA). Fishing activities are controlled within the MPA, and no dredging, beam trawling, demersal trawling or seine fishing is permitted. | https://en.wikipedia.org/wiki?curid=22648 |
Observation
Observation is the active acquisition of information from a primary source. In living beings, observation employs the senses. In science, observation can also involve the perception and recording of data via the use of scientific instruments. The term may also refer to any data collected during the scientific activity. Observations can be qualitative, that is, only the absence or presence of a property is noted, or quantitative if a numerical value is attached to the observed phenomenon by counting or measuring.
The scientific method requires observations of natural phenomena to formulate and test hypotheses. It consists of the following steps:
1. Ask a question about a natural phenomenon
2. Make observations of the phenomenon
3. Formulate a hypothesis that tentatively answers the question
4. Predict logical, observable consequences of the hypothesis that have not yet been investigated
5. Test the hypothesis' predictions by an experiment, observational study, field study, or simulation
6. Draw a conclusion from data gathered in the experiment, or revise the hypothesis or form a new one and repeat the process
7. Write a descriptive method of observation and the results or conclusions reached
8. Have peers with experience researching the same phenomenon evaluate the results
Observations play a role in the second and fifth steps of the scientific method. However, the need for reproducibility requires that observations by different observers be comparable. Human sense impressions are subjective and qualitative, making them difficult to record or compare. The use of measurement developed to allow recording and comparison of observations made at different times and places, by different people. Measurement consists of using observation to compare the phenomenon being observed to a standard unit. The standard unit can be an artifact, process, or definition which can be duplicated or shared by all observers. In measurement the number of standard units which is equal to the observation is counted. Measurement reduces an observation to a number which can be recorded, and two observations which result in the same number are equal within the resolution of the process.
Human senses are limited and subject to errors in perception, such as optical illusions. Scientific instruments were developed to aid human abilities of observation, such as weighing scales, clocks, telescopes, microscopes, thermometers, cameras, and tape recorders, and also translate into perceptible form events that are unobservable by the senses, such as indicator dyes, voltmeters, spectrometers, infrared cameras, oscilloscopes, interferometers, geiger counters, and radio receivers.
One problem encountered throughout scientific fields is that the observation may affect the process being observed, resulting in a different outcome than if the process was unobserved. This is called the "observer effect". For example, it is not normally possible to check the air pressure in an automobile tire without letting out some of the air, thereby changing the pressure. However, in most fields of science it is possible to reduce the effects of observation to insignificance by using better instruments.
Considered as a physical process itself, all forms of observation (human or instrumental) involve amplification and are thus thermodynamically irreversible processes, increasing entropy.
In some specific fields of science the results of observation differ depending on factors which are not important in everyday observation. These are usually illustrated with "paradoxes" in which an event appears different when observed from two different points of view, seeming to violate "common sense".
The human senses do not function like a video camcorder, impartially recording all observations. Human perception occurs by a complex, unconscious process of abstraction, in which certain details of the incoming sense data are noticed and remembered, and the rest forgotten. What is kept and what is thrown away depends on an internal model or representation of the world, called by psychologists a "schema", that is built up over our entire lives. The data is fitted into this schema. Later when events are remembered, memory gaps may even be filled by "plausible" data the mind makes up to fit the model; this is called "reconstructive memory". How much attention the various perceived data are given depends on an internal value system, which judges how important it is to the individual. Thus two people can view the same event and come away with entirely different perceptions of it, even disagreeing about simple facts. This is why eyewitness testimony is notoriously unreliable.
Several of the more important ways observations can be affected by human psychology are given below.
Human observations are biased toward confirming the observer's conscious and unconscious expectations and view of the world; we "see what we expect to see". In psychology, this is called confirmation bias. Since the object of scientific research is the discovery of new phenomena, this bias can cause, and has caused, new discoveries to be overlooked; one example is the discovery of X-rays. On the other hand, it can also result in erroneous scientific support for widely held cultural myths, as in the scientific racism that supported ideas of racial superiority in the early 20th century. Correct scientific technique emphasizes careful recording of observations, separating experimental observations from the conclusions drawn from them, and techniques such as blind or double-blind experiments, to minimize observational bias.
Modern scientific instruments can extensively process "observations" before they are presented to the human senses, and particularly with computerized instruments, there is sometimes a question as to where in the data processing chain "observing" ends and "drawing conclusions" begins. This has recently become an issue with digitally enhanced images published as experimental data in papers in scientific journals. The images are enhanced to bring out features that the researcher wants to emphasize, but this also has the effect of supporting the researcher's conclusions. This is a form of bias that is difficult to quantify. Some scientific journals have begun to set detailed standards for what types of image processing are allowed in research results. Computerized instruments often keep a copy of the "raw data" from sensors before processing, which is the ultimate defense against processing bias, and similarly scientific standards require preservation of the original unenhanced "raw" versions of images used as research data.
Observation in philosophical terms is the process of filtering sensory information through the thought process. Input is received via hearing, sight, smell, taste, or touch and then analyzed through either rational or irrational thought.
For example, let us suppose that an observer "sees" a parent beat their child, and consequently observes that such an action is either good or bad. Deductions about what behaviors are good or bad may be based on preferences about building relationships, or on study of the consequences resulting from the observed behavior. With the passage of time, impressions stored in the consciousness about many such observations, together with the resulting relationships and consequences, permit the individual to build a construct about the moral implications of behavior. | https://en.wikipedia.org/wiki?curid=22649 |
Oftel
The Office of Telecommunications (Oftel) ("the telecommunications regulator") was a department in the United Kingdom government, under civil service control, charged with promoting competition and maintaining the interests of consumers in the UK telecommunications market. It was set up under the Telecommunications Act 1984 after privatisation of the nationalised operator BT.
Oftel was accused by critics such as Freeserve of having been "captured" by BT, and of giving the dominant operator too much freedom to leverage its monopoly status in fixed line telephony into other markets such as ADSL.
On 29 December 2003 the duties of Oftel were inherited by Ofcom, which was the result of the consolidation of five separate British telecommunications, radio spectrum and broadcasting regulators. | https://en.wikipedia.org/wiki?curid=22652 |
Ohio-class submarine
The "Ohio" class of nuclear-powered submarines includes the United States Navy's 14 ballistic missile submarines (SSBNs) and its four cruise missile submarines (SSGNs). Each displacing 18,750 tons submerged, the "Ohio"-class boats are the largest submarines ever built for the U.S. Navy. They are the world's third-largest submarines, behind the Russian Navy's Soviet-designed 48,000-ton and 24,000-ton . The "Ohios" carry more missiles than either: 24 Trident II missiles apiece, versus 16 by the Borei class (20 by the Borei II) and 20 by the "Typhoon" class.
Like the SSBN classes that preceded them, the "Ohio" SSBNs are part of the United States' nuclear-deterrent triad, along with U.S. Air Force strategic bombers and intercontinental ballistic missiles. The 14 SSBNs together carry about half of U.S. active strategic thermonuclear warheads. Although the Trident missiles have no preset targets when the submarines go on patrol, they can be given targets quickly, from the United States Strategic Command based in Nebraska, using secure and constant radio communications links, including very low frequency systems.
The lead submarine of this class is . All the "Ohio"-class submarines, except for , are named for U.S. states, which U.S. Navy tradition had previously reserved for battleships and cruisers.
The "Ohio"-class submarine was designed for extended strategic deterrent patrols. Each submarine is assigned two complete crews, called the Blue crew and the Gold crew, each typically serving 70-to-90-day deterrent patrols. To decrease the time in port for crew turnover and replenishment, three large logistics hatches have been installed to provide large-diameter resupply and repair access. These hatches allow rapid transfer of supply pallets, equipment replacement modules, and machinery components, speeding up replenishment and maintenance of the submarines. Moreover, the "stealth" ability of the submarines was significantly improved over all previous ballistic-missile subs. " Ohio" was virtually undetectable in her sea trials in 1982, giving the U.S. Navy extremely advanced flexibility.
The class's design allows the boats to operate for about 15 years between major overhauls. These submarines are reported to be as quiet at their cruising speed as the preceding classes were at much lower speeds, although exact information remains classified. Fire control for their Mark 48 torpedoes is carried out by the Mark 118 Mod 2 system, while the missile fire control system is a Mark 98.
The "Ohio"-class submarines were constructed from sections of hull, with each four-deck section being in diameter. The sections were produced at the General Dynamics Electric Boat facility, Quonset Point, Rhode Island, and then assembled at its shipyard at Groton, Connecticut.
The US Navy has a total of 18 "Ohio"-class submarines which consist of 14 ballistic missile submarines (SSBNs), and four cruise missile submarines (SSGNs). The SSBN submarines provide the sea-based leg of the U.S. nuclear triad. Each SSBN submarine is armed with up to 24 Trident II submarine-launched ballistic missiles (SLBM). Each SSGN is capable of carrying 154 Tomahawk cruise missiles, plus a complement of Harpoon missiles to be fired through their torpedo tubes.
As part of the New START treaty, four tubes on each SSBN will be deactivated, leaving each ship with only 20 available for war loads.
The "Ohio" class was designed in the 1970s to carry the concurrently designed Trident submarine-launched ballistic missile. The first eight "Ohio"-class submarines were armed at first with 24 Trident I C4 SLBMs. Beginning with the ninth Trident submarine, , the remaining boats were equipped with the larger, three-stage Trident II D5 missile. The Trident I missile carries eight multiple independently targetable reentry vehicles, while the Trident II missile carries 12, in total delivering more destructive power than the Trident I missile and with greater accuracy. Starting with in 2000, the Navy began converting its remaining ballistic missile submarines armed with C4 missiles to carry D5 missiles. This task was completed in mid-2008. The first eight submarines had their home ports at Bangor, Washington, to replace the submarines carrying Polaris A3 missiles that were then being decommissioned. The remaining 10 submarines originally had their home ports at Kings Bay, Georgia, replacing the Poseidon and Trident Backfit submarines of the Atlantic Fleet.
In 1994, the Nuclear Posture Review study determined that, of the 18 "Ohio" SSBNs the U.S. Navy would be operating in total, 14 would be sufficient for the strategic needs of the U.S. The decision was made to convert four "Ohio"-class boats into SSGNs capable of conducting conventional land attack and special operations. As a result, the four oldest boats of the class—"Ohio", "Michigan", "Florida", and "Georgia"—progressively entered the conversion process in late 2002 and were returned to active service by 2008. The boats could thereafter carry 154 Tomahawk cruise missiles and 66 special operations personnel, among other capabilities and upgrades. The cost to refit the four boats was around US$1 billion (2008 dollars) per vessel. During the conversion of the first four submarines to SSGNs (see below), five of the SSBNs were transferred from Kings Bay to Bangor. Further transfers occur as the strategic weapons goals of the United States change.
In 2011, "Ohio"-class submarines carried out 28 deterrent patrols. Each patrol lasts around 70 days. Four boats are on station ("hard alert") in designated patrol areas at any given time. From January to June 2014, "Pennsylvania" carried out a 140-day-long patrol, the longest to date.
The conversion modified 22 of the 24 Trident missile tubes to contain large vertical launch systems, one configuration of which may be a cluster of seven Tomahawk cruise missiles. In this configuration, the number of cruise missiles carried could be a maximum of 154, the equivalent of what is typically deployed in a surface battle group. Other payload possibilities include new generations of supersonic and hypersonic cruise missiles, Submarine Launched Intermediate Range Ballistic Missiles, unmanned aerial vehicles, the ADM-160 MALD, sensors for antisubmarine warfare or intelligence, surveillance, and reconnaissance missions, counter-mine-warfare payloads such as the AN/BLQ-11 Long Term Mine Reconnaissance System, and the broaching universal buoyant launcher and stealthy affordable capsule system specialized payload canisters.
The missile tubes also have room for stowage canisters that can extend the forward deployment time for special forces. The other two Trident tubes are converted to swimmer lockout chambers. For special operations, the Advanced SEAL Delivery System and the dry deck shelter can be mounted on the lockout chamber and the boat will be able to host up to 66 special-operations sailors or Marines, such as Navy SEALs, or USMC MARSOC teams. Improved communications equipment installed during the upgrade allows the SSGNs to serve as a forward-deployed, clandestine Small Combatant Joint Command Center.
On 26 September 2002, the Navy awarded General Dynamics Electric Boat a US$442.9 million contract to begin the first phase of the SSGN submarine conversion program. Those funds covered only the initial phase of conversion for the first two boats on the schedule. Advanced procurement was funded at $355 million in fiscal year 2002, $825 million in the FY 2003 budget and, through the five-year defense budget plan, at $936 million in FY 2004, $505 million in FY 2005, and $170 million in FY 2006. Thus, the total cost to refit the four boats is just under $700 million per vessel.
In November 2002, "Ohio" entered a dry-dock, beginning her 36-month refueling and missile-conversion overhaul. Electric Boat announced on 9 January 2006 that the conversion had been completed. The converted "Ohio" rejoined the fleet in February 2006, followed by "Florida" in April 2006. The converted "Michigan" was delivered in November 2006. The converted "Ohio" went to sea for the first time in October 2007. "Georgia" returned to the fleet in March 2008 at Kings Bay. These four SSGNs are expected to remain in service until about 2023–2026. At that point, their capabilities will be replaced by Virginia Payload Module-equipped Virginia-class submarines.
Note: Boats based at Naval Base Kitsap, Washington are operated by the U.S. Pacific Fleet, while boats based at Naval Submarine Base Kings Bay, Georgia are operated by U.S. Fleet Forces Command, (formerly the U.S. Atlantic Fleet).
The U.S. Department of Defense anticipates a continued need for a sea-based strategic nuclear force. The first of the current "Ohio" SSBNs is expected to be retired by 2029, so the replacement submarine must be seaworthy by that time. A replacement may cost over $4 billion per unit, compared to $2 billion for an "Ohio". The U.S. Navy is exploring two options. The first is a variant of the Virginia-class nuclear-powered attack submarines. The second is a dedicated SSBN, either with a new hull or based on an overhaul of the current "Ohio".
With the cooperation of both Electric Boat and Newport News Shipbuilding, in 2007, the U.S. Navy began a cost-control study. Then in December 2008, the U.S. Navy awarded Electric Boat a contract for the missile compartment design of the "Ohio"-class replacement, worth up to $592 million. Newport News is expected to receive close to 4% of that project. The U.S. Navy has yet to confirm an "Ohio"-class replacement program. In April 2009, U.S. Defense Secretary Robert M. Gates stated that the U.S. Navy was expected to begin such a program in 2010. The new vessel was scheduled to enter the design phase by 2014. If a new hull design is used, the program needed to be initiated by 2016 to meet the 2029 deadline.
As ballistic-missile submarines, the "Ohio" class has occasionally been portrayed in fiction books and films. | https://en.wikipedia.org/wiki?curid=22654 |
Ossian
Ossian (; Irish Gaelic/Scottish Gaelic: "Oisean") is the narrator and purported author of a cycle of epic poems published by the Scottish poet James Macpherson from 1760. Macpherson claimed to have collected word-of-mouth material in Scottish Gaelic, said to be from ancient sources, and that the work was his translation of that material. Ossian is based on Oisín, son of Finn or Fionn mac Cumhaill, anglicised to Finn McCool, a legendary bard who is a character in Irish mythology. Contemporary critics were divided in their view of the work's authenticity, but the consensus since is that Macpherson framed the poems himself, based on old folk tales he had collected.
The work was internationally popular, translated into all the literary languages of Europe and was highly influential both in the development of the Romantic movement and the Gaelic revival. "The contest over the authenticity of Macpherson's pseudo-Gaelic productions," Curley asserts, "became a seismograph of the fragile unity within restive diversity of imperial Great Britain in the age of Johnson." Macpherson's fame was crowned by his burial among the literary giants in Westminster Abbey. W.P. Ker, in the "Cambridge History of English Literature", observes that "all Macpherson's craft as a philological impostor would have been nothing without his literary skill."
In 1760 Macpherson published the English-language text "Fragments of ancient poetry, collected in the Highlands of Scotland, and translated from the Gaelic or Erse language". Later that year, he claimed to have obtained further manuscripts and in 1761 he claimed to have found an epic on the subject of the hero Fingal (with Fingal or "Fionnghall" meaning "white stranger"), written by Ossian. According to Macpherson's prefatory material, his publisher, claiming that there was no market for these works except in English, required that they be translated. Macpherson published these translations during the next few years, culminating in a collected edition, "The Works of Ossian", in 1765. The most famous of these Ossianic poems was "Fingal", written in 1762.
The supposed original poems are translated into poetic prose, with short and simple sentences. The mood is epic, but there is no single narrative, although the same characters reappear. The main characters are Ossian himself, relating the stories when old and blind, his father Fingal (very loosely based on the Irish hero Fionn mac Cumhaill), his dead son Oscar (also with an Irish counterpart), and Oscar's lover Malvina (like Fiona, a name invented by Macpherson), who looks after Ossian in his old age. Though the stories "are of endless battles and unhappy loves", the enemies and causes of strife are given little explanation and context.
Characters are given to killing loved ones by mistake, and dying of grief, or of joy. There is very little information given on the religion, culture or society of the characters, and buildings are hardly mentioned. The landscape "is more real than the people who inhabit it. Drowned in eternal mist, illuminated by a decrepit sun or by ephemeral meteors, it is a world of greyness." Fingal is king of a region of south-west Scotland perhaps similar to the historical kingdom of Dál Riata and the poems appear to be set around the 3rd century, with the "king of the world" mentioned being the Roman Emperor; Macpherson and his supporters detected references to Caracalla (d. 217, as "Caracul") and Carausius (d. 293, as "Caros", the "king of ships").
The poems achieved international success. Napoleon and Diderot were prominent admirers and Voltaire was known to have written parodies of them. Thomas Jefferson thought Ossian "the greatest poet that has ever existed", and planned to learn Gaelic so as to read his poems in the original. They were proclaimed as a Celtic equivalent of the Classical writers such as Homer. "The genuine remains of Ossian...are in many respects of the same stamp as the Iliad," was Thoreau's opinion. Many writers were influenced by the works, including Walter Scott, and painters and composers chose Ossianic subjects.
One poem was translated into French in 1762, and by 1777 the whole "corpus". In the German-speaking states Michael Denis made the first full translation in 1768–69, inspiring the proto-nationalist poets Klopstock and Goethe, whose own German translation of a portion of Macpherson's work figures prominently in a climactic scene of "The Sorrows of Young Werther" (1774). Goethe's associate Johann Gottfried Herder wrote an essay titled "Extract from a correspondence about Ossian and the Songs of Ancient Peoples" (1773) in the early days of the Sturm und Drang movement.
Complete Danish translations were made in 1790, and Swedish ones in 1794–1800. In Scandinavia and Germany the Celtic nature of the setting was ignored or not understood, and Ossian was regarded as a Nordic or Germanic figure who became a symbol for nationalist aspirations. The French general Jean-Baptiste Bernadotte, who was made King Charles XIV John of Sweden and King of Norway, had already named his only son after a character from Ossian, at the suggestion of Napoleon, the child's godfather and an admirer of Ossian. Born in 1799, Bernadotte's son later became King Oscar I of Sweden and Norway, who was, in turn, succeeded by his sons Charles XV of Sweden (d. 1872) and Oscar II (d. 1907). "Oscar" being a Royal Swedish name led to its becoming also a common male first name, especially in Scandinavia but also in other European countries.
Melchiorre Cesarotti was an Italian clergyman whose translation into Italian is said by many to improve on the original; he was a tireless promoter of the poems, in Vienna and Warsaw as well as in Italy. It was his translation that Napoleon especially admired, and it influenced, among others, Ugo Foscolo, who was Cesarotti's pupil at the University of Padua.
By 1800 Ossian was translated into Spanish and Russian, with Dutch following in 1805, and Polish, Czech and Hungarian in 1827–33. The poems were as much admired in Hungary as in France and Germany; Hungarian János Arany wrote "Homer and Ossian" in response, and several other Hungarian writers – Baróti Szabó, Csokonai, Sándor Kisfaludy, Kazinczy, Kölcsey, Ferenc Toldy, and Ágost Greguss, were also influenced by it.
The first partial Polish translation of Ossian was made by Ignacy Krasicki in 1793. A complete translation by Seweryn Goszczyński appeared in 1838.
The opera "Ossian, ou Les bardes" by Le Sueur (with the famous, multimedial scene of "Ossian's Dream“) was a sell-out at the Paris Opera in 1804, and transformed the composer's career. The poems also exerted an influence on the burgeoning of Romantic music, and Franz Schubert in particular composed Lieder setting many of Ossian's poems. In 1829 Felix Mendelssohn was inspired to visit the Hebrides and composed the "Hebrides Overture", also known as "Fingal's Cave". His friend Niels Gade devoted his first published work, the concert overture "Efterklange af Ossian" ("Echoes of Ossian") written in 1840, to the same subject.
The great Hungarian national poet Sándor Petőfi wrote a poem entitled "Homer and Ossian", of which the first verse reads:
Likewise, William Wordsworth was evidently positively impressed by Macpherson's text when he wrote his poem "Glen-Almain, the Narrow Glen":
There were immediate disputes of Macpherson's claims on both literary and political grounds. Macpherson promoted a Scottish origin for the material, and was hotly opposed by Irish historians who felt that their heritage was being appropriated. However, both Scotland and Ireland shared a common Gaelic culture during the period in which the poems are set, and some Fenian literature common in both countries was composed in Scotland.
Samuel Johnson, English author, critic, and biographer, was convinced that Macpherson was "a mountebank, a liar, and a fraud, and that the poems were forgeries". Johnson also dismissed the poems' quality. Upon being asked, "But Doctor Johnson, do you really believe that any man today could write such poetry?" he famously replied, "Yes. Many men. Many women. And many children." Johnson is cited as calling the story of Ossian "as gross an imposition as ever the world was troubled with". In support of his claim, Johnson also called Gaelic the rude speech of a barbarous people, and said there were no manuscripts in it more than 100 years old. In reply, it was proved that the Advocates' library at Edinburgh contained Gaelic manuscripts 500 years old, and one of even greater antiquity.
Scottish author Hugh Blair's 1763 "A Critical Dissertation on the Poems of Ossian" upheld the work's authenticity against Johnson's scathing criticism and from 1765 was included in every edition of "Ossian" to lend the work credibility. The work also had a timely resonance for those swept away by the emerging Romantic movement and the theory of the "noble savage", and it echoed the popularity of Burke's seminal "A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful" (1757).
In 1766 the Irish antiquarian and Gaelic scholar Charles O'Conor dismissed Ossian's authenticity in a new chapter "Remarks on Mr. Mac Pherson's translation of Fingal and Temora" that he added to the second edition of his seminal history. In 1775 he expanded his criticism in a new book, "Dissertation on the origin and antiquities of the antient Scots".
Faced with the controversy, the Committee of the Highland Society enquired after the authenticity of Macpherson's supposed original. It was because of these circumstances that the so-called Glenmasan manuscript (Adv. 72.2.3) came to light in the late 18th century, a compilation which contains the tale "Oided mac n-Uisnig". This text is a version of the Irish "Longes mac n-Uislenn" and offers a tale which bears some comparison to Macpherson's "Darthula", although it is radically different in many respects. Donald Smith cited it in his report for the Committee.
The controversy raged on into the early years of the 19th century, with disputes as to whether the poems were based on Irish sources, on sources in English, on Gaelic fragments woven into his own composition as Johnson concluded, or largely on Scots Gaelic oral traditions and manuscripts as Macpherson claimed. Defences of the authenticity of the poems continued to be made. For example, Peter Hately Waddell argued in "Ossian and the Clyde" (1875) that poems contained topographical references that could not have been known to Macpherson.
In 1952, the Scottish literary scholar Derick Thomson investigated the sources for Macpherson's work and concluded that Macpherson had collected genuine Scottish Gaelic ballads, employing scribes to record those that were preserved orally and collating manuscripts, but had adapted them by altering the original characters and ideas, and had introduced a great deal of his own.
Perhaps the strongest evidence that Macpherson's 'Ossian' was not a total fabrication is to be found in the oldest extant Scottish manuscript in Gaelic known as the "Book of the Dean of Lismore" (1512). In the section of this manuscript which consists of heroic poetry and includes verse from as early as AD 1310, we find the names and exploits of almost all the leading protagonists in Macpherson's text (Cairbe, Caoilte, Conán, Cormac mac Airt, Cú Chulainn, Diarmad, Eimhear, Fionn mac Cumhaill, Goll mac Morna, Osgar mac Oiséin, Tréanmhor, etc.), together with legends and traditions associated with these characters. (See 'Heroic Poetry from the Book of the Dean of Lismore'. Neil Ross, editor. Scottish Gaelic Texts Society, Edinburgh, 1939.)
Macpherson's "Ossian" made a strong impression on Dugald Buchanan (1716–68), a Perthshire poet whose celebrated "Spiritual Hymns" are written in a Scots Gaelic of a high quality that to some extent reflects the language of the classical Gaelic common to the bards of both Ireland and Scotland. Buchanan, taking the poems of "Ossian" to be authentic, was moved to revalue the genuine traditions and rich cultural heritage of the Gaels. At around the same time, he wrote to Sir James Clerk of Penicuik, the leading antiquary of the movement, proposing that someone should travel to the Isles and Western Coast of Scotland and collect the work of the ancient and modern bards, in which alone he could find the language in its purity.
Much later, in the 19th and 20th centuries, this task was taken up by collectors such as Alexander Carmichael and Lady Evelyn Stewart Murray, and the material was recorded and the work continued by the School of Scottish Studies and the Scottish Gaelic Texts Society.
Subjects from the Ossian poems were popular in the art of northern Europe, but at rather different periods depending on the country; by the time French artists began to depict Ossian, British artists had largely dropped him. Ossian was especially popular in Danish art, but also found in Germany and the rest of Scandinavia.
British artists began to depict the Ossian poems early on, with the first major work a cycle of paintings decorating the ceiling of the "Grand Hall" of Penicuik House in Midlothian, built by Sir James Clerk, who commissioned the paintings in 1772. These were by the Scottish painter Alexander Runciman and lost when the house burnt down in 1899, though drawings and etchings survive, and two pamphlets describing them were published in the 18th century. A subject from Ossian by Angelica Kauffman was shown in the Royal Academy exhibition of 1773, and Ossian was depicted in "Elysium", part of the Irish painter James Barry's "magnum opus" decorating the Royal Society of Arts, at the Adelphi Buildings in London (still "in situ").
Works on paper by Thomas Girtin and John Sell Cotman have survived, though the Ossianic landscapes by George Augustus Wallis, which the Ossian fan August Wilhelm Schlegel praised in a letter to Goethe, seem to have been lost, as has a picture by J.M.W. Turner exhibited in 1802. Henry Singleton exhibited paintings, some of which were engraved and used in editions of the poems.
A fragment by Novalis, written in 1789, refers to Ossian as an inspired, holy and poetical singer.
The Danish painter Nicolai Abildgaard, Director of the Copenhagen Academy from 1789, painted several scenes from Ossian, as did his pupils including Asmus Jacob Carstens. His friend Joseph Anton Koch painted a number of subjects, and two large series of illustrations for the poems, which never got properly into print; like many Ossianic works by Wallis, Carstens, Krafft and others, some of these were painted in Rome, perhaps not the best place to evoke the dim northern light of the poems. In Germany the request in 1804 to produce some drawings as illustrations so excited Philipp Otto Runge that he planned a series of 100, far more than asked for, in a style heavily influenced by the linear illustrations of John Flaxman; these remain as drawings only. Many other German works are recorded, some as late as the 1840s; word of the British scepticism over the Ossian poems was slow to penetrate the continent, or considered irrelevant.
In France the enthusiasm of Napoleon for the poems accounts for most artistic depictions, and those by the most famous artists, but a painting exhibited in the Paris Salon in 1800 by Paul Duqueylar (now Musée Granet, Aix-en-Provence) excited "Les Barbus" ("the Bearded Ones") a group of primitivist artists including Pierre-Maurice Quays (or Quaï) who promoted living in the style of "early civilizations as described in Homer, Ossian, and the Bible". Quays is reported as saying: "Homère? Ossian? ... le soleil? la lune? Voilà la question. En vérité, je crois que je préfère la lune. C'est plus simple, plus grand, plus "primitif"". ("Homer? Ossian? ... the sun? the moon? That's the question. Truthfully I think I prefer the moon. It's more simple, more grand, more "primitive""). The same year Napoleon was planning the renovation of the Château de Malmaison as a summer palace, and though he does not seem to have suggested Ossianic subjects for his painters, two large and significant works were among those painted for the reception hall, for which six artists had been commissioned.
These were Girodet's painting of 1801–02 "Ossian receiving the Ghosts of the French Heroes", and "Ossian Evoking Ghosts on the Edge of the Lora" (1801) by François Pascal Simon Gérard. Gérard's original, bought by the King of Sweden after the fall of Napoleon, was lost in a shipwreck, but it survives in three replicas by the artist (a further one in Berlin was lost in 1945). One is now at Malmaison (184.5 × 194.5 cm / 72.6 × 76.6 in), and the Kunsthalle Hamburg has another (180.5 × 198.5 cm). A watercolour copy by Jean-Baptiste Isabey was placed as the frontispiece to Napoleon's copy of the poems.
Duqueylar, Girodet and Gérard, like Johann Peter Krafft (above) and most of the "Barbus", were all pupils of David, and the clearly unclassical subjects of the Ossian poems were useful for emergent French Romantic painting, marking a revolt against David's Neoclassical choice of historical subject-matter. David's recorded reactions to the paintings were guarded or hostile; he said of Girodet's work: "Either Girodet is mad or I no longer know anything of the art of painting".
Girodet's painting (still at Malmaison; 192.5 × 184 cm) was a "succès de scandale" when exhibited in 1802, and remains a key work in the emergence of French Romantic painting, but the specific allusions to the political situation that he intended it to carry were largely lost on the public, and overtaken by the Peace of Amiens with Great Britain, signed in 1802 between the completion and exhibition of the work. He also produced "Malvina dying in the arms of Fingal" (c. 1802), and other works.
Another pupil of David, Jean-Auguste-Dominique Ingres, was to depict Ossianic scenes over most of his long career. He made a drawing in 1809, when studying in Rome, and in 1810 or 1811 was commissioned to make two paintings, the "Dream of Ossian" and a classical scene, to decorate the bedroom Napoleon was to occupy in the Palazzo Quirinale on a visit to Rome. In fact the visit never came off, and in 1835 Ingres repurchased the work, now in poor condition.
The National Library of Scotland has 327 books and associated materials in its Ossian Collection. The collection was originally assembled by J. Norman Methven of Perth and includes different editions and translations of James Macpherson's epic poem 'Ossian', some with a map of the 'Kingdom of Connor'. It also contains secondary material relating to Ossianic poetry and the Ossian controversy. More than 200 items from the collection have been digitised.
| https://en.wikipedia.org/wiki?curid=22655 |
Operand
In mathematics, an operand is the object of a mathematical operation, i.e., it is the object or quantity that is operated on.
The following arithmetic expression shows an example of operators and operands:
3 + 6 = 9
In the above example, '+' is the symbol for the operation called addition.
The operand '3' is one of the inputs (quantities) followed by the addition operator, and the operand '6' is the other input necessary for the operation.
The result of the operation is 9. (The number '9' is also called the sum of the augend 3 and the addend 6.)
An operand, then, is also referred to as "one of the inputs (quantities) for an operation".
Operands may be complex, and may consist of expressions also made up of operators with operands:
(3 + 5) × 2
In the above expression '(3 + 5)' is the first operand for the multiplication operator and '2' the second. The operand '(3 + 5)' is an expression in itself, which contains an addition operator, with the operands '3' and '5'.
Rules of precedence affect which values form operands for which operators:
3 + 5 × 2
In the above expression, the multiplication operator has higher precedence than the addition operator, so the multiplication operator takes the operands '5' and '2'. The addition operator then takes the operands '3' and '5 × 2'.
Depending on the mathematical notation being used the position of an operator in relation to its operand(s) may vary. In everyday usage infix notation is the most common, however other notations also exist, such as the prefix and postfix notations. These alternate notations are most common within computer science.
Below is a comparison of three different notations — all represent an addition of the numbers '1' and '2':
infix notation: 1 + 2
prefix notation: + 1 2
postfix notation: 1 2 +
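To make the comparison concrete, below is a minimal stack-based evaluator for postfix notation, sketched in Python. The function name, the whitespace-separated token format, and the restriction to a few binary operators are illustrative assumptions, not something specified above.

# A minimal stack-based evaluator for postfix (reverse Polish) notation.
def eval_postfix(tokens):
    stack = []
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # second operand (pushed most recently)
            a = stack.pop()  # first operand
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))  # operands are pushed until an operator needs them
    return stack.pop()

print(eval_postfix("1 2 +".split()))  # prints 3, the same result as infix 1 + 2

Postfix notation needs no parentheses or precedence rules, which is why stack machines and some calculators use it; prefix notation can be evaluated with the mirror-image strategy, scanning from the right.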
In a mathematical expression, the order of operation is carried out from left to right. Start with the leftmost value and seek the first operation to be carried out in accordance with the order specified above (i.e., start with parentheses and end with the addition/subtraction group). For example, in the expression
4 × 2² − (2 + 2²)
the first operation to be acted upon is any and all expressions found inside a parenthesis. So beginning at the left and moving to the right, find the first (and in this case, the only) parenthesis, that is, (2 + 2²). Within the parenthesis itself is found the expression 2². The reader is required to find the value of 2² before going any further. The value of 2² is 4. Having found this value, the remaining expression looks like this:
4 × 2² − (2 + 4)
The next step is to calculate the value of the expression inside the parenthesis itself, that is, (2 + 4) = 6. Our expression now looks like this:
4 × 2² − 6
Having calculated the parenthetical part of the expression, we start over again beginning with the leftmost value and move right. The next order of operation (according to the rules) is exponents. Start at the leftmost value, that is, 4, and scan your eyes to the right and search for the first exponent you come across. The first (and only) expression we come across that is expressed with an exponent is 2². We find the value of 2², which is 4. What we have left is the expression
4 × 4 − 6
The next order of operation is multiplication. 4 × 4 is 16. Now our expression looks like this:
16 − 6
The next order of operation according to the rules is division. However, there is no division operator sign (÷) in the expression, 16 − 6. So we move on to the next order of operation, i.e., addition and subtraction, which have the same precedence and are done left to right.
So the correct value for our original expression, 4 × 2² − (2 + 2²), is 10.
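The worked example can also be checked mechanically. In the Python sketch below (an illustration only), the language's built-in operator precedence happens to follow the same convention, with 2**2 standing for 2²:

print(3 + 5 * 2)              # 13: '*' binds first, so '+' gets operands 3 and 10
print((3 + 5) * 2)            # 16: parentheses make (3 + 5) an operand of '*'
print(4 * 2**2 - (2 + 2**2))  # 10: parentheses, then exponents, then '*', then '-'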
It is important to carry out the order of operation in accordance with rules set by convention. If the reader evaluates an expression but does not follow the correct order of operation, the reader will come forth with a different value. The different value will be the incorrect value because the order of operation was not followed. The reader will arrive at the correct value for the expression if and only if each operation is carried out in the proper order.
The number of operands of an operator is called its arity. Based on arity, operators are classified as nullary (no operands), unary (1 operand), binary (2 operands), ternary (3 operands), etc.
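As a rough illustration (the variable names are arbitrary), operators of several arities appear below in Python:

x = 5
neg = -x                              # unary minus: one operand
total = x + 3                         # binary plus: two operands
sign = "pos" if x > 0 else "non-pos"  # conditional expression: three operands
print(neg, total, sign)               # -5 8 pos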
In computer programming languages, the definitions of operator and operand are almost the same as in mathematics.
In computing, an operand is the part of a computer instruction which specifies what data is to be manipulated or operated on, while at the same time representing the data itself.
A computer instruction describes an operation such as add or multiply X, while the operand (or operands, as there can be more than one) specify on which X to operate as well as the value of X.
Additionally, in assembly language, an operand is a value (an argument) on which the instruction, named by mnemonic, operates. The operand may be a processor register, a memory address, a literal constant, or a label. A simple example (in the x86 architecture) is
MOV DS, AX
where the value in register operand codice_1 is to be moved (codice_2) into register codice_3. Depending on the instruction, there may be zero, one, two, or more operands. | https://en.wikipedia.org/wiki?curid=22656 |
Order of magnitude
An order of magnitude is an approximation of the logarithm of a value relative to some contextually understood reference value, usually ten, interpreted as the base of the logarithm and the representative of values of magnitude one. Logarithmic distributions are common in nature, and considering the order of magnitude of values sampled from such a distribution can be more intuitive. When the reference value is ten, the order of magnitude can be understood as the number of digits in the base-10 representation of the value. Similarly, if the reference value is one of certain powers of two, the magnitude can be understood as the amount of computer memory needed to store the exact integer value.
Differences in order of magnitude can be measured on a base-10 logarithmic scale in “decades” (i.e., factors of ten). Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).
Generally, the order of magnitude of a number is the smallest power of 10 used to represent that number. To work out the order of magnitude of a number N, the number is first expressed in the following form:
N = a × 10^b
where 1/√10 ≤ a < √10. Then, b represents the order of magnitude of the number. The order of magnitude can be any integer. The table below enumerates the order of magnitude of some numbers in light of this definition:
The geometric mean of 10^(b−1/2) and 10^(b+1/2) is 10^b, meaning that a value of exactly 10^b (i.e., a = 1) represents a geometric "halfway point" within the range of numbers sharing the order of magnitude b.
Some use a simpler definition where 0.5 ≤ a < 5, perhaps because the arithmetic mean of 10^b and 10^(b+c) approaches 5 × 10^(b+c−1) for increasing c. This definition has the effect of lowering the values of b slightly:
Yet others restrict a to values where 1 ≤ a < 10, making the order of magnitude of a number exactly equal to its exponent part in scientific notation.
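The three conventions above can be sketched as a small Python function. The labels used here ('half_power', 'half_ten', 'scientific') are assumed names chosen for illustration, not standard terminology:

import math

def order_of_magnitude(x, definition="half_power"):
    # Returns b, where x = a * 10**b and x > 0, under three conventions:
    #   'half_power' : 1/sqrt(10) <= a < sqrt(10)
    #   'half_ten'   : 0.5 <= a < 5
    #   'scientific' : 1 <= a < 10 (b is the scientific-notation exponent)
    log = math.log10(x)
    if definition == "half_power":
        return math.floor(log + 0.5)             # round half up
    if definition == "half_ten":
        return math.floor(log + math.log10(2))   # shift so a lands in [0.5, 5)
    return math.floor(log)                       # plain truncation

for n in (0.2, 1, 5, 31, 320):
    print(n, order_of_magnitude(n), order_of_magnitude(n, "scientific"))

For example, 5 has order of magnitude 1 under the first two conventions (5 = 0.5 × 10^1) but 0 under the scientific one (5 = 5 × 10^0), matching the remark above that the stricter definitions lower or raise b for borderline values.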
Orders of magnitude are used to make approximate comparisons. If two numbers differ by one order of magnitude, one is about ten times larger than the other. If values differ by two orders of magnitude, they differ by a factor of about 100. Two numbers of the same order of magnitude have roughly the same scale: the larger value is less than ten times the smaller value.
The order of magnitude of a number is, intuitively speaking, the number of powers of 10 contained in the number. More precisely, the order of magnitude of a number can be defined in terms of the common logarithm, usually as the integer part of the logarithm, obtained by truncation. For example, the number 4,000,000 has a logarithm (in base 10) of 6.602; its order of magnitude is 6. When truncating, a number of this order of magnitude is between $10^{6}$ and $10^{7}$. In a similar example, with the phrase "He had a seven-figure income", the order of magnitude is the number of figures minus one, so it is easily determined without a calculator to be 6. An order of magnitude is an approximate position on a logarithmic scale.
An order-of-magnitude estimate of a variable, whose precise value is unknown, is an estimate rounded to the nearest power of ten. For example, an order-of-magnitude estimate for a variable between about 3 billion and 30 billion (such as the human population of the Earth) is 10 billion. To round a number to its nearest order of magnitude, one rounds its logarithm to the nearest integer. Thus 4,000,000, which has a logarithm (in base 10) of 6.602, has 7 as its nearest order of magnitude, because "nearest" implies rounding rather than truncation. For a number written in scientific notation, this logarithmic rounding scale requires rounding up to the next power of ten when the multiplier is greater than the square root of ten (about 3.162). For example, the nearest order of magnitude for $1.7 \times 10^{8}$ is 8, whereas the nearest order of magnitude for $3.7 \times 10^{8}$ is 9. An order-of-magnitude estimate is sometimes also called a zeroth order approximation.
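A brief worked check of the rounding rule (using the numbers above): $\log_{10}(4 \times 10^{6}) \approx 6.602$, which rounds to 7; equivalently, the multiplier satisfies $4 > \sqrt{10} \approx 3.162$, so the exponent 6 is rounded up to 7.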
An order-of-magnitude difference between two values is a factor of 10. For example, the mass of the planet Saturn is 95 times that of Earth, so Saturn is "two orders of magnitude" more massive than Earth. Order-of-magnitude differences are called decades when measured on a logarithmic scale.
Other orders of magnitude may be calculated using bases other than 10. The ancient Greeks ranked the nighttime brightness of celestial bodies into 6 levels, in which each level was the fifth root of one hundred (about 2.512) times as bright as the nearest weaker level. Thus the brightest level, being 5 orders of magnitude brighter than the weakest, is $(100^{1/5})^{5}$, or a factor of 100, times brighter.
The different decimal numeral systems of the world use a larger base to better envision the size of a number, and have created names for the powers of this larger base. The table shows what numbers the orders of magnitude aim at for base 10 and for base 1,000,000. It can be seen that the order of magnitude is included in the number name in this example, because bi- means 2 and tri- means 3 (these make sense in the long scale only), and the suffix -illion tells that the base is 1,000,000. But the number names billion and trillion themselves (here with a different meaning than earlier in the article) are not names of the orders of magnitude; they are names of the magnitudes, that is, of the numbers themselves.
SI units in the table at right are used together with SI prefixes, which were devised with mainly base 1000 magnitudes in mind. The IEC standard prefixes with base 1024 were invented for use in electronic technology.
The ancient scale of apparent magnitudes for the brightness of stars uses the base $100^{1/5} \approx 2.512$ and is reversed, so that brighter objects have smaller magnitudes. The modernized version has, however, turned into a logarithmic scale with non-integer values.
For extremely large numbers, a generalized order of magnitude can be based on their double logarithm or super-logarithm. Rounding these downward to an integer gives categories between very "round numbers"; rounding them to the nearest integer and applying the inverse function gives the "nearest" round number.
The double logarithm yields the categories:
(The first two categories mentioned, and the extension to the left, may not be very useful; they merely demonstrate how the sequence mathematically continues to the left.)
The super-logarithm yields the categories:
The "midpoints" which determine which round number is nearer are in the first case:
and, depending on the interpolation method, in the second case
For extremely small numbers (in the sense of close to zero) neither method is suitable directly, but the generalized order of magnitude of the reciprocal can be considered.
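As a hedged worked illustration of these generalized orders (the numbers are chosen here for clarity, not taken from the original): for a googol, $\log_{10} \log_{10} 10^{100} = \log_{10} 100 = 2$, so its double-logarithm category is 2; for the super-logarithm, $\operatorname{slog}_{10}\!\left(10^{10^{10}}\right) = 3$, since three successive base-10 logarithms reduce the number to 1.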
Similar to the logarithmic scale, one can have a double logarithmic scale (an example is provided here) and a super-logarithmic scale. The intervals above all have the same length on these scales, with the "midpoints" actually midway. More generally, a point midway between two points corresponds to the generalised "f"-mean with "f"("x") the corresponding function log log "x" or slog "x". In the case of log log "x", this mean of two numbers (e.g. 2 and 16 giving 4) does not depend on the base of the logarithm, just like in the case of log "x" (geometric mean, 2 and 8 giving 4), but unlike in the case of log log log "x" (4 and 65,536 giving 16 if the base is 2, but not otherwise). | https://en.wikipedia.org/wiki?curid=22657 |
Occam (programming language)
occam is a concurrent programming language that builds on the communicating sequential processes (CSP) process algebra and shares many of its features. It is named after the philosopher William of Ockham, after whom Occam's razor is also named.
occam is an imperative procedural language (like Pascal). It was developed by David May and others at Inmos (trademark INMOS), advised by Tony Hoare, as the native programming language for their transputer microprocessors, but implementations for other platforms are available. The most widely known version is occam 2; its programming manual was written by Steven Ericsson-Zenith and others at Inmos.
In the following examples indentation and formatting are critical for parsing the code: expressions are terminated by the end of the line, lists of expressions need to be on the same level of indentation. This feature, named the off-side rule, is also found in other languages such as Haskell and Python.
Communication between processes works through named "channels". One process outputs data to a channel via "!", while another inputs data with "?". Input and output cannot proceed until the other end is ready to accept or offer data. (In the "not proceeding" case it is often said that the process "blocks" on the channel. However, the program will neither spin nor poll; thus terms like "wait", "hang" or "yield" may also convey the behaviour, also in the context that it will not "block" other independent processes from running.) Examples (c is a variable):
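The example code itself was lost in extraction; the following is a plausible reconstruction (the channel names keyboard and screen are assumed):

keyboard ? c
screen ! c

The first line receives a value from the channel keyboard into the variable c; the second sends the value of c to the channel screen.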
SEQ introduces a list of expressions that are evaluated sequentially. This is not implicit as it is in most other programming languages. Example:
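A minimal sketch of such an example (variable names assumed):

SEQ
  x := x + 1
  y := x * x

The second assignment runs only after the first has completed, so y sees the incremented value of x.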
PAR begins a list of expressions that may be evaluated concurrently. Example:
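A plausible sketch (again with assumed names; the two branches touch disjoint variables, so they may safely run in parallel):

PAR
  x := x + 1
  y := y * 2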
ALT specifies a list of "guarded" commands. The "guards" are a combination of a boolean condition and an input expression (both optional). Each guard for which the condition is true and the input channel is ready is successful. One of the successful alternatives is selected for execution. Example:
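The original example was stripped; the sketch below is reconstructed to match the description that follows (the names c1, c2, merged, status, and out are taken from that description):

ALT
  count1 < 100 & c1 ? data
    SEQ
      count1 := count1 + 1
      merged ! data
  count2 < 100 & c2 ? data
    SEQ
      count2 := count2 + 1
      merged ! data
  status ? request
    SEQ
      out ! count1
      out ! count2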
This will read data from channels c1 or c2 (whichever is ready) and pass it into a merged channel. If count1 or count2 reaches 100, reads from the corresponding channel will be disabled. A request on the status channel is answered by outputting the counts to out.
"occam 1" (released 1983) was a preliminary version of the language which borrowed from David May's work on EPL and Tony Hoare's CSP. This supported only the VAR data type, which was an integral type corresponding to the native word length of the target architecture, and arrays of only one dimension.
"occam 2" is an extension produced by Inmos Ltd in 1987 that adds floating-point support, functions, multi-dimensional arrays and more data types such as varying sizes of integers (INT16, INT32) and bytes.
With this revision, occam became a language able to express useful programs, whereas occam 1 was more suited to examining algorithms and exploring the new language (however, the occam 1 compiler was written in occam 1, so there is an existence proof that reasonably sized, useful programs could be written in occam 1, despite its limits).
"occam 2.1" was the last of the series of occam language developments contributed by Inmos. Defined in 1994, it was influenced by an earlier proposal for an occam 3 language (also referred to as "occam91" during its early development) created by Geoff Barrett at Inmos in the early 1990s. A revised Reference Manual describing occam 3 was distributed for community comment, but the language was never fully implemented in a compiler.
occam 2.1 introduced several new features to occam 2, including:
For a full list of the changes see Appendix P of the Inmos occam 2.1 Reference Manual.
"occam-π" is the common name for the occam variant implemented by later versions of the Kent Retargetable occam Compiler (KRoC). The addition of the symbol "π" (pi) to the occam name is an allusion to KRoC occam including several ideas inspired by the π-calculus. It contains several significant extensions to the occam 2.1 compiler, for example: | https://en.wikipedia.org/wiki?curid=22660 |
October Revolution
The October Revolution (commonly referred to as the Bolshevik Revolution, the October Uprising, or Red October), officially known in Soviet historiography as the Great October Socialist Revolution, was a revolution in Russia led by the Bolshevik Party of Vladimir Lenin that was instrumental in the larger Russian Revolution of 1917–1923. It took place through an armed insurrection in Petrograd on 25 October (Old Style, O.S.; 7 November, New Style or N.S.) 1917.
The October Revolution had followed and capitalized on the February Revolution earlier in the year. The February Revolution had overthrown the Tsarist autocracy, resulting in a provisional government. The provisional government had taken power after being proclaimed by Grand Duke Michael, Tsar Nicholas II's younger brother, who declined to take power after the Tsar had stepped down.
During this time, urban workers began to organize into councils (soviets) wherein revolutionaries criticized the provisional government and its actions. After the Congress of Soviets, the new governing body, had its second session, it elected members of the Bolsheviks and other left-wing groups such as the Left Socialist Revolutionaries (Left SR) to important positions within the new state of affairs. This immediately initiated the establishment of the Russian Soviet Republic. On 17 July 1918, the Tsar and his family, including his five children aged 13 to 22, were executed.
The revolution was led by the Bolsheviks, who used their influence in the Petrograd Soviet to organize the armed forces. Bolshevik Red Guards forces under the Military-Revolutionary Committee began the occupation of government buildings on 25 October (O.S.; 7 November, N.S.), 1917. The following day, the Winter Palace (the seat of the Provisional government located in Petrograd, then capital of Russia) was captured.
The slogan of the October Revolution was "All Power to the Soviets", meaning all power to grassroots democratically elected councils. For a time this was observed: the interim Bolshevik-only Sovnarkom, or Soviet government, was replaced by a Bolshevik-Left SR coalition government, with an All-Russian Central Executive Committee of Soviets composed of representatives of all factions who supported Soviet power, and the peasant land seizures were legally entrenched. Throughout 1918, however, the Treaty of Brest-Litovsk (which resulted in a Left SR walkout) and other policies disputed by both the other pro-Soviet parties and minority factions of the Bolsheviks progressively dissipated this arrangement, until by 1920 there were no free elections and delegates were appointed by a one-party state.
The long-awaited Constituent Assembly elections were held on 12 November (O.S.; 25 November, N.S.) 1917. In contrast to their majority in the Soviets, the Bolsheviks only won 175 seats in the 715-seat legislative body, coming in second behind the Socialist Revolutionary Party, which won 370 seats, although the SR Party no longer existed as a whole party by that time, as the Left SRs had gone into coalition with the Bolsheviks from October 1917 to March 1918 (a cause of dispute over the legitimacy of the returned seating of the Constituent Assembly, as the old lists were drawn up by the old SR Party leadership and thus represented mostly Right SRs, whereas the peasant soviet deputies had returned majorities for the pro-Bolshevik Left SRs). The Constituent Assembly was to first meet on 28 November (O.S.) 1917, but its convocation was delayed until 5 January (O.S.; 18 January, N.S.) 1918 by the Bolsheviks. On its first and only day in session, the Constituent Assembly came into conflict with the Soviets, and it rejected Soviet decrees on peace and land, resulting in the Constituent Assembly being dissolved the next day by order of the Congress of Soviets.
As the revolution was not universally recognized, there followed the struggles of the Russian Civil War (1917–22) and the creation of the Soviet Union in 1922.
At first, the event was referred to as the "October coup" ("Oktyabrsky perevorot") or the "Uprising of the 3rd," as seen in contemporary documents (for example, in the first editions of Lenin's complete works). However, the Russian word "perevorot" has a meaning similar to "revolution" and also means "upheaval" or "overturn", so "coup" is not necessarily the correct translation.
With time, the term "October Revolution" came into use. It is also known as the "November Revolution", having occurred in November according to the Gregorian Calendar (for details, see Soviet calendar).
The February Revolution had toppled Tsar Nicholas II of Russia and replaced his government with the Russian Provisional Government. However, the provisional government was weak and riven by internal dissension. It continued to wage World War I, which became increasingly unpopular. There was a nationwide crisis affecting social, economic, and political relations. Disorder in industry and transport had intensified, and difficulties in obtaining provisions had increased. Gross industrial production in 1917 decreased by over 36% of what it had been in 1914. In the autumn, as much as 50% of all enterprises in the Urals, the Donbas, and other industrial centers were closed down, leading to mass unemployment. At the same time, the cost of living increased sharply. Real wages fell to about 50% of what they had been in 1913. By October 1917, Russia's national debt had risen to 50 billion rubles. Of this, debts to foreign governments constituted more than 11 billion rubles. The country faced the threat of financial bankruptcy.
Throughout June, July, and August 1917, it was common to hear working-class Russians speak about their lack of confidence in the Provisional Government. Factory workers around Russia felt unhappy with the growing shortages of food, supplies, and other materials. They blamed their managers or foremen and would even attack them in the factories. The workers blamed many rich and influential individuals for the overall shortage of food and poor living conditions. Workers saw these rich and powerful individuals as opponents of the Revolution, and called them "bourgeois", "capitalist", and "imperialist".
In September and October 1917, there were mass strike actions by the Moscow and Petrograd workers, miners in the Donbas, metalworkers in the Urals, oil workers in Baku, textile workers in the Central Industrial Region, and railroad workers on 44 railway lines. In these months alone, more than a million workers took part in strikes. Workers established control over production and distribution in many factories and plants in a social revolution. Workers organized these strikes through factory committees. The factory committees represented the workers and were able to negotiate better working conditions, pay, and hours. Even though workplace conditions may have been increasing in quality, the overall quality of life for workers was not improving. There were still shortages of food and the increased wages workers had obtained did little to provide for their families.
By October 1917, peasant uprisings were common. By autumn, the peasant movement against the landowners had spread to 482 of 624 counties, or 77% of the country. As 1917 progressed, the peasantry increasingly began to lose faith that the land would be distributed to them by the Social Revolutionaries and the Mensheviks. Refusing to continue living as before, they increasingly took measures into their own hands, as can be seen by the increase in the number and militancy of the peasants' actions. From the beginning of September to the October Revolution, there were over a third as many peasant actions as there had been since March. Over 42% of all the cases of destruction (usually burning down and seizing property from the landlord's estate) recorded between February and October occurred in October. While the uprisings varied in severity, complete uprisings and seizures of the land were not uncommon. Less robust forms of protest included marches on landowner manors and government offices, as well as withholding and storing grains rather than selling them. When the Provisional Government sent punitive detachments, it only enraged the peasants. In September, the garrisons in Petrograd, Moscow, and other cities, the Northern and Western fronts, and the sailors of the Baltic Fleet declared through their elected representative body Tsentrobalt that they did not recognize the authority of the Provisional Government and would not carry out any of its commands.
Soldiers' wives were key players in the unrest in the villages. From 1914 to 1917, almost 50% of healthy men were sent to war, and many were killed on the front, leaving many women as heads of household. When government allowances were late and insufficient to match the rising costs of goods, soldiers' wives sent masses of appeals to the government, which went largely unanswered. Frustration resulted, and these women were influential in inciting "subsistence riots", also referred to as "hunger riots", "pogroms", or "baba riots". In these riots, citizens seized food and resources from shop owners, whom they believed to be charging unfair prices. Upon police intervention, protesters responded with "rakes, sticks, rocks, and fists."
In a diplomatic note of 1 May, the minister of foreign affairs, Pavel Milyukov, expressed the Provisional Government's desire to continue the war against the Central Powers "to a victorious conclusion", arousing broad indignation. On 1–4 May, about 100,000 workers and soldiers of Petrograd, and, after them, the workers and soldiers of other cities, led by the Bolsheviks, demonstrated under banners reading "Down with the war!" and "All power to the soviets!" The mass demonstrations resulted in a crisis for the Provisional Government. 1 July saw more demonstrations, as about 500,000 workers and soldiers in Petrograd demonstrated, again demanding "all power to the soviets," "down with the war," and "down with the ten capitalist ministers." The Provisional Government opened an offensive against the Central Powers on 1 July, which soon collapsed. The news of the offensive's failure intensified the struggle of the workers and the soldiers. A new crisis in the Provisional Government began on 15 July.
On 16 July, spontaneous demonstrations of workers and soldiers began in Petrograd, demanding that power be turned over to the soviets. The Central Committee of the Russian Social Democratic Labour Party provided leadership to the spontaneous movements. On 17 July, over 500,000 people participated in what was intended to be a peaceful demonstration in Petrograd, the so-called July Days. The Provisional Government, with the support of Socialist-Revolutionary Party-Menshevik leaders of the All-Russian Executive Committee of the Soviets, ordered an armed attack against the demonstrators, killing hundreds.
A period of repression followed. On 5–6 July, attacks were made on the editorial offices and printing presses of "Pravda" and on the Palace of Kshesinskaya, where the Central Committee and the Petrograd Committee of the Bolsheviks were located. On 7 July, the government ordered the arrest and trial of Vladimir Lenin, who was forced to go underground, as he had done under the Tsarist regime. Bolsheviks were arrested, workers were disarmed, and revolutionary military units in Petrograd were disbanded or sent to the war front. On 12 July, the Provisional Government published a law introducing the death penalty at the front. The second coalition government was formed on 24 July, chaired by Alexander Kerensky.
In response to a Bolshevik appeal, Moscow's working class began a protest strike of 400,000 workers. They were supported by strikes and protest rallies by workers in Kiev, Kharkov, Nizhny Novgorod, Ekaterinburg, and other cities.
In what became known as the Kornilov affair, General Lavr Kornilov, who had been Commander-in-Chief since 18 July, with Kerensky's agreement directed an army under Aleksandr Krymov to march toward Petrograd to restore order. Details remain sketchy, but Kerensky appeared to become frightened by the possibility that the army would stage a coup, and reversed the order. By contrast, historian Richard Pipes has argued that the episode was engineered by Kerensky. On 27 August, feeling betrayed by the government, Kornilov pushed on towards Petrograd. With few troops to spare at the front, Kerensky turned to the Petrograd Soviet for help. Bolsheviks, Mensheviks, and Socialist Revolutionaries confronted the army and convinced them to stand down. The Bolsheviks' influence over railroad and telegraph workers also proved vital in stopping the movement of troops. Right-wingers felt betrayed, and the left-wing was resurgent.
With Kornilov defeated, the Bolsheviks' popularity in the soviets grew significantly, both in the central and local areas. On 31 August, the Petrograd Soviet of Workers and Soldiers Deputies—and, on 5 September, the Moscow Soviet Workers Deputies—adopted the Bolshevik resolutions on the question of power. The Bolsheviks won a majority in the soviets of Briansk, Samara, Saratov, Tsaritsyn, Minsk, Kiev, Tashkent, and other cities.
Vladimir Lenin, who had been living in exile in Switzerland, with other dissidents organized a plan to negotiate a passage for them through Germany, with whom Russia was then at war. Recognizing that these dissidents could cause problems for their Russian enemies, the German government agreed to permit 32 Russian citizens, among them Lenin and his wife, to travel in a sealed train carriage through their territory. According to "Deutsche Welle":
On November 7, 1917, a coup d'état went down in history as the October Revolution. The interim government was toppled, the Soviets seized power, and Russia later terminated the Triple Entente military alliance with France and Britain. For Russia, it was effectively the end of the war. Kaiser Wilhelm II had spent around half a billion euros ($582 million) in today's money to weaken his wartime enemy.
On 10 October 1917 (O.S.; 23 October, N.S.), the Bolsheviks' Central Committee voted 10–2 for a resolution saying that "an armed uprising is inevitable, and that the time for it is fully ripe." At the Committee meeting, Lenin discussed how the people of Russia had waited long enough for "an armed uprising," and it was the Bolsheviks' time to take power. Lenin expressed his confidence in the success of the planned insurrection. His confidence stemmed from months of Bolshevik buildup of power and successful elections to different committees and councils in major cities such as Petrograd and Moscow.
The Bolsheviks created a revolutionary military committee within the Petrograd soviet, led by the soviet's president, Trotsky. The committee included armed workers, sailors, and soldiers, and assured the support or neutrality of the capital's garrison. The committee methodically planned to occupy strategic locations through the city, almost without concealing their preparations: the Provisional Government's president Kerensky was himself aware of them; and some details, leaked by Kamenev and Zinoviev, were published in newspapers.
In the early morning of 24 October (O.S.; 6 November N.S.), a group of soldiers loyal to Kerensky's government marched on the printing house of the Bolshevik newspaper, "Rabochiy put" ("Worker's Path"), seizing and destroying printing equipment and thousands of newspapers. Shortly thereafter, the government announced the immediate closure of not only "Rabochiy put" but also the left-wing "Soldat", as well as the far-right newspapers "Zhivoe slovo" and "Novaia Rus". The editors and contributors of these newspapers were seen to be calling for insurrection and were to be prosecuted on criminal charges.
In response, at 9a.m. the Bolshevik Military-Revolutionary Committee issued a statement denouncing the government's actions. At 10a.m., Bolshevik-aligned soldiers successfully retook the "Rabochiy put" printing house. Kerensky responded at approximately 3p.m. that afternoon by ordering the raising of all but one of Petrograd's bridges, a tactic used by the government several months earlier during the July Days. What followed was a series of sporadic clashes over control of the bridges, between Red Guard militias aligned with the Military-Revolutionary Committee and military units still loyal to the government. At approximately 5p.m. the Military-Revolutionary Committee seized the Central Telegraph of Petrograd, giving the Bolsheviks control over communications through the city.
On 25 October (O.S.; 7 November, N.S.) 1917, the Bolsheviks led their forces in the uprising in Petrograd (now St. Petersburg, then capital of Russia) against the Provisional Government. The event coincided with the arrival of a pro-Bolshevik flotilla—consisting primarily of five destroyers and their crews, as well as marines—in Petrograd harbor. At Kronstadt, sailors announced their allegiance to the Bolshevik insurrection. In the early morning, from its heavily guarded and picketed headquarters in Smolny Palace, the Military-Revolutionary Committee designated the last of the locations to be assaulted or seized. The Red Guards systematically captured major government facilities, key communication installations, and vantage points with little opposition. The Petrograd Garrison and most of the city's military units joined the insurrection against the Provisional Government. The insurrection was timed and organized to hand state power to the Second All-Russian Congress of Soviets of Workers' and Soldiers' Deputies, which began on this day.
Kerensky and the Provisional Government were virtually helpless to offer significant resistance. Railways and railway stations had been controlled by Soviet workers and soldiers for days, making rail travel to and from Petrograd impossible for Provisional Government officials. The Provisional Government was also unable to locate any serviceable vehicles. On the morning of the insurrection, Kerensky desperately searched for a means of reaching military forces he hoped would be friendly to the Provisional Government outside the city and ultimately borrowed a Renault car from the American embassy, which he drove from the Winter Palace, along with a Pierce Arrow. Kerensky was able to evade the pickets going up around the palace and to drive to meet approaching soldiers.
As Kerensky left Petrograd, Lenin wrote a proclamation "To the Citizens of Russia", stating that the Provisional Government had been overthrown by the Military-Revolutionary Committee. The proclamation was sent by telegraph throughout Russia, even as the pro-Soviet soldiers were seizing important control centers throughout the city. One of Lenin's intentions was to present members of the Soviet congress, who would assemble that afternoon, with a "fait accompli" and thus forestall further debate on the wisdom or legitimacy of taking power.
A final assault against the Winter Palace, defended by 3,000 cadets, officers, cossacks, and female soldiers, was not vigorously resisted. The Bolsheviks delayed the assault because they could not find functioning artillery, and they acted with restraint to avoid needless violence. At 6:15 p.m., a large group of artillery cadets abandoned the palace, taking their artillery with them. At 8:00 p.m., 200 cossacks left the palace and returned to their barracks.
While the cabinet of the provisional government within the palace debated what action to take, the Bolsheviks issued an ultimatum to surrender. Workers and soldiers occupied the last of the telegraph stations, cutting off the cabinet's communications with loyal military forces outside the city. As the night progressed, crowds of insurgents surrounded the palace, and many infiltrated it. At 9:45 p.m., the cruiser "Aurora" fired a blank shot from the harbor. Some of the revolutionaries entered the palace at 10:25 p.m., and there was a mass entry 3 hours later.
By 2:10a.m. on 26 October, Bolshevik forces had gained control. The Cadets and the 140 volunteers of the Women's Battalion surrendered rather than resist the 40,000 strong attacking force. After sporadic gunfire throughout the building, the cabinet of the Provisional Government surrendered, and were imprisoned in Peter and Paul Fortress. The only member who was not arrested was Kerensky himself, who had already left the palace.
With the Petrograd Soviet now in control of government, garrison, and proletariat, the Second All-Russian Congress of Soviets held its opening session that day, while Trotsky dismissed the opposing Mensheviks and the Socialist Revolutionaries (SR) from the Congress.
Some sources contend that, as the leader of Tsentrobalt, Pavlo Dybenko played a crucial role in the revolt and that the ten warships that arrived at the city with ten thousand Baltic Fleet mariners were the force that took power in Petrograd and put down the Provisional Government. The same mariners then dispersed by force the elected parliament of Russia and used machine-gun fire against demonstrators in Petrograd, killing about 100 demonstrators and wounding several hundred. Dybenko in his memoirs mentioned this event as "several shots in the air". These claims are disputed by various sources, such as Louise Bryant, who claims that news outlets in the West at the time reported that the unfortunate loss of life occurred in Moscow, not Petrograd, and that the number was much smaller than suggested above. As for the "several shots in the air", there is little evidence suggesting otherwise.
While the seizure of the Winter Palace happened almost without resistance, Soviet historians and officials later tended to depict the event in dramatic and heroic terms. The historical reenactment titled "The Storming of the Winter Palace" was staged in 1920. This reenactment, watched by 100,000 spectators, provided the model for official films made later, which showed fierce fighting during the storming of the Winter Palace, although, in reality, the Bolshevik insurgents had faced little opposition.
Later stories of the heroic "Storming of the Winter Palace" and "defense of the Winter Palace" were propaganda by Bolshevik publicists. Grandiose paintings depicting the "Women's Battalion" and photo stills taken from Sergei Eisenstein's staged film depicting the "politically correct" version of the October events in Petrograd came to be taken as truth.
The Second Congress of Soviets consisted of 670 elected delegates: 300 were Bolshevik and nearly 100 were Left Socialist-Revolutionaries, who also supported the overthrow of the Alexander Kerensky government. When the fall of the Winter Palace was announced, the Congress adopted a decree transferring power to the Soviets of Workers', Soldiers' and Peasants' Deputies, thus ratifying the Revolution.
The transfer of power was not without disagreement. The center and right wings of the Socialist Revolutionaries, as well as the Mensheviks, believed that Lenin and the Bolsheviks had illegally seized power and they walked out before the resolution was passed. As they exited, they were taunted by Trotsky who told them "You are pitiful isolated individuals; you are bankrupts; your role is played out. Go where you belong from now on — into the dustbin of history!"
The following day, 26 October, the Congress elected a new cabinet of Bolsheviks, pending the convocation of a Constituent Assembly. This new Soviet government was known as the Council (Soviet) of People's Commissars (Sovnarkom), with Lenin as its leader. Lenin allegedly approved of the name, reporting that it "smells of revolution". The cabinet quickly passed the Decree on Peace and the Decree on Land. This new government was also officially called "provisional" until the Assembly was dissolved.
That same day, posters were pinned on walls and fences by the right-wing Socialist Revolutionaries, describing the takeover as a "crime against the motherland" and "revolution"; this signaled the next wave of anti-Bolshevik sentiment. The next day, the Mensheviks seized power in Georgia and declared it an independent republic; the Don Cossacks also claimed control of their government. The Bolshevik strongholds were in the cities, particularly Petrograd, with support much more mixed in rural areas. The peasant-dominated Left SR party was in coalition with the Bolsheviks. There were reports that the Provisional Government had not conceded defeat and were meeting with the army at the Front.
Anti-Bolshevik sentiment continued to grow as posters and newspapers started criticizing the actions of the Bolsheviks and repudiating their authority. The Executive Committee of Peasants' Soviets "[refuted] with indignation all participation of the organized peasantry in this criminal violation of the will of the working class". This eventually developed into major counter-revolutionary action, as on 30 October (O.S.; 12 November, N.S.), when cossacks, welcomed by church bells, entered Tsarskoye Selo on the outskirts of Petrograd, with Kerensky riding on a white horse. Kerensky gave the rifle garrison an ultimatum to lay down its weapons, which was promptly refused. The garrison was then fired upon by Kerensky's cossacks, resulting in 8 deaths. This turned soldiers in Petrograd against Kerensky, whom they now associated with the Tsarist regime. Kerensky's failure to assume authority over troops was described by John Reed as a "fatal blunder" that signaled the final end of his government. Over the following days, the battle against the anti-Bolsheviks continued. The Red Guard fought against cossacks at Tsarskoye Selo, with the cossacks breaking rank and fleeing, leaving their artillery behind. On 31 October 1917 (13 November, N.S.), the Bolsheviks gained control of Moscow after a week of bitter street fighting. Artillery had been freely used, with an estimated 700 casualties. However, there was continued support for Kerensky in some of the provinces.
After the fall of Moscow, there was only minor public anti-Bolshevik sentiment, such as the newspaper "Novaya Zhizn", which criticized the Bolsheviks' lack of manpower and organization in running their party, let alone a government. Lenin confidently claimed that there was "not a shadow of hesitation in the masses of Petrograd, Moscow and the rest of Russia" in accepting Bolshevik rule.
On 10 November 1917 (23 November, N.S.), the government applied the term "citizens of the Russian Republic" to Russians, whom they sought to make equal in all possible respects, by the nullification of all "legal designations of civil inequality, such as estates, titles, and ranks."
On 12 November (25 November, N.S.), a Constituent Assembly was elected. In these elections, 26 mandatory delegates were proposed by the Bolshevik Central Committee, and 58 were proposed by the Socialist Revolutionaries. The outcome of the election gave the majority to the Socialist Revolutionary Party, which no longer existed as a full party by that time, as the Left SR Party was in coalition with the Bolsheviks. The Bolsheviks dissolved the Constituent Assembly in January 1918, when it came into conflict with the Soviets.
On 16 December 1917 (29 December, N.S.), the government ventured to eliminate hierarchy in the army, removing all titles, ranks, and uniform decorations. The tradition of saluting was also eliminated.
On 20 December 1917 (2 January 1918, N.S.), the Cheka was created by Lenin's decree. These were the beginnings of the Bolsheviks' consolidation of power over their political opponents. The Red Terror began in September 1918, following a failed assassination attempt on Lenin. The French Jacobin Terror was an example for the Soviet Bolsheviks. Trotsky had compared Lenin to Maximilien Robespierre as early as 1904.
The Decree on Land ratified the actions of the peasants who throughout Russia had taken private land and redistributed it among themselves. The Bolsheviks viewed themselves as representing an alliance of workers and peasants signified by the Hammer and Sickle on the flag and the coat of arms of the Soviet Union.
Other decrees:
Not all private property was nationalized by the government in the days, weeks, and months that followed the revolution of 25 October. The government of the Bolshevik party and Left SRs did not support the workers taking over large corporations and collectively organizing the economy. As chairman of the government, Lenin negotiated with factions of the upper bourgeoisie, so that the bourgeoisie would manage the corporations according to orders from the new government. This failed utterly, because it presupposed that the masses would accept class cooperation in a revolutionary situation. In this context, the Bolshevik party understood "workers' control" as checking and supervision by the employees to ensure that orders from the government were followed. Some factories continued in private hands because the masses either had no managerial competence or hesitated to support the Bolshevik party. Other factories were taken over by the employees, and some by the government, after pressure from below or by governmental initiative. There was a lack of class consciousness among the masses, who put their faith in an authoritarian political party. Only a minority of the working-class population fought to establish democratic rule over the main capitalist factories.
The Bolshevik party opposed the masses ruling the economy from below, just as it opposed political institutions being ruled from below. Through democratic elections to the soviets in autumn 1917, the Bolshevik party built its power to control the trade unions, which became state institutions. Later the same year, the factory committees were subordinated to the trade unions. From this base it was not difficult to establish one-man rule over the factories. One administrative and one technical manager had daily control, the technical manager having the last word on the economy, independent of what the employees wanted, based on orders from higher up in the state. The system of one-man management was fiercely defended by Lenin at a trade union congress in spring 1918, where he said that if the party were not in charge, the whole point of a party would cease to exist, and thereby the revolution itself would cease.
A system of appointment from above was established step by step. Local soviets resisting this policy were either met with armed Cheka troops and forced to submit, or the soviets were denied access to ration cards for food and fuel. The Bolshevik party blocked democratic elections to the soviets, factory committees, the trade unions, and other institutions, which made this transfer of power easier.
The October Revolution enabled a political revolution by taking down the old regime but failed to establish a democratic system. That the economy was not transferred to the masses reflected what happened in the political institutions. The political elite saw itself as crucial to world revolution but blocked power being exerted from below. When that same elite also got control of the economy, answering only to itself, it transformed itself into a ruling state capitalist class. Later the Bolshevik party went further by placing the working class under martial law to force obedience to the Sovnarkom. This development led to a totalitarian state in which Joseph Stalin had even greater power than Lenin and Trotsky.
Bolshevik-led attempts to gain power in other parts of the Russian Empire were largely successful in Russia proper—although the fighting in Moscow lasted for two weeks—but they were less successful in ethnically non-Russian parts of the Empire, which had been clamoring for independence since the February Revolution. For example, the Ukrainian Rada, which had declared autonomy on 23 June 1917, created the Ukrainian People's Republic on 20 November, which was supported by the Ukrainian Congress of Soviets. This led to an armed conflict with the Bolshevik government in Petrograd and, eventually, a Ukrainian declaration of independence from Russia on 25 January 1918. In Estonia, two rival governments emerged: the Estonian Provincial Assembly, established in April 1917, proclaimed itself the supreme legal authority of Estonia on 28 November 1917 and issued the Declaration of Independence on 24 February 1918; but Soviet Russia recognized the Executive Committee of the Soviets of Estonia as the legal authority in the province, although the Soviets in Estonia controlled only the capital and a few other major towns.
After the success of the October Revolution transformed the Russian state into a soviet republic, a coalition of anti-Bolshevik groups attempted to unseat the new government in the Russian Civil War from 1918 to 1922. In an attempt to intervene in the civil war after the Bolsheviks' separate peace with the Central Powers, the Allied Powers (United Kingdom, France, Italy, United States, and Japan) occupied parts of the Soviet Union for over two years before finally withdrawing. The United States did not recognize the new Russian government until 1933. The European powers recognized the Soviet Union in the early 1920s and began to engage in business with it after the New Economic Policy (NEP) was implemented.
Historical research into few events has been as influenced by the researcher's political outlook as that of the October Revolution. The historiography of the Revolution generally divides into three camps: Soviet-Marxist, Western-Totalitarian, and Revisionist.
Soviet historiography of the October Revolution is intertwined with Soviet historical development. Many of the initial Soviet interpreters of the Revolution were themselves Bolshevik revolutionaries. After the initial wave of revolutionary narratives, Soviet historians worked within "narrow guidelines" defined by the Soviet government. The rigidity of interpretive possibilities reached its height under Stalin.
Soviet historians of the Revolution interpreted the October Revolution as being about establishing the legitimacy of Marxist ideology and the Bolshevik government. To establish the accuracy of Marxist ideology, Soviet historians generally described the Revolution as the product of class struggle and that it was the supreme event in a world history governed by historical laws. The Bolshevik Party is placed at the center of the Revolution, as it exposes the errors of both the moderate Provisional Government and the spurious "socialist" Mensheviks in the Petrograd Soviet. Guided by Lenin's leadership and his firm grasp of scientific Marxist theory, the Party led the "logically predetermined" events of the October Revolution from beginning to end. The events were, according to these historians, logically predetermined because of the socio-economic development of Russia, where monopolistic industrial capitalism had alienated the masses. In this view, the Bolshevik party took the leading role in organizing these alienated industrial workers, and thereby established the construction of the first socialist state.
Although Soviet historiography of the October Revolution stayed relatively constant until 1991, it did undergo some changes. Following Stalin's death, historians such as E. N. Burdzhalov and P. V. Volobuev published historical research that deviated significantly from the party line in refining the doctrine that the Bolshevik victory "was predetermined by the state of Russia's socio-economic development". These historians, who constituted the "New Directions Group", posited that the complex nature of the October Revolution "could only be explained by a multi-causal analysis, not by recourse to the mono-causality of monopoly capitalism". For them, the central actor is still the Bolshevik party, but this party triumphed "because it alone could solve the preponderance of 'general democratic' tasks the country faced" (such as the struggle for peace and the exploitation of landlords).
During the late Soviet period, the opening of select Soviet archives during glasnost sparked innovative research that broke away from some aspects of Marxism–Leninism, though the key features of the orthodox Soviet view remained intact.
Following the turn of the 21st century, some Russian historians began to implement an "anthropological turn" in their historiographical analysis of the Russian Revolution. This method of analysis focuses on the average person's experience of day-to-day life during the revolution, and pulls the analytical focus away from larger events, notable revolutionaries, and overarching claims about party views. In 2006, S. V. Iarov employed this methodology when he focused on citizen adjustment to the new Soviet system. Iarov explored the dwindling labor protests, evolving forms of debate, and varying forms of politicization as a result of the new Soviet rule from 1917 to 1920. In 2010, O. S. Nagornaia took interest in the personal experiences of Russian prisoners-of-war taken by Germany, examining Russian soldiers and officers' ability to cooperate and implement varying degrees of autocracy despite being divided by class, political views, and race. Other analyses following this "anthropological turn" have explored texts from soldiers and how they used personal war-experiences to further their political goals, as well as how individual life-structure and psychology may have shaped major decisions in the civil war that followed the revolution.
During the Cold War, Western historiography of the October Revolution developed in direct response to the assertions of the Soviet view. As a result, Western historians exposed what they believed were flaws in the Soviet view, thereby undermining the Bolsheviks' original legitimacy, as well as the precepts of Marxism.
These Western historians described the revolution as the result of a chain of contingent accidents. Examples of these accidental and contingent factors they say precipitated the Revolution included World War I's timing, chance, and the poor leadership of Tsar Nicholas II as well as that of liberal and moderate socialists. According to Western historians, it was not popular support, but rather a manipulation of the masses, ruthlessness, and the party discipline of the Bolsheviks that enabled their triumph. For these historians, the Bolsheviks' defeat in the Constituent Assembly elections of November–December 1917 demonstrated popular opposition to the Bolsheviks' coup, as did the scale and breadth of the Civil War.
Western historians saw the organization of the Bolshevik party as proto-totalitarian. Their interpretation of the October Revolution as a violent coup organized by a proto-totalitarian party reinforced for them the idea that totalitarianism was an inherent part of Soviet history. The democratic promise of the February Revolution came to an end with the forced dissolution of the Constituent Assembly. Thus, Stalinist totalitarianism developed as a natural progression from Leninism and the Bolshevik party's tactics and organization.
The dissolution of the Soviet Union affected historical interpretations of the October Revolution. Since 1991, increasing access to large amounts of Soviet archival materials has made it possible to re‑examine the October Revolution. Though both Western and Russian historians now have access to many of these archives, the effect of the dissolution of the USSR can be seen most clearly in the work of the latter. While the disintegration essentially helped solidify the Western and Revisionist views, post-USSR Russian historians largely repudiated the former Soviet historical interpretation of the Revolution. As Stephen Kotkin argues, 1991 prompted "a return to political history and the apparent resurrection of totalitarianism, the interpretive view that, in different ways…revisionists sought to bury".
The October Revolution marks the inception of the first communist government in Russia, and thus the first large-scale socialist state in world history. After this, Russia became the Russian SFSR and, later, part of the USSR, which dissolved in late 1991.
The October Revolution also made the ideology of communism influential on a global scale in the 20th century. Communist parties would start to form in certain countries after 1917.
"Ten Days That Shook the World", a book written by American journalist John Reed and first published in 1919, gives a firsthand exposition of the events. Reed died in 1920, shortly after the book was finished.
Dmitri Shostakovich wrote his "Symphony No. 2 in B major", Op. 14, and subtitled it "To October", for the 10th anniversary of the October Revolution. The choral finale of the work, "To October", is set to a text by Alexander Bezymensky, which praises Lenin and the revolution. The "Symphony No. 2" was first performed on 5 November 1927 by the Leningrad Philharmonic Orchestra and the Academy Capella Choir under the direction of Nikolai Malko.
Sergei Eisenstein and Grigori Aleksandrov's film "October: Ten Days That Shook the World", first released on 20 January 1928 in the USSR and on 2 November 1928 in New York City, describes and glorifies the revolution, having been commissioned to commemorate the event.
The term "Red October" (Красный Октябрь, "Krasnyy Oktyabr") has been used to signify the October Revolution. "Red October" was given to a steel factory that was made notable by the Battle of Stalingrad, a Moscow sweets factory that is well known in Russia, and a fictional Soviet submarine.
7 November, the anniversary of the October Revolution according to the Gregorian Calendar, was the official national day of the Soviet Union from 1918 onward and still is a public holiday in Belarus and the breakaway territory of Transnistria. | https://en.wikipedia.org/wiki?curid=22661 |
Opole Voivodeship
Opole Voivodeship, or Opole Province, is the smallest and least populated voivodeship (province) of Poland. The province's name derives from that of the region's capital and largest city, Opole. It is part of Upper Silesia. A relatively large German minority, with representatives in the Sejm, lives in the voivodeship, and the German language is co-official in 28 communes.
Opole Voivodeship is bordered by Lower Silesian Voivodeship to the west, Greater Poland and Łódź Voivodeships to the north, Silesian Voivodeship to the east, and the Czech Republic (Olomouc Region and Moravian-Silesian Region) to the south.
Opole Province's geographic location, economic potential, and its population's level of education make it an attractive business partner for other Polish regions (especially Lower Silesian and Silesian Voivodeships) and for foreign investors. Formed in 1997, the Praděd/Pradziad Euroregion has facilitated economic, cultural and tourist exchanges between the border areas of Poland and the Czech Republic.
Opole Voivodeship was created on January 1, 1999, out of the former Opole Voivodeship and parts of Częstochowa Voivodeship, pursuant to the Polish local government reforms adopted in 1998.
Originally, the government, advised by prominent historians, had wanted to disestablish Opolskie and partition its territory between the more historically Polish regions of Lower Silesia and Silesian Voivodeship (eastern Upper Silesia and western Małopolska). The plan was that Brzeg and Namysłów, as the western part of the region, were to be transferred to Lower Silesia, while the rest was to become, along with a part of the Częstochowa Voivodeship, an integral part of the new 'Silesian' region. However, the plans resulted in an outcry from the German minority population of Opole Voivodeship, who feared that should their region be abolished, they would lose all hope of regional representation (in the proposed Silesian region, they would have formed a very small minority among a great number of ethnic Poles). To the surprise of many of the ethnic Germans in Opole, however, the local Polish Silesian population and groups of ethnic Poles also rose up to oppose the planned reforms; this came about as a result of an overwhelming feeling of attachment to the voivodeships that were scheduled to be 'redrawn', as well as a fear of 'alienation' should they find themselves residing in a new, unfamiliar region.
The solution came in late 1999, when Olesno was, after 24 years apart, finally reunited with the Opole Voivodeship to form the new legally defined region. A historic moment came in 2006 when the town of Radłów changed its local laws to make German, alongside Polish, the district's second official language; thus becoming the first town in the region to achieve such a feat.
The voivodeship lies in southwestern Poland, the major part on the Silesian Lowland. To the east, the region touches upon the Silesian Upland, with the famous Saint Anne Mountain; the Opawskie Mountains, part of the Sudetes range, lie to the southwest. The Oder River cuts across the middle of the voivodeship. The northern part of the voivodeship, along the Mała Panew River, is densely forested, while the southern part consists of arable land.
The region has the warmest climate in the country.
Protected areas in Opole Voivodeship include the following three areas designated as Landscape Parks:
Opole Voivodeship is divided into 12 counties (powiats): 1 city county and 11 land counties. These are further divided into 71 gminas.
The counties are listed in the following table (ordering is by decreasing population).
The voivodeship contains 36 cities and towns. These are listed below in descending order of population (as of 2019):
The Opole Voivodeship is the smallest region in the administrative makeup of the country in terms of both area and population.
About 15% of the one million inhabitants of this voivodeship are ethnic Germans, constituting 90% of all ethnic Germans in Poland. As a result, many areas are officially bilingual, and the German language and culture play a significant role in education in the region. Ethnic Germans first came to this region during the Late Middle Ages. The area was once part of the Prussian province of Silesia.
The Gross domestic product (GDP) of the province was 10.1 billion euros in 2018, accounting for 2.0% of Polish economic output. GDP per capita adjusted for purchasing power was 17,000 euros or 56% of the EU27 average in the same year. The GDP per employee was 66% of the EU average.
The Opole Voivodeship is an industrial as well as an agricultural region. With respect to mineral resources, of major importance are deposits of raw materials for building: limestone (Strzelce Opolskie), marl (near Opole), marble, and basalt. The favourable climate, fertile soils, and high farming culture contribute to the development of agriculture, which is among the most productive in the country.
A total of nineteen industries are represented in the voivodeship. The most important are the cement and lime, furniture, food, car manufacturing, and chemical industries. In 1997, the biggest production growth in the area was in companies producing wood and wood products, electrical equipment, machinery and appliances, as well as cellulose and paper products. In 1997, the top company in the region was Zakłady Azotowe S.A. in Kędzierzyn-Koźle, whose income was over PLN 860 million. The voivodeship's economy consists of more than 53,000 businesses, mostly small and medium-sized, employing over 332,000 people. Manufacturing companies employ over 89,000 people; 95.7% of all the region's businesses operate in the private sector.
The Opole Voivodeship is a green region with three large lakes: Turawskie, Nyskie, and Otmuchów (the latter two are connected). The Opawskie Mountains are a popular tourist destination. The region also includes the castle in Brzeg, built during the reign of the Piast dynasty and regarded as a pearl of the Silesian Renaissance, the Franciscan monastery on top of Saint Anne Mountain, as well as the medieval defence fortifications in Paczków (referred to as the Upper Silesian Carcassonne).
According to the Central Statistical Office of Poland, Opole Voivodeship is most frequently visited by international tourists from countries located in Europe (94.6%). These were followed by tourists from Asia, comprising 2.4% of the total international tourist figure, and by those from North America at 1.8%. The general composition of international tourists visiting the Opole Voivodeship remains unchanged, with 46.2% of tourists coming from Germany.
In 2015, a total of c. 90,800 overnight stays by international tourists were recorded, a figure making up 12.4% of all overnight stays in Opole Voivodeship. The majority (44.7%) of international overnight stays were hosted in the city of Opole, followed by Kędzierzyn-Koźle County (9.9%) and Nysa County (9.4%).
The A4 motorway, a major transport route from Germany to Ukraine, runs through the region. The region has four border crossings, and direct rail connections to all important Polish cities, as well as to Frankfurt, Munich, Budapest, Kiev, and the Baltic ports.
There are three state-run universities in the region: the Opole University, the Opole University of Technology, and the Public Higher Medical Professional School in Opole. All of them are based in the voivodeship's capital. Among the region's private schools, the Opole School of Management and Administration has been certified as a degree-granting institution by the Ministry of National Education.
An earlier Opole Voivodeship existed as a unit of administrative division and local government in Poland between 1975 and 1998.
A still earlier administrative region of the same name in the People's Republic of Poland (1950–1975) was created as a result of the partition of Katowice Voivodeship in 1950. | https://en.wikipedia.org/wiki?curid=22665 |
Old Norse
Old Norse, Old Nordic, or Old Scandinavian was a North Germanic language that was spoken by inhabitants of Scandinavia and their overseas settlements from about the 7th to the 15th centuries.
The Proto-Norse language developed into Old Norse by the 8th century, and Old Norse began to develop into the modern North Germanic languages in the mid-to-late 14th century, ending the language phase known as Old Norse. These dates, however, are not absolute, since written Old Norse is found well into the 15th century.
Old Norse was divided into three dialects: Old West Norse (often referred to as "Old Norse"), Old East Norse, and Old Gutnish. Old West and East Norse formed a dialect continuum, with no clear geographical boundary between them. For example, Old East Norse traits were found in eastern Norway, although Old Norwegian is classified as Old West Norse, and Old West Norse traits were found in western Sweden. Most speakers spoke Old East Norse in what is present-day Denmark and Sweden. Old Gutnish, the more obscure dialectal branch, is sometimes included in the Old East Norse dialect due to geographical associations. It developed its own unique features and shared in changes with the other two branches.
The 12th-century Icelandic "Gray Goose Laws" state that Swedes, Norwegians, Icelanders, and Danes spoke the same language, "dǫnsk tunga" ("Danish tongue"; speakers of Old East Norse would have said "dansk tunga"). Another term was "norrœnt mál" ("northern speech"). Today Old Norse has developed into the modern North Germanic languages Icelandic, Faroese, Norwegian, Danish, and Swedish, of which Norwegian, Danish and Swedish retain considerable mutual intelligibility.
Old Icelandic was very close to Old Norwegian, and together they formed the Old West Norse dialect, which was also spoken in settlements in Ireland, Scotland, the Isle of Man and northwest England, and in Norse settlements in Normandy. The Old East Norse dialect was spoken in Denmark, Sweden, settlements in Kievan Rus', eastern England, and Danish settlements in Normandy. The Old Gutnish dialect was spoken in Gotland and in various settlements in the East. In the 11th century, Old Norse was the most widely spoken European language, ranging from Vinland in the West to the Volga River in the East. In Kievan Rus', it survived the longest in Veliky Novgorod, probably lasting into the 13th century there. The age of the Swedish-speaking population of Finland is strongly contested, but at latest by the time of the Second Swedish Crusade in the 13th century, Swedish settlement had spread the language into the region.
The modern descendants of the Old West Norse dialect are the West Scandinavian languages of Icelandic, Faroese, Norwegian and the extinct Norn language of Orkney and Shetland; the descendants of the Old East Norse dialect are the East Scandinavian languages of Danish and Swedish. Norwegian is descended from Old West Norse, but over the centuries it has been heavily influenced by East Norse, particularly during the Denmark–Norway union.
Among these, the grammars of Icelandic and Faroese have changed the least from Old Norse in the last thousand years. In contrast, the pronunciation of both Icelandic and Faroese has changed considerably from Old Norse. With Danish rule of the Faroe Islands, Faroese has also been influenced by Danish. Old Norse also had an influence on English dialects and Lowland Scots, which contain many Old Norse loanwords. It also influenced the development of the Norman language, and through it, to a smaller extent, that of modern French.
Of the modern languages, Icelandic is the closest to Old Norse in grammar and vocabulary. Written modern Icelandic derives from the Old Norse phonemic writing system. Contemporary Icelandic-speakers can read Old Norse, which varies slightly in spelling as well as semantics and word order. However, pronunciation, particularly of the vowel phonemes, has changed at least as much in Icelandic as in the other North Germanic languages.
Faroese retains many similarities but is influenced by Danish, Norwegian, and Gaelic (Scottish and/or Irish). Although Swedish, Danish and Norwegian have diverged the most, they still retain considerable mutual intelligibility. Speakers of modern Swedish, Norwegian and Danish can mostly understand each other without studying their neighboring languages, particularly if speaking slowly. The languages are also sufficiently similar in writing that they can mostly be understood across borders. This could be because these languages have been mutually affected by each other, as well as having a similar development influenced by Middle Low German.
Various other languages, which are not closely related, have been heavily influenced by Norse, particularly the Norman language. Russian, Ukrainian, Belarusian, Lithuanian, Latvian, Finnish and Estonian also have a number of Norse loanwords; the words "Rus" and "Russia", according to one theory, may be named after the Rus' people, a Norse tribe probably from present-day east-central Sweden (see Rus (name)). The current Finnish and Estonian words for Sweden are "Ruotsi" and "Rootsi", respectively.
A number of loanwords have been introduced into the Irish language, many but not all associated with fishing and sailing. A similar influence is found in Scottish Gaelic, with over one hundred loanwords estimated to be in the language, many of them likewise related to fishing and sailing.
The vowel phonemes mostly come in pairs of long and short. The standardized orthography marks the long vowels with an acute accent. In medieval manuscripts, vowel length is often unmarked, but is sometimes indicated with an accent or through gemination.
Old Norse had nasalized versions of all ten vowel places. These occurred as allophones of the vowels before nasal consonants and in places where a nasal had followed the vowel in an older form of the word, before being absorbed into a neighboring sound. If the nasal was absorbed by a stressed vowel, it would also lengthen the vowel. These nasalizations also occurred in the other Germanic languages, but were not retained for long. They were noted in the First Grammatical Treatise, and otherwise might have remained unknown. The First Grammarian marked these with a dot above the letter. This notation did not catch on and soon became obsolete. Nasal and oral vowels probably merged around the 11th century in most of Old East Norse. However, the distinction still holds in Dalecarlian dialects.
Note: the open or open-mid vowels may be transcribed differently in different sources.
Sometime around the 13th century, /ɔ/ (spelled "ǫ") merged with /ø/ or /o/ in most dialects except Old Danish, and Icelandic, where /ɔ/ ("ǫ") merged with /ø/. This can be determined by their distinction within the 12th-century First Grammatical Treatise but not within the early 13th-century Prose Edda. The nasal vowels, also noted in the First Grammatical Treatise, are assumed to have been lost in most dialects by this time (but notably they are retained in Elfdalian). See Old Icelandic for the mergers of /øː/ (spelled "œ") with /æː/ (spelled "æ") and /ɛ/ (spelled "ę") with /e/ ("e").
Old Norse had three diphthong phonemes: /ɛi/, /ɔu/, and /øy/ (spelled "ei", "au", "ey" respectively). In East Norse these would monophthongize and merge with /eː/ and /øː/, whereas in West Norse and its descendants the diphthongs remained.
Old Norse has six plosive phonemes: /p/ is rare word-initially, and /d/ and /b/ are pronounced as voiced fricative allophones between vowels, except in compound words (e.g. "veðrabati"), a pattern already present in the Proto-Germanic language (e.g. "*b" > [β] between vowels). The phoneme /ɡ/ was pronounced as a plosive [ɡ] after an "n" or another "g" and as [k] before /s/ and /t/. Some accounts have it as a voiced velar fricative [ɣ] in all cases, and others have that realisation only in the middle of words and between vowels (with it otherwise being realised as [ɡ]). The Old East Norse /ʀ/ was an apical consonant whose precise place of articulation is unknown; it is reconstructed as a palatal sibilant. It descended from Proto-Germanic /z/ and eventually developed into /r/, as had already occurred in Old West Norse.
The consonant digraphs "hl", "hr", "hn" occurred word-initially. It is unclear whether they were sequences of two consonants (with the first element realised as [h] or perhaps [x]) or single voiceless sonorants [l̥], [r̥] and [n̥] respectively. In Old Norwegian, Old Danish and later Old Swedish, the groups "hl", "hr", "hn" were reduced to plain "l", "r", "n", which suggests that they had most likely already been pronounced as voiceless sonorants by Old Norse times.
The pronunciation of "hv" is unclear, but it may have been [xʷ] (the Proto-Germanic pronunciation) or the similar phoneme [hʷ]. Unlike the three other digraphs, it was retained much longer in all dialects. Without ever developing into a voiceless sonorant in Icelandic, it instead underwent fortition to a plosive [kv], which suggests that instead of being a voiceless sonorant, it retained a stronger frication.
Unlike Proto-Norse, which was written with the Elder Futhark, runic Old Norse was originally written with the Younger Futhark, which had only 16 letters. Because of the limited number of runes, several runes were used for different sounds, and long and short vowels were not distinguished in writing. Medieval runes came into use some time later.
As for the Latin alphabet, there was no standardized orthography in use in the Middle Ages. A modified version of the letter wynn called vend was used briefly for the sounds /u/, /v/, and /w/. Long vowels were sometimes marked with acutes but also sometimes left unmarked or geminated. The standardized Old Norse spelling was created in the 19th century and is, for the most part, phonemic. The most notable deviation is that the nonphonemic difference between the voiced and the voiceless dental fricative is marked. The oldest texts and runic inscriptions use "þ" exclusively. Long vowels are denoted with acutes. Most other letters are written with the same glyph as the corresponding IPA phoneme.
Primary stress in Old Norse falls on the word stem, so that "hyrjar" would be stressed on its first syllable. In compound words, secondary stress falls on the second stem (e.g. "lærisveinn").
Ablaut patterns are groups of vowels which are swapped, or "ablauted," in the nucleus of a word. Strong verbs ablaut the lemma's nucleus to derive the past forms of the verb. This parallels English conjugation, where, e.g., the nucleus of "sing" becomes "sang" in the past tense and "sung" in the past participle. Some verbs are derived by ablaut, as the present-in-past verbs do by consequence of being derived from the past tense forms of strong verbs.
Umlaut or mutation is an assimilatory process acting on vowels preceding a vowel or semivowel of a different vowel backness. In the case of "i-umlaut" and "ʀ-umlaut", this entails a fronting of back vowels, with retention of lip rounding. In the case of "u-umlaut", this entails labialization of unrounded vowels. Umlaut is phonemic and in many situations grammatically significant as a side effect of losing the Proto-Germanic morphological suffixes whose vowels created the umlaut allophones.
Some /y/, /yː/, /ø/, /øː/, /ɛ/, /ɛː/, /øy/, and all /ɛi/ were obtained by i-umlaut from /u/, /uː/, /o/, /oː/, /a/, /aː/, /au/, and /ai/ respectively. Others were formed via ʀ-umlaut from /u/, /uː/, /a/, /aː/, and /au/.
Some /y/, /yː/, /ø/, /øː/, and all /ɔ/ were obtained by u-umlaut from /i/, /iː/, /e/, /eː/, and /a/, respectively. See Old Icelandic for information on /ɔː/.
OEN often preserves the original value of the vowel directly preceding runic "ʀ" while OWN receives ʀ-umlaut. Compare runic OEN "glaʀ, haʀi, hrauʀ" with OWN "gler, heri" (later "héri"), "hrøyrr/hreyrr" ("glass", "hare", "pile of rocks").
U-umlaut is more common in Old West Norse in both phonemic and allophonic positions, while it only occurs sparsely in post-runic Old East Norse and even in runic Old East Norse. Compare Old West Norse "fǫður" (accusative of "faðir", 'father'), "vǫrðr" (guardian/caretaker), "ǫrn" (eagle), "jǫrð" ('earth', Modern Icelandic: "jörð"), "mjǫlk" ('milk', Modern Icelandic: "mjólk") with Old Swedish "faður", "varðer", "ørn", "jorð", "miolk" and Modern Swedish "fader", "vård", "örn", "jord", "mjölk", with the latter two demonstrating the u-umlaut found in Swedish.
This is still a major difference between Swedish and Faroese and Icelandic today. Plurals of neuters do not have u-umlaut at all in Swedish, but in Faroese and Icelandic they do, for example the Faroese and Icelandic plurals of the word "land", "lond" and "lönd" respectively, in contrast to the Swedish plural "länder" and numerous other examples. That also applies to almost all feminine nouns, for example the largest feminine noun group, the o-stem nouns (except the Swedish noun "jord" mentioned above), and even i-stem nouns and root nouns, such as Old West Norse "mǫrk" ("mörk" in Icelandic) in comparison with Modern and Old Swedish "mark".
Vowel breaking, or fracture, caused a front vowel to be split into a semivowel-vowel sequence before a back vowel in the following syllable. While West Norse only broke "e", East Norse also broke "i". The change was blocked by a "v", "l", or "r" preceding the potentially-broken vowel.
Some /ja/ or /jɔ/ and /jaː/ or /jɔː/ result from breaking of /e/ and /eː/ respectively.
When a noun, pronoun, adjective, or verb has a long vowel or diphthong in the accented syllable and its stem ends in a single "l", "n", or "s", the "r" (or the elder "r"- or "z"-variant "ʀ") in an ending is assimilated. When the accented vowel is short, the ending is dropped.
The nominative of the strong masculine declension and some i-stem feminine nouns uses one such -r (ʀ). "Óðin-r" ("Óðin-ʀ") becomes "Óðinn" instead of "*Óðinr" ("*Óðinʀ").
The verb "blása" 'to blow', has third person present tense blæss for "[he] blows" rather than "*blæsr" ("*blæsʀ"). Similarly, the verb "skína" 'to shine' had present tense third person skínn (rather than "*skínr", "*skínʀ"); while "kala" 'to cool down' had present tense third person kell (rather than "*kelr", "*kelʀ").
The rule is not absolute, with certain counter-examples such as "vinr", which has the synonym "vin", yet retains the unabsorbed version, and "jǫtunn", where assimilation takes place even though the root vowel, "ǫ", is short.
A cluster consisting of a consonant plus "l", "n", "s", or "r" could not take a geminated ending: where "-ʀ" would have followed such a cluster, it was simply dropped. The effect of this shortening can result in the lack of distinction between some forms of the noun. In the case of "vetr", the nominative and accusative singular and plural forms are identical; the nominative singular and the nominative and accusative plural would otherwise have been OWN "*vetrr", OEN "*vintrʀ". The same shortening as in "vetr" also occurs in "lax" = "laks" (as opposed to *"lakss", *"laksʀ"), "botn" (as opposed to *"botnn", *"botnʀ"), and "jarl" (as opposed to *"jarll", *"jarlʀ").
Furthermore, wherever the cluster "rʀ" is expected to exist, such as in the male names "Ragnarr", "Steinarr" (supposedly "*Ragnarʀ", "*Steinarʀ"), the result is apparently always "-rr" rather than "-rʀ" or "-ʀʀ". This is observable in the Runic corpus.
"I/j" adjacent to "i", "e", their u-umlauts, and "æ" was not possible, nor "u/v" adjacent to "u", "o", their i-umlauts, and "ǫ". At the beginning of words, this manifested as a dropping of the initial "j" or "v". Compare ON "orð, úlfr, ár" with English "word, wolf, year". In inflections, this manifested as the dropping of the inflectional vowels. Thus, "klæði" + dat "-i" remains "klæði", and "sjáum" in Icelandic progressed to "sjǫ́um" > "sjǫ́m" > "sjám". The "jj" and "ww" of Proto-Germanic became "ggj" and "ggv" respectively in Old Norse, a change known as Holtzmann's law.
An epenthetic vowel became popular by 1200 in Old Danish, 1250 in Old Swedish and Norwegian, and 1300 in Old Icelandic. An unstressed vowel was used, which varied by dialect. Old Norwegian exhibited all three: /u/ was used in West Norwegian south of Bergen, as in "aftur", "aftor" (older "aptr"); north of Bergen, /i/ appeared in "aftir", "after"; and East Norwegian used /a/, as in "after", "aftær".
Old Norse was a moderately inflected language with high levels of nominal and verbal inflection. Most of the fused morphemes are retained in modern Icelandic, especially in regard to noun case declensions, whereas modern Norwegian in comparison has moved towards more analytical word structures.
Old Norse had three grammatical genders – masculine, feminine and neuter. Adjectives or pronouns referring to a noun must mirror the gender of that noun, so that one says "heill maðr!" but "heilt barn!" As in other languages, the grammatical gender of an impersonal noun is generally unrelated to an expected natural gender of that noun. While indeed "karl", "man", is masculine, "kona", "woman", is feminine, and "hús", "house", is neuter, so also are "hrafn" and "kráka", for "raven" and "crow", masculine and feminine respectively, even in reference to a female raven or a male crow.
All neuter words have identical nominative and accusative forms, and all feminine words have identical nominative and accusative plurals.
The gender of some words' plurals does not agree with that of their singulars, such as "lim" and "mund". Some words, such as "hungr", have multiple genders, evidenced by their determiners being declined in different genders within a given sentence.
Nouns, adjectives and pronouns were declined in four grammatical cases (nominative, accusative, genitive and dative), in singular and plural numbers. Adjectives and pronouns were additionally declined in three grammatical genders. Some pronouns (first and second person) could have dual number in addition to singular and plural. The genitive was used partitively and in compounds and kennings (e.g., "Urðarbrunnr", the well of Urðr; "Lokasenna", the gibing of Loki).
There were several classes of nouns within each gender; the "strong" nouns preserved the greatest number of distinct inflectional forms.
The numerous "weak" noun paradigms had a much higher degree of syncretism between the different cases; i.e., they had fewer forms than the "strong" nouns.
A definite article was realised as a suffix that retained an independent declension; e.g., troll ("a troll") – trollit ("the troll"), hǫll ("a hall") – hǫllin ("the hall"), armr ("an arm") – armrinn ("the arm"). This definite article, however, was a separate word and did not become attached to the noun before later stages of the Old Norse period.
The earliest inscriptions in Old Norse are runic, from the 8th century. Runes continued to be commonly used until the 15th century and have been recorded to be in use in some form as late as the 19th century in some parts of Sweden. With the conversion to Christianity in the 11th century came the Latin alphabet. The oldest preserved texts in Old Norse in the Latin alphabet date from the middle of the 12th century. Subsequently, Old Norse became the vehicle of a large and varied body of vernacular literature, unique in medieval Europe. Most of the surviving literature was written in Iceland. Best known are the Norse sagas, the Icelanders' sagas and the mythological literature, but there also survives a large body of religious literature, translations into Old Norse of courtly romances, classical mythology, and the Old Testament, as well as instructional material, grammatical treatises and a large body of letters and official documents.
Most of the innovations that appeared in Old Norse spread evenly through the Old Norse area. As a result, the dialects were very similar and considered to be the same language, a language that they sometimes called the Danish tongue ("Dǫnsk tunga"), sometimes Norse language ("Norrœnt mál"), as evidenced by Snorri Sturluson in Heimskringla.
However, some changes were geographically limited and so created a dialectal difference between Old West Norse and Old East Norse.
As Proto-Norse evolved into Old Norse, in the 8th century, the effects of the umlauts seem to have been very much the same over the whole Old Norse area. But in later dialects of the language a split occurred mainly between west and east as the use of umlauts began to vary. The typical umlauts (for example "fylla" from *"fullijan") were better preserved in the West due to later generalizations in the east where many instances of umlaut were removed (many archaic Eastern texts as well as eastern runic inscriptions however portray the same extent of umlauts as in later Western Old Norse).
All the while, the changes resulting in breaking (for example "hiarta" from *"hertō") were more influential in the East probably once again due to generalizations within the inflectional system. This difference was one of the greatest reasons behind the dialectalization that took place in the 9th and 10th centuries, shaping an Old West Norse dialect in Norway and the Atlantic settlements and an Old East Norse dialect in Denmark and Sweden.
Old West Norse and Old Gutnish did not take part in the monophthongization which changed "æi" ("ei") into "ē", "øy" ("ey") and "au" into "ø̄", nor did certain peripheral dialects of Swedish, as seen in modern Ostrobothnian dialects. Another difference was that Old West Norse lost certain combinations of consonants. The combinations -"mp"-, -"nt"-, and -"nk"- were assimilated into -"pp"-, -"tt"- and -"kk"- in Old West Norse, but this phenomenon was limited in Old East Norse.
A comparison between the two dialects as well as Old Gutnish can be made from a transcription of one of the Funbo Runestones (U 990), from the eleventh century (translation: 'Veðr and Thane and Gunnar raised this stone after Haursi, their father. God help his spirit').
The OEN text of such inscriptions is traditionally transliterated according to scholarly methods wherein u-umlaut is not regarded in runic Old East Norse. Modern studies have shown that the positions where it applies are the same as for runic Old West Norse; an alternative and probably more accurate transliteration would therefore apply u-umlaut in the OEN text as well.
Some past participles and other words underwent i-umlaut in Old West Norse but not in Old East Norse dialects. Examples of that are Icelandic slegið/sleginn and tekið/tekinn, which in Swedish are slagit/slagen and tagit/tagen. This can also be seen in the Icelandic and Norwegian words sterkur and sterk ("strong"), which in Swedish is stark as in Old Swedish. These differences can also be seen in comparison between Norwegian and Swedish.
Old West Norse is by far the best attested variety of Old Norse. The term "Old Norse" is often used to refer to Old West Norse specifically, in which case the subject of this article receives another name, such as "Old Scandinavian".
The combinations "-mp-", "-nt-", and "-nk-" mostly merged to "-pp-", "-tt-" and "-kk-" in Old West Norse around the 7th century, marking the first distinction between the Eastern and Western dialects.
An early difference between Old West Norse and the other dialects was that Old West Norse had the forms "bú", "dwelling", "kú", "cow" (accusative) and "trú", "faith", whereas Old East Norse had "bó", "kó" and "tró". Old West Norse was also characterized by the preservation of "u"-umlaut, which meant that, for example, Proto-Norse *"tanþu", "tooth", was pronounced "tǫnn" and not "tann" as in post-runic Old East Norse; OWN "gǫ́s" and runic OEN "gǫ́s", while post-runic OEN "gás" "goose".
The earliest body of text appears in runic inscriptions and in poems composed c. 900 by Þjóðólfr of Hvinir (although the poems are not preserved in contemporary sources, but only in much later manuscripts). The earliest manuscripts are from the period 1150–1200 and concern legal, religious and historical matters. During the 12th and 13th centuries, Trøndelag and Western Norway were the most important areas of the Norwegian kingdom and they shaped Old West Norse as an archaic language with a rich set of declensions. In the body of text that has survived into the modern day from then until c. 1300, Old West Norse had little dialect variation, and Old Icelandic does not diverge much more than the Old Norwegian dialects do from each other.
Old Norwegian differentiated early from Old Icelandic by the loss of the consonant "h" in initial position before "l", "n" and "r"; thus whereas Old Icelandic manuscripts might use the form "hnefi", "fist", Old Norwegian manuscripts might use "nefi".
From the late 13th century, Old Icelandic and Old Norwegian started to diverge more. After c. 1350, the Black Death and following social upheavals seem to have accelerated language changes in Norway. From the late 14th century, the language used in Norway is generally referred to as Middle Norwegian.
Old West Norse underwent a lengthening of initial vowels at some point, especially in Norwegian, so that OWN "eta" became "éta", OWN "akr" > "ákr", and OIC "ek" > "ék".
In Iceland, initial "v" before "r" was lost: compare Icelandic "rangur" with Norwegian "vrangr", OEN "vrangʀ". The change is shared with Old Gutnish.
A specifically Icelandic sound, the long, "u"-umlauted A, spelled "ǫ́" and pronounced /ɔː/, developed around the early 11th century. It was short-lived, being marked in the Grammatical Treatises and remaining until the end of the 12th century.
Around the 13th century, Œ/Ǿ (/øː/, which had probably already lowered to /œː/) merged with Æ (/æː/). Thus, pre-13th-century "grœnn" 'green' became modern Icelandic "grænn". The 12th-century Gray Goose Laws manuscripts distinguish the vowels, and so does the Codex Regius copy. However, the 13th-century Codex Regius copy of the Poetic Edda probably relied on newer and/or poorer quality sources. Demonstrating either difficulty with or total lack of natural distinction, the manuscripts show separation of the two phonemes in some places, but they frequently confuse the letters chosen to distinguish them in others.
Towards the end of the 13th century, Ę (/ɛ/) merged with E (/e/).
Around the 11th century, Old Norwegian "hl", "hn", and "hr" became "l", "n", and "r". It is debatable whether the earlier sequences represented consonant clusters ([hl] etc.) or devoiced sonorants ([l̥] etc.).
Orthographic evidence suggests that in a confined dialect of Old Norwegian, /ɔ/ may have been unrounded before /u/ and that "u"-umlaut was reversed unless the "u" had been eliminated: "ǫll", "ǫllum" > "ǫll", "allum".
Greenlandic Norse, a dialect of Old West Norse, was spoken by the Icelandic colonies in Greenland. When the colonies died out around the 15th century, the dialect went with them. The phoneme /θ/ and some instances of /ð/ merged to /t/, and so Old Icelandic "Þórðr" became "Tortr".
A frequently reproduced sample text comes from "Alexanders saga", an Alexander romance. The manuscript, AM 519 a 4to, is dated c. 1280. The facsimile demonstrates the sigla used by scribes to write Old Norse. Many of them were borrowed from Latin. Without familiarity with these abbreviations, the facsimile will be unreadable to many. In addition, reading the manuscript itself requires familiarity with the letterforms of the native script. The abbreviations are expanded in versions with normalized spelling like that of the standard normalization system. Compared to the spelling of the same text in Modern Icelandic, pronunciation has changed greatly, but spelling has changed little since Icelandic orthography was intentionally modelled after Old Norse in the 19th century.
Old East Norse, between 800 and 1100, is called "Runic Swedish" in Sweden and "Runic Danish" in Denmark, but for geographical rather than linguistic reasons. Any differences between the two were minute at best during the more ancient stages of this dialect group. Changes had a tendency to occur earlier in the Danish region. Even today many Old Danish changes have still not taken place in modern Swedish. Swedish is therefore the more conservative of the two in both the ancient and the modern languages, sometimes by a profound margin but in general, differences are still minute. The language is called "runic" because the body of text appears in runes.
Runic Old East Norse is characteristically conservative in form, especially Swedish (which is still true for modern Swedish compared to Danish). In essence it matches or surpasses the conservatism of post-runic Old West Norse, which in turn is generally more conservative than post-runic Old East Norse. While typically "Eastern" in structure, many later post-runic changes and trademarks of OEN had yet to happen.
The phoneme "ʀ", which evolved during the Proto-Norse period from "z", was still clearly separated from "r" in most positions, even when being geminated, while in OWN it had already merged with "r".
The Proto-Germanic phoneme /w/ was preserved in initial position in Old East Norse (w-), unlike in West Norse, where it developed into /v/; it survived in rural Swedish dialects in the provinces of Skåne, Halland, Västergötland and southern Bohuslän into the 18th, 19th and 20th centuries, and it is still preserved in the Dalecarlian dialects in the province of Dalarna, Sweden. The /w/ phoneme also occurred after consonants (kw-, tw-, etc.) in Old East Norse and did so into modern times in those Swedish dialects, as well as in the Westrobothnian and North Bothnian tongues in northern Sweden.
Monophthongization of "æi > ē" and "øy, au > ø̄" started in mid-10th-century Denmark. Compare runic OEN: "fæigʀ", "gæiʀʀ", "haugʀ", "møydōmʀ", "diūʀ"; with Post-runic OEN: "fēgher", "gēr", "hø̄gher", "mø̄dōmber", "diūr"; OWN: "feigr", "geirr", "haugr", "meydómr", "dýr"; from PN *faigiaz, *gaizaz, *haugaz, *mawi- + dōmaz 'maidendom; virginity', *diuza '(wild) animal'.
Feminine o-stems often preserve the plural ending -aʀ, while in OWN they more often merge with the feminine i-stems: (runic OEN) "*sōlaʀ", "*hafnaʀ"/"*hamnaʀ", "*wāgaʀ" versus OWN "sólir", "hafnir" and "vágir" (modern Swedish "solar", "hamnar", "vågar" ("suns, havens, scales"); Danish has mainly lost the distinction between the two stems, with both endings now being rendered as "-er" or "-e" alternatively for the o-stems).
Vice versa, in OEN masculine i-stems with the root ending in either "g" or "k" tended to shift the plural ending to that of the ja-stems, while OWN kept the original: "drængiaʀ", "*ælgiaʀ" and "*bænkiaʀ" versus OWN "drengir", "elgir" ("elks") and "bekkir" (modern Danish "drenge", "elge", "bænke", modern Swedish "drängar", "älgar", "bänkar").
The plural ending of the ja-stems was mostly preserved in OEN, while in OWN it often acquired that of the i-stems: "*bæðiaʀ", "*bækkiaʀ", "*wæfiaʀ" versus OWN "beðir" ("beds"), "bekkir", "vefir" (modern Swedish "bäddar", "bäckar", "vävar").
Until the early 12th century, Old East Norse was very much a uniform dialect. It was in Denmark that the first innovations appeared that would differentiate Old Danish from Old Swedish, as these innovations spread north unevenly (unlike the earlier changes that spread more evenly over the East Norse area), creating a series of isoglosses going from Zealand to Svealand.
In Old Danish, "hr" merged with "r" during the 9th century. From the 11th to 14th centuries, the unstressed vowels -"a", -"o" and -"e" (standard normalization -"a", -"u" and -"i") started to merge into -"ə", represented with the letter "e". This vowel came to be epenthetic, particularly before "-ʀ" endings. At the same time, the voiceless stop consonants "p", "t" and "k" became voiced plosives and even fricative consonants. Resulting from these innovations, Danish has "kage" (cake), "tunger" (tongues) and "gæster" (guests) whereas (Standard) Swedish has retained older forms, "kaka", "tungor" and "gäster" (OEN "kaka", "tungur", "gæstir").
Moreover, the Danish pitch accent shared with Norwegian and Swedish changed into "stød" around this time.
At the end of the 10th and the beginning of the 11th century, initial "h-" before "l", "n" and "r" was still preserved in the middle and northern parts of Sweden, and it is sporadically still preserved in some northern dialects as "g-", e.g. "gly" (lukewarm), from "hlýʀ". The Dalecarlian dialects developed independently from Old Swedish and as such can be considered separate languages from Swedish.
"Västgötalagen", the Westrogothic law, is the oldest text written as a manuscript found in Sweden, dating from the 13th century. It is contemporaneous with most of the Icelandic literature. The text marks the beginning of Old Swedish as a distinct dialect.
Due to Gotland's early isolation from the mainland, many features of Old Norse did not spread from or to the island, and Old Gutnish developed as an entirely separate branch from Old East and West Norse. For example, the diphthong "ai" in "aigu", "þair" and "waita" was not retroactively umlauted to "ei" as in e.g. Old Icelandic "eigu", "þeir" and "veita". Gutnish also shows dropping of "v" in initial "vr", which it shares with the Old West Norse dialects (except Old East Norwegian), but which is otherwise abnormal. Breaking was also particularly active in Old Gutnish, leading to e.g. "biera" versus mainland "bera".
The Gutasaga is the longest text surviving from Old Gutnish. It was written in the 13th century and deals with the early history of the Gotlanders. One part relates the agreement that the Gotlanders had with the Swedish king sometime before the 9th century.
Old English and Old Norse were related languages. It is therefore not surprising that many words in Old Norse look familiar to English speakers; e.g., "armr" (arm), "fótr" (foot), "land" (land), "fullr" (full), "hanga" (to hang), "standa" (to stand). This is because both English and Old Norse stem from a Proto-Germanic mother language. In addition, numerous common, everyday Old Norse words were adopted into the Old English language during the Viking Age, in some cases even displacing their Old English cognates.
In a simple sentence like "They are both weak," the extent of the Old Norse loanwords becomes quite clear (Old East Norse with archaic pronunciation: "Þæiʀ eʀu báðiʀ wæikiʀ" while Old English "híe syndon bégen (þá) wáce"). The words "they" and "weak" are both borrowed from Old Norse, and the word "both" might also be a borrowing, though this is disputed (cf. German "beide"). While the number of loanwords adopted from the Norse was not as numerous as that of Norman French or Latin, their depth and everyday nature make them a substantial and very important part of everyday English speech as they are part of the very core of the modern English vocabulary.
Tracing the origins of words like "bull" and "Thursday" is more difficult. "Bull" may derive from either Old English "bula" or Old Norse "buli", while "Thursday" may be a borrowing or simply derive from the Old English "Þunresdæg", which could have been influenced by the Old Norse cognate. The word "are" is from Old English "earun"/"aron", which stems back to Proto-Germanic as well as the Old Norse cognates. | https://en.wikipedia.org/wiki?curid=22666 |
Old English
Old English (, ), or Anglo-Saxon, is the earliest historical form of the English language, spoken in England and southern and eastern Scotland in the early Middle Ages. It was brought to Great Britain by Anglo-Saxon settlers in the mid-5th century, and the first Old English literary works date from the mid-7th century. After the Norman conquest of 1066, English was replaced, for a time, as the language of the upper classes by Anglo-Norman, a relative of French. This is regarded as marking the end of the Old English era, as during this period the English language was heavily influenced by Anglo-Norman, developing into a phase known now as Middle English.
Old English developed from a set of Anglo-Frisian or Ingvaeonic dialects originally spoken by Germanic tribes traditionally known as the Angles, Saxons and Jutes. As the Anglo-Saxons became dominant in England, their language replaced the languages of Roman Britain: Common Brittonic, a Celtic language, and Latin, brought to Britain by Roman invasion. Old English had four main dialects, associated with particular Anglo-Saxon kingdoms: Mercian, Northumbrian, Kentish and West Saxon. It was West Saxon that formed the basis for the literary standard of the later Old English period, although the dominant forms of Middle and Modern English would develop mainly from Mercian. The speech of eastern and northern parts of England was subject to strong Old Norse influence due to Scandinavian rule and settlement beginning in the 9th century.
Old English is one of the West Germanic languages, and its closest relatives are Old Frisian and Old Saxon. Like other old Germanic languages, it is very different from Modern English and difficult for Modern English speakers to understand without study. Within Old English grammar nouns, adjectives, pronouns and verbs have many inflectional endings and forms, and word order is much freer. The oldest Old English inscriptions were written using a runic system, but from about the 8th century this was replaced by a version of the Latin alphabet.
"Englisc", which the term "English" is derived from, means 'pertaining to the Angles'. In Old English, this word was derived from "Angles" (one of the Germanic tribes who conquered parts of Great Britain in the 5th century). During the 9th century, all invading Germanic tribes were referred to as "Englisc". It has been hypothesised that the Angles acquired their name because their land on the coast of Jutland (now mainland Denmark) resembled a fishhook. Proto-Germanic also had the meaning of 'narrow', referring to the shallow waters near the coast. That word ultimately goes back to Proto-Indo-European "", also meaning 'narrow'.
Another theory is that the derivation from 'narrow' is more likely a connection to angling (as in fishing), which itself stems from a Proto-Indo-European (PIE) root meaning "bend, angle". The semantic link is the fishing hook, which is curved or bent at an angle. In any case, the Angles may have been called such because they were a fishing people or were originally descended from such, and therefore England would mean 'land of the fishermen', and English would be 'the fishermen's language'.
Old English was not static, and its usage covered a period of 700 years, from the Anglo-Saxon settlement of Britain in the 5th century to the late 11th century, some time after the Norman invasion. While indicating that the establishment of dates is an arbitrary process, Albert Baugh dates Old English from 450 to 1150, a period of full inflections, when English was a synthetic language. Perhaps around 85% of Old English words are no longer in use, but those that survived remain basic elements of Modern English vocabulary.
Old English is a West Germanic language, developing out of Ingvaeonic (also known as North Sea Germanic) dialects from the 5th century. It came to be spoken over most of the territory of the Anglo-Saxon kingdoms which became the Kingdom of England. This included most of present-day England, as well as part of what is now southeastern Scotland, which for several centuries belonged to the Anglo-Saxon kingdom of Northumbria. Other parts of the island – Wales and most of Scotland – continued to use Celtic languages, except in the areas of Scandinavian settlements where Old Norse was spoken. Celtic speech also remained established in certain parts of England: Medieval Cornish was spoken all over Cornwall and in adjacent parts of Devon, while Cumbric survived perhaps to the 12th century in parts of Cumbria, and Welsh may have been spoken on the English side of the Anglo-Welsh border. Norse was also widely spoken in the parts of England which fell under Danish law.
Anglo-Saxon literacy developed after Christianisation in the late 7th century. The oldest surviving work of Old English literature is "Cædmon's Hymn", which was composed between 658 and 680 but not written down until the early 8th century. There is a limited corpus of runic inscriptions from the 5th to 7th centuries, but the oldest coherent runic texts (notably the inscriptions on the Franks Casket) date to the early 8th century. The Old English Latin alphabet was introduced around the 8th century.
With the unification of the Anglo-Saxon kingdoms (outside the Danelaw) by Alfred the Great in the later 9th century, the language of government and literature became standardised around the West Saxon dialect (Early West Saxon). Alfred advocated education in English alongside Latin, and had many works translated into the English language; some of them, such as Pope Gregory I's treatise "Pastoral Care", appear to have been translated by Alfred himself. As is typical of the development of literatures, poetry arose in Old English before prose, but it was chiefly the growth of prose that Alfred inspired.
A later literary standard, dating from the late 10th century, arose under the influence of Bishop Æthelwold of Winchester, and was followed by such writers as the prolific Ælfric of Eynsham ("the Grammarian"). This form of the language is known as the "Winchester standard", or more commonly as Late West Saxon. It is considered to represent the "classical" form of Old English. It retained its position of prestige until the time of the Norman Conquest, after which English ceased for a time to be of importance as a literary language.
The history of Old English can be subdivided into Prehistoric Old English (c. 450 to 650), Early Old English (c. 650 to 900), and Late Old English (c. 900 to 1170).
The Old English period is followed by Middle English (12th to 15th century), Early Modern English (c. 1480 to 1650) and finally Modern English (after 1650).
Old English should not be regarded as a single monolithic entity, just as Modern English is also not monolithic. It emerged over time out of the dialects of the colonising tribes, and it is only towards the later Anglo-Saxon period that these can be considered to have constituted a single national language. Even then, Old English continued to exhibit much local and regional variation, remnants of which remain in Modern English dialects.
The four main dialectal forms of Old English were Mercian, Northumbrian, Kentish, and West Saxon. Mercian and Northumbrian are together referred to as "Anglian". In terms of geography, the Northumbrian region lay north of the Humber River; the Mercian lay north of the Thames and south of the Humber River; West Saxon lay south and southwest of the Thames; and the smallest, the Kentish region, lay southeast of the Thames, in a small corner of England. The Kentish region, settled by the Jutes from Jutland, has the scantest literary remains.
Each of these four dialects was associated with an independent kingdom on the island. Of these, Northumbria south of the Tyne, and most of Mercia, were overrun by the Vikings during the 9th century. The portion of Mercia that was successfully defended, and all of Kent, were then integrated into Wessex under Alfred the Great.
From that time on, the West Saxon dialect (then in the form now known as Early West Saxon) became standardised as the language of government, and as the basis for the many works of literature and religious materials produced or translated from Latin in that period.
The later literary standard known as Late West Saxon (see History, above), although centred in the same region of the country, appears not to have been directly descended from Alfred's Early West Saxon. For example, the former diphthong /iy/ tended to become monophthongised to /i/ in EWS, but to /y/ in LWS.
Due to the centralisation of power and the Viking invasions, there is relatively little written record of the non-Wessex dialects after Alfred's unification. Some Mercian texts continued to be written, however, and the influence of Mercian is apparent in some of the translations produced under Alfred's programme, many of which were produced by Mercian scholars. Other dialects certainly continued to be spoken, as is evidenced by the continued variation between their successors in Middle and Modern English. In fact, what would become the standard forms of Middle English and of Modern English are descended from Mercian rather than West Saxon, while Scots developed from the Northumbrian dialect. It was once claimed that, owing to its position at the heart of the Kingdom of Wessex, the relics of Anglo-Saxon accent, idiom and vocabulary were best preserved in the dialect of Somerset.
For details of the sound differences between the dialects, see Phonological history of Old English (dialects).
The language of the Anglo-Saxon settlers appears not to have been significantly affected by the native British Celtic languages which it largely displaced. The number of Celtic loanwords introduced into the language is very small, although dialect and toponymic terms are more often retained in western language contact zones (Cumbria, Devon, Welsh Marches and Borders and so on) than in the east. However, various suggestions have been made concerning possible influence that Celtic may have had on developments in English syntax in the post-Old English period, such as the regular progressive construction and analytic word order, as well as the eventual development of the periphrastic auxiliary verb "do". These ideas have generally not received widespread support from linguists, particularly as many of the theorized Brittonicisms do not become widespread until the late Middle English and Early Modern English periods, in addition to the fact that similar forms exist in other modern Germanic languages.
Old English contained a certain number of loanwords from Latin, which was the scholarly and diplomatic "lingua franca" of Western Europe. It is sometimes possible to give approximate dates for the borrowing of individual Latin words based on which patterns of sound change they have undergone. Some Latin words had already been borrowed into the Germanic languages before the ancestral Angles and Saxons left continental Europe for Britain. More entered the language when the Anglo-Saxons were converted to Christianity and Latin-speaking priests became influential. It was also through Irish Christian missionaries that the Latin alphabet was introduced and adapted for the writing of Old English, replacing the earlier runic system. Nonetheless, the largest transfer of Latin-based (mainly Old French) words into English occurred after the Norman Conquest of 1066, and thus in the Middle English rather than the Old English period.
Another source of loanwords was Old Norse, which came into contact with Old English via the Scandinavian rulers and settlers in the Danelaw from the late 9th century, and during the rule of Cnut and other Danish kings in the early 11th century. Many place-names in eastern and northern England are of Scandinavian origin. Norse borrowings are relatively rare in Old English literature, being mostly terms relating to government and administration. The literary standard, however, was based on the West Saxon dialect, away from the main area of Scandinavian influence; the impact of Norse may have been greater in the eastern and northern dialects. Certainly in Middle English texts, which are more often based on eastern dialects, a strong Norse influence becomes apparent. Modern English contains a great many, often everyday, words that were borrowed from Old Norse, and the grammatical simplification that occurred after the Old English period is also often attributed to Norse influence.
The influence of Old Norse certainly helped move English from a synthetic language along the continuum to a more analytic word order, and Old Norse most likely made a greater impact on the English language than any other language. The eagerness of Vikings in the Danelaw to communicate with their Anglo-Saxon neighbours produced a friction that led to the erosion of the complicated inflectional word-endings. Simeon Potter notes: "No less far-reaching was the influence of Scandinavian upon the inflexional endings of English in hastening that wearing away and leveling of grammatical forms which gradually spread from north to south. It was, after all, a salutary influence. The gain was greater than the loss. There was a gain in directness, in clarity, and in strength."
The strength of the Viking influence on Old English appears from the fact that the indispensable elements of the language – pronouns, modals, comparatives, pronominal adverbs (like "hence" and "together"), conjunctions and prepositions – show the most marked Danish influence; the best evidence of Scandinavian influence appears in the extensive word borrowings, for, as Jespersen indicates, no texts exist in either Scandinavia or in Northern England from this time to give certain evidence of an influence on syntax. The influence of Old Norse on Old English was substantive, pervasive, and of a democratic character. Old Norse and Old English resembled each other closely, like cousins, and with some words in common they roughly understood each other; in time the inflections melted away and the analytic pattern emerged. It is most important to recognize that "in many words the English and Scandinavian language differed chiefly in their inflectional elements. The body of the word was so nearly the same in the two languages that only the endings would put obstacles in the way of mutual understanding. In the mixed population which existed in the Danelaw these endings must have led to much confusion, tending gradually to become obscured and finally lost." This blending of peoples and languages resulted in "simplifying English grammar".
The inventory of classical Old English (Late West Saxon) surface phones, as usually reconstructed, contains a number of sounds that are conventionally written in parentheses because they are not considered to be independent phonemes.
The above system is largely similar to that of Modern English, except that [ç, x] (and, for most speakers, [hw]) have generally been lost, while the voiced affricate and fricatives (now also including /ʒ/) have become independent phonemes, as has /ŋ/.
The mid front rounded vowels /ø(ː)/ had merged into unrounded /e(ː)/ before the Late West Saxon period. During the 11th century such vowels arose again, as monophthongisations of the diphthongs /eo(ː)/, but quickly merged again with /e(ː)/ in most dialects.
The exact pronunciation of the West Saxon close diphthongs, spelt "ie", is disputed; it may have been [i(ː)y]. Other dialects may have had different systems of diphthongs; for example, Anglian dialects retained /io(ː)/, which had merged with /eo(ː)/ in West Saxon.
For more on dialectal differences, see Phonological history of Old English (dialects).
A number of principal sound changes occurred in the pre-history and history of Old English; for details of these processes, see Phonological history of Old English. For sound changes before and after the Old English period, see Phonological history of English.
Nouns decline for five cases: nominative, accusative, genitive, dative, and instrumental; three genders: masculine, feminine, and neuter; and two numbers: singular and plural; and are strong or weak. The instrumental is vestigial, used only with the masculine and neuter singular and often replaced by the dative. Only pronouns and strong adjectives retain separate instrumental forms. There is also sparse early Northumbrian evidence of a sixth case: the locative. The evidence comes from Northumbrian Runic texts (e.g., "on rodi" "on the Cross").
Adjectives agree with nouns in case, gender, and number, and take either strong or weak forms. Pronouns and sometimes participles agree in case, gender, and number. First-person and second-person personal pronouns occasionally distinguish dual-number forms. The definite article "sē" and its inflections serve as a definite article ("the"), a demonstrative adjective ("that"), and a demonstrative pronoun. Other demonstratives are "þēs" ("this") and "ġeon" ("yon"). These words inflect for case, gender, and number. Adjectives have both strong and weak sets of endings, weak ones being used when a definite or possessive determiner is also present.
Verbs conjugate for three persons: first, second, and third; two numbers: singular and plural; two tenses: present and past; three moods: indicative, subjunctive, and imperative; and are strong (exhibiting ablaut) or weak (exhibiting a dental suffix). Verbs have two infinitive forms: bare and bound; and two participles: present and past. The subjunctive has past and present forms. Finite verbs agree with subjects in person and number. The future tense, passive voice, and other aspects are formed with compounds.
Adpositions are mostly before but often after their object. If the object of an adposition is marked in the dative case, an adposition may conceivably be located anywhere in the sentence.
Remnants of the Old English case system in Modern English are in the forms of a few pronouns (such as "I/me/mine", "she/her", "who/whom/whose") and in the possessive ending "-'s", which derives from the masculine and neuter genitive ending "-es". The modern English plural ending "-(e)s" derives from the Old English "-as", but the latter applied only to "strong" masculine nouns in the nominative and accusative cases; different plural endings were used in other instances. Old English nouns had grammatical gender, while modern English has only natural gender. Pronoun usage could reflect either natural or grammatical gender when those conflicted, as in the case of "wīf", a neuter noun referring to a female person.
Old English's verbal compound constructions are the beginnings of the compound tenses of Modern English. Old English verbs include strong verbs, which form the past tense by altering the root vowel, and weak verbs, which use a suffix such as "-de". As in Modern English, and peculiar to the Germanic languages, the verbs formed two great classes: weak (regular) and strong (irregular). Like today, Old English had fewer strong verbs, and many of these have over time decayed into weak forms. Then, as now, dental suffixes indicated the past tense of the weak verbs, as in "work" and "worked".
Old English syntax is similar to that of modern English. Some differences are consequences of the greater level of nominal and verbal inflection, allowing freer word order.
Old English was first written in runes, using the futhorc – a rune set derived from the Germanic 24-character elder futhark, extended by five more runes used to represent Anglo-Saxon vowel sounds, and sometimes by several more additional characters. From around the 8th century, the runic system came to be supplanted by a (minuscule) half-uncial script of the Latin alphabet introduced by Irish Christian missionaries. This was replaced by Insular script, a cursive and pointed version of the half-uncial script. This was used until the end of the 12th century when continental Carolingian minuscule (also known as "Caroline") replaced the insular.
The Latin alphabet of the time still lacked the letters ⟨j⟩ and ⟨w⟩, and there was no ⟨v⟩ as distinct from ⟨u⟩; moreover, native Old English spellings did not use ⟨k⟩, ⟨q⟩ or ⟨z⟩. The remaining 20 Latin letters were supplemented by four more: ⟨æ⟩ (æsc, modern "ash") and ⟨ð⟩ (ðæt, now called eth or edh), which were modified Latin letters, and thorn ⟨þ⟩ and wynn ⟨ƿ⟩, which are borrowings from the futhorc. A few letter pairs were used as digraphs, representing a single sound. Also used was the Tironian note ⟨⁊⟩ (a character similar to the digit 7) for the conjunction "and". A common scribal abbreviation was a thorn with a stroke, which was used for the pronoun þæt. Macrons over vowels were originally used not to mark long vowels (as in modern editions), but to indicate stress, or as abbreviations for a following "m" or "n".
Modern editions of Old English manuscripts generally introduce some additional conventions. The modern forms of Latin letters are used, including ⟨g⟩ in place of the insular G, ⟨s⟩ for long S, and others which may differ considerably from the insular script, notably ⟨e⟩, ⟨f⟩ and ⟨r⟩. Macrons are used to indicate long vowels, where usually no distinction was made between long and short vowels in the originals. (In some older editions an acute accent mark was used for consistency with Old Norse conventions.) Additionally, modern editions often distinguish between velar and palatal ⟨c⟩ and ⟨g⟩ by placing dots above the palatals: ⟨ċ⟩, ⟨ġ⟩. The letter wynn is usually replaced with ⟨w⟩, but ⟨æ⟩, eth and thorn are normally retained (except when eth is replaced by thorn).
In contrast with Modern English orthography, that of Old English was reasonably regular, with a mostly predictable correspondence between letters and phonemes. There were not usually any silent letters—in the word "cniht", for example, both the ⟨c⟩ and ⟨h⟩ were pronounced, unlike the ⟨k⟩ and ⟨gh⟩ in the modern "knight". The following table lists the Old English letters and digraphs together with the phonemes they represent, using the same notation as in the Phonology section above.
Doubled consonants are geminated; the geminate fricatives ⟨ff⟩, ⟨ss⟩ and ⟨ðð⟩/⟨þþ⟩ cannot be voiced.
The corpus of Old English literature is small but still significant, with some 400 surviving manuscripts. The pagan and Christian streams mingle in Old English literature, one of the richest and most significant bodies of literature preserved among the early Germanic peoples. In his supplementary article to the 1935 posthumous edition of Bright's "Anglo-Saxon Reader", Dr. James Hulbert writes:
Some of the most important surviving works of Old English literature are "Beowulf", an epic poem; the "Anglo-Saxon Chronicle", a record of early English history; the Franks Casket, an inscribed early whalebone artefact; and Cædmon's Hymn, a Christian religious poem. There are also a number of extant prose works, such as sermons and saints' lives, biblical translations, and translated Latin works of the early Church Fathers, legal documents, such as laws and wills, and practical works on grammar, medicine, and geography. Still, poetry is considered the heart of Old English literature. Nearly all Anglo-Saxon authors are anonymous, with a few exceptions, such as Bede and Cædmon. Cædmon, the earliest English poet known by name, served as a lay brother in the monastery at Whitby.
The first example is taken from the opening lines of the folk-epic "Beowulf", a poem of some 3,000 lines and the single greatest work of Old English. This passage describes how Hrothgar's legendary ancestor Scyld was found as a baby, washed ashore, and adopted by a noble family. The translation is literal and represents the original poetic word order. As such, it is not typical of Old English prose. The modern cognates of original words have been used whenever practical to give a close approximation of the feel of the original poem.
The words in brackets are implied in the Old English by noun case and the bold words in brackets are explanations of words that have slightly different meanings in a modern context. Notice how "what" is used by the poet where a word like "lo" or "behold" would be expected. This usage is similar to "what-ho!", both an expression of surprise and a call to attention.
Old English poetry is based on stress and alliteration. In alliteration, the first consonant in a word alliterates with the same consonant at the beginning of another word, while vowels alliterate with any other vowel. In the text below, the letters that alliterate are bolded.
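A compact way to see the rule is as a predicate over word pairs. The following Python sketch is illustrative only and not from the source; it encodes just the two conditions stated above (identical initial consonants, or any two vowel-initial words), ignoring the special treatment real Old English scansion gives to clusters such as sc-, sp- and st-.

```python
# Illustrative sketch of the alliteration rule described above (assumption:
# plain first-letter comparison; real OE verse also has consonant-cluster rules).
VOWELS = set("aeiouyæ")

def alliterates(word_a: str, word_b: str) -> bool:
    a, b = word_a.lower()[0], word_b.lower()[0]
    if a in VOWELS and b in VOWELS:
        return True   # any vowel alliterates with any other vowel
    return a == b     # consonants must match exactly

print(alliterates("Scyld", "Scefing"))   # True  (same initial consonant)
print(alliterates("ellen", "æþeling"))   # True  (vowel with vowel)
print(alliterates("gold", "hring"))      # False
```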
A semi-fluent translation in Modern English would be:
This text of the Lord's Prayer is presented in the standardised West Saxon literary dialect, with added macrons for vowel length, markings for probable palatalised consonants, modern punctuation, and the replacement of the letter ƿynn with w.
This is a proclamation from King Cnut the Great to his earl Thorkell the Tall and the English people written in AD 1020. Unlike the previous two examples, this text is prose rather than poetry. For ease of reading, the passage has been divided into sentences while the pilcrows represent the original division.
The earliest history of Old English lexicography lies in the Anglo-Saxon period itself, when English-speaking scholars created English glosses on Latin texts. At first these were often marginal or interlinear glosses, but soon came to be gathered into word-lists such as the Épinal-Erfurt, Leiden and Corpus Glossaries. Over time, these word-lists were consolidated and alphabeticised to create extensive Latin-Old English glossaries with some of the character of dictionaries, such as the Cleopatra Glossaries, the Harley Glossary and the Brussels Glossary. In some cases, the material in these glossaries continued to be circulated and updated in Middle English glossaries, such as the Durham Plant-Name Glossary and the Laud Herbal Glossary.
Old English lexicography was revived in the early modern period, drawing heavily on Anglo-Saxons' own glossaries. The major publication at this time was William Somner's "Dictionarium Saxonico-Latino-Anglicum". The next substantial Old English dictionary was Joseph Bosworth's "Anglo-Saxon Dictionary" of 1838.
In modern scholarship, the following dictionaries remain current:
Though focused on later periods, the "Oxford English Dictionary", "Middle English Dictionary", "Dictionary of the Older Scottish Tongue", and "Historical Thesaurus of English" all also include material relevant to Old English.
Like other historical languages, Old English has been used by scholars and enthusiasts of later periods to create texts either imitating Anglo-Saxon literature or deliberately transferring it to a different cultural context. Examples include Alistair Campbell and J. R. R. Tolkien. Ransom Riggs uses several Old English words, such as "syndrigast" (singular, peculiar) and "ymbryne" (period, cycle), which he dubs "Old Peculiar" words.
A number of websites devoted to Modern Paganism and historical reenactment offer reference material and forums promoting the active use of Old English. There is also an Old English version of Wikipedia. However, one investigation found that many Neo-Old English texts published online bear little resemblance to the historical language and have many basic grammatical mistakes.
Open cluster
An open cluster is a group of up to a few thousand stars that were formed from the same giant molecular cloud and have roughly the same age. More than 1,100 open clusters have been discovered within the Milky Way Galaxy, and many more are thought to exist. They are loosely bound by mutual gravitational attraction and become disrupted by close encounters with other clusters and clouds of gas as they orbit the galactic center. This can result in a migration to the main body of the galaxy and a loss of cluster members through internal close encounters. Open clusters generally survive for a few hundred million years, with the most massive ones surviving for a few billion years. In contrast, the more massive globular clusters of stars exert a stronger gravitational attraction on their members, and can survive for longer. Open clusters have been found only in spiral and irregular galaxies, in which active star formation is occurring.
Young open clusters may be contained within the molecular cloud from which they formed, illuminating it to create an H II region. Over time, radiation pressure from the cluster will disperse the molecular cloud. Typically, about 10% of the mass of a gas cloud will coalesce into stars before radiation pressure drives the rest of the gas away.
Open clusters are key objects in the study of stellar evolution. Because the cluster members are of similar age and chemical composition, their properties (such as distance, age, metallicity, extinction, and velocity) are more easily determined than they are for isolated stars. A number of open clusters, such as the Pleiades, Hyades or the Alpha Persei Cluster, are visible with the naked eye. Some others, such as the Double Cluster, are barely perceptible without instruments, while many more can be seen using binoculars or telescopes; the Wild Duck Cluster, M11, is an example of the latter.
The prominent open cluster the Pleiades has been recognized as a group of stars since antiquity, while the Hyades forms part of Taurus, one of the oldest constellations. Other open clusters were noted by early astronomers as unresolved fuzzy patches of light. In his "Almagest", the Roman astronomer Ptolemy mentions the Praesepe cluster, the Double Cluster in Perseus, the Coma Star Cluster, and the Ptolemy Cluster, while the Persian astronomer Al-Sufi wrote of the Omicron Velorum cluster. However, it would require the invention of the telescope to resolve these "nebulae" into their constituent stars. Indeed, in 1603 Johann Bayer gave three of these clusters designations as if they were single stars.
The first person to use a telescope to observe the night sky and record his observations was the Italian scientist Galileo Galilei in 1609. When he turned the telescope toward some of the nebulous patches recorded by Ptolemy, he found they were not single stars, but groupings of many stars. For Praesepe, he found more than 40 stars. Where previously observers had noted only 6–7 stars in the Pleiades, he found almost 50. In his 1610 treatise "Sidereus Nuncius", Galileo Galilei wrote, "the galaxy is nothing else but a mass of innumerable stars planted together in clusters." Influenced by Galileo's work, the Sicilian astronomer Giovanni Hodierna became possibly the first astronomer to use a telescope to find previously undiscovered open clusters. In 1654, he identified the objects now designated Messier 41, Messier 47, NGC 2362 and NGC 2451.
It was realised as early as 1767 that the stars in a cluster were physically related, when the English naturalist Reverend John Michell calculated that the probability of even just one group of stars like the Pleiades being the result of a chance alignment as seen from Earth was just 1 in 496,000. Between 1774 and 1781, French astronomer Charles Messier published a catalogue of celestial objects that had a nebulous appearance similar to comets. This catalogue included 26 open clusters. In the 1790s, English astronomer William Herschel began an extensive study of nebulous celestial objects. He discovered that many of these features could be resolved into groupings of individual stars. Herschel conceived the idea that stars were initially scattered across space, but later became clustered together as star systems because of gravitational attraction. He divided the nebulae into eight classes, with classes VI through VIII being used to classify clusters of stars.
The number of clusters known continued to increase under the efforts of astronomers. Hundreds of open clusters were listed in the New General Catalogue, first published in 1888 by the Danish-Irish astronomer J. L. E. Dreyer, and the two supplemental Index Catalogues, published in 1896 and 1905. Telescopic observations revealed two distinct types of clusters, one of which contained thousands of stars in a regular spherical distribution and was found all across the sky but preferentially towards the centre of the Milky Way. The other type consisted of a generally sparser population of stars in a more irregular shape. These were generally found in or near the galactic plane of the Milky Way. Astronomers dubbed the former globular clusters, and the latter open clusters. Because of their location, open clusters are occasionally referred to as "galactic clusters", a term that was introduced in 1925 by the Swiss-American astronomer Robert Julius Trumpler.
Micrometer measurements of the positions of stars in clusters were made as early as 1877 by the German astronomer E. Schönfeld and further pursued by the American astronomer E. E. Barnard prior to his death in 1923. No indication of stellar motion was detected by these efforts. However, in 1918 the Dutch-American astronomer Adriaan van Maanen was able to measure the proper motion of stars in part of the Pleiades cluster by comparing photographic plates taken at different times. As astrometry became more accurate, cluster stars were found to share a common proper motion through space. By comparing the photographic plates of the Pleiades cluster taken in 1918 with images taken in 1943, van Maanen was able to identify those stars that had a proper motion similar to the mean motion of the cluster, and were therefore more likely to be members. Spectroscopic measurements revealed common radial velocities, thus showing that the clusters consist of stars bound together as a group.
The first color-magnitude diagrams of open clusters were published by Ejnar Hertzsprung in 1911, giving the plot for the Pleiades and Hyades star clusters. He continued this work on open clusters for the next twenty years. From spectroscopic data, he was able to determine the upper limit of internal motions for open clusters, and could estimate that the total mass of these objects did not exceed several hundred times the mass of the Sun. He demonstrated a relationship between the star colors and their magnitudes, and in 1929 noticed that the Hyades and Praesepe clusters had different stellar populations than the Pleiades. This would subsequently be interpreted as a difference in ages of the three clusters.
The formation of an open cluster begins with the collapse of part of a giant molecular cloud, a cold dense cloud of gas and dust containing up to many thousands of times the mass of the Sun. These clouds have densities that vary from 10² to 10⁶ molecules of neutral hydrogen per cm³, with star formation occurring in regions with densities above 10⁴ molecules per cm³. Typically, only 1–10% of the cloud by volume is above the latter density. Prior to collapse, these clouds maintain their mechanical equilibrium through magnetic fields, turbulence, and rotation.
Many factors may disrupt the equilibrium of a giant molecular cloud, triggering a collapse and initiating the burst of star formation that can result in an open cluster. These include shock waves from a nearby supernova, collisions with other clouds, or gravitational interactions. Even without external triggers, regions of the cloud can reach conditions where they become unstable against collapse. The collapsing cloud region will undergo hierarchical fragmentation into ever smaller clumps, including a particularly dense form known as infrared dark clouds, eventually leading to the formation of up to several thousand stars. This star formation begins enshrouded in the collapsing cloud, blocking the protostars from sight but allowing infrared observation. In the Milky Way galaxy, the formation rate of open clusters is estimated to be one every few thousand years.
The hottest and most massive of the newly formed stars (known as OB stars) will emit intense ultraviolet radiation, which steadily ionizes the surrounding gas of the giant molecular cloud, forming an H II region. Stellar winds and radiation pressure from the massive stars begin to drive away the hot ionized gas at a velocity matching the speed of sound in the gas. After a few million years the cluster will experience its first core-collapse supernovae, which will also expel gas from the vicinity. In most cases these processes will strip the cluster of gas within ten million years and no further star formation will take place. Still, about half of the resulting protostellar objects will be left surrounded by circumstellar disks, many of which form accretion disks.
As only 30 to 40 per cent of the gas in the cloud core forms stars, the process of residual gas expulsion is highly damaging to the star formation process. All clusters thus suffer significant infant weight loss, while a large fraction undergo infant mortality. At this point, the formation of an open cluster will depend on whether the newly formed stars are gravitationally bound to each other; otherwise an unbound stellar association will result. Even when a cluster such as the Pleiades does form, it may only hold on to a third of the original stars, with the remainder becoming unbound once the gas is expelled. The young stars so released from their natal cluster become part of the Galactic field population.
Because most if not all stars form in clusters, star clusters are to be viewed as the fundamental building blocks of galaxies. The violent gas-expulsion events that shape and destroy many star clusters at birth leave their imprint in the morphological and kinematical structures of galaxies. Most open clusters form with at least 100 stars and a mass of 50 or more solar masses. The largest clusters can have over 10⁴ solar masses, with the massive cluster Westerlund 1 being estimated at 5 × 10⁴ solar masses and R136 at almost 5 × 10⁵, typical of globular clusters. While open clusters and globular clusters form two fairly distinct groups, there may not be a great deal of intrinsic difference between a very sparse globular cluster such as Palomar 12 and a very rich open cluster. Some astronomers believe the two types of star clusters form via the same basic mechanism, with the difference being that the conditions that allowed the formation of the very rich globular clusters containing hundreds of thousands of stars no longer prevail in the Milky Way.
It is common for two or more separate open clusters to form out of the same molecular cloud. In the Large Magellanic Cloud, both Hodge 301 and R136 have formed from the gases of the Tarantula Nebula, while in our own galaxy, tracing back the motion through space of the Hyades and Praesepe, two prominent nearby open clusters, suggests that they formed in the same cloud about 600 million years ago. Sometimes, two clusters born at the same time will form a binary cluster. The best known example in the Milky Way is the Double Cluster of NGC 869 and NGC 884 (sometimes mistakenly called h and χ Persei; h refers to a neighboring star and χ to "both" clusters), but at least 10 more double clusters are known to exist. Many more are known in the Small and Large Magellanic Clouds—they are easier to detect in external systems than in our own galaxy because projection effects can cause unrelated clusters within the Milky Way to appear close to each other.
Open clusters range from very sparse clusters with only a few members to large agglomerations containing thousands of stars. They usually consist of quite a distinct dense core, surrounded by a more diffuse 'corona' of cluster members. The core is typically about 3–4 light years across, with the corona extending to about 20 light years from the cluster centre. Typical star densities in the centre of a cluster are about 1.5 stars per cubic light year; the stellar density near the Sun is about 0.003 stars per cubic light year.
Open clusters are often classified according to a scheme developed by Robert Trumpler in 1930. The Trumpler scheme gives a cluster a three-part designation, with a Roman numeral from I–IV indicating its concentration and detachment from the surrounding star field (from strongly to weakly concentrated), an Arabic numeral from 1 to 3 indicating the range in brightness of members (from small to large range), and "p", "m" or "r" to indicate whether the cluster is poor, medium or rich in stars. An "n" is appended if the cluster lies within nebulosity.
Under the Trumpler scheme, the Pleiades are classified as I3rn (strongly concentrated and richly populated with nebulosity present), while the nearby Hyades are classified as II3m (more dispersed, and with fewer members).
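Since the designation is just three or four characters with fixed positions, it can be decoded mechanically. The Python sketch below is not from the source; the prose descriptions in the lookup tables paraphrase the scheme as summarised above, and the star-count thresholds in the richness comment are commonly quoted values rather than figures from this article.

```python
import re

# Decode a Trumpler designation such as "I3rn" or "II3m" (sketch only).
PATTERN = re.compile(r"^(IV|III|II|I)([1-3])([pmr])(n?)$")

CONCENTRATION = {  # paraphrased: I = strongly concentrated ... IV = weakly
    "I": "strong concentration, detached from the star field",
    "II": "weaker concentration, detached",
    "III": "little noticeable concentration",
    "IV": "not well detached from the surrounding field",
}
RICHNESS = {"p": "poor", "m": "medium", "r": "rich"}  # roughly <50 / 50-100 / >100 stars

def parse_trumpler(code: str) -> dict:
    m = PATTERN.match(code)
    if not m:
        raise ValueError(f"not a Trumpler designation: {code!r}")
    conc, brightness, rich, neb = m.groups()
    return {
        "concentration": CONCENTRATION[conc],
        "brightness_range": int(brightness),  # 1 = small range, 3 = large
        "richness": RICHNESS[rich],
        "nebulosity": bool(neb),
    }

print(parse_trumpler("I3rn"))  # Pleiades
print(parse_trumpler("II3m"))  # Hyades
```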
There are over 1,000 known open clusters in our galaxy, but the true total may be up to ten times higher than that. In spiral galaxies, open clusters are largely found in the spiral arms where gas densities are highest and so most star formation occurs, and clusters usually disperse before they have had time to travel beyond their spiral arm. Open clusters are strongly concentrated close to the galactic plane, with a scale height in our galaxy of about 180 light years, compared to a galactic radius of approximately 50,000 light years.
In irregular galaxies, open clusters may be found throughout the galaxy, although their concentration is highest where the gas density is highest. Open clusters are not seen in elliptical galaxies: star formation ceased many millions of years ago in ellipticals, and so the open clusters which were originally present have long since dispersed.
In our galaxy, the distribution of clusters depends on age, with older clusters being preferentially found at greater distances from the galactic centre, generally at substantial distances above or below the galactic plane. Tidal forces are stronger nearer the centre of the galaxy, increasing the rate of disruption of clusters, and also the giant molecular clouds which cause the disruption of clusters are concentrated towards the inner regions of the galaxy, so clusters in the inner regions of the galaxy tend to get dispersed at a younger age than their counterparts in the outer regions.
Because open clusters tend to be dispersed before most of their stars reach the end of their lives, the light from them tends to be dominated by the young, hot blue stars. These stars are the most massive, and have the shortest lives of a few tens of millions of years. The older open clusters tend to contain more yellow stars.
Some open clusters contain hot blue stars which seem to be much younger than the rest of the cluster. These blue stragglers are also observed in globular clusters, and in the very dense cores of globulars they are believed to arise when stars collide, forming a much hotter, more massive star. However, the stellar density in open clusters is much lower than that in globular clusters, and stellar collisions cannot explain the numbers of blue stragglers observed. Instead, it is thought that most of them probably originate when dynamical interactions with other stars cause a binary system to coalesce into one star.
Once they have exhausted their supply of hydrogen through nuclear fusion, medium- to low-mass stars shed their outer layers to form a planetary nebula and evolve into white dwarfs. While most clusters become dispersed before a large proportion of their members have reached the white dwarf stage, the number of white dwarfs in open clusters is still generally much lower than would be expected, given the age of the cluster and the expected initial mass distribution of the stars. One possible explanation for the lack of white dwarfs is that when a red giant expels its outer layers to become a planetary nebula, a slight asymmetry in the loss of material could give the star a 'kick' of a few kilometres per second, enough to eject it from the cluster.
Because of their high density, close encounters between stars in an open cluster are common. For a typical cluster with 1,000 stars with a 0.5 parsec half-mass radius, on average a star will have an encounter with another member every 10 million years. The rate is even higher in denser clusters. These encounters can have a significant impact on the extended circumstellar disks of material that surround many young stars. Tidal perturbations of large disks may result in the formation of massive planets and brown dwarfs, producing companions at distances of 100 AU or more from the host star.
Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal.
Clusters that have enough mass to be gravitationally bound once the surrounding nebula has evaporated can remain distinct for many tens of millions of years, but over time internal and external processes tend also to disperse them. Internally, close encounters between stars can increase the velocity of a member beyond the escape velocity of the cluster. This results in the gradual 'evaporation' of cluster members.
Externally, about every half-billion years or so an open cluster tends to be disturbed by external factors such as passing close to or through a molecular cloud. The gravitational tidal forces generated by such an encounter tend to disrupt the cluster. Eventually, the cluster becomes a stream of stars, not close enough to be a cluster but all related and moving in similar directions at similar speeds. The timescale over which a cluster disrupts depends on its initial stellar density, with more tightly packed clusters persisting for longer. Estimated cluster half lives, after which half the original cluster members will have been lost, range from 150–800 million years, depending on the original density.
After a cluster has become gravitationally unbound, many of its constituent stars will still be moving through space on similar trajectories, in what is known as a stellar association, moving cluster, or moving group. Several of the brightest stars in the 'Plough' of Ursa Major are former members of an open cluster which now form such an association, in this case, the Ursa Major Moving Group. Eventually their slightly different relative velocities will see them scattered throughout the galaxy. Such a dispersed group is then known as a stream, identified by the similar velocities and ages of otherwise well-separated stars.
When a Hertzsprung-Russell diagram is plotted for an open cluster, most stars lie on the main sequence. The most massive stars have begun to evolve away from the main sequence and are becoming red giants; the position of the turn-off from the main sequence can be used to estimate the age of the cluster.
Because the stars in an open cluster are all at roughly the same distance from Earth, and were born at roughly the same time from the same raw material, the differences in apparent brightness among cluster members are due only to their mass. This makes open clusters very useful in the study of stellar evolution, because when comparing one star to another, many of the variable parameters are fixed.
The study of the abundances of lithium and beryllium in open cluster stars can give important clues about the evolution of stars and their interior structures. While hydrogen nuclei cannot fuse to form helium until the temperature reaches about 10 million K, lithium and beryllium are destroyed at temperatures of 2.5 million K and 3.5 million K respectively. This means that their abundances depend strongly on how much mixing occurs in stellar interiors. Studying their abundances in open cluster stars has the advantage that variables such as age and chemical composition are already fixed.
Studies have shown that the abundances of these light elements are much lower than models of stellar evolution predict. While the reason for this underabundance is not yet fully understood, one possibility is that convection in stellar interiors can 'overshoot' into regions where radiation is normally the dominant mode of energy transport.
Determining the distances to astronomical objects is crucial to understanding them, but the vast majority of objects are too far away for their distances to be directly determined. Calibration of the astronomical distance scale relies on a sequence of indirect and sometimes uncertain measurements relating the closest objects, for which distances can be directly measured, to increasingly distant objects. Open clusters are a crucial step in this sequence.
The closest open clusters can have their distance measured directly by one of two methods. First, the parallax (the small change in apparent position over the course of a year caused by the Earth moving from one side of its orbit around the Sun to the other) of stars in close open clusters can be measured, like other individual stars. Clusters such as the Pleiades, Hyades and a few others within about 500 light years are close enough for this method to be viable, and results from the Hipparcos position-measuring satellite yielded accurate distances for several clusters.
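The parallax method reduces to a one-line relation: a parallax of p arcseconds corresponds to a distance of 1/p parsecs. A minimal Python illustration follows; the sample parallax is an assumed, roughly Pleiades-like value, not a figure from this article.

```python
# d [pc] = 1 / p [arcsec] -- the definition of the parsec.
def parallax_distance_pc(parallax_arcsec: float) -> float:
    return 1.0 / parallax_arcsec

# An assumed parallax of 0.0074" gives ~135 pc (illustrative only):
print(parallax_distance_pc(0.0074))
```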
The other direct method is the so-called moving cluster method. This relies on the fact that the stars of a cluster share a common motion through space. Measuring the proper motions of cluster members and plotting their apparent motions across the sky will reveal that they converge on a vanishing point. The radial velocity of cluster members can be determined from Doppler shift measurements of their spectra, and once the radial velocity, proper motion and angular distance from the cluster to its vanishing point are known, simple trigonometry will reveal the distance to the cluster. The Hyades are the best known application of this method, which reveals their distance to be 46.3 parsecs.
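The "simple trigonometry" works as follows: the angle θ between the cluster and its vanishing point relates the transverse velocity to the radial one via v_t = v_r·tan θ, and the transverse velocity relates to the proper motion μ (arcsec/yr) and distance d (pc) via v_t = 4.74·μ·d, since 1 AU/yr ≈ 4.74 km/s. Here is a Python sketch; the input numbers are assumed, merely Hyades-like values chosen to land near the distance quoted above, not measured data.

```python
import math

# Moving cluster method: d = v_r * tan(theta) / (4.74 * mu)
#   v_r       : radial velocity in km/s
#   mu        : proper motion in arcsec/yr
#   theta_deg : angular distance to the convergent (vanishing) point, degrees
def moving_cluster_distance_pc(v_r: float, mu: float, theta_deg: float) -> float:
    v_t = v_r * math.tan(math.radians(theta_deg))  # transverse velocity, km/s
    return v_t / (4.74 * mu)

# Assumed illustrative inputs, giving roughly the Hyades' ~46 pc:
print(moving_cluster_distance_pc(v_r=39.0, mu=0.11, theta_deg=32.0))
```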
Once the distances to nearby clusters have been established, further techniques can extend the distance scale to more distant clusters. By matching the main sequence on the Hertzsprung-Russell diagram for a cluster at a known distance with that of a more distant cluster, the distance to the more distant cluster can be estimated. The nearest open cluster is the Hyades: the stellar association consisting of most of the Plough stars is at about half the distance of the Hyades, but is a stellar association rather than an open cluster as the stars are not gravitationally bound to each other. The most distant known open cluster in our galaxy is Berkeley 29, at a distance of about 15,000 parsecs. Open clusters, especially super star clusters, are also easily detected in many of the galaxies of the Local Group and nearby: e.g., NGC 346 and the SSCs R136 and NGC 1569 A and B.
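Main-sequence fitting amounts to applying the standard distance-modulus relation: if the distant cluster's main sequence sits Δm magnitudes fainter than that of a reference cluster at known distance, the distance scales by a factor of 10^(Δm/5). A short sketch — the relation is the standard one, but the example offset is illustrative; 46.3 pc is the Hyades distance quoted above.

```python
# Main-sequence fitting via the distance modulus: d = d_ref * 10 ** (dm / 5)
def ms_fitting_distance_pc(d_ref_pc: float, delta_m: float) -> float:
    return d_ref_pc * 10 ** (delta_m / 5)

# A main sequence appearing 5 magnitudes fainter than the Hyades' (46.3 pc)
# implies a cluster 10 times more distant:
print(ms_fitting_distance_pc(46.3, 5.0))  # 463 pc
```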
Accurate knowledge of open cluster distances is vital for calibrating the period-luminosity relationship shown by variable stars such as Cepheids, which allows them to be used as standard candles. These luminous stars can be detected at great distances, and are then used to extend the distance scale to nearby galaxies in the Local Group. Indeed, the open cluster designated NGC 7790 hosts three classical Cepheids. RR Lyrae variables are too old to be associated with open clusters, and are instead found in globular clusters.
The open cluster NGC 6811 contains two known planetary systems, Kepler-66 and Kepler-67.
Orimulsion
Orimulsion is a registered trademark name for a bitumen-based fuel that was developed for industrial use by Intevep, the Research and Development Affiliate of Petroleos de Venezuela SA (PDVSA), following earlier collaboration on oil emulsions with BP.
Like coal and oil, bitumen occurs naturally and is obtained from the world's largest deposit in the Orinoco Belt in Venezuela. The deposit is estimated to be more than 1,300 billion barrels (190 billion m³) of bitumen, an amount approximately equivalent to the world's estimated proven oil reserves.
Raw bitumen has an extremely high viscosity and a density of 8 to 10 °API at ambient temperatures, and is unsuitable for direct use in conventional power stations. Orimulsion is made by mixing the bitumen with about 30% fresh water and a small amount of surfactant; the resulting emulsion behaves similarly to fuel oil. An alcohol-based surfactant recently replaced the original phenol-based version, improving the transport properties of the fuel and eliminating the health concerns associated with the phenol group of surfactants.
As a fuel for electricity generation, Orimulsion has a number of attractive characteristics:
If there is a spill while shipping over water, the mixture de-emulsifies and the bitumen drops out of suspension.
It is a non-Newtonian fluid, and if it is allowed to cool below 30 °C it will 'set': pumping becomes impossible, and there is no way of restarting the flow through the pipeline.
Orimulsion is currently used as a commercial boiler fuel in power plants worldwide ("e.g.", Japan, Italy and China). Use of the fuel was once much wider, and demand was increasing. However, many of PDVSA's engineers were fired following the Venezuelan general strike of 2002–03. Orimulsion had been the pride of the PDVSA engineers, and so it fell out of favor with key political leaders. As a result, the government is trying to wind down the Orimulsion program. The one exception is the sale of Orimulsion to China. The Venezuelan government has close ties to China, as it has with Cuba; the result is that China is still supplied with Orimulsion, while the rest of the world has either had its supplies terminated or is still experiencing the wind-down phase. Orimulsion still has excellent potential for domestic consumption.
Another reason given by current PDVSA management is that with rising crude oil prices, it has been found that mixing or diluting Orinoco bitumen (extra-heavy oil) with a lighter crude oil can make the blend more profitable as a crude oil on the world market than selling it as Orimulsion. An example of this is the popular Merey blend (Orinoco bitumen and Mesa crude oil). ConocoPhillips, along with PDVSA, operates the Merey Sweeny delayed coker, vacuum tower and related facilities at ConocoPhillips' refinery in Sweeny, Texas, U.S.A., for processing and upgrading heavy sour Merey crude oil.
Air pollutant control technology that is commonly available can limit emissions from Orimulsion to levels considered "Best Available Control Technology", as defined by the United States Environmental Protection Agency.
Oxfordian theory of Shakespeare authorship
The Oxfordian theory of Shakespeare authorship contends that Edward de Vere, 17th Earl of Oxford, wrote the plays and poems traditionally attributed to William Shakespeare. Though literary scholars reject all alternative authorship candidates, including Oxford, interest in the Oxfordian theory continues. Since the 1920s, the Oxfordian theory has been the most popular alternative Shakespeare authorship theory.
The convergence of documentary evidence of the type used by academics for authorial attribution – title pages, testimony by other contemporary poets and historians, and official records – sufficiently establishes Shakespeare's authorship for the overwhelming majority of Shakespeare scholars and literary historians, and no such documentary evidence links Oxford to Shakespeare's works. Oxfordians, however, reject the historical record and claim that circumstantial evidence supports Oxford's authorship, proposing that the contradictory historical evidence is the product of a conspiracy that falsified the record to protect the identity of the real author. Scholarly literary specialists consider the Oxfordian method of interpreting the plays and poems as autobiographical, and then using them to construct a hypothetical author's biography, to be unreliable and logically unsound.
Oxfordian arguments rely heavily on biographical allusions; adherents find correspondences between incidents and circumstances in Oxford's life and events in Shakespeare's plays, sonnets, and longer poems. The case also relies on perceived parallels of language, idiom, and thought between Shakespeare's works and Oxford's own poetry and letters. Oxfordians claim that marked passages in Oxford's Bible can be linked to Biblical allusions in Shakespeare's plays. That no plays survive under Oxford's name is also important to the Oxfordian theory. Oxfordians interpret certain 16th- and 17th-century literary allusions as indicating that Oxford was one of the more prominent suppressed anonymous and/or pseudonymous writers of the day. Under this scenario, Shakespeare was either a "front man" or "play-broker" who published the plays under his own name or was merely an actor with a similar name, misidentified as the playwright since the first Shakespeare biographies of the early 1700s.
The most compelling evidence against the Oxfordian theory is de Vere's death in 1604, since the generally accepted chronology of Shakespeare's plays places the composition of approximately twelve of the plays after that date. Oxfordians respond that the annual publication of "new" or "corrected" Shakespeare plays stopped in 1604, and that the dedication to "Shakespeare's Sonnets" implies that the author was dead prior to their publication in 1609. Oxfordians believe the reason so many of the "late plays" show evidence of revision and collaboration is because they were completed by other playwrights after Oxford's death.
The theory that the works of Shakespeare were in fact written by someone other than William Shakespeare dates back to the mid-nineteenth century. In 1857, the first book on the topic, "The Philosophy of the Plays of Shakspere Unfolded", by Delia Bacon, was published. Bacon proposed the first "group theory" of Shakespearian authorship, attributing the works to a committee headed by Francis Bacon and including Walter Raleigh. De Vere is mentioned once in the book, in a list of "high-born wits and poets", who were associated with Raleigh. Some commentators have interpreted this to imply that he was part of the group of authors. Throughout the 19th century Bacon was the preferred hidden author. Oxford is not known to have been mentioned again in this context.
By the beginning of the twentieth century other candidates, typically aristocrats, were put forward, most notably Roger Manners, 5th Earl of Rutland, and William Stanley, 6th Earl of Derby. Oxford's candidacy as sole author was first proposed by J. Thomas Looney in his 1920 book "Shakespeare Identified in Edward de Vere, 17th Earl of Oxford". Following earlier anti-Stratfordians, Looney argued that the known facts of Shakespeare's life did not fit the personality he ascribed to the author of the plays. Like other anti-Stratfordians before him, Looney referred to the absence of records concerning Shakespeare's education, his limited experience of the world, his allegedly poor handwriting skills (evidenced in his signatures), and the "dirt and ignorance" of Stratford at the time. Shakespeare had a petty "acquisitive disposition", he said, while the plays made heroes of free-spending figures. They also portrayed middle- and lower-class people negatively, while Shakespearian heroes were typically aristocratic. Looney referred to scholars who found in the plays evidence that their author was an expert in law, widely read in ancient Latin literature, and could speak French and Italian. Looney believed that even very early works such as "Love's Labour's Lost" implied that their author was already a person of "matured powers", in his forties or fifties, with wide experience of the world. Looney considered that Oxford's personality fitted the one he deduced from the plays, and he also identified characters in the plays as detailed portraits of Oxford's family and personal contacts. Several characters, including Hamlet and Bertram (in "All's Well That Ends Well"), were, he believed, self-portraits. Adapting arguments earlier used for Rutland and Derby, Looney fitted events in the plays to episodes in Oxford's life, including his travels to France and Italy, the settings for many plays. Oxford's death in 1604 was linked to a drop-off in the publication of Shakespeare plays. Looney declared that the late play "The Tempest" was not written by Oxford, and that others performed or published after Oxford's death were most probably left incomplete and finished by other writers, thus explaining the apparent idiosyncrasies of style found in the late Shakespeare plays. Looney also introduced the argument that the reference to the "ever-living poet" in the 1609 dedication to Shakespeare's sonnets implied that the author was dead at the time of publication.
Sigmund Freud, the novelist Marjorie Bowen, and several 20th-century celebrities found the thesis persuasive, and Oxford soon overtook Bacon as the favoured alternative candidate to Shakespeare, though academic Shakespearians mostly ignored the subject. Looney's theory attracted a number of activist followers who published books supplementing his own and added new arguments, most notably Percy Allen, Bernard M. Ward, Louis P. Bénézet and Charles Wisner Barrell. Mainstream scholar Steven W. May has noted that Oxfordians of this period made genuine contributions to knowledge of Elizabethan history, citing "Ward's quite competent biography of the Earl" and "Charles Wisner Barrell's identification of Edward Vere, Oxford's illegitimate son by Anne Vavasour" as examples. In 1921, Sir George Greenwood, Looney, and others founded The Shakespeare Fellowship, an organization originally dedicated to the discussion and promotion of ecumenical anti-Stratfordian views, but which later became devoted to promoting Oxford as the true Shakespeare.
After a period of decline of the Oxfordian theory beginning with World War II, in 1952 Dorothy and Charlton Greenwood Ogburn published the 1,300-page "This Star of England", which briefly revived Oxfordism. A series of critical academic books and articles, however, held in check any appreciable growth of anti-Stratfordism and Oxfordism, most notably "The Shakespeare Ciphers Examined" (1957), by William and Elizebeth Friedman, "The Poacher from Stratford" (1958), by Frank Wadsworth, "Shakespeare and His Betters" (1958), by Reginald Churchill, "The Shakespeare Claimants" (1962), by H. N. Gibson, and "Shakespeare and his Rivals: A Casebook on the Authorship Controversy" (1962), by George L. McMichael and Edgar M. Glenn. By 1968 the newsletter of The Shakespeare Oxford Society reported that "the missionary or evangelical spirit of most of our members seems to be at a low ebb, dormant, or non-existent". In 1974, membership in the society stood at 80. In 1979, the publication of an analysis of the Ashbourne portrait dealt a further blow to the movement. The painting, long claimed to be one of the portraits of Shakespeare, but considered by Barrell to be an overpaint of a portrait of the Earl of Oxford, turned out to represent neither, but rather depicted Hugh Hamersley.
Charlton Ogburn, Jr., was elected president of The Shakespeare Oxford Society in 1976 and kick-started the modern revival of the Oxfordian movement by seeking publicity through moot court trials, media debates, television and later the Internet, including Wikipedia, methods which became standard for Oxfordian and anti-Stratfordian promoters because of their success in recruiting members of the lay public. He portrayed academic scholars as self-interested members of an "entrenched authority" that aimed to "outlaw and silence dissent in a supposedly free society", and proposed to counter their influence by portraying Oxford as a candidate on equal footing with Shakespeare.
In 1985 Ogburn published his 900-page "The Mysterious William Shakespeare: the Myth and the Reality", with a Foreword by Pulitzer prize-winning historian David McCullough who wrote: "[T]his brilliant, powerful book is a major event for everyone who cares about Shakespeare. The scholarship is surpassing—brave, original, full of surprise... The strange, difficult, contradictory man who emerges as the real Shakespeare, Edward de Vere, 17th Earl of Oxford, is not just plausible but fascinating and wholly believable."
By framing the issue as one of fairness in the atmosphere of conspiracy that permeated America after Watergate, he used the media to circumvent academia and appeal directly to the public. Ogburn's efforts secured Oxford's place as the most popular alternative candidate.
Although Shakespearian experts disparaged Ogburn's methodology and his conclusions, one reviewer, Richmond Crinkley, the Folger Shakespeare Library's former director of educational programs, acknowledged the appeal of Ogburn's approach, writing that the doubts over Shakespeare, "arising early and growing rapidly", have a "simple, direct plausibility", and the dismissive attitude of established scholars only worked to encourage such doubts. Though Crinkley rejected Ogburn's thesis, calling it "less satisfactory than the unsatisfactory orthodoxy it challenges", he believed that one merit of the book lay in how it forces orthodox scholars to reexamine their concept of Shakespeare as author. Spurred by Ogburn's book, "[i]n the last decade of the twentieth century members of the Oxfordian camp gathered strength and made a fresh assault on the Shakespearean citadel, hoping finally to unseat the man from Stratford and install de Vere in his place."
The Oxfordian theory returned to public attention in anticipation of the late October 2011 release of Roland Emmerich's drama film "Anonymous". Its distributor, Sony Pictures, advertised that the film "presents a compelling portrait of Edward de Vere as the true author of Shakespeare's plays", and commissioned high school and college-level lesson plans to promote the authorship question to history and literature teachers across the United States. According to Sony Pictures, "the objective for our Anonymous program, as stated in the classroom literature, is 'to encourage critical thinking by challenging students to examine the theories about the authorship of Shakespeare's works and to formulate their own opinions.' The study guide does not state that Edward de Vere is the writer of Shakespeare's work, but it does pose the authorship question which has been debated by scholars for decades".
Although most Oxfordians agree on the main arguments for Oxford, the theory has spawned schismatic variants that have not met with wide acceptance by all Oxfordians, although they have gained much attention.
In a letter written by Looney in 1933, he mentions that Allen and Ward were "advancing certain views respecting Oxford and Queen Eliz. which appear to me extravagant & improbable, in no way strengthen Oxford’s Shakespeare claims, and are likely to bring the whole cause into ridicule." Allen and Ward believed that they had discovered that Elizabeth and Oxford were lovers and had conceived a child. Allen developed the theory in his 1934 book "Anne Cecil, Elizabeth & Oxford". He argued that the child was given the name William Hughes, who became an actor under the stage-name "William Shakespeare". He adopted the name because his father, Oxford, was already using it as a pen-name for his plays. Oxford had borrowed the name from a third Shakespeare, the man of that name from Stratford-upon-Avon, who was a law student at the time, but who was never an actor "or" a writer. Allen later changed his mind about Hughes and decided that the concealed child was the Earl of Southampton, the dedicatee of Shakespeare's narrative poems. This secret drama, which has become known as the Prince Tudor theory, was covertly represented in Oxford's plays and poems and remained hidden until Allen and Ward's discoveries. The narrative poems and sonnets had been written by Oxford for his son. "This Star of England" (1952) by Charlton and Dorothy Ogburn included arguments in support of this version of the theory. Their son, Charlton Ogburn, Jr, agreed with Looney that the theory was an impediment to the Oxfordian movement and omitted all discussion about it in his own Oxfordian works.
However, the theory was revived and expanded by Elisabeth Sears in "Shakespeare and the Tudor Rose" (2002), and Hank Whittemore in "The Monument" (2005), an analysis of Shakespeare's Sonnets which interprets the poems as a poetic history of Queen Elizabeth, Oxford, and Southampton. Paul Streitz's "Oxford: Son of Queen Elizabeth I" (2001) advances a variation on the theory: that Oxford himself was the illegitimate son of Queen Elizabeth by her stepfather, Thomas Seymour. Oxford was thus the half-brother of his own son by the queen. Streitz also believes that the queen had children by the Earl of Leicester. These were Robert Cecil, 1st Earl of Salisbury, Robert Devereux, 2nd Earl of Essex, Mary Sidney and Elizabeth Leighton.
As with other candidates for authorship of Shakespeare's works, Oxford's advocates have attributed numerous non-Shakespearian works to him. Looney began the process in his 1921 edition of de Vere's poetry. He suggested that de Vere was also responsible for some of the literary works credited to Arthur Golding, Anthony Munday and John Lyly. Streitz credits Oxford with the Authorized King James Version of the Bible. Two professors of linguistics have claimed that de Vere wrote not only the works of Shakespeare, but most of what is memorable in English literature during his lifetime, with such names as Edmund Spenser, Christopher Marlowe, Philip Sidney, John Lyly, George Peele, George Gascoigne, Raphael Holinshed, Robert Greene, Thomas Phaer, and Arthur Golding being among dozens of further pseudonyms of de Vere. Ramon Jiménez has credited Oxford with such plays as "The True Tragedy of Richard III" and "Edmund Ironside".
Group theories in which Oxford played the principal role as writer, but collaborated with others to create the Shakespeare canon, were adopted by a number of early Oxfordians. Looney himself was willing to concede that Oxford may have been assisted by his son-in-law William Stanley, 6th Earl of Derby, who perhaps wrote "The Tempest". B.M. Ward also suggested that Oxford and Derby worked together. In his later writings Percy Allen argued that Oxford led a group of writers, among whom was William Shakespeare. Group theories with Oxford as the principal author or creative "master mind" were also proposed by Gilbert Standen in "Shakespeare Authorship" (1930), Gilbert Slater in "Seven Shakespeares" (1931) and Montagu William Douglas in "Lord Oxford and the Shakespeare Group" (1952).
Specialists in Elizabethan literary history object to the methodology of Oxfordian arguments. In lieu of any evidence of the type commonly used for authorship attribution, Oxfordians discard the methods used by historians and employ other types of arguments to make their case, the most common being supposed parallels between Oxford's life and Shakespeare's works.
Another is finding cryptic allusions to Oxford's supposed play writing in other literary works of the era that to them suggest that his authorship was obvious to those "in the know". David Kathman writes that their methods are subjective and devoid of any evidential value, because they use a "double standard". Their arguments are "not taken seriously by Shakespeare scholars because they consistently distort and misrepresent the historical record", "neglect to provide necessary context" and are in some cases "outright fabrication[s]". One major evidential objection to the Oxfordian theory is Edward de Vere's 1604 death, after which a number of Shakespeare's plays are generally believed to have been written. In "The Shakespeare Claimants", a 1962 examination of the authorship question, H. N. Gibson concluded that "... on analysis the Oxfordian case appears to me a very weak one".
Mainstream academics have often argued that the Oxford theory is based on snobbery: that anti-Stratfordians reject the idea that the son of a mere tradesman could write the plays and poems of Shakespeare. The Shakespeare Oxford Society has responded that this claim is "a substitute for reasoned responses to Oxfordian evidence and logic" and is merely an "ad hominem" attack.
Mainstream critics further say that, if William Shakespeare were a fraud instead of the true author, the number of people involved in suppressing this information would have made it highly unlikely to succeed. And citing the "testimony of contemporary writers, court records and much else" supporting Shakespeare's authorship, Columbia University professor James S. Shapiro says any theory claiming that "there must have been a conspiracy to suppress the truth of de Vere's authorship" based on the idea that "the very absence of surviving evidence proves the case" is a logically fatal tautology.
While no documentary evidence connects Oxford (or any alternative author) to the plays of Shakespeare, Oxfordian writers, including Mark Anderson and Charlton Ogburn, say that connection is made by considerable circumstantial evidence inferred from Oxford's connections to the Elizabethan theatre and poetry scene; the participation of his family in the printing and publication of the First Folio; his relationship with the Earl of Southampton (believed by most Shakespeare scholars to have been Shakespeare's patron); as well as a number of specific incidents and circumstances of Oxford's life that Oxfordians say are depicted in the plays themselves.
Oxford was noted for his literary and theatrical patronage, garnering dedications from a wide range of authors. For much of his adult life, Oxford patronised both adult and boy acting companies, as well as performances by musicians, acrobats and performing animals, and in 1583, he was a leaseholder of the first Blackfriars Theatre in London.
Oxford was related to several literary figures. His mother, Margery Golding, was the sister of the Ovid translator Arthur Golding, and his uncle, Henry Howard, Earl of Surrey, was the inventor of the English or Shakespearian sonnet form.
The three dedicatees of Shakespeare's works (the earls of Southampton, Montgomery and Pembroke) were each proposed as husbands for the three daughters of Edward de Vere. "Venus and Adonis" and "The Rape of Lucrece" were dedicated to Southampton (whom many scholars have argued was the Fair Youth of the "Sonnets"), and the "First Folio" of Shakespeare's plays was dedicated to Montgomery (who married Susan de Vere) and Pembroke (who was once engaged to Bridget de Vere).
In the late 1990s, Roger A. Stritmatter conducted a study of the marked passages found in Edward de Vere's Geneva Bible, which is now owned by the Folger Shakespeare Library. The Bible contains 1,028 instances of underlined words or passages and a few hand-written annotations, most of which consist of a single word or fragment. Stritmatter believes about a quarter of the marked passages appear in Shakespeare's works as either a theme, allusion, or quotation. Stritmatter grouped the marked passages into eight themes. Arguing that the themes fitted de Vere's known interests, he proceeded to link specific themes to passages in Shakespeare. Critics have doubted that any of the underlinings or annotations in the Bible can be reliably attributed to de Vere and not the book's other owners prior to its acquisition by the Folger Shakespeare Library in 1925, as well as challenging the looseness of Stritmatter's standards for a Biblical allusion in Shakespeare's works and arguing that there is no statistical significance to the overlap.
Shakespeare's native Avon and Stratford are referred to in two prefatory poems in the 1623 First Folio, one of which refers to Shakespeare as "Swan of Avon" and another to the author's "Stratford monument". Oxfordians say the first of these phrases could refer to one of Edward de Vere's manors, Bilton Hall, near the Forest of Arden, in Rugby, on the River Avon. This view was first expressed by Charles Wisner Barrell, who argued that De Vere "kept the place as a literary hideaway where he could carry on his creative work without the interference of his father-in-law, Burghley, and other distractions of Court and city life." Oxfordians also consider it significant that the nearest town to the parish of Hackney, where de Vere later lived and was buried, was also named Stratford. Mainstream scholar Irvin Matus demonstrated that Oxford sold the Bilton house in 1580, having previously rented it out, making it unlikely that Ben Jonson's 1623 poem would identify Oxford by referring to a property he once owned, but never lived in, and sold 43 years earlier. Nor is there any evidence of a monument to Oxford in Stratford, London, or anywhere else; his widow provided for the creation of one at Hackney in her 1613 will, but there is no evidence that it was ever erected.
Oxfordians also regard Rev. Dr. John Ward's 1662 diary entry stating that Shakespeare wrote two plays a year "and for that had an allowance so large that he spent at the rate of £1,000 a year" as a critical piece of evidence, since Queen Elizabeth I gave Oxford an annuity of exactly £1,000 beginning in 1586 that was continued until his death. Ogburn wrote that the annuity was granted "under mysterious circumstances", and Anderson suggests it was granted because of Oxford's writing patriotic plays for government propaganda. However, the documentary evidence indicates that the allowance was meant to relieve Oxford's embarrassed financial situation caused by the ruination of his estate.
Almost half of Shakespeare's plays are set in Italy, many of them containing details of Italian laws, customs, and culture which Oxfordians believe could only have been obtained by personal experiences in Italy, and especially in Venice. The author of "The Merchant of Venice", Looney believed, "knew Italy first hand and was touched with the life and spirit of the country". This argument had earlier been used by supporters of the Earl of Rutland and the Earl of Derby as authorship candidates, both of whom had also travelled on the continent of Europe. Oxfordian William Farina refers to Shakespeare's apparent knowledge of the Jewish ghetto, Venetian architecture and laws in "The Merchant of Venice", especially the city's "notorious Alien Statute". Historical documents confirm that Oxford lived in Venice, and travelled for over a year through Italy. He disliked the country, writing in a letter to Lord Burghley dated 24 September 1575, "I am glad I have seen it, and I care not ever to see it any more". Still, he remained in Italy for another six months, leaving Venice in March 1576. According to Anderson, Oxford definitely visited Venice, Padua, Milan, Genoa, Palermo, Florence, Siena and Naples, and probably passed through Messina, Mantua and Verona, all cities used as settings by Shakespeare. In testimony before the Venetian Inquisition, Edward de Vere was said to be fluent in Italian.
However, some Shakespeare scholars say that Shakespeare gets many details of Italian life wrong, including the laws and urban geography of Venice. Kenneth Gross writes that "the play itself knows nothing about the Venetian ghetto; we get no sense of a legally separate region of Venice where Shylock must dwell." Scott McCrea describes the setting as "a nonrealistic Venice" and the laws invoked by Portia as part of the "imaginary world of the play", inconsistent with actual legal practice. Charles Ross points out that Shakespeare's Alien Statute bears little resemblance to any Italian law. For later plays such as "Othello", Shakespeare probably used Lewes Lewknor's 1599 English translation of Gasparo Contarini's "The Commonwealth and Government of Venice" for some details about Venice's laws and customs.
Shakespeare may have derived much of this Italian material from John Florio, an Italian scholar living in England who was later thanked by Ben Jonson for helping him get Italian details right for his play "Volpone". Keir Elam has traced Shakespeare's Italian idioms in "Shrew" and some of the dialogue to Florio's "Second Fruits", a bilingual introduction to Italian language and culture published in 1591. Jason Lawrence believes that Shakespeare's Italian dialogue in the play derives "almost entirely" from Florio's "First Fruits" (1578). He also believes that Shakespeare became more proficient in reading the language as set out in Florio's manuals, as evidenced by his increasing use of Florio and other Italian sources for writing the plays.
In 1567 Oxford was admitted to Gray's Inn, one of the Inns of Court which Justice Shallow reminisces about in "Henry IV, Part 2". Sobran observes that the Sonnets "abound not only in legal terms – more than 200 – but also in elaborate legal conceits." These terms include "allege, auditor, defects, exchequer, forfeit, heirs, impeach, lease, moiety, recompense, render, sureties," and "usage". Shakespeare also uses the legal term "quietus" (final settlement) in Sonnet 126, the last Fair Youth sonnet.
Regarding Oxford's knowledge of court life, which Oxfordians believe is reflected throughout the plays, mainstream scholars say that any special knowledge of the aristocracy appearing in the plays can be more easily explained by Shakespeare's life-time of performances before nobility and royalty, and possibly, as Gibson theorises, "by visits to his patron's house, as Marlowe visited Walsingham."
Some of Oxford's lyric works have survived. Steven W. May, an authority on Oxford's poetry, attributes sixteen poems definitely, and four possibly, to Oxford, noting that these are probably "only a good sampling", as "both Webbe (1586) and Puttenham (1589) rank him first among the courtier poets, an eminence he probably would not have been granted, despite his reputation as a patron, by virtue of a mere handful of lyrics".
May describes Oxford as a "competent, fairly experimental poet working in the established modes of mid-century lyric verse" and his poetry as "examples of the standard varieties of mid-Elizabethan amorous lyric". In 2004, May wrote that Oxford's poetry was "one man's contribution to the rhetorical mainstream of an evolving Elizabethan poetic" and challenged readers to distinguish any of it from "the output of his mediocre mid-century contemporaries". C. S. Lewis wrote that de Vere's poetry shows "a faint talent", but is "for the most part undistinguished and verbose."
In the opinion of J. Thomas Looney, as "far as forms of versification are concerned De Vere presents just that rich variety which is so noticeable in Shakespeare; and almost all the forms he employs we find reproduced in the Shakespeare work." Oxfordian Louis P. Bénézet created the "Bénézet test", a collage of lines from Shakespeare and lines he thought were representative of Oxford, challenging non-specialists to tell the difference between the two authors. May notes that Looney compared various motifs, rhetorical devices and phrases with certain Shakespeare works to find similarities he said were "the most crucial in the piecing together of the case", but that for some of those "crucial" examples Looney used six poems mistakenly attributed to Oxford that were actually written by Greene, Campion, and Greville. Bénézet also used two lines from Greene that he thought were Oxford's, while succeeding Oxfordians, including Charles Wisner Barrell, have also misattributed poems to Oxford. "This on-going confusion of Oxford's genuine verse with that of at least three other poets", writes May, "illustrates the wholesale failure of the basic Oxfordian methodology."
According to a computerised textual comparison developed by the Claremont Shakespeare Clinic, the styles of Shakespeare and Oxford were found to be "light years apart", and the odds of Oxford having written Shakespeare were reported as "lower than the odds of getting hit by lightning". Furthermore, while the First Folio shows traces of a dialect identical to Shakespeare's, the Earl of Oxford, raised in Essex, spoke an East Anglian dialect. John Shahan and Richard Whalen condemned the Claremont study as comparing "apples to oranges", noting that it did not compare Oxford's songs to Shakespeare's songs or use a clean, unconfounded sample of Oxford's poems, and charging that the students working under Elliott and Valenza's supervision incorrectly assumed that Oxford's youthful verse was representative of his mature poetry.
Joseph Sobran's book, "Alias Shakespeare", includes Oxford's known poetry in an appendix with what he considers extensive verbal parallels with the work of Shakespeare, and he argues that Oxford's poetry is comparable in quality to some of Shakespeare's early work, such as "Titus Andronicus". Other Oxfordians say that de Vere's extant work is that of a young man and should be considered juvenilia, while May believes that all the evidence dates his surviving work to his early 20s and later.
Four contemporary critics praised Oxford as a poet and a playwright, three of them within his lifetime.
Mainstream scholarship characterises the extravagant praise for de Vere's poetry more as a convention of flattery than honest appreciation of literary merit. Alan Nelson, de Vere's documentary biographer, writes that "[c]ontemporary observers such as Harvey, Webbe, Puttenham and Meres clearly exaggerated Oxford's talent in deference to his rank."
Before the advent of copyright, anonymous and pseudonymous publication was a common practice in the sixteenth-century publishing world, and a passage in the "Arte of English Poesie" (1589), an anonymously published work itself, mentions in passing that literary figures in the court who wrote "commendably well" circulated their poetry only among their friends, "as if it were a discredit for a gentleman to seem learned" (Book 1, Chapter 8). In another passage 23 chapters later, the author (probably George Puttenham) speaks of aristocratic writers who, if their writings were made public, would appear to be excellent. It is in this passage that Oxford appears on a list of poets.
According to Daniel Wright, these combined passages confirm that Oxford was one of the concealed writers in the Elizabethan court. Critics of this view argue that neither Oxford nor any other writer is here identified as a concealed writer, but rather as the first in a list of "known" modern writers whose works have already been "made public", "of which number is first" Oxford, adding to the publicly acknowledged literary tradition dating back to Geoffrey Chaucer. Other critics interpret the passage to mean that the courtly writers and their works are known within courtly circles, but not to the general public. In either case, neither Oxford nor anyone else is identified as a hidden writer or one that used a pseudonym.
Oxfordians argue that at the time of the passage's composition (pre-1589), the writers referenced were not in print, and interpret Puttenham's passage (that the noblemen preferred to 'suppress' their work to avoid the discredit of appearing learned) to mean that they were 'concealed'. They cite Sir Philip Sidney, none of whose poetry was published until after his premature death, as an example. Similarly, by 1589 nothing by Greville was in print, and only one of Walter Raleigh's works had been published.
Critics point out that six of the nine poets listed had appeared in print under their own names long before 1589, including a number of Oxford's poems in printed miscellanies, and that the first poem published under Oxford's name was printed in 1572, 17 years before Puttenham's book appeared. Several other contemporary authors name Oxford as a poet, and Puttenham himself quotes one of Oxford's verses elsewhere in the book, referring to him by name as the author; critics conclude that Oxfordians misread Puttenham.
Oxfordians also believe other texts refer to Edward de Vere as a concealed writer. They argue that satirist John Marston's "Scourge of Villanie" (1598) contains further cryptic allusions to Oxford, named as "Mutius". Marston expert Arnold Davenport believes that Mutius is the bishop-poet Joseph Hall and that Marston is criticising Hall's satires.
There is a description of the figure of Oxford in "The Revenge of Bussy D'Ambois", a 1613 play by George Chapman, who has been suggested as the Rival Poet of Shakespeare's Sonnets. Chapman describes Oxford as "Rare and most absolute" in form and says he was "of spirit passing great / Valiant and learn’d, and liberal as the sun". He adds that he "spoke and writ sweetly" of both learned subjects and matters of state ("public weal").
For mainstream Shakespearian scholars, the most compelling evidence against Oxford (besides the historical evidence for William Shakespeare) is his death in 1604, since the generally accepted chronology of Shakespeare's plays places the composition of approximately twelve of the plays after that date. Critics often cite "The Tempest" and "Macbeth", for example, as having been written after 1604.
The exact dates of the composition of most of Shakespeare's plays are uncertain, although David Bevington says it is a 'virtually unanimous' opinion among teachers and scholars of Shakespeare that the canon of late plays depicts an artistic journey that extends well beyond 1604. Evidence for this includes allusions to historical events and literary sources which postdate 1604, as well as Shakespeare's adaptation of his style to accommodate Jacobean literary tastes and the changing membership of the King's Men and their different venues.
Oxfordians say that the conventional composition dates for the plays were developed by mainstream scholars to fit within Shakespeare's lifetime and that no evidence exists that any plays were written after 1604. Anderson argues that all of the Jacobean plays were written before 1604, selectively citing non-Oxfordian scholars like Alfred Harbage, Karl Elze, and Andrew Cairncross to bolster his case. Anderson notes that from 1593 through 1603, new plays were published at a rate of about two per year, and that whenever an inferior or pirated text appeared, it was typically followed by a genuine text described on the title page as "newly augmented" or "corrected". After the publication of the Q1 (1603) and Q2 (1604) texts of "Hamlet", no new plays were published until 1608. Anderson observes that, "After 1604, the 'newly correct[ing]' and 'augment[ing]' stops. Once again, the Shake-speare ["sic"] enterprise appears to have shut down".
Because Shakespeare lived until 1616, Oxfordians question why, if he were the author, he did not eulogise Queen Elizabeth at her death in 1603 or Henry, Prince of Wales, at his in 1612. They believe Oxford's 1604 death provides the explanation. In an age when such actions were expected, Shakespeare also failed to memorialise the coronation of James I in 1604, the marriage of Princess Elizabeth in 1612, and the investiture of Prince Charles as the new Prince of Wales in 1613.
Anderson contends that Shakespeare refers to the latest scientific discoveries and events through the end of the 16th century, but "is mute about science after de Vere's [Oxford's] death in 1604". He believes that the absence of any mention of the spectacular supernova of October 1604, or of Kepler's revolutionary 1609 study of planetary orbits, is especially noteworthy.
Professor Jonathan Bate writes that Oxfordians cannot "provide any explanation for ... technical changes attendant on the King's Men's move to the Blackfriars theatre four years after their candidate's death ... Unlike the Globe, the Blackfriars was an indoor playhouse" and so required plays with frequent breaks in order to replace the candles it used for lighting. "The plays written after Shakespeare's company began using the Blackfriars in 1608, "Cymbeline" and "The Winter's Tale" for instance, have what most ... of the earlier plays do not have: a carefully planned five-act structure". If new Shakespearian plays were being written especially for presentation at the Blackfriars' theatre after 1608, they could not have been written by Edward de Vere.
Oxfordians argue that Oxford was well acquainted with the Blackfriars Theatre, having been a leaseholder of the venue, and note that the "assumption" that Shakespeare wrote plays for the Blackfriars is not universally accepted, citing Shakespearian scholars such as A. Nicoll who said that "all available evidence is either completely negative or else runs directly counter to such a supposition" and Harley Granville-Barker, who stated "Shakespeare did not write (except for Henry V) five-act plays at any stage of his career. The five-act structure was formalized in the First Folio, and is inauthentic".
Further, attribution studies have shown that certain plays in the canon were written by two or three hands, which Oxfordians believe is explained by these plays being either drafted earlier than conventionally believed or revised and completed by others after Oxford's death. Shapiro calls this a 'nightmare' for Oxfordians, as it implies a 'jumble sale scenario' for Oxford's literary remains long after his death.
Some Oxfordians have identified titles or descriptions of lost works from Oxford's lifetime that suggest a thematic similarity to a particular Shakespearian play and asserted that they were earlier versions. For example, in 1732, the antiquarian Francis Peck published in "Desiderata Curiosa" a list of documents in his possession that he intended to print someday. They included "a pleasant conceit of Vere, earl of Oxford, discontented at the rising of a mean gentleman in the English court, circa 1580." Peck never published his archives, which are now lost. To Anderson, Peck's description suggests that this conceit is "arguably an early draft of "Twelfth Night"."
Oxfordian writers say some literary allusions imply that the playwright and poet died prior to 1609, when "Shake-Speares Sonnets" appeared with the epithet "our ever-living poet" in its dedication. They claim that the phrase "ever-living" rarely, if ever, referred to a living person, but instead was used to refer to the eternal soul of the deceased. Bacon, Derby, Neville, and William Shakespeare all lived well past the 1609 publication of the Sonnets, which on this reading would rule them out as the author.
However, Don Foster, in his study of Early Modern uses of the phrase "ever-living", argues that the phrase most frequently refers to God or other supernatural beings, suggesting that the dedication calls upon God to bless the living begetter (writer) of the sonnets. He states that the initials "W. H." were a misprint for "W. S." or "W. SH". Bate thinks it a misprint as well, but he thinks it "improbable" that the phrase refers to God and suggests that the "ever-living poet" might be "a great dead English poet who had written on the great theme of poetic immortality", such as Sir Philip Sidney or Edmund Spenser.
Joseph Sobran, in "Alias Shakespeare", argued that in 1607 William Barksted, a minor poet and playwright, implies in his poem "Mirrha the Mother of Adonis" that Shakespeare was already deceased. Shakespeare scholars reply that Sobran has simply misread Barksted's poem, whose last stanza compares Barksted's poem to Shakespeare's "Venus and Adonis", and has also mistaken its grammar, which makes clear that Barksted is referring to Shakespeare's "song" in the past tense, not to Shakespeare himself. This context is obvious when the rest of the stanza is included.
Against the Oxford theory are several references to Shakespeare, later than 1604, which imply that the author was then still alive. Scholars point to a poem written circa 1620 by William Basse, a student at Oxford, which states that the author Shakespeare died in 1616, the year William Shakespeare of Stratford died, not Edward de Vere.
Tom Veal has noted that the early play "The Two Gentlemen of Verona" reveals no familiarity on the playwright's part with Italy other than "a few place names and the scarcely recondite fact that the inhabitants were Roman Catholics." For example, the play's Verona is situated on a tidal river and has a duke, and none of the characters have distinctly Italian names, as in the later plays. Therefore, if the play was written by Oxford, it must have been written before he visited Italy in 1575. However, the play's principal source, the Spanish "Diana Enamorada", was not translated into French or English until 1578, meaning that someone basing a play on it that early could only have read it in the original Spanish, and there is no evidence that Oxford spoke that language. Furthermore, Veal argues, the only explanation for the verbal parallels with the English translation of 1582 would be that the translator saw the play performed and echoed it in his translation, which he describes as "not an impossible theory but far from a plausible one."
The composition date of "Hamlet" has been frequently disputed. Several surviving references indicate that a Hamlet-like play was well-known throughout the 1590s, well before the traditional period of composition (1599–1601). Most scholars refer to this lost early play as the Ur-Hamlet; the earliest reference is in 1589. A 1594 performance record of "Hamlet" appears in Philip Henslowe's diary, and Thomas Lodge wrote of it in 1596.
Oxfordian researchers believe that the play is an early version of Shakespeare's own play, and point to the fact that Shakespeare's version survives in three quite different early texts, Q1 (1603), Q2 (1604) and F (1623), suggesting the possibility that it was revised by the author over a period of many years.
Scholars contend that the composition date of "Macbeth" is one of the most overwhelming pieces of evidence against the Oxfordian position; the vast majority of critics believe the play was written in the aftermath of the Gunpowder Plot. This plot was brought to light on 5 November 1605, a year after Oxford died. In particular, scholars identify the porter's lines about "equivocation" and treason as an allusion to the trial of Henry Garnet in 1606. Oxfordians respond that the concept of "equivocation" was the subject of a 1583 tract by Queen Elizabeth's chief councillor (and Oxford's father-in-law) Lord Burghley, as well as of the 1584 "Doctrine of Equivocation" by the Spanish prelate Martín de Azpilcueta, which was disseminated across Europe and into England in the 1590s.
Shakespearian scholar David Haley asserts that if Edward de Vere had written "Coriolanus", he "must have foreseen the Midland Revolt grain riots [of 1607] reported in Coriolanus", possible topical allusions in the play that most Shakespearians accept.
The one play that can be dated within a fourteen-month period is "The Tempest". This play has long been believed to have been inspired by the 1609 wreck at Bermuda, then feared by mariners as the "Isle of the Devils", of the flagship of the Virginia Company, the Sea Venture, while leading the Third Supply to relieve Jamestown in the Colony of Virginia. The Sea Venture was captained by Christopher Newport, and carried the Admiral of the company's fleet, Sir George Somers (for whom the archipelago would subsequently be named "The Somers Isles"). The survivors spent nine months in Bermuda before most completed the journey to Jamestown on 23 May 1610 aboard two new ships built from scratch. One of the survivors was the newly appointed Governor, Sir Thomas Gates. Jamestown, then little more than a rudimentary fort, was found in such a poor condition, with the majority of the previous settlers dead or dying, that Gates and Somers decided to abandon the settlement and the continent, returning everyone to England. However, with the company believing all aboard the Sea Venture dead, a new governor, Baron De La Warr, had been sent with the Fourth Supply fleet, which arrived on 10 June 1610 as Jamestown was being abandoned.
De la Warr remained in Jamestown as Governor, while Gates returned to England (and Somers to Bermuda), arriving in September 1610. The news of the survival of the Sea Venture's passengers and crew caused a great sensation in England. Two accounts were published: Sylvester Jordain's "A Discovery of the Barmvdas, Otherwise Called the Ile of Divels", in October 1610, and "A True Declaration of the Estate of the Colonie in Virginia" a month later. The "True Reportory of the Wrack, and Redemption of Sir Thomas Gates Knight", an account by William Strachey dated 15 July 1610, returned to England with Gates in the form of a letter that was circulated privately until its eventual publication in 1625. Shakespeare had multiple contacts in the circle of people among whom the letter circulated, including Strachey himself. "The Tempest" shows clear evidence that he had read and relied on Jordain and especially Strachey: the play shares the premise, basic plot, and many details of the Sea Venture's wrecking and the adventures of the survivors, as well as specific details of wording. A detailed comparative analysis shows the "True Reportory" to have been the primary source from which the play was drawn. This firmly dates the writing of the play to the months between Gates' return to England and 1 November 1611.
Oxfordians have dealt with this problem in several ways. Looney expelled the play from the canon, arguing that its style and the "dreary negativism" it promoted were inconsistent with Shakespeare's "essentially positivist" soul, and so could not have been written by Oxford. Later Oxfordians have generally abandoned this argument, which has made severing the connection of the play with the wreck of the Sea Venture a priority amongst Oxfordians. A variety of attacks have been directed at the links, ranging from casting doubt on whether the "True Reportory" travelled back to England with Gates, whether Gates travelled back to England early enough, and whether the lowly Shakespeare would have had access to the lofty circles in which the "True Reportory" was circulated, to understating the points of similarity between the Sea Venture wreck and the accounts of it, on the one hand, and the play on the other. Oxfordians have even claimed that the writers of the first-hand accounts of the real wreck based them on "The Tempest", or at least on the same antiquated sources that Shakespeare, or rather Oxford, is imagined to have used exclusively, including Richard Eden's "The Decades of the New Worlde Or West India" (1555) and Desiderius Erasmus's "Naufragium"/"The Shipwreck" (1523). Alden Vaughan commented in 2008 that "[t]he argument that Shakespeare could have gotten every thematic thread, every detail of the storm, and every similarity of word and phrase from other sources stretches credulity to the limits."
Oxfordians note that while the conventional dating for "Henry VIII" is 1610–13, the majority of 18th- and 19th-century scholars, including notables such as Samuel Johnson, Lewis Theobald, George Steevens, Edmond Malone, and James Halliwell-Phillipps, placed the composition of "Henry VIII" prior to 1604, as they believed Elizabeth's execution of Mary, Queen of Scots (the mother of the then king, James I) made any vigorous defence of the Tudors politically inappropriate in the England of James I. Though it is described as a new play by two witnesses in 1613, Oxfordians argue that this refers to the fact that it was new on stage, having its first production in that year.
Although searching Shakespeare's works for encrypted clues supposedly left by the true author is associated mainly with the Baconian theory, such arguments are often made by Oxfordians as well. Early Oxfordians found many references to Oxford's family name "Vere" in the plays and poems, in supposed puns on words such as "ever" (E. Vere). In "The De Vere Code", a book by English actor Jonathan Bond, the author believes that Thomas Thorpe's 30-word dedication to the original publication of Shakespeare's Sonnets contains six simple encryptions which conclusively establish de Vere as the author of the poems. He also writes that the alleged encryptions settle the question of the identity of "the Fair Youth" as Henry Wriothesley and contain striking references to the sonnets themselves and de Vere's relationship to Sir Philip Sidney and Ben Jonson.
Similarly, a 2009 article in the Oxfordian journal "Brief Chronicles" noted that Francis Meres, in "Palladis Tamia", compares 17 named English poets to 16 named classical poets. Arguing that Meres was obsessed with numerology, the authors propose that the comparison was meant to be symmetrical, and that careful readers are meant to infer that Meres knew two of the English poets (viz., Oxford and Shakespeare) to be one and the same.
Literary scholars say that the idea that an author's work must reflect his or her life is a Modernist assumption not held by Elizabethan writers, and that biographical interpretations of literature are unreliable in attributing authorship. Further, such lists of similarities between incidents in the plays and the life of an aristocrat are flawed arguments because similar lists have been drawn up for many competing candidates, such as Francis Bacon and William Stanley, 6th Earl of Derby. Harold Love writes that "The very fact that their application has produced so many rival claimants demonstrates their unreliability," and Jonathan Bate writes that the Oxfordian biographical method "is in essence no different from the cryptogram, since Shakespeare's range of characters and plots, both familial and political, is so vast that it would be possible to find in the plays 'self-portraits' of ... anybody one cares to think of."
Despite this, Oxfordians list numerous incidents in Oxford's life that they say parallel those in many of the Shakespeare plays. Most notable among these, they say, are certain incidents found in both Oxford's biography and "Hamlet", as well as in "Henry IV, Part 1", which includes a well-known robbery scene with uncanny parallels to a real-life incident involving Oxford.
Most Oxfordians consider "Hamlet" the play most easily seen as portraying Oxford's life story, though mainstream scholars say that incidents from the lives of other contemporary figures such as King James or the Earl of Essex, fit the play just as closely, if not more so.
Hamlet's father was murdered and his mother made an "o'er-hasty marriage" less than two months later. Oxfordians see a parallel with Oxford's life: Oxford's father died at the age of 46 on 3 August 1562, having made his will six days earlier, and his mother remarried within 15 months, although exactly when is unknown.
Another frequently cited parallel involves Hamlet's revelation in Act IV that he was earlier taken captive by pirates. On Oxford's return from Europe in 1576, he encountered a cavalry division outside Paris that was led by a German duke, and his ship was hijacked by pirates who robbed him and left him stripped to his shirt, and who might have murdered him had not one of them recognised him. Anderson notes that "[n]either the encounter with Fortinbras' army nor Hamlet's brush with buccaneers appears in any of the play's sources – to the puzzlement of numerous literary critics."
Such speculation often identifies the character of Polonius as a caricature of Lord Burghley, Oxford's guardian from the age of 12.
In the First Quarto the character was not named Polonius, but Corambis. Ogburn writes that "Cor ambis" can be interpreted as "two-hearted" (a view not independently supported by Latinists). He says the name is a swipe "at Burghley's motto, "Cor unum, via una", or 'one heart, one way.'" Scholars suggest that it instead derives from the Latin phrase "crambe repetita", meaning "reheated cabbage", which was expanded in Elizabethan usage to "crambe bis posita mors est" ("twice-served cabbage is deadly"), implying "a boring old man" who spouts trite, rehashed ideas. Similar variants such as "Crambo" and "Crambe" appear in Latin-English dictionaries of the time.
In his "Memoires" (1658), Francis Osborne writes of "the last great "Earle of Oxford", whose "Lady" was brought to his bed under the notion of his "Mistris", and from such a virtuous deceit she (Oxford's youngest daughter) is said to proceed" (p. 79).
Such a bed trick has been a dramatic convention since antiquity and appears more than 40 times in the drama of the Early Modern theatre era, used by every major playwright except Ben Jonson. Thomas Middleton used it five times, and Shakespeare and James Shirley each used it four times. Shakespeare's use of it in "All's Well That Ends Well" and "Measure for Measure" followed his sources for the plays (stories by Boccaccio and Cinthio); nevertheless, Oxfordians say that de Vere was drawn to these stories because they "paralleled his own", based on Osborne's anecdote.
Oxfordians claim that flattering treatment of Oxford's ancestors in Shakespeare's history plays is evidence of his authorship. Shakespeare omitted the character of the traitorous Robert de Vere, 3rd Earl of Oxford, in "The Life and Death of King John", and the character of the 11th Earl of Oxford is given a much more prominent role in "Henry V" than his limited involvement in the actual history of the times would allow. The 11th Earl is given an even more prominent role in the non-Shakespearian play "The Famous Victories of Henry the fifth". Some Oxfordians argue that this was another play written by Oxford, based on the exaggerated role it gave to the 11th Earl of Oxford.
J. Thomas Looney found that John de Vere, 13th Earl of Oxford, is "hardly mentioned except to be praised" in "Henry VI, Part Three"; the play ahistorically depicts him participating in the Battle of Tewkesbury and being captured. Oxfordians, such as Dorothy and Charlton Ogburn, believe Shakespeare created such a role for the 13th Earl because it was the easiest way Edward de Vere could have "advertised his loyalty to the Tudor Queen" and remind her of "the historic part borne by the Earls of Oxford in defeating the usurpers and restoring the Lancastrians to power". Looney also notes that in "Richard III", when the future Henry VII appears, the same Earl of Oxford is "by his side; and it is Oxford who, as premier nobleman, replies first to the king's address to his followers".
Non-Oxfordian writers do not see any evidence of partiality for the de Vere family in the plays. Richard de Vere, 11th Earl of Oxford, who plays a prominent role in the anonymous "The Famous Victories of Henry V", does not appear in Shakespeare's "Henry V", nor is he even mentioned. In "Richard III", Oxford's reply to the king noted by Looney is a mere two lines, the only lines he speaks in the play. He has a much more prominent role in the non-Shakespearian play "The True Tragedy of Richard III". On these grounds the scholar Benjamin Griffin argues that the non-Shakespearian plays, the "Famous Victories" and "True Tragedy", are the ones connected to Oxford, possibly written for Oxford's Men. Oxfordian Charlton Ogburn Jr. argues that the role of the Earls of Oxford was played down in "Henry V" and "Richard III" to maintain Oxford's nominal anonymity. This is because "It would not do to have a performance of one of his plays at Court greeted with ill-suppressed knowing chuckles."
In 1577 the Company of Cathay was formed to support Martin Frobisher's hunt for the Northwest Passage, although Frobisher and his investors quickly became distracted by reports of gold at Hall’s Island. With thoughts of an impending Canadian gold-rush and trusting in the financial advice of Michael Lok, the treasurer of the company, de Vere signed a bond for £3,000 in order to invest £1,000 and to assume £2,000 worth – about half – of Lok's personal investment in the enterprise. Oxfordians say this is similar to Antonio in "The Merchant of Venice", who was indebted to Shylock for 3,000 ducats against the successful return of his vessels.
Oxfordians also note that when de Vere travelled through Venice, he borrowed 500 crowns from a Baptista Nigrone. In Padua, he borrowed from a man named Pasquino Spinola. In "The Taming of the Shrew", Kate's father is described as a man "rich in crowns." He, too, is from Padua, and his name is Baptista Minola, which Oxfordians take to be a conflation of Baptista Nigrone and Pasquino Spinola.
When the character of Antipholus of Ephesus in "The Comedy of Errors" tells his servant to go out and buy some rope, the servant (Dromio) replies, "I buy a thousand pounds a year! I buy a rope!" (Act 4, scene 1). The meaning of Dromio’s line has not been satisfactorily explained by critics, but Oxfordians say the line is somehow connected to the fact that de Vere was given a £1,000 annuity by the Queen, later continued by King James.
Oxfordians see Oxford's marriage to Anne Cecil, Lord Burghley's daughter, paralleled in such plays as "Hamlet", "Othello", "Cymbeline", "The Merry Wives of Windsor", "All's Well That Ends Well", "Measure for Measure", "Much Ado About Nothing", and "The Winter's Tale".
Oxford's illicit congress with Anne Vavasour resulted in an intermittent series of street battles between the Knyvet clan, led by Anne's uncle, Sir Thomas Knyvet, and Oxford’s men. As in "Romeo and Juliet", this imbroglio produced three deaths and several other injuries. The feud was finally put to an end only by the intervention of the Queen.
In May 1573, in a letter to Lord Burghley, two of Oxford's former employees accused three of Oxford's friends of attacking them on "the highway from Gravesend to Rochester." In Shakespeare's "Henry IV, Part 1", Falstaff and three roguish friends of Prince Hal also waylay unwary travellers at Gad's Hill, which is on the highway from Gravesend to Rochester. Scott McCrea says that there is little similarity between the two events, since the crime described in the letter is unlikely to have occurred near Gad's Hill and was not a robbery, but rather an attempted shooting. Mainstream writers also say that this episode derives from an earlier anonymous play, "The Famous Victories of Henry V", which was Shakespeare's source. Some Oxfordians argue that "The Famous Victories" was written by Oxford, based on the exaggerated role it gave to the 11th Earl of Oxford.
In 1609, a volume of 154 linked poems was published under the title "SHAKE-SPEARES SONNETS". Oxfordians believe the title ("Shake-Speares Sonnets") suggests a finality indicating that it was a completed body of work with no further sonnets expected, and consider the differences of opinion among Shakespearian scholars as to whether the Sonnets are fictional or autobiographical to be a serious problem facing orthodox scholars. Joseph Sobran questions why Shakespeare (who lived until 1616) failed to publish a corrected and authorised edition if they are fiction, as well as why they fail to match Shakespeare's life story if they are autobiographical. According to Sobran and other researchers, the themes and personal circumstances expounded by the author of the Sonnets are remarkably similar to Oxford's biography.
The 154-sonnet sequence appears to narrate the author's relationships with three figures: the Fair Youth, the Dark Lady or Mistress, and the Rival Poet. Beginning with Looney, most Oxfordians (exceptions are Percy Allen and Louis Bénézet) believe that the "Fair Youth" addressed in the early sonnets is Henry Wriothesley, 3rd Earl of Southampton, Oxford's peer and prospective son-in-law. The Dark Lady is believed by some Oxfordians to be Anne Vavasour, Oxford's mistress, who bore him a son out of wedlock. A case was made by the Oxfordian Peter R. Moore that the Rival Poet was Robert Devereux, Earl of Essex.
Sobran suggests that the so-called procreation sonnets were part of a campaign by Burghley to persuade Southampton to marry his granddaughter, Oxford's daughter Elizabeth de Vere, and says that it was more likely that Oxford would have participated in such a campaign than that Shakespeare would know the parties involved or presume to give advice to the nobility.
Oxfordians also assert that the tone of the poems is that of a nobleman addressing an equal rather than that of a poet addressing his patron. According to them, Sonnet 91 (which compares the Fair Youth's love to such treasures as high birth, wealth, and horses) implies that the author is in a position to make such comparisons, and the 'high birth' he refers to is his own.
Oxford was born in 1550 and was between 40 and 53 years old when he presumably would have written the sonnets. Shakespeare, born in 1564, would have been between 26 and 39, and even though the average life expectancy of Elizabethans was short, that age range was not considered old. In spite of this, age and growing older are recurring themes in the Sonnets, for example in Sonnets 138 and 37. In his later years, Oxford described himself as "lame". On several occasions, the author of the sonnets also described himself as lame, such as in Sonnets 37 and 89.
Sobran also believes "scholars have largely ignored one of the chief themes of the Sonnets: the poet's sense of disgrace ... [T]here can be no doubt that the poet is referring to something real that he expects his friends to know about; in fact, he makes clear that a wide public knows about it ... Once again the poet's situation matches Oxford's ... He has been a topic of scandal on several occasions. And his contemporaries saw the course of his life as one of decline from great wealth, honor, and promise to disgrace and ruin. This perception was underlined by enemies who accused him of every imaginable offense and perversion, charges he was apparently unable to rebut." Examples include Sonnets 29 and 112.
As early as 1576, Edward de Vere was writing about this subject in his poem "Loss of Good Name", which Steven W. May described as "a defiant lyric without precedent in English Renaissance verse."
The poems "Venus and Adonis" and "Lucrece", first published in 1593 and 1594 under the name "William Shakespeare", proved highly popular for several decades – with "Venus and Adonis" published six more times before 1616, while "Lucrece" required four additional printings during this same period. By 1598, they were so famous that the London poet and sonneteer Richard Barnfield wrote:
Shakespeare...
Whose "Venus" and whose "Lucrece" (sweet and chaste)
Thy name in fame's immortal Book have plac't
Live ever you, at least in Fame live ever:
Well may the Body die, but Fame dies never.
Despite such publicity, Sobran observed, "[t]he author of the Sonnets expects and hopes to be forgotten. While he is confident that his poetry will outlast marble and monument, it will immortalize his young friend, not himself. He says that his style is so distinctive and unchanging that 'every word doth almost tell my name,' implying that his name is otherwise concealed – at a time when he is publishing long poems under the name William Shakespeare. This seems to mean that he is not writing these Sonnets under that (hidden) name." Oxfordians have interpreted the phrase "every word" as a pun on the word "every", standing for "e vere" – thus telling his name. Mainstream writers respond that several sonnets literally do tell his name, containing numerous puns on the name Will[iam]; in sonnet 136 the poet directly says "thou lov'st me for my name is Will."
Based on Sonnets 81, 72, and others, Oxfordians assert that if the author expected his "name" to be "forgotten" and "buried", it would not have been the name that permanently adorned the published works themselves.
| https://en.wikipedia.org/wiki?curid=22676 |
Oxymoron
An oxymoron (usual plural oxymorons, more rarely oxymora) is a rhetorical device that uses an ostensible self-contradiction to illustrate a rhetorical point or to reveal a paradox.
A more general meaning of "contradiction in terms" (not necessarily for rhetorical effect) is recorded by the "OED" for 1902.
The term is first recorded as Latinized Greek "oxymorum", in Maurus Servius Honoratus (c. AD 400); it is derived from the Greek "oxys" ("sharp, keen, pointed") and "mōros" ("dull, stupid, foolish"); as it were, "sharp-dull", "keenly stupid", or "pointedly foolish". The word "oxymoron" is autological, i.e. it is itself an example of an oxymoron. The Greek compound word "oxymōron", which would correspond to the Latin formation, does not seem to appear in any known Ancient Greek works prior to the formation of the Latin term.
Oxymorons in the narrow sense are a rhetorical device used deliberately by the speaker, and intended to be understood as such by the listener.
In a more extended sense, the term "oxymoron" has also been applied to inadvertent or incidental contradictions, as in the case of "dead metaphors" ("barely clothed" or "terribly good"). Lederer (1990), in the spirit of "recreational linguistics", goes as far as to construct "logological oxymorons", such as reading the word "nook" as composed of "no" and "ok", or the surname "Noyes" as composed of "no" plus "yes", or far-fetched puns such as "divorce court", "U.S. Army Intelligence" or "press release".
There are a number of single-word oxymorons built from "dependent morphemes" (i.e. no longer a productive compound in English, but loaned as a compound from a different language), as with "pre-posterous" (lit. "with the hinder part before", compare "hysteron proteron", "upside-down", "head over heels", "ass-backwards" etc.) or "sopho-more" (an artificial Greek compound, lit. "wise-foolish").
The most common form of oxymoron involves an adjective–noun combination of two words, but they can also be devised in the meaning of sentences or phrases.
One classic example of the use of oxymorons in English literature can be found in this passage from Shakespeare's "Romeo and Juliet", where Romeo strings together thirteen in a row:
Why, then, O brawling love! O loving hate!
O any thing, of nothing first create!
O heavy lightness! serious vanity!
Mis-shapen chaos of well-seeming forms!
Feather of lead, bright smoke, cold fire, sick health!
Still-waking sleep, that is not what it is!
This love feel I, that feel no love in this.
Shakespeare heaps up many more oxymorons in "Romeo and Juliet," in particular ("Beautiful tyrant! fiend angelical! Dove-feather'd raven! wolvish-ravening lamb! Despised substance of divinest show!" etc.) and uses them in other plays, e.g. "I must be cruel only to be kind" ("Hamlet"), "fearful bravery" ("Julius Caesar"), "good mischief" ("The Tempest"), and in his sonnets, e.g. "tender churl", "gentle thief".
Other examples from English-language literature include:
"hateful good" (Chaucer, translating "odibile bonum"),
"proud humility" (Spenser),
"darkness visible" (Milton),
"beggarly riches" (John Donne),
"damn with faint praise" (Pope),
"expressive silence" (Thomson, echoing Cicero),
"melancholy merriment" (Byron),
"faith unfaithful", "falsely true" (Tennyson),
"conventionally unconventional", "tortuous spontaneity" (Henry James),
"delighted sorrow", "loyal treachery", "scalding coolness" (Hemingway).
In literary contexts, the author does not usually signal the use of an oxymoron, but in rhetorical usage, it has become common practice to advertise the use of an oxymoron explicitly to clarify the argument, as in the phrase "Epicurean pessimist".
In this example, "Epicurean pessimist" would be recognized as an oxymoron in any case, as the core tenet of Epicureanism is equanimity (which would preclude any sort of pessimist outlook). However, the explicit advertisement of the use of oxymorons opened up a sliding scale of less-than-obvious constructions, ending in the "opinion oxymorons" such as "business ethics".
J. R. R. Tolkien interpreted his own surname as derived from the Low German equivalent of "dull-keen" (High German "tollkühn"), which would be a literal equivalent of Greek "oxy-moron".
"Comical oxymoron" is a term for the claim, for comical effect, that a certain phrase or expression is an oxymoron (called "opinion oxymorons" by Lederer (1990)).
The humour derives from implying that an assumption (which might otherwise be expected to be controversial or at least non-evident) is so obvious as to be part of the lexicon.
An example of such a "comical oxymoron" is "educational television": the humour derives entirely from the claim that it is an oxymoron by the implication that "television" is so trivial as to be inherently incompatible with "education".
In a 2009 article called "Daredevil", Garry Wills accused William F. Buckley of popularising this trend, based on the success of the latter's claim that "an intelligent liberal is an oxymoron."
Examples popularized by comedian George Carlin in 1975 include "military intelligence" (a play on the lexical meanings of the term "intelligence", implying that "military" inherently excludes the presence of "intelligence") and "business ethics" (similarly implying that the mutual exclusion of the two terms is evident or commonly understood, rather than a partisan anti-corporate position).
Similarly, the term "civil war" is sometimes jokingly referred to as an "oxymoron" (punning on the lexical meanings of the word "civil").
Other examples include "honest politician", "Mexican food" (1989), "affordable caviar" (1993), "happily married", and "Microsoft Works" (2000).
Listing of antonyms, such as "good and evil", "male and female", "great and small", etc., does not create oxymorons, as it is not implied that any given object has the two opposing properties simultaneously.
In some languages, it is not necessary to place a conjunction like "and" between the two antonyms; such compounds (not necessarily of antonyms) are known as dvandvas (a term taken from Sanskrit grammar).
For example, in Chinese, compounds like 男女 (man and woman, male and female, gender), 阴阳 (yin and yang), 善恶 (good and evil, morality) are used to indicate couples, ranges, or the trait that these are extremes of.
The Italian "pianoforte" or "fortepiano" is an example from a Western language; the term is short for "gravicembalo col piano e forte", as it were "harpsichord with a range of different volumes", implying that it is possible to play both soft and loud (as well as intermediate) notes, not that the sound produced is somehow simultaneously "soft and loud". | https://en.wikipedia.org/wiki?curid=22677 |
Office of Strategic Services
The Office of Strategic Services (OSS) was a wartime intelligence agency of the United States during World War II, and a predecessor to the Central Intelligence Agency (CIA). The OSS was formed as an agency of the Joint Chiefs of Staff (JCS) to coordinate espionage activities behind enemy lines for all branches of the United States Armed Forces. Other OSS functions included the use of propaganda, subversion, and post-war planning. On December 14, 2016, the organization was collectively honored with a Congressional Gold Medal.
Prior to the formation of the OSS, the various departments of the executive branch, including the State, Treasury, Navy, and War Departments, conducted American intelligence activities on an "ad hoc" basis, with no overall direction, coordination, or control. The US Army and US Navy had separate code-breaking departments: the Signal Intelligence Service and OP-20-G. (A previous code-breaking operation of the State Department, the MI-8, run by Herbert Yardley, had been shut down in 1929 by Secretary of State Henry Stimson, who deemed it an inappropriate function for the diplomatic arm because "gentlemen don't read each other's mail.") The FBI was responsible for domestic security and anti-espionage operations.
President Franklin D. Roosevelt was concerned about American intelligence deficiencies. On the suggestion of William Stephenson, the senior British intelligence officer in the western hemisphere, Roosevelt requested that William J. Donovan draft a plan for an intelligence service based on the British Secret Intelligence Service (MI6) and Special Operations Executive (SOE). After submitting his work, "Memorandum of Establishment of Service of Strategic Information", Colonel Donovan was appointed "coordinator of information" on July 11, 1941, heading the new organization known as the office of the Coordinator of Information (COI).
Thereafter the organization was developed with British assistance; Donovan had responsibilities but no actual powers and the existing US agencies were skeptical if not hostile. Until some months after Pearl Harbor, the bulk of OSS intelligence came from the UK. British Security Co-ordination (BSC) trained the first OSS agents in Canada, until training stations were set up in the US with guidance from BSC instructors, who also provided information on how the SOE was arranged and managed. The British immediately made available their short-wave broadcasting capabilities to Europe, Africa, and the Far East and provided equipment for agents until American production was established.
The Office of Strategic Services was established by a Presidential military order issued by President Roosevelt on June 13, 1942, to collect and analyze strategic information required by the Joint Chiefs of Staff and to conduct special operations not assigned to other agencies. During the war, the OSS supplied policymakers with facts and estimates, but the OSS never had jurisdiction over all foreign intelligence activities. The FBI was left responsible for intelligence work in Latin America, and the Army and Navy continued to develop and rely on their own sources of intelligence.
OSS proved especially useful in providing a worldwide overview of the German war effort, its strengths and weaknesses. In direct operations it was successful in supporting Operation Torch in French North Africa in 1942, where it identified pro-Allied potential supporters and located landing sites. OSS operations in neutral countries, especially Stockholm, Sweden, provided in-depth information on German advanced technology. The Madrid station set up agent networks in France that supported the Allied invasion of southern France in 1944. Most famous were the operations in Switzerland run by Allen Dulles that provided extensive information on German strength, air defenses, submarine production, and the V-1 and V-2 weapons. It revealed some of the secret German efforts in chemical and biological warfare. Switzerland's station also supported resistance fighters in France and Italy, and helped with the surrender of German forces in Italy in 1945.
For the duration of World War II, the Office of Strategic Services conducted multiple activities and missions, including collecting intelligence by spying, performing acts of sabotage, waging propaganda warfare, organizing and coordinating anti-Nazi resistance groups in Europe, and providing military training for anti-Japanese guerrilla movements in Asia. At the height of its influence during World War II, the OSS employed almost 24,000 people.
From 1943 to 1945, the OSS played a major role in training Kuomintang troops in China and Burma, and recruited Kachin and other indigenous irregular forces for sabotage as well as guides for Allied forces in Burma fighting the Japanese Army. Among other activities, the OSS helped arm, train, and supply resistance movements in areas occupied by the Axis powers during World War II, including Mao Zedong's Red Army in China (a contact known as the Dixie Mission) and the Viet Minh in French Indochina. OSS officer Archimedes Patti played a central role in OSS operations in French Indochina and met frequently with Ho Chi Minh in 1945.
One of the greatest accomplishments of the OSS during World War II was its penetration of Nazi Germany by OSS operatives. The OSS was responsible for training German and Austrian individuals for missions inside Germany. Some of these agents included exiled communists and Socialist party members, labor activists, anti-Nazi prisoners-of-war, and German and Jewish refugees. The OSS also recruited and ran one of the war's most important spies, the German diplomat Fritz Kolbe.
From 1943 the OSS was in contact with the Austrian resistance group around Kaplan Heinrich Maier. As a result, information on plans and production facilities for V-2 rockets, Tiger tanks and aircraft (Messerschmitt Bf 109, Messerschmitt Me 163 Komet, etc.) was passed on to Allied general staffs, enabling Allied bombers to carry out accurate air strikes. Through its contacts with the Semperit factory near Auschwitz, the Maier group also sent very early word of the mass murder of Jews. The group was gradually dismantled by the German authorities because of a double agent who worked for both the OSS and the Gestapo; the ensuing investigation uncovered a transfer of money from the Americans to Vienna via Istanbul and Budapest, and most of the members were executed after a People's Court hearing.
In 1943, the Office of Strategic Services set up operations in Istanbul. Turkey, as a neutral country during the Second World War, was a place where both the Axis and Allied powers had spy networks. The railroads connecting central Asia with Europe, as well as Turkey's proximity to the Balkan states, placed it at a crossroads of intelligence gathering. The goal of the OSS Istanbul operation, called Project Net-1, was to infiltrate the territories of the old Ottoman and Austro-Hungarian Empires and foment subversive action there.
The head of operations at OSS Istanbul was a banker from Chicago named Lanning "Packy" Macfarland, who maintained a cover story as a banker for the American lend-lease program. Macfarland hired Alfred Schwarz, a Czechoslovakian engineer and businessman who came to be known as "Dogwood" and ended up establishing the Dogwood information chain. Dogwood in turn hired a personal assistant named Walter Arndt and established himself as an employee of the Istanbul Western Electrik Kompani. Through Schwarz and Arndt the OSS was able to infiltrate anti-fascist groups in Austria, Hungary, and Germany. Schwarz was able to convince Romanian, Bulgarian, Hungarian, and Swiss diplomatic couriers to smuggle American intelligence information into these territories and establish contact with elements antagonistic to the Nazis and their collaborators. Couriers and agents memorized information and produced analytical reports; when they were not able to memorize effectively, they recorded information on microfilm and hid it in their shoes or in hollowed-out pencils. Through this process information about the Nazi regime made its way to Macfarland and the OSS in Istanbul, and eventually to Washington.
While the OSS "Dogwood-chain" produced a large volume of information, its reliability was increasingly questioned by British intelligence. By May 1944, through collaboration between the OSS, British intelligence, Cairo, and Washington, the entire Dogwood-chain was found to be unreliable and dangerous: phony information had been planted on the OSS in order to misdirect Allied resources. Schwarz's Dogwood-chain, which was the largest American intelligence-gathering tool in occupied territory, was shut down shortly thereafter.
The OSS purchased Soviet code and cipher material (or Finnish information on them) from émigré Finnish army officers in late 1944. Secretary of State Edward Stettinius, Jr., protested that this violated an agreement President Roosevelt made with the Soviet Union not to interfere with Soviet cipher traffic from the United States. General Donovan might have copied the papers before returning them the following January, but there is no record of Arlington Hall receiving them, and CIA and NSA archives have no surviving copies. This codebook was in fact used as part of the Venona decryption effort, which helped uncover large-scale Soviet espionage in North America.
The OSS espionage and sabotage operations produced a steady demand for highly specialized equipment. General Donovan invited experts, organized workshops, and funded labs that later formed the core of the Research & Development Branch. Boston chemist Stanley P. Lovell became its first head, and Donovan humorously called him his "Professor Moriarty". Throughout the war years, the OSS Research & Development Branch successfully adapted Allied weapons and espionage equipment, and produced its own line of novel spy tools and gadgets, including silenced pistols, lightweight sub-machine guns, "Beano" grenades that exploded upon impact, explosives disguised as lumps of coal ("Black Joe") or bags of Chinese flour ("Aunt Jemima"), acetone time-delay fuses for limpet mines, compasses hidden in uniform buttons, playing cards that concealed maps, a 16mm Kodak camera in the shape of a matchbox, tasteless poison tablets ("K" and "L" pills), and cigarettes laced with tetrahydrocannabinol acetate (an extract of Indian hemp) to induce uncontrollable chattiness.
The OSS also developed innovative communication equipment such as wiretap gadgets, electronic beacons for locating agents, and the "Joan-Eleanor" portable radio system that made it possible for operatives on the ground to establish secure contact with a plane that was preparing to land or drop cargo. The OSS Research & Development Branch also printed fake German and Japanese-issued identification cards, and various passes, ration cards, and counterfeit money.
On August 28, 1943, Stanley Lovell was asked to make a presentation in front of a not very friendly audience of the Joint Chiefs of Staff, since the U.S. top brass were largely skeptical of all OSS plans beyond collecting military intelligence and were ready to split the OSS between the Army and the Navy. While explaining the purpose and mission of his department and introducing various gadgets and tools, he reportedly casually dropped into a waste basket a Hedy, a panic-inducing explosive device in the shape of a firecracker, which shortly produced a loud shrieking sound followed by a deafening boom. The presentation was interrupted and did not resume since everyone in the room fled. In reality, the Hedy, jokingly named after Hollywood movie star Hedy Lamarr for her ability to distract men, later saved the lives of some trapped OSS operatives.
Not all projects worked. Some ideas were odd, such as a failed attempt to use insects to spread anthrax in Spain. Stanley Lovell was later quoted saying, "It was my policy to consider any method whatever that might aid the war, however unorthodox or untried".
In 1939, a young physician named Christian J. Lambertsen developed an oxygen rebreather set (the Lambertsen Amphibious Respiratory Unit); after it was rejected by the U.S. Navy, he demonstrated it to the OSS in a pool at the Shoreham Hotel in Washington, D.C., in 1942. The OSS not only bought into the concept, but hired Lambertsen to lead the program and build up the dive element of the organization. His responsibilities included training and developing methods of combining self-contained diving and swimmer delivery, including the Lambertsen Amphibious Respiratory Unit, for the OSS "Operational Swimmer Group". Growing involvement of the OSS with coastal infiltration and water-based sabotage eventually led to the creation of the OSS Maritime Unit.
At Camp X, near Whitby, Ontario, an "assassination and elimination" training program was operated by the British Special Operations Executive, assigning exceptional masters in the art of knife-wielding combat, such as William E. Fairbairn and Eric A. Sykes, to instruct trainees. Many members of the Office of Strategic Services also were trained there. It was dubbed "the school of mayhem and murder" by George Hunter White who trained at the facility in the 1950s.
From these incipient beginnings, the OSS began to take charge of its own destiny, opening camps in the United States and eventually abroad. Prince William Forest Park (then known as Chopawamsic Recreational Demonstration Area) was the site of an OSS training camp that operated from 1942 to 1945. Area "C" was used extensively for communications training, whereas Area "A" was used for training some of the OGs (Operational Groups). Catoctin Mountain Park, now the location of Camp David, was the site of OSS training Area "B", where the first Special Operations, or SO, personnel were trained. Special Operations was modeled on Great Britain's Special Operations Executive, and included parachute, sabotage, self-defense, weapons, and leadership training to support guerrilla or partisan resistance. Considered most mysterious of all was the "cloak and dagger" Secret Intelligence, or SI, branch. Secret Intelligence employed "country estates as schools for introducing recruits into the murky world of espionage. Thus, it established Training Areas E and RTU-11 ("the Farm") in spacious manor houses with surrounding horse farms." Morale Operations training included psychological warfare and propaganda. The Congressional Country Club (Area F) in Bethesda, Maryland, was the primary OSS training facility. The facilities of the Catalina Island Marine Institute at Toyon Bay on Santa Catalina Island, California, are composed (in part) of a former OSS survival training camp. The National Park Service commissioned a study of OSS National Park training facilities by Professor John Chambers of Rutgers University.
The main OSS training camps abroad were located initially in Great Britain, French Algeria, and Egypt; later, as the Allies advanced, a school was established in southern Italy. In the Far East, OSS training facilities were established in India, Ceylon, and then China. The London branch of the OSS, its first overseas facility, was at 70 Grosvenor Street, W1. In addition to training local agents, the overseas OSS schools also provided advanced training and field exercises for graduates of the training camps in the United States and for Americans who enlisted in the OSS in the war zones. The most famous of the latter was Virginia Hall in France.
The OSS's Mediterranean training center in Cairo, Egypt, known to many as the "Spy School", was a lavish palace belonging to King Farouk's brother-in-law, called "Ras el Kanayas". It was modeled after the SOE's training facility STS 102 in Haifa, Palestine. Americans whose heritage stemmed from Italy, Yugoslavia, and Greece were trained at the "Spy School" and also sent for parachute, weapons and commando training, and Morse code and encryption lessons at STS 102. After completion of their spy training, these agents were sent back on missions to the Balkans and Italy where their accents would not pose a problem for their assimilation.
The names of all 24,000 OSS personnel and documents of their OSS service, previously a closely guarded secret, were released by the US National Archives on August 14, 2008. Among the names were those of Carl C. Cable, Julia Child, Ralph Bunche, Arthur Goldberg, Saul K. Padover, Arthur Schlesinger, Jr., Bruce Sundlun, Rene Joyeuse MD and John Ford. The 750,000 pages in the 35,000 personnel files include applications of people who were not recruited or hired, as well as the service records of those who served.
OSS soldiers were primarily inducted from the United States Armed Forces. Other members were foreign nationals, including displaced individuals from former czarist Russia such as Prince Serge Obolensky.
Donovan sought independent thinkers, and in order to bring together many intelligent, quick-witted individuals who could think outside the box, he chose them from all walks of life and backgrounds, without regard to culture or religion. Donovan was quoted as saying, "I'd rather have a young lieutenant with enough guts to disobey a direct order than a colonel too regimented to think for himself." In a matter of a few short months, he formed an organization that equalled and then rivalled Great Britain's Secret Intelligence Service and Special Operations Executive. One such agent was the Ivy League polyglot and Jewish-American baseball catcher Moe Berg, who played 15 seasons in the major leagues. As a Secret Intelligence agent, he was dispatched to seek information on the German physicist Werner Heisenberg and his knowledge of the atomic bomb. One of the most highly decorated and flamboyant OSS soldiers was US Marine Colonel Peter Ortiz. Enlisting early in the war as a French Foreign Legionnaire, he went on to join the OSS and become the most highly decorated US Marine in the OSS during World War II. Julia Child, who later authored cookbooks, worked directly under Donovan.
"Jumping Joe" Savoldi (code name Sampson) was recruited by the OSS in 1942 because of his hand-to-hand combat and language skills as well as his deep knowledge of the Italian geography and Benito Mussolini's compound. He was assigned to the Special Operations branch and took part in missions in North Africa, Italy, and France during 1943–1945.
Taro and Mitsu Yashima, both Japanese political dissidents who were imprisoned in Japan for protesting its militarist regime, worked for the OSS in psychological warfare against the Japanese Empire.
Nisei linguists
In late 1943, an OSS representative visited the 442nd Infantry Regiment looking to recruit volunteers willing to undertake an "extremely hazardous assignment." All those selected were Nisei. The recruits were assigned to OSS Detachments 101 and 202 in the China-Burma-India Theater. "Once deployed, they were to interrogate prisoners, translate documents, monitor radio communications, and conduct covert operations... Detachment 101 and 202's clandestine operations were extremely successful."
On September 20, 1945, President Truman signed Executive Order 9621, terminating the OSS. The State Department took over the Research and Analysis Branch, which became the Bureau of Intelligence and Research. The War Department took over the Secret Intelligence (SI) and Counter-Espionage (X-2) Branches, which were then housed in the new Strategic Services Unit (SSU). Brigadier General John Magruder (formerly Donovan's Deputy Director for Intelligence in the OSS) became the new SSU director. He oversaw the liquidation of the OSS and managed the institutional preservation of its clandestine intelligence capability.
In January 1946, President Truman created the Central Intelligence Group (CIG), which was the direct precursor to the CIA. SSU assets, which now constituted a streamlined "nucleus" of clandestine intelligence, were transferred to the CIG in mid-1946 and reconstituted as the Office of Special Operations (OSO). The National Security Act of 1947 established the first permanent peacetime intelligence agency in the United States, the Central Intelligence Agency, which then took up OSS functions. The direct descendant of the paramilitary component of the OSS is the CIA Special Activities Division.
Today, the joint-branch United States Special Operations Command, founded in 1987, uses the same spearhead design on its insignia as an homage to its indirect lineage.
Tabletop roleplaying games
The OSS is also mentioned in Pelgrane Press's "The Fall of DELTA GREEN". Player characters can be ex-OSS agents serving in other agencies such as the CIA, where the authenticity, experience, and authority of their past OSS careers can be beneficial. | https://en.wikipedia.org/wiki?curid=22679 |
Oda Nobunaga
Nobunaga was head of the powerful Oda clan of Owari Province and launched a war against other "samurai" to unify Japan in the 1560s. Nobunaga emerged as the most powerful "daimyō" in Japan, overthrowing the nominally ruling "shōgun" Ashikaga Yoshiaki and dissolving the Ashikaga Shogunate in 1573, then conquering most of Honshu and defeating the "Ikkō-ikki" rebels by the 1580s. Nobunaga's rule was noted for innovative military tactics, fostering free trade, reform of Japan's civil administration, and encouraging the start of the Momoyama historical art period, but also for the brutal suppression of opponents, eliminating those who refused to cooperate or yield to his demands. Nobunaga was killed in the Honnō-ji Incident of 1582, when his retainer Akechi Mitsuhide ambushed him in Kyoto and forced him to commit suicide. Nobunaga was succeeded by Toyotomi Hideyoshi, who completed his war of unification shortly afterwards.
Nobunaga was an influential figure in Japanese history and is regarded as one of the three great unifiers, along with his retainers Toyotomi Hideyoshi and Tokugawa Ieyasu. Nobunaga initiated the Azuchi-Momoyama period of Japan, named in part after his castle, Azuchi Castle, which led to the end of the Sengoku period and the eventual transition to the Edo period.
The goal of national unification and a return to the comparative political stability of the earlier Muromachi period was widely shared by the multitude of autonomous "daimyōs" during the Sengoku period. Oda Nobunaga was the first for whom this goal seemed attainable. Nobunaga had gained control over most of Honshu before his death during the 1582 Honnō-ji incident, a coup attempt executed by Nobunaga's vassal, Akechi Mitsuhide. Betrayed by his own retainers, who set the Honnō-ji temple on fire, Nobunaga committed seppuku rather than burn in the flames. The motivation behind Mitsuhide's betrayal was never revealed to anyone who survived the incident, and has been a subject of debate and conjecture ever since.
Following the incident, Mitsuhide declared himself master over Nobunaga's domains, but was quickly defeated by Toyotomi Hideyoshi, who regained control of and greatly expanded the Oda holdings. Nobunaga's successful subjugation of much of Honshu enabled the later successes of his allies Hideyoshi and Tokugawa Ieyasu toward the goal of national unification by subjugating local "daimyōs" under a hereditary shogunate, which was ultimately accomplished in 1603 when Ieyasu was granted the title of "shōgun" by Emperor Go-Yōzei following the successful Sekigahara Campaign of 1600. The nature of the succession of power through the three "daimyōs" is reflected in a well-known Japanese idiom: "Nobunaga pounds the national rice cake, Hideyoshi kneads it, and in the end, Ieyasu sits down and eats it." All three were born within eight years of each other (1534 to 1542), started their careers as samurai and finished them as statesmen. Nobunaga inherited his father's domain at the age of 17, and quickly gained control of Owari province through gekokujo. Hideyoshi started his career in Nobunaga's army as an ashigaru, but quickly rose up through the ranks as a samurai. Ieyasu initially fought against Nobunaga as the heir of a rival daimyo, but later expanded his own inheritance through a profitable alliance with Nobunaga.
Oda Nobunaga was born on 23 June 1534 in Nagoya, Owari Province, the second son of Oda Nobuhide, the head of the powerful Oda clan and a deputy "shugo" (military governor) with land holdings in Owari. Nobunaga is said to have been born in Nagoya Castle, the future seat of the Owari Domain, although this is subject to debate. Nobunaga was given the childhood name of Kippōshi, and through his childhood and early teenage years became well known for his bizarre behavior, receiving the nickname "the Fool of Owari". Nobunaga was known to run around with other youths from the area, without any regard to his own rank in society, and with the introduction of firearms into Japan he became known for his fondness for tanegashima guns.
In 1551, Oda Nobuhide died unexpectedly, and Nobunaga was said to have acted outrageously during his funeral, throwing ceremonial incense at the altar. Although Nobunaga was Nobuhide's legitimate heir, a succession crisis occurred when some of the Oda clan were divided against him. Hirate Masahide, a valuable mentor and retainer to Nobunaga, performed "seppuku" to startle Nobunaga into his obligations. Nobunaga, collecting about a thousand men, suppressed members of his family who were hostile to his rule and their allies.
Nobunaga's main rival as head of the Oda clan was his younger brother, Oda Nobuyuki. In 1555, Nobunaga defeated Nobuyuki at the Battle of Ino, though Nobuyuki survived and began plotting a second rebellion. In 1556, Nobunaga destroyed a rival branch of the Oda clan located in Kiyosu Castle. Nobunaga then took an army to Mino Province to aid Saitō Dōsan after Dōsan's son, Saitō Yoshitatsu, turned against him. The campaign failed, as Dōsan was killed in the Battle of Nagara-gawa and Yoshitatsu became the new master of Mino in 1556. In 1557, Nobunaga's retainer Ikeda Nobuteru defeated Nobuyuki in the Siege of Suemori, killing Nobuyuki and destroying Suemori Castle.
In 1558, Nobunaga protected Suzuki Shigeteru in the Siege of Terabe. Shigeteru had defected to Nobunaga's side from Imagawa Yoshimoto, a "daimyō" from Suruga Province and one of the most powerful men in the Tōkaidō region. Yoshimoto was a long-time opponent of Nobunaga's father, and had sought to expand his domain into Oda territory in Owari.
By 1559, Nobunaga had eliminated all opposition within the Oda clan and established his uncontested rule in Owari Province.
In 1560, Imagawa Yoshimoto gathered an army of 25,000 men and started his march toward the capital city of Kyoto, on the pretext of aiding the frail Ashikaga Shogunate. The Matsudaira clan of Mikawa Province also joined Yoshimoto's forces. Against this, the Oda clan could rally an army of only 2,000 to 3,000 men. Advisers suggested that he "stand a siege at Kiyosu", but Nobunaga refused, stating that "only a strong offensive policy could make up for the superior numbers of the enemy", and calmly ordered a counterattack against Yoshimoto.
Nobunaga's scouts reported that Yoshimoto was resting at the narrow gorge of Dengaku-hazama, ideal for a surprise attack, and that the Imagawa army was celebrating their victories while Yoshimoto viewed the heads. Nobunaga moved towards Imagawa's camp and set up a position some distance away. An array of flags and dummy troops made of straw and spare helmets gave the impression of a large host, while the real Oda army hurried round in a rapid march to get behind Yoshimoto's camp. The heat gave way to a terrific thunderstorm, and as the Imagawa samurai sheltered from the rain, Nobunaga deployed his troops. When the storm ceased, they charged down upon the enemy in the gorge, so suddenly that Yoshimoto thought a brawl had broken out among his men, only realizing it was an attack when two of Nobunaga's samurai, Mōri Shinsuke and Hattori Koheita, charged at him. One aimed a spear at him, which Yoshimoto deflected with his sword, but the second swung his blade and decapitated him.
Rapidly weakening in the wake of this battle, the Imagawa clan no longer exerted control over the Matsudaira clan. In 1561, an alliance was forged between Oda Nobunaga and Matsudaira Motoyasu (who would become Tokugawa Ieyasu), despite the decades-old hostility between the two clans. Nobunaga also formed an alliance with Takeda Shingen through the marriage of his daughter to Shingen's son. A similar relationship was forged when Nobunaga's sister Oichi married Azai Nagamasa of Ōmi Province.
Tradition dates this battle as the first time that Nobunaga noticed the talents of the sandal-bearer who would eventually become Toyotomi Hideyoshi.
In 1561, Saitō Yoshitatsu, the anti-Nobunaga ruler of Mino, died suddenly of illness and was succeeded by his son, Saitō Tatsuoki. However, Tatsuoki was young and much less effective as a ruler and military strategist than his father and grandfather. Taking advantage of this situation, Nobunaga moved his base to Komaki Castle and started his campaign in Mino at the Battle of Moribe in 1561. By convincing Saitō retainers to abandon their incompetent and foolish master, Nobunaga significantly weakened the Saitō clan, eventually mounting a victorious final attack at the Siege of Inabayama Castle in 1567. After taking possession of the castle, Nobunaga changed the name of both Inabayama Castle and the surrounding town to Gifu. Nobunaga derived the term "Gifu" from the legendary Mount Qi (岐山, "Qi" in Standard Chinese) in China, from which the Zhou dynasty is fabled to have risen. Nobunaga revealed his ambition to conquer the whole of Japan, and also started using a new personal seal that read "Tenka Fubu" (天下布武), meaning "All the world by force of arms" or "Rule the Empire by Force". Remains of Nobunaga's residence in Gifu can be found today in Gifu Park.
In 1568, Ashikaga Yoshiaki went to Gifu to ask Nobunaga to start a campaign toward Kyoto. Yoshiaki was the brother of the murdered 13th "shōgun" of the Ashikaga Shogunate, Yoshiteru, and wanted revenge against the killers who had already set up a puppet "shōgun", Ashikaga Yoshihide. Nobunaga agreed to install Yoshiaki as the new "shōgun" and, grasping the opportunity to enter Kyoto, started his campaign. An obstacle in southern Ōmi Province was the Rokkaku clan, led by Rokkaku Yoshikata, which refused to recognize Yoshiaki as "shōgun" and was ready to go to war to defend Yoshihide. In response, Nobunaga launched a rapid attack, driving the Rokkaku clan out of their castles.
On 9 November 1568, Nobunaga entered Kyoto and installed Yoshiaki as the 15th "shōgun" of the Ashikaga Shogunate. However, Nobunaga refused any appointment from Yoshiaki, and their relationship grew difficult, though Nobunaga showed the Emperor Ōgimachi great respect.
The Asakura clan was particularly disdainful of the Oda clan's increasing power in Japan. Furthermore, Asakura Yoshikage had also protected Ashikaga Yoshiaki, but had not been willing to march toward Kyoto. When Nobunaga launched a campaign into the Asakura clan's domain, Azai Nagamasa, to whom Nobunaga's sister Oichi was married, broke the alliance with the Oda to honor the Azai–Asakura alliance, which had lasted for generations. With the help of allied Ikkō-ikki rebels, the anti-Nobunaga alliance sprang into full force, taking a heavy toll on the Oda clan. At the Battle of Anegawa, Tokugawa Ieyasu joined forces with Nobunaga and defeated the combined forces of the Asakura and Azai clans.
The Enryaku-ji monastery on Mt. Hiei, with its "sōhei" (warrior monks) of the Tendai school, was an issue for Nobunaga: it aided the anti-Nobunaga group by helping the Azai–Asakura alliance, and it lay close to his base of power. In September 1571, Nobunaga preemptively attacked Enryaku-ji and razed it in the Siege of Mount Hiei, in the process killing "monks, laymen, women and children"; a contemporary account records that "the whole mountainside was a great slaughterhouse, and the sight was one of unbearable horror."
Nobunaga faced a significant threat from the "Ikkō-ikki", a resistance movement centered around the Jōdo Shinshū sect of Buddhism. The "Ikkō-ikki" began as a religious association for self-defence, but popular antipathy against the samurai from the constant violence of the Sengoku period caused their numbers to swell. By the time of Nobunaga's rise to power, the "Ikkō-ikki" was a major organized armed force opposed to samurai rule in Japan. In August 1570, Nobunaga began a campaign against the "Ikkō-ikki" while fighting against his samurai rivals. In May 1571, Nobunaga besieged Nagashima, a series of "Ikkō-ikki" fortifications in Owari Province, beginning the Sieges of Nagashima. Nobunaga's first siege was a definite failure, as his trusted general Shibata Katsuie was severely wounded and many of his samurai were lost before retreating. Despite this defeat, Nobunaga was inspired to launch another siege after the success of the Siege of Mount Hiei. In July 1573, Nobunaga besieged Nagashima for a second time, personally leading a sizeable force with many arquebusiers. However, a rainstorm rendered his arquebusiers unable to fire their weapons, while the "Ikkō-ikki"'s own arquebusiers could fire from covered positions. Nobunaga himself was almost killed and forced to retreat, with the second siege considered his greatest defeat. In 1574, Nobunaga launched a third siege as his general Kuki Yoshitaka began a naval blockade and bombardment of Nagashima, allowing him to capture the outer forts of Nakae and Yanagashima as well as part of the Nagashima complex. The Sieges of Nagashima finally ended when Nobunaga's men completely surrounded the complex and set fire to it, killing the remaining tens of thousands of defenders and inflicting tremendous losses on the "Ikkō-ikki".
Simultaneously, Nobunaga had been besieging the "Ikkō-ikki"'s main stronghold at Ishiyama Hongan-ji in present-day Osaka. Nobunaga's Siege of Ishiyama Hongan-ji began to slowly make some progress, but the Mōri clan of the Chūgoku region broke his naval blockade and started sending supplies into the strongly fortified complex by sea. As a result, in 1577, Hashiba Hideyoshi was ordered by Nobunaga to confront the warrior monks at the Siege of Negoroji, and Nobunaga eventually blocked the Mōri's supply lines. In 1580, ten years after the siege of Ishiyama Hongan-ji began, the son of Chief Abbot Kōsa surrendered the fortress to Nobunaga after their supplies were exhausted and they received an official request from the Emperor to do so. Nobunaga spared the lives of Ishiyama Hongan-ji's defenders, but expelled them from Osaka and burnt the fortress to the ground. Although the "Ikkō-ikki" continued to make a last stand in Kaga Province, Nobunaga's capture of Ishiyama Hongan-ji crippled them as a major militant force.
One of the strongest rulers in the anti-Nobunaga alliance was Takeda Shingen, in spite of his generally peaceful relationship and a nominal alliance with the Oda clan. In 1572, Shingen decided to make a drive for Kyoto at the urging of the "shōgun" Yoshiaki, starting by invading Tokugawa territory. Nobunaga, tied down on the western front, sent lackluster aid to Tokugawa Ieyasu, who suffered defeat at the Battle of Mikatagahara in 1573. However, after the battle, Tokugawa's forces launched night raids and convinced the Takeda of an imminent counter-attack, saving the vulnerable Tokugawa with the bluff. This would play a pivotal role in Tokugawa's philosophy of strategic patience in his campaigns alongside Nobunaga. Shortly thereafter, the Takeda forces were neutralized after Shingen died from throat cancer in April 1573. This was a relief for Nobunaga because he could now focus on Yoshiaki, who had openly declared hostility more than once, despite the Imperial Court's intervention. Nobunaga was able to defeat Yoshiaki's forces and send him into exile, bringing the Ashikaga Shogunate to an end in 1573. That same year, Nobunaga destroyed the Asakura and Azai clans by driving their leaders to suicide.
The combined forces of Nobunaga and Tokugawa Ieyasu devastated the Takeda clan with the strategic use of arquebuses at the decisive Battle of Nagashino in Mikawa Province. Nobunaga compensated for the arquebus's slow reloading time by arranging the arquebusiers in three lines, firing in rotation. From there, Nobunaga continued his expansion, sending Akechi Mitsuhide to pacify Tanba Province before advancing upon the Mōri in Nagato Province. However, Uesugi Kenshin, the rival of both the Takeda and the Oda, clashed with Nobunaga at the Battle of Tedorigawa in Kaga Province in November 1577. The result was a decisive Uesugi victory, and Nobunaga considered ceding the northern provinces to Kenshin, but Kenshin's sudden death in early 1578 caused a succession crisis that ended the Uesugi's movement south.
The Tenshō Iga War is the name given to two invasions of Iga Province by the Oda clan during the Sengoku period. The province was conquered by Oda Nobunaga in 1581 after an unsuccessful attempt in 1579 by his son Oda Nobukatsu. The name derives from the Tenshō era (1573–1592) in which the invasions occurred.
By 1582, Nobunaga was at the height of his power and, as the most powerful warlord, the "de facto" leader of Japan. Nobunaga acquired many official titles, including "Gondainagon" and "Ukon'etaishō" in 1574, and Minister of the Right ("Udaijin") in 1576. Nobunaga and Ieyasu finally defeated the Takeda at the Battle of Tenmokuzan, destroying the clan and resulting in Takeda Katsuyori fleeing from the battle before committing suicide with his wife while being pursued by Oda forces. By this point, Nobunaga was preparing to launch invasions into Echigo Province and Shikoku. Nobunaga's former sandal bearer, Hashiba Hideyoshi, invaded Bitchū Province and laid siege to Takamatsu Castle. The castle was vital to the Mori clan, and losing it would have left the Mori's home domain vulnerable. Mori reinforcements led by Mōri Terumoto arrived to relieve the siege, prompting Hideyoshi to ask for reinforcements from Nobunaga, who promptly ordered his leading generals to prepare their armies, with the overall expedition to be led by Nobunaga. Nobunaga left Azuchi Castle for Honnō-ji, a temple in Kyoto he frequented when visiting the city, where he was to hold a tea ceremony. Hence, Nobunaga only had 30 pages with him, while his son Oda Nobutada had brought 2000 of his cavalrymen.
Akechi Mitsuhide, stationed in the Chūgoku region, decided to assassinate Nobunaga for unknown reasons, and the cause of his betrayal remains controversial. Mitsuhide, aware that Nobunaga was nearby and unprotected for his tea ceremony, saw an opportunity to act. Mitsuhide led his army toward Kyoto under the pretense of following Nobunaga's orders, but as they were crossing the Katsura River, Mitsuhide announced to his troops that "The enemy awaits at Honnō-ji!" (敵は本能寺にあり, Teki wa Honnō-ji ni ari). On 21 June 1582, before dawn, the Akechi army surrounded the Honnō-ji temple with Nobunaga inside, while another unit of Akechi troops was sent to Myōkaku-ji as part of the coup. Although Nobunaga and his servants resisted the unexpected intrusion, they were soon overwhelmed. As the Akechi troops closed in, Nobunaga decided to commit "seppuku" in one of the inner rooms. Reportedly his last words were, "Ran, don't let them come in ...", referring to his young page, Mori Ranmaru, who set the temple on fire as Nobunaga requested so that no one would be able to take his decapitated head. Ranmaru then followed his lord in death; his loyalty and devotion made him a revered figure in Japanese history. Nobunaga's remains were never found, a fact often speculated about by writers and historians. After capturing Honnō-ji, Mitsuhide attacked Nobutada, the eldest son and heir of Nobunaga, who also committed suicide.
Nobunaga was succeeded by his retainer Toyotomi Hideyoshi, who abandoned his campaign against the Mōri clan to pursue Mitsuhide and avenge his lord. Hideyoshi's forces intercepted one of Mitsuhide's messengers carrying a letter that informed the Mōri of Nobunaga's death and requested an alliance against the Oda. Hideyoshi managed to pacify the Mōri by demanding the suicide of Shimizu Muneharu in exchange for ending his siege of Takamatsu Castle, which the Mōri accepted. Mitsuhide failed to establish his position after Nobunaga's death, and Hideyoshi defeated his army at the Battle of Yamazaki in July; Mitsuhide was killed by bandits while fleeing after the battle. Hideyoshi continued and completed Nobunaga's conquest of Japan within the following decade.
Militarily, Nobunaga changed the way war was fought in Japan. His matchlock-armed foot soldiers displaced mounted soldiers armed with bow and sword. He built iron-plated warships and imported saltpeter and lead for manufacturing gunpowder and bullets respectively, while also manufacturing artillery. His ashigaru foot soldiers were trained and disciplined for mass movements, which replaced hand-to-hand fighting tactics. They wore distinctive uniforms which fostered esprit de corps. He was ruthless and cruel in battle, pursuing fugitives without compassion. Through wanton slaughter, he became the ruler of 20 provinces.
After consolidating military power in the provinces he came to dominate, starting with Owari and Mino, Nobunaga implemented a plan for economic development. This included the declaration of free markets ("rakuichi"), the breaking of trade monopolies, and providing for open guilds ("rakuza"). Nobunaga instituted these policies to stimulate business and the overall economy through the use of a free market system. They abolished and prohibited monopolies and opened once closed and privileged unions, associations and guilds, which he saw as impediments to commerce. Even though these policies provided a major boost to the economy, it remained heavily dependent on the daimyōs' support. Copies of his original proclamations can be found in Entoku-ji in the city of Gifu.
Nobunaga initiated policies for civil administration, including currency regulation and the construction of roads and bridges. He set standards for road widths and had trees planted along roadsides, to ease the transport of soldiers and war materiel in addition to commerce. In general, Nobunaga thought in terms of "unifying factors", in the words of George Sansom.
Nobunaga initiated a period in Japanese art history known as Fushimi, or the Azuchi-Momoyama period, in reference to the area south of Kyoto. He built extensive gardens and castles which were themselves great works of art. Azuchi Castle included a seven-story Tenshukaku, which housed a treasury filled with gold and precious objects. Works of art included paintings on movable screens ("byōbu"), sliding doors ("fusuma"), and walls by Kanō Eitoku. During this time, Nobunaga's tea master Sen no Rikyū established key elements of the Japanese tea ceremony. Nobunaga was also known for his "meibutsu-gari", the hunting down and acquisition of famed objects, through which he collected tea ceremony items with celebrated poetic or historic lineages.
Additionally, Nobunaga was very interested in European culture which was still very new to Japan. He collected pieces of Western art as well as arms and armor, and he is considered to be among the first Japanese people in recorded history to wear European clothes. He also became the patron of the Jesuit missionaries in Japan and supported the establishment of the first Christian church in Kyoto in 1576, although he never converted to Christianity.
Depending upon the source, Oda Nobunaga and the entire Oda clan are descendants of either the Fujiwara clan or the Taira clan (specifically, Taira no Shigemori's branch). His lineage can be directly traced to his great-great-grandfather, Oda Hisanaga, who was followed by Oda Toshisada, Oda Nobusada, Oda Nobuhide, and Nobunaga himself.
Nobunaga was the eldest legitimate son of Nobuhide, a minor warlord from Owari Province, and Tsuchida Gozen, who was also the mother to three of his brothers (Nobuyuki, Nobukane, and Hidetaka) and two of his sisters (Oinu and Oichi).
Nobunaga married Nōhime, the daughter of Saitō Dōsan, as a matter of political strategy; however, she bore no children and was considered barren. It was his concubines Kitsuno and Lady Saka who bore his children. Kitsuno gave birth to Nobunaga's eldest son, Nobutada. Nobutada's son Hidenobu became ruler of the Oda clan after the deaths of Nobunaga and Nobutada. Nobunaga's son Oda Nobuhide was a Christian and took the baptismal name Peter; he was adopted by Toyotomi Hideyoshi and made a chamberlain.
One of Nobunaga's younger sisters, Oichi, gave birth to three daughters. These three nieces of Nobunaga became involved with important historical figures. Chacha (also known as Lady Yodo), the eldest, became the mistress of Toyotomi Hideyoshi. O-Hatsu married Kyōgoku Takatsugu. The youngest, O-go, married the son of Tokugawa Ieyasu, Tokugawa Hidetada (the second "shōgun" of the Tokugawa shogunate). O-go's daughter Senhime married her cousin Toyotomi Hideyori, Lady Yodo's son.
Nobunaga's nephew was Tsuda Nobuzumi, the son of Nobuyuki. Nobuzumi married Akechi Mitsuhide's daughter and was killed after the Honnō-ji coup by Nobunaga's third son, Nobutaka, who suspected him of being involved in the plot.
Nobunaga's granddaughter Oyu no Kata, by his son Oda Nobuyoshi, married Tokugawa Tadanaga.
Nobunari Oda, a retired figure skater, claims to be a 17th generation direct descendant of Nobunaga. The ex-monk celebrity Mudō Oda also claims descent from the Sengoku period warlord, but his claims have not been verified.
Nobunaga appears frequently within fiction and continues to be portrayed in many different anime, manga, video games, and cinematic films. Many depictions show him as villainous or even demonic in nature, though some portray him in a more positive light. The latter type of works include Akira Kurosawa's film "Kagemusha", which portrays Nobunaga as energetic, athletic and respectful towards his enemies. The film "Goemon" portrays him as a saintly mentor of Ishikawa Goemon. Nobunaga is a central character in Eiji Yoshikawa's historical novel "Taiko Ki", where he is a firm but benevolent lord. Nobunaga is also portrayed in a heroic light in some video games such as "Kessen III", "Ninja Gaiden II", and the "Warriors Orochi" series. In the anime series "Nobunaga no Shinobi", Nobunaga is portrayed as a kind person with a major sweet tooth.
By contrast, in the novel "The Samurai's Tale" by Erik Christian Haugaard, he is portrayed as an antagonist "known for his merciless cruelty". He is portrayed as evil or megalomaniacal in some anime and manga series, including "Samurai Deeper Kyo" and "Flame of Recca". Nobunaga is portrayed as evil, villainous, bloodthirsty, and/or demonic in many video games, such as the "Onimusha" series, "Ninja Master's", "Sengoku", "Maplestory", "Atlantica Online", the "Samurai Warriors" series, the "Sengoku BASARA" series (and its anime adaptation), and the "Soulcalibur" series.
Nobunaga has been portrayed numerous times in a more neutral or historic framework, especially in the Taiga dramas shown on television in Japan. Oda Nobunaga appears in the manga series "Tail of the Moon" and "Kacchū no Senshi Gamu", and in Tsuji Kunio's historical fiction "The Signore: Shogun of the Warring States". Historical representations in video games (mostly Western-made strategy or action titles) include "Throne of Darkness", the eponymous "Nobunaga's Ambition" series, as well as "Civilization V", "Nioh", and "Nioh 2". Kamenashi Kazuya of the Japanese pop group KAT-TUN wrote and performed a song titled "1582", which is written from the perspective of Mori Ranmaru during the coup at Honnō-ji.
Nobunaga has also been portrayed fictively, such as when the figure of Nobunaga influences a story or inspires a characterization. In James Clavell's novel "Shōgun", the character Goroda is a pastiche of Nobunaga. In the film "Sengoku Jieitai 1549", Nobunaga is killed by time-travellers. The novel and anime series "Yōtōden", the novel "The Ouka Ninja Scrolls: Basilisk New Chapter" and the anime and manga "Basilisk" portray Nobunaga as a literal demon in addition to a power-mad warlord. Nobunaga also appears as a major character in the eroge "Sengoku Rance" and is a playable character in "Pokémon Conquest", with his partner Pokémon being Hydreigon, Rayquaza and Zekrom. Nobunaga is depicted as a female character in the anime "Sengoku Collection", the video game "Fate/Grand Order", and the light novel and anime series "The Ambition of Oda Nobuna". He is the main character of the stage play "Nobunaga the Fool" and its anime adaptation. In Kouta Hirano's "Drifters", Nobunaga is rescued before the moment of his death and is sent to another world to fight against other historical figures. Therein, he displays equal parts tactical brilliance and gleeful brutality. In the 2014 anime "Nobunaga Concerto" and its 2015 film adaptation, he is the subject of a complex plot involving time travel and alternate history. | https://en.wikipedia.org/wiki?curid=22680 |
Otto Wilhelm Hermann von Abich
Otto Wilhelm Hermann von Abich (December 11, 1806 – July 1, 1886) was a German mineralogist and geologist. He was a full member of the St Petersburg Academy of Sciences, and an honorary member from 1866.
He was born in Berlin and educated at the local university. His earliest scientific work dealt with spinels and other minerals. Later he made special studies of fumaroles, of the mineral deposits around volcanic vents, and of the structure of volcanoes. In 1842 he was appointed professor of mineralogy at the University of Dorpat (Tartu), and henceforth gave his attention to the geology and mineralogy of the Russian Empire. Residing for some time at Tiflis, he investigated the geology of the Armenian Highland (a term he introduced) and the Caucasus. In 1844 and 1845 he ascended the Ararat volcano several times and studied the geological event of 1840 centered on Ararat (Akori village). In 1877 he retired to Vienna, where he died. The mineral abichite was named after him.
| https://en.wikipedia.org/wiki?curid=22684 |
Organization of the Communist Party of the Soviet Union
The organization of the Communist Party of the Soviet Union was nominally based on the principles of democratic centralism.
The governing body of the Communist Party of the Soviet Union (CPSU) was the Party Congress, which initially met annually but whose meetings became less frequent, particularly under Joseph Stalin (dominant from the late 1920s to 1953). Party Congresses would elect a Central Committee which, in turn, would elect a Politburo. Under Stalin, the most powerful position in the party became the General Secretary, who was elected by the Politburo. In 1952 the title of "General Secretary" became "First Secretary" and the "Politburo" became the "Presidium"; the names reverted to their former forms under Leonid Brezhnev in 1966.
In theory, supreme power in the party was invested in the Party Congress. However, in practice the power structure became reversed and, particularly after the death of Lenin in January 1924, supreme power became the domain of the General Secretary.
In the late Soviet Union the CPSU incorporated the communist parties of the 15 constituent republics (the communist branch of the Russian SFSR was established in 1990). Before 1990, the communist party organizations in Russian oblasts, autonomous republics and some other major administrative units were subordinated directly to the CPSU Central Committee.
At lower levels, the organizational hierarchy was managed by Party Committees, or partkoms (партком). A partkom was headed by the elected "partkom bureau secretary" ("partkom secretary", секретарь парткома). At enterprises, institutions, kolkhozes, etc., they were simply called "partkoms". At higher levels the Committees were named accordingly: obkoms (обком) at the oblast (region) level (known earlier as gubkoms (губком) for guberniyas), raikoms (райком) at the raion (district) level (known earlier as ukoms (уком) for uyezds), gorkoms (горком) at the city level, etc.
The same terminology ("raikom", etc.) was used in the organizational structure of Komsomol.
The bottom level of the Party was the primary party organization (первичная партийная организация) or party cell (партийная ячейка). It was created within any organizational entity of any kind where there were at least three communists. The management of a cell was called the party bureau, or partbureau (партийное бюро, партбюро). A partbureau was headed by the elected bureau secretary (секретарь партбюро).
At smaller party cells, secretaries were regular employees of the corresponding plant/hospital/school/etc. Sufficiently large party organizations were usually headed by an exempt secretary, who drew his salary from Party funds.
Oromo people
The Oromo people (Oromo: "Oromoo") are a Cushitic ethnic group and nation native to Ethiopia who speak the Oromo language. They are the largest ethnic group in Ethiopia and represent 34.5% of Ethiopia's population. Oromos speak the Oromo language as their mother tongue (also called "Afaan Oromoo" and "Oromiffa"), which is part of the Cushitic branch of the Afroasiatic language family. The word "Oromo" appeared in European literature for the first time in 1893 and slowly became common in the second half of the 20th century.
Some Oromo people still follow their traditional religion, Waaqeffanna, and use the "gadaa" system of governance. A leader elected by the "gadaa" system remains in power for only 8 years, with an election taking place at the end of those 8 years. From the 18th to the 19th centuries, the Oromo were the dominant influence in northern Ethiopia during the Zemene Mesafint period.
The origins and prehistory of the Oromo people prior to the 16th century are based on Oromo oral tradition. Older and subsequent colonial era documents mention the Oromo people as "Galla", which has now developed derogatory connotations, but these documents were generally written by members of other ethnic groups. The first verifiable record mentioning the Oromo people by a European cartographer is in the map made by the Italian Fra Mauro in 1460, which uses the term "Galla".
Fra Mauro's term "Galla" nevertheless remained the most common term until the early 20th century. The term, stated Juxon Barton in 1924, was in use for the Oromo people by Abyssinians and Arabs. It was a term for a river and a forest, as well as for the pastoral people established in the highlands of southern Ethiopia. This historical information, according to Mohammed Hassen, is consistent with the written and oral traditions of the Somalis. A journal published by the International African Institute suggests it is an Oromo word adopted by neighbours, since the Oromo language has a word "galla" meaning "wandering" or "to go home".
The Oromo never called themselves "Galla" and resist its use because the term is considered derogatory. They traditionally identified themselves by one of their clans ("gosas") and now use the common umbrella term of Oromo which connotes "free born people". The word Oromo is derived from "Ilm Orma" meaning "children of Oromo", or "sons of Men", or "person, stranger". The first known use of the word "Oromo" to refer to the ethnic group is traceable to 1893.
After Fra Mauro's mention, there is a profusion of literature about the peoples of this region, including the Oromo, particularly mentioning their wars and resistance to religious conversion, written primarily by European explorers and Catholic missionaries. The earliest primary account of Oromo ethnography is the 16th-century "History of Galla" by the Christian monk Bahrey, who came from the Sidama country of Gammo, written in the Ge'ez language.
Historical linguistics and comparative ethnology studies suggest that the Oromo people probably originated around Lake Chew Bahir and Lake Chamo. They are a Cushitic people who have inhabited East and Northeast Africa since at least the early 1st millennium. The aftermath of the sixteenth-century Abyssinian–Adal war led the Oromo to move to the north. The Harla were assimilated by the Oromo in Ethiopia.
The historical evidence suggests that the Oromo people were already established in the southern highlands in or before the 15th century, and that at least some Oromo people were interacting with other Ethiopian ethnic groups. While the Oromo have lived in the region for a long time, the ethnic mixture of peoples who have lived there is unclear. According to Alessandro Triulzi, the interactions and encounters between the Oromo and Nilo-Saharan groups likely began early. The Oromo increased their numbers through the Oromization ("Meedhicca", "Mogasa" and "Gudifacha") of mixed peoples ("Gabbaro"). The ancient native names of the territories were replaced by the names of the Oromo clans who conquered them, while the people were made Gabbaros.
Historically, Afaan Oromo-speaking people used their own "Gadaa" system of governance. Oromos also had a number of independent kingdoms, which they shared with the Sidama people. Among these were the Gibe region kingdoms of Kaffa, Gera, Gomma, Garo, Gumma, Jimma, Leeqa-Nekemte and Limmu-Ennarea.
The earliest known documented and detailed history of the Oromo people was by the Ethiopian monk Abba Bahrey, who wrote "Zenahu le Galla" in 1593, though the synonymous term "Gallas" was mentioned in maps or elsewhere much earlier. After the 16th century, they are mentioned more often, such as in the records left by Abba Pawlos, Joao Bermudes, Jerorimo Lobo, Galawdewos, Sarsa Dengel and others. These records suggest that the Oromo were historically a pastoral people who stayed together. Their animal herds began to expand rapidly and they needed more grazing lands. They began migrating, not together, but after separating. They lacked kings, and instead had elected leaders called "luba" based on a "gadaa" system of government. By the late 16th century, two major Oromo confederations emerged: "Afre" and "Sadaqa", which respectively mean four and three in their language, with Afre emerging from four older clans and Sadaqa from three. These Oromo confederations were originally located in south-central Ethiopia, specifically northwest of the Borena region near Lake Abaya, but started moving north in the 16th century in what is termed the "Great Oromo Migration".
According to Richard Pankhurst, a historian of Ethiopia, this migration is linked to the first incursions into the inland Horn of Africa by Imam Ahmad ibn Ibrahim. According to historian Marianne Bechhaus-Gerst, the migration was one of the consequences of fierce wars of attrition between Christian and Muslim armies in the Horn of Africa in the 15th and 16th centuries, which killed many people and depopulated the regions near the Galla lands, but was also probably a result of droughts in their traditional homelands. Further, the Oromo acquired horses, and their "gadaa" system helped coordinate well-equipped Oromo warriors, enabling fellow Oromos to advance and settle into new regions starting in the 1520s. This expansion continued through the 17th century.
Both peaceful integration and violent competition between the Oromo and other neighboring ethnicities, such as the Amhara, Sidama, Afar and Somali, affected politics within the Oromo community. Between 1500 and 1800, there were waves of wars and struggle between highland Christians, coastal Muslims and polytheist populations in the Horn of Africa. These caused major redistributions of population. The northern, eastern and western movement of the Oromo from the south around 1535 mirrored the large-scale expansion of the Somalis inland. The 1500–1800 period also saw relocation of the Amhara people, and helped shape contemporary ethnic politics in Ethiopia.
According to oral and literary evidence, the Borana Oromo clan and the Garre Somali clan victimized each other in the seventeenth and eighteenth centuries, particularly near their eastern borders. There were also periods of relative peace. According to Günther Schlee, the Garre Somali clan replaced the Borana Oromo clan as the dominant ethnic group in this region. The Borana violence against their neighbors, states Schlee, was unusual and unlike their behavior inside their community, where violence was considered deviant.
The Oromos are the largest ethnic group in Ethiopia (34.5% of the population), numbering about 25 million. They are predominantly concentrated in the Oromia Region in central Ethiopia, the largest region in the country by both population and area. They speak Afaan Oromo, the official language of Oromia. The Oromo constitute the fifth most populous ethnic group in Africa as a whole and the most populous among the peoples of the Horn of Africa.
Oromo also have a notable presence in northern Kenya in Marsabit County, Isiolo County and Tana River County, totaling about 470,700: 210,000 Borana, 110,500 Gabra, 85,000 Orma, 45,200 Sakuye and 20,000 Waata. There are also Oromo in the former Wollo and Tigray provinces of Ethiopia.
The Oromo are divided into two major branches, distributed from west to east, that break down into an assortment of clan families. The Borana Oromo, also called the Boran, are a pastoralist group living in southern Ethiopia (Oromia) and northern Kenya. The Boran inhabit the former provinces of Shewa, Welega, Illubabor, Kafa, Jimma, Sidamo, and northern and northeastern Kenya, with a small refugee population in some parts of Somalia.
Barentu/Barentoo or (older) Baraytuma is the other moiety of the Oromo people. The Barentu Oromo inhabit the eastern parts of the Oromia Region in the Zones of Mirab Hararghe or West Hararghe, Arsi Zone, Bale Zone, Debub Mirab Shewa Zone or South West Shewa, Dire Dawa region, the Jijiga Zone of the Somali Region, Administrative Zone 3 of the Afar Region, Oromia Zone of the Amhara Region, and are also found in the Raya Azebo Aanaas in the Tigray Region.
The Oromo speak the Oromo language as a mother tongue. It belongs to the Cushitic branch of the Afroasiatic family. It is the most widely spoken language of the Cushitic languages and the fourth most widely spoken language of Africa after Arabic, Hausa, and Swahili. The Oromo language's main linguistic varieties are Borana-Arsi-Guji Oromo, Eastern Oromo, Orma and West Central Oromo.
Modern Oromo writing systems transcribe the language in Latin script. Additionally, the Sapalo script was historically used to write Oromo. It was invented by the Oromo scholar Sheikh Bakri Sapalo (also known by his birth name, Abubaker Usman Odaa) during the 1950s.
The Oromo people followed their traditional religion, Waaqeffanna, and were resistant to religious conversion before assimilation into the Christian kingdoms and sultanates. The influential 30-year war from 1529 to 1559 between the three parties – the Oromo, the Christians and the Muslims – dissipated the political strength of all three. The religious beliefs of the Oromo people evolved in this socio-political environment. In the 19th century and the first half of the 20th century, Protestant and Catholic missionary efforts were able to create Oromo Protestant and Catholic followings.
In the late 19th century, Orthodox Christianity was endorsed by the state. Tewodros and Yohannes were known for their intolerance towards other religions. Hostility to the religion of the Amhara, who lorded over them, helped the expansion of Islam among the Oromo. The first to accept Islam as a resistance ideology were the Wollo Oromo. The Arsi Oromo also accepted Islam in response to the war and massacre by the Christian state under Minilik. Although Minilik forcibly baptized the Oromo of Shewa, the emperor felt he had to tolerate Islam in areas like Jimma and Harar after the use of force had proved dangerous in the past.
In the 2007 Ethiopian census for the Oromia region, which included both Oromo and non-Oromo residents, there were a total of 13,107,963 followers of Christianity (8,204,908 Orthodox, 4,780,917 Protestant, 122,138 Catholic), 12,835,410 followers of Islam, 887,773 followers of traditional religions, and 162,787 followers of other religions. Accordingly, the region's population was 48.1% Christian (8,204,908 or 30.4% Orthodox, 4,780,917 or 17.7% Protestant, and 122,138 Catholic), 47.6% Muslim, and 3.3% followers of traditional religions.
According to a 2009 publication of Association of Muslim Social Scientists and International Institute of Islamic Thought, "probably just over 60% of the Oromos follow Islam, over 30% follow Christianity and less than 3% follow traditional religion".
According to a 2016 estimate by James Minahan, about half of the Oromo people are Sunni Muslim, a third are Ethiopian Orthodox, and the rest are mostly Protestants or follow their traditional religious beliefs. The traditional religion is more common in southern Oromo populations and Christianity more common in and near the urban centers, while Muslims are more common near the Somalian border and in the north.
Oromo people governed themselves in accordance with the Gadaa system long before the 16th century. The system regulates the political, economic, social and religious activities of the community. The Oromo were traditionally a culturally homogeneous society with genealogical ties. A male born into an Oromo clan passed through five stages of eight years each, during which his life experience established his role and status for consideration for a "Gadaa" office. Every eight years, the Oromo would choose by consensus nine leaders for the office, each of whom remained in power only for that eight-year term.
There are three Gadaa organs of governance: the Gadaa Council, the Gadaa General Assembly (gumi gayo), and the Qallu Assembly. The Gadaa Council is regarded as the collective achievement of the members of the Gadaa class and is responsible for coordinating Irreecha. The Gadaa General Assembly is the legislative body of the Gadaa government, while the Qallu Assembly is the religious institution.
The Oromo people developed a luni-solar calendar, and geographically and religiously distinct Oromo communities use the same calendar. This calendar is sophisticated and similar to those found among the Chinese, the Hindus and the Mayans. It was tied to the traditional religion of the Oromos, and was used to schedule the "Gadaa" system of elections and power transfer.
The Borana Oromo calendar system was once thought to be based upon an earlier Cushitic calendar developed around 300 BC found at Namoratunga. Reconsideration of the Namoratunga site led astronomer and archaeologist Clive Ruggles to conclude that there is no relationship. The new year of the Oromo people, according to this calendar, falls in the month of October. The calendar has no weeks but a name for each day of the month. It is a lunar-stellar calendar system.
Some modern authors such as Gemetchu Megerssa have proposed the concept of "Oromumma", or "Oromoness", as a cultural commonality among the Oromo people. The word is derived by combining "Oromo" with the Arabic term "Ummah" (community). However, according to Terje Østebø and other scholars, this term is a neologism from the late 1990s, and its link to Oromo ethno-nationalism and Salafi Islamic discourse, in their disagreement with Christian Amhara and other ethnic groups, has been questioned.
The Oromo people, depending on their geographical location and historical events, have variously converted to Islam, to Christianity, or remained with their traditional religion (Waaqeffanna). According to Gemetchu Megerssa, the subjective reality is that "neither traditional Oromo rituals nor traditional Oromo beliefs function any longer as a cohesive and integral symbol system" for the Oromo people, not just regionally but even locally. The cultural and ideological divergence within the Oromo people, in part from their religious differences, is apparent from the constant impetus for negotiations between broader Oromo spokespersons and those Oromo who are Ahl al-Sunna followers, states Terje Østebø. The internally evolving cultural differences within the Oromos have led some scholars such as Mario Aguilar and Abdullahi Shongolo to conclude that "a common identity acknowledged by all Oromo in general does not exist".
Like other ethnic groups in the Horn of Africa and East Africa, Oromo people regionally developed social stratification consisting of four hierarchical strata. The highest strata were the nobles called the "Borana"; below them were the "Gabbaro" (some 17th to 19th century Ethiopian texts refer to them as the "dhalatta"). Below these two upper castes were the despised castes of artisans, and at the lowest level were the slaves.
In the Islamic Kingdom of Jimma, the Oromo society's caste strata predominantly consisted of endogamous, inherited artisanal occupations. Each caste group specialized in a particular occupation such as iron working, carpentry, weapon making, pottery, weaving, leather working and hunting.
Each caste in the Oromo society had a designated name. For example, "Tumtu" were smiths, "Fuga" were potters, "Faqi" were tanners and leatherworkers, "Semmano" were weavers, "Gagurtu" were beekeepers and honey makers, and "Watta" were hunters and foragers. While slaves were a stratum within the society, many Oromos, regardless of caste, were themselves sold into slavery elsewhere. By the 19th century, Oromo slaves were sought after and made up a major part of the slaves sold in the Gondar and Gallabat slave markets on the Ethiopia-Sudan border, as well as the Massawa and Tajura markets on the Red Sea.
The Oromo people are engaged in many occupations. The southern Oromo (specifically the Borana Oromo) are largely pastoralists who raise goats and cattle. Other Oromo groups have a more diverse economy which includes agriculture and work in urban centers. Some Oromo also sell many products and food items like coffee beans (coffee being a favorite beverage among the Oromo) at local markets.
In December 2009, a 96-page report titled "Human Rights in Ethiopia: Through the Eyes of the Oromo Diaspora", compiled by the Advocates for Human Rights, documented human rights violations against the Oromo in Ethiopia under three successive regimes: the Ethiopian Empire under Haile Selassie, the Marxist Derg, and the current Ethiopian government of the Ethiopian People's Revolutionary Democratic Front (EPRDF), dominated by members of the Tigrayan People's Liberation Front (TPLF), which has been accused of arresting approximately 20,000 suspected OLF members, driving most of the OLF leadership into exile, and effectively neutralizing the OLF as a political force in Ethiopia.
According to the Office of the United Nations High Commissioner for Human Rights, the Oromia Support Group (OSG) recorded 594 extrajudicial killings of Oromos by Ethiopian government security forces and 43 disappearances in custody between 2005 and August 2008.
Starting in November 2015, during a wave of mass protests, mainly by Oromos, over the expansion of the municipal boundary of the capital, Addis Ababa, into Oromia, over 500 people have been killed and many more have been injured, according to human-rights advocates and independent monitors. The protests have since spread to other ethnic groups and encompass wider social grievances. Ethiopia declared a state of emergency in response to Oromo and Amhara protests in October 2016.
With the rising political unrest, there was ethnic violence involving the Oromo, such as the Oromo–Somali clashes between the Oromo and the ethnic Somalis, which left up to 400,000 people displaced in 2017. The Gedeo–Oromo clashes between the Oromo and the Gedeo people in the south of the country, and continued violence in the Oromia-Somali border region, gave Ethiopia the largest number of people fleeing their homes in the world in 2018, with 1.4 million newly displaced people. In September 2018, 23 people were killed in a minorities' protest in Oromia near the Ethiopian capital Addis Ababa, which followed the murder of 43 Oromos in the Addis Ababa neighborhood of Saris Abo. Some have blamed the rise in ethnic violence in the Oromia Special Zone Surrounding Finfinne on Prime Minister Abiy Ahmed for giving space to groups formerly banned by previous Tigrayan-led governments, such as the Oromo Liberation Front and Ginbot 7.
Most Oromos do not have political unity today due to their historical roles in the Ethiopian state and the region, the spread-out movement of different Oromo clans, and the differing religions inside the Oromo nation. Accordingly, Oromos played major roles in all three main political movements in Ethiopia (centralist, federalist and secessionist) during the 19th and 20th centuries. In addition to holding high powers during the centralist government and the monarchy, the Raya Oromos in the Tigray regional state played a major role in the "Weyane" revolt, challenging Emperor Haile Selassie I's rule in the 1940s. Simultaneously, both federalist and secessionist political forces developed inside the Oromo community.
At present a number of ethnic-based political organizations have been formed to promote the interests of the Oromo. The first was the Mecha and Tulama Self-Help Association founded in January 1963, but disbanded by the government after several increasingly tense confrontations in November 1966. Later groups include the Oromo Liberation Front (OLF), Oromo Federalist Democratic Movement (OFDM), the United Liberation Forces of Oromia (ULFO), the Islamic Front for the Liberation of Oromia (IFLO), the Oromia Liberation Council (OLC), the Oromo National Congress (ONC, recently changed to OPC) and others. Another group, the Oromo People's Democratic Organization (OPDO), is one of the four parties that form the ruling Ethiopian People's Revolutionary Democratic Front (EPRDF) coalition. However, these Oromo groups do not act in unity: the ONC, for example, was part of the United Ethiopian Democratic Forces coalition that challenged the EPRDF in the Ethiopian general elections of 2005.
A number of these groups seek to create an independent Oromo nation, some using armed force. Meanwhile, the ruling OPDO and several opposition political parties in the Ethiopian parliament believe in ethnic federalism, but most Oromo opposition parties in Ethiopia condemn the economic and political inequalities in the country. Progress has been very slow, with the Oromia International Bank established only in 2008, though the Oromo-owned Awash International Bank had started earlier, in the early 1990s.
Radio broadcasts in the Oromo language began in Somalia in 1960 on Radio Mogadishu. Within Kenya there has been radio broadcasting in Oromo (in the Borana dialect) on the Voice of Kenya since at least the 1980s. In Ethiopia, there was no broadcasting in Oromo until the 1974 revolution, after which Radio Harar began broadcasting in the language.
The first private Afaan Oromoo newspaper in Ethiopia, the Jimma Times, was recently established, but it has faced much harassment and persecution from the Ethiopian government since its beginning. Abuse of Oromo media is widespread in Ethiopia and reflective of the general oppression Oromos face in the country.
Various human rights organizations have publicized the government persecution of Oromos in Ethiopia for decades. In 2008, the OFDM opposition party condemned the government's indirect role in the death of hundreds of Oromos in western Ethiopia. According to Amnesty International, "between 2011 and 2014, at least 5000 Oromos have been arrested based on their actual or suspected peaceful opposition to the government. These include thousands of peaceful protestors and hundreds of opposition political party members. The government anticipates a high level of opposition in Oromia, and signs of dissent are sought out and regularly, sometimes pre-emptively, suppressed. In numerous cases, actual or suspected dissenters have been detained without charge or trial, killed by security services during protests, arrests and in detention."
According to Amnesty International, there is a sweeping repression in the Oromo region of Ethiopia. On 12 December 2015, the German broadcaster Deutsche Welle reported violent protests in the Oromo region of Ethiopia in which more than 20 students were killed. According to the report, the students were protesting against the government's re-zoning plan named 'Addis Ababa Master Plan'.
On 2 October 2016, between 55 and 300 festival-goers were killed at the most sacred and largest event among the Oromo, the Irreecha cultural thanksgiving festival, in one of the darkest days in recent Oromo history. Every year, millions of Oromos, the largest ethnic group in Ethiopia, gather in Bishoftu for this annual celebration. That year, however, the festive mood quickly turned chaotic after Ethiopian security forces responded to peaceful protests by firing tear gas and live bullets at a crowd of over two million people hemmed in by a lake and cliffs. In the week that followed, angry youth attacked government buildings and private businesses. On 8 October, the government responded with an abusive and far-reaching state of emergency, which was lifted in August 2017. During the state of emergency, security forces arbitrarily detained over 21,000 people. | https://en.wikipedia.org/wiki?curid=22686
Oral history
Oral history is the collection and study of historical information about individuals, families, important events, or everyday life using audiotapes, videotapes, or transcriptions of planned interviews. These interviews are conducted with people who participated in or observed past events and whose memories and perceptions of these are to be preserved as an aural record for future generations. Oral history strives to obtain information from different perspectives, most of which cannot be found in written sources. "Oral history" also refers to information gathered in this manner and to a written work (published or unpublished) based on such data, often preserved in archives and large libraries. Knowledge presented by oral history (OH) is unique in that it shares the tacit perspective, thoughts, opinions and understanding of the interviewee in its primary form.
The term is sometimes used in a more general sense to refer to any information about past events that people who experienced them tell anybody else, but professional historians usually consider this to be oral tradition. However, as the Columbia Encyclopedia explains:
Primitive societies have long relied on oral tradition to preserve a record of the past in the absence of written histories. In Western society, the use of oral material goes back to the early Greek historians Herodotus and Thucydides, both of whom made extensive use of oral reports from witnesses. The modern concept of oral history was developed in the 1940s by Allan Nevins and his associates at Columbia University.
Oral history has become an international movement in historical research. This is partly attributed to the development of information technology, which allowed a method rooted in orality to contribute to research, particularly the use of personal testimonies made in a wide variety of public settings. For instance, oral historians have discovered the endless possibilities of posting data and information on the Internet, making them readily available to scholars, teachers, and average individuals. This reinforced the viability of oral history since the new modes of transmission allowed history to get off archival shelves and reach the larger community.
Oral historians in different countries have approached the collection, analysis, and dissemination of oral history in different modes. There are many ways of creating oral histories and carrying out the study of oral history even within individual national contexts.
According to the "Columbia Encyclopedia", the accessibility of tape recorders in the 1960s and 1970s led to oral documentation of the era's movements and protests. Following this, oral history has increasingly become a respected record type. Some oral historians now also account for the subjective memories of interviewees, due to the research of Italian historian Alessandro Portelli and his associates.
Oral histories are also used in many communities to document the experiences of survivors of tragedies. Following the Holocaust, there has emerged a rich tradition of oral history, particularly of Jewish survivors. The United States Holocaust Memorial Museum has an extensive archive of over 70,000 oral history interviews. There are also several organizations dedicated specifically to collecting and preserving oral histories of survivors. Oral history as a discipline has fairly low barriers to entry, so it is an act in which laypeople can readily participate. In his book "Doing Oral History", Donald Ritchie wrote that "oral history has room for both the academic and the layperson. With reasonable training... anyone can conduct a useable oral history." This is especially meaningful in cases like the Holocaust, where survivors may be less comfortable telling their story to a journalist than they would be to a historian or family member.
In the United States, there are several organizations dedicated to doing oral history which are not affiliated with universities or specific locations. StoryCorps is one of the most well-known of these: following the model of the Federal Writers' Project created as part of the Works Progress Administration, StoryCorps' mission is to record the stories of Americans from all walks of life. In contrast to the scholarly tradition of oral history, StoryCorps subjects are interviewed by people they know. A number of StoryCorps initiatives have targeted specific populations or problems, following in the tradition of using oral history as a method to amplify voices that might otherwise be marginalized.
The development of digital databases with text-search tools is one of the important aspects of technology-based oral historiography. These databases have made it easier to collect and disseminate oral history, since access to millions of documents on national and international levels can be instantaneous.
Since the early 1970s, oral history in Britain has grown from being a method in folklore studies (see for example the work of the School of Scottish Studies in the 1950s) to becoming a key component in community histories. Oral history continues to be an important means by which non-academics can actively participate in the compilation and study of history. However, practitioners across a wide range of academic disciplines have also developed the method into a way of recording, understanding, and archiving narrated memories. Influences have included women's history and labour history.
In Britain, the Oral History Society has played a key role in facilitating and developing the use of oral history.
A more complete account of the history of oral history in Britain and Northern Ireland can be found at "Making Oral History" on the Institute of Historical Research's website.
The Bureau of Military History conducted over 1700 interviews with veterans of the First World War and related episodes in Ireland. The documentation was released for research in 2003.
During 1998 and 1999, 40 BBC local radio stations recorded personal oral histories from a broad cross-section of the population for "The Century Speaks" series. The result was 640 half-hour radio documentaries, broadcast in the final weeks of the millennium, and one of the largest single oral history collections in Europe, the Millennium Memory Bank (MMB). The interview-based recordings are held by the British Library Sound Archive in the oral history collection.
In one of the largest memory projects anywhere, the BBC in 2003–06 invited its audiences to send in recollections of the home front in the Second World War. It put 47,000 of the recollections online, along with 15,000 photographs.
Alessandro Portelli is an Italian oral historian. He is known for his work which compared workers' experiences in Harlan County, Kentucky and Terni, Italy. Other oral historians have drawn on Portelli's analysis of memory, identity, and the construction of history.
Since the government-run historiography in modern Belarus almost fully excludes the repression that occurred while Belarus was part of the Soviet Union, only private initiatives cover these aspects. Citizens' groups in Belarus use the methods of oral history and record narrative interviews on video: the Virtual Museum of Soviet Repression in Belarus presents a full virtual museum making extensive use of oral history. The Belarusian Oral History Archive project also provides material based on oral history recordings.
Czech oral history began to develop in the 1980s with a focus on social movements and political activism. Little documentation exists of oral history practice or attempts to record stories before this period. The practice began to take shape in the 1990s. In 2000, the Oral History Center (COH) at the Institute of Contemporary History, Academy of Sciences, Czech Republic (AV ČR) was established with the aim of "systematically support[ing] the development of oral history methodology and its application in historical research."
In 2001, Post Bellum, a nonprofit organization, was established to "document the memories of witnesses of the important historical phenomena of the 20th century" within the Czech Republic and surrounding European countries. Post Bellum works in partnership with Czech Radio and the Institute for the Study of Totalitarian Regimes. Their oral history project "Memory of Nation" was created in 2008, and interviews are archived online for user access. As of January 2015, the project had more than 2,100 published witness accounts in several languages, with more than 24,000 pictures.
The Czech Science Foundation has also funded other projects, including articles and books, such as:
These publications aim to demonstrate that oral history contributes to the understanding of human lives and history itself, such as the motives behind the dissidents' activities, the formation of opposition groups, communication between dissidents and state representatives and the emergence of ex-communist elites and their decision-making processes.
Oral history centers in the Czech Republic emphasize educational activities (seminars, lectures, conferences), archiving and maintaining interview collections, and providing consultations to those interested in the method.
Because of repression in Francoist Spain (1939–75), the development of oral history in Spain was quite limited until the 1970s. It became well-developed in the early 1980s, and often had a focus on the Civil War years (1936–39), especially regarding the losers whose stories had been suppressed. The field was based at the University of Barcelona. Professor Mercedes Vilanova was a leading exponent, and combined it with her interest in quantification and social history. The Barcelona group sought to integrate oral sources with traditional written sources to create mainstream, not ghettoized, historical interpretations. They sought to give a public voice to neglected groups, such as women, illiterates, political leftists, and ethnic minorities. Also, at the Universidade de Santiago de Compostela, Marc Wouters and Isaura Varela started an oral history program in 1987 on the Spanish Civil War, exile and migration, continued since 2005 with the nomesevoces.com program on victims of the war and the Francoist dictatorship. This oral history material, comprising 2,100 interviews and 800 hours of recordings, can be consulted at www.terraememoria.usc.gal.
Oral history began with a focus on national leaders in the United States, but has expanded to include groups representing the entire population. In Britain, 'history from below' and interviewing people who had been 'hidden from history' were more influential. However, in both countries elite oral history has emerged as an important strand. Scientists, for example, have been covered in numerous oral history projects. Doel (2003) discusses the use of oral interviews by scholars as primary sources. He lists major oral history projects in the history of science begun after 1950. Oral histories, he concludes, can augment the biographies of scientists and help spotlight how their social origins influenced their research. Doel acknowledges the common concerns historians have regarding the validity of oral history accounts. He identifies studies that used oral histories successfully to provide critical and unique insight into otherwise obscure subjects, such as the role scientists played in shaping US policy after World War II. Interviews furthermore can provide road maps for researching archives, and can even serve as a fail-safe resource when written documents have been lost or destroyed. Roger D. Launius (2003) shows the huge size and complexity of the National Aeronautics and Space Administration (NASA) oral history program since 1959. NASA systematically documented its operations through oral histories. They can help to explore broader issues regarding the evolution of a major federal agency. The collection consists primarily of oral histories conducted by scholars working on books about the agency. Since 1996, however, the collection has also included oral histories of senior NASA administrators and officials, astronauts, and project managers, part of a broader project to document the lives of key agency individuals. Launius emphasizes efforts to include such less-well-known groups within the agency as the Astrobiology Program, and to collect the oral histories of women in NASA.
Contemporary oral history involves recording or transcribing eyewitness accounts of historical events. Some anthropologists started collecting recordings (at first especially of Native American folklore) on phonograph cylinders in the late 19th century. In the 1930s, the Federal Writers' Project—part of the Works Progress Administration (WPA)—sent out interviewers to collect accounts from various groups, including surviving witnesses of the Civil War, slavery, and other major historical events. The Library of Congress also began recording traditional American music and folklore onto acetate discs. With the development of audio tape recordings after World War II, the task of oral historians became easier.
In 1946, David P. Boder, a professor of psychology at the Illinois Institute of Technology in Chicago, traveled to Europe to record long interviews with "displaced persons"—most of them Holocaust survivors. Using the first device capable of capturing hours of audio—the wire recorder—Boder came back with the first recorded Holocaust testimonials and in all likelihood the first recorded oral histories of significant length.
Many state and local historical societies have oral history programs. Sinclair Kopp (2002) reports on the Oregon Historical Society's program. It began in 1976 with the hiring of Charles Digregorio, who had studied at Columbia with Nevins. Thousands of sound recordings, reel-to-reel tapes, transcriptions, and radio broadcasts have made it one of the largest collections of oral history on the Pacific Coast. In addition to political figures and prominent businessmen, the Oregon Historical Society has done interviews with minorities, women, farmers, and other ordinary citizens, who have contributed extraordinary stories reflecting the state's cultural and social heritage. Hill (2004) encourages oral history projects in high school courses. She demonstrates a lesson plan that encourages the study of local community history through interviews. By studying grassroots activism and the lived experiences of its participants, her high school students came to appreciate how African Americans worked to end Jim Crow laws in the 1950s.
Mark D. Naison (2005) describes the Bronx African American History Project (BAAHP), an oral community history project developed by the Bronx County Historical Society. Its goal was to document the histories of black working- and middle-class residents of the South Bronx neighborhood of Morrisania in New York City since the 1940s.
The Middle East often requires oral history methods of research, mainly because of the relative lack of written and archival history and the region's emphasis on oral records and traditions. Furthermore, because of its population transfers, refugees and émigrés have become suitable subjects for oral history research.
Katharina Lange studied the tribal histories of Syria. The oral histories in this area could not be transposed into tangible, written form due to their positionalities, which Lange describes as “taking sides.” The positionality of oral history could lead to conflict and tension. The tribal histories are typically narrated by men. While histories are also told by women, they are not accepted locally as “real history.” Oral histories often detail the lives and feats of ancestors.
Genealogy is a prominent subject in the area. According to Lange, the oral historians often tell their own personalized genealogies to demonstrate their credibility, both in their social standing and their expertise in the field.
From 2003 to 2004, Professors Marianne Kamp and Russell Zanca researched agricultural collectivization in Uzbekistan in part by using oral history methodology to fill in gaps in information missing from the Central State Archive of Uzbekistan. The goal of the project was to learn more about life in the 1920s and 1930s to study the impact of the Soviet Union's conquest. 20 interviews each were conducted in the Fergana valley, Tashkent, Bukhara, Khorezm, and Kashkadarya regions. Their interviews uncovered stories of famine and death that had not been widely known outside of local memory in the region.
The rise of oral history is a new trend in historical studies in China that began in the late twentieth century. Some oral historians stress the collection of eyewitness accounts of the words and deeds of important historical figures and of what really happened during important historical events, similar to common practice in the West, while others focus more on important people and events, asking important figures to describe the decision-making and details behind those events. In December 2004, the Chinese Association of Oral History Studies was established. The establishment of this institution is thought to signal that the field of oral history studies in China has finally moved into a new phase of organized development.
While oral tradition is an integral part of ancient Southeast Asian history, oral history is a relatively recent development. Since the 1960s, oral history has been accorded increasing attention at both institutional and individual levels, representing "history from above" and "history from below".
In "Oral History and Public Memories", Blackburn writes about oral history as a tool that was used "by political elites and state-run institutions to contribute to the goal of national building" in postcolonial Southeast Asian countries. Blackburn draws most of his examples of oral history as a vehicle for "history from above" from Malaysia and Singapore.
In terms of "history from below", various oral history initiatives are being undertaken in Cambodia in an effort to record lived experiences from the rule of the Khmer Rouge regime while survivors are still living. These initiatives take advantage of crowdsourced history to uncover the silences imposed on the oppressed.
Two prominent and ongoing oral history projects out of South Asia stem from time periods of ethnic violence that were decades apart: 1947 and 1984.
The 1947 Partition Archive was founded in 2010 by Guneeta Singh Bhalla, a physicist in Berkeley, California, who began conducting and recording interviews "to collect and preserve the stories of those who lived through this tumultuous time, to make sure this great human tragedy isn't forgotten."
The Sikh Diaspora Project was founded in 2014 by Brajesh Samarth, senior lecturer in Hindi-Urdu at Emory University in Atlanta, when he was a lecturer at Stanford University in California. The project focuses on interviews with members of the Sikh diaspora in the U.S. and Canada, including the many who migrated after the 1984 massacre of Sikhs in India.
Hazel de Berg began recording Australian writers, artists, musicians and others in the arts community in 1957, and conducted nearly 1,300 interviews. Together with the National Library of Australia, with which she worked for twenty-seven years, she was a pioneer in the field in Australia.
In December 1997, in response to the first recommendation of the "Bringing Them Home: Report of the National Inquiry into the Separation of Aboriginal and Torres Strait Islander Children from Their Families" report, the Australian Government announced funding for the National Library to develop and manage an oral history project. The Bringing Them Home Oral History Project (1998–2002) collected and preserved the stories of Indigenous Australians and others involved in or affected by the child removals resulting in the Stolen Generations. Other contributors included missionaries, police and government administrators.
There are now many organisations and projects all over Australia involved in recording oral histories from Australians of all ethnicities and in all walks of life. Oral History Victoria supports an annual oral history award as part of the Victorian Community History Awards, held annually to recognise contributions to the preservation of the state's history made by Victorians in works published during the previous year.
In 1948, Allan Nevins, a Columbia University historian, established the Columbia Oral History Research Office, now known as the Columbia Center for Oral History, with a mission of recording, transcribing, and preserving oral history interviews. The Regional Oral History Office was founded in 1954 as a division of the University of California, Berkeley's Bancroft Library. In 1967, American oral historians founded the Oral History Association, and British oral historians founded the Oral History Society in 1969. In 1981, Mansel G. Blackford, a business historian at Ohio State University, argued that oral history was a useful tool to write the history of corporate mergers. More recently, Harvard Business School launched the Creating Emerging Markets project, which "explores the evolution of business leadership in Africa, Asia, and Latin America throughout recent decades" through oral history. "At its core are interviews, many on video, by the School’s faculty with leaders or former leaders of firms and NGOs who have had a major impact on their societies and enterprises across three continents." There are now numerous national organizations and an International Oral History Association, which hold workshops and conferences and publish newsletters and journals devoted to oral history theory and practices. Specialized collections of oral history sometimes have archives of widespread global interest; an example is the Lewis Walpole Library in Farmington, Connecticut, a department of the University Library of Yale.
Historians, folklorists, anthropologists, human geographers, sociologists, journalists, linguists, and many others employ some form of interviewing in their research. Although multi-disciplinary, oral historians have promoted common ethics and standards of practice, most importantly the attaining of the "informed consent" of those being interviewed. Usually this is achieved through a deed of gift, which also establishes copyright ownership that is critical for publication and archival preservation.
Oral historians generally prefer to ask open-ended questions and avoid leading questions that encourage people to say what they think the interviewer wants them to say. Some interviews are "life reviews," conducted with people at the end of their careers. Other interviews focus on a specific period or a specific event in people's lives, such as in the case of war veterans or survivors of a hurricane.
Feldstein (2004) considers oral history to be akin to journalism. Both are committed to uncovering truths and compiling narratives about people, places, and events. Feldstein says each could benefit from adopting techniques from the other. Journalism could benefit by emulating the exhaustive and nuanced research methodologies used by oral historians. The practice of oral historians could be enhanced by utilizing the more sophisticated interviewing techniques employed by journalists, in particular the use of adversarial encounters as a tactic for obtaining information from a respondent.
The first oral history archives focused on interviews with prominent politicians, diplomats, military officers, and business leaders. By the 1960s and '70s, influenced by the rise of new social history, interviewing began to be employed more often when historians investigated history from below. Whatever the field or focus of a project, oral historians attempt to record the memories of many different people when researching a given event. Interviewing a single person provides a single perspective. Individuals may misremember events or distort their account for personal reasons. By interviewing widely, oral historians seek points of agreement among many different sources, and also record the complexity of the issues. The nature of memory—both individual and community—is as much a part of the practice of oral history as are the stories collected.
Archaeologists sometimes conduct oral history interviews to learn more about unknown artifacts. Oral interviews can provide narratives, social meaning, and contexts for objects. When describing the use of oral histories in archaeological work, Paul Mullins emphasizes the importance of using these interviews to replace “it-narratives.” It-narratives are the voices from objects themselves rather than people; according to Mullins, these lead to narratives that are often “sober, pessimistic, or even dystopian.”
Oral history interviews were used to provide context and social meaning in the Overstone excavation project in Northumberland. Overstone consists of a row of four cottages. The excavation team, consisting of Jane Webster, Louise Tolson, Richard Carlton, and volunteers, found the discovered artifacts difficult to identify. The team first took the artifacts to an archaeology group, but the only person with knowledge about one of the fragments recognized it from a type of pot her mother had owned. This inspired the team to conduct group interviews with volunteers who grew up in households using such objects. The team took their reference collection of artifacts to the interviews in order to trigger the memories of volunteers, revealing a "shared cultural identity."
In 1997, the Supreme Court of Canada, in the "Delgamuukw v. British Columbia" trial, ruled that oral histories were just as important as written testimony, even while acknowledging that they can be "tangential to the ultimate purpose of the fact-finding process at trial – the determination of the historical truth."
Writers who use oral history have often discussed its relationship to historical truth. Gilda O'Neill writes in "Lost Voices", an oral history of East End hop-pickers: "I began to worry. Were the women's, and my, memories true or were they just stories? I realised that I had no 'innocent' sources of evidence - facts. I had, instead, the stories and their tellers' reasons for remembering in their own particular ways.' Duncan Barrett, one of the co-authors of "The Sugar Girls" describes some of the perils of relying on oral history accounts: "On two occasions, it became clear that a subject was trying to mislead us about what happened – telling a self-deprecating story in one interview, and then presenting a different, and more flattering, version of events when we tried to follow it up. ... often our interviewees were keen to persuade us of a certain interpretation of the past, supporting broad, sweeping comments about historical change with specific stories from their lives." Alessandro Portelli argues that oral history is valuable nevertheless: "it tells us less about events as such than about their meaning [...] the unique and precious element which oral sources force upon the historian ... is the speaker's subjectivity."
Regarding the accuracy of oral history, Jean-Loup Gassend concludes in the book "Autopsy of a Battle", "I found that each witness account can be broken down into two parts: 1) descriptions of events that the witness participated in directly, and 2) descriptions of events that the witness did not actually participate in, but that he heard about from other sources. The distinction between these two parts of a witness account is of the highest importance. I noted that concerning events that the witnesses participated in, the information provided was surprisingly reliable, as was confirmed by comparison with other sources. The imprecision or mistakes usually concerned numbers, ranks, and dates, the first two tending to become inflated with time. Concerning events that the witness had not participated in personally, the information was only as reliable as whatever the source of information had been (various rumors); that is to say, it was often very unreliable and I usually discarded such information."
Another noteworthy case is the Mau Mau Uprising in Kenya against the British colonizers. Central to the case was historian Caroline Elkins' study of the UK's brutal suppression of the uprising. Elkins' work on this matter is largely based on oral testimonies of survivors and witnesses, which caused controversy in academia: "Some praised Elkins for breaking the 'code of silence' that had squelched discussion of British imperial violence. Others branded her a self-aggrandising crusader whose overstated findings had relied on sloppy methods and dubious oral testimonies." The British court eventually ruled in the Kenyan claimants' favor, which also served as a response to Elkins' critics, as Justice McCombe's 2011 decision stressed the "substantial documentation supporting accusations of systematic abuses". After the ruling, newly discovered files containing relevant records of former colonies from the Hanslope disclosure corroborated Elkins' findings.
When using oral history as a source material, several caveats exist. The person being interviewed may not accurately recall factual information such as names or dates, and they may exaggerate. To avoid this, interviewers can do thorough research prior to the interview and formulate questions for the purpose of clarification. There is also a pre-conceived notion that oral history is less reliable than written records, but written source materials simply present information differently and may themselves depend on additional sources. Oral sources identify intangibles such as atmosphere, insights into character, and clarifications of points made only briefly in print. Oral history can also document lifestyle, dialect and terminology, and customs that may no longer be prominent. Successful oral history enhances its written counterpart.
In Guatemalan literature, "I, Rigoberta Menchú" (1983) brings oral history into written form through the "testimonio" genre. "I, Rigoberta Menchú" was compiled by the Venezuelan anthropologist Elisabeth Burgos-Debray, based on a series of interviews she conducted with Menchú. The Menchú controversy arose when historian David Stoll took issue with Menchú's claim that "this is a story of all poor Guatemalans". In "Rigoberta Menchú and the Story of All Poor Guatemalans" (1999), Stoll argues that the details in Menchú's "testimonio" are inconsistent with his own fieldwork and interviews he conducted with other Mayas. According to Guatemalan novelist and critic Arturo Arias, this controversy highlights a tension in oral history. On one hand, it presents an opportunity to convert the subaltern subject into a "speaking subject". On the other hand, it challenges the historical profession in certifying the "factuality of her mediated discourse", as "subaltern subjects are forced to [translate across epistemological and linguistic frameworks and] use the discourse of the colonizer to express their subjectivity". | https://en.wikipedia.org/wiki?curid=22687
Oncogene
An oncogene is a gene that has the potential to cause cancer. In tumor cells, these genes are often mutated, or expressed at high levels.
Most normal cells will undergo a programmed form of rapid cell death (apoptosis) when critical functions are altered and malfunctioning. Activated oncogenes can cause those cells designated for apoptosis to survive and proliferate instead. Most oncogenes began as proto-oncogenes: normal genes involved in cell growth and proliferation or inhibition of apoptosis. If, through mutation, normal genes promoting cellular growth are up-regulated (gain-of-function mutation), they will predispose the cell to cancer; thus, they are termed "oncogenes". Usually multiple oncogenes, along with mutated apoptotic or tumor suppressor genes will all act in concert to cause cancer. Since the 1970s, dozens of oncogenes have been identified in human cancer. Many cancer drugs target the proteins encoded by oncogenes.
The theory of oncogenes was foreshadowed by the German biologist Theodor Boveri in his 1914 book "Zur Frage der Entstehung Maligner Tumoren" (Concerning the Origin of Malignant Tumors) in which he predicted the existence of oncogenes "(Teilungsfoerdernde Chromosomen)" that become amplified "(im permanenten Übergewicht)" during tumor development.
The term "oncogene" was rediscovered in 1969 by National Cancer Institute scientists George Todaro and Robert Huebner.
The first confirmed oncogene was discovered in 1970 and was termed SRC (pronounced "sarc", as it is short for sarcoma). SRC was first discovered as an oncogene in a chicken retrovirus. Experiments performed by Dr. G. Steve Martin of the University of California, Berkeley demonstrated that SRC was indeed the gene of the virus that acted as an oncogene upon infection. The nucleotide sequence of v-Src was first determined in 1980 by A.P. Czernilofsky et al.
In 1976, Drs. J. Michael Bishop and Harold E. Varmus of the University of California, San Francisco demonstrated that oncogenes were activated proto-oncogenes, found in many organisms including humans. Bishop and Varmus were awarded the Nobel Prize in Physiology or Medicine in 1989 for their discovery of the cellular origin of retroviral oncogenes.
Dr. Robert Weinberg is credited with discovering the first identified human oncogene in a human bladder cancer cell line. The molecular nature of the mutation leading to oncogenesis was subsequently isolated and characterized by the Spanish biochemist Mariano Barbacid and published in "Nature" in 1982. Dr. Barbacid spent the following months extending his research, eventually discovering that the oncogene was a mutated allele of HRAS and characterizing its activation mechanism.
The protein encoded by an oncogene is termed an oncoprotein. Oncogenes play an important role in the regulation or synthesis of proteins linked to tumorigenic cell growth. Some oncoproteins are accepted and used as tumor markers.
A proto-oncogene is a normal gene that could become an oncogene due to mutations or increased expression. Proto-oncogenes code for proteins that help to regulate cell growth and differentiation. Proto-oncogenes are often involved in signal transduction and execution of mitogenic signals, usually through their protein products. Upon acquiring an activating mutation, a proto-oncogene becomes a tumor-inducing agent, an oncogene. Examples of proto-oncogenes include RAS, WNT, MYC, ERK, and TRK. The MYC gene is implicated in Burkitt's lymphoma, which starts when a chromosomal translocation moves an enhancer sequence within the vicinity of the MYC gene. The MYC gene codes for widely used transcription factors. When the enhancer sequence is wrongly placed, these transcription factors are produced at much higher rates. Another example of an oncogene is the Bcr-Abl gene found on the Philadelphia chromosome, a piece of genetic material seen in chronic myelogenous leukemia that is caused by the translocation of pieces from chromosomes 9 and 22. Bcr-Abl codes for a tyrosine kinase, which is constitutively active, leading to uncontrolled cell proliferation.
The proto-oncogene can become an oncogene by a relatively small modification of its original function. There are three basic methods of activation: a mutation within the proto-oncogene or its regulatory region that changes the structure or activity of the protein; an increase in the amount of the protein, through gene amplification or deregulated expression; and a chromosomal translocation that relocates the gene to a more active regulatory context or fuses it with a second gene to produce a hyperactive hybrid protein.
The expression of oncogenes can be regulated by microRNAs (miRNAs), small RNAs 21-25 nucleotides in length that control gene expression by downregulating their target genes. Mutations in such microRNAs (known as oncomirs) can lead to activation of oncogenes. Antisense messenger RNAs could theoretically be used to block the effects of oncogenes.
There are several systems for classifying oncogenes, but there is not yet a widely accepted standard. They are sometimes grouped both spatially (moving from outside the cell inwards) and chronologically (paralleling the "normal" process of signal transduction). Several categories are commonly used, including growth factors, receptor and cytoplasmic tyrosine kinases, cytoplasmic serine/threonine kinases, regulatory GTPases, and transcription factors.
Additional oncogenetic regulator properties include: | https://en.wikipedia.org/wiki?curid=22689 |
Orthogonal frequency-division multiplexing
In telecommunications, orthogonal frequency-division multiplexing (OFDM) is a type of digital transmission and a method of encoding digital data on multiple carrier frequencies. OFDM has developed into a popular scheme for wideband digital communication, used in applications such as digital television and audio broadcasting, DSL internet access, wireless networks, power line networks, and 4G/5G mobile communications.
OFDM is a frequency-division multiplexing (FDM) scheme used as a digital multi-carrier modulation method. It was introduced by Robert W. Chang of Bell Labs in 1966. In OFDM, multiple closely spaced orthogonal subcarrier signals with overlapping spectra are transmitted to carry data in parallel. Demodulation is based on Fast Fourier Transform algorithms. OFDM was improved by Weinstein and Ebert in 1971 with the introduction of a guard interval, providing better orthogonality in transmission channels affected by multipath propagation. Each subcarrier (signal) is modulated with a conventional modulation scheme (such as quadrature amplitude modulation or phase shift keying) at a low symbol rate. This maintains total data rates similar to conventional single-carrier modulation schemes in the same bandwidth.
The main advantage of OFDM over single-carrier schemes is its ability to cope with severe channel conditions (for example, attenuation of high frequencies in a long copper wire, narrowband interference and frequency-selective fading due to multipath) without complex equalization filters. Channel equalization is simplified because OFDM may be viewed as using many slowly modulated narrowband signals rather than one rapidly modulated wideband signal. The low symbol rate makes the use of a guard interval between symbols affordable, making it possible to eliminate intersymbol interference (ISI) and use echoes and time-spreading (in analog television visible as ghosting and blurring, respectively) to achieve a diversity gain, i.e. a signal-to-noise ratio improvement. This mechanism also facilitates the design of single frequency networks (SFNs), where several adjacent transmitters send the same signal simultaneously at the same frequency, as the signals from multiple distant transmitters may be re-combined constructively rather than interfering as they would in a traditional single-carrier system.
In coded orthogonal frequency-division multiplexing (COFDM), forward error correction (convolutional coding) and time/frequency interleaving are applied to the signal being transmitted. This is done to overcome errors in mobile communication channels affected by multipath propagation and Doppler effects. COFDM was introduced by Alard in 1986 for Digital Audio Broadcasting for Eureka Project 147. In practice, OFDM has come to be used in combination with such coding and interleaving, so that the terms COFDM and OFDM are often applied interchangeably to common applications.
The following list is a summary of existing OFDM-based standards and products. For further details, see the Usage section at the end of the article.
The OFDM-based multiple access technology OFDMA is also used in several 4G and pre-4G cellular networks, mobile broadband standards and the next generation WLAN:
The advantages and disadvantages listed below are further discussed in the Characteristics and principles of operation section below.
Conceptually, OFDM is a specialized frequency-division multiplexing (FDM) method, with the additional constraint that all subcarrier signals within a communication channel are orthogonal to one another.
In OFDM, the subcarrier frequencies are chosen so that the subcarriers are orthogonal to each other, meaning that cross-talk between the sub-channels is eliminated and inter-carrier guard bands are not required. This greatly simplifies the design of both the transmitter and the receiver; unlike conventional FDM, a separate filter for each sub-channel is not required.
The orthogonality requires that the subcarrier spacing is Δ"f" = "k"/"T"U Hertz, where "T"U seconds is the useful symbol duration (the receiver-side window size), and "k" is a positive integer, typically equal to 1. This stipulates that each carrier frequency undergoes "k" more complete cycles per symbol period than the previous carrier. Therefore, with "N" subcarriers, the total passband bandwidth will be "B" ≈ "N"·Δ"f" (Hz).
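As a numerical illustration of this orthogonality condition (a minimal Python sketch; the 1 ms symbol duration, "k" = 1 and 64 samples per symbol are arbitrary assumptions, not values from any standard), two subcarriers spaced Δ"f" = 1/"T"U apart have a vanishing inner product over one symbol period:

import numpy as np

# Assumed illustrative parameters: 1 ms useful symbol, k = 1, 64 samples/symbol.
T_u = 1e-3                    # useful symbol duration T_U in seconds
delta_f = 1.0 / T_u           # subcarrier spacing k/T_U with k = 1
fs = 64 * delta_f             # sampling rate: 64 samples per symbol
t = np.arange(64) / fs        # time grid spanning one symbol period

# Adjacent subcarriers differ by exactly one complete cycle per symbol period.
s0 = np.exp(2j * np.pi * 3 * delta_f * t)
s1 = np.exp(2j * np.pi * 4 * delta_f * t)

print(abs(np.vdot(s0, s1)))   # ~0: distinct subcarriers are orthogonal
print(abs(np.vdot(s0, s0)))   # 64: each subcarrier retains its full energy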
The orthogonality also allows high spectral efficiency, with a total symbol rate near the Nyquist rate for the equivalent baseband signal (i.e. near half the Nyquist rate for the double-side band physical passband signal). Almost the whole available frequency band can be used. OFDM generally has a nearly 'white' spectrum, giving it benign electromagnetic interference properties with respect to other co-channel users.
OFDM requires very accurate frequency synchronization between the receiver and the transmitter; with frequency deviation the subcarriers will no longer be orthogonal, causing "inter-carrier interference" (ICI) (i.e., cross-talk between the subcarriers). Frequency offsets are typically caused by mismatched transmitter and receiver oscillators, or by Doppler shift due to movement. While Doppler shift alone may be compensated for by the receiver, the situation is worsened when combined with multipath, as reflections will appear at various frequency offsets, which is much harder to correct. This effect typically worsens as speed increases, and is an important factor limiting the use of OFDM in high-speed vehicles. In order to mitigate ICI in such scenarios, one can shape each subcarrier in order to minimize the interference resulting in a non-orthogonal subcarriers overlapping. For example, a low-complexity scheme referred to as WCP-OFDM ("Weighted Cyclic Prefix Orthogonal Frequency-Division Multiplexing") consists of using short filters at the transmitter output in order to perform a potentially non-rectangular pulse shaping and a near perfect reconstruction using a single-tap per subcarrier equalization. Other ICI suppression techniques usually increase drastically the receiver complexity.
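The basic mechanism of ICI can be demonstrated with a short sketch (the FFT size and the 10% frequency offset are arbitrary illustrative choices): energy placed on a single subcarrier leaks into its neighbours once the receiver's frequency reference is shifted.

import numpy as np

# One OFDM symbol with all energy on subcarrier 5.
N = 64
data = np.zeros(N, dtype=complex)
data[5] = 1.0
tx = np.fft.ifft(data)

# Model a receiver frequency offset of 10% of the subcarrier spacing.
offset = 0.1
rx = tx * np.exp(2j * np.pi * offset * np.arange(N) / N)

spectrum = np.abs(np.fft.fft(rx))
print(spectrum[5])               # < 1.0: the desired subcarrier is attenuated
print(spectrum[4], spectrum[6])  # > 0: leakage (ICI) into adjacent subcarriers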
The orthogonality allows for efficient modulator and demodulator implementation using the FFT algorithm on the receiver side, and inverse FFT on the sender side. Although the principles and some of the benefits have been known since the 1960s, OFDM is popular for wideband communications today by way of low-cost digital signal processing components that can efficiently calculate the FFT.
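A minimal transmitter/receiver pair built around this idea might look as follows (a sketch only; the 64-subcarrier size and the QPSK mapping are assumptions, not taken from any particular standard):

import numpy as np

N = 64                                     # number of subcarriers (assumed)
bits = np.random.randint(0, 2, 2 * N)

# Map bit pairs to QPSK symbols, one symbol per subcarrier.
symbols = (1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])

tx = np.fft.ifft(symbols)                  # transmitter: inverse FFT -> time domain
rx = np.fft.fft(tx)                        # receiver: FFT recovers the subcarriers

assert np.allclose(rx, symbols)            # ideal channel -> exact recovery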
The time to compute the inverse-FFT or FFT transform has to be less than the duration of each symbol; for DVB-T, for example, the computation has to be done in 896 µs (the useful symbol duration in 8k mode) or less.
For an "N"-point FFT the number of arithmetic operations grows as O("N" log "N"), so the computational demand scales approximately linearly with FFT size: a double-size FFT needs roughly double the amount of time, and vice versa.
As a comparison, FFTW benchmark results for processors of that era (an Intel Pentium III at 1.266 GHz, an Intel Pentium M at 1.6 GHz, and an Intel Core Duo at 3.0 GHz) showed the FFT computation time shrinking with each successive CPU generation.
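A rough way to check this budget on a given machine is a single-run timing sketch like the following (the 8192-point size matches the DVB-T 8k example above; the measurement method is illustrative, not a rigorous benchmark):

import time
import numpy as np

N = 8192                                    # DVB-T 8k-mode FFT size
x = np.random.randn(N) + 1j * np.random.randn(N)

t0 = time.perf_counter()
np.fft.fft(x)                               # one demodulation FFT
elapsed = time.perf_counter() - t0

budget = 896e-6                             # useful symbol duration from above
print(f"FFT took {elapsed * 1e6:.1f} us of a {budget * 1e6:.0f} us budget")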
One key principle of OFDM is that since low symbol rate modulation schemes (i.e., where the symbols are relatively long compared to the channel time characteristics) suffer less from intersymbol interference caused by multipath propagation, it is advantageous to transmit a number of low-rate streams in parallel instead of a single high-rate stream. Since the duration of each symbol is long, it is feasible to insert a guard interval between the OFDM symbols, thus eliminating the intersymbol interference.
The guard interval also eliminates the need for a pulse-shaping filter, and it reduces the sensitivity to time synchronization problems.
The cyclic prefix, which is transmitted during the guard interval, consists of the end of the OFDM symbol copied into the guard interval, and the guard interval is transmitted followed by the OFDM symbol. The reason that the guard interval consists of a copy of the end of the OFDM symbol is so that the receiver will integrate over an integer number of sinusoid cycles for each of the multipaths when it performs OFDM demodulation with the FFT.
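In code, cyclic-prefix insertion and removal amount to a copy and a slice (a sketch with assumed sizes; real systems take "N" and the guard length from the standard in use):

import numpy as np

N, N_cp = 64, 16                            # assumed FFT size and guard length
symbol = np.fft.ifft(np.random.randn(N) + 1j * np.random.randn(N))

# Transmitter: copy the end of the OFDM symbol into the guard interval.
tx = np.concatenate([symbol[-N_cp:], symbol])

# Receiver: discard the prefix before the FFT; any echo shorter than the
# guard interval then acts as a cyclic shift and preserves orthogonality.
rx = tx[N_cp:]
assert np.allclose(rx, symbol)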
In some standards such as Ultrawideband, in the interest of transmitted power, the cyclic prefix is skipped and nothing is sent during the guard interval. The receiver then has to mimic the cyclic-prefix functionality by copying the end part of the OFDM symbol and adding it to the beginning portion.
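Both guard-interval strategies can be sketched in a few lines of NumPy (the symbol and guard lengths here are illustrative, not taken from any particular standard):

import numpy as np

N, G = 64, 16                                # assumed symbol and guard lengths
symbol = np.fft.ifft(np.ones(N))             # placeholder OFDM symbol

# Cyclic prefix: prepend a copy of the last G samples; discard at the receiver.
with_cp = np.concatenate([symbol[-G:], symbol])
received = with_cp[G:]

# Zero-prefix variant: nothing is sent in the guard; the receiver folds the
# guard samples (which carry the channel tail) back onto the symbol start.
zero_prefix = np.concatenate([symbol, np.zeros(G)])
folded = zero_prefix[:N].copy()
folded[:G] += zero_prefix[N:N + G]           # mimics cyclic-prefix behaviour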
The effects of frequency-selective channel conditions, for example fading caused by multipath propagation, can be considered as constant (flat) over an OFDM sub-channel if the sub-channel is sufficiently narrow-banded (i.e., if the number of sub-channels is sufficiently large). This makes frequency domain equalization possible at the receiver, which is far simpler than the time-domain equalization used in conventional single-carrier modulation. In OFDM, the equalizer only has to multiply each detected subcarrier (each Fourier coefficient) in each OFDM symbol by a constant complex number, or a value that changes only rarely. On a fundamental level, simpler digital equalizers are better because they require fewer operations, which translates to fewer round-off errors in the equalizer. Those round-off errors can be viewed as numerical noise and are inevitable.
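A minimal sketch of this one-tap equalization, assuming the per-subcarrier channel gains are already known at the receiver (all values are illustrative):

import numpy as np

rng = np.random.default_rng(1)
N = 64
tx = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, N))   # QPSK symbols
channel = rng.normal(size=N) + 1j * rng.normal(size=N)  # one gain per subcarrier

rx = channel * tx                 # flat fading on each narrow sub-channel
equalized = rx / channel          # one complex multiplication (division) per carrier

assert np.allclose(equalized, tx)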
If differential modulation such as DPSK or DQPSK is applied to each subcarrier, equalization can be completely omitted, since these non-coherent schemes are insensitive to slowly changing amplitude and phase distortion.
In a sense, improvements in FIR equalization using FFTs or partial FFTs lead mathematically closer to OFDM, but the OFDM technique is easier to understand and implement, and the sub-channels can be independently adapted in ways other than varying equalization coefficients, such as switching between different QAM constellation patterns and error-correction schemes to match individual sub-channel noise and interference characteristics.
Some of the subcarriers in some of the OFDM symbols may carry pilot signals for measurement of the channel conditions (i.e., the equalizer gain and phase shift for each subcarrier). Pilot signals and training symbols (preambles) may also be used for time synchronization (to avoid intersymbol interference, ISI) and frequency synchronization (to avoid inter-carrier interference, ICI, caused by Doppler shift).
OFDM was initially used for wired and stationary wireless communications. However, with an increasing number of applications operating in highly mobile environments, the effect of dispersive fading caused by a combination of multipath propagation and Doppler shift is more significant. Over the last decade, research has been done on how to equalize OFDM transmission over doubly selective channels.
OFDM is invariably used in conjunction with channel coding (forward error correction), and almost always uses frequency and/or time interleaving.
Frequency (subcarrier) interleaving increases resistance to frequency-selective channel conditions such as fading. For example, when a part of the channel bandwidth fades, frequency interleaving ensures that the bit errors that would result from those subcarriers in the faded part of the bandwidth are spread out in the bit-stream rather than being concentrated. Similarly, time interleaving ensures that bits that are originally close together in the bit-stream are transmitted far apart in time, thus mitigating the severe fading that can occur when travelling at high speed.
However, time interleaving is of little benefit in slowly fading channels, such as for stationary reception, and frequency interleaving offers little to no benefit for narrowband channels that suffer from flat-fading (where the whole channel bandwidth fades at the same time).
Interleaving is used in OFDM to spread out the errors in the bit-stream that is presented to the error-correction decoder; when such a decoder is presented with a high concentration of errors, it cannot correct all of them, and a burst of uncorrected errors occurs. A similar design of audio data encoding makes compact disc (CD) playback robust.
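A classic way to achieve this spreading is a block interleaver. The sketch below (with arbitrary dimensions) writes the bit-stream row-wise into a matrix and reads it column-wise, so that a burst of consecutive channel errors lands on widely separated positions after de-interleaving:

import numpy as np

rows, cols = 8, 16
bits = np.arange(rows * cols)     # stand-in for a coded bit-stream

interleaved = bits.reshape(rows, cols).T.ravel()      # read column-wise
deinterleaved = interleaved.reshape(cols, rows).T.ravel()

assert np.array_equal(deinterleaved, bits)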
A classical type of error correction coding used with OFDM-based systems is convolutional coding, often concatenated with Reed-Solomon coding. Usually, additional interleaving (on top of the time and frequency interleaving mentioned above) in between the two layers of coding is implemented. The choice for Reed-Solomon coding as the outer error correction code is based on the observation that the Viterbi decoder used for inner convolutional decoding produces short error bursts when there is a high concentration of errors, and Reed-Solomon codes are inherently well suited to correcting bursts of errors.
Newer systems, however, usually now adopt near-optimal types of error correction codes that use the turbo decoding principle, where the decoder iterates towards the desired solution. Examples of such error correction coding types include turbo codes and LDPC codes, which perform close to the Shannon limit for the Additive White Gaussian Noise (AWGN) channel. Some systems that have implemented these codes have concatenated them with either Reed-Solomon (for example on the MediaFLO system) or BCH codes (on the DVB-S2 system) to improve upon an error floor inherent to these codes at high signal-to-noise ratios.
The resilience to severe channel conditions can be further enhanced if information about the channel is sent over a return-channel. Based on this feedback information, adaptive modulation, channel coding and power allocation may be applied across all subcarriers, or individually to each subcarrier. In the latter case, if a particular range of frequencies suffers from interference or attenuation, the carriers within that range can be disabled or made to run slower by applying more robust modulation or error coding to those subcarriers.
The term discrete multitone modulation (DMT) denotes OFDM-based communication systems that adapt the transmission to the channel conditions individually for each subcarrier, by means of so-called "bit-loading". Examples are ADSL and VDSL.
The upstream and downstream speeds can be varied by allocating either more or fewer carriers for each purpose. Some forms of rate-adaptive DSL use this feature in real time, so that the bitrate is adapted to the co-channel interference and bandwidth is allocated to whichever subscriber needs it most.
OFDM in its primary form is considered as a digital modulation technique, and not a multi-user channel access method, since it is used for transferring one bit stream over one communication channel using one sequence of OFDM symbols. However, OFDM can be combined with multiple access using time, frequency or coding separation of the users.
In orthogonal frequency-division multiple access (OFDMA), frequency-division multiple access is achieved by assigning different OFDM sub-channels to different users. OFDMA supports differentiated quality of service by assigning different numbers of subcarriers to different users, in a similar fashion to CDMA, and thus complex packet scheduling or media access control schemes can be avoided. OFDMA is used in:
OFDMA is also a candidate access method for the IEEE 802.22 "Wireless Regional Area Networks" (WRAN). The project aims at designing the first cognitive radio-based standard operating in the VHF-low UHF spectrum (TV spectrum).
In multi-carrier code division multiple access (MC-CDMA), also known as OFDM-CDMA, OFDM is combined with CDMA spread spectrum communication for coding separation of the users. Co-channel interference can be mitigated, meaning that manual fixed channel allocation (FCA) frequency planning is simplified, or complex dynamic channel allocation (DCA) schemes are avoided.
In OFDM-based wide-area broadcasting, receivers can benefit from receiving signals from several spatially dispersed transmitters simultaneously, since transmitters will only destructively interfere with each other on a limited number of subcarriers, whereas in general they will actually reinforce coverage over a wide area. This is very beneficial in many countries, as it permits the operation of national single-frequency networks (SFN), where many transmitters send the same signal simultaneously over the same channel frequency. SFNs use the available spectrum more effectively than conventional multi-frequency broadcast networks (MFN), where program content is replicated on different carrier frequencies. SFNs also result in a diversity gain in receivers situated midway between the transmitters. The coverage area is increased and the outage probability decreased in comparison to an MFN, due to increased received signal strength averaged over all subcarriers.
Although the guard interval only contains redundant data, which means that it reduces the capacity, some OFDM-based systems, such as some of the broadcasting systems, deliberately use a long guard interval in order to allow the transmitters to be spaced farther apart in an SFN, and longer guard intervals allow larger SFN cell-sizes. A rule of thumb for the maximum distance between transmitters in an SFN is equal to the distance a signal travels during the guard interval — for instance, a guard interval of 200 microseconds would allow transmitters to be spaced 60 km apart.
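The 60 km figure follows directly from that rule of thumb:

$$d_{\max} \approx c \, T_g = (3 \times 10^8 \ \text{m/s}) \times (200 \times 10^{-6} \ \text{s}) = 60 \ \text{km}.$$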
A "single frequency network" is a form of transmitter macrodiversity. The concept can be further used in "dynamic single-frequency networks" (DSFN), where the SFN grouping is changed from timeslot to timeslot.
OFDM may be combined with other forms of space diversity, for example antenna arrays and MIMO channels. This is done in the IEEE 802.11 Wireless LAN standards.
An OFDM signal exhibits a high peak-to-average power ratio (PAPR) because the independent phases of the subcarriers mean that they will often combine constructively. Handling this high PAPR requires:
Any non-linearity in the signal chain will cause intermodulation distortion that
The linearity requirement is demanding, especially for transmitter RF output circuitry where amplifiers are often designed to be non-linear in order to minimise power consumption. In practical OFDM systems a small amount of peak clipping is allowed to limit the PAPR in a judicious trade-off against the above consequences. However, the transmitter output filter which is required to reduce out-of-band spurs to legal levels has the effect of restoring peak levels that were clipped, so clipping is not an effective way to reduce PAPR.
Although the spectral efficiency of OFDM is attractive for both terrestrial and space communications, the high PAPR requirements have so far limited OFDM applications to terrestrial systems.
The crest factor CF (in dB) for an OFDM system with "n" uncorrelated subcarriers is
$$\mathrm{CF} = 10 \log_{10}(n) + \mathrm{CF_c}$$
where CFc is the crest factor (in dB) for each subcarrier.
(CFc is 3.01 dB for the sine waves used for BPSK and QPSK modulation).
For example, the DVB-T signal in 2K mode is composed of 1705 subcarriers that are each QPSK-modulated, giving a crest factor of 35.32 dB.
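A quick numerical check of this figure, using the formula above:

import math

n, cf_c = 1705, 3.01              # subcarrier count and per-carrier CF in dB
cf = 10 * math.log10(n) + cf_c
print(round(cf, 2))               # ~35.33 dB, consistent with the cited 35.32 dB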
Many crest factor reduction techniques have been developed.
The dynamic range required for an FM receiver is while DAB only requires about As a comparison, each extra bit per sample increases the dynamic range by
The performance of any communication system can be measured in terms of its power efficiency and bandwidth efficiency. The power efficiency describes the ability of a communication system to preserve the bit error rate (BER) of the transmitted signal at low power levels. Bandwidth efficiency reflects how efficiently the allocated bandwidth is used and is defined as the throughput data rate per hertz in a given bandwidth. If a large number of subcarriers is used, the bandwidth efficiency of a multicarrier system such as OFDM over an optical fiber channel is defined as
where formula_9 is the symbol rate in giga-symbols per second (Gsps), formula_10 is the bandwidth of OFDM signal, and the factor of 2 is due to the two polarization states in the fiber.
Multicarrier modulation with orthogonal frequency-division multiplexing saves bandwidth: a multicarrier system requires less bandwidth than a comparable single-carrier system, and its bandwidth efficiency is correspondingly higher.
In the reported comparison, the receiver power increases by only 1 dBm, while the multicarrier transmission technique yields a 76.7% improvement in bandwidth efficiency.
This section describes a simple idealized OFDM system model suitable for a time-invariant AWGN channel.
An OFDM carrier signal is the sum of a number of orthogonal subcarriers, with baseband data on each subcarrier being independently modulated commonly using some type of quadrature amplitude modulation (QAM) or phase-shift keying (PSK). This composite baseband signal is typically used to modulate a main RF carrier.
The input is a serial stream of binary digits. By inverse multiplexing, these are first demultiplexed into $N$ parallel streams, and each one mapped to a (possibly complex) symbol stream using some modulation constellation (QAM, PSK, etc.). Note that the constellations may be different, so some streams may carry a higher bit-rate than others.
An inverse FFT is computed on each set of symbols, giving a set of complex time-domain samples. These samples are then quadrature-mixed to passband in the standard way. The real and imaginary components are first converted to the analogue domain using digital-to-analogue converters (DACs); the analogue signals are then used to modulate cosine and sine waves at the carrier frequency, $f_c$, respectively. These signals are then summed to give the transmission signal, $s(t)$.
The receiver picks up the signal $r(t)$, which is then quadrature-mixed down to baseband using cosine and sine waves at the carrier frequency. This also creates signals centered on $2 f_c$, so low-pass filters are used to reject these. The baseband signals are then sampled and digitised using analog-to-digital converters (ADCs), and a forward FFT is used to convert back to the frequency domain.
This returns $N$ parallel streams, each of which is converted to a binary stream using an appropriate symbol detector. These streams are then re-combined into a serial stream, which is an estimate of the original binary stream at the transmitter.
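The whole chain can be condensed into a short baseband simulation. This is a sketch under assumed parameters (64 subcarriers, QPSK, a hypothetical noiseless two-tap multipath channel); up- and down-conversion are omitted by working directly with the complex baseband signal:

import numpy as np

rng = np.random.default_rng(3)
N, G = 64, 8                                 # subcarriers, guard length (assumed)
h = np.array([1.0, 0.4])                     # hypothetical two-tap channel

bits = rng.integers(0, 2, size=2 * N)        # serial binary stream
X = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK

x = np.fft.ifft(X)                           # inverse FFT per block
tx = np.concatenate([x[-G:], x])             # add cyclic prefix

rx = np.convolve(tx, h)[:G + N]              # multipath channel (noiseless)
Y = np.fft.fft(rx[G:])                       # drop prefix, forward FFT

H = np.fft.fft(h, N)                         # channel frequency response
X_hat = Y / H                                # one-tap equalizer per subcarrier

assert np.allclose(X_hat, X)                 # original symbols recovered

The cyclic prefix is what turns the channel's linear convolution into a circular one, which is why the per-subcarrier division by the channel frequency response recovers the symbols exactly.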
If $N$ subcarriers are used, and each subcarrier is modulated using $M$ alternative symbols, the OFDM symbol alphabet consists of $M^N$ combined symbols.
The low-pass equivalent OFDM signal is expressed as:
$$\nu(t) = \sum_{k=0}^{N-1} X_k \, e^{i 2\pi k t / T}, \quad 0 \le t < T,$$
where $X_k$ are the data symbols, $N$ is the number of subcarriers, and $T$ is the OFDM symbol time. The subcarrier spacing of $1/T$ makes them orthogonal over each symbol period; this property is expressed as:
$$\frac{1}{T} \int_0^T \left(e^{i 2\pi k_1 t / T}\right)^{*} \left(e^{i 2\pi k_2 t / T}\right) dt = \delta_{k_1 k_2},$$
where $(\cdot)^{*}$ denotes the complex conjugate operator and $\delta_{k_1 k_2}$ is the Kronecker delta.
To avoid intersymbol interference in multipath fading channels, a guard interval of length $T_g$ is inserted prior to the OFDM block. During this interval, a "cyclic prefix" is transmitted such that the signal in the interval $-T_g \le t < 0$ equals the signal in the interval $T - T_g \le t < T$. The OFDM signal with cyclic prefix is thus:
$$\nu(t) = \sum_{k=0}^{N-1} X_k \, e^{i 2\pi k t / T}, \quad -T_g \le t < T.$$
The low-pass signal above can be either real or complex-valued. Real-valued low-pass equivalent signals are typically transmitted at baseband; wireline applications such as DSL use this approach. For wireless applications, the low-pass signal is typically complex-valued, in which case the transmitted signal is up-converted to a carrier frequency $f_c$. In general, the transmitted signal can be represented as:
$$s(t) = \Re\left\{\nu(t) \, e^{i 2\pi f_c t}\right\} = \sum_{k=0}^{N-1} |X_k| \cos\!\left(2\pi \left[f_c + \frac{k}{T}\right] t + \arg X_k\right).$$
OFDM is used in:
Key features of some common OFDM-based systems are presented in the following table.
OFDM is used in ADSL connections that follow the ANSI T1.413 and G.dmt (ITU G.992.1) standards, where it is called "discrete multitone modulation" (DMT). DSL achieves high-speed data connections on existing copper wires. OFDM is also used in the successor standards ADSL2, ADSL2+, VDSL, VDSL2, and G.fast. ADSL2 uses variable subcarrier modulation, ranging from BPSK to 32768QAM (in ADSL terminology this is referred to as bit-loading, or bit per tone, 1 to 15 bits per subcarrier).
Long copper wires suffer from attenuation at high frequencies. The fact that OFDM can cope with this frequency-selective attenuation and with narrowband interference is the main reason it is frequently used in applications such as ADSL modems.
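A hedged sketch of the per-tone "bit-loading" mentioned above: the gap-approximation rule shown here is a textbook simplification, not the exact procedure of the ADSL standards, and the 9.8 dB gap is an assumed value (the 15-bit cap mirrors the 1 to 15 bits per subcarrier mentioned earlier).

import math

def bits_per_tone(snr_linear: float, gap_db: float = 9.8, max_bits: int = 15) -> int:
    # Gap approximation: b = log2(1 + SNR / gap), clipped to the allowed range.
    gap = 10 ** (gap_db / 10)
    return max(0, min(int(math.log2(1 + snr_linear / gap)), max_bits))

# Tones with better SNR carry more bits; badly faded tones may carry none.
print([bits_per_tone(snr) for snr in (1, 10, 1e3, 1e6)])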
OFDM is used by many powerline devices to extend digital connections through power wiring. Adaptive modulation is particularly important with such a noisy channel as electrical wiring. Some medium speed smart metering modems, "Prime" and "G3" use OFDM at modest frequencies (30–100 kHz) with modest numbers of channels (several hundred) in order to overcome the intersymbol interference in the power line environment.
The IEEE 1901 standards include two incompatible physical layers that both use OFDM. The ITU-T G.hn standard, which provides high-speed local area networking over existing home wiring (power lines, phone lines and coaxial cables) is based on a PHY layer that specifies OFDM with adaptive modulation and a Low-Density Parity-Check (LDPC) FEC code.
OFDM is extensively used in wireless LAN and MAN applications, including IEEE 802.11a/g/n and WiMAX.
IEEE 802.11a/g/n, operating in the 2.4 and 5 GHz bands, specifies per-stream airside data rates ranging from 6 to 54 Mbit/s. If both devices can use "HT mode" (added with 802.11n), the top 20 MHz per-stream rate is increased to 72.2 Mbit/s, with the option of data rates between 13.5 and 150 Mbit/s using a 40 MHz channel. Four different modulation schemes are used: BPSK, QPSK, 16-QAM, and 64-QAM, along with a set of error correcting rates (1/2–5/6). The multitude of choices allows the system to adapt the optimum data rate for the current signal conditions.
OFDM is also now being used in the WiMedia/Ecma-368 standard for high-speed wireless personal area networks in the 3.1–10.6 GHz ultrawideband spectrum (see MultiBand-OFDM).
Much of Europe and Asia has adopted OFDM for terrestrial broadcasting of digital television (DVB-T, DVB-H and T-DMB) and radio (EUREKA 147 DAB, Digital Radio Mondiale, HD Radio and T-DMB).
By Directive of the European Commission, all television services transmitted to viewers in the European Community must use a transmission system that has been standardized by a recognized European standardization body, and such a standard has been developed and codified by the DVB Project, "Digital Video Broadcasting (DVB); Framing structure, channel coding and modulation for digital terrestrial television". Customarily referred to as DVB-T, the standard calls for the exclusive use of COFDM for modulation. DVB-T is now widely used in Europe and elsewhere for terrestrial digital TV.
The ground segments of the Digital Audio Radio Service (SDARS) systems used by XM Satellite Radio and Sirius Satellite Radio are transmitted using Coded OFDM (COFDM). The word "coded" comes from the use of forward error correction (FEC).
The question of the relative technical merits of COFDM versus 8VSB for terrestrial digital television has been a subject of some controversy, especially between European and North American technologists and regulators. The United States has rejected several proposals to adopt the COFDM-based DVB-T system for its digital television services, and has instead opted for 8VSB (vestigial sideband modulation) operation.
One of the major benefits provided by COFDM is in rendering radio broadcasts relatively immune to multipath distortion and signal fading due to atmospheric conditions or passing aircraft. Proponents of COFDM argue it resists multipath far better than 8VSB. Early 8VSB DTV (digital television) receivers often had difficulty receiving a signal. Also, COFDM allows single-frequency networks, which is not possible with 8VSB.
However, newer 8VSB receivers are far better at dealing with multipath, hence the difference in performance may diminish with advances in equalizer design.
COFDM is also used for other radio standards: for Digital Audio Broadcasting (DAB), the standard for digital audio broadcasting at VHF frequencies; for Digital Radio Mondiale (DRM), the standard for digital broadcasting at shortwave and medium-wave frequencies (below 30 MHz); and for DRM+, a more recently introduced standard for digital audio broadcasting at VHF frequencies (30 to 174 MHz).
The USA again uses an alternate standard, a proprietary system developed by iBiquity dubbed "HD Radio". However, it uses COFDM as the underlying broadcast technology to add digital audio to AM (medium wave) and FM broadcasts.
Both Digital Radio Mondiale and HD Radio are classified as in-band on-channel systems, unlike Eureka 147 (DAB: Digital Audio Broadcasting) which uses separate VHF or UHF frequency bands instead.
The "band-segmented transmission orthogonal frequency division multiplexing" ("BST-OFDM") system proposed for Japan (in the ISDB-T, ISDB-TSB, and ISDB-C broadcasting systems) improves upon COFDM by exploiting the fact that some OFDM carriers may be modulated differently from others within the same multiplex. Some forms of COFDM already offer this kind of hierarchical modulation, though BST-OFDM is intended to make it more flexible. The 6 MHz television channel may therefore be "segmented", with different segments being modulated differently and used for different services.
It is possible, for example, to send an audio service on a segment that includes a segment composed of a number of carriers, a data service on another segment and a television service on yet another segment—all within the same 6 MHz television channel. Furthermore, these may be modulated with different parameters so that, for example, the audio and data services could be optimized for mobile reception, while the television service is optimized for stationary reception in a high-multipath environment.
Ultra-wideband (UWB) wireless personal area network technology may also use OFDM, such as in Multiband OFDM (MB-OFDM). This UWB specification is advocated by the WiMedia Alliance (formerly by both the Multiband OFDM Alliance [MBOA] and the WiMedia Alliance, but the two have now merged), and is one of the competing UWB radio interfaces.
"Fast low-latency access with seamless handoff orthogonal frequency division multiplexing" (Flash-OFDM), also referred to as F-OFDM, was based on OFDM and also specified higher protocol layers. It was developed by Flarion, and purchased by Qualcomm in January 2006. Flash-OFDM was marketed as a packet-switched cellular bearer, to compete with GSM and 3G networks. As an example, 450 MHz frequency bands previously used by NMT-450 and C-Net C450 (both 1G analogue networks, now mostly decommissioned) in Europe are being licensed to Flash-OFDM operators.
In Finland, the license holder Digita began deploying a nationwide "@450" wireless network in parts of the country in April 2007. It was purchased by Datame in 2011. In February 2012 Datame announced it would upgrade the 450 MHz network to the competing CDMA2000 technology.
Slovak Telekom in Slovakia offers Flash-OFDM connections with a maximum downstream speed of 5.3 Mbit/s, and a maximum upstream speed of 1.8 Mbit/s, with a coverage of over 70 percent of Slovak population. The Flash-OFDM network was switched off in the majority of Slovakia on 30 September 2015.
T-Mobile Germany used Flash-OFDM to backhaul Wi-Fi HotSpots on the Deutsche Bahn's ICE high speed trains between 2005 and 2015, until switching over to UMTS and LTE.
American wireless carrier Nextel Communications field tested wireless broadband network technologies including Flash-OFDM in 2005. Sprint purchased the carrier in 2006 and decided to deploy the mobile version of WiMAX, which is based on Scalable Orthogonal Frequency Division Multiple Access (SOFDMA) technology.
Citizens Telephone Cooperative launched a mobile broadband service based on Flash-OFDM technology to subscribers in parts of Virginia in March 2006. The maximum speed available was 1.5 Mbit/s. The service was discontinued on April 30, 2009.
OFDM has become an interesting technique for power line communications (PLC). In this area of research, a wavelet transform is introduced to replace the DFT as the method of creating orthogonal frequencies. This is due to the advantages wavelets offer, which are particularly useful on noisy power lines.
Instead of using an IDFT to create the sender signal, the wavelet OFDM uses a synthesis bank consisting of a formula_36-band transmultiplexer followed by the transform function
On the receiver side, an analysis bank is used to demodulate the signal again. This bank contains an inverse transform
followed by another formula_36-band transmultiplexer. The relationship between both transform functions is
An example of W-OFDM uses the Perfect Reconstruction Cosine Modulated Filter Bank (PR-CMFB), and the Extended Lapped Transform (ELT) is used for the wavelet transform. Thus, formula_42 and formula_43 are given as
These two functions are their respective inverses, and can be used to modulate and demodulate a given input sequence. Just as in the case of DFT, the wavelet transform creates orthogonal waves with formula_47, formula_48, ..., formula_49. The orthogonality ensures that they do not interfere with each other and can be sent simultaneously. At the receiver, formula_50, formula_51, ..., formula_52 are used to reconstruct the data sequence once more.
W-OFDM is an evolution of the standard OFDM, with certain advantages.
Mainly, the sidelobe levels of W-OFDM are lower. This results in less ICI, as well as greater robustness to narrowband interference. These two properties are especially useful in PLC, where most of the lines are not shielded against EM noise, which creates noisy channels and noise spikes.
A comparison between the two modulation techniques also reveals that the complexity of both algorithms remains approximately the same. | https://en.wikipedia.org/wiki?curid=22691 |
Operator overloading
In computer programming, operator overloading, sometimes termed "operator ad hoc polymorphism", is a specific case of polymorphism, where different operators have different implementations depending on their arguments. Operator overloading is generally defined by a programming language, a programmer, or both.
Operator overloading is syntactic sugar, and is used because it allows programming using notation nearer to the target domain and allows user-defined types a similar level of syntactic support as types built into a language. It is common, for example, in scientific computing, where it allows computing representations of mathematical objects to be manipulated with the same syntax as on paper.
Operator overloading does not change the expressive power of a language (with functions), as it can be emulated using function calls. For example, consider variables codice_1 of some user-defined type, such as matrices:
codice_2
In a language that supports operator overloading, and with the usual assumption that the '*' operator has higher precedence than '+' operator, this is a concise way of writing:
codice_3
However, the former syntax reflects common mathematical usage.
In this case, the addition operator is overloaded to allow addition on a user-defined type "Time" (in C++):
Time operator+(const Time& lhs, const Time& rhs) {
    return Time(lhs.seconds + rhs.seconds);  // "seconds" is a hypothetical member
}
Addition is a binary operation, which means it has two operands. In C++, the arguments being passed are the operands, and the codice_4 object is the returned value.
The operation could also be defined as a class method, replacing codice_5 by the hidden codice_6 argument; however this forces the left operand to be of type codice_7:
// This "const" means that |this| is not modified.
// ------------------------------------\
// V
Time Time::operator+(const Time& rhs) const {
    return Time(seconds + rhs.seconds);  // "seconds" is a hypothetical member
}
Note that a unary operator defined as a class method would receive no apparent argument (it only works from codice_6):
bool Time::operator!() const {
    return seconds == 0;  // hypothetical: a zero duration is treated as "false"
}
Operator overloading has often been criticized because it allows programmers to reassign the semantics of operators depending on the types of their operands. For example, the use of the codice_9 operator in C++:
a << 1
shifts the bits in the variable a left by 1 bit if a is of an integer type, but if a is an output stream then the above code will attempt to write a "1" to the stream. Because operator overloading allows the original programmer to change the usual semantics of an operator and to catch any subsequent programmers by surprise, it is considered good practice to use operator overloading with care (the creators of Java decided not to use this feature, although not necessarily for this reason).
Another, more subtle, issue with operators is that certain rules from mathematics can be wrongly expected or unintentionally assumed. For example, the commutativity of + (i.e., that codice_10) does not always apply; an example of this occurs when the operands are strings, since + is commonly overloaded to perform a concatenation of strings (i.e., codice_11 yields codice_12, while codice_13 yields codice_14). A typical counter to this argument comes directly from mathematics: while + is commutative on integers (and more generally any complex numbers), it is not commutative for other "types" of variable. In practice, + is not even associative with floating-point values, due to rounding errors. Another example: in mathematics, multiplication is commutative for real and complex numbers but not for matrix multiplication.
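The same points can be shown compactly in Python, where "+" is overloaded by defining __add__ (the Time class and its seconds field here are illustrative, mirroring the earlier C++ example):

class Time:
    def __init__(self, seconds: int):
        self.seconds = seconds

    def __add__(self, other: "Time") -> "Time":
        # Overloaded "+": add the underlying representations.
        return Time(self.seconds + other.seconds)

print((Time(90) + Time(30)).seconds)   # 120

# Overloaded "+" need not be commutative: string concatenation.
assert "a" + "b" != "b" + "a"
# Nor is floating-point "+" associative, due to rounding.
assert (1e16 + 1.0) + 1.0 != 1e16 + (1.0 + 1.0)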
A classification of some common programming languages is made according to whether their operators are overloadable by the programmer and whether the operators are limited to a predefined set.
The ALGOL 68 specification allowed operator overloading.
Extract from the ALGOL 68 language specification (page 177) where the overloaded operators ¬, =, ≠, and abs are defined:
Note that no special declaration is needed to "overload" an operator, and the programmer is free to create new operators.
Ada has supported overloading of operators since its inception, with the publication of the Ada 83 language standard. However, the language designers chose to preclude the definition of new operators. Only extant operators in the language may be overloaded, by defining new functions with identifiers such as "+", "*", "&" etc. Subsequent revisions of the language (in 1995 and 2005) maintained the restriction to overloading of extant operators.
In C++, operator overloading is more refined than in ALGOL 68.
The Java language designers at Sun Microsystems chose to omit operator overloading.
Ruby allows operator overloading as syntactic sugar for simple method calls.
Lua allows operator overloading as syntactic sugar for method calls with the added feature that if the first operand doesn't define that operator, the method for the second operand will be used.
Microsoft added operator overloading to C# in 2001 and to Visual Basic .NET in 2003.
Scala treats all operators as methods and thus allows operator overloading by proxy.
In Raku, the definition of all operators is delegated to lexical functions, and so, using function definitions, operators can be overloaded or new operators added. For example, the function defined in the Rakudo source for incrementing a Date object with "+" is:
multi infix:<+>(Date:D $d, Int:D $x) {
    # Returns a new Date advanced by $x days (body completed as a sketch).
    Date.new-from-daycount($d.daycount + $x)
}
Since "multi" was used, the function gets added to the list of multidispatch candidates, and "+" is only overloaded for the case where the type constraints in the function signature are met.
While the capacity for overloading includes +, *, >=, the postfix and term i, and so on, it also allows for overloading various brace operators: "[x, y]", "x[ y ]", "x{ y }", and "x( y )".
Kotlin has supported operator overloading since its creation. | https://en.wikipedia.org/wiki?curid=22693 |
Omphalos hypothesis
The omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is entirely due to the creator introducing false evidence that makes the universe appear much, much older.
The idea was named after the title of an 1857 book, "Omphalos" by Philip Henry Gosse, in which Gosse argued that in order for the world to be "functional", God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with hair, fingernails, and navels (ὀμφαλός "omphalos" is Greek for "navel"), and that therefore "no" empirical evidence about the age of the Earth or universe can be taken as reliable.
Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence.
The idea was widely rejected in the 19th century, when Gosse published his book. It saw some revival in the 20th century by some Young Earth creationists, who extended the argument to include visible light that appears to originate in far-off stars and galaxies.
Stories of the beginning of human life based on the creation story in Genesis have been published for centuries. The 4th-century theologian Ephrem the Syrian described a world in which divine creation instantly produced fully grown organisms:
By the 19th century, scientific evidence of the Earth's age had been collected, and it disagreed with a literal reading of the biblical accounts. This evidence was rejected by some writers at the time, such as François-René de Chateaubriand. Chateaubriand wrote in his 1802 book, "Génie du christianisme" (Part I Book IV Chapter V) that "God might have created, and doubtless did create, the world with all the marks of antiquity and completeness which it now exhibits." In modern times, Rabbi Dovid Gottlieb supported a similar position, saying that the objective scientific evidence for an old universe is strong, but wrong, and that the traditional Jewish calendar is correct.
In the middle of the 19th century, the disagreement between scientific evidence about the age of the Earth and the Western religious traditions was a significant debate among intellectuals. Gosse published "Omphalos" in 1857 to explain his answer to this question. He concluded that the religious tradition was correct. Gosse began with the earlier idea that the Earth contained mature organisms at the instant they were created, and that these organisms had false signs of their development, such as hair on mammals, which grows over time. He extended this idea of creating a single mature organism to creating mature systems, and concluded that fossils were an artifact of the creation process and merely part of what was necessary to make creation work. Therefore, he reasoned, fossils and other signs of the Earth's age could not be used to prove the age. His book sold poorly and was widely rejected.
Other contemporary proposals for reconciling the stories of creation in Genesis with the scientific evidence included the "interval theory" or gap theory of creation, in which a large interval of time passed in between the initial creation of the universe and the beginning of the six days of creation. This idea was put forward by Archbishop John Bird Sumner of Canterbury in "Treatise on the Records of Creation". Another popular idea, promoted by the English theologian John Pye Smith, was that the Garden of Eden described the events of only one small location. A third proposal, by French naturalist Georges-Louis Leclerc, Comte de Buffon, held that the six "days" of the creation story were arbitrary and large ages rather than 24-hour periods.
Theologians rejected Gosse's proposal on the grounds that it seemed to make the divine creator tell lies – either lying in the scriptures, or lying in nature. Scientists rejected it on the grounds that it disagreed with uniformitarianism, an explanation of geology that was widely supported at the time, and that it was impossible to test or falsify.
Some modern creationists still argue against scientific evidence in the same way. For instance, John D. Morris, president of the Institute for Creation Research wrote in 1990 about the "appearance of age":
He does not extend this idea to the geological record, preferring to believe that it was all created in the Flood, but others such as Gerald E. Aardsma go further, with his idea of "virtual history". This appears to suggest that events after the creation have changed the "virtual history" we now see, including the fossils:
The past president of the Missouri Association for Creation has said:
Though Gosse's original omphalos hypothesis specifies a popular creation story, others have proposed that the idea does not preclude creation as recently as five minutes ago, with memories of earlier times created "in situ". This idea is sometimes called Last Thursdayism by its opponents, as in "the world might as well have been created last Thursday."
The concept is both unverifiable and unfalsifiable through any conceivable scientific study—in other words, it is impossible even "in principle" to subject it to any form of test, by reference to any empirical data, because the empirical data themselves are considered to have been arbitrarily created to look the way they do at every observable level of detail.
From a religious viewpoint, it can be interpreted as God having "created a fake", such as illusions of light in space of stellar explosions (supernovae) that never really happened, or volcanic mountains that were never really volcanoes in the first place and that never actually experienced erosion.
This conception has therefore drawn harsh rebuke from some theologians. Reverend Canon Brian Hebblethwaite, for example, preached against Bertrand Russell's Five-minute hypothesis:
The basis for Hebblethwaite's objection, however, is the presumption of a God that would not deceive people about their very humanity—an unprovable presumption that the omphalos hypothesis rejects at the outset. Hebblethwaite also suggests that God necessarily had to create certain elements of the Universe in combination with the creation of man:
In a rebuttal of the claim that God might have implanted a false history of the age of the Universe in order to test our faith in the truth of the Torah, Rabbi Natan Slifkin, an author whose works have been banned by several Haredi rabbis for going against the tenets of the Talmud, writes:
Redshift refers to the change in the wavelength of light received from objects moving away from us (the motion lengthens the wavelengths, producing a shift towards the red). Scientists interpret the redshift in light received from other galaxies as evidence that the galaxies are moving away from our own, that some galaxies are billions of light-years distant from the Milky Way, and that therefore the light has been traveling for billions of years, requiring a universe billions of years in age.
According to the omphalos view, God created the redshift in light received from other galaxies in order to fool humans (beginning in the 20th century, but not before that time) into thinking that the universe is billions of years old. Among the many problems with this idea (including the lack of any evidence and the lack of reference to the phenomenon in the Bible) is that it would require God to have adjusted the shift in exquisitely precise ways for each of the billions of individual galaxies, and to have done so to fool humans about the age of the universe in a way that was not detectable by humans until the 20th century.
The five-minute hypothesis is a skeptical hypothesis put forth by the philosopher Bertrand Russell that proposes that the universe sprang into existence five minutes ago from nothing, with human memory and all other signs of history included. It is a commonly used example of how one may maintain extreme philosophical skepticism with regard to memory.
Jorge Luis Borges, in his 1940 work, "Tlön, Uqbar, Orbis Tertius", describes a fictional world in which some essentially follow as a religious belief a philosophy much like Russell's discussion on the logical extreme of Gosse's theory:
Borges had earlier written a short essay, "The Creation and P. H. Gosse" that explored the rejection of Gosse's "Omphalos". Borges argued that its unpopularity stemmed from Gosse's explicit (if inadvertent) outlining of what Borges characterized as absurdities in the Genesis story. | https://en.wikipedia.org/wiki?curid=22700 |
Origen
Origen of Alexandria ( 184 – 253), also known as Origen Adamantius, was an early Christian scholar, ascetic, and theologian who was born and spent the first half of his career in Alexandria. He was a prolific writer who wrote roughly 2,000 treatises in multiple branches of theology, including textual criticism, biblical exegesis and biblical hermeneutics, homiletics, and spirituality. He was one of the most influential figures in early Christian theology, apologetics, and asceticism. He has been described as "the greatest genius the early church ever produced".
Origen sought martyrdom with his father at a young age but was prevented from turning himself in to the authorities by his mother. When he was eighteen years old, Origen became a catechist at the Catechetical School of Alexandria. He devoted himself to his studies and adopted an ascetic lifestyle as both a vegetarian and teetotaler. He came into conflict with Demetrius, the bishop of Alexandria, in 231 after he was ordained as a presbyter by his friend, the bishop of Caesarea, while on a journey to Athens through Palestine. Demetrius condemned Origen for insubordination and accused him of having castrated himself and of having taught that even Satan would eventually attain salvation, an accusation which Origen vehemently denied. Origen founded the Christian School of Caesarea, where he taught logic, cosmology, natural history, and theology, and became regarded by the churches of Palestine and Arabia as the ultimate authority on all matters of theology. He was tortured for his faith during the Decian persecution in 250 and died three to four years later from his injuries.
Origen was able to produce a massive quantity of writings because of the patronage of his close friend Ambrose, who provided him with a team of secretaries to copy his works, making him one of the most prolific writers in all of antiquity. His treatise "On the First Principles" systematically laid out the principles of Christian theology and became the foundation for later theological writings. He also authored "Contra Celsum", the most influential work of early Christian apologetics, in which he defended Christianity against the pagan philosopher Celsus, one of its foremost early critics. Origen produced the "Hexapla", the first critical edition of the Hebrew Bible, which contained the original Hebrew text as well as five different Greek translations of it, all written in columns, side-by-side. He wrote hundreds of homilies covering almost the entire Bible, interpreting many passages as allegorical. Origen taught that, before the creation of the material universe, God had created the souls of all the intelligent beings. These souls, at first fully devoted to God, fell away from him and were given physical bodies. Origen was the first to propose the ransom theory of atonement in its fully developed form and, though he was probably a subordinationist, he also significantly contributed to the development of the concept of the Trinity. Origen hoped that all people might eventually attain salvation but was always careful to maintain that this was only speculation. He defended free will and advocated Christian pacifism.
Origen is a Church Father and is widely regarded as one of the most important Christian theologians of all time. His teachings were especially influential in the east, with Athanasius of Alexandria and the three Cappadocian Fathers being among his most devoted followers. Argument over the orthodoxy of Origen's teachings spawned the First Origenist Crisis in the late fourth century, in which he was attacked by Epiphanius of Salamis and Jerome but defended by Tyrannius Rufinus and John of Jerusalem. In 543, Emperor Justinian I condemned him as a heretic and ordered all his writings to be burned. The Second Council of Constantinople in 553 may have anathematized Origen, or it may have only condemned certain heretical teachings which claimed to be derived from Origen. His teachings on the pre-existence of souls were rejected by the Church.
Almost all information about Origen's life comes from a lengthy biography of him in Book VI of the "Ecclesiastical History" written by the Christian historian Eusebius ( 260 – 340). Eusebius portrays Origen as the perfect Christian scholar and as a literal saint. Eusebius, however, wrote this account almost fifty years after Origen's death and had access to few reliable sources on Origen's life, especially his early years. Anxious for more material about his hero, Eusebius recorded events based on only unreliable hearsay evidence and frequently made speculative inferences about Origen based on the sources he had available. Nonetheless, scholars can reconstruct a general impression of Origen's historical life by sorting out the parts of Eusebius' account that are accurate from those that are inaccurate.
Origen was born in either 185 or 186 AD in Alexandria. According to Eusebius, Origen's father was Leonides of Alexandria, a respected professor of literature and also a devout Christian who practiced his religion openly. Joseph Wilson Trigg deems the details of this report unreliable but states that Origen's father was certainly "a prosperous and thoroughly Hellenized bourgeois". According to John Anthony McGuckin, Origen's mother, whose name is unknown, may have been a member of the lower class who did not have the right of citizenship. It is likely that, on account of his mother's status, Origen was not a Roman citizen. Origen's father taught him about literature and philosophy and also about the Bible and Christian doctrine. Eusebius states that Origen's father made him memorize passages of scripture daily. Trigg accepts this tradition as possibly genuine, given Origen's ability as an adult to recite extended passages of scripture at will. Eusebius also reports that Origen became so learned about the holy scriptures at an early age that his father was unable to answer his questions.
In 202, when Origen was "not yet seventeen", the Roman Emperor Septimius Severus ordered Roman citizens who openly practiced Christianity to be executed. Origen's father Leonides was arrested and thrown in prison. Eusebius reports that Origen wanted to turn himself in to the authorities so they would execute him as well, but his mother hid all his clothes and he was unable to go to the authorities since he refused to leave the house naked. According to McGuckin, even if Origen had turned himself in, it is unlikely that he would have been punished, since the emperor was only intent on executing Roman citizens. Origen's father was beheaded, and the state confiscated the family's entire property, leaving them broken and impoverished. Origen was the eldest of nine children, and as his father's heir, it became his responsibility to provide for the whole family.
When he was eighteen years old, Origen was appointed as a catechist at the Catechetical School of Alexandria. Many scholars have assumed that Origen became the head of the school, but according to McGuckin, this is highly improbable and it is more likely that he was simply given a paid teaching position, perhaps as a "relief effort" for his destitute family. While employed at the school, he adopted the ascetic lifestyle of the Greek Sophists. He spent the whole day teaching and would stay up late at night writing treatises and commentaries. He went barefoot and only owned one cloak. He was a teetotaler and a vegetarian, and he often fasted for long periods of time. Although Eusebius goes to great lengths to portray Origen as one of the Christian monastics of his own era, this portrayal is now generally recognized as anachronistic.
According to Eusebius, as a young man, Origen was taken in by a wealthy Gnostic woman, who was also the patron of a very influential Gnostic theologian from Antioch, who frequently lectured in her home. Eusebius goes to great lengths to insist that, although Origen studied while in her home, he never once "prayed in common" with her or the Gnostic theologian. Later, Origen succeeded in converting a wealthy man named Ambrose from Valentinian Gnosticism to orthodox Christianity. Ambrose was so impressed by the young scholar that he gave Origen a house, a secretary, seven stenographers, a crew of copyists and calligraphers, and paid for all of his writings to be published.
Sometime when he was in his early twenties, Origen sold the small library of Greek literary works which he had inherited from his father for a sum which netted him a daily income of four obols. He used this money to continue his study of the Bible and philosophy. Origen studied at numerous schools throughout Alexandria, including the Platonic Academy of Alexandria, where he was a student of Ammonius Saccas. Eusebius claims that Origen studied under Clement of Alexandria, but according to McGuckin, this is almost certainly a retrospective assumption based on the similarity of their teachings. Origen rarely mentions Clement in his own writings, and when he does, it is usually to correct him.
Eusebius claims that, as a young man, following a literal misreading of Matthew 19:12, in which Jesus is presented as saying "there are eunuchs who have made themselves eunuchs for the sake of the kingdom of heaven", Origen went to a physician and paid him to surgically remove his genitals in order to ensure his reputation as a respectable tutor to young men and women. Eusebius further alleges that Origen privately told Demetrius, the bishop of Alexandria, about the castration and that Demetrius initially praised him for his devotion to God on account of it. Origen, however, never mentions anything about having castrated himself in any of his surviving writings, and in his exegesis of this verse in his "Commentary on the Gospel of Matthew", written near the end of his life, he strongly condemns any literal interpretation of Matthew 19:12, asserting that only an idiot would interpret the passage as advocating literal castration.
Since the beginning of the twentieth century, some scholars have questioned the historicity of Origen's self-castration, with many seeing it as a wholesale fabrication. Trigg states that Eusebius' account of Origen's self-castration is certainly true, because Eusebius, an ardent admirer of Origen who nonetheless clearly describes the castration as an act of pure folly, would have had no motive to pass on a piece of information that might tarnish Origen's reputation unless it was "notorious and beyond question." Trigg sees Origen's condemnation of the literal interpretation of Matthew 19:12 as him "tacitly repudiating the literalistic reading he had acted on in his youth."
In sharp contrast, McGuckin dismisses Eusebius's story of Origen's self-castration as "hardly credible", seeing it as a deliberate attempt by Eusebius to distract from more serious questions regarding the orthodoxy of Origen's teachings. McGuckin also states, "We have no indication that the motive of castration for respectability was ever regarded as standard by a teacher of mixed-gender classes." He adds that Origen's female students (whom Eusebius lists by name) would have been accompanied by attendants at all times, meaning Origen would have had no good reason to think that anyone would suspect him of impropriety. Henry Chadwick argues that, while Eusebius's story may be true, it seems unlikely, given that Origen's exposition of Matthew 19:12 "strongly deplored any literal interpretation of the words". Instead, Chadwick suggests, "Perhaps Eusebius was uncritically reporting malicious gossip retailed by Origen's enemies, of whom there were many." However, many noted historians, such as Peter Brown and William Placher, continue to find no reason to conclude that the story is false. Placher theorizes that, if it is true, it may have followed an episode in which Origen received some raised eyebrows while privately tutoring a woman.
In his early twenties, Origen became less interested in being a grammarian and more interested in being a rhetor-philosopher. He gave his job as a catechist to his younger colleague Heraclas. Meanwhile, Origen began to style himself as a "master of philosophy". Origen's new position as a self-styled Christian philosopher brought him into conflict with Demetrius, the bishop of Alexandria. Demetrius was a charismatic leader who ruled the Christian congregation of Alexandria with an iron fist, and he was the one who was most directly responsible for the elevation of the bishop of Alexandria; prior to Demetrius, the bishop of Alexandria had merely been a priest who was elected to represent his fellows, but after Demetrius, the bishop was seen as clearly a rank higher than his fellow priests. By styling himself as an independent philosopher, Origen was reviving a role that had been prominent in earlier Christianity but which challenged the authority of the now-powerful bishop.
Meanwhile, Origen began composing his massive theological treatise "On the First Principles", a landmark book which systematically laid out the foundations of Christian theology for centuries to come. Origen also began travelling abroad to visit schools across the Mediterranean. In 212, he travelled to Rome, which was a major center of philosophy at the time. In Rome, Origen attended lectures by Hippolytus of Rome and was influenced by his "logos" theology. In 213 or 214, the governor of Arabia sent a message to the prefect of Egypt requesting him to send Origen to meet with him so that he could interview him and learn more about Christianity from its leading intellectual. Origen was escorted by official bodyguards and spent a short time in Arabia with the governor before returning to Alexandria.
In the autumn of 215, Roman Emperor Caracalla visited Alexandria. During the visit, the students at the schools there protested and made fun of him for having murdered his brother Geta. Caracalla was incensed and ordered his troops to ravage the city, execute the governor, and kill all the protesters. He also commanded them to expel all the teachers and intellectuals from the city. Origen fled Alexandria and travelled to the city of Caesarea Maritima in the Roman province of Palestine, where the bishops Theoctistus of Caesarea and Alexander of Jerusalem became his devoted admirers and asked him to deliver discourses on the scriptures in their respective churches. This effectively amounted to letting Origen deliver homilies, even though he was not formally ordained. While this was an unexpected phenomenon, especially given Origen's international fame as a teacher and philosopher, it infuriated Demetrius, who saw it as a direct undermining of his authority. Demetrius sent deacons from Alexandria to demand that the Palestinian hierarchs immediately return "his" catechist to Alexandria. He also issued a decree chastising the Palestinians for allowing a person who was not ordained to preach. The Palestinian bishops, in turn, issued their own condemnation, accusing Demetrius of being jealous of Origen's fame and prestige.
Origen obeyed Demetrius's order and returned to Alexandria, bringing with him an antique scroll he had purchased at Jericho containing the full text of the Hebrew Bible. The manuscript, which had purportedly been found "in a jar", became the source text for one of the two Hebrew columns in Origen's "Hexapla". Origen studied the Old Testament in great depth; Eusebius even claims that Origen learned Hebrew. Most modern scholars agree that this is implausible, but they disagree on how much Origen actually knew about the language. H. Lietzmann concludes that Origen probably only knew the Hebrew alphabet and not much else; whereas, R. P. C. Hanson and G. Bardy argue that Origen had a superficial understanding of the language but not enough to have composed the entire "Hexapla". A note in Origen's "On the First Principles" mentions an unknown "Hebrew master", but this was probably a consultant, not a teacher.
Origen also studied the entire New Testament, but especially the epistles of the apostle Paul and the Gospel of John, the writings which Origen regarded as the most important and authoritative. At Ambrose's request, Origen composed the first five books of his exhaustive "Commentary on the Gospel of John". He also wrote the first eight books of his "Commentary on Genesis", his "Commentary on Psalms 1-25", and his "Commentary on Lamentations". In addition to these commentaries, Origen also wrote two books on the resurrection of Jesus and ten books of "Stromata". It is likely that these works contained much theological speculation, which brought Origen into even greater conflict with Demetrius.
Origen repeatedly asked Demetrius to ordain him as a priest, but Demetrius continually refused. In around 231, Demetrius sent Origen on a mission to Athens. Along the way, Origen stopped in Caesarea, where he was warmly greeted by the bishops Theoctistus and Alexander of Jerusalem, who had become his close friends during his previous stay. While he was visiting Caesarea, Origen asked Theoctistus to ordain him as a priest. Theoctistus gladly complied. Upon learning of Origen's ordination, Demetrius was outraged and issued a condemnation declaring that Origen's ordination by a foreign bishop was an act of insubordination.
Eusebius reports that as a result of Demetrius's condemnations, Origen decided not to return to Alexandria and to instead take up permanent residence in Caesarea. John Anthony McGuckin, however, argues that Origen had probably already been planning to stay in Caesarea. The Palestinian bishops declared Origen the chief theologian of Caesarea. Firmilian, the bishop of Caesarea Mazaca in Cappadocia, was such a devoted disciple of Origen that he begged him to come to Cappadocia and teach there.
Demetrius raised a storm of protests against the bishops of Palestine and the church synod in Rome. According to Eusebius, Demetrius published the allegation that Origen had secretly castrated himself, a capital offense under Roman law at the time and one which would have made Origen's ordination invalid, since eunuchs were forbidden from becoming priests. Demetrius also alleged that Origen had taught an extreme form of "apokatastasis", which held that all beings, including even Satan himself, would eventually attain salvation. This allegation probably arose from a misunderstanding of Origen's argument during a debate with the Valentinian Gnostic teacher Candidus. Candidus had argued in favor of predestination by declaring that the Devil was beyond salvation. Origen had responded by arguing that, if the Devil is destined for eternal damnation, it was on account of his actions, which were the result of his own free will. Therefore, Origen had declared that Satan was only morally reprobate, not absolutely reprobate.
Demetrius died in 232, less than a year after Origen's departure from Alexandria. The accusations against Origen faded with the death of Demetrius, but they did not disappear entirely and they continued to haunt him for the rest of his career. Origen defended himself in his "Letter to Friends in Alexandria", in which he vehemently denied that he had ever taught that the Devil would attain salvation and insisted that the very notion of the Devil attaining salvation was simply ludicrous.
During his early years in Caesarea, Origen's primary task was the establishment of a Christian school; Caesarea had long been seen as a center of learning for Jews and Hellenistic philosophers, but until Origen's arrival, it had lacked a Christian center of higher education. According to Eusebius, the school Origen founded was primarily targeted towards young pagans who had expressed interest in Christianity but were not yet ready to ask for baptism. The school therefore sought to explain Christian teachings through Middle Platonism. Origen started his curriculum by teaching his students classical Socratic reasoning. After they had mastered this, he taught them cosmology and natural history. Finally, once they had mastered all of these subjects, he taught them theology, which was the highest of all philosophies, the accumulation of everything they had learned previously.
With the establishment of the Caesarean school, Origen's reputation as a scholar and theologian reached its zenith and he became known throughout the Mediterranean world as a brilliant intellectual. The hierarchs of the Palestinian and Arabian church synods regarded Origen as the ultimate expert on all matters dealing with theology. While teaching in Caesarea, Origen resumed work on his "Commentary on John", composing at least books six through ten. In the first of these books, Origen compares himself to "an Israelite who has escaped the perverse persecution of the Egyptians." Origen also wrote the treatise "On Prayer" at the request of his friend Ambrose and his "sister" Tatiana, in which he analyzes the different types of prayers described in the Bible and offers a detailed exegesis on the Lord's Prayer.
Pagans, too, were fascinated by Origen. The Neoplatonist philosopher Porphyry heard of Origen's fame and travelled to Caesarea to listen to his lectures. Porphyry recounts that Origen had extensively studied the teachings of Pythagoras, Plato, and Aristotle, but also those of important Middle Platonists, Neopythagoreans, and Stoics, including Numenius of Apamea, Chronius, Apollophanes, Longinus, Moderatus of Gades, Nicomachus, Chaeremon, and Cornutus. Nonetheless, Porphyry accused Origen of having betrayed true philosophy by subjugating its insights to the exegesis of the Christian scriptures. Eusebius reports that Origen was summoned from Caesarea to Antioch at the behest of Julia Avita Mamaea, the mother of Roman Emperor Severus Alexander, "to discuss Christian philosophy and doctrine with her."
In 235, approximately three years after Origen began teaching in Caesarea, Severus Alexander, who had been tolerant towards Christians, was murdered and Emperor Maximinus Thrax instigated a purge of all those who had supported his predecessor. His pogroms targeted Christian leaders and, in Rome, Pope Pontianus and Hippolytus of Rome were both sent into exile. Origen knew that he was in danger and went into hiding in the home of a faithful Christian woman named Juliana the Virgin, who had been a student of the Ebionite leader Symmachus. Origen's close friend and longtime patron Ambrose was arrested in Nicomedia, and Protoctetes, the leading priest in Caesarea, was also arrested. In their honor, Origen composed his treatise "Exhortation to Martyrdom", which is now regarded as one of the greatest classics of Christian resistance literature. After coming out of hiding following Maximinus' death, Origen founded a school where Gregory Thaumaturgus, later bishop of Pontus, was one of the pupils. He preached regularly on Wednesdays and Fridays, and later daily.
Sometime between 238 and 244, Origen visited Athens, where he completed his "Commentary on the Book of Ezekiel" and began writing his "Commentary on the Song of Songs". After visiting Athens, he visited Ambrose in Nicomedia. According to Porphyry, Origen also travelled to Rome or Antioch, where he met Plotinus, the founder of Neoplatonism. The Christians of the eastern Mediterranean continued to revere Origen as the most orthodox of all theologians, and when the Palestinian hierarchs learned that Beryllus, the bishop of Bostra and one of the most energetic Christian leaders of the time, had been preaching adoptionism (i.e., belief that Jesus was born human and only became divine after his baptism), they sent Origen to convert him to orthodoxy. Origen engaged Beryllus in a public disputation, which went so successfully that Beryllus promised to only teach Origen's theology from then on. On another occasion, a Christian leader in Arabia named Heracleides began teaching that the soul was mortal and that it perished with the body. Origen refuted these teachings, arguing that the soul is immortal and can never die.
In 249, the Plague of Cyprian broke out. In 250, Emperor Decius, believing that the plague was caused by Christians' failure to recognise him as divine, issued a decree for Christians to be persecuted. This time Origen did not escape. Eusebius recounts how Origen suffered "bodily tortures and torments under the iron collar and in the dungeon; and how for many days with his feet stretched four spaces in the stocks". The governor of Caesarea gave very specific orders that Origen was not to be killed until he had publicly renounced his faith in Christ. Origen endured two years of imprisonment and torture but obstinately refused to renounce his faith. In June 251, Decius was killed fighting the Goths in the Battle of Abritus, and Origen was released from prison. Nonetheless, Origen's health was broken by the physical tortures enacted on him, and he died less than a year later at the age of sixty-nine. A later legend, recounted by Jerome and numerous itineraries, places his death and burial at Tyre, but little value can be attached to this.
Origen was an extremely prolific writer. According to Epiphanius, he wrote a grand total of roughly 6,000 works over the course of his lifetime. Most scholars agree that this estimate is probably somewhat exaggerated. According to Jerome, Eusebius listed the titles of just under 2,000 treatises written by Origen in his lost "Life of Pamphilus". Jerome compiled an abbreviated list of Origen's major treatises, itemizing 800 different titles.
By far the most important work of Origen on textual criticism was the "Hexapla" ("Sixfold"), a massive comparative study of various translations of the Old Testament in six columns: Hebrew, Hebrew in Greek characters, the Septuagint, and the Greek translations of Theodotion (a Jewish scholar writing c. 180 AD), Aquila of Sinope (another Jewish scholar, active c. 117-138), and Symmachus (an Ebionite scholar, active c. 193-211). Origen was the first Christian scholar to introduce critical markers to a Biblical text. He marked the Septuagint column of the "Hexapla" using signs adapted from those used by the textual critics of the Great Library of Alexandria: a passage found in the Septuagint that was not found in the Hebrew text would be marked with an "asterisk" (*), and a passage that was found in other Greek translations, but not in the Septuagint, would be marked with an "obelus" (÷).
The "Hexapla" was the cornerstone of the Great Library of Caesarea, which Origen founded. It was still the centerpiece of the library's collection by the time of Jerome, who records having used it in his letters on multiple occasions. When Emperor Constantine the Great ordered fifty complete copies of the Bible to be transcribed and disseminated across the empire, Eusebius used the "Hexapla" as the master copy for the Old Testament. Although the original "Hexapla" has been lost, the text of it has survived in numerous fragments and a more-or-less complete Syraic translation of the Greek column, made by the seventh-century bishop Paul of Tella, has also survived. For some sections of the "Hexapla", Origen included additional columns containing other Greek translations; for the Book of Psalms, he included no less than eight Greek translations, making this section known as "Enneapla" ("Ninefold"). Origen also produced the "Tetrapla" ("Fourfold"), a smaller, abridged version of the "Hexapla" containing only the four Greek translations and not the original Hebrew text.
According to Jerome's "Epistle" 33, Origen wrote extensive "scholia" on the books of Exodus, Leviticus, Isaiah, Psalms 1-15, Ecclesiastes, and the Gospel of John. None of these "scholia" have survived intact, but parts of them were incorporated into the "Catenaea", a collection of excerpts from major works of Biblical commentary written by the Church Fathers. Other fragments of the "scholia" are preserved in Origen's "Philocalia" and in Pamphilus of Caesarea's apology for Origen. The "Stromateis" were of a similar character, and the margin of "Codex Athous Laura", 184, contains citations from this work on Romans 9:23; I Corinthians 6:14, 7:31, 34, 9:20-21, 10:9, besides a few other fragments. Origen composed homilies covering almost the entire Bible. There are 205, and possibly 279, homilies of Origen that are extant either in Greek or in Latin translations.
The homilies preserved are on Genesis (16), Exodus (13), Leviticus (16), Numbers (28), Joshua (26), Judges (9), I Sam. (2), Psalms 36-38 (9), Canticles (2), Isaiah (9), Jeremiah (7 Greek, 2 Latin, 12 Greek and Latin), Ezekiel (14), and Luke (39). The homilies were preached in the church at Caesarea, with the exception of the two on 1 Samuel which were delivered in Jerusalem. Nautin has argued that they were all preached in a three-year liturgical cycle some time between 238 and 244, preceding the "Commentary on the Song of Songs", where Origen refers to homilies on Judges, Exodus, Numbers, and a work on Leviticus. On June 11, 2012, the Bavarian State Library announced that the Italian philologist Marina Molin Pradel had discovered twenty-nine previously unknown homilies by Origen in a twelfth-century Byzantine manuscript from their collection. Prof. Lorenzo Perrone of Bologna University and other experts confirmed the authenticity of the homilies. The texts of these manuscripts can be found online.
Origen is the main source of information on the use of the texts that were later officially canonized as the New Testament. The information used to create the late-fourth-century Easter Letter, which declared accepted Christian writings, was probably based on the lists given in Eusebius's "Ecclesiastical History" HE 3:25 and 6:25, which were both primarily based on information provided by Origen. Origen accepted the authenticity of the epistles of 1 John, 1 Peter, and Jude without question and accepted the Epistle of James as authentic with only slight hesitation. He also refers to 2 John, 3 John, and 2 Peter but notes that all three were suspected to be forgeries. Origen may have also considered other writings to be "inspired" that were rejected by later authors, including the Epistle of Barnabas, Shepherd of Hermas, and 1 Clement. "Origen is not the originator of the idea of biblical canon, but he certainly gives the philosophical and literary-interpretative underpinnings for the whole notion."
Origen's commentaries written on specific books of scripture are much more focused on systematic exegesis than his homilies. In these writings, Origen applies the precise critical methodology that had been developed by the scholars of the Mouseion in Alexandria to the Christian scriptures. The commentaries also display Origen's impressive encyclopedic knowledge of various subjects and his ability to cross-reference specific words, listing every place in which a word appears in the scriptures along with all the word's known meanings, a feat made all the more impressive by the fact that he did this in a time when Bible concordances had not yet been compiled. Origen's massive "Commentary on the Gospel of John", which spanned thirty-two volumes once it was completed, was written with the specific intention not only to expound the correct interpretation of the scriptures, but also to refute the interpretations of the Valentinian Gnostic teacher Heracleon, who had used the Gospel of John to support his argument that there were really two gods, not one. Of the original thirty-two books in the "Commentary on John", only nine have been preserved: Books I, II, VI, X, XIII, XX, XXVIII, XXXII, and a fragment of XIX.
Of the original twenty-five books in Origen's "Commentary on the Gospel of Matthew", only eight have survived in the original Greek (Books 10-17), covering Matthew 13.36-22.33. An anonymous Latin translation beginning at the point corresponding to Book 12, Chapter 9 of the Greek text and covering Matthew 16.13-27.66 has also survived. The translation contains parts that are not found in the original Greek and is missing parts that are found in it. Origen's "Commentary on the Gospel of Matthew" was universally regarded as a classic, even after his condemnation, and it ultimately became the work which established the Gospel of Matthew as the primary gospel. Origen's "Commentary on the Epistle to the Romans" was originally fifteen books long, but only tiny fragments of it have survived in the original Greek. An abbreviated Latin translation in ten books was produced by the monk Tyrannius Rufinus at the end of the fourth century. The historian Socrates Scholasticus records that Origen had included an extensive discussion of the application of the title "theotokos" to the Virgin Mary in his commentary, but this discussion is not found in Rufinus's translation, probably because Rufinus did not approve of Origen's position on the matter, whatever that might have been.
Origen also composed a "Commentary on the Song of Songs", in which he took explicit care to explain why the Song of Songs was relevant to a Christian audience. The "Commentary on the Song of Songs" was Origen's most celebrated commentary and Jerome famously writes in his preface to his translation of two of Origen's homilies over the Song of Songs that "In his other works, Origen habitually excels others. In this commentary, he excelled himself." Origen expanded on the exegesis of the Jewish Rabbi Akiva, interpreting the Song of Songs as a mystical allegory in which the bridegroom represents the Logos and the bride represents the soul of the believer. This was the first Christian commentary to expound such an interpretation and it became extremely influential on later interpretations of the Song of Songs. Despite this, the commentary now only survives in part through a Latin translation of it made by Tyrannius Rufinus in 410. Fragments of some other commentaries survive. Citations in Origen's "Philokalia" include fragments of the third book of the commentary on Genesis. There is also Ps. i, iv.1, the small commentary on Canticles, and the second book of the large commentary on the same, the twentieth book of the commentary on Ezekiel, and the commentary on Hosea. Of the non-extant commentaries, there is limited evidence of their arrangement.
Origen's "On the First Principles" was the first ever systematic exposition of Christian theology. He composed it as a young man between 220 and 230 while he was still living in Alexandria. Fragments from Books 3.1 and 4.1-3 of Origen's Greek original are preserved in Origen's "Philokalia". A few smaller quotations of the original Greek are preserved in Justinian's "Letter to Mennas". The vast majority of the text has only survived in a heavily abridged Latin translation produced by Tyrannius Rufinus in 397. "On the First Principles" begins with an essay explaining the nature of theology. Book One describes the heavenly world and includes descriptions of the oneness of God, the relationship between the three persons of the Trinity, the nature of the divine spirit, reason, and angels. Book Two describes the world of man, including the incarnation of the Logos, the soul, free will, and eschatology. Book Three deals with cosmology, sin, and redemption. Book Four deals with teleology and the interpretation of the scriptures.
"Against Celsus" (Greek: Κατὰ Κέλσου; Latin: "Contra Celsum"), preserved entirely in Greek, was Origen's last treatise, written about 248. It is an apologetic work defending orthodox Christianity against the attacks of the pagan philosopher Celsus, who was seen in the ancient world as early Christianity's foremost opponent. In 178, Celsus had written a polemic entitled "On the True Word", in which he had made numerous arguments against Christianity. The church had responded by ignoring Celsus's attacks, but Origen's patron Ambrose brought the matter to his attention. Origen initially wanted to ignore Celsus and let his attacks fade, but one of Celsus's major claims, which held that no self-respecting philosopher of the Platonic tradition would ever be so stupid as to become a Christian, provoked him to write a rebuttal.
In the book, Origen systematically refutes each of Celsus' arguments point-by-point and argues for a rational basis of Christian faith. Origen draws heavily on the teachings of Plato and argues that Christianity and Greek philosophy are not incompatible, and that philosophy contains much that is true and admirable, but that the Bible contains far greater wisdom than anything Greek philosophers could ever grasp. Origen responds to Celsus's accusation that Jesus had performed his miracles using magic rather than divine powers by asserting that, unlike magicians, Jesus had not performed his miracles for show, but rather to reform his audiences. "Contra Celsum" became the most influential of all early Christian apologetics works; before it was written, Christianity was seen by many as merely a folk religion for the illiterate and uneducated, but Origen raised it to a level of academic respectability. Eusebius admired "Against Celsus" so much that, in his "Against Hierocles" 1, he declared that "Against Celsus" provided an adequate rebuttal to all criticisms the church would ever face.
Between 232 and 235, while in Caesarea in Palestine, Origen wrote "On Prayer", the full text of which has been preserved in the original Greek. After an introduction on the object, necessity, and advantage of prayer, he ends with an exegesis of the Lord's Prayer, concluding with remarks on the position, place, and attitude to be assumed during prayer, as well as on the classes of prayer. "On Martyrdom", or the "Exhortation to Martyrdom", also preserved in its entirety in Greek, was written some time after the beginning of the persecution of Maximinus in the first half of 235. In it, Origen warns against any trifling with idolatry and emphasises the duty of suffering martyrdom manfully, while in the second part he explains the meaning of martyrdom.
The papyri discovered at Tura in 1941 contained the Greek texts of two previously unknown works of Origen. Neither work can be dated precisely, though both were probably written after the persecution of Maximinus in 235. One is "On the Pascha". The other is "Dialogue with Heracleides", a record written by one of Origen's stenographers of a debate between Origen and the Arabian bishop Heracleides, a quasi-Monarchianist who taught that the Father and the Son were the same. In the dialogue, Origen uses Socratic questioning to persuade Heracleides to believe in the "Logos theology", in which the Son or Logos is a separate entity from God the Father. The debate between Origen and Heracleides, and Origen's responses in particular, has been noted for its unusually cordial and respectful nature in comparison to the much fiercer polemics of Tertullian or the fourth-century debates between Trinitarians and Arians.
Lost works include two books on the resurrection, written before "On First Principles", and also two dialogues on the same theme dedicated to Ambrose. Eusebius had a collection of more than one hundred letters of Origen, and the list of Jerome speaks of several books of his epistles. Except for a few fragments, only three letters have been preserved. The first, partly preserved in the Latin translation of Rufinus, is addressed to friends in Alexandria. The second is a short letter to Gregory Thaumaturgus, preserved in the "Philocalia". The third is an epistle to Sextus Julius Africanus, extant in Greek, replying to a letter from Africanus (also extant), and defending the authenticity of the Greek additions to the book of Daniel. Forgeries of the writings of Origen made in his lifetime are discussed by Rufinus in "De adulteratione librorum Origenis". The "Dialogus de recta in Deum fide", the "Philosophumena" attributed to Hippolytus of Rome, and the "Commentary on Job" by Julian the Arian have also been ascribed to him.
Origen writes that Jesus was "the firstborn of all creation [who] assumed a body and a human soul." He firmly believed that Jesus had a human soul and abhorred docetism (the teaching which held that Jesus had come to earth in spirit form rather than in a physical human body). Origen envisioned Jesus' human nature as the one soul that stayed closest to God and remained perfectly faithful to Him, even when all other souls fell away. At Jesus' incarnation, his soul became fused with the Logos and they "intermingled" to become one. Thus, according to Origen, Christ was both human and divine, but, like all human souls, Christ's human soul had existed from the beginning.
Origen was the first to propose the ransom theory of atonement in its fully developed form, although Irenaeus had previously proposed a prototypical form of it. According to this theory, Christ's death on the cross was a ransom to Satan in exchange for humanity's liberation. This theory holds that Satan was tricked by God because Christ was not only free of sin, but also the incarnate Deity, whom Satan lacked the ability to enslave. The theory was later expanded by theologians such as Gregory of Nyssa and Rufinus of Aquileia. In the eleventh century, Anselm of Canterbury criticized the ransom theory, along with the associated Christus Victor theory, resulting in the theory's decline in western Europe. The theory has nonetheless retained some of its popularity in the Eastern Orthodox Church.
One of Origen's main teachings was the doctrine of the preexistence of souls, which held that before God created the material world he created a vast number of incorporeal "spiritual intelligences" (ψυχαί). All of these souls were at first devoted to the contemplation and love of their Creator, but as the fervor of the divine fire cooled, almost all of these intelligences eventually grew bored of contemplating God, and their love for him "cooled off" (ψύχεσθαι). When God created the world, the souls which had previously existed without bodies became incarnate. Those whose love for God diminished the most became demons. Those whose love diminished moderately became human souls, eventually to be incarnated in fleshly bodies. Those whose love diminished the least became angels. One soul, however, who remained perfectly devoted to God became, through love, one with the Word (Logos) of God. The Logos eventually took flesh and was born of the Virgin Mary, becoming the God-man Jesus Christ.
Origen may or may not have believed in the Platonic teaching of "metempsychosis" ("the transmigration of souls"; i.e. reincarnation). He explicitly rejects "the false doctrine of the transmigration of souls into bodies", but this may refer only to a specific kind of transmigration. Geddes MacGregor has argued that Origen must have believed in "metempsychosis" because it makes sense within his eschatology and is never explicitly denied in the Bible. Roger E. Olson, however, dismisses the view that Origen believed in reincarnation as a New Age misunderstanding of Origen's teachings. It is certain that Origen rejected the Stoic notion of a cyclical universe, which is directly contrary to his eschatology.
Origen believed that, eventually, the whole world would be converted to Christianity, "since the world is continually gaining possession of more souls." He believed that the Kingdom of Heaven was not yet come, but that it was the duty of every Christian to make the eschatological reality of the kingdom present in their lives. Origen was a Universalist, who suggested that all people might eventually attain salvation, but only after being purged of their sins through "divine fire". In line with Origen's allegorical interpretation, this was of course not "literal" fire, but rather the inner anguish of knowing one's own sins. Origen was also careful to maintain that universal salvation was merely a possibility and not a definitive doctrine. Jerome quotes Origen as having allegedly written that "after aeons and the one restoration of all things, the state of Gabriel will be the same as that of the Devil, Paul's as that of Caiaphas, that of virgins as that of prostitutes." Jerome, however, was not above deliberately altering quotations to make Origen seem more like a heretic, and Origen expressly states in his "Letter to Friends in Alexandria" that Satan and his demons would not be included in the final salvation.
Origen was an ardent believer in free will, and he adamantly rejected the Valentinian idea of election. Instead, Origen believed that even disembodied souls have the power to make their own decisions. Furthermore, in his interpretation of the story of Jacob and Esau, Origen argues that the condition into which a person is born is actually dependent upon what their souls did in this pre-existent state. According to Origen, the superficial unfairness of a person's condition at birth—with some humans being poor, others rich, some being sick, and others healthy—is actually a by-product of what the person's soul had done in the pre-existent state. Origen defends free will in his interpretations of instances of divine foreknowledge in the scriptures, arguing that Jesus' knowledge of Judas' future betrayal in the gospels and God's knowledge of Israel's future disobedience in the Deuteronomistic history only show that God knew these events would happen in advance. Origen therefore concludes that the individuals involved in these incidents still made their decisions out of their own free will.
Origen was an ardent pacifist, and in his "Against Celsus", he argued that Christianity's inherent pacifism was one of the most outwardly noticeable aspects of the religion. While Origen did admit that some Christians served in the Roman army, he pointed out that most did not and insisted that engaging in earthly wars was against the way of Christ. Origen accepted that it was sometimes necessary for a non-Christian state to wage wars but insisted that it was impossible for a Christian to fight in such a war without compromising his or her faith, since Christ had absolutely forbidden violence of any kind. Origen explained the violence found in certain passages of the Old Testament as allegorical and pointed out Old Testament passages which he interpreted as supporting nonviolence. Origen maintained that, if everyone were peaceful and loving like Christians, then there would be no wars and the Empire would not need a military.
Origen bases his theology on the Christian scriptures and does not appeal to Platonic teachings without having first supported his argument with a scriptural basis. He saw the scriptures as divinely inspired and was careful never to contradict his own interpretation of what was written in them. Nonetheless, Origen did have a penchant for speculating beyond what was explicitly stated in the Bible, and this habit frequently placed him in the hazy realm between strict orthodoxy and heresy.
According to Origen, there are two kinds of Biblical literature which are found in both the Old and New Testaments: "historia" ("history, or narrative") and "nomothesia" ("legislation or ethical prescription"). Origen expressly states that the Old and New Testaments should be read together and according to the same rules. Origen further taught that there were three different ways in which passages of scripture could be interpreted. The "flesh" was the literal, historical interpretation of the passage; the "soul" was the moral message behind the passage; and the "spirit" was the eternal, incorporeal reality that the passage conveyed. In Origen's exegesis, the Book of Proverbs, Ecclesiastes, and the Song of Songs represent perfect examples of the bodily, soulful, and spiritual components of scripture respectively.
Origen saw the "spiritual" interpretation as the deepest and most important meaning of the text and taught that some passages held no literal meaning at all and that their meanings were purely allegorical. Nonetheless, he stressed that "the passages which are historically true are far more numerous than those which are composed with purely spiritual meanings" and often used examples from corporeal realities. Origen noticed that the accounts of Jesus' life in the four canonical gospels contain irreconcilable contradictions, but he argued that these contradictions did not undermine the spiritual meanings of the passages in question. Origen's idea of a twofold creation was based on an allegorical interpretation of the creation story found in the first two chapters of the Book of Genesis. The first creation, described in , was the creation of the primeval spirits, who are made "in the image of God" and are therefore incorporeal like Him; the second creation described in is when the human souls are given ethereal, spiritual bodies and the description in of God clothing Adam and Eve in "tunics of skin" refers to the transformation of these spiritual bodies into corporeal ones. Thus, each phase represents a degradation from the original state of incorporeal holiness.
Origen's conception of God the Father is apophatic—a perfect unity, invisible and incorporeal, transcending all things material, and therefore inconceivable and incomprehensible. He is likewise unchangeable and transcends space and time. But his power is limited by his goodness, justice, and wisdom; and, though entirely free from necessity, his goodness and omnipotence constrained him to reveal himself. This revelation, the external self-emanation of God, is expressed by Origen in various ways, the Logos being only one of many. The revelation was the first creation of God (cf. Proverbs 8:22), in order to afford creative mediation between God and the world, such mediation being necessary, because God, as changeless unity, could not be the source of a multitudinous creation.
The Logos is the rational creative principle that permeates the universe. The Logos acts on all human beings through their capacity for logic and rational thought, guiding them to the truth of God's revelation. As they progress in their rational thinking, all humans become more like Christ. Nonetheless, they retain their individuality and do not become subsumed into Christ. Creation came into existence only through the Logos, and God's nearest approach to the world is the command to create. While the Logos is substantially a unity, he comprehends a multiplicity of concepts, so that Origen terms him, in Platonic fashion, "essence of essences" and "idea of ideas".
Origen significantly contributed to the development of the idea of the Trinity. He declared the Holy Spirit to be a part of the Godhead and interpreted the Parable of the Lost Coin to mean that the Holy Spirit dwells within each and every person and that the inspiration of the Holy Spirit was necessary for any kind of speech dealing with God. Origen taught that the activity of all three parts of the Trinity were necessary for a person to attain salvation. In one fragment preserved by Rufinus in his Latin translation of Pamphilus's "Defense of Origen", Origen seems to apply the phrase "homooúsios" (ὁμοούσιος; "of the same substance") to the relationship between the Father and the Son, but in other passages, Origen rejected the belief that the Son and the Father were one "hypostasis" as heretical. According to Rowan Williams, because the words "ousia" and "hypostasis" were used synonymously in Origen's time, Origen almost certainly would have rejected "homoousios" as heretical. Williams states that it is impossible to verify whether the quote that uses the word "homoousios" really comes from Pamphilus at all, let alone Origen.
Nonetheless, Origen was a subordinationist, meaning he believed that the Father was superior to the Son and the Son was superior to the Holy Spirit, a model based on Platonic proportions. Jerome records that Origen had written that God the Father is invisible to all beings, including even the Son and the Holy Spirit, and that the Son is invisible to the Holy Spirit as well. At one point Origen suggests that the Son was created by the Father and that the Holy Spirit was created by the Son, but, at another point, he writes that "Up to the present I have been able to find no passage in the Scriptures that the Holy Spirit is a created being." At the time when Origen was alive, orthodox views on the Trinity had not yet been formulated and subordinationism was not yet considered heretical. In fact, virtually all orthodox theologians prior to the Arian controversy in the latter half of the fourth century were subordinationists to some extent. Origen's subordinationism may have developed out of his efforts to defend the unity of God against the Gnostics.
Origen is often seen as the first major Christian theologian. Though his orthodoxy had been questioned in Alexandria while he was alive, Origen's torture during the Decian persecution led Pope Dionysius of Alexandria to rehabilitate Origen's memory there, hailing him as a martyr for the faith. After Origen's death, Dionysius became one of the foremost proponents of Origen's theology. Every Christian theologian who came after him was influenced by his theology, whether directly or indirectly. Origen's contributions to theology were so vast and complex, however, that his followers frequently emphasized drastically different parts of his teachings at the expense of other parts. Dionysius emphasized Origen's subordinationist views, which led him to deny the unity of the Trinity, causing controversy throughout North Africa. At the same time, Origen's other disciple Theognostus of Alexandria taught that the Father and the Son were "of one substance".
For centuries after his death, Origen was regarded as the bastion of orthodoxy, and his philosophy practically defined Eastern Christianity. Origen was revered as one of the greatest of all Christian teachers; he was especially beloved by monks, who saw themselves as continuing in Origen's ascetic legacy. As time progressed, however, Origen came to be judged by the standards of orthodoxy of later eras rather than by those of his own lifetime. In the early fourth century, the Christian writer Methodius of Olympus criticized some of Origen's more speculative arguments but otherwise agreed with Origen on all other points of theology. Peter of Antioch and Eustathius of Antioch criticized Origen as heretical.
Both orthodox and heterodox theologians claimed to be following in the tradition Origen had established. Athanasius of Alexandria, the most prominent supporter of the Holy Trinity at the First Council of Nicaea, was deeply influenced by Origen, and so were Basil of Caesarea, Gregory of Nyssa, and Gregory of Nazianzus (the so-called "Cappadocian Fathers"). At the same time, Origen deeply influenced Arius of Alexandria and later followers of Arianism. Although the extent of the relationship between the two is debated, in antiquity, many orthodox Christians believed that Origen was the true and ultimate source of the Arian heresy.
The First Origenist Crisis began in the late fourth century, coinciding with the beginning of monasticism in Palestine. The first stirring of the controversy came from the Cyprian bishop Epiphanius of Salamis, who was determined to root out all heresies and refute them. Epiphanius attacked Origen in his anti-heretical treatises "Ancoratus" (375) and "Panarion" (376), compiling a list of teachings Origen had espoused that Epiphanius regarded as heretical. Epiphanius' treatises portray Origen as an originally orthodox Christian who had been corrupted and turned into a heretic by the evils of "Greek education". Epiphanius particularly objected to Origen's subordinationism, his "excessive" use of allegorical hermeneutic, and his habit of proposing ideas about the Bible "speculatively, as exercises" rather than "dogmatically".
Epiphanius asked John, the bishop of Jerusalem, to condemn Origen as a heretic. John refused on the grounds that a person could not be retroactively condemned as a heretic after the person had already died. In 393, a monk named Atarbius advanced a petition to have Origen and his writings censured. Tyrannius Rufinus, a priest at the monastery on the Mount of Olives who had been ordained by John of Jerusalem and was a longtime admirer of Origen, rejected the petition outright. Rufinus' close friend and associate Jerome, who had also studied Origen, however, came to agree with the petition. Around the same time, John Cassian, a Semipelagian monk, introduced Origen's teachings to the West.
In 394, Epiphanius wrote to John of Jerusalem, again asking for Origen to be condemned, insisting that Origen's writings denigrated human sexual reproduction and accusing him of having been an Encratite. John once again denied this request. By 395, Jerome had allied himself with the anti-Origenists and begged John of Jerusalem to condemn Origen, a plea which John once again refused. Epiphanius launched a campaign against John, openly preaching that John was an Origenist deviant. He successfully persuaded Jerome to break communion with John and ordained Jerome's brother Paulinianus as a priest in defiance of John's authority.
In 397, Rufinus published a Latin translation of Origen's "On First Principles". Rufinus was convinced that Origen's original treatise had been interpolated by heretics and that these interpolations were the source of the heterodox teachings found in it. He therefore heavily modified Origen's text, omitting and altering any parts which disagreed with contemporary Christian orthodoxy. In the introduction to this translation, Rufinus mentioned that Jerome had studied under Origen's disciple Didymus the Blind, implying that Jerome was a follower of Origen. Jerome was so incensed by this that he resolved to produce his own Latin translation of "On the First Principles", in which he promised to translate every word exactly as it was written and lay bare Origen's heresies to the whole world. Jerome's translation has been lost in its entirety.
In 399, the Origenist crisis reached Egypt. Pope Theophilus of Alexandria was sympathetic to the supporters of Origen, and the church historian Sozomen records that he had openly preached the Origenist teaching that God was incorporeal. In his "Festal Letter" of 399, he denounced those who believed that God had a literal, human-like body, calling them illiterate "simple ones". A large mob of Alexandrian monks who regarded God as anthropomorphic rioted in the streets. According to the church historian Socrates Scholasticus, in order to prevent a riot, Theophilus made a sudden about-face and began denouncing Origen. In 400, Theophilus summoned a council in Alexandria, which condemned Origen and all his followers as heretics for having taught that God was incorporeal, which they decreed contradicted the only true and orthodox position, which was that God had a literal, physical body resembling that of a human.
Theophilus labelled Origen as the "hydra of all heresies" and persuaded Pope Anastasius I to sign the letter of the council, which primarily denounced the teachings of the Nitrian monks associated with Evagrius Ponticus. In 402, Theophilus expelled Origenist monks from Egyptian monasteries and banished the four monks known as the "Tall Brothers", who were leaders of the Nitrian community. John Chrysostom, the Patriarch of Constantinople, granted the Tall Brothers asylum, a fact which Theophilus used to orchestrate John's condemnation and removal from his position at the Synod of the Oak in July 403. Once John Chrysostom had been deposed, Theophilus restored normal relations with the Origenist monks in Egypt and the first Origenist crisis came to an end.
The Second Origenist Crisis occurred in the sixth century, during the height of Byzantine monasticism. Although the Second Origenist Crisis is not nearly as well documented as the first, it seems to have primarily concerned the teachings of Origen's later followers, rather than what Origen had written. Origen's disciple Evagrius Ponticus had advocated contemplative, noetic prayer, but other monastic communities prioritized asceticism in prayer, emphasizing fasting, labors, and vigils. Some Origenist monks in Palestine, referred to by their enemies as "Isochristoi" (meaning "those who would assume equality with Christ"), emphasized Origen's teaching of the pre-existence of souls and held that all souls were originally equal to Christ's and would become equal again at the end of time. Another faction of Origenists in the same region instead insisted that Christ was the "leader of many brethren", as the first-created being. This faction was more moderate, and they were referred to by their opponents as "Protoktistoi" ("first createds"). Both factions accused the other of heresy, and other Christians accused both of them of heresy.
The Protoktistoi appealed to the Emperor Justinian I, through Pelagius, the papal "apocrisarius", to condemn the Isochristoi for heresy. In 543, Pelagius presented Justinian with documents, including a letter denouncing Origen written by Patriarch Mennas of Constantinople, along with excerpts from Origen's "On First Principles" and several anathemata against Origen. A domestic synod convened to address the issue concluded that the Isochristoi's teachings were heretical and, seeing Origen as the ultimate culprit behind the heresy, denounced Origen as a heretic as well. Emperor Justinian ordered all of Origen's writings to be burned. In the west, the "Decretum Gelasianum", which was written sometime between 519 and 553, listed Origen as an author whose writings were to be categorically banned.
In 553, during the early days of the Second Council of Constantinople (the Fifth Ecumenical Council), when Pope Vigilius was still refusing to take part in it despite Justinian holding him hostage, the bishops at the council ratified an open letter which condemned Origen as the leader of the Isochristoi. The letter was not part of the official acts of the council, and it more or less repeated the edict issued by the Synod of Constantinople in 543. It cites objectionable writings attributed to Origen, but all the writings referred to in it were actually written by Evagrius Ponticus. After the council officially opened, but while Pope Vigilius was still refusing to take part, Justinian presented the bishops with the problem of a text known as "The Three Chapters", which attacked the Antiochene Christology.
The bishops drew up a list of anathemata against the heretical teachings contained within "The Three Chapters" and those associated with them. In the official text of the eleventh anathema, Origen is condemned as a Christological heretic, but Origen's name does not appear at all in the "Homonoia", the first draft of the anathemata issued by the imperial chancery, nor does it appear in the version of the conciliar proceedings that was eventually signed by Pope Vigilius a long time afterwards. These discrepancies may indicate that Origen's name was retrospectively inserted into the text after the council. Some authorities believe these anathemata belong to an earlier local synod. Even if Origen's name did appear in the original text of the anathema, the teachings attributed to Origen that are condemned in the anathema were actually the ideas of later Origenists, which had very little grounding in anything Origen had actually written. In fact, Popes Vigilius, Pelagius I, Pelagius II, and Gregory the Great were only aware that the Fifth Council specifically dealt with "The Three Chapters"; they made no mention of Origenism or universalism, nor did they speak as if they knew of its condemnation—even though Gregory the Great was opposed to universalism.
As a direct result of the numerous condemnations of his work, only a tiny fraction of Origen's voluminous writings have survived. Nonetheless, these writings still amount to a massive number of Greek and Latin texts, very few of which have yet been translated into English. Many more writings have survived in fragments through quotations from later Church Fathers. It is likely that the writings containing Origen's most unusual and speculative ideas have been lost to time, making it nearly impossible to determine whether Origen actually held the heretical views which the anathemas against him ascribed to him. Nonetheless, in spite of the decrees against Origen, the church remained enamored of him and he remained a central figure of Christian theology throughout the first millennium. He continued to be revered as the founder of Biblical exegesis, and anyone in the first millennium who took the interpretation of the scriptures seriously would have had knowledge of Origen's teachings.
Jerome's Latin translations of Origen's homilies were widely read in western Europe throughout the Middle Ages, and Origen's teachings greatly influenced those of the Byzantine monk Maximus the Confessor and the Irish theologian John Scotus Eriugena. Since the Renaissance, the debate over Origen's orthodoxy has continued to rage. Basilios Bessarion, a Greek refugee who fled to Italy after the Fall of Constantinople in 1453, produced a Latin translation of Origen's "Contra Celsum", which was printed in 1481. Major controversy erupted in 1487, after the Italian humanist scholar Giovanni Pico della Mirandola issued a thesis arguing that "it is more reasonable to believe that Origen was saved than he was damned." A papal commission condemned Pico's position on account of the anathemas against Origen, but not until after the debate had received considerable attention.
The most prominent advocate of Origen during the Renaissance was the Dutch humanist scholar Desiderius Erasmus, who regarded Origen as the greatest of all Christian authors and wrote in a letter to John Eck that he learned more about Christian philosophy from a single page of Origen than from ten pages of Augustine. Erasmus especially admired Origen for his lack of rhetorical flourishes, which were so common in the writings of other Patristic authors. Erasmus borrowed heavily from Origen's defense of free will in "On First Principles" in his 1524 treatise "On Free Will", now considered his most important theological work. In 1527, Erasmus translated and published the portion of Origen's "Commentary on the Gospel of Matthew" that survived only in Greek, and in 1536 he published the most complete edition of Origen's writings published up to that time. While Origen's emphasis on the human effort in attaining salvation appealed to the Renaissance humanists, it made him far less appealing to the proponents of the Reformation. Martin Luther deplored Origen's understanding of salvation as irredeemably defective and declared that "in all of Origen there is not one word about Christ." Consequently, he ordered Origen's writings to be banned. Nonetheless, the earlier Czech reformer Jan Hus had taken inspiration from Origen for his view that the church is a spiritual reality rather than an official hierarchy, and Luther's contemporary, the Swiss reformer Huldrych Zwingli, took inspiration from Origen for his interpretation of the eucharist as symbolic.
In the seventeenth century, the English Cambridge Platonist Henry More was a devoted Origenist, and although he did reject the notion of universal salvation, he accepted most of Origen's other teachings. Pope Benedict XVI expressed admiration for Origen, describing him in a sermon as part of a series on the Church Fathers as "a figure crucial to the whole development of Christian thought", "a true 'maestro'", and "not only a brilliant theologian but also an exemplary witness of the doctrine he passed on". He concludes the sermon by inviting his audience to "welcome into your hearts the teaching of this great master of the faith". Modern Protestant evangelicals admire Origen for his passionate devotion to the scriptures but are frequently baffled or even appalled by his allegorical interpretation of them, which many believe ignores the literal, historical truth behind them.
Oliver Hazard Perry-class frigate
The "Oliver Hazard Perry" class is a class of guided-missile frigates named after the U.S. Commodore Oliver Hazard Perry, the hero of the naval Battle of Lake Erie. Also known as the "Perry" or FFG-7 (commonly "fig seven") class, the warships were designed in the United States in the mid-1970s as general-purpose escort vessels inexpensive enough to be bought in large numbers to replace World War II-era destroyers and complement 1960s-era s. In Admiral Elmo Zumwalt's "high low fleet plan", the FFG-7s were the low capability ships with the s serving as the high capability ships. Intended to protect amphibious landing forces, supply and replenishment groups, and merchant convoys from aircraft and submarines, they were also later part of battleship-centred surface action groups and aircraft carrier battle groups/strike groups. Fifty-five ships were built in the United States: 51 for the United States Navy and four for the Royal Australian Navy (RAN). In addition, eight were built in Taiwan, six in Spain, and two in Australia for their navies. Former U.S. Navy warships of this class have been sold or donated to the navies of Bahrain, Egypt, Poland, Pakistan, Taiwan, and Turkey.
The first of the 51 U.S. Navy-built "Oliver Hazard Perry" frigates entered service in 1977, and the last remaining in active service, "Kauffman", was decommissioned on 29 September 2015. The retired vessels were either mothballed or transferred to other navies for continued service. Some of the U.S. Navy's frigates, such as USS "Duncan" (14.6 years in service), had fairly short careers, while a few lasted 30 or more years in active U.S. service, with some lasting even longer after being sold or donated to other navies.
The ships were designed by the Bath Iron Works shipyard in Maine in partnership with the New York-based naval architects Gibbs & Cox. The design process was notable in that the initial design was accomplished in 18 hours, with the help of computers, by Raye Montague, a civilian U.S. Navy naval engineer, making it the first ship designed by computer.
The "Oliver Hazard Perry"-class ships were produced in long "short-hull" (Flight I) and long "long-hull" (Flight III) variants. The long-hull ships (FFG 8, 28, 29, 32, 33, and 36-61) carry the larger SH-60 Seahawk LAMPS III helicopters, while the short-hulled warships carry the smaller and less-capable SH-2 Seasprite LAMPS I. Aside from the lengths of their hulls, the principal difference between the versions is the location of the aft capstan: on long-hull ships, it sits a step below the level of the flight deck in order to provide clearance for the tail rotor of the longer Seahawk helicopters. The long-hull ships also carry the RAST (Recovery Assist Securing and Traversing) system (also known as a Beartrap (hauldown device)) for the Seahawk, a hook, cable, and winch system that can reel in a Seahawk from a hovering flight, expanding the ship's pitch-and-roll range in which flight operations are permitted. The FFG 8, 29, 32, and 33 were built as "short-hull" warships but were later modified into "long-hull" warships.
"Oliver Hazard Perry"-class frigates were the second class of surface ship (after the s) in the US Navy to be built with gas turbine propulsion. The gas turbine propulsion plant was more automated than other Navy propulsion plants at the time and could be centrally monitored and controlled from a remote engineering control center away from the engines. The gas turbine propulsion plants also allowed the ship's speed to be controlled directly from the bridge via a throttle control, a first for the US Navy.
American shipyards constructed "Oliver Hazard Perry"-class ships for the U.S. Navy and the Royal Australian Navy (RAN). The early American-built Australian ships were completed to the "short-hull" design, but they were modified during the 1980s to the "long-hull" configuration. Shipyards in Australia, Spain, and Taiwan have produced several warships of the "long-hull" design for their navies.
Although the per-ship costs rose greatly over the period of production, all 51 ships planned for the U.S. Navy were built.
During the design phase of the "Oliver Hazard Perry" class, the head of the Royal Corps of Naval Constructors, R. J. Daniels, was invited by an old friend, Admiral Robert C. Gooding, the US Chief of the Bureau of Ships, to advise upon the use of variable-pitch propellers in the class. During this conversation, Daniels warned Gooding against the use of aluminium in the superstructure of the FFG-7 class, as he believed it would lead to structural weaknesses. A number of ships subsequently developed structural cracks, including a fissure in USS "Duncan", before the problems were remedied.
The "Oliver Hazard Perry"-class frigates were designed primarily as anti-aircraft and anti-submarine warfare guided-missile warships intended to provide open-ocean escort of amphibious warfare ships and merchant ship convoys in moderate threat environments in a potential war with the Soviet Union and the Warsaw Pact countries. They could also provide air defense against 1970s- and 1980s-era aircraft and anti-ship missiles. These warships are equipped to escort and protect aircraft carrier battle groups, amphibious landing groups, underway replenishment groups, and merchant ship convoys. They can conduct independent operations to perform such tasks as surveillance of illegal drug smugglers, maritime interception operations, and exercises with other nations.
The addition of the Naval Tactical Data System, LAMPS helicopters, and the Tactical Towed Array System (TACTAS) gave these warships a combat capability far beyond the original expectations. They are well suited to operations in littoral regions, and for most war-at-sea scenarios.
"Oliver Hazard Perry"-class frigates made worldwide news during the 1980s. Despite being small, these frigates were shown to be extremely durable. During the Iran–Iraq War, on 17 May 1987, was attacked by an Iraqi warplane. Struck by two Exocet anti-ship missiles, thirty-seven U.S. Navy sailors died in the deadly prelude to the American Operation Earnest Will, the reflagging and escorting of oil tankers through the Persian Gulf and the Straits of Hormuz.
Less than a year later, on 14 April 1988, USS "Samuel B. Roberts" was nearly sunk by an Iranian mine. No lives were lost, but 10 sailors were evacuated from the warship for medical treatment. The crew of "Samuel B. Roberts" battled fire and flooding for two days, ultimately managing to save the ship. The U.S. Navy retaliated four days later with Operation Praying Mantis, a one-day attack on Iranian oil platforms being used as bases for raids on merchant shipping, including the minelaying operations that damaged "Samuel B. Roberts". "Stark" and "Roberts" were each repaired in American shipyards and returned to full service. "Stark" was decommissioned in 1999 and scrapped in 2006. "Roberts" was decommissioned at Mayport on 22 May 2015.
On April 18, 1988, USS "Simpson" was accompanying the cruiser "Wainwright" and another frigate when they came under attack from the Iranian gunboat "Joshan", which fired a U.S.-made Harpoon missile at the ships. With "Simpson" having the only clear shot, the frigate fired an SM-1 Standard missile, which struck "Joshan". "Simpson" fired three more SM-1s, and, together with naval gunfire from "Wainwright", sank the Iranian vessel.
On July 14, 2016, the decommissioned ex-USS "Thach" took over 12 hours to sink after being used as a target in a live-fire sinking exercise (SINKEX) during naval exercise RIMPAC 2016. During the exercise, the ship was directly or indirectly hit with the following ordnance: a Harpoon missile from a South Korean submarine, another Harpoon missile from an Australian frigate, a Hellfire missile from an Australian MH-60R helicopter, another Harpoon missile and a Maverick missile from US maritime patrol aircraft, another Harpoon missile from a US Navy cruiser, additional Hellfire missiles from a US Navy MH-60S helicopter, a 900 kg (2,000 lb) Mark 84 bomb from a US Navy F/A-18 Hornet, a GBU-12 Paveway laser-guided 225 kg (500 lb) bomb from a US Air Force B-52 bomber, and a Mark 48 torpedo from an unnamed US Navy submarine.
The U.S. Navy and Royal Australian Navy modified their remaining "Perry"s to reduce operating costs, replacing Detroit Diesel Company 16V149TI electrical generators with Caterpillar Inc. 3512B diesel engines.
Upgrades to the "Perry"-class were problematic, due to "little reserved space for growth (39 tons in the original design), and the inflexible, proprietary electronics of the time", such that the "US Navy gave up on the idea of upgrades to face new communications realities and advanced missile threats". The U.S. Navy decommissioned 25 “FFG-7 Short” ships via "bargain basement sales to allies or outright retirement, after an average of only 18 years of service".
From 2004 to 2005, the U.S. Navy removed the frigates' Mk 13 single-arm missile launchers because the primary missile, the Standard SM-1MR, had become outmoded. It was reportedly too costly to refit the Standard SM-1MR missiles, which had little ability to bring down sea-skimming missiles. Another reason was to allow more SM-1MRs to go to American allies that operated "Perry"s, such as Poland, Spain, Australia, Turkey, and Taiwan. As a result, the "zone-defense" anti-aircraft warfare (AAW) capability of the U.S. Navy's "Perry"s vanished; all that remained was a "point-defense" type of anti-air warfare armament, so the ships relied upon cover from AEGIS destroyers and cruisers.
The removal of the Mk 13 launchers also stripped the frigates of their Harpoon anti-ship missiles. However, their Seahawk helicopters could still carry the much shorter-range Penguin and Hellfire anti-ship missiles. The last nine ships of the class had new remotely operated 25 mm Mk 38 Mod 2 Naval Gun Systems installed on platforms over the old MK 13 launcher magazine.
Up through 2002, the U.S. Navy updated the remaining active "Oliver Hazard Perry"-class warships' Phalanx CIWS to the "Block 1B" capability, which allowed the Mk 15 20 mm Phalanx gun to shoot at fast-moving surface craft and helicopters. They were also to have been fitted with the Mk 53 DLS "Nulka" missile decoy system, in place of the SRBOC (Super Rapid Blooming Offboard Chaff) and flares, which would have better protected the ships against anti-ship missiles. It had been planned to outfit the remaining ships with a 32-cell RIM-116 Rolling Airframe Missile launcher at the location of the former Mk 13, but this did not occur.
On May 11, 2009, the first International Frigate Working Group met in Mayport Naval Station to discuss maintenance, obsolescence and logistics issues regarding "Oliver Hazard Perry"-class ships of the U.S. and foreign navies.
On June 16, 2009, Vice Admiral Barry McCullough turned down the suggestion of then-U.S. Senator Mel Martinez (R-FL) to keep the "Perry"s in service, citing their worn-out and maxed-out condition. However, U.S. Representative Ander Crenshaw (R-FL) and former U.S. Representative Gene Taylor (D-MS) took up the cause to retain the vessels.
The "Oliver Hazard Perry"-class frigates were to have been eventually replaced by Littoral Combat Ships by 2019. However, the worn out frigates were being retired faster than the LCSs are being built, which may lead to a gap in United States Southern Command mission coverage. According to Navy deactivation plans, all "Oliver Hazard Perry"-class frigates would be retired by October 2015. "Simpson" was the last to be retired (on 29 September 2015), leaving the Navy devoid of frigates for the first time since 1943. The ships will either be made available for sale to foreign navies or dismantled. "Perry"-class frigate retirement was accelerated by budget pressures, which will lead to the remaining 11 ships being replaced by only eight LCS hulls. With the timeline LCS mission packages will come online unknown, there is uncertainty if they will be able to perform the frigates' counter-narcotics and anti-submarine roles when they are gone. The Navy is looking into Military Sealift Command to see if the Joint High Speed Vessel, Mobile Landing Platform, and other auxiliary ships could handle low-end missions that the frigates performed.
The U.S. Coast Guard harvested weapons-systems components from decommissioned Navy "Perry"-class frigates to save money. Harvesting components from four decommissioned frigates yielded more than $24 million in cost savings, a figure that grows as parts are taken from additional decommissioned frigates. Equipment including Mk 75 76 mm/62-caliber gun mounts, gun control panels, barrels, launchers, junction boxes, and other components was returned to service aboard Coast Guard cutters to extend their service lives into the 2030s.
In June 2017, Chief of Naval Operations Admiral John Richardson revealed the Navy was "taking a hard look" at reactivating seven or eight of 12 mothballed "Perry"-class frigates to increase fleet numbers. While the move was under consideration, there would be difficulties in returning them to service given the age of the ships and their equipment, likely requiring a significant modernization effort. Although bringing the frigates out of retirement would provide a short-term boost to fleet size, their limited combat capability would restrict them to acting as theater security cooperation and maritime security assets. Their likely role would be serving as basic surface platforms staying close to U.S. shores, performing missions such as assisting drug interdiction efforts or patrolling the Arctic, so an extensive upgrade to the ships' combat systems would not need to be undertaken. An October 2017 memo recommended against reactivating the frigates, arguing that doing so would divert funding from other Navy priorities while yielding little effectiveness.
Australia spent A$1.46 billion to upgrade the Royal Australian Navy's (RAN) guided-missile frigates, including equipping them to fire the SM-2 version of the Standard missile, adding an eight-cell Mark 41 Vertical Launching System for Evolved Sea Sparrow missiles, and installing better air-search radars and long-range sonar. The RAN had opted to retain its "Adelaide"-class frigates rather than purchase the US Navy's "Kidd"-class destroyers; the "Kidd"s were more expensive and manpower-intensive but much more capable. However, the upgrade project ran over budget and fell behind schedule.
The first of the upgraded frigates, HMAS Sydney, returned to the RAN fleet in 2005. Four frigates were eventually upgraded at the Garden Island shipyard in Sydney, Australia, with the modernizations lasting between 18 months and two years. The cost of the upgrades was partly offset, in the short run, by the decommissioning and disposal of the two older frigates: HMAS Canberra was decommissioned on 12 November 2005 at the HMAS Stirling naval base in Western Australia, and HMAS Adelaide was decommissioned at that same naval base on 20 January 2008. HMAS Sydney was decommissioned at the Garden Island naval base in 2016, and HMAS Darwin was also decommissioned at Garden Island in 2018.
The "Adelaide" class frigates were replaced by three "Hobart"-class air warfare destroyers equipped with the AEGIS combat system. HMAS Melbourne and Newcastle were transferred in May 2020 to the Chilean Navy and serve as "Captain Prat" and "Almirante Latorre".
The Turkish Navy commenced the modernization of its G-class frigates with the GENESIS (Gemi Entegre Savaş İdare Sistemi) combat management system in 2007. The first GENESIS-upgraded ship was delivered in 2007, and the last delivery was scheduled for 2011. The "short-hull" "Oliver Hazard Perry"-class frigates that are currently part of the Turkish Navy were modified with the ASIST landing platform system at the Gölcük Naval Shipyard so that they can accommodate S-70B Seahawk helicopters. Turkey is planning to add one eight-cell Mk 41 Vertical Launching System (VLS) for the Evolved Sea Sparrow missile, to be installed forward of the present Mk 13 missile launchers, similar to the modernization program of the Australian "Adelaide"-class frigates. TCG "Gediz" was the first ship in the class to receive the Mk 41 VLS installation.
There are also plans for new components to be installed that are being developed for the Milgem-class warships ("Ada"-class corvettes and F-100-class frigates) of the Turkish Navy. These include modern three-dimensional and X-band radars developed by Aselsan and Turkish-made hull-mounted sonars. One of the G-class frigates will also be used as a test-bed for Turkey's 6,000+ ton AAW frigates, which are currently being designed by the Turkish Naval Institute.
On April 7, 2014, the United States House of Representatives voted to pass the Taiwan Relations Act Affirmation and Naval Vessel Transfer Act of 2014 (H.R. 3470; 113th Congress), a bill that would allow eight more "Perry" frigates to be transferred to foreign countries. The bill would authorize the President to transfer two frigates to Mexico and two to Thailand, and to sell four more to the Taipei Economic and Cultural Representative Office of the United States (which is the Taiwan agency designated pursuant to the Taiwan Relations Act) for about $10 million each.
On June 13, 2017, the Chief of Naval Operations, Admiral John M. Richardson, announced that U.S. Navy officials were looking into the possibility of recommissioning several "Oliver Hazard Perry"-class frigates from the inactive fleet to help build up and support President Donald Trump's proposed 355-ship Navy plan. On December 11, 2017, the Navy decided against reactivating the class, citing the cost of returning the ships to service. | https://en.wikipedia.org/wiki?curid=22703 |
Ottawa Senators
The Ottawa Senators () are a professional ice hockey team based in Ottawa. They compete in the National Hockey League (NHL) as a member of the Atlantic Division of the Eastern Conference. The Senators play their home games at the 18,652-seat Canadian Tire Centre, which opened in 1996 as the Palladium.
Founded and established by Ottawa real estate developer Bruce Firestone, the team is the second NHL franchise to use the Ottawa Senators name. The original Ottawa Senators, founded in 1883, had a famed history, winning 11 Stanley Cups and playing in the NHL from 1917 until 1934. On December 6, 1990, after a two-year public campaign by Firestone, the NHL awarded a new franchise, which began play in the 1992–93 season. The current team owner is Eugene Melnyk, and in 2019, the franchise was valued by "Forbes" magazine at $445 million.
The Senators have won four division titles, captured the Presidents' Trophy in 2003, and appeared once in the Stanley Cup Finals (2007).
Ottawa had been home to the original Senators, a founding NHL franchise and 11-time Stanley Cup champions. After the NHL expanded to the United States in the late 1920s, mounting financial losses forced the original Senators to move to St. Louis in 1934, where the franchise operated as the Eagles, while a Senators senior amateur team took the NHL club's place in Ottawa. The NHL team was unsuccessful in St. Louis and planned to return to Ottawa, but the NHL decided instead to suspend the franchise and transfer the players to other NHL teams.
Fifty-four years later, after the NHL announced plans to expand, Ottawa real estate developer Bruce Firestone decided along with colleagues Cyril Leeder and Randy Sexton that Ottawa was now able to support an NHL franchise, and the group proceeded to put a bid together. His firm, Terrace Investments, did not have the liquid assets to finance the expansion fee and the team, but the group conceived a strategy to leverage a land development. In 1989, after finding a suitable site on farmland just west of Ottawa in Kanata on which to construct a new arena, Terrace announced its intention to win a franchise and launched a successful "Bring Back the Senators" campaign to both woo the public and persuade the NHL that the city could support an NHL franchise. Public support was high and the group would secure over 11,000 season ticket pledges. On December 12, 1990, the NHL approved a new franchise for Firestone's group, to start play in the 1992–93 season.
The new team hired former NHL player Mel Bridgman, who had no previous NHL management experience, as its first general manager in 1992. The team was initially interested in hiring former Jack Adams Award winner Brian Sutter as its first head coach, but Sutter came with a high price tag and was reluctant to be a part of an expansion team. When Sutter was eventually signed to coach the Boston Bruins, Ottawa signed Rick Bowness, the man Sutter replaced in Boston. The new Senators were placed in the Adams Division of the Wales Conference and played their first game on October 8, 1992, in the Ottawa Civic Centre against the Montreal Canadiens, amid much pre-game spectacle. The Senators defeated the Canadiens 5–3 in one of the few highlights that season. Following the initial excitement of the opening night victory, the club floundered badly and eventually tied the San Jose Sharks for the worst record in the league, winning only 10 games with 70 losses and four ties for 24 points, three points better than the NHL record for futility. The Senators had aimed low and considered the 1992–93 season a small success, as Firestone had set a goal for the season of not setting a new NHL record for fewest points in a season. The long-term plan was to finish low in the standings for its first few years in order to secure high draft picks and eventually contend for the Stanley Cup.
Bridgman was fired after one season and Team President Randy Sexton took over the general manager duties. Firestone himself soon left the team and Rod Bryden emerged as the new owner. The strategy of aiming low and securing a high draft position did not change. The Senators finished last overall for the next three seasons. For the 1993–94 season, the team now played in the Eastern Conference's Northeast Division. Although 1993 first overall draft choice Alexandre Daigle wound up being one of the greatest draft busts in NHL history, they chose Radek Bonk in 1994, Bryan Berard (traded for Wade Redden) in 1995, Chris Phillips in 1996 and Marian Hossa in 1997, all of whom would become solid NHL players and formed a strong core of players in years to come. Alexei Yashin, the team's first-ever draft selection from 1992, emerged as one of the NHL's brightest young stars. The team traded many of their better veteran players of the era, including 1992–93 leading scorer Norm Maciver and fan favourites Mike Peluso and Bob Kudelski in an effort to stockpile prospects and draft picks.
As the 1995–96 season began, star centre Alexei Yashin refused to honour his contract and did not play. In December, after three straight last-place finishes, and with the team ridiculed throughout the league, fans began to grow restless waiting for the team's long-term plan to yield results, and arena attendance began to decline. Rick Bowness was fired in late 1995 and replaced by the Prince Edward Island Senators' head coach, Dave Allison. Allison fared no better than his predecessor, and the team stumbled to a 2–22–3 record under him. Sexton himself was fired and replaced by Pierre Gauthier, the former assistant GM of Anaheim. Before the end of January 1996, Gauthier had resolved the team's most pressing issues by settling Yashin's contract dispute and hiring the highly regarded Jacques Martin as head coach. While Ottawa finished last overall once again, the 1995–96 season ended with renewed optimism, due in part to the upgraded management and coaching, and also to the emergence of an unheralded rookie from Sweden named Daniel Alfredsson, who would win the Calder Memorial Trophy as NHL Rookie of the Year in 1996.
Martin would impose a "strong defence first" philosophy that led to the team qualifying for the playoffs every season that he coached, but he was criticized for the team's lack of success in the playoffs, notably losing four straight series against the provincial rival Toronto Maple Leafs. Martin outlasted several general managers and a change in ownership.
In 1996–97, his first season, the club qualified for the playoffs in the last game of the season and nearly defeated the Buffalo Sabres in the first round. In 1997–98, the club finished with their first winning record and upset the heavily favoured New Jersey Devils to win their first playoff series. In 1998–99, the Senators jumped from fourteenth overall in the previous season to third, with 103 points, the first 100-point season in club history, only to be swept in the first round. In 1999–2000, despite the holdout of team captain Alexei Yashin, Martin guided the team to the playoffs, only to lose to the Maple Leafs in the first Battle of Ontario series. Yashin returned for 2000–01, and the team improved to win their division and place second in the Eastern Conference. Yashin played poorly in another first-round playoff loss, and on the day of the 2001 NHL Entry Draft, he was traded to the New York Islanders in exchange for Zdeno Chara, Bill Muckalt and the second overall selection in the draft, which Ottawa used to select centre Jason Spezza.
The Senators' regular-season point total dropped in 2001–02, but in the playoffs, they upset the Philadelphia Flyers for the franchise's second playoff series win. The Sens then lost in game seven of the second round. Despite speculation that Martin would be fired, it was GM Marshall Johnston who left, retiring from the team; he was replaced by John Muckler, the Senators' first general manager with previous NHL GM experience.
In 2002–03, off-ice problems dominated the headlines, as the Senators filed for bankruptcy in mid-season but continued play after getting emergency financing. Despite the off-ice problems, Ottawa had an outstanding season, placing first overall in the NHL to win the Presidents' Trophy. In the playoffs, they came within one game of reaching the finals. Prior to the 2003–04 season, pharmaceutical billionaire Eugene Melnyk purchased the club, bringing financial stability. Martin guided the team to another good regular season but again lost in the first round of the playoffs, leading to his dismissal, as management felt that a new coach was required for playoff success.
After the playoff loss, owner Melnyk promised that changes were coming, and they came quickly. In June 2004, Anaheim GM Bryan Murray, a native of nearby Shawville, became the head coach. That summer, the team also made substantial personnel changes, trading long-time players Patrick Lalime and Radek Bonk and signing free agent goaltender Dominik Hasek. The team would not be able to show its new line-up for a year, as the 2004–05 NHL lock-out intervened and most players played in Europe or in the minors. In a final change, just before the 2005–06 season, the team traded long-time player Marian Hossa for Dany Heatley.
The media predicted the Senators would be Stanley Cup contenders in 2005–06, as they had a strong core of players returning, played an up-tempo style suited to the new rule changes, and expected Hasek to provide top-notch goaltending. The team rushed out of the gate, winning 19 of its first 22 games, and finished with 52 wins and 113 points, placing first in the conference and second overall. The newly formed 'CASH' line of Alfredsson, Spezza and newly acquired Dany Heatley established itself as one of the league's top offensive lines. Hasek played well until he was injured during the 2006 Winter Olympics, forcing the team to enter the playoffs with rookie netminder Ray Emery as their starter. Without Hasek, the club bowed out in a second-round loss to the Buffalo Sabres.
In 2006–07, the Senators reached the Stanley Cup Finals after qualifying for the playoffs in nine consecutive seasons. The Senators had high personnel turnover and the disappointment of 2006 to overcome, and they started the season poorly. Trade rumours swirled around Daniel Alfredsson for most of the last months of 2006. The team lifted itself out of last place in the division to nearly catch the Buffalo Sabres by season's end, placing fourth in the Eastern Conference. The team finished with 105 points, its fourth straight 100-point season and sixth in the last eight. In the playoffs, Ottawa continued its good play. Led by the 'CASH' line, goaltender Ray Emery, and the strong defence of Chris Phillips and Anton Volchenkov, the club defeated the Pittsburgh Penguins, the second-ranked New Jersey Devils and the top-ranked Buffalo Sabres to advance to the Stanley Cup Finals.
The 2006–07 Senators thus became the first Ottawa team to be in the Stanley Cup final since 1927 and the city was swept up in the excitement. Businesses along all of the main streets posted large hand-drawn "Go Sens Go" signs, residents put up large displays in front of their homes or decorated their cars. A large Ottawa Senators flag was draped on the City Hall, along with a large video screen showing the games. A six-storey likeness of Daniel Alfredsson was hung on the Corel building. Rallies were held outside of City Hall, car rallies of decorated cars paraded through town and a section of downtown, dubbed the "Sens Mile", was closed off to traffic during and after games for fans to congregate.
In the Final, the Senators faced the Anaheim Ducks, considered a favourite since the start of the season, a team the Senators had last played in 2006 and one known for its strong defence. The Ducks won the first two games in Anaheim, 3–2 and 1–0. Returning home, the Senators won game three 5–3 but lost game four 3–2. The Ducks won game five 6–2 in Anaheim to clinch the series. The Ducks had played outstanding defence, shutting down the 'CASH' line and forcing Murray to split up the line. The Ducks scored timely goals, and Ducks goaltender Jean-Sebastien Giguere outplayed Emery.
In the off-season after the Stanley Cup Final, Bryan Murray's contract was expiring, while GM John Muckler had one season remaining, after which he was expected to retire. Murray, who had previously been a GM for other NHL clubs, was expected to take over the GM position, although no public timetable was given. Owner Melnyk decided to offer Muckler another position in the organization and give the GM position to Murray. Muckler declined the offer and was relieved of his position. Melnyk publicly justified the move, saying that he expected to lose Murray if his contract ran out. Murray then elevated John Paddock, the assistant coach, to head coach of the Senators. Under Paddock, the team got off to a record start to the 2007–08 season. However, team play declined to a .500 level, and the team looked to be falling out of the playoffs. Paddock was fired by Murray, who took over coaching on an interim basis. The club managed to qualify for the playoffs by a tie-breaker but was swept in the first round by the Pittsburgh Penguins. In June, the club bought out goaltender Ray Emery, who had become notorious for off-ice events in Ottawa and lateness to several team practices.
For 2008–09, Murray hired Craig Hartsburg to coach the Senators. Under Hartsburg's style, the Senators struggled and played under .500. Uneven goaltending with Martin Gerber and Alex Auld meant the team played cautiously to protect the goaltender. Murray's patience ran out in February 2009 with the team well out of playoff contention and Hartsburg was fired, although he had two years left on his contract, and the team also had Paddock under contract. Cory Clouston was elevated from the Binghamton coaching position. The team played above .500 under Clouston and rookie goaltender Brian Elliott, who had been promoted from Binghamton. Gerber was waived from the team at the trading deadline and the team traded for goaltender Pascal Leclaire, although he would not play due to injury. The team failed to make the playoffs for the first time in 12 seasons. Auld would be traded in the off-season to make room. Clouston's coaching had caused a rift with top player Dany Heatley (although unspecified "personal issues" were also noted by Heatley) and after Clouston was given a contract to continue coaching, Heatley made a trade demand and was traded just before the start of the 2009–10 season.
In 2009–10, the Senators were a .500 team until going on a team-record 11-game winning streak in January. The streak propelled the team to the top of the Northeast Division standings and a top-three placing for the playoffs. The team was unable to hold off the Sabres for the division lead but qualified for the playoffs in the fifth position. For the third time in four seasons, the Senators faced the Pittsburgh Penguins in the first round. A highlight for the Senators was winning a triple-overtime fifth game in Pittsburgh, but the team was unable to win a playoff game on home ice, losing the series in six games.
The Senators had a much poorer than expected 2010–11 campaign, resulting in constant rumours of a shakeup through December. The rumours were heightened in January after the team went on a lengthy losing streak. January was a dismal month for the Senators, who won only one game all month. Media speculated on the imminent firing of Clouston, Murray or both. Owner Melnyk cleared the air in an article in the January 22, 2011, edition of the "Ottawa Sun". Melnyk stated that he would not fire either Clouston or Murray, but that he had given up on the season and was in the process of developing a plan for the future. On Monday, January 24, "The Globe and Mail" reported that the plan included hiring a new general manager before the June entry draft and that Murray would be retained as an advisor to the team. A decision on whether to retain Clouston would be made by the new general manager. The article by Roy MacGregor, a long-time reporter on the Ottawa Senators, stated that former assistant coach Pierre McGuire had already been interviewed. Murray, in a press conference that day, stated that he wished to stay on as the team's general manager. He also stated that Melnyk was allowing him to continue as general manager without restraint. Murray said that the players were now to be judged by their play until the February 28 trade deadline, and that he would attempt to move "a couple, at least" of the players for draft picks or prospects at that time if the Senators remained out of playoff contention. At the time of Murray's comments, the team was eight games under .500 and 14 points out of a playoff position after 49 games.
Murray started with the trading of Mike Fisher to the Nashville Predators in exchange for a first-round pick in the 2011 draft. Fisher already had a home in Nashville with his new wife, Carrie Underwood. The trading of Fisher, a fan favourite in Ottawa, led to a small anti-Underwood backlash in the city, with some local radio stations banning her songs from their playlists. Murray next traded Chris Kelly, another veteran, to the Boston Bruins for a second-round pick in the 2011 draft. A few days later, pending unrestricted free agent Jarkko Ruutu was sent to the Anaheim Ducks in exchange for a sixth-round pick in 2011. A swap of goaltenders was made with the Colorado Avalanche, which brought Craig Anderson to Ottawa in exchange for Brian Elliott; both goalies were having sub-par seasons prior to the trade. Under-achieving forward Alex Kovalev was traded to the Pittsburgh Penguins for a seventh-round draft pick. On trade deadline day, Ottawa picked up goaltender Curtis McElhinney on waivers and traded Chris Campoli with a seventh-round pick to the Chicago Blackhawks for a second-round pick and Ryan Potulny. Goaltender Anderson played very well down the stretch for Ottawa, and the team quickly signed the soon-to-be unrestricted free agent to a four-year contract. After media speculation on the future of Murray within the organization, Murray was re-signed as general manager on April 8 to a three-year extension. On April 9, head coach Cory Clouston and assistants Greg Carvel and Brad Lauer were dismissed from their positions. Murray said that the decision was based on the fact that the team had entered the season believing it was a contender but finished with a 32–40–10 record. Former Detroit Red Wings assistant coach Paul MacLean was hired as Clouston's replacement on June 14, 2011.
As the 2011–12 season began, many hockey writers and commentators were convinced that the Senators would finish at or near the bottom of the NHL standings. In the midst of rebuilding, the Ottawa line-up contained many rookies and inexperienced players. The team struggled out of the gate, losing five of their first six games before a reversal of fortunes saw them win six games in a row. In December 2011, the team acquired forward Kyle Turris from the Phoenix Coyotes in exchange for David Rundblad and a draft pick. The team improved its play afterwards and moved into a playoff position before the All-Star Game. For the first time in Senators' history, the All-Star Game was held in Ottawa, and it was considered a great success. Five Senators were voted in or named to the event, including Daniel Alfredsson, who was named the captain of one team. The team continued its playoff push after the break. After starting goalie Craig Anderson injured his hand in a kitchen accident at home, the Senators called up Robin Lehner from Binghamton and acquired highly regarded goaltender Ben Bishop from the St. Louis Blues. While Anderson recovered, the team continued its solid play. On April 1, 2012, the Senators defeated the New York Islanders 5–1, officially ensuring a playoff position. The team finished as the eighth seed in the Eastern Conference, drawing a first-round playoff matchup against the Conference champion New York Rangers. Ultimately, Ottawa lost the series in seven games.
The next season, Ottawa would be challenged to repeat the success of 2011–12, due to long-term injuries to key players such as Erik Karlsson, Jason Spezza, Milan Michalek and Craig Anderson. Despite these injuries, the Senators finished seventh in the Eastern Conference, and head coach Paul MacLean went on to win the Jack Adams Award as the NHL's coach of the year. Ottawa played the second-seeded Montreal Canadiens in the first round of the playoffs, winning in five games and blowing out Montreal 6–1 in games three and five. The Senators advanced to play the top-seeded Pittsburgh Penguins in the second round, this time losing in five games. During the off-season, the Senators traded veteran defenceman Sergei Gonchar to the Dallas Stars for a sixth-round pick in the 2013 draft. July 5, 2013, was a day of mixed emotions for the city and fans, as long-time captain Daniel Alfredsson signed a one-year contract with the Detroit Red Wings, leaving Ottawa after 17 seasons with the Senators and 14 as captain. The signing shocked numerous fans across the city and many within the Senators organization. The day finished optimistically, however, as Murray acquired star forward Bobby Ryan from the Anaheim Ducks in exchange for forwards Jakob Silfverberg, Stefan Noesen and a first-round pick in the 2014 draft. The hope was that Ryan would fill the top-line role alongside Jason Spezza after Alfredsson's departure. Murray also signed free agent forward Clarke MacArthur to a two-year contract that same day, and brought back former defenceman Joe Corvo on a one-year contract three days later, on July 8, 2013.
For the 2013–14 NHL season, the league realigned, and Ottawa was assigned to the new Atlantic Division along with the rest of the old Northeast Division and the additions of the Columbus Blue Jackets and Detroit Red Wings, formerly of the Western Conference. The re-alignment brought increased competition to qualify for the playoffs, as there were now 16 teams in the Eastern Conference fighting for eight playoff spots. The season began with a change of leadership, as on September 14, 2013, the Ottawa Senators named Jason Spezza their eighth captain in franchise history. While new addition Clarke MacArthur had a career year, Ryan and Spezza struggled to find chemistry, and Ryan was moved to a line with MacArthur and Kyle Turris, where he fared much better. Ryan also ran into injury problems during the season, and while there were times when Joe Corvo played solidly, he eventually lost his place in the line-up. The club struggled on defence, as shots and goals against increased from the previous season. The club hovered at or just above .500 for much of the season and was never in a playoff position. At the trade deadline, Murray traded for flashy right winger Ales Hemsky from the Edmonton Oilers, who quickly found success on a line with Spezza and Michalek. The club, however, was eliminated from playoff contention in the last week of the season. At the end of the season, the club failed to come to terms on a new contract with Hemsky, and captain Jason Spezza requested a trade out of Ottawa. At the 2014 NHL Entry Draft, a potential trade to the Nashville Predators was negotiated by Murray but rejected by Spezza, as the Predators were one of the teams on his limited no-trade list. A deal with the Dallas Stars was eventually reached, and Spezza was sent, along with Ludwig Karlsson, in exchange for Alex Chiasson, Nick Paul, Alex Guptill and a 2015 second-round pick. During the off-season, the club signed free agent forward David Legwand to a two-year, $6 million contract.
At the beginning of the 2014–15 season, defenceman Erik Karlsson was named the franchise's ninth captain, and the club re-signed Bobby Ryan to a seven-year extension. After firing head coach Paul MacLean after 27 games with an 11–11–5 record and replacing him with Dave Cameron, the Senators won 32 of their last 55 games. Goaltender Andrew Hammond compiled a record of 20–1–2, a goals-against average of 1.79, and a save percentage of .941 to get the team back into playoff position. The Senators became the first team in modern NHL history to overcome a 14-point deficit at any juncture of the season to qualify for the playoffs. However, the Senators lost to the Canadiens in six games in the first round of the playoffs.
During the 2014–15 season, it was announced that Murray had cancer. Taking regular treatment, Murray chose to stay on as GM through the 2015–16 season. Despite posting the best record of any Canadian team in the league, the Senators failed to make the playoffs in what was considered a disappointing season (all seven Canadian teams missed the playoffs). Murray made one 'blockbuster' 11-player trade that brought Toronto Maple Leafs captain Dion Phaneuf to the Senators before the trade deadline. The Senators were outside of a playoff position at the time of the deal, could not put together another run, and finished with 85 points, fifth in the division.
On April 10, 2016, the day after the final game of the 2015–16 season, Murray announced his resignation as general manager and that he would continue in an advisory role with the club. Assistant general manager Pierre Dorion was promoted to the general manager position. On April 12, 2016, the Senators fired head coach Dave Cameron. On May 8, 2016, the Senators hired former Tampa Bay Lightning head coach Guy Boucher as their new head coach. On the following day, Marc Crawford was announced as associate coach. On June 13, 2016, the Senators hired Daniel Alfredsson as the senior advisor of hockey operations. In June 2016, the Senators hired Rob Cookson as an assistant coach, who had worked with both Boucher and Crawford in Switzerland, and Pierre Groulx as a goaltending coach.
The Senators finished second in the Atlantic Division during the 2016–17 season and faced the Boston Bruins in the first round of the playoffs, winning that series in six games. In the second round, they defeated the New York Rangers in six games. During the second game of that series, Jean-Gabriel Pageau scored four goals, including the game-winning goal in double overtime. The Senators would come within one game of the Stanley Cup Final, but lost in double overtime of the seventh game of their Eastern Conference Final series against the Pittsburgh Penguins, who went on to win their second consecutive Stanley Cup.
Following their appearance in the Eastern Conference Finals the previous season, the Senators lost defenceman Marc Methot in the 2017 NHL Expansion Draft. On November 5, 2017, the Senators completed a blockbuster trade with the Colorado Avalanche, bringing in star forward Matt Duchene in exchange for Kyle Turris, Shane Bowers, Andrew Hammond, a conditional first-round pick in 2018 or 2019, and a third-round pick in 2019. Following the trade, however, the Senators' season began to fall apart. Forward Derick Brassard and defenceman Dion Phaneuf were dealt at the trade deadline to the Pittsburgh Penguins and Los Angeles Kings, respectively. The Senators finished the year second-to-last in the league with a 28–43–11 record and 67 points, their lowest overall point total since 1995–96.
In the 2018 off-season, the Senators traded forward Mike Hoffman to the San Jose Sharks; he was dealt later that day by the Sharks to the Florida Panthers. The Senators held the fourth-overall pick in the 2018 NHL Entry Draft. Having to give either that pick or their 2019 first-round pick to the Avalanche, the Senators elected to keep the 2018 pick and selected forward Brady Tkachuk fourth overall. Just before the regular season started, the Senators traded their captain, Erik Karlsson, to the San Jose Sharks for players and draft picks. Unable to re-sign star forwards Duchene, Mark Stone, and Ryan Dzingel, the Senators traded all three prior to the trade deadline: Duchene and Dzingel to the Columbus Blue Jackets, and Stone to the Vegas Golden Knights. The team finished last in the NHL, missing the playoffs in back-to-back seasons for the first time since 1995–96.
The new Senators' first home arena was the Ottawa Civic Centre, located on Bank Street in Ottawa, where they played from the 1992–93 season until January of the 1995–96 season; their last game there, on New Year's Eve 1995, was against the Tampa Bay Lightning.
As part of its bid to land an NHL franchise for Ottawa, Terrace Corporation unveiled the original proposal for the arena development at a press conference in September 1989. The proposal included a hotel and a 20,500-seat arena, named The Palladium, surrounded by a mini-city named "West Terrace". The site itself, farmland on the western border of Kanata, had been acquired by Terrace in May 1989. Rezoning approval was granted on August 28, 1991, with conditions, which included a scaling down of the arena to 18,500 seats, a moratorium on development outside the initial arena site, and a requirement that the cost of the interchange with Highway 417 be paid by Terrace. Terrace then spent two years seeking financing for the site and the interchange. The corporation received a $6 million grant from the federal government but needed to borrow to pay for the rest of the costs of construction. A ground-breaking ceremony was held in June 1992, but actual construction did not start until July 7, 1994. Construction took 18 months, finishing in January 1996.
The newly built Palladium opened on January 15, 1996, with a concert by Canadian rocker Bryan Adams. The Senators played their first game in their new arena two days later, falling 3–0 to the Montreal Canadiens. On February 17, 1996, the name 'Palladium' was changed to 'Corel Centre' when Corel Corporation, an Ottawa software company, signed a 10-year deal for the naming rights.
When mortgage holder Covanta Energy (the former Ogden Entertainment) went into receivership in 2001, Terrace was expected to pay off the entire debt. The ownership was not able to refinance the arena, eventually leading Terrace itself to declare bankruptcy in 2003. However, on August 26, 2003, billionaire businessman Eugene Melnyk finalized the purchase of the Senators and the arena. The arena and club became solely owned by Melnyk through a new company, Capital Sports Properties.
In 2004, the ownership applied to expand its seating and the City of Ottawa amended its by-laws for the venue, increasing its seating capacity in 2005 to 19,153 and total attendance capacity to 20,500 including standing room.
On January 19, 2006, the arena became known as 'Scotiabank Place' after reaching a new 15-year naming agreement with Canadian bank Scotiabank on January 11, 2006. Scotiabank had been an advertising partner with the club for several years and took over the naming after Corel declined to renew its naming agreement with the Senators, but continued as an advertising sponsor. On June 18, 2013, the Ottawa Senators announced a new marketing agreement with Canadian Tire, and as a result, the arena was renamed Canadian Tire Centre on July 1, 2013.
In 2015, the National Capital Commission (NCC) put out a request for proposals to redevelop the LeBreton Flats area in downtown Ottawa, a longtime vacant former industrial area. In 2016 the NCC settled on the proposal presented by Senators owner Eugene Melnyk and the RendezVous LeBreton Group partnership with Trinity Developments. The proposal included housing units, park space, a recreation facility, a library and a new arena for the Ottawa Senators.
The plan to build a new arena downtown came apart in late 2018 after it was revealed that the Senators were suing Trinity for damages. Trinity was developing a site adjacent to the LeBreton Flats site, and the Senators felt this was inappropriate competition. Trinity responded with a lawsuit, accusing the Senators of being unwilling to contribute any money to the project. The NCC announced the cancellation of the partnership's bid to develop the site but gave the sides an extension when the two parties agreed to mediation. On February 27, 2019, it was announced that mediation between the parties had failed to produce an agreement and that the NCC would explore other options for the site's redevelopment.
The Senators organization operates primarily in English and provides French-language services, reflecting that it is one of two NHL teams (the other being the Canadiens in Montreal) with a large francophone fan base.
The team's website and social media outlets are in both languages, and arena announcements and press releases are given in both languages.
The team colours are red, black and white, with gold trim. The team's away jersey is mostly white with red and black trim, while the home jersey is red with white and black trim. The club logo is officially the head of a Roman general, a member of the Senate of the Roman Republic, projecting from a gold circle.
The original logo, unveiled on May 23, 1991, depicted the general as a "centurion figure, strong and prominent", according to its designer, Tony Milchard.
From 1992 to 1995, the Senators' primary road jerseys were black with red stripes. The numbers were red for the first season, but switched to white afterwards. White stripes were added to the uniform in 1995. The white uniforms, which were worn on home games until 2003 and on road games until 2007, featured black sleeves and tail stripes with red accents, and black lettering.
In 1997, the Senators unveiled a red third jersey, featuring the first iteration of the "forward-facing" centurion logo. The jersey became the team's primary dark jersey starting in 1999. From 2000 to 2007, the Senators also wore a black alternate jersey with gold, red and white accents.
The current jersey design was unveiled on August 22, 2007, in conjunction with the league-wide adoption of the "Rbk EDGE" jerseys by Reebok for the 2007–08 season. The jersey incorporates the original Senators' 'O' logo as a shoulder patch. At the same time, the team updated its logos and switched their usage. The primary logo, which, according to team owner Eugene Melnyk, "represents strength and determination", is an update of the old secondary logo. The old primary logo has become the team's secondary logo and only appears on Senators' merchandise.
Prior to the 2008–09 season, the Senators unveiled a new black third jersey, featuring the shortened "SENS" moniker on the front. The centurion logo adorns the shoulders, and the striping was inspired by the team's original black jerseys.
In 2011, the Senators introduced a throwback-inspired third jersey design. Mostly black, the jersey incorporated horizontal striping intended to be reminiscent of the original Senators' 'barber-pole' designs. Shield-type patches were added to the shoulders, similar in design to the shield patches that the original Senators added to their jerseys after each Stanley Cup championship win. The patches spell the team name, one in English and one in French. The design was a collaborative effort between the Senators and a fan in Gatineau, Quebec, who had been circulating a version of it on the internet since 2009.
The black third jerseys served as the basis of the Senators' 2014 Heritage Classic jerseys, which used cream as the base colour.
In 2017, the Senators' jerseys received a slight makeover when Adidas replaced Reebok as uniform provider. The lettering treatment was changed to match those of their recent third jerseys, which were retired after the 2016–17 season. Prior to the 2018–19 season, the Senators brought back the red jerseys worn during the NHL 100 Classic as a third jersey. The design featured a silver "O" in front with black trim amid horizontal black, silver and white stripes.
At many home games the fans are entertained both outside and inside Canadian Tire Centre with a variety of entertainment – live music, rock bands, giveaways and promotions. The live music includes the traditional Scottish music of the 'Sons of Scotland Pipe Band' of Ottawa, along with highland dancers. Before and during games, entertainment is provided by Spartacat, the official mascot of the Senators, an anthropomorphic lion. He made his debut on the Senators' opening night, October 8, 1992. From 1994 until 2016, the national anthems were sung by former Ontario Provincial Police constable Lyndon Slewidge. At home games, "O Canada" is traditionally sung in both English and French, with the first half of the first stanza and the chorus sung in English and the second half of the first stanza sung in French. The Senators have their own theme song, titled "Ottawa Senators Theme Song", which is played as the team comes onto the ice and is also used in Sens TV web videos. It was composed locally in Ottawa. The team's goal horn is an Airchime M3H horn from a retired VIA Rail train; the team first used it at the Civic Centre.
On April 18, 2008, the club announced its final attendance figures for 2007–08. The club had 40 sell-outs in 41 home dates and a total attendance of 812,665 during the regular season, placing the club third in attendance in the NHL. The number of sell-outs and the total attendance were both club records. The previous records were set during 2005–06, with a season total of 798,453 and 33 sell-outs. In 2006–07, regular-season attendance was 794,271, with 31 sell-outs in 41 home dates, for an average attendance of 19,372. In the 2007 playoffs, the Senators played 9 home games with 9 sell-outs and an attendance of 181,272, for an average of 20,141, the highest in team history. Until recent seasons, the club regularly ranked in the top half of the NHL in attendance. In 2018–19, the Senators' average attendance was 14,553, 27th in the league.
On November 29, 2011, a "Forbes" magazine report valued the Ottawa Senators Hockey Club at $201 million (17th-highest in the NHL). The valuation was based on $27 million for the sport, $70 million for the arena, $80 million for the market and $25 million for the brand. For 2010–11, the club had an operating income of $2.8 million on revenues of $100 million. The gate receipts for the 2010–11 season were $46 million, and player expenses were $57 million. The operating income followed two years in which the team posted a loss. Forbes estimated that the organization had a debt/value ratio of 65%, including arena debt. Eugene Melnyk bought the team for $92 million in 2003. A November 2014 report by Forbes valued the Senators at $400 million, 16th-highest in the NHL. A 2019 report by Forbes valued the Senators at $445 million.
The fans of the Senators are known as the "Sens Army". Like many devoted hockey fans, they are known to dress up for games, some in Roman legionary clothing. For the 2006–07 playoff run, more fans than ever before wore red, and fan activities included 'Red Rallies' of decorated cars, fan rallies at Ottawa City Hall Plaza, and the 'Sens Mile' along Elgin Street, where fans congregated.
Much like the Red Mile in Calgary during the Flames' 2004 cup run and the Copper Kilometer in Edmonton during the Oilers' 2006 cup run, Ottawa Senators fans took to the streets to celebrate their team's success during the 2006–07 playoffs. The idea to have a 'Sens Mile' on the downtown Elgin Street, a street with numerous restaurants and pubs, began as a grassroots campaign on Facebook by Ottawa residents before game four of the Ottawa-Buffalo Eastern Conference Final series. After the game five win, Ottawa residents closed the street to traffic for a spontaneous celebration. The City of Ottawa then closed Elgin Street for each game of the Final.
Ottawa Senators games are broadcast locally in both the English and French languages. As of the 2014–15 season, regional television rights to the Senators' regular season games not broadcast nationally by Sportsnet, TVA Sports, or "Hockey Night in Canada" are owned by Bell Media under a 12-year contract, with games airing in English on TSN5, and in French on RDS. Regional broadcasts are available within the team's designated region (shared with the Montreal Canadiens), which includes the Ottawa River valley, Eastern Ontario (portions are shared with the Toronto Maple Leafs), along with Quebec, the Maritime provinces and Newfoundland and Labrador.
On radio, all home and away games are broadcast on a five-station network stretching across Eastern Ontario and including one American station, WQTK in Ogdensburg, New York. The flagship radio station is CFGO in Ottawa. Radio broadcasts on CFGO began in 1997–98; the contract has since been extended through the 2025–26 season as part of Bell Media's rights deal with the team. The Senators are broadcast on radio in French through Intersport Production and CJFO-FM in Ottawa. Nicolas St. Pierre provides play-by-play, with Alain Sanscartier as colour commentator.
Sportsnet East held English regional rights to the Sens prior to the 2014–15 season. In April 2014, Dean Brown, who had called play-by-play for Senators games since the team's inception, stated that it was "extremely unlikely" that he would move to TSN and continue his role. He noted that the network already had four commentators among its personalities – Gord Miller, Chris Cuthbert, Rod Black, and Paul Romanuk (who was, however, picked up by Rogers for its national NHL coverage in June 2014) – who were likely candidates to serve as the new voices of the Senators. Brown ultimately moved to the Senators' radio broadcasts alongside Gord Wilson.
During the 2006–07 and 2007–08 seasons, several games were only available in video on pay-per-view or at local movie theatres in the Ottawa area. The "Sens TV" service was suspended indefinitely as of September 24, 2008. In 2010, Sportsnet launched a secondary channel for selected Senators games as part of its Sportsnet One service. Selected broadcasts of Senators games in the French language were broadcast by RDS and TVA Sports. On the RDS network, Félix Séguin and former Senators goaltender Patrick Lalime were the announcers from the 2011–12 season to the 2013–14 season, and Michel Y. Lacroix and Norman Flynn starting in the 2014–15 season. The TVA Sports broadcast team consisted of Michel Langevin, Yvon Pedneault and Enrico Ciccone.
Statistics are accurate as of the hiring of D.J. Smith.
Source: "Ottawa Senators 2009–10 Media Guide", p. 206.
"This is a partial list of the last five seasons completed by the Senators. For the full season-by-season history, see List of Ottawa Senators seasons"
"Note: GP = Games Played, W = Wins, L = Losses, T = Ties, OTL = Overtime Losses, Pts = Points, GF = Goals for, GA = Goals against, PIM = Penalties in minutes"
These are the top-ten regular season point-scorers in franchise history after the 2019–20 season:
"Note: Pos = Position; GP = Games Played; G = Goals; A = Assists; Pts = Points; P/G = Points per game average;
Source: Ottawa Senators Media Guide
Prince of Wales Trophy
Presidents' Trophy
Calder Memorial Trophy
NHL Plus-Minus Award
Jack Adams Award
James Norris Memorial Trophy
King Clancy Memorial Trophy
Mark Messier Leadership Award
Bill Masterton Memorial Trophy
NHL All-Rookie Team
NHL First All-Star Team
NHL Second All-Star Team
Source: Ottawa Senators. | https://en.wikipedia.org/wiki?curid=22705 |
Orchestra
An orchestra is a large instrumental ensemble typical of classical music, which combines instruments from different families, including bowed string instruments such as the violin, viola, cello, and double bass, brass instruments such as the horn, trumpet, trombone and tuba, woodwinds such as the flute, oboe, clarinet and bassoon, and percussion instruments such as the timpani, bass drum, triangle, snare drum, cymbals, and mallet percussion instruments each grouped in sections. Other instruments such as the piano and celesta may sometimes appear in a fifth keyboard section or may stand alone, as may the concert harp and, for performances of some modern compositions, electronic instruments.
A full-size Western orchestra may sometimes be called a symphony orchestra or philharmonic orchestra (from Greek "phil-", "loving", and "harmonic"). The actual number of musicians employed in a given performance may vary from seventy to over one hundred musicians, depending on the work being played and the size of the venue. A "chamber orchestra" (sometimes "concert orchestra") is a smaller-sized ensemble of about fifty musicians or fewer. Orchestras that specialize in the Baroque music of, for example, Johann Sebastian Bach and George Frideric Handel, or Classical repertoire, such as that of Haydn and Mozart, tend to be smaller than orchestras performing a Romantic music repertoire, such as the symphonies of Johannes Brahms. The typical orchestra grew in size throughout the 18th and 19th centuries, reaching a peak with the large orchestras (of as many as 120 players) called for in the works of Richard Wagner, and later, Gustav Mahler.
Orchestras are usually led by a conductor who directs the performance with movements of the hands and arms, often made easier for the musicians to see by use of a conductor's baton. The conductor unifies the orchestra, sets the tempo and shapes the sound of the ensemble. The conductor also prepares the orchestra by leading rehearsals before the public concert, in which the conductor provides instructions to the musicians on their interpretation of the music being performed.
The leader of the first violin section, commonly called the concertmaster, also plays an important role in leading the musicians. In the Baroque music era (1600–1750), orchestras were often led by the concertmaster or by a chord-playing musician performing the basso continuo parts on a harpsichord or pipe organ, a tradition that some 20th-century and 21st-century early music ensembles continue. Orchestras play a wide range of repertoire, including symphonies, opera and ballet overtures, concertos for solo instruments, and as pit ensembles for operas, ballets, and some types of musical theatre (e.g., Gilbert and Sullivan operettas).
Amateur orchestras include those made up of students from an elementary school or a high school, youth orchestras, and community orchestras; the latter two typically being made up of amateur musicians from a particular city or region.
The term "orchestra" derives from the Greek ὀρχήστρα ("orchestra"), the name for the area in front of a stage in ancient Greek theatre reserved for the Greek chorus.
The invention of the piston and rotary valve by Heinrich Stölzel and Friedrich Blühmel, both Silesians, in 1815, was the first in a series of innovations which impacted the orchestra, including the development of modern keywork for the flute by Theobald Boehm and the innovations of Adolphe Sax in the woodwinds, notably the invention of the saxophone. These advances would lead Hector Berlioz to write a landmark book on instrumentation, which was the first systematic treatise on the use of instrumental sound as an expressive element of music.
The next major expansion of symphonic practice came from Richard Wagner's Bayreuth orchestra, founded to accompany his musical dramas. Wagner's works for the stage were scored with unprecedented scope and complexity: indeed, his score to "Das Rheingold" calls for six harps. Thus, Wagner envisioned an ever-more-demanding role for the conductor of the theatre orchestra, as he elaborated in his influential work "On Conducting". This brought about a revolution in orchestral composition, and set the style for orchestral performance for the next eighty years. Wagner's theories re-examined the importance of tempo, dynamics, bowing of string instruments and the role of principals in the orchestra.
At the beginning of the 20th century, symphony orchestras were larger, better funded, and better trained than previously; consequently, composers could compose larger and more ambitious works. The works of Gustav Mahler were particularly innovative; in his later symphonies, such as the mammoth Symphony No. 8, Mahler pushes the furthest boundaries of orchestral size, employing huge forces. By the late Romantic era, orchestras could support the most enormous forms of symphonic expression, with huge string sections, massive brass sections and an expanded range of percussion instruments. With the recording era beginning, the standards of performance were pushed to a new level, because a recorded symphony could be listened to closely and even minor errors in intonation or ensemble, which might not be noticeable in a live performance, could be heard by critics. As recording technologies improved over the 20th and 21st centuries, eventually small errors in a recording could be "fixed" by audio editing or overdubbing. Some older conductors and composers could remember a time when simply "getting through" the music as best as possible was the standard. Combined with the wider audience made possible by recording, this led to a renewed focus on particular star conductors and on a high standard of orchestral execution.
The typical symphony orchestra consists of four groups of related musical instruments called the woodwinds, brass, percussion, and strings. Other instruments such as the piano and celesta may sometimes be grouped into a fifth section such as a keyboard section or may stand alone, as may the concert harp and electric and electronic instruments. The orchestra, depending on the size, contains almost all of the standard instruments in each group.
In the history of the orchestra, its instrumentation has been expanded over time, often agreed to have been standardized by the classical period and Ludwig van Beethoven's influence on the classical model. In the 20th and 21st century, new repertory demands expanded the instrumentation of the orchestra, resulting in a flexible use of the classical-model instruments and newly developed electric and electronic instruments in various combinations.
The terms "symphony orchestra" and "philharmonic orchestra" may be used to distinguish different ensembles from the same locality, such as the London Symphony Orchestra and the London Philharmonic Orchestra. A symphony orchestra will usually have over eighty musicians on its roster, in some cases over a hundred, but the actual number of musicians employed in a particular performance may vary according to the work being played and the size of the venue.
"Chamber orchestra" usually refers to smaller-sized ensembles; a major chamber orchestra might employ as many as fifty musicians; some are much smaller than that. The term "concert orchestra" may also be used, as in the BBC Concert Orchestra and the RTÉ Concert Orchestra.
The so-called "standard complement" of doubled winds and brass in the orchestra from the first half of the 19th century is generally attributed to the forces called for by Beethoven. The composer's instrumentation almost always included paired flutes, oboes, clarinets, bassoons, horns and trumpets. The exceptions to this are his Symphony No. 4, Violin Concerto, and Piano Concerto No. 4, which each specify a single flute. Beethoven carefully calculated the expansion of this particular timbral "palette" in Symphonies 3, 5, 6, and 9 for an innovative effect. The third horn in the "Eroica" Symphony arrives to provide not only some harmonic flexibility, but also the effect of "choral" brass in the Trio movement. Piccolo, contrabassoon, and trombones add to the triumphal finale of his Symphony No. 5. A piccolo and a pair of trombones help deliver the effect of storm and sunshine in the Sixth, also known as the "Pastoral Symphony". The Ninth asks for a second pair of horns, for reasons similar to the "Eroica" (four horns has since become standard); Beethoven's use of piccolo, contrabassoon, trombones, and untuned percussion—plus chorus and vocal soloists—in his finale, are his earliest suggestion that the timbral boundaries of symphony might be expanded. For several decades after his death, symphonic instrumentation was faithful to Beethoven's well-established model, with few exceptions.
Apart from the core orchestral complement, various other instruments are called for occasionally. These include the flugelhorn and cornet. Saxophones and classical guitars, for example, appear in some 19th- through 21st-century scores. While appearing only as featured solo instruments in some works, for example Maurice Ravel's orchestration of Modest Mussorgsky's "Pictures at an Exhibition" and Sergei Rachmaninoff's "Symphonic Dances", the saxophone is included in other works, such as Ravel's "Boléro", Sergei Prokofiev's Romeo and Juliet Suites 1 and 2, Vaughan Williams' Symphonies No.6 and 9 and William Walton's "Belshazzar's Feast", and many other works as a member of the orchestral ensemble. The euphonium is featured in a few late Romantic and 20th-century works, usually playing parts marked "tenor tuba", including Gustav Holst's "The Planets", and Richard Strauss's "Ein Heldenleben". The Wagner tuba, a modified member of the horn family, appears in Richard Wagner's cycle "Der Ring des Nibelungen" and several other works by Strauss, Béla Bartók, and others; it has a prominent role in Anton Bruckner's Symphony No. 7 in E Major. Cornets appear in Pyotr Ilyich Tchaikovsky's ballet "Swan Lake", Claude Debussy's "La Mer", and several orchestral works by Hector Berlioz. Unless these instruments are played by members "doubling" on another instrument (for example, a trombone player changing to euphonium or a bassoon player switching to contrabassoon for a certain passage), orchestras typically hire freelance musicians to augment their regular ensemble.
The 20th-century orchestra was far more flexible than its predecessors. In Beethoven's and Felix Mendelssohn's time, the orchestra was composed of a fairly standard core of instruments, which was very rarely modified by composers. As time progressed, the Romantic period brought changes in accepted instrumentation, with composers such as Berlioz and Mahler employing multiple harps and sound effects such as the wind machine. During the 20th century, the modern orchestra was generally standardized with the modern instrumentation listed below. Nevertheless, by the mid- to late 20th century, with the development of contemporary classical music, instrumentation could practically be hand-picked by the composer (e.g., to add electric instruments such as electric guitar, electronic instruments such as synthesizers, non-Western instruments, or other instruments not traditionally used in orchestra).
With this history in mind, the orchestra can be analysed in five eras: the Baroque era, the Classical era, the early/mid-Romantic era, the late-Romantic era, and the combined Modern/Postmodern eras. The first is a Baroque orchestra (i.e., J.S. Bach, Handel, Vivaldi), which generally had a smaller number of performers, and in which one or more chord-playing instruments, the basso continuo group (e.g., harpsichord or pipe organ and assorted bass instruments to perform the bassline), played an important role; the second is a typical classical period orchestra (e.g., early Beethoven along with Mozart and Haydn), which used a smaller group of performers than a Romantic music orchestra and a fairly standardized instrumentation; the third is typical of an early/mid-Romantic era (e.g., Schubert, Berlioz, Schumann, Brahms); the fourth is a late-Romantic/early 20th-century orchestra (e.g., Wagner, Mahler, Stravinsky); and the fifth is the common complement of a modern orchestra (e.g., Adams, Barber, Aaron Copland, Glass, Penderecki).
Among the instrument groups and within each group of instruments, there is a generally accepted hierarchy. Every instrumental group (or section) has a principal who is generally responsible for leading the group and playing orchestral solos. The violins are divided into two groups, first violin and second violin, with the second violins playing in lower registers than the first violins, playing an accompaniment part, or harmonizing the melody played by the first violins. The principal first violin is called the concertmaster (or "leader" in the UK) and is not only considered the leader of the string section, but the second-in-command of the entire orchestra, behind only the conductor. The concertmaster leads the pre-concert tuning and handles musical aspects of orchestra management, such as determining the bowings for the violins or for all of the string section. The concertmaster usually sits to the conductor's left, closest to the audience. There is also a principal second violin, a principal viola, a principal cello and a principal bass.
The principal trombone is considered the leader of the low brass section, while the principal trumpet is generally considered the leader of the entire brass section. While the oboe often provides the tuning note for the orchestra (due to 300-year-old convention), no single principal leads the woodwind section, though in woodwind ensembles the flute often leads. Instead, each principal confers with the others as equals in the case of musical differences of opinion. Most sections also have an assistant principal (or co-principal or associate principal), or in the case of the first violins, an assistant concertmaster, who often plays a tutti part in addition to replacing the principal in his or her absence.
A section string player plays in unison with the rest of the section, except in the case of divided ("divisi") parts, where upper and lower parts in the music are often assigned to "outside" (nearer the audience) and "inside" seated players. Where a solo part is called for in a string section, the section leader invariably plays that part. The section leader (or principal) of a string section is also responsible for determining the bowings, often based on the bowings set out by the concertmaster. In some cases, the principal of a string section may use a slightly different bowing than the concertmaster, to accommodate the requirements of playing their instrument (e.g., the double-bass section). Principals of a string section will also lead entrances for their section, typically by lifting the bow before the entrance, to ensure the section plays together. Tutti wind and brass players generally play a unique but non-solo part. Section percussionists play parts assigned to them by the principal percussionist.
In modern times, the musicians are usually directed by a conductor, although early orchestras did not have one, giving this role instead to the concertmaster or the harpsichordist playing the continuo. Some modern orchestras also do without conductors, particularly smaller orchestras and those specializing in historically accurate (so-called "period") performances of baroque and earlier music.
The most frequently performed repertoire for a symphony orchestra is Western classical music or opera. However, orchestras are sometimes used in popular music (e.g., to accompany a rock or pop band in a concert), extensively in film music, and increasingly often in video game music. Orchestras are also used in the symphonic metal genre. The term "orchestra" can also be applied to a jazz ensemble, for example in the performance of big-band music.
In the 2000s, all tenured members of a professional orchestra normally audition for positions in the ensemble. Performers typically play one or more solo pieces of the auditionee's choice, such as a movement of a concerto, a solo Bach movement, and a variety of excerpts from the orchestral literature that are advertised in the audition poster (so the auditionees can prepare). The excerpts are typically the most technically challenging parts and solos from the orchestral literature. Orchestral auditions are typically held in front of a panel that includes the conductor, the concertmaster, the principal player of the section for which the auditionee is applying, and possibly other principal players.
The most promising candidates from the first round of auditions are invited to return for a second or third round of auditions, which allows the conductor and the panel to compare the best candidates. Performers may be asked to sight read orchestral music. The final stage of the audition process in some orchestras is a "test week", in which the performer plays with the orchestra for a week or two, which allows the conductor and principal players to see if the individual can function well in an actual rehearsal and performance setting.
There are a range of different employment arrangements. The most sought-after positions are permanent, tenured positions in the orchestra. Orchestras also hire musicians on contracts, ranging in length from a single concert to a full season or more. Contract performers may be hired for individual concerts when the orchestra is doing an exceptionally large late-Romantic era orchestral work, or to substitute for a permanent member who is sick. A professional musician who is hired to perform for a single concert is sometimes called a "sub". Some contract musicians may be hired to replace permanent members for the period that the permanent member is on parental leave or disability leave.
Historically, major professional orchestras have been mostly or entirely composed of male musicians; the first female members hired by professional orchestras were typically harpists. The Vienna Philharmonic, for example, did not accept women to permanent membership until 1997, far later than comparable orchestras (the other orchestras ranked among the world's top five by "Gramophone" in 2008). The last major orchestra to appoint a woman to a permanent position was the Berlin Philharmonic. In February 1996, the Vienna Philharmonic's principal flute, Dieter Flury, told "Westdeutscher Rundfunk" that accepting women would be "gambling with the emotional unity that this organism currently has". In April 1996, the orchestra's press secretary wrote that "compensating for the expected leaves of absence" of maternity leave would be a problem.
In 1997, the Vienna Philharmonic was "facing protests during a [US] tour" by the National Organization for Women and the International Alliance for Women in Music. Finally, "after being held up to increasing ridicule even in socially conservative Austria, members of the orchestra gathered [on 28 February 1997] in an extraordinary meeting on the eve of their departure and agreed to admit a woman, Anna Lelkes, as harpist." As of 2013, the orchestra has six female members; one of them, violinist Albena Danailova, became one of the orchestra’s concertmasters in 2008, the first woman to hold that position. In 2012, women made up 6% of the orchestra's membership. VPO president Clemens Hellsberg said the VPO now uses completely screened blind auditions.
In 2013, an article in "Mother Jones" stated that while "[m]any prestigious orchestras have significant female membership—women outnumber men in the New York Philharmonic's violin section—and several renowned ensembles, including the National Symphony Orchestra, the Detroit Symphony, and the Minnesota Symphony, are led by women violinists", the double bass, brass, and percussion sections of major orchestras "...are still predominantly male." A 2014 BBC article stated that the "...introduction of ‘blind’ auditions, where a prospective instrumentalist performs behind a screen so that the judging panel can exercise no gender or racial prejudice, has seen the gender balance of traditionally male-dominated symphony orchestras gradually shift."
There are also a variety of amateur orchestras, such as school orchestras, youth orchestras, and community orchestras.
Orchestras play a wide range of repertoire, ranging from 17th-century dance suites and 18th-century divertimentos to 20th-century film scores and 21st-century symphonies. Orchestras have become synonymous with the symphony, an extended musical composition in Western classical music that typically contains multiple movements which provide contrasting keys and tempos. Symphonies are notated in a musical score, which contains all the instrument parts. The conductor uses the score to study the symphony before rehearsals and decide on their interpretation (e.g., tempos, articulation, phrasing, etc.), and to follow the music during rehearsals and concerts, while leading the ensemble. Orchestral musicians play from parts containing just the notated music for their instrument. A small number of symphonies also contain vocal parts (e.g., Beethoven's Ninth Symphony).
Orchestras also perform overtures, a term originally applied to the instrumental introduction to an opera. During the early Romantic era, composers such as Beethoven and Mendelssohn began to use the term to refer to independent, self-existing instrumental, programmatic works that presaged genres such as the symphonic poem, a form devised by Franz Liszt in several works that began as dramatic overtures. These were "at first undoubtedly intended to be played at the head of a programme". In the 1850s the concert overture began to be supplanted by the symphonic poem.
Orchestras also play with instrumental soloists in concertos. During concertos, the orchestra plays an accompaniment role to the soloist (e.g., a solo violinist or pianist) and, at times, introduces musical themes or interludes while the soloist is not playing. Orchestras also play during operas, ballets, some musical theatre works and some choral works (both sacred works such as Masses and secular works). In operas and ballets, the orchestra accompanies the singers and dancers, respectively, and plays overtures and interludes where the melodies played by the orchestra take centre stage.
In the Baroque era, orchestras performed in a range of venues, including at the fine houses of aristocrats, in opera halls and in churches. Some wealthy aristocrats had an orchestra in residence at their estate, to entertain them and their guests with performances. During the Classical era, as composers increasingly sought out financial support from the general public, orchestra concerts were increasingly held in public concert halls, where music lovers could buy tickets to hear the orchestra. Aristocratic patronage of orchestras continued during the Classical era, but this went on alongside public concerts. In the 20th and 21st century, orchestras found a new patron: governments. Many orchestras in North America and Europe receive part of their funding from national or regional governments (e.g., state governments in the U.S.) or city governments. These government subsidies make up part of orchestra revenue, along with ticket sales, charitable donations (if the orchestra is registered as a charity) and other fundraising activities. With the invention of successive technologies, including sound recording, radio broadcasting, television broadcasting and Internet-based streaming and downloading of concert videos, orchestras have been able to find new revenue sources.
One of the "great unmentionable [topics] of orchestral playing" is "faking", the process by which an orchestral musician gives the "...impression of playing every note as written", typically for a very challenging passage that is very high or very fast, while not actually playing the notes that are in the printed music part. An article in "The Strad" states that all orchestral musicians, even those in the top orchestras, occasionally "fake" certain passages. One reason that musicians "fake" is because there are not enough rehearsals. Another factor is the extreme challenges in 20th-century and 21st-century contemporary pieces; some professionals said "faking" was "necessary in anything from ten to almost ninety per cent of some modern works. Professional players who were interviewed were of a consensus that faking may be acceptable when a part is not written well for the instrument, but faking "just because you haven’t practised" the music is not acceptable.
With the advent of the early music movement, smaller orchestras where players worked on execution of works in styles derived from the study of older treatises on playing became common. These include the Orchestra of the Age of Enlightenment, the London Classical Players under the direction of Sir Roger Norrington and the Academy of Ancient Music under Christopher Hogwood, among others.
In the United States, the late 20th century saw a crisis of funding and support for orchestras. The size and cost of a symphony orchestra, compared to the size of the base of supporters, became an issue that struck at the core of the institution. Few orchestras could fill auditoriums, and the time-honored season-subscription system became increasingly anachronistic, as more and more listeners would buy tickets on an ad hoc basis for individual events. Orchestral endowments and—more centrally to the daily operation of American orchestras—orchestral donors have seen investment portfolios shrink or produce lower yields, reducing the ability of donors to contribute; further, there has been a trend toward donors finding other social causes more compelling. While government funding is less central to American than European orchestras, cuts in such funding are still significant for American ensembles. Finally, the drastic falling-off of revenues from recording, tied to no small extent to changes in the recording industry itself, began a period of change that has yet to reach its conclusion.
U.S. orchestras that have gone into Chapter 11 bankruptcy include the Philadelphia Orchestra (in April 2011), and the Louisville Orchestra, in December 2010; orchestras that have gone into Chapter 7 bankruptcy and have ceased operations include the Northwest Chamber Orchestra in 2006, the Honolulu Orchestra in March 2011, the New Mexico Symphony Orchestra in April 2011, and the Syracuse Symphony in June 2011. The Festival of Orchestras in Orlando, Florida ceased operations at the end of March, 2011.
One source of financial difficulties that received notice and criticism was high salaries for music directors of US orchestras, which led several high-profile conductors to take pay cuts in recent years. Music administrators such as Michael Tilson Thomas and Esa-Pekka Salonen argued that new music, new means of presenting it, and a renewed relationship with the community could revitalize the symphony orchestra. The American critic Greg Sandow has argued in detail that orchestras must revise their approach to music, performance, the concert experience, marketing, public relations, community involvement, and presentation to bring them in line with the expectations of 21st-century audiences immersed in popular culture.
It is not uncommon for contemporary composers to use unconventional instruments, including various synthesizers, to achieve desired effects. Many, however, find the more conventional orchestral configuration to provide better possibilities for color and depth. Composers like John Adams often employ Romantic-size orchestras, as in Adams' opera "Nixon in China"; Philip Glass and others may be more free, yet still identify size-boundaries. Glass in particular has recently turned to conventional orchestras in works like the "Concerto for Cello and Orchestra" and the Violin Concerto No. 2.
Along with a decrease in funding, some U.S. orchestras have reduced their overall personnel, as well as the number of players appearing in performances. The reduced numbers in performance are usually confined to the string section, since the numbers here have traditionally been flexible (as multiple players typically play from the same part).
Conducting is the art of directing a musical performance, such as an orchestral or choral concert. The primary duties of the conductor are to set the tempo, ensure correct entries by various members of the ensemble, and "shape" the phrasing where appropriate. To convey their ideas and interpretation, a conductor communicates with their musicians primarily through hand gestures, typically though not invariably with the aid of a baton, and may use other gestures or signals, such as eye contact with relevant performers. A conductor's directions will almost invariably be supplemented or reinforced by verbal instructions or suggestions to their musicians in rehearsal prior to a performance.
The conductor typically stands on a raised podium with a large music stand for the full score, which contains the musical notation for all the instruments and voices. Since the mid-19th century, most conductors have not played an instrument when conducting, although in earlier periods of classical music history, leading an ensemble while playing an instrument was common. In Baroque music from the 1600s to the 1750s, the group would typically be led by the harpsichordist or first violinist (see concertmaster), an approach that in modern times has been revived by several music directors for music from this period. Conducting while playing a piano or synthesizer may also be done with musical theatre pit orchestras. Communication is typically non-verbal during a performance (this is strictly the case in art music, but in jazz big bands or large pop ensembles, there may be occasional spoken instructions, such as a "count in"). However, in rehearsals, frequent interruptions allow the conductor to give verbal directions as to how the music should be played or sung.
Conductors act as guides to the orchestras or choirs they conduct. They choose the works to be performed and study their scores, to which they may make certain adjustments (e.g., regarding tempo, articulation, phrasing, repetitions of sections, and so on), work out their interpretation, and relay their vision to the performers. They may also attend to organizational matters, such as scheduling rehearsals, planning a concert season, hearing auditions and selecting members, and promoting their ensemble in the media. Orchestras, choirs, concert bands and other sizable musical ensembles such as big bands are usually led by conductors.
In the Baroque music era (1600–1750), most orchestras were led by one of the musicians, typically the principal first violin, called the concertmaster. The concertmaster would lead the tempo of pieces by lifting his or her bow in a rhythmic manner. Leadership might also be provided by one of the chord-playing instrumentalists playing the basso continuo part which was the core of most Baroque instrumental ensemble pieces. Typically, this would be a harpsichord player, a pipe organist or a lutenist or theorbo player. A keyboard player could lead the ensemble with his or her head, or by taking one of the hands off the keyboard to lead a more difficult tempo change. A lutenist or theorbo player could lead by lifting the instrument neck up and down to indicate the tempo of a piece, or to lead a ritard during a cadence or ending. In some works which combined choirs and instrumental ensembles, two leaders were sometimes used: a concertmaster to lead the instrumentalists and a chord-playing performer to lead the singers. During the Classical music period (ca. 1720–1800), the practice of using chordal instruments to play basso continuo was gradually phased out, and it disappeared completely by 1800. Instead, ensembles began to use conductors to lead the orchestra's tempos and playing style, while the concertmaster played an additional leadership role for the musicians, especially the string players, who imitate the bowstroke and playing style of the concertmaster, to the degree that is feasible for the different stringed instruments.
In 1922, the idea of a conductor-less orchestra was revived in post-revolutionary Soviet Union. The symphony orchestra Persimfans was formed without a conductor, because the founders believed that the ensemble should be modeled on the ideal Marxist state, in which all people are equal. As such, its members felt that there was no need to be led by the dictatorial baton of a conductor; instead they were led by a committee, which determined tempos and playing styles. Although it was a partial success within the Soviet Union, the principal difficulty with the concept was in changing tempo during performances, because even if the committee had issued a decree about where a tempo change should take place, there was no leader in the ensemble to guide this tempo change. The orchestra survived for ten years before Stalin's cultural politics disbanded it by taking away its funding.
In Western nations, some ensembles, such as the Orpheus Chamber Orchestra, based in New York City, have had more success with conductorless orchestras, although decisions are likely to be deferred to some sense of leadership within the ensemble (for example, the principal wind and string players, notably the concertmaster). Others have returned to the tradition of a principal player, usually a violinist, being the artistic director and running rehearsal and leading concerts. Examples include the Australian Chamber Orchestra, Amsterdam Sinfonietta & Candida Thompson and the New Century Chamber Orchestra. As well, as part of the early music movement, some 20th- and 21st-century orchestras have revived the Baroque practice of having no conductor on the podium for Baroque pieces, using the concertmaster or a chord-playing basso continuo performer (e.g., harpsichord or organ) to lead the group.
Some orchestral works specify that an offstage trumpet should be used or that other instruments from the orchestra should be positioned off-stage or behind the stage, to create a haunted, mystical effect. To ensure that the offstage instrumentalist(s) play in time, sometimes a sub-conductor will be stationed offstage with a clear view of the principal conductor. Examples include the ending of "Neptune" from Gustav Holst's "The Planets". The principal conductor leads the large orchestra, and the sub-conductor relays the principal conductor's tempo and gestures to the offstage musician (or musicians). One of the challenges with using two conductors is that the second conductor may get out of synchronization with the main conductor, or may mis-convey (or misunderstand) the principal conductor's gestures, which can lead to the offstage instruments being out of time. In the late 20th century and early 21st century, some orchestras use a video camera pointed at the principal conductor and a closed-circuit TV set in front of the offstage performer(s), instead of using two conductors.
The techniques of polystylism and polytempo music have led a few 20th- and 21st-century composers to write music where multiple orchestras or ensembles perform simultaneously. These trends have brought about the phenomenon of polyconductor music, wherein separate sub-conductors conduct each group of musicians. Usually, one principal conductor conducts the sub-conductors, thereby shaping the overall performance. Percy Grainger's "The Warriors" includes three conductors: the primary conductor of the orchestra, a secondary conductor directing an off-stage brass ensemble, and a tertiary conductor directing percussion and harp. A late 20th-century example is Karlheinz Stockhausen's "Gruppen", for three orchestras, which are placed around the audience. This way, the "sound masses" can be spatialized, as in an electroacoustic work. "Gruppen" was premiered in Cologne, in 1958, conducted by Stockhausen, Bruno Maderna and Pierre Boulez. It was performed in 1996 by Simon Rattle, John Carewe and Daniel Harding. | https://en.wikipedia.org/wiki?curid=22706 |
Oolong
Oolong is a traditional semi-oxidized Chinese tea ("Camellia sinensis") produced through a process including withering the plant under strong sun and oxidation before curling and twisting. Most oolong teas, especially those of fine quality, involve unique tea plant cultivars that are exclusively used for particular varieties. The degree of oxidation, which varies according to the chosen duration of time before firing, can range from 8% to 85%, depending on the variety and production style. Oolong is especially popular in south China and among Chinese expatriates in Southeast Asia, as is the Fujian preparation process known as the Gongfu tea ceremony.
Different styles of oolong tea can vary widely in flavor. They can be sweet and fruity with honey aromas, or woody and thick with roasted aromas, or green and fresh with complex aromas, all depending on the horticulture and style of production. Several types of oolong tea, including those produced in the Wuyi Mountains of northern Fujian, such as Da Hong Pao, are among the most famous Chinese teas. Different varieties of oolong are processed differently, but the leaves are usually formed into one of two distinct styles. Some are rolled into long curly leaves, while others are 'wrap-curled' into small beads, each with a tail. The former style is the more traditional.
The Chinese term "wulong" (oolong) was first used to describe a tea in the 1857 text "Miscellaneous Notes on Fujian" by Shi Hongbao. In Chinese, oolong teas are also known as "qingcha" () or "dark green teas". The term "blue tea" () in French is synonymous with the term oolong.
The manufacture of oolong tea involves repeating stages to achieve the desired amount of bruising and browning of leaves. Withering, rolling, shaping, and firing are similar to black tea, but much more attention to timing and temperature is necessary.
The exact origin of the term is impossible to state with certainty. There are three widely espoused explanations of the origin of the Chinese name. According to the "tribute tea" theory, oolong tea came directly from Dragon-Phoenix Tea Cake tribute tea. The term oolong tea replaced the old term when loose tea came into fashion. Since it was dark, long, and curly, it was called Black Dragon tea.
According to the "Wuyi" theory, oolong tea first existed in the Wuyi Mountains region. This is evidenced by Qing-dynasty poems such as Wuyi Tea Song (Wuyi Chage) and Tea Tale (Chashuo). It was said that oolong tea was named after the part of the Wuyi Mountain where it was originally produced.
According to the "Anxi" theory, oolong tea had its origin in the Anxi oolong tea plant, which was discovered by a man named Sulong, Wulong, or Wuliang.
Another tale tells of a man named Wu Liang (later corrupted to Wu Long, or Oolong) who discovered oolong tea by accident when he was distracted by a deer after a hard day's tea-picking, and by the time he remembered to return to the tea it had already started to oxidize.
Tea production in Fujian is concentrated in two regions: the Wuyi Mountains and Anxi County. Both are major historical centers of oolong tea production in China.
The most famous and expensive oolong teas are made in the Wuyi Mountains, and the production is still usually accredited as being organic. A number of well-known cliff teas are produced there.
The term "dancong" originally meant phoenix teas all picked from one tree. In recent times though it has become a generic term for all Phoenix Mountain oolongs. True dancongs are still produced, but are not common outside China.
Tea cultivation in Taiwan began in the 18th century. Since then, many of the teas which are grown in Fujian province have also been grown in Taiwan. Since the 1970s, the tea industry in Taiwan has expanded at a rapid rate, in line with the rest of the economy. Due to high domestic demand and a strong tea culture, most Taiwanese tea is bought and consumed in Taiwan.
As the weather in Taiwan is highly variable, tea quality may differ from season to season. Although the island is not particularly large, it is geographically varied, with high, steep mountains rising abruptly from low-lying coastal plains. The different weather patterns, temperatures, altitudes, and soil ultimately result in differences in appearance, aroma, and flavour of the tea grown in Taiwan. In some mountainous areas, teas have been cultivated at ever higher elevations to produce a unique sweet taste that fetches a premium price.
Recommended brewing techniques for oolong tea vary widely. One common method is to use a small steeping vessel, such as a gaiwan or Yixing clay teapot, with a higher than usual leaf to water ratio. Such vessels are used in the gongfu method of tea preparation, which involves multiple short steepings.
For a single infusion, steepings of 1 to 5 minutes are recommended, depending on personal preference. Recommended water temperature ranges from 180 to 205 °F (82 to 96 °C).
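As a quick check (a minimal worked conversion using only the standard Fahrenheit–Celsius relation; the 180 °F endpoint above is reconstructed from the stated 82 °C), the recommended range converts as:

$$F = \tfrac{9}{5}C + 32, \qquad \tfrac{9}{5}(82) + 32 = 179.6 \approx 180\ ^{\circ}\mathrm{F}, \qquad \tfrac{9}{5}(96) + 32 = 204.8 \approx 205\ ^{\circ}\mathrm{F}.$$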
Oolong contains caffeine, although the caffeine content in tea will vary based on terroir, when the leaf is plucked, and the production processes.
Some semi-fermented oolong teas contain acylated flavonoid tetraglycosides, named teaghrelins due to their ability to bind to ghrelin receptors. Teaghrelins were isolated from Chin-shin oolong tea and Shy‐jih‐chuen oolong tea and recently from other oolong tea varieties. | https://en.wikipedia.org/wiki?curid=22707 |
Okapi
The okapi (; "Okapia johnstoni"), also known as the forest giraffe, Congolese giraffe, or zebra giraffe, is an artiodactyl mammal native to the northeast of the Democratic Republic of the Congo in Central Africa. Although the okapi has striped markings reminiscent of zebras, it is most closely related to the giraffe. The okapi and the giraffe are the only living members of the family Giraffidae.
The okapi stands about 1.5 m tall at the shoulder and has a typical body length of around 2.5 m. Its weight ranges from 200 to 350 kg. It has a long neck, and large, flexible ears. Its coat is a chocolate to reddish brown, much in contrast with the white horizontal stripes and rings on the legs, and white ankles.
Male okapis have short, distinct horn-like protuberances on their heads called ossicones (which share similar features to the giraffe ossicones in terms of formation, structure and function), less than 15 cm in length. Females possess hair whorls, and ossicones are absent.
Okapis are primarily diurnal, but may be active for a few hours in darkness. They are essentially solitary, coming together only to breed. Okapis are herbivores, feeding on tree leaves and buds, grasses, ferns, fruits, and fungi. Rut in males and estrus in females does not depend on the season. In captivity, estrous cycles recur every 15 days. The gestational period is around 440 to 450 days long, following which usually a single calf is born. The juveniles are kept in hiding, and nursing takes place infrequently. Juveniles start taking solid food from three months, and weaning takes place at six months.
Okapis inhabit canopy forests at altitudes of 500–1,500 m. They are endemic to the tropical forests of the Democratic Republic of the Congo, where they occur across the central, northern, and eastern regions. The International Union for Conservation of Nature and Natural Resources classifies the okapi as endangered. Major threats include habitat loss due to logging and human settlement. Extensive hunting for bushmeat and skin and illegal mining have also led to a decline in populations. The Okapi Conservation Project was established in 1987 to protect okapi populations.
Although the okapi was unknown to the Western world until the 20th century, it may have been depicted since the early fifth century BCE on the façade of the Apadana at Persepolis, a gift from the Ethiopian procession to the Achaemenid kingdom.
For years, Europeans in Africa had heard of an animal that they came to call the African unicorn. The animal was brought to prominent European attention by speculation on its existence found in press reports covering Henry Morton Stanley's journeys in 1887. In his travelogue of exploring the Congo, Stanley mentioned a kind of donkey that the natives called the "atti", which scholars later identified as the okapi. Explorers may have seen the fleeting view of the striped backside as the animal fled through the bushes, leading to speculation that the okapi was some sort of rainforest zebra.
When the British special commissioner in Uganda, Sir Harry Johnston, discovered some Pygmy inhabitants of the Congo being abducted by a showman for exhibition, he rescued them and promised to return them to their homes. The Pygmies fed Johnston's curiosity about the animal mentioned in Stanley's book. Johnston was puzzled by the okapi tracks the natives showed him; while he had expected to be on the trail of some sort of forest-dwelling horse, the tracks were of a cloven-hoofed beast.
Though Johnston did not see an okapi himself, he did manage to obtain pieces of striped skin and eventually a skull. From this skull, the okapi was correctly classified as a relative of the giraffe; in 1901, the species was formally recognized as "Okapia johnstoni".
"Okapia johnstoni" was first described as "Equus johnstoni" by English zoologist Philip Lutley Sclater in 1901. The generic name "Okapia" derives either from the Mbuba name or the related Lese Karo name , while the specific name ("johnstoni") is in recognition of Johnston, who first acquired an okapi specimen for science from the Ituri Forest. Remains of a carcass were later sent to London by Johnston and became a media event in 1901.
In 1901, Sclater presented a painting of the okapi before the Zoological Society of London that depicted its physical features with some clarity. Much confusion arose regarding the taxonomical status of this newly discovered animal. Sir Harry Johnston himself called it a "Helladotherium", or a relative of other extinct giraffids. Based on the description of the okapi by Pygmies, who referred to it as a "horse", Sclater named the species "Equus johnstoni". Subsequently, zoologist Ray Lankester declared that the okapi represented an unknown genus of the Giraffidae, which he placed in its own genus, "Okapia", and assigned the name "Okapia johnstoni" to the species.
In 1902, Swiss zoologist Charles Immanuel Forsyth Major suggested the inclusion of "O. johnstoni" in the extinct giraffid subfamily Palaeotraginae. However, the species was placed in its own subfamily Okapiinae, by Swedish palaeontologist Birger Bohlin in 1926, mainly due to the lack of a cingulum, a major feature of the palaeotragids. In 1986, "Okapia" was finally established as a sister genus of "Giraffa" on the basis of cladistic analysis. The two genera together with "Palaeotragus" constitute the tribe Giraffini.
The earliest members of the Giraffidae first appeared in the early Miocene in Africa, having diverged from the superficially deer-like climacoceratids. Giraffids spread into Europe and Asia by the middle Miocene in a first radiation. Another radiation began in the Pliocene, but was terminated by a decline in diversity in the Pleistocene. Several important primitive giraffids existed more or less contemporaneously in the Miocene (23–10 million years ago), including "Canthumeryx", "Giraffokeryx", "Palaeotragus", and "Samotherium". According to palaeontologist and author Kathleen Hunt, "Samotherium" split into "Okapia" (18 million years ago) and "Giraffa" (12 million years ago). However, J. D. Skinner argued that "Canthumeryx" gave rise to the okapi and giraffe through the latter three genera and that the okapi is the extant form of "Palaeotragus". The okapi is sometimes referred to as a living fossil, as it has existed as a species over a long geological time period, and morphologically resembles more primitive forms (e.g. "Samotherium").
In 2016, a genetic study found that the common ancestor of giraffe and okapi lived about 11.5 million years ago.
The okapi is a medium-sized giraffid, standing about 1.5 m tall at the shoulder. Its average body length is about 2.5 m and its weight ranges from 200 to 350 kg. It has a long neck, and large and flexible ears. The coat is a chocolate to reddish brown, much in contrast with the white horizontal stripes and rings on the legs and white ankles. The striking stripes make it resemble a zebra. These features serve as an effective camouflage amidst dense vegetation. The face, throat, and chest are greyish white. Interdigital glands are present on all four feet, and are slightly larger on the front feet. Male okapis have short, hair-covered horns called ossicones, less than 15 cm in length. The okapi exhibits sexual dimorphism, with females taller on average, slightly redder, and lacking prominent ossicones, instead possessing hair whorls.
The okapi shows several adaptations to its tropical habitat. The large number of rod cells in the retina facilitate night vision, and an efficient olfactory system is present. The large auditory bullae allow a strong sense of hearing. The dental formula of the okapi is 0.0.3.3/3.1.3.3, matching the typical ruminant pattern. Teeth are low-crowned and finely cusped, and efficiently cut tender foliage. The large caecum and colon help in microbial digestion, and a quick rate of food passage allows for lower cell wall digestion than in other ruminants.
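To unpack that notation: a dental formula gives incisors.canines.premolars.molars for one side of the upper and lower jaws. Assuming the standard ruminant pattern supplied above (the source text elided the exact figures), the total count works out to:

$$2 \times \big[(0+0+3+3) + (3+1+3+3)\big] = 2 \times (6+10) = 32 \text{ teeth}.$$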
The okapi can be easily distinguished from its nearest extant relative, the giraffe. It is much smaller and shares more external similarities with the deer and bovids than with the giraffe. While both sexes possess horns in the giraffe, only males bear horns in the okapi. The okapi has large palatine sinuses, unique among the giraffids. Morphological similarities shared between the giraffe and the okapi include a similar gait – both use a pacing gait, stepping simultaneously with the front and the hind leg on the same side of the body, unlike other ungulates that walk by moving alternate legs on either side of the body – and a long, black tongue (longer in the okapi) useful in plucking buds and leaves, as well as for grooming.
Okapis are primarily diurnal, but may be active for a few hours in darkness. They are essentially solitary, coming together only to breed. They have overlapping home ranges and typically occur at densities around 0.6 animals per square kilometre, with male home ranges considerably larger than those of females. Males migrate continuously, while females are sedentary. Males often mark territories and bushes with their urine, while females use common defecation sites. Grooming is a common practice, focused at the earlobes and the neck. Okapis often rub their necks against trees, leaving a brown exudate.
The male is protective of his territory, but allows females to pass through the domain to forage. Males visit female home ranges at breeding time. Although generally tranquil, the okapi can kick and butt with its head to show aggression. As the vocal cords are poorly developed, vocal communication is mainly restricted to three sounds — "chuff" (contact calls used by both sexes), "moan" (by females during courtship) and "bleat" (by infants under stress). Individuals may engage in Flehmen response, a visual expression in which the animal curls back its upper lips, displays the teeth, and inhales through the mouth for a few seconds. The leopard is the main natural predator of the okapi.
Okapis are herbivores, feeding on tree leaves and buds, grasses, ferns, fruits, and fungi. They are unique in the Ituri Forest as they are the only known mammal that feeds solely on understory vegetation, where they use their 18-inch tongues to selectively browse for suitable plants. The tongue is also used to groom their ears and eyes. They prefer to feed in treefall gaps. The okapi has been known to feed on over 100 species of plants, some of which are known to be poisonous to humans and other animals. Fecal analysis shows that none of those 100 species dominates the diet of the okapi. Staple foods comprise shrubs and lianas. The main constituents of the diet are woody, dicotyledonous species; monocotyledonous plants are not eaten regularly. In the Ituri forest, the okapi feeds mainly upon the plant families Acanthaceae, Ebenaceae, Euphorbiaceae, Flacourtiaceae, Loganiaceae, Rubiaceae, and Violaceae.
Female okapis become sexually mature at about one-and-a-half years old, while males reach maturity after two years. Rut in males and estrus in females does not depend on the season. In captivity, estrous cycles recur every 15 days. The male and the female begin courtship by circling, smelling, and licking each other. The male shows his interest by extending his neck, tossing his head, and protruding one leg forward. This is followed by mounting and copulation.
The gestational period is around 440 to 450 days long, following which usually a single calf is born, weighing 14–30 kg. The udder of the pregnant female starts swelling two months before parturition, and vulval discharges may occur. Parturition takes 3–4 hours, and the female stands throughout this period, though she may rest during brief intervals. The mother consumes the afterbirth and extensively grooms the infant. Her milk is very rich in proteins and low in fat.
As in other ruminants, the infant can stand within 30 minutes of birth. Although generally similar to adults, newborn calves have false eyelashes, a long dorsal mane, and long white hairs in the stripes. These features gradually disappear and give way to the general appearance within a year. The juveniles are kept in hiding, and nursing takes place infrequently. Calves are known not to defecate for the first month or two of life, which is hypothesized to help avoid predator detection in their most vulnerable phase of life. The growth rate of calves is appreciably high in the first few months after birth, after which it gradually declines. Juveniles start taking solid food from 3 months, and weaning takes place at 6 months. Horn development in males takes 1 year after birth. The okapi's typical lifespan is 20–30 years.
The okapi is endemic to the Democratic Republic of the Congo, where it occurs north and east of the Congo River. It ranges from the Maiko National Park northward to the Ituri rainforest, then through the river basins of the Rubi, Lake Tele, and Ebola to the west and the Ubangi River further north. Smaller populations exist west and south of the Congo River. It is also common in the Wamba and Epulu areas. It is extinct in Uganda.
The okapi inhabits canopy forests at altitudes of 500–1,500 m. It occasionally uses seasonally inundated areas, but does not occur in gallery forests, swamp forests, and habitats disturbed by human settlements. In the wet season, it visits rocky inselbergs that offer forage uncommon elsewhere. Results of research conducted in the late 1980s in a mixed "Cynometra" forest indicated that the okapi population density averaged 0.53 animals per square kilometre.
In 2008, it was recorded in Virunga National Park.
The IUCN classifies the okapi as endangered. It is fully protected under Congolese law. The Okapi Wildlife Reserve and Maiko National Park support significant populations of the okapi, though a steady decline in numbers has occurred due to several threats. Other areas of occurrence are the Rubi Tele Hunting Reserve and the Abumombanzi Reserve. Major threats include habitat loss due to logging and human settlement. Extensive hunting for bushmeat and skin and illegal mining have also led to population declines. A threat that has emerged quite recently is the presence of illegal armed groups around protected areas, inhibiting conservation and monitoring actions. A small population occurs north of the Virunga National Park, but is bereft of protection due to the presence of armed groups in the vicinity. In June 2012, a gang of poachers attacked the headquarters of the Okapi Wildlife Reserve, killing six guards and other staff as well as all 14 okapis at their breeding center.
The Okapi Conservation Project, established in 1987, works towards the conservation of the okapi, as well as the growth of the indigenous Mbuti people. In November 2011, the White Oak Conservation center and Jacksonville Zoo and Gardens hosted an international meeting of the Okapi Species Survival Plan and the Okapi European Endangered Species Programme at Jacksonville, which was attended by representatives from zoos from the US, Europe, and Japan. The aim was to discuss the management of captive okapis and arrange support for okapi conservation. Many zoos in North America and Europe currently have okapis in captivity.
Around 100 okapis are in accredited Association of Zoos and Aquariums (AZA) zoos. The okapi population is managed in America by the AZA's Species Survival Plan, a breeding program that works to ensure genetic diversity in the captive population of endangered animals, while the EEP (European studbook) and ISB (Global studbook) are managed by Antwerp Zoo, which was the first zoo to have an okapi on display (in 1919), as well as one of the most successful in breeding them.
The Bronx Zoo was the first zoo in North America to exhibit okapi, in 1937. They have had one of the most successful breeding programs, with 13 calves having been born since 1991.
The San Diego Zoo has exhibited okapis since 1956, and had their first birth of an okapi in 1962. Since then, over 60 births have occurred between the zoo and the San Diego Zoo Safari Park, with the most recent being Mosi, a male calf born in early August 2017 at the San Diego Zoo.
The Brookfield Zoo in Chicago has also greatly contributed to the captive population of okapis in accredited zoos. The zoo has had 28 okapi births since 1959.
Other North American zoos that exhibit and breed okapis include the Denver Zoo and Cheyenne Mountain Zoo (Colorado); Houston Zoo, Dallas Zoo and San Antonio Zoo (Texas); Disney's Animal Kingdom, Miami Zoo and Tampa's Lowry Park Zoo (Florida); Los Angeles Zoo (California); Saint Louis Zoo (Missouri); Cincinnati Zoo and Columbus Zoo (Ohio); Memphis Zoo (Tennessee); Maryland Zoo (Maryland); Sedgwick County Zoo and Tanganyika Wildlife Park (Kansas); Roosevelt Park Zoo (North Dakota); Omaha's Henry Doorly Zoo (Nebraska); and Philadelphia Zoo (Pennsylvania).
In Europe, zoos that exhibit and breed okapis include Madrid Zoo (Spain), Chester Zoo, London Zoo, Yorkshire Wildlife Park, Marwell Zoo, The Wild Place (United Kingdom); Dublin Zoo (Ireland); Berlin Zoo, Frankfurt Zoo, Wilhelma Zoo, Wuppertal Zoo, Cologne Zoo, Leipzig Zoo (Germany) and Antwerp Zoo (Belgium); Zoo Basel (Switzerland); Copenhagen Zoo (Denmark); Rotterdam Zoo, Safaripark Beekse Bergen (Netherlands) and Dvůr Králové Zoo (Czech Republic), Wrocław Zoo (Poland); Bioparc Zoo de Doué, ZooParc de Beauval (France); Lisbon Zoo (Portugal).
In Asia, only two zoos in Japan exhibit okapis: Ueno Zoo in Tokyo and Zoorasia in Yokohama. | https://en.wikipedia.org/wiki?curid=22709 |
Ovary
The ovary is an organ found in the female reproductive system that produces an ovum. When released, the ovum travels down the fallopian tube into the uterus, where it may become fertilized by a sperm. An ovary is found on each side of the body. The ovaries also secrete hormones that play a role in the menstrual cycle and fertility. The ovary progresses through many stages, beginning in the prenatal period and continuing through menopause. It is also an endocrine gland because of the various hormones that it secretes.
The ovaries are considered the female gonads. Each ovary is whitish in color and located alongside the lateral wall of the uterus in a region called the ovarian fossa. The ovarian fossa is bounded by the external iliac artery and lies in front of the ureter and the internal iliac artery. This area is about 4 cm x 3 cm x 2 cm in size.
The ovaries are surrounded by a capsule, and have an outer cortex and an inner medulla. The capsule is of dense connective tissue and is known as the tunica albuginea.
Usually, one of the two ovaries releases an egg during each menstrual cycle.
The side of the ovary closest to the fallopian tube is connected to it by the infundibulopelvic ligament, and the other side points downwards, attached to the uterus via the ovarian ligament.
Other structures and tissues of the ovaries include the hilum.
The ovaries lie within the peritoneal cavity, on either side of the uterus, to which they are attached via a fibrous cord called the ovarian ligament. The ovaries are uncovered in the peritoneal cavity but are tethered to the body wall via the suspensory ligament of the ovary which is a posterior extension of the broad ligament of the uterus. The part of the broad ligament of the uterus that covers the ovary is known as the mesovarium.
The ovarian pedicle is made up of part of the fallopian tube, the mesovarium, the ovarian ligament, and the ovarian blood vessels.
The surface of the ovaries is covered with membrane consisting of a lining of simple cuboidal-to-columnar shaped mesothelium, called the germinal epithelium.
The outer layer is the ovarian cortex, consisting of ovarian follicles and stroma in between them. Included in the follicles are the cumulus oophorus, membrana granulosa (and the granulosa cells inside it), corona radiata, zona pellucida, and primary oocyte. Theca of follicle, antrum and liquor folliculi are also contained in the follicle. Also in the cortex is the corpus luteum derived from the follicles. The innermost layer is the ovarian medulla. It can be hard to distinguish between the cortex and medulla, but follicles are usually not found in the medulla.
Follicular cells are flat epithelial cells that originate from the surface epithelium covering the ovary. They are surrounded by granulosa cells, which have changed from flat to cuboidal and proliferated to produce a stratified epithelium.
The ovary also contains blood vessels and lymphatics.
At puberty, the ovary begins to secrete increasing levels of hormones, and secondary sex characteristics begin to develop in response. The ovary changes structure and function beginning at puberty. Since the ovaries are able to regulate hormones, they also play an important role in pregnancy and fertility. When egg cells (oocytes) are released from the ovary into the fallopian tube, a variety of feedback mechanisms stimulate the endocrine system, causing hormone levels to change. These feedback mechanisms are controlled by the hypothalamus and pituitary gland: messages from the hypothalamus are sent to the pituitary gland, which in turn releases hormones to the ovaries. From this signaling, the ovaries release their own hormones.
The ovaries are the site of production and periodical release of egg cells, the female gametes. In the ovaries, the developing egg cells (or oocytes) mature in the fluid-filled follicles. Typically, only one oocyte develops at a time, but others can also mature simultaneously. Follicles are composed of different types and number of cells according to the stage of their maturation, and their size is indicative of the stage of oocyte development.
When the oocyte finishes its maturation in the ovary, a surge of luteinizing hormone secreted by the pituitary gland stimulates the release of the oocyte through the rupture of the follicle, a process called ovulation. The follicle remains functional and reorganizes into a corpus luteum, which secretes progesterone in order to prepare the uterus for an eventual implantation of the embryo.
At maturity, ovaries secrete estrogen, androgen, inhibin, and progestogen. In women, fifty percent of testosterone is produced by the ovaries and adrenal glands and released directly into the blood stream. Estrogen is responsible for the appearance of secondary sex characteristics for females at puberty and for the maturation and maintenance of the reproductive organs in their mature functional state. Progesterone prepares the uterus for pregnancy, and the mammary glands for lactation. Progesterone functions with estrogen by promoting menstrual cycle changes in the endometrium.
As women age, they experience a decline in reproductive performance leading to menopause. This decline is tied to a decline in the number of ovarian follicles. Although about 1 million oocytes are present at birth in the human ovary, only about 500 (about 0.05%) of these ovulate, and the rest are wasted. The decline in ovarian reserve appears to occur at a constantly increasing rate with age, and leads to nearly complete exhaustion of the reserve by about age 52. As ovarian reserve and fertility decline with age, there is a parallel increase in pregnancy failure and in meiotic errors resulting in chromosomally abnormal conceptions. Ovarian reserve and fertility perform optimally around 20–30 years of age. Around 45 years of age, the menstrual cycle begins to change and the follicle pool decreases significantly. The events that lead to ovarian aging remain unclear; the variability of aging could involve environmental factors, lifestyle habits or genetic factors.
Women with an inherited mutation in the DNA repair gene BRCA1 undergo menopause prematurely, suggesting that naturally occurring DNA damages in oocytes are repaired less efficiently in these women, and this inefficiency leads to early reproductive failure. The BRCA1 protein plays a key role in a type of DNA repair termed homologous recombinational repair that is the only known cellular process that can accurately repair DNA double-strand breaks. Titus et al. showed that DNA double-strand breaks accumulate with age in humans and mice in primordial follicles. Primordial follicles contain oocytes that are at an intermediate (prophase I) stage of meiosis. Meiosis is the general process in eukaryotic organisms by which germ cells are formed, and it is likely an adaptation for removing DNA damages, especially double-strand breaks, from germ line DNA (see Meiosis and Origin and function of meiosis). Homologous recombinational repair is especially promoted during meiosis. Titus et al. also found that expression of 4 key genes necessary for homologous recombinational repair of DNA double-strand breaks (BRCA1, MRE11, RAD51 and ATM) decline with age in the oocytes of humans and mice. They hypothesized that DNA double-strand break repair is vital for the maintenance of oocyte reserve and that a decline in efficiency of repair with age plays a key role in ovarian aging.
A variety of testing methods can be used to assess fertility based on maternal age. Many of these tests measure levels of the hormones FSH and GnRH. Methods such as measuring anti-Müllerian hormone (AMH) levels and the antral follicle count (AFC) can predict ovarian aging. AMH levels serve as an indicator of ovarian aging, since the quality of the ovarian follicles can be determined from them.
Ovarian diseases can be classified as endocrine disorders or as disorders of the reproductive system.
If the egg fails to release from the follicle in the ovary, an ovarian cyst may form. Small ovarian cysts are common in healthy women. Some women have more follicles than usual (polycystic ovary syndrome), which inhibits the follicles from growing normally, causing cycle irregularities.
Cryopreservation of ovarian tissue, often called "ovarian tissue cryopreservation", is of interest to women who want to preserve their reproductive function beyond the natural limit, or whose reproductive potential is threatened by cancer therapy, for example in hematologic malignancies or breast cancer. The procedure is to take a part of the ovary and carry out slow freezing before storing it in liquid nitrogen while therapy is undertaken. The tissue can then be thawed and implanted near the fallopian tube, either orthotopically (at the natural location) or heterotopically (on the abdominal wall), where it starts to produce new eggs, allowing normal conception to take place. A study of 60 procedures concluded that ovarian tissue harvesting appears to be safe. The ovarian tissue may also be transplanted into immunocompromised (SCID) mice to avoid graft rejection, with the tissue harvested later once mature follicles have developed.
In former centuries, medical authors, for example Galen, referred to a woman's ovaries as "female testes".
Birds have only one functional ovary (the left), while the other remains vestigial. Ovaries in females are analogous to testes in males, in that they are both gonads and endocrine glands. Ovaries of some kind are found in the female reproductive system of many animals that employ sexual reproduction, including invertebrates. However, they develop in a very different way in most invertebrates than they do in vertebrates, and are not truly homologous.
Many of the features found in human ovaries are common to all vertebrates, including the presence of follicular cells, tunica albuginea, and so on. However, many species produce a far greater number of eggs during their lifetime than do humans, so that, in fish and amphibians, there may be hundreds, or even millions of fertile eggs present in the ovary at any given time. In these species, fresh eggs may be developing from the germinal epithelium throughout life. Corpora lutea are found only in mammals, and in some elasmobranch fish; in other species, the remnants of the follicle are quickly resorbed by the ovary. In birds, reptiles, and monotremes, the egg is relatively large, filling the follicle, and distorting the shape of the ovary at maturity.
Amphibians and reptiles have no ovarian medulla; the central part of the ovary is a hollow, lymph-filled space.
The ovary of teleosts is also often hollow, but in this case, the eggs are shed into the cavity, which opens into the oviduct. Certain nematodes of the genus "Philometra" are parasitic in the ovary of marine fishes and can be spectacular, with females as long as 40 cm coiled in the ovary of a fish half this length. Although most normal female vertebrates have two ovaries, this is not the case in all species. In most birds and in platypuses, the right ovary never matures, so that only the left is functional. (Exceptions include the kiwi and some, but not all, raptors, in which both ovaries persist.) In some elasmobranchs, only the right ovary develops fully. In primitive jawless fish, and in some teleosts, there is only one ovary, formed by the fusion of the paired organs in the embryo. | https://en.wikipedia.org/wiki?curid=22710 |
Thomas Henry Huxley
Thomas Henry Huxley (4 May 1825 – 29 June 1895) was an English biologist and anthropologist specialising in comparative anatomy. He is known as "Darwin's Bulldog" for his advocacy of Charles Darwin's theory of evolution.
The stories regarding Huxley's famous debate in 1860 with Samuel Wilberforce were a key moment in the wider acceptance of evolution and in his own career, although historians think that the surviving story of the debate is a later fabrication. Huxley had been planning to leave Oxford on the previous day, but, after an encounter with Robert Chambers, the author of "Vestiges", he changed his mind and decided to join the debate. Wilberforce was coached by Richard Owen, against whom Huxley also debated about whether humans were closely related to apes.
Huxley was slow to accept some of Darwin's ideas, such as gradualism, and was undecided about natural selection, but despite this he was wholehearted in his public support of Darwin. Instrumental in developing scientific education in Britain, he fought against the more extreme versions of religious tradition.
Originally coining the term in 1869, Huxley elaborated on "agnosticism" in 1889 to frame the nature of claims in terms of what is knowable and what is not. Huxley states: "Agnosticism, in fact, is not a creed, but a method, the essence of which lies in the rigorous application of a single principle... the fundamental axiom of modern science... In matters of the intellect, follow your reason as far as it will take you, without regard to any other consideration... In matters of the intellect, do not pretend that conclusions are certain which are not demonstrated or demonstrable." Use of that term has continued to the present day (see Thomas Henry Huxley and agnosticism). Much of Huxley's agnosticism is influenced by Kantian views on human perception and the ability to rely on rational evidence rather than belief systems.
Huxley had little formal schooling and was virtually self-taught. He became perhaps the finest comparative anatomist of the later 19th century. He worked on invertebrates, clarifying relationships between groups previously little understood. Later, he worked on vertebrates, especially on the relationship between apes and humans. After comparing "Archaeopteryx" with "Compsognathus", he concluded that birds evolved from small carnivorous dinosaurs, a theory widely accepted today.
The tendency has been for this fine anatomical work to be overshadowed by his energetic and controversial activity in favour of evolution, and by his extensive public work on scientific education, both of which had significant effects on society in Britain and elsewhere. Huxley's 1893 Romanes Lecture, "Evolution and Ethics", was exceedingly influential in China; the Chinese translation of Huxley's lecture even influenced the Chinese translation of Darwin's "Origin of Species".
Thomas Henry Huxley was born in Ealing, which was then a village in Middlesex. He was the second youngest of eight children of George Huxley and Rachel Withers. Like some other British scientists of the nineteenth century such as Alfred Russel Wallace, Huxley was brought up in a literate middle-class family which had fallen on hard times. His father was a mathematics teacher at Ealing School until it closed, putting the family into financial difficulties. As a result, Thomas left school at the age of 10, after only two years of formal schooling.
Despite this unenviable start, Huxley was determined to educate himself, and he became one of the great autodidacts of the nineteenth century. At first he read Thomas Carlyle, James Hutton's "Geology", and Hamilton's "Logic". In his teens he taught himself German, eventually becoming so fluent that Charles Darwin used him as a translator of scientific material in German. He learned Latin, and enough Greek to read Aristotle in the original.
Later on, as a young adult, he made himself an expert, first on invertebrates, and later on vertebrates, all self-taught. He was skilled in drawing and did many of the illustrations for his publications on marine invertebrates. In his later debates and writing on science and religion, his grasp of theology was better than that of many of his clerical opponents. Huxley, a boy who left school at ten, became one of the most knowledgeable men in Britain.
He was apprenticed for short periods to several medical practitioners: at 13 to his brother-in-law John Cooke in Coventry, who passed him on to Thomas Chandler, notable for his experiments using mesmerism for medical purposes. Chandler's practice was in London's Rotherhithe amidst the squalor endured by the Dickensian poor. Here Thomas would have seen poverty, crime and rampant disease at its worst. Next, another brother-in-law took him on: John Salt, his eldest sister's husband. Now 16, Huxley entered Sydenham College (behind University College Hospital), a cut-price anatomy school whose founder, Marshall Hall, discovered the reflex arc. All this time Huxley continued his programme of reading, which more than made up for his lack of formal schooling.
A year later, buoyed by excellent results and a silver medal prize in the Apothecaries' yearly competition, Huxley was admitted to study at Charing Cross Hospital, where he obtained a small scholarship. At Charing Cross, he was taught by Thomas Wharton Jones, Professor of Ophthalmic Medicine and Surgery at University College London. Jones had been Robert Knox's assistant when Knox bought cadavers from Burke and Hare. The young Wharton Jones, who acted as go-between, was exonerated of crime, but thought it best to leave Scotland. He was a fine teacher, up-to-date in physiology and also an ophthalmic surgeon. In 1845, under Wharton Jones' guidance, Huxley published his first scientific paper demonstrating the existence of a hitherto unrecognised layer in the inner sheath of hairs, a layer that has been known since as Huxley's layer. No doubt remembering this, and of course knowing his merit, later in life Huxley organised a pension for his old tutor.
At twenty he passed his First M.B. examination at the University of London, winning the gold medal for anatomy and physiology. However, he did not present himself for the final (Second M.B.) exams and consequently did not qualify with a university degree. His apprenticeships and exam results formed a sufficient basis for his application to the Royal Navy.
Aged 20, Huxley was too young to apply to the Royal College of Surgeons for a licence to practise, yet he was 'deep in debt'. So, at a friend's suggestion, he applied for an appointment in the Royal Navy. He had references on character and certificates showing the time spent on his apprenticeship and on requirements such as dissection and pharmacy. Sir William Burnett, the Physician General of the Navy, interviewed him and arranged for the College of Surgeons to test his competence (by means of a "viva voce").
Finally Huxley was made Assistant Surgeon ('surgeon's mate', but in practice marine naturalist) to HMS "Rattlesnake", about to set sail on a voyage of discovery and surveying to New Guinea and Australia. The "Rattlesnake" left England on 3 December 1846 and, once they had arrived in the southern hemisphere, Huxley devoted his time to the study of marine invertebrates. He began to send details of his discoveries back to England, where publication was arranged by Edward Forbes FRS (who had also been a pupil of Knox). Both before and after the voyage Forbes was something of a mentor to Huxley.
Huxley's paper "On the anatomy and the affinities of the family of Medusae" was published in 1849 by the Royal Society in its "Philosophical Transactions". Huxley united the Hydroid and Sertularian polyps with the Medusae to form a class to which he subsequently gave the name of "Hydrozoa". The connection he made was that all the members of the class consisted of two cell layers, enclosing a central cavity or stomach. This is characteristic of the phylum now called the "Cnidaria". He compared this feature to the serous and mucous structures of embryos of higher animals. When at last he got a grant from the Royal Society for the printing of plates, Huxley was able to summarise this work in "The Oceanic Hydrozoa", published by the Ray Society in 1859.
The value of Huxley's work was recognised and, on returning to England in 1850, he was elected a Fellow of the Royal Society. In the following year, at the age of twenty-six, he not only received the Royal Society Medal but was also elected to the Council. He met Joseph Dalton Hooker and John Tyndall, who remained his lifelong friends. The Admiralty retained him as a nominal assistant-surgeon, so he might work on the specimens he collected and the observations he made during the voyage of the "Rattlesnake". He solved the problem of "Appendicularia", whose place in the animal kingdom Johannes Peter Müller had found himself wholly unable to assign. It and the Ascidians are both, as Huxley showed, tunicates, today regarded as a sister group to the vertebrates in the phylum "Chordata". Other papers on the morphology of the cephalopods and on brachiopods and rotifers are also noteworthy. The "Rattlesnake"'s official naturalist, John MacGillivray, did some work on botany, and proved surprisingly good at notating Australian aboriginal languages. He wrote up the voyage in the standard Victorian two volume format.
Huxley effectively resigned from the navy (by refusing to return to active service) and, in July 1854, he became Professor of Natural History at the Royal School of Mines and naturalist to the British Geological Survey in the following year. In addition, he was Fullerian Professor at the Royal Institution 1855–58 and 1865–67; Hunterian Professor at the Royal College of Surgeons 1863–69; President of the British Association for the Advancement of Science 1869–1870; President of the Quekett Microscopical Club 1878; President of the Royal Society 1883–85; Inspector of Fisheries 1881–85; and President of the Marine Biological Association 1884–1890.
The thirty-one years during which Huxley occupied the chair of natural history at the Royal School of Mines included work on vertebrate palaeontology and on many projects to advance the place of science in British life. Huxley retired in 1885, after a bout of depressive illness which started in 1884. He resigned the presidency of the Royal Society in mid-term, the Inspectorship of Fisheries, and his chair (as soon as he decently could) and took six months' leave. His pension was a fairly handsome £1200 a year.
In 1890, he moved from London to Eastbourne where he edited the nine volumes of his "Collected Essays". In 1894 he heard of Eugene Dubois' discovery in Java of the remains of "Pithecanthropus erectus" (now known as "Homo erectus"). Finally, in 1895, he died of a heart attack (after contracting influenza and pneumonia), and was buried in North London at St Marylebone. This small family plot had been purchased upon the death of his beloved eldest son Noel, who died of scarlet fever in 1860; Huxley's wife Henrietta Anne née Heathorn and son Noel are also buried there. No invitations were sent out, but two hundred people turned up for the ceremony; they included Joseph Dalton Hooker, William Henry Flower, Mulford B. Foster, Edwin Lankester, Joseph Lister and, apparently, Henry James.
Huxley and his wife had five daughters and three sons.
From 1870 onwards, Huxley was to some extent drawn away from scientific research by the claims of public duty. He served on eight Royal Commissions, from 1862 to 1884. From 1871 to 1880 he was a Secretary of the Royal Society and from 1883 to 1885 he was president. He was president of the Geological Society from 1868 to 1870. In 1870, he was president of the British Association at Liverpool and, in the same year was elected a member of the newly constituted London School Board. He was president of the Quekett Microscopical Club from 1877 to 1879. He was the leading person amongst those who reformed the Royal Society, persuaded government about science, and established scientific education in British schools and universities. Before him, science was mostly a gentleman's occupation; after him, science was a profession.
He was awarded the highest honours then open to British men of science. The Royal Society, who had elected him as Fellow when he was 25 (1851), awarded him the Royal Medal the next year (1852), a year before Charles Darwin got the same award. He was the youngest biologist to receive such recognition. Then later in life came the Copley Medal in 1888 and the Darwin Medal in 1894; the Geological Society awarded him the Wollaston Medal in 1876; the Linnean Society awarded him the Linnean Medal in 1890. There were many other elections and appointments to eminent scientific bodies; these and his many academic awards are listed in the "Life and Letters". He turned down many other appointments, notably the Linacre chair in zoology at Oxford and the Mastership of University College, Oxford.
In 1873 the King of Sweden made Huxley, Hooker and Tyndall Knights of the Order of the Polar Star: they could wear the insignia but not use the title in Britain. Huxley collected many honorary memberships of foreign societies, academic awards and honorary doctorates from Britain and Germany. He also became foreign member of the Royal Netherlands Academy of Arts and Sciences in 1892.
As recognition of his many public services he was given a pension by the state, and was appointed Privy Councillor in 1892.
Despite his many achievements, he was given no award by the British state until late in life. In this he did better than Darwin, who received no award of any kind from the state (Darwin's proposed knighthood was vetoed by ecclesiastical advisers, including Wilberforce). Perhaps Huxley had commented too often on his dislike of honours, or perhaps his many assaults on the traditional beliefs of organised religion made enemies in the establishment; he had vigorous debates in print with Benjamin Disraeli, William Ewart Gladstone and Arthur Balfour, and his relationship with Lord Salisbury was less than tranquil.
Huxley was for about thirty years evolution's most effective advocate, and for some Huxley was ""the" premier advocate of science in the nineteenth century [for] the whole English-speaking world".
Though he had many admirers and disciples, his retirement and later death left British zoology somewhat bereft of leadership. He had, directly or indirectly, guided the careers and appointments of the next generation, but none were of his stature. The loss of Francis Balfour in 1882, killed climbing the Alps just after he was appointed to a chair at Cambridge, was a tragedy. Huxley thought Balfour "the only man who can carry out my work"; the deaths of Balfour and W. K. Clifford were "the greatest losses to science in our time".
The first half of Huxley's career as a palaeontologist is marked by a rather strange predilection for 'persistent types', in which he seemed to argue that evolutionary advancement (in the sense of major new groups of animals and plants) was rare or absent in the Phanerozoic. In the same vein, he tended to push the origin of major groups such as birds and mammals back into the Palaeozoic era, and to claim that no order of plants had ever gone extinct.
Much paper has been consumed by historians of science ruminating on this strange and somewhat unclear idea. Huxley was wrong to pitch the loss of orders in the Phanerozoic as low as 7%, and he did not estimate the number of new orders which evolved. Persistent types sat rather uncomfortably next to Darwin's more fluid ideas; despite his intelligence, it took Huxley a surprisingly long time to appreciate some of the implications of evolution. However, gradually Huxley moved away from this conservative style of thinking as his understanding of palaeontology, and the discipline itself, developed.
Huxley's detailed anatomical work was, as always, first-rate and productive. His work on fossil fish shows his distinctive approach: whereas pre-Darwinian naturalists collected, identified and classified, Huxley worked mainly to reveal the evolutionary relationships between groups.
The lobe-finned fish (such as coelacanths and lungfish) have paired appendages whose internal skeleton is attached to the shoulder or pelvis by a single bone, the humerus or femur. His interest in these fish brought him close to the origin of tetrapods, one of the most important areas of vertebrate palaeontology.
The study of fossil reptiles led to his demonstrating the fundamental affinity of birds and reptiles, which he united under the title of "Sauropsida". His papers on "Archaeopteryx" and the origin of birds were of great interest then and still are.
Apart from his interest in persuading the world that man was a primate, and had descended from the same stock as the apes, Huxley did little work on mammals, with one exception. On his tour of America Huxley was shown the remarkable series of fossil horses, discovered by O. C. Marsh, in Yale's Peabody Museum. An Easterner, Marsh was America's first professor of palaeontology, but also one who had come west into hostile Indian territory in search of fossils, hunted buffalo, and met Red Cloud (in 1874). Funded by his uncle George Peabody, Marsh had made some remarkable discoveries: the huge Cretaceous aquatic bird "Hesperornis", and the dinosaur footprints along the Connecticut River were worth the trip by themselves, but the horse fossils were really special. After a week with Marsh and his fossils, Huxley wrote excitedly, "The collection of fossils is the most wonderful thing I ever saw."
The collection at that time went from the small four-toed forest-dwelling "Orohippus" from the Eocene through three-toed species such as "Miohippus" to species more like the modern horse. By looking at their teeth he could see that, as the size grew larger and the toes reduced, the teeth changed from those of a browser to those of a grazer. All such changes could be explained by a general alteration in habitat from forest to grassland. And, it is now known, that is what did happen over large areas of North America from the Eocene to the Pleistocene: the ultimate causative agent was global temperature reduction (see Paleocene–Eocene Thermal Maximum). The modern account of the evolution of the horse has many other members, and the overall appearance of the tree of descent is more like a bush than a straight line.
The horse series also strongly suggested that the process was gradual, and that the origin of the modern horse lay in North America, not in Eurasia. If so, then something must have happened to horses in North America, since none were there when Europeans arrived. The experience with Marsh was enough for Huxley to give credence to Darwin's gradualism, and to introduce the story of the horse into his lecture series.
Marsh's and Huxley's conclusions were initially quite different. However, Marsh carefully showed Huxley his complete sequence of fossils. As Marsh put it, Huxley "then informed me that all this was new to him and that my facts demonstrated the evolution of the horse beyond question, and for the first time indicated the direct line of descent of an existing animal. With the generosity of true greatness, he gave up his own opinions in the face of new truth, and took my conclusions as the basis of his famous New York lecture on the horse."
Huxley was originally not persuaded of "development theory", as evolution was once called. This can be seen in his savage review of Robert Chambers' "Vestiges of the Natural History of Creation", a book which contained some quite pertinent arguments in favour of evolution. Huxley had also rejected Lamarck's theory of transmutation, on the basis that there was insufficient evidence to support it. All this scepticism was brought together in a lecture to the Royal Institution, which made Darwin anxious enough to set about an effort to change young Huxley's mind. It was the kind of thing Darwin did with his closest scientific friends, but he must have had some particular intuition about Huxley, who was from all accounts a most impressive person even as a young man.
Huxley was therefore one of the small group who knew about Darwin's ideas before they were published (the group included Joseph Dalton Hooker and Charles Lyell). The first publication by Darwin of his ideas came when Wallace sent Darwin his famous paper on natural selection, which was presented by Lyell and Hooker to the Linnean Society in 1858 alongside excerpts from Darwin's notebook and a Darwin letter to Asa Gray. Huxley's famous response to the idea of natural selection was "How extremely stupid not to have thought of that!" However, he never conclusively made up his mind about whether natural selection was the main method for evolution, though he did admit it was a hypothesis which was a good working basis.
Logically speaking, the prior question was whether evolution had taken place at all. It is to this question that much of Darwin's "On the Origin of Species" was devoted. Its publication in 1859 completely convinced Huxley of evolution and it was this and no doubt his admiration of Darwin's way of amassing and using evidence that formed the basis of his support for Darwin in the debates that followed the book's publication.
Huxley's support started with his anonymous favourable review of the "Origin" in the "Times" for 26 December 1859, and continued with articles in several periodicals, and in a lecture at the Royal Institution in February 1860. At the same time, Richard Owen, whilst writing an extremely hostile anonymous review of the "Origin" in the "Edinburgh Review", also primed Samuel Wilberforce who wrote one in the "Quarterly Review", running to 17,000 words. The authorship of this latter review was not known for sure until Wilberforce's son wrote his biography. So it can be said that, just as Darwin groomed Huxley, so Owen groomed Wilberforce; and both the proxies fought public battles on behalf of their principals as much as themselves. Though we do not know the exact words of the Oxford debate, we do know what Huxley thought of the review in the "Quarterly":
Since Lord Brougham assailed Dr Young, the world has seen no such specimen of the insolence of a shallow pretender to a Master in Science as this remarkable production, in which one of the most exact of observers, most cautious of reasoners, and most candid of expositors, of this or any other age, is held up to scorn as a "flighty" person, who endeavours "to prop up his utterly rotten fabric of guess and speculation," and whose "mode of dealing with nature" is reprobated as "utterly dishonourable to Natural Science."
If I confine my retrospect of the reception of the "Origin of Species" to a twelvemonth, or thereabouts, from the time of its publication, I do not recollect anything quite so foolish and unmannerly as the "Quarterly Review" article...
Huxley said "I am Darwin's bulldog". While the second half of Darwin's life was lived mainly within his family, the younger combative Huxley operated mainly out in the world at large. A letter from Huxley to Ernst Haeckel (2 November 1871) states: "The dogs have been snapping at [Darwin's] heels too much of late." At Oxford and Cambridge Universities, "Bulldog" was and still is student slang for a university policeman, whose job was to corral errant students and maintain their moral rectitude.
Famously, Huxley responded to Wilberforce in the debate at the British Association meeting, on Saturday 30 June 1860 at the Oxford University Museum. Huxley's presence there had been encouraged on the previous evening when he met Robert Chambers, the Scottish publisher and author of "Vestiges", who was walking the streets of Oxford in a dispirited state and begged for assistance. The debate followed the presentation of a paper by John William Draper, and was chaired by Darwin's former botany tutor John Stevens Henslow. Darwin's theory was opposed by the Lord Bishop of Oxford, Samuel Wilberforce, and those supporting Darwin included Huxley and their mutual friends Hooker and Lubbock. The platform featured Brodie and Professor Beale, and Robert FitzRoy, who had been captain of HMS "Beagle" during Darwin's voyage, spoke against Darwin.
Wilberforce had a track record against evolution as far back as the previous British Association meeting at Oxford in 1847, when he attacked Chambers' "Vestiges". For the more challenging task of opposing the "Origin", and the implication that man descended from apes, he had been assiduously coached by Richard Owen, who stayed with him the night before the debate. On the day, Wilberforce repeated some of the arguments from his "Quarterly Review" article (written but not yet published), then ventured onto slippery ground. His famous jibe at Huxley (as to whether Huxley was descended from an ape on his mother's side or his father's side) was probably unplanned, and certainly unwise. Huxley's reply, to the effect that he would rather be descended from an ape than a man who misused his great talents to suppress debate (the exact wording is not certain), was widely recounted in pamphlets and a spoof play.
The letters of Alfred Newton include one to his brother giving an eyewitness account of the debate, and written less than a month afterwards. Other eyewitnesses, with one or two exceptions (Hooker especially thought "he" had made the best points), give similar accounts, at varying dates after the event. The general view was and still is that Huxley got much the better of the exchange though Wilberforce himself thought he had done quite well. In the absence of a verbatim report differing perceptions are difficult to judge fairly; Huxley wrote a detailed account for Darwin, a letter which does not survive; however, a letter to his friend Frederick Daniel Dyster does survive with an account just three months after the event. | https://en.wikipedia.org/wiki?curid=30038 |
Triumph of the Will
Triumph of the Will (German: "Triumph des Willens") is a 1935 Nazi propaganda film directed, produced, edited, and co-written by Leni Riefenstahl. It chronicles the 1934 Nazi Party Congress in Nuremberg, which was attended by more than 700,000 Nazi supporters. The film contains excerpts from speeches given by Nazi leaders at the Congress, including Adolf Hitler, Rudolf Hess and Julius Streicher, interspersed with footage of massed Sturmabteilung (SA) and Schutzstaffel (SS) troops and public reaction. Hitler commissioned the film and served as an unofficial executive producer; his name appears in the opening titles. The film's overriding theme is the return of Germany as a great power, with Hitler as the leader who will bring glory to the nation. Because the film was made after the Night of the Long Knives of 30 June 1934, many prominent SA members are absent; they had been murdered in that party purge, organised and orchestrated by Hitler to replace the SA with the SS as his main paramilitary force.
"Triumph of the Will" was released in 1935 and became a major example of film used as propaganda. Riefenstahl's techniques—such as moving cameras, aerial photography, the use of long-focus lenses to create a distorted perspective, and the revolutionary approach to the use of music and cinematography—have earned "Triumph of the Will" recognition as one of the greatest propaganda films in history. Riefenstahl helped to stage the scenes, directing and rehearsing some of them at least fifty times. Riefenstahl won several awards, not only in Germany but also in the United States, France, Sweden and other countries. The film was popular in the Third Reich, and has continued to influence films, documentaries and commercials to this day. In Germany, the film is not censored but the courts commonly classify it as Nazi propaganda which requires an educational context to public screenings.
An earlier film by Riefenstahl—"The Victory of Faith (Der Sieg des Glaubens)"—showed Hitler and SA leader Ernst Röhm together at the 1933 Nazi Party Congress. After Röhm's murder, the party attempted the destruction of all copies, leaving only one known to have survived in Britain. The direction and sequencing of images is almost the same as that Riefenstahl used in "Triumph of the Will" a year later.
Frank Capra's seven-film series "Why We Fight" is said to have been directly inspired by, and the United States' response to, "Triumph of the Will".
The film begins with a prologue, the only commentary in the film, consisting of text shown sequentially against a grey background.
Day 1: The film opens with shots of the clouds above the city, and then moves through the clouds to float above the assembling masses below, with the intention of portraying the beauty and majesty of the scene. The cruciform shadow of Hitler's plane is visible as it passes over the tiny figures marching below, accompanied by an orchestral arrangement of the "Horst-Wessel-Lied". Upon arriving at the Nuremberg airport, Hitler and other Nazi leaders emerge from his plane to thunderous applause and a cheering crowd. He is then driven into Nuremberg, through equally enthusiastic crowds, to his hotel, where a night rally is later held.
Day 2: The second day begins with images of Nuremberg at dawn, accompanied by an extract from the Act III Prelude ("Wach Auf!") of Richard Wagner's "Die Meistersinger von Nürnberg". Following this is a montage of the attendees preparing for the opening of the Reich Party Congress, and footage of the top Nazi officials arriving at the Luitpold Arena. The film then cuts to the opening ceremony, where Rudolf Hess announces the start of the Congress. The camera then introduces much of the Nazi hierarchy and covers their opening speeches, including Joseph Goebbels, Alfred Rosenberg, Hans Frank, Fritz Todt, Robert Ley and Julius Streicher. Then the film cuts to an outdoor rally for the "Reichsarbeitsdienst" (Labour Service), which is primarily a series of quasi-military drills by men carrying spades. This is also where Hitler gives his first speech, on the merits of the Labour Service, praising the men for their work in rebuilding Germany. The day then ends with a torchlight SA parade in which Viktor Lutze speaks to the crowds.
Day 3: The third day starts with a Hitler Youth rally on the parade ground. Again the camera covers the Nazi dignitaries arriving and the introduction of Hitler by Baldur von Schirach. Hitler then addresses the Youth, describing in militaristic terms how they must harden themselves and prepare for sacrifice. Everyone present, including General Werner von Blomberg, then assembles for a military pass and review, featuring Wehrmacht cavalry and various armored vehicles. That night Hitler delivers another speech to low-ranking party officials by torchlight, commemorating the first year since the Nazis took power and declaring that the party and state are one entity.
Day 4: The fourth day is the climax of the film, where the most memorable of the imagery is presented. Hitler, flanked by Heinrich Himmler and Viktor Lutze, walks through a long wide expanse with over 150,000 SA and SS troops standing at attention, to lay a wreath at a First World War memorial. Hitler then reviews the parading SA and SS men, following which Hitler and Lutze deliver speeches in which they discuss the Night of the Long Knives purge of the SA several months prior. Lutze reaffirms the SA's loyalty to the regime, and Hitler absolves the SA of any crimes committed by Ernst Röhm. New party flags are consecrated by letting them touch the "Blutfahne" (the same cloth flag said to have been carried by the fallen Nazis during the Beer Hall Putsch) and, following a final parade in front of the Nuremberg Frauenkirche, Hitler delivers his closing speech. In it he reaffirms the primacy of the Nazi Party in Germany, declaring, "All loyal Germans will become National Socialists. Only the best National Socialists are party comrades!" Hess then leads the assembled crowd in a final "Sieg Heil" salute for Hitler, marking the close of the party congress. The entire crowd sings the "Horst-Wessel-Lied" as the camera focuses on the giant Swastika banner, which fades into a line of silhouetted men in Nazi party uniforms, marching in formation as the lyrics "Comrades shot by the Red Front and the Reactionaries march in spirit together in our columns" are sung.
Riefenstahl, a popular German actress, had directed her first film, "Das blaue Licht" ("The Blue Light"), in 1932. Around the same time she first heard Hitler speak at a Nazi rally and, by her own admission, was impressed. She later began a correspondence with him that would last for years. Hitler, in turn, was equally impressed with "Das blaue Licht", and in 1933 asked her to direct a film about the Nazis' annual Nuremberg Rally. The Nazis had only recently taken power amid a period of political instability (Hitler was the fourth Chancellor of Germany in less than a year) and were considered an unknown quantity by many Germans, to say nothing of the world.
In "Mein Kampf", Hitler talks of the success of British propaganda in World War I, believing people's ignorance meant simple repetition and an appeal to feelings over reason would suffice. Hitler chose Riefenstahl as he wanted the film as "artistically satisfying" as possible to appeal to a non-political audience, but he also believed that propaganda must admit no element of doubt. As such, "Triumph of the Will" may be seen as a continuation of the unambiguous World War I-style propaganda, though heightened by the film's artistic or poetic nature.
Riefenstahl was initially reluctant to make any documentaries at all for Hitler. This was not because of any moral qualms, but because Riefenstahl had never made a documentary and did not feel that she truly understood the NSDAP. Hitler persisted and Riefenstahl eventually agreed to make a film at the 1933 Nuremberg Rally called "Der Sieg des Glaubens" ("Victory of Faith"). However the film had numerous technical problems, including a lack of preparation (Riefenstahl reported having just a few days) and Hitler's apparent unease at being filmed. To make matters worse, Riefenstahl had to deal with infighting by party officials, in particular Joseph Goebbels who tried to have the film released by the Propaganda Ministry. Though "Der Sieg des Glaubens" apparently did well at the box office, it later became a serious embarrassment to the Nazis after SA Leader Ernst Röhm, who had a prominent role in the film, was executed during the Night of the Long Knives. All references to Röhm were ordered to be erased from German history, which included the destruction of all copies of "Der Sieg des Glaubens". It was considered a lost film until a copy turned up in the 1980s in the German Democratic Republic's film archives.
In 1934, Riefenstahl had no wish to repeat the fiasco of "Der Sieg des Glaubens" and initially recommended fellow director Walter Ruttmann. Ruttmann's film, which would have covered the rise of the Nazi Party from 1923 to 1934 and been more overtly propagandistic (the opening text of "Triumph of the Will" was his), did not appeal to Hitler. He again asked Riefenstahl, who finally relented (there is still debate over how willing she was) after Hitler guaranteed his personal support and promised to keep other Nazi organizations, specifically the Propaganda Ministry, from meddling with her film.
The film follows a script similar to "Der Sieg des Glaubens", which is evident when one sees both films side by side: both include similar scenes of the city of Nuremberg, down to the shot of a cat in the driving sequence. Furthermore, Herbert Windt reused much of his musical score for that film in "Triumph des Willens", which he also scored. Riefenstahl shot "Triumph of the Will" on a budget of roughly 280,000 RM (approximately US$110,000 in 1934, or $1.54 million in 2015). Despite the modest budget, there were extensive preparations, facilitated by the cooperation of party members, the military, and vital help from high-ranking Nazis like Goebbels. As Susan Sontag observed, "The Rally was planned not only as a spectacular mass meeting, but as a spectacular propaganda film." Albert Speer, Hitler's personal architect, designed the set in Nuremberg and did most of the coordination for the event. Pits were dug in front of the speakers' platform so Riefenstahl could get the camera angles she wanted, and tracks were laid so that her cameramen could get traveling shots of the crowd. When rough cuts were not up to par, major party leaders and high-ranking public officials reenacted their speeches in a studio for her. Riefenstahl also used a film crew that was extravagant by the standards of the day: 172 people, including 10 technical staff, 36 cameramen and assistants (operating in 16 teams with 30 cameras), nine aerial photographers, 17 newsreel men, 12 newsreel crew, 17 lighting men, two photographers, 26 drivers, 37 security personnel, four labor service workers, and two office assistants. Many of her cameramen also dressed in SA uniforms so they could blend into the crowds.
Riefenstahl had the difficult task of condensing an estimated 61 hours of film into two hours. She labored to complete the film as fast as she could, going so far as to sleep in the editing room filled with hundreds of thousands of feet of film footage.
"Triumph of the Will" is sometimes seen as an example of Nazi political religion. The primary religion in Germany before the Second World War was Christianity. With the primary sects being Roman Catholic and Protestant, the Christian views in this movie are clearly meant to allow the movie to better connect with the intended audience.
Religion is a major theme in "Triumph of the Will". The film opens with Hitler descending god-like out of the skies past twin cathedral spires. It contains many scenes of church bells ringing and individuals in a state of near-religious fervor, as well as a prominent shot of Reich Protestant Bishop Ludwig Müller standing in his vestments among high-ranking Nazis. It is probably not a coincidence that the final parade of the film was held in front of the Nuremberg Frauenkirche. In his final speech in the film, Hitler directly compares the Nazi Party to a holy order, and the consecration of new party flags by having Hitler touch them to the "blood banner" has obvious religious overtones. Hitler himself is portrayed in a messianic manner, from the opening where he descends from the clouds in a plane, to his drive through Nuremberg where even a cat stops what it is doing to watch him, to the many scenes where the camera films from below and looks up at him as, standing on his podium, he issues commands to hundreds of thousands of followers who happily comply in unison. As Frank P. Tomasulo comments, "Hitler is cast as a veritable German Messiah who will save the nation, if only the citizenry will put its destiny in his hands."
Germany had not seen images of military power and strength since the end of World War I, and the huge formations of men would remind the audience that Germany was becoming a great power once again. Though the Labor Service men carried spades, they handled them as if they were rifles. The Eagles and Swastikas could be seen as a reference to the Roman Legions of antiquity. The large mass of well-drilled party members could be seen in a more ominous light, as a warning to dissidents thinking of challenging the regime.
Hitler's arrival in an airplane should also be viewed in this context. According to Kenneth Poferl, "Flying in an airplane was a luxury known only to a select few in the 1930s, but Hitler had made himself widely associated with the practice, having been the first politician to campaign via air travel. Victory reinforced this image and defined him as the top man in the movement, by showing him as the only one to arrive in a plane and receive an individual welcome from the crowd. Hitler's speech to the SA also contained an implied threat: if he could have Röhm, the commander of the hundreds of thousands of troops on the screen, shot, it was only logical to assume that Hitler could get away with having anyone executed."
It was very important to Adolf Hitler that his propaganda messages carry a unified theme; if a country is not unified in portraying the enemy as bad, the audience begins to doubt. Unity is seen throughout the film, even in the camps where the soldiers live. The camp outside Nuremberg is uniform and clean; the tents are aligned in perfect rows, each one the same as the next. The men there also make a point of not wearing their shirts, because their shirts display their rank and status; shirtless, they are all equals, unified. When they march, it is in unison, and they all carry their weapons identically.
Hitler's message to the workers also includes the notion of unity.
Children were also used to convey unity.
"Triumph of the Will" has many scenes that blur the distinction between the Nazi Party, the German state, and the German people. Germans in peasant farmers' costumes and other traditional clothing greet Hitler in some scenes. The torchlight processions, though now associated by many with the Nazis, would remind the viewer of the medieval Karneval celebration. The old flag of Imperial Germany is also shown several times flying alongside the Swastika, and there is a ceremony where Hitler pays his respects to soldiers who died in World War I (as well as to President Paul von Hindenburg, who had died a month before the convention). There is also a scene where the Labor Servicemen individually call out which town or area in Germany they are from, reminding the viewers that the Nazi Party had expanded from its stronghold in Bavaria to become a pan-German movement.
Among the themes presented, the desire for pride in Germany and the purification of the German people are well exemplified through the speeches and ideals of the Third Reich in "Triumph of the Will".
In every speech given and shown in "Triumph of the Will", pride is one of the major focuses. Hitler urges the people not to be satisfied with their current state, nor with the descent from power and greatness that Germany has endured since World War I. The German people should believe in themselves and in the movement occurring in Germany. Hitler promotes pride in Germany through its unification; unifying Germany would force the elimination of whatever does not meet the standards of the Nazi regime.
To unify Germany, Hitler believed purification would have to take place. This meant eliminating not only those citizens of Germany who were not of the Aryan race, but also the sick, the weak, the handicapped, and any other citizens deemed unhealthy or impure. In "Triumph of the Will", Hitler preaches to the people that Germany must look at itself and seek out that which does not belong: "[T]he elements that have become bad, and therefore do not belong with us!" Though in context he seems to be referring to the corrupt elements of the power structure, in hindsight it could seem to imply that the elimination of the "inferior" people of Germany would, in theory, return Germany to its once prideful and powerful former self. Julius Streicher stresses the importance of purification in his speech, a direct reference to his own virulent anti-Semitism. Hundreds of thousands of mentally ill and disabled people would later be murdered in Action T4, a programme run directly from Hitler's Chancellery ("Kanzlei des Führers").
Hitler preaches in his speeches that the people should believe in their country and in themselves; the German people are better than what they have become because of the impurities in society. Hitler wants them to believe in him and in what he intends to do for his people, and that what he is doing is for the country's and the people's benefit. In the last scene of "Triumph of the Will", Hess says, "Heil Hitler, hail victory, hail victory!" and everyone in attendance cheers in support. This verbal sign represents their faith in their leader and his most trusted advisors, and their belief in the Nazi cause. It directly follows Hitler's finale, "Long live the National Socialist Movement! Long live Germany!", at which the crowd erupts with cheering and pride in themselves and their political party.
In the closing speech of "Triumph of the Will", Hitler enters the room from the back, appearing to emerge from the people. After a one-sentence introduction, he tells his faithful Nazis how the German nation has subordinated itself to the Nazi Party because its leadership is drawn from the German people themselves. He promises that the new state the Nazis have created will endure for thousands of years, and says that the youth will carry on after the old have weakened. They close with a chant, "Hitler is the Party, Hitler." The camera focuses on the large Swastika above Hitler, and the film ends with the image of this Swastika superimposed on columns of marching Nazis. His speech brought attention to the rally and created huge turnouts in the following years; he attracted many people by the way he addressed both the issues and his audience, speaking as if delivering a sermon. In 1934, over a million Germans participated in the Nuremberg Rally.
"Triumph of the Will" premiered on 28 March 1935 at the Berlin Ufa Palace Theater and was an instant success. Within two months the film had earned 815,000 Reichsmark (equivalent to million euros), and Ufa considered it one of the three most profitable films of that year. Hitler praised the film as being an "incomparable glorification of the power and beauty of our Movement." For her efforts, Riefenstahl was rewarded with the German Film Prize ("Deutscher Filmpreis"), a gold medal at the 1935 Venice Biennale, and the Grand Prix at the 1937 World Exhibition in Paris. However, there were few claims that the film would result in a mass influx of "converts" to fascism and the Nazis apparently did not make a serious effort to promote the film outside of Germany. Film historian Richard Taylor also said that "Triumph of the Will" was not generally used for propaganda purposes inside the Third Reich. "The Independent" wrote in 2003: ""Triumph of the Will" seduced many wise men and women, persuaded them to admire rather than to despise, and undoubtedly won the Nazis friends and allies all over the world."
The reception in other countries was not always as enthusiastic. British documentarian Paul Rotha called it tedious, while others were repelled by its pro-Nazi sentiments. During World War II, Frank Capra helped to create a direct response in the film series "Why We Fight", commissioned by the United States government, which spliced in footage from "Triumph of the Will" but recontextualized it to promote the cause of the Allies instead. Capra later remarked that "Triumph of the Will" "fired no gun, dropped no bombs. But as a psychological weapon aimed at destroying the will to resist, it was just as lethal." Clips from "Triumph of the Will" were also used in an Allied propaganda short called "General Adolph Takes Over", set to the British dance tune "The Lambeth Walk". The legions of marching soldiers, as well as Hitler giving his Nazi salute, were made to look like wind-up dolls dancing to the music. The Danish resistance used to take over cinemas and force the projectionist to show "Swinging the Lambeth Walk" (as it was also known); Erik Barnouw has said: "The extraordinary risks were apparently felt justified by a moment of savage anti-Hitler ridicule." Also during World War II, the poet Dylan Thomas wrote a screenplay for and narrated "These Are The Men", a propaganda piece using "Triumph of the Will" footage to discredit Nazi leadership.
One of the best ways to gauge the response to "Triumph of the Will" was the instant and lasting international fame it gave Riefenstahl. "The Economist" said it "sealed her reputation as the greatest female filmmaker of the 20th century." For a director who made eight films, only two of which received significant coverage outside of Germany, Riefenstahl had unusually high name recognition for the remainder of her life, most of it stemming from "Triumph of the Will". However, her career was also permanently damaged by this association. After the war, Riefenstahl was imprisoned by the Allies for four years for allegedly being a Nazi sympathizer and was permanently blacklisted by the film industry. When she died in 2003—sixty-eight years after the film's premiere—her obituary received significant coverage in many major publications, including the Associated Press, "The Wall Street Journal", "The New York Times", and "The Guardian", most of which reaffirmed the importance of "Triumph of the Will".
Like American filmmaker D. W. Griffith's "The Birth of a Nation", "Triumph of the Will" has been criticized as a use of spectacular filmmaking to promote a profoundly unethical system. In her defense, Riefenstahl claimed that she was naïve about the Nazis when she made it and had no knowledge of Hitler's genocidal or anti-semitic policies. She also pointed out that "Triumph of the Will" contains "not one single anti-semitic word", although it does contain a veiled comment by Julius Streicher that "a people that does not protect its racial purity will perish".
However, Roger Ebert has observed that for some, "the very absence of anti-semitism in "Triumph of the Will" looks like a calculation; excluding the central motif of almost all of Hitler's public speeches must have been a deliberate decision to make the film more efficient as propaganda."
Riefenstahl also repeatedly defended herself against the charge that she was a Nazi propagandist, saying that "Triumph of the Will" focuses on images over ideas and should therefore be viewed as a "Gesamtkunstwerk" (holistic work of art), a position she returned to in 1964.
However, Riefenstahl was an active participant in the rally, though in later years she downplayed her influence significantly, claiming, "I just observed and tried to film it well. The idea that I helped to plan it is downright absurd." Ebert once wrote that "Triumph of the Will" is "by general consent [one] of the best documentaries ever made", but added that because it reflects the ideology of a movement regarded by many as evil, it poses "a classic question of the contest between art and morality: Is there such a thing as pure art, or does all art make a political statement?" When reviewing the film for his "Great Movies" collection, Ebert reversed his opinion, characterizing his earlier conclusion as "the received opinion that the film is great but evil" and calling it "a terrible film, paralyzingly dull, simpleminded, overlong and not even 'manipulative', because it is too clumsy to manipulate anyone but a true believer".
Susan Sontag considers "Triumph of the Will" the "most successful, most purely propagandistic film ever made, whose very conception negates the possibility of the filmmaker's having an aesthetic or visual conception independent of propaganda." Sontag points to Riefenstahl's involvement in the planning and design of the Nuremberg ceremonies as evidence that Riefenstahl was working as a propagandist, rather than as an artist in any sense of the word. With some 30 cameras and a crew of 150, the marches, parades, speeches, and processions were orchestrated like a movie set for Riefenstahl's film. Further, this was not the first political film made by Riefenstahl for the Third Reich (there was "Victory of Faith", 1933), nor was it the last ("Day of Freedom", 1935, and "Olympia", 1938). "Anyone who defends Riefenstahl's films as documentary", Sontag states, "if documentary is to be distinguished from propaganda, is being disingenuous. In "Triumph of the Will", the document (the image) is no longer simply the record of reality; 'reality' has been constructed to serve the image."
Brian Winston's essay on the film in "The Movies as History" is largely a critique of Sontag's analysis. Winston argues that any filmmaker could have made the film look impressive because the Nazis' "mise en scène" was impressive, particularly when they were offering it for camera re-stagings. In form, the film alternates repetitively between marches and speeches. Winston asks the viewers to consider if such a film should be seen as anything more than a pedestrian effort. Like Rotha, he finds the film tedious, and believes anyone who takes the time to analyze its structure will quickly agree.
The first controversy over "Triumph of the Will" occurred even before its release, when several generals in the Wehrmacht protested over the minimal army presence in the film. Only one scene—the review of the German cavalry—actually involved the German military. The other formations were party organizations that were not part of the military.
The opposition of the generals was not simply a matter of personal pique or vanity. As produced by Riefenstahl, "Triumph of the Will" posits Germany as a leaderless mass of lost souls without any organizing institutions or antecedent institutional leaders, and presents the "new order" embodied by the Nazi Party and Hitler as providing both a singular, saving leader and an institutional framework for the whole of the German nation.
However, the Army had been, and had seen itself as being, an institution that held shared responsibility for the leadership of the nation and state since at least the time of Frederick the Great. The leaders of that Army had also been viewed throughout the history of the German-speaking peoples as an integral part of the leadership cadre. By omitting the Army (along with other institutions, e.g., the nobility, the Church, academia, business), the film demonstrated that the Army, as well as its leaders, had "disappeared" from what the Army considered to be its shared leadership role in the state, National Socialist or otherwise. The Army's leaders vehemently disagreed with this implied assertion of the film.
Hitler proposed his own "artistic" compromise, in which "Triumph of the Will" would open with a camera slowly tracking down a row of all the "overlooked" generals (thereby placating each general's ego). According to her own testimony, Riefenstahl refused his suggestion and insisted on keeping artistic control over "Triumph of the Will". She did agree to return to the 1935 rally to make a film exclusively about the Wehrmacht, which became "Tag der Freiheit: Unsere Wehrmacht" ("Day of Freedom: Our Armed Forces").
"Triumph of the Will" remains well known for its striking visuals. As one historian notes, "many of the most enduring images of the [Nazi] regime and its leader derive from Riefenstahl's film."
Extensive excerpts of the film were used in Erwin Leiser's documentary "Mein Kampf", produced in Sweden in 1960. Riefenstahl unsuccessfully sued the Swedish production company Minerva-Film for copyright violation, although she did receive forty thousand marks in compensation from German and Austrian distributors of the film.
In 1942, Charles A. Ridley of the British Ministry of Information made a short propaganda film, "Lambeth Walk – Nazi Style", which edited footage of Hitler and German soldiers from the film to make it appear they were marching and dancing to the song "The Lambeth Walk". The parody of "The Lambeth Walk" (a British dance that had been popular in swing clubs in Germany, and which the Nazis denounced as "Jewish mischief and animalistic hopping") so enraged Joseph Goebbels that he reportedly ran out of the screening room kicking chairs and screaming profanities. The propaganda film was distributed uncredited to newsreel companies, which would supply their own narration.
Charlie Chaplin's satire "The Great Dictator" (1940) was inspired in large part by "Triumph of the Will". Frank Capra used significant footage, with a mocking narration, in the first installment of the United States Army's propaganda series "Why We Fight", as an exposure of Nazi militarism and totalitarianism for American soldiers and sailors. The film has been studied by many contemporary artists, including film directors Peter Jackson, George Lucas and Ridley Scott. The opening sequence of "Starship Troopers" is a direct reference to the film. In "Golden Kamuy", the gestures Lieutenant Tsurumi makes in one of his speeches are identical to Hitler's.
The Federal Court of Justice of Germany has addressed the matter of the film "Triumph of the Will" (see BGH UFITA 55 (1970), 313, 320/321). It ascertained that the film was an NSDAP production, where the NSDAP was granted unlimited rights of use for exploitation. According to the March 17, 1965 law regarding the regulation of liabilities of national socialist institutions and the legal relationships concerning their assets, all rights and assets of the NSDAP were transferred to the Federal Republic of Germany, and anything relating to film business was to be managed by Transit Film GmbH.
In 1996, the film was restored to copyright under the Uruguay Round Agreements Act.
Since the death of Leni Riefenstahl, the federally owned Transit Film GmbH holds the exclusive right of use to all rights of the film. The respective contractual agreements had previously provided, to a certain extent, for the joint management of rights.
Titanium
Titanium is a chemical element with the symbol Ti and atomic number 22. It is a lustrous transition metal with a silver color, low density, and high strength. Titanium is resistant to corrosion in sea water, aqua regia, and chlorine.
Titanium was discovered in Cornwall, Great Britain, by William Gregor in 1791 and was named by Martin Heinrich Klaproth after the Titans of Greek mythology. The element occurs within a number of mineral deposits, principally rutile and ilmenite, which are widely distributed in the Earth's crust and lithosphere; it is found in almost all living things, as well as bodies of water, rocks, and soils. The metal is extracted from its principal mineral ores by the Kroll and Hunter processes. The most common compound, titanium dioxide, is a popular photocatalyst and is used in the manufacture of white pigments. Other compounds include titanium tetrachloride (TiCl4), a component of smoke screens and catalysts; and titanium trichloride (TiCl3), which is used as a catalyst in the production of polypropylene.
Titanium can be alloyed with iron, aluminium, vanadium, and molybdenum, among other elements, to produce strong, lightweight alloys for aerospace (jet engines, missiles, and spacecraft), military, industrial processes (chemicals and petrochemicals, desalination plants, pulp, and paper), automotive, agriculture (farming), medical prostheses, orthopedic implants, dental and endodontic instruments and files, dental implants, sporting goods, jewelry, mobile phones, and other applications.
The two most useful properties of the metal are corrosion resistance and strength-to-density ratio, the highest of any metallic element. In its unalloyed condition, titanium is as strong as some steels, but less dense. There are two allotropic forms and five naturally occurring isotopes of this element, 46Ti through 50Ti, with 48Ti being the most abundant (73.8%). Although they have the same number of valence electrons and are in the same group in the periodic table, titanium and zirconium differ in many chemical and physical properties.
As a metal, titanium is recognized for its high strength-to-weight ratio. It is a strong metal with low density that is quite ductile (especially in an oxygen-free environment), lustrous, and metallic-white in color. The relatively high melting point (more than 1,650 °C or 3,000 °F) makes it useful as a refractory metal. It is paramagnetic and has fairly low electrical and thermal conductivity compared to other metals. Titanium is superconducting when cooled below its critical temperature of 0.49 K.
Commercially pure (99.2% pure) grades of titanium have ultimate tensile strength of about 434 MPa (63,000 psi), equal to that of common, low-grade steel alloys, but are less dense. Titanium is 60% denser than aluminium, but more than twice as strong as the most commonly used 6061-T6 aluminium alloy. Certain titanium alloys (e.g., Beta C) achieve tensile strengths of over 1,400 MPa (200,000 psi). However, titanium loses strength when heated above about 430 °C (800 °F).
Titanium is not as hard as some grades of heat-treated steel; it is non-magnetic and a poor conductor of heat and electricity. Machining requires precautions, because the material can gall unless sharp tools and proper cooling methods are used. Like steel structures, those made from titanium have a fatigue limit that guarantees longevity in some applications.
The metal is dimorphic: its hexagonal close-packed α form changes into a body-centered cubic (lattice) β form at 882 °C (1,620 °F). The specific heat of the α form increases dramatically as it is heated to this transition temperature, but then falls and remains fairly constant for the β form regardless of temperature.
Like aluminium and magnesium, titanium metal and its alloys oxidize immediately upon exposure to air. Titanium readily reacts with oxygen at 1,200 °C (2,190 °F) in air, and at 610 °C (1,130 °F) in pure oxygen, forming titanium dioxide. It is, however, slow to react with water and air at ambient temperatures because it forms a passive oxide coating that protects the bulk metal from further oxidation. When it first forms, this protective layer is only 1–2 nm thick but continues to grow slowly, reaching a thickness of 25 nm in four years.
Atmospheric passivation gives titanium excellent resistance to corrosion, almost equivalent to platinum. Titanium is capable of withstanding attack by dilute sulfuric and hydrochloric acids, chloride solutions, and most organic acids. However, titanium is corroded by concentrated acids. As indicated by its negative redox potential, titanium is thermodynamically a very reactive metal that burns in normal atmosphere at lower temperatures than the melting point. Melting is possible only in an inert atmosphere or in a vacuum. At 550 °C (1,022 °F), it combines with chlorine. It also reacts with the other halogens and absorbs hydrogen.
Titanium is one of the few elements that burns in pure nitrogen gas, reacting at 800 °C (1,470 °F) to form titanium nitride, which causes embrittlement. Because of its high reactivity with oxygen, nitrogen, and some other gases, titanium filaments are applied in titanium sublimation pumps as scavengers for these gases. Such pumps inexpensively and reliably produce extremely low pressures in ultra-high vacuum systems.
Titanium is the ninth-most abundant element in Earth's crust (0.63% by mass) and the seventh-most abundant metal. It is present as oxides in most igneous rocks, in sediments derived from them, in living things, and natural bodies of water. Of the 801 types of igneous rocks analyzed by the United States Geological Survey, 784 contained titanium. Its proportion in soils is approximately 0.5 to 1.5%.
Common titanium-containing minerals are anatase, brookite, ilmenite, perovskite, rutile, and titanite (sphene). Akaogiite is an extremely rare mineral consisting of titanium dioxide. Of these minerals, only rutile and ilmenite have economic importance, yet even they are difficult to find in high concentrations. About 6.0 and 0.7 million tonnes of those minerals were mined in 2011, respectively. Significant titanium-bearing ilmenite deposits exist in western Australia, Canada, China, India, Mozambique, New Zealand, Norway, Sierra Leone, South Africa, and Ukraine. About 186,000 tonnes of titanium metal sponge were produced in 2011, mostly in China (60,000 t), Japan (56,000 t), Russia (40,000 t), United States (32,000 t) and Kazakhstan (20,700 t). Total reserves of titanium are estimated to exceed 600 million tonnes.
The concentration of titanium is about 4 picomolar in the ocean. At 100 °C, the concentration of titanium in water is estimated to be less than 10−7 M at pH 7. The identity of titanium species in aqueous solution remains unknown because of its low solubility and the lack of sensitive spectroscopic methods, although only the 4+ oxidation state is stable in air. No evidence exists for a biological role, although rare organisms are known to accumulate high concentrations of titanium.
Titanium is contained in meteorites, and it has been detected in the Sun and in M-type stars (the coolest type) with a surface temperature of 3,200 °C (5,790 °F). Rocks brought back from the Moon during the Apollo 17 mission are composed of 12.1% TiO2. It is also found in coal ash, plants, and even the human body. Native titanium (pure metallic) is very rare.
Naturally occurring titanium is composed of five stable isotopes: 46Ti, 47Ti, 48Ti, 49Ti, and 50Ti, with 48Ti being the most abundant (73.8% natural abundance). At least 21 radioisotopes have been characterized, the most stable of which are 44Ti with a half-life of 63 years; 45Ti, 184.8 minutes; 51Ti, 5.76 minutes; and 52Ti, 1.7 minutes. All other radioactive isotopes have half-lives less than 33 seconds, with the majority less than half a second.
The isotopes of titanium range in atomic weight from 39.002 u (39Ti) to 63.999 u (64Ti). The primary decay mode for isotopes lighter than 46Ti is positron emission (with the exception of 44Ti which undergoes electron capture), leading to isotopes of scandium, and the primary mode for isotopes heavier than 50Ti is beta emission, leading to isotopes of vanadium.
Titanium becomes radioactive upon bombardment with deuterons, emitting mainly positrons and hard gamma rays.
The +4 oxidation state dominates titanium chemistry, but compounds in the +3 oxidation state are also common. Commonly, titanium adopts an octahedral coordination geometry in its complexes, but tetrahedral TiCl4 is a notable exception. Because of its high oxidation state, titanium(IV) compounds exhibit a high degree of covalent bonding. Unlike most other transition metals, simple aquo Ti(IV) complexes are unknown.
The most important oxide is TiO2, which exists in three important polymorphs: anatase, brookite, and rutile. All of these are white diamagnetic solids, although mineral samples can appear dark (see rutile). They adopt polymeric structures in which Ti is surrounded by six oxide ligands that link to other Ti centers.
The term "titanates" usually refers to titanium(IV) compounds, as represented by barium titanate (BaTiO3). With a perovskite structure, this material exhibits piezoelectric properties and is used as a transducer in the interconversion of sound and electricity. Many minerals are titanates, e.g. ilmenite (FeTiO3). Star sapphires and rubies get their asterism (star-forming shine) from the presence of titanium dioxide impurities.
A variety of reduced oxides (suboxides) of titanium are known, mainly reduced stoichiometries of titanium dioxide obtained by atmospheric plasma spraying. Ti3O5, described as a Ti(IV)-Ti(III) species, is a purple semiconductor produced by reduction of TiO2 with hydrogen at high temperatures, and is used industrially when surfaces need to be vapour-coated with titanium dioxide: it evaporates as pure TiO, whereas TiO2 evaporates as a mixture of oxides and deposits coatings with variable refractive index. Also known are Ti2O3, with the corundum structure, and TiO, with the rock salt structure, although both are often nonstoichiometric.
The alkoxides of titanium(IV), prepared by reacting TiCl4 with alcohols, are colourless compounds that convert to the dioxide on reaction with water. They are industrially useful for depositing solid TiO2 via the sol-gel process. Titanium isopropoxide is used in the synthesis of chiral organic compounds via the Sharpless epoxidation.
Titanium forms a variety of sulfides, but only TiS2 has attracted significant interest. It adopts a layered structure and was used as a cathode in the development of lithium batteries. Because Ti(IV) is a "hard cation", the sulfides of titanium are unstable and tend to hydrolyze to the oxide with release of hydrogen sulfide.
Titanium nitride (TiN) is a member of a family of refractory transition metal nitrides and exhibits properties similar to those of covalent compounds, including thermodynamic stability, extreme hardness, thermal and electrical conductivity, and a high melting point. TiN has a hardness equivalent to sapphire and carborundum (9.0 on the Mohs scale), and is often used to coat cutting tools, such as drill bits. It is also used as a gold-colored decorative finish and as a barrier metal in semiconductor fabrication. Titanium carbide, which is also very hard, is found in cutting tools and coatings.
Titanium tetrachloride (titanium(IV) chloride, TiCl4) is a colorless volatile liquid (commercial samples are yellowish) that, in air, hydrolyzes with spectacular emission of white clouds. Via the Kroll process, TiCl4 is produced in the conversion of titanium ores to titanium dioxide, e.g., for use in white paint. It is widely used in organic chemistry as a Lewis acid, for example in the Mukaiyama aldol condensation. In the van Arkel process, titanium tetraiodide (TiI4) is generated in the production of high purity titanium metal.
Titanium(III) and titanium(II) also form stable chlorides. A notable example is titanium(III) chloride (TiCl3), which is used as a catalyst for production of polyolefins (see Ziegler–Natta catalyst) and a reducing agent in organic chemistry.
Owing to the important role of titanium compounds as polymerization catalysts, compounds with Ti-C bonds have been intensively studied. The most common organotitanium complex is titanocene dichloride ((C5H5)2TiCl2). Related compounds include Tebbe's reagent and Petasis reagent. Titanium forms carbonyl complexes, e.g. (C5H5)2Ti(CO)2.
Following the success of platinum-based chemotherapy, titanium(IV) complexes were among the first non-platinum compounds to be tested for cancer treatment. The advantage of titanium compounds lies in their high efficacy and low toxicity. In biological environments, hydrolysis leads to the safe and inert titanium dioxide. Despite these advantages, the first candidate compounds failed clinical trials. Further development resulted in the creation of potentially effective, selective, and stable titanium-based drugs. Their mode of action is not yet well understood.
Titanium was discovered in 1791 by the clergyman and amateur geologist William Gregor as an inclusion of a mineral in Cornwall, Great Britain. Gregor recognized the presence of a new element in ilmenite when he found black sand by a stream and noticed the sand was attracted by a magnet. Analyzing the sand, he determined the presence of two metal oxides: iron oxide (explaining the attraction to the magnet) and 45.25% of a white metallic oxide he could not identify. Realizing that the unidentified oxide contained a metal that did not match any known element, Gregor reported his findings to the Royal Geological Society of Cornwall and in the German science journal "Crell's Annalen".
Around the same time, Franz-Joseph Müller von Reichenstein produced a similar substance, but could not identify it. The oxide was independently rediscovered in 1795 by Prussian chemist Martin Heinrich Klaproth in rutile from Boinik (the German name of Bajmócska), a village in Hungary (now Bojničky in Slovakia). Klaproth found that it contained a new element and named it for the Titans of Greek mythology. After hearing about Gregor's earlier discovery, he obtained a sample of manaccanite and confirmed that it contained titanium.
The currently known processes for extracting titanium from its various ores are laborious and costly; it is not possible to reduce the ore by heating with carbon (as in iron smelting) because titanium combines with the carbon to produce titanium carbide. Pure metallic titanium (99.9%) was first prepared in 1910 by Matthew A. Hunter at Rensselaer Polytechnic Institute by heating TiCl4 with sodium at 700–800 °C under great pressure in a batch process known as the Hunter process. Titanium metal was not used outside the laboratory until 1932 when William Justin Kroll proved that it can be produced by reducing titanium tetrachloride (TiCl4) with calcium. Eight years later he refined this process with magnesium and even sodium in what became known as the Kroll process. Although research continues into more efficient and cheaper processes (e.g., FFC Cambridge, Armstrong), the Kroll process is still used for commercial production.
Titanium of very high purity was made in small quantities when Anton Eduard van Arkel and Jan Hendrik de Boer discovered the iodide, or crystal bar, process in 1925, by reacting titanium with iodine and decomposing the formed vapours over a hot filament to pure metal.
In the 1950s and 1960s, the Soviet Union pioneered the use of titanium in military and submarine applications (Alfa class and Mike class) as part of programs related to the Cold War. Starting in the early 1950s, titanium came into use extensively in military aviation, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre and Lockheed A-12 and SR-71.
Recognizing the strategic importance of titanium, the U.S. Department of Defense supported early efforts of commercialization.
Throughout the period of the Cold War, titanium was considered a strategic material by the U.S. government, and a large stockpile of titanium sponge was maintained by the Defense National Stockpile Center, which was finally depleted in the 2000s. According to 2006 data, the world's largest producer, Russian-based VSMPO-AVISMA, was estimated to account for about 29% of the world market share. As of 2015, titanium sponge metal was produced in seven countries: China, Japan, Russia, Kazakhstan, the US, Ukraine, and India (in order of output).
In 2006, the U.S. Defense Advanced Research Projects Agency (DARPA) awarded $5.7 million to a two-company consortium to develop a new process for making titanium metal powder. Under heat and pressure, the powder can be used to create strong, lightweight items ranging from armour plating to components for the aerospace, transport, and chemical processing industries.
The processing of titanium metal occurs in four major steps: reduction of titanium ore into "sponge", a porous form; melting of sponge, or sponge plus a master alloy to form an ingot; primary fabrication, where an ingot is converted into general mill products such as billet, bar, plate, sheet, strip, and tube; and secondary fabrication of finished shapes from mill products.
Because it cannot be readily produced by reduction of titanium dioxide, titanium metal is obtained by reduction of TiCl4 with magnesium metal in the Kroll process. The complexity of this batch production in the Kroll process explains the relatively high market value of titanium, despite the Kroll process being less expensive than the Hunter process. To produce the TiCl4 required by the Kroll process, the dioxide is subjected to carbothermic reduction in the presence of chlorine. In this process, the chlorine gas is passed over a red-hot mixture of rutile or ilmenite in the presence of carbon. After extensive purification by fractional distillation, the TiCl4 is reduced with molten magnesium in an argon atmosphere. Titanium metal can be further purified by the van Arkel–de Boer process, which involves thermal decomposition of titanium tetraiodide.
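The overall chemistry of this route can be summarized in two idealized equations (a sketch consistent with the description above; exact conditions vary by plant):

```latex
% Carbothermic chlorination of the ore-derived dioxide at red heat
\mathrm{TiO_2} + 2\,\mathrm{Cl_2} + 2\,\mathrm{C} \longrightarrow \mathrm{TiCl_4} + 2\,\mathrm{CO}

% Kroll reduction with molten magnesium under an argon atmosphere
\mathrm{TiCl_4} + 2\,\mathrm{Mg} \longrightarrow \mathrm{Ti} + 2\,\mathrm{MgCl_2}
```

The magnesium chloride by-product is typically recycled by electrolysis back into magnesium and chlorine, one reason the Kroll process proved cheaper than the sodium-based Hunter process.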
A more recently developed batch production method, the FFC Cambridge process, reduces titanium dioxide electrochemically in molten calcium chloride to produce titanium metal as either powder or sponge. If mixed oxide powders are used, the product is an alloy.
Common titanium master alloys are made by reduction: cuprotitanium (rutile reduced with copper added), ferrocarbon titanium (ilmenite reduced with coke in an electric furnace), and manganotitanium (rutile reduced with manganese or manganese oxides).
About fifty grades of titanium alloys have been designed and are currently used, although only a couple of dozen are readily available commercially. ASTM International recognizes 31 grades of titanium metal and alloys, of which grades one through four are commercially pure (unalloyed). Those four vary in tensile strength as a function of oxygen content, with grade 1 being the most ductile (lowest tensile strength with an oxygen content of 0.18%), and grade 4 the least ductile (highest tensile strength with an oxygen content of 0.40%). The remaining grades are alloys, each designed for specific properties of ductility, strength, hardness, electrical resistivity, creep resistance, specific corrosion resistance, and combinations thereof.
In addition to the ASTM specifications, titanium alloys are also produced to meet aerospace and military specifications (SAE-AMS, MIL-T), ISO standards, and country-specific specifications, as well as proprietary end-user specifications for aerospace, military, medical, and industrial applications.
Titanium powder is manufactured using a flow production process known as the Armstrong process that is similar to the batch production Hunter process. A stream of titanium tetrachloride gas is added to a stream of molten sodium metal; the products (sodium chloride salt and titanium particles) are filtered from the extra sodium. Titanium is then separated from the salt by water washing. Both sodium and chlorine are recycled to produce and process more titanium tetrachloride.
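The underlying chemistry is the same sodium reduction used in the Hunter process; the idealized net reaction (supplied for clarity, not quoted from the source) is:

```latex
\mathrm{TiCl_4} + 4\,\mathrm{Na} \longrightarrow \mathrm{Ti} + 4\,\mathrm{NaCl}
```

Running the reaction as a continuous stream-into-stream process, rather than in a sealed batch vessel, is what distinguishes the Armstrong variant.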
All welding of titanium must be done in an inert atmosphere of argon or helium to shield it from contamination with atmospheric gases (oxygen, nitrogen, and hydrogen). Contamination causes a variety of conditions, such as embrittlement, which reduce the integrity of the assembly welds and lead to joint failure.
Commercially pure flat product (sheet, plate) can be formed readily, but processing must take into account the tendency of the metal to spring back. This is especially true of certain high-strength alloys. Titanium cannot be soldered without first pre-plating it in a metal that is solderable. The metal can be machined with the same equipment and the same processes as stainless steel.
Titanium is used in steel as an alloying element (ferro-titanium) to reduce grain size and as a deoxidizer, and in stainless steel to reduce carbon content. Titanium is often alloyed with aluminium (to refine grain size), vanadium, copper (to harden), iron, manganese, molybdenum, and other metals. Titanium mill products (sheet, plate, bar, wire, forgings, castings) find application in industrial, aerospace, recreational, and emerging markets. Powdered titanium is used in pyrotechnics as a source of bright-burning particles.
About 95% of all titanium ore is destined for refinement into titanium dioxide (TiO2), an intensely white permanent pigment used in paints, paper, toothpaste, and plastics. It is also used in cement, in gemstones, as an optical opacifier in paper, and as a strengthening agent in graphite composite fishing rods and golf clubs.
Because titanium alloys have high tensile strength to density ratio, high corrosion resistance, fatigue resistance, high crack resistance, and ability to withstand moderately high temperatures without creeping, they are used in aircraft, armour plating, naval ships, spacecraft, and missiles. For these applications, titanium is alloyed with aluminium, zirconium, nickel, vanadium, and other elements to manufacture a variety of components including critical structural parts, fire walls, landing gear, exhaust ducts (helicopters), and hydraulic systems. In fact, about two thirds of all titanium metal produced is used in aircraft engines and frames. The titanium 6AL-4V alloy accounts for almost 50% of all alloys used in aircraft applications.
The Lockheed A-12 and its development the SR-71 "Blackbird" were two of the first aircraft frames where titanium was used, paving the way for much wider use in modern military and commercial aircraft. An estimated 59 metric tons (130,000 pounds) are used in the Boeing 777, 45 in the Boeing 747, 18 in the Boeing 737, 32 in the Airbus A340, 18 in the Airbus A330, and 12 in the Airbus A320. The Airbus A380 may use 77 metric tons, including about 11 tons in the engines. In aero engine applications, titanium is used for rotors, compressor blades, hydraulic system components, and nacelles. An early use in jet engines was for the Orenda Iroquois in the 1950s.
Because titanium is resistant to corrosion by sea water, it is used to make propeller shafts, rigging, and heat exchangers in desalination plants; heater-chillers for salt water aquariums, fishing line and leader, and divers' knives. Titanium is used in the housings and components of ocean-deployed surveillance and monitoring devices for science and the military. The former Soviet Union developed techniques for making submarines with hulls of titanium alloys, forging titanium in huge vacuum tubes.
Titanium is used in the walls of the Juno spacecraft's vault to shield on-board electronics.
Welded titanium pipe and process equipment (heat exchangers, tanks, process vessels, valves) are used in the chemical and petrochemical industries primarily for corrosion resistance. Specific alloys are used in oil and gas downhole applications and nickel hydrometallurgy for their high strength (e.g., titanium Beta C alloy), corrosion resistance, or both. The pulp and paper industry uses titanium in process equipment exposed to corrosive media, such as sodium hypochlorite or wet chlorine gas (in the bleachery). Other applications include ultrasonic welding, wave soldering, and sputtering targets.
Titanium tetrachloride (TiCl4), a colorless liquid, is important as an intermediate in the process of making TiO2 and is also used to produce the Ziegler–Natta catalyst. Titanium tetrachloride is also used to iridize glass and, because it fumes strongly in moist air, it is used to make smoke screens.
Titanium metal is used in automotive applications, particularly in automobile and motorcycle racing where low weight and high strength and rigidity are critical. The metal is generally too expensive for the general consumer market, though some late model Corvettes have been manufactured with titanium exhausts, and a Corvette Z06's LT4 supercharged engine uses lightweight, solid titanium intake valves for greater strength and resistance to heat.
Titanium is used in many sporting goods: tennis rackets, golf clubs, lacrosse stick shafts; cricket, hockey, lacrosse, and football helmet grills, and bicycle frames and components. Although not a mainstream material for bicycle production, titanium bikes have been used by racing teams and adventure cyclists.
Titanium alloys are used in spectacle frames that are rather expensive but highly durable, long-lasting, and lightweight, and that cause no skin allergies. Many backpackers use titanium equipment, including cookware, eating utensils, lanterns, and tent stakes. Though slightly more expensive than traditional steel or aluminium alternatives, titanium products can be significantly lighter without compromising strength. Titanium horseshoes are preferred to steel by farriers because they are lighter and more durable.
Titanium has occasionally been used in architecture. The Monument to Yuri Gagarin, the first man to travel in space, as well as the Monument to the Conquerors of Space on top of the Cosmonaut Museum in Moscow, are made of titanium for the metal's attractive colour and association with rocketry. The Guggenheim Museum Bilbao and the Cerritos Millennium Library were the first buildings in Europe and North America, respectively, to be sheathed in titanium panels. Titanium sheathing was used in the Frederic C. Hamilton Building in Denver, Colorado.
Because of titanium's superior strength and light weight relative to other metals (steel, stainless steel, and aluminium), and because of recent advances in metalworking techniques, its use has become more widespread in the manufacture of firearms. Primary uses include pistol frames and revolver cylinders. For the same reasons, it is used in the body of laptop computers (for example, in Apple's PowerBook line).
Some upmarket lightweight and corrosion-resistant tools, such as shovels and flashlights, are made of titanium or titanium alloys.
Because of its durability, titanium has become more popular for designer jewelry (particularly, titanium rings). Its inertness makes it a good choice for those with allergies or those who will be wearing the jewelry in environments such as swimming pools. Titanium is also alloyed with gold to produce an alloy that can be marketed as 24-karat gold because the 1% of alloyed Ti is insufficient to require a lesser mark. The resulting alloy is roughly the hardness of 14-karat gold and is more durable than pure 24-karat gold.
Titanium's durability, light weight, and dent and corrosion resistance make it useful for watch cases. Some artists work with titanium to produce sculptures, decorative objects and furniture.
Titanium may be anodized to vary the thickness of the surface oxide layer, causing optical interference fringes and a variety of bright colors. With this coloration and chemical inertness, titanium is a popular metal for body piercing.
Titanium has a minor use in dedicated non-circulating coins and medals. In 1999, Gibraltar released the world's first titanium coin for the millennium celebration. The Gold Coast Titans, an Australian rugby league team, award a medal of pure titanium to their player of the year.
Because titanium is biocompatible (non-toxic and not rejected by the body), it has many medical uses, including surgical implements and implants, such as hip balls and sockets (joint replacement) and dental implants that can stay in place for up to 20 years. The titanium is often alloyed with about 4% aluminium or 6% Al and 4% vanadium.
Titanium has the inherent ability to osseointegrate, enabling use in dental implants that can last for over 30 years. This property is also useful for orthopedic implant applications, which benefit from titanium's lower modulus of elasticity (Young's modulus) more closely matching that of the bone such devices are intended to repair. As a result, skeletal loads are more evenly shared between bone and implant, leading to a lower incidence of bone degradation due to stress shielding and of periprosthetic bone fractures, which occur at the boundaries of orthopedic implants. However, titanium alloys' stiffness is still more than twice that of bone, so adjacent bone bears a greatly reduced load and may deteriorate.
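Stress shielding can be pictured with a simple parallel-load model (an illustrative sketch, not from the source): if bone and implant deform together under a load, each carries a share proportional to its stiffness, i.e. its elastic modulus E times its cross-sectional area A:

```latex
\frac{P_{\text{implant}}}{P_{\text{total}}}
  = \frac{E_{\text{implant}} A_{\text{implant}}}
         {E_{\text{implant}} A_{\text{implant}} + E_{\text{bone}} A_{\text{bone}}}
```

The stiffer the implant relative to the bone, the larger its share of the load and the less mechanical stimulus the surrounding bone receives; lowering the implant modulus shifts load back onto the bone.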
Because titanium is non-ferromagnetic, patients with titanium implants can be safely examined with magnetic resonance imaging (convenient for long-term implants). Preparing titanium for implantation in the body involves subjecting it to a high-temperature plasma arc which removes the surface atoms, exposing fresh titanium that is instantly oxidized.
Titanium is used for the surgical instruments used in image-guided surgery, as well as wheelchairs, crutches, and any other products where high strength and low weight are desirable.
Titanium dioxide nanoparticles are widely used in electronics and the delivery of pharmaceuticals and cosmetics.
Because of its corrosion resistance, containers made of titanium have been studied for the long-term storage of nuclear waste. Containers lasting more than 100,000 years are thought possible with manufacturing conditions that minimize material defects. A titanium "drip shield" could also be installed over containers of other types to enhance their longevity.
The fungal species "Marasmius oreades" and "Hypholoma capnoides" can bioconvert titanium in titanium-polluted soils.
Titanium is non-toxic even in large doses and does not play any natural role inside the human body. An estimated quantity of 0.8 milligrams of titanium is ingested by humans each day, but most passes through without being absorbed in the tissues. It does, however, sometimes bio-accumulate in tissues that contain silica. One study indicates a possible connection between titanium and yellow nail syndrome. An unknown mechanism in plants may use titanium to stimulate the production of carbohydrates and encourage growth. This may explain why most plants contain about 1 part per million (ppm) of titanium, food plants have about 2 ppm, and horsetail and nettle contain up to 80 ppm.
As a powder or in the form of metal shavings, titanium metal poses a significant fire hazard and, when heated in air, an explosion hazard. Water and carbon dioxide are ineffective for extinguishing a titanium fire; Class D dry powder agents must be used instead.
When used in the production or handling of chlorine, titanium should not be exposed to dry chlorine gas because it may result in a titanium–chlorine fire. Even wet chlorine presents a fire hazard when extreme weather conditions cause unexpected drying.
Titanium can catch fire when a fresh, non-oxidized surface comes in contact with liquid oxygen. Fresh metal may be exposed when the oxidized surface is struck or scratched with a hard object, or when mechanical strain causes a crack. This poses a limitation to its use in liquid oxygen systems, such as those in the aerospace industry. Because titanium tubing impurities can cause fires when exposed to oxygen, titanium is prohibited in gaseous oxygen respiration systems. Steel tubing is used for high pressure systems (3,000 p.s.i.) and aluminium tubing for low pressure systems.
Technetium
Technetium is a chemical element with the symbol Tc and atomic number 43. It is the lightest element whose isotopes are all radioactive; none are stable other than the fully ionized state of 97Tc. Nearly all available technetium is produced as a synthetic element, and only about 18,000 tons are estimated to exist at any given time in the Earth's crust. Naturally occurring technetium is a spontaneous fission product in uranium ore and thorium ore, the most common source, or the product of neutron capture in molybdenum ores. This silvery gray, crystalline transition metal lies between manganese and rhenium in group 7 of the periodic table, and its chemical properties are intermediate between those of these two adjacent elements. The most common naturally occurring isotope is 99Tc.
Many of technetium's properties were predicted by Dmitri Mendeleev before the element was discovered. Mendeleev noted a gap in his periodic table and gave the undiscovered element the provisional name "ekamanganese" ("Em"). In 1937, technetium (specifically the technetium-97 isotope) became the first predominantly artificial element to be produced, hence its name (from the Greek "techne", meaning "craft or art", + "-ium").
One short-lived gamma ray-emitting nuclear isomer of technetium—technetium-99m—is used in nuclear medicine for a wide variety of diagnostic tests, such as bone cancer diagnoses. The ground state of this nuclide, technetium-99, is used as a gamma-ray-free source of beta particles. Long-lived technetium isotopes produced commercially are by-products of the fission of uranium-235 in nuclear reactors and are extracted from nuclear fuel rods. Because no isotope of technetium has a half-life longer than 4.21 million years (technetium-97), the 1952 detection of technetium in red giants helped to prove that stars can produce heavier elements.
From the 1860s through 1871, early forms of the periodic table proposed by Dmitri Mendeleev contained a gap between molybdenum (element 42) and ruthenium (element 44). In 1871, Mendeleev predicted this missing element would occupy the empty place below manganese and have similar chemical properties. Mendeleev gave it the provisional name "ekamanganese" (from "eka"-, the Sanskrit word for "one") because the predicted element was one place down from the known element manganese.
Many early researchers, both before and after the periodic table was published, were eager to be the first to discover and name the missing element. Its location in the table suggested that it should be easier to find than other undiscovered elements.
German chemists Walter Noddack, Otto Berg, and Ida Tacke reported the discovery of element 75 and element 43 in 1925, and named element 43 "masurium" (after Masuria in eastern Prussia, now in Poland, the region where Walter Noddack's family originated). The group bombarded columbite with a beam of electrons and deduced element 43 was present by examining X-ray emission spectrograms. The wavelength of the X-rays produced is related to the atomic number by a formula derived by Henry Moseley in 1913. The team claimed to detect a faint X-ray signal at a wavelength produced by element 43. Later experimenters could not replicate the discovery, and it was dismissed as an error for many years. Still, in 1933, a series of articles on the discovery of elements quoted the name "masurium" for element 43. Whether the 1925 team actually did discover element 43 is still debated.
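The formula in question is Moseley's law, stated here for clarity rather than quoted from the source; for the Kα line it takes the approximate form:

```latex
% Moseley's law for K-alpha X-rays: frequency grows as (Z - 1)^2
f_{K\alpha} \approx \left(2.47\times10^{15}\,\mathrm{Hz}\right)(Z-1)^2
```

For Z = 43 this gives f of roughly 4.4×10^18 Hz, a wavelength of about 0.69 Å, just short of molybdenum's Kα line; a faint signal near that wavelength is what the Noddack team claimed to see.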
The discovery of element 43 was finally confirmed in a 1937 experiment at the University of Palermo in Sicily by Carlo Perrier and Emilio Segrè. In mid-1936, Segrè visited the United States, first Columbia University in New York and then the Lawrence Berkeley National Laboratory in California. He persuaded cyclotron inventor Ernest Lawrence to let him take back some discarded cyclotron parts that had become radioactive. Lawrence mailed him a molybdenum foil that had been part of the deflector in the cyclotron.
Segrè enlisted his colleague Perrier to attempt to prove, through comparative chemistry, that the molybdenum activity was indeed from an element with the atomic number 43. In 1937, they succeeded in isolating the isotopes technetium-95m and technetium-97. University of Palermo officials wanted them to name their discovery "panormium", after the Latin name for Palermo, "Panormus". In 1947, element 43 was named after the Greek word "τεχνητός", meaning "artificial", since it was the first element to be artificially produced. Segrè returned to Berkeley and met Glenn T. Seaborg. They isolated the metastable isotope technetium-99m, which is now used in some ten million medical diagnostic procedures annually.
In 1952, astronomer Paul W. Merrill in California detected the spectral signature of technetium (specifically wavelengths of 403.1 nm, 423.8 nm, 426.2 nm, and 429.7 nm) in light from S-type red giants. The stars were near the end of their lives, yet were rich in this short-lived element, indicating that it was being produced in the stars by nuclear reactions. This evidence bolstered the hypothesis that heavier elements are the product of nucleosynthesis in stars. More recently, such observations provided evidence that elements are formed by neutron capture in the s-process.
Since that discovery, there have been many searches in terrestrial materials for natural sources of technetium. In 1962, technetium-99 was isolated and identified in pitchblende from the Belgian Congo in extremely small quantities (about 0.2 ng/kg); there it originates as a spontaneous fission product of uranium-238. The Oklo natural nuclear fission reactor contains evidence that significant amounts of technetium-99 were produced and have since decayed into ruthenium-99.
Technetium is a silvery-gray radioactive metal with an appearance similar to platinum, commonly obtained as a gray powder. The crystal structure of the pure metal is hexagonal close-packed. Atomic technetium has characteristic emission lines at wavelengths of 363.3 nm, 403.1 nm, 426.2 nm, 429.7 nm, and 485.3 nm.
The metal form is slightly paramagnetic, meaning its magnetic dipoles align with external magnetic fields, but will assume random orientations once the field is removed. Pure, metallic, single-crystal technetium becomes a type-II superconductor at temperatures below 7.46 K. Below this temperature, technetium has a very high magnetic penetration depth, greater than any other element except niobium.
Technetium is located in the seventh group of the periodic table, between rhenium and manganese. As predicted by the periodic law, its chemical properties are between those two elements. Of the two, technetium more closely resembles rhenium, particularly in its chemical inertness and tendency to form covalent bonds. Unlike manganese, technetium does not readily form cations (ions with a net positive charge). Technetium exhibits nine oxidation states from −1 to +7, with +4, +5, and +7 being the most common. Technetium dissolves in aqua regia, nitric acid, and concentrated sulfuric acid, but it is not soluble in hydrochloric acid of any concentration.
Metallic technetium slowly tarnishes in moist air and, in powder form, burns in oxygen.
Technetium can catalyse the destruction of hydrazine by nitric acid, and this property is due to its multiplicity of valencies. This caused a problem in the separation of plutonium from uranium in nuclear fuel processing, where hydrazine is used as a protective reductant to keep plutonium in the trivalent rather than the more stable tetravalent state. The problem was exacerbated by the mutually-enhanced solvent extraction of technetium and zirconium at the previous stage, and required a process modification.
The most prevalent form of technetium that is easily accessible is sodium pertechnetate, Na[TcO4]. The majority of this material is produced by radioactive decay from [99MoO4]2−, the beta decay of molybdenum-99 converting molybdate directly into pertechnetate:
[99MoO4]2− → [99mTcO4]− + β−
Pertechnetate (tetroxidotechnetate) behaves analogously to perchlorate, both of which are tetrahedral. Unlike permanganate (MnO4−), it is only a weak oxidizing agent.
Related to pertechnetate is the heptoxide, Tc2O7. This pale-yellow, volatile solid is produced by oxidation of Tc metal and related precursors:
4 Tc + 7 O2 → 2 Tc2O7
It is a very rare example of a molecular metal oxide, other examples being OsO4 and RuO4. It adopts a centrosymmetric structure with two types of Tc−O bonds with 167 and 184 pm bond lengths.
Technetium heptoxide hydrolyzes to pertechnetate and pertechnetic acid, depending on the pH:
Tc2O7 + 2 OH− → 2 TcO4− + H2O (basic conditions)
Tc2O7 + H2O → 2 HTcO4 (acidic conditions)
HTcO4 is a strong acid. In concentrated sulfuric acid, [TcO4]− converts to the octahedral form TcO3(OH)(H2O)2, the conjugate base of the hypothetical triaquo complex [TcO3(H2O)3]+.
Technetium forms a dioxide, disulfide, diselenide, and ditelluride. An ill-defined Tc2S7 forms upon treating pertechnetate with hydrogen sulfide. It thermally decomposes into the disulfide and elemental sulfur. Similarly, the dioxide can be produced by reduction of Tc2O7.
Unlike the case for rhenium, a trioxide has not been isolated for technetium. However, TcO3 has been identified in the gas phase using mass spectrometry.
Technetium forms the simple hydride complex [TcH9]2−. The potassium salt is isostructural with [ReH9]2−.
The following binary (containing only two elements) technetium halides are known: TcF6, TcF5, TcCl4, TcBr4, TcBr3, α-TcCl3, β-TcCl3, TcI3, α-TcCl2, and β-TcCl2. The oxidation states range from Tc(VI) to Tc(II). Technetium halides exhibit different structure types, such as molecular octahedral complexes, extended chains, layered sheets, and metal clusters arranged in a three-dimensional network. These compounds are produced by combining the metal and halogen or by less direct reactions.
TcCl4 is obtained by chlorination of Tc metal or Tc2O7. Upon heating, TcCl4 gives the corresponding Tc(III) and Tc(II) chlorides.
The structure of TcCl4 is composed of infinite zigzag chains of edge-sharing TcCl6 octahedra. It is isomorphous to transition metal tetrachlorides of zirconium, hafnium, and platinum.
Two polymorphs of technetium trichloride exist, α- and β-TcCl3. The α polymorph is also denoted as Tc3Cl9. It adopts a confacial bioctahedral structure. It is prepared by treating the chloro-acetate Tc2(O2CCH3)4Cl2 with HCl. Like Re3Cl9, the structure of the α-polymorph consists of triangles with short M-M distances. β-TcCl3 features octahedral Tc centers, which are organized in pairs, as seen also for molybdenum trichloride. TcBr3 does not adopt the structure of either trichloride phase. Instead it has the structure of molybdenum tribromide, consisting of chains of confacial octahedra with alternating short and long Tc—Tc contacts. TcI3 has the same structure as the high temperature phase of TiI3, featuring chains of confacial octahedra with equal Tc—Tc contacts.
Several anionic technetium halides are known. The binary tetrahalides can be converted to the hexahalides [TcX6]2− (X = F, Cl, Br, I), which adopt octahedral molecular geometry. More reduced halides form anionic clusters with Tc–Tc bonds. The situation is similar for the related elements of Mo, W, Re. These clusters have the nuclearity Tc4, Tc6, Tc8, and Tc13. The more stable Tc6 and Tc8 clusters have prism shapes where vertical pairs of Tc atoms are connected by triple bonds and the planar atoms by single bonds. Every technetium atom makes six bonds, and the remaining valence electrons can be saturated by one axial and two bridging ligand halogen atoms such as chlorine or bromine.
Technetium forms a variety of coordination complexes with organic ligands. Many have been well-investigated because of their relevance to nuclear medicine.
Technetium forms a variety of compounds with Tc–C bonds, i.e. organotechnetium complexes. Prominent members of this class are complexes with CO, arene, and cyclopentadienyl ligands. The binary carbonyl Tc2(CO)10 is a white volatile solid. In this molecule, two technetium atoms are bound to each other, and each atom is octahedrally coordinated by five carbonyl ligands plus the other technetium atom. The bond length between technetium atoms, 303 pm, is significantly larger than the distance between two atoms in metallic technetium (272 pm). Similar carbonyls are formed by technetium's congeners, manganese and rhenium. Interest in organotechnetium compounds has also been motivated by applications in nuclear medicine. Unusually among metal carbonyls, Tc forms aquo-carbonyl complexes, a prominent one being [Tc(CO)3(H2O)3]+.
Technetium, with atomic number (denoted "Z") 43, is the lowest-numbered element in the periodic table for which all isotopes are radioactive. The second-lightest exclusively radioactive element, promethium, has an atomic number of 61. Atomic nuclei with an odd number of protons are less stable than those with even numbers, even when the total number of nucleons (protons + neutrons) is even, and odd-numbered elements have fewer stable isotopes.
Its most stable isotopes are technetium-97, with a half-life of 4.21 million years, technetium-98, with 4.2 million years, and technetium-99, with 211,100 years. Thirty other radioisotopes have been characterized, with mass numbers ranging from 85 to 118. Most of these have half-lives of less than an hour, the exceptions being technetium-93 (2.73 hours), technetium-94 (4.88 hours), technetium-95 (20 hours), and technetium-96 (4.3 days).
The primary decay mode for isotopes lighter than technetium-98 (98Tc) is electron capture, producing molybdenum ("Z" = 42). For technetium-98 and heavier isotopes, the primary mode is beta emission (the emission of an electron), producing ruthenium ("Z" = 44), with the exception that technetium-100 can decay both by beta emission and by electron capture.
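Written as nuclear equations, with technetium-97 (electron capture) and technetium-99 (beta emission) as representative cases:

$$^{97}_{43}\mathrm{Tc} + e^- \rightarrow {}^{97}_{42}\mathrm{Mo} + \nu_e \qquad\qquad {}^{99}_{43}\mathrm{Tc} \rightarrow {}^{99}_{44}\mathrm{Ru} + e^- + \bar{\nu}_e$$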
Technetium also has numerous nuclear isomers, which are long-lived excited states of its nuclei. Technetium-97m (97mTc; "m" stands for metastable) is the most stable, with a half-life of 91 days and an excitation energy of 0.0965 MeV. It is followed by technetium-95m (61 days, 0.03 MeV) and technetium-99m (6.01 hours, 0.142 MeV). Technetium-99m emits only gamma rays and decays to technetium-99.
Technetium-99 (99Tc) is a major product of the fission of uranium-235 (235U), making it the most common and most readily available isotope of technetium. One gram of technetium-99 produces 6.2×10⁸ disintegrations per second (in other words, the specific activity of 99Tc is 0.62 GBq/g).
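The quoted specific activity follows directly from the half-life via A = λN = (ln 2 / t½)(N_A / M). A minimal Python sketch of this calculation, assuming a molar mass of 99 g/mol and a 365.25-day year:

```python
import math

AVOGADRO = 6.02214076e23               # atoms per mole
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def specific_activity_bq_per_g(half_life_years: float, molar_mass_g: float) -> float:
    """Specific activity A = lambda * N = (ln 2 / t_half) * (N_A / M), in Bq/g."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    atoms_per_gram = AVOGADRO / molar_mass_g
    return decay_constant * atoms_per_gram

# Technetium-99: half-life 211,100 years, molar mass ~99 g/mol
print(f"{specific_activity_bq_per_g(211_100, 99.0):.2e} Bq/g")  # ~6.3e8 Bq/g, i.e. ~0.63 GBq/g
```

The result, about 0.63 GBq/g, is consistent with the quoted 0.62 GBq/g given rounding of the inputs.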
Technetium occurs naturally in the Earth's crust in minute concentrations of about 0.003 parts per trillion. It is so rare because the half-lives of 97Tc and 98Tc are only 4.2 million years; more than a thousand such half-lives have passed since the formation of the Earth, so the probability that even one atom of primordial technetium has survived is effectively zero. However, small amounts exist as spontaneous fission products in uranium ores: a kilogram of uranium contains an estimated 1 nanogram (10⁻⁹ g) of technetium. Some red giant stars of spectral types S, M, and N show a spectral absorption line indicating the presence of technetium; these red giants are known informally as technetium stars.
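The survival argument is simple arithmetic, as the following sketch shows (the ~4.54-billion-year age of the Earth is an assumed input):

```python
import math

EARTH_AGE_YEARS = 4.54e9   # assumed age of the Earth
HALF_LIFE_YEARS = 4.2e6    # 97Tc / 98Tc, the longest-lived isotopes

n_half_lives = EARTH_AGE_YEARS / HALF_LIFE_YEARS          # ~1081 half-lives
log10_surviving_fraction = -n_half_lives * math.log10(2)  # log10 of 2**(-n)

# Even a generous 1e50 primordial atoms (roughly the atom count of the whole
# Earth) times a surviving fraction of ~1e-325 leaves essentially zero today.
print(f"{n_half_lives:.0f} half-lives, surviving fraction ~1e{log10_surviving_fraction:.0f}")
```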
In contrast to the rare natural occurrence, bulk quantities of technetium-99 are produced each year from spent nuclear fuel rods, which contain various fission products. The fission of a gram of uranium-235 in nuclear reactors yields 27 mg of technetium-99, giving technetium a fission product yield of 6.1%. Other fissile isotopes produce similar yields of technetium, such as 4.9% from uranium-233 and 6.21% from plutonium-239. An estimated 49,000 TBq (78 metric tons) of technetium was produced in nuclear reactors between 1983 and 1994, by far the dominant source of terrestrial technetium. Only a fraction of the production is used commercially.
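The 27 mg figure can be cross-checked from the 6.1% yield: each fissioned uranium-235 atom has a 6.1% chance of ultimately producing a technetium-99 atom, so the expected mass per gram of U-235 is (99/235) × 0.061 ≈ 0.026 g. A minimal sketch:

```python
AVOGADRO = 6.02214076e23

def tc99_grams_per_gram_u235(yield_fraction: float = 0.061) -> float:
    """Grams of Tc-99 produced per gram of U-235 fissioned, from the fission yield."""
    fissions = AVOGADRO / 235.0              # one fission per U-235 atom
    tc99_atoms = yield_fraction * fissions   # 6.1% of fissions end in a Tc-99 atom
    return tc99_atoms * 99.0 / AVOGADRO      # convert atoms back to grams

print(f"{tc99_grams_per_gram_u235() * 1000:.0f} mg")  # ~26 mg, matching the quoted ~27 mg
```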
Technetium-99 is produced by the nuclear fission of both uranium-235 and plutonium-239. It is therefore present in radioactive waste and in the nuclear fallout of fission bomb explosions. Its decay, measured in becquerels per amount of spent fuel, becomes the dominant contributor to nuclear waste radioactivity roughly 10⁴ to 10⁶ years after the waste is created. From 1945 to 1994, an estimated 160 TBq (about 250 kg) of technetium-99 was released into the environment during atmospheric nuclear tests. The amount of technetium-99 from nuclear reactors released into the environment up to 1986 is on the order of 1,000 TBq (about 1,600 kg), primarily from nuclear fuel reprocessing; most of this was discharged into the sea. Reprocessing methods have reduced emissions since then, but as of 2005 the primary release of technetium-99 into the environment was from the Sellafield plant, which discharged an estimated 550 TBq (about 900 kg) into the Irish Sea between 1995 and 1999. From 2000 onwards the amount has been limited by regulation to 90 TBq (about 140 kg) per year. Discharge of technetium into the sea has resulted in contamination of some seafood with minuscule quantities of the element; for example, European lobster and fish from west Cumbria contain about 1 Bq/kg of technetium.
The metastable isotope technetium-99m is continuously produced as a fission product from the fission of uranium or plutonium in nuclear reactors:
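Schematically, with molybdenum-99 as the dominant precursor (a sketch of the chain; the direct fission yield of technetium-99m itself is small):

$$^{235}\mathrm{U} + n \;\rightarrow\; \text{fission products} + {}^{99}\mathrm{Mo} \;\xrightarrow{\beta^-,\ 67\ \text{h}}\; {}^{99m}\mathrm{Tc} \;\xrightarrow{\gamma,\ 6.01\ \text{h}}\; {}^{99}\mathrm{Tc}$$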
Because used fuel is allowed to stand for several years before reprocessing, all of the molybdenum-99 and technetium-99m has decayed by the time the fission products are separated from the major actinides in conventional nuclear reprocessing. The liquid left after plutonium–uranium extraction (PUREX) contains a high concentration of technetium as pertechnetate, TcO4−, but almost all of it is technetium-99, not technetium-99m.
The vast majority of the technetium-99m used in medical work is produced by irradiating dedicated highly enriched uranium targets in a reactor, extracting molybdenum-99 from the targets in reprocessing facilities, and recovering at the diagnostic center the technetium-99m produced by the decay of molybdenum-99. Molybdenum-99 in the form of molybdate is adsorbed onto acid alumina (Al2O3) in a shielded column chromatograph inside a technetium-99m generator (a "technetium cow", also occasionally called a "molybdenum cow"). Molybdenum-99 has a half-life of 67 hours, so short-lived technetium-99m (half-life: 6 hours), which results from its decay, is constantly being produced. The soluble pertechnetate can then be chemically extracted by elution with a saline solution. A drawback of this process is that it requires targets containing uranium-235, which are subject to the security precautions applied to fissile materials.
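The generator's once-a-day elution schedule falls out of the two half-lives: after elution, the technetium-99m activity grows back according to the two-member Bateman equation and peaks roughly 23 hours later. A minimal sketch (the ~87.5% branching fraction of Mo-99 decays feeding the metastable state is an assumed parameter):

```python
import math

def tc99m_activity(mo99_activity_0: float, t_hours: float,
                   t_half_mo: float = 67.0, t_half_tc: float = 6.0,
                   branch: float = 0.875) -> float:
    """Tc-99m activity grown into a freshly eluted generator (two-member
    Bateman equation); `branch` is the assumed fraction of Mo-99 decays
    that feed the metastable state."""
    lam_mo = math.log(2) / t_half_mo
    lam_tc = math.log(2) / t_half_tc
    return (branch * mo99_activity_0 * lam_tc / (lam_tc - lam_mo)
            * (math.exp(-lam_mo * t_hours) - math.exp(-lam_tc * t_hours)))

# The in-grown activity peaks roughly 23 hours after elution, which is why
# generators are typically "milked" about once a day.
for t in (6, 12, 23, 48):
    print(f"t = {t:2d} h: {tc99m_activity(100.0, t):.1f}")
```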
Almost two-thirds of the world's supply comes from two reactors: the National Research Universal Reactor at Chalk River Laboratories in Ontario, Canada, and the High Flux Reactor of the Nuclear Research and Consultancy Group in Petten, Netherlands. All major reactors that produce technetium-99m were built in the 1960s and are close to the end of their operating lives. The two new Canadian Multipurpose Applied Physics Lattice Experiment (MAPLE) reactors, planned and built to produce 200% of the demand for technetium-99m, relieved all other producers of the need to build their own reactors. With the cancellation of the already-tested reactors in 2008, the future supply of technetium-99m became problematic.
The long half-life of technetium-99 and its potential to form anionic species create a major concern for the long-term disposal of radioactive waste. Many of the processes designed to remove fission products in reprocessing plants target cationic species such as caesium (e.g., caesium-137) and strontium (e.g., strontium-90); pertechnetate therefore escapes these processes. Current disposal options favor burial in continental, geologically stable rock. The primary danger with this practice is the likelihood that the waste will contact water, which could leach radioactive contamination into the environment. Anionic pertechnetate and iodide tend not to adsorb onto mineral surfaces and are likely to be washed away. By comparison, plutonium, uranium, and caesium tend to bind to soil particles. Technetium can be immobilized in some environments, for example by microbial activity in lake-bottom sediments, and the environmental chemistry of technetium is an area of active research.
An alternative disposal method, transmutation, has been demonstrated at CERN for technetium-99. In this process, the technetium (technetium-99 as a metal target) is bombarded with neutrons to form short-lived technetium-100 (half-life 16 seconds), which decays by beta decay to ruthenium-100. If recovery of usable ruthenium is a goal, an extremely pure technetium target is needed; if small traces of minor actinides such as americium and curium are present in the target, they are likely to undergo fission and form more fission products, which increase the radioactivity of the irradiated target. The formation of ruthenium-106 (half-life 374 days) from this "fresh fission" is likely to increase the activity of the final ruthenium metal, which will then require a longer cooling time after irradiation before the ruthenium can be used.
The actual separation of technetium-99 from spent nuclear fuel is a long process. During fuel reprocessing, it comes out as a component of the highly radioactive waste liquid. After this liquid has been left to stand for several years, the radioactivity decreases to a level at which extraction of the long-lived isotopes, including technetium-99, becomes feasible. A series of chemical processes then yields technetium-99 metal of high purity.
Molybdenum-99, which decays to form technetium-99m, can be formed by the neutron activation of molybdenum-98. Other technetium isotopes are not produced in significant quantities by fission; when needed, they are manufactured by neutron irradiation of parent isotopes (for example, technetium-97 can be made by neutron irradiation of ruthenium-96).
The feasibility of technetium-99m production by 22 MeV proton bombardment of a molybdenum-100 target in medical cyclotrons, following the reaction 100Mo(p,2n)99mTc, was demonstrated in 1971. The recent shortages of medical technetium-99m reignited interest in its production by proton bombardment of isotopically enriched (>99.5%) molybdenum-100 targets. Other techniques are being investigated for obtaining molybdenum-99 from molybdenum-100 via (n,2n) or (γ,n) reactions in particle accelerators.
Technetium-99m ("m" indicates that this is a metastable nuclear isomer) is used in radioactive-isotope medical tests, for example as a radioactive tracer that medical imaging equipment tracks in the human body. It is well suited to the role because it emits readily detectable 140 keV gamma rays and its half-life is 6.01 hours (meaning that about 94% of it decays to technetium-99 in 24 hours). The chemistry of technetium allows it to be bound to a variety of biochemical compounds, each of which determines how it is metabolized and deposited in the body, so this single isotope can be used for a multitude of diagnostic tests. More than 50 common radiopharmaceuticals are based on technetium-99m for imaging and functional studies of the brain, heart muscle, thyroid, lungs, liver, gall bladder, kidneys, skeleton, blood, and tumors.
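The 94% figure is just exponential decay over roughly four half-lives:

$$1 - 2^{-24/6.01} = 1 - 2^{-3.99} \approx 0.94$$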
The longer-lived isotope, technetium-95m with a half-life of 61 days, is used as a radioactive tracer to study the movement of technetium in the environment and in plant and animal systems.
Technetium-99 decays almost entirely by beta decay, emitting beta particles of consistently low energy with no accompanying gamma rays. Moreover, its long half-life means that this emission decreases very slowly with time. It can also be extracted to high chemical and isotopic purity from radioactive waste. For these reasons, it is a National Institute of Standards and Technology (NIST) standard beta emitter and is used for equipment calibration. Technetium-99 has also been proposed for use in optoelectronic devices and nanoscale nuclear batteries.
Like rhenium and palladium, technetium can serve as a catalyst. For some reactions, such as the dehydrogenation of isopropyl alcohol, it is a far more effective catalyst than either rhenium or palladium. However, its radioactivity poses a major problem for safe catalytic applications.
When steel is immersed in water, adding a small concentration (55 ppm) of potassium pertechnetate(VII) to the water protects the steel from corrosion, even at elevated temperatures. For this reason, pertechnetate has been used as an anodic corrosion inhibitor for steel, although technetium's radioactivity poses problems that limit this application to self-contained systems. While chromate, for example, can also inhibit corrosion, it requires a concentration ten times as high. In one experiment, a specimen of carbon steel was kept in an aqueous solution of pertechnetate for 20 years and remained uncorroded. The mechanism by which pertechnetate prevents corrosion is not well understood, but it seems to involve the reversible formation of a thin surface layer (passivation). One theory holds that pertechnetate reacts with the steel surface to form a layer of technetium dioxide that prevents further corrosion; the same effect explains how iron powder can be used to remove pertechnetate from water. The effect disappears rapidly if the concentration of pertechnetate falls below the minimum concentration or if too high a concentration of other ions is added.
As noted, the radioactive nature of technetium (3 MBq/L at the concentrations required) makes this corrosion protection impractical in almost all situations. Nevertheless, corrosion protection by pertechnetate ions was proposed (but never adopted) for use in boiling water reactors.
Technetium plays no natural biological role and is not normally found in the human body. It is produced in quantity by nuclear fission and spreads more readily than many radionuclides. It appears to have low chemical toxicity: for example, no significant changes in blood composition, body and organ weights, or food consumption were detected in rats that ingested up to 15 µg of technetium-99 per gram of food for several weeks. The radiological toxicity of technetium (per unit of mass) depends on the compound, the type of radiation emitted by the isotope in question, and the isotope's half-life.
All isotopes of technetium must be handled carefully. The most common isotope, technetium-99, is a weak beta emitter; such radiation is stopped by the walls of laboratory glassware. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk. For most work, careful handling in a fume hood is sufficient, and a glove box is not needed.