Saint Vincent and the Grenadines
Saint Vincent and the Grenadines, also frequently known simply as Saint Vincent, is an Anglophone Caribbean country of several islands in the Lesser Antilles island arc, in the southeast of the Windward Islands. It lies in the West Indies, at the southern end of the eastern border of the Caribbean Sea, where the sea meets the Atlantic Ocean.
Its territory consists of the main island of Saint Vincent and the northern two-thirds of the Grenadines, a chain of 32 smaller islands. Some of the Grenadines are inhabited (Bequia, Mustique, Union Island, Canouan, Palm Island, Mayreau and Young Island), while others are not (Tobago Cays, Petit Saint Vincent, Baliceaux, Bettowia, Quatre, Petite Mustique, Savan and Petit Nevis). Most of Saint Vincent and the Grenadines lies within Hurricane Alley.
To the north of Saint Vincent lies Saint Lucia, to the east Barbados, and to the south Grenada. Saint Vincent and the Grenadines is a densely populated country for its size, with over 300 inhabitants per square kilometre.
Kingstown is the capital and main port. Saint Vincent has a British colonial history, and is now part of the Organisation of Eastern Caribbean States, CARICOM, the Commonwealth of Nations, the Bolivarian Alliance for the Americas and the Community of Latin American and Caribbean States (CELAC).
Christopher Columbus, the first European to reach the island, named it after St. Vincent of Saragossa, on whose feast day (22 January 1498) he first sighted it. The name of the Grenadines refers to the Spanish city of Granada; to differentiate the islands from Grenada, which shares that name, the diminutive form was used. Before the arrival of the Spaniards, the Carib natives who inhabited the island of St. Vincent called it Youloumain, in honour of Youlouca, the spirit of the rainbows, whom they believed to inhabit the island.
Before the arrival of Europeans and Africans in the 16th century, various Amerindian groups passed through or settled on St. Vincent and the Grenadines, including the Ciboney, Arawak, and Carib people. The island now known as Saint Vincent was originally named Youloumain by the native Island Caribs who called themselves Kalina/Carina ("l" and "r" being pronounced the same in their language).
It is thought that Christopher Columbus sighted the island in 1498, giving it the name St Vincent. The indigenous Garifuna people, who became known as the "Black Caribs", aggressively prevented European settlement on Saint Vincent.
Various attempts by the English and Dutch to claim the island proved unsuccessful, and it was the French who were first able to colonise it, settling at Barrouallie on the leeward side of St Vincent in 1719. The French brought enslaved Africans to work plantations of sugar, coffee, indigo, tobacco, cotton and cocoa.
The British captured the island and drove the French out of Barrouallie during the Seven Years' War, a claim confirmed by the Treaty of Paris (1763). On taking control of the island in 1763, the British laid the foundations of Fort Charlotte and brought enslaved Africans to work the island's plantations. The Black Caribs, however, opposed the British presence and entered into open conflict against them, starting the First Carib War, which lasted from 1772 to 1773.
During the Anglo-French War (1778–1783) the French recaptured St Vincent in 1779. However, the British regained control under the Treaty of Versailles (1783).
The uneasy peace between the British and the Black Caribs led to the Second Carib War, which lasted from 1795 to 1796. The Black Caribs were led by Garifuna Paramount Chief Joseph Chatoyer and supported by the French, notably the radical Victor Hugues from the island of Martinique. Their uprising was finally put down in 1797 by British General Sir Ralph Abercromby; under the ensuing peace treaty, almost 5,000 Black Caribs were exiled to Roatán, an island off the coast of Honduras, and to Belize and to Baliceaux in the Grenadines.
In 1806 the building of Fort Charlotte was completed.
The volcano La Soufrière erupted in 1812, resulting in considerable destruction.
The British abolished slavery in Saint Vincent (as well as in the other British West Indies colonies) in 1834; an apprenticeship period followed, ending in 1838. Its end brought labour shortages on the plantations, which were initially addressed by the immigration of indentured servants: in the late 1840s many Portuguese immigrants arrived from Madeira, and between 1861 and 1888 shiploads of Indian labourers arrived. Conditions remained harsh for both former slaves and immigrant agricultural workers, as depressed world sugar prices kept the economy stagnant until the turn of the century. The economy then went into decline, with many landowners abandoning their estates and leaving the land to be cultivated by liberated slaves.
The Opobo king Jaja of Nigeria was exiled to Saint Vincent after his 1887 arrest by the British for shipping cargoes of palm oil directly to Liverpool without the intermediation of the National African Company.
In 1902, the volcano La Soufrière erupted, killing 1,500-2,000 people; much farmland was damaged, and the economy deteriorated.
Saint Vincent and the Grenadines passed through various stages of colonial status under the British. A representative assembly was authorised in 1776, Crown Colony government was installed in 1877, a legislative council with a limited franchise was created in 1925, and universal adult suffrage was granted in 1951. During its control of Saint Vincent and the Grenadines, Britain made several attempts to unify the island with other Windward Islands under a single administration, with the aim of simplifying British control in the region. The most notable attempt, in the late 1950s, was the West Indies Federation, which united Britain's Caribbean colonies politically and was driven by a desire for collective independence from British rule. However, the Federation collapsed in 1962.
Saint Vincent was granted "associate statehood" status by Britain on 27 October 1969. This gave Saint Vincent complete control over its own internal affairs but fell short of full legal independence.
In April 1979 La Soufrière erupted again. Although no one was killed, thousands were evacuated and again there was extensive agricultural damage.
On 27 October 1979, Saint Vincent and the Grenadines became the last of the Windward Islands to gain full independence, and this date is now the country's Independence Day, a public holiday. The country opted to remain within the Commonwealth, retaining Queen Elizabeth II as monarch, represented locally by a Governor-General.
Milton Cato of the centre-left Saint Vincent Labour Party (SVLP) was the country's first Prime Minister (he had been Premier since 1974), ruling until he was defeated in the 1984 Vincentian general election by James Fitz-Allen Mitchell of the centre-right New Democratic Party (NDP). During Cato's time in office there was a brief rebellion on Union Island in December 1979 led by Lennox 'Bumba' Charles; apparently inspired by the recent revolution on Grenada, Charles alleged neglect of Union by the central government. However, the revolt was swiftly put down and Charles arrested. There were also a series of strikes in the early 1980s. James Mitchell remained Prime Minister for 16 years until 2000, winning three consecutive elections. Mitchell was at the forefront of attempts to improve regional integration. In 1980 and 1987 hurricanes damaged many banana and coconut plantations. Hurricane seasons were also very active in 1998 and 1999, with Hurricane Lenny in 1999 causing extensive damage to the west coast of the island.
In 2000 Arnhim Eustace became Prime Minister after taking over the leadership of the NDP following Mitchell's retirement; he was defeated a year later by Ralph Gonsalves of the Unity Labour Party (the successor party to the SVLP). Gonsalves—a left-winger known in the country as "Comrade Ralph"—has argued that European nations owe Caribbean nations reparations for their role in the Atlantic slave trade. Gonsalves won a second term in 2005, a third term in 2010, and a fourth term in 2015.
In 2009, a referendum was held on a proposal to adopt a new constitution that would make the country a republic, replacing Queen Elizabeth II as head of state with a non-executive President, a proposal supported by Prime Minister Gonsalves. A two-thirds majority was required, and it was defeated by 29,019 votes (55.64 per cent) to 22,493 (43.13 per cent).
Saint Vincent and the Grenadines lies to the west of Barbados, south of Saint Lucia and north of Grenada in the Windward Islands of the Lesser Antilles, an island arc of the Caribbean Sea. The islands of Saint Vincent and the Grenadines include the main island of Saint Vincent and the northern two-thirds of the Grenadines, a chain of smaller islands stretching south from Saint Vincent to Grenada. In all, 32 islands and cays make up St Vincent and the Grenadines (SVG). Nine are inhabited: the main island of St Vincent, plus Young Island, Bequia, Mustique, Canouan, Union Island, Mayreau, Petit St Vincent and Palm Island in the Grenadines. Prominent uninhabited islands of the Grenadines include Petit Nevis, used by whalers, and Petit Mustique, which was the centre of a prominent real-estate scam in the early 2000s.
The capital of Saint Vincent and the Grenadines is Kingstown, on Saint Vincent. The main island of Saint Vincent is by far the largest in the country, and the Grenadine islands belonging to Saint Vincent stretch from north to south between it and Grenada.
The island of Saint Vincent is volcanic and heavily forested, and includes little level ground. The windward side of the island is very rocky and steep, while the leeward side has more sandy beaches and bays. Saint Vincent's highest peak is the volcano La Soufrière. Other major mountains on St Vincent are (from north to south) Richmond Peak, Mount Brisbane, Colonarie Mountain, Grand Bonhomme, Petit Bonhomme and Mount St Andrew.
Saint Vincent and the Grenadines is a parliamentary democracy and constitutional monarchy, with Elizabeth II as Queen of Saint Vincent and the Grenadines. She does not reside in the islands and is represented as head of state in the country by the Governor-General of Saint Vincent and the Grenadines, currently Susan Dougan (since 1 August 2019).
The office of Governor-General has mostly ceremonial functions including the opening of the islands' House of Assembly and the appointment of various government officials. Control of the government rests with the elected Prime Minister and his or her cabinet. The current Prime Minister is Ralph Gonsalves, elected in 2001 as head of the Unity Labour Party.
The legislative branch of government is the unicameral House of Assembly of Saint Vincent and the Grenadines, comprising 15 elected members representing single-member constituencies and six appointed members known as Senators. The parliamentary term of office is five years, although the Prime Minister may call elections at any time.
The judicial branch of government comprises the district courts and the Eastern Caribbean Supreme Court, with the Privy Council in London serving as the court of last resort.
The two political parties with parliamentary representation are the New Democratic Party (NDP) and the Unity Labour Party (ULP). The parliamentary opposition is formed by the largest minority party in the general elections and is headed by the Leader of the Opposition. The current opposition leader is Godwin Friday.
Saint Vincent has no formal armed forces, although the Royal Saint Vincent and the Grenadines Police Force includes a Special Service Unit as well as a militia that has a supporting role on the island.
In 2017, Saint Vincent signed the UN treaty on the Prohibition of Nuclear Weapons.
Administratively, Saint Vincent and the Grenadines is divided into six parishes. Five parishes are on Saint Vincent, while the sixth is made up of the Grenadine islands. Kingstown is located in the Parish of Saint George and is the capital city and central administrative centre of the country.
Acts of gross indecency, which may be defined to include homosexual activity, are illegal in Saint Vincent and the Grenadines. Section 148 of the Criminal Code states that "Any person, who in public or private, commits an act of gross indecency with another person of the same sex, or procures or attempts to procure another person of the same sex to commit an act of gross indecency with him or her, is guilty of an offence and liable to imprisonment for five years".
Saint Vincent and the Grenadines maintains close ties to Canada, the United Kingdom and the US, and cooperates with regional political and economic organisations such as the Organisation of Eastern Caribbean States (OECS) and CARICOM. The island nation's sixth embassy overseas was opened on 8 August 2019 in Taipei, after Prime Minister Ralph Gonsalves' official visit to the Republic of China; the other five are located in London, Washington D.C., Havana, Caracas and Brussels.
On 6 July 1994, at the Sherbourne Conference Centre in St Michael, Barbados, then Prime Minister James Mitchell (who was subsequently knighted) signed the Double Taxation Relief (CARICOM) Treaties on behalf of the Government of St. Vincent and the Grenadines. Seven other countries signed the agreement that day: Antigua and Barbuda, Belize, Grenada, Jamaica, St Kitts and Nevis, St Lucia, and Trinidad and Tobago.
An eighth country, Guyana, signed the agreement on 19 August 2016.
This treaty covered taxes, residence, tax jurisdictions, capital gains, business profits, interest, dividends, royalties and other areas.
On 30 June 2014, St. Vincent and the Grenadines signed a Model 1 agreement with the United States of America with respect to the Foreign Account Tax Compliance Act (FATCA).
The agreement entered "In Force" status on 13 May 2016.
St Vincent and the Grenadines is a member of the United Nations, the Commonwealth of Nations, the Organization of American States, and the Association of Caribbean States (ACS).
In September 2017, at the 72nd Session of the UN General Assembly, the Prime Ministers of the Solomon Islands, Tuvalu, Vanuatu and Saint Vincent and the Grenadines called for UN action on alleged human rights abuses against Western New Guinea's indigenous Papuans. Western New Guinea has been occupied by Indonesia since 1963, and more than 100,000 Papuans have died during the 50-year Papua conflict.
The Charter of the OAS was signed in Bogotá in 1948 and has since been amended by several protocols, each named after the city and year in which it was signed (for example, the Protocol of Managua, 1993).
St Vincent and the Grenadines entered the OAS system on 27 October 1981, according to the OAS website.
The seventh Summit of the Americas was held in Panama City, Panama, in 2015, and the eighth in Lima, Peru, in 2018, according to the website of the Summits of the Americas.
With St Vincent and the Grenadines having at least two groups of indigenous people, contributions from the country on this topic are expected at future Indigenous Leaders Summits of Americas (ILSA).
The OAS's position with respect to indigenous people appears to have developed over the years. According to the OAS website, "The OAS has supported and participated in the organisation of Indigenous Leaders Summits of Americas (ILSA)"; the most recent statement by the heads of state of the hemisphere was made in paragraph 86 of the Declaration of Commitments of Port of Spain in 2009.
The Draft American Declaration on the Rights of Indigenous Peoples appears to be a working document; the most recent Meeting for Negotiations in the Quest for Consensus, the eighteenth, was held in May 2015, according to the OAS website.
In 2013, Saint Vincent called for European nations to pay reparations for the slave trade.
Saint Vincent protests against Venezuela's claim to give full effect to Aves (Bird) Island, which would create a Venezuelan EEZ/continental shelf extending over a large portion of the Caribbean Sea.
Agriculture, dominated by banana production, is the most important sector of this lower-middle-income economy. The services sector, based mostly on a growing tourist industry, is also important. The government has been relatively unsuccessful at introducing new industries, and unemployment remains high, falling only from 19.8% in the 1991 census to 15% in 2001. The continuing dependence on a single crop represents the biggest obstacle to the islands' development, as tropical storms have wiped out substantial portions of the banana crop in many years.
There is a small manufacturing sector and a small offshore financial sector serving international businesses, whose secrecy laws have caused some international concern. Demand in the country for international financial services, such as stock-exchange and financial-intermediary activities, is increasing. In addition, the natives of Bequia are permitted to hunt up to four humpback whales per year under IWC subsistence quotas.
The tourism sector has considerable potential for development. The recent filming of the "Pirates of the Caribbean" movies on the island has helped to expose the country to more potential visitors and investors. Recent growth has been stimulated by strong activity in the construction sector and an improvement in tourism.
Argyle International Airport is the country's new international airport. The new facility opened on 14 February 2017, replacing the former E.T. Joshua Airport. The airport is on the island's east coast, about 8.3 km (5.17 miles) from Kingstown.
In 2010, Saint Vincent and the Grenadines had 21,700 telephone land lines. Its land telephone system is fully automatic and covers the entire island and all of the inhabited Grenadine islands. In 2002, there were 10,000 mobile phones. By 2010, this number had increased to 131,800. Mobile phone service is available in most areas of Saint Vincent as well as the Grenadines.
Saint Vincent has two ISPs (Digicel, Flow) that provide cellular telephone and internet service.
The ethnic composition is estimated at 66% African descent, 19% mixed descent, 6% East Indian, 4% European (mainly Portuguese), 2% Island Carib and 3% other. Most Vincentians are the descendants of African people brought to the island to work on plantations. Other ethnic groups, such as the Portuguese (from Madeira) and East Indians, were brought to the island to work on the plantations after the British abolition of slavery. There is also a growing Chinese population.
English is the official language. Most Vincentians speak Vincentian Creole. English is used in education, government, religion, and other formal domains, while Creole (or 'dialect' as it is referred to locally) is used in informal situations such as in the home and among friends.
According to the 2001 census, 81.5% of the population of Saint Vincent and the Grenadines identified themselves as Christian, 6.7% had another religion, 8.8% had no religion, and 1.5% did not state a religion.
Anglicanism constitutes the largest religious category, with 17.8% of the population. Pentecostals are the second largest group (17.6%). The next largest group are Methodists (10.9% of the population), followed by Seventh-day Adventists (10.2%) and Baptists (10.0%). Other Christians include Jehovah's Witnesses (0.6%), Roman Catholics (7.5%), Evangelicals (2.8%), Church of God (2.5%), Brethren Christian (1.3%), and the Salvation Army (0.3%).
Between 1991 and 2001 the number of Anglicans, Brethren, Methodists and Roman Catholics decreased, while the number of Pentecostals, Evangelicals and Seventh-day Adventists increased.
The number of non-Christians is small. These religious groups include the Rastafarians (1.5% of the population), Hindus and Muslims.
Cricket, rugby and association football are most popular among men whereas netball is most popular among women. Basketball, volleyball and tennis are also very popular.
The country's top football league is the NLA Premier League, which supplies the national (association) football team with most of its players. A notable Vincentian footballer is Ezra Hendrickson, a former national team captain who played for several Major League Soccer clubs in the United States and is now an assistant coach with Seattle Sounders FC.
The country regularly participates at the Caribbean Basketball Championship where a men's team and a women's team compete. Saint Vincent and the Grenadines also has its own national rugby union team which is ranked 84th in the world. Other notable sports played at the regional level include track and field.
Music popular in Saint Vincent and the Grenadines includes big drum, calypso, soca, steelpan and reggae. String band music, quadrille and traditional storytelling are also popular. One of the most successful natives of St Vincent is Kevin Lyttle, who was named Cultural Ambassador for the island on 19 September 2013.
The national anthem of Saint Vincent and the Grenadines is "Saint Vincent, Land so beautiful", adopted upon independence in 1979.
Saint Vincent has twelve FM radio stations, including 88.9 Adoration FM, 89.1 Jem Radio, 89.7 NBC Radio, 95.7 and 105.7 Praise FM, 96.7 Nice Radio, 97.1 Hot 97, 98.3 Star FM, 99.9 We FM, 103.7 Hitz, 102.7 EZee Radio, 104.3 Xtreme FM and 106.9 Boom FM. There are also several internet radio stations, including Chronicles Christian Radio. The country has one television broadcast station, ZBG-TV (SVGTV), and one cable television provider.
The St Vincent and the Grenadines Broadcasting Corporation is the parent company of SVGTV and Magic 103.7.
History of Saint Vincent and the Grenadines
Before the arrival of Europeans and Africans in the 16th century, various Amerindian groups passed through or settled on St. Vincent and the Grenadines, including the Ciboney, Arawak, and Carib people. These groups likely originated in the Orinoco Valley of South America and migrated north through Trinidad and the Lesser Antilles.
By the time Christopher Columbus passed near St. Vincent on his third voyage in 1498, the Caribs occupied the island after displacing the Arawaks a few centuries earlier.
Columbus and the Spanish conquistadors largely ignored St. Vincent and the smaller Grenadine islands nearby, but focused instead on the pursuit of gold and silver in Central and South America. They did embark on slaving expeditions in and around St. Vincent following royal sanction in 1511, driving the Carib inhabitants to the rugged interior, but the Spanish made no attempt to settle the island.
Carib Indians aggressively prevented European settlement on St. Vincent until the 18th century. African slaves, whether shipwrecked or escaped from St. Lucia or Grenada and seeking refuge in St. Vincent, intermarried with the Caribs and became known as "black Caribs"; those of mixed African-Carib ancestry are now known as "Garifuna".
The first Europeans to occupy St. Vincent were the French. However, following a series of wars and peace treaties, the islands were eventually ceded to the British. While the English were the first to lay claim to St. Vincent in 1627, the French (centered on the island of Martinique) would be the first European settlers on the island when they established their first colony at Barrouallie on the Leeward side of St. Vincent in 1719. The French settlers cultivated coffee, tobacco, indigo, corn, and sugar on plantations worked by African slaves.
St. Vincent was ceded to Britain by the Treaty of Paris (1763). From 1763 until independence, St. Vincent passed through various stages of colonial status under the British.
Friction between the British and the Caribs led to the First Carib War.
The First Carib War (1769–1773) was fought over British attempts to extend colonial settlements into Black Carib territories, and resulted in a stalemate and an unsatisfactory peace agreement.
Led primarily by Black Carib chieftain Joseph Chatoyer, the Caribs successfully defended the windward side of the island against a military survey expedition in 1769, and rebuffed repeated demands that they sell their land to representatives of the British colonial government. Frustrated by what they saw as intransigence, the British commissioners launched a full-scale military assault on the Caribs in 1772 with the objective of subjugating and deporting them from the island.
British unfamiliarity with the windward lands of the island and effective Carib defence of the island's difficult mountain terrain blunted the British advance, and political opposition to the expedition in London prompted an enquiry and calls for it to be ended. With military matters at a stalemate, a peace agreement was signed in 1773 that delineated boundaries between British and Carib areas of the island.
A representative assembly was authorized by the British in 1776.
France captured Saint Vincent in 1779 during the American War of Independence, but it was restored to Britain by the Treaty of Versailles (1783).
The Second Carib War was begun in March 1795 by the Caribs, who harboured long-standing grievances against the British colonial administration and were supported by French Revolutionary advisors, including the radical Victor Hugues. The Caribs successfully gained control of most of the island except for the immediate area around Kingstown, which was saved from direct assault on several occasions by the timely arrival of British reinforcements. British efforts to penetrate and control the interior and windward areas of the island were repeatedly frustrated by incompetence, disease, and effective Carib defences, which were eventually supplemented by the arrival of some French troops from Martinique. A major military expedition by General Ralph Abercromby finally crushed Carib opposition in 1797. More than 5,000 black Caribs were deported from Saint Vincent, first to the island of Baliceaux, off Bequia, where half of them died in concentration camps, and then to the island of Roatán off the coast of present-day Honduras, where they later became known as the Garifuna people.
Like the French before them, the British also used African slaves to work plantations of sugar, coffee, indigo, tobacco, cotton and cocoa. Decades after the success of the Haitian Revolution, the British abolished slavery in 1834; full emancipation was achieved in 1838. The economy then went into a period of decline, with many landowners abandoning their estates and leaving the land to be cultivated by liberated slaves. The resulting labour shortages on the plantations attracted Portuguese immigrants in the 1840s and East Indians in the 1860s as labourers. Conditions remained harsh for both former slaves and immigrant agricultural workers, as depressed world sugar prices kept the economy stagnant until the turn of the 20th century.
A Crown Colony government was installed in 1877, a Legislative Council created in 1925, and universal adult suffrage granted in 1951. During this period, the British made several unsuccessful attempts to affiliate St. Vincent with other Windward Islands in order to govern the region through a unified administration. The most notable was the West Indies Federation, which collapsed in 1962.
The La Soufrière volcano erupted in 1812 and again in 1902, when much of the island was devastated and many people were killed. In 1979 it erupted again, this time with no fatalities. In the same year, St Vincent and the Grenadines gained full independence from Britain, while remaining a member of the Commonwealth of Nations.
St. Vincent was granted associate statehood status on October 27, 1969, giving it complete control over its internal affairs. Following a referendum in 1979, St. Vincent and the Grenadines became the last of the Windward Islands to gain independence on 27 October 1979.
Natural disasters have plagued the country throughout the 20th century. In 1902, Soufrière volcano erupted, killing 2,000 people. Much farmland was damaged, and the economy deteriorated. In April 1979, La Soufrière erupted again. Although no one was killed, thousands had to be evacuated, and there was extensive agricultural damage.
The island also suffers from hurricanes. On September 11, 1898, a six-hour hurricane devastated Barrouallie, leaving the town almost completely destroyed. More recently, in 1980 and 1987, hurricanes devastated banana and coconut plantations; 1998 and 1999 also saw very active hurricane seasons, with Hurricane Lenny in 1999 causing extensive damage to the west coast of the island.
Demographics of Saint Vincent and the Grenadines
This article is about the demographics of the population of Saint Vincent and the Grenadines, including population density, ethnicity, religious affiliations and other aspects of the population.
According to the 2001 population census Saint Vincent and the Grenadines has a population of 106,253, a decrease of 256 since the 1991 census.
The population decrease of St. Vincent is caused by a high rate of emigration, as natural growth is positive.
Saint Vincent's population is predominantly African/black (77,390 in 2001; 72.8% of the total population) or of mixed African-European descent (21,303; 20%). 1.4% of the population is East Indian (1,436 residents in 2001) and 1.4% white (608 Portuguese and 870 other white).
Saint Vincent & the Grenadines also has a small Black Carib population, which increased from 3,347 at the 1991 census (3.1% of the population) to 3,818 at the 2001 census (3.6%). The Black Caribs are indigenous to the island of Saint Vincent, formed in the 18th century by the mixture of Carib Amerindians and black slaves. Part of their community (now known as the Garifuna) was expelled from St. Vincent in 1797 and deported to the island of Roatán, Honduras, from where they migrated to the Caribbean coast of mainland Central America and spread as far as Belize and Nicaragua. While the Garifuna have retained their Carib language, the Black Caribs of Saint Vincent and the Grenadines speak Creole English.
The remaining 0.8% of the population includes Chinese and people from the Middle East.
While the official language is English most Vincentians speak Vincentian Creole, an English-based creole, as their mother tongue. English is used in education, government, religion, and other formal domains, while Creole (or "dialect" as it is referred to locally) is used in informal situations such as in the home and among friends.
Religious affiliation: Protestant 75% (Anglican 47%, Methodist 28%); Roman Catholic 13%; other (includes Hindu, Seventh-Day Adventist, other Protestant) 12%.
According to the 2001 census, 81.5% of the population of Saint Vincent and the Grenadines is considered Christian, 6.7% has another religion, 8.8% has no religion, and 1.5% did not state a religion.
Anglicanism constitutes the largest denomination, with 47.8% of the population. Methodists are the second largest group (28%). The next largest group are Roman Catholics (13% of the population), followed by other religions, including Hindu, Seventh-day Adventist and other Protestant groups (12% of the population).
Between 1991 and 2001 the number of Anglicans, Brethren, Methodists and Roman Catholics decreased, while the number of Pentecostals, Evangelicals and Seventh-day Adventists increased.
Economy of Saint Vincent and the Grenadines
The economy of Saint Vincent and the Grenadines is heavily dependent on agriculture; the country is the world's leading producer of arrowroot and also grows other exotic fruits, vegetables and root crops. Bananas alone account for upwards of 60% of the work force and 50% of merchandise exports. Such reliance on a single crop makes the economy vulnerable to external factors. St. Vincent's banana growers benefited from preferential access to the European market; in view of the European Union's announced phase-out of this preferred access, economic diversification is a priority.
Tourism has grown to become a very important part of the economy. In 1993, tourism supplanted banana exports as the chief source of foreign exchange. The Grenadines have become a favourite of the up-market yachting crowd. The trend toward increasing tourism revenues will likely continue. In 1996, new cruise ship and ferry berths came on-line, sharply increasing the number of passenger arrivals. In 1998, total visitor arrivals stood at 202,109 with United States visitors constituting 2.7%, as most of the nation's tourists are from other countries in the Caribbean and the United Kingdom. Figures from 2005 record tourism's contribution to the economy at US$90 million.
St. Vincent and the Grenadines is a beneficiary of the U.S. Caribbean Basin Initiative. The country belongs to the Caribbean Community (CARICOM), which has signed a framework agreement with the United States to promote trade and investment in the region.
Distribution of family income - Gini index: N/A
Agriculture - products: bananas, coconuts, sweet potatoes, spices; small numbers of cattle, sheep, pigs, goats; fish
Industrial production growth rate: -0.9% (1997 estimate)
Electricity - production: 115 million kWh (2005)
Electricity - consumption: 107 million kWh (2005)
Current account balance: $-0.22 billion (2013 estimate)
Reserves of foreign exchange and gold: $115 million (2013 estimate); $111 million (2012 estimate)
2010 Index of Economic Freedom rank: 49th
Exchange rates: East Caribbean dollars per US dollar - 2.7 (2007), 2.7 (2006), 2.7 (2005), 2.7 (2003)
Telecommunications in Saint Vincent and the Grenadines
Telephones - main lines in use: 20,500 (1998)
Telephones - mobile cellular: 83 (1993)
Telephone system: fully automatic; covers the entire island and all of the inhabited Grenadine islands
Radio broadcast stations: AM 0 (ZBG-AM 700 went off air in 2010), FM 3, shortwave 0 (1998)
Radios: 77,000 (1997)
Television broadcast stations: 1 (plus three repeaters) (1997)
Televisions: 18,000 (1997)
Internet Service Providers (ISPs): Cable and Wireless/FLOW
Country code (Top level domain): VC | https://en.wikipedia.org/wiki?curid=27234 |
Transport in Saint Vincent and the Grenadines
There are no railways in Saint Vincent and the Grenadines.
As of 1996, there were 829 km of highways, of which 580 km were paved.
Ports and harbours:
Kingstown
Merchant marine:
total: 825 ships (1,000 GT or over) totaling 7,253,092 GT
ships by type: barge carrier 1, bulk 142, cargo 400, chemical tanker 31, combination bulk 10, combination ore/oil 5, container 47, liquefied gas 5, livestock carrier 5, multi-functional large-load carrier 3, passenger 3, petroleum tanker 60, refrigerated cargo 41, roll-on/roll-off 51, short-sea passenger 12, specialized tanker 8, vehicle carrier 1 (1999 est.)
note: a flag of convenience registry; includes ships from 20 countries, among which are Croatia 17, Slovenia 7, People's Republic of China 5, Greece 5, United Arab Emirates 3, Norway 2, Japan 2, and Ukraine 2 (1998 est.)
Airports: 6 (2005)
Airports - with paved runways:
total: 5
914 to 1,523 m: 4
under 914 m: 1 (2005)
There is one airport with an unpaved runway, under 914 m (2005 est.)
Samoa
Samoa, officially the Independent State of Samoa and known until 1997 as Western Samoa, is an island country consisting of two main islands, Savai'i and Upolu, two smaller inhabited islands, Manono and Apolima, and several small uninhabited islands, including the Aleipata Islands (Nu'utele, Nu'ulua, Fanuatapu and Namua). The capital city is Apia. The Lapita people discovered and settled the Samoan Islands around 3,500 years ago, developing the Samoan language and a Samoan cultural identity.
Samoa is a unitary parliamentary democracy with eleven administrative divisions. The sovereign state is a member of the Commonwealth of Nations. Western Samoa was admitted to the United Nations on 15 December 1976. The entire island group, which includes American Samoa, was called "Navigator Islands" by European explorers before the 20th century because of the Samoans' seafaring skills. The country was governed by New Zealand until its independence in 1962.
In July 2017, Va'aletoa Sualauvi II became the head of state, succeeding Tui Ātua Tupua Tamasese Efi. Prime Minister Tuila'epa returned to power after a landslide victory in March 2016, beginning his fifth term as premier.
Samoa was discovered and settled by the Samoans' Lapita ancestors (Austronesian people speaking Oceanic languages); New Zealand scientists have dated remains in Samoa to about 2900–3500 years ago. These were found at a Lapita site at Mulifanua, and the findings were published in 1974.
The origins of the Samoans are closely studied in modern research about Polynesia in various scientific disciplines such as genetics, linguistics and anthropology. Scientific research is ongoing, although a number of different theories exist, including one proposing that the Samoans originated from Austronesian predecessors during the terminal eastward Lapita expansion period from Southeast Asia and Melanesia between 2,500 and 1,500 BCE.
Intimate sociocultural and genetic ties were maintained between Samoa, Fiji, and Tonga, and the archaeological record supports oral tradition and native genealogies that indicate inter-island voyaging and intermarriage between pre-colonial Samoans, Fijians, and Tongans. Notable figures in Samoan history included the Tui Manu'a line, Queen Salamasina, King Fonoti and the four tama-a-aiga: Malietoa, Tupua Tamasese, Mata'afa and Tuimalealiifano. Nafanua was a famous woman warrior who was deified in ancient Samoan religion and whose patronage was highly sought after by successive Samoan rulers.
Contact with Europeans began in the early 18th century. Jacob Roggeveen, a Dutchman, was the first known non-Polynesian to sight the Samoan islands in 1722. This visit was followed by French explorer Louis-Antoine de Bougainville, who named them the "Navigator Islands" in 1768. Contact was limited before the 1830s, which is when English missionaries, whalers and traders began arriving.
Visits by American trading and whaling vessels were important in the early economic development of Samoa. The Salem brig "Roscoe" (Captain Benjamin Vanderford), in October 1821, was the first American trading vessel known to have called, and the "Maro" (Captain Richard Macy) of Nantucket, in 1824, was the first recorded United States whaler at Samoa. The whalers came for fresh drinking water, firewood and provisions, and later, they recruited local men to serve as crewmen on their ships. The last recorded whaler visitor was the "Governor Morton" in 1870.
Christian missionary work in Samoa began in 1830, when John Williams of the London Missionary Society arrived in Sapapali'i from the Cook Islands and Tahiti. According to Barbara A. West, "The Samoans were also known to engage in 'headhunting', a ritual of war in which a warrior took the head of his slain opponent to give to his leader, thus proving his bravery." However, Robert Louis Stevenson, who lived in Samoa from 1889 until his death in 1894, wrote that "the Samoans are gentle people."
The Germans, in particular, began to show great commercial interest in the Samoan Islands, especially on the island of Upolu, where German firms monopolised copra and cocoa bean processing. The United States laid its own claim, based on commercial shipping interests in Pearl River in Hawaii and Pago Pago Bay in Eastern Samoa, and forced alliances, most conspicuously on the islands of Tutuila and Manu'a which became American Samoa.
Britain also sent troops to protect British business enterprises, harbour rights and its consulate office. This was followed by an eight-year civil war, during which each of the three powers supplied arms, training and, in some cases, combat troops to the warring Samoan parties. The Samoan crisis came to a critical juncture in March 1889, when all three colonial contenders sent warships into Apia harbour and a larger-scale war seemed imminent. A massive storm on 15 March 1889 damaged or destroyed the warships, ending the military conflict.
The Second Samoan Civil War reached a head in 1898 when Germany, the United Kingdom, and the United States were locked in dispute over who should control the Samoa Islands. The Siege of Apia occurred in March 1899. Samoan forces loyal to Prince Tanu were besieged by a larger force of Samoan rebels loyal to Mata'afa Iosefo. Supporting Prince Tanu were landing parties from four British and American warships. After several days of fighting, the Samoan rebels were finally defeated.
American and British warships shelled Apia on 15 March 1899, including the USS "Philadelphia". Germany, the United Kingdom and the United States quickly resolved to end the hostilities and divided the island chain at the Tripartite Convention of 1899, signed at Washington on 2 December 1899 with ratifications exchanged on 16 February 1900.
The eastern island-group became a territory of the United States (the Tutuila Islands in 1900 and officially Manu'a in 1904) and was known as American Samoa. The western islands, by far the greater landmass, became German Samoa. The United Kingdom had vacated all claims in Samoa and in return received (1) termination of German rights in Tonga, (2) all of the Solomon Islands south of Bougainville, and (3) territorial alignments in West Africa.
The German Empire governed the western part of the Samoan archipelago from 1900 to 1914. Wilhelm Solf was appointed the colony's first governor. In 1908, when the non-violent Mau a Pule resistance movement arose, Solf did not hesitate to banish the Mau leader Lauaki Namulau'ulu Mamoe to Saipan in the German Northern Mariana Islands.
The German colonial administration governed on the principle that "there was only one government in the islands." Thus, there was no Samoan "Tupu" (king), nor an "alii sili" (similar to a governor), but two "Fautua" (advisors) were appointed by the colonial government. "Tumua" and "Pule" (traditional governments of Upolu and Savai'i) were for a time silent; all decisions on matters affecting lands and titles were under the control of the colonial Governor.
In the first month of World War I, on 29 August 1914, troops of the New Zealand Expeditionary Force landed unopposed on Upolu and seized control from the German authorities, following a request by Great Britain for New Zealand to perform this "great and urgent imperial service."
From the end of World War I until 1962, New Zealand controlled Samoa as a Class C Mandate under trusteeship through the League of Nations, then through the United Nations. Between 1919 and 1962, Samoa was administered by the Department of External Affairs, a government department which had been specially created to oversee New Zealand's Island Territories and Samoa. In 1943, this Department was renamed the Department of Island Territories after a separate Department of External Affairs was created to conduct New Zealand's foreign affairs. During the period of New Zealand control, their administrators were responsible for two major incidents.
In the first incident, approximately one fifth of the Samoan population died in the influenza epidemic of 1918–1919.
In 1918, during the final stages of World War I, the Spanish flu was spreading rapidly from country to country. There had been no epidemic of pneumonic influenza in Western Samoa before the arrival of the SS "Talune" from Auckland on 7 November 1918. The New Zealand administration allowed the ship to berth in breach of quarantine; within seven days of its arrival, influenza became epidemic in Upolu and then spread rapidly throughout the rest of the territory. Samoa suffered the most of all Pacific islands, with 90% of the population infected; 30% of adult men, 22% of adult women and 10% of children died. A Royal Commission of Inquiry in 1919 confirmed the cause of the epidemic, concluding that the disease had been introduced by the "Talune".
The second major incident arose out of an initially peaceful protest by the Mau (which literally translates as "strongly held opinion"), a non-violent popular movement which had its beginnings in the early 1900s on Savai'i, led by Lauaki Namulauulu Mamoe, an orator chief deposed by Solf. In 1909, Lauaki was exiled to Saipan and died en route back to Samoa in 1915.
By 1918, Samoa had a population of some 38,000 Samoans and 1,500 Europeans.
However, Samoans greatly resented New Zealand's colonial rule, and blamed inflation and the catastrophic 1918 flu epidemic on its misrule. By the late 1920s the resistance movement against colonial rule had gathered widespread support. One of the Mau leaders was Olaf Frederick Nelson, a half Samoan and half Swedish merchant. Nelson was eventually exiled during the late 1920s and early 1930s, but he continued to assist the organisation financially and politically. In accordance with the Mau's non-violent philosophy, the newly elected leader, High Chief Tupua Tamasese Lealofi, led his fellow uniformed Mau in a peaceful demonstration in downtown Apia on 28 December 1929.
The New Zealand police attempted to arrest one of the leaders in the demonstration. When he resisted, a struggle developed between the police and the Mau. The officers began to fire randomly into the crowd and a Lewis machine gun, mounted in preparation for this demonstration, was used to disperse the demonstrators. Chief Tamasese was shot from behind and killed while trying to bring calm and order to the Mau demonstrators, screaming "Peace, Samoa". Ten others died that day and approximately 50 were injured by gunshot wounds and police batons. That day would come to be known in Samoa as Black Saturday. The Mau grew, remaining steadfastly non-violent, and expanded to include the highly influential women's branch.
After repeated efforts by the Samoan independence movement, the New Zealand Western Samoa Act 1961 of 24 November 1961 granted Samoa independence, effective on 1 January 1962, upon which the Trusteeship Agreement terminated. Samoa also signed a friendship treaty with New Zealand. Samoa, the first small-island country in the Pacific to become independent, joined the Commonwealth of Nations on 28 August 1970. While independence was achieved at the beginning of January, Samoa annually celebrates 1 June as its independence day.
Travel writer Paul Theroux noted marked differences between the societies in Western Samoa and American Samoa in 1992.
In 2002, New Zealand's prime minister Helen Clark formally apologised for New Zealand's role in the Spanish influenza outbreak of 1918, which killed over a quarter of Samoa's population, and for the Black Saturday killings in 1929.
On 4 July 1997 the government amended the constitution to change the country's name from "Western Samoa" to "Samoa". American Samoa protested against the move, asserting that the change diminished its own identity.
On 7 September 2009, the government changed the rule of the road, from right to left, in common with most other Commonwealth countries, most notably countries in the region like Australia and New Zealand, home to large numbers of Samoans. This made Samoa the first country in the 21st century to switch to driving on the left.
At the end of December 2011, Samoa jumped forward by one day, omitting 30 December from the local calendar, when the nation moved to the west of the International Date Line. This change aimed to help the nation boost its economy in doing business with Australia and New Zealand. Before this change, Samoa was 21 hours behind Sydney, but the change means it is now three hours ahead. The previous time zone, implemented on 4 July 1892, operated in line with American traders based in California.
In 2017, Samoa signed the UN treaty on the Prohibition of Nuclear Weapons.
In June 2017, Parliament passed an amendment to Article 1 of the Samoan Constitution, thereby making Christianity the state religion.
The 1960 constitution, which formally came into force with independence from New Zealand in 1962, builds on the British pattern of parliamentary democracy, modified to take account of Samoan customs. The national modern Government of Samoa is referred to as the "Malo".
Fiame Mata'afa Faumuina Mulinu'u II, one of the four highest-ranking paramount chiefs in the country, became Samoa's first Prime Minister. Two other paramount chiefs at the time of independence were appointed joint heads of state for life. Tupua Tamasese Mea'ole died in 1963, leaving Malietoa Tanumafili II sole head of state until his death on 11 May 2007. The next Head of State, Tuiatua Tupua Tamasese Efi, was elected by the legislature on 17 June 2007 for a fixed five-year term, and was re-elected unopposed in July 2012. Tufuga Efi was succeeded by Va'aletoa Sualauvi II in 2017.
The unicameral legislature (the Fono) consists of 49 members serving 5-year terms. Forty-seven are "matai" title-holders elected from territorial districts by Samoans; the other two are chosen by non-Samoans with no chiefly affiliation on separate electoral rolls. Universal suffrage was adopted in 1990, but only chiefs (matai) may stand for election to the Samoan seats. There are more than 25,000 matais in the country, about 5% of whom are women. The prime minister, chosen by a majority in the Fono, is appointed by the head of state to form a government. The prime minister's choices for the 12 cabinet positions are appointed by the head of state, subject to the continuing confidence of the Fono.
Prominent women in Samoan politics include the late Laulu Fetauimalemau Mata'afa (1928–2007) from Lotofaga constituency, the wife of Samoa's first prime minister. Their daughter Fiame Naomi Mata'afa is a paramount chief and a long-serving senior member of cabinet. Other women in politics include Samoan scholar and eminent professor Aiono Fanaafi Le Tagaloa, orator-chief Matatumua Maimoana and Safuneitu'uga Pa'aga Neri (the Minister of Communication and Technology).
The judicial system incorporates English common law and local customs. The Supreme Court of Samoa is the court of highest jurisdiction. Its chief justice is appointed by the head of state upon the recommendation of the prime minister.
Samoa comprises eleven "itūmālō" (political districts). These are the traditional eleven districts which predate European arrival. Each district has its own constitutional foundation ("fa'avae") based on the traditional order of title precedence found in each district's "faalupega" (traditional salutations). The capital village of each district administers and coordinates the affairs of the district and confers each district's paramount title, amongst other responsibilities.
For example:
A'ana has its capital at Leulumoega. The paramount "tama-a-'aiga" (royal lineage) title of A'ana is Tuimalealiifano.
The paramount "pāpā" title of A'ana is the Tui A'ana. The orator group which confers this title – the "Faleiva" (House of Nine) – is based at Leulumoega.
Ātua has its capital at Lufilufi. The paramount "tama-a-'aiga" (royal lineage) titles of Ātua are Tupua Tamasese (based in Lufilufi) and Mata'afa (based in Lotofaga). The two main political families who confer the respective titles are 'Aiga Sā Fenunuivao and 'Aiga Sā Levālasi.
The paramount "pāpā" title of Ātua is the Tui Ātua. The orator group which confers this title – the "Faleono" (House of Six) – is based at Lufilufi.
Tuamasaga has its capital at Afega. The paramount "tama-a-'aiga" (royal lineage) title of Tuamasaga is the Malietoa title, based in Malie. The main political family that confers the Malietoa title is 'Aiga Sā Malietoa.
The paramount "pāpā" titles of Tuamasaga are Gatoaitele (conferred by Afega) and Vaetamasoalii (conferred by Safata).
The eleven "itūmālō" are distributed across the two main islands, with districts on both Upolu and Savai'i.
Major areas of concern include the under-representation of women, domestic violence and poor prison conditions. Homosexual acts are illegal in Samoa.
In June 2017, an Act was passed changing the country's constitution to include a reference to the Trinity. As amended, Article 1 of the Samoan Constitution states that "Samoa is a Christian nation founded of God the Father, the Son and the Holy Spirit". According to "The Diplomat", "What Samoa has done is shift references to Christianity into the body of the constitution, giving the text far more potential to be used in legal processes." The preamble to the constitution already described the country as "an independent State based on Christian principles and Samoan custom and traditions."
Samoa lies south of the equator, about halfway between Hawaii and New Zealand, in the Polynesian region of the Pacific Ocean. The total land area is 2,842 km2 (1,097 sq mi), consisting of the two large islands of Upolu and Savai'i (which together account for 99% of the total land area) and eight small islets.
The islets are:
The main island of Upolu is home to nearly three-quarters of Samoa's population, and to the capital city, Apia.
The Samoan islands result geologically from volcanism, originating with the Samoa hotspot, which probably results from a mantle plume. While all of the islands have volcanic origins, only Savai'i, the westernmost island in Samoa, remains volcanically active, with the most recent eruptions at Mt Matavanu (1905–1911), Mata o le Afi (1902) and Mauga Afi (1725). The highest point in Samoa is Mt Silisili, at 1858 m (6,096 ft). The Saleaula lava fields situated on the central north coast of Savai'i result from the Mt Matavanu eruptions, which left 50 km2 (20 sq mi) of solidified lava.
Savai'i is the largest of the Samoan islands and the sixth-largest Polynesian island (after New Zealand's North, South and Stewart Islands and the Hawaiian islands of Hawaiʻi and Maui). The population of Savai'i is 42,000 people.
Samoa has an equatorial/monsoonal climate, with an average annual temperature of 26.5 °C (79.7 °F) and a rainy season from November to April.
Samoa forms part of the Samoan tropical moist forests ecoregion. Since human habitation began, about 80% of the lowland rainforests have disappeared. Within the ecoregion about 28% of plants and 84% of land birds are endemic.
The United Nations has classified Samoa as an economically developing country since 2014. In 2017, Samoa's gross domestic product in purchasing power parity was estimated to be $1.13 billion U.S. dollars, ranking 204th among all countries. The services sector accounted for 66% of GDP, followed by industry and agriculture at 23.6% and 10.4%, respectively. The same year, the Samoan labour force was estimated at 50,700.
The country's currency is the Samoan tālā, issued and regulated by the Central Bank of Samoa.
The economy of Samoa has traditionally been dependent on agriculture and fishing at the local level. In modern times, development aid, private family remittances from overseas, and agricultural exports have become key factors in the nation's economy. Agriculture employs two-thirds of the labour force and furnishes 90% of exports, featuring coconut cream, coconut oil, noni (juice of the "nonu" fruit, as it is known in Samoan), and copra.
Apart from a large automotive wire-harness factory (Yazaki Corporation, which ended production in August 2017), the manufacturing sector mainly processes agricultural products. Tourism is an expanding sector which now accounts for 25% of GDP. Tourist arrivals have been increasing over the years, with more than 100,000 tourists visiting the islands in 2005, up from 70,000 in 1996.
The Samoan government has called for deregulation of the financial sector, encouragement of investment, and continued fiscal discipline. Observers point to the flexibility of the labour market as a basic strength for future economic advances. The sector has been helped enormously by major capital investment in hotel infrastructure, political instability in neighbouring Pacific countries, and the 2005 launch of Virgin Samoa, a joint venture between the government and Virgin Australia (then Virgin Blue).
In the period before German colonisation, Samoa produced mostly copra. German merchants and settlers were active in introducing large-scale plantation operations and developing new industries, notably cocoa bean and rubber, relying on imported labourers from China and Melanesia. When the value of natural rubber fell drastically around the end of World War I, the New Zealand government encouraged the production of bananas, for which there is a large market in New Zealand.
Because of variations in altitude, a large range of tropical and subtropical crops can be cultivated, but land is not generally available to outside interests. Of the total land area of 2,934 km2 (725,000 acres), about 24.4% is in permanent crops and another 21.2% is arable. About 4.4% is held by the Western Samoan Trust Estates Corporation (WSTEC).
The staple products of Samoa are copra (dried coconut meat), cocoa bean (for chocolate), and bananas. The annual production of both bananas and copra has been in the range of 13,000 to 15,000 metric tons (about 14,500 to 16,500 short tons). If the rhinoceros beetle in Samoa were eradicated, Samoa could produce in excess of 40,000 metric tons (44,000 short tons) of copra. Samoan cocoa beans are of very high quality and used in fine New Zealand chocolates. Most are Criollo-Forastero hybrids. Coffee grows well, but production has been uneven. WSTEC is the biggest coffee producer. Rubber has been produced in Samoa for many years, but its export value has little impact on the economy.
Other agricultural industries have been less successful. Sugarcane production, originally established by Germans in the early 20th century, could be successful. Old train tracks for transporting cane can be seen at some plantations east of Apia. Pineapples grow well in Samoa, but beyond local consumption have not been a major export.
Sixty percent of Samoa's electricity comes from renewable hydro, solar, and wind sources, with the remainder from diesel generators. The Electric Power Corporation has a goal of 100% renewable energy by 2021.
Samoa reported a population of 194,320 in its 2016 census. About three-quarters of the population live on the main island of Upolu.
A measles outbreak began in October 2019. As of 7 December 2019, there had been 68 deaths (about 0.34 per 1,000, based on a population of 201,316) and over 4,460 cases (2.2% of the population) of measles in Samoa, mainly among children under four years old, with 10 reported cases in Fiji. Projections at the time suggested around 70 deaths and up to 6,500 infections.
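As a quick check, the quoted rates follow directly from the raw counts; a minimal Python sketch, using only the figures quoted above, reproduces them:

```python
# Derive the quoted outbreak rates from the raw counts given above.
population = 201_316
deaths = 68
cases = 4_460

deaths_per_1000 = deaths / population * 1_000   # ~0.34
attack_rate_pct = cases / population * 100      # ~2.2

print(f"deaths per 1,000 population: {deaths_per_1000:.2f}")
print(f"share of population infected: {attack_rate_pct:.1f}%")
```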
92.6% of the population are Samoans, 7% Euronesians (people of mixed European and Polynesian ancestry) and 0.4% are Europeans, according to the CIA World Factbook.
Samoan ("Gagana Fa'asāmoa") and English are the official languages. Including second-language speakers, there are more speakers of Samoan than English in Samoa. Samoan Sign Language is also commonly used among the deaf population of Samoa. To emphasize the importance of full inclusion with sign language, elementary Samoan Sign Language was taught to members of the Samoa Police Service, Red Cross Society, and public during the 2017 International Week of the Deaf.
Since 2017, Article 1 of the Samoan Constitution has stated that “Samoa is a Christian nation founded of God the Father, the Son and the Holy Spirit”.
Samoans' religious adherence includes the following: Christian Congregational Church of Samoa 31.8%, Roman Catholic 19.4%, Methodist 15.2%, Assembly of God 13.7%, The Church of Jesus Christ of Latter-day Saints 7.6%, Seventh-day Adventist 3.9%, Worship Centre 1.7%, other Christian 5.5%, other 0.7%, none 0.1%, unspecified 0.1% (2011 estimate). The Head of State until 2007, Malietoa Tanumafili II, was a Bahá'í. Samoa hosts the seventh (of nine current) Bahá'í Houses of Worship in the world; completed in 1984 and dedicated by the Head of State, it is located in Tiapapata, 8 km (5 mi) from Apia.
The Samoan government provides eight years of primary and secondary education that is tuition-free and is compulsory through age 16.
In Samoa, primary education is free and covers the first six years of schooling, beginning at age 6. It is followed by two years of middle school, which is also free; together, primary and middle school span ages 6–13.
After these eight years, students take a national exam that ranks them for selection into secondary schools. In secondary school, instruction is entirely in English, and students pay yearly tuition of $60 for Samoan citizens and $150 for non-citizen students.
Samoa's main post-secondary educational institution is the National University of Samoa, established in 1984. The country is also home to several branches of the multi-national University of the South Pacific and the Oceania University of Medicine.
Education in Samoa has proved effective: a 2012 UNESCO report found that 99 per cent of Samoan adults are literate.
The fa'a Samoa, or traditional Samoan way, remains a strong force in Samoan life and politics. As one of the oldest Polynesian cultures, the fa'asamoa developed over a period of 3,000 years, withstanding centuries of European influence to maintain its historical customs, social and political systems, and language. Cultural customs such as the Samoa 'ava ceremony are significant and solemn rituals at important occasions including the bestowal of "matai" chiefly titles. Items of great cultural value include the finely woven "'ie toga".
Samoan mythology includes many gods with creation stories and figures of legend such as Tagaloa and the goddess of war Nafanua, the daughter of Saveasi'uleo, ruler of the spirit realm Pulotu. Other legends include the well known story of Sina and the Eel which explains the origins of the first coconut tree.
Samoans are broadly spiritual and religious, and have subtly adapted the dominant religion of Christianity to 'fit in' with fa'a Samoa, and vice versa. Ancient beliefs continue to co-exist with Christianity, particularly in regard to the traditional customs and rituals of fa'a Samoa. Samoan culture is centred on the principle of vāfealoa'i, the relationships between people. These relationships are based on respect, or fa'aaloalo. When Christianity was introduced to Samoa, most Samoans converted; currently 98% of the population identify as Christian.
Some Samoans live a communal way of life, participating in activities collectively. Examples of this are the traditional Samoan "fale" (houses) which are open with no walls, using blinds made of coconut palm fronds during the night or bad weather.
The Samoan "siva" dance has unique gentle movements of the body in time to music and tells a story, although the Samoan male dances can be more snappy. The "sasa" is also a traditional dance where rows of dancers perform rapid synchronised movements in time to the rhythm of wooden drums "(pate)" or rolled mats. Another dance performed by males is called the "fa'ataupati" or the slap dance, creating rhythmic sounds by slapping different parts of the body. This is believed to have been derived from slapping insects on the body.
The form and construction of traditional Samoan architecture was a specialised skill held by "Tufuga fai fale" (master builders), and was linked to other cultural artforms.
As with other Polynesian cultures (Hawaiian, Tahitian and Māori) with significant and unique tattoos, Samoans have two gender-specific and culturally significant tattoos. For males, it is called the Pe'a and consists of intricate geometrical patterns tattooed over areas from the knees up towards the ribs. A male who possesses such a tatau is called a soga'imiti. A Samoan girl or "teine" is given a malu, which covers the area from just below her knees to her upper thighs.
Albert Wendt is a significant Samoan writer whose novels and stories tell the Samoan experience. In 1989, his novel "Flying Fox in a Freedom Tree" was made into a feature film in New Zealand, directed by Martyn Sanderson. Another novel "Sons for the Return Home" had also been made into a feature film in 1979, directed by Paul Maunder.
The late John Kneubuhl, born in American Samoa, was an accomplished playwright and screenwriter.
Sia Figiel won the 1997 Commonwealth Writers' Prize for fiction in the south-east Asia/South Pacific region with her novel "Where We Once Belonged".
Momoe Malietoa Von Reiche is an internationally recognised poet and artist.
Tusiata Avia is a performance poet. Her first book of poetry "Wild Dogs Under My Skirt" was published by Victoria University Press in 2004.
Dan Taulapapa McMullin is an artist and writer.
Other Samoan poets and writers include Sapa'u Ruperake Petaia, Eti Sa'aga and Savea Sano Malifa, the editor of the Samoa Observer.
In music, popular local bands include The Five Stars, Penina o Tiafau and Punialava'a.
The Yandall Sisters' cover of the song "Sweet Inspiration" reached number one on the New Zealand charts in 1974.
King Kapisi was the first hip hop artist to receive the prestigious New Zealand APRA Silver Scroll Award in 1999 for his song "Reverse Resistance". The music video for "Reverse Resistance" was filmed in Savai'i at his villages.
Other successful Samoan hip hop artists include rapper Scribe, Dei Hamo, Savage and Tha Feelstyle whose music video "Suamalie" was filmed in Samoa.
Lemi Ponifasio is a director and choreographer who is prominent internationally with his dance Company MAU.
Neil Ieremia's company Black Grace has also received international acclaim with tours to Europe and New York.
Hip hop has had a significant impact on Samoan culture. According to Katerina Martina Teaiwa, PhD from the University of Hawaii at Manoa, "Hip hop culture in particular is popular amongst Samoan youth." As in many other countries, hip hop music is popular. In addition, the integration of hip hop elements into Samoan tradition also "testifies to the transferability of the dance forms themselves," and to the "circuits through which people and all their embodied knowledge travel." Dance both in its traditional form and its more modern forms has remained a central cultural currency to Samoans, especially youths.
The arts organisation "Tautai" is a collective of visual artists including Fatu Feu'u, Johnny Penisula, Shigeyuki Kihara, Michel Tuffery, and Lily Laita.
Director Sima Urale is an award-winning filmmaker. Urale's short film "O Tamaiti" won the prestigious Best Short Film award at the Venice Film Festival in 1996. Her first feature film "Apron Strings" opened the 2008 NZ International Film Festival. The feature film "Sione's Wedding", co-written by Oscar Kightley, was financially successful following premieres in Auckland and Apia. The 2011 film "The Orator" was the first fully Samoan feature film, shot in Samoa in the Samoan language with a Samoan cast telling a uniquely Samoan story. Written and directed by Tusi Tamasese, it received much critical acclaim and attention at film festivals throughout the world.
The main sports played in Samoa are rugby union, Samoan cricket and netball. Rugby union is the national football code of Samoa. In Samoan villages, volleyball is also popular.
Rugby union is the national sport in Samoa and the national team, nicknamed the Manu Samoa, is consistently competitive against teams from vastly more populous nations. Samoa has competed at every Rugby World Cup since 1991, reaching the quarter-finals in 1991 and 1995 and the second round in 1999. At the 2003 World Cup, Manu Samoa came close to beating the eventual world champions, England. Samoa has also played in the Pacific Nations Cup and the Pacific Tri-Nations. The sport is governed by the Samoa Rugby Football Union, which is a member of the Pacific Islands Rugby Alliance and thus also contributes to the international Pacific Islanders rugby union team.
At club level, there are the National Provincial Championship and the Pacific Rugby Cup. Samoa also took home the cup at the Wellington and Hong Kong Rugby Sevens tournaments in 2007, for which the Prime Minister of Samoa and Chairman of the national rugby union, Tuila'epa Sa'ilele Malielegaoi, declared a national holiday. Samoa were also the IRB World Sevens Series champions in 2010, capping a year of achievement that included wins in the US, Australia, Hong Kong and Scotland Sevens tournaments.
Prominent Samoan players include Pat Lam and Brian Lima. In addition, many Samoans have played for or are playing for New Zealand.
Rugby league is mostly played by Samoans living in New Zealand and Australia. Samoa reached the quarter finals of the 2013 Rugby League World Cup, the team comprising players from the NRL and Super League plus domestic players. Many Samoans and New Zealanders or Australians of Samoan descent play in the Super League and National Leagues in Britain, including Francis Meli, Ta'ane Lavulavu of Workington Town, Maurie Fa'asavalu of St Helens and David Fatialofa of Whitehaven and Setima Sa who signed with London Irish rugby club. Other noteworthy players from NZ and Australia have represented the Samoan National team. The 2011 domestic Samoan rugby league competition contained 10 teams with plans to expand to 12 in 2012.
Samoans have been very visible in boxing, kickboxing, wrestling, and sumo; some Samoan sumo wrestlers, most famously Musashimaru and Konishiki, have reached the sport's highest ranks of "ozeki" and "yokozuna".
American football is occasionally played in Samoa, reflecting its wide popularity in American Samoa, where the sport is played under high school sanction. About 30 ethnic Samoans, many from American Samoa, currently play in the National Football League. A 2002 article from "ESPN" estimated that a Samoan male (either an American Samoan or a Samoan living in the mainland United States) is 40 times more likely to play in the NFL than a non-Samoan American.
History of Samoa
The Samoan Islands were first settled some 3,500 years ago as part of the Austronesian expansion. Samoa's history, both early and recent, is strongly connected with the histories of Tonga and Fiji, which lie in the same region and with which Samoa shares historical, genealogical, and cultural traditions.
European exploration first reached the islands in the early 18th century.
Louis-Antoine de Bougainville named them "Navigator Islands" in 1768.
The United States Exploring Expedition (1838–42) under Charles Wilkes reached Samoa in 1839.
In 1855 J.C. Godeffroy & Sohn expanded its trading business into the archipelago.
The Samoan Civil War of 1886–1894 devolved into the Samoan crisis between colonial powers, followed by the Second Samoan Civil War of 1898/9, which was resolved by partition of the islands in the Tripartite Convention, between the United States, Great Britain and Germany.
After World War I, German Samoa became a Trust Territory and eventually became independent as Samoa in 1962. American Samoa remains an unincorporated territory of the United States.
Archeologists place the earliest human settlement of the Samoan archipelago at around 2900–3500 years before present. This date is based upon the ancient Lapita pottery shards found throughout the islands, the oldest evidence being in Mulifanua as well as in Sasoa'a, Falefa. This area of Polynesia, Samoa and Tonga, contains evidence from dates of similar times, suggesting the area was settled during the same period. Whatever occurred between 750 BC and AD 1000 remains a mystery, though this may have been the period of great migrations that led to the settlement of present-day Polynesia. Another mystery is that the making of pottery suddenly stopped; there is no oral tradition among the people of Samoa that explains this but some theories suggest the lack of available pottery-making materials in Polynesia meant the majority of pottery was imported during migration and not locally sourced or made.
Samoa's early history was interwoven with that of certain chiefdoms of Fiji as well as the history of the kingdom of Tonga. The oral history of Samoa preserves the memories of many battles fought between Samoa and neighboring islands. Moreover, intermarriage of Tongan and Fijian royalty with Samoan nobility has helped build close relationships between these island nations that persist to the present; these royal blood ties are acknowledged at special events and cultural gatherings. Other Samoan folklore tells of the arrival of two maidens from Fiji who brought the tools necessary to create the art of tatau, or in English tattoo, whence came the traditional Samoan "malofie" (also known as "pe'a" for men and "malu" for women).
The foundation of the cultural tradition of Samoa, the fa'asamoa, came with the rise of the warrior queen Nafanua. Her rule instituted traditions of the fa'amatai or local family and village and regional chiefly systems, a decentralized system. Her niece Salamasina continued under this system and their time is considered a golden age of Samoan cultural traditions.
Linguistically, the Samoan language belongs to the Polynesian sub-branch of the Austronesian language family, whose origin is thought to be in Taiwan.
According to oral tradition, Samoa shares the common Polynesian ancestor of Tagaloa. The very earliest history of Samoa concerns a political center in the easternmost Samoan islands of Manu'a, under the rule of the Tui Manu'a. In the Cook Islands to the east, the tradition is that Karika, or Tui Manu'a 'Ali'a, came to the Cook Islands from Manu'a, suggesting that it was from Manu'a and Samoa that the rest of Polynesia was settled.
Contact with Europeans began in the early 18th century but did not intensify until the arrival of the British missionaries. In 1722, Dutchman Jacob Roggeveen was the first European to see the islands. This visit was followed by the French explorer Louis-Antoine de Bougainville (1729–1811), the man who named them the "Navigator Islands" in 1768. In 1787 Jean-François de Galaup, comte de Lapérouse visited Samoa, where at Tutuila Island, in what is now American Samoa, there was a conflict leading to deaths on both sides, including the deaths of twelve Frenchmen.
European, Tahitian and Cook Islander missionaries and traders, led by the missionary John Williams, began arriving around 1830. Coming via Tahiti, they were known in Samoa as the Lotu Taiti. The Rev. John Williams was helped by the Ali'i Malietoa Vainu'upo to establish the Lotu Taiti, which became the Christian Congregational Church of Samoa.
The United States Exploring Expedition (1838–42) under Charles Wilkes reached Samoa in 1839 and appointed an Englishman, John C. Williams, son of the missionary, as acting U.S. consul. However this appointment was never confirmed by the U.S. State Department; John C. Williams was merely recognized as "Commercial Agent of the United States". A British consul was already residing at Apia.
In 1855 J.C. Godeffroy & Sohn expanded its trading business into the Samoan Islands, which were then known as the Navigator Islands. During the second half of the 19th century German influence in Samoa expanded with large plantation operations being introduced for coconut, cacao and hevea rubber cultivation, especially on the island of 'Upolu where German firms monopolized copra and cocoa bean processing. British business enterprises, harbour rights, and consulate office were the basis on which Britain had cause to intervene in Samoa. The United States began operations at the harbor of Pago Pago on Tutuila in 1877 and formed alliances with local native chieftains, most conspicuously on the islands of Tutuila and Manu'a (which were later formally annexed as American Samoa).
In the 1880s Great Britain, Germany and the United States all claimed parts of the kingdom of Samoa and established trade posts. The rivalry between these powers exacerbated tensions between the indigenous factions that were struggling to preserve their ancient political system. After years of dispute, the islands were divided between the United States and Germany in 1899.
The First Samoan Civil War was fought roughly between 1886 and 1894, primarily between rival Samoan factions, although the rival powers intervened on several occasions with military forces. There followed an eight-year civil war, where each of the three powers supplied arms, training, and in some cases, combat troops to the warring Samoan parties. The Samoan crisis came to a critical juncture in March 1889 when all three colonial contenders sent warships into Apia harbour, and a larger-scale war seemed imminent, until a massive storm on 15 March 1889 damaged or destroyed the warships, ending the military conflict.
Robert Louis Stevenson arrived in Samoa in 1889 and built a house at Vailima. He quickly became passionately involved in the attendant political machinations. His influence spread to the Samoans, who consulted him for advice, and he soon became involved in local politics. These involved the three colonial powers battling for control of Samoa (America, Germany and Britain) and the indigenous factions struggling to preserve their ancient political system. He was convinced that the European officials appointed to rule the Samoans were incompetent, and after many futile attempts to resolve the matter, he published "A Footnote to History: Eight Years of Trouble in Samoa". The book covers the period from 1882 to 1892. It was such a stinging protest against existing conditions that it resulted in the recall of two officials, and Stevenson feared for a time it would result in his own deportation.
The Second Samoan Civil War reached a head in 1898 when Germany, Great Britain and the United States disputed over who should control the Samoan Islands.
The Battle of Apia occurred in March 1899. Samoan forces loyal to Prince Tanu were besieged by a larger force of Samoan rebels loyal to powerful chief Mata'afa Iosefo. Supporting Prince Tanu were landing parties from four British and American warships. Over several days of fighting, the Samoan rebels were defeated.
American and British warships, including USS "Philadelphia", shelled Apia on 15 March 1899. Following the initial defeat at Apia, Mata'afa's rebels defeated a combined American, British and Tanu force at Vailele on 1 April 1899, forcing the allies to retreat. According to a war correspondent for the Auckland Star newspaper, Mata'afa's warriors afterwards beheaded American and British corpses left on the field. Germany, Britain and the United States quickly resolved to end the hostilities by partitioning the island chain at the Tripartite Convention of 1899. Since Tanu and his American and British allies had been unable to defeat Mata'afa in war, the Convention resulted in his elevation to Ali'i Si'i, the high chief of Samoa.
The Samoa Tripartite Convention of 1899, a joint commission of three members composed of Bartlett Tripp for the United States, C. N. E. Eliot, C.B. for Great Britain, and Freiherr Speck von Sternburg for Germany, agreed to divide the islands.
The Tripartite Convention gave control of the islands west of 171 degrees west longitude to Germany; these islands, containing Upolu and Savai'i (the present-day Samoa) and other adjoining islands, became known as German Samoa.
The United States was given control of the eastern islands of Tutuila and Manu'a (present-day American Samoa). In exchange for Britain ceding its claims in Samoa, Germany transferred its protectorates in the North Solomon Islands and other territories in West Africa. It does not appear that any Samoans were consulted about the partition, and the monarchy was abolished.
From 1908, with the establishment of the Mau movement ("opinion movement"), Western Samoans began to assert their claim to independence. The Mau movement began in 1908 with the ‘Mau a Pule' resistance on Savai'i, led by orator chief Lauaki Namulau'ulu Mamoe. Lauaki and Mau a Pule chiefs, wives and children were exiled to Saipan in 1909. Many died in exile.
World War I broke out in August 1914, and soon after, New Zealand sent an expeditionary force to seize and occupy German Samoa. Although Germany refused to officially surrender the islands, no resistance was offered and the occupation took place without any fighting. New Zealand continued the occupation of Western Samoa throughout World War I. Under the Treaty of Versailles in 1919, Germany relinquished its claims to the islands.
In November 1918, the Spanish flu hit the territory hard: 90% of the 38,302 native inhabitants were infected and 20% died. The population of American Samoa was largely spared this devastation, thanks to the vigorous efforts of its governor, John Martin Poyer. This led some Samoan citizens to petition in January 1919 for transfer to U.S. administration, or at least to central British administration. The petition was withdrawn a few days later.
The Mau movement gained momentum with Samoa's royal leaders becoming more visible in supporting the movement but opposing violence. On 28 December 1929 Tupua Tamasese was shot along with eleven others during an otherwise peaceful demonstration in Apia. Tupua Tamasese died the following day; his final words included a plea that no more blood be shed.
New Zealand administered Western Samoa, or Samoa i Sisifo in the Samoan language, first as a League of Nations Mandate and then as a United Nations Trust Territory until the country received its independence on 1 January 1962 (from New Zealand) as Western Samoa. Samoa's first prime minister following independence was paramount chief Fiame Mata'afa Faumuina Mulinu'u II.
The Samoans of Samoa i Sisifo were the first Polynesian people to be recognized as a sovereign nation in the 20th century. Samoa became one of the Member states of the Commonwealth of Nations on 28 August 1970. In 1977, Queen Elizabeth II visited Samoa during her tour of the Commonwealth.
A conflict briefly emerged between Samoa and American Samoa following Samoa's decision to drop the adjective "Western" from its name. The change was made by an act of the Legislative Assembly of Western Samoa adopted on 4 July 1997. The step caused "surprise and uproar" in neighboring American Samoa, as for some American Samoans the change of name implied a claim to be the "real" Samoa and implied that American Samoa was just an American appendix. Some in the American territory said it implied that there was only one Samoa. Two members of American Samoa's legislature traveled to Apia in September 1997 to meet with Samoan head of State Malietoa Tanumafili II, and lobbied to have the name change reversed in order to maintain peace and good relations. An American Samoan petition to the United Nations for a ban on Samoa's using the name Samoa was seriously discussed and ten American Samoan representatives sponsored an unsuccessful bill aimed at preventing American Samoa from recognizing independent Samoa's new name. The proposed American Samoan bill was criticized by independent Samoa's Prime Minister Tofilau Eti Alesana who called the bill "rash and irresponsible".
In 2002, New Zealand's prime minister Helen Clark formally apologized for two incidents during the period of New Zealand's administration: a failure in 1918 to quarantine the SS "Talune", which carried the Spanish flu to Samoa, leading to an epidemic which devastated the Samoan population, and the shooting of leaders of the non-violent Mau movement during a ceremonial procession in 1929.
In 2007, Samoa's first head of state, His Highness Malietoa Tanumafili II, died at age 95. He held this title jointly with Tupua Tamasese Lealofi until the latter's death in 1963. Malietoa Tanumafili II was Samoa's Head of State for 45 years. He was the son of Malietoa Tanumafili I, who was the last Samoan king recognized by Europe and the Western World.
Samoa's current head of state is His Highness Tui-Atua Tupua Tamasese Tupuola Efi, who received the head of state title with the unanimous endorsement of Samoa's Parliament, in keeping with traditional Samoan protocol and its stress on consensus decision-making.
As European traders began commercial (and, later, colonial) activities in the Samoan Islands, they imposed their datekeeping system on their transactions. Thus by the 19th century, Samoan calendars were aligned with those of the Asian countries to the west and south. However, in 1892, American traders convinced the king to alter the country's dating system to align with the United States, and the country lived through 4 July 1892 twice. But 119 years later, the economic geography of the islands had changed, and most business was being done with Australia and New Zealand. To jump back to the Asian side of the date line, Samoa and Tokelau skipped 30 December 2011.
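The mechanics of the 2011 jump are easy to illustrate; a minimal Python sketch, assuming fixed offsets of UTC-11 before the change and UTC+13 after (these offsets are background knowledge, not stated above, and daylight saving time is ignored), shows why 30 December 2011 never occurred on local clocks:

```python
from datetime import datetime, timedelta, timezone

# Assumed offsets: UTC-11 before the change, UTC+13 after it.
OLD = timezone(timedelta(hours=-11))
NEW = timezone(timedelta(hours=+13))

# The switch took effect at the end of 29 December 2011, old local time,
# i.e. the instant that would have begun 30 December under UTC-11.
switch = datetime(2011, 12, 30, 11, 0, tzinfo=timezone.utc)

print(switch.astimezone(OLD))  # 2011-12-30 00:00:00-11:00 (never reached)
print(switch.astimezone(NEW))  # 2011-12-31 00:00:00+13:00 (clocks resumed here)
```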
Demographics of Samoa
This article is about the demographic features of the population of Samoa, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
Politics of Samoa
Politics of Samoa takes place in a framework of a parliamentary representative democratic state whereby the Prime Minister of Samoa is the head of government. Existing alongside the country's Western-styled political system is the "fa'amatai" chiefly system of socio-political governance and organisation, central to understanding Samoa's political system.
From the country's independence in 1962, only "matai" could vote and stand as candidates in elections to parliament. In 1990, the voting system was changed by the Electoral Amendment Act which introduced universal suffrage. However, the right to stand for elections remains with "matai" title holders. Therefore, in the 49-seat parliament, all 47 "Samoan" Members of Parliament are also "matai", performing dual roles as chiefs and modern politicians, with the exception of the two seats reserved for non-Samoans. At the local level, much of the country's civil and criminal matters are dealt with by some 360 village chief councils, "Fono o Matai", according to traditional law, a practice further strengthened by the 1990 Village Fono Law.
The national government ("malo") generally controls the legislative assembly as it is formed from the party which controls the majority seats in the assembly. Executive power is exercised by the government. Legislative power is vested in the assembly, but the government generally controls legislation through its weight of numbers in the Fono. The Judiciary is independent of the executive and the legislature.
The 1960 Constitution, which formally came into force with independence, is based on the British Westminster model of parliamentary democracy, modified to take account of Samoan customs. Two of Samoa's four highest ranking paramount chiefs (Tama a 'Aiga) at the time of independence were given lifetime appointments to jointly hold the office of head of state. Another paramount chief, Fiame Mata'afa Faumuina Mulinu'u II was elected into parliament and became the first Prime Minister of Samoa. Malietoa Tanumafili II held the post of Head of State alone since the death of his colleague, Tupua Tamasese Mea'ole, in 1963. Tanumafili died in May 2007 and his successor, Tupua Tamasese Tupuola Tufuga Efi was elected by the legislature for a five-year term in June 2007. At the time the Constitution was adopted it was anticipated that future Heads of State would be chosen from among the four Tama-a-Aiga 'royal' paramount chiefs. However, this is not required by the Constitution and for this reason Samoa can be considered a republic rather than a constitutional monarchy like the United Kingdom. Parliament (the Fono) can also amend the constitution through a simple majority of votes in the house.
The Samoan system is a parliamentary democracy in which the executive and legislative arms of government are fused. The prime minister is chosen by a majority in the Fono and is appointed by the head of state to form a government. The prime minister's preferred cabinet of 12 is appointed and sworn in by the head of state, subject to the continuing confidence of the Fono, which, since the rise of political parties in Samoa in the 1980s, has been controlled by the party with the majority of members in the Fono (the government).
The unicameral legislature, named the Fono Aoao Faitulafono (National Legislative Assembly) contains 49 members serving five-year terms. Forty-seven are elected from ethnic Samoan territorial constituencies; the other two are chosen by the Samoan citizens of non-Samoan origin on a separate electoral roll. Universal suffrage was extended in 1990, but only chiefs (matai) may stand for election to the Samoan seats. There are more than 25,000 matai in the country, about 5% of whom are women.
The third Tama-a-Aiga is Tuimalealiifano, who was the deputy Head of State (a member of the Council of Deputies) when Samoa gained its independence in 1962.
The judicial system is based on English common law and local customs. The Supreme Court of Samoa is the court of highest jurisdiction. The Court of Appeal has a limited jurisdiction to hear only those cases referred to it by the Supreme Court. Below the Supreme Court are the district courts. The chief justice of the Supreme Court is appointed by the Head of State on the recommendation of the Prime Minister.
Perhaps the most important court in Samoa is the Land and Titles court, consisting of cultural and judicial experts appointed by the supreme court. This court hears village land and title succession disputes. The court derives from the Native Land and Titles Court put in place under the German colonial administration in 1901. Samoa's political stability is thought to be due in large part to the success of this court in hearing disputes.
The current Chief Justice is Patu Tiava'asu'e Falefatu Sapolu. Previous chief justices have included Conrad Cedercrantz (appointed first Chief Justice in 1890), Henry Clay Ide (1893–1897), William Lea Chambers (1897–c.1900), W.L. Taylor, C. Roberts, Charles Croft Marsack (1947–), Norman F. Smith and Gaven Donne (1972–1974).
From independence until the 1970s, Fono debate was conducted in the typical 'consensus' style of the fa'amatai system in the villages. This meant due deference was usually shown within parliament to the Tama-a-Aiga (the highest ranking chiefs in the nation). Debate usually ended with the members supporting the then Tama-a-Aiga prime minister or other highly ranked chiefs in the house. Fiame Mataafa Mulinuu II was re-elected as Prime Minister unopposed for most of the period between 1962 and 1975. There were no political parties in these consensus-style parliaments of the 1960s and early 1970s. In the 1970–73 parliament, the first woman Speaker of the Fono was chosen: Leaupepe Faima'ala.
However, rising competition and differences in views between MPs in the 1970s led to the establishment of the first political party - the Human Rights Protection Party (HRPP) in 1979. The 1978 election was the first time a non-Tama-a-Aiga was chosen as Prime Minister. The election of Tupuola Efi to the prime ministership by his supporters was met with staunch opposition from various quarters of the Fono and caused huge controversy at the time because he had defeated a Tama-a-Aiga candidate. The HRPP was set up in part to oppose the then Prime Minister, Tupuola Efi, and also to demand greater rights for farmers. One of the founding members was Va'ai Kolone - a famous farmer turned politician from the rural Savaii constituency of Vaisigano. Tui Atua Tupua Tamasese Efi eventually became Head of State in 2007 under his Tafaifa title TuiAtua and Tama-a-Aiga titles Tupua Tamasese.
Since 1982, the majority party in the Fono has been the HRPP, save for a short period in 1985 when Vaai Kolone, leading a coalition of parties, won the election but had to resign as MPs crossed the floor to the HRPP. HRPP leader Tofilau Eti Alesana regained the prime ministership after Vaai resigned and served for nearly all of the period between 1982 and 1998, when he resigned due to health reasons. Tofilau Eti was replaced by his deputy, Tuila'epa Sailele Malielegaoi.
Parliamentary elections were held in March 2001. The Human Rights Protection Party, led by Tuila'epa Sailele Malielegaoi, won 30 of the 49 seats in the current Fono. The Samoa Democratic United Party, led by Le Mamea Ropati, is the main opposition. Other political parties are the Samoa Party, the Christian Party, and the Samoa Progressive Political Party.
The March 2006 elections were again won by the HRPP by an even larger margin than 2001. The HRPP won 32 seats to the SDUP's 10, with a third major party - the Samoa Party - not gaining any. The majority of independents joined the HRPP to increase the party's majority to 39 seats in the 49 seat parliament.
Internal SDUP infighting led to the party's parliamentary members splitting. Leader Le Mamea Ropati was ousted in a coup led by deputy leader Asiata Dr Saleimoa Vaai, who then assumed leadership of the SDUP. Le Mamea and supporters became independents and thus reduced the SDUP's MPs to only 7. This was not enough to be formally recognised in the Fono as an official opposition party (they needed at least 8 MPs). Therefore, there is no official opposition party recognised in the Samoan parliament at present.
The Samoa Democratic United Party (formed after the 2001 elections, bringing together the Samoa National Development Party and the Samoa Independent Party) is led by the long-serving Member of Parliament, Hon. Le Mamea Ropati Mualia.
Other parties have included the Samoan Progressive Conservative Party, the Samoa All People's Party, and the Samoa Liberal Party.
Samoa is divided into 11 districts: A'ana, Aiga-i-le-Tai, Atua, Fa'asaleleaga, Gaga'emauga, Gagaifomauga, Palauli, Satupa'itea, Tuamasaga, Va'a-o-Fonoti and Vaisigano.
Economy of Samoa
The economy of Samoa is dependent on agricultural exports, development aid and private financing from overseas. The country is vulnerable to devastating storms. Agriculture employs two-thirds of the labor force and furnishes 90% of exports, featuring coconut cream, coconut oil and copra. Outside a large automotive wire harness factory, the manufacturing sector mainly processes agricultural products. Tourism is an expanding sector; more than 70,000 tourists visited the islands in 1996 and 120,000 in 2014. The Samoan Government has called for deregulation of the financial sector, encouragement of investment, and continued fiscal discipline. Observers point to the flexibility of the labor market as a basic strength for future economic advances.
New Zealand is Samoa's principal trading partner, typically providing between 35% and 40% of imports and purchasing 45%–50% of exports. Australia, American Samoa, the United States, and Fiji are also important trading partners. Its main imports are food and beverages, industrial supplies, and fuels. The primary sector (agriculture, forestry, and fishing) employs nearly two-thirds of the labor force and produces 17% of GDP. Samoa's principal exports are coconut products and fish.
Fishing has had some success in Samoan waters, but the biggest fisheries industry (headed by Van Camp and StarKist) has been based in American Samoa. StarKist management announced that it was going ahead with setting up a blast-freezer project at Asau, to be operational by 2002. This announcement dispelled growing suspicion about StarKist's motives for moving to Samoa. The proposed blast-freezer operations were expected to bring the village back to life.
Samoa annually receives important financial assistance from abroad. More than 100,000 Samoans who live overseas provide two sources of revenue. Their direct remittances have amounted to $12.1 million per year recently, and they account for more than half of all tourist visits. In addition to the expatriate community, Samoa also receives roughly $28 million annually in official development assistance from sources led by China, Japan, Australia, and New Zealand. These three sources of revenue—tourism, private transfers, and official transfers—allow Samoa to cover its persistently large trade deficit.
In the late 1960s, Potlatch Forests, Inc. (a US company), upgraded the harbour and airport at Asau on the northern coast of Savai'i and established a timber operation, Samoa Forest Products, for harvesting tropical hardwoods. Potlatch invested about US$2,500,000 in a state-of-the-art sawmill and another US$6,000,000 over several years to develop power, water, and haul roads for their facility. Asau, with the Potlatch sawmillers and Samoa Forest Products, was one of the busiest parts of Savai'i in the 1960s and 1970s; however, the departure of Potlatch and the scaling down of the sawmill has left Asau a ghost town in recent years.
Until 2017 industry accounted for over one-quarter of GDP while employing less than 6% of the work force. The largest industrial venture was Yazaki Samoa, a Japanese-owned company processing automotive wire harnesses for export to Australia under a concessional market-access arrangement. The Yazaki plant employed more than 2,000 workers and made up over 20% of the manufacturing sector's total output. Net receipts amounted to between $1.5 million and $3.03 million annually, although shipments from Yazaki were counted as services (export processing) and therefore did not officially appear as merchandise exports. Yazaki Samoa closed down in 2017, but in the same year Fero, a New Zealand manufacturer producing wiring units, set up in Samoa in the same plant used by Yazaki.
The effects of three natural disasters in the early 1990s were overcome by the middle of the decade, but economic growth cooled again with the regional economic downturn. Long-run development depends upon upgrading the tourist infrastructure, attracting foreign investment, and further diversification of the economy.
Two major cyclones hit Samoa at the beginning of the 1990s. Cyclone Ofa left an estimated 10,000 islanders homeless in February 1990; Cyclone Val caused 13 deaths and hundreds of millions of dollars in damage in December 1991. As a result, gross domestic product declined by nearly 50% from 1989 to 1991. These experiences and Samoa's position as a low-lying island state punctuate its concern about global climate change.
Further economic problems occurred in 1994 with an outbreak of taro leaf blight and the near collapse of the national airline Polynesian Airlines. Taro, a root crop, traditionally was Samoa's largest export, generating more than half of all export revenue in 1993. But a fungal blight decimated the plants, and in each year since 1994 taro exports have accounted for less than 1% of export revenue. Polynesian Airlines reached a financial crisis in 1994, which disrupted the tourist industry and eventually required a government bailout.
The government responded to these shocks with a major program of road building and post-cyclone infrastructure repair. Economic reforms were stepped up, including the liberalization of exchange controls. GDP growth rebounded to over 6% in both 1995 and 1996 before slowing again at the end of the decade.
The collapse of taro exports in 1994 has had the unintended effect of modestly diversifying Samoa's export products and markets. Prior to the taro leaf blight, Samoa's exports consisted of taro ($1.1 million), coconut cream ($540,000), and "other" ($350,000). Ninety percent of exports went to the Pacific region, and only 1% went to Europe. Forced to look for alternatives to taro, Samoa's exporters have dramatically increased the production of copra, coconut oil, and fish. These three products, which combined to produce export revenue of less than $100,000 in 1993, now account for over $3.8 million. There also has been a relative shift from Pacific markets to European ones, which now receive nearly 15% of Samoa's exports. Samoa's exports are still concentrated in coconut products ($2.36 million worth of copra, copra meal, coconut oil, and coconut cream) and fish ($1.51 million) but are at least somewhat more diverse than before.
In 1972, more than 85,000 visitors arrived in Samoa, contributing over $12 million to the local economy. One-third came from American Samoa, 28% from New Zealand, and 11% from the United States. Arrivals also increased in 2000, as visitors to the South Pacific avoided the political strife in Fiji by traveling to Samoa instead.
Tourism numbers and revenue have more than doubled in the last decade. Samoa received 122,000 visitors in 2007 and increased to a total of 145,176 visitors in 2016. About 46% came from New Zealand, 20% from Australia and 7% from the United States. Samoans living overseas accounted for about 33% of all tourist numbers (South Pacific Tourism Organisation (SPTO), 2017).
The service sector accounts for more than half of GDP and employs approximately 30% of the labor force.
GDP:
purchasing power parity – US$1.137 billion (2017 est.)
GDP – real growth rate:
2.5% (2017 est.)
GDP – per capita:
purchasing power parity – $5,700 (2017 est.)
GDP – composition by sector (2017 est.):
"agriculture:"
10.4%
"industry:"
23.6%
"services:"
66%
Population below poverty line:
NA%
Household income or consumption by percentage share:
"lowest 10%:"
NA%
"highest 10%:"
NA%
Inflation rate (consumer prices):
1.3% (2017 est.)
Labor force:
50,700 (2016 est.)
Labor force – by occupation (2015 est.):
"agriculture:"
65%
"industry:"
6%
"services:"
29%
Unemployment rate:
5.2% (2017 est.)
Ease of Doing Business Rank: 57th
Budget:
"revenues:"
$110 million
"expenditures:"
$122 million (2011–12)
Industries:
tourism, food processing, auto parts, building materials
Industrial production growth rate:
5.3% (2010 est.)
Electricity – production:
200 GWh (2010)
Electricity – production by source:
"fossil fuel:"
60%
"hydro:"
40%
"nuclear:"
0%
"other:"
0% (2008)
Electricity – consumption:
150 GWh (2008)
Electricity – exports:
0 kWh (2008)
Electricity – imports:
0 kWh (2008)
Agriculture – products:
coconuts, bananas, taro, yams, coffee, cocoa
Exports:
$152 million (f.o.b., 2012)
Exports – commodities:
coconut oil and cream, copra, fish, beer
Exports – partners:
American Samoa, Australia, New Zealand, United States, Germany
Imports:
$258 million (f.o.b., 2012)
Imports – commodities:
machinery and equipment, foodstuffs
Imports – partners:
Australia, New Zealand, Japan, Fiji, United States
Debt – external:
$145 million (2010 est.)
Economic aid – recipient:
$24.3 million (2010)
Currency:
1 tala (WS$) = 100 sene
Exchange rates:
tala (WS$) per US$1 – 3.0460 (January 2000), 3.0120 (1999), 2.9429 (1998), 2.5562 (1997), 2.4618 (1996), 2.4722 (1995)
Fiscal year:
calendar year
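As a worked example, the historical exchange rates listed above can be applied directly; the following Python sketch simply hard-codes the quoted tala-per-US$ figures (no external data source is assumed):

```python
# Tala (WS$) per US$1, copied from the historical rates listed above.
WST_PER_USD = {
    "2000-01": 3.0460,  # January 2000
    "1999": 3.0120,
    "1998": 2.9429,
    "1997": 2.5562,
    "1996": 2.4618,
    "1995": 2.4722,
}

def usd_to_tala(usd: float, period: str) -> float:
    """Convert US dollars to tala at the listed rate for a period."""
    return usd * WST_PER_USD[period]

print(f"US$100 in 1999 = WS${usd_to_tala(100, '1999'):.2f}")  # WS$301.20
```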
Telecommunications in Samoa
This article is about communications systems in Samoa.
In 2009, the Samoa-American Samoa (SAS) cable began providing inter-island communication, as well as enabling users in Samoa to access the ASH cable's capacity and connect to global networks. While the ASH and SAS cables are much smaller than the huge systems across the North Pacific, they provide more than 40 times the capacity previously in use in both island groups combined.
Main lines in use:
8,000 (2005)
Telephones - mobile cellular:
>30,000 (2005)
Telephone system:
"domestic:"
GSM mobile phone network covering 90% of the country (2006) and a landline system covering 65% of the country.
"international:"
satellite earth station - 1 Intelsat (Pacific Ocean)
Broadcast stations:
AM 1, FM 5, shortwave 0 (2005)
Radios:
90% of 23,098 households had at least one radio (2001 census)
Broadcast stations:
3 (in process of switching from PAL broadcast standard to NTSC) (2005)
Televisions:
15,603 (2001 census)
Internet Service Providers (ISPs):
3 (2005)
Country code (Top level domain): .ws
Transport in Samoa
Transport in Samoa includes one international airport situated on the north west coast of Upolu island, paved highways reaching most parts of the two main islands, one main port in the capital Apia and two ports servicing mainly inter island ferries for vehicles and passengers between the two main islands, Upolu and Savai'i.
Highways:
"total: "
"paved: "
"unpaved: "
Ports and harbors:
Airports:
3 (2005)
Airports - with paved runways:
"total:"
2
":"
1 (Apia Faleolo International Airport, IATA airport code APW)
"under :"
1 (2005)
Airports - with unpaved runways:
"total:"
1
From 1900 Samoa had been a German colony, and even after the occupation by New Zealand in 1914 it maintained the German practice of driving on the right-hand side of the road.
A plan to move to driving on the left was first announced by the Samoan government in September 2007. Prime Minister Tuilaepa Aiono Sailele Malielegaoi said that the purpose of adopting left-hand traffic was to allow Samoans to use cheaper right-hand-drive vehicles sourced from Australia, New Zealand or Japan, and so that the large number of Samoans living in Australasia could drive on the same side of the road when they visited their country of origin. He aimed to reduce reliance on expensive, left-hand-drive imports from America.
On 18 April 2008 Samoa's parliament passed the Road Transport Reform Act 2008. Tuisugaletaua Avea, the Minister of Transport, announced that the switch would come into effect at 6:00 am on Monday, 7 September 2009 - and that 7 and 8 September 2009 would be public holidays, so that residents would be able to familiarise themselves with the new rules of the road.
However the decision was controversial, with an estimated 18,000 people attending demonstrations against it in Apia in April 2008 and road signs reminding people of the change being vandalised. The motor industry was also opposed to the decision as 14,000 of Samoa's 18,000 vehicles were designed for right-hand driving and the government refused to meet the cost of conversion. Bus drivers whose doors would be on the wrong side of the road due to the change threatened to strike in protest of the change.
In order to reduce accidents, the government widened roads, added new road markings, erected signs and installed speed humps. The speed limit was also reduced, and sales of alcohol were banned for three days. Prayers were said by the Congregational Christian Church of Samoa for an accident-free changeover, and Samoa's Red Cross carried out a blood donation campaign in case of a surge in accidents.
The change came into force following a radio announcement at 5:50 local time (16:50 UTC), which halted traffic, and an announcement at 6:00 (17:00 UTC) for traffic to switch from the right to the left side of the road. Samoa thus became the first territory in over thirty years to change which side of the road is driven on; the most recent changes before it were Okinawa (1978), South Yemen (1977), Ghana (1974) and Nigeria (1972).
San Marino
San Marino, officially the Republic of San Marino, also known as the Most Serene Republic of San Marino, is a microstate in Southern Europe completely enclosed by Italy.
Located on the northeastern side of the Apennine Mountains, San Marino covers a land area of just over 61 km2 (24 sq mi) and has a population of 33,562. Its capital is the City of San Marino and its largest settlement is Dogana. San Marino's official language is Italian.
The country derives its name from Saint Marinus, a stonemason from the then Roman island of Rab, in modern-day Croatia. Born in AD 275, Marinus participated in the reconstruction of Rimini's city walls after their destruction by Liburnian pirates. Marinus then went on to found an independent monastic community on Monte Titano in AD 301; thus, San Marino lays claim to being the oldest extant sovereign state, as well as the oldest constitutional republic.
San Marino's politics are governed by its constitution, which dictates that every six months San Marino's parliament must elect two Captains Regent. The Captains Regent have equal powers, and are free to exercise them within the limits of the constitution and parliamentary legislation until their term expires.
The country's economy is mainly based on finance, industry, services and tourism. It is one of the wealthiest countries in the world in terms of GDP per capita, with a figure comparable to the most developed European regions. San Marino is considered to have a highly stable economy, with one of the lowest unemployment rates in Europe, no national debt and a budget surplus.
Saint Marinus left the island of Rab in present-day Croatia with his lifelong friend Leo, and went to the city of Rimini as a stonemason. Persecuted for his Christian sermons during the Diocletianic Persecution, he escaped to the nearby Monte Titano, where he built a small church and thus founded what is now the city and state of San Marino.
The official founding date is 3 September 301. In 1320 the community of Chiesanuova chose to join the country. In 1463 San Marino was extended with the communities of Faetano, Fiorentino, Montegiardino, and Serravalle, after which the country's borders have remained unchanged.
In 1503, Cesare Borgia, the son of Pope Alexander VI, occupied the Republic for six months until his father's successor, Pope Julius II, intervened and restored the country's independence.
On June 4, 1543, Fabiano di Monte San Savino, nephew of the later Pope Julius III, attempted to conquer the republic, but his infantry and cavalry failed after getting lost in a dense fog, which the Sammarinese attributed to Saint Quirinus, whose feast day it was.
After the Duchy of Urbino was annexed by the Papal States in 1625, San Marino became an enclave within the Papal States, something which led to its seeking the formal protection of the Papal States in 1631, but this never equalled a "de facto" Papal control of the republic.
The country was occupied on October 17, 1739 by the legate (Papal governor) of Ravenna, Cardinal Giulio Alberoni, but independence was restored by Pope Clement XII on February 5, 1740, the feast day of Saint Agatha, after which she became a patron saint of the republic.
The advance of Napoleon's army in 1797 presented a brief threat to the independence of San Marino, but the country was saved from losing its liberty thanks to one of its regents, Antonio Onofri, who managed to gain the respect and friendship of Napoleon. Thanks to his intervention, Napoleon, in a letter delivered to Gaspard Monge, scientist and commissary of the French Government for Science and Art, promised to guarantee and protect the independence of the Republic, even offering to extend its territory according to its needs. The offer was declined by the regents, fearing future retaliation from other states' revanchism.
During the later phase of the Italian unification process in the 19th century, San Marino served as a refuge for many people persecuted because of their support for unification, including Giuseppe Garibaldi and his wife Anita.
The government of San Marino made United States President Abraham Lincoln an honorary citizen. He wrote in reply, saying that the republic proved that "government founded on republican principles is capable of being so administered as to be secure and enduring."
During World War I, when Italy declared war on Austria-Hungary on 23 May 1915, San Marino remained neutral and Italy adopted a hostile view of Sammarinese neutrality, suspecting that San Marino could harbour Austrian spies who could be given access to its new radiotelegraph station. Italy tried to forcibly establish a detachment of Carabinieri in the republic and then cut the republic's telephone lines when it did not comply. Two groups of ten volunteers joined Italian forces in the fighting on the Italian front, the first as combatants and the second as a medical corps operating a Red Cross field hospital. The existence of this hospital later caused Austria-Hungary to suspend diplomatic relations with San Marino.
After the war, San Marino suffered from high rates of unemployment and inflation, leading to increased tensions between the lower and middle classes. The latter, fearing that the moderate government of San Marino would make concessions to the lower class majority, began to show support for the Sammarinese Fascist Party ("Partito Fascista Sammarinese", PFS), founded in 1922 and styled largely on their Italian counterpart. PFS rule lasted from 1923 to 1943, and during this time they often sought support from Benito Mussolini's fascist government in Italy.
During World War II, San Marino remained neutral, although it was wrongly reported in an article from "The New York Times" that it had declared war on the United Kingdom on 17 September 1940. The Sammarinese government later transmitted a message to the British government stating that they had not declared war on the United Kingdom.
Three days after the fall of Benito Mussolini in Italy, PFS rule collapsed and the new government declared neutrality in the conflict. The Fascists regained power on 1 April 1944 but kept neutrality intact. Despite that, on 26 June 1944, San Marino was bombed by the Royal Air Force, in the belief that San Marino had been overrun by German forces and was being used to amass stores and ammunition. The Sammarinese government declared on the same day that no military installations or equipment were located on its territory, and that no belligerent forces had been allowed to enter. San Marino accepted thousands of civilian refugees when Allied forces went over the Gothic Line. In September 1944, it was briefly occupied by German forces, who were defeated by Allied forces in the Battle of San Marino.
San Marino had the world's first democratically elected communist government – a coalition between the Sammarinese Communist Party and the Sammarinese Socialist Party, which held office between 1945 and 1957.
San Marino is the world's smallest republic, although Nauru challenged that claim on gaining independence in 1968, Nauru's land mass being only 21 km². However, Nauru's jurisdiction over its surrounding waters covers an area thousands of times greater than the territory of San Marino. San Marino became a member of the Council of Europe in 1988 and of the United Nations in 1992. It is neither a member of the European Union nor of the Eurozone, although it uses the euro as its currency.
During the COVID-19 pandemic, San Marino had, as of June 2020, the highest death rate per capita of any country.
San Marino is an enclave (landlocked) surrounded by Italy in Southern Europe, on the border between the regions of Emilia-Romagna and Marche and about 10 km from the Adriatic coast at Rimini. Its hilly topography, with no flat ground, is part of the Apennine mountain range. The highest point in the country, the summit of Monte Titano, is 739 m above sea level. San Marino has no still or contained bodies of water of any significant size.
It is one of only three countries in the world to be completely enclosed by another country (the others being Vatican City, also enclosed by Italy, and Lesotho, enclosed by South Africa). It is the third smallest country in Europe, after Vatican City and Monaco, and the fifth smallest country in the world.
The climate of San Marino is a humid subtropical climate (Köppen climate classification: Cfa), with continental influences, having warm summers and cool winters that are typical of inland areas of the central Italian peninsula. Snowfalls are common and heavy almost every winter, especially above 400–500 m of altitude.
San Marino has the political framework of a parliamentary representative democratic republic: the captains regent are both heads of state and heads of government, and there is a pluriform multi-party system. Executive power is exercised by the government. Legislative power is vested in both the government and the Grand and General Council. The judiciary is independent of the executive and the legislature.
San Marino is considered to have the earliest written governing documents still in effect.
San Marino was originally led by the Arengo, initially formed from the heads of each family. In the 13th century, power was given to the Grand and General Council. In 1243, the first two captains regent were nominated by the Council; this method of nomination is still in use.
The legislature of the republic is the Grand and General Council ("Consiglio grande e generale"). The Council is a unicameral legislature with 60 members. There are elections every five years by proportional representation in all nine administrative districts. These districts (townships) correspond to the old parishes of the republic.
Citizens 18 years or older are eligible to vote. Besides general legislation, the Grand and General Council approves the budget and elects the captains regent, the State Congress (composed of ten secretaries with executive power), the Council of Twelve (which forms the judicial branch during the period of legislature of the Council), the Advising Commissions, and the Government Unions. The council also has the power to ratify treaties with other countries. The council is divided into five different Advising Commissions consisting of fifteen councillors who examine, propose, and discuss the implementation of new laws that are on their way to being presented on the floor of the council.
Every six months, the council elects two captains regent to be the heads of state. The regents are chosen from opposing parties so that there is a balance of power. They serve a six-month term. The investiture of the captains regent takes place on 1 April and 1 October in every year. Once this term is over, citizens have three days in which to file complaints about the captains' activities. If they warrant it, judicial proceedings against the ex-head(s) of state can be initiated.
The practice of having two heads of state, like Roman consuls, chosen in frequent elections, is derived directly from the customs of the Roman Republic. The Council is equivalent to the Roman Senate; the captains regent, to the consuls of ancient Rome. It is thought the inhabitants of the area came together as Roman rule collapsed to form a rudimentary government for their own protection from foreign rule.
San Marino is a multi-party democratic republic. A new election law in 2008 raised the threshold for small parties entering Parliament, causing political parties to organise themselves into two alliances: the right-wing Pact for San Marino, led by the San Marinese Christian Democratic Party; and the left-wing Reforms and Freedom, led by the Party of Socialists and Democrats, a merger of the Socialist Party of San Marino and the former communist Party of Democrats. The 2008 general election was won by the Pact for San Marino with 35 seats in the Grand and General Council against Reforms and Freedom's 25.
On 1 October 2007, Mirko Tomassoni was elected as one of the two captains regent, making him the first disabled person ever to have been elected to the office.
San Marino has had more female heads of state than any other country: 15 as of October 2014, including three who served twice. In the legal profession, while the Order of Lawyers and Notaries of the Republic of San Marino [Ordine degli Avvocati e Notai della Repubblica di San Marino] exists, there is no clear indication of how different demographic groups have fared in the field.
San Marino is divided into nine municipalities, known locally as "castelli" (meaning "castles"): Acquaviva, Borgo Maggiore, Chiesanuova, Domagnano, Faetano, Fiorentino, Montegiardino, the City of San Marino, and Serravalle.
There are also eight minor municipalities.
The largest settlement of the Republic is Dogana, which is not an autonomous "castello", but rather belongs to the Castello of Serravalle.
In a similar way to an Italian "comune", each "castello" includes a main settlement, called the "capoluogo", which is the seat of the "castello", and some even smaller localities known as "frazioni".
The republic is made up of 43 parishes named "curacies" (It: "curazie"):
Cà Berlone, Cà Chiavello, Cà Giannino, Cà Melone, Cà Ragni, Cà Rigo, Cailungo, Caladino, Calligaria, Canepa, Capanne, Casole, Castellaro, Cerbaiola, Cinque Vie, Confine, Corianino, Crociale, Dogana, Falciano, Fiorina, Galavotto, Gualdicciolo, La Serra, Lesignano, Molarini, Montalbo, Monte Pulito, Murata, Pianacci, Piandivello, Poggio Casalino, Poggio Chiesanuova, Ponte Mellini, Rovereta, San Giovanni sotto le Penne, Santa Mustiola, Spaccio Giannoni, Teglio, Torraccia, Valdragone, Valgiurata and Ventoso.
San Marino's military forces are among the smallest in the world. National defence is, by arrangement, the responsibility of Italy's armed forces. Different branches have varied functions, including: performing ceremonial duties; patrolling borders; mounting guard at government buildings; and assisting police in major criminal cases. The police are not included in the military of San Marino.
Once at the heart of San Marino's army, the Crossbow Corps is now a ceremonial force of approximately 80 volunteers. Since 1295, the Crossbow Corps has provided demonstrations of crossbow shooting at festivals. Its uniform design is medieval. While still a statutory military unit, the Crossbow Corps has no military function today.
The Guard of the Rock is a front-line military unit in the San Marino armed forces, a state border patrol, with responsibility for patrolling borders and defending them. In their role as Fortress Guards they are responsible for guarding the Palazzo Pubblico in San Marino City, the seat of national government.
In this role they are the forces most visible to tourists, and are known for their colourful ceremony of Changing the Guard. Under the 1987 statute the Guard of the Rock are all enrolled as "Criminal Police Officers" (in addition to their military role) and assist the police in investigating major crime. The uniform of the Guard of the Rock is a distinctive red and green.
The Guard of the Grand and General Council, commonly known as the Guard of the Council or locally as the "Guard of Nobles", formed in 1740, is a volunteer unit with ceremonial duties. Due to its striking blue, white, and gold uniform, it is perhaps the best-known part of the Sammarinese military, and appears on countless postcard views of the republic. The functions of the Guard of the Council are to protect the captains regent, and to defend the Grand and General Council during its formal sessions. They also act as ceremonial bodyguards to government officials at both state and church festivals.
In former times, all families with two or more adult male members were required to enroll half of them in the Company of Uniformed Militia. This unit remains the basic fighting force of the armed forces of San Marino, but is largely ceremonial. It is a matter of civic pride for many Sammarinese to belong to the force, and all citizens with at least six years residence in the republic are entitled to enroll.
The uniform is dark blue, with a kepi bearing a blue and white plume. The ceremonial form of the uniform includes a white cross-strap, and white and blue sash, white epaulets, and white decorated cuffs.
The Military Ensemble is formally part of the Army Militia and is the ceremonial military band of San Marino. It consists of approximately 50 musicians. The uniform is similar to that of the Army Militia. Military Ensemble music accompanies most state occasions in the republic.
Established in 1842, the Gendarmerie of San Marino is a militarised law enforcement agency. Its members are full-time and have responsibility for the protection of citizens and property, and the preservation of law and order.
The entire military corps of San Marino depends upon the co-operation of full-time forces and their retained (volunteer) colleagues, known as the "Corpi Militari Volontari", or Voluntary Military Force.
San Marino is a developed country and although it is not a European Union member, it is allowed to use the euro as its currency by arrangement with the Council of the European Union; it is also granted the right to use its own designs on the national side of the euro coins. Before the euro, the Sammarinese lira was pegged to, and exchangeable with, the Italian lira. The small number of Sammarinese euro coins, as was the case with the lira before it, are primarily of interest to coin collectors.
San Marino's per capita GDP and standard of living are comparable to those of Italy. Key industries include banking, electronics, and ceramics. The main agricultural products are wine and cheese. San Marino imports mainly staple goods from Italy.
San Marino's postage stamps, which are valid for mail posted in the country, are mostly sold to philatelists and are an important source of income. San Marino is no longer a member of the Small European Postal Administration Cooperation.
It has the world's highest rate of car ownership, being the only country with more vehicles than people.
The corporate profits tax rate in San Marino is 19%, capital gains are subject to a five per cent tax, and interest is subject to a 13% withholding tax.
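As a rough illustration of how these three headline rates combine, the sketch below applies them to invented amounts. The function name and the example figures are hypothetical, chosen only to make the arithmetic concrete; this is not an official computation of Sammarinese tax liability.

```python
# Headline rates stated above: 19% on corporate profits,
# 5% on capital gains, 13% withholding on interest.
CORPORATE_RATE = 0.19
CAPITAL_GAINS_RATE = 0.05
INTEREST_WITHHOLDING_RATE = 0.13

def tax_due(profits: float, gains: float, interest: float) -> float:
    """Return total tax on the three income types, each taxed at its own rate."""
    return (profits * CORPORATE_RATE
            + gains * CAPITAL_GAINS_RATE
            + interest * INTEREST_WITHHOLDING_RATE)

# Hypothetical example: 100,000 EUR profits, 10,000 EUR gains, 5,000 EUR interest.
print(tax_due(100_000, 10_000, 5_000))  # 19000 + 500 + 650 = 20150.0
```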
In 1972, a value-added tax (VAT) system was introduced in Italy, and was applied in San Marino, in accordance with the 1939 friendship treaty. In addition, a tax on imported goods, to be levied by San Marino, was established. Such taxes, however, were not, and are not, applied to national products. Until 1996, goods manufactured and sold in San Marino were not subject to indirect taxation.
Under the European Union customs agreement, San Marino continues to levy taxes, the equivalent of an import duty, on imported goods. A general VAT was also introduced, replacing the Italian VAT.
The tourism sector contributes over 22% of San Marino's GDP, with approximately 2 million tourists having visited in 2014.
San Marino and Italy have been party to a series of conventions since 1862 that govern some economic activities on San Marino's territory.
Cultivation of tobacco and production of goods which are subject to Italy's government monopoly are forbidden in San Marino. Direct import is forbidden; all goods coming from a third party have to travel through Italy before reaching the country. Although it is allowed to print its own postal stamps, San Marino is not allowed to coin its own currency and is obliged to use Italy's mint; the agreement does not affect the right of the Republic of San Marino to continue to issue gold coins denominated in Scudi (legal value of 1 gold Scudo is 37.50 euros). Gambling is legal and regulated; however, casinos were outlawed prior to 2007. There is currently one legally operating casino.
In exchange for these limitations, Italy provides San Marino with an annual stipend, and at cost, sea salt (not more than 250 tonnes per year), tobacco (40 tonnes), cigarettes (20 tonnes) and matches (unlimited amount).
At the border there are no formalities with Italy. However, at the tourist office visitors can purchase officially cancelled souvenir stamps for their passports.
San Marino has a population of approximately 33,000, with 4,800 foreign residents, most of whom are Italian citizens. Another 12,000 Sammarinese live abroad (5,700 in Italy, 3,000 in the US, 1,900 in France and 1,600 in Argentina).
The first census since 1976 was conducted in 2010. Results were expected by the end of 2011; however, 13% of families did not return their forms.
The primary language spoken is Italian; Romagnol is also widely spoken.
San Marino is a predominantly Catholic state—over 97% of the population profess the Roman Catholic faith, but Catholicism is not an established religion. Approximately half of those who profess to be Catholic practice the faith. There is no episcopal see in San Marino, although its name is part of the present diocesan title. Historically, the various parishes in San Marino were divided between two Italian dioceses, mostly in the Diocese of Montefeltro, and partly in the Diocese of Rimini. In 1977, the border between Montefeltro and Rimini was readjusted so that all of San Marino fell within the diocese of Montefeltro. The bishop of Montefeltro-San Marino resides in Pennabilli, in Italy's province of Pesaro e Urbino.
Under the income tax rules, taxpayers have the right to request that 0.3% of their income tax be allocated to the Catholic Church or to charities; the eligible churches also include the Waldensian Church and Jehovah's Witnesses.
The Roman Catholic Diocese of San Marino-Montefeltro was until 1977 the historic diocese of Montefeltro. It is a suffragan of the archdiocese of Ravenna-Cervia. The current diocese includes all the parishes of San Marino. The earliest mention of Montefeltro, as "Mons Feretri", is in the diplomas by which Charlemagne confirmed the donation of Pepin. The first known bishop of Montefeltro was Agatho (826), whose residence was at San Leo. Under Bishop Flaminios Dondi (1724) the see was again transferred to San Leo, but later it returned to Pennabilli. The historic diocese was a suffragan of the archdiocese of Urbino. Since 1988 there has formally been an apostolic nunciature to the republic, but it is vested in the nuncio to Italy.
There has been a Jewish presence in San Marino for at least 600 years. The first mention of Jews in San Marino dates to the late 14th century, in official documents recording the business transactions of Jews. There are many documents throughout the 15th to 17th centuries describing Jewish dealings and verifying the presence of a Jewish community in San Marino. Jews were permitted official protection by the government.
During World War II, San Marino provided a haven for more than 100,000 Italians and Jews (approximately 10 times the Sammarinese population at the time) from Nazi persecution. Today, few Jews remain. In 2019 the "Chapel of Three Religions", the first building of its kind devoted to interfaith dialogue, was inaugurated.
There are 220 km of roads in the country, the main road being the San Marino Highway. Authorities license private vehicles with distinctive Sammarinese license plates, which are white with blue figures and the coat of arms, usually a letter followed by up to four numbers. Many vehicles also carry the international vehicle identification code (in black on a white oval sticker), which is "RSM".
There are no public airports in San Marino, but there is a small private airstrip located in Torraccia and an international heliport located in Borgo Maggiore. Most tourists who arrive by air land at Federico Fellini International Airport close to the city of Rimini, then make the transfer by bus.
Two rivers flow through San Marino, but there is no major water transport, and no port or harbour.
San Marino has limited public transport facilities. There is a regular bus service between Rimini and the city of San Marino that is popular with both tourists and workers commuting to San Marino from Italy. This service stops at approximately 20 locations in Rimini and within San Marino, with its two terminus stops at Rimini railway station and San Marino coach station.
A limited licensed taxi service operates nationwide. There are seven licensed taxi companies operating in the republic, and Italian taxis regularly operate within San Marino when carrying passengers picked up in Italian territory.
There is a 300 m aerial tramway connecting the City of San Marino on top of Monte Titano with Borgo Maggiore, a major town in the republic with the second largest population of any Sammarinese settlement. From there a further connection is available to the nation's largest settlement, Dogana, via the local bus service.
Two aerial tramway cars (gondolas) operate, with service provided at roughly 15-minute intervals throughout the day. A third vehicle is available on the system, a service car for the use of engineers maintaining the tramway.
Today, there is no railway in San Marino, but for a short period before World War II, it had a single narrow-gauge line called the Ferrovia Rimini–San Marino which connected the country with the Italian rail network at Rimini. Because of the difficulties in accessing the capital, City of San Marino, with its mountain-top location, the terminus station was planned to be located in the village of Valdragone, but was extended to reach the capital through a steep and winding track comprising many tunnels. The railway was opened on 12 June 1932.
An advanced system for its time, it was an electric railway, powered from overhead cables. It was well built and had a high frequency of passengers, but was almost completely destroyed during World War II. Many facilities such as bridges, tunnels, and stations remain visible today, and some have been converted to parks, public footpaths, or traffic routes.
The Three Towers of San Marino are located on the three peaks of Monte Titano in the capital. They are depicted on both the flag of San Marino and its coat of arms. The three towers are: "Guaita", the oldest of the three (it was constructed in the 11th century); the 13th-century "Cesta", located on the highest of Monte Titano's summits; and the 14th-century "Montale", on the smallest of Monte Titano's summits, still privately owned.
The "Università degli Studi della Repubblica di San Marino" (University of the Republic of San Marino) is the main university, which includes the "Scuola Superiore di Studi Storici di San Marino" (Graduate School of Historical Studies), a distinguished research and advanced international study centre governed by an international Scientific Committee coordinated by the emeritus historian Luciano Canfora. Other important institutes are the "Istituto Musicale Sammarinese" (Sammarinese Musical Institute) and the Akademio Internacia de la Sciencoj San Marino or "Accademia Internazionale delle Scienze San Marino" (International Academy of Sciences San Marino). The latter is known for adopting Esperanto as the language for teaching and for scientific publications; further, it makes wide use of electronic educational technology (also called e-learning).
Italian author Umberto Eco attempted to create a "university without physical structures" in San Marino.
In San Marino football is the most popular sport. Basketball and volleyball are also popular. The three sports have their own federations, the San Marino Football Federation, the San Marino Basketball Federation and the San Marino Volleyball Federation.
The San Marino national football team has had little success, being made up of part-timers, never qualifying for a major tournament, and recording only one win in over 25 years, a 1–0 victory over Liechtenstein in 2004. They have drawn four other matches, most notably a 0–0 draw with Turkey in 1993 during the European qualifiers for the 1994 FIFA World Cup. In the same qualifying competition, Davide Gualtieri scored 8.3 seconds into a match against England; this goal held the record for the fastest in international football until 2016.
A Formula One race, the San Marino Grand Prix, was named after the state, although it did not take place there. Instead, it was held at the Autodromo Enzo e Dino Ferrari in the Italian town of Imola, about 100 km northwest of San Marino. Roland Ratzenberger and Ayrton Senna suffered fatal accidents a day apart during the 1994 Grand Prix. This international event was removed from the calendar in 2007.
The San Marino and Rimini's Coast motorcycle Grand Prix was reinstated in the schedule in 2007 and takes place at the Misano World Circuit Marco Simoncelli, as does San Marino's round of the World Superbike Championship.
San Marino has a professional baseball team which plays in Italy's top division. It has participated in the European Cup tournament for the continent's top club sides several times, hosting the event in 1996, 2000, 2004, and 2007. It won the championship in 2006 and was a runner-up in 2010.
Together with Italy, San Marino held the 2019 UEFA European Under-21 Championship, with teams playing at the Stadio Olimpico in Serravalle.
San Marino has had little success at the Olympic Games, winning no medals.
The cuisine of San Marino is extremely similar to Italian, especially that of the adjoining Emilia-Romagna and Marche regions, but it has a number of its own unique dishes and products. Its best known is probably the "Torta Tre Monti" ("Cake of the Three Mountains" or "Cake of the Three Towers"), a wafer layered cake covered in chocolate depicting the Three Towers of San Marino. The country also has a small wine industry.
The site San Marino: Historic Centre and Mount Titano became part of the UNESCO World Heritage List in 2008. The decision was taken during the 32nd session of the UNESCO World Heritage Committee, composed of representatives of 21 countries, convened in Quebec City, Canada.
The country has a long and rich musical tradition, closely linked to that of Italy, but which is also highly independent in itself. A well-known 17th-century composer is Francesco Maria Marini. The pop singer Little Tony achieved considerable success in the United Kingdom and Italy in the 1950s and 1960s.
San Marino has taken part in the Eurovision Song Contest ten times, achieving two final qualifications to date: San Marino native Valentina Monetta (a then three-time, eventually four-time contestant) with "Maybe" in 2014, and Turkish singer Serhat with "Say Na Na Na", which placed 19th in the final in 2019.
History of San Marino
As the only surviving medieval microstate on the Italian peninsula, the history of San Marino is intertwined with the medieval, Renaissance and modern-day history of the Italian peninsula, beginning with its traditional independence from the Roman Empire in 301 AD, during the reign of Emperor Diocletian.
Like Andorra, Liechtenstein and Monaco, it is a surviving example of the typical medieval city-states of Germany, Italy and the Pyrenees.
The country, whose independence has ancient origins, claims to be the world's oldest surviving republic. According to legend, San Marino was founded in 301 AD when a Christian stonemason named Marinus (lit. "from the sea"), later venerated as Saint Marinus, emigrated in 297 AD from the Dalmatian island of Rab after Emperor Diocletian issued a decree calling for the reconstruction of the city walls of Rimini, destroyed by Liburnian pirates. Marinus later became a deacon and was ordained by Gaudentius, the Bishop of Rimini; shortly after, he was "recognised" and accused by an insane woman of being her estranged husband, whereupon he quickly fled to Monte Titano to build a chapel and monastery and live as a hermit. Later, the State of San Marino would bud from the centre created by this monastery. Living in geographical isolation from the Diocletianic Persecution of Christians at the time, the mountain people were able to live peaceful lives. When this settlement of "refugee" mountain people was eventually discovered, the owner of the land, Felicissima, a sympathetic lady of Rimini, bequeathed it to the small Christian community of mountain dwellers, recommending that they remain always united.
Evidence of the existence of a community on Mount Titano dates back to the Middle Ages: a monk named Eugippio reports, in several documents going back to 511, that another monk lived there. In memory of the stonecutter, the land was renamed "Land of San Marino", and it was later changed to its present-day name, "Republic of San Marino".
Later papers from the 9th century describe a well-organised, open and proud community; they report that the bishop ruled this territory.
In the Lombard age, San Marino was a fief of the Dukes of Spoleto (linked to the Papal States), but the free "comune" dates to the tenth century.
The original government structure was composed of a self-governed assembly known as the "Arengo", which consisted of the heads of each family (as in the original Roman Senate, the "Patres"). In 1243, the positions of Captains Regent ("Capitani Reggenti") were established as the joint heads of state. The state's earliest statutes date back to 1263. The Holy See confirmed the independence of San Marino in 1631.
In quick succession, the lords of Montefeltro, the Malatesta of Rimini, and the lords of Urbino attempted to conquer the little town, but without success. In 1320 the community of Chiesanuova chose to join the country. The land area of San Marino consisted only of Mount Titano until 1463, at which time the republic entered into an alliance against Sigismondo Pandolfo Malatesta, duke of Rimini, who was later defeated. As a result, Pope Pius II gave San Marino some castles and the towns of Fiorentino, Montegiardino and Serravalle. Later that year, the town of Faetano joined the republic on its own accord. Since then, the size of San Marino has remained unchanged.
San Marino has been occupied by foreign militaries three times in its history, each for only a short period of time. Two of these periods were in the feudal era. In 1503, Cesare Borgia occupied the Republic until the death of his father some months later.
On June 4, 1543 Fabiano di Monte San Savino, nephew of the later Pope Julius III, attempted to conquer the republic in a plan involving 500 infantry men and some cavalry. The group failed as they got lost in a dense fog, which the Sammarinese attributed to Saint Quirinus, whose feast day it was, and which afterwards has been celebrated annually in the country.
San Marino faced many potential threats during the feudal period, so a treaty of protection was signed in 1602 with Pope Clement VIII, which came into force in 1631.
On October 17, 1739, Cardinal Giulio Alberoni, Papal Governor of Ravenna, used military force to occupy the country, imposed a new constitution, and endeavored to force the Sammarinesi to submit to the government of the Papal States. He was aiding certain rebels, and acting possibly contrary to the orders of Pope Clement XII. However, civil disobedience occurred, and clandestine notes were written to the Pope to appeal for justice. On February 5, 1740, 3.5 months after the occupation began, the Pope recognized San Marino's rights, restoring independence. February 5 is the feast day of Saint Agatha, after which she became a patron saint of San Marino.
The basis of San Marino's government is the multi-document Constitution of San Marino, the first components of which were promulgated and became effective on 1 September 1600. Whether these documents amount to a written constitution depends upon how one defines the term. The political scientist Jorri Duursma claims that "San Marino does not have an official constitution as such. The first legal documents which mentioned San Marino's institutional organs were the Statutes of 1600."
After Napoleon's campaign in Italy, San Marino found itself on the border between the Kingdom of Italy and its long-time ally, the Papal States. On February 5, 1797, a letter from General Louis Alexandre Berthier addressed to the Regents demanded the arrest and consignment of the Bishop of Rimini, Monsignor Vincenzo Ferretti, accused of instigating crimes against France, who had fled with all his possessions to San Marino; refusal would result in the immediate intervention of French troops.
The Government of San Marino replied that it would do everything possible to fulfil the request, even though, in reality, the bishop was able to flee across the border.
A solution was found by one of the Regents, Antonio Onofri, who inspired in Napoleon a friendship and respect toward the sovereign state. Napoleon was won over by the common cause he saw in the ideals of liberty and humanity extolled in San Marino's humble founding, and wrote in recognition of its cultural value in a letter to Gaspard Monge, scientist and commissary of the French Government for the Sciences and the Arts, who was at the time stationed in Italy, further promising to guarantee and protect the independence of the Republic, even going so far as to offer to extend its territory according to its needs. While grateful for the former, San Marino politely declined the offer of territorial expansion.
Napoleon issued orders that exempted San Marino's citizens from any type of taxation and gave them 1,000 quintals (about 100,000 kg, or 220,000 lb) of wheat as well as four cannons, although, for unknown reasons, the cannons were ultimately never brought into San Marino.
The mystery behind Napoleon's treatment of San Marino may be better understood in light of the ongoing French Revolution (1789–1799) where France was undergoing drastic political reform. At this time, the Republic of San Marino and the recently established First French Republic (est. 1792) would have been ideologically aligned.
The state was recognized by Napoleon in the Treaty of Tolentino in 1797 and by the Congress of Vienna in 1815. In 1825 and 1853, new attempts to subject it to the Papal States failed; and its wish to be left out of Giuseppe Garibaldi's Italian unification in the mid-nineteenth century was honoured by Garibaldi in gratitude for its indiscriminately taking in refugees in earlier years, many of whom were supporters of unification, including Garibaldi himself and 250 followers. Although he faced many hardships (his wife Anita, who was carrying their fifth child, died near Comacchio before they could reach the refuge), the hospitality Garibaldi received on San Marino would later prove a shaping influence on his diplomatic manner, presaging the themes and similar language used in his political correspondence, such as his letter to Joseph Cowen.
In the spring of 1861, shortly before the beginning of the American Civil War, the government of San Marino wrote a letter (in "perfect Italian on one side, and imperfect but clear English on the other") to United States President Abraham Lincoln, proposing an "alliance" between the two democratic nations and offering the President honorary San Marino citizenship. Lincoln accepted the offer, writing (with his Secretary of State, William H. Seward) in reply that San Marino proved that "government founded on republican principles is capable of being so administered as to be secure and enduring." Presaging a theme he would bring to the fore, using similar language, in his Gettysburg Address in 1863, Lincoln wrote: "You have kindly adverted to the trial through which this Republic is now passing. It is one of deep import. It involves the question whether a Representative republic, extended and aggrandized so much as to be safe against foreign enemies can save itself from the dangers of domestic faction. I have faith in a good result..."
After the unification of the Kingdom of Italy a treaty in 1862 confirmed San Marino's independence. It was revised in 1872.
Towards the end of the 19th century, San Marino experienced economic depression: a large increase in the birth rate coupled with a widening of the gap between agricultural and industrial development led people to seek their fortunes in more industrialised countries. The Sammarinese first sought seasonal employment in Tuscany, Rome, Genoa and Trieste, but in the latter half of the century whole families were uprooted, with the first permanent migrations to the Americas (United States, Argentina and Uruguay) and to Greece, Germany and Austria. This phenomenon continued into the 20th century, with a pause during the First World War and an increase during the Fascist period in Italy. Even today there are still large concentrations of San Marino citizens residing in foreign countries, above all in the United States, in France and in Argentina. There are more than 15,000 San Marino citizens spread throughout the world.
An important turning-point in the political and social life of the country took place on March 25, 1906, when the Arengo met; out of 1,054 heads of family, 805 were present. Each head of family received a ballot which contained two questions: the first asking if the Government of San Marino should be headed by a Principal and Sovereign Council, and the second, if the number of members of the Council should be proportionate between the city population and the rural population. This was the first move towards a referendum and true democracy in San Marino. In the past, similar attempts had been made by people such as Pietro Franciosi, but without result. In the same year a second referendum took place on May 5, dealing with the first electoral laws, and on June 10 the first political elections in San Marino's history resulted in a victory for the proponents of democracy.
When Italy declared war on Austria–Hungary on 23 May 1915, San Marino remained neutral. Italy, suspecting that San Marino could harbour Austrian spies who could be given access to its new radiotelegraph station, tried to forcibly establish a detachment of Carabinieri on its territory and then suspended all telephone connections with the Republic when it did not comply.
Two groups of 10 volunteers each did join Italian forces in the fighting on the Italian front, the first as combatants and the second as a medical corps operating a Red Cross field hospital. It was the presence of this hospital that later caused Austrian authorities to suspend diplomatic relations with San Marino.
Although propaganda articles appeared in "The New York Times" as early as 4 June 1915 claiming that San Marino declared war on Austria–Hungary, the republic never entered the war.
San Marino in the 1920s, still a largely agrarian society, experienced political turmoil influenced by the events in Fascist Italy, culminating in June 1921 in the murder in Serravalle of Italian doctor and Fascist sympathiser Carlo Bosi by local leftists, which led to condemnation by the surrounding Italian population and threats of retaliation by Italian "squadristi". The government decided to ask Italy for help in the form of a detachment of 30 Carabinieri.
As in Italy, Fascism eventually took over the government of the Republic, with the Sammarinese Fascist Party forcing the Socialist newspaper "Nuovo Titano" to cease publication.
The 1930s was an era of public works and reinvention of the Republic's economy, with the construction of the San Marino–Rimini railway that connected it to the Italian railway network and the modernization of the country's infrastructure, which paved the way to its present status as a major tourist destination.
San Marino was mostly uninvolved in the Second World War. In September 1940, press reports claimed that it had declared war on Britain in support of Italy; however, this was later denied by the Sammarinese government.
On 26 June 1944, it was bombed by the British Royal Air Force, which mistakenly believed it had been overrun by German forces and was being used to amass stores and ammunition. The railway was destroyed and 63 civilians died during the operation. The British government later admitted the bombing had been unjustified and that it had been executed on receipt of erroneous information.
San Marino's hope to escape further involvement was shattered on 27 July 1944 when Major Gunther, commander of the German forces in Forlì, delivered a letter from German headquarters in Ferrara to San Marino's government declaring that the country's sovereignty could not be respected if, in view of military requirements, the necessity of transit of troops and vehicles arose. The communiqué, however, underlined that wherever possible occupation would be avoided.
Fears were confirmed when on 30 July a German medical corps colonel presented himself with an order for the requisition of two public buildings for the establishment of a military hospital. On the following day, 31 July 1944, in view of the likely invasion by German forces, the state sent three letters of protest: one to Joachim von Ribbentrop, the German Foreign Minister, one to Adolf Hitler and one to Benito Mussolini, the latter delivered by a delegation to Serafino Mazzolini, a high-ranking diplomat in the Italian Ministry of Foreign Affairs. The delegation demanded to meet Mussolini to ask that San Marino's neutrality be respected; the following day Mazzolini took them to see Mussolini, who promised to contact the German authorities and intervene in favour of San Marino's request.
San Marino was a refuge for over 100,000 civilians who sought safety during the passing of Allied forces over the Gothic Line during the Battle of Rimini, an enormous effort of relief by the inhabitants of a country that at that time counted only 15,000 people.
Despite all this, the Germans and Allies clashed on San Marino's soil in late September 1944 at the Battle of Monte Pulito; Allied troops occupied San Marino after that, but only stayed for two months before returning the Republic's sovereignty.
After the war, San Marino became the first country in Western Europe to be ruled by a communist party (the Sammarinese Communist Party, in coalition with the Sammarinese Socialist Party) through democratic elections. The coalition lasted from 1945 to 1957, when the "fatti di Rovereta" occurred. This was the first time anywhere in the world that a communist government had been democratically elected into power.
The Sammarinese Communist Party peacefully dissolved in 1990 and restructured as the Sammarinese Democratic Progressive Party replacing the former hammer-and-sickle logo (a communist motif representing the rights of workers) with the image of a drawing of a dove by Pablo Picasso.
Universal suffrage was achieved by San Marino in 1960. Having joined the Council of Europe as a full member in 1988, San Marino held the rotating chair of the organisation during the first half of 1990.
San Marino became a member of the United Nations in 1992. In 2002 it signed a treaty with the OECD, agreeing to greater transparency in banking and taxation matters to help combat tax evasion.
Politics of San Marino
The politics of the state of San Marino takes place in a framework of a parliamentary representative democratic republic, whereby the Captains Regent are the heads of state and heads of government, and of a multi-party system. Executive power is exercised by the government. Legislative power is vested in both the government and the Grand and General Council. The judiciary is independent of the executive and the legislature.
San Marino was originally led by the Arengo, initially formed with the heads of each family. In the 13th century, power was given to the Great and General Council. In 1243, the first two Captains Regent were nominated by the Council, and this method of nomination is still in use today.
The legislature of the republic is the Grand and General Council ("Consiglio grande e generale"). The Council is a unicameral legislature which has 60 members with elections occurring every 5 years under a majoritarian representation system in a sole national constituency. Citizens eighteen years or older are eligible to vote. Besides general legislation, the Grand and General Council approves the budget and elects the Captains Regent, the State Congress, the Council of Twelve, the Advising Commissions, and the Government Unions. The Council also has the power to ratify treaties with other countries. The Council is divided into five different Advising Commissions consisting of 15 councilors which examine, propose, and discuss the implementation of new laws that are on their way to being presented on the floor of the Council.
Every six months, the Council elects two Captains Regent to be the heads of state. The foundational theory was to create a balance of power or, at least, reciprocal control. They serve a 6-month term. The investiture of the Captains Regent takes place on April 1 and October 1 in every year. Once this term is over, citizens have 3 days in which to file complaints about the Captains' activities. If they warrant it, judicial proceedings against the ex-head(s) of state can be initiated.
The practice of dual heads of state, according to the principle of Collegiality, as well as the frequent re-election of same, are derived directly from the customs of the Roman Republic. The Council is equivalent to the Roman Senate; the Captains Regent, to the consuls of ancient Rome.
The Congress of State is the government of the country and wields the executive power. It is composed of a variable number of Secretaries of State, which by law cannot exceed ten, appointed by the Grand and General Council at the beginning of the legislature. For this reason, the areas of competence of the various Secretaries are not fixed, as some may be merged depending on the number of Secretaries. The law identifies ten sectors of the public administration for which the Secretaries are politically responsible.
The Captains Regent participate in the Congress with coordination powers, but no voting rights. While all the Secretaries are in principle equally important, over the years the Secretary of State for Foreign and Political Affairs has assumed many of the prerogatives of a prime minister or head of government.
San Marino is a multi-party democratic republic. The two main parties are the Sammarinese Christian Democratic Party (PDCS) and the Party of Socialists and Democrats (PSD, a merger of the Socialist Party of San Marino and the Party of Democrats) in addition to several other smaller parties. It is difficult for any party to gain a pure majority and most of the time the government is run by a coalition.
Because tourism accounts for more than 50% of the economic sector, the government relies not only on taxes and customs for revenue, but also on the sale of coins and postage stamps to collectors throughout the world. In addition, the Italian Government pays San Marino an annual budget subsidy provided under the terms of the Basic Treaty with Italy.
The Council of Twelve (Italian: "Consiglio dei XII") serves as the supreme tribunal of the republic. The Grand and General Council elects the Council of Twelve, whose members remain in office until the next general election. The Council of Twelve has appellate jurisdiction in the third instance. Two government inspectors represent the State in financial and patrimonial questions.
The Guarantors' Panel on the Constitutionality of Rules (Italian: "Collegio Garante della Costituzionalità delle Norme") is the highest court of San Marino in matters of constitutional law. The institution was established in February 2002, making it the youngest body of San Marino's constitutional order. Its members are also elected by the Grand and General Council.
The judicial system of San Marino is entrusted to foreign executives, both for historical and social reasons. The only native judges are the Justices of the Peace, who handle only civil cases where sums involved do not exceed €15,000.
San Marino carried out the last execution in its history in 1468 (by hanging). The death penalty was abolished for murder on March 12, 1848, and for other crimes two years later.
The main issues confronting the current government include economic and administrative problems related to San Marino's status as a close financial and trading partner with Italy while at the same time remaining separated from the European Union (EU). The other priority issue will be to increase the transparency and efficiency in parliament and in relations among parliament, cabinet, and the Captains Regent.
San Marino participates in the following international organisations: Council of Europe, ECE, ICAO, ICC, ICFTU, ICRM, IFRCS, ILO, IMF, IOC, IOM (observer), ITU, OPCW, OSCE, UN, UNCTAD, UNESCO, UPU, WHO, WIPO, UNWTO.
Economy of San Marino
The economy of San Marino centres on industries such as banking, clothing and fabrics, electronics, ceramics, tiles, furniture, paints, and spirits and wines. In addition, San Marino sells collectible postage stamps to philatelists. The main agricultural products are wine and cheeses.
The per capita level of output and standard of living are comparable to those of Italy, which supplies much of its food. In addition, San Marino has a state budget surplus and no national debt. Income taxes are much lower than in Italy, and there are therefore extremely strict requirements to obtain citizenship. San Marino's per capita gross national product in 2000 stood at $32,000 with more than 50% of that coming from the tourism industry which draws about 3.15 million people annually.
One of the greatest sources of income from tourism comes from the sale of historic coins and stamps. In 1894, San Marino issued its first commemorative stamps, and stamp sales have since been part of a large livelihood in the republic. All ten of San Marino's post offices sell these stamps and collectible coins, including legal tender gold coins.
Traditional economic activities in San Marino were food crops, sheep farming, and stone quarrying. Today farming activities focus on grain, vines and orchards, as well as animal husbandry (cattle and swine).
Telecommunications in San Marino
This article provides an outline of the telecommunications infrastructure in San Marino.
Note: the four Italian mobile network operators Iliad, TIM, Vodafone and Wind Tre can also be received in San Marino.
San Marino has only one television network, San Marino RTV, which is owned by a company with the same name. In 1997, there were approximately 9,000 television sets in the country.
San Marino has two radio networks, Radio San Marino and Radio San Marino Classic, also owned by San Marino RTV. In 1997, there were approximately 16,000 radios in San Marino.
In 2010 there were 17,000 Internet users in San Marino. San Marino's internet domain is .sm.
Transport in San Marino
San Marino is a small European republic, with limited public transport facilities. It is an enclave in central Italy. The principal public transport links involve buses, helicopters, and an aerial tramway. There was a public rail network, a small part of which is preserved.
There is a disused railway to Rimini, with much of the infrastructure such as tunnels still intact.
There is a 300 m aerial tramway connecting the city of San Marino on top of Monte Titano with Borgo Maggiore, a major town in the republic with the second largest population of any Sammarinese settlement. Indeed, for the tourist visitor the aerial tramway gives the best available views of Borgo Maggiore, as the cars sweep low over the rooftops of the main town square. From here a further connection is available to the nation's largest settlement, Dogana, by means of a local bus service.
Two aerial tramway cars, known as gondolas, and numbered '1' and '2', operate in opposition on a cable, and a service is provided at roughly fifteen-minute intervals throughout the day. A third vehicle is available on the system, being a service car for the use of engineers maintaining the tramway.
The upper station of the aerial tramway serves no other purpose (although it is situated close to a tourist information office). However, the lower station in Borgo Maggiore has a number of retail and catering outlets situated within its overall structure.
There are 220 km of highways in the country, the main road being the San Marino Highway. Roads are well used by private car drivers. Sammarinese authorities license private vehicles with distinctive licence plates which are white with blue figures, usually a letter followed by up to four numbers. To the left of these figures is printed the national Coat of Arms of San Marino. Many vehicles also carry the international vehicle identification code (in black on a white oval sticker), which is "RSM". Since 2004 custom licence plates have also become available.
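As an illustrative aside, the plate format described above (a letter followed by up to four numbers) can be captured with a simple pattern. The regex below is a guess reconstructed from the prose, not an official specification of Sammarinese plates.

```python
import re

# Rough pattern for the described format: an optional uppercase letter
# followed by one to four digits. This is an illustrative assumption only.
PLATE_PATTERN = re.compile(r"^[A-Z]?\d{1,4}$")

for plate in ("A1234", "123", "AB123"):
    print(plate, bool(PLATE_PATTERN.match(plate)))
# Prints: A1234 True, 123 True, AB123 False
```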
A limited licensed taxi service operates nationwide. There are seven licensed taxi operating companies in the republic, and Italian taxis regularly operate within San Marino when carrying passengers picked up in Italian territory.
There is a regular international bus service between Rimini and the city of San Marino, popular with both tourists and tourist industry workers commuting to San Marino from Italy. This service stops at approximately twenty advertised locations in Rimini and within San Marino, with its two terminus stops at Rimini railway station and San Marino coach station, respectively.
San Marino also has its own local bus system within the republic, which provides a limited service connecting the capital and the smaller rural communities.
There is a small airfield located in Domagnano right next to the border; there is also an international heliport located in Borgo Maggiore. Most tourists who arrive by air land at Rimini's Federico Fellini Airport, Italy, and then make the transfer by bus.
Two rivers flow through San Marino, but there is no major water transport, and no major port or harbour.
Sammarinese Armed Forces
The Sammarinese Armed Forces are the national military defence forces of San Marino. They constitute one of the smallest military forces in the world, with the different branches having varied functions, including: performing ceremonial duties; patrolling borders; mounting guard at government buildings; and assisting police in major criminal cases. There is also a military Gendarmerie which is part of the military forces of the republic. The entire military corps of San Marino depends upon the co-operation of full-time forces and their retained (volunteer) colleagues, known as the "Corpi Militari Volontari", or Voluntary Military Force. National defence in the face of an aggressive world power is, by arrangement, the responsibility of Italy's armed forces. The component parts of the military (other than the purely historical Crossbow Corps) are distinguished (as in many nations) by distinctive cap badges, one each for the Fortress Guard (uniformed), Fortress Guard (artillery), Guard of the Council, Uniformed Militia, Military Ensemble (band), and Gendarmerie. There is no compulsory service; however, under special circumstances citizens aged 16 to 55 may be drafted for the defence of the state.
Although once at the heart of San Marino's army, the Crossbow Corps is now an entirely ceremonial force of about 70 volunteer soldiers. The Crossbow Corps has a continuous history from its first mention in the national statutes of 1295. Described by the Government as "The oldest military formation in the Republic, nominated in the statutes of 1295", its uniform is medieval in design, and although it is a statutory military unit, it has no actual military function today. By the mid twentieth century the Crossbow Corps had become largely defunct, save for parading on state holidays; but in 1956 the practice of training its members in crossbow shooting was revived, and a 'Crossbow Federation' was formed to encourage competition in this art, so that the unit (although still entirely ceremonial in nature) now again has a very active existence.
The Guard of the Rock is a front-line military unit in the San Marino armed forces. Its precise origin is difficult to pinpoint due to amalgamations. Its role was last redefined by statute in 1987, and it probably came into being as a military branch in 1754; however, the unit also uses the name "Fortress Guards" which may be traced back to much earlier units of Sammarinese military. The Guard of the Rock are the state border patrol, with responsibility for patrolling borders and defending them. In their role as Fortress Guards they are also responsible for the guarding of the Palazzo Pubblico in San Marino City, which is the seat of national government. In this role, they are the forces most visible to tourists, and known for their colourful ceremony of Changing the Guard. Under the 1987 statute, the Guard of the Rock are all enrolled as 'Criminal Police Officers' (in addition to their military role) and act to assist the police in investigating major crime.
The uniform of the Guard of the Rock is distinctively red and green, with three dress standards. The ceremonial uniform (1) for festivals, usually worn only by the ceremonial Artillery Company, includes red trousers with a green stripe (two thinner gold stripes for officers), a double breasted green jacket (with red and white lanyard, red cuffs and collars, gold buttons, and red and white dress epaulettes (for officers the epaulettes are gold, and there is a gold edge to the red cuffs)), a black leather belt, and a black helmet decorated with red and white feathers. For normal guard duties (2) the uniform of the main (full-time) Fortress Guard Corps is similar to that described above, but with plain green epaulettes, and a simple black kepi with a single red feather plume in place of the helmet. For routine patrol duties (3) on the border the uniform is simple and modern, with red trousers, green bomber-jacket, and a green peaked hat. For ceremonial duties the Guard of the Rock carry Beretta BM 59 rifles, with the sentry on duty having a fixed bayonet. For patrol duties they are armed with 9mm Glock 17 pistols, and they patrol in green and white patrol cars (see illustrations in gallery, below).
Most members of the Guard of the Rock are full-time soldiers, but there is also a single Company of volunteers called the "Fortress Guard, Artillery Company" which exists for the now purely ceremonial duty of firing the cannon of the Palazzo Pubblico on ceremonial occasions. This volunteer unit maintains the original artillery function of the Fortress Guard. Although both units are part of the same Guard Corps, and wear the same uniform, the Artillery Unit has a totally different military cap badge, as a reminder of its historical origins.
The unit's full name is 'The Guard of the Council Great and General', and it is also known locally as the 'Guard of Nobles'. The official tourism website of the nation explains this alternative name by stating: "Originally called “Guardia Nobile” (Noble Guard), this name is still sometimes used today to underscore the highly prestigious institutional duties the Corps is called upon to perform." This unit, formed in 1740, is composed almost entirely of volunteers, and its duties are largely ceremonial, although members undergo full military training. Owing to its striking uniform, it is arguably the best-known part of the Sammarinese military, and it appears on countless postcard views of the republic. The functions of the Guard of the Council are to protect the Captains Regent and to defend the Great and General Council during its formal sessions. They also provide a ceremonial bodyguard to government officials on festivals of both state and church.
The distinctive dress uniform (1) includes dark blue trousers and double-breasted tailed jacket, with gold coloured ornaments including: double stripe on trouser legs, dress epaulettes, cuffs, collars, and jacket tail edges - distinctively, the double gold trouser stripe is in 'cloth of gold' for officers, but a bright yellow-gold cloth for other ranks. This uniform also includes white gloves, a white leather cross-strap, and a cocked hat decorated with blue and white feathers. For ceremonial duties this force carries sabres, rather than firearms. The undress uniform (2) employs the same colour scheme, but less decoration.
In former times all families with two or more adult male members were required to enroll half of them in the Company of Uniformed Militia. This unit remains the basic fighting force of the armed forces of San Marino, and although it is largely ceremonial in deployment, members are all fully trained in the use of firearms, and for basic policing duties in support of the gendarmerie and civil police. It is a matter of civic pride for many Sammarinese to belong to the force, and all citizens with at least six years residence in the republic are entitled to enroll. Both male and female soldiers belong to the Company of Uniformed Militia, although men predominate in numbers.
Officers are armed with swords and pistols, whilst other ranks are equipped with muskets and bayonets. The uniform is dark blue, with a kepi bearing a blue and white plume. The ceremonial form of the uniform includes a white cross-strap, and white and blue sash, white epaulettes, and white decorated cuffs.
Formally this is part of the Army Militia, and is the ceremonial military band of San Marino. It comprises around 50 musicians. The uniform is largely similar to that of the Army Militia itself, though with some differences in decoration. The music of the Military Ensemble accompanies most state occasions in the republic.
Established in 1842, the 'Corps of Gendarmerie of San Marino' () is a militarized police service, under the control of the Secretary of State for Foreign Affairs and Politics. Its members are full-time and have responsibility for the protection of citizens and their property, and for the preservation of law and order.
The uniform of the gendarmerie includes a hot-weather (summer) standard, which is informal and khaki in colour; a winter standard which is black with light-blue decoration and stripes, and silver braiding; and a dress standard for ceremonial duties, which is dark blue with a white cross-strap and lanyard, blue and white dress epaulettes, white collars, a blue kepi with red and blue plume, and a sword.
The Gendarmerie has two 'Divisions' entitled 'Criminal Police Division' and 'Flying Squad Division', each of which is further divided into operational 'Brigades'. The Gendarmerie may call upon the assistance of the Municipal Police in cases of major crime or national security, and (following revised regulations for both corps introduced by the Government of San Marino in 2008) may also call upon soldiers and border guards of the Fortress Guards Corps, in their secondary role as 'Criminal Police Officers'. It is currently led by Colonel Maurizio Faraone.
Foreign relations of San Marino
San Marino is an independent and sovereign member of the international community. It maintains a diplomatic network that is extensive relative to its diminutive size, as well as an active foreign policy and international presence.
San Marino is a full member of the following international organizations, among others:
It also cooperates with UNICEF and the United Nations High Commissioner for Refugees and has official relations with the European Union.
From May 10 until November 6, 1990, San Marino held the semi-annual presidency of the Committee of Ministers of the Council of Europe. The second San Marino Chairmanship of the Committee of Ministers of the Council of Europe was from November 2006 until May 2007.
Austria, Bulgaria, France, Japan, Mexico, Monaco, Romania, Italy, the Sovereign Military Order of Malta, Croatia, and the Holy See maintain resident embassies or honorary consulates in San Marino. Other states maintain non-resident embassies and consulates, commonly located in Rome. San Marino additionally maintains honorary consulates in some countries, such as Armenia.
On 31 March-1 April 2013, United Nations Secretary-General Ban Ki-moon was the official orator on the occasion of the investiture of the newly elected Captains Regent. “Although this country is small, your importance to the United Nations stands as tall as Mount Titano,” the Secretary-General told the country's highest officials, the two Captains Regent, in reference to the 739-metre mountain, a UNESCO World Heritage Site. Mr. Ban also noted that during the Second World War the country accepted refugees amounting to five times its own population, and praised its emphasis on protecting human rights. This was the second visit to San Marino by a UN Secretary-General, the first being Boutros Boutros-Ghali's visit in 1996.
History of São Tomé and Príncipe
The islands of São Tomé and Príncipe were uninhabited at the time of the arrival of the Portuguese sometime between 1469 and 1471. After the islands were discovered by the explorers João de Santarém and Pêro Escobar, Portuguese navigators explored the islands and decided they would be a good location for bases to trade with the mainland.
The first successful settlement of São Tomé was established in 1493 by Álvaro Caminha, who received the land as a grant from the crown. Príncipe was settled in 1500 under a similar arrangement. Attracting settlers proved difficult, however, and most of the earliest inhabitants were "undesirables" sent from Portugal, mostly Jews. In time, these settlers found the excellent volcanic soil of the region suitable for agriculture, especially the growing of sugar.
The cultivation of sugar was a labor-intensive process, and the Portuguese began to import large numbers of slaves from the African mainland. By the mid-16th century, the Portuguese settlers had turned the islands into Africa's foremost exporter of sugar. São Tomé and Príncipe were taken over and administered by the Portuguese crown in 1522 and 1573, respectively.
However, superior sugar colonies in the western hemisphere began to hurt the islands. The large slave population also proved difficult to control, with Portugal unable to invest many resources in the effort. In addition, the Dutch captured São Tomé in 1641 and occupied it for seven years, razing over 70 sugar mills. Sugar cultivation thus declined over the next 100 years, and by the mid-17th century, the economy of São Tomé had changed. It was now primarily a transit point for ships engaged in the slave trade between the West and continental Africa.
In the early 19th century, two new cash crops, coffee and cocoa, were introduced. The rich volcanic soils proved well suited to the new cash crop industry, and soon extensive plantations (roças), owned by Portuguese companies or absentee landlords, occupied almost all of the good farmland. By 1908, São Tomé had become the world's largest producer of cocoa, which still is the country's most important crop.
The roças system, which gave the plantation managers a high degree of authority, led to abuses against the African farm workers. Although Portugal officially abolished slavery in 1876, the practice of forced paid labor continued. In the early 20th century, an internationally publicized controversy arose over charges that Angolan contract workers were being subjected to forced labor and unsatisfactory working conditions. Sporadic labor unrest and dissatisfaction continued well into the 20th century, culminating in an outbreak of riots in 1953 in which several hundred African laborers were killed in a clash with their Portuguese rulers. This "Batepá Massacre" remains a major event in the colonial history of the islands, and its anniversary is officially observed by the government.
During the 1967–70 secession war from Nigeria (Nigerian Civil War), São Tomé served as the major base of operations for the Biafran airlift. The airlift was an international humanitarian relief effort (the largest civilian airlift to date) that transported food and medicine to eastern Nigeria. It is estimated to have saved more than a million lives.
By the late 1950s, when other emerging nations across the African continent were demanding independence, a small group of São Toméans had formed the Movement for the Liberation of São Tomé and Príncipe (MLSTP), which eventually established its base in nearby Gabon. Picking up momentum in the 1960s, events moved quickly after the overthrow of the Caetano dictatorship in Portugal in April 1974. The new Portuguese regime was committed to the dissolution of its overseas colonies; in November 1974, their representatives met with the MLSTP in Algiers and worked out an agreement for the transfer of sovereignty. After a period of transitional government, São Tomé and Príncipe achieved independence on July 12, 1975, choosing as its first president the MLSTP Secretary General Manuel Pinto da Costa.
In 1990, São Tomé became one of the first African countries to embrace democratic reform, and changes to the constitution—the legalization of opposition political parties—led to elections in 1991 that were nonviolent, free, and transparent. Miguel Trovoada, a former prime minister who had been in exile since 1986, returned as an independent candidate and was elected president. Trovoada was re-elected in São Tomé's second multiparty presidential election in 1996. In the 1991 legislative elections, the Party of Democratic Convergence (PCD) had toppled the MLSTP to take a majority of seats in the National Assembly, with the MLSTP becoming an important and vocal minority party. Municipal elections followed in late 1992, in which the MLSTP came back to win a majority of seats on five of seven regional councils. In early legislative elections in October 1994, the MLSTP won a plurality of seats in the Assembly. It regained an outright majority of seats in the November 1998 elections. The Government of São Tomé functions under a multiparty system. Presidential elections were held in July 2001. The candidate backed by the Independent Democratic Action party, Fradique de Menezes, was elected in the first round and inaugurated on September 3. Parliamentary elections were held in March 2002. For the next four years, a series of short-lived, opposition-led governments were formed.
The army seized power for one week in July 2003, complaining of corruption and that forthcoming oil revenues would not be divided fairly. An accord was negotiated under which President de Menezes was returned to office.
The cohabitation period ended in March 2006, when a pro-presidential coalition won enough seats in National Assembly elections to form and head a new government.
In the 30 July 2006 presidential election, Fradique de Menezes easily won a second five-year term in office, defeating two other candidates, Patrice Trovoada (son of former President Miguel Trovoada) and the independent Nilo Guimarães. Local elections, the first since 1992, took place on 27 August 2006 and were dominated by members of the ruling coalition.
Demographics of São Tomé and Príncipe
This article is about the demographic features of the population of São Tomé and Príncipe, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population.
Of São Tomé and Príncipe's total population of some 201,800, about 193,380 live on São Tomé and 8,420 on Príncipe. All are descended from various ethnic groups that have migrated to the islands since 1485. 70% of the people on São Tomé and Príncipe are black and 30% of the people are mixed race, mostly black and white. Six groups are identifiable:
Although a small country, São Tomé and Príncipe has four national languages: Portuguese (the official language, spoken by 95% of the population), and the Portuguese-based creoles Forro (85%), Angolar (3%) and Principense (0.1%). French is also learned in schools, as the country is a member of La Francophonie.
In the 1970s, there were two significant population movements—the exodus of most of the 4,000 Portuguese residents and the influx of several hundred São Toméan refugees from Angola. The islanders have been absorbed largely into a common Luso-African culture. Almost all belong to the Roman Catholic, Evangelical Protestant, or Seventh-day Adventist churches, which in turn retain close ties with churches in Portugal. There is a small but growing Muslim population.
According to the total population was in , compared to only 60,000 in 1950. The proportion of children below the age of 15 in 2010 was 40.3%, 55.8% were between 15 and 65 years of age, while 3.9% were 65 years or older.
Registration of vital events is not available in São Tomé and Príncipe for recent years. The Population Department of the United Nations prepared the following estimates.
Births and deaths
Total Fertility Rate (TFR), Wanted Fertility Rate, and Crude Birth Rate (CBR):
Fertility data as of 2008-2009 (DHS Program):
Demographic statistics according to the World Population Review in 2019.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
"at birth:"
1.03 male(s)/female
"under 15 years:"
1.03 male(s)/female
"15–64 years:"
0.93 male(s)/female
"65 years and over:"
0.84 male(s)/female
"total population:"
0.97 male(s)/female (2000 est.)
"noun:"
São Toméan(s)
"adjective:"
São Toméan
Ethnic groups:
Mestiços, angolares (descendants of Angolan slaves), forros (descendants of freed slaves), serviçais (contract laborers from Angola, Mozambique, and Cape Verde), tongas (children of serviçais born on the islands) and Europeans (primarily Portuguese)
Religions:
Roman Catholic 55.7%, Adventist 4.1%, Assembly of God 3.4%, New Apostolic 2.9%, Mana 2.3%, Universal Kingdom of God 2%, Jehovah's Witness 1.2%, other 6.2%, none 21.2%, unspecified 1% (2012 est.)
Languages:
Portuguese 98.4% (official), Forro 36.2%, Cabo Verdian 8.5%, French 6.8%, Angolar 6.6%, English 4.9%, Lunguie 1%, other (including sign language) 2.4%
"definition:" age 15 and over can read and write | https://en.wikipedia.org/wiki?curid=27261 |
Politics of São Tomé and Príncipe
The politics of São Tomé and Príncipe takes place in a framework of a unitary semi-presidential representative democratic republic, whereby the President of São Tomé and Príncipe is head of state and the Prime Minister of São Tomé and Príncipe is head of government, and of a multi-party system. Executive power is exercised by the President and the Government. Legislative power is vested in both the government and the National Assembly. The Judiciary is independent of the executive and the legislature. São Tomé has functioned under a multiparty system since 1990. Following the promulgation of a new constitution in 1990, São Tomé and Príncipe held multiparty elections for the first time since independence. Shortly after the constitution took effect, the National Assembly formally legalized opposition parties. Independent candidates also were permitted to participate in the January 1991 legislative elections.
The president of the republic is elected to a five-year term by direct universal suffrage and a secret ballot, and may hold up to two consecutive terms. Candidates are chosen at their party's national conference (or individuals may run independently). A presidential candidate must obtain an outright majority of the popular vote in either a first or second round of voting in order to be elected president. The prime minister is named by the president but must be ratified by the majority party and thus normally comes from a list of its choosing. The prime minister, in turn, names the 14 members of the cabinet.
The National Assembly ("Assembleia Nacional") has 55 members, elected for a four-year term in seven multi-member constituencies by proportional representation. It is the supreme organ of the state and the highest legislative body, and meets semiannually.
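The article does not name the seat-allocation formula used in these proportional elections, so the following is only an illustrative sketch: it applies the D'Hondt highest-averages method, a common choice for party-list proportional representation, to invented vote totals in a hypothetical multi-member constituency. The party names reuse those mentioned elsewhere in this article, but the numbers are not real election data.

# A minimal sketch of highest-averages (D'Hondt) seat allocation.
# Assumption: the actual method used in São Tomé and Príncipe may differ;
# this only illustrates how proportional representation can fill seats
# in a multi-member constituency.

def dhondt(votes, seats):
    """Allocate `seats` among parties given a dict of vote totals."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # Each party's next quotient is votes / (seats already won + 1);
        # the next seat goes to the party with the highest quotient.
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Hypothetical vote totals in one eight-seat constituency:
print(dhondt({"MLSTP": 12000, "PCD": 9000, "ADI": 6000}, 8))
# -> {'MLSTP': 4, 'PCD': 3, 'ADI': 1}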
Justice is administered at the highest level by the Supreme Court of São Tomé and Príncipe. Formerly responsible to the National Assembly, the judiciary is now independent under the new constitution.
As for the legal profession, the São Tomé and Príncipe Lawyers Association (Ordem dos Advogados de São Tomé e Príncipe) was created in 2006. However, there is no clear indication as to how certain demographic groups, such as women, have fared in the legal field.
Administratively, the country is divided into seven municipal districts, six on São Tomé and one comprising Príncipe. Governing councils in each district maintain a limited number of autonomous decision-making powers and are re-elected every five years. Príncipe has had self-government since 29 April 1995.
Since the constitutional reforms of 1990 and the elections of 1991, São Tomé and Príncipe has made great strides toward developing its democratic institutions and further guaranteeing the civil and human rights of its citizens. São Toméans have freely changed their government through peaceful and transparent elections, and while there have been disagreements and political conflicts within the branches of government and the National Assembly, the debates have been carried out and resolved in open, democratic, and legal fora, in accordance with the provisions of São Toméan law. A number of political parties actively participate in government and openly express their views. Freedom of the press is respected, and there are several independent newspapers in addition to the government bulletin. The government's respect for human rights is exemplary; the government does not engage in repressive measures against its citizens, and respect for individuals' rights to due process and protection from government abuses is widely honored. Freedom of expression is accepted, and the government has taken no repressive measures to silence critics.
A briefly successful coup d'état led by Major Fernando "Cobo" Pereira took place on 16 July 2003.
The country is a member of the ACCT, ACP, AfDB, CEEAC, ECA, FAO, G-77, IBRD, ICAO, ICRM, IDA, IFAD, IFRCS, ILO, IMF, International Maritime Organization, Intelsat (nonsignatory user), Interpol, IOC, IOM (observer), ITU, NAM, OAU, United Nations, UNCTAD, UNESCO, UNIDO, UPU, WHO, WIPO, WMO, World Tourism Organization, and World Trade Organization (applicant).
Telecommunications in São Tomé and Príncipe
Telephones - main lines in use:
3,000 (1995)
Telephones - mobile cellular:
available; reported to work more reliably than the landline network
Telephone system:
"domestic:"
minimal system
"international:"
satellite earth station - 1 Intelsat (Atlantic Ocean)
Radio broadcast stations:
AM 2, FM 4, shortwave 0 (1998)
Radios:
38,000 (1997)
Television broadcast stations:
2 (1997)
Televisions:
23,000 (1997)
Internet Service Providers (ISPs):
available; dial-up connections are of low quality, and "broadband" (128/256 kbit/s) is very expensive
Country code (Top level domain): .st
Transport in São Tomé and Príncipe
Transport in São Tomé and Príncipe relies primarily on road infrastructure for local needs and airports and sea travel for international needs. São Tomé and Príncipe does not have railways.
In 1999 the country's merchant marine fleet included 9 ships (1,000 GT or over) totaling 43,587 GT: four cargo ships, one container ship, one refrigerated cargo ship and three roll-on/roll-off ships.
On São Tomé Island, there are seaports at São Tomé city on Ana Chaves Bay and at Neves, a fishing port that also became a fuel port when new facilities were constructed in 2012; a ferry port operates near Porto Alegre. On Príncipe, there is a seaport at Santo António. The tiny ferry port at Ilhéu das Rolas is the only port of any size outside the two main islands.
São Tomé and Príncipe is served by two airports. São Tomé (and its surrounding islets) is served by São Tomé International Airport. The country has two paved runways in total: one in the 1,524 to 2,437 m range and the other in the 914 to 1,523 m range. Príncipe is served by Príncipe Airport, which was paved during a modernisation programme that began in 2012 and finished in October 2015; its paved runway is 1,750 m.
São Tomé's airport is the country's only international airport, offering flights to parts of Africa, mainly the western and central portions. Airlines include STP Airways, the national airline, and Africa's Connection STP; the latter operates STP Airways' flights.
As of 2006, São Tomé and Príncipe had a network of highways comprising both paved and unpaved roads.
The national highway network (commonly designated as routes) consists of three primary highways: the EN1 (São Tomé-Guadalupe-Neves), the EN2 (São Tomé-Santana-São João dos Angolares-Porto Alegre) and the EN3 (São Tomé-Monte Café). There are also secondary roads, among them the ES1, ES2 and ES3. Príncipe has no route numbers.
Bus services are provided by minibuses, the dominant form of public transport in that part of Africa.
Armed Forces of São Tomé and Príncipe
The Armed Forces of São Tomé and Príncipe (, FASTP) are the armed forces of the island nation of São Tomé and Príncipe, off the coast of West Africa. The islands' military consists of a small land and naval contingent with a limited budget. Because the country sits adjacent to strategically important sea lanes of communication in the Gulf of Guinea, and owing to recent concerns about regional security issues, including security for oil tankers transiting the area, the US military and other foreign navies have increased their engagement with the FASTP, providing the country with assistance in the form of construction projects and training missions, as well as integration into international information and intelligence sharing programs.
The formation dates back to 1968. In the early years of independence, only a barrack police force of insignificant numbers was maintained. The FASTP remains a very small force, consisting of four branches: Army ("Exército"), Coast Guard ("Guarda Costeira", also called "Navy"), Presidential Guard ("Guarda Presidencial"), and the National Guard. There is no air force. Since the end of the Cold War, the nation's military budget has steadily decreased. Despite the discovery of large oil reserves in the mid-2000s, the Sao Tomean military is largely reliant upon foreign financial assistance, and it remains the least funded force in Africa. In the 2005 fiscal year, military expenditures were $581,729, about 0.8% of São Tomé and Príncipe's gross domestic product. A 2004 estimate put military manpower availability (males age 15–49) at 38,347, with a "fit for military service" estimate of 20,188. In a 2009 article, it was reported that the FASTP consisted of a total of just 300 soldiers, reduced from 600 after an unsuccessful coup attempt in 2003 resulted in a reorganization aimed at ensuring an apolitical military subordinated to civilian political structures. It is believed that the Army is formed into two companies, headquartered on the main island of Sao Tome, with a detachment on the smaller island of Principe.
São Tomé and Príncipe's military is a small force – reputedly the smallest in Africa – with almost no resources at its disposal; it would be wholly ineffective operating unilaterally, as it has no force projection capabilities. Additionally, there is no legislative requirement for personnel to deploy overseas, and there is no reserve capacity. The limited equipment that the military possesses is reportedly nearing the end of its lifespan, and while its basic small arms are considered simple to operate and maintain, they may be of limited serviceability and may require refurbishment or replacement after 20–25 years in tropical climates. Poor pay, working conditions, and alleged nepotism in the promotion of officers have caused tension in the past, as evidenced by unsuccessful coups launched in 1995 and 2003.
These coups were ultimately unsuccessful and in the aftermath, reforms have been implemented by the government, with foreign financial assistance, to address the underlying issues that the coups highlighted and to work to improve civil-military relations within the nation. These reforms have been aimed at improving the army and providing it with a more defined role, focusing on realistic security concerns. As of 2005, command is exercised from the president, through the Minister of Defense, to the Chief of the Armed Forces staff. Nevertheless, tension between the military and the government of the island nation has remained, and in February 2014, elements of the military went on strike due to pay and conditions disputes, after which a new military chief was appointed by President Manuel Pinto da Costa with Colonel Justino Lima replacing Brigadier Felisberto Maria Segundo.
According to Jane's, São Tomé and Príncipe's military is equipped largely with low-technology small arms, rocket launchers and some heavy machine guns. A limited anti-armor and air defense capability is also maintained, most of which has been sourced from former Soviet stocks. Uniforms and load carriage equipment were upgraded in 2007–08 following a donation from Portugal. Light vehicles have also been procured from South Africa and Nigeria.
Sao Tome has an exclusive economic zone of 142,563 square kilometers, and a naval force of around fifty volunteers. The coast guard's main role is the protection of this EEZ and of the areas where oil and gas exploration are being considered. In 2005, the US provided a 27-foot (8.2 m) Boston Whaler Challenger inshore patrol vessel. It has also been reported that the coast guard operates some Zodiac Hurricane rigid-hull inflatable boats, at least one Wilson Sons SY LAEP 10 Águia, and a 42-foot Archangel-class fast response boat.
São Tomé and Príncipe has traditionally had strong ties with both Portugal and Angola. In the past the United States has provided the country with occasional assistance; however, US interest in the region has increased since the start of the Global War on Terrorism. The position of the country along strategically important sea lanes of communication along the west African coast, as well as rising concerns about piracy and security for oil tankers transiting the region, has attracted increased foreign interest in the nation. This has seen São Tomé and Príncipe's military become part of the NATO-sponsored Maritime Safety and Security Information System, as well as the commencement of several engagement activities on the part of the US military. It has also seen Sao Tomean officers undertaking training in the United States under the terms of the International Military Education and Training program.
In 2002, it was announced that an unmanned US naval facility would be established in the country, to be used mainly as a stopover base for US military aircraft and ships transiting the area. In late 2004, the US Navy began exploring further options for maritime engagement in the Gulf of Guinea area, and delegates from Sao Tome and Principe attended a conference in Naples, Italy, after which the US submarine tender USS "Emory S. Land" conducted a training mission in the area as part of steps towards the establishment of United States Africa Command.
In July 2005, USCGC "Bear", under the command of then-Commander Robert Wagner, visited and conducted training sessions for personnel from the São Tomé and Príncipe coast guard as part of US international engagement efforts. In July 2007, the Military Sealift Command-contracted cargo ship CEC "Endeavor" delivered construction equipment to São Tomé as part of a construction effort by US Navy personnel from Naval Mobile Construction Battalion 133 and Underwater Construction Team One to renovate the boat ramp for the Santomean coast guard base (the only existing boat ramp is unable to launch patrol boats due to erosion and a shallow slope into the water) as well as to build a guard house for the base. In 2015, elements of the country's coast guard took part in a multinational exercise, Exercise Obangame, with the US Navy and other African nations, which included training focused upon "boarding techniques, search and rescue operations, medical casualty response, radio communication, and information management techniques". As part of the exercise, the Portuguese frigate "Bartolomeu Dias" made a port visit to São Tomé and Príncipe to provide training to local naval personnel. Portugal has also provided communications training, while France, the United Kingdom and South Africa have also provided assistance.
History of Saudi Arabia
The history of Saudi Arabia in its current form as a state began with its foundation in 1744, although the history of the region extends as far as 20,000 years ago. The region has had a global impact twice in world history. In the 7th century it became the cradle of Islam and the capital of the Islamic Rashidun Caliphate. From the mid-20th century the discovery of vast oil deposits propelled it into a key economic and geo-political role.
At other times, the region existed in relative obscurity and isolation, although from the 7th century the cities of Mecca and Medina had the highest spiritual significance for the Muslim world, with Mecca becoming the destination for the Hajj pilgrimage, an obligation, at least once in a believer's lifetime, if at all possible.
For much of the region's history a patchwork of tribal rulers controlled most of the area. The Al Saud (the Saudi royal family) emerged as minor tribal rulers in Najd in central Arabia. Over the following 150 years, the extent of the Al Saud territory fluctuated. However, between 1902 and 1927, the Al Saud leader, Abdulaziz, carried out a series of wars of conquest which resulted in his establishing the Kingdom of Saudi Arabia in 1932.
From 1932 until his death in 1953, Abdulaziz ruled Saudi Arabia as an absolute monarchy. Thereafter six of his sons in succession have reigned over the kingdom: Saud, Faisal, Khalid, Fahd, Abdullah, and Salman.
There is evidence that human habitation in the Arabian Peninsula dates back to about 63,000 years ago. Nevertheless, the stone tools from the Middle Paleolithic age along with fossils of other animals discovered at Ti's al Ghadah, in northwestern Saudi Arabia, might imply that hominids migrated through a "Green Arabia" between 300,000 and 500,000 years ago.
Archaeology has revealed some early settled civilizations: the Dilmun civilization on the east of the Arabian Peninsula, Thamud north of the Hejaz, and the Kindah kingdom and Al-Magar civilization in the centre of the Arabian Peninsula.
The earliest known events in Arabian history are migrations from the peninsula into neighbouring areas.
There is also evidence from Timna (Israel) and Jordan that the local Qurayya/Midianite pottery originated within the Hejaz region of northwestern Saudi Arabia, which suggests that the biblical Midianites originally came from the Hejaz before expanding into Jordan and southern Israel.
Muhammad, the Prophet of Islam, was born in Mecca in about 570 and first began preaching in the city in 610, but migrated to Medina in 622. From there, he and his companions united the tribes of Arabia under the banner of Islam and created a single Arab Muslim religious polity in the Arabian Peninsula.
Following Muhammad's death in 632, Abu Bakr became leader of the Muslims as the first Caliph. After putting down a rebellion by the Arab tribes (known as the Ridda wars, or "Wars of Apostasy"), Abu Bakr attacked the Byzantine Empire. On his death in 634, he was succeeded by Umar as caliph, followed by Uthman ibn al-Affan and Ali ibn Abi Talib. The period of these first four caliphs is known as the Rashidun or "rightly guided" Caliphate ("al-khulafā' ar-rāshidūn"). Under the Rashidun Caliphs, and, from 661, their Umayyad successors, the Arabs rapidly expanded the territory under Muslim control outside of Arabia. In a matter of decades Muslim armies decisively defeated the Byzantine army and destroyed the Persian Empire, conquering huge swathes of territory from the Iberian peninsula to India. The political focus of the Muslim world then shifted to the newly conquered territories.
Nevertheless, Mecca and Medina remained the spiritually most important places in the Muslim world. The Quran requires every able-bodied Muslim who can afford it, as one of the five pillars of Islam, to make a pilgrimage, or Hajj, to Mecca during the Islamic month of Dhu al-Hijjah at least once in his or her lifetime. The Masjid al-Haram (the Grand Mosque) in Mecca is the location of the Kaaba, Islam's holiest site, and the Masjid al-Nabawi (the Prophet's Mosque) in Medina is the location of Muhammad's tomb; as a result, from the 7th century, Mecca and Medina became the pilgrimage destinations for large numbers of Muslims from across the Muslim world.
After the fall of the Umayyad empire in 750 CE, most of what was to become Saudi Arabia reverted to traditional tribal rule soon after the initial Muslim conquests, and remained a shifting patchwork of tribes and tribal emirates and confederations of varying durability.
Muawiyah I, the first Umayyad caliph, took an interest in his native Mecca, erecting buildings and digging wells. Under his Marwanid successors, Mecca became the abode of poets and musicians. Even then, Medina eclipsed Mecca in importance for much of the Umayyad period, as it was home to the new Muslim aristocracy. Under Yazid I, the revolt of Abd Allah bin al-Zubair brought Syrian troops to Mecca. An accident led to a fire that destroyed the Kaaba, which was rebuilt by Ibn al-Zubair. In 747, a Kharijite rebel from Yemen seized Mecca unopposed, but he was soon defeated by Marwan II. In 750, Mecca, along with the rest of the caliphate, passed to the Abbasids.
From the 10th century (and, in fact, until the 20th century) the Hashemite Sharifs of Mecca maintained a state in the most developed part of the region, the Hejaz. Their domain originally comprised only the holy cities of Mecca and Medina but in the 13th century it was extended to include the rest of the Hejaz. Although the Sharifs exercised at most times independent authority in the Hejaz, they were usually subject to the suzerainty of one of the major Islamic empires of the time. In the Middle Ages, these included the Abbasids of Baghdad, and the Fatimids, Ayyubids and Mamluks of Egypt.
Beginning with Selim I's acquisition of Medina and Mecca in 1517, the Ottomans, in the 16th century, added to their Empire the Hejaz and Asir regions along the Red Sea and the al-Hasa region on the Persian Gulf coast, these being the most populous parts of what was to become Saudi Arabia. They also laid claim to the interior, although this remained a rather nominal suzerainty. The degree of control over these lands varied over the next four centuries with the fluctuating strength or weakness of the Empire's central authority. In the Hejaz, the Sharifs of Mecca were largely left in control of their territory (although there would often be an Ottoman governor and garrison in Mecca). On the eastern side of the country, the Ottomans lost control of the al-Hasa region to Arab tribes in the 17th century but regained it again in the 19th century. Throughout the period, the interior remained under the rule of a large number of petty tribal rulers in much the same way as it had in previous centuries.
The emergence of the Saudi dynasty began in central Arabia in 1744. In that year, Muhammad ibn Saud, the tribal ruler of the town of Ad-Dir'iyyah near Riyadh, joined forces with the religious leader Muhammad ibn Abd-al-Wahhab, the founder of the Wahhabi movement. This alliance formed in the 18th century provided the ideological impetus to Saudi expansion and remains the basis of Saudi Arabian dynastic rule today. Over the next 150 years, the fortunes of the Saud family rose and fell several times as Saudi rulers contended with Egypt, the Ottoman Empire, and other Arabian families for control of the peninsula.
The first Saudi State was established in 1744 in the area around Riyadh and briefly controlled most of the present-day territory of Saudi Arabia through conquests made between 1786 and 1816; these included Mecca and Medina.
Concerned at the growing power of the Saudis, the Ottoman Sultan, Mustafa IV, instructed his viceroy in Egypt, Mohammed Ali Pasha, to reconquer the area. Ali sent his sons Tusun Pasha and Ibrahim Pasha who were eventually successful in routing the Saudi forces in 1818 and destroyed the power of the Al Saud.
The Al Saud returned to power in 1824 but their area of control was mainly restricted to the Saudi heartland of the Najd region, known as the second Saudi state. However, their rule in Najd was soon contested by new rivals, the Rashidis of Ha'il. Throughout the rest of the 19th century, the Al Saud and the Al Rashid fought for control of the interior of what was to become Saudi Arabia. By 1891, the Al Saud were conclusively defeated by the Al Rashid, who drove the Saudis into exile in Kuwait.
Meanwhile, in the Hejaz, following the defeat of the first Saudi State, the Egyptians continued to occupy the area until 1840. After they left, the Sharifs of Mecca reasserted their authority, albeit with the presence of an Ottoman governor and garrison.
By the early 20th century, the Ottoman Empire continued to control or have suzerainty (albeit nominal) over most of the peninsula. Subject to this suzerainty, Arabia was ruled by a patchwork of tribal rulers (including the Al Saud who had returned from exile in 1902 – see below) with the Sharif of Mecca having preeminence and ruling the Hejaz.
In 1916, with the encouragement and support of Britain and France (which were fighting the Ottomans in World War I), the Sharif of Mecca, Hussein bin Ali, led a pan-Arab revolt against the Ottoman Empire with the aim of securing Arab independence and creating a single unified Arab state spanning the Arab territories from Aleppo in Syria to Aden in Yemen. The Arab army comprised bedouin and others from across the peninsula, but not the Al Saud and their allied tribes, who did not participate in the revolt, partly because of a long-standing rivalry with the Sharifs of Mecca and partly because their priority was to defeat the Al Rashid for control of the interior. Nevertheless, the revolt played a part in the Middle-Eastern Front and tied down thousands of Ottoman troops, thereby contributing to the Ottomans' World War I defeat in 1918.
However, with the subsequent partitioning of the Ottoman Empire, the British and French reneged on promises to Hussein to support a pan-Arab state. Although Hussein was acknowledged as King of the Hejaz, Britain later shifted support to the Al Saud, leaving him diplomatically and militarily isolated. The revolt, therefore, failed in its objective to create a pan-Arab state but Arabia was freed from Ottoman suzerainty and control.
In 1902, Abdul-Aziz Al Saud, leader of the Al Saud, returned from exile in Kuwait to resume the conflict with the Al Rashid, and seized Riyadh – the first of a series of conquests ultimately leading to the creation of the modern state of Saudi Arabia in 1932. The main weapon for achieving these conquests was the Ikhwan, the Wahhabist-Bedouin tribal army led by Sultan bin Bajad Al-Otaibi and Faisal al-Duwaish.
By 1906, Abdulaziz had driven the Al Rashid out of Najd and the Ottomans recognized him as their client in Najd. His next major acquisition was Al-Hasa, which he took from the Ottomans in 1913, bringing him control of the Persian Gulf coast and what would become Saudi Arabia's vast oil reserves. He avoided involvement in the Arab Revolt, having acknowledged Ottoman suzerainty in 1914, and instead continued his struggle with the Al Rashid in northern Arabia. In 1920, the Ikhwan's attention turned to the south-west, when they seized Asir, the region between the Hejaz and Yemen. In the following year, Abdul-Aziz finally defeated the Al Rashid and annexed all northern Arabia.
Prior to 1923, Abdulaziz had not risked invading the Hejaz because Hussein bin Ali, King of the Hejaz, was supported by Britain. However, in that year, the British withdrew their support. At a conference in Riyadh in July 1924, complaints were stated against the Hejaz: principally that pilgrimage from Najd was being prevented, and that the Hejaz had obstructed the implementation of certain public policy in contravention of "shari'a". Ikhwan units were massed on a large scale for the first time, and under Khalid bin Lu'ayy and Sultan bin Bajad rapidly advanced on Mecca, laying waste to symbols of "heathen" practices. The Ikhwan completed their conquest of the Hejaz by the end of 1925. On 10 January 1926 Abdulaziz declared himself King of the Hejaz and, then, on 27 January 1927 he took the title King of Najd (his previous title was Sultan). The use of the Ikhwan to effect the conquest had important consequences for the Hejaz: the old cosmopolitan society was uprooted, and a version of Wahhabi culture was imposed as a new compulsory social order.
By the Treaty of Jeddah, signed on 20 May 1927, the United Kingdom recognized the independence of Abdul-Aziz's realm (then known as the Kingdom of Hejaz and Najd). After the conquest of the Hejaz, the Ikhwan leaders wanted to continue the expansion of the Wahhabist realm into the British protectorates of Transjordan, Iraq and Kuwait. Abdul-Aziz, however, refused to agree to this, recognizing the danger of a direct conflict with the British. The Ikhwan therefore revolted but were defeated in the Battle of Sabilla in 1929, and the Ikhwan leadership were massacred.
In 1932, the two kingdoms of the Hejaz and Najd were united as the 'Kingdom of Saudi Arabia'. Boundaries with Transjordan, Iraq, and Kuwait were established by a series of treaties negotiated in the 1920s, with two "neutral zones" created, one with Iraq and the other with Kuwait. The country's southern boundary with Yemen was partially defined by the 1934 Treaty of Ta'if, which ended a brief border war between the two states.
Abdulaziz's military and political successes were not mirrored economically until vast reserves of oil were discovered in 1938 in the Al-Hasa region along the Persian Gulf coast. Development began in 1941 and by 1949 production was in full swing.
In February 1945, King Abdul Aziz met President Franklin D. Roosevelt aboard the USS "Quincy" in the Suez Canal. With a historic handshake they sealed an agreement, still in force today, to supply oil to the United States in exchange for guaranteed protection of the Saudi regime; it has survived seven Saudi kings and twelve US presidents.
Abdulaziz died in 1953 and was succeeded by his son King Saud. Oil provided Saudi Arabia with economic prosperity and a great deal of political leverage in the international community. At the same time, the government became increasingly wasteful and lavish. Despite the new wealth, extravagant spending led to governmental deficits and foreign borrowing in the 1950s.
However, by the early 1960s an intense rivalry between the King and his half-brother, Prince Faisal emerged, fueled by doubts in the royal family over Saud's competence. As a consequence, Saud was deposed in favor of Faisal in 1964.
The mid-1960s saw external pressures generated by Saudi-Egyptian differences over Yemen. When civil war broke out in 1962 between Yemeni royalists and republicans, Egyptian forces entered Yemen to support the new republican government, while Saudi Arabia backed the royalists. Tensions subsided only after 1967, when Egypt withdrew its troops from Yemen. Saudi forces did not participate in the Six-Day (Arab–Israeli) War of June 1967, but the government later provided annual subsidies to Egypt, Jordan, and Syria to support their economies.
During the 1973 Arab-Israeli war, Saudi Arabia participated in the Arab oil boycott of the United States and the Netherlands. A member of OPEC, Saudi Arabia had joined other member countries in moderate oil price increases beginning in 1971. After the 1973 war, the price of oil rose substantially, dramatically increasing Saudi Arabia's wealth and political influence.
Faisal was assassinated in 1975 by his nephew, Prince Faisal bin Musaid, and was succeeded by his half-brother King Khalid, during whose reign economic and social development continued at an extremely rapid rate, revolutionizing the infrastructure and educational system of the country; in foreign policy, close ties with the US were developed.
In 1979, two events occurred which the Al Saud perceived as threatening the regime, and had a long-term influence on Saudi foreign and domestic policy. The first was the Iranian Islamic revolution. There were several anti-government riots in the region in 1979 and 1980. The second event was the seizure of the Grand Mosque in Mecca by Islamist extremists. The militants involved were in part angered by what they considered to be the corruption and un-Islamic nature of the Saudi regime. Part of the response of the royal family was to enforce a much stricter observance of Islamic and traditional Saudi norms. Islamism continued to grow in strength.
King Khalid died in June 1982 and was succeeded by his brother King Fahd, who maintained Saudi Arabia's foreign policy of close cooperation with the United States and increased purchases of sophisticated military equipment from the United States and Britain.
Following the Iraqi invasion of Kuwait in 1990, Saudi Arabia joined the anti-Iraq Coalition. King Fahd, fearing an attack from Iraq, invited soldiers from the US and 32 other countries to Saudi Arabia. Saudi and Coalition forces also repelled Iraqi forces when they breached the Kuwaiti-Saudi border in 1991 (see Battle of Khafji).
In 1995, Fahd suffered a debilitating stroke and the Crown Prince, Prince Abdullah assumed day-to-day responsibility for the government. In 2003, Saudi Arabia refused to support the US and its allies in the invasion of Iraq. Terrorist activity within Saudi Arabia increased dramatically in 2003, with the Riyadh compound bombings and other attacks, which prompted the government to take more stringent action against terrorism.
In 2005, King Fahd died and his half-brother, Abdullah, ascended to the throne. Despite growing calls for change, the king has continued the policy of moderate reform. King Abdullah has pursued a policy of limited deregulation, privatization and seeking foreign investment. In December 2005, following 12 years of talks, the World Trade Organization gave the green light to Saudi Arabia's membership.
As the Arab Spring unrest and protests began to spread across Arab world in early 2011, King Abdullah announced an increase in welfare spending. No political reforms were announced as part of the package. At the same time, Saudi troops were sent to participate in the crackdown on unrest in Bahrain. King Abdullah gave asylum to deposed President Zine El Abidine Ben Ali of Tunisia and telephoned President Hosni Mubarak of Egypt (prior to his deposition) to offer his support.
On 23 January 2015, King Abdullah died and was succeeded by King Salman.
Geography of Saudi Arabia
The Kingdom of Saudi Arabia is a country situated in Southwest Asia, the largest country of the Arabian Peninsula, bordering the Persian Gulf and the Red Sea, north of Yemen. Its extensive coastlines on the Persian Gulf and Red Sea provide great leverage on shipping (especially crude oil) through the Persian Gulf and the Suez Canal. The kingdom occupies 80% of the Arabian Peninsula. Most of the country's boundaries with the United Arab Emirates (UAE), Oman, and the Republic of Yemen (formerly two separate countries: the Yemen Arab Republic, or North Yemen, and the People's Democratic Republic of Yemen, or South Yemen) are undefined, so the exact size of the country remains unknown. The Saudi government estimates the area at 2,217,949 square kilometres, while other reputable estimates vary between 2,149,690 and 2,240,000 square kilometres. Less than 1% of the total area is suitable for cultivation, and in the early 1960s, population distribution varied greatly among the towns of the eastern and western coastal areas, the densely populated interior oases, and the vast, almost empty deserts.
Saudi Arabia is bounded by seven countries and three bodies of water. To the west, the Gulf of Aqaba and the Red Sea form a coastal border that extends to the southern part of Yemen and follows a mountain ridge to the vicinity of Najran. This section of the border with Yemen was demarcated in 1934 and is one of the few clearly defined borders with a neighbouring country. The Saudi border running southeast from Najran, however, is still undetermined. The undemarcated border became an issue in the early 1990s, when oil was discovered in the area and Saudi Arabia objected to commercial exploration by foreign companies on behalf of Yemen. In the summer of 1992, representatives of Saudi Arabia and Yemen met in Geneva to discuss settlement of the border issue.
To the north, Saudi Arabia is bounded by Jordan, Iraq, and Kuwait. The northern boundary extends from the Gulf of Aqaba on the west to Ras al Khafji on the Persian Gulf. In 1965, Saudi Arabia and Jordan agreed to boundary demarcations involving an exchange of areas of territory. Jordan gained land on the Gulf of Aqaba and 6,000 square kilometers of territory in the interior, while 7,000 square kilometers of Jordanian-administered, landlocked territory was ceded to Saudi Arabia.
In 1922, Ibn Saud and British officials representing Iraqi interests signed the Treaty of Mohammara, which established the boundary between Iraq and the future Saudi Arabia. Later that year, the Uqair Protocol signed by the two parties agreed to the creation of a diamond-shaped Saudi Arabian–Iraqi neutral zone of approximately 7,000 square kilometers, adjacent to the western tip of Kuwait, within which neither Iraq nor Saudi Arabia would build permanent dwellings or installations. The agreement was designed to safeguard water rights in the zone for Bedouin of both countries. In May 1938, Iraq and Saudi Arabia signed an additional agreement regarding the administration of the zone. Forty-three years later, Saudi Arabia and Iraq signed an agreement that defined the border between the two countries and provided for the division of the neutral zone between them. The agreement effectively dissolved this neutral zone.
The boundary between Ibn Saud's territories of Najd and the Eastern Province and the British protectorate of Kuwait was first regulated by the Al Uqair Convention in 1922. In an effort to avoid territorial disputes, another diamond-shaped Saudi–Kuwaiti neutral zone of 5,790 square kilometers directly south of Kuwait was established. In 1938 oil was discovered in Kuwait's southern Burqan fields, and both countries contracted with foreign oil companies to perform exploration work in the Divided Zone. After years of discussions, Saudi Arabia and Kuwait reached an agreement in 1965 that divided the zone geographically, with each country administering its half of the zone. The agreement guaranteed that the rights of both parties to the natural resources in the whole zone would continue to be respected after each country had annexed its half of the zone in 1966.
Saudi Arabia's eastern boundary follows the Persian Gulf from Ras al Khafji to the peninsula of Qatar, whose border with Saudi Arabia was determined in 1965. The Saudi border with the state of Oman, on the southeastern coast of the Arabian Peninsula, runs through the Empty Quarter (Rub al-Khali). The border demarcation was defined by a 1990 agreement between Saudi Arabia and Oman that included provisions for shared grazing rights and water rights. The border through the Al Buraymi Oasis, located near the conjunction of the frontiers of Oman, Abu Dhabi (one of the emirates of the UAE) and Saudi Arabia, has triggered extensive dispute among the three states since the Treaty of Jeddah in 1927. In a 1975 agreement with Saudi Arabia, Abu Dhabi accepted sovereignty over six villages in the Al Buraymi Oasis and the sharing of the rich Zararah oil field. In return, Saudi Arabia obtained an outlet to the Persian Gulf through Abu Dhabi.
Saudi Arabia's maritime claims include a twelve-nautical-mile (22 km) territorial limit along its coasts. The country also claims many small islands as well as some seabeds and subsoils beyond the twelve-nautical-mile (22 km) limit.
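As a worked check of the kilometre figures quoted above, using the standard definition of the international nautical mile as exactly 1.852 km:

$12\ \text{nmi} \times 1.852\ \tfrac{\text{km}}{\text{nmi}} = 22.224\ \text{km} \approx 22\ \text{km}$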
Land boundaries:
"total:"
4,415 km
"border countries:"
Iraq 814 km, Jordan 728 km, Kuwait 222 km, Oman 676 km, Qatar 60 km, UAE 457 km, Yemen 1,458 km
Coastline:
2,640 km
Maritime claims:
"contiguous zone:"
"continental shelf:"
not specified
"territorial sea:"
"exclusive economic zone:"
Until the 1980s, Saudi Arabia had lakes at Layla Aflaj and deep waterholes at Al-Kharj, fed by huge underground aquifers formed in prehistoric times and non-renewable. Al-Kharj was a valuable source of drinking water in a barren terrain. In recent years, these aquifers have been drawn upon heavily, both for agricultural and domestic purposes, and no fresh water remains in the lakes or pits.
In the absence of permanent rivers or bodies of water, streams and groundwater, desalinated seawater and very scarce surface water must supply the country's needs. In eastern Arabia and in the Jabal Tuwayq, artesian wells and springs are plentiful. In al-Ahsa a number of large, deep pools are constantly replenished by artesian springs as a result of underground water from the eastern watershed of the Jabal Tuwayq. Such springs and wells permit extensive irrigation in local oases. In the Hijaz, wells are abundant, and springs are common in the mountainous areas. In Najd and the great deserts, watering places are comparatively fewer and scattered over a wide area. Water must be hoisted or pumped to the surface, and even where water is plentiful, its quality may be poor.
Modern technology has located and increased the availability of much of the underground water. Saudi Arabian Oil Company (Saudi Aramco) technicians have determined that very deep aquifers lie in many areas of northern and eastern Arabia and that the Wasia, the largest aquifer in Saudi Arabia, contains more water than the Persian Gulf. The Saudi government, Saudi Aramco, and the United Nations (UN) Food and Agriculture Organization (FAO) have made separate and joint efforts to exploit underground water resources. In the past, improperly drilled wells have reduced or destroyed any good they might have served by leaching the lands they were drilled to irrigate. Successive agricultural projects, many of which were designed primarily to encourage Bedouin settlement, have increased water resource exploitation. In the early 1990s, large-scale agricultural projects have relied primarily on such underground aquifers, which provided more than 80% of the water for agricultural requirements. In fiscal year (FY) 1987, about 90% of the total water demand in the kingdom was consumed by agriculture.
The Arabian Peninsula is an ancient massif composed of stable crystalline rock whose geologic structure developed concurrently with the Alps. Geologic movements caused the entire mass to tilt eastward and the western and southern edges to tilt upward. In the valley created by the fault, called the Great Rift, the Red Sea was formed. The Great Rift runs from the Mediterranean along both sides of the Red Sea south through Ethiopia and the lake country of East Africa, gradually disappearing in the area of Mozambique, Zambia, and Zimbabwe. Scientists analyzing photographs taken by United States astronauts on the joint United States-Soviet space mission in July 1975 detected a vast fan-shaped complex of cracks and fault lines extending north and east from the Golan Heights. These fault lines are believed to be the northern and final portion of the Great Rift and are presumed to be the result of the slow rotation of the Arabian Peninsula counterclockwise in a way that will, in approximately ten million years, close off the Persian Gulf and make it a lake.
On the peninsula, the eastern line of the Great Rift fault is visible in the steep and, in places, high escarpment that parallels the Red Sea between the Gulf of Aqaba and the Gulf of Aden. The eastern slope of this escarpment is relatively gentle, dropping to the exposed shield of the ancient landmass that existed before the faulting occurred. A second lower escarpment, the Jabal Tuwayq, runs north to south through the area of Riyadh.
In the south, a coastal plain, the Tihamah, rises gradually from the sea to the mountains. Hejaz extends southward to the borders of mountainous Yemen. The central plateau, Najd, extends east to the Jabal Tuwayq and slightly beyond. A long, narrow strip of desert known as Ad Dahna separates Najd from eastern Arabia, which slopes eastward to the sandy coast along the Persian Gulf. North of Najd a larger desert, An Nafud, isolates the heart of the peninsula from the steppes of northern Arabia. South of Najd lies one of the largest sand deserts in the world, the Rub al Khali.
The western coastal escarpment can be considered two mountain ranges separated by a gap in the vicinity of Mecca in Tihamah. The northern range in the Hejaz seldom exceeds 2,100 meters, and the elevation gradually decreases toward the south to about 600 meters. The rugged mountain wall drops abruptly to the sea with only a few intermittent coastal plains. There are virtually no natural harbors along the Red Sea. The western slopes have been stripped of soil by the erosion of infrequent but turbulent rainfalls that have fertilized the plains to the west. The eastern slopes are less steep and are marked by dry river beds (wadis) that trace the courses of ancient rivers and continue to lead the rare rainfalls down to the plains. Scattered oases, drawing water from springs and wells in the vicinity of the wadis, permit some settled agriculture. Of these oases, the largest and most important is Medina. South of Hejaz, the mountains exceed 2,400 meters in several places with some peaks nearing 3,000 meters.
The eastern slope of the mountain range in Asir is gentle, melding into a plateau region that drops gradually into the Rub al Khali. Although rainfall is infrequent in this area, a number of fertile wadis, of which the most important are the Wadi Bishah and the Wadi Tathlith, make oasis agriculture possible on a relatively large scale. A number of extensive lava fields (harrat) scar the surfaces of the plateaus east of the mountain ranges in the Hejaz and give evidence of fairly recent volcanic activity. The largest of these beds is Khaybar, north of Medina; another is Al Harrah, part of the large volcanic field Harrat Ash Shamah. Famous cities of Hejaz include the holy city of Medina and the city of Taif.
The rugged western face of the escarpment drops steeply to the coastal plain, the Tihamah lowlands, whose width averages only sixty-five kilometers. Along the seacoast is a salty tidal plain of limited agricultural value, backed by potentially rich alluvial plains. The relatively well-watered and fertile upper slopes and the mountains behind are extensively terraced to allow maximum land use. This coastal plain is part of the Arabian Peninsula coastal fog desert ecoregion. Both the holy city of Mecca and the city of Jeddah lie within the northern part of Tihamah.
East of the Hejaz and Asir lies the great plateau area of Najd. This region is mainly rocky plateau interspersed by small, sandy deserts and isolated mountain clumps. The best known of the mountain groups is the Jabal Shammar, northwest of Riyadh and just south of the An Nafud. This area is the home of the pastoral Shammar tribes, which under the leadership of the Al Rashid were the most implacable foes of the Al Saud in the late 19th and early 20th centuries. Their capital was the large oasis of Hail, now a flourishing urban center.
Across the peninsula as a whole, the plateau slopes toward the east from an elevation of 1,360 meters in the west to 750 meters at its easternmost limit. A number of wadis cross the region in an eastward direction from the Red Sea escarpment toward the Persian Gulf. There is little pattern to these remains of ancient riverbeds; the most important of them are Wadi Hanifa, Wadi ar Rummah, Wadi as Surr, and Wadi ad-Dawasir.
The heart of Najd is the area of the Jabal Tuwayq, an arc-shaped ridge with a steep west face that rises between 100 and 250 meters above the plateau. Many oases exist in this area, the most important of which are Buraydah, Unayzah, Riyadh, and Al Kharj. Outside the oasis areas, Najd is sparsely populated. Large salt marshes (sabkah) are scattered throughout the area.
The area north of the An Nafud is geographically part of the Syrian Desert. It is an upland plateau scored by numerous wadis, most tending northeastward toward Iraq. This area, known as Badiyat ash Sham, and covered with grass and scrub vegetation, is extensively used for pasture by nomadic and seminomadic herders. The most significant feature of the area is the Wadi as Sirhan, a large basin as much as 300 meters below the surrounding plateau, which is the vestige of an ancient inland sea. For thousands of years, some of the heavily traveled caravan routes between the Mediterranean and the central and southern peninsula have passed through the Wadi as Sirhan. The most important oases in the area are Al Jawf and Sakakah, just north of the An Nafud.
East of the Ad Dahna lies the As Summen Plateau, about 120 kilometers wide and dropping in elevation from about 400 meters in the west to about 240 meters in the east. The area is generally barren, with a highly eroded surface of ancient river gorges and isolated buttes.
Farther east the terrain changes abruptly to the flat lowlands of the coastal plain. This area, about sixty kilometers wide, is generally featureless and covered with gravel or sand. In the north is the Ad Dibdibah graveled plain and in the south the 'Al Jafurah sand desert, which reaches the gulf near Dhahran and merges with the Rub al Khali at its southern end. The coast itself is extremely irregular, merging sandy plains, marshes, and salt flats almost imperceptibly with the sea. As a result, the land surface is unstable; in places water rises almost to the surface, and the sea is shallow, with shoals and reefs extending far offshore. Only the construction of long moles at Ras Tanura has opened the Saudi coast on the gulf to seagoing tankers.
Eastern Arabia is sometimes called 'Al-Hasa or 'Al Ahsa after the great oasis, one of the more fertile areas of the country. 'Al-Hasa, the largest oasis in the country, actually comprises two neighbouring oases, including the town of Al-Hofuf.
Three great deserts isolate Najd, the great plateau area of Saudi Arabia, from the north, east, and south, as the Red Sea escarpment does from the west. In the north, the An Nafud—sometimes called the Great Nafud because "nafud" is the term for this kind of sand desert—covers about 55,000 square kilometers at an elevation of about 1,000 meters. Longitudinal dunes—scores of kilometers in length, as much as ninety meters high, and separated by valleys as much as sixteen kilometers wide—characterize the An Nafud. Iron oxide gives the sand a red tint, particularly when the sun is low. Within the area are several watering places, and winter rains bring up short-lived but succulent grasses that permit nomadic herding during the winter and spring.
Stretching more than 125 kilometers south from the An Nafud in a narrow arc is the ad-Dahna desert, a narrow band of sand mountains also called the river of sand. Like the An Nafud, its sand tends to be reddish, particularly in the north, where it shares with the An Nafud the longitudinal structure of sand dunes. The Ad Dahna also furnishes the Bedouin with winter and spring pasture, although water is scarcer than in the An Nafud.
The southern portion of the Ad Dahna curves westward following the arc of the Jabal Tuwayq. At its southern end, it merges with the Rub' al Khali, one of the truly forbidding sand deserts in the world and, until the 1950s, one of the least explored. The topography of this huge area, covering more than 550,000 square kilometers, is varied. In the west, the elevation is about 600 meters, and the sand is fine and soft; in the east, the elevation drops to about 180 meters, and much of the surface is covered by relatively stable sand sheets and salt flats. In places, particularly in the east, longitudinal sand dunes prevail; elsewhere sand mountains as much as 300 meters in height form complex patterns. Most of the area is totally waterless and uninhabited except for the few wandering Bedouin tribes.
Beneath the harsh deserts of Saudi Arabia lie dark chambers and complex mazes filled with crystalline structures, stalactites and stalagmites. The limestone floor of the Summan plateau, a karst area to the east of the Dahna sands, is riddled with such caves, known locally as "Dahls". Some have tiny entrances which open into caves, others lead into a maze of passages which can be several kilometers long. Local Bedouin have long known of these caves and some were used as water supplies. They were first systematically studied in 1981, and later explored and reported by the Saudi Geological Survey.
The Persian Gulf War of 1991 brought serious environmental damage to the region. The world's largest oil spill fouled gulf waters and the coastal areas of Kuwait, Iran, and much of Saudi Arabia's Persian Gulf shoreline. In some of the sections of the Saudi coast that sustained the worst damage, sediments were found to contain 7% oil. The shallow areas affected normally provide feeding grounds for birds, and feeding and nursery areas for fish and shrimp. Because the plants and animals of the sea floor are the basis of the food chain, damage to the shoreline has consequences for the whole shallow-water ecosystem, including the multimillion-dollar Saudi fisheries industry.
The spill had a severe impact on the coastal area surrounding Madinat 'al-Jubayl as Sinaiyah, the major industrial and population center newly planned and built by the Saudi government. The spill threatened industrial facilities in 'Al Jubayl because of the seawater cooling system for primary industries and threatened the supply of potable water produced by seawater-fed desalination plants. The 'Al Jubayl community harbor and Abu Ali Island, which juts into the gulf immediately north of 'Al Jubayl, experienced the greatest pollution, with the main effect of the spill concentrated in mangrove areas and shrimp grounds. Large numbers of marine birds, such as cormorants, grebes, and auks, were killed when their plumage was coated with oil. In addition, beaches along the entire 'Al Jubayl coastline were covered with oil and tar balls.
The exploding and burning of approximately 700 oil wells in Kuwait also created staggering levels of atmospheric pollution, spewed oily soot into the surrounding areas, and produced lakes of oil in the Kuwaiti desert equal in volume to twenty times the amount of oil that poured into the gulf. The soot from the Kuwaiti fires was found in the snows of the Himalayas and in rainfall over the southern members of the Commonwealth of Independent States, Iran, Oman, and Turkey. Residents of Riyadh reported that cars and outdoor furniture were covered daily with a coating of oily soot. The ultimate effects of the airborne pollution from the burning wells have yet to be determined, but samples of soil and vegetation in Ras al Khafji in northern Saudi Arabia revealed high levels of particles of oily soot incorporated into the desert ecology. The UN Environmental Programme warned that eating livestock that grazed within an area of 7,000 square kilometers of the fires, or 1,100 kilometers from the center of the fires, an area that included northern Saudi Arabia, posed a danger to human health. The overall effects of the oil spill and the oil fires on marine life, human health, water quality, and vegetation remained to be determined as of 1992. Moreover, to these two major sources of environmental damage must be added large quantities of refuse, toxic materials, and between 173 million and 207 million liters of untreated sewage in sand pits left behind by coalition forces.
Natural hazards:
frequent sand and dust storms
Environment - current issues:
desertification; depletion of ground water resources; the lack of perennial rivers or permanent water bodies has prompted the development of extensive seawater desalination facilities; coastal pollution from oil spills
Environment - international agreements:
"party to:"
Climate Change, Desertification, Endangered Species, Hazardous Wastes, Law of the Sea, Ozone Layer Protection
Area:
2,250,000 km² (the international borders of Saudi Arabia are not finalized; the Saudi government claims large tracts of land inside the neighboring countries of Yemen, Oman, and the U.A.E., in addition to others. The present figure for the size of the state includes those territories that are outside Saudi control)
"Land:"
2,250,000 km²
"Water:"
0 km²
Land use:
"Arable land:"
1.8%
"Permanent crops:"
0%
"Permanent pastures:"
56%
"Forests and Woodland:"
0%
"Other:"
42%
Irrigated land:
4,350 km²
Demographics of Saudi Arabia
The Kingdom of Saudi Arabia is the second largest state in the Arab world, with a reported population of 33,413,660 as of 2018. A significant percentage of the nation's inhabitants are immigrants seeking economic opportunity, making up 37% of the total Saudi population. Saudi Arabia has experienced a population explosion in the last 40 years, and continues to grow at a rate of 1.63% per year.
Until the 1960s, most of the population was nomadic or seminomadic; due to rapid economic and urban growth, more than 95% of the population is now settled. 80% of Saudis live in ten major urban centers—Riyadh, Jeddah, Mecca, Medina, Hofuf, Ta'if, Khobar, Yanbu, Dhahran, Dammam.
Some cities and oases have densities of more than 1,000 people per square kilometer (2,600/mile²). Saudi Arabia's population is characterized by rapid growth, far more men than women, and a large cohort of youths.
Saudi Arabia hosts one of the pillars of Islam, which obliges all Muslims to make the Hajj, or pilgrimage to Mecca, at least once during their lifetime if they are able to do so. The cultural environment in Saudi Arabia is highly conservative; the country adheres to a strict interpretation of Islamic religious law (Sharia). Cultural presentations must conform to narrowly defined standards of ethics.
Most Saudis are ethnically Arab, the majority of whom are tribal Bedouins. According to one survey, most would-be Saudi citizens come from the Indian subcontinent and Arab countries. Many Arabs from nearby countries, particularly Egypt, are employed in the kingdom; the Egyptian community developed from the 1950s onwards. There are also significant numbers of Asian expatriates, mostly from India, Pakistan, Bangladesh, Indonesia, and the Philippines, and more recently refugees from Syria and Yemen. In the 1970s and 1980s, there was also a significant community of South Korean migrant labourers, numbering in the hundreds of thousands, but due to rapid economic growth and development, most have since returned home; South Korean government statistics showed only about 1,200 of their nationals living in the kingdom, most of them professionals and business personnel. There are more than 100,000 Westerners in Saudi Arabia, most of whom live in private compounds in major cities such as Riyadh, Jeddah, Yanbu, and Dhahran. The government prohibits non-Muslims from entering the cities of Mecca and Medina.
As of 2018, the Kingdom of Saudi Arabia is estimated to have a population of 33,413,660.
The following data have been retrieved from the CIA World Factbook as of 2020.
Age structure:
0–14 years: 24.84%
15–24 years: 15.38%
25–54 years: 50.2%
55–64 years: 5.95%
65 years and over: 3.63%
Sex ratio:
at birth: 1.05 male(s)/female
0–14 years: 1.04 male(s)/female
15–24 years: 1.09 male(s)/female
25–54 years: 1.52 male(s)/female
55–64 years: 1.61 male(s)/female
65 years and over: 1.12 male(s)/female
According to the CIA World Factbook, Saudi Arabia has a large young population aged 0–19 years and an increasing middle-aged population aged 20–35 years. With a growing population reaching adulthood, global economists and the Saudi government have become concerned that there are more Saudis seeking jobs than there are jobs available. The nation has also seen a rise in its older population as life expectancy has risen throughout the last 40 years.
The following data has been retrieved from the CIA World Factbook as of 2018.
Life expectancy at birth:
total population:
male: 74.2 years
female: 77.3 years
Population Density: 15.322 people per km² of land (2017)
The following data have been retrieved from the CIA World Factbook as of 2020.
Saudi Arabia is ranked 111th in the world with a birth rate of 18.51 births per 1,000 people in 2019. The nation's death rate is ranked 220th worldwide with 3.3 deaths per 1,000 people. Although birth rates have decreased in the last two decades, their rate of decline has failed to match the significant decline in death rates. Because of this, Saudi Arabia has experienced a population explosion in the last 40 years, and continues to grow at a rate of 1.63% per year. Saudi Arabia's population growth rate is 0.295 percentage points higher than population growth rates in the Middle East and North Africa. Infant mortality rates have declined dramatically in the past twenty years, from 25.3 deaths per 1,000 live births in 1995 to 6.3 deaths in 2017, according to the World Bank. Saudi Arabia has a substantially lower infant mortality rate than the Middle East and North Africa region as a whole, which continued to face a high of 19.3 deaths for every 1,000 live births as of 2017. This significant reduction can be attributed to rising access to modern healthcare across a country that ranks 26th worldwide for healthcare system quality. The construction of new hospitals and primary healthcare centers across the Kingdom, as well as healthcare during pregnancy and increased use of vaccinations, account for the decline in infant mortality and increased life expectancy.
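As a rough, back-of-the-envelope illustration (not a Factbook figure): at a constant 1.63% annual growth rate, a population doubles in about ln 2 / ln 1.0163 ≈ 43 years, so the Kingdom's population would double roughly every four decades if the current rate held.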
"noun:"
Saudi(s)
"adjective:"
Saudi or Saudi Arabian
The ethnic composition of Saudi citizens is 90% Arab and 10% Afro-Asian.
The following data has been retrieved from the CIA World Factbook
urban population: 83.8% of total population (2018)
rate of urbanization: 2.17% annual rate of change (2015-20 est.)
Historically, the population of Saudi Arabia followed a nomadic lifestyle. Following the discovery of oil in the 1930s, the Kingdom became far more settled as people moved to centers of high economic activity. Significant population growth can be seen in the rise of urbanization throughout Saudi Arabia, which has grown 2 percent in the past ten years. The largest Saudi cities have become flooded with new residents as more people move to urban cities to find better employment opportunities, and overcrowding has become a major issue across the nation.
Migration is a significant part of Saudi Arabia's society and culture, as the nation's thriving oil economy attracts large numbers of foreign workers from an assortment of countries throughout Asia and the Arab world. Following economic diversification in response to the oil boom of the 1970s, the Saudi government encouraged skilled and semi-skilled workers to enter the Kingdom as the demand for infrastructure and development intensified. Saudi Arabia is among the top five immigrant destination countries around the world, currently hosting 5.3 million international migrants within its borders. In 2017 non-native residents accounted for 37% of the Kingdom's total population, more than twice the share in the United States, where immigrants make up 15% of the total population. The majority of Saudi Arabia's foreign-born population are males between the ages of 25 and 45. These immigrants make up a larger percentage of the total population in this age group compared to native-born Saudis aged 25–45, according to a 2013 United Nations report. 26.3% of the total migrant population in Saudi Arabia are from India, followed by Pakistan (24.2%), Bangladesh (19.5%), Egypt (19.3%), and finally the Philippines (15.3%). Most immigrants to the Kingdom are skilled, unskilled, and service-industry foreign workers. Although the living and working conditions of immigrant workers in Saudi Arabia are harsh, economic opportunity tends to be much greater than in their homelands. There are around five million illegal immigrants in Saudi Arabia, most of whom come from Africa and Asia; the government plans to deport them within the next few years. There are around 100,000 Westerners in Saudi Arabia, most of whom live in compounds or gated communities.
The government does not conduct a census on religion, but estimates put the percentage of Sunnis, the majority, at 85–90%. The rest belong to other Islamic minorities. Other smaller communities (Ismailis and Zaidis) reside in the south, with Ismailis constituting around half of the population of the province of Najran, and a small percentage of the holy Islamic cities of Mecca and Medina. There is also a Christian population of uncertain size. According to Gallup, atheists account for 5% of the population, with a total non-religious population of 24%.
The official language of Saudi Arabia is Arabic. Saudi Sign Language is the principal language of the deaf community. The large expatriate communities also speak their own languages, the most numerous of which are Hindi (1,000,000), Indonesian (850,000), Filipino/Tagalog (700,000), Malayalam (447,000), Rohingya (400,000), Urdu (380,000), and Egyptian Arabic (300,000).
OS/2
OS/2 is a series of computer operating systems, initially created by Microsoft and IBM under the leadership of IBM software designer Ed Iacobucci. As a result of a feud between the two companies over how to position OS/2 relative to Microsoft's new Windows 3.1 operating environment, the two companies severed the relationship in 1992 and OS/2 development fell to IBM exclusively. The name stands for "Operating System/2", because it was introduced as part of the same generation change release as IBM's "Personal System/2 (PS/2)" line of second-generation personal computers. The first version of OS/2 was released in December 1987 and newer versions were released until December 2001.
OS/2 was intended as a protected-mode successor of PC DOS. Notably, basic system calls were modeled after MS-DOS calls; their names even started with "Dos" and it was possible to create "Family Mode" applications – text mode applications that could work on both systems. Because of this heritage, OS/2 shares similarities with Unix, Xenix, and Windows NT.
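To make the "Dos"-prefixed naming concrete, here is a minimal, hedged sketch of OS/2 file I/O in C, using the later 32-bit forms of the calls; the header selectors and flag constants are assumed from the OS/2 Developer's Toolkit, and this is an illustration rather than sample code from the source.

```c
/* Hedged sketch: creating a file through the Dos* API rather than
   DOS interrupts. Assumes the OS/2 Developer's Toolkit headers. */
#define INCL_DOSFILEMGR
#include <os2.h>
#include <string.h>

int main(void)
{
    HFILE hf;
    ULONG ulAction, cbWritten;
    char  msg[] = "written via the Dos* API\r\n";

    /* DosOpen, DosWrite and DosClose are the protected-mode
       counterparts of the DOS file-handling services. */
    if (DosOpen((PSZ)"TEST.TXT", &hf, &ulAction, 0L, FILE_NORMAL,
                OPEN_ACTION_CREATE_IF_NEW | OPEN_ACTION_OPEN_IF_EXISTS,
                OPEN_ACCESS_WRITEONLY | OPEN_SHARE_DENYWRITE,
                NULL) == 0)
    {
        DosWrite(hf, msg, strlen(msg), &cbWritten);
        DosClose(hf);
    }
    return 0;
}
```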
IBM discontinued its support for OS/2 on 31 December 2006. Since then, it has been updated, maintained and marketed under the name eComStation. In 2015 it was announced that a new OEM distribution of OS/2 would be released that was to be called ArcaOS. ArcaOS is available for purchase.
The development of OS/2 began when IBM and Microsoft signed the "Joint Development Agreement" in August 1985. It was code-named "CP/DOS" and it took two years for the first product to be delivered.
OS/2 1.0 was announced in April 1987 and released in December. The original release is text-mode only; a GUI was introduced with OS/2 1.1 about a year later. OS/2 features an API for controlling the video display (VIO) and handling keyboard and mouse events so that programmers writing protected-mode applications need not call the BIOS or access hardware directly. Other development tools included a subset of the video and keyboard APIs as linkable libraries so that family mode programs are able to run under MS-DOS, and, in the OS/2 Extended Edition v1.0, a database engine called Database Manager or DBM (this was related to DB2, and should not be confused with the DBM family of database engines for Unix and Unix-like operating systems). A task-switcher named Program Selector was available through the Ctrl-Esc hotkey combination, allowing the user to select among multitasked text-mode sessions (or screen groups; each can run multiple programs).
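As a sketch of the VIO text-mode interface just described (names follow the 16-bit toolkit conventions; a hedged illustration, not canonical IBM sample code):

```c
/* Hedged sketch: text output through the VIO API rather than direct
   BIOS or hardware access. Assumes the toolkit's 16-bit VIO names. */
#define INCL_VIO
#include <os2.h>

int main(void)
{
    char msg[] = "Hello through VIO\r\n";

    /* VioWrtTTY writes at the cursor position; handle 0 selects the
       default video session for this screen group. */
    VioWrtTTY((PCH)msg, sizeof(msg) - 1, (HVIO)0);
    return 0;
}
```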
Communications and database-oriented extensions were delivered in 1988, as part of OS/2 1.0 Extended Edition: SNA, X.25/APPC/LU 6.2, LAN Manager, Query Manager, SQL.
The promised user interface, Presentation Manager, was introduced with OS/2 1.1 in October 1988. It had a similar user interface to Windows 2.1, which was released in May of that year. (The interface was replaced in versions 1.2 and 1.3 by a look closer in appearance to Windows 3.1).
The Extended Edition of 1.1, sold only through IBM sales channels, introduced distributed database support to IBM database systems and SNA communications support to IBM mainframe networks.
In 1989, Version 1.2 introduced Installable Filesystems and, notably, the HPFS filesystem. HPFS provided a number of improvements over the older FAT file system, including long filenames and a form of alternate data streams called Extended Attributes. In addition, extended attributes were also added to the FAT file system.
The Extended Edition of 1.2 introduced TCP/IP and Ethernet support.
OS/2- and Windows-related books of the late 1980s acknowledged the existence of both systems and promoted OS/2 as the system of the future.
The collaboration between IBM and Microsoft unravelled in 1990, between the releases of Windows 3.0 and OS/2 1.3. During this time, Windows 3.0 became a tremendous success, selling millions of copies in its first year. Much of its success was because Windows 3.0 (along with MS-DOS) was bundled with most new computers. OS/2, on the other hand, was available only as an additional stand-alone software package. In addition, OS/2 lacked device drivers for many common devices such as printers, particularly non-IBM hardware. Windows, on the other hand, supported a much larger variety of hardware. The increasing popularity of Windows prompted Microsoft to shift its development focus from cooperating on OS/2 with IBM to building its own business based on Windows.
Several technical and practical reasons contributed to this breakup.
The two companies had significant differences in culture and vision. Microsoft favored the open hardware system approach that contributed to its success on the PC; IBM sought to use OS/2 to drive sales of its own hardware, including systems that could not support the features Microsoft wanted. Microsoft programmers also became frustrated with IBM's bureaucracy and its use of lines of code to measure programmer productivity. IBM developers complained about the terseness and lack of comments in Microsoft's code, while Microsoft developers complained that IBM's code was bloated.
The two products have significant differences in their APIs. OS/2 was announced when Windows 2.0 was near completion, and the Windows API was already defined. However, IBM requested that this API be significantly changed for OS/2. Therefore, issues surrounding application compatibility appeared immediately. OS/2's designers hoped for source code conversion tools, allowing complete migration of Windows application source code to OS/2 at some point. However, OS/2 1.x did not gain enough momentum to allow vendors to avoid developing for both OS/2 and Windows in parallel.
OS/2 1.x targets the Intel 80286 processor's protected mode, which DOS fundamentally does not use. IBM insisted on supporting the 80286 processor, with its 16-bit segmented memory mode, because of commitments made to customers who had purchased many 80286-based PS/2s as a result of IBM's promises surrounding OS/2. Until release 2.0 in April 1992, OS/2 ran in 16-bit protected mode and therefore could not benefit from the Intel 80386's much simpler 32-bit flat memory model and virtual 8086 mode features. This was especially painful in providing support for DOS applications. While, in 1988, Windows/386 2.1 could run several cooperatively multitasked DOS applications, including expanded memory (EMS) emulation, OS/2 1.3, released in 1991, was still limited to one "DOS box".
Given these issues, Microsoft started to work in parallel on a version of Windows which was more future-oriented and more portable. The hiring of Dave Cutler, former VMS architect, in 1988 created an immediate competition with the OS/2 team, as Cutler did not think much of the OS/2 technology and wanted to build on his work at Digital rather than creating a "DOS plus". His "NT OS/2" was a completely new architecture.
IBM grew concerned about the delays in development of OS/2 2.0. Initially, the companies agreed that IBM would take over maintenance of OS/2 1.0 and development of OS/2 2.0, while Microsoft would continue development of OS/2 3.0. In the end, Microsoft decided to recast NT OS/2 3.0 as Windows NT, leaving all future OS/2 development to IBM. From a business perspective, it was logical to concentrate on a consumer line of operating systems based on DOS and Windows, and to prepare a new high-end system in such a way as to keep good compatibility with existing Windows applications. While it waited for this new high-end system to develop, Microsoft would still receive licensing money from Xenix and OS/2 sales. Windows NT's OS/2 heritage can be seen in its initial support for the HPFS filesystem, text mode OS/2 1.x applications, and OS/2 LAN Manager network support. Some early NT materials even included OS/2 copyright notices embedded in the software.
One example of NT's OS/2 1.x support is in the Windows 2000 Resource Kit. Windows NT could also support OS/2 1.x Presentation Manager and AVIO applications with the addition of the Windows NT Add-On Subsystem for Presentation Manager.
OS/2 2.0 was released in April 1992. At the time, the suggested retail price was U.S. $195, while Windows retailed for $150.
OS/2 2.0 provided a 32-bit API for native programs, though the OS itself still contained some 16-bit code and drivers. It also included a new OOUI (object-oriented user interface) called the Workplace Shell. This was a fully object-oriented interface that was a significant departure from the previous GUI. Rather than merely providing an environment for program windows (such as the Program Manager), the Workplace Shell provided an environment in which the user could manage programs, files and devices by manipulating objects on the screen. With the Workplace Shell, everything in the system is an "object" to be manipulated.
OS/2 2.0 was touted by IBM as "a better DOS than DOS and a better Windows than Windows". It managed this by including a fully licensed copy of MS-DOS 5.0, which had been patched and improved upon. For the first time, OS/2 was able to run more than one DOS application at a time. This was so effective that it allowed OS/2 to run a modified copy of Windows 3.0, itself a DOS extender, including Windows 3.0 applications.
Because of the limitations of the Intel 80286 processor, OS/2 1.x could run only one DOS program at a time, and did this in a way that allowed the DOS program to have total control over the computer. A problem in DOS mode could crash the entire computer. In contrast, OS/2 2.0 could leverage the virtual 8086 mode of the Intel 80386 processor to create a much safer virtual machine in which to run DOS programs. This included an extensive set of configuration options to optimize the performance and capabilities given to each DOS program. Any real-mode operating system (such as 8086 Xenix) could also be made to run using OS/2's virtual machine capabilities, subject to certain direct hardware access limitations.
Like most 32-bit environments, OS/2 could not run protected-mode DOS programs using the older VCPI interface, unlike the Standard mode of Windows 3.1; it only supported programs written according to DPMI. (Microsoft discouraged the use of VCPI under Windows 3.1, however, due to performance degradation.)
Unlike Windows NT, OS/2 always allowed DOS programs the possibility of masking real hardware interrupts, so any DOS program could deadlock the machine in this way. OS/2 could, however, use a hardware watchdog on selected machines (notably IBM machines) to break out of such a deadlock. Later, release 3.0 leveraged the enhancements of newer Intel 80486 and Intel Pentium processors—the Virtual Interrupt Flag (VIF), which was part of the Virtual Mode Extensions (VME)—to solve this problem.
Compatibility with Windows 3.0 (and later Windows 3.1) was achieved by adapting Windows user-mode code components to run inside a virtual DOS machine (VDM). Originally, a nearly complete version of Windows code was included with OS/2 itself: Windows 3.0 in OS/2 2.0, and Windows 3.1 in OS/2 2.1. Later, IBM developed versions of OS/2 that would use whatever Windows version the user had installed previously, patching it on the fly, and sparing the cost of an additional Windows license. It could either run full-screen, using its own set of video drivers, or "seamlessly," where Windows programs would appear directly on the OS/2 desktop. The process containing Windows was given fairly extensive access to hardware, especially video, and the result was that switching between a full-screen WinOS/2 session and the Workplace Shell could occasionally cause issues.
Because OS/2 only runs the user-mode system components of Windows, it is incompatible with Windows device drivers (VxDs) and applications that require them.
Multiple Windows applications run by default in a single Windows session – multitasking cooperatively and without memory protection – just as they would under native Windows 3.x. However, to achieve true isolation between Windows 3.x programs, OS/2 can also run multiple copies of Windows in parallel, with each copy residing in a separate VDM. The user can then optionally place each program either in its own Windows session – with preemptive multitasking and full memory protection "between" sessions, though not "within" them – or allow some applications to run together cooperatively in a shared Windows session while isolating other applications in one or more separate Windows sessions. At the cost of additional hardware resources, this approach can protect each program in any given Windows session (and each instance of Windows itself) from every other program running in any "separate" Windows session (though not from other programs running in the same Windows session).
Whether Windows applications are running in full-screen or windowed mode, and in one Windows session or several, it is possible to use DDE between OS/2 and Windows applications, and OLE between Windows applications only.
Released in 1994, OS/2 version 3.0 was labelled OS/2 Warp to highlight the new performance benefits and generally to freshen the product image. "Warp" had originally been the internal IBM name for the release: IBM claimed that it had used "Star Trek" terms as internal names for prior OS/2 releases, and that this one seemed appropriate for external use as well. At the launch of OS/2 Warp in 1994, Patrick Stewart was to be the Master of Ceremonies; however, Kate Mulgrew of the then-upcoming series "Star Trek: Voyager" was substituted at the last minute.
OS/2 Warp offers a host of benefits over OS/2 2.1, notably broader hardware support, greater multimedia capabilities, Internet-compatible networking, and a basic office application suite known as IBM Works. It was released in two versions: the less expensive "Red Spine" and the more expensive "Blue Spine" (named for the color of their boxes). "Red Spine" was designed to support Microsoft Windows applications by utilizing any existing installation of Windows on the computer's hard drive. "Blue Spine" includes Windows support in its own installation, and so can support Windows applications without a Windows installation. As most computers were sold with Microsoft Windows pre-installed and its price was lower, "Red Spine" was the more popular product. OS/2 Warp Connect—which has full LAN client support built in—followed in mid-1995. Warp Connect was nicknamed "Grape".
In OS/2 2.0, most performance-sensitive subsystems, including the graphics (Gre) and multimedia (MMPM/2) systems, were updated to 32-bit code in a fixpack, and included as part of OS/2 2.1. Warp 3 brought about a fully 32-bit windowing system, while Warp 4 introduced the object-oriented 32-bit GRADD display driver model.
In 1996, Warp 4 added Java and speech recognition software. IBM also released server editions of Warp 3 and Warp 4 which bundled IBM's LAN Server product directly into the operating system installation. A personal version of Lotus Notes was also included, with a number of template databases for contact management, brainstorming, and so forth. The UK-distributed free demo CD-ROM of OS/2 Warp essentially contained the entire OS and was easily, even accidentally, cracked, meaning that even people who liked it did not have to buy it. This was seen as a backdoor tactic to increase the number of OS/2 users, in the belief that this would increase sales and demand for third-party applications, and thus strengthen OS/2's desktop numbers. This suggestion was bolstered by the fact that this demo version had replaced another which was not so easily cracked, but which had been released with trial versions of various applications. In 2000, the July edition of "Australian Personal Computer" magazine bundled a software CD-ROM that included a full version of Warp 4, requiring no activation and amounting to a free release. Special versions of OS/2 2.11 and Warp 4 also included symmetric multiprocessing (SMP) support.
OS/2 sales were largely concentrated in networked computing used by corporate professionals; however, by the early 1990s, it was overtaken by Microsoft Windows NT. While OS/2 was arguably technically superior to Microsoft Windows 95, OS/2 failed to develop much penetration in the consumer and stand-alone desktop PC segments; there were reports that it could not be installed properly on IBM's own Aptiva series of home PCs. Microsoft made an offer in 1994 under which IBM would receive the same terms as Compaq (the largest PC manufacturer at the time) for a license of Windows 95, if IBM ended development of OS/2 completely. IBM refused and instead went with an "IBM First" strategy of promoting OS/2 Warp and disparaging Windows, as IBM aimed to drive sales of its own software as well as hardware. By 1995, Windows 95 negotiations between IBM and Microsoft, which were already difficult, stalled when IBM purchased Lotus SmartSuite, which would have directly competed with Microsoft Office. As a result of the dispute, IBM signed the license agreement 15 minutes before Microsoft's Windows 95 launch event, later than its competitors, and this badly hurt sales of IBM PCs. IBM officials later conceded that OS/2 would not have been a viable operating system to keep them in the PC business.
In 1991 IBM started development on an intended replacement for OS/2 called Workplace OS. This was an entirely new product, brand new code, that borrowed only a few sections of code from both the existing OS/2 and AIX products. It used an entirely new microkernel code base, intended (eventually) to host several of IBM's operating systems (including OS/2) as microkernel "personalities". It also included major new architectural features including a system registry, JFS, support for UNIX graphics libraries, and a new driver model.
Workplace OS was developed solely for POWER platforms, and IBM intended to market a full line of PowerPCs in an effort to take over the market from Intel. A mission was formed to create prototypes of these machines, and they were disclosed to several corporate customers, all of whom raised issues with the idea of dropping Intel.
Advanced plans for the new code base would eventually include replacement of the OS/400 operating system by Workplace OS, as well as a microkernel product that would have been used in industries such as telecommunications and set-top television receivers.
A partially functional pre-alpha version of Workplace OS was demonstrated at Comdex, where a bemused Bill Gates stopped by the booth. The second and last time it would be shown in public was at an OS/2 user group in Phoenix, Arizona; the pre-alpha code refused to boot.
It was released in 1995, but with $990 million being spent per year on development of this as well as Workplace OS, and no possible profit or widespread adoption in sight, the end of the entire Workplace OS and OS/2 product line was near.
A project was launched internally by IBM to evaluate the looming competitive situation with Microsoft Windows 95. Primary concerns included the major code quality issues in the existing OS/2 product (resulting in over 20 service packs, each requiring more diskettes than the original installation), and the ineffective and heavily matrixed development organization in Boca Raton (where the consultants reported that "basically, everybody reports to everybody") and Austin.
That study, tightly classified as "Registered Confidential" and printed only in numbered copies, identified untenable weaknesses and failures across the board in the Personal Systems Division as well as across IBM as a whole. This resulted in a decision being made at a level above the Division to cut over 95% of the overall budget for the entire product line, end all new development (including Workplace OS), eliminate the Boca Raton development lab, end all sales and marketing efforts of the product, and lay off over 1,300 development individuals (as well as sales and support personnel). $990 million had been spent in the last full year. Warp 4 became the last distributed version of OS/2.
A small and dedicated community remained faithful to OS/2 for many years after its final mainstream release, but overall, OS/2 failed to catch on in the mass market and is little used outside certain niches where IBM traditionally had a stronghold. For example, many bank installations, especially automated teller machines, run OS/2 with a customized user interface; French SNCF national railways used OS/2 1.x in thousands of ticket selling machines. Telecom companies such as Nortel use OS/2 in some voicemail systems. Also, OS/2 was used for the host PC used to control the Satellite Operations Support System equipment installed at NPR member stations from 1994 to 2007, and used to receive the network's programming via satellite.
Although IBM began indicating shortly after the release of Warp 4 that OS/2 would eventually be withdrawn, the company did not end support until December 31, 2006. Sales of OS/2 stopped on December 23, 2005. The latest IBM version is 4.52, which was released for both desktop and server systems in December 2001. Serenity Systems has been reselling OS/2 since 2001, calling it eComStation. Version 1.2 was released in 2004. After a series of preliminary "release candidates," version 2.0 GA (General Availability) was released on 15 May 2010. eComStation version 2.1 GA was released on May 20, 2011.
IBM is still delivering defect support for a fee. IBM urges customers to migrate their often highly complex applications to e-business technologies such as Java in a platform-neutral manner. Once application migration is completed, IBM recommends migration to a different operating system, suggesting Linux as an alternative.
Support for running OS/2 under virtualization appears to be improving in several third-party products. OS/2 has historically been more difficult to run in a virtual machine than most other legacy x86 operating systems because of its extensive reliance on the full set of features of the x86 CPU; in particular, OS/2's use of ring 2 prevented it from running in VMware. Emulators such as QEMU and Bochs don't suffer from this problem and can run OS/2.
A beta of VMware Workstation 2.0 released in January 2000 was the first hypervisor that could run OS/2 at all. Later, the company decided to drop official OS/2 support.
VirtualPC from Microsoft (originally Connectix) has been able to run OS/2 without hardware virtualization support for many years. It also provided "additions" code which greatly improves host–guest OS interactions in OS/2. The additions are not provided with the current version of VirtualPC, but the version last included with a release may still be used with current releases. At one point, OS/2 was a supported host for VirtualPC in addition to a guest. Note that OS/2 runs only as a guest on those versions of VirtualPC that use virtualization (x86 based hosts) and not those doing full emulation (VirtualPC for Mac).
VirtualBox from Oracle Corporation (originally InnoTek, later Sun) supports OS/2 1.x, Warp 3 through 4.5, and eComStation, as well as "Other OS/2", as guests. However, attempting to run OS/2 and eComStation can still be difficult, if not impossible, because of the strict requirement for VT-x/AMD-V hardware-enabled virtualization, and only ACP2/MCP2 is reported to work reliably.
The difficulties in efficiently running OS/2 have, at least once, created an opportunity for a new virtualization company. A large bank in Moscow needed a way to use OS/2 on newer hardware that OS/2 did not support. As virtualization software is an easy way around this, the company desired to run OS/2 under a hypervisor. Once it was determined that VMware was not a possibility, it hired a group of Russian software developers to write a host-based hypervisor that would officially support OS/2. Thus, the Parallels, Inc. company and their Parallels Workstation product was born.
OS/2 has few native computer viruses; while it is not invulnerable by design, its reduced market share appears to have discouraged virus writers. There are, however, OS/2-based antivirus programs, dealing with DOS viruses and Windows viruses that could pass through an OS/2 server.
Many people hoped that IBM would release OS/2 or a significant part of it as open source. Petitions were held in 2005 and 2007, but IBM refused them, citing legal and technical reasons. It is unlikely that the entire OS will be opened at any point in the future, because it contains third-party code to which IBM does not have copyright, much of it from Microsoft. IBM also once engaged in a technology transfer with Commodore, licensing Amiga technology for OS/2 2.0 and above in exchange for the REXX scripting language. This means that OS/2 may contain code that was not written by IBM, which could prevent the OS from being released as open source in the future. On the other hand, IBM donated Object REXX for Windows and OS/2 to the "Open Object REXX" project maintained by the "REXX Language Association" on SourceForge.
There was a petition, arranged by OS2World, to open parts of the OS. Open source operating systems such as Linux have already profited from OS/2 indirectly through IBM's release of the improved JFS file system, which was ported from the OS/2 code base. As IBM didn't release the source of the OS/2 JFS driver, developers ported the Linux driver back to eComStation and added the functionality to boot from a JFS partition. This new JFS driver has been integrated into eComStation v2.0, the successor of OS/2.
Release dates refer to the US English editions unless otherwise noted.
The graphic system has a layer named Presentation Manager that manages windows, fonts, and icons. This is similar in functionality to a non-networked version of X11 or the Windows GDI. On top of this lies the Workplace Shell (WPS) introduced in OS/2 2.0. WPS is an object-oriented shell allowing the user to perform traditional computing tasks such as accessing files, printers, launching legacy programs, and advanced object oriented tasks using built-in and third-party application objects that extended the shell in an integrated fashion not available on any other mainstream operating system. WPS follows IBM's Common User Access user interface standards.
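For a sense of how a native Presentation Manager program is structured (window class registration, a standard frame window, and a message loop), here is a minimal hedged sketch in C; the frame-creation flags and the exact WinCreateStdWindow signature are assumed from the OS/2 toolkit headers, and the class and title strings are placeholders.

```c
/* Hedged sketch of a minimal Presentation Manager program: register a
   window class, create a standard frame window, run the message loop. */
#define INCL_WIN
#include <os2.h>

static MRESULT EXPENTRY wndProc(HWND hwnd, ULONG msg, MPARAM mp1, MPARAM mp2)
{
    /* Default processing for every message keeps the example minimal. */
    return WinDefWindowProc(hwnd, msg, mp1, mp2);
}

int main(void)
{
    HAB   hab = WinInitialize(0);
    HMQ   hmq = WinCreateMsgQueue(hab, 0);
    ULONG flFrame = FCF_TITLEBAR | FCF_SYSMENU | FCF_SIZEBORDER |
                    FCF_MINMAX | FCF_SHELLPOSITION | FCF_TASKLIST;
    HWND  hwndFrame, hwndClient;
    QMSG  qmsg;

    WinRegisterClass(hab, (PSZ)"Hello", wndProc, CS_SIZEREDRAW, 0);
    hwndFrame = WinCreateStdWindow(HWND_DESKTOP, WS_VISIBLE, &flFrame,
                                   (PSZ)"Hello", (PSZ)"Hello, PM", 0,
                                   NULLHANDLE, 0, &hwndClient);

    while (WinGetMsg(hab, &qmsg, NULLHANDLE, 0, 0))
        WinDispatchMsg(hab, &qmsg);

    WinDestroyWindow(hwndFrame);
    WinDestroyMsgQueue(hmq);
    WinTerminate(hab);
    return 0;
}
```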
WPS represents objects such as disks, folders, files, program objects, and printers using the System Object Model (SOM), which allows code to be shared among applications, possibly written in different programming languages. A distributed version called DSOM allowed objects on different computers to communicate. DSOM is based on CORBA. The object oriented aspect of SOM is similar to, and a direct competitor to, Microsoft's Component Object Model, though it is implemented in a radically different manner; for instance, one of the most notable differences between SOM and COM is SOM's support for inheritance (one of the most fundamental concepts of OO programming)—COM does not have such support. SOM and DSOM are no longer being developed.
The multimedia capabilities of OS/2 are accessible through Media Control Interface commands.
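As a hedged sketch of what a string-based MCI command looks like in practice: the INCL_OS2MM selector, the os2me.h header, and the mciSendString signature below are assumptions based on the MMPM/2 toolkit, and the file name is a placeholder.

```c
/* Hedged sketch: driving MMPM/2 through string-based MCI commands. */
#define INCL_OS2MM
#include <os2.h>
#include <os2me.h>

int main(void)
{
    char ret[128];

    /* Each command names a device or alias, an action, and options;
       "wait" makes the call synchronous. chimes.wav is a placeholder. */
    mciSendString((PSZ)"open chimes.wav alias snd wait", (PSZ)ret,
                  sizeof(ret), NULLHANDLE, 0);
    mciSendString((PSZ)"play snd wait", (PSZ)ret, sizeof(ret), NULLHANDLE, 0);
    mciSendString((PSZ)"close snd", (PSZ)ret, sizeof(ret), NULLHANDLE, 0);
    return 0;
}
```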
The last update (bundled with the IBM version of Netscape Navigator plugins) added support for MPEG files. Support for newer formats such as PNG, progressive JPEG, DivX, Ogg, and MP3 comes from third parties; sometimes it is integrated with the multimedia system, but in other cases it comes as standalone applications.
The following list of commands is supported by cmd.exe on OS/2.
OS/2 also includes a radical advancement in application development with compound document technology called OpenDoc, which was developed with Apple. OpenDoc proved interesting as a technology, but was not widely used or accepted by users or developers. OpenDoc is also no longer being developed.
The TCP/IP stack is based on the open-source BSD stack, as can be seen with SCCS-compatible tools such as "what".
Hardware vendors were reluctant to support device drivers for alternative operating systems including OS/2 and Linux, leaving users with few choices from a select few vendors. To relieve this issue for video cards, IBM licensed a reduced version of the Scitech display drivers, allowing users to choose from a wide selection of cards supported through Scitech's modular driver design.
Some problems were classic subjects of comparison with other operating systems:
OS/2 has been widely used in Iran Export Bank (Bank Saderat Iran) in its teller machines, ATMs, and local servers (over 30,000 workstations). As of 2011, the bank had moved to virtualize and renew its infrastructure by moving OS/2 to virtual machines running on Windows.
OS/2 was widely used in Brazilian banks. Banco do Brasil had a peak of 10,000 machines running OS/2 Warp in the 1990s. OS/2 was used in automated teller machines until 2006. The workstations, automated teller machines, and attendant computers have since been migrated to Linux.
OS/2 has been used elsewhere in the banking industry. Suncorp bank in Australia still ran its ATM network on OS/2 as late as 2002, and ATMs in Perisher Blue used OS/2 as late as 2009 and even into the turn of the decade.
OS/2 was widely adopted by accounting professionals and auditing companies. By the mid-1990s, native 32-bit accounting software was well developed and serving corporate markets.
OS/2 ran the faulty baggage handling system at Denver International Airport. The software written to run on the OS, not the OS itself, was at fault, and it led to massive delays in the opening of the new airport; the baggage handling system was eventually scrapped and removed.
OS/2 was used by radio personality Howard Stern. He once had a 10-minute on-air rant about OS/2 versus Windows 95 and recommended OS/2. He also used OS/2 on his IBM 760CD laptop.
OS/2 was used as part of the Satellite Operations Support System (SOSS) for NPR's Public Radio Satellite System. SOSS was a computer-controlled system using OS/2 that NPR member stations used to receive programming feeds via satellite. SOSS was introduced in 1994 using OS/2 3.0, and was retired in 2007, when NPR switched over to its successor, the ContentDepot.
OS/2 was used to control the SkyTrain automated light rail system in Vancouver, British Columbia, Canada until the late 2000s when it was replaced by Windows XP.
OS/2 was used in the London Underground Jubilee Line Extension Signals Control System (JLESCS) in London, UK. This control system, delivered by Alcatel, was in use from 1999 to 2011, i.e., between the pre-opening abandonment of the line's original, never-implemented automatic train control system and the present SelTrac system. JLESCS did not provide automatic train operation, only manual train supervision. Six OS/2 local site computers were distributed along the railway between Stratford and Westminster and the shunting tower at Stratford depot, and several formed the central equipment located at Neasden. It was once intended to cover the rest of the line between Green Park and Stanmore, but this was never introduced.
OS/2 has been used by The Co-operative Bank in the UK for its domestic call centre staff, using a bespoke program created to access customer accounts which cannot easily be migrated to Windows.
OS/2 has been used by the Stop & Shop supermarket chain (and has been installed in new stores as recently as March 2010).
OS/2 has been used on ticket machines for Croydon Tramlink in outer-London (UK).
OS/2 has been used in New York City's subway system for MetroCards. Rather than interfacing with the user, it connects simple computers with the mainframes. When the NYC MTA finishes its transition to contactless payment, OS/2 will be removed.
OS/2 was used in checkout systems at Safeway supermarkets.
OS/2 was used by Trenitalia, both for the desktops at ticket counters and for the automatic ticket machines, up to 2011. Incidentally, the automatic ticket machines running OS/2 were more reliable than the current ones running a flavor of Windows.
OS/2 was used as the main operating system for Abbey National General Insurance motor and home direct call centre products, using the PMSC Series III insurance platform on DB2.2, from 1996 to 2001.
"BYTE" in 1989 listed OS/2 as among the "Excellence" winners of the BYTE Awards, stating that it "is today where the Macintosh was in 1984: It's a development platform in search of developers". The magazine predicted that "When it's complete and bug-free, when it can really use the 80386, and when more desktops sport OS/2-capable PCs, OS/2 will—deservedly—supersede DOS. But even as it stands, OS/2 is a milestone product".
In March 1995, OS/2 won seven awards.
IBM has used OS/2 in a wide variety of hardware products, effectively as a form of embedded operating system.
Oliver Cromwell
Oliver Cromwell (25 April 1599 – 3 September 1658) was an English general and statesman who led the Parliament of England's armies against King Charles I during the English Civil War and ruled the British Isles as Lord Protector from 1653 until his death in 1658. He acted simultaneously as head of state and head of government of the new republican commonwealth.
Cromwell was born into the middle gentry to a family descended from the sister of Henry VIII's minister Thomas Cromwell. Little is known of the first 40 years of his life, as only four of his personal letters survive along with a summary of a speech that he delivered in 1628. He became an Independent Puritan after undergoing a religious conversion in the 1630s, taking a generally tolerant view towards the many Protestant sects of his period. He was an intensely religious man, and he fervently believed that God was guiding his victories. He was elected Member of Parliament for Huntingdon in 1628 and for Cambridge in the Short (1640) and Long (1640–1649) Parliaments. He entered the English Civil Wars on the side of the "Roundheads", or Parliamentarians, and gained the nickname "Old Ironsides". He demonstrated his ability as a commander and was quickly promoted from leading a single cavalry troop to being one of the principal commanders of the New Model Army, playing an important role under General Sir Thomas Fairfax in the defeat of the Royalist ("Cavalier") forces.
Cromwell was one of the signatories of King Charles I's death warrant in 1649, and he dominated the short-lived Commonwealth of England as a member of the Rump Parliament (1649–1653). He was selected to take command of the English campaign in Ireland in 1649–1650. Cromwell's forces defeated the Confederate and Royalist coalition in Ireland and occupied the country, bringing to an end the Irish Confederate Wars. During this period, a series of Penal Laws were passed against Roman Catholics (a significant minority in England and Scotland but the vast majority in Ireland), and a substantial amount of their land was confiscated. Cromwell also led a campaign against the Scottish army between 1650 and 1651.
On 20 April 1653, he dismissed the Rump Parliament by force, setting up a short-lived nominated assembly known as Barebone's Parliament before being invited by his fellow leaders to rule as Lord Protector of England (which included Wales at the time), Scotland, and Ireland from 16 December 1653. As a ruler, he executed an aggressive and effective foreign policy. He died from natural causes in 1658 and was buried in Westminster Abbey. The Royalists returned to power along with King Charles II in 1660, and they had his corpse dug up, hung in chains, and beheaded.
Cromwell is one of the most controversial figures in the history of the British Isles, considered a regicidal dictator by historians such as David Sharp, a military dictator by Winston Churchill, and a hero of liberty by John Milton, Thomas Carlyle, and Samuel Rawson Gardiner. His tolerance of Protestant sects did not extend to Catholics; his measures against them in Ireland have been characterised by some as genocidal or near-genocidal, and his record is strongly criticised in Ireland. He was selected as one of the ten greatest Britons of all time in a 2002 BBC poll.
Cromwell was born in Huntingdon on 25 April 1599 to Robert Cromwell and his second wife Elizabeth, daughter of William Steward (buried in Ely Cathedral in 1593). The family's estate derived from Oliver's great-great-grandfather Morgan ap William, a brewer from Glamorgan who settled at Putney near London, and married Katherine Cromwell (born 1482), the sister of Thomas Cromwell, who would become the famous chief minister to Henry VIII. The Cromwell family acquired great wealth as occasional beneficiaries of Thomas's administration of the Dissolution of the Monasteries. Morgan ap William was a son of William ap Yevan of Wales. The family line continued through Richard Williams (alias Cromwell), (c. 1500–1544), Henry Williams (alias Cromwell), (c. 1524 – 6 January 1604), then to Oliver's father Robert Williams, alias Cromwell (c. 1560–1617), who married Elizabeth Steward (c. 1564 – 1654), probably in 1591. They had ten children, but Oliver, the fifth child, was the only boy to survive infancy.
Cromwell's paternal grandfather Sir Henry Williams was one of the two wealthiest landowners in Huntingdonshire. Cromwell's father Robert was of modest means but still a member of the landed gentry. As a younger son with many siblings, Robert inherited only a house at Huntingdon and a small amount of land. This land would have generated an income of up to £300 a year, near the bottom of the range of gentry incomes. Cromwell himself in 1654 said, "I was by birth a gentleman, living neither in considerable height, nor yet in obscurity."
Cromwell was baptised on 29 April 1599 at St John's Church, and attended Huntingdon Grammar School. He went on to study at Sidney Sussex College, Cambridge, then a recently founded college with a strong Puritan ethos. He left in June 1617 without taking a degree, immediately after his father's death. Early biographers claim that he then attended Lincoln's Inn, but the Inn's archives retain no record of him. Antonia Fraser concludes that it was likely that he did train at one of the London Inns of Court during this time. His grandfather, his father, and two of his uncles had attended Lincoln's Inn, and Cromwell sent his son Richard there in 1647.
Cromwell probably returned home to Huntingdon after his father's death. As his mother was widowed, and his seven sisters unmarried, he would have been needed at home to help his family.
Cromwell married Elizabeth Bourchier (1598–1665) on 22 August 1620 at St Giles-without-Cripplegate, Fore Street, London. Elizabeth's father, Sir James Bourchier, was a London leather merchant who owned extensive lands in Essex and had strong connections with Puritan gentry families there. The marriage brought Cromwell into contact with Oliver St John and with leading members of the London merchant community, and behind them the influence of the Earls of Warwick and Holland. A place in this influential network would prove crucial to Cromwell's military and political career. The couple had nine children.
Little evidence exists of Cromwell's religion at this stage. His letter in 1626 to Henry Downhall, an Arminian minister, suggests that Cromwell had yet to be influenced by radical Puritanism. However, there is evidence that Cromwell went through a period of personal crisis during the late 1620s and early 1630s. In 1628 he was elected to Parliament from the Huntingdonshire county town of Huntingdon. Later that year, he sought treatment for a variety of physical and emotional ailments, including "valde melancholicus" (depression), from the Swiss-born London doctor Théodore de Mayerne. In 1629 he was caught up in a dispute among the gentry of Huntingdon over a new charter for the town, as a result of which he was called before the Privy Council in 1630.
In 1631 Cromwell sold most of his properties in Huntingdon—probably as a result of the dispute—and moved to a farmstead in nearby St Ives (then in Huntingdonshire, now in Cambridgeshire). This signified a major step down in society compared with his previous position, and seems to have had a significant emotional and spiritual impact. A 1638 letter survives from Cromwell to his cousin, the wife of Oliver St John, and gives an account of his spiritual awakening. The letter outlines how, having been "the chief of sinners", Cromwell had been called to be among "the congregation of the firstborn". The language of this letter, which is saturated with biblical quotations and which represents Cromwell as having been saved from sin by God's mercy, places his faith firmly within the Independent beliefs that the Reformation had not gone far enough, that much of England was still living in sin, and that Catholic beliefs and practices needed to be fully removed from the church. It would appear that in 1634 Cromwell attempted to emigrate to Connecticut in America, but was prevented by the government from leaving.
Along with his brother Henry, Cromwell had kept a smallholding of chickens and sheep, selling eggs and wool to support himself, his lifestyle resembling that of a yeoman farmer. In 1636 Cromwell inherited control of various properties in Ely from his uncle on his mother's side, and his uncle's job as tithe collector for Ely Cathedral. As a result, his income is likely to have risen to around £300–400 per year; by the end of the 1630s Cromwell had returned to the ranks of acknowledged gentry. He had become a committed Puritan and had established important family links to leading families in London and Essex.
Cromwell became the Member of Parliament for Huntingdon in the Parliament of 1628–1629, as a client of the Montagu family of Hinchingbrooke House. He made little impression: records for the Parliament show only one speech (against the Arminian Bishop Richard Neile), which was poorly received. After dissolving this Parliament, Charles I ruled without a Parliament for the next 11 years. When Charles faced the Scottish rebellion known as the Bishops' Wars, shortage of funds forced him to call a Parliament again in 1640. Cromwell was returned to this Parliament as member for Cambridge, but it lasted for only three weeks and became known as the Short Parliament. Cromwell moved his family from Ely to London in 1640.
A second Parliament was called later the same year, and became known as the Long Parliament. Cromwell was again returned as member for Cambridge. As with the Parliament of 1628–29, it is likely that Cromwell owed his position to the patronage of others, which might explain why in the first week of the Parliament he was in charge of presenting a petition for the release of John Lilburne, who had become a Puritan cause célèbre after his arrest for importing religious tracts from the Netherlands. For the first two years of the Long Parliament Cromwell was linked to the godly group of aristocrats in the House of Lords and Members of the House of Commons with whom he had established familial and religious links in the 1630s, such as the Earls of Essex, Warwick and Bedford, Oliver St John and Viscount Saye and Sele. At this stage, the group had an agenda of reformation: the executive checked by regular parliaments, and the moderate extension of liberty of conscience. Cromwell appears to have taken a role in some of this group's political manoeuvres. In May 1641, for example, it was Cromwell who put forward the second reading of the Annual Parliaments Bill and later took a role in drafting the Root and Branch Bill for the abolition of episcopacy.
Failure to resolve the issues before the Long Parliament led to armed conflict between Parliament and Charles I in late 1642, the beginning of the English Civil War. Before joining Parliament's forces, Cromwell's only military experience was in the trained bands, the local county militia. He recruited a cavalry troop in Cambridgeshire after blocking a valuable shipment of silver plate from Cambridge colleges that was meant for the King. Cromwell and his troop then rode to, but arrived too late to take part in, the indecisive Battle of Edgehill on 23 October 1642. The troop was expanded into a full regiment in the winter of 1642–43, making up part of the Eastern Association under the Earl of Manchester. Cromwell gained experience in a number of successful actions in East Anglia in 1643, notably at the Battle of Gainsborough on 28 July. He was subsequently appointed governor of the Isle of Ely and a colonel in the Eastern Association.
By the time of the Battle of Marston Moor in July 1644, Cromwell had risen to the rank of lieutenant general of horse in Manchester's army. The success of his cavalry in breaking the ranks of the Royalist cavalry and then attacking their infantry from the rear at Marston Moor was a major factor in the Parliamentarian victory. Cromwell fought at the head of his troops in the battle and was slightly wounded in the neck, stepping away briefly to receive treatment during the battle but returning to help force the victory. After Cromwell's nephew was killed at Marston Moor he wrote a famous letter to his brother-in-law. Marston Moor secured the north of England for the Parliamentarians, but failed to end Royalist resistance.
The indecisive outcome of the Second Battle of Newbury in October meant that by the end of 1644 the war still showed no signs of ending. Cromwell's experience at Newbury, where Manchester had let the King's army slip out of an encircling manoeuvre, led to a serious dispute with Manchester, whom he believed to be less than enthusiastic in his conduct of the war. Manchester later accused Cromwell of recruiting men of "low birth" as officers in the army, to which he replied: "If you choose godly honest men to be captains of horse, honest men will follow them ... I would rather have a plain russet-coated captain who knows what he fights for and loves what he knows than that which you call a gentleman and is nothing else". At this time, Cromwell also fell into dispute with Major-General Lawrence Crawford, a Scottish Covenanter attached to Manchester's army, who objected to Cromwell's encouragement of unorthodox Independents and Anabaptists. He was also charged with familism by Scottish Presbyterian Samuel Rutherford in response to his letter to the House of Commons in 1645.
Partly in response to the failure to capitalise on their victory at Marston Moor, Parliament passed the Self-Denying Ordinance in early 1645. This forced members of the House of Commons and the Lords, such as Manchester, to choose between civil office and military command. All of them chose to renounce their military positions except Cromwell, whose commission was repeatedly extended and who was allowed to remain in Parliament. The Ordinance also decreed that the army be "remodelled" on a national basis, replacing the old county associations; Cromwell contributed significantly to these military reforms. In April 1645 the New Model Army finally took to the field, with Sir Thomas Fairfax in command and Cromwell as Lieutenant-General of cavalry and second-in-command.
At the critical Battle of Naseby in June 1645, the New Model Army smashed the King's major army. Cromwell led his wing with great success at Naseby, again routing the Royalist cavalry. At the Battle of Langport on 10 July, Cromwell participated in the defeat of the last sizeable Royalist field army. Naseby and Langport effectively ended the King's hopes of victory, and the subsequent Parliamentarian campaigns involved taking the remaining fortified Royalist positions in the west of England. In October 1645, Cromwell besieged and took the wealthy and formidable Catholic fortress Basing House, later to be accused of killing 100 of its 300-man Royalist garrison after its surrender. Cromwell also took part in successful sieges at Bridgwater, Sherborne, Bristol, Devizes, and Winchester, then spent the first half of 1646 mopping up resistance in Devon and Cornwall. Charles I surrendered to the Scots on 5 May 1646, effectively ending the First English Civil War. Cromwell and Fairfax took the formal surrender of the Royalists at Oxford in June 1646.
Cromwell had no formal training in military tactics, and followed the common practice of ranging his cavalry in three ranks and pressing forward, relying on impact rather than firepower. His strengths were an instinctive ability to lead and train his men, and his moral authority. In a war fought mostly by amateurs, these strengths were significant and are likely to have contributed to the discipline of his cavalry.
Cromwell introduced close-order cavalry formations, with troopers riding knee to knee; this was an innovation in England at the time, and was a major factor in his success. He kept his troops close together following skirmishes where they had gained superiority, rather than allowing them to chase opponents off the battlefield. This facilitated further engagements in short order, which allowed greater intensity and quick reaction to battle developments. This style of command was decisive at both Marston Moor and Naseby.
In February 1647 Cromwell suffered from an illness that kept him out of political life for over a month. By the time he had recovered, the Parliamentarians were split over the issue of the King. A majority in both Houses pushed for a settlement that would pay off the Scottish army, disband much of the New Model Army, and restore Charles I in return for a Presbyterian settlement of the Church. Cromwell rejected the Scottish model of Presbyterianism, which threatened to replace one authoritarian hierarchy with another. The New Model Army, radicalised by the failure of the Parliament to pay the wages it was owed, petitioned against these changes, but the Commons declared the petition unlawful. In May 1647 Cromwell was sent to the army's headquarters in Saffron Walden to negotiate with the soldiers, but failed to reach an agreement.
In June 1647, a troop of cavalry under Cornet George Joyce seized the King from Parliament's imprisonment. With the King now present, Cromwell was eager to find out what conditions the King would acquiesce to if his authority was restored. The King appeared to be willing to compromise, so Cromwell employed his son-in-law, Henry Ireton, to draw up proposals for a constitutional settlement. Proposals were drafted multiple times with different changes until finally the "Heads of Proposals" pleased Cromwell in principle and would allow for further negotiations. It was designed to check the powers of the executive, to set up regularly elected parliaments, and to restore a non-compulsory Episcopalian settlement.
Many in the army, such as the Levellers led by John Lilburne, thought this was not enough and demanded full political equality for all men, leading to tense debates in Putney during the autumn of 1647 between Fairfax, Cromwell and Ireton on the one hand, and radical Levellers like Colonel Rainsborough on the other. The Putney Debates ultimately broke up without reaching a resolution.
The failure to conclude a political agreement with the King led eventually to the outbreak of the Second English Civil War in 1648, when the King tried to regain power by force of arms. Cromwell first put down a Royalist uprising in south Wales led by Rowland Laugharne, winning back Chepstow Castle on 25 May and six days later forcing the surrender of Tenby. The castle at Carmarthen was destroyed by burning. The much stronger castle at Pembroke, however, fell only after a siege of eight weeks. Cromwell dealt leniently with the ex-Royalist soldiers, but less so with those who had previously been members of the parliamentary army, John Poyer eventually being executed in London after the drawing of lots.
Cromwell then marched north to deal with a pro-Royalist Scottish army (the Engagers) who had invaded England. At Preston, Cromwell, in sole command for the first time and with an army of 9,000, won a decisive victory against an army twice as large.
During 1648, Cromwell's letters and speeches started to become heavily based on biblical imagery, many of them meditations on the meaning of particular passages. For example, after the battle of Preston, study of Psalms 17 and 105 led him to tell Parliament that "they that are implacable and will not leave troubling the land may be speedily destroyed out of the land". A letter to Oliver St John in September 1648 urged him to read Isaiah 8, in which the kingdom falls and only the godly survive. On four occasions in letters in 1648 he referred to the story of Gideon's defeat of the Midianites at Ain Harod. These letters suggest that it was Cromwell's faith, rather than a commitment to radical politics, coupled with Parliament's decision to engage in negotiations with the King at the Treaty of Newport, that convinced him that God had spoken against both the King and Parliament as lawful authorities. For Cromwell, the army was now God's chosen instrument. The episode shows Cromwell's firm belief in "Providentialism"—that God was actively directing the affairs of the world, through the actions of "chosen people" (whom God had "provided" for such purposes). Cromwell believed, during the Civil Wars, that he was one of these people, and he interpreted victories as indications of God's approval of his actions, and defeats as signs that God was directing him in another direction.
In December 1648, in an episode that became known as Pride's Purge, a troop of soldiers headed by Colonel Thomas Pride forcibly removed from the Long Parliament all those who were not supporters of the Grandees in the New Model Army and the Independents. Thus weakened, the remaining body of MPs, known as the Rump Parliament, agreed that Charles should be tried on a charge of treason. Cromwell was still in the north of England, dealing with Royalist resistance, when these events took place, but then returned to London. On the day after Pride's Purge, he became a determined supporter of those pushing for the King's trial and execution, believing that killing Charles was the only way to end the civil wars. Cromwell approved Thomas Brook's address to the House of Commons, which justified the trial and execution of the King on the basis of the Book of Numbers, chapter 35 and particularly verse 33 ("The land cannot be cleansed of the blood that is shed therein, but by the blood of him that shed it.").
The death warrant for Charles was eventually signed by 59 of the trying court's members, including Cromwell (who was the third to sign it). Though it was not unprecedented, the execution of the King, or "regicide", was controversial, if for no other reason than the doctrine of the divine right of kings. Thus, even after a trial, it was difficult to get ordinary men to go along with it: "None of the officers charged with supervising the execution wanted to sign the order for the actual beheading, so they brought their dispute to Cromwell...Oliver seized a pen and scribbled out the order, and handed the pen to the second officer, Colonel Hacker, who stooped to sign it. The execution could now proceed." Although Fairfax conspicuously refused to sign, Charles I was executed on 30 January 1649.
After the execution of the King, a republic was declared, known as the "Commonwealth of England". The "Rump Parliament" exercised both executive and legislative powers, with a smaller Council of State also having some executive functions. Cromwell remained a member of the "Rump" and was appointed a member of the council. In the early months after the execution of Charles I, Cromwell tried but failed to unite the original "Royal Independents" led by St John and Saye and Sele, which had fractured during 1648. Cromwell had been connected to this group since before the outbreak of civil war in 1642 and had been closely associated with them during the 1640s. However, only St John was persuaded to retain his seat in Parliament. The Royalists, meanwhile, had regrouped in Ireland, having signed a treaty with the Irish known as "Confederate Catholics". In March, Cromwell was chosen by the Rump to command a campaign against them. Preparations for an invasion of Ireland occupied Cromwell in the subsequent months. In the latter part of the 1640s, Cromwell had also encountered political dissent in the "New Model Army". The "Leveller" or "Agitator" movement emphasised popular sovereignty, extended suffrage, equality before the law, and religious tolerance, sentiments expressed in the 1647 manifesto "Agreement of the People". Cromwell and the rest of the "Grandees" opposed these demands on the grounds that they gave too much freedom to the people; they believed that the vote should extend only to landowners. In the "Putney Debates" of 1647, the two groups debated these topics in the hope of forming a new constitution for England. Rebellions and mutinies followed the debates, and in 1649 the Bishopsgate mutiny resulted in the execution of Leveller Robert Lockyer by firing squad. The next month, the Banbury mutiny occurred, with similar results. Cromwell led the charge in quelling these rebellions. After quelling Leveller mutinies within the English army at Andover and Burford in May, Cromwell departed for Ireland from Bristol at the end of July.
Cromwell led a Parliamentary invasion of Ireland from 1649 to 1650. Parliament's key opposition was the military threat posed by the alliance of the Irish Confederate Catholics and English Royalists, signed in 1649. The Confederate-Royalist alliance was judged to be the biggest single threat facing the Commonwealth. However, the political situation in Ireland in 1649 was extremely fractured: there were also separate forces of Irish Catholics who were opposed to the Royalist alliance, and Protestant Royalist forces that were gradually moving towards Parliament. Cromwell said in a speech to the army Council on 23 March 1649 that "I had rather be overthrown by a Cavalierish interest than a Scotch interest; I had rather be overthrown by a Scotch interest than an Irish interest and I think of all this is the most dangerous".
Cromwell's hostility to the Irish was religious as well as political. He was passionately opposed to the Catholic Church, which he saw as denying the primacy of the Bible in favour of papal and clerical authority, and which he blamed for tyranny and the persecution of Protestants in continental Europe. Cromwell's association of Catholicism with persecution was deepened by the Irish Rebellion of 1641. This rebellion, although intended to be bloodless, was marked by massacres of English and Scottish Protestant settlers by Irish Gaels, the Old English in Ireland, and Highland Scots Catholics. These settlers had occupied land seized from the former, native Catholic owners to make way for non-native Protestants. These factors contributed to the brutality of Cromwell's military campaign in Ireland.
Parliament had planned to re-conquer Ireland since 1641 and had already sent an invasion force there in 1647. Cromwell's invasion of 1649 was much larger and, with the civil war in England over, could be regularly reinforced and re-supplied. His nine-month military campaign was brief and effective, though it did not end the war in Ireland. Before his invasion, Parliamentarian forces held only outposts in Dublin and Derry. When he departed Ireland, they occupied most of the eastern and northern parts of the country. After his landing at Dublin on 15 August 1649 (itself only recently defended from an Irish and English Royalist attack at the Battle of Rathmines), Cromwell took the fortified port towns of Drogheda and Wexford to secure logistical supply from England. At the Siege of Drogheda in September 1649, Cromwell's troops killed nearly 3,500 people after the town's capture—comprising around 2,700 Royalist soldiers and all the men in the town carrying arms, including some civilians, prisoners and Roman Catholic priests. Cromwell wrote afterwards that:
I am persuaded that this is a righteous judgment of God upon these barbarous wretches, who have imbrued their hands in so much innocent blood and that it will tend to prevent the effusion of blood for the future, which are satisfactory grounds for such actions, which otherwise cannot but work remorse and regret.
At the Siege of Wexford in October, another massacre took place under confused circumstances. While Cromwell was apparently trying to negotiate surrender terms, some of his soldiers broke into the town, killed 2,000 Irish troops and up to 1,500 civilians, and burned much of the town.
After the taking of Drogheda, Cromwell sent a column north to Ulster to secure the north of the country and went on to besiege Waterford, Kilkenny and Clonmel in Ireland's south-east. Kilkenny surrendered on terms, as did many other towns like New Ross and Carlow, but Cromwell failed to take Waterford, and at the siege of Clonmel in May 1650 he lost up to 2,000 men in abortive assaults before the town surrendered.
One of his major victories in Ireland was diplomatic rather than military. With the help of Roger Boyle, 1st Earl of Orrery, Cromwell persuaded the Protestant Royalist troops in Cork to change sides and fight with the Parliament. At this point, word reached Cromwell that Charles II (son of Charles I) had landed in Scotland from exile in France and been proclaimed King by the Covenanter regime. Cromwell therefore returned to England from Youghal on 26 May 1650 to counter this threat.
The Parliamentarian conquest of Ireland dragged on for almost three years after Cromwell's departure. The campaigns under Cromwell's successors Henry Ireton and Edmund Ludlow mostly consisted of long sieges of fortified cities and guerrilla warfare in the countryside. The last Catholic-held town, Galway, surrendered in April 1652 and the last Irish Catholic troops capitulated in April of the following year.
In the wake of the Commonwealth's conquest of the island of Ireland, the public practice of Roman Catholicism was banned and Catholic priests were killed when captured. All Catholic-owned land was confiscated under the Act for the Settlement of Ireland of 1652 and given to Scottish and English settlers, Parliament's financial creditors and Parliamentary soldiers. The remaining Catholic landowners were allocated poorer land in the province of Connacht.
The extent of Cromwell's brutality in Ireland has been strongly debated. Some historians argue that Cromwell never accepted that he was responsible for the killing of civilians in Ireland, claiming that he had acted harshly but only against those "in arms". Other historians, however, cite Cromwell's contemporary reports to London including that of 27 September 1649 in which he lists the slaying of 3,000 military personnel, followed by the phrase "and many inhabitants". In September 1649, he justified his sacking of Drogheda as revenge for the massacres of Protestant settlers in Ulster in 1641, calling the massacre "the righteous judgement of God on these barbarous wretches, who have imbrued their hands with so much innocent blood". However, Drogheda had never been held by the rebels in 1641—many of its garrison were in fact English royalists. On the other hand, the worst atrocities committed in Ireland, such as mass evictions, killings and deportation of over 50,000 men, women and children as prisoners of war and indentured servants to Bermuda and Barbados, were carried out under the command of other generals after Cromwell had left for England. Some point to his actions on entering Ireland. Cromwell demanded that no supplies were to be seized from the civilian inhabitants and that everything should be fairly purchased; "I do hereby warn...all Officers, Soldiers and others under my command not to do any wrong or violence toward Country People or any persons whatsoever, unless they be actually in arms or office with the enemy...as they shall answer to the contrary at their utmost peril."
The massacres at Drogheda and Wexford were in some ways typical of the day, especially in the context of the recently ended Thirty Years' War, although there are few comparable incidents during the Civil Wars in England or Scotland, which were fought mainly between Protestant adversaries, albeit of differing denominations. One possible comparison is Cromwell's siege of Basing House in 1645—the seat of the Marquess of Winchester, a prominent Catholic—which resulted in about 100 of the garrison of 400 being killed after being refused quarter. Contemporaries also reported civilian casualties: six Catholic priests and a woman. However, the scale of the deaths at Basing House was much smaller. Cromwell himself said of the slaughter at Drogheda in his first letter back to the Council of State: "I believe we put to the sword the whole number of the defendants. I do not think thirty of the whole number escaped with their lives." Cromwell's orders—"in the heat of the action, I forbade them to spare any that were in arms in the town"—followed a request for surrender at the start of the siege, which was refused. The military protocol of the day was that a town or garrison that rejected the chance to surrender was not entitled to quarter. The refusal of the garrison at Drogheda to do this, even after the walls had been breached, was to Cromwell justification for the massacre. Where Cromwell negotiated the surrender of fortified towns, as at Carlow, New Ross, and Clonmel, some historians argue that he respected the terms of surrender and protected the lives and property of the townspeople. At Wexford, Cromwell again began negotiations for surrender. However, the captain of Wexford Castle surrendered during the middle of the negotiations and, in the confusion, some of Cromwell's troops began indiscriminate killing and looting.
Although Cromwell's time spent on campaign in Ireland was limited, and although he did not take on executive powers until 1653, he is often the central focus of wider debates about whether, as historians such as Mark Levene and John Morrill suggest, the Commonwealth conducted a deliberate programme of ethnic cleansing in Ireland. Faced with the prospect of an Irish alliance with Charles II, Cromwell carried out a series of massacres to subdue the Irish. Then, once Cromwell had returned to England, the English Commissary-General, Henry Ireton, adopted a deliberate policy of crop burning and starvation. Total excess deaths for the entire period of the Wars of the Three Kingdoms in Ireland were estimated by Sir William Petty, the 17th-century economist, to be 600,000 out of a total Irish population of 1,400,000 in 1641. More modern estimates put the figure closer to 200,000 out of a population of 2 million.
The sieges of Drogheda and Wexford have been prominently mentioned in histories and literature up to the present day. James Joyce, for example, mentioned Drogheda in his novel "Ulysses": "What about sanctimonious Cromwell and his ironsides that put the women and children of Drogheda to the sword with the Bible text "God is love" pasted round the mouth of his cannon?" Similarly, Winston Churchill (writing 1957) described the impact of Cromwell on Anglo-Irish relations:
...upon all of these Cromwell's record was a lasting bane. By an uncompleted process of terror, by an iniquitous land settlement, by the virtual proscription of the Catholic religion, by the bloody deeds already described, he cut new gulfs between the nations and the creeds. 'Hell or Connaught' were the terms he thrust upon the native inhabitants, and they for their part, across three hundred years, have used as their keenest expression of hatred 'The Curse of Cromwell on you.' ... Upon all of us there still lies 'the curse of Cromwell'.
A key surviving statement of Cromwell's own views on the conquest of Ireland is his "Declaration of the lord lieutenant of Ireland for the undeceiving of deluded and seduced people" of January 1650. In this he was scathing about Catholicism, saying that "I shall not, where I have the power... suffer the exercise of the Mass." However, he also declared that: "as for the people, what thoughts they have in the matter of religion in their own breasts I cannot reach; but I shall think it my duty, if they walk honestly and peaceably, not to cause them in the least to suffer for the same." Private soldiers who surrendered their arms "and shall live peaceably and honestly at their several homes, they shall be permitted so to do".
In 1965 the Irish minister for lands stated that his policies were necessary to "undo the work of Cromwell"; circa 1997, Taoiseach Bertie Ahern demanded that a portrait of Cromwell be removed from a room in the Foreign Office before he began a meeting with Robin Cook.
Cromwell left Ireland in May 1650 and several months later invaded Scotland after the Scots had proclaimed Charles I's son Charles II as King. Cromwell was much less hostile to Scottish Presbyterians, some of whom had been his allies in the First English Civil War, than he was to Irish Catholics. He described the Scots as a people "fearing His [God's] name, though deceived". He made a famous appeal to the General Assembly of the Church of Scotland, urging them to see the error of the royal alliance—"I beseech you, in the bowels of Christ, think it possible you may be mistaken." The Scots' reply was robust: "would you have us to be sceptics in our religion?" This decision to negotiate with Charles II led Cromwell to believe that war was necessary.
His appeal rejected, Cromwell's veteran troops went on to invade Scotland. At first, the campaign went badly, as Cromwell's men were short of supplies and held up at fortifications manned by Scottish troops under David Leslie. Sickness began to spread in the ranks. Cromwell was on the brink of evacuating his army by sea from Dunbar. However, on 3 September 1650, unexpectedly, Cromwell smashed the main Scottish army at the Battle of Dunbar, killing 4,000 Scottish soldiers, taking another 10,000 prisoner, and then capturing the Scottish capital of Edinburgh. The victory was of such a magnitude that Cromwell called it "A high act of the Lord's Providence to us [and] one of the most signal mercies God hath done for England and His people".
The following year, Charles II and his Scottish allies made a desperate attempt to invade England and capture London while Cromwell was engaged in Scotland. Cromwell followed them south and caught them at Worcester on 3 September 1651, and his forces destroyed the last major Scottish Royalist army at the Battle of Worcester. Charles II barely escaped capture and fled to exile in France and the Netherlands, where he remained until 1660.
To fight the battle, Cromwell organised an envelopment followed by a multi-pronged coordinated attack on Worcester, his forces attacking from three directions with two rivers partitioning them. He switched his reserves from one side of the river Severn to the other and then back again. The editor of the "Great Rebellion" article of the Encyclopædia Britannica (eleventh edition) notes that Worcester was a battle of manoeuvre compared to the early Civil War Battle of Turnham Green, which the English parliamentary armies were unable to execute at the start of the war, and he suggests that it was a prototype for the Battle of Sedan (1870).
In the final stages of the Scottish campaign, Cromwell's men under George Monck sacked Dundee, killing up to 1,000 men and 140 women and children. Scotland was ruled from England during the Commonwealth and was kept under military occupation, with a line of fortifications sealing off the Highlands which had provided manpower for Royalist armies in Scotland. The northwest Highlands was the scene of another pro-Royalist uprising in 1653–55, which was put down with deployment of 6,000 English troops there. Presbyterianism was allowed to be practised as before, but the Kirk (the Scottish church) did not have the backing of the civil courts to impose its rulings, as it had previously.
Cromwell's conquest left no significant legacy of bitterness in Scotland. The rule of the Commonwealth and Protectorate was largely peaceful, apart from the Highlands. Moreover, there were no wholesale confiscations of land or property. Three out of every four Justices of the Peace in Commonwealth Scotland were Scots and the country was governed jointly by the English military authorities and a Scottish Council of State.
Cromwell was away on campaign from the middle of 1649 until 1651, and the various factions in Parliament began to fight amongst themselves with the King gone as their "common cause". Cromwell tried to galvanise the Rump into setting dates for new elections, uniting the three kingdoms under one polity, and putting in place a broad-brush, tolerant national church. However, the Rump vacillated in setting election dates; although it put in place a basic liberty of conscience, it failed to produce an alternative for tithes or to dismantle other aspects of the existing religious settlement. In frustration, in April 1653 Cromwell demanded that the Rump establish a caretaker government of 40 members drawn from the Rump and the army and then abdicate, but the Rump returned to debating its own bill for a new government. Cromwell was so angered by this that he cleared the chamber and dissolved the Parliament by force on 20 April 1653, supported by about 40 musketeers. Several accounts exist of this incident; in one, Cromwell is supposed to have said "you are no Parliament, I say you are no Parliament; I will put an end to your sitting". At least two accounts agree that he snatched up the ceremonial mace, symbol of Parliament's power, and demanded that the "bauble" be taken away. His troops were commanded by Charles Worsley, later one of his Major Generals and one of his most trusted advisors, to whom he entrusted the mace.
After the dissolution of the Rump, power passed temporarily to a council that debated what form the constitution should take. They took up the suggestion of Major-General Thomas Harrison for a "sanhedrin" of saints. Although Cromwell did not subscribe to Harrison's apocalyptic, Fifth Monarchist beliefs—which saw a sanhedrin as the starting point for Christ's rule on earth—he was attracted by the idea of an assembly made up of men chosen for their religious credentials. In his speech at the opening of the assembly on 4 July 1653, Cromwell thanked God's providence that he believed had brought England to this point and set out their divine mission: "truly God hath called you to this work by, I think, as wonderful providences as ever passed upon the sons of men in so short a time." The Nominated Assembly, sometimes known as the Parliament of Saints and more commonly and disparagingly called Barebone's Parliament after one of its members, Praise-God Barebone, was tasked with finding a permanent constitutional and religious settlement (Cromwell was invited to be a member but declined). However, the revelation that a considerably larger segment of the membership than had been believed were radical Fifth Monarchists led its members to vote to dissolve it on 12 December 1653, out of fear of what the radicals might do if they took control of the Assembly.
After the dissolution of the Barebones Parliament, John Lambert put forward a new constitution known as the Instrument of Government, closely modelled on the Heads of Proposals. It made Cromwell Lord Protector for life to undertake "the chief magistracy and the administration of government". Cromwell was sworn in as Lord Protector on 16 December 1653, with a ceremony in which he wore plain black clothing, rather than any monarchical regalia. However, from this point on Cromwell signed his name 'Oliver P', the "P" being an abbreviation for "Protector", which was similar to the style of monarchs who used an "R" to mean "Rex" or "Regina", and it soon became the norm for others to address him as "Your Highness". As Protector, he had the power to call and dissolve parliaments but was obliged under the Instrument to seek the majority vote of a Council of State. Nevertheless, Cromwell's power was buttressed by his continuing popularity among the army. As the Lord Protector he was paid £100,000 a year.
Cromwell had two key objectives as Lord Protector. The first was "healing and settling" the nation after the chaos of the civil wars and the regicide, which meant establishing a stable form for the new government to take. Although Cromwell declared to the first Protectorate Parliament that "Government by one man and a parliament is fundamental", in practice social priorities took precedence over forms of government. Such forms were, he said, "but ... dross and dung in comparison of Christ". The social priorities did not, despite the revolutionary nature of the government, include any meaningful attempt to reform the social order. Cromwell declared, "A nobleman, a gentleman, a yeoman; the distinction of these: that is a good interest of the nation, and a great one!" Small-scale reforms, such as those made to the judicial system, were outweighed by attempts to restore order to English politics. Direct taxation was reduced slightly and peace was made with the Dutch, ending the First Anglo-Dutch War.
England's overseas possessions in this period included Newfoundland, the New England Confederation, the Providence Plantation, the Virginia Colony, the Maryland Colony, and islands in the West Indies. Cromwell soon secured the submission of these and largely left them to their own affairs, intervening only to curb his fellow Puritans who were usurping control over the Maryland Colony at the Battle of the Severn, by his confirming the former Roman Catholic proprietorship and edict of tolerance there. Of all the English dominions, Virginia was the most resentful of Cromwell's rule, and Cavalier emigration there mushroomed during the Protectorate.
Cromwell famously stressed the quest to restore order in his speech to the first Protectorate Parliament at its inaugural meeting on 3 September 1654. He declared that "healing and settling" were the "great end of your meeting". However, the Parliament was quickly dominated by those pushing for more radical, properly republican reforms. After some initial gestures approving appointments previously made by Cromwell, the Parliament began to work on a radical programme of constitutional reform. Rather than opposing Parliament's bill, Cromwell dissolved the Parliament on 22 January 1655. The First Protectorate Parliament had a property franchise of £200 per annum, in real or personal property, set as the minimum a male adult had to possess before he was eligible to vote for the representatives of the counties or shires in the House of Commons. The representatives of the boroughs were elected by the burgesses (those borough residents who had the right to vote in municipal elections) and by the aldermen and councillors of the boroughs.
Cromwell's second objective was spiritual and moral reform. He aimed to restore liberty of conscience and promote both outward and inward godliness throughout England. During the early months of the Protectorate, a set of "triers" was established to assess the suitability of future parish ministers, and a related set of "ejectors" was set up to dismiss ministers and schoolmasters who were deemed unsuitable for office. The triers and the ejectors were intended to be at the vanguard of Cromwell's reform of parish worship. This second objective is also the context in which to see the constitutional experiment of the Major Generals that followed the dissolution of the first Protectorate Parliament. After a Royalist uprising in March 1655, led by Sir John Penruddock, Cromwell (influenced by Lambert) divided England into military districts ruled by army major generals who answered only to him. The 15 major generals and deputy major generals—called "godly governors"—were central not only to national security but also to Cromwell's crusade to reform the nation's morals. The generals not only supervised militia forces and security commissions, but also collected taxes and ensured support for the government in the English and Welsh provinces. Commissioners for securing the peace of the Commonwealth were appointed to work with them in every county. While a few of these commissioners were career politicians, most were zealous puritans who welcomed the major-generals with open arms and embraced their work with enthusiasm. However, the major-generals lasted less than a year. Many feared that the major-generals threatened their reform efforts and authority, and their position was further harmed by a tax proposal by Major General John Desborough to provide financial backing for their work, which the second Protectorate Parliament—instated in September 1656—voted down for fear of a permanent military state. Ultimately, however, it was Cromwell's failure to support his men, sacrificing them to his opponents, that caused their demise. Their activities between November 1655 and September 1656 had nevertheless reopened the wounds of the 1640s and deepened antipathies to the regime. Separately, in late 1654 Cromwell launched the "Western Design" armada against the Spanish West Indies, and in May 1655 captured Jamaica.
As Lord Protector, Cromwell was aware of the Jewish community's involvement in the economics of the Netherlands, now England's leading commercial rival. It was this—allied to Cromwell's tolerance of the right to private worship of those who fell outside Puritanism—that led to his encouraging Jews to return to England in 1657, over 350 years after their banishment by Edward I, in the hope that they would help speed up the recovery of the country after the disruption of the Civil Wars. There was a longer-term motive for Cromwell's decision to allow the Jews to return to England, and that was the hope that they would convert to Christianity and therefore hasten the Second Coming of Jesus Christ, ultimately based on Matthew 23:37–39 and Romans 11. At the Whitehall conference of December 1655 he quoted from St. Paul's Epistle to the Romans 10:12–15 on the need to send Christian preachers to the Jews. William Prynne the Presbyterian, in contrast to Cromwell the Congregationalist, was strongly opposed to the latter's pro-Jewish policy.
On 23 March 1657, the Protectorate signed the Treaty of Paris with Louis XIV against Spain. Cromwell pledged to supply France with 6,000 troops and warships. In accordance with the terms of the treaty, Mardyck and Dunkirk – a base for privateers and commerce raiders attacking English merchant shipping – were ceded to England.
In 1657, Cromwell was offered the crown by Parliament as part of a revised constitutional settlement, presenting him with a dilemma since he had been "instrumental" in abolishing the monarchy. Cromwell agonised for six weeks over the offer. He was attracted by the prospect of stability it held out, but in a speech on 13 April 1657 he made clear that God's providence had spoken against the office of King: "I would not seek to set up that which Providence hath destroyed and laid in the dust, and I would not build Jericho again". The reference to Jericho harks back to a previous occasion on which Cromwell had wrestled with his conscience when the news reached England of the defeat of an expedition against the Spanish-held island of Hispaniola in the West Indies in 1655—comparing himself to Achan, who had brought the Israelites defeat after bringing plunder back to camp after the capture of Jericho.
Instead, Cromwell was ceremonially re-installed as Lord Protector on 26 June 1657 at Westminster Hall, sitting upon King Edward's Chair, which was moved specially from Westminster Abbey for the occasion. The event in part echoed a coronation, using many of its symbols and regalia, such as a purple ermine-lined robe, a sword of justice and a sceptre (but not a crown or an orb). But, most notably, the office of Lord Protector was still not to become hereditary, though Cromwell was now able to nominate his own successor. Cromwell's new rights and powers were laid out in the Humble Petition and Advice, a legislative instrument which replaced the Instrument of Government. Despite failing to restore the Crown, this new constitution did set up many of the vestiges of the ancient constitution including a house of life peers (in place of the House of Lords). In the Humble Petition it was called the Other House as the Commons could not agree on a suitable name. Furthermore, Oliver Cromwell increasingly took on more of the trappings of monarchy. In particular, he created three peerages after the acceptance of the Humble Petition and Advice: Charles Howard was made Viscount Morpeth and Baron Gisland in July 1657 and Edmund Dunch was created Baron Burnell of East Wittenham in April 1658.
Cromwell is thought to have suffered from malaria and from "stone" (kidney stone disease). In 1658, he was struck by a sudden bout of malarial fever, followed directly by illness symptomatic of a urinary or kidney complaint. The Venetian ambassador wrote regular dispatches to the Doge of Venice in which he included details of Cromwell's final illness, and he was suspicious of the rapidity of Cromwell's death. The decline may have been hastened by the death of his daughter Elizabeth Claypole in August. Cromwell died at age 59 at Whitehall on Friday 3 September 1658, the anniversary of his great victories at Dunbar and Worcester. The most likely cause was septicaemia (blood poisoning) following his urinary infection. He was buried with great ceremony, with an elaborate funeral at Westminster Abbey based on that of James I; his daughter Elizabeth was also buried there.
He was succeeded as Lord Protector by his son Richard. Richard had no power base in Parliament or the Army and was forced to resign in May 1659, ending the Protectorate. There was no clear leadership from the various factions that jostled for power during the reinstated Commonwealth, so George Monck was able to march on London at the head of New Model Army regiments and restore the Long Parliament. Under Monck's watchful eye, the necessary constitutional adjustments were made so that Charles II could be invited back from exile in 1660 to be King under a restored monarchy.
Cromwell's body was exhumed from Westminster Abbey on 30 January 1661, the 12th anniversary of the execution of Charles I, and was subjected to a posthumous execution, as were the remains of Robert Blake, John Bradshaw, and Henry Ireton. (The body of Cromwell's daughter was allowed to remain buried in the Abbey.) His body was hanged in chains at Tyburn, London, and then thrown into a pit. His head was cut off and displayed on a pole outside Westminster Hall until 1685. Afterwards, it was owned by various people, including a documented sale in 1814 to Josiah Henry Wilkinson, and it was publicly exhibited several times before being buried beneath the floor of the antechapel at Sidney Sussex College, Cambridge, in 1960. The exact position was not publicly disclosed, but a plaque marks the approximate location.
Many people began to question whether the body mutilated at Tyburn and the head seen on Westminster Hall were Cromwell's. These doubts arose because it was assumed that Cromwell's body was reburied in several places between his death in September 1658 and the exhumation of January 1661, in order to protect it from vengeful royalists. The stories suggest that his bodily remains are buried in London, Cambridgeshire, Northamptonshire, or Yorkshire.
The Cromwell vault was later used as a burial place for Charles II's illegitimate descendants. In Westminster Abbey, the site of Cromwell's burial was marked during the 19th century by a floor stone in what is now the RAF Chapel reading: "The burial place of Oliver Cromwell 1658–1661".
During his lifetime, some tracts painted Cromwell as a hypocrite motivated by power. For example, "The Machiavilian Cromwell" and "The Juglers Discovered", both part of a Leveller attack on Cromwell after 1647, present him as a Machiavellian figure. John Spittlehouse presented a more positive assessment in "A Warning Piece Discharged", comparing him to Moses rescuing the English by taking them safely through the Red Sea of the civil wars. Poet John Milton called Cromwell "our chief of men" in his "Sonnet XVI".
Several biographies were published soon after Cromwell's death. An example is "The Perfect Politician", which describes how Cromwell "loved men more than books" and provides a nuanced assessment of him as an energetic campaigner for liberty of conscience who was brought down by pride and ambition. An equally nuanced but less positive assessment was published in 1667 by Edward Hyde, 1st Earl of Clarendon, in his "History of the Rebellion and Civil Wars in England". Clarendon famously declares that Cromwell "will be looked upon by posterity as a brave bad man". He argues that Cromwell's rise to power had been helped by his great spirit and energy, but also by his ruthlessness. Clarendon was not one of Cromwell's confidants, and his account was written after the Restoration of the monarchy.
During the early 18th century, Cromwell's image began to be adopted and reshaped by the Whigs as part of a wider project to give their political objectives historical legitimacy. John Toland rewrote Edmund Ludlow's "Memoirs" in order to remove the Puritan elements and replace them with a Whiggish brand of republicanism, and it presents the Cromwellian Protectorate as a military tyranny. Through Ludlow, Toland portrayed Cromwell as a despot who crushed the beginnings of democratic rule in the 1640s.
During the early 19th century, Cromwell began to be portrayed in a positive light by Romantic artists and poets. Thomas Carlyle continued this reassessment in the 1840s, publishing an annotated collection of his letters and speeches, and describing English Puritanism as "the last of all our Heroisms" while taking a negative view of his own era. By the late 19th century, Carlyle's portrayal of Cromwell had become assimilated into Whig and Liberal historiography, stressing the centrality of puritan morality and earnestness. Oxford civil war historian Samuel Rawson Gardiner concluded that "the man—it is ever so with the noblest—was greater than his work". Gardiner stressed Cromwell's dynamic and mercurial character, and his role in dismantling absolute monarchy, while underestimating Cromwell's religious conviction. Cromwell's foreign policy also provided an attractive forerunner of Victorian imperial expansion, with Gardiner stressing his "constancy of effort to make England great by land and sea". Calvin Coolidge described Cromwell as a brilliant statesman who "dared to oppose the tyranny of the kings."
During the first half of the 20th century, Cromwell's reputation was often influenced by the rise of fascism in Germany and Italy. Harvard historian Wilbur Cortez Abbott, for example, devoted much of his career to compiling and editing a multi-volume collection of Cromwell's letters and speeches, published between 1937 and 1947, and argued that Cromwell was a proto-fascist. However, subsequent historians such as John Morrill have criticised both Abbott's interpretation of Cromwell and his editorial approach.
Late 20th-century historians re-examined the nature of Cromwell's faith and of his authoritarian regime. Austin Woolrych explored the issue of "dictatorship" in depth, arguing that Cromwell was subject to two conflicting forces: his obligation to the army and his desire to achieve a lasting settlement by winning back the confidence of the nation as a whole. He argued that the dictatorial elements of Cromwell's rule stemmed less from its military origin or the participation of army officers in civil government than from his constant commitment to the interest of the people of God and his conviction that suppressing vice and encouraging virtue constituted the chief end of government. Historians such as John Morrill, Blair Worden, and J. C. Davis have developed this theme, revealing the extent to which Cromwell's writing and speeches are suffused with biblical references, and arguing that his radical actions were driven by his zeal for godly reformation.
In 1776, one of the first ships commissioned to serve in the American Continental Navy during the American Revolutionary War was named "Oliver Cromwell".
The 19th-century engineer Sir Richard Tangye was a noted Cromwell enthusiast and collector of Cromwell manuscripts and memorabilia. His collection included many rare manuscripts and printed books, medals, paintings, objets d'art, and a bizarre assemblage of "relics", including Cromwell's Bible, button, coffin plate, death mask, and funeral escutcheon. On Tangye's death, the entire collection was donated to the Museum of London, where it can still be seen.
In 1875, a statue of Cromwell by Matthew Noble was erected in Manchester outside Manchester Cathedral, a gift to the city from Mrs Abel Heywood in memory of her first husband. It was the first large-scale statue to be erected in the open in England, and was a realistic likeness based on the painting by Peter Lely, showing Cromwell in battledress with drawn sword and leather body armour. It was unpopular with local Conservatives and with the city's large Irish immigrant population. When Queen Victoria was invited to open the new Manchester Town Hall, she allegedly consented on the condition that the statue be removed; the statue remained, Victoria declined, and the town hall was opened by the Lord Mayor. During the 1980s, the statue was relocated outside Wythenshawe Hall, which had been occupied by Cromwell's troops.
During the 1890s, plans to erect a statue of Cromwell outside Parliament proved controversial. Pressure from the Irish Nationalist Party forced the withdrawal of a motion to seek public funding for the project; the statue was eventually erected, but it had to be funded privately by Lord Rosebery.
Cromwell controversy continued into the 20th century. As First Lord of the Admiralty before World War I, Winston Churchill twice suggested naming a British battleship HMS "Oliver Cromwell". The suggestion was vetoed by King George V, both because of his own strong feelings and because he felt it unwise to give such a name to an expensive warship at a time of Irish political unrest, especially given the anger caused by the statue outside Parliament. Churchill was eventually told by the First Sea Lord, Admiral Battenberg, that the King's decision must be treated as final. The Cromwell tank was a British medium-weight tank first used in 1944, and a steam locomotive built by British Railways in 1951 was named BR Standard Class 7 70013 Oliver Cromwell.
Other public statues of Cromwell are the Statue of Oliver Cromwell, St Ives in Cambridgeshire and the Statue of Oliver Cromwell, Warrington in Cheshire. An oval plaque at Sidney Sussex College, Cambridge, refers to the end of the travels of his head and reads:
Near to
this place was buried
on 25 March 1960 the head of
OLIVER CROMWELL
Lord Protector of the Common-
wealth of England, Scotland &
Ireland, Fellow Commoner
of this College 1616-7
Otto von Bismarck
Otto Eduard Leopold, Prince of Bismarck, Duke of Lauenburg (born von Bismarck-Schönhausen; 1 April 1815 – 30 July 1898), known as Otto von Bismarck, was a conservative German statesman who masterminded the unification of Germany in 1871 and served as its first chancellor until 1890, in which capacity he dominated European affairs for two decades. He had previously been Minister President of Prussia (1862–1890) and Chancellor of the North German Confederation (1867–1871). He provoked three short, decisive wars against Denmark, Austria, and France. Following the victory against Austria, he abolished the supranational German Confederation and instead formed the North German Confederation as the first German national state, aligning the smaller North German states behind Prussia. Receiving the support of the independent South German states in the Confederation's defeat of France, he formed the German Empire (which excluded Austria) and united Germany.
With Prussian dominance accomplished by 1871, Bismarck skillfully used balance of power diplomacy to maintain Germany's position in a peaceful Europe. To historian Eric Hobsbawm, Bismarck "remained undisputed world champion at the game of multilateral diplomatic chess for almost twenty years after 1871, [and] devoted himself exclusively, and successfully, to maintaining peace between the powers". However, his annexation of Alsace-Lorraine (Elsaß-Lothringen) gave new fuel to French nationalism and Germanophobia, helping to set the stage for the First World War. Bismarck's diplomacy of "Realpolitik" and powerful rule at home gained him the nickname the "Iron Chancellor". German unification and its rapid economic growth were the foundation of his foreign policy. He disliked colonialism but reluctantly built an overseas empire when it was demanded by both elite and mass opinion. Juggling a very complex interlocking series of conferences, negotiations and alliances, he used his diplomatic skills to maintain Germany's position.
A master of complex politics at home, Bismarck created the first welfare state in the modern world, with the goal of gaining working-class support that might otherwise go to his Socialist enemies. In the 1870s, he allied himself with the low-tariff, anti-Catholic Liberals and fought the Catholic Church in what was called the "Kulturkampf" ("culture struggle"). He lost that battle as the Catholics responded by forming the powerful German Centre Party and using universal male suffrage to gain a bloc of seats. Bismarck then reversed himself, ended the "Kulturkampf", broke with the Liberals, imposed protective tariffs, and formed a political alliance with the Centre Party to fight the Socialists. A devout Lutheran, he was loyal to his king, Wilhelm I, who argued with Bismarck but in the end supported him against the advice of his wife and his heir. While Germany's parliament was elected by universal male suffrage, it did not have much control of government policy. Bismarck distrusted democracy and ruled through a strong, well-trained bureaucracy with power in the hands of a traditional Junker elite that consisted of the landed nobility in eastern Prussia. He largely controlled domestic and foreign affairs until he was removed by the young, headstrong new Kaiser Wilhelm II. He retired to write his memoirs.
A Junker himself, Bismarck was strong-willed, outspoken and overbearing, but he could also be polite, charming and witty. Occasionally he displayed a violent temper, and he kept his power by melodramatically threatening resignation time and again, which cowed Wilhelm I. He possessed not only a long-term national and international vision but also the short-term ability to juggle complex developments. Bismarck became a hero to German nationalists; they built many monuments honoring the founder of the new "Reich". Many historians praise him as a visionary who was instrumental in uniting Germany and, once that had been accomplished, kept the peace in Europe through adroit diplomacy.
Bismarck was born in 1815 at Schönhausen, a noble family estate west of Berlin in the Prussian province of Saxony. His father, Karl Wilhelm Ferdinand von Bismarck (1771–1845), was a Junker estate owner and a former Prussian military officer; his mother, Wilhelmine Luise Mencken (1789–1839), was the well educated daughter of a senior government official in Berlin. In 1816, the family moved to its Pomeranian estate, Kniephof (now Konarzewo, Poland), northeast of Stettin (now Szczecin), in the then-Prussian province of Farther Pomerania. There Bismarck spent his childhood in a bucolic setting.
Bismarck had two siblings: his older brother Bernhard (1810–1893) and his younger sister Malwine (1827–1908). The world saw Bismarck as a typical backwoods Prussian Junker, an image that he encouraged by wearing military uniforms. However, he was well educated and cosmopolitan with a gift for conversation, and knew English, French, Italian, Polish and Russian.
Bismarck was educated at Johann Ernst Plamann's elementary school, and the Friedrich-Wilhelm and Graues Kloster secondary schools. From 1832 to 1833, he studied law at the University of Göttingen, where he was a member of the Corps Hannovera, and then enrolled at the University of Berlin (1833–35). In 1838, while stationed as an army reservist in Greifswald, he studied agriculture at the University of Greifswald. At Göttingen, Bismarck befriended the American student John Lothrop Motley. Motley, who later became an eminent historian and diplomat while remaining close to Bismarck, wrote a novel in 1839, "Morton's Hope, or the Memoirs of a Provincial", about life in a German university. In it he described Bismarck as a reckless and dashing eccentric, but also as an extremely gifted and charming young man.
Although Bismarck hoped to become a diplomat, he started his practical training as a lawyer in Aachen and Potsdam, and soon resigned, having first placed his career in jeopardy by taking unauthorized leave to pursue two English girls: first Laura Russell, niece of the Duke of Cleveland, and then Isabella Loraine-Smith, daughter of a wealthy clergyman. He also served in the army for a year and became an officer in the Landwehr (reserve), before returning to run the family estates at Schönhausen on his mother's death in his mid-twenties.
Around age 30, Bismarck formed an intense friendship with Marie von Thadden, who was newly married to one of his friends, Moritz von Blanckenburg. Under her influence, Bismarck became a Pietist Lutheran, and later recorded that at Marie's deathbed (she was dying of typhoid) he prayed for the first time since his childhood. Bismarck married Marie's cousin, the noblewoman Johanna von Puttkamer (1824–94), at Alt-Kolziglow (modern Kołczygłowy) on 28 July 1847. Their long and happy marriage produced three children: Marie (b. 1847), Herbert (b. 1849) and Wilhelm (b. 1852). Johanna was a shy, retiring and deeply religious woman—although famed for her sharp tongue in later life—and in his public life, Bismarck was sometimes accompanied by his sister Malwine "Malle" von Arnim. Bismarck soon adopted his wife's Pietism, and he remained a devout Pietist Lutheran for the rest of his life.
In 1847 Bismarck, aged thirty-two, was chosen as a representative to the newly created Prussian legislature, the "Vereinigter Landtag" (United Diet). There, he gained a reputation as a royalist and reactionary politician with a gift for stinging rhetoric; he openly advocated the idea that the monarch had a divine right to rule. His selection was arranged by the Gerlach brothers, fellow Pietist Lutherans whose ultra-conservative faction was known as the "Kreuzzeitung" after their newspaper, the "Neue Preußische Zeitung", so nicknamed because it featured an Iron Cross on its cover.
In March 1848, Prussia faced a revolution (one of the revolutions of 1848 across Europe), which completely overwhelmed King Frederick William IV. The monarch, though initially inclined to use armed forces to suppress the rebellion, ultimately declined to leave Berlin for the safety of military headquarters at Potsdam. Bismarck later recorded that there had been a "rattling of sabres in their scabbards" from Prussian officers when they learned that the King would not suppress the revolution by force. The King offered numerous concessions to the liberals: he wore the black-red-gold revolutionary colours (as seen on the flag of today's Germany), promised to promulgate a constitution, agreed that Prussia and other German states should merge into a single nation-state, and appointed a liberal, Gottfried Ludolf Camphausen, as Minister President.
Bismarck had at first tried to rouse the peasants of his estate into an army to march on Berlin in the King's name. He travelled to Berlin in disguise to offer his services, but was instead told to make himself useful by arranging food supplies for the Army from his estates in case they were needed. The King's brother, Prince Wilhelm, had fled to England; Bismarck tried to get Wilhelm's wife Augusta to place their teenage son Frederick William on the Prussian throne in Frederick William IV's place. Augusta would have none of it, and detested Bismarck thereafter, despite the fact that he later helped restore a working relationship between Wilhelm and his brother the King. Bismarck was not yet a member of the "Landtag", the lower house of the new Prussian legislature. The liberal movement perished by the end of 1848 amid internal fighting. Meanwhile, the conservatives regrouped, formed an inner group of advisers—including the Gerlach brothers, known as the "Camarilla"—around the King, and retook control of Berlin. Although a constitution was granted, its provisions fell far short of the demands of the revolutionaries.
In 1849, Bismarck was elected to the "Landtag". At this stage in his career, he opposed the unification of Germany, arguing that Prussia would lose its independence in the process. He accepted his appointment as one of Prussia's representatives at the Erfurt Parliament, an assembly of German states that met to discuss plans for union, but he only did so to oppose that body's proposals more effectively. The parliament failed to bring about unification, for it lacked the support of the two most important German states, Prussia and Austria. In September 1850, after a dispute over Hesse (the Hesse Crisis of 1850), Prussia was humiliated and forced to back down by Austria (supported by Russia) in the so-called Punctation of Olmütz; a plan for the unification of Germany under Prussian leadership, proposed by Prussia's Minister President Radowitz, was also abandoned.
In 1851, Frederick William IV appointed Bismarck as Prussia's envoy to the Diet of the German Confederation in Frankfurt. Bismarck gave up his elected seat in the "Landtag", but was appointed to the Prussian House of Lords a few years later. In Frankfurt he engaged in a battle of wills with the Austrian representative, Count Friedrich von Thun und Hohenstein, insisting on being treated as an equal and resorting to petty tactics such as imitating Thun when Thun claimed the privileges of smoking and removing his jacket in meetings. This episode was the background for an altercation in the Frankfurt chamber with Georg von Vincke that led to a duel between Bismarck and Vincke, with Carl von Bodelschwingh as an impartial party, which ended without injury.
Bismarck's eight years in Frankfurt were marked by changes in his political opinions, detailed in the numerous lengthy memoranda he sent to his ministerial superiors in Berlin. No longer under the influence of his ultraconservative Prussian friends, Bismarck became less reactionary and more pragmatic. He became convinced that to countervail Austria's newly restored influence, Prussia would have to ally herself with other German states. As a result, he grew more accepting of the notion of a united German nation. He gradually came to believe that he and his fellow conservatives had to take the lead in creating a unified nation to keep from being eclipsed. He also believed that the middle-class liberals wanted a unified Germany more than they wanted to break the grip of the traditional forces over society.
Bismarck also worked to maintain the friendship of Russia and a working relationship with Napoleon III's France, the latter being anathema to his conservative friends, the Gerlachs, but necessary both to threaten Austria and to prevent France allying with Russia. In a famous letter to Leopold von Gerlach, Bismarck wrote that it was foolish to play chess having first put 16 of the 64 squares out of bounds. This observation became ironic, as after 1871, France indeed became Germany's permanent enemy, and eventually allied with Russia against Germany in the 1890s.
Bismarck was alarmed by Prussia's isolation during the Crimean War of the mid-1850s, in which Austria sided with Britain and France against Russia; Prussia was almost not invited to the peace talks in Paris. In the Eastern Crisis of the 1870s, fear of a repetition of this turn of events would later be a factor in Bismarck's signing the Dual Alliance with Austria-Hungary in 1879.
In October 1857, Frederick William IV suffered a paralysing stroke, and his brother Wilhelm took over the Prussian government as Regent. Wilhelm was initially seen as a moderate ruler, whose friendship with liberal Britain was symbolised by the recent marriage of his son Frederick William to Queen Victoria's eldest daughter. As part of his "New Course", Wilhelm brought in new ministers, moderate conservatives known as the "Wochenblatt" after their newspaper.
The Regent soon replaced Bismarck as envoy in Frankfurt and made him Prussia's ambassador to the Russian Empire. In theory, this was a promotion, as Russia was one of Prussia's two most powerful neighbors. But Bismarck was sidelined from events in Germany and could only watch impotently as France drove Austria out of Lombardy during the Italian War of 1859. Bismarck proposed that Prussia should exploit Austria's weakness to move her frontiers "as far south as Lake Constance" on the Swiss border; instead, Prussia mobilised troops in the Rhineland to deter further French advances into Venetia.
As a further snub, the Regent, who scorned Bismarck as a "Landwehrleutnant" (reserve lieutenant), had declined to promote him to the rank of major-general, a rank that the ambassador to St. Petersburg was expected to hold. This was an important refusal as Prussia and Russia were close military allies, whose heads of state often communicated through military contacts rather than diplomatic channels. Bismarck stayed in St Petersburg for four years, during which he almost lost his leg to botched medical treatment and once again met his future adversary, the Russian Prince Gorchakov, who had been the Russian representative in Frankfurt in the early 1850s. The Regent also appointed Helmuth von Moltke as the new Chief of Staff of the Prussian Army, and Albrecht von Roon as Minister of War with the job of reorganizing the army. Over the next twelve years, Bismarck, Moltke and Roon transformed Prussia; Bismarck would later refer to this period as "the most significant of my life".
Despite his lengthy stay abroad, Bismarck was not entirely detached from German domestic affairs. He remained well-informed due to Roon, with whom Bismarck formed a lasting friendship and political alliance. In May 1862, he was sent to Paris to serve as ambassador to France, and also visited England that summer. These visits enabled him to meet and take the measure of several adversaries: Napoleon III in France, and in Britain, Prime Minister Palmerston, Foreign Secretary Earl Russell, and Conservative politician Benjamin Disraeli. Disraeli, who would become Prime Minister in the 1870s, later claimed to have said of Bismarck, "Be careful of that man—he means every word he says".
Prince Wilhelm became King of Prussia upon his brother Frederick William IV's death in 1861. The new monarch often came into conflict with the increasingly liberal Prussian Diet ("Landtag"). A crisis arose in 1862, when the Diet refused to authorize funding for a proposed re-organization of the army. The King's ministers could not convince legislators to pass the budget, and the King was unwilling to make concessions. Wilhelm threatened to abdicate in favour of his son Crown Prince Frederick William, who opposed his doing so, believing that Bismarck was the only politician capable of handling the crisis. However, Wilhelm was ambivalent about appointing a person who demanded unfettered control over foreign affairs. It was in September 1862, when the "Abgeordnetenhaus" (House of Deputies) overwhelmingly rejected the proposed budget, that Wilhelm was persuaded to recall Bismarck to Prussia on the advice of Roon. On 23 September 1862, Wilhelm appointed Bismarck Minister President and Foreign Minister.
Bismarck, Roon and Moltke took charge at a time when relations among the Great Powers (Great Britain, France, Austria and Russia) had been shattered by the Crimean War and the Italian War. In the midst of this disarray, the European balance of power was restructured with the creation of the German Empire as the dominant power in continental Europe apart from Russia. This was achieved by Bismarck's diplomacy, Roon's reorganization of the army and Moltke's military strategy.
Despite the initial distrust of the King and Crown Prince and the loathing of Queen Augusta, Bismarck soon acquired a powerful hold over the King by force of personality and powers of persuasion. Bismarck was intent on maintaining royal supremacy by ending the budget deadlock in the King's favour, even if he had to use extralegal means to do so. Under the Constitution, the budget could be passed only after the king and legislature agreed on its terms. Bismarck contended that since the Constitution did not provide for cases in which legislators failed to approve a budget, there was a "legal loophole" in the Constitution and so he could apply the previous year's budget to keep the government running. Thus, on the basis of the 1861 budget, tax collection continued for four years.
Bismarck's conflict with the legislators intensified in the coming years. Following the Alvensleben Convention of 1863, the House of Deputies resolved that it could no longer come to terms with Bismarck; in response, the King dissolved the Diet, accusing it of trying to obtain unconstitutional control over the ministry—which, under the Constitution, was responsible solely to the king. Bismarck then issued an edict restricting the freedom of the press, an edict that even gained the public opposition of the Crown Prince. Despite (or perhaps because of) his attempts to silence critics, Bismarck remained a largely unpopular politician. His supporters fared poorly in the elections of October 1863, in which a liberal coalition, whose primary member was the Progress Party, won over two-thirds of the seats. The House made repeated calls for Bismarck to be dismissed, but the King supported him, fearing that if he did dismiss the Minister President, he would most likely be succeeded by a liberal.
German unification had been a major objective of the revolutions of 1848, when representatives of the German states met in Frankfurt and drafted a constitution, creating a federal union with a national parliament to be elected by universal male suffrage. In April 1849, the Frankfurt Parliament offered the title of Emperor to King Frederick William IV. Fearing the opposition of the other German princes and the military intervention of Austria and Russia, the King renounced this popular mandate. Thus, the Frankfurt Parliament ended in failure for the German liberals.
On 30 September 1862, Bismarck made a famous speech to the Budget Committee of the Prussian Chamber of Deputies in which he expounded on the use of "iron and blood" to achieve Prussia's goals: "The great questions of the day will not be decided by speeches and majority decisions... but by iron and blood."
Prior to the 1860s, Germany consisted of a multitude of principalities loosely bound together as members of the German Confederation. Bismarck used both diplomacy and the Prussian military to achieve unification, excluding Austria from a unified Germany. This made Prussia the most powerful and dominant component of the new Germany, but also ensured that it remained an authoritarian state and not a liberal parliamentary democracy.
Bismarck faced a diplomatic crisis when King Frederick VII of Denmark died in November 1863. The succession to the duchies of Schleswig and Holstein was disputed; they were claimed by Christian IX, Frederick VII's heir as King, and also by Frederick von Augustenburg, a German duke. Prussian public opinion strongly favoured Augustenburg's claim, as the populations of Holstein and southern Schleswig were primarily German-speaking. Bismarck took an unpopular step by insisting that the territories legally belonged to the Danish monarch under the London Protocol signed a decade earlier. Nonetheless, Bismarck denounced Christian's decision to completely annex Schleswig to Denmark. With support from Austria, he issued an ultimatum for Christian IX to return Schleswig to its former status. When Denmark refused, Austria and Prussia invaded, sparking the Second Schleswig War. Denmark was ultimately forced to renounce its claim on both duchies.
At first this seemed like a victory for Augustenburg, but Bismarck soon removed him from power by making a series of unworkable demands, namely that Prussia should have control over the army and navy of the duchies. Originally, it had been proposed that the Diet of the German Confederation, in which all the states of Germany were represented, should determine the fate of the duchies; but before this scheme could be effected, Bismarck induced Austria to agree to the Gastein Convention. Under this agreement signed on 20 August 1865, Prussia received Schleswig, while Austria received Holstein. In that year Bismarck was given the title of Count ("Graf") of Bismarck-Schönhausen.
In 1866, Austria reneged on the agreement and demanded that the Diet determine the Schleswig–Holstein issue. Bismarck used this as an excuse to start a war with Austria, accusing the Austrians of violating the Gastein Convention and sending Prussian troops to occupy Holstein. Provoked, Austria called for the aid of other German states, who quickly became involved in the Austro-Prussian War. Thanks to Roon's reorganization, the Prussian army was nearly equal in numbers to the Austrian army, and with the strategic genius of Moltke it fought battles it was able to win. Bismarck had also made a secret alliance with Italy, which desired Austrian-controlled Venetia; Italy's entry into the war forced the Austrians to divide their forces.
Meanwhile, as the war began, a German radical named Ferdinand Cohen-Blind attempted to assassinate Bismarck in Berlin, shooting him five times at close range; Bismarck suffered only minor injuries. Cohen-Blind later committed suicide while in custody.
The war lasted seven weeks; Germans called it a "Blitzkrieg" ("lightning war"), a term also used in 1939. Austria had a seemingly powerful army that was allied with most of the north German and all of the south German states. Nevertheless, Prussia won the decisive Battle of Königgrätz. The King and his generals wanted to push onward, conquer Bohemia and march to Vienna, but Bismarck, worried that Prussian military luck might change or that France might intervene on Austria's side, enlisted the help of Crown Prince Frederick Wilhelm, who had opposed the war but had commanded one of the Prussian armies at Königgrätz, to dissuade his father after stormy arguments. Bismarck insisted on a "soft peace" with no annexations and no victory parades, so as to be able to quickly restore friendly relations with Austria.
As a result of the Peace of Prague (1866), the German Confederation was dissolved. Prussia annexed Schleswig, Holstein, Frankfurt, Hanover, Hesse-Kassel, and Nassau, and Austria had to promise not to intervene in German affairs. To solidify Prussian hegemony, Prussia forced the 21 states north of the River Main to join it in forming the North German Confederation in 1867. The confederation was governed by a constitution largely drafted by Bismarck. Executive power was vested in a president, a hereditary office of the kings of Prussia, who was assisted by a chancellor responsible only to him. As president, Wilhelm appointed Bismarck chancellor of the confederation. Legislation was the responsibility of the Reichstag, a popularly elected body, and the Bundesrat, an advisory body representing the states; in practice, the Bundesrat was the stronger chamber. Bismarck was the dominant figure in the new arrangement; as Foreign Minister of Prussia, he instructed the Prussian deputies to the Bundesrat.
Prussia had only a plurality (17 out of 43 seats) in the Bundesrat despite being larger than the other 21 states combined, but Bismarck could easily control the proceedings through alliances with the smaller states. This began what historians refer to as "The Misery of Austria" in which Austria served as a mere vassal to the superior Germany, a relationship that was to shape history until the end of the First World War. Bismarck had originally managed to convince smaller states like Saxony, Hesse-Kassel, and Hanover to join with Prussia against Austria, after promising them protection from foreign invasion and fair commercial laws.
Bismarck, who by now held the rank of major in the Landwehr, wore this uniform during the campaign and was at last promoted to the rank of major-general in the Landwehr cavalry after the war. Although he never personally commanded troops in the field, he usually wore a general's uniform in public for the rest of his life, as seen in numerous paintings and photographs. He was also given a cash grant by the Prussian Landtag, which he used to purchase a country estate in Varzin, now part of Poland.
Military success brought Bismarck tremendous political support in Prussia. In the elections of 1866 the liberals suffered a major defeat, losing their majority in the House of Deputies. The new, largely conservative House was on much better terms with Bismarck than previous bodies; at the Minister President's request, it retroactively approved the budgets of the past four years, which had been implemented without parliamentary consent. Bismarck correctly suspected that this indemnity bill would split the liberal opposition: while some liberals argued that constitutional government was a bright line that should not be crossed, most of them believed it would be a waste of time to oppose the bill, and supported it in hopes of winning more freedom in the future.
Jonathan Steinberg says of Bismarck's achievements to this point: The scale of Bismarck's triumph cannot be exaggerated. He alone had brought about a complete transformation of the European international order. He had told those who would listen what he intended to do, how he intended to do it, and he did it. He achieved this incredible feat without commanding an army, and without the ability to give an order to the humblest common soldier, without control of a large party, without public support, indeed, in the face of almost universal hostility, without a majority in parliament, without control of his cabinet, and without a loyal following in the bureaucracy. He no longer had the support of the powerful conservative interest groups who had helped him achieve power. The most senior diplomats in the foreign service ... were sworn enemies and he knew it. The Queen and the Royal Family hated him and the King, emotional and unreliable, would soon have his 70th birthday. ... With perfect justice, in August 1866, he punched his fist on his desk and cried "I have beaten them all! All!"
Prussia's victory over Austria increased the already existing tensions with France. The Emperor of France, Napoleon III, had tried to gain territory for France (in Belgium and on the left bank of the Rhine) as compensation for not joining the war against Prussia, and he was disappointed by the surprisingly quick outcome of the war. The opposition politician Adolphe Thiers claimed that it was France, not Austria, who had really been defeated at Königgrätz. Bismarck, for his part, did not shun a war with France, though he feared the French for a number of reasons. First, he feared that Austria, hungry for revenge, would ally with the French. Similarly, he feared that the Russian army would assist France to maintain a balance of power. Still, Bismarck believed that if the German states perceived France as the aggressor, they would then unite behind the King of Prussia. To achieve this he kept Napoleon III involved in various intrigues whereby France might gain territory from Luxembourg or Belgium; France never achieved any such gain, but it was made to look greedy and untrustworthy.
A suitable pretext for war arose in 1870, when the German Prince Leopold of Hohenzollern-Sigmaringen was offered the Spanish throne, vacant since a revolution in 1868. France pressured Leopold into withdrawing his candidacy. Not content with this, Paris demanded that Wilhelm, as head of the House of Hohenzollern, assure that no Hohenzollern would ever seek the Spanish crown again. To provoke France into declaring war on Prussia, Bismarck published the Ems Dispatch, a carefully edited version of a conversation between King Wilhelm and the French ambassador to Prussia, Count Benedetti. This conversation had been edited so that each nation felt that its ambassador had been slighted and ridiculed, thus inflaming popular sentiment on both sides in favor of war. Langer, however, argues that this episode played a minor role in causing the war.
Bismarck wrote in his "Memoirs" that he "had no doubt that a Franco-German war must take place before the construction of a united Germany could be realised." Yet he felt confident that the French army was not prepared to give battle to Germany's numerically larger forces: "If the French fight us alone they are lost." He was also convinced that the French would not be able to find allies, since "France, the victor, would be a danger to everybody – Prussia to nobody." He added, "That is our strong point."
France mobilized and declared war on 19 July. The German states saw France as the aggressor, and—swept up by nationalism and patriotic zeal—they rallied to Prussia's side and provided troops. Both of Bismarck's sons served as officers in the Prussian cavalry. The war was a great success for Prussia as the German army, controlled by Chief of Staff Moltke, won victory after victory. The major battles were all fought in one month (7 August to 1 September), and both French armies were captured at Sedan and Metz, the latter after a siege of some weeks. Napoleon III was taken prisoner at Sedan and kept in Germany for a time in case Bismarck had need of him to head the French regime; he later died in exile in England in 1873. The remainder of the war featured a siege of Paris, during which the city was "ineffectually bombarded"; the new French republican regime then tried, without success, to relieve Paris with various hastily assembled armies and increasingly bitter partisan warfare.
Bismarck quoted the first verse of "La Marseillaise", amongst other pieces, when being recorded on an Edison phonograph in 1889, in the only known recording of his voice. A biographer stated that he did so, 19 years after the war, to mock the French.
Bismarck acted immediately to secure the unification of Germany. He negotiated with representatives of the southern German states, offering special concessions if they agreed to unification. The negotiations succeeded; patriotic sentiment overwhelmed what opposition remained. While the war was in its final phase, Wilhelm I of Prussia was proclaimed German Emperor on 18 January 1871 in the Hall of Mirrors in the Château de Versailles. The new German Empire was a federation: each of its 25 constituent states (kingdoms, grand duchies, duchies, principalities, and free cities) retained some autonomy. The King of Prussia, as German Emperor, was not sovereign over the entirety of Germany; he was only "primus inter pares", or first among equals. However, he held the presidency of the Bundesrat, which met to discuss policy presented by the Chancellor, whom the emperor appointed.
In the end, France had to cede Alsace and part of Lorraine, as Moltke and his generals wanted it as a buffer. Historians debate whether Bismarck wanted this annexation or was forced into it by a wave of German public and elite opinion. France was also required to pay an indemnity; the indemnity figure was calculated, on the basis of population, as the precise equivalent of the indemnity that Napoleon I had imposed on Prussia in 1807.
Historians debate whether Bismarck had a master plan to expand the North German Confederation of 1866 to include the remaining independent German states into a single entity or simply to expand the power of the Kingdom of Prussia. They conclude that factors in addition to the strength of Bismarck's "Realpolitik" led a collection of early modern polities to reorganize political, economic, military, and diplomatic relationships in the 19th century. Reaction to Danish and French nationalism provided foci for expressions of German unity. Military successes—especially those of Prussia—in three regional wars generated enthusiasm and pride that politicians could harness to promote unification. This experience echoed the memory of mutual accomplishment in the Napoleonic Wars, particularly in the War of Liberation of 1813–14. By establishing a Germany without Austria, the political and administrative unification in 1871 at least temporarily solved the problem of dualism.
Jonathan Steinberg said of Bismarck's creation of the German Empire: the first phase of [his] great career had been concluded. The genius-statesman had transformed European politics and had unified Germany in eight and a half years. And he had done so by sheer force of personality, by his brilliance, ruthlessness, and flexibility of principle. ... [It] marked the high point of [his] career. He had achieved the impossible, and his genius and the cult of genius had no limits. ... When he returned to Berlin in March 1871, he had become immortal ...
In 1871, Bismarck was raised to the rank of "Fürst" (Prince). He was also appointed as the first Imperial Chancellor ("Reichskanzler") of the German Empire, but retained his Prussian offices, including those of Minister-President and Foreign Minister. He was also promoted to the rank of lieutenant-general, and bought a former hotel in Friedrichsruh near Hamburg, which became an estate. He also continued to serve as his own foreign minister. Because of both the imperial and the Prussian offices that he held, Bismarck had near complete control over domestic and foreign policy. The office of Minister President of Prussia was temporarily separated from that of Chancellor in 1873, when Albrecht von Roon was appointed to the former office. But by the end of the year, Roon resigned due to ill health, and Bismarck again became Minister-President.
Bismarck launched an anti-Catholic "Kulturkampf" ("culture struggle") in Prussia in 1871. This was partly motivated by Bismarck's fear that Pius IX and his successors would use papal infallibility to achieve the "papal desire for international political hegemony... The result was the Kulturkampf, which, with its largely Prussian measures, complemented by similar actions in several other German states, sought to curb the clerical danger by legislation restricting the Catholic church's political power." In May 1872 Bismarck thus attempted to reach an understanding with other European governments to manipulate future papal elections; governments should agree beforehand on unsuitable candidates, and then instruct their national cardinals to vote appropriately. The goal was to end the pope's control over the bishops in a given state, but the project went nowhere.
Bismarck accelerated the "Kulturkampf". In its course, all Prussian bishops and many priests were imprisoned or exiled. Prussia's population had greatly expanded in the 1860s and was now one-third Catholic. Bismarck believed that the pope and bishops held too much power over the German Catholics and was further concerned about the emergence of the Catholic Centre Party, organised in 1870. With support from the anticlerical National Liberal Party, which had become Bismarck's chief ally in the Reichstag, he abolished the Catholic Department of the Prussian Ministry of Culture. That left the Catholics without a voice in high circles. Moreover, in 1872, the Jesuits were expelled from Germany. In 1873, more anti-Catholic laws allowed the Prussian government to supervise the education of the Roman Catholic clergy and curtailed the disciplinary powers of the Church. In 1875, a civil ceremony was required for all weddings; hitherto, weddings in churches had been civilly recognized.
The "Kulturkampf" also became part of Bismarck's foreign policy, as he sought to destabilize and weaken Catholic regimes, especially in Belgium and France, but he had little success.
The British ambassador Odo Russell reported to London in October 1872 that Bismarck's plans were backfiring by strengthening the ultramontane (pro-papal) position inside German Catholicism:
"The German Bishops, who were politically powerless in Germany and theologically in opposition to the Pope in Rome, have now become powerful political leaders in Germany and enthusiastic defenders of the now infallible Faith of Rome, united, disciplined, and thirsting for martyrdom, thanks to Bismarck's uncalled for antiliberal declaration of War on the freedom they had hitherto peacefully enjoyed."
The Catholics reacted by organizing themselves and strengthening the Centre Party. Bismarck, a devout pietistic Protestant, was alarmed that secularists and socialists were using the "Kulturkampf" to attack all religion. He abandoned it in 1878 to preserve his remaining political capital, since he now needed the Centre Party's votes in his new battle against socialism. Pius IX died that year, replaced by the more pragmatic Pope Leo XIII, who negotiated away most of the anti-Catholic laws. The Pope kept control of the selection of bishops, and Catholics for the most part supported unification and most of Bismarck's policies. However, they never forgot his culture war and preached solidarity so that they could mount organized resistance should it ever be resumed.
Steinberg comments: The anti-Catholic hysteria in many European countries belongs in its European setting. Bismarck's campaign was not unique in itself, but his violent temper, intolerance of opposition, and paranoia that secret forces had conspired to undermine his life's work, made it more relentless. His rage drove him to exaggerate the threat from Catholic activities and to respond with very extreme measures. ... As Odo Russell wrote to his mother, [Lady Emily Russell,] "The demonic is stronger in him than in any man I know." ... The bully, the dictator, and the "demonic" combined in him with the self-pity and the hypochondria to create a constant crisis of authority, which he exploited for his own ends. ... Opponents, friends, and subordinates all remarked on Bismarck as "demonic," a kind of uncanny, diabolic personal power over men and affairs. In these years of his greatest power, he believed that he could do anything.
In 1873, Germany and much of Europe and America entered the Long Depression, the "Gründerkrise". A downturn hit the German economy for the first time since industrial development began to surge in the 1850s. To aid faltering industries, the Chancellor abandoned free trade and established protectionist import-tariffs, which alienated the National Liberals who demanded free trade. The "Kulturkampf" and its effects had also stirred up public opinion against the party that supported it, and Bismarck used this opportunity to distance himself from the National Liberals. That marked a rapid decline in the support of the National Liberals, and by 1879 their close ties with Bismarck had all but ended. Bismarck instead returned to conservative factions, including the Centre Party, for support. He helped foster support from the conservatives by enacting several tariffs protecting German agriculture and industry from foreign competitors in 1879.
Imperial and provincial government bureaucracies attempted to Germanise the state's national minorities situated near the borders of the empire: the Danes in the North, the Francophones in the West, and the Poles in the East. As minister president of Prussia and as imperial chancellor, Bismarck "sorted people into their linguistic [and religious] 'tribes'"; he pursued a policy of hostility in particular toward the Poles, an expedient rooted in Prussian history. He "never had a Pole among his peasants" working the Bismarckian estates; it was the educated Polish bourgeoisie and revolutionaries whom he denounced from personal experience, and "because of 'them' he disliked intellectuals in politics". Bismarck's antagonism is revealed in a private letter to his sister in 1861: "Hammer the Poles until they despair of living [...] I have all the sympathy in the world for their situation, but if we want to exist we have no choice but to wipe them out: wolves are only what God made them, but we shoot them all the same when we can get at them." Later that year, the public Bismarck modified his belligerence and wrote to Prussia's foreign minister: "Every success of the Polish national movement is a defeat for Prussia, we cannot carry on the fight against this element according to the rules of civil justice, but only in accordance with the rules of war." With Polish nationalism the ever-present menace, Bismarck preferred expulsion to Germanisation.
Worried by the growth of the socialist movement, and of the Social Democratic Party (SPD) in particular, Bismarck instituted the Anti-Socialist Laws in 1878. Socialist organizations and meetings were forbidden, as was the circulation of socialist literature, and police officers could stop, search and arrest socialist party members and their leaders, a number of whom were then tried by police courts. The laws did not, however, ban the SPD itself: its candidates, protected by the German constitution, continued to contest elections as nominally independent candidates. Despite these efforts, the socialist movement steadily gained supporters and seats in the Reichstag.
Bismarck's strategy in the 1880s was to win the workers over for the conservative regime by implementing social benefits. He added accident and old-age insurance as well as a form of socialized medicine. He did not completely succeed, however. Support for the Social Democrats increased with each election.
After fifteen years of intermittent warfare in the Crimea, Germany, and France, Europe began a period of peace in 1871. With the founding of the German Empire in 1871, Bismarck emerged as a decisive figure in European history from 1871 to 1890. He retained control over Prussia as well as over the foreign and domestic policies of the new German Empire. Bismarck had built his reputation as a war-maker but changed overnight into a peacemaker. He skillfully used balance of power diplomacy to maintain Germany's position in a Europe which, despite many disputes and war scares, remained at peace. For historian Eric Hobsbawm, it was Bismarck who "remained undisputed world champion at the game of multilateral diplomatic chess for almost twenty years after 1871, [and] devoted himself exclusively, and successfully, to maintaining peace between the powers". Historian Paul Knaplund reached the same conclusion.
Bismarck's main mistake was giving in to the Army and to intense public demand in Germany for acquisition of the border provinces of Alsace and Lorraine, thereby turning France into a permanent, deeply committed enemy (see French–German enmity). Theodore Zeldin says, "Revenge and the recovery of Alsace-Lorraine became a principal object of French policy for the next forty years. That Germany was France's enemy became the basic fact of international relations." Bismarck's solution was to make France a pariah nation, encouraging royalty to ridicule its new republican status, and building complex alliances with the other major powers – Austria, Russia, and Britain – to keep France isolated diplomatically. A key element was the League of the Three Emperors, in which Bismarck brought together the rulers in Berlin, Vienna and St. Petersburg to guarantee each other's security while blocking out France; in its final form it lasted from 1881 to 1887.
Having unified his nation, Bismarck now devoted himself to promoting peace in Europe with his skills in statesmanship. He was forced to contend with French revanchism, the desire to avenge the losses of the Franco-Prussian War. Bismarck, therefore, engaged in a policy of diplomatically isolating France while maintaining cordial relations with other nations in Europe. He had little interest in naval or colonial entanglements and thus avoided discord with Great Britain. Historians emphasize that he wanted no more territorial gains after 1871, and vigorously worked to form cross-linking alliances that prevented any war in Europe from starting. By 1878 both the Liberal and Conservative spokesmen in Britain hailed him as the champion of peace in Europe. A. J. P. Taylor, a leading British diplomatic historian, concludes that, "Bismarck was an honest broker of peace; and his system of alliances compelled every Power, whatever its will, to follow a peaceful course."
Well aware that Europe was skeptical of his powerful new Reich, Bismarck turned his attention to preserving peace in Europe based on a balance of power that would allow Germany's economy to flourish. Bismarck feared that a hostile combination of Austria, France, and Russia would crush Germany. If two of them were allied, then the third would ally with Germany only if Germany conceded excessive demands. The solution was to ally with two of the three. In 1873 he formed the League of the Three Emperors ("Dreikaiserbund"), an alliance of Wilhelm, Tsar Alexander II of Russia, and Emperor Francis Joseph of Austria-Hungary. Together they would control Eastern Europe, making sure that restive ethnic groups such as the Poles were kept under control. The Balkans posed a more serious issue, and Bismarck's solution was to give Austria predominance in the western areas, and Russia in the eastern areas. This system collapsed in 1887.
In 1872, a protracted quarrel began to fester between Bismarck and Count Harry von Arnim, the imperial ambassador to France. Arnim saw himself as a rival and competitor for the chancellorship, but the rivalry escalated out of hand, and Arnim took sensitive records from embassy files at Paris to back up his case. He was formally accused of misappropriating official documents, indicted, tried and convicted, finally fleeing into exile where he died. No one again openly challenged Bismarck in foreign policy matters until his resignation.
France was Bismarck's main problem. Peaceful relations with France became impossible after 1871, when Germany annexed all of the province of Alsace and much of Lorraine. Public opinion demanded the annexation to humiliate France, and the Army wanted the more defensible frontiers. Bismarck reluctantly gave in—the French would never forget or forgive, he calculated, so the provinces might as well be taken. (That was a mistaken assumption: after about five years the French did calm down and came to consider it a minor issue.) Germany's foreign policy thereby fell into a trap with no exit. "In retrospect it is easy to see that the annexation of Alsace-Lorraine was a tragic mistake." Once the annexation took place, the only policy that made sense was trying to isolate France so that it had no strong allies. However, France complicated Berlin's plans when it grew friendly with Russia. In 1905 a German plan for an alliance with Russia fell through because Russia was too close to France.
Between 1873 and 1877, Germany repeatedly manipulated the internal affairs of France's neighbors to hurt France. Bismarck put heavy pressure on Belgium, Spain, and Italy hoping to obtain the election of liberal, anticlerical governments. His plan was to promote republicanism in France by isolating the clerical-monarchist regime of President MacMahon. He hoped that surrounding France with liberal states would help the French republicans defeat MacMahon and his reactionary supporters.
The bullying, however, almost got out of hand in mid-1875, when an editorial entitled "Krieg-in-Sicht" ("War in Sight") was published in the "Post", a Berlin newspaper close to the government. The editorial indicated that highly influential Germans were alarmed by France's rapid recovery from defeat and its announcement of an increase in the size of its army, and that there was talk of launching a preventive war against France. Bismarck denied knowing about the article ahead of time, but he certainly knew about the talk of preventive war. The editorial produced a war scare, with Britain and Russia warning that they would not tolerate a preventive war against France. Bismarck had no desire for war either, and the crisis soon blew over. It was a rare instance where Bismarck was outmaneuvered and embarrassed by his opponents, but he learned an important lesson from it. It forced him to take into account the fear and alarm that his bullying and Germany's fast-growing power were causing among its neighbors, and it reinforced his determination that Germany should work proactively to preserve the peace in Europe, rather than passively letting events take their own course and reacting to them.
Bismarck maintained good relations with Italy, although he had a personal dislike for Italians and their country. He can be seen as a marginal contributor to Italian unification. Politics surrounding the 1866 Austro-Prussian War allowed Italy to annex Venetia, which had been a "kronland" ("crown land") of the Austrian Empire since the 1815 Congress of Vienna. In addition, French mobilization for the Franco-Prussian War of 1870–1871 made it necessary for Napoleon III to withdraw his troops from Rome and the Papal States. Without these two events, Italian unification would have been a more prolonged process.
After Russia's victory over the Ottoman Empire in the Russo-Turkish War of 1877–78, Bismarck helped negotiate a settlement at the Congress of Berlin. The Treaty of Berlin revised the earlier Treaty of San Stefano, reducing the size of newly independent Bulgaria (a pro-Russian state at that time). Bismarck and other European leaders opposed the growth of Russian influence and tried to protect the integrity of the Ottoman Empire (see Eastern Question). As a result, Russo-German relations further deteriorated, with the Russian chancellor Gorchakov denouncing Bismarck for compromising his nation's victory. The relationship was additionally strained due to Germany's protectionist trade policies. Some in the German military clamored for a preemptive war with Russia; Bismarck refused, stating: "Preemptive war is like committing suicide for fear of death."
Bismarck realized that both Russia and Britain considered control of central Asia a high priority, a rivalry dubbed the "Great Game". Germany had no direct stakes there; however, its dominance of Europe was enhanced when Russian troops were based as far away from Germany as possible. Over two decades, 1871–1890, he maneuvered to help the British, hoping to force the Russians to commit more soldiers to Asia.
The League of the Three Emperors having fallen apart, Bismarck negotiated the Dual Alliance with Austria-Hungary, in which each guaranteed the other against Russian attack. He also negotiated the Triple Alliance in 1882 with Austria-Hungary and Italy, and Italy and Austria-Hungary soon reached the "Mediterranean Agreement" with Britain. Attempts to reconcile Germany and Russia did not have a lasting effect: the Three Emperors' League was re-established in 1881 but quickly fell apart, ending Russian-Austrian-Prussian solidarity, which had existed in various forms since 1813. Bismarck therefore negotiated the secret Reinsurance Treaty of 1887 with Russia, in order to prevent Franco-Russian encirclement of Germany. Both powers promised to remain neutral towards one another unless Russia attacked Austria-Hungary. However, after Bismarck's departure from office in 1890, the Treaty was not renewed, thus creating a critical problem for Germany in the event of a war.
Bismarck had opposed colonial acquisitions, arguing that the burden of obtaining, maintaining, and defending such possessions would outweigh any potential benefit. He felt that colonies did not pay for themselves, that the German formal bureaucratic system would not work well in the easy-going tropics, and that the diplomatic disputes colonies brought would distract Germany from its central interest, Europe itself. As for French designs on Morocco, Chlodwig, Prince of Hohenlohe-Schillingsfürst wrote in his memoirs that Bismarck had told him that Germany "could only be pleased if France took possession of the country" since "she would then be very occupied" and distracted from the loss of Alsace-Lorraine. However, in 1883–84 he suddenly reversed himself and overnight built a colonial empire in Africa and the South Pacific. The Berlin Conference of 1884–85 organized by Bismarck can be seen as the formalization of the Scramble for Africa.
Historians have debated the exact motive behind Bismarck's sudden and short-lived move. He was aware that public opinion had started to demand colonies for reasons of German prestige. He also wanted to undercut the anti-colonial liberals who were sponsored by the Crown Prince, who—given Wilhelm I's old age—might soon become emperor and remove Bismarck. Bismarck was influenced by Hamburg merchants and traders, his neighbors at Friedrichsruh. The establishment of the German colonial empire proceeded smoothly, starting with German New Guinea in 1884.
Other European nations, led by Britain and France, were acquiring colonies in a rapid fashion (see New Imperialism). Bismarck therefore joined in the Scramble for Africa. Germany's new colonies included Togoland (now Togo and part of Ghana), German Kamerun (now Cameroon and part of Nigeria), German East Africa (now Rwanda, Burundi, and the mainland part of Tanzania), and German South-West Africa (now Namibia). The Berlin Conference (1884–85) established regulations for the acquisition of African colonies; in particular, it protected free trade in certain parts of the Congo basin. Germany also acquired colonies in the Pacific, such as German New Guinea.
Hans-Ulrich Wehler argues that Bismarck's imperialistic policies were based on internal political and economic forces; they were not his response to external pressure. At first Bismarck promoted the liberal goals of free-trade commercial expansionism in order to maintain economic growth and social stability, as well as to preserve the social and political power structure. However, he changed course, broke with the liberals, and adopted tariffs to win Catholic support and shore up his political base. Germany's imperialism in the 1880s thus derived less from strength than it represented Bismarck's solution to unstable industrialization. Protectionism made for unity at a time when class conflict was rising. Wehler says the chancellor's ultimate goal was to strengthen traditional social and power structures and avoid a major war.
In February 1888, during a Bulgarian crisis, Bismarck addressed the Reichstag on the dangers of a European war, warning that Bulgaria was not an object of sufficient importance to plunge Europe into a war whose end no one could foresee.
Bismarck also repeated his emphatic warning against any German military involvement in Balkan disputes, which he declared were not worth "the healthy bones of a single Pomeranian musketeer". Bismarck had first made this famous comment to the Reichstag in December 1876, when the Balkan revolts against the Ottoman Empire threatened to extend to a war between Austria and Russia.
A leading diplomatic historian of the era, William L. Langer, sums up Bismarck's two decades as Chancellor:
Whatever else may be said of the intricate alliance system evolved by the German Chancellor, it must be admitted that it worked and that it tided Europe over a period of several critical years without a rupture... there was, as Bismarck himself said, a premium upon the maintenance of peace.
Langer concludes:
His had been a great career, beginning with three wars in eight years and ending with a period of 20 years during which he worked for the peace of Europe, despite countless opportunities to embark on further enterprises with more than even chance of success... No other statesman of his standing had ever before shown the same great moderation and sound political sense of the possible and desirable... Bismarck at least deserves full credit for having steered European politics through this dangerous transitional period without serious conflict between the great powers.
In domestic policy, Bismarck pursued a conservative state-building strategy designed to make ordinary Germans—not just his own Junker elite—more loyal to throne and empire, implementing the modern welfare state in Germany in the 1880s. According to Kees van Kersbergen and Barbara Vis, his strategy was to grant social rights in order to enhance the integration of a hierarchical society, to forge a bond between workers and the state so as to strengthen the latter, and to provide a countervailing power against the modernist forces of liberalism and socialism.
Bismarck worked closely with large industry and aimed to stimulate German economic growth by giving workers greater security. A secondary concern was trumping the Socialists, who had no welfare proposals of their own and opposed Bismarck's. Bismarck especially listened to Hermann Wagener and Theodor Lohmann, advisers who persuaded him to give workers a corporate status in the legal and political structures of the new German state. In March 1884, Bismarck declared that the real grievance of the worker was the insecurity of his existence: he could not be sure that he would always have work or always be healthy, and he foresaw that he would one day be old and unfit to work.
Bismarck's idea was to implement welfare programs that were acceptable to conservatives without any socialistic aspects. He was dubious about laws protecting workers at the workplace, such as safe working conditions, limitation of work hours, and the regulation of women's and child labor. He believed that such regulation would force workers and employers to reduce work and production and thus harm the economy. Bismarck opened debate on the subject in November 1881 in the Imperial Message to the Reichstag, using the term "practical Christianity" to describe his program. Bismarck's program centred squarely on insurance programs designed to increase productivity, and focus the political attentions of German workers on supporting the Junkers' government. The program included sickness insurance, accident insurance, disability insurance, and a retirement pension, none of which were then in existence to any great degree.
Based on Bismarck's message, the Reichstag took up three bills dealing with the concepts of accident and sickness insurance. The subjects of retirement pensions and disability insurance were placed on the back-burner for the time being. The social legislation implemented by Bismarck in the 1880s played a key role in the sharp, rapid decline of German emigration to America. Young men considering emigration looked not only at the gap between higher hourly "direct wages" in the United States and Germany but also at the differential in "indirect wages", that is, social benefits, which favored staying in Germany. Many of these young men went instead to German industrial cities, so that Bismarck's insurance system partly offset low wage rates in Germany and further reduced the emigration rate.
The first successful bill, passed in 1883, was the Sickness Insurance Bill. Bismarck considered the program, established to provide sickness insurance for German industrial laborers, the least important and the least politically troublesome. The health service was established on a local basis, with the cost divided between employers and the employed: the employers contributed one-third, and the workers two-thirds. The minimum payments for medical treatment and sick pay for up to 13 weeks were legally fixed. The individual local health bureaus were administered by a committee elected by the members of each bureau, and this move had the unintended effect of establishing a majority representation for the workers on account of their large financial contribution. This worked to the advantage of the Social Democrats who, through heavy worker membership, achieved their first small foothold in public administration.
According to a 2019 study, the health insurance legislation caused a substantial reduction in mortality.
Bismarck's government had to submit three draft bills before it could get one passed by the Reichstag in 1884. Bismarck had originally proposed that the federal government pay a portion of the accident insurance contribution. Bismarck wanted to demonstrate the willingness of the German government to reduce the hardship experienced by the German workers so as to wean them away from supporting the various left-wing parties, most importantly the Social Democrats. The National Liberals took this program to be an expression of State Socialism, against which they were dead set. The Centre Party was afraid of the expansion of federal power at the expense of states' rights.
As a result, the only way the program could be passed at all was for the entire expense to be underwritten by the employers. To facilitate this, Bismarck arranged for the administration of this program to be placed in the hands of "Der Arbeitgeberverband in den beruflichen Korporationen" (the Organization of Employers in Occupational Corporations). This organization established central and bureaucratic insurance offices at the federal and, in some cases, the state level to administer the program, whose benefits kicked in to replace the sickness insurance program as of the 14th week. It paid for medical treatment and a pension of up to two-thirds of earned wages if the worker were fully disabled. This program was expanded in 1886 to include agricultural workers.
The old age pension program, insurance equally financed by employers and workers, was designed to provide a pension annuity for workers who reached the age of 70. Unlike the accident and sickness insurance programs, this program covered all categories of workers (industrial, agrarian, artisans and servants) from the start. Also, unlike the other two programs, the principle that the national government should contribute a portion of the underwriting cost, with the other two portions prorated accordingly, was accepted without question. The disability insurance program was intended to be used by those permanently disabled. This time, the state or province supervised the programs directly.
In 1888 Kaiser Wilhelm I died, leaving the throne to his son, Friedrich III. The new monarch was already suffering from cancer of the larynx and died after reigning for only 99 days. He was succeeded by his son, Wilhelm II, who opposed Bismarck's careful foreign policy, preferring vigorous and rapid expansion to enlarge Germany's "place in the sun".
Bismarck was sixteen years older than Friedrich; before the latter became terminally ill, Bismarck did not expect he would live to see Wilhelm ascend to the throne and thus had no strategy to deal with him. Conflicts between Wilhelm and his chancellor soon poisoned their relationship. Their final split occurred after Bismarck tried to implement far-reaching anti-socialist laws in early 1890. The "Kartell" majority in the Reichstag, including the amalgamated Conservative Party and the National Liberal Party, was willing to make most of the laws permanent. However, it was split over the law granting the police the power to expel socialist agitators from their homes, a power that had at times been used excessively against political opponents. The National Liberals refused to make this law permanent, while the Conservatives supported the bill only in its entirety, threatening to veto, and eventually vetoing, the entire bill in session because Bismarck would not agree to a modified version.
As the debate continued, Wilhelm became increasingly interested in social problems, especially the treatment of mine workers during their strike in 1889. In keeping with his active role in government, he routinely interrupted Bismarck in Council to make clear his social views. Bismarck sharply disagreed with Wilhelm's policies and worked to circumvent them. Even though Wilhelm supported the altered anti-socialist bill, Bismarck pressed him to veto the bill in its entirety. When his arguments failed to convince Wilhelm, Bismarck grew excited and agitated until he uncharacteristically blurted out his motive for wanting the bill to fail: to have the socialists agitate until a violent clash occurred that could be used as a pretext to crush them. Wilhelm countered that he was not willing to open his reign with a bloody campaign against his own subjects. The next day, after realizing his blunder, Bismarck attempted to reach a compromise with Wilhelm by agreeing to his social policy towards industrial workers, and even suggested a European council to discuss working conditions, presided over by the Emperor.
Still, a turn of events eventually led to his breaking with Wilhelm. Bismarck, feeling pressured and unappreciated by the Emperor and undermined by ambitious advisers, refused to sign a proclamation regarding the protection of workers along with Wilhelm, as was required by the German constitution. His refusal to sign was apparently to protest Wilhelm's ever increasing interference with Bismarck's previously unquestioned authority. Bismarck also worked behind the scenes to break the Continental labour council on which Wilhelm had set his heart.
The final break came as Bismarck searched for a new parliamentary majority, as his "Kartell" was voted from power as a consequence of the anti-socialist bill fiasco. The remaining forces in the Reichstag were the Catholic Centre Party and the Conservative Party. Bismarck wished to form a new block with the Centre Party and invited Ludwig Windthorst, the parliamentary leader, to discuss an alliance. That would be Bismarck's last political maneuver. Upon hearing about Windthorst's visit, Wilhelm was furious.
In a parliamentary state, the head of government depends on the confidence of the parliamentary majority and has the right to form coalitions to ensure that their policies have majority support. In Germany, however, the Chancellor depended on the confidence of the Emperor alone, and Wilhelm believed that the Emperor had the right to be informed before his ministers met with parliamentary leaders. After a heated argument in Bismarck's office, Wilhelm—to whom Bismarck had shown a letter from Tsar Alexander III describing Wilhelm as a "badly brought-up boy"—stormed out, after first ordering the rescinding of the Cabinet Order of 1851, which had forbidden Prussian Cabinet Ministers from reporting directly to the King of Prussia and had required them instead to report via the Chancellor. Bismarck, forced for the first time into a situation that he could not use to his advantage, wrote a blistering letter of resignation, decrying Wilhelm's interference in foreign and domestic policy. The letter, however, was published only after Bismarck's death.
Bismarck resigned at Wilhelm II's insistence on 18 March 1890, at the age of seventy-five. Steinberg sums up:
Thus ended the extraordinary public career of Otto von Bismarck, who ... had presided over the affairs of a state he made great and glorious. ... Now the humble posture that he had necessarily adopted in his written communications with his royal master had become his real posture. The old servant, no matter how great and how brilliant, had become in reality what he had always played as on a stage: a servant who could be dismissed at will by his Sovereign. He had defended that royal prerogative because it had allowed him to carry out his immense will; now the absolute prerogative of the Emperor became what it has always been, the prerogative of the sovereign. Having crushed his parliamentary opponents, flattened and abused his ministers, and refused to allow himself to be bound by any loyalty, Bismarck had no ally left when he needed it. It was not his cabinet nor his parliamentary majority. He had made sure that it remained the sovereign's, and so it was that he fell because of a system that he preserved and bequeathed to the unstable young Emperor.
Bismarck was succeeded as Imperial Chancellor and Minister President of Prussia by Leo von Caprivi. After his dismissal he was promoted to the rank of "Colonel-General with the Dignity of Field Marshal", so-called because the German Army did not appoint full Field Marshals in peacetime. He was also given a new title, Duke of Lauenburg, which he joked would be useful when traveling incognito. He was soon elected to the "Reichstag" as a National Liberal in Bennigsen's old and supposedly safe Hamburg seat, but he was so humiliated by being taken to a second ballot by a Social Democrat opponent that he never actually took up his seat. Bismarck entered into resentful retirement, lived in Friedrichsruh near Hamburg and sometimes on his estates at Varzin, and waited in vain to be called upon for advice and counsel. After his wife's death on 27 November 1894, his health worsened and one year later he was finally confined to a wheelchair.
In December 1897, Wilhelm visited Bismarck for the last time. Bismarck again warned him about the dangers of improvising government policy based on the intrigues of courtiers and militarists.
Subsequently, Bismarck predicted that the crash would come twenty years after his departure if things went on as they were.
The year before his death, Bismarck again predicted that a great European war would one day grow out of some foolish entanglement in the Balkans.
Bismarck spent his final years composing his memoirs ("Gedanken und Erinnerungen", or "Thoughts and Memories"), a work lauded by historians. In the memoirs Bismarck continued his feud with Wilhelm II by attacking him, by heightening the drama around every event, and by often presenting himself in a favorable light. He also published the text of the Reinsurance Treaty with Russia, a major breach of national security, for which an individual of lesser status would have been heavily prosecuted.
Bismarck's health began to fail in 1896. He was diagnosed with gangrene in his foot, but refused to accept treatment for it; as a result he had difficulty walking and was often confined to a wheelchair. By July 1898 he was permanently wheelchair-bound, had trouble breathing, and was almost constantly feverish and in pain. His health rallied momentarily on the 28th, but then sharply deteriorated over the next two days. He died just after midnight on 30 July 1898, at the age of eighty-three in Friedrichsruh, where he is entombed in the Bismarck Mausoleum. He was succeeded as Prince Bismarck by his eldest son, Herbert. Bismarck managed a posthumous snub of Wilhelm II by having his own sarcophagus inscribed with the words, "A loyal German servant of Emperor Wilhelm I".
Historians have reached a broad consensus on the content, function and importance of the image of Bismarck within Germany's political culture over the past 125 years. According to Steinberg, his achievements in 1862–71 were "the greatest diplomatic and political achievement by any leader in the last two centuries."
Bismarck's most important legacy is the unification of Germany. Germany had existed as a collection of hundreds of separate principalities and Free Cities since the formation of the Holy Roman Empire. Over the centuries various rulers had tried to unify the German states without success until Bismarck. Largely as a result of Bismarck's efforts, the various German kingdoms were united into a single country.
Following unification, Germany became one of the most powerful nations in Europe. Bismarck's astute, cautious, and pragmatic foreign policies allowed Germany to peacefully retain the powerful position into which he had brought it, while maintaining amiable diplomacy with almost all European nations. France was the main exception because of the Franco–Prussian War and Bismarck's harsh subsequent policies; France became one of Germany's most bitter enemies in Europe. Austria, too, was weakened by the creation of a German Empire, though to a much lesser extent than France. Bismarck believed that as long as Britain, Russia and Italy were assured of the peaceful nature of the German Empire, French belligerency could be contained; his diplomatic feats were undone, however, by Kaiser Wilhelm II, whose policies unified other European powers against Germany in time for World War I.
Historians stress that Bismarck's peace-oriented, "saturated continental diplomacy" was increasingly unpopular, because it consciously reined in any expansionist drives. In dramatic contrast stands the ambition of Wilhelm II's "Weltpolitik" to secure the Reich's future through expansion, leading to World War I. Likewise Bismarck's policy to deny the military a dominant voice in foreign political decision making was overturned by 1914 as Germany became an armed state.
Bismarck's psychology and personal traits have not been so favourably received by scholars. The historian Jonathan Steinberg portrays a demonic genius who was deeply vengeful, even toward his closest friends and family members:
[Bismarck's friend, German diplomat Kurd von Schlözer] began to see Bismarck as a kind of malign genius who, behind the various postures, concealed an ice-cold contempt for his fellow human beings and a methodical determination to control and ruin them. His easy chat combined blunt truths, partial revelations, and outright deceptions. His extraordinary double ability to see how groups would react and the willingness to use violence to make them obey, the capacity to read group behavior and the force to make them move to his will, gave him the chance to exercise what [Steinberg has] called his "sovereign self".
Evans says he was "intimidating and unscrupulous, playing to others' frailties, not their strengths." British historians, including Steinberg, Evans, Taylor, Palmer and Crankshaw, see Bismarck as an ambivalent figure, undoubtedly a man of great skill but who left no lasting system in place to guide successors less skilled than himself. Being a committed monarchist himself, Bismarck allowed no effective constitutional check on the power of the Emperor, thus placing a time bomb in the foundation of the Germany that he created.
Observers at the time and since have commented on Bismarck's skill as a writer. As Henry Kissinger has noted, "The man of 'blood and iron' wrote prose of extraordinary directness and lucidity, comparable in distinctiveness to Churchill's use of the English language."
Jonathan Steinberg, in his 2011 biography of Bismarck, wrote that he was: a political genius of a very unusual kind [whose success] rested on several sets of conflicting characteristics among which brutal, disarming honesty mingled with the wiles and deceits of a confidence man. He played his parts with perfect self-confidence, yet mixed them with rage, anxiety, illness, hypochondria, and irrationality. ... He used democracy when it suited him, negotiated with revolutionaries and the dangerous Ferdinand Lassalle, the socialist who might have contested his authority. He utterly dominated his cabinet ministers with a sovereign contempt and blackened their reputations as soon as he no longer needed them. He outwitted the parliamentary parties, even the strongest of them, and betrayed all those ... who had put him into power. By 1870 even his closest friends ... realized that they had helped put a demonic figure into power.
During most of his nearly thirty-year-long tenure, Bismarck held undisputed control over the government's policies. He was well supported by his friend Albrecht von Roon, the war minister, as well as by the leader of the Prussian army, Helmuth von Moltke. Bismarck's diplomatic moves relied on a victorious Prussian military, and these two men gave Bismarck the victories he needed to convince the smaller German states to join Prussia.
Bismarck took steps to silence or restrain political opposition, as evidenced by laws restricting the freedom of the press, and the anti-socialist laws. He waged a culture war ("Kulturkampf") against the Catholic Church until he realized the conservatism of the Catholics made them natural allies against the Socialists. His king Wilhelm I rarely challenged the Chancellor's decisions; on several occasions, Bismarck obtained his monarch's approval by threatening to resign. However, Wilhelm II intended to govern the country himself, making the ousting of Bismarck one of his first tasks as Kaiser. Bismarck's successors as Chancellor were much less influential, as power was concentrated in the Emperor's hands.
Immediately after he left office, citizens began to praise him and established funds to build monuments, such as the Bismarck Memorial, and towers dedicated to him. Throughout Germany the accolades were unending: several buildings were named in his honour, portraits of him were commissioned from artists such as Franz von Lenbach and C.W. Allers, and books about him became best-sellers. The first monument built in his honour was the one at Bad Kissingen, erected in 1877.
Numerous statues and memorials dot the cities, towns, and countryside of Germany, including the famous Bismarck Memorial in Berlin and numerous Bismarck towers on four continents. The only memorial depicting him as a student at Göttingen University (together with a dog, possibly his "Reichshund" Tyras) and as a member of his Corps Hannovera was re-erected in 2006 at the Rudelsburg.
The gleaming white 1906 Bismarck Monument in the city of Hamburg stands in the centre of the St. Pauli district and is the largest, and probably best-known, memorial to Bismarck worldwide. Such statues depicted him as massive, monolithic, rigid and unambiguous. Two warships were named in his honour: the SMS Bismarck of the German Imperial Navy, and the battleship Bismarck of the World War II era.
Bismarck was the most memorable figure in Germany down to the 1930s. The dominant memory was of the great hero of the 1860s, who had defeated all enemies, especially France, and unified Germany to become the most powerful military and diplomatic force in the world. There were, of course, no monuments celebrating Bismarck's devotion to the cause of European peace after 1871. But there were other German memories. His fellow Junkers were disappointed, as Prussia after 1871 was swallowed up and dominated by the German Empire. Liberal intellectuals, few in number but dominant in the universities and business houses, celebrated his achievement of the national state, a constitutional monarchy, and the rule of law, and his forestalling of revolution and marginalizing of radicalism. Social Democrats and labor leaders had always been his target, and he remained their bête noire. Catholics could not forget the Kulturkampf and remained distrustful. Especially negative were the Poles, who hated his Germanization programs.
Robert Gerwarth shows that the Bismarck myth, built up predominantly during his years of retirement and even more stridently after his death, proved a powerful rhetorical and ideological tool. The myth made him out to be a dogmatic ideologue and ardent nationalist when, in fact, he was ideologically flexible. Gerwarth argues that the constructed memory of Bismarck played a central role as an antidemocratic myth in the highly ideological battle over the past, which raged between 1918 and 1933. This myth proved to be a weapon against the Weimar Republic and exercised a destructive influence on the political culture of the first German democracy. Frankel in "Bismarck's Shadow" (2005) shows the Bismarck cult fostered and legitimized a new style of right-wing politics. It made possible the post-Bismarckian crisis of leadership, both real and perceived, that had Germans seeking the strongest possible leader and asking, "What Would Bismarck Do?" For example, Hamburg's memorial, unveiled in 1906, is considered one of the greatest expressions of Imperial Germany's Bismarck cult and an important development in the history of German memorial art. It was a product of the desire of Hamburg's patrician classes to defend their political privileges in the face of dramatic social change and attendant demands for political reform. To those who presided over its construction, the monument was also a means of asserting Hamburg's cultural aspirations and of shrugging off a reputation as a city hostile to the arts. The memorial was greeted with widespread disapproval among the working classes and did not prevent their increasing support for the Social Democrats.
A number of localities around the world have been named in Bismarck's honour. They include Bismarck, the capital of the US state of North Dakota, as well as the Bismarck Archipelago and the Bismarck Sea near the former colony of German New Guinea.
Bismarck was created "Graf von Bismarck-Schönhausen" ("Count of Bismarck-Schönhausen") in 1865; this comital title is borne by all his descendants in the male line. In 1871, he was further created "Fürst von Bismarck" ("Prince of Bismarck") and accorded the style of "Durchlaucht" ("Serene Highness"); this princely title descended only to his eldest male heirs.
In 1890, Bismarck was granted the title of "Herzog zu Lauenburg" ("Duke of Lauenburg"); the duchy was one of the territories that Prussia seized from the king of Denmark in 1864.
It was Bismarck's ambition to be assimilated into the mediatized houses of Germany. He attempted to persuade Kaiser Wilhelm I that he should be endowed with the sovereign duchy of Lauenburg, in reward for his services to the imperial family and the German empire. This was on the understanding that Bismarck would immediately restore the duchy to Prussia; all he wanted was the status and privileges of a mediatized family for himself and his descendants. This novel idea was rejected by the conservative emperor, who thought that he had already given the chancellor enough rewards. There is reason to believe that he informed Wilhelm II of his wishes. After being forced by the sovereign to resign, he received the purely honorific title of "Duke of Lauenburg", without the duchy itself and the sovereignty that would have transformed his family into a mediatized house. Bismarck regarded it as a mockery of his ambition, and he considered nothing more cruel than this action of the emperor.
Upon Bismarck's death in 1898 his dukedom, held only for his own lifetime, became extinct.
Olney Hymns
The Olney Hymns were first published in February 1779 and are the combined work of curate John Newton (1725–1807) and his poet friend, William Cowper (1731–1800). The hymns were written for use in Newton's rural parish, which was made up of relatively poor and uneducated followers. The "Olney Hymns" illustrate the potent ideology of the Evangelical movement, to which both men belonged and which was present in many communities in England at the time.
The "Olney Hymns" were very popular; by 1836 there had been 37 recorded editions, and it is likely that many other editions were printed in both Britain and America. As hymn-singing gained popularity in the nineteenth century, many (around 25) of the hymns were reproduced in other hymn-books and pamphlets. Today around six of the original 348 "Olney Hymns" regularly feature in modern church worship, the most famous of which is "Amazing Grace". Other well-known hymns include "Glorious Things of Thee Are Spoken" and "How sweet the name of Jesus sounds". "Amazing Grace" as it is popularly known was first set to the tune "New Britain" by William Walker in "The Southern Harmony and Musical Companion" in 1835.
The Buckinghamshire town from which the hymns get their name, Olney, was, at the time of first publication, a market town of about 2,000 people. Around 1,200 of these were employed in its lace-making industry. This was generally poorly paid, and Cowper is said to have described his neighbours as "the half-starved and ragged of the earth". The Olney Hymns were written primarily with these poor and under-educated people in mind.
Olney is situated near the borders of Buckinghamshire, Bedfordshire, and Northamptonshire – an area traditionally associated with religious Dissent. Dissenters were Protestants who refused to follow the rules of the Church of England after the Restoration of Charles II in 1660, and when Newton settled in Olney the town still supported two Dissenting chapels. Notable local Dissenters included John Bunyan, from Bedford, author of the "Pilgrim's Progress", and another important hymn writer, Philip Doddridge (1702–51), from Northampton. Newton's own associations with Dissenters (his mother was one) meant he was in a position to conciliate with, rather than confront, his parishioners, and he quickly achieved a reputation as a popular preacher. Within his first year at Olney a gallery was added to the church to increase its congregational capacity, and the weekly prayer-meetings were moved in 1769 to Lord Dartmouth's mansion, the Great House, to accommodate even greater numbers. "Jesus where'er thy people meet" was written for their first meeting at the Great House.
John Newton, an only child, was a self-educated sea captain who at one time captained slave ships.
Newton's conversion occurred during a violent storm at sea on 10 March 1748. He describes the event in his autobiography, "An Authentic Narrative" (published 1764), and thereafter marked the anniversary of his conversion as a day of thanksgiving. This incident revived Newton's belief in God, and despite considerable reservations from within the established church (it took him six years to be ordained into the Church of England), he achieved the position of priest in Olney in 1764. Newton's apparent influence and charisma proved beneficial to him and his parish when the local Evangelical merchant John Thornton, to whom he had sent a copy of his autobiography, offered the parish £200 per year, requesting that Newton, in part, provide for the poor. This annual contribution ceased when Newton left in 1780 to take the position of Rector at St. Mary Woolnoth in London. Newton's epitaph on a plaque in St. Mary Woolnoth, written by Newton himself, describes him as "once an infidel and libertine, a servant of slaves in Africa", who was "by the rich mercy of our Lord and Saviour Jesus Christ, preserved, restored, pardoned, and appointed to preach the faith he had long laboured to destroy".
William Cowper was the son of an Anglican clergyman, and well-educated at Westminster School. Cowper was liable to bouts of severe depression throughout his adult life, and during a period in an asylum he was counselled by his cousin, Martin Madan, an Evangelical clergyman. His new enthusiasm for Evangelicalism, his conversion, and his move to Olney in 1767 brought him into contact with John Newton. Cowper eventually became an unpaid curate at Newton's church, helping with the distribution of Thornton's funds.
Cowper is best known not just for his contribution to the "Olney Hymns", but as a poet, letter-writer, and translator: his works include "The Diverting History of John Gilpin" (1782), "The Task" (1785) and his translation of the works of Homer, published in 1791. Cowper left Olney for nearby Weston Underwood in 1786.
The "Olney Hymns" are in part an expression of Newton's and Cowper's personal religious faith and experience, and a reflection of the principal tenets of the Evangelical faith: the inherent sinfulness of man; religious conversion; atonement; activism; devotion to the Bible; God's providence; and the belief in an eternal life after death. However, the hymns were primarily written for immediate and day-to-day use in Newton's ministry at Olney. Here they were sung, or chanted, in church or at Newton's other Sunday and weekday meetings as a collective expression of worship. Hymn singing, though, was not without controversy, particularly within the Established church, the Church of England. By the 1760s hymns had become an established feature of religious devotion in the Evangelical church, where early (post-Reformation) hymns were versifications (song-like verses adapted from the original words) of the biblical text of the psalms, known as metrical psalms. In the Church of England, hymns other than metrical psalms were of questionable legality until the 1820s, as they were not explicitly sanctioned by the Book of Common Prayer. As a consequence, many church leaders reserved hymn-singing for meetings other than the main Sunday services, and for private or household devotions.
In the preface to the "Hymns" Newton says: "They should be Hymns, not Odes, if designed for public worship, and for the use of plain people". Newton also explains his two primary motives for publishing: his desire to promote "the faith and comfort of sincere Christians", and a permanent record of his friendship with Cowper. Newton is credited with suggesting that he and Cowper collaborate on a collection of hymns, ultimately drawn largely from Newton's texts accumulated over some 10 years by the time of publication. Of the 348 hymns in the original published edition of 1779, some commentaries state that Cowper wrote just 66 between 1772 and 1773, and Newton the remainder, while other sources attribute 67 to Cowper. It is known, however, that some of the hymns were written in direct response to events around the authors: "Oh for a closer walk with God", for instance, was written by Cowper in response to the serious illness then being suffered by his house companion, Mary Unwin, an illness she survived.
There is no evidence to show that either Newton or Cowper wrote any music to accompany the hymns. It is assumed that they were initially sung to any suitable tune that fitted the metre (rhythm), most probably to 16th or 17th century metrical psalm tunes. Subsequently, individual tunes have become linked to specific hymns from the Olney books. For example, the tune "Austria" (originally Haydn's "Gott erhalte Franz den Kaiser", an Austrian patriotic anthem) is associated today with the hymn "Glorious Things of Thee Are Spoken", just as "New Britain", an American folk melody believed to be Scottish or Irish in origin, has since the 1830s been associated with "Amazing Grace". This hymn's Scottish or Irish melody is pentatonic and suggests a bagpipe tune; the hymn is frequently performed on bagpipes and has become associated with that instrument.
As an expression of the many Evangelical beliefs, "Amazing Grace" serves as an example: the first stanza (verse), for instance, expresses Newton's sense of past sinfulness, as a "wretch", but also conversion, from being "lost" and "blind" to "now I see". God's providence, and Newton's sense of a close and personal relationship with God, are voiced in stanza four: "He will my shield and portion be". The belief in eternal life after death is expressed in stanzas five and six: "when this flesh and heart shall fail", "I shall possess" "A life of joy and peace", and "God, who call'd me here below, Will be for ever mine".
"Amazing Grace" was not the original title of this hymn: It was originally written as a poem entitled "Faith's Review and Expectation" and appears as Hymn 41 in Book I of the Olney Hymns with that title. The six stanza version quoted is the original, as written by Newton, but it has also appeared in longer forms where others have added verses or where verses from other hymns from the Olney books have been moved across.
The "Olney Hymns" are subdivided into three books: Book I, On Select Texts of Scripture; Book II, On occasional Subjects; and, Book III, On the Progress and Changes of the Spiritual Life. The sub-divisions reflect key Evangelical beliefs. Book I holds that the Bible is the ultimate source of religious authority, and its hymns are written to provide the believer, through simple language, with a thorough understanding of its contents. Book II's "Occasional Subjects" are those that bring understanding to the priorities of the Evangelical spiritual life. There is a section for instance on "Providences", which serves to illustrate the Evangelical belief in God's ever-present controlling hand. Book III is written to express Newton's ideas of the stages of personal spiritual awakening and salvation.
The undoubted popularity of the hymns was not simply a matter of local taste, but can be seen within the wider, developing religious climate in England. The relative rise in popularity of the Evangelical movement in the late 18th and early 19th centuries was due to a number of reasons: the onset of the Industrial Revolution and the subsequent break-up of, particularly, rural communities was an unsettling influence on a parish like Olney; Methodism had seen a significant growth in popularity in the same period; and Evangelicalism was gradually finding its way into the established Church of England. However, Newton's and Cowper's writing clearly fitted its purpose. Cowper's relatively few hymns demonstrate his poetic and creative abilities, whereas Newton's verses have been assessed by some as "wooden". Nevertheless, the principal purpose of the hymns was not a theological discussion or representation of the Bible; rather, they were written for "plain people". Newton's use of simple and repetitive metres (rhythms) and simple rhyming structures helped his congregation remember the words. The significant emphasis on 'I' within the hymns shows Newton's view that the hymns are a product of his personal experience, a feature of his belief in personal repentance and Conversion, and his desire for a personal relationship with God.
Recapitulation theory
The theory of recapitulation, also called the biogenetic law or embryological parallelism—often expressed using Ernst Haeckel's phrase "ontogeny recapitulates phylogeny"—is a historical hypothesis that the development of the embryo of an animal, from fertilization to gestation or hatching (ontogeny), goes through stages resembling or representing successive adult stages in the evolution of the animal's remote ancestors (phylogeny). It was formulated in the 1820s by Étienne Serres based on the work of Johann Friedrich Meckel, after whom it is also known as Meckel–Serres law.
Since embryos also evolve in different ways, the shortcomings of the theory had been recognized by the early 20th century, and it had been relegated to "biological mythology" by the mid-20th century.
Analogies to recapitulation theory have been formulated in other fields, including cognitive development and music criticism.
The idea of recapitulation was first formulated in biology from the 1790s onwards by the German natural philosophers Johann Friedrich Meckel and Carl Friedrich Kielmeyer, and by Étienne Serres, after which, Marcel Danesi states, it soon gained the status of a supposed biogenetic law.
The embryological theory was formalised by Serres in 1824–26, based on Meckel's work, in what became known as the "Meckel-Serres Law". This attempted to link comparative embryology with a "pattern of unification" in the organic world. It was supported by Étienne Geoffroy Saint-Hilaire, and became a prominent part of his ideas. It suggested that past transformations of life could have been through environmental causes working on the embryo, rather than on the adult as in Lamarckism. These naturalistic ideas led to disagreements with Georges Cuvier. The theory was widely supported in the Edinburgh and London schools of higher anatomy around 1830, notably by Robert Edmond Grant, but was opposed by Karl Ernst von Baer's ideas of divergence, and attacked by Richard Owen in the 1830s.
Ernst Haeckel (1834–1919) attempted to synthesize the ideas of Lamarckism and Goethe's "Naturphilosophie" with Charles Darwin's concepts. While often seen as rejecting Darwin's theory of branching evolution for a more linear Lamarckian view of progressive evolution, this is not accurate: Haeckel used the Lamarckian picture to describe the ontogenetic and phylogenetic history of individual species, but agreed with Darwin about the branching of all species from one, or a few, original ancestors. Since early in the twentieth century, Haeckel's "biogenetic law" has been refuted on many fronts.
Haeckel formulated his theory as "Ontogeny recapitulates phylogeny". The notion later became simply known as the recapitulation theory. Ontogeny is the growth (size change) and development (structure change) of an individual organism; phylogeny is the evolutionary history of a species. Haeckel claimed that the development of advanced species passes through stages represented by adult organisms of more primitive species. Otherwise put, each successive stage in the development of an individual represents one of the adult forms that appeared in its evolutionary history.
For example, Haeckel proposed that the pharyngeal grooves between the pharyngeal arches in the neck of the human embryo not only roughly resembled gill slits of fish, but directly represented an adult "fishlike" developmental stage, signifying a fishlike ancestor. Embryonic pharyngeal slits, which form in many animals when the thin branchial plates separating pharyngeal pouches and pharyngeal grooves perforate, open the pharynx to the outside. Pharyngeal arches appear in all tetrapod embryos: in mammals, the first pharyngeal arch develops into the lower jaw (Meckel's cartilage), the malleus and the stapes.
Haeckel produced several embryo drawings that often overemphasized similarities between embryos of related species. Modern biology rejects the literal and universal form of Haeckel's theory, such as its possible application to behavioural ontogeny, i.e. the psychomotor development of young animals and human children.
Haeckel's drawings misrepresented observed human embryonic development to such an extent that he attracted the opposition of several members of the scientific community, including the anatomist Wilhelm His, who had developed a rival "causal-mechanical theory" of human embryonic development. His's work specifically criticised Haeckel's methodology, arguing that the shapes of embryos were caused most immediately by mechanical pressures resulting from local differences in growth. These differences were, in turn, caused by "heredity". His compared the shapes of embryonic structures to those of rubber tubes that could be slit and bent, illustrating these comparisons with accurate drawings. Stephen Jay Gould noted in his 1977 book "Ontogeny and Phylogeny" that His's attack on Haeckel's recapitulation theory was far more fundamental than that of any empirical critic, as it effectively stated that Haeckel's "biogenetic law" was irrelevant.
Darwin proposed that embryos resembled each other since they shared a common ancestor, which presumably had a similar embryo, but that development did not necessarily recapitulate phylogeny: he saw no reason to suppose that an embryo at any stage resembled an adult of any ancestor. Darwin supposed further that embryos were subject to less intense selection pressure than adults, and had therefore changed less.
Modern evolutionary developmental biology (evo-devo) follows von Baer, rather than Darwin, in pointing to active evolution of embryonic development as a significant means of changing the morphology of adult bodies. Two of the key principles of evo-devo, namely that changes in the timing (heterochrony) and positioning (heterotopy) within the body of aspects of embryonic development would change the shape of a descendant's body compared to an ancestor's, were however first formulated by Haeckel in the 1870s. These elements of his thinking about development have thus survived, whereas his theory of recapitulation has not.
The Haeckelian form of recapitulation theory is considered defunct. Embryos do undergo a period where their morphology is strongly shaped by their phylogenetic position, rather than selective pressures, but that means only that they resemble other embryos at that stage, not ancestral adults as Haeckel had claimed. The modern view, as summarised by the University of California Museum of Paleontology, is that embryos do reflect the course of evolution, but that course is far more intricate and quirky than Haeckel claimed.
The idea that ontogeny recapitulates phylogeny has been applied to some other areas.
English philosopher Herbert Spencer was one of the most energetic proponents of evolutionary ideas to explain many phenomena. In 1861, five years before Haeckel first published on the subject, Spencer proposed a possible basis for a cultural recapitulation theory of education, claiming that if there is an order in which the human race has mastered its various kinds of knowledge, there will arise in every child an aptitude to acquire those kinds of knowledge in the same order, so that education should be a repetition of civilization in miniature.
G. Stanley Hall used Haeckel's theories as the basis for his theories of child development. His most influential work, "Adolescence: Its Psychology and Its Relations to Physiology, Anthropology, Sociology, Sex, Crime, Religion and Education" in 1904 suggested that each individual's life course recapitulated humanity's evolution from "savagery" to "civilization". Though he has influenced later childhood development theories, Hall's conception is now generally considered racist.
Developmental psychologist Jean Piaget favored a weaker version of the formula, according to which ontogeny "parallels" phylogeny because the two are subject to similar external constraints.
The Austrian pioneer of psychoanalysis, Sigmund Freud, also favored Haeckel's doctrine. He was trained as a biologist under the influence of recapitulation theory during its heyday, and retained a Lamarckian outlook with justification from the recapitulation theory. Freud also distinguished between physical and mental recapitulation, and the differences between the two would become an essential argument for his theory.
In the late 20th century, studies of symbolism and learning in the field of cultural anthropology suggested that "both biological evolution and the stages in the child's cognitive development follow much the same progression of evolutionary stages as that suggested in the archaeological record".
The musicologist Richard Taruskin in 2005 applied the phrase "ontogeny becomes phylogeny" to the process of creating and recasting music history, often to assert a perspective or argument. For example, the peculiar development of the works by modernist composer Arnold Schoenberg (here an "ontogeny") is generalized in many histories into a "phylogeny" – a historical development ("evolution") of Western music toward atonal styles of which Schoenberg is a representative. Such historiographies of the "collapse of traditional tonality" are faulted by music historians as asserting a rhetorical rather than historical point about tonality's "collapse".
Taruskin also developed a variation of the motto into the pun "ontogeny recapitulates ontology" to refute the concept of "absolute music" advancing the socio-artistic theories of the musicologist Carl Dahlhaus. Ontology is the investigation of what exactly something is, and Taruskin asserts that an art object becomes that which society and succeeding generations make of it. For example, Johann Sebastian Bach's "St. John Passion", composed in the 1720s, was appropriated by the Nazi regime in the 1930s for propaganda. Taruskin claims that the historical development of the "St John Passion" (its ontogeny) as a work with an anti-Semitic message does, in fact, inform the work's identity (its ontology), even though that was an unlikely concern of the composer. Music, or even an abstract visual artwork, cannot be truly autonomous ("absolute") because it is defined by its historical and social reception.
Ostrogoths
The Ostrogoths () were a Roman-era Germanic people. In the 5th century, they followed the Visigoths in creating one of the two great Gothic kingdoms within the Roman Empire, based upon the large Gothic populations who had settled in the Balkans in the 4th century, having crossed the Lower Danube. While the Visigoths had formed under the leadership of Alaric I, the new Ostrogothic political entity that came to rule Italy was formed in the Balkans under the influence of the Amal dynasty, the family of Theodoric the Great.
After the death of Attila and the collapse of the Hunnic empire, marked by the Battle of Nedao in 453, the Amal family began to form their kingdom in Pannonia. Emperor Zeno played these Pannonian Goths off against the Thracian Goths, but the two groups united after the death of the Thracian leader Theoderic Strabo and his son Recitach. Zeno then backed Theodoric to invade Italy and replace Odoacer there, whom he had previously supported as its king. In 493 Theodoric the Great established the Ostrogothic Kingdom of Italy, when he defeated Odoacer's forces and killed his rival at a banquet.
Following the death of Theodoric, there was a period of instability, eventually tempting the Eastern Roman Emperor Justinian to declare war on the Ostrogoths in 535 in an effort to restore the former western provinces of the Roman Empire. Initially the Byzantines were successful, but under the leadership of Totila the Goths reconquered most of the lost territory until Totila's death at the Battle of Taginae. The war lasted almost 21 years and caused enormous damage across Italy, reducing the population of the peninsula. Any remaining Ostrogoths in Italy were absorbed into the Lombards, who established a kingdom in Italy in 568.
As with other Gothic groups, the history of the peoples who made up the Ostrogoths before they reached the Roman Balkans is difficult to reconstruct in detail. However, the Ostrogoths are associated with the earlier Greuthungi. The Ostrogoths themselves were more commonly referred to simply as Goths even in the 5th century; before then they are referred to only once, in a poem by Claudian which associates them with a group of Greuthungi settled as a military unit in Phrygia. Furthermore, the 6th-century historian of the Goths Jordanes equated the Ostrogoths of his time with the Goths ruled by King Ermanaric in the 4th century, whom the Roman writer Ammianus Marcellinus had called Greuthungi and described as living between the Dniester and Don rivers. Huns and Alans attacked the Goths from the east, and large groups of Goths moved into the Roman empire, while others became subservient to the Huns.
The Ostrogoths were one of several peoples referred to more generally as Goths. The Goths appear in Roman records starting in the third century, in the regions north of the Lower Danube and the Black Sea. They competed for influence and Roman subsidies with peoples who had lived longer in the area, such as the Carpi and various Sarmatians, and they contributed men to the Roman military. Based on their Germanic language and material culture, it is believed that Gothic culture derived from cultures originally from the direction of the Vistula river to the north, in what is now Poland. By the third century the Goths were already divided into sub-groups with their own names: the Tervingi, who bordered on the Roman Empire and the Carpathian mountains, were mentioned separately on at least one occasion.
The Ostrogoths, not mentioned until later, are associated with the Greuthungi, who lived further east. The dividing line between the Tervingi and the Greuthungi was reported by Ammianus to be the Dniester River, and to the east of the Greuthungi were Alans living near the River Don.
The Ostrogoths in Italy used a Gothic language which had both spoken and written forms, and which is best attested today in the surviving translation of the Bible by Ulfilas. Goths were a minority in all the places they lived within the Roman empire, and no Gothic language or distinct Gothic ethnicity has survived. On the other hand, the Gothic language texts which the Ostrogothic kingdom helped preserve are the only eastern Germanic language with "continuous texts" surviving, and the earliest significant remnants of any Germanic language.
The first part of the word "Ostrogoth" comes from a Germanic root "*auster-" meaning eastern. According to the proposal of Wolfram, this was originally a boastful tribal name meaning "Goths of the rising sun", or "Goths glorified by the rising sun". By the 6th century, however, Jordanes, for example, believed that the Visigoths and Ostrogoths were two contrasting names simply meaning western and eastern Goths.
The nature of the divisions of the Goths before the arrival of the Huns is uncertain, but throughout all their history the Ostrogoths are only mentioned by that name very rarely, and normally in very uncertain contexts. Among other Gothic group names however, they are associated with the Greuthungi. Scholarly opinions are divided about this connection. Historian Herwig Wolfram sees these as two names for one people as will be discussed below. Peter Heather, in contrast, has written that:
Ostrogoths in the sense of the group led by Theodoric to Italy stand at the end of complex processes of fragmentation and unification involving a variety of groups - mostly but not solely Gothic it seems - and the better, more contemporary, evidence argues against the implication derived from Jordanes that Ostrogoths are Greuthungi by another name.
Some historians go much further than Heather, questioning whether we can assume any single ethnicity, even Gothic, which united the Ostrogoths before they were politically united by the Amal clan.
One dubious early mention of the Ostrogoths is found in the much later-written "Historia Augusta", but it distinguishes the Ostrogoths and Greuthungi. In the article for Emperor Claudius Gothicus (reigned 268–270), the following list of "Scythian" peoples is given who had been conquered by the emperor when he earned his title "Gothicus": "peuci trutungi austorgoti uirtingi sigy pedes celtae etiam eruli". These words are traditionally edited by modern scholars to read as well-known peoples: "Peuci, Grutungi, Austrogoti, Tervingi, Visi, Gipedes, Celtae etiam et Eruli" (emphasis added to the name of the Ostrogoths). However, this work is not considered reliable, especially for contemporary terminology.
The first record of a Gothic sub-group acting in its own name, specifically the Tervingi, was dated from 291. The Greuthungi, Vesi, and Ostrogothi are all attested no earlier than 388.
The Ostrogoths were first definitely mentioned more than one hundred years later than the Tervingi, in 399, and this is the only certain mention of this name at all before the Amals created their kingdom of Italy. A poem by Claudian describes Ostrogoths who are mixed with Greuthungi and settled together in Phrygia as a disgruntled barbarian military force, who had once fought against Rome but were now supposed to fight for it. Claudian uses the term Ostrogoth only once in the long poem; in other references to this same group he more often calls them Greuthungi or "Getic" (an older word used poetically for Goths in this period). These Goths came to be led into rebellion by Tribigild, a Roman general of Gothic background. Much later, Zosimus also described Tribigild and his rebellion against the eunuch consul Eutropius. Gainas, the aggrieved Gothic general sent to fight Tribigild, openly joined forces with him after the death of Eutropius. Zosimus believed that there had been a conspiracy between the two Goths from the beginning. It is generally believed by historians that this Phrygian settlement of Greuthungi, referred to as including Ostrogoths, was part of the Greuthungi-led force of Odotheus in 386, and not the Greuthungi who had entered the empire earlier, in 376, under Alatheus and Saphrax.
Based upon the 6th-century writer Jordanes, whose "Getica" is a history of the Ostrogothic Amal dynasty, there is a tradition of simply equating the Greuthungi with the Ostrogothi. Jordanes does not mention the Greuthungi at all by that name, but he identified the Ostrogothic kings of Italy, the Amal dynasty, as the heirs and descendants of king Ermanaric. Ermanaric was described by the contemporary writer Ammianus Marcellinus as a king of the Greuthungi; however, the family successions described by the two classical authors are completely different, and Ammianus is considered the more reliable source. Jordanes also specified that around 250 (the time of Emperor Philip the Arab, who reigned 244–249) the Ostrogoths were ruled by a king called Ostrogotha, and that they either derived their name from this "father of the Ostrogoths", or else the Ostrogoths and Visigoths got these names because they meant eastern and western Goths.
Modern historians agree that Jordanes is unreliable, especially for events long before his time, but some, such as Herwig Wolfram, defend the equation of the Greuthungi and Ostrogoths. Wolfram follows the position of Franz Altheim that the terms Tervingi and Greuthungi were older geographical identifiers used by outsiders to describe these Visigoths and Ostrogoths before they crossed the Danube, and that this terminology dropped out of use after about 400, when many Goths had moved into the Roman empire. In contrast, according to him, the terms "Vesi" and "Ostrogothi" were boastful self-designations used by the peoples themselves, and thus remained in use. In support of this, Wolfram argues that it is significant that Roman writers either used terminology contrasting Tervingi and Greuthungi, or Vesi/Visigoths and Ostrogoths, and never mixed these pairs; for example, they never contrasted Tervingi with Ostrogoths. As described above, there are two examples of Roman texts which mix Wolfram's proposed geographical and boastful terminologies as if these were separate peoples, and these are the only two early mentions of Ostrogoths before the Amals. For Wolfram, these lists are mistaken in treating these peoples as separate, but he notes that neither contrasts what he considers to be a geographical term with a boastful one. First, as mentioned above, Ostrogoths and Greuthungi were mentioned together by the poet Claudian, and secondly, all four names were used together in the unreliable "Augustan History" for the Emperor Claudius Gothicus, which has "Gruthungi, Ostrogothi, Tervingi, Vesi". As a second argument for this geographical-versus-boastful contrast, Wolfram cites Zosimus as referring to a group of "Scythians" north of the Danube after 376 who were called "Greuthungi" by the barbarians, arguing that these "can only" be Tervingi, and that this shows how the name "Greuthungi" was used only by outsiders. However, as mentioned above, these Greuthungi mentioned by Zosimus are those whom Heather and other historians equate with the rebellious Greuthungi later mentioned by Claudian in Phrygia in 399–400, who were, according to Claudian, mixed with Ostrogoths.
In any case, the older terminology for a divided Gothic people disappeared gradually after they entered the Roman Empire. The term "Visigoth" was an invention of the sixth century: Cassiodorus, a Roman in the service of Theodoric the Great, coined "Visigothi" to match "Ostrogothi", terms he understood as "western Goths" and "eastern Goths" respectively. The western-eastern division was a simplification and a literary device of sixth-century historians; political realities were more complex. Furthermore, Cassiodorus used the term "Goths" to refer only to the Ostrogoths, whom he served, and reserved the geographical term "Visigoths" for the Gallo-Hispanic Goths. This usage, however, was adopted by the Visigoths themselves in their communications with the Byzantine Empire and was still in use in the seventh century.
Other names for the Goths abounded. A "Germanic" Byzantine or Italian author referred to one of the two peoples as the "Valagothi", meaning "Roman ["walha"] Goths". In 484 the Ostrogoths had been called the "Valameriaci" (men of Valamir) because they followed Theodoric, a descendant of Valamir. This terminology survived in the Byzantine East as late as the reign of Athalaric, who was called "του Ουαλεμεριακου" ("tou Oualemeriakou") by John Malalas.
In the late 4th century, the rise of the Huns forced many of the Goths and Alans to join them, while others moved westwards and eventually into Roman territory in the Balkans. The Ostrogoths and Greuthungi, perhaps the same people, are believed to have been among the first Goths subdued by the Huns. Many Greuthungi entered the Roman Empire in 376 with Saphrax and Alatheus, and many of these Goths probably subsequently joined Alaric, contributing to the formation of the Visigothic kingdom. As discussed above, a group of Ostrogoths and Greuthungi were apparently also settled in Phrygia in the 380s by the Romans. Otherwise, historical records only begin to mention the Ostrogoths by name as the Gothic political entity which formed in the Balkans in the 5th century.
The Amal-led Ostrogothic kingdom began to coalesce around the leadership of the Amal dynasty, who had fought under Attila and later settled in Pannonia. The second major component of the Amal kingdom's population, the Thracian Goths, joined them around 483/484.
The Ostrogoths followed a path in the Balkans similar to the one the Alaric-led Visigoths had taken the century before. They gained momentum by merging two large blocks of militarized Balkan peoples, the Pannonian and Thracian Goths, including many who had fought for the Roman Empire; they had a difficult relationship with the Eastern Roman power; they were reinforced by other groups, most notably the Rugii; and then, just as Alaric's Goths had done before them, they passed from the East to the West.
The Pannonian Ostrogoths had fought alongside both Alans and Huns. Like several other tribal peoples, they became one of the many Hunnic vassals fighting in Europe, as at the Battle of Chalons in 451, where the Huns were defeated by the Roman general Aetius, accompanied by contingents of Alans and Visigoths. Jordanes' account of this battle certainly cannot be trusted, as he wrongly attributes a good portion of the victory to the Goths, when it was the Alans who formed the "backbone of Roman defences." More generally, Jordanes depicts the Amals as an ancient royal family in his "Getica", making them traditionally preeminent among the Goths in the Ukraine, both before and during the empire of Attila. Valamir, the uncle of Theodoric the Great, is even depicted as Attila's most highly valued leader along with Ardaric of the Gepids. Modern historians such as Peter Heather believe this is an exaggeration, and point out that there were at least three factions of Goths in Attila's forces.
The recorded history of the Ostrogoths as a political entity thus begins with their independence from the remains of the Hunnic Empire following the death of Attila the Hun in 453. Under Valamir they were among the peoples living in the Middle Danube region by this time, whose freedom from domination by Attila's sons was confirmed by the Gepid-led victory at the Battle of Nedao in 454. It is unclear what role the Goths played in this battle, if any; afterwards many Goths entered Roman military service, while only some began to coalesce under the leadership of Valamir and his two brothers, Vidimir and Theodemir, the father of Theodoric the Great.
These Amal-led Goths apparently first settled in the Pannonian area of Lake Balaton and Sirmium (Sremska Mitrovica), on the Roman Danube frontier. The land they acquired between Vindobona (Vienna) and Sirmium was not well-managed, a fact which rendered the Ostrogoths dependent upon Constantinople for subsidies. They came into conflict with other Middle Danubian peoples, including the Danubian Suebian kingdom of Hunimund and the Scirii, who had arrived as part of the Hunnic empire; this led to the death of Valamir and to eventual Gothic victory at the Battle of Bolia in 469, now under Theodemir. Theodemir, father of Theoderic, brought these Goths into East Roman territory in 473/474. The younger uncle of Theoderic, Vidimir, with his like-named son and some of the Pannonian Goths, headed to Italy, and his son eventually settled in Gaul.
Theodemir and Theoderic moved their Goths around the Balkans, while in the meantime the Thracian Goths remained the main focus of Gothic power. For some time they held a part of Macedonia, controlling part of the Via Egnatia between the major Roman cities of Durrës and Thessalonika. Theodemir died in Cyrrhus in 474, having made sure that Theoderic (the future "Great") was designated as successor. In the same year, the other Theoderic ("Strabo") fell out of favour with the new emperor Zeno.
The 5th-century Thracian Goths, according to Peter Heather, had probably become unified only in about the 460s, although they had probably lived in the area since the 420s, when a group of Goths under Hunnic influence, already in Pannonia, was detached and settled there. Wolfram has proposed that Theoderic Strabo was an Amal whose father had split from Theoderic's branch only as recently as the time of the Battle of Nedao.
They formed a military force loyal to Aspar, the East Roman "magister militum" ("master of soldiers") of Alanic-Gothic descent, who was killed in 471. Aspar's death saw a change in the East Roman approach to the Gothic military forces to which he had been allied. Theoderic Strabo led a revolt in 473 and was declared king of the Goths. As Wolfram noted, "His elevation as king in Thrace in 473 parallels the elevation of Odoacer in 476. [...] A Roman federate army sought to force through its demands by making its general king". He demanded to be recognized as the "sole Gothic king to whom all deserters had to be returned [...] and he further demanded the settling of his people in Thrace as well as the surrender of the institutional and material inheritance of Aspar. It took more bloodshed and devastation before the emperor formally agreed to the demands and promised in addition to pay two thousand pounds of gold each year." In return, his Goths were ready to fight for Rome, except in a campaign against the Vandal kingdom in North Africa.
With the death of Emperor Leo II and the succession of Aspar's old rival Emperor Zeno in 474, the situation for the old Gothic party in the eastern empire became increasingly difficult, and Theoderic Strabo lost the support of the emperor. The younger Theoderic, son of Theodemir, was able to benefit from this.
About 476, Zeno, having removed support from Theoderic Strabo, started to give important honours to Theoderic the son of Theodemir. He was adopted as a "son in arms", named as a friend of the emperor, and given the status of "patricius" and commander-in-chief. His kingdom, now based on the Lower Danube in Moesia, was recognized as a federate kingdom and granted (at least in theory) an annual subsidy.
However, when Zeno forced the two Gothic groups into a confrontation in 478, Theoderic Strabo made an appeal to the Amal-led Goths which made a strong case for Gothic unity and left a great impression. Strabo made new appeals to Zeno, and in reply Zeno made new offers to Theoderic the Amal instead, but these were rejected. Warfare between the Goths and imperial forces ensued, and the Amal-led Goths once again became mobile, leaving Moesia. Zeno proposed a new federate kingdom for them in Dacia, north of the Danube, but they instead attempted to take Durrës, and Roman forces were able to act quickly and effectively enough to put them in a difficult situation.
In 479–481 it was the Thracian Goths under Theoderic Strabo who kept the Romans from having their will, but in 481 Strabo died when he fell from his horse and was impaled on a lance. His son Recitac lost support and was killed by Theoderic the Amal in 484, who was thereby able to unite the two Gothic groups. Zeno was forced to conclude a treaty, and Theoderic the Amal was even consul in 484. Hostilities between him and Zeno resumed by 487.
The greatest of all Ostrogothic rulers, the future Theodoric the Great (whose Gothic name meant "leader of the people") of the Ostrogothic Kingdom ("Regnum Italiae", "Kingdom of Italy"), was born to Theodemir in or about 454, soon after the Battle of Nedao. His childhood was spent at Constantinople as a diplomatic hostage, where he was carefully educated. The early part of his life was taken up with various disputes, intrigues and wars within the Byzantine empire, in which his rival was Theodoric Strabo of the Thracian Goths, a distant relative of Theodoric the Great and son of Triarius. This older but lesser Theodoric seems to have been the chief, not the king, of that branch of the Ostrogoths that had settled within the Empire earlier. Theodoric the Great, as he is sometimes distinguished, was sometimes the friend, sometimes the enemy, of the Empire. In the former case he was clothed with various Roman titles and offices, such as patrician and consul; but in all cases alike he remained the national Ostrogothic king. Theodoric is also known for securing the support of the Catholic Church; on one occasion, he even helped resolve a disputed papal election. During his reign Theodoric, who was an Arian, allowed freedom of religion, which had not been done before. He nonetheless tried to appease the Pope and to keep his alliance with the church strong, seeing the Pope as an authority not only in the church but also over Rome itself. His ability to work well with Italy's nobles, members of the Roman Senate, and the Catholic Church all helped facilitate his acceptance as the ruler of Italy.
Theodoric sought to revive Roman culture and government, and in doing so benefited the Italian people. It was in both characters together that he set out in 488, by commission from the Byzantine emperor Zeno, to recover Italy from Odoacer. In 489 the Rugii, a Germanic tribe who dwelt in the Hungarian Plain, joined the Ostrogoths in their invasion of Italy under their leader Frideric. By 493 Ravenna was taken, and Theodoric set up his capital there. It was also at this time that Odoacer was killed by Theodoric's own hand. Ostrogothic power was fully established over Italy, Sicily, Dalmatia and the lands to the north of Italy. Around 500, Theodoric celebrated his thirtieth anniversary as King of the Ostrogoths. In order to improve their chances against the Roman Empire, the Ostrogoths and Visigoths began again to unite in what became a loose confederation of Germanic peoples. The two branches of the nation were soon brought closer together; after he was forced to become regent of the Visigothic kingdom of Toulouse, the power of Theodoric was practically extended over a large part of Gaul and over nearly the whole of the Iberian peninsula. Theodoric forged alliances with the Visigoths, Alamanni, Franks and Burgundians, some of which were accomplished through diplomatic marriages.
The Ostrogothic dominion was once again as far-reaching and splendid as in the time of Ermanaric; however, it was now of a wholly different character. The dominion of Theodoric was not a barbarian but a civilized power. His twofold position ran through everything. He was at once king of the Goths and successor, though without any imperial titles, of the Western Roman emperors. The two nations, differing in manners, language and religion, lived side by side on the soil of Italy; each was ruled according to its own law by the prince who was, in his two separate characters, the common sovereign of both. Because of his ability to foster and leverage relations among the various Germanic kingdoms, the Byzantines began to fear Theodoric's power, which led to an alliance between the Byzantine emperor and the Frankish king Clovis I, a pact designed to counteract and ultimately overthrow the Ostrogoths. In some ways Theodoric may have been overly accommodating to both the Romans and other Gothic peoples, as he placated Catholics and Arian Christians alike. Historian Herwig Wolfram suggests that Theodoric's efforts to appease Latin and barbarian cultures alike brought about the collapse of Ostrogothic predominance and also resulted in the "end of Italy as the heartland of late antiquity." All the years of creating a protective perimeter around Italy were undone by the Franco-Byzantine coalition. Theodoric was able to temporarily salvage some of his realm with the assistance of the Thuringians. Realizing that the Franks were the most significant threat to the Visigothic kingdom as well, Alaric II, who was the son-in-law of Theodoric, enlisted the aid of the Burgundians and fought against the Franks at the urging of the magnates of his tribe, but this choice proved an error, and he allegedly met his end at the hand of the Frankish king, Clovis.
A time of confusion followed the death of Alaric II, who was slain at the Battle of Vouillé. The Ostrogothic king Theodoric stepped in as the guardian of his grandson Amalaric and preserved for him all his Iberian dominion and a fragment of his Gallic one. Toulouse passed to the Franks, but the Goths kept Narbonne and its district, and Septimania, which was the last part of Gaul held by the Goths and kept the name of Gothia for many years. While Theodoric lived, the Visigothic kingdom was practically united to his own dominion. He seems also to have claimed a kind of protectorate over the Germanic powers generally, and indeed to have practically exercised it, except in the case of the Franks. From 508 to 511, under Theodoric's command, the Ostrogoths marched on Gaul as the Vandal king of Carthage and Clovis made concerted efforts to weaken his hold on the Visigoths. On the death of Theodoric in 526, the eastern and western Goths were once again divided. By the late 6th century, the Ostrogoths had lost their political identity and assimilated into other Germanic tribes.
The picture of Theodoric's rule is preserved for us in the state papers drawn up, in his name and in the names of his successors, by his Roman minister Cassiodorus. The Goths seem to have been thick on the ground in northern Italy; in the south they formed little more than garrisons. Meanwhile, the Frankish king Clovis fought protracted wars against various enemies while consolidating his rule, forming the embryonic stages of what would eventually become medieval Europe.
Absent the unifying presence of Theodoric, the Ostrogoths and Visigoths were unable to consolidate their realms despite their common Germanic kinship. The few instances where they acted together after this time are as scattered and incidental as they were before. Amalaric succeeded to the Visigothic kingdom in Iberia and Septimania. Theodoric's grandson Athalaric took on the mantle of king of the Ostrogoths for the next five years. Provence was added to the dominion of the new Ostrogothic king Athalaric, with Theodoric's daughter Amalasuntha, Athalaric's mother, named regent. Both were unable to settle disputes among Gothic elites. Theodahad, cousin of Amalasuntha and nephew of Theodoric through his sister, took over and slew them; however, the usurpation ushered in more bloodshed. Atop this infighting, the Ostrogoths faced the doctrinal challenges posed by their Arian Christianity, which both the aristocracy of Byzantium and the Papacy strongly opposed, so much so that it brought them together.
The weakness of the Ostrogothic position in Italy now showed itself, particularly when Eastern Roman Emperor Justinian I enacted a law excluding pagans, among them Arian Christians and Jews, from public employment. The Ostrogothic king Theodoric reacted by persecuting Catholics. Nonetheless, Justinian always strove to restore as much of the Western Roman Empire as he could and certainly would not pass up the opportunity. Launched on both land and sea, Justinian began his war of reconquest. In 535, he commissioned Belisarius to attack the Ostrogoths, following the success he had had in North Africa against the Vandals. It was Justinian's intention to recover Italy and Rome from the Goths. Belisarius quickly captured Sicily and then crossed into Italy, where he captured Naples and Rome in December 536. Sometime during the spring of 537, the Goths marched on Rome with upwards of 100,000 men under the leadership of Witiges and laid siege to the city, albeit unsuccessfully. Despite outnumbering the Romans by a five-to-one margin, the Goths could not dislodge Belisarius from the former western capital of the Empire. After recuperating from the siege warfare, Belisarius marched north, taking Mediolanum (Milan) and the Ostrogoth capital of Ravenna in 540.
With the attack on Ravenna, Witiges and his men were trapped in the Ostrogothic capital. Belisarius proved more capable at siege warfare than his rival Witiges had been at Rome, and the Ostrogoth ruler, who was also dealing with Frankish enemies, was forced to surrender, but not without terms. Belisarius refused to grant any concessions save unconditional surrender, even though Justinian wanted to make Witiges a vassal king in Trans-Padane Italy. This condition made for something of an impasse.
A faction of the Gothic nobility argued that their own king Witiges, who had just lost, was something of a weakling, and that they would need a new one. Eraric, the leader of the group, endorsed Belisarius, and the rest of the kingdom agreed, so they offered him their crown. Belisarius was a soldier, not a statesman, and still loyal to Justinian. He made as if to accept the offer, rode to Ravenna to be crowned, and promptly arrested the leaders of the Goths and reclaimed their entire kingdom, with no halfway settlements, for the Empire. Fearful that Belisarius might set up a permanent kingship for himself should he consolidate his conquests, Justinian recalled him to Constantinople with Witiges in tow.
As soon as Belisarius was gone, the remaining Ostrogoths elected a new king named Totila. Under Totila's brilliant command, the Goths were able to reassert themselves to a degree. For a period of nearly ten years, control of Italy became a seesaw battle between Byzantine and Ostrogothic forces. Totila eventually recaptured all of northern Italy and even drove the Byzantines out of Rome, thereby affording him the opportunity to take political control of the city, partly by executing members of the Roman senatorial order; many of them fled eastwards to Constantinople.
By 550 Justinian was able to put together an enormous force, an assembly designed to recover his losses and subdue any Gothic resistance. In 551 the Roman navy destroyed Totila's fleet, and in 552 an overwhelming Byzantine force under Narses entered Italy from the north. Attempting to surprise the invading Byzantines, Totila gambled with his forces at Taginae, where he was slain. Broken but not yet defeated, the Ostrogoths made one final stand in Campania under a chief named Teia, but when he too was killed in battle at Nuceria, they finally capitulated. On surrendering, they informed Narses that evidently "the hand of God was against them", and so they left Italy for the northern lands of their fathers. After that final defeat, the Ostrogothic name wholly died. The nation had practically evaporated with Theodoric's death. The leadership of western Europe therefore passed by default to the Franks. Consequently, Ostrogothic failure and Frankish success were crucial for the development of early medieval Europe, for Theodoric had made it "his intention to restore the vigor of Roman government and Roman culture". The chance of forming a national state in Italy by the union of Roman and Germanic elements, such as those that arose in Gaul, in Iberia, and in parts of Italy under Lombard rule, was thus lost. The failures of the barbarian kingdoms to maintain control of the regions they conquered were partly the result of leadership vacuums, such as those which followed the deaths of Theodoric (compounded by the lack of male succession) and Totila, but also a consequence of political fragmentation among the Germanic tribes, as their loyalties wavered between their kin and their erstwhile enemies. Frankish entry onto the geopolitical map of Europe also comes into play: had the Ostrogoths attained more military success against the Byzantines on the battlefield by combining the strength of other Germanic tribes, this could have changed the direction of Frankish loyalty. Military success or defeat and political legitimacy were interrelated in barbarian society.
Nevertheless, according to Roman historian Procopius of Caesarea, the Ostrogothic population was allowed to live peacefully in Italy with their Rugian allies under Roman sovereignty. They later joined the Lombards during their conquest of Italy.
Surviving Gothic writings in the Gothic language include the Bible of Ulfilas and other religious writings and fragments. In terms of Gothic legislation in Latin, we have the edict of Theodoric from around the year 500, and the "Variae" of Cassiodorus, which may also pass as a collection of the state papers of Theodoric and his immediate successors. Among the Visigoths, written laws had already been put forth by Euric. Alaric II put forth a Breviarium of Roman law for his Roman subjects; but the great collection of Visigothic laws dates from the later days of the monarchy, being put forth by King Reccaswinth about 654. This code gave occasion to some well-known comments by Montesquieu and Gibbon, and has been discussed by Savigny ("Geschichte des römischen Rechts", ii. 65) and various other writers. They are printed in the "Monumenta Germaniae, leges", tome i. (1902).
Among the Gothic histories that remain, besides that of the frequently quoted Jordanes, is the Gothic history of Isidore, archbishop of Seville, a special source for the history of the Visigothic kings down to Suinthila (621–631). But all the Latin and Greek writers contemporary with the days of Gothic predominance also made their contributions. Not for specific facts, but for a general estimate, no writer is more instructive than Salvian of Marseilles in the 5th century, whose work "De Gubernatione Dei" is full of passages contrasting the vices of the Romans with the virtues of the "barbarians", especially of the Goths. In all such pictures one must allow a good deal for exaggeration both ways, but there must be a groundwork of truth. The chief virtues that the Roman Catholic presbyter praises in the Arian Goths are their chastity, their piety according to their own creed, their tolerance towards the Catholics under their rule, and their general good treatment of their Roman subjects. He even ventures to hope that such good people may be saved, notwithstanding their heresy. This image must have had some basis in truth, but it is not very surprising that the later Visigoths of Iberia had fallen away from Salvian's somewhat idealistic picture.
Jordanes named a people called the Ostrogoths ("Ostrogothae") in a list of many peoples living on the large island of "Scandza", north of the mouth of the Vistula, which most modern scholars understand to refer to the Scandinavian peninsula. The implication was that these Ostrogoths were living there in the 6th century, during the lifetime of Jordanes or his source Cassiodorus, the same period when there was a powerful Ostrogothic kingdom in Italy. The list itself mentions a Roduulf, king of the Ranii, who lived in Scandza near the Dani (Danes). It says he had despised his own kingdom and come to Italy, where he received the embrace of Theoderic the Great. This Roduulf has thus been proposed as a possible source of information about Scandinavian peoples, because Cassiodorus was an important statesman at Theoderic's court.
On the other hand, scholars have come to no consensus about when the list was made, and by whom, nor how to interpret most of the names in it. Arne Søby Christensen, in his detailed analysis, lists three possibilities.
It has been pointed out by Walter Goffart that Jordanes (V.38) also digresses specially to criticize stories going around Constantinople, that the Goths had once been slaves in Britain or another northern island, and had been freed for the price of a nag. Goffart argues that Jordanes likely rejected the idea that the Goths should be simply sent north to their alleged land of origin. Goffart points out that Procopius—a contemporary of Jordanes—reports that Belisarius offered Britain to the Ostrogoths ("Gothic Wars", VI, 6); Goffart also suggests this may be connected to the stories mentioned by Jordanes.
Fundamental to the question of the Scandza list, which mentions the Ostrogothae, there has been much scholarly discussion about why Jordanes claimed that Scandinavia was a "womb of the nations" and the point of origin of not only the Goths but also many other northern barbarian peoples. Before Jordanes, there was already a Judaeo-Christian tradition equating the Goths and other "Scythian" peoples with the descendants of Gog and Magog, whom readers of the Book of Ezekiel and the Book of Revelation might otherwise associate with distant islands.
Ordered field
In mathematics, an ordered field is a field together with a total ordering of its elements that is compatible with the field operations. The basic example of an ordered field is the field of real numbers, and every Dedekind-complete ordered field is isomorphic to the reals.
Every subfield of an ordered field is also an ordered field in the inherited order. Every ordered field contains an ordered subfield that is isomorphic to the rational numbers. Squares are necessarily non-negative in an ordered field. This implies that the complex numbers cannot be ordered, since the square of the imaginary unit "i" is −1. Finite fields cannot be ordered.
Historically, the axiomatization of an ordered field was abstracted gradually from the real numbers, by mathematicians including David Hilbert, Otto Hölder and Hans Hahn. This grew eventually into the Artin–Schreier theory of ordered fields and formally real fields.
There are two equivalent common definitions of an ordered field. The definition in terms of a total order appeared first historically and is a first-order axiomatization of the ordering ≤ as a binary predicate. Artin and Schreier gave the definition in terms of a positive cone in 1926, which axiomatizes the subcollection of non-negative elements. Although the latter is higher-order, viewing positive cones as maximal prepositive cones provides a larger context in which field orderings are extremal partial orderings.
A field ("F", +, ⋅) together with a (strict) total order < on "F" is an ordered field if the order satisfies the following properties for all "a", "b" and "c" in "F":
A prepositive cone or preordering of a field "F" is a subset "P" ⊂ "F" that has the following properties: "P" is closed under addition and multiplication ("x" + "y" and "x" ⋅ "y" lie in "P" whenever "x" and "y" do), "x"² ∈ "P" for every "x" in "F", and −1 ∉ "P".
A preordered field is a field equipped with a preordering "P". Its non-zero elements "P"∗ form a subgroup of the multiplicative group of "F".
If in addition, the set "F" is the union of "P" and −"P", we call "P" a positive cone of "F". The non-zero elements of "P" are called the positive elements of "F".
An ordered field is a field "F" together with a positive cone "P".
The preorderings on "F" are precisely the intersections of families of positive cones on "F". The positive cones are the maximal preorderings.
Let "F" be a field. There is a bijection between the field orderings of "F" and the positive cones of "F".
Given a field ordering ≤ as in the first definition, the set of elements "x" such that "x" ≥ 0 forms a positive cone of "F". Conversely, given a positive cone "P" of "F" as in the second definition, one can associate a total ordering ≤_"P" on "F" by setting "x" ≤_"P" "y" to mean "y" − "x" ∈ "P". This total ordering ≤_"P" satisfies the properties of the first definition.
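This correspondence is easy to make concrete. The following sketch, in Python over the rationals (modelled by the built-in Fraction type), passes between a positive cone and its associated total order; the function names are illustrative, not from any library.

    from fractions import Fraction

    def in_cone(x: Fraction) -> bool:
        """Membership in the positive cone P of Q: the non-negative rationals."""
        return x >= 0

    def leq_from_cone(x: Fraction, y: Fraction) -> bool:
        """The total order associated with P: x <= y iff y - x lies in P."""
        return in_cone(y - x)

    # Spot-check the positive-cone axioms on a few sample elements.
    samples = [Fraction(-3, 2), Fraction(0), Fraction(1), Fraction(5, 7)]
    for a in samples:
        assert in_cone(a * a)                 # every square lies in P
        assert in_cone(a) or in_cone(-a)      # P together with -P covers the field
        for b in samples:
            if in_cone(a) and in_cone(b):
                assert in_cone(a + b) and in_cone(a * b)  # closure under + and *

    # The recovered order agrees with the usual order on the rationals.
    assert leq_from_cone(Fraction(1, 3), Fraction(1, 2))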
Examples of ordered fields are the rational numbers, the real numbers, the field of real algebraic numbers, and the field of rational functions with real coefficients, which can be ordered, for example, by declaring the variable to be greater than every constant.
The surreal numbers form a proper class rather than a set, but otherwise obey the axioms of an ordered field. Every ordered field can be embedded into the surreal numbers.
For every "a", "b", "c", "d" in "F":
Every subfield of an ordered field is also an ordered field (inheriting the induced ordering). The smallest subfield is isomorphic to the rationals (as for any other field of characteristic 0), and the order on this rational subfield is the same as the order of the rationals themselves. If every element of an ordered field lies between two elements of its rational subfield, then the field is said to be "Archimedean". Otherwise, such a field is a non-Archimedean ordered field and contains infinitesimals. For example, the real numbers form an Archimedean field, but the hyperreal numbers form a non-Archimedean field, because they extend the real numbers with elements greater than every standard natural number.
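The non-Archimedean case can also be made concrete. The sketch below implements the standard ordering on rational functions in which the variable is treated as infinitely large: a nonzero quotient p/q is positive exactly when the leading coefficients of p and q have the same sign. The coefficient-list representation and the function names are assumptions made for this illustration.

    from fractions import Fraction

    def leading(p):
        """Leading coefficient of a nonzero polynomial, coefficients low-to-high."""
        for c in reversed(p):
            if c != 0:
                return c
        raise ValueError("zero polynomial")

    def positive(p, q):
        """Is the rational function p/q positive when the variable is infinitely large?"""
        return (leading(p) > 0) == (leading(q) > 0)

    one = [Fraction(1)]
    x = [Fraction(0), Fraction(1)]        # the rational function x

    assert positive(x, one)               # x itself is positive
    # x - n is positive for every natural number n, so x has no rational
    # upper bound: Archimedes' axiom fails in this field.
    for n in range(1, 1000):
        assert positive([Fraction(-n), Fraction(1)], one)
    # Likewise 1/x is a positive infinitesimal: 1/n - 1/x = (x - n)/(n*x)
    # is positive for every n, so 0 < 1/x < 1/n for all n.
    assert positive(one, x)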
An ordered field "K" is isomorphic to the real number field if every non-empty subset of "K" with an upper bound in "K" has a least upper bound in "K". This property implies that the field is Archimedean.
Vector spaces (particularly "n"-spaces) over an ordered field exhibit some special properties and have some specific structures, namely: orientation, convexity, and positive-definite inner product. See Real coordinate space#Geometric properties and uses for discussion of those properties of R^"n", which can be generalized to vector spaces over other ordered fields.
Every ordered field is a formally real field, i.e., 0 cannot be written as a sum of nonzero squares.
Conversely, every formally real field can be equipped with a compatible total order, that will turn it into an ordered field. (This order need not be uniquely determined.) The proof uses Zorn's lemma.
Finite fields and, more generally, fields of positive characteristic cannot be turned into ordered fields, because in characteristic "p" the element −1 can be written as a sum of ("p" − 1) squares 1². The complex numbers also cannot be turned into an ordered field, as −1 is a square (of the imaginary number "i") and would thus be positive. Also, the p-adic numbers cannot be ordered, since according to Hensel's lemma Q₂ contains a square root of −7, thus 1² + 1² + 1² + 2² + (√−7)² = 0, and Q_"p" ("p" > 2) contains a square root of 1 − "p", thus ("p" − 1)·1² + (√(1 − "p"))² = 0.
If "F" is equipped with the order topology arising from the total order ≤, then the axioms guarantee that the operations + and × are continuous, so that "F" is a topological field.
The Harrison topology is a topology on the set of orderings "X"_"F" of a formally real field "F". Each order can be regarded as a multiplicative group homomorphism from "F"∗ onto {±1}. Giving {±1} the discrete topology and {±1}^"F" the product topology induces the subspace topology on "X"_"F". The Harrison sets H("a") = {"P" ∈ "X"_"F" : "a" ∈ "P"}, for "a" ∈ "F"∗, form a subbasis for the Harrison topology. The product space is a Boolean space (compact, Hausdorff and totally disconnected), and "X"_"F" is a closed subset, hence again Boolean.
A fan on "F" is a preordering "T" with the property that if "S" is a subgroup of index 2 in "F"∗ containing "T" − {0} and not containing −1 then "S" is an ordering (that is, "S" is closed under addition). A superordered field is a totally real field in which the set of sums of squares forms a fan. | https://en.wikipedia.org/wiki?curid=22430 |
Oracle machine
In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a Turing machine with a black box, called an oracle, which is able to solve certain decision problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used.
An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box" that is able to produce a solution for any instance of a given computational problem.
An oracle machine can perform all of the usual operations of a Turing machine, and can also query the oracle to obtain a solution to any instance of the computational problem for that oracle. For example, if the problem is a decision problem for a set "A" of natural numbers, the oracle machine supplies the oracle with a natural number, and the oracle responds with "yes" or "no" stating whether that number is an element of "A".
There are many equivalent definitions of oracle Turing machines, as discussed below. The one presented here is from van Melkebeek (2000:43).
An oracle machine, like a Turing machine, includes a work tape with a read/write head, a finite set of states, and a transition function that governs the machine's behaviour.
In addition to these components, an oracle machine also includes an oracle tape with its own head, and two distinguished states: the ASK state and the RESPONSE state.
From time to time, the oracle machine may enter the ASK state. When this happens, the following actions are performed in a single computational step: the contents of the oracle tape are viewed as an instance of the oracle's computational problem; the oracle is consulted, and the contents of the oracle tape are replaced with a solution to that instance; the oracle head is moved back to the first square of the oracle tape; and the state changes to RESPONSE.
The effect of changing to the ASK state is thus to receive, in a single step, a solution to the problem instance that is written on the oracle tape.
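As a rough illustration of this mechanism, the Python sketch below models the oracle as an arbitrary black-box membership predicate and charges each consultation as a single step; the class and method names are illustrative, not a standard API, and the surrounding Turing-machine state is elided.

    from typing import Callable

    class OracleMachine:
        def __init__(self, oracle: Callable[[str], bool]):
            self.oracle = oracle      # black box deciding membership in some set A
            self.oracle_tape = ""     # dedicated tape for queries and answers
            self.steps = 0            # each oracle consultation costs one step

        def ask(self) -> bool:
            """Enter the ASK state: the instance written on the oracle tape is
            solved in a single computational step, whatever its difficulty."""
            self.steps += 1
            answer = self.oracle(self.oracle_tape)
            self.oracle_tape = "1" if answer else "0"   # tape now holds the answer
            return answer

    # Example with a decidable oracle set: the binary palindromes.
    m = OracleMachine(lambda w: w == w[::-1])
    m.oracle_tape = "10101"
    print(m.ask(), m.steps)   # True 1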
There are many alternative definitions to the one presented above. Many of these are specialized for the case where the oracle solves a decision problem. In this case, some definitions, for example, dispense with writing an answer on the oracle tape and instead have the machine pass from the query state into one of two special states, YES or NO, according to whether the queried string belongs to the oracle set; others restrict the oracle tape, for instance by making it write-only or by limiting its alphabet.
These definitions are equivalent from the point of view of Turing computability: a function is oracle-computable from a given oracle under all of these definitions if it is oracle-computable under any of them. The definitions are not equivalent, however, from the point of view of computational complexity. A definition such as the one by van Melkebeek, using an oracle tape which may have its own alphabet, is required in general.
The complexity class of decision problems solvable by an algorithm in class A with an oracle for a language L is called A^L. For example, P^SAT is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for the Boolean satisfiability problem. The notation A^B can be extended to a set of languages B (or a complexity class B) by using the following definition: A^B is the union of the classes A^L over all languages L in B.
When a language L is complete for some class B, then A^L = A^B, provided that machines in A can execute the reductions used in the completeness definition of class B. In particular, since SAT is NP-complete with respect to polynomial-time reductions, P^SAT = P^NP. However, if A = DLOGTIME, then A^SAT may not equal A^NP. (Note that the definition of A^B given above is not completely standard. In some contexts, such as the proof of the time and space hierarchy theorems, it is more useful to assume that the abstract machine defining class A only has access to a single oracle for one language. In this context, A^B is not defined if the complexity class B does not have any complete problems with respect to the reductions available to A.)
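One way to see what a class such as P^SAT buys is self-reducibility: a polynomial-time machine with a yes/no SAT oracle can construct a satisfying assignment using one oracle query per variable. The sketch below assumes a satisfiable CNF formula given as lists of integer literals (DIMACS-style signs) and uses a deliberately naive brute-force routine as a stand-in for the oracle itself; all names are illustrative.

    from itertools import product

    def sat_oracle(cnf, n_vars):
        """Stand-in for the SAT black box: is this CNF satisfiable?
        (Brute force here; the oracle model treats this as one step.)"""
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in cnf):
                return True
        return False

    def find_assignment(cnf, n_vars):
        """Fix variables one at a time, consulting the oracle after each guess.
        Assumes the input formula is satisfiable."""
        fixed = []
        for v in range(1, n_vars + 1):
            trial = cnf + [[lit] for lit in fixed] + [[v]]   # try v = True
            fixed.append(v if sat_oracle(trial, n_vars) else -v)
        return fixed

    # (x1 or x2) and (not x1 or x3): n_vars oracle queries yield an assignment.
    print(find_assignment([[1, 2], [-1, 3]], 3))   # [1, 2, 3]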
It is understood that NP ⊆ P^NP, but the question of whether NP^NP, P^NP, NP, and P are equal remains open. It is believed they are different, and this leads to the definition of the polynomial hierarchy.
Oracle machines are useful for investigating the relationship between the complexity classes P and NP, by considering the relationship between P^A and NP^A for an oracle A. In particular, it has been shown that there exist languages A and B such that P^A = NP^A and P^B ≠ NP^B (Baker, Gill, and Solovay 1975). The fact that the P = NP question relativizes both ways is taken as evidence that answering this question is difficult, because a proof technique that "relativizes" (i.e., is unaffected by the addition of an oracle) will not answer the P = NP question. Most proof techniques relativize.
One may consider the case where an oracle is chosen randomly from among all possible oracles (an infinite set). It has been shown that in this case, with probability 1, P^A ≠ NP^A (Bennett and Gill 1981). When a question is true for almost all oracles, it is said to be true "for a random oracle". This choice of terminology is justified by the fact that random oracles support a statement with probability 0 or 1 only. (This follows from Kolmogorov's zero-one law.) This is only weak evidence that P ≠ NP, since a statement may be true for a random oracle but false for ordinary Turing machines; for example, IP^A ≠ PSPACE^A for a random oracle A, but IP = PSPACE (Chang et al., 1994).
A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt. This creates a hierarchy of machines, each with a more powerful halting oracle and an even harder halting problem.
This hierarchy of machines can be used to define the "arithmetical hierarchy" (Börger 1989).
In cryptography, oracles are used to make arguments for the security of cryptographic protocols where a hash function is used. A security reduction for the protocol is given in the case where, instead of a hash function, a random oracle answers each query randomly but consistently; the oracle is assumed to be available to all parties including the attacker, as the hash function is. Such a proof shows that unless the attacker solves the hard problem at the heart of the security reduction, they must make use of some interesting property of the hash function to break the protocol; they cannot treat the hash function as a black box (i.e., as a random oracle).
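The "randomly but consistently" behaviour is standardly simulated by lazy sampling, as in the following minimal Python sketch (class and parameter names are illustrative): each fresh query receives a uniformly random digest, which is memoised so that every party, the attacker included, sees the same answer on repeated queries.

    import os

    class RandomOracle:
        def __init__(self, out_len: int = 32):
            self.out_len = out_len
            self.table = {}                # memoised map: query -> digest

        def query(self, message: bytes) -> bytes:
            if message not in self.table:  # fresh query: sample uniformly
                self.table[message] = os.urandom(self.out_len)
            return self.table[message]     # repeated query: identical answer

    H = RandomOracle()
    assert H.query(b"hello") == H.query(b"hello")   # consistent
    assert H.query(b"hello") != H.query(b"world")   # distinct, with overwhelming probability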
Orangutan
The orangutans (also spelled orang-utan, orangutang, or orang-utang) are three extant species of great apes native to Indonesia and Malaysia. Orangutans are found in the rainforests of Borneo and Sumatra, but during the Pleistocene they ranged throughout Southeast Asia and South China. Classified in the genus Pongo, orangutans were originally considered to be one species. From 1996, they were divided into two species: the Bornean orangutan ("P. pygmaeus", with three subspecies) and the Sumatran orangutan ("P. abelii"). In November 2017, it was reported that a third species had been identified: the Tapanuli orangutan ("P. tapanuliensis"). The orangutans are the only surviving species of the subfamily Ponginae, which also included several other species, including the largest known primate, "Gigantopithecus blacki". The ancestors of the Ponginae split from the main ape line in Africa 15.7 to 19.3 million years ago (mya).
Orangutans are the most arboreal of the great apes and spend most of their time in trees. Their hair is reddish-brown, instead of the brown or black hair typical of chimpanzees and gorillas. Flanged (the distinctive cheek pads) adult males make long calls that attract females and intimidate rivals; younger unflanged males do not and resemble adult females. Orangutans are the most solitary of the great apes, with social bonds occurring primarily between mothers and their dependent offspring, who remain together for the first two years. Fruit is the most important component of an orangutan's diet; however, the apes will also eat vegetation, bark, honey, insects and bird eggs. They can live over 30 years both in the wild and in captivity.
Orangutans are among the most intelligent primates; they use a variety of sophisticated tools and construct elaborate sleeping nests each night from branches and foliage. The apes' learning abilities have been studied extensively. There may even be distinctive cultures within populations. Orangutans have been featured in literature and art since at least the 18th century, particularly in works which comment on human society. Field studies of the apes were pioneered by primatologist Birutė Galdikas and they have been kept in captive facilities around the world since at least the early 19th century. All three orangutan species are considered to be critically endangered. Human activities have caused severe declines in populations and ranges. Threats to wild orangutan populations include poaching, habitat destruction because of palm oil cultivation, and the illegal pet trade. Several conservation and rehabilitation organisations are dedicated to the survival of orangutans in the wild.
The name "orangutan" (also written orang-utan, orang utan, orangutang, and ourang-outang) is derived from the Malay words "orang", meaning "man", and "hutan", meaning "forest", thus "man of the forest". The locals originally used the name to refer to actual forest-dwelling people, while the ape was called "mawas".
The first attestation of the word "orangutan" to name the Asian ape is in Dutch physician Jacobus Bontius' 1631 "Historiae naturalis et medicae Indiae orientalis". He reported that Malays had informed him the ape could talk, but preferred not to "lest he be compelled to labour". The word appeared in several German-language descriptions of Indonesian zoology in the 17th century. The word likely comes specifically from the Banjarese variety of Malay. Cribb and colleagues (2014) suggest that Bontius' account referred not to apes (which were not known from Java) but to humans suffering from some serious medical condition (most likely cretinism), and that his use of the word was misunderstood by Nicolaes Tulp, who was the first to use the term in a publication.
The word was first attested in English in 1691 in the form "orang-outang", and variants ending in "-ng" are found in many languages. This spelling (and pronunciation) has remained in use in English up to the present but has come to be regarded as incorrect. The loss of the "h" in "utan" and the shift from -ng to -n have been taken to suggest that the term entered English through Portuguese. British naturalist Alfred Russel Wallace published his account of Malaysia's wildlife, "The Malay Archipelago: The Land of the Orang-Utan and the Bird of Paradise", in 1869.
The name of the genus, "Pongo", comes from a 16th-century account by Andrew Battel, an English sailor held prisoner by the Portuguese in Angola, which describes two anthropoid "monsters" named Pongo and Engeco. He is now believed to have been describing gorillas, but in the 18th century, the terms orangutan and pongo were used for all great apes. Lacépède used the term "Pongo" for the genus in 1799. Battel's "Pongo", in turn, is from the Kongo word "mpongi" or other cognates from the region: Lumbu "pungu", Vili "mpungu", or Yombi "yimpungu".
The orangutan was first described scientifically in 1758 in the "Systema Naturae" of Linnaeus as "Simia satyrus". It was renamed "Simia pygmaeus" in 1760 by his student Christian Emmanuel Hopp, and the genus name "Pongo" was introduced by Lacépède in 1799. The populations on the two islands were classified as separate species when "P. abelii" was described by Lesson in 1827. "P. abelii" was placed under "P. pygmaeus" in 1985 as a subspecies. In 2001, "P. abelii" was re-elevated to full species status based on molecular evidence published in 1996, and three distinct populations on Borneo were elevated to subspecies ("P. p. pygmaeus", "P. p. morio" and "P. p. wurmbii"). The description in 2017 of a third species, "P. tapanuliensis", from Sumatra south of Lake Toba, came with a surprising twist: it is more closely related to the Bornean species, "P. pygmaeus", than to its fellow Sumatran species, "P. abelii".
The three orangutan species are the only extant members of the subfamily Ponginae. This subfamily also included the extinct genera "Lufengpithecus", which lived in southern China and Thailand 2–8 mya, and "Sivapithecus", which lived in India and Pakistan from 12.5 mya until 8.5 mya. These apes likely lived in drier and cooler environments than orangutans do today. "Khoratpithecus piriyai", which lived in Thailand 5–7 mya, is believed to be the closest known relative of the orangutans. The largest known primate, "Gigantopithecus", was also a member of Ponginae and lived in China, India and Vietnam from 5 mya to 100,000 years ago.
The oldest known record of "Pongo" is from the early Pleistocene of Chongzuo, consisting of teeth ascribed to "P. weidenreichi". "Pongo" is found as part of the faunal complex in the Pleistocene cave assemblage in Vietnam, alongside "Gigantopithecus", though it is known only from teeth. Some fossils described under the name "P. hooijeri" have been found in Vietnam, and multiple fossil subspecies have been described from several parts of southeastern Asia. It is unclear whether these belong to "P. pygmaeus" or "P. abelii" or, in fact, represent distinct species. During the Pleistocene, "Pongo" had a far more extensive range than at present, extending throughout Sundaland and mainland Southeast Asia and South China. Teeth of orangutans are known from Peninsular Malaysia that date to 60,000 years ago. The range of orangutans had contracted significantly by the end of the Pleistocene, most likely because of the reduction of forest habitat during the Last Glacial Maximum, though they may have survived into the Holocene in Cambodia and Vietnam.
The Sumatran orangutan genome was sequenced in January 2011. Following humans and chimpanzees, the Sumatran orangutan became the third species of hominid to have its genome sequenced. Subsequently, the Bornean species had its genome sequenced. Genetic diversity was found to be lower in Bornean orangutans ("P. pygmaeus") than in Sumatran ones ("P. abelii"), despite the fact that Borneo is home to six or seven times as many orangutans as Sumatra. The researchers hope these data may help conservationists save the endangered ape, and also prove useful in further understanding of human genetic diseases. Similarly to gorillas and chimpanzees, orangutans have 48 diploid chromosomes, in contrast to humans, which have 46.
Within apes (superfamily Hominoidea), the gibbons diverged during the early Miocene (between 19.7 and 24.1 mya, according to molecular evidence) and the orangutans split from the African great ape lineage between 15.7 and 19.3 mya. Israfil et al. (2011) estimated that the Sumatran and Bornean species diverged 2.9 to 4.9 mya. By contrast, the 2011 genome study suggested that these two species diverged around 400,000 years ago, more recently than was previously thought. Also, the orangutan genome was found to have evolved much more slowly than chimpanzee and human DNA. However, a 2017 study found that the Bornean and Tapanuli orangutans diverged from Sumatran orangutans about 3.4 mya, and from each other around 2.4 mya. Orangutans travelled from Sumatra to Borneo as the islands were connected by land bridges, as parts of Sundaland, during recent glacial periods when sea levels were much lower. The present range of Tapanuli orangutans is thought to be close to where ancestral orangutans first entered what is now Indonesia from mainland Asia.
Orangutans display significant sexual dimorphism; females typically stand tall and weigh around , while flanged adult males stand tall and weigh . Compared to humans, they have proportionally long arms, a male orangutan having arm span of about , and short legs. Most of their bodies are covered in coarse hair that is generally red but ranges from bright orange to maroon or dark chocolate, while the skin is grey-black. Though largely hairless, males' faces can develop some hair, giving them a beard.
Orangutans have small ears and noses; the ears are unlobed. The mean endocranial volume is 397 cm³. The braincase is elevated relative to the facial area, which is concave and prognathous. Females and juveniles have rounded skulls and narrow faces, while males develop a large sagittal crest and large cheek flaps, which show their dominance to other males. The cheek flaps are made mostly of fatty tissue and are supported by the musculature of the face. Mature males develop large throat pouches and long canines.
As in all Old World Primates, orangutan hands are similar to those of humans; they have four long fingers but a dramatically shorter opposable thumb for a strong grip on branches as they travel high in the trees. The joint and tendon arrangement in the orangutans' hands produces two adaptations significant for arboreal locomotion. The resting configuration of the fingers is curved, creating a suspensory hook grip. Additionally, with the thumb out of the way, the fingers (and hands) can grip securely around objects with a small diameter by resting the tops of the fingers against the inside of the palm, thus creating a double-locked grip. Their feet have four long toes and an opposable big toe, enabling orangutans to grasp things securely both with their hands and with their feet. Since their hip joints have the same flexibility as their shoulder and arm joints, orangutans have less restriction in the movements of their legs than humans have.
Orangutans move through the trees by both vertical climbing and suspension. Compared to other great apes, they infrequently descend to the ground, where they are more cumbersome. Unlike gorillas and chimpanzees, orangutans are not true knuckle-walkers, a gait that involves a more relaxed, open hand with the middle segments of the fingers sweeping the ground and the feet flat; instead, orangutans tuck in their digits and shuffle on the sides of their hands and feet.
Compared to their relatives in Borneo, Sumatran orangutans are thinner with paler and longer hair and a longer face. Tapanuli orangutans resemble Sumatran orangutans more than Bornean orangutans in body build and fur color. However, they have frizzier hair, smaller heads, and flatter and wider faces than the other species.
Orangutans are mainly arboreal and inhabit tropical rainforest, particularly dipterocarp and secondary old growth forest. Population densities are highest in habitats near rivers, such as freshwater and peat swamp forest, while drier forests away from the flood plains are less inhabited. Population density also decreases at higher elevations. Orangutans occasionally enter grasslands, cultivated fields, gardens, young secondary forest, and shallow lakes.
Most of the day is spent feeding, resting, and travelling. They start the day feeding for two to three hours in the morning. They rest during midday, then travel in the late afternoon. When evening arrives, they prepare their nests for the night. Orangutans do not swim, although they have been recorded wading in water. Orangutans' potential predators include tigers, clouded leopards and wild dogs. The absence of tigers on Borneo has been suggested as a reason Bornean orangutans are found on the ground more often than their Sumatran relatives. The most frequent orangutan parasites are nematodes of the genus "Strongyloides" and the ciliate "Balantidium coli". Among "Strongyloides", the species "S. fuelleborni" and "S. stercoralis" are commonly reported in young individuals. Orangutans also use plants of the genus "Commelina" as an anti-inflammatory balm.
Orangutans are primarily frugivores (fruit-eaters) and 57–80% of their feeding time is spent foraging for fruits. Even during times of scarcity, fruit can still take up 16% of feeding. Orangutans prefer fruits with soft pulp, arils or seed-walls surrounding their seeds, as well as trees with large crops. "Ficus" fruits fit both preferences and are thus highly favoured, but they also consume drupes and berries. Orangutans are thought to be the sole fruit disperser for some plant species including the climber species "Strychnos ignatii" which contains the toxic alkaloid strychnine.
Orangutans also supplement their diet with leaves, which take up 25% of their foraging time on average. Leaf eating increases when fruit gets scarcer, but even during times of fruit abundance, orangutan will eat leaves 11–20% of the time. The leaf and stem material of "Borassodendron borneensis" appears to be an important food source during low fruit abundance. Other food items consumed by the apes include bark, honey, bird eggs, insects and small vertebrates including the slow loris. In some areas, orangutans may practice geophagy, which involves consuming soil and other earth substances. The apes may eat tubes of soil created by termites along tree trunks as well as descend to the ground to uproot soil to eat. Orangutans are also known to visit mineral licks at the clay or sandstone-like walls of cliffs or earth depressions. Soils appear to contain a high concentration of kaolin, which counteracts toxic tannins and phenolic acids found in the orangutan's diet.
A decade-long study of urine and faecal samples at the Gunung Palung Orangutan Conservation Project in West Kalimantan has shown that orangutans give birth during and after the high fruit season (though not every year), during which they consume various abundant fruits, totalling up to 11,000 calories per day. In the low-fruit season, they eat whatever fruit is available in addition to tree bark and leaves, with daily intake at only 2,000 calories.
The social structure of the orangutan can be best described as solitary but social; they live a more solitary lifestyle than the other great apes. Most social bonds occur between adult females and their dependent and weaned offspring. Adult males and independent adolescents of both sexes often live alone. Orangutan societies comprise resident and transient individuals of both sexes. Resident females live with their offspring in defined home ranges that overlap with those of other adult females, which may be their immediate relatives. One to several resident female home ranges are encompassed within the home range of a resident male, who is their main mating partner. Bornean orangutans are generally more solitary, moving and foraging alone while Sumatran orangutans travel in groups more often. Interactions between adult females range from friendly to avoidance to antagonistic. The home ranges of resident males can overlap greatly, though encounters are relatively rare and hostile. Adult males dominate sub-adult males.
Orangutans disperse and establish their home ranges by age 11. Females tend to settle close to their mothers. However, they do not seem to have any special social bonds with them. Males disperse much farther but may include their natal (birth) range within their new home range. They enter a transient phase, which lasts until a male can challenge and displace a dominant, resident male from his home range. Both resident and transient orangutans aggregate on large fruiting trees to feed. The fruits tend to be abundant, so competition is low and individuals may engage in social interactions. Orangutans will also form travelling groups with members moving between different food sources. These groups usually consist of only a few individuals, and may take the form of consortships between an adult male and a female. Social grooming is uncommon among orangutans.
Orangutans communicate with various sounds. Males will make long calls, both to attract females and to advertise themselves to other males. These are divided into three parts; they begin with grumbles, climax with pulses and end with bubbles. Both sexes will try to intimidate conspecifics with a series of low guttural noises known collectively as the "rolling call". When annoyed, an orangutan will suck in air through pursed lips, making a kissing sound known as the "kiss squeak". Mothers produce throatscrapes to keep in contact with their offspring. Infants make soft hoots when distressed. Orangutans are also known to produce smacks or blow raspberries when making a nest.
Males become sexually mature at around age 15. However, they may exhibit arrested development by not developing the distinctive cheek pads, pronounced throat pouches, long fur, or long calls until a resident dominant male is absent. The transformation from unflanged to flanged can occur quickly. Flanged males attract females in oestrus with their characteristic long calls, which may also suppress development in younger males. Unflanged males wander widely in search of oestrous females and upon finding one, will force copulation on her. While both strategies can be successful, females prefer to mate with flanged males and seek their company for protection. However, in some areas females prefer unflanged males during times of instability and do not resist copulations. Resident males may form consortships that last for days, weeks or months after copulation.
Like other great apes, female orangutans are infertile during adolescence, which may last up to four years. They first ovulate at 5.8 to 11.1 years (earlier in those with more body fat) and have a 22- to 30-day menstrual cycle. Gestation lasts nine months, with a first birth at age 14 or 15. Female orangutans have a six to nine year interbirth interval, the longest among the great apes. Unlike many other primates, male orangutans do not seem to practice infanticide. This may be because they cannot ensure they will sire a female's next offspring, because she does not immediately begin ovulating again after her infant dies. However, there is evidence that females with offspring under six years old generally avoid adult males.
Females do most of the caring and socialising of the young, while males play no role. A female often has an older offspring with her to help socialise the infant. Usually only a single infant is born; twins are a rare occurrence. Infant orangutans completely depend on their mothers for the first two years of their lives. The mother will carry the infant while travelling, and feed it and sleep with it in the same night nest. For the first four months, the infant is carried on its belly and almost never without physical contact. In the following months, the time an infant spends with its mother decreases. When an orangutan reaches the age of one-and-a-half years, its climbing skills improve and it will travel through the canopy holding hands with other orangutans, a behaviour known as "buddy travel". After two years of age, juvenile orangutans will begin to move away from their mothers temporarily and are weaned at four years old. They reach adolescence at six or seven years of age and will socialise with their peers while still having contact with their mothers. Typically, orangutans live over 30 years both in the wild and in captivity.
Orangutans build nests specialised for either day or night use. These are carefully constructed; young orangutans learn from observing their mother's nest-building behaviour. In fact, nest-building is a leading cause of young orangutans leaving their mothers for the first time. From six months of age onwards, orangutans practice nest-building and gain proficiency by the time they are three years old.
A night nest is constructed by following a sequence of steps. Initially, a suitable tree is located; orangutans are selective about sites, though many tree species are used. The nest is then built by pulling branches together under them and joining them at a point. After the foundation has been built, the orangutan bends smaller, leafy branches onto the foundation; this layer serves as, and is termed, the "mattress". After this, orangutans stand and braid the tips of branches into the mattress. Doing this increases the stability of the nest and is the final act of nest-building. Orangutans may add additional features, such as "pillows", "blankets", "roofs" and "bunk-beds", to their nests.
Orangutans are among the most intelligent non-human primates. Experiments suggest they can track the displacement of objects both visible and hidden. In addition, Zoo Atlanta has a touch-screen computer on which its two Sumatran orangutans play games. Scientists hope the data they collect will help researchers learn about socialising patterns, such as whether the apes learn behaviours through trial and error or by mimicry, and point to new conservation strategies.
A 2008 study of two orangutans at the Leipzig Zoo showed orangutans can use "calculated reciprocity", which involves weighing the costs and benefits of gift exchanges and keeping track of these over time. Orangutans are the first nonhuman species documented to do so. In a 1997 study, two captive adult orangutans were tested with the cooperative pulling paradigm. Without any training, the orangutans succeeded in pulling in an object to get food in the first session. Over the course of 30 sessions, the apes succeeded more quickly, having learned to coordinate. An adult orangutan has been documented to pass the mirror test, indicating self-awareness. However, mirror tests with a 2-year-old orangutan failed to reveal self-recognition.
Studies in the wild indicate that flanged male orangutans plan their movements in advance and signal them to other individuals. Orangutans can make a new nest in only five or six minutes and choose branches they know can support their body weight. Orangutans and other great apes show laughter-like vocalisations in response to physical contact such as wrestling, play chasing or tickling. This suggests that laughter derived from a common origin among primate species and therefore evolved prior to the origin of humans. Orangutans have also been found to have voluntary control over vocal fold oscillation, which is essential for speech in humans, and can learn and mimic new sounds. Bonnie, an orangutan at the National Zoo, was recorded spontaneously whistling after hearing a caretaker. She appears to whistle without expecting a food reward.
Tool use in orangutans was observed by primatologist Birutė Galdikas in ex-captive populations. In addition, evidence of sophisticated tool manufacture and use in the wild was reported from a population of orangutans in Suaq Balimbing ("Pongo abelii") in 1996. These orangutans developed a tool kit for use in foraging which consisted of both insect-extraction tools for use in the hollows of trees and seed-extraction tools for harvesting seeds from hard-husked fruit. The orangutans adjusted their tools according to the task at hand, and preference was given to oral tool use. This preference was also found in an experimental study of captive orangutans ("P. pygmaeus"). Orangutans have been observed to jab at catfish with sticks, so that the panicked prey would flop out of ponds and into the ape's waiting hands. Orangutans have also been documented to save tools for future use.
Primatologist Carel P. van Schaik and biological anthropologist Cheryl D. Knott further investigated tool use in different wild orangutan populations. They compared geographic variations in tool use related to the processing of "Neesia" fruit. The orangutans of Suaq Balimbing ("P. abelii") were found to be avid users of insect and seed-extraction tools when compared to other wild orangutans. The scientists suggested these differences are cultural as they do not correlate with habitat. The orangutans at Suaq Balimbing live in dense groups and are socially tolerant; this creates good conditions for social transmission. Further evidence that highly social orangutans are more likely to exhibit cultural behaviours came from a study of leaf-carrying behaviours of formerly captive orangutans that were being rehabilitated on the island of Kaja in Borneo.
Wild orangutans ("P. pygmaeus wurmbii") in Tuanan, Borneo, were reported to use tools in acoustic communication. They use leaves to amplify the kiss squeak sounds they produce. The apes may employ this method of amplification to deceive the listener into believing they are larger animals. In 2003, researchers from six different orangutan field sites who used the same behavioural coding scheme compared the behaviours of the animals from each site. They found each orangutan population behaved differently. The evidence suggested the differences were cultural: first, the extent of the differences increased with distance, suggesting cultural diffusion was occurring, and second, the size of the orangutans' cultural repertoire increased according to the amount of social contact present within the group. Social contact facilitates cultural transmission.
Zoologist Gary L. Shapiro conducted a study of orangutan symbolic capability from 1973 to 1975 with Aazk, a juvenile female orangutan at the Fresno City Zoo (now Chaffee Zoo) in Fresno, California. The study employed the techniques of psychologist David Premack, who used plastic tokens to teach linguistic skills to the chimpanzee, Sarah. Shapiro continued to examine the linguistic and learning abilities of ex-captive orangutans in Tanjung Puting National Park, in Indonesian Borneo, between 1978 and 1980.
During that time, Shapiro instructed ex-captive orangutans in the acquisition and use of signs following the techniques of psychologists R. Allen Gardner and Beatrix Gardner, who taught the chimpanzee, Washoe, in the late 1960s. In the first signing study ever conducted in a great ape's natural environment, Shapiro home-reared Princess, a juvenile female which learned nearly 40 signs (according to the criteria of sign acquisition used by psychologist Francine Patterson with Koko, the gorilla) and trained Rinnie, a free-ranging adult female orangutan, which learned nearly 30 signs over a two-year period. For his dissertation study, Shapiro examined the factors influencing sign learning by four juvenile orangutans over a 15-month period.
In June 2008, Spain became the first country in the world to recognise the rights of some non-human primates, when its parliament's cross-party environmental committee urged the country to comply with the recommendations of the Great Ape Project, which are that chimpanzees, bonobos, orangutans, and gorillas are not to be used for animal experiments. In December 2014, a court in Argentina ruled that an orangutan named Sandra at the Buenos Aires Zoo must be moved to a sanctuary in Brazil to provide her "partial or controlled freedom". Animal rights groups like Great Ape Project Argentina interpreted the ruling as applicable to all species in captivity, while legal specialists from Argentina's Federal Chamber of Criminal Cassation considered the ruling applicable only to non-human hominids.
Orangutans were known to the native people of Sumatra and Borneo for millennia. While some communities hunted them for food and decoration, others placed taboos on such practices. In central Borneo, some traditional folk beliefs consider it bad luck to look in the face of an orangutan. Some folk tales involve orangutans mating with and kidnapping humans. There are even stories of hunters being seduced by female orangutans. Europeans became aware of the existence of the orangutan possibly as early as the 17th century. European explorers in Borneo hunted them extensively during the 19th century. The first scientific description of orangutans was given by Dutch anatomist Petrus Camper, who observed the animals and dissected some specimens.
Little was known about orangutan behaviour until the field studies of Birutė Galdikas, who became a leading authority on the apes. When she arrived in Borneo, Galdikas settled into a primitive bark-and-thatch hut, at a site she dubbed Camp Leakey, near the Java Sea. Despite numerous hardships, she remained there over 30 years and became an outspoken advocate for orangutans and the preservation of their rainforest habitat, which is rapidly being devastated by loggers, palm oil plantations, gold miners, and unnatural forest fires. Galdikas's conservation efforts have extended well beyond advocacy, largely focusing on rehabilitation of the many orphaned orangutans turned over to her for care. Along with Jane Goodall and Dian Fossey, Galdikas is considered to be one of Leakey's Angels.
Orangutans first appeared in Western fiction in the 18th century and have been used to comment on human society. Written by the pseudonymous A. Ardra, "Tintinnabulum naturae" (The Bell of Nature, 1772) is told from the point of view of a human-orangutan hybrid who calls himself the "metaphysician of the woods". Over half a century later, the anonymously written work "The Orang Outang" is narrated by a pure orangutan in captivity in the US, writing to her friend in Java and critiquing Boston society.
Thomas Love Peacock's 1817 novel "Melincourt" features Sir Oran Haut-ton, an orangutan who participates in English society and becomes a candidate for Parliament. The novel satirises the class and political system of Britain. Oran's reliability, honesty and status as a "natural man" stand in contrast to the cowardice, greed, folly, and inequality of "civilised" human society. In Frank Challice Constable's "The Curse of Intellect" (1895), the protagonist Reuben Power travels to Borneo to capture and train an orangutan "to know what a beast like that might think of us". Orangutans are featured prominently in the 1963 science fiction novel "Planet of the Apes" by Pierre Boulle and the media franchise derived from it. Orangutans are typically portrayed as bureaucrats like Dr. Zaius, the science minister.
Orangutans are sometimes portrayed as villains, notably in the 1832 Walter Scott novel "Count Robert of Paris" and the 1841 Edgar Allan Poe short story "The Murders in the Rue Morgue", in which an orangutan that has escaped from its owner commits the killings. Disney's 1967 animated musical adaptation of "The Jungle Book" added an orangutan named King Louie, voiced by Louis Prima, who tries to get Mowgli to teach him how to make fire. The 1986 horror film "Link" features an intelligent orangutan which serves a university professor but has sinister motives, particularly with his stalking of a student assistant. Some stories have portrayed orangutans as guides to humans, such as The Librarian in Terry Pratchett's fantasy novels "Discworld" and in Dale Smith's 2004 novel "What the Orangutan Told Alice". More comical portrayals of the orangutan include the 1996 film "Dunston Checks In".
By the early 19th century, orangutans were being shipped to captive facilities at various locations. In 1817, an orangutan joined several other animals in London's Exeter Exchange. The ape was recorded to have shunned the company of other animals, aside from a dog, and appeared to prefer the company of humans. It was occasionally taken on coach rides dressed in a smock-frock and hat and even treated with refreshments at an inn where it impressed its host with its polite behaviour. The London Zoo housed a female orangutan named Jenny who wore human clothing and learned to drink tea. She is remembered for her meeting with the naturalist Charles Darwin who compared her reactions to those of a human child.
Zoos and circuses in the Western world would continue to use orangutans and other simians as sources for entertainment, training them to behave like humans at tea parties and to perform tricks. Notable orangutan "character actors" include Jacob and Rosa of the Tierpark Hagenbeck in the early 20th century and Jiggs of the San Diego Zoo in the 1930s and 1940s. Animal rights groups have urged a stop to such acts, considering them abusive. Starting in the 1960s, zoos became more concerned with education, and orangutan exhibits were designed to mimic the apes' natural environment and display their natural behaviours.
Ken Allen, an orangutan of the San Diego Zoo, became world famous in the 1980s for multiple escapes from his enclosures. He was nicknamed "the hairy Houdini" and was the subject of a fan club, T-shirts, bumper stickers and a song titled "The Ballad of Ken Allen". Galdikas reported that her cook was sexually assaulted by a captive male orangutan. The ape may have suffered from a skewed species identity and forced copulation is a standard mating strategy for low-ranking male orangutans.
All three species are critically endangered according to the IUCN Red List of mammals. They are legally protected from capture, harm or killing in both Malaysia and Indonesia. They are listed under Appendix I by CITES, which prohibits their unlicensed trade under international law. The Bornean orangutan range has become patchy throughout the island, being largely extirpated from various parts of the island, including the southeast. The largest remaining population is found in the forest around the Sabangau River, but this environment is at risk. The Sumatran orangutan is found only in the northern part of Sumatra, with most of the population inhabiting the Leuser Ecosystem. The Tapanuli orangutan is found only in the Batang Toru forest of Sumatra.
During the early 2000s, orangutan habitat decreased rapidly because of logging and forest fires, and fragmentation by roads. A major factor has been the conversion of vast areas of tropical forest to palm oil plantations in response to international demand. Hunting is also a major problem, as is the illegal pet trade. Orangutans may be killed for the bushmeat trade and bones are secretly traded in souvenir shops in several cities in Indonesian Borneo. Conflicts between locals and orangutans also pose a threat. Orangutans that have lost their homes often raid agricultural areas and end up being killed by villagers. Locals may also be motivated to kill orangutans for food, or out of fear and self-defense. Mother orangutans are killed so their infants can be sold as pets, and many of these infants die without the help of their mother. Since 2012, the Indonesian authorities, with the aid of the Orangutan Information Center, have confiscated 114 orangutans, 39 of which were pets.
Estimates between 2000 and 2003 found 7,300 Sumatran orangutans and between 45,000 and 69,000 Bornean orangutans remain in the wild. A 2016 study estimates a population of 14,613 Sumatran orangutans in the wild, doubling previous population estimates. Fewer than 800 Tapanuli orangutans are estimated to still exist, which puts the species among the most endangered of the great apes.
A number of organisations are working for the rescue, rehabilitation and reintroduction of orangutans. The largest of these is the Borneo Orangutan Survival (BOS) Foundation, founded by conservationist Willie Smits, which operates a number of large projects, such as the Nyaru Menteng Rehabilitation Program founded by conservationist Lone Drøscher Nielsen. A female orangutan was rescued from a village brothel in Kareng Pangi village, Central Kalimantan, in 2003. The orangutan was shaved and chained for sexual purposes. Since being freed, the orangutan, named Pony, has been living with the BOS. She has been re-socialised to live with other orangutans. In May 2017, the BOS rescued an albino orangutan from captivity. The rare primate was being held captive in a remote village in Kapuas Hulu, in Kalimantan, Indonesian Borneo. According to volunteers at BOS, albino orangutans are extremely rare (one in ten thousand). This is the first albino orangutan the organisation has seen in 25 years of activity.
Other major conservation centres in Indonesia include those at Tanjung Puting National Park, Sebangau National Park, Gunung Palung National Park and Bukit Baka Bukit Raya National Park in Borneo and the Gunung Leuser National Park and Bukit Lawang in Sumatra. In Malaysia, conservation areas include Semenggoh Wildlife Centre and Matang Wildlife Centre, both in Sarawak, and the Sepilok Orang Utan Sanctuary in Sabah. Major conservation centres headquartered outside the orangutans' home countries include Frankfurt Zoological Society, Orangutan Foundation International, which was founded by Birutė Galdikas, and the Australian Orangutan Project. Conservation organisations such as the Orangutan Land Trust work with the palm oil industry to improve sustainability and encourage the industry to establish conservation areas for orangutans. The trust works to bring different stakeholders together to achieve conservation of the species and its habitat.
Orbital resonance
In celestial mechanics, orbital resonance occurs when orbiting bodies exert regular, periodic gravitational influence on each other, usually because their orbital periods are related by a ratio of small integers. Most commonly this relationship is found for a pair of objects. The physical principle behind orbital resonance is similar in concept to pushing a child on a swing, where the orbit and the swing both have a natural frequency, and the other body doing the "pushing" will act in periodic repetition to have a cumulative effect on the motion. Orbital resonances greatly enhance the mutual gravitational influence of the bodies ("i.e.", their ability to alter or constrain each other's orbits). In most cases, this results in an "unstable" interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be self-correcting and thus stable. Examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa and Io, and the 2:3 resonance between Pluto and Neptune. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The special case of 1:1 resonance between bodies with similar orbital radii causes large Solar System bodies to eject most other bodies sharing their orbits; this is part of the much more extensive process of clearing the neighbourhood, an effect that is used in the current definition of a planet.
A binary resonance ratio in this article should be interpreted as the "ratio of number of orbits" completed in the same time interval, rather than as the "ratio of orbital periods", which would be the inverse ratio. Thus the 2:3 ratio above means Pluto completes two orbits in the time it takes Neptune to complete three. In the case of resonance relationships among three or more bodies, either type of ratio may be used (in such cases the smallest whole-integer ratio sequences are not necessarily reversals of each other) and the type of ratio will be specified.
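To make the orbit-count convention concrete, the following minimal sketch (Python, with approximate orbital periods assumed purely for illustration) shows that a 2:3 orbit-count ratio corresponds to a 3:2 period ratio:

```python
# Orbit-count ratio versus period ratio for Pluto and Neptune.
# Orbital periods in years; approximate values assumed for illustration.
P_NEPTUNE = 164.8
P_PLUTO = 247.9

# Orbits completed per unit time scale as 1/period, so the orbit-count
# ratio is the inverse of the period ratio.
print(P_NEPTUNE / P_PLUTO)  # about 0.665, close to 2/3
print(P_PLUTO / P_NEPTUNE)  # about 1.504, close to 3/2
```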
Since the discovery of Newton's law of universal gravitation in the 17th century, the stability of the Solar System has preoccupied many mathematicians, starting with Pierre-Simon Laplace. The stable orbits that arise in a two-body approximation ignore the influence of other bodies. The effect of these added interactions on the stability of the Solar System is very small, but at first it was not known whether they might add up over longer periods to significantly change the orbital parameters and lead to a completely different configuration, or whether some other stabilising effects might maintain the configuration of the orbits of the planets.
It was Laplace who found the first answers explaining the linked orbits of the Galilean moons (see below). Before Newton, there was also consideration of ratios and proportions in orbital motions, in what was called "the music of the spheres", or Musica universalis.
In general, an orbital resonance may involve one or any combination of the orbital parameters (such as eccentricity, semimajor axis, or inclination), may act on any time scale from short term (commensurable with the orbital periods) to secular (on time scales of a million years or more), and may lead either to long-term stabilization of the orbits or to their destabilization.
A "mean-motion orbital resonance" occurs when two bodies have periods of revolution that are a simple integer ratio of each other. Depending on the details, this can either stabilize or destabilize the orbit.
"Stabilization" may occur when the two bodies move in such a synchronised fashion that they never closely approach. For instance:
Orbital resonances can also "destabilize" one of the orbits. This process can be exploited to find energy-efficient ways of deorbiting spacecraft. For small bodies, destabilization is actually far more likely. For instance, the Kirkwood gaps in the asteroid belt are regions where mean-motion resonances with Jupiter have cleared out asteroids by pumping up their eccentricities.
Most bodies that are in resonance orbit in the same direction; however, the retrograde asteroid 514107 Kaʻepaokaʻawela appears to be in a stable (for a period of at least a million years) 1:−1 resonance with Jupiter. In addition, a few retrograde damocloids have been found that are temporarily captured in mean-motion resonance with Jupiter or Saturn. Such orbital interactions are weaker than the corresponding interactions between bodies orbiting in the same direction.
A "Laplace resonance" is a three-body resonance with a 1:2:4 orbital period ratio (equivalent to a 4:2:1 ratio of orbits). The term arose because Pierre-Simon Laplace discovered that such a resonance governed the motions of Jupiter's moons Io, Europa, and Ganymede. It is now also often applied to other 3-body resonances with the same ratios, such as that between the extrasolar planets Gliese 876 c, b, and e. Three-body resonances involving other simple integer ratios have been termed "Laplace-like" or "Laplace-type".
A "Lindblad resonance" drives spiral density waves both in galaxies (where stars are subject to forcing by the spiral arms themselves) and in Saturn's rings (where ring particles are subject to forcing by Saturn's moons).
A "secular resonance" occurs when the precession of two orbits is synchronised (usually a precession of the perihelion or ascending node). A small body in secular resonance with a much larger one (e.g. a planet) will precess at the same rate as the large body. Over long times (a million years, or so) a secular resonance will change the eccentricity and inclination of the small body.
Several prominent examples of secular resonance involve Saturn. A resonance between the precession of Saturn's rotational axis and that of Neptune's orbital axis (both of which have periods of about 1.87 million years) has been identified as the likely source of Saturn's large axial tilt (26.7°). Initially, Saturn probably had a tilt closer to that of Jupiter (3.1°). The gradual depletion of the Kuiper belt would have decreased the precession rate of Neptune's orbit; eventually, the frequencies matched, and Saturn's axial precession was captured into the spin-orbit resonance, leading to an increase in Saturn's obliquity. (The angular momentum of Neptune's orbit is 104 times that of Saturn's spin, and thus dominates the interaction.)
The perihelion secular resonance between asteroids and Saturn ("ν6" = "g" − "g6") helps shape the asteroid belt (the subscript "6" identifies Saturn as the sixth planet from the Sun). Asteroids which approach it have their eccentricity slowly increased until they become Mars-crossers, at which point they are usually ejected from the asteroid belt by a close pass to Mars. This resonance forms the inner and "side" boundaries of the asteroid belt around 2 AU, and at inclinations of about 20°.
Numerical simulations have suggested that the eventual formation of a perihelion secular resonance between Mercury and Jupiter ("g1" = "g5") has the potential to greatly increase Mercury's eccentricity and possibly destabilize the inner Solar System several billion years from now.
The Titan Ringlet within Saturn's C Ring represents another type of resonance in which the rate of apsidal precession of one orbit exactly matches the speed of revolution of another. The outer end of this eccentric ringlet always points towards Saturn's major moon Titan.
A "Kozai resonance" occurs when the inclination and eccentricity of a perturbed orbit oscillate synchronously (increasing eccentricity while decreasing inclination and vice versa). This resonance applies only to bodies on highly inclined orbits; as a consequence, such orbits tend to be unstable, since the growing eccentricity would result in small pericenters, typically leading to a collision or (for large moons) destruction by tidal forces.
In an example of another type of resonance involving orbital eccentricity, the eccentricities of Ganymede and Callisto vary with a common period of 181 years, although with opposite phases.
There are only a few known mean-motion resonances in the Solar System involving planets, dwarf planets or larger satellites (a much greater number involve asteroids, planetary rings, moonlets and smaller Kuiper belt objects, including many possible dwarf planets).
Additionally, Haumea is believed to be in a 7:12 resonance with Neptune, and 225088 Gonggong is believed to be in a 3:10 resonance with Neptune.
The simple integer ratios between periods hide more complex relations:
As illustration of the latter, consider the well-known 2:1 resonance of Io-Europa. If the orbital periods were in this exact relation, the mean motions n (inverse of periods, often expressed in degrees per day) would satisfy n_Io − 2n_Eu = 0.
Substituting the observed data, one obtains about 0.7395° per day, a value substantially different from zero.
Actually, the resonance is perfect, but it also involves the precession of the perijove (the point of the orbit closest to Jupiter), with rate ω̇. The correct equation (part of the Laplace equations) is: n_Io − 2n_Eu + ω̇_Io = 0.
In other words, the mean motion of Io is indeed double that of Europa once the precession of the perijove is taken into account. An observer sitting on the (drifting) perijove will see the moons coming into conjunction in the same place (elongation). The other pairs listed above satisfy the same type of equation, with the exception of the Mimas-Tethys resonance. In this case, the resonance satisfies the equation 4n_Tethys − 2n_Mimas − Ω̇_Tethys − Ω̇_Mimas = 0, where Ω̇ denotes the rate of precession of the ascending node.
The point of conjunction librates around the midpoint between the nodes of the two moons.
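The residual discussed above for Io and Europa is easy to verify numerically. The sketch below is a rough illustration; the sidereal periods are assumed reference values:

```python
# Residual of the naive 2:1 relation for Io and Europa.
# Sidereal periods in days; approximate values assumed for illustration.
IO_PERIOD = 1.769138
EUROPA_PERIOD = 3.551181

n_io = 360.0 / IO_PERIOD      # mean motion in degrees per day
n_eu = 360.0 / EUROPA_PERIOD

residual = n_io - 2.0 * n_eu  # would be zero for an exact 2:1 ratio
print(f"n_Io - 2*n_Eu = {residual:.4f} deg/day")  # about 0.74 deg/day
# In the full Laplace relation this drift is cancelled by the precession
# of the perijove, so the resonance is in fact exact.
```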
The Laplace resonance involving Io–Europa–Ganymede includes the following relation locking the "orbital phase" of the moons: Φ = λ_Io − 3λ_Eu + 2λ_Ga = 180°
where λ_Io, λ_Eu and λ_Ga are the mean longitudes of the moons (the second equals sign ignores libration).
This relation makes a triple conjunction impossible. (A Laplace resonance in the Gliese 876 system, in contrast, is associated with one triple conjunction per orbit of the outermost planet, ignoring libration.) Φ librates about 180° with an amplitude of 0.03°.
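A short sketch makes the geometry explicit; the longitudes below are arbitrary illustrative values:

```python
def laplace_angle(lam_io, lam_eu, lam_ga):
    """Laplace angle in degrees, wrapped to [0, 360)."""
    return (lam_io - 3.0 * lam_eu + 2.0 * lam_ga) % 360.0

# A triple conjunction would require equal mean longitudes, which gives
# a Laplace angle of 0 degrees, incompatible with the observed 180:
print(laplace_angle(50.0, 50.0, 50.0))   # -> 0.0

# When Io and Europa are in conjunction (equal longitudes), the relation
# instead places Ganymede 90 degrees away:
print(laplace_angle(50.0, 50.0, 140.0))  # -> 180.0
```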
Another "Laplace-like" resonance involves the moons Styx, Nix and Hydra of Pluto:
This reflects orbital periods for Styx, Nix and Hydra, respectively, that are close to a ratio of 18:22:33 (or, in terms of the near resonances with Charon's period, 3+3/11:4:6; see below); the respective ratio of orbits is 11:9:6. Based on the ratios of synodic periods, there are 5 conjunctions of Styx and Hydra and 3 conjunctions of Nix and Hydra for every 2 conjunctions of Styx and Nix. As with the Galilean satellite resonance, triple conjunctions are forbidden. Φ librates about 180° with an amplitude of at least 10°.
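The stated conjunction pattern follows directly from the orbit counts: two moons meet once for every full lap the inner one gains on the outer, so over one resonant cycle the number of conjunctions for each pair equals the difference of their orbit counts. A minimal check:

```python
from itertools import combinations

# Orbits completed per resonant cycle, from the 11:9:6 ratio above.
orbits = {"Styx": 11, "Nix": 9, "Hydra": 6}

for a, b in combinations(orbits, 2):
    print(f"{a}-{b}: {abs(orbits[a] - orbits[b])} conjunctions per cycle")
# Styx-Nix: 2, Styx-Hydra: 5, Nix-Hydra: 3, matching the 5:3:2 pattern.
```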
The dwarf planet Pluto is following an orbit trapped in a web of resonances with Neptune. The resonances include the 2:3 mean-motion resonance, a libration of the argument of perihelion around 90° that keeps the perihelion above the ecliptic, and a resonance between the longitude of Pluto's perihelion and that of Neptune.
One consequence of these resonances is that a separation of at least 30 AU is maintained when Pluto crosses Neptune's orbit. The minimum separation between the two bodies overall is 17 AU, while the minimum separation between Pluto and Uranus is just 11 AU (see Pluto's orbit for detailed explanation and graphs).
The next largest body in a similar 2:3 resonance with Neptune, called a "plutino", is the probable dwarf planet Orcus. Orcus has an orbit similar in inclination and eccentricity to Pluto's. However, the two are constrained by their mutual resonance with Neptune to always be in opposite phases of their orbits; Orcus is thus sometimes described as the "anti-Pluto".
Neptune's innermost moon, Naiad, is in a 73:69 fourth-order resonance with the next outward moon, Thalassa. As it orbits Neptune, the more inclined Naiad successively passes Thalassa twice from above and then twice from below, in a cycle that repeats every ~21.5 Earth days. The two moons are about 3540 km apart when they pass each other. Although their orbital radii differ by only 1850 km, Naiad swings ~2800 km above or below Thalassa's orbital plane at closest approach. As is common, this resonance stabilizes the orbits by maximizing separation at conjunction, but it is unusual for the role played by orbital inclination in facilitating this avoidance in a case where eccentricities are minimal.
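The quoted ~21.5-day cycle can be recovered from the two orbital periods; the values below are approximate and assumed for illustration:

```python
# Cycle length of the Naiad-Thalassa 73:69 resonance.
# Orbital periods in days; approximate values assumed for illustration.
P_NAIAD = 0.2944
P_THALASSA = 0.3115

print(f"73 Naiad orbits:    {73 * P_NAIAD:.2f} days")     # about 21.5
print(f"69 Thalassa orbits: {69 * P_THALASSA:.2f} days")  # about 21.5
# The two spans nearly coincide, so the pattern of passes between the
# moons repeats on a roughly 21.5-day cycle.
```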
While most extrasolar planetary systems discovered have not been found to have planets in mean-motion resonances, chains of up to five resonant planets, and of up to seven at least near-resonant planets, have been uncovered. Simulations have shown that during planetary system formation, the appearance of resonant chains of planetary embryos is favored by the presence of the primordial gas disc. Once that gas dissipates, 90–95% of those chains must then become unstable to match the low frequency of resonant chains observed.
In this case, the resonant argument Φ librates with an amplitude of 40° ± 13°, and the resonance follows a time-averaged relation among the planets' mean motions.
This represents the first confirmed 4-body orbital resonance. The librations within this system are such that close encounters between two planets occur only when the other planets are in distant parts of their orbits. Simulations indicate that this system of resonances must have formed via planetary migration.
Cases of extrasolar planets close to a 1:2 mean-motion resonance are fairly common. Sixteen percent of systems found by the transit method are reported to have an example of this (with period ratios in the range 1.83–2.18), as well as one sixth of planetary systems characterized by Doppler spectroscopy (with, in this case, a narrower period ratio range). Due to incomplete knowledge of the systems, the actual proportions are likely to be higher. Overall, about a third of radial velocity characterized systems appear to have a pair of planets close to a commensurability. It is much more common for pairs of planets to have orbital period ratios a few percent larger than a mean-motion resonance ratio than a few percent smaller (particularly in the case of first order resonances, in which the integers in the ratio differ by one). This was predicted to be true in cases where tidal interactions with the star are significant.
A number of near-integer-ratio relationships between the orbital frequencies of the planets or major moons are sometimes pointed out (see list below). However, these have no dynamical significance because there is no appropriate precession of perihelion or other libration to make the resonance perfect (see the detailed discussion in the section above). Such near resonances are dynamically insignificant even if the mismatch is quite small because (unlike a true resonance), after each cycle the relative position of the bodies shifts. When averaged over astronomically short timescales, their relative position is random, just like bodies that are nowhere near resonance. For example, consider the orbits of Earth and Venus, which arrive at almost the same configuration after 8 Earth orbits and 13 Venus orbits. The actual ratio is 0.61518624, which is only 0.032% away from exactly 8:13. The mismatch after 8 years is only 1.5° of Venus' orbital movement. Still, this is enough that Venus and Earth find themselves in the opposite relative orientation to the original every 120 such cycles, which is 960 years. Therefore, on timescales of thousands of years or more (still tiny by astronomical standards), their relative position is effectively random.
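The Venus-Earth numbers above can be reproduced in a few lines; the only input is the period ratio quoted in the text:

```python
# Drift of the Venus-Earth 8:13 near resonance.
RATIO = 0.61518624           # Venus orbital period / Earth orbital period

venus_orbits = 8 / RATIO     # Venus orbits completed in 8 Earth years
excess_deg = (venus_orbits - 13) * 360
print(f"mismatch per 8-year cycle: {excess_deg:.2f} deg")  # about 1.5 deg

cycles = 180 / excess_deg    # cycles until the alignment is reversed
print(f"opposite orientation after about {cycles:.0f} cycles, "
      f"or {8 * cycles:.0f} years")  # close to the 120 cycles / 960 years
```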
The presence of a near resonance may reflect that a perfect resonance existed in the past, or that the system is evolving towards one in the future.
Several such orbital frequency coincidences have been catalogued. The least probable of these correlations is that between Io and Metis, followed by those between Rosalind and Cordelia, Pallas and Ceres, Jupiter and Pallas, Callisto and Ganymede, and Hydra and Charon, respectively.
A past resonance between Jupiter and Saturn may have played a dramatic role in early Solar System history. A 2004 computer model by Alessandro Morbidelli of the Observatoire de la Côte d'Azur in Nice suggested that the formation of a 1:2 resonance between Jupiter and Saturn (due to interactions with planetesimals that caused them to migrate inward and outward, respectively) created a gravitational push that propelled both Uranus and Neptune into higher orbits, and in some scenarios caused them to switch places, which would have doubled Neptune's distance from the Sun. The resultant expulsion of objects from the proto-Kuiper belt as Neptune moved outwards could explain the Late Heavy Bombardment 600 million years after the Solar System's formation and the origin of Jupiter's Trojan asteroids. An outward migration of Neptune could also explain the current occupancy of some of its resonances (particularly the 2:5 resonance) within the Kuiper belt.
While Saturn's mid-sized moons Dione and Tethys are not close to an exact resonance now, they may have been in a 2:3 resonance early in the Solar System's history. This would have led to orbital eccentricity and tidal heating that may have warmed Tethys' interior enough to form a subsurface ocean. Subsequent freezing of the ocean after the moons escaped from the resonance may have generated the extensional stresses that created the enormous graben system of Ithaca Chasma on Tethys.
The satellite system of Uranus is notably different from those of Jupiter and Saturn in that it lacks precise resonances among the larger moons, while the majority of the larger moons of Jupiter (3 of the 4 largest) and of Saturn (6 of the 8 largest) are in mean-motion resonances. In all three satellite systems, moons were likely captured into mean-motion resonances in the past as their orbits shifted due to tidal dissipation (a process by which satellites gain orbital energy at the expense of the primary's rotational energy, affecting inner moons disproportionately). In the Uranian system, however, due to the planet's lesser degree of oblateness, and the larger relative size of its satellites, escape from a mean-motion resonance is much easier. Lower oblateness of the primary alters its gravitational field in such a way that different possible resonances are spaced more closely together. A larger relative satellite size increases the strength of their interactions. Both factors lead to more chaotic orbital behavior at or near mean-motion resonances. Escape from a resonance may be associated with capture into a secondary resonance, and/or tidal evolution-driven increases in orbital eccentricity or inclination.
Mean-motion resonances that probably once existed in the Uranian system include (3:5) Ariel-Miranda, (1:3) Umbriel-Miranda, (3:5) Umbriel-Ariel, and (1:4) Titania-Ariel. Evidence for such past resonances includes the relatively high eccentricities of the orbits of Uranus' inner satellites, and the anomalously high orbital inclination of Miranda. High past orbital eccentricities associated with the (1:3) Umbriel-Miranda and (1:4) Titania-Ariel resonances may have led to tidal heating of the interiors of Miranda and Ariel, respectively. Miranda probably escaped from its resonance with Umbriel via a secondary resonance, and the mechanism of this escape is believed to explain why its orbital inclination is more than 10 times those of the other regular Uranian moons (see Uranus' natural satellites).
Similar to the case of Miranda, the present inclinations of Jupiter's moonlets Amalthea and Thebe are thought to be indications of past passage through the 3:1 and 4:2 resonances with Io, respectively.
Neptune's regular moons Proteus and Larissa are thought to have passed through a 1:2 resonance a few hundred million years ago; the moons have drifted away from each other since then because Proteus is outside a synchronous orbit and Larissa is within one. Passage through the resonance is thought to have excited both moons' eccentricities to a degree that has not since been entirely damped out.
In the case of Pluto's satellites, it has been proposed that the present near resonances are relics of a previous precise resonance that was disrupted by tidal damping of the eccentricity of Charon's orbit (see Pluto's natural satellites for details). The near resonances may be maintained by a 15% local fluctuation in the Pluto-Charon gravitational field. Thus, these near resonances may not be coincidental.
The smaller inner moon of the dwarf planet Haumea, Namaka, is one tenth the mass of the larger outer moon, Hiiaka. Namaka revolves around Haumea in 18 days in an eccentric, non-Keplerian orbit, and as of 2008 is inclined 13° from Hiiaka. Over the timescale of the system, it should have been tidally damped into a more circular orbit. It appears that it has been disturbed by resonances with the more massive Hiiaka, due to converging orbits as it moved outward from Haumea because of tidal dissipation. The moons may have been caught in and then escaped from orbital resonance several times. They probably passed through the 3:1 resonance relatively recently, and currently are in or at least close to an 8:3 resonance. Namaka's orbit is strongly perturbed, with a current precession of about −6.5° per year.
Open-wheel car
An open-wheel car (formula car, or often single-seater car in British English) is a car with the wheels outside the car's main body, and usually having only one seat. Open-wheel cars contrast with street cars, sports cars, stock cars, and touring cars, which have their wheels below the body or inside fenders. Open-wheel cars are usually built specifically for road racing, frequently with a higher degree of technological sophistication than in other forms of motor sport. Open-wheel street cars, such as the Ariel Atom, are very scarce as they are often impractical for everyday use.
American racecar driver and constructor Ray Harroun was an early pioneer of the concept of a lightweight single-seater, open-wheel "monoposto" racecar. After working as a mechanic in the automotive industry, Harroun began competitive professional racing in 1906, winning the AAA National Championship in 1910. He was then hired by the Marmon Motor Car Company as chief engineer, charged with building a racecar intended to race at the very first Indianapolis 500, which he went on to win. He developed a revolutionary concept that became the forerunner of the single-seater (i.e. monoposto) racecar design. Harroun has also been credited by some as pioneering the rear-view mirror, which appeared on his 1911 Indianapolis 500 winning car, though he himself claimed he got the idea from seeing a mirror used for a similar purpose on a horse-drawn vehicle in 1904.
A typical open-wheeler has a minimal cockpit sufficient only to enclose the driver's body, with the head exposed to the air. In the Whelen Modified Tour and other short track modified series, the driver's head is contained in the car. In modern cars, the engine is often located directly behind the driver, and drives the rear wheels; except in asphalt modified cars, such as the Whelen Modified Tour, where the engine is in front of the driver. Depending on the rules of the class, many types of open-wheelers have wings at the front and rear of the vehicle, as well as a very low and virtually flat undertray that helps achieve additional aerodynamic downforce pushing the car onto the road.
Some major races, such as the Singapore Grand Prix, Monaco Grand Prix (sanctioned by Formula One) and the Long Beach Grand Prix (sanctioned by IndyCar), are held on temporary street circuits. However, most open-wheel races are on dedicated road courses, such as Watkins Glen International in the US, Nürburgring in Germany, Spa-Francorchamps in Belgium and Silverstone in Great Britain. In the United States, some top-level open-wheel events are held on ovals, of both short track and superspeedway variety, with an emphasis being placed more on speed and endurance than the maneuverability inherently required by road and street course events. The Whelen Modified Tour is the only open-wheel race car series endorsed by NASCAR. This series races on most of NASCAR's most famous tracks in the United States. Other asphalt modified series race on short tracks in the United States and Canada, such as Wyoming County International Speedway in New York. The most well-attended oval race in the world is the annual Indianapolis 500 (Indy 500) in Speedway, Indiana, sanctioned by IndyCar; in the United States, it is quite common to refer to open-wheel cars as IndyCars, or Champ Cars, because of their recognizable appearance and widespread popularity across America at the Indy 500.
Open-wheeled racing is among the fastest in the world. Formula One cars can reach speeds in excess of 300 km/h (190 mph). At Autodromo Nazionale Monza, Antônio Pizzonia of the BMW Williams F1 team recorded a top speed of 369.9 km/h (229.8 mph) in the 2004 Italian Grand Prix. Since the end of the V10 era in 2006, speeds like this have not been reached, with contemporary machinery reaching around 350 km/h (217 mph). It is difficult to give precise figures for the absolute top speeds of Formula One cars, as the cars do not have speedometers as such and the data are not generally released by teams. The 'speed traps' on fast circuits such as Monza give a good indication, but are not necessarily located at the point on the track where the car is travelling at its fastest. The BAR Honda team recorded an average top speed of 397 km/h (247 mph) in 2006 at the Bonneville Salt Flats, with an unofficial top speed reaching 413 km/h (257 mph), using a modified BAR 007 Formula One car. Speeds on ovals can run in constant excess of 220 mph (354 km/h), and at Indianapolis in excess of 230 mph (370 km/h). In 2000, Gil de Ferran set the one-lap qualifying record of 241.428 mph (388.5 km/h) at California Speedway. Even on tight non-oval street circuits such as the Grand Prix of Toronto, open-wheel Indy Cars attain speeds of around 180 mph (290 km/h).
Driving an open-wheel car is substantially different from driving a car with fenders. Virtually all Formula One and Indycar drivers spent some time in various open-wheel categories before joining either top series. Open-wheel vehicles, due to their light weight, aerodynamic capabilities, and powerful engines, are often considered the fastest racing vehicles available and among the most challenging to master. Wheel-to-wheel contact is dangerous, particularly when the forward edge of one tire contacts the rear of another tire: since the treads are moving in opposite directions (one upward, one downward) at the point of contact, both wheels rapidly decelerate, torquing the chassis of both cars and often causing one or both vehicles to be suddenly and powerfully flung upwards (the rear car tends to pitch forward, and the front car tends to pitch backward.) An example of this is the 2005 Chicagoland crash of Ryan Briscoe with Alex Barron.
The lower weight of an open-wheel racecar allows for better performance. While the exposure of the wheels to the airstream causes a very high aerodynamic drag at high speeds, it allows improved cooling of the brakes, which is important on road courses with their frequent changes of pace.
In 2018, several single seater series such as Formula One, Formula 2 (with their new Dallara F2 2018 chassis), and Formula E (with their new Spark SRT05e chassis) introduced a protection system to the cockpit called the "halo", a wishbone-shaped frame aimed to deflect debris away from a driver's head. Despite initial criticism, the device drew praise in the Formula 2 sprint race in Catalunya, when Nirei Fukuzumi spun and the back of his car landed on fellow countryman Tadasuke Makino's halo. In the Formula 1 Belgian Grand Prix, McLaren driver Fernando Alonso was sent airborne after being hit from behind by the Renault of Nico Hülkenberg and struck the halo of Sauber driver Charles Leclerc, thereby saving the Monegasque driver from a visor strike.
In 2019, the newly formed FIA Formula 3 Championship introduced a halo on its new chassis, which was unveiled at the 2018 Abu Dhabi Grand Prix.
In 2020, the Indycar Series adopted a halo combined with an aeroscreen, built by Red Bull Advanced Technologies.
Offshore powerboat racing
Offshore powerboat racing is a type of racing by ocean-going powerboats, typically point-to-point racing.
In most of the world, offshore powerboat racing is led by the Union Internationale Motonautique (UIM) regulated Class 1 and Powerboat GPS (formerly known as Powerboat P1). In the USA, offshore powerboat racing is led by the APBA/UIM and consists of races hosted by Powerboat P1.
The sport is financed by a mixture of private funding and commercial sponsors.
In 1903, the Automobile Club of Great Britain and Ireland and its offshoot, the Marine Motor Association, organised a race of auto-boats. The winner was awarded the Harmsworth Trophy. Offshore powerboat racing was first recognised as a sport when, in 1904, a race took place from the south-eastern coast of England to Calais, France. In the United States, the APBA (American Power Boat Association) was formed soon thereafter and the first U.S. recorded race was in 1911, in California.
The sport increased in popularity over the next few years in the United States, with 10 races being scheduled during the 1917 season. The sport's growth was disrupted in Europe during World War I.
Over the period 1927–35 there was huge interest in powerboat racing in Europe, both on sea water and on freshwater rivers and lakes. These boats, described as hydroplanes, were powered by Evinrude, Elto, Johnson, Lockwood, and Watermota outboard engines.
The sport entered the modern era in the 1960s, with notable names like Jim Wynne, Don Aronow, and Dick Bertram competing in events such as the Bahamas race. During that time, the 'navigator' position in the raceboat was extremely important (unlike in today's small, track-like circuits), as finding small checkpoints over a hundred-mile open ocean run was a difficult endeavour.
The list of modern world champions extended into the 1980s, when the sport entered the catamaran and then the 'superboat' era: the 1000 cubic inch total engine displacement restrictions were lifted for boats over a certain length, and soon three- and four-engine boats sporting F16 fighter canopies replaced the venerable deep-vee hulls that had been the sport's top category for twenty years.
Modern races are short, track style events with much improved viewing for the spectators, and the different categories of boats have multiplied far beyond the 4 classes that were common through much of the 1960s, '70s, and '80s.
In recent years the largest number of entries in offshore races have been for the Cowes – Torquay – Cowes and Cowes – Poole – Cowes races held by the British Offshore Powerboat Race Club.
Class 1 World Powerboat Championship
Class 1 has come a long way technologically since first being sanctioned by the U.I.M. in 1964. Shortly after its advent, Americans Jim Wynne, Dick Bertram and Don Aronow supported technological advancement, with Daytona, Mercruiser, and AeroMarine. In the 1980s European design became more prominent: Don Shead's aluminium monohulls, the Italian manufacturers Picchiotti and CUV, and the James Beard-Clive Curtis Cougar catamarans set the pace. Fabio Buzzi took a giant step forward with the introduction of glass-reinforced polymer hulls, turbo-charged engines, and integral surface drives, and the 1990s subsequently saw Michael Peters' designs and the Tencara and Victory hulls dominate, with Sterling, Lamborghini, Seatek and, more recently, Mercury sharing the power battle.
Weighing in at around 5 tonnes, each boat in the Class 1 fleet is approximately 12-14m in length, 3.5m wide, and constructed using composite materials. All the boats are catamarans.
In 2012, it was announced that a new series of 'ultra-marathon' offshore races would be run every two years under the title of the Venture Cup. The first race was scheduled to take place in June 2013 from Cowes in the UK to Monte Carlo, reflecting what many consider to have been the greatest powerboat race ever: the 1972 London to Monte-Carlo race. The Venture Cup is billed as the world's longest, toughest and most prestigious powerboat race. The 2013 race was, however, cancelled because of a lack of funding and replaced by a prologue event.
In 2015 the Venture Offshore Cup was announced. The race was to be run around the entire coast of Ireland, beginning in Cork and ending in Dublin with multiple stops en route. However, in May 2016 the organisers cancelled the race again.
P1 SuperStock is a single class powerboat race series. It has international recognition and guaranteed media exposure and is broadcast on TV. P1 SuperStock is approved by the sport’s governing body, the Union Internationale Motonautique (UIM), as an international class of powerboat racing.
P1 SuperStock is a major sporting festival over five or six weekends in May through October. There are up to six races over the race weekend, lasting 30–45 minutes each. The free events attract thousands of spectators and often run alongside the AquaX jetski series. All teams race in P1 Panther race boats with 250HP outboard engines.
Powerboat P1 Management Ltd is the rights-holder for P1 SuperStock and also owns the rights to Powerboat P1 World Championship and P1 Aqua X. In the USA, a wholly owned subsidiary, P1 USA, manages all aspects of racing throughout North America.
The Boats
250+ hp Class
This sport racer is powered by a 250+ hp engine, which propels the boat to high speeds in flat water, while its lower centre of gravity provides greater stability and improved handling.
The series was officially founded as Powerboat P1 World Championship in May 2003 in Nettuno, Italy. Twelve boats, the majority of which were Italian, raced in the first-ever Grand Prix of the Sea. Starting out with 15-year-old aluminum boats, Powerboat P1 boats evolved dramatically through the decade to the point where the mono-hull twin-engine boats were kicking out around 1800 hp. During the Powerboat P1 World Championship era, which spanned 2003 to 2009, there was 40% more horsepower on a P1 starting grid than on a Formula 1 grid.
In 2010, Powerboat P1 Management Ltd took the decision to cancel the championship. Instead the UIM took over the series' management and renamed it Powerboat GPS (Grand Prix of the Sea), continuing the championship. The series is split between Evolution class and Supersport class. All the boats are V-type monohulls.
There is a P1 Grand Prix of the Sea in Scotland every year.
The Cowes-Torquay was launched by Sir Max Aitken, 2nd Baronet, as the first offshore powerboat race in Europe in 1961.
It is the longest-running offshore powerboat race in the world.
Initially sponsored by the Daily Express newspaper, its success encouraged several countries in Europe and the Middle East to follow suit. Hence it can rightly claim to have introduced offshore powerboat racing to the rest of the world outside the United States, where the modern sport had been launched with the first Miami-Nassau race in 1956.
In 1967, the Union Internationale Motonautique, the world governing authority for powerboat racing, introduced the World Offshore Championship as a memorial to Sam Griffith, the American founder of modern offshore racing.
In order for the race to qualify as a championship heat, the format was changed: instead of finishing at Torquay, the fleet returned to Cowes, a pattern that remains to this day.
The race is organised by the British Powerboat Racing Club.
Event Director Martin Levi, son of powerboat designer Sonny Levi, took over the running of the event in 2016.
The Round Britain Powerboat Race has been run on three previous occasions.
1,459 miles, divided into 10 racing stages and one slow cruise; flat calm seas under blazing skies, a thick pea-souper fog, and a rough coastal run; 42 assorted boats ranging in power from 100 hp to 1,000 hp.
The most outstanding feature of this marathon race was undoubtedly the freak weather, as most participants called it: for the first 700 miles to Oban the conditions were as near perfect as they could be, and the fog on the Inverness-Dundee run and the rough seas of the Dundee-Whitby leg were greeted almost with glee.
For Avenger Too, crewed by Timo Mäkinen, Pascoe Watson and Brian Hendicott, the Round Britain race was a success story from start to finish. They won the first leg to Falmouth and the second leg to Milford Haven; on the run to Douglas they were third, but still retained their overall lead. Only once during the entire race were they pushed from that leading position, and they had such a handsome lead that they could afford to tuck in behind a slower radar-equipped boat on the foggy run to Dundee and still emerge the leaders by two hours.
Their final victory, in a total time of just over 39 hours, represented an average speed, sustained over 1,381 nautical miles of racing, of 37.1 knots.
A Class 3, offshore, open-cockpit race, held between 1964 and 1968. The course ran between Falmouth and Plymouth. In the 1966 race only four boats out of eighteen entered finished the course. Originally, the course ran from Black Rock, Falmouth, to Plymouth and back, with marks at the Manacles rock and Looe Island. From 1967, the course started in Plymouth: a straight run from Plymouth to Black Rock, Falmouth, and then a return to Plymouth, an approximate distance of 100 miles. Notable winners include Tommy Sopwith in 1965 and Fiona Gore in 1968.
Once again the course for this great race was to imitate the 1969 version. Organised by ex-powerboat racer Tim Powell, and after two years of concept and design work, Tim managed to obtain sponsorship from Everest Double Glazing, which ensured the success of the race. With famous racers such as Fabio Buzzi, Lady Arran, Colin Gervase-Brazier, Peter Armstrong, Ted Toleman, Renato della Valle and many others, the fleet set off on 14 July 1984, once again from Portsmouth, on its 1,400-mile journey around the British Isles.
The two main contenders were Buzzi cruiser-based White Iveco, raced by company owner Fabio Buzzi, and Renato della Valle’s Ego Lamborghini. White Iveco was a single-step monohull powered by four Iveco diesels, while Ego was a Don Shead designed 38 ft (11.6 m) hull powered by a pair of 7-litre, marinised V12 Lamborghini petrol engines. Weather conditions for the first leg were poor and of the 28 starters at Portsmouth, only 18 boats reached Falmouth. By the end of the second leg only 12 remained. By the halfway stage, White Iveco led on elapsed time with Ego Lamborghini behind.
British hopes lay in the hands of Double Two Shirts, a 40 ft (12.1 m) Shead-designed, Planatec-built racer with Sabre diesel power, lying two hours back. An indication of the performance of these powerboats can be gauged from the Dundee to Whitby leg: over a distance of 157 miles, White Iveco averaged 69 knots, though Buzzi dismissed this with a typical Italian shrug, saying, "In Italy this is just a cruising boat." However, at Ramsgate, while White Iveco was being craned out of the water for an overhaul, she slipped from her cradle, landed on a bollard and gashed her hull. A feverish 36 hours followed while repairs were made so that she could complete the final leg. At the finish she was in first place, with Colin Gervase-Brazier's "The Legend" second and Ego Lamborghini third.
Significantly, Motorboats and Yachting commented that the number of retirements demonstrated that though undoubtedly fast, some Class I craft had proved themselves to be unsafe in anything other than calm waters.
After a period of 24 years, another ex-powerboat racer and now-retired businessman, Mike Lloyd, decided in 2006 that this great race should be brought back to life. He and his small team - including Peter Myles - fought for two years to ensure it did take place. Supported by 47 competitors and Fiat Powertrain, the fleet eventually left once again from Gunwharf Quays in Portsmouth at 09:30 on 21 June 2008 on this ten-leg, twelve-day race.
Fabio Buzzi had decided to take part in his old but famous four-engined Red FPT, as had the famous racer Hannes Bohinc in Wettpunkt. There was a strong contingent of three boats from Goldfish of Norway, and competitors from Sweden, Greece, Germany, Scotland and Ireland.
As in the previous races, the weather at the start was awful, and once the fleet of 47 boats had negotiated the many excited support boats within the Solent and entered the serious seas off the Needles, the crews knew they were in for a tough leg. Before reaching the Solent, Fabio Buzzi retired with damaged drives, and the infamous Lyme Bay between Portland Bill and Torquay took out several more, including Wettpunkt and the German-owned and -driven Blue Marlin, which sank in Lyme Bay in 50 metres of water; all her crew, however, were rescued and returned safely to land. The leg to Plymouth was won by the British crew of Silverline (owned and driven by famous offshore racer Drew Langdon and Miles Jennings), with the Norwegian "Lionhead" second and, the surprise of the day, the Greek boat Blue FPT third. The second leg the next day had to be cancelled because of huge seas in the Bristol Channel, so the fleet made its way by road to Milford Haven in South Wales, ready for the run to Northern Ireland the following day.
The Round Britain Powerboat Race is the last remaining long-distance offshore powerboat race of more than 1,000 miles anywhere in the world. It is a real test of strength, determination and speed, and shows how the best results are achieved by well-built boats that can maintain consistently high performance levels thanks to the reliability of their technical equipment.
The Needles Trophy was first presented in 1932 and awarded every year until 1938. After a break it was presented again in 1951, 1952, 1954 and 1956; then, after another break, it was awarded annually from 1967 until 1989 inclusive.
Ogden Nash
Frederic Ogden Nash (August 19, 1902 – May 19, 1971) was an American poet well known for his light verse, of which he wrote over 500 pieces. With his unconventional rhyming schemes, he was declared the country's best-known producer of humorous poetry.
Nash was born in Rye, New York, the son of Mattie (Chenault) and Edmund Strudwick Nash. His father owned and operated an import–export company, and because of business obligations, the family relocated often. Nash was descended from Abner Nash, an early governor of North Carolina. The city of Nashville, Tennessee, was named for Abner's brother, Francis, a Revolutionary War general.
Throughout his life, Nash loved to rhyme. "I think in terms of rhyme, and have since I was six years old," he stated in a 1958 news interview. He had a fondness for crafting his own words whenever rhyming words did not exist, though admitting that crafting rhymes was not always the easiest task.
His family lived briefly in Savannah, Georgia, in a carriage house owned by Juliette Gordon Low, founder of the Girl Scouts of the USA; he wrote a poem about Mrs. Low's House. After graduating from St. George's School in Newport County, Rhode Island, Nash entered Harvard University in 1920, only to drop out a year later.
He returned as a teacher to St. George's for one year before returning to New York. There, he took up selling bonds, about which Nash reportedly quipped, "Came to New York to make my fortune as a bond salesman and in two years sold one bond—to my godmother. However, I saw lots of good movies." Nash then took a position as a writer of the streetcar card ads for Barron Collier, a company that previously had employed another Baltimore resident, F. Scott Fitzgerald. While working as an editor at Doubleday, he submitted some short rhymes to "The New Yorker". Editor Harold Ross wrote Nash asking for more, saying "They are about the most original stuff we have had lately." Nash spent three months in 1931 working on the editorial staff for "The New Yorker".
In 1931 he married Frances Leonard. He published his first collection of poems, "Hard Lines", that same year, earning him national recognition. Some of his poems reflected an anti-establishment feeling. For example, one verse, titled "Common Sense", asks:
In 1934, Nash moved to Baltimore, Maryland, where he remained until his death in 1971. Nash thought of Baltimore as home. After his return from a brief move to New York, he wrote, apropos Richard Lovelace, "I could have loved New York had I not loved Balti-more."
When Nash wasn't writing poems, he made guest appearances on comedy and radio shows and toured the United States and the United Kingdom, giving lectures at colleges and universities.
Nash was regarded with respect by the literary establishment, and his poems were frequently anthologized even in serious collections such as Selden Rodman's 1946 "A New Anthology of Modern Poetry."
Nash was the lyricist for the Broadway musical "One Touch of Venus", collaborating with librettist S. J. Perelman and composer Kurt Weill. The show included the notable song "Speak Low." He also wrote the lyrics for the 1952 revue "Two's Company".
Nash and his love of the Baltimore Colts were featured in the December 13, 1968 issue of "Life", with several poems about the American football team matched to full-page pictures. Entitled "My Colts, verses and reverses," the issue includes his poems and photographs by Arthur Rickerby. "Mr. Nash, the league leading writer of light verse (Averaging better than 6.3 lines per carry), lives in Baltimore and loves the Colts," it declares. The comments further describe Nash as "a fanatic of the Baltimore Colts, and a gentleman." Featured on the magazine cover is defensive player Dennis Gaubatz, number 53, in midair pursuit with this description: "That is he, looming 10 feet tall or taller above the Steelers' signal caller ... Since Gaubatz acts like this on Sunday, I'll do my quarterbacking Monday." Memorable Colts Jimmy Orr, Billy Ray Smith, Bubba Smith, Willie Richardson, Dick Szymanski and Lou Michaels contribute to the poetry.
Among his most popular writings were a series of animal verses, many of which featured his off-kilter rhyming devices. Examples include "If called by a panther / Don't anther"; "Who wants my jellyfish? / I'm not sellyfish!"; "The one-L lama, he's a priest. The two-L llama, he's a beast. And I will bet a silk pajama: there isn't any three-L lllama!". Nash later appended the footnote "*The author's attention has been called to a type of conflagration known as a three-alarmer. Pooh."
The best of his work was published in 14 volumes between 1931 and 1972.
Nash died at Baltimore's Johns Hopkins Hospital on May 19, 1971, of complications from Crohn's disease aggravated by a lactobacillus infection transmitted by improperly prepared coleslaw. He is buried in East Side Cemetery in North Hampton, New Hampshire.
At the time of his death in 1971, "The New York Times" said his "droll verse with its unconventional rhymes made him the country's best-known producer of humorous poetry."
A biography, "Ogden Nash: the Life and Work of America's Laureate of Light Verse", was written by Douglas M. Parker, published in 2005 and in paperback in 2007. The book was written with the cooperation of the Nash family, and quotes extensively from Nash's personal correspondence as well as his poetry.
His daughter Isabel was married to noted photographer Fred Eberstadt, and his granddaughter, Fernanda Eberstadt, is an acclaimed author. Nash had one other daughter, Linell Nash Smith.
Nash was best known for surprising, pun-like rhymes, sometimes with words deliberately misspelled for comic effect, as in his retort to Dorothy Parker's humorous dictum, "Men seldom make passes / At girls who wear glasses:"
In this example, the word "nectacled" sounds like the phrase "neck tickled" when rhymed with the previous line.
Sometimes the words rhyme by mispronunciation rather than misspelling, as in:
Another typical example of rhyming by combining words occurs in "The Adventures of Isabel", when Isabel confronts a witch who threatens to turn her into a toad:
Nash often wrote in an exaggerated verse form with pairs of lines that rhyme, but are of dissimilar length and irregular meter:
Nash's poetry was often a playful twist of an old saying or poem. For one example, in a twist on Joyce Kilmer's poem "Trees" (1913), which contains "I think that I shall never see / a poem lovely as a tree"; Nash replaces “poem” with "billboard" and adds, "Indeed, unless the billboards fall / I'll never see a tree at all."
Nash, a baseball fan, wrote a poem titled "Line-Up for Yesterday," an alphabetical poem listing baseball immortals. Published in "Sport" magazine in January 1949, the poem pays tribute to highly respected baseball players and to his own fandom, in alphabetical order. Lines include:
Nash wrote humorous poems for each movement of the Camille Saint-Saëns orchestral suite "The Carnival of the Animals", which are sometimes recited when the work is performed. The original recording of this version was made by Columbia Records in the 1940s, with Noël Coward reciting the poems and Andre Kostelanetz conducting the orchestra.
He wrote a humorous poem about the IRS and income tax titled "Song for the Saddest Ides", a reference to March 15, the ides of March, when federal taxes were due at the time. It was later set to music and performed by the IRS Chorale until its composer/conductor's later retirement.
Many of his poems, reflecting the times in which they were written, presented stereotypes of different nationalities. For example, in "Genealogical Reflections" he writes:
In "The Japanese" published in 1938, Nash presents an allegory for the expansionist policies of the Empire of Japan:
He published some poems for children, including "The Adventures of Isabel", which begins:
The US Postal Service released a postage stamp featuring Ogden Nash and text from six of his poems on the centennial of his birth on August 19, 2002. The six poems are "The Turtle", "The Cow", "Crossing The Border", "The Kitten", "The Camel", and "Limerick One". It was the first stamp in the history of the USPS to include the word "sex"; it can be found under the "O" and is part of "The Turtle". The stamp is the eighteenth in the Literary Arts section. The first-issue ceremony took place in Baltimore on August 19 at the home at 4300 Rugby Road that he and his wife Frances shared with his parents, where he did most of his writing.
Octahedron
In geometry, an octahedron (plural: octahedra) is a polyhedron with eight faces, twelve edges, and six vertices. The term is most commonly used to refer to the regular octahedron, a Platonic solid composed of eight equilateral triangles, four of which meet at each vertex.
A regular octahedron is the dual polyhedron of a cube. It is a rectified tetrahedron. It is a square bipyramid in any of three orthogonal orientations. It is also a triangular antiprism in any of four orientations.
An octahedron is the three-dimensional case of the more general concept of a cross polytope.
A regular octahedron is a 3-ball in the Manhattan (ℓ1) metric.
If the edge length of a regular octahedron is "a", the radius of a circumscribed sphere (one that touches the octahedron at all vertices) is $r_u = \frac{\sqrt{2}}{2}\,a \approx 0.707\,a$,
and the radius of an inscribed sphere (tangent to each of the octahedron's faces) is $r_i = \frac{\sqrt{6}}{6}\,a \approx 0.408\,a$,
while the midradius, which touches the middle of each edge, is $r_m = \frac{1}{2}\,a = 0.5\,a$.
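These three radii are easy to verify numerically from the axis-aligned vertex placement described below; a minimal sketch using NumPy (variable names are illustrative):

```python
# Numerical check of circumradius, inradius and midradius for a regular
# octahedron with vertices on the coordinate axes (edge length sqrt(2)).
import numpy as np

verts = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]], float)
a = np.sqrt(2.0)                                  # edge length of this octahedron

r_u = np.linalg.norm(verts[0])                    # centre to a vertex
r_m = np.linalg.norm((verts[0] + verts[2]) / 2)   # centre to an edge midpoint
normal = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # unit normal of the face x+y+z=1
face_centroid = (verts[0] + verts[2] + verts[4]) / 3
r_i = abs(face_centroid @ normal)                 # centre to the face plane

print(r_u / a, np.sqrt(2) / 2)    # ~0.7071 in both cases
print(r_i / a, np.sqrt(6) / 6)    # ~0.4082
print(r_m / a, 0.5)               # 0.5
```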
The "octahedron" has four special orthogonal projections, centered, on an edge, vertex, face, and normal to a face. The second and third correspond to the B2 and A2 Coxeter planes.
The octahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
An octahedron with edge length $\sqrt{2}$ can be placed with its center at the origin and its vertices on the coordinate axes; the Cartesian coordinates of the vertices are then $(\pm 1, 0, 0)$, $(0, \pm 1, 0)$, $(0, 0, \pm 1)$.
In an "x"–"y"–"z" Cartesian coordinate system, the octahedron with center coordinates ("a", "b", "c") and radius "r" is the set of all points ("x", "y", "z") such that
The surface area "A" and the volume "V" of a regular octahedron of edge length "a" are:
Thus the volume is four times that of a regular tetrahedron with the same edge length, while the surface area is twice as large (because we have eight rather than four triangles).
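The factor of four can be checked directly against the standard tetrahedron volume formula:

```latex
V_{\text{tet}} = \frac{a^3}{6\sqrt{2}}, \qquad
4\,V_{\text{tet}} = \frac{4a^3}{6\sqrt{2}} = \frac{2a^3}{3\sqrt{2}}
                  = \frac{\sqrt{2}}{3}\,a^3 = V_{\text{oct}}.
```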
If an octahedron has been stretched so that it obeys the equation $\left|\frac{x}{x_m}\right| + \left|\frac{y}{y_m}\right| + \left|\frac{z}{z_m}\right| = 1$ (with semi-axes $x_m$, $y_m$, $z_m$),
the formulas for the surface area and volume expand to become $A = 4\,x_m y_m z_m \sqrt{\frac{1}{x_m^2} + \frac{1}{y_m^2} + \frac{1}{z_m^2}}$ and $V = \frac{4}{3}\,x_m y_m z_m$.
Additionally, the inertia tensor of the stretched octahedron (of mass $m$ and uniform density) is $I = \frac{m}{10} \operatorname{diag}\!\left(y_m^2 + z_m^2,\; x_m^2 + z_m^2,\; x_m^2 + y_m^2\right)$.
These reduce to the equations for the regular octahedron when $x_m = y_m = z_m = \frac{a\sqrt{2}}{2}$.
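Substituting this regular value into the stretched formulas recovers the earlier results, which is a useful consistency check:

```latex
V = \frac{4}{3}\left(\frac{a}{\sqrt{2}}\right)^{3}
  = \frac{4}{3}\cdot\frac{a^{3}}{2\sqrt{2}}
  = \frac{\sqrt{2}}{3}\,a^{3},
\qquad
A = 4\left(\frac{a}{\sqrt{2}}\right)^{3}\sqrt{\frac{3}{(a/\sqrt{2})^{2}}}
  = 4\sqrt{3}\left(\frac{a}{\sqrt{2}}\right)^{2}
  = 2\sqrt{3}\,a^{2}.
```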
The interior of the compound of two dual tetrahedra is an octahedron, and this compound, called the stella octangula, is its first and only stellation. Correspondingly, a regular octahedron is the result of cutting off from a regular tetrahedron four regular tetrahedra of half the linear size (i.e. rectifying the tetrahedron). The vertices of the octahedron lie at the midpoints of the edges of the tetrahedron, and in this sense it relates to the tetrahedron in the same way that the cuboctahedron and icosidodecahedron relate to the other Platonic solids. One can also divide the edges of an octahedron in the ratio of the golden mean to define the vertices of an icosahedron. This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly partitioning each edge into the golden mean along the direction of its vector. There are five octahedra that define any given icosahedron in this fashion, and together they define a "regular compound".
Octahedra and tetrahedra can be alternated to form a vertex, edge, and face-uniform tessellation of space, called the octet truss by Buckminster Fuller. This is the only such tiling save the regular tessellation of cubes, and is one of the 28 convex uniform honeycombs. Another is a tessellation of octahedra and cuboctahedra.
The octahedron is unique among the Platonic solids in having an even number of faces meeting at each vertex. Consequently, it is the only member of that group to possess mirror planes that do not pass through any of the faces.
Using the standard nomenclature for Johnson solids, an octahedron would be called a "square bipyramid". Truncation of two opposite vertices results in a square bifrustum.
The octahedron is 4-connected, meaning that it takes the removal of four vertices to disconnect the remaining vertices. It is one of only four 4-connected simplicial well-covered polyhedra, meaning that all of the maximal independent sets of its vertices have the same size. The other three polyhedra with this property are the pentagonal dipyramid, the snub disphenoid, and an irregular polyhedron with 12 vertices and 20 triangular faces.
The octahedron can also be generated as the special case of a 3D superellipsoid with all exponent values set to 1.
There are 3 uniform colorings of the octahedron, named by the triangular face colors going around each vertex: 1212, 1112, 1111.
The octahedron's symmetry group is Oh, of order 48, the three dimensional hyperoctahedral group. This group's subgroups include D3d (order 12), the symmetry group of a triangular antiprism; D4h (order 16), the symmetry group of a square bipyramid; and Td (order 24), the symmetry group of a rectified tetrahedron. These symmetries can be emphasized by different colorings of the faces.
It has eleven arrangements of nets.
The octahedron is the dual polyhedron to the cube.
If the length of an edge of the octahedron is $a$, then the length of an edge of the dual cube, taking the cube's vertices at the centres of the octahedron's faces, is $\frac{\sqrt{2}}{3}\,a$.
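A quick numerical check of this ratio (a sketch; the dual cube here is the one whose vertices are the octahedron's face centroids):

```python
# Confirm the dual-cube edge ratio sqrt(2)/3 by computing the centroids
# of two adjacent faces of the octahedron with vertices (+-1,0,0) etc.
import numpy as np

e1, e2, e3 = np.eye(3)
a = np.linalg.norm(e1 - e2)            # octahedron edge length = sqrt(2)

c1 = (e1 + e2 + e3) / 3                # centroid of face {+x, +y, +z}
c2 = (e1 + e2 - e3) / 3                # centroid of adjacent face {+x, +y, -z}
cube_edge = np.linalg.norm(c1 - c2)    # edge of the dual cube = 2/3

print(cube_edge / a, np.sqrt(2) / 3)   # both ~0.4714
```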
The uniform tetrahemihexahedron is a tetrahedral symmetry faceting of the regular octahedron, sharing edge and vertex arrangement. It has four of the triangular faces, and 3 central squares.
The following polyhedra are combinatorially equivalent to the regular octahedron. They all have six vertices, eight triangular faces, and twelve edges that correspond one-for-one with the features of a regular octahedron.
More generally, an octahedron can be any polyhedron with eight faces. The regular octahedron has 6 vertices and 12 edges, the minimum for an octahedron; irregular octahedra may have as many as 12 vertices and 18 edges.
There are 257 topologically distinct "convex" octahedra, excluding mirror images. More specifically there are 2, 11, 42, 74, 76, 38, 14 for octahedra with 6 to 12 vertices respectively. (Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces.)
Some better known irregular octahedra include the following:
A framework of repeating tetrahedrons and octahedrons was invented by Buckminster Fuller in the 1950s, known as a space frame, commonly regarded as the strongest structure for resisting cantilever stresses.
A regular octahedron can be augmented into a tetrahedron by adding 4 tetrahedra on alternated faces. Adding tetrahedra to all 8 faces creates the stellated octahedron.
The octahedron is one of a family of uniform polyhedra related to the cube.
It is also one of the simplest examples of a hypersimplex, a polytope formed by certain intersections of a hypercube with a hyperplane.
The octahedron is topologically related as a part of sequence of regular polyhedra with Schläfli symbols {3,"n"}, continuing into the hyperbolic plane.
The regular octahedron can also be considered a "rectified tetrahedron" – and can be called a "tetratetrahedron". This can be shown by a 2-color face model. With this coloring, the octahedron has tetrahedral symmetry.
Compare this truncation sequence between a tetrahedron and its dual:
The above shapes may also be realized as slices orthogonal to the long diagonal of a tesseract. If this diagonal is oriented vertically with a height of 1, then the first five slices above occur at heights "r", 3/8, 1/2, 5/8, and "s", where "r" is any number in the range 0 < "r" ≤ 1/4, and "s" is any number in the range 3/4 ≤ "s" < 1.
The octahedron as a "tetratetrahedron" exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3."n")2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *"n"32 all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right angle corner of the domain.
As a trigonal antiprism, the octahedron is related to the hexagonal dihedral symmetry family.
Ole Rømer
Ole Christensen Rømer (; 25 September 1644 – 19 September 1710) was a Danish astronomer who, in 1676, made the first quantitative measurements of the speed of light.
Rømer also invented the modern thermometer, with a scale defined between two fixed points: the temperatures at which water boils and freezes.
In scientific literature, alternative spellings such as "Roemer", "Römer", or "Romer" are common.
Rømer was born on 25 September 1644 in Århus to merchant and skipper Christen Pedersen (died 1663), and Anna Olufsdatter Storm ( – 1690), daughter of a well-to-do alderman. Since 1642, Christen Pedersen had taken to using the name Rømer, which means that he was from the Danish island of Rømø, to distinguish himself from a couple of other people named Christen Pedersen. There are few records of Ole Rømer before 1662, when he graduated from the old Aarhus Katedralskole (the Cathedral school of Aarhus), moved to Copenhagen and matriculated at the University of Copenhagen. His mentor at the University was Rasmus Bartholin, who published his discovery of the double refraction of a light ray by Iceland spar (calcite) in 1668, while Rømer was living in his home. Rømer was given every opportunity to learn mathematics and astronomy using Tycho Brahe's astronomical observations, as Bartholin had been given the task of preparing them for publication.
Rømer was employed by the French government: Louis XIV made him tutor for the Dauphin, and he also took part in the construction of the magnificent fountains at Versailles.
In 1681, Rømer returned to Denmark and was appointed professor of astronomy at the University of Copenhagen, and the same year he married Anne Marie Bartholin, the daughter of Rasmus Bartholin. He was active also as an observer, both at the University Observatory at Rundetårn and in his home, using improved instruments of his own construction. Unfortunately, his observations have not survived: they were lost in the great Copenhagen Fire of 1728. However, a former assistant (and later an astronomer in his own right), Peder Horrebow, loyally described and wrote about Rømer's observations.
In Rømer's position as royal mathematician, he introduced the first national system for weights and measures in Denmark on 1 May 1683. Initially based on the Rhine foot, a more accurate national standard was adopted in 1698. Later measurements of the standards fabricated for length and volume show an excellent degree of accuracy. His goal was to achieve a definition based on astronomical constants, using a pendulum; this happened only after his death, as practical difficulties made such a definition too inaccurate at the time. Also notable is his definition of the new Danish mile of 24,000 Danish feet (circa 7,532 m).
In 1700, Rømer persuaded the king to introduce the Gregorian calendar in Denmark-Norway – something Tycho Brahe had argued for in vain a hundred years earlier.
Rømer developed one of the first temperature scales while convalescing from a broken leg. Daniel Gabriel Fahrenheit visited him in 1708 and improved on the Rømer scale, the result being the familiar Fahrenheit temperature scale still in use today in a few countries.
Rømer also established navigation schools in several Danish cities.
In 1705, Rømer was made the second Chief of the Copenhagen Police, a position he kept until his death in 1710. As one of his first acts, he fired the entire force, being convinced that the morale was alarmingly low. He was the inventor of the first street lights (oil lamps) in Copenhagen, and worked hard to try to control the beggars, poor people, unemployed, and prostitutes of Copenhagen.
In Copenhagen, Rømer made rules for building new houses, got the city's water supply and sewers back in order, ensured that the city's fire department got new and better equipment, and was the moving force behind the planning and making of new pavement in the streets and on the city squares.
Rømer died at the age of 65 in 1710. He was buried in Copenhagen Cathedral, which has since been rebuilt following its destruction in the Battle of Copenhagen (1807). There is a modern memorial.
The determination of longitude is a significant practical problem in cartography and navigation. Philip III of Spain offered a prize for a method to determine the longitude of a ship out of sight of land, and Galileo proposed a method of establishing the time of day, and thus longitude, based on the times of the eclipses of the moons of Jupiter, in essence using the Jovian system as a cosmic clock; this method was not significantly improved until accurate mechanical clocks were developed in the eighteenth century. Galileo proposed this method to the Spanish crown (1616–1617) but it proved to be impractical, because of the inaccuracies of Galileo's timetables and the difficulty of observing the eclipses on a ship. However, with refinements, the method could be made to work on land.
After studies in Copenhagen, Rømer joined Jean Picard in 1671 to observe about 140 eclipses of Jupiter's moon Io on the island of Hven, at the former location of Tycho Brahe's observatory of Uraniborg near Copenhagen, over a period of several months, while in Paris Giovanni Domenico Cassini observed the same eclipses. By comparing the times of the eclipses, the difference in longitude between Paris and Uraniborg was calculated.
Cassini had observed the moons of Jupiter between 1666 and 1668, and discovered discrepancies in his measurements that, at first, he attributed to light having a finite speed. In 1672 Rømer went to Paris and continued observing the satellites of Jupiter as Cassini's assistant. Rømer added his own observations to Cassini's and observed that times between eclipses (particularly those of Io) got shorter as Earth approached Jupiter, and longer as Earth moved farther away. Cassini made an announcement to the Academy of Sciences on 22 August 1676:
"This second inequality appears to be due to light taking some time to reach us from the satellite; light seems to take about ten to eleven minutes [to cross] a distance equal to the half-diameter of the terrestrial orbit".
Oddly, Cassini seems to have abandoned this reasoning, which Rømer adopted and set about buttressing in an irrefutable manner, using a selected number of observations performed by Picard and himself between 1671 and 1677. Rømer presented his results to the French Academy of Sciences, and they were summarised soon after by an anonymous reporter in a short paper, "Démonstration touchant le mouvement de la lumière", published 7 December 1676 in the "Journal des sçavans". Unfortunately, the reporter, possibly in order to hide his lack of understanding, resorted to cryptic phrasing, obfuscating Rømer's reasoning in the process. Rømer himself never published his results.
Rømer's reasoning was as follows. Referring to the illustration, assume the Earth is at point "L", and Io emerges from Jupiter's shadow at point "D". After several orbits of Io, at 42.5 hours per orbit, the Earth is at point "K". If light is not propagated instantaneously, the additional time it takes to reach "K", which he reckoned about 3½ minutes, would explain the observed delay. Rømer observed "immersions" at point "C" from positions "F" and "G", to avoid confusion with eclipses (Io shadowed by Jupiter from "C" to "D") and occultations (Io hidden behind Jupiter at various angles). Among his observations in 1676 were one on 7 August, believed to be at the opposition point "H", and one on 9 November, observed at the Paris Observatory to be 10 minutes late.
By trial and error, during eight years of observations Rømer worked out how to account for "the retardation of light" when reckoning the ephemeris of Io. He calculated the delay as a proportion of the angle α corresponding to a given Earth's position with respect to Jupiter, $\Delta t = 22 \cdot \frac{1 - \cos\alpha}{2}$ [minutes]. When the angle α is 180° the delay becomes 22 minutes, which may be interpreted as the time necessary for the light to cross a distance equal to the diameter of the Earth's orbit, H to E. (Actually, Jupiter is not visible from the conjunction point E.) That interpretation makes it possible to calculate the strict result of Rømer's observations: the ratio of the speed of light to the speed with which Earth orbits the Sun is the duration of a year divided by π, as compared to the 22 minutes, $\frac{c}{v_{\text{Earth}}} = \frac{T_{\text{year}}/\pi}{22\ \text{min}} \approx 7{,}600$.
In comparison, the modern value of this ratio is approximately 10,100.
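The arithmetic behind both figures is short; a minimal sketch (the 22-minute crossing time is Rømer's, while roughly 16.7 minutes is the modern figure for light to cross the diameter of Earth's orbit):

```python
import math

year_min = 365.25 * 24 * 60          # one year, in minutes

# Ratio of the speed of light to Earth's orbital speed:
# c / v = (T_year / pi) / (time for light to cross the orbit's diameter)
print(year_min / math.pi / 22.0)     # ~7,600 from Roemer's 22 minutes
print(year_min / math.pi / 16.7)     # ~10,000 from the modern ~16.7 minutes
```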
Rømer neither calculated this ratio, nor did he give a value for the speed of light. However, many others calculated a speed from his data, the first being Christiaan Huygens; after corresponding with Rømer and eliciting more data, Huygens deduced that light travelled 16⅔ Earth diameters per second, which is approximately 212,000 km/s.
Rømer's view that the velocity of light was finite was not fully accepted until measurements of the so-called aberration of light were made by James Bradley in 1727.
In 1809, again making use of observations of Io, but this time with the benefit of more than a century of increasingly precise observations, the astronomer Jean Baptiste Joseph Delambre reported the time for light to travel from the Sun to the Earth as 8 minutes and 12 seconds. Depending on the value assumed for the astronomical unit, this yields the speed of light as just a little more than 300,000 kilometres per second. The modern value is 8 minutes and 19 seconds, and a speed of 299,792.458 km/s.
A plaque at the Observatory of Paris, where the Danish astronomer happened to be working, commemorates what was, in effect, the first measurement of a universal quantity made on this planet.
In addition to inventing the first street lights in Copenhagen, Rømer also invented the meridian circle, the altazimuth, and the passage instrument (also known as the "transit instrument", a type of meridian circle whose horizontal axis is not fixed in the east-west direction).
An award in Rømer's name is given annually by the Danish Natural Science Research Council for outstanding research.
The Ole Rømer Museum is located in the municipality of Høje-Taastrup, Denmark, at the excavated site of Rømer's observatory "" at Vridsløsemagle. The observatory opened in 1704, and operated until about 1716, when the remaining instruments were moved to Rundetårn in Copenhagen. There is a large collection of ancient and more recent astronomical instruments on display at the museum. The museum opened in 1979, and has since 2002 been a part of the museum Kroppedal at the same location.
In Denmark, Ole Rømer has been honoured in various ways through the ages. He has been portrayed on bank notes, and streets in both Aarhus ("Ole Rømers Gade") and Copenhagen are named after him. Aarhus University's astronomical observatory is named the Ole Rømer Observatory in his honour, and a Danish satellite project to measure the age, temperature, and physical and chemical conditions of selected stars was named "Rømer". The satellite project stalled in 2002, however, and was never realised.
The Römer crater on the Moon is named after him.
In the 1960s, the comic-book superhero The Flash would on a number of occasions measure his velocity in "Roemers", in honour of Ole Rømer's "discovery" of the speed of light.
In Larry Niven's 1999 novel "Rainbow Mars", Ole Rømer is mentioned as having observed Martian life in an alternate history timeline.
Ole Rømer features in the 2012 game "" as a gentleman under Denmark.
On 7 December 2016, a Google Doodle was dedicated to Rømer.
Othello
Othello (The Tragedy of Othello, the Moor of Venice) is a tragedy by William Shakespeare, believed to have been written in 1603. It is based on the story "Un Capitano Moro" ("A Moorish Captain") by Cinthio (a disciple of Boccaccio's), first published in 1565. The story revolves around its two central characters: Othello, a Moorish general in the Venetian army, and his treacherous ensign, Iago. Given its varied and enduring themes of racism, love, jealousy, betrayal, revenge, and repentance, "Othello" is still often performed in professional and community theatre alike, and has been the source for numerous operatic, film, and literary adaptations.
Roderigo, a wealthy and dissolute gentleman, complains to his friend Iago, an ensign, that Iago has not told him about the secret marriage between Desdemona, the daughter of a senator named Brabantio, and Othello, a Moorish general in the Venetian army. Roderigo is upset because he loves Desdemona and had asked her father, Brabantio, for her hand in marriage.
Iago hates Othello for promoting a younger man named Cassio above him, whom Iago considers a less capable soldier than himself, and tells Roderigo that he plans to exploit Othello for his own advantage. Iago convinces Roderigo to wake Brabantio and tell him about his daughter's elopement. Meanwhile, Iago sneaks away to find Othello and warns him that Brabantio is coming for him.
Brabantio, provoked by Roderigo, is enraged and will not rest until he has confronted Othello, but he finds Othello's residence full of the Duke of Venice's guards, who prevent violence. News has arrived in Venice that the Turks are going to attack Cyprus, and Othello is therefore summoned to advise the senators. Brabantio has no option but to accompany Othello to the Duke's residence, where he accuses Othello of seducing Desdemona by witchcraft.
Othello defends himself before the Duke of Venice, Brabantio's kinsmen Lodovico and Gratiano, and various senators. Othello explains that Desdemona became enamoured of him for the sad and compelling stories he told of his life before Venice, not because of any witchcraft. The senate is satisfied, once Desdemona confirms that she loves Othello, but Brabantio leaves saying that Desdemona will betray Othello: "Look to her, Moor, if thou hast eyes to see. She has deceived her father, and may thee," (Act I, Sc 3). Iago, still in the room, takes note of Brabantio's remark. By order of the Duke, Othello leaves Venice to command the Venetian armies against invading Turks on the island of Cyprus, accompanied by his new wife, his new lieutenant Cassio, his ensign Iago, and Iago's wife, Emilia, as Desdemona's attendant.
The party arrives in Cyprus to find that a storm has destroyed the Turkish fleet. Othello orders a general celebration and leaves to consummate his marriage with Desdemona. In his absence, Iago gets Cassio drunk, and then persuades Roderigo to draw Cassio into a fight. Montano tries to calm down an angry and drunk Cassio and this leads to them fighting one another. Montano is injured in the fight. Othello reenters and questions the men as to what happened. Othello blames Cassio for the disturbance and strips him of his rank. Cassio is distraught. Iago persuades Cassio to ask Desdemona to persuade her husband to reinstate him.
Iago now persuades Othello to be suspicious of Cassio and Desdemona. When Desdemona drops a handkerchief (the first gift given to her by Othello), Emilia finds it, and gives it to her husband Iago, at his request, unaware of what he plans to do with it. Othello reenters and, then being convinced by Iago of his wife's unfaithfulness with his captain, vows with Iago for the death of Desdemona and Cassio, after which he makes Iago his lieutenant. Act III, scene iii is considered to be the turning point of the play as it is the scene in which Iago successfully sows the seeds of doubt in Othello's mind, inevitably sealing Othello's fate.
Iago plants the handkerchief in Cassio's lodgings, then tells Othello to watch Cassio's reactions while Iago questions him. Iago goads Cassio on to talk about his affair with Bianca, a local courtesan, but whispers her name so quietly that Othello believes the two men are talking about Desdemona. Later, Bianca accuses Cassio of giving her a second-hand gift which he had received from another lover. Othello sees this, and Iago convinces him that Cassio received the handkerchief from Desdemona.
Enraged and hurt, Othello resolves to kill his wife and tells Iago to kill Cassio. Othello proceeds to make Desdemona's life miserable and strikes her in front of visiting Venetian nobles. Meanwhile, Roderigo complains that he has received no results from Iago in return for his money and efforts to win Desdemona, but Iago convinces him to kill Cassio.
Roderigo, having been manipulated by Iago, attacks Cassio in the street after Cassio leaves Bianca's lodgings. Cassio wounds Roderigo. During the scuffle, Iago comes from behind Cassio and badly cuts his leg. In the darkness, Iago manages to hide his identity, and when Lodovico and Gratiano hear Cassio's cries for help, Iago joins them. When Cassio identifies Roderigo as one of his attackers, Iago secretly stabs Roderigo to stop him revealing the plot. Iago then accuses Bianca of the failed conspiracy to kill Cassio.
Othello confronts Desdemona, and then smothers her in their bed. When Emilia arrives, Desdemona defends her husband before dying, and Othello accuses Desdemona of adultery. Emilia calls for help. The former governor Montano arrives, with Gratiano and Iago. When Othello mentions the handkerchief as proof, Emilia realizes what her husband Iago has done, and she exposes him, whereupon he kills her. Othello, belatedly realising Desdemona's innocence, stabs Iago but not fatally, saying that Iago is a devil, and he would rather have him live the rest of his life in pain.
Iago refuses to explain his motives, vowing to remain silent from that moment on. Lodovico apprehends both Iago and Othello for the murders of Roderigo, Emilia, and Desdemona, but Othello commits suicide. Lodovico appoints Cassio as Othello's successor and exhorts him to punish Iago justly. He then denounces Iago for his actions and leaves to tell the others what has happened.
"Othello" is an adaptation of the Italian writer Cinthio's tale "Un Capitano Moro" ("A Moorish Captain") from his "Gli Hecatommithi" (1565), a collection of one hundred tales in the style of Giovanni Boccaccio's "Decameron." No English translation of Cinthio was available in Shakespeare's lifetime, and verbal echoes in "Othello" are closer to the Italian original than to Gabriel Chappuy's 1584 French translation. Cinthio's tale may have been based on an actual incident occurring in Venice about 1508. It also resembles an incident described in the earlier tale of "The Three Apples", one of the stories narrated in the "One Thousand and One Nights" ("Arabian Nights"). Desdemona is the only named character in Cinthio's tale, with his few other characters identified only as the "Moor", the "Squadron Leader", the "Ensign", and the "Ensign's Wife" (corresponding to the play's Othello, Cassio, Iago and Emilia). Cinthio drew a moral (which he placed in the mouth of Desdemona) that it is unwise for European women to marry the temperamental men of other nations. Cinthio's tale has been described as a "partly racist warning" about the dangers of miscegenation.
While Shakespeare closely followed Cinthio's tale in composing "Othello", he departed from it in some details. Brabantio, Roderigo, and several minor characters are not found in Cinthio, for example, and Shakespeare's Emilia takes part in the handkerchief mischief while her counterpart in Cinthio does not. Unlike in "Othello", in Cinthio, the "Ensign" (the play's Iago) lusts after Desdemona and is spurred to revenge when she rejects him. Shakespeare's opening scenes are unique to his tragedy, as is the tender scene between Emilia and Desdemona as the lady prepares for bed. Shakespeare's most striking departure from Cinthio is the manner of his heroine's death. In Shakespeare, Othello suffocates Desdemona, but in Cinthio, the "Moor" commissions the "Ensign" to bludgeon his wife to death with a sand-filled stocking. Cinthio describes each gruesome blow, and, when the lady is dead, the "Ensign" and the "Moor" place her lifeless body upon her bed, smash her skull, and cause the cracked ceiling above the bed to collapse upon her, giving the impression its falling rafters caused her death. In Cinthio, the two murderers escape detection. The "Moor" then misses Desdemona greatly, and comes to loathe the sight of the "Ensign". He demotes him, and refuses to have him in his company. The "Ensign" then seeks revenge by disclosing to the "Squadron Leader" the "Moor's" involvement in Desdemona's death. The two depart Cyprus for Venice, and denounce the "Moor" to the Venetian Seigniory; he is arrested, taken to Venice, and tortured. He refuses to admit his guilt and is condemned to exile. Desdemona's relatives eventually find and kill him. The "Ensign", however, continues to escape detection in Desdemona's death, but engages in other crimes while in Venice. He is arrested and dies after being tortured. Cinthio's "Ensign's Wife" (the play's Emilia), survives her husband's death to tell her story.
Cinthio's "Moor" is the model for Shakespeare's Othello, but some researchers believe the poet also took inspiration from the several Moorish delegations from Morocco to Elizabethan England "circa" 1600.
Another possible source was the "Description of Africa" by Leo Africanus. The book was an enormous success in Europe, and was translated into many other languages, remaining a definitive reference work for decades (and to some degree, centuries) afterwards. An English translation by John Pory appeared in 1600 under the title "A Geographical Historie of Africa, Written in Arabicke and Italian by Iohn Leo a More..." in which form Shakespeare may have seen it and reworked hints in creating the character of Othello.
While supplying the source of the plot, the book offered nothing of the sense of place of Venice or Cyprus. For knowledge of this, Shakespeare may have used Gasparo Contarini's "The Commonwealth and Government of Venice", in Lewes Lewkenor's 1599 translation.
The earliest mention of the play is found in a 1604 Revels Office account, which records that on "Hallamas Day, being the first of Nouembar ... the Kings Maiesties plaiers" performed "A Play in the Banketinghouse at Whit Hall Called The Moor of Venis." The work is attributed to "Shaxberd." The Revels account was first printed by Peter Cunningham in 1842, and, while its authenticity was once challenged, is now regarded as genuine (as authenticated by A.E. Stamp in 1930). Based on its style, the play is usually dated 1603 or 1604, but arguments have been made for dates as early as 1601 or 1602.
The play was entered into the Register of the Stationers Company on 6 October 1621, by Thomas Walkley, and was first published in quarto format by him in 1622:
"Othello" is renowned amongst literary scholars for the way it portrays the human emotion of jealousy. Throughout the play, good-natured characters make rash decisions based on the jealousy that they feel, most notably Othello.
In the early acts, Othello is depicted as a typical heroic figure with admirable qualities, written with the intention of winning over the favour of the audience; however, as the play goes on, jealousy manipulates his decisions and leads him into sin. While most of the evil that Othello carries out in the play can be traced to Iago, it is jealousy that motivates him to perform wicked deeds. When Iago highlights the almost excessive amount of time that Cassio and Othello's wife, Desdemona, are spending together, Othello becomes filled with rage and, following a series of events, murders the one that he loves.
Shakespeare explores one of humanity's ugliest traits in this work and represents the idea of the "tragic hero" in Othello, who wins over the audience early on but proceeds to make bad, almost wicked, decisions that make it harder for the audience to like him until his eventual undoing. This idea of the tragic hero is made clear through the treatment of jealousy, one of the various notable themes present in "Othello".
Although its title suggests that the tragedy belongs primarily to Othello, Iago plays an important role in the plot. He reflects the archetypal villain and has the biggest share of the dialogue. In "Othello", it is Iago who manipulates all other characters at will, controlling their movements and trapping them in an intricate net of lies. He achieves this by getting close to all characters and playing on their weaknesses while they refer to him as "honest" Iago, thus furthering his control over the characters. A. C. Bradley, and more recently Harold Bloom, have been major advocates of this interpretation. Other critics, most notably in the later twentieth century (after F. R. Leavis), have focused on Othello.
Although characters described as "Moors" appear in two other Shakespeare plays ("Titus Andronicus" and "The Merchant of Venice"), such characters were a rarity in contemporary theatre, and it was unknown for them to take centre stage.
There is no consensus over Othello's ethnic origin. E. A. J. Honigmann, the editor of the Arden Shakespeare edition, concluded that Othello's race is ambiguous. "Renaissance representations of the Moor were vague, varied, inconsistent, and contradictory. As critics have established, the term 'Moor' referred to dark-skinned people in general, used interchangeably with terms such as 'African', 'Somali', 'Ethiopian', 'Negro', 'Arab', 'Berber', and even 'Indian' to designate a figure from Africa (or beyond)." Various uses of the word "black" (for example, "Haply for I am black") are insufficient evidence for any accurate racial classification, Honigmann argues, since "black" could simply mean "swarthy" to Elizabethans. Iago twice uses the word "Barbary" or "Barbarian" to refer to Othello, seemingly referring to the Barbary coast inhabited by Berbers. Roderigo calls Othello "the thicklips", which seems to refer to Sub-Saharan African physiognomy, but Honigmann counters that, as these comments are all intended as insults by the characters, they need not be taken literally.
However, Jyotsna Singh wrote that the opposition of Brabantio to Desdemona marrying Othello, a respected and honoured general, cannot make sense except in racial terms, citing the scene where Brabantio accuses Othello of using witchcraft to make his daughter fall in love with him, saying it is "unnatural" for Desdemona to desire Othello's "sooty bosom". Singh argued that, since people with dark complexions are common in the Mediterranean area, a Venetian senator like Brabantio being opposed to Desdemona marrying Othello for merely being swarthy makes no sense, and that the character of Othello was intended to be black.
Michael Neill, editor of "The Oxford Shakespeare", notes that the earliest critical references to Othello's colour (Thomas Rymer's 1693 critique of the play, and the 1709 engraving in Nicholas Rowe's edition of Shakespeare) assume him to be Sub-Saharan, while the earliest known North African interpretation was not until Edmund Kean's production of 1814. Honigmann discusses the view that Abd el-Ouahed ben Messaoud ben Mohammed Anoun, Moorish ambassador of the Arab sultan of Barbary (Morocco) to Queen Elizabeth I in 1600, was one inspiration for Othello. He stayed with his retinue in London for several months and occasioned much discussion. While Shakespeare's play was written only a few years afterwards, Honigmann questions the view that ben Messaoud himself was a significant influence on it.
Othello is referred to as a "Barbary horse" (1.1.113) and a "lascivious Moor" (1.1.127). In 3.3 he denounces Desdemona's supposed sin as being "black as mine own face". Desdemona's physical whiteness is otherwise presented in opposition to Othello's dark skin: 5.2 "that whiter skin of hers than snow". Iago tells Brabantio that "an old black ram / is tupping your white ewe" (1.1.88). In Elizabethan discourse, the word "black" could suggest various concepts that extended beyond the physical colour of skin, including a wide range of negative connotations.
Othello was frequently performed as an Arab Moor during the 19th century. He was first played by a black man on the London stage in 1833 by the most important of the nineteenth-century Othellos, the African American Ira Aldridge who had been forced to leave his home country to make his career. Regardless of what Shakespeare intended by calling Othello a "Moor" – whether he meant that Othello was a Muslim or a black man or both – in the 19th century and much of the 20th century, many critics tended to see the tragedy in racial terms, seeing interracial marriages as "aberrations" that could end badly. Given this view of "Othello", the play became especially controversial in apartheid-era South Africa where interracial marriages were banned and performances of "Othello" were discouraged.
The first major screen production casting a black actor as Othello did not come until 1995, with Laurence Fishburne opposite Kenneth Branagh's Iago. In the past, Othello would often have been portrayed by a white actor in blackface or in a black mask: more recent actors who chose to 'black up' include Ralph Richardson (1937); Orson Welles (1952); John Gielgud (1961); Laurence Olivier (1964); and Anthony Hopkins (1981). Ground-breaking black American actor Paul Robeson played the role in three different productions between 1930 and 1959. The casting of the role comes with a political subtext. Patrick Stewart played the role alongside an otherwise all-black cast in the Shakespeare Theatre Company's 1997 staging of the play, and Thomas Thieme, also white, played Othello in a 2007 Munich Kammerspiele staging at the Royal Shakespeare Theatre, Stratford. Michael Gambon also took the role in 1980 and 1991; his performances were critically acclaimed. Carlo Rota, of Mediterranean (British Italian) heritage, played the character on Canadian television in 2008.
The race of the title role is often seen as Shakespeare's way of isolating the character, culturally as well as visually, from the Venetian nobles and officers, and the isolation may seem more genuine when a black actor takes the role. But questions of race may not boil down to a simple decision of casting a single role. In 1979, Keith Fowler’s production of "Othello" mixed the races throughout the company. Produced by the American Revels Company at the Empire Theater (renamed the November Theater in 2011) in Richmond, Virginia, this production starred African American actor Clayton Corbin in the title role, with Henry K. Bal, a Hawaiian actor of mixed ethnicity, playing Iago. Othello’s army was composed of both black and white mercenaries. Iago’s wife, Emilia was played by the popular black actress Marie Goodman Hunter. The 2016 production at the New York Theatre Workshop, directed by Sam Gold, also effectively used a mixed-race cast, starring English actors David Oyelowo as Othello and Daniel Craig as Iago. Desdemona was played by American actress Rachel Brosnahan, Cassio was played by Finn Wittrock, and Emilia was played by Marsha Stephanie Blake.
As the Protestant Reformation of England proclaimed the importance of pious, controlled behaviour in society, it was the tendency of the contemporary Englishman to displace society's "undesirable" qualities of barbarism, treachery, jealousy and libidinousness onto those who are considered "other". The assumed characteristics of black men, or "the other", were both instigated and popularised by Renaissance dramas of the time; for example, the treachery of black men inherent to George Peele's "The Battle of Alcazar" (1588). It has been argued that it is Othello's "otherness" which makes him so vulnerable to manipulation. Audiences of the time would expect Othello to be insecure about his race and the implied age gap between himself and Desdemona.
The title "Moor" implies a religious "other" of North African or Middle Eastern descent. Though the actual racial definition of the term is murky, the implications are religious as well as racial. Many critics have noted references to demonic possession throughout the play, especially in relation to Othello's seizure, a phenomenon often associated with possession in the popular consciousness of the day. Thomas M. Vozar, in a 2012 article in "Philosophy and Literature", suggests that the epileptic fit relates to the mind–body problem and the existence of the soul.
There have been many differing views on the character of Othello over the years. A.C. Bradley calls Othello the "most romantic of all of Shakespeare's heroes" (by "hero" Bradley means protagonist) and "the greatest poet of them all". On the other hand, F.R. Leavis describes Othello as "egotistical". There are those who also take a less critical approach to the character of Othello such as William Hazlitt, who said: "the nature of the Moor is noble ... but his blood is of the most inflammable kind".
"Othello" possesses an unusually detailed performance record. The first certainly known performance occurred on 1 November 1604, at Whitehall Palace in London, being mentioned in a Revels account on "Hallamas Day, being the first of Nouembar", 1604, when "the Kings Maiesties plaiers" performed "A Play in the Banketinge house at Whit Hall Called The Moor of Venis". The play is there attributed to "Shaxberd". Subsequent performances took place on Monday, 30 April 1610 at the Globe Theatre, and at Oxford in September 1610. On 22 November 1629, and on 6 May 1635, it played at the Blackfriars Theatre. "Othello" was also one of the twenty plays performed by the King's Men during the winter of 1612, in celebration of the wedding of Princess Elizabeth and Frederick V, Elector Palatine.
At the start of the Restoration era, on 11 October 1660, Samuel Pepys saw the play at the Cockpit Theatre. Nicholas Burt played the lead, with Charles Hart as Cassio; Walter Clun won fame for his Iago. Soon after, on 8 December 1660, Thomas Killigrew's new King's Company acted the play at their Vere Street theatre, with Margaret Hughes as Desdemona – probably the first time a professional actress appeared on a public stage in England.
It may be one index of the play's power that "Othello" was one of the very few Shakespearean plays that was never adapted and changed during the Restoration and the eighteenth century.
As Shakespeare regained popularity among nineteenth-century French Romantics, poet, playwright, and novelist Alfred de Vigny created a French translation of "Othello", titled "Le More de Venise", which premiered at the Comédie-Française on 24 October 1829.
Famous nineteenth-century Othellos included Ira Aldridge, Edmund Kean, Edwin Forrest, and Tommaso Salvini, and outstanding Iagos were Edwin Booth and Henry Irving.
The most notable American production may be Margaret Webster's 1943 staging starring Paul Robeson as Othello and José Ferrer as Iago. This production was the first ever in America to feature a black actor playing Othello with an otherwise all-white cast (there had been all-black productions of the play before). It ran for 296 performances, almost twice as long as any other Shakespearean play ever produced on Broadway. Although it was never filmed, it was the first lengthy performance of a Shakespeare play released on records, first on a multi-record 78 RPM set and then on a 3-LP one. Robeson had first played the role in London in 1930 in a cast that included Peggy Ashcroft as Desdemona and Ralph Richardson as Roderigo, and would return to it in 1959 at Stratford-upon-Avon with co-stars Mary Ure, Sam Wanamaker and Vanessa Redgrave. The critics had mixed reactions to the "flashy" 1959 production, which included Midwestern accents and rock-and-roll drumbeats, but gave Robeson primarily good reviews. W. A. Darlington of "The Daily Telegraph" ranked Robeson's Othello as the best he had ever seen, while the "Daily Express", which had for years before published consistently scathing articles about Robeson for his leftist views, praised his "strong and stately" performance (though it suggested in turn that it was a "triumph of presence not acting").
Actors have alternated the roles of Iago and Othello in productions to stir audience interest since the nineteenth century. Two of the most notable examples of this role swap were William Charles Macready and Samuel Phelps at Drury Lane (1837) and Richard Burton and John Neville at The Old Vic (1955). When Edwin Booth's tour of England in 1880 was not well attended, Henry Irving invited Booth to alternate the roles of Othello and Iago with him in London. The stunt renewed interest in Booth's tour. James O'Neill also alternated the roles of Othello and Iago with Booth.
The American actor William Marshall performed the title role in at least six productions. His Othello was called by Harold Hobson of the "London Sunday Times" "the best Othello of our time," continuing:
...nobler than Tearle, more martial than Gielgud, more poetic than Valk. From his first entry, slender and magnificently tall, framed in a high Byzantine arch, clad in white samite, mystic, wonderful, a figure of Arabian romance and grace, to his last plunging of the knife into his stomach, Mr Marshall rode without faltering the play's enormous rhetoric, and at the end the house rose to him.
Marshall also played Othello in a jazz musical version, "Catch My Soul", with Jerry Lee Lewis as Iago, in Los Angeles in 1968. His Othello was captured on record in 1964 with Jay Robinson as Iago and on video in 1981 with Ron Moody as Iago. The 1982 Broadway staging starred James Earl Jones as Othello and Christopher Plummer as Iago, who became the only actor to receive a Tony Award nomination for a performance in the play.
When Laurence Olivier gave his acclaimed performance of Othello at the Royal National Theatre in 1964, he had developed a case of stage fright so profound that when he was alone onstage, Frank Finlay (who was playing Iago) would have to stand offstage where Olivier could see him to settle his nerves. This performance was recorded complete on LP, and filmed by popular demand in 1965 (according to a biography of Olivier, tickets for the stage production were notoriously hard to get). The film version still holds the record for the most Oscar nominations for acting ever given to a Shakespeare film – Olivier, Finlay, Maggie Smith (as Desdemona) and Joyce Redman (as Emilia, Iago's wife) were all nominated for Academy Awards. Olivier was among the last white actors to be greatly acclaimed as Othello, although the role continued to be played by such performers as Donald Sinden at the Royal Shakespeare Company in 1979–1980, Paul Scofield at the Royal National Theatre in 1980, Anthony Hopkins in the "BBC Television Shakespeare" production (1981), and Michael Gambon in a stage production at Scarborough directed by Alan Ayckbourn in 1990. Gambon had been in Olivier's earlier production. In an interview Gambon commented: "I wasn't even the second gentleman in that. I didn't have any lines at all. I was at the back like that, standing for an hour. [It's] what I used to do – I had a metal helmet, I had an earplug, and we used to listen to "The Archers". No one knew. All the line used to listen to "The Archers". And then I went and played Othello myself at Birmingham Rep when I was 27. Olivier sent me a telegram on the first night. He said, "Copy me." He said, "Do what I used to do." Olivier used to lower his voice for Othello so I did mine. He used to paint the big negro lips on. You couldn't do it today, you'd get shot. He had the complete negro face. And the hips. I did all that. I copied him exactly. Except I had a pony tail. I played him as an Arab. I stuck a pony tail on with a bell on the end of it. I thought that would be nice. Every time I moved my hair went wild." British blacking-up for Othello ended with Gambon in 1990; however, the Royal Shakespeare Company did not stage the play at all on the main Stratford stage until 1999, when Ray Fearon became the first black British actor to take the part, and the first black man to play Othello with the RSC since Robeson.
In 1997, Patrick Stewart took the role of Othello with the Shakespeare Theatre Company (Washington, D.C.) in a race-bending performance, in a "photo negative" production of a white "Othello" with an otherwise all-black cast. Stewart had wanted to play the title role since the age of 14, so he and director Jude Kelly inverted the play so Othello became a comment on a white man entering a black society. The interpretation of the role is broadening, with theatre companies casting Othello as a woman or inverting the gender of the whole cast to explore gender questions in Shakespeare's text. Companies have also chosen to share the role between several actors during a performance.
Canadian playwright Ann-Marie MacDonald's 1988 award-winning play "Goodnight Desdemona (Good Morning Juliet)" is a revision of "Othello" and "Romeo and Juliet" in which an academic deciphers a cryptic manuscript she believes to be the original source for the tragedies, and is transported into the plays themselves.
"Othello" opened at the Donmar Warehouse in London on 4 December 2007, directed by Michael Grandage, with Chiwetel Ejiofor as Othello, Ewan McGregor as Iago, Tom Hiddleston as Cassio, Kelly Reilly as Desdemona and Michelle Fairley as Emillia. Ejiofor, Hiddleston and Fairley all received nominations for Laurence Olivier Awards, with Ejiofor winning. Stand-up comedian Lenny Henry played Othello in 2009 produced by Northern Broadsides in collaboration with West Yorkshire Playhouse.
In March 2016 the historian Onyeka produced a play entitled "Young Othello", a fictional take on Othello's young life before the events of Shakespeare's play. In June 2016, baritone and actor David Serero played the title role in a Moroccan adaptation featuring Judeo-Arabic songs and Verdi's opera version in New York. In 2017, Ben Naylor directed the play for the Pop-up Globe in Auckland, with Māori actor Te Kohe Tuhaka in the title role, Jasmine Blackborow as Desdemona and Haakon Smestad as Iago. The production transferred to Melbourne, Australia, with another Māori actor, Regan Taylor, taking over the title role.
In September 2013, a Tamil adaptation titled "Othello, the Fall of a Warrior" was directed and produced in Singapore by Subramanian Ganesh.
Othello as a literary character has appeared in many representations within popular culture over several centuries. There also have been over a dozen film adaptations of "Othello".
Osteoporosis
Osteoporosis is a disease in which bone weakening increases the risk of a broken bone. It is the most common reason for a broken bone among the elderly. Bones that commonly break include the vertebrae in the spine, the bones of the forearm, and the hip. Until a broken bone occurs there are typically no symptoms. Bones may weaken to such a degree that a break may occur with minor stress or spontaneously. After a broken bone, chronic pain and a decreased ability to carry out normal activities may occur.
Osteoporosis may be due to lower-than-normal maximum bone mass and greater-than-normal bone loss. Bone loss increases after menopause due to lower levels of estrogen. Osteoporosis may also occur due to a number of diseases or treatments, including alcoholism, anorexia, hyperthyroidism, kidney disease, and surgical removal of the ovaries. Certain medications increase the rate of bone loss, including some antiseizure medications, chemotherapy, proton pump inhibitors, selective serotonin reuptake inhibitors, and glucocorticosteroids. Smoking and too little exercise are also risk factors. Osteoporosis is defined as a bone density of 2.5 standard deviations below that of a young adult. This is typically measured by dual-energy X-ray absorptiometry.
Prevention of osteoporosis includes a proper diet during childhood and efforts to avoid medications that increase the rate of bone loss. Efforts to prevent broken bones in those with osteoporosis include a good diet, exercise, and fall prevention. Lifestyle changes such as stopping smoking and not drinking alcohol may help. Bisphosphonate medications are useful to decrease future broken bones in those with previous broken bones due to osteoporosis. In those with osteoporosis but no previous broken bones, they are less effective. They do not appear to affect the risk of death. A number of other medications may also be useful.
Osteoporosis becomes more common with age. About 15% of Caucasians in their 50s and 70% of those over 80 are affected. It is more common in women than men. In the developed world, depending on the method of diagnosis, 2% to 8% of males and 9% to 38% of females are affected. Rates of disease in the developing world are unclear. About 22 million women and 5.5 million men in the European Union had osteoporosis in 2010. In the United States in 2010, about eight million women and one to two million men had osteoporosis. White and Asian people are at greater risk. The word "osteoporosis" is from the Greek terms for "porous bones".
Osteoporosis itself has no symptoms; its main consequence is the increased risk of bone fractures. Osteoporotic fractures occur in situations where healthy people would not normally break a bone; they are therefore regarded as fragility fractures. Typical fragility fractures occur in the vertebral column, rib, hip and wrist.
Fractures are a common symptom of osteoporosis and can result in disability. Acute and chronic pain in the elderly is often attributed to fractures from osteoporosis and can lead to further disability and early mortality. These fractures may also be asymptomatic. The most common osteoporotic fractures are of the wrist, spine, shoulder and hip. The symptoms of a vertebral collapse ("compression fracture") are sudden back pain, often with radicular pain (shooting pain due to nerve root compression) and rarely with spinal cord compression or cauda equina syndrome. Multiple vertebral fractures lead to a stooped posture, loss of height, and chronic pain with resultant reduction in mobility.
Fractures of the long bones acutely impair mobility and may require surgery. Hip fracture, in particular, usually requires prompt surgery, as serious risks are associated with it, such as deep vein thrombosis and pulmonary embolism, and increased mortality.
Fracture risk calculators assess the risk of fracture based upon several criteria, including bone mineral density, age, smoking, alcohol usage, weight, and gender. Recognized calculators include FRAX and Dubbo.
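Tools such as FRAX combine these inputs into an estimated fracture probability. Their actual coefficients are proprietary and calibrated per country, so the sketch below only illustrates the general shape of such a calculator; the field names, weights, and demo values are all invented for this example and do not reproduce FRAX or Dubbo.

```python
# Hypothetical fracture-risk sketch. This is NOT the FRAX or Dubbo
# algorithm; real tools use calibrated regression models. All weights
# and field names here are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    female: bool
    bmd_t_score: float    # bone mineral density T-score from DXA
    smoker: bool
    heavy_alcohol: bool   # e.g. three or more units per day
    low_body_weight: bool

def illustrative_risk_score(p: Patient) -> float:
    """Toy additive score reflecting the risk factors named above."""
    score = 0.0
    score += max(0, p.age - 50) * 0.1        # risk rises with age
    score += 1.0 if p.female else 0.0        # higher risk in women
    score += max(0.0, -p.bmd_t_score)        # lower BMD, higher risk
    score += 0.5 if p.smoker else 0.0
    score += 0.5 if p.heavy_alcohol else 0.0
    score += 0.5 if p.low_body_weight else 0.0
    return score

# Demo with invented values.
patient = Patient(age=68, female=True, bmd_t_score=-2.7,
                  smoker=False, heavy_alcohol=False, low_body_weight=True)
print(f"Illustrative score: {illustrative_risk_score(patient):.1f}")
```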
The term "established osteoporosis" is used when a broken bone due to osteoporosis has occurred. Osteoporosis is a part of frailty syndrome.
There is an increased risk of falls associated with aging. These falls can lead to skeletal damage at the wrist, spine, hip, knee, foot, and ankle. Part of the fall risk is due to impaired eyesight from many causes (e.g. glaucoma, macular degeneration), balance disorder, movement disorders (e.g. Parkinson's disease), dementia, and sarcopenia (age-related loss of skeletal muscle). Collapse (transient loss of postural tone with or without loss of consciousness) is another contributor. Causes of syncope are manifold, but may include cardiac arrhythmias (irregular heart beat), vasovagal syncope, orthostatic hypotension (abnormal drop in blood pressure on standing up), and seizures. Removal of obstacles and loose carpets in the living environment may substantially reduce falls. Those with previous falls, as well as those with gait or balance disorders, are most at risk.
Risk factors for osteoporotic fracture can be split between nonmodifiable and (potentially) modifiable. In addition, osteoporosis is a recognized complication of specific diseases and disorders. Medication use is theoretically modifiable, although in many cases, the use of medication that increases osteoporosis risk may be unavoidable. Osteoporosis is more common in females than in males; caffeine, by contrast, is not a risk factor.
Many diseases and disorders have been associated with osteoporosis. For some, the underlying mechanism influencing the bone metabolism is straightforward, whereas for others the causes are multiple or unknown.
Certain medications have been associated with an increase in osteoporosis risk; only glucocorticosteroids and anticonvulsants are classically associated, but evidence is emerging with regard to other drugs.
Age-related bone loss is common among humans, whose bones are less dense than those of other primate species. Because human bones are more porous, severe osteoporosis and osteoporosis-related fractures are more frequent. The human vulnerability to osteoporosis is an obvious cost, but it may be a byproduct of the advantage of bipedalism. It has been suggested that porous bones help to absorb the increased stress placed on two limbs, compared with quadrupedal primates, which have four limbs across which to disperse the force. In addition, the porosity allows for more flexibility and a lighter skeleton that is easier to support. One other consideration may be that diets today contain much less calcium than the diets of other primates or of the quadrupedal ancestors of humans, which may increase the likelihood of showing signs of osteoporosis.
The underlying mechanism in all cases of osteoporosis is an imbalance between bone resorption and bone formation. In normal bone, matrix remodeling of bone is constant; up to 10% of all bone mass may be undergoing remodeling at any point in time. The process takes place in bone multicellular units (BMUs) as first described by Frost & Thomas in 1963. Osteoclasts are assisted by transcription factor PU.1 to degrade the bone matrix, while osteoblasts rebuild the bone matrix. Low bone mass density can then occur when osteoclasts are degrading the bone matrix faster than the osteoblasts are rebuilding the bone.
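Schematically, this balance can be written as a simple rate equation. The following is a conceptual sketch only, not a quantitative model, and the symbols are introduced here purely for illustration:

\[
\frac{dB}{dt} = F_{\text{ob}}(t) - R_{\text{oc}}(t)
\]

where \(B\) is bone mass, \(F_{\text{ob}}\) is the rate of matrix formation by osteoblasts, and \(R_{\text{oc}}\) is the rate of resorption by osteoclasts. Healthy remodeling keeps the two terms in approximate balance; osteoporosis corresponds to a sustained period in which \(R_{\text{oc}} > F_{\text{ob}}\), producing a net loss of bone mass.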
The three main mechanisms by which osteoporosis develops are an inadequate peak bone mass (the skeleton develops insufficient mass and strength during growth), excessive bone resorption, and inadequate formation of new bone during remodeling, likely due to mesenchymal stem cells biasing away from the osteoblast and toward the marrow adipocyte lineage. An interplay of these three mechanisms underlies the development of fragile bone tissue. Hormonal factors strongly determine the rate of bone resorption; lack of estrogen (e.g. as a result of menopause) increases bone resorption, as well as decreasing the deposition of new bone that normally takes place in weight-bearing bones. The amount of estrogen needed to suppress this process is lower than that normally needed to stimulate the uterus and breast gland. The α-form of the estrogen receptor appears to be the most important in regulating bone turnover. In addition to estrogen, calcium metabolism plays a significant role in bone turnover, and deficiency of calcium and vitamin D leads to impaired bone deposition; in addition, the parathyroid glands react to low calcium levels by secreting parathyroid hormone (parathormone, PTH), which increases bone resorption to ensure sufficient calcium in the blood. The role of calcitonin, a hormone generated by the thyroid that increases bone deposition, is less clear and probably not as significant as that of PTH.
The activation of osteoclasts is regulated by various molecular signals, of which RANKL (receptor activator of nuclear factor kappa-B ligand) is one of the best studied. This molecule is produced by osteoblasts and other cells (e.g. lymphocytes), and stimulates RANK (receptor activator of nuclear factor κB). Osteoprotegerin (OPG) binds RANKL before it has an opportunity to bind to RANK, and hence suppresses its ability to increase bone resorption. RANKL, RANK and OPG are closely related to tumor necrosis factor and its receptors. The role of the Wnt signaling pathway is recognized, but less well understood. Local production of eicosanoids and interleukins is thought to participate in the regulation of bone turnover, and excess or reduced production of these mediators may underlie the development of osteoporosis.
Trabecular bone (or cancellous bone) is the sponge-like bone in the ends of long bones and vertebrae. Cortical bone is the hard outer shell of bones and the middle of long bones. Because osteoblasts and osteoclasts inhabit the surface of bones, trabecular bone is more active and is more subject to bone turnover and remodeling. Not only is bone density decreased, but the microarchitecture of bone is also disrupted. The weaker spicules of trabecular bone break ("microcracks"), and are replaced by weaker bone. Common osteoporotic fracture sites, the wrist, the hip and the spine, have a relatively high trabecular bone to cortical bone ratio. These areas rely on the trabecular bone for strength, so the intense remodeling causes these areas to degenerate most when the remodeling is imbalanced. Around the ages of 30–35, cancellous or trabecular bone loss begins. Women may lose as much as 50%, while men lose about 30%.
The diagnosis of osteoporosis can be made using conventional radiography and by measuring the bone mineral density (BMD). The most popular method of measuring BMD is dual-energy X-ray absorptiometry.
In addition to the detection of abnormal BMD, the diagnosis of osteoporosis requires investigations into potentially modifiable underlying causes; this may be done with blood tests. Depending on the likelihood of an underlying problem, investigations for cancer with metastasis to the bone, multiple myeloma, Cushing's disease and other above-mentioned causes may be performed.
Conventional radiography is useful, both by itself and in conjunction with CT or MRI, for detecting complications of osteopenia (reduced bone mass; pre-osteoporosis), such as fractures; for differential diagnosis of osteopenia; or for follow-up examinations in specific clinical settings, such as soft tissue calcifications, secondary hyperparathyroidism, or osteomalacia in renal osteodystrophy. However, radiography is relatively insensitive to detection of early disease and requires a substantial amount of bone loss (about 30%) to be apparent on X-ray images.
The main radiographic features of generalized osteoporosis are cortical thinning and increased radiolucency. Frequent complications of osteoporosis are vertebral fractures for which spinal radiography can help considerably in diagnosis and follow-up. Vertebral height measurements can objectively be made using plain-film X-rays by using several methods such as height loss together with area reduction, particularly when looking at vertical deformity in T4-L4, or by determining a spinal fracture index that takes into account the number of vertebrae involved. Involvement of multiple vertebral bodies leads to kyphosis of the thoracic spine, leading to what is known as dowager's hump.
Dual-energy X-ray absorptiometry (DEXA scan) is considered the gold standard for the diagnosis of osteoporosis. Osteoporosis is diagnosed when the bone mineral density is less than or equal to 2.5 standard deviations below that of a young (30–40-year-old), healthy adult female reference population. This is expressed as a T-score. Because bone density decreases with age, more people become osteoporotic with increasing age. The World Health Organization has established the following diagnostic guidelines: a T-score of −1.0 or higher is normal; a T-score between −1.0 and −2.5 indicates osteopenia (low bone mass); a T-score of −2.5 or lower indicates osteoporosis; and a T-score of −2.5 or lower accompanied by one or more fragility fractures indicates severe (established) osteoporosis.
The International Society for Clinical Densitometry takes the position that a diagnosis of osteoporosis in men under 50 years of age should not be made on the basis of densitometric criteria alone. It also states that, for premenopausal women, Z-scores (comparison with one's own age group rather than peak bone mass) should be used rather than T-scores, and that the diagnosis of osteoporosis in such women likewise should not be made on the basis of densitometric criteria alone.
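In code form, the T-score is simply the patient's BMD expressed in standard deviations relative to a young-adult reference mean, while the Z-score uses an age-matched reference instead. A minimal sketch follows, assuming the widely cited WHO T-score bands; the reference mean and standard deviation in the demo are invented, since real reference values are device- and population-specific.

```python
def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """T-score: standard deviations relative to a young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd: float, age_matched_mean: float, age_matched_sd: float) -> float:
    """Z-score: same idea, but against an age-matched reference population."""
    return (bmd - age_matched_mean) / age_matched_sd

def who_category(t: float, fragility_fracture: bool = False) -> str:
    """Widely cited WHO T-score bands for postmenopausal women and older men."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "severe (established) osteoporosis" if fragility_fracture else "osteoporosis"

# Demo with invented reference values (units: g/cm^2).
t = t_score(bmd=0.70, young_adult_mean=1.00, young_adult_sd=0.12)
print(f"T = {t:.1f} -> {who_category(t)}")   # T = -2.5 -> osteoporosis
```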
Chemical biomarkers are a useful tool in detecting bone degradation. The enzyme cathepsin K breaks down type-I collagen, an important constituent in bones. Prepared antibodies can recognize the resulting fragment, called a neoepitope, as a way to diagnose osteoporosis. Increased urinary excretion of C-telopeptides, a type-I collagen breakdown product, also serves as a biomarker for osteoporosis.
Quantitative computed tomography (QCT) differs from DXA in that it gives separate estimates of BMD for trabecular and cortical bone and reports precise volumetric mineral density in mg/cm3 rather than BMD's relative Z-score. Among QCT's advantages: it can be performed at axial and peripheral sites, can be calculated from existing CT scans without a separate radiation dose, is sensitive to change over time, can analyze a region of any size or shape, excludes irrelevant tissue such as fat, muscle, and air, and does not require knowledge of the patient's subpopulation in order to create a clinical score (e.g. the Z-score of all females of a certain age). Among QCT's disadvantages: it requires a high radiation dose compared to DXA, CT scanners are large and expensive, and because its practice has been less standardized than BMD, its results are more operator-dependent. Peripheral QCT has been introduced to improve upon the limitations of DXA and QCT.
Quantitative ultrasound has many advantages in assessing osteoporosis. The modality is small, no ionizing radiation is involved, measurements can be made quickly and easily, and the cost of the device is low compared with DXA and QCT devices. The calcaneus is the most common skeletal site for quantitative ultrasound assessment because it has a high percentage of trabecular bone that is replaced more often than cortical bone, providing early evidence of metabolic change. Also, the calcaneus is fairly flat and parallel, reducing repositioning errors. The method can be applied to children, neonates, and preterm infants, just as well as to adults. Some ultrasound devices can be used on the tibia.
The U.S. Preventive Services Task Force (USPSTF) recommends that all women 65 years of age or older be screened by bone densitometry. Additionally, it recommends screening younger women with risk factors. There is insufficient evidence to make recommendations about the intervals for repeated screening and the appropriate age to stop screening.
In men, the harm versus benefit of screening for osteoporosis is unknown. Prescrire states that the need to test for osteoporosis in those who have not had a previous bone fracture is unclear. The International Society for Clinical Densitometry suggests BMD testing for men aged 70 or older, or for younger men whose risk is equal to that of a 70-year-old. A number of tools exist to help determine for whom testing is reasonable.
Lifestyle prevention of osteoporosis is in many aspects the inverse of the potentially modifiable risk factors. As tobacco smoking and high alcohol intake have been linked with osteoporosis, smoking cessation and moderation of alcohol intake are commonly recommended as ways to help prevent it.
In people with coeliac disease, adherence to a gluten-free diet decreases the risk of developing osteoporosis and increases bone density. The diet must ensure optimal calcium intake (at least one gram daily); measuring vitamin D levels is recommended, with specific supplements taken if necessary.
Studies of the benefits of supplementation with calcium and vitamin D are conflicting, possibly because most studies did not have people with low dietary intakes. A 2018 review by the USPSTF found low-quality evidence that the routine use of calcium and vitamin D supplements (or both supplements together) did not reduce the risk of having an osteoporotic fracture in male and female adults living in the community who had no known history of vitamin D deficiency, osteoporosis, or a fracture. Furthermore, the same review found moderate-quality evidence that the combination of vitamin D and calcium supplementation increases the risk for developing kidney stones in this population. The evidence was insufficient to determine if supplementation with vitamin D, calcium, or the combination of both had an effect on the risk of cancer, cardiovascular disease, or death from any cause. The USPSTF does not recommend low dose supplementation (less than 1 g of calcium and 400 IU of vitamin D) in postmenopausal women as there does not appear to be a difference in fracture risk. A 2015 review found little data that supplementation of calcium decreases the risk of fractures.
While some meta-analyses have found a benefit of vitamin D supplements combined with calcium for fractures, they did not find a benefit of vitamin D supplements (800 IU/day or less) alone.
While supplementation does not appear to affect the risk of death, calcium supplementation is associated with an increased risk of myocardial infarction, kidney stones, and stomach problems.
Vitamin K deficiency is also a risk factor for osteoporotic fractures. The gene gamma-glutamyl carboxylase (GGCX) is dependent on vitamin K. Functional polymorphisms in the gene could contribute to variation in bone metabolism and BMD. Vitamin K2 is also used as a means of treatment for osteoporosis, and the polymorphisms of GGCX could explain the individual variation in the response to vitamin K treatment.
Good dietary sources of calcium include leafy greens, legumes, and beans. There is conflicting evidence about whether or not dairy is an adequate source of calcium to prevent fractures. The National Academy of Sciences recommends 1,000 mg of calcium daily for those ages 19–50, and 1,200 mg for those ages 50 and above; meeting this from dairy alone would equate to roughly two to three glasses of milk per day. Currently, there is insufficient evidence to show that drinking more than one glass of milk a day prevents fractures. Indeed, countries with higher rates of dairy consumption tend to have higher rates of fractures, and one contested hypothesis holds that animal proteins in dairy and other animal-based foods such as meat and eggs promote metabolic acidosis, an acidic state in which the body draws calcium from the bones for use as an acid buffer, thereby weakening them.
There is limited evidence indicating that exercise is helpful in promoting bone health. A 2011 review reported a small benefit of physical exercise on bone density of postmenopausal women. The chances of having a fracture were also slightly reduced (absolute difference 4%). People who exercised had on average less bone loss (0.85% at the spine, 1.03% at the hip). However, other studies suggest that increased bone activity and weight-bearing exercises at a young age prevent bone fragility in adults.
Low-quality evidence suggests that exercise may improve pain and quality of life of people with vertebral fractures. Moderate-quality evidence found that exercise will likely improve physical performance in individuals with vertebral fractures.
People with osteoporosis are at higher risk of falls due to poor postural control, muscle weakness, and overall deconditioning. Postural control is important for maintaining functional movements such as walking and standing. Physical therapy may be an effective way to address postural weakness that may result from vertebral fractures, which are common in people with osteoporosis. Physical therapy treatment plans for people with vertebral fractures include balance training, postural correction, trunk and lower extremity muscle strengthening exercises, and moderate-intensity aerobic physical activity. The goal of these interventions is to regain normal spine curvature, increase spine stability, and improve functional performance. Physical therapy interventions can also be designed to slow the rate of bone loss through home exercise programs.
Weight-bearing endurance exercise and/or exercises to strengthen muscles improve bone strength in those with osteoporosis. Aerobics, weight bearing, and resistance exercises all maintain or increase BMD in postmenopausal women. Fall prevention can help prevent osteoporosis complications. There is some evidence for hip protectors specifically among those who are in care homes.
Bisphosphonates are useful in decreasing the risk of future fractures in those who have already sustained a fracture due to osteoporosis. This benefit is present when taken for three to four years. They do not appear to change the overall risk of death. Tentative evidence does not support the use of bisphosphonates as a standard treatment for secondary osteoporosis in children. Different bisphosphonates have not been directly compared, therefore it is unknown if one is better than another. Fracture risk reduction is between 25 and 70% depending on the bone involved. There are concerns of atypical femoral fractures and osteonecrosis of the jaw with long-term use, but these risks are low. With evidence of little benefit when used for more than three to five years and in light of the potential adverse events, it may be appropriate to stop treatment after this time. One medical organization recommends that after five years of medications by mouth or three years of intravenous medication among those at low risk, bisphosphonate treatment can be stopped. In those at higher risk they recommend up to ten years of medication by mouth or six years of intravenous treatment.
For those with osteoporosis who have not had a fracture, evidence does not support a reduction in fracture risk with risedronate or etidronate. Alendronate decreases fractures of the spine but does not have any effect on other types of fractures. Half of patients stop their medications within a year. When on treatment with bisphosphonates, rechecking bone mineral density is not needed. Another review found tentative evidence of benefit in males with osteoporosis.
Fluoride supplementation does not appear to be effective in postmenopausal osteoporosis, as even though it increases bone density, it does not decrease the risk of fractures.
Teriparatide (a recombinant parathyroid hormone) has been shown to be effective in treatment of women with postmenopausal osteoporosis. Some evidence also indicates strontium ranelate is effective in decreasing the risk of vertebral and nonvertebral fractures in postmenopausal women with osteoporosis. Hormone replacement therapy, while effective for osteoporosis, is only recommended in women who also have menopausal symptoms; it is not recommended for osteoporosis alone. Raloxifene, while effective in decreasing vertebral fractures, does not affect the risk of nonvertebral fracture, and while it reduces the risk of breast cancer, it increases the risk of blood clots and strokes. While denosumab is effective at preventing fractures in women, there is no clear evidence of benefit in males. In hypogonadal men, testosterone has been shown to improve bone quantity and quality, but, as of 2008, no studies had evaluated its effect on fracture risk or in men with normal testosterone levels. Calcitonin, once recommended, is no longer recommended due to the associated risk of cancer and its questionable effect on fracture risk. Alendronic acid/colecalciferol can also be taken to treat this condition in post-menopausal women.
Certain medications like alendronate, etidronate, risedronate, raloxifene, and strontium ranelate can help to prevent osteoporotic fragility fractures in postmenopausal women with osteoporosis. Tentative evidence suggests that Chinese herbal medicines may have potential benefits on bone mineral density.
Although people with osteoporosis have increased mortality due to the complications of fracture, the fracture itself is rarely lethal.
Hip fractures can lead to decreased mobility and additional risks of numerous complications (such as deep venous thrombosis and/or pulmonary embolism, and pneumonia). The six-month mortality rate for those aged 50 and above following hip fracture was found to be around 13.5%, with a substantial proportion (almost 13%) needing total assistance to mobilize after a hip fracture.
Vertebral fractures, while having a smaller impact on mortality, can lead to severe chronic pain of neurogenic origin, which can be hard to control, as well as deformity. Though rare, multiple vertebral fractures can lead to such a severe hunched back (kyphosis) that the resulting pressure on internal organs can impair the ability to breathe.
Apart from risk of death and other complications, osteoporotic fractures are associated with a reduced health-related quality of life.
The condition is responsible for millions of fractures annually, mostly involving the lumbar vertebrae, hip, and wrist. Fragility fractures of ribs are also common in men.
Hip fractures are responsible for the most serious consequences of osteoporosis. In the United States, more than 250,000 hip fractures annually are attributable to osteoporosis. A 50-year-old white woman is estimated to have a 17.5% lifetime risk of fracture of the proximal femur. The incidence of hip fractures increases each decade from the sixth through the ninth for both women and men for all populations. The highest incidence is found among men and women ages 80 or older.
Between 35% and 50% of all women over 50 have had at least one vertebral fracture. In the United States, 700,000 vertebral fractures occur annually, but only about a third are recognized. In a series of 9,704 women with an average age of 68.8 who were followed for 15 years, 324 had already suffered a vertebral fracture at entry into the study; 18.2% went on to develop a vertebral fracture, and that risk rose to 41.4% among women with a previous vertebral fracture.
In the United States, 250,000 wrist fractures annually are attributable to osteoporosis. Wrist fractures are the third most common type of osteoporotic fractures. The lifetime risk of sustaining a Colles' fracture is about 16% for white women. By the time women reach age 70, about 20% have had at least one wrist fracture.
Fragility fractures of the ribs are common in men as young as age 35. These are often overlooked as signs of osteoporosis, as these men are often physically active and suffer the fracture in the course of physical activity. An example would be as a result of falling while water skiing or jet skiing. However, a quick test of the individual's testosterone level following the diagnosis of the fracture will readily reveal whether that individual might be at risk.
It is estimated that 200 million people have osteoporosis. Osteoporosis becomes more common with age. About 15% of Caucasians in their 50s and 70% of those over 80 are affected. It is more common in women than men. In the developed world, depending on the method of diagnosis, 2% to 8% of males and 9% to 38% of females are affected. Rates of disease in the developing world are unclear.
Postmenopausal women have a higher rate of osteoporosis and fractures than older men. Postmenopausal women have decreased estrogen, which contributes to their higher rates of osteoporosis. A 60-year-old woman has a 44% risk of fracture, while a 60-year-old man has a 25% risk of fracture.
There are 8.9 million fractures worldwide per year due to osteoporosis. Globally, 1 in 3 women and 1 in 5 men over the age of 50 will have an osteoporotic fracture. Data from the United States shows a decrease in osteoporosis within the general population and in white women, from 18% in 1994 to 10% in 2006. White and Asian people are at greater risk. People of African descent are at a decreased risk of fractures due to osteoporosis, although they have the highest risk of death following an osteoporotic fracture.
It has been shown that latitude affects risk of osteoporotic fracture. Areas of higher latitude such as Northern Europe receive less Vitamin D through sunlight compared to regions closer to the equator, and consequently have higher fracture rates in comparison to lower latitudes. For example, Swedish men and women have a 13% and 28.5% risk of hip fracture by age 50, respectively, whereas this risk is only 1.9% and 2.4% in Chinese men and women. Diet may also be a factor that is responsible for this difference, as vitamin D, calcium, magnesium, and folate are all linked to bone mineral density.
There is also an association between celiac disease and increased risk of osteoporosis. In studies of premenopausal females and males, there was a correlation between celiac disease and both osteoporosis and osteopenia. Celiac disease can decrease the absorption of nutrients such as calcium in the small intestine, and a gluten-free diet can help people with celiac disease revert to normal absorption in the gut.
About 22 million women and 5.5 million men in the European Union had osteoporosis in 2010. In the United States in 2010, about eight million women and one to two million men had osteoporosis. This places a large economic burden on the healthcare system due to costs of treatment, long-term disability, and loss of productivity in the working population. The EU spends 37 billion euros per year in healthcare costs related to osteoporosis, and the US spends an estimated US$19 billion annually for related healthcare costs.
The link between age-related reductions in bone density and fracture risk goes back at least to Astley Cooper, and the term "osteoporosis" and recognition of its pathological appearance is generally attributed to the French pathologist Jean Lobstein. The American endocrinologist Fuller Albright linked osteoporosis with the postmenopausal state. Bisphosphonates were discovered in the 1960s.
Anthropologists have studied skeletal remains showing loss of bone density and associated structural changes that were linked to chronic malnutrition in the agricultural area in which these individuals lived. "It follows that the skeletal deformation may be attributed to their heavy labor in agriculture as well as to their chronic malnutrition", causing the osteoporosis seen when radiographs of the remains were made.
Osteoporosis means "porous bones", from Greek: οστούν/"ostoun" meaning "bone" and πόρος/"poros" meaning "pore".
Oklahoma City bombing
The Oklahoma City bombing was a domestic terrorist truck bombing of the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma, United States, on April 19, 1995. Perpetrated by American terrorists Timothy McVeigh and Terry Nichols, the bombing happened at 9:02 am and killed at least 168 people, including many children, injured more than 680 others, and destroyed more than one third of the building, which had to be demolished. The blast destroyed or damaged 324 other buildings within a 16-block radius, shattered glass in 258 nearby buildings, and destroyed or burned 86 cars, causing an estimated $652 million worth of damage. Local, state, federal, and worldwide agencies engaged in extensive rescue efforts in the wake of the bombing. They and the city received substantial donations from across the country. The Federal Emergency Management Agency (FEMA) activated 11 of its Urban Search and Rescue Task Forces, consisting of 665 rescue workers who assisted in rescue and recovery operations. Until the September 11 attacks in 2001, the Oklahoma City bombing was the deadliest terrorist attack in the history of the United States. It remains the deadliest act of domestic terrorism in U.S. history.
Within 90 minutes of the explosion, McVeigh was stopped by Oklahoma Highway Patrolman Charlie Hanger for driving without a license plate and arrested for illegal weapons possession. Forensic evidence quickly linked McVeigh and Nichols to the attack; Nichols was arrested, and within days, both were charged. Michael and Lori Fortier were later identified as accomplices. McVeigh, a veteran of the Gulf War and a sympathizer with the U.S. militia movement, had detonated a Ryder rental truck full of explosives he parked in front of the building. Nichols had assisted with the bomb's preparation. Motivated by his dislike for the U.S. federal government and unhappy about its handling of the Ruby Ridge incident in 1992 and the Waco siege in 1993, McVeigh timed his attack to coincide with the second anniversary of the fire that ended the siege at the Branch Davidian compound in Waco, Texas.
The official FBI investigation, known as "OKBOMB", involved 28,000 interviews and collecting 3.5 short tons (3,200 kg) of evidence and nearly one billion pieces of information. The bombers were tried and convicted in 1997. Sentenced to death, McVeigh was executed by lethal injection on June 11, 2001, at the U.S. federal penitentiary in Terre Haute, Indiana. Nichols was sentenced to life in prison in 2004. Michael and Lori Fortier testified against McVeigh and Nichols; Michael Fortier was sentenced to 12 years in prison for failing to warn the United States government, and Lori received immunity from prosecution in exchange for her testimony.
In response to the bombing, the U.S. Congress passed the Antiterrorism and Effective Death Penalty Act of 1996, which tightened the standards for habeas corpus in the United States. It also passed legislation to increase the protection around federal buildings to deter future terrorist attacks.
On April 19, 2000, the Oklahoma City National Memorial was dedicated on the site of the Murrah Federal Building, commemorating the victims of the bombing. Remembrance services are held every year on April 19, at the time of the explosion.
The chief conspirators, Timothy McVeigh and Terry Nichols, met in 1988 at Fort Benning during basic training for the U.S. Army. McVeigh met Michael Fortier as his Army roommate. The three shared interests in survivalism. They expressed anger at the federal government's handling of the 1992 Federal Bureau of Investigation (FBI) standoff with Randy Weaver at Ruby Ridge, as well as the Waco siege, a 1993 51-day standoff between the FBI and Branch Davidian members that began with a botched Bureau of Alcohol, Tobacco, and Firearms (ATF) attempt to execute a search warrant. There was a firefight and ultimately a siege of the compound, resulting in the burning and shooting deaths of David Koresh and 75 others. In March 1993, McVeigh visited the Waco site during the standoff, and again after the siege ended. He later decided to bomb a federal building as a response to the raids.
McVeigh later said that he had contemplated assassinating Attorney General Janet Reno, Lon Horiuchi, and others rather than attacking a building, and sometimes wished he had done so. He initially intended to destroy only a federal building, but he later decided that his message would be more powerful if many people were killed in the bombing. McVeigh's criterion for attack sites was that the target should house at least two of three federal law enforcement agencies: the Bureau of Alcohol, Tobacco, and Firearms (ATF), the Federal Bureau of Investigation (FBI), and the Drug Enforcement Administration (DEA). He regarded the presence of additional law enforcement agencies, such as the Secret Service or the U.S. Marshals Service, as a bonus.
A resident of Kingman, Arizona, McVeigh considered targets in Missouri, Arizona, Texas, and Arkansas. He said in his authorized biography that he wanted to minimize non-governmental casualties, so he ruled out a 40-story building in Little Rock, Arkansas, because a florist's shop occupied space on the ground floor. In December 1994, McVeigh and Fortier visited Oklahoma City to inspect McVeigh's target: the Alfred P. Murrah Federal Building.
The Murrah building had been targeted in October 1983 by the white supremacist group The Covenant, The Sword, and the Arm of the Lord, including founder James Ellison and Richard Snell. The group had plotted to park "a van or trailer in front of the Federal Building and blow it up with rockets detonated by a timer." After Snell's appeal of his conviction for murdering two people in unrelated cases was denied, he was executed on the same day as the Murrah bombing.
The nine-story building, built in 1977, was named for a federal judge and housed 14 federal agencies, including the DEA, ATF, Social Security Administration, and recruiting offices for the Army and Marine Corps.
McVeigh chose the Murrah building because he expected its glass front to shatter under the impact of the blast. He also believed that its adjacent large, open parking lot across the street might absorb and dissipate some of the force, and protect the occupants of nearby non-federal buildings. In addition, McVeigh believed that the open space around the building would provide better photo opportunities for propaganda purposes. He planned the attack for April 19, 1995, to coincide with the second anniversary of the Waco siege and the 220th anniversary of the Battles of Lexington and Concord during the American Revolution.
McVeigh and Nichols purchased or stole the materials they needed to manufacture the bomb, and stored them in rented sheds. In August 1994, McVeigh obtained nine Kinestiks from gun collector Roger E. Moore, and ignited the devices with Nichols outside Nichols's home in Herington, Kansas. On September 30, 1994, Nichols bought forty bags of ammonium nitrate fertilizer from Mid-Kansas Coop in McPherson, Kansas, enough to fertilize of farmland at a rate of of nitrogen per acre (0.4 ha), an amount commonly used for corn. Nichols bought an additional bag on October 18, 1994. McVeigh approached Fortier and asked him to assist with the bombing project, but Fortier refused.
McVeigh and Nichols robbed Moore in his home of $60,000 worth of guns, gold, silver, and jewels, transporting the property in the victim's van. McVeigh wrote Moore a letter in which he claimed that the robbery had been committed by government agents. Items stolen from Moore were later found in Nichols's home and in a storage shed he had rented.
In October 1994, McVeigh showed Michael and Lori Fortier a diagram he had drawn of the bomb he wanted to build. McVeigh planned to construct a bomb containing more than of ammonium nitrate fertilizer mixed with about of liquid nitromethane and of Tovex. Including the weight of the sixteen 55-U.S.-gallon drums in which the explosive mixture was to be packed, the bomb would have a combined weight of about . McVeigh originally intended to use hydrazine rocket fuel, but it proved too expensive. During the Chief Auto Parts Nationals, a round of the NHRA Winston Drag Racing Series at the Texas Motorplex, McVeigh posed as a motorcycle racer and attempted to purchase drums of nitromethane on the pretext that he and some fellow bikers needed the fuel for racing. But there were no nitromethane-powered motorcycles at the meeting, and he did not have an NHRA competitors' license. Sales representative Steve LeSueur refused to sell to him because he was suspicious of McVeigh's actions and attitude, but another sales representative, Tim Chambers, sold him three barrels. Chambers questioned the purchase, noting that a Top Fuel Harley rider would typically buy only 1–5 gallons of nitromethane, and that the class was not even racing that weekend. LeSueur reported the incident to the FBI immediately after rejecting McVeigh's request.
McVeigh rented a storage space in which he stockpiled seven crates of Tovex sausages, 80 spools of shock tube, and 500 electric blasting caps, which he and Nichols had stolen from a Martin Marietta Aggregates quarry in Marion, Kansas. He decided not to steal any of the of ANFO (ammonium nitrate/fuel oil) he found at the scene, as he did not believe it powerful enough (he did obtain 17 bags of ANFO from another source for use in the bomb). McVeigh made a prototype bomb that was detonated in the desert to avoid detection.
Later, speaking about the military mindset with which he went about the preparations, he said, "You learn how to handle killing in the military. I face the consequences, but you learn to accept it." He compared his actions to the atomic bombings of Hiroshima and Nagasaki, rather than the attack on Pearl Harbor, reasoning it was necessary to prevent more lives from being lost.
On April 14, 1995, McVeigh paid for a motel room at the Dreamland Motel in Junction City, Kansas. The next day he rented a 1993 Ford F-700 truck from Ryder under the name Robert D. Kling, an alias he adopted because he knew an Army soldier named Kling with whom he shared physical characteristics, and because it reminded him of the Klingon warriors of "Star Trek". On April 16, 1995, he and Nichols drove to Oklahoma City, where he parked a getaway car, a yellow 1977 Mercury Marquis, several blocks from the Murrah Federal Building. The nearby Regency Towers Apartments' lobby security camera recorded images of Nichols's blue 1984 GMC pickup truck on April 16. After removing the car's license plate, he left a note covering the Vehicle Identification Number (VIN) plate that read, "Not abandoned. Please do not tow. Will move by April 23. (Needs battery & cable)." Both men then returned to Kansas.
On April 17–18, 1995, McVeigh and Nichols removed the bomb supplies from their storage unit in Herington, Kansas, where Nichols lived, and loaded them into the Ryder rental truck. They then drove to Geary Lake State Park, where they nailed boards onto the floor of the truck to hold the 13 barrels in place and mixed the chemicals using plastic buckets and a bathroom scale. Each filled barrel weighed nearly . McVeigh added more explosives to the driver's side of the cargo bay, which he could ignite (killing himself in the process) at close range with his Glock 21 pistol in case the primary fuses failed. During McVeigh's trial, Lori Fortier (the wife of Michael Fortier) stated that McVeigh claimed to have arranged the barrels in order to form a shaped charge. This was achieved by tamping the aluminum side panel of the truck with bags of ammonium nitrate fertilizer to direct the blast laterally towards the building. Specifically, McVeigh arranged the barrels in the shape of a backwards J; he later said that for pure destructive power, he would have put the barrels on the side of the cargo bay closest to the Murrah Building; however, such an unevenly distributed load might have broken an axle, flipped the truck over, or at least caused it to lean to one side, which could have drawn attention. All or most of the barrels of ANNM contained metal cylinders of acetylene intended to increase the fireball and the brisance of the explosion.
McVeigh then added a dual-fuse ignition system accessible from the truck's front cab. He drilled two holes in the cab of the truck under the seat, while two holes were also drilled in the body of the truck. One green cannon fuse was run through each hole into the cab. These time-delayed fuses led from the cab through plastic fish-tank tubing conduit to two sets of non-electric blasting caps which would ignite around of the high-grade explosives that McVeigh stole from a rock quarry. The tubing was painted yellow to blend in with the truck's livery, and duct-taped in place to the wall to make it harder to disable by yanking from the outside. The fuses were set up to initiate, through shock tubes, the of Tovex Blastrite Gel "sausages", which would in turn set off the configuration of barrels. Of the 13 filled barrels, nine contained ammonium nitrate and nitromethane, and four contained a mixture of the fertilizer and about of diesel fuel. Additional materials and tools used for manufacturing the bomb were left in the truck to be destroyed in the blast. After finishing the truck bomb, the two men separated; Nichols returned home to Herington and McVeigh traveled with the truck to Junction City. The bomb cost about $5000 to make.
McVeigh's original plan had been to detonate the bomb at 11:00 am, but at dawn on April 19, 1995, he decided instead to destroy the building at 9:00 am. As he drove toward the Murrah Federal Building in the Ryder truck, McVeigh carried with him an envelope containing pages from "The Turner Diaries" – a fictional account of white supremacists who ignite a revolution by blowing up the FBI headquarters at 9:15 one morning using a truck bomb. McVeigh wore a printed T-shirt with the motto of the Commonwealth of Virginia, "Sic semper tyrannis" ("Thus always to tyrants", which legend holds Brutus said as he assassinated Julius Caesar, and which John Wilkes Booth is also claimed to have shouted immediately after the assassination of Abraham Lincoln) and "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants" (from Thomas Jefferson). He also carried an envelope full of revolutionary materials that included a bumper sticker with the slogan, falsely attributed to Thomas Jefferson, "When the government fears the people, there is liberty. When the people fear the government, there is tyranny." Underneath, McVeigh had written, "Maybe now, there will be liberty!" with a hand-copied quote from John Locke asserting that a man has a right to kill someone who takes away his liberty.
McVeigh entered Oklahoma City at 8:50 am. At 8:57 am, the Regency Towers Apartments' lobby security camera that had recorded Nichols's pickup truck three days earlier recorded the Ryder truck heading towards the Murrah Federal Building. At the same moment, McVeigh lit the five-minute fuse. Three minutes later, still a block away, he lit the two-minute fuse. He parked the Ryder truck in a drop-off zone situated under the building's day-care center, exited and locked the truck. As he headed to his getaway vehicle, he dropped the keys to the truck a few blocks away.
At 9:02 a.m. (14:02 UTC), the Ryder truck, containing over of ammonium nitrate fertilizer, nitromethane, and diesel fuel mixture, detonated in front of the north side of the nine-story Alfred P. Murrah Federal Building. 168 people were killed and hundreds more injured. One third of the building was destroyed by the explosion, which created a crater on NW 5th Street next to the building. The blast destroyed or damaged 324 buildings within a 16-block radius, and shattered glass in 258 nearby buildings. The broken glass alone accounted for 5 percent of the death total and 69 percent of the injuries outside the Murrah Federal Building. The blast destroyed or burned 86 cars around the site. The destruction of the buildings left several hundred people homeless and shut down a number of offices in downtown Oklahoma City. The explosion was estimated to have caused at least $652 million worth of damage.
The effects of the blast were equivalent to over 5,000 pounds (2,300 kg) of TNT, and the explosion could be heard and felt up to 55 miles (89 km) away. Seismometers at the Omniplex Science Museum in Oklahoma City and in Norman, Oklahoma, recorded the blast as measuring approximately 3.0 on the Richter magnitude scale.
The collapse took roughly 7 seconds. As the truck exploded, it first destroyed the column next to it, designated as G20, and shattered the entire glass facade of the building. The shockwave of the explosion forced the lower floors upwards, before the fourth and fifth floors collapsed onto the third floor, which housed a transfer beam running the length of the building; the beam was supported by four pillars below and in turn supported the columns holding up the upper floors. The added weight caused the third floor to give way along with the transfer beam, which in turn caused the total collapse of the building.
Initially, the FBI had three hypotheses about responsibility for the bombing: international terrorists, possibly the same group that had carried out the World Trade Center bombing; a drug cartel, carrying out an act of vengeance against agents in the building's DEA office; and anti-government radicals attempting to start a rebellion against the federal government.
McVeigh was arrested within 90 minutes of the explosion, as he was traveling north on Interstate 35 near Perry in Noble County, Oklahoma. Oklahoma State Trooper Charlie Hanger stopped McVeigh for driving his yellow 1977 Mercury Marquis without a license plate, and arrested him for having a concealed weapon. For his home address, McVeigh falsely claimed he resided at the Michigan house of James Nichols, Terry Nichols's brother. After booking McVeigh into jail, Trooper Hanger searched his patrol car and found a business card that McVeigh had concealed after being handcuffed. Written on the back of the card, which was from a Wisconsin military surplus store, were the words "TNT at $5 a stick. Need more." The card was later used as evidence during McVeigh's trial.
While investigating the VIN from an axle of the truck used in the explosion and the remnants of the license plate, federal agents were able to link the truck to a specific Ryder rental agency in Junction City, Kansas. Using a sketch created with the assistance of Eldon Elliott, the agency's owner, the agents were able to implicate McVeigh in the bombing. McVeigh was also identified by Lea McGown of the Dreamland Motel, who remembered him parking a large yellow Ryder truck in the lot; McVeigh had signed in under his real name at the motel, using an address that matched the one on his forged license and the charge sheet at the Perry Police Station. Before signing his real name at the motel, McVeigh had used false names for his transactions. However, McGown noted, "People are so used to signing their own name that when they go to sign a phony name, they almost always go to write, and then look up for a moment as if to remember the new name they want to use. That's what [McVeigh] did, and when he looked up I started talking to him, and it threw him."
After an April 21, 1995, court hearing on the gun charges, but before McVeigh's release, federal agents took him into custody as they continued their investigation into the bombing. Rather than talk to investigators about the bombing, McVeigh demanded an attorney. Having been tipped off by the arrival of police and helicopters that a bombing suspect was inside, a restless crowd began to gather outside the jail. While McVeigh's requests for a bulletproof vest or transport by helicopter were denied, authorities did use a helicopter to transport him from Perry to Oklahoma City.
Federal agents obtained a warrant to search the house of McVeigh's father, Bill, after which they broke down the door and wired the house and telephone with listening devices. FBI investigators used the resulting information, along with the fake address McVeigh had been using, to begin their search for the Nichols brothers, Terry and James. On April 21, 1995, Terry Nichols learned that he was being hunted and turned himself in. Investigators discovered incriminating evidence at his home: ammonium nitrate and blasting caps, the electric drill used to drill out the locks at the quarry, books on bomb-making, a copy of "Hunter" (a 1989 novel by William Luther Pierce, the founder and chairman of the National Alliance, a white nationalist group), and a hand-drawn map of downtown Oklahoma City on which the Murrah Building and the spot where McVeigh's getaway car was hidden were marked. After a nine-hour interrogation, Terry Nichols was formally held in federal custody until his trial. On April 25, 1995, James Nichols was also arrested, but he was released after 32 days due to lack of evidence. McVeigh's sister Jennifer was accused of illegally mailing bullets to McVeigh, but she was granted immunity in exchange for testifying against him.
A Jordanian-American man traveling from his home in Oklahoma City to visit family in Jordan on April 19, 1995, was also arrested, amid concern that Middle Eastern terrorists could have been behind the attack. Further investigation cleared the man of any involvement in the bombing.
An estimated 646 people were inside the building when the bomb exploded. By the end of the day, 14 adults and 6 children were confirmed dead, and over 100 injured. The toll eventually reached 168 confirmed dead, not including an unmatched left leg that could have belonged to an unidentified 169th victim or to any one of eight victims who had been buried without a left leg. Most of the deaths resulted from the collapse of the building rather than from the bomb blast itself. Those killed included 163 who were in the Alfred P. Murrah Federal Building, one person in the Athenian Building, one woman in a parking lot across the street, a man and woman in the Oklahoma Water Resources building, and a rescue worker struck on the head by debris.
The victims, including three pregnant women, ranged in age from three months to 73 years. Of the dead, 108 worked for the federal government: Drug Enforcement Administration (5); Secret Service (6); Department of Housing and Urban Development (35); Department of Agriculture (7); Customs Office (2); Department of Transportation/Federal Highway (11); General Services Administration (2); and the Social Security Administration (40). Eight of the federal government victims were federal law enforcement agents: four members of the U.S. Secret Service, two members of the U.S. Customs Service, one member of the U.S. Drug Enforcement Administration, and one member of the U.S. Department of Housing and Urban Development. Six of the victims were U.S. military personnel: two members of the U.S. Army, two members of the U.S. Air Force, and two members of the U.S. Marine Corps. The rest of the victims were civilians, including 19 children, of whom 15 were in the America's Kids Day Care Center. The bodies of the 168 victims were identified at a temporary morgue set up at the scene. A team of 24 identified the victims using full-body X-rays, dental examinations, fingerprinting, blood tests, and DNA testing. More than 680 people were injured. The majority of the injuries were abrasions, severe burns, and bone fractures.
McVeigh's later response to the range of casualties was: "I didn't define the rules of engagement in this conflict. The rules, if not written down, are defined by the aggressor. It was brutal, no holds barred. Women and kids were killed at Waco and Ruby Ridge. You put back in [the government's] faces exactly what they're giving out." He later stated "I wanted the government to hurt like the people of Waco and Ruby Ridge had."
At 9:03 am, the first of over 1,800 9-1-1 calls related to the bombing was received by the Emergency Medical Services Authority (EMSA). By that time, EMSA ambulances, police, and firefighters had heard the blast and were already headed to the scene. Nearby civilians, who had also witnessed or heard the blast, arrived to assist the victims and emergency workers. Within 23 minutes of the bombing, the State Emergency Operations Center (SEOC) was set up, consisting of representatives from the state departments of public safety, human services, military, health, and education. Assisting the SEOC were agencies including the National Weather Service, the Air Force, the Civil Air Patrol, and the American Red Cross. Immediate assistance also came from 465 members of the Oklahoma National Guard, who arrived within the hour to provide security, and from members of the Department of Civil Emergency Management. Terrance Yeakey and Jim Ramsey, from the Oklahoma City Police Department, were among the first officers to arrive at the site. Several cast and crew members filming the 1996 movie "Twister" paused production to help with recovery efforts.
The EMS command post was set up almost immediately following the attack and oversaw triage, treatment, transportation, and decontamination. A simple set of objectives was established: treat and transport the injured as quickly as possible; immediately obtain the supplies and personnel needed to handle a large number of patients; move the dead to a temporary morgue until they could be transferred to the coroner's office; and establish measures for a long-term medical operation. The triage center was set up near the Murrah Building and all the wounded were directed there. Two hundred and ten patients were transported from the primary triage center to nearby hospitals within the first few hours following the bombing.
Within the first hour, 50 people were rescued from the Murrah Federal Building. Victims were sent to every hospital in the area. The day of the bombing, 153 people were treated at St. Anthony Hospital, eight blocks from the blast, over 70 people were treated at Presbyterian Hospital, 41 people were treated at University Hospital, and 18 people were treated at Children's Hospital. Temporary silences were observed at the blast site so that sensitive listening devices capable of detecting human heartbeats could be used to locate survivors. In some cases, limbs had to be amputated without anesthetics (avoided because of the potential to induce coma) in order to free those trapped under rubble. The scene had to be periodically evacuated as the police received tips claiming that other bombs had been planted in the building.
At 10:28 am, rescuers found what they believed to be a second bomb. Some rescue workers refused to leave until police ordered the mandatory evacuation of a four-block area around the site. The device was determined to be a three-foot (0.9 m) long TOW missile used in the training of federal agents and bomb-sniffing dogs; although actually inert, it had been marked "live" in order to mislead arms traffickers in a planned law enforcement sting. Once the missile was confirmed to be inert, relief efforts resumed, 45 minutes after the evacuation. The last survivor, a 15-year-old girl found under the base of the collapsed building, was rescued at around 7 pm.
In the days following the blast, over 12,000 people participated in relief and rescue operations. The Federal Emergency Management Agency (FEMA) activated 11 of its Urban Search and Rescue Task Forces, bringing in 665 rescue workers. One nurse was killed in the rescue attempt after she was hit on the head by debris, and 26 other rescuers were hospitalized with various injuries. Twenty-four K-9 units and out-of-state dogs were brought in to search for survivors and bodies in the building debris. In an effort to recover additional bodies, tons of rubble were removed from the site each day from April 24 to 29.
Rescue and recovery efforts were concluded at 12:05 a.m. on May 5, by which time the bodies of all but three of the victims had been recovered. For safety reasons, the building was initially slated to be demolished shortly afterward. McVeigh's attorney, Stephen Jones, filed a motion to delay the demolition until the defense team could examine the site in preparation for the trial. At 7:02 a.m. on May 23, more than a month after the bombing, the Murrah Federal Building was demolished. The EMS Command Center remained active and was staffed 24 hours a day until the demolition. The final three bodies to be recovered were those of two credit union employees and a customer. For several days after the building's demolition, trucks hauled away loads of debris from the site each day. Some of the debris was used as evidence in the conspirators' trials, incorporated into memorials, donated to local schools, or sold to raise funds for relief efforts.
The national humanitarian response was immediate, and in some cases even overwhelming. Large numbers of items such as wheelbarrows, bottled water, helmet lights, knee pads, rain gear, and even football helmets were donated. The sheer quantity of such donations caused logistical and inventory control problems until drop-off centers were set up to accept and sort the goods. The Oklahoma Restaurant Association, which was holding a trade show in the city, assisted rescue workers by providing 15,000 to 20,000 meals over a ten-day period.
The Salvation Army served over 100,000 meals and provided over 100,000 ponchos, gloves, hard hats, and knee pads to rescue workers. Local residents and those from further afield responded to requests for blood donations. Of the more than 9,000 units of blood donated, 131 were used; the rest were stored in blood banks.
At 9:45 am, Governor Frank Keating declared a state of emergency and ordered all non-essential workers in the Oklahoma City area to be released from their duties for their safety. President Bill Clinton learned about the bombing at around 9:30 a.m. while he was meeting with Turkish Prime Minister Tansu Çiller at the White House. Before addressing the nation, President Clinton considered grounding all planes in the Oklahoma City area to prevent the bombers from escaping by air, but decided against it. At 4:00 pm, President Clinton declared a federal emergency in Oklahoma City and spoke to the nation.
He ordered that flags for all federal buildings be flown at half-staff for 30 days in remembrance of the victims. Four days later, on April 23, 1995, Clinton spoke from Oklahoma City.
No major federal financial assistance was made available to the survivors of the Oklahoma City bombing, but the Murrah Fund set up in the wake of the bombing attracted over $300,000 in federal grants. Over $40 million was donated to the city to aid disaster relief and to compensate the victims. Funds were initially distributed to families who needed them to get back on their feet, and the rest was held in trust for longer-term medical and psychological needs. By 2005, $18 million of the donations remained, some of which was earmarked to provide a college education for each of the 219 children who lost one or both parents in the bombing. A committee chaired by Daniel Kurtenbach of Goodwill Industries provided financial assistance to the survivors.
International reactions to the bombing varied. President Clinton received many messages of sympathy, including those from Queen Elizabeth II of the United Kingdom, Yasser Arafat of the Palestine Liberation Organization, and Narasimha Rao of India. Iran condemned the bombing as an attack on innocent people, but also blamed the U.S. government's policies for inciting it. Other condolences came from Russia, Canada, Australia, the United Nations, and the European Union, among other nations and organizations.
Several countries offered to assist in both the rescue efforts and the investigation. France offered to send a special rescue unit, and Israeli Prime Minister Yitzhak Rabin offered to send agents with anti-terrorist expertise to help in the investigation. President Clinton declined Israel's offer, believing that accepting it would increase anti-Muslim sentiments and endanger Muslim-Americans.
In the wake of the bombing, the national media focused on the fact that 19 of the victims had been babies and children, many in the day-care center. At the time of the bombing, there were 100 day-care centers in the United States in 7,900 federal buildings. McVeigh later stated that he was unaware of the day-care center when choosing the building as a target, and if he had known "... it might have given me pause to switch targets. That's a large amount of collateral damage." The FBI stated that McVeigh scouted the interior of the building in December 1994 and likely knew of the day-care center before the bombing. In April 2010, Joseph Hartzler, the prosecutor at McVeigh's trial, questioned how he could have decided to pass over a prior target building because of an included florist shop but at the Murrah building not "... notice that there's a child day-care center there, that there was a credit union there and a Social Security office?"
Schools across the country were dismissed early and ordered closed. A photograph of firefighter Chris Fields emerging from the rubble with infant Baylee Almon, who later died in a nearby hospital, was reprinted worldwide and became a symbol of the attack. The photo, taken by bank employee Charles H. Porter IV, won the 1996 Pulitzer Prize for Spot News Photography and appeared in newspapers and magazines for months following the attack.
Aren Almon Kok, mother of Baylee Almon, said of the photo: "It was very hard to go to stores because they are in the check out aisle. It was always there. It was devastating. Everybody had seen my daughter dead. And that's all she became to them. She was a symbol. She was the girl in the fireman's arms. But she was a real person that got left behind."
The images and media reports of children dying terrorized many children who, as later research demonstrated, showed symptoms of post-traumatic stress disorder. Children became a primary focus of concern in the mental health response to the bombing, and many bomb-related services were delivered to the community, young and old alike. These services were delivered to Oklahoma public schools and reached approximately 40,000 students. One of the first organized mental health activities in Oklahoma City was a clinical study of middle and high school students conducted seven weeks after the bombing. The study focused on students who had no connection or relation to the victims of the bombing. It showed that these students, although deeply moved by the event and feeling a sense of vulnerability, had no difficulty with the demands of school or home life, in contrast to those connected to the bombing and its victims, who suffered from post-traumatic stress disorder.
Children were also affected through the loss of parents in the bombing. Many children lost one or both parents in the blast, and a reported seven children lost their only remaining parent. Children of the disaster have been raised by single parents, foster parents, and other family members. Adjusting to the loss has caused these children psychological and emotional suffering. One interview revealed that one of the at least ten orphaned children suffered sleepless nights and an obsession with death.
President Clinton stated that after seeing images of babies being pulled from the wreckage, he was "beyond angry" and wanted to "put [his] fist through the television". Clinton and his wife Hillary requested that aides talk to child care specialists about how to communicate with the children regarding the bombing. President Clinton spoke to the nation three days after the bombing, saying: "I don't want our children to believe something terrible about life and the future and grownups in general because of this awful thing ... most adults are good people who want to protect our children in their childhood and we are going to get through this". On April 22, 1995, the Clintons spoke in the White House with over 40 federal agency employees and their children, and in a live nationwide television and radio broadcast, addressed their concerns.
Hundreds of news trucks and members of the press arrived at the site to cover the story. The press immediately noticed that the bombing took place on the second anniversary of the Waco incident.
Many initial news stories hypothesized the attack had been undertaken by Islamic terrorists, such as those who had masterminded the 1993 World Trade Center bombing. Some media reported that investigators wanted to question men of Middle Eastern appearance. Hamzi Moghrabi, chairman of the American-Arab Anti-Discrimination Committee, blamed the media for harassment of Muslims and Arabs that took place after the bombing.
As the rescue effort wound down, the media interest shifted to the investigation, arrests, and trials of Timothy McVeigh and Terry Nichols, and on the search for an additional suspect named "John Doe Number Two." Several witnesses claimed to have seen a second suspect, who did not resemble Nichols, with McVeigh.
Those who expressed sympathy for McVeigh typically described his deed as an act of war, as in the case of Gore Vidal's essay "The Meaning of Timothy McVeigh".
The Federal Bureau of Investigation (FBI) led the official investigation, known as OKBOMB, with Weldon L. Kennedy acting as special agent in charge. Kennedy oversaw 900 federal, state, and local law enforcement personnel, including 300 FBI agents, 200 officers from the Oklahoma City Police Department, 125 members of the Oklahoma National Guard, and 55 officers from the Oklahoma Department of Public Safety. The crime task force was deemed the largest since the investigation into the assassination of John F. Kennedy. OKBOMB was the largest criminal case in America's history, with FBI agents conducting 28,000 interviews, amassing 3.5 short tons (3.2 t) of evidence, and collecting nearly one billion pieces of information. Federal judge Richard Paul Matsch ordered that the venue for the trial be moved from Oklahoma City to Denver, Colorado, on the grounds that the defendants would be unable to receive a fair trial in Oklahoma. The investigation led to the separate trials and convictions of McVeigh, Nichols, and Fortier.
Opening statements in McVeigh's trial began on April 24, 1997. The United States was represented by a team of prosecutors led by Joseph Hartzler. In his opening statement Hartzler outlined McVeigh's motivations, and the evidence against him. McVeigh, he said, had developed a hatred of the government during his time in the army, after reading "The Turner Diaries". His beliefs were supported by what he saw as the militia's ideological opposition to increases in taxes and the passage of the Brady Bill, and were further reinforced by the Waco and Ruby Ridge incidents. The prosecution called 137 witnesses, including Michael Fortier and his wife Lori, and McVeigh's sister, Jennifer McVeigh, all of whom testified to confirm McVeigh's hatred of the government and his desire to take militant action against it. Both Fortiers testified that McVeigh had told them of his plans to bomb the Alfred P. Murrah Federal Building. Michael revealed that McVeigh had chosen the date, and Lori testified that she created the false identification card McVeigh used to rent the Ryder truck.
McVeigh was represented by a defense counsel team of six principal attorneys led by Stephen Jones. According to law professor Douglas O. Linder, McVeigh wanted Jones to present a "necessity defense", which would argue that he was in "imminent danger" from the government (that his bombing was intended to prevent future crimes by the government, such as the Waco and Ruby Ridge incidents). McVeigh argued that "imminent" does not mean "immediate": "If a comet is hurtling toward the earth, and it's out past the orbit of Pluto, it's not an immediate threat to Earth, but it is an imminent threat." Despite McVeigh's wishes, Jones instead attempted to discredit the prosecution's case and instill reasonable doubt. Jones also believed that McVeigh was part of a larger conspiracy and sought to present him as "the designated patsy", but McVeigh disagreed with using that rationale for his defense. After a hearing, Judge Matsch independently ruled the evidence concerning a larger conspiracy to be too insubstantial to be admissible. In addition to arguing that the bombing could not have been carried out by two men alone, Jones attempted to create reasonable doubt by arguing that no one had seen McVeigh near the scene of the crime and that the investigation into the bombing had lasted only two weeks. Jones presented 25 witnesses over a one-week period, including Frederic Whitehurst. Although Whitehurst described the FBI's sloppy investigation of the bombing site and its handling of other key evidence, he was unable to point to any direct evidence that he knew to be contaminated.
A key point of contention in the case was the unmatched left leg found after the bombing. Although it was initially believed to be from a male, it was later determined to be that of Lakesha Levy, a female member of the Air Force who was killed in the bombing. Levy's coffin had to be re-opened so that her leg could replace another unmatched leg that had previously been buried with her remains. The unmatched leg had been embalmed, which prevented authorities from being able to extract DNA to determine the leg's owner. Jones argued that the leg could have belonged to another bomber, possibly John Doe No. 2. The prosecution disputed the claim, saying that the leg could have belonged to any one of eight victims who had been buried without a left leg.
Numerous damaging leaks, which appeared to originate from conversations between McVeigh and his defense attorneys, emerged. They included a confession said to have been inadvertently included on a computer disk that was given to the press, which McVeigh believed seriously compromised his chances of getting a fair trial. A gag order was imposed during the trial, prohibiting attorneys on either side from commenting to the press on the evidence, proceedings, or opinions regarding the trial proceedings. The defense was allowed to enter into evidence six pages of a 517-page Justice Department report criticizing the FBI crime laboratory and David Williams, one of the agency's explosives experts, for reaching unscientific and biased conclusions. The report claimed that Williams had worked backward in the investigation rather than basing his determinations on forensic evidence.
The jury deliberated for 23 hours. On June 2, 1997, McVeigh was found guilty on 11 counts of murder and conspiracy. Although the defense argued for a reduced sentence of life imprisonment, McVeigh was sentenced to death. In May 2001, the Justice Department announced that the FBI had mistakenly failed to provide over 3,000 documents to McVeigh's defense counsel, and that the execution would be postponed for one month so the defense could review the documents. On June 6, federal judge Richard Paul Matsch ruled that the documents would not prove McVeigh innocent and ordered the execution to proceed. McVeigh invited conductor David Woodard to perform pre-requiem Mass music on the eve of his execution; while reproachful of McVeigh's capital wrongdoing, Woodard consented. After President George W. Bush approved the execution (McVeigh was a federal inmate, and federal law dictates that the president must approve the execution of federal prisoners), McVeigh was executed by lethal injection at the Federal Correctional Complex, Terre Haute, in Terre Haute, Indiana, on June 11, 2001. The execution was transmitted on closed-circuit television so that the relatives of the victims could witness his death. McVeigh's execution was the first federal execution in 38 years.
Nichols stood trial twice. He was first tried by the federal government in 1997 and found guilty of conspiring to build a weapon of mass destruction and of eight counts of involuntary manslaughter of federal officers. After he was sentenced on June 4, 1998, to life without parole, the State of Oklahoma in 2000 sought a death-penalty conviction on 161 counts of first-degree murder (160 non-federal-agent victims and one fetus). On May 26, 2004, the jury found him guilty on all charges but deadlocked on the issue of sentencing him to death. Presiding Judge Steven W. Taylor then determined the sentence: 161 consecutive life terms without the possibility of parole. In March 2005, FBI investigators, acting on a tip, searched a buried crawl space in Nichols's former house and found additional explosives missed in the preliminary search conducted after Nichols was arrested.
Michael and Lori Fortier were considered accomplices for their foreknowledge of the planning of the bombing. In addition to Michael assisting McVeigh in scouting the federal building, Lori had helped McVeigh laminate the fake driver's license later used to rent the Ryder truck. Michael agreed to testify against McVeigh and Nichols in exchange for a reduced sentence and immunity for his wife. He was sentenced on May 27, 1998, to 12 years in prison and fined $75,000 for failing to warn authorities about the attack. On January 20, 2006, after serving ten and a half years of his sentence, including time already served, Fortier was released for good behavior into the Witness Protection Program and given a new identity.
No "John Doe #2" was ever identified, nothing conclusive was ever reported regarding the owner of the unmatched leg, and the government never openly investigated anyone else in conjunction with the bombing. Although the defense teams in both McVeigh's and Nichols's trials suggested that others were involved, Judge Steven W. Taylor found no credible, relevant, or legally admissible evidence, of anyone other than McVeigh and Nichols having directly participated in the bombing. When McVeigh was asked if there were other conspirators in the bombing, he replied: "You can't handle the truth! Because the truth is, I blew up the Murrah Building, and isn't it kind of scary that one man could wreak this kind of hell?" On the morning of McVeigh's execution a letter was released in which he had written "For those die-hard conspiracy theorists who will refuse to believe this, I turn the tables and say: Show me where I needed anyone else. Financing? Logistics? Specialized tech skills? Brainpower? Strategy? ... Show me where I needed a dark, mysterious 'Mr. X'!"
Within 48 hours of the attack, and with the assistance of the General Services Administration (GSA), the targeted federal offices were able to resume operations in other parts of the city. According to Mark Potok, director of the Intelligence Project at the Southern Poverty Law Center, his organization tracked another 60 smaller-scale domestic terrorism plots from 1995 to 2005. Some of the plots were uncovered and prevented, while others caused infrastructure damage, deaths, or other destruction. Potok revealed that in 1996 there were approximately 858 domestic militias and other antigovernment groups, but that the number had dropped to 152 by 2004. Shortly after the bombing, the FBI hired an additional 500 agents to investigate potential domestic terrorist attacks.
In the wake of the bombing, the U.S. government enacted several pieces of legislation, including the Antiterrorism and Effective Death Penalty Act of 1996. In response to the trials of the conspirators being moved out of state, the Victim Allocution Clarification Act of 1997 was signed on March 20, 1997, by President Clinton to allow the victims of the bombing (and the victims of any other future acts of violence) the right to observe trials and to offer impact testimony in sentencing hearings. On signing the legislation, Clinton stated that "when someone is a victim, he or she should be at the center of the criminal justice process, not on the outside looking in."
In the years since the bombing, scientists, security experts, and the ATF have called on Congress to develop legislation that would require customers to produce identification when purchasing ammonium nitrate fertilizer and sellers to maintain records of its sale. Critics argue that farmers lawfully use large quantities of the fertilizer, and as of 2009, only Nevada and South Carolina require identification from purchasers. In June 1995, Congress enacted legislation requiring chemical taggants to be incorporated into dynamite and other explosives so that a bomb could be traced to its manufacturer. In 2008, Honeywell announced that it had developed a nitrogen-based fertilizer that would not detonate when mixed with fuel oil. The company received assistance from the Department of Homeland Security to develop the fertilizer (Sulf-N 26) for commercial use. It uses ammonium sulfate to make the fertilizer less explosive.
In the decade following the bombing, there was criticism of Oklahoma public schools for not requiring the bombing to be covered in the curriculum of mandatory Oklahoma history classes. "Oklahoma History" is a one-semester course required by state law for graduation from high school; however, the bombing was covered in at most one to two pages in textbooks. The state's PASS standards (Priority Academic Student Skills) did not require that a student learn about the bombing, focusing instead on other subjects such as corruption and the Dust Bowl. On April 6, 2010, House Bill 2750 was signed by Governor Brad Henry, requiring the bombing to be entered into the school curriculum for Oklahoma, U.S., and world history classes.
On the signing, Governor Henry said "Although the events of April 19, 1995 may be etched in our minds and in the minds of Oklahomans who remember that day, we have a generation of Oklahomans that has little to no memory of the events of that day ... We owe it to the victims, the survivors and all of the people touched by this tragic event to remember April 19, 1995 and understand what it meant and still means to this state and this nation."
In the weeks following the bombing, the federal government ordered that all federal buildings in all major cities be surrounded with prefabricated Jersey barriers to prevent similar attacks. As part of a longer-term plan for United States federal building security, most of those temporary barriers have since been replaced with permanent, more aesthetically pleasing security barriers, which are driven deep into the ground for sturdiness. Furthermore, all new federal buildings must now be constructed with truck-resistant barriers and with deep setbacks from surrounding streets to minimize their vulnerability to truck bombs. FBI buildings, for instance, must be set back from traffic. The total cost of improving security in federal buildings across the country in response to the bombing reached over $600 million.
The Murrah Federal Building had been considered so safe that it employed only one security guard. In June 1995, the DOJ issued "Vulnerability Assessment of Federal Facilities", also known as "The Marshals Report", the findings of which resulted in a thorough evaluation of security at all federal buildings and a system for classifying risks at over 1,300 federal facilities owned or leased by the federal government. Federal sites were divided into five security levels, ranging from Level 1 (minimum security needs) to Level 5 (maximum). The Alfred P. Murrah Building was deemed a Level 4 building. Among the 52 security improvements were physical barriers, closed-circuit television monitoring, site planning and access, hardening of building exteriors to increase blast resistance, glazing systems to reduce flying glass shards and fatalities, and structural engineering design to prevent progressive collapse.
The attack led to engineering improvements that allow buildings to better withstand tremendous forces; these improvements were incorporated into the design of Oklahoma City's new federal building. The National Geographic Channel documentary series "Seconds From Disaster" suggested that the Murrah Federal Building would probably have survived the blast had it been built according to California's earthquake design codes.
McVeigh believed that the bomb attack had a positive impact on government policy. In evidence he cited the peaceful resolution of the Montana Freemen standoff in 1996, the government's $3.1 million settlement with Randy Weaver and his surviving children four months after the bombing, and April 2000 statements by Bill Clinton regretting his decision to storm the Branch Davidian compound. McVeigh stated, "Once you bloody the bully's nose, and he knows he's going to be punched again, he's not coming back around."
A variety of conspiracy theories have been proposed about the events surrounding the bombing. Some theories allege that individuals in the government, including President Bill Clinton, knew of the impending bombing and intentionally failed to act. Other theories point to initial reports by local news stations of multiple unexploded bombs within the building itself as evidence of a controlled demolition; following the attack, search and rescue operations at the site were delayed until the area had been declared safe by the Oklahoma City bomb squad and federal authorities. According to both a situation report compiled by the Federal Emergency Management Agency and a memo issued by the United States Atlantic Command the day following the attack, a second bomb located within the building was disarmed while a third was evacuated. Further theories focus on additional conspirators involved with the bombing, or claim that the bombing was carried out by the government in order to frame the militia movement or to provide the impetus for new antiterrorism legislation, using McVeigh as a scapegoat. Still other theories suggest that foreign agents, particularly Islamic terrorists but also the Japanese government or German neo-Nazis, were involved in the bombing. Experts have disputed these theories, and government investigations have been opened at various times to examine them.
Once the explosion took place at the Alfred P. Murrah Building, chaos filled the surrounding streets. Those who were able to flee the Murrah Building did so, while others, trapped in the rubble, awaited the assistance of rescue workers and volunteers. As reported on CNN, other federal buildings in the downtown area were not fully evacuated, but those who were able to leave the city were encouraged to do so. This traffic, along with people leaving the area around the Murrah Building, clogged streets and delayed the arrival of rescue crews and relief agencies.
Several agencies, including the Federal Highway Administration and the City of Oklahoma City, have evaluated the emergency response to the bombing and proposed plans for a better response, in addition to addressing issues that hindered a smooth rescue effort. Because of the crowded streets and the number of response agencies sent to the location, communication between government branches and rescue workers was muddled. Groups were unaware of the operations others were conducting, creating strife and delays in the search and rescue process. In its After Action Report, the City of Oklahoma City concluded that better communication and a single base of operations for each agency would improve aid to those in disastrous situations.
Following the events of September 11, 2001, and with consideration of other events including the Oklahoma City bombing, the Federal Highway Administration proposed that major metropolitan areas create evacuation routes for civilians. These highlighted routes would provide paths for emergency crews and government agencies to enter a disaster area more quickly. By helping civilians out and rescue workers in, such routes were intended to reduce the number of casualties.
For two years after the bombing, the only memorials to the victims were plush toys, crucifixes, letters, and other personal items left by thousands of people at a security fence surrounding the site of the building. Many suggestions for suitable memorials were sent to Oklahoma City, but an official memorial planning committee was not set up until early 1996, when the 350-member Murrah Federal Building Memorial Task Force was formed to formulate plans for a memorial to commemorate the victims of the bombing. On July 1, 1997, the winning design was chosen unanimously by a 15-member panel from 624 submissions. The memorial was built at a cost of $29 million, raised from public and private funds. The national memorial is part of the National Park System as an affiliated area and was designed by Oklahoma City architects Hans and Torrey Butzer and Sven Berg. It was dedicated by President Clinton on April 19, 2000, exactly five years after the bombing. Within the first year, it had 700,000 visitors.
The memorial includes a reflecting pool flanked by two large gates, one inscribed with the time 9:01, the other with 9:03, the pool representing the moment of the blast. On the south end of the memorial is a field of symbolic bronze and stone chairs – one for each person lost, arranged according to what floor of the building they were on. The chairs represent the empty chairs at the dinner tables of the victims' families. The seats of the children killed are smaller than those of the adults lost. On the opposite side is the "survivor tree", part of the building's original landscaping that survived the blast and fires that followed it. The memorial left part of the foundation of the building intact, allowing visitors to see the scale of the destruction. Part of the chain link fence put in place around the site of the blast, which had attracted over 800,000 personal items of commemoration later collected by the Oklahoma City Memorial Foundation, is now on the western edge of the memorial. North of the memorial is the Journal Record Building, which now houses the Oklahoma City National Memorial Museum, an affiliate of the National Park Service. The building also contains the National Memorial Institute for the Prevention of Terrorism, a law enforcement training center.
St. Joseph's Old Cathedral, one of the first brick-and-mortar churches in the city, is located to the southwest of the memorial and was severely damaged by the blast. To commemorate the event, a statue and sculpture work entitled "And Jesus Wept" was installed adjacent to the Oklahoma City National Memorial. The work was dedicated in May 1997 and the church was rededicated on December 1 of the same year. The church, the statue, and the sculpture are not part of the Oklahoma City memorial.
An observance is held each year to remember the victims of the bombing. An annual marathon draws thousands, and allows runners to sponsor a victim of the bombing. For the tenth anniversary of the bombing, the city held 24 days of activities, including a week-long series of events known as the National Week of Hope from April 17 to 24, 2005. As in previous years, the tenth anniversary of the bombing observances began with a service at 9:02 am, marking the moment the bomb went off, with the traditional 168 seconds of silence – one second for each person who was killed as a result of the blast. The service also included the traditional reading of the names, read by children to symbolize the future of Oklahoma City.
Vice President Dick Cheney, former President Clinton, Oklahoma Governor Brad Henry, Frank Keating, Governor of Oklahoma at the time of the bombing, and other political dignitaries attended the service and gave speeches in which they emphasized that "goodness overcame evil". The relatives of the victims and the survivors of the blast also made note of it during the service at First United Methodist Church in Oklahoma City.
President George W. Bush made note of the anniversary in a written statement, part of which echoed his remarks on the execution of Timothy McVeigh in 2001: "For the survivors of the crime and for the families of the dead the pain goes on." Bush was invited but did not attend the service because he was en route to Springfield, Illinois, to dedicate the Abraham Lincoln Presidential Library and Museum. Cheney attended the service in his place.
Due to the COVID-19 pandemic, the memorial site was closed to the public on April 19, 2020, and local television networks broadcast pre-recorded remembrances to mark the 25th anniversary. | https://en.wikipedia.org/wiki?curid=22467
Osama bin Laden
Osama bin Mohammed bin Awad bin Laden (March 10, 1957 – May 2, 2011), also rendered Usama bin Ladin, was a founder of the pan-Islamic militant organization al-Qaeda. He was a Saudi Arabian citizen until 1994 (stateless thereafter) and a member of the wealthy bin Laden family.
Bin Laden's father was Mohammed bin Awad bin Laden, a Saudi millionaire from Hadhramaut, Yemen, and the founder of the construction company Saudi Binladin Group. His mother, Alia Ghanem, was from a secular middle-class family based in Latakia, Syria. He was born in Saudi Arabia and studied at university in the country until 1979, when he joined Mujahideen forces in Pakistan fighting against the Soviet Union in Afghanistan. He helped to fund the Mujahideen by funneling arms, money and fighters from the Arab world into Afghanistan, and gained popularity among many Arabs. In 1988, he formed al-Qaeda. He was banished from Saudi Arabia in 1992, and shifted his base to Sudan, until U.S. pressure forced him to leave Sudan in 1996. After establishing a new base in Afghanistan, he declared a war against the United States, initiating a series of bombings and related attacks. Bin Laden was on the American Federal Bureau of Investigation's (FBI) lists of Ten Most Wanted Fugitives and Most Wanted Terrorists for his involvement in the 1998 U.S. embassy bombings.
Bin Laden is best known for his role in masterminding the September 11 attacks, which resulted in the deaths of nearly 3,000 people and prompted the United States to initiate the War on Terror. He subsequently became the subject of a decade-long international manhunt. From 2001 to 2011, bin Laden was a major target of the United States, as the FBI offered a $25 million bounty in its search for him. On May 2, 2011, bin Laden was shot and killed by US Navy SEALs inside a private residential compound in Abbottabad, Pakistan, where he lived with a local family from Waziristan. The covert operation was conducted by members of the United States Naval Special Warfare Development Group (SEAL Team Six) and Central Intelligence Agency SAD/SOG operators on the orders of U.S. President Barack Obama. Under his leadership, the al-Qaeda organization was responsible for many other mass-casualty attacks worldwide, in addition to the September 11 attacks in the United States.
There is no universally accepted standard for transliterating Arabic words and Arabic names into English; however, bin Laden's name is most frequently rendered "Osama bin Laden". The FBI and Central Intelligence Agency (CIA), as well as other U.S. governmental agencies, have used either "Usama bin Laden" or "Usama bin Ladin". Less common renderings include "Ussamah bin Ladin" and, in the French-language media, "Oussama ben Laden". Other spellings include "Binladen" or, as used by his family in the West, "Binladin". The decapitalization of "bin" is based on the convention of leaving short prepositions, articles, and patronymics uncapitalized in surnames; the nasab "bin" means "son of". The spellings with "o" and "e" come from a Persian-influenced pronunciation also used in Afghanistan, where bin Laden spent many years.
Osama bin Laden's full name, Osama bin Mohammed bin Awad bin Laden, means "Osama, son of Mohammed, son of Awad, son of Laden". "Mohammed" refers to bin Laden's father Mohammed bin Laden; "Awad" refers to his grandfather, Awad bin Aboud bin Laden, a Kindite Hadhrami tribesman; "Laden" refers not to bin Laden's great-grandfather, who was named Aboud, but to Aboud's father, Laden Ali al-Qahtani.
The Arabic linguistic convention would be to refer to him as "Osama" or "Osama bin Laden", not "bin Laden" alone, as "bin Laden" is a patronymic, not a surname in the Western manner. According to bin Laden's son Omar bin Laden, the family's hereditary surname is "al-Qahtani" (, "āl-Qaḥṭānī"), but bin Laden's father, Mohammed bin Laden, never officially registered the name.
Osama bin Laden had also assumed the "kunyah" "Abū 'Abdāllāh" ("father of Abdallah"). His admirers have referred to him by several nicknames, including the "Prince" or "Emir" (الأمير, "al-Amīr"), the "Sheik" (الشيخ, "aš-Šaykh"), the "Jihadist Sheik" or "Sheik al-Mujahid" (شيخ المجاهد, "Šaykh al-Mujāhid"), "Hajj" (حج, "Ḥajj"), and the "Director". The word "usāmah" (أسامة) means "lion", earning him the nicknames "Lion" and "Lion Sheik".
Osama bin Mohammed bin Awad bin Laden was born in Riyadh, Saudi Arabia, a son of Yemeni Mohammed bin Awad bin Laden, a millionaire construction magnate with close ties to the Saudi royal family, and Mohammed bin Laden's tenth wife, Syrian Hamida al-Attas (then called Alia Ghanem). In a 1998 interview, bin Laden gave his birth date as March 10, 1957.
Mohammed bin Laden divorced Hamida soon after Osama bin Laden was born. Mohammed recommended Hamida to Mohammed al-Attas, an associate. Al-Attas married Hamida in the late 1950s or early 1960s, and they are still together. The couple had four children, and bin Laden lived in the new household with three half-brothers and one half-sister. The bin Laden family made $5 billion in the construction industry, of which Osama later inherited around $25–30 million.
Bin Laden was raised as a devout Sunni Muslim. From 1968 to 1976, he attended the élite secular Al-Thager Model School. He studied economics and business administration at King Abdulaziz University. Some reports suggest he earned a degree in civil engineering in 1979, or a degree in public administration in 1981. Bin Laden attended an English-language course in Oxford, England, in 1971. One source described him as "hard working"; another said he left university during his third year without completing a college degree. At university, bin Laden's main interest was religion; he was involved in both "interpreting the Quran and jihad" and charitable work. Other interests included writing poetry; reading, with the works of Field Marshal Bernard Montgomery and Charles de Gaulle said to be among his favorites; black stallions; and association football, in which he enjoyed playing at centre forward and followed the English club Arsenal.
At age 17 in 1974, bin Laden married Najwa Ghanem at Latakia, Syria; they were separated before September 11, 2001. Bin Laden's other known wives were Khadijah Sharif (married 1983, divorced 1990s); Khairiah Sabar (married 1985); Siham Sabar (married 1987); and Amal al-Sadah (married 2000). Some sources also list a sixth wife, name unknown, whose marriage to bin Laden was annulled soon after the ceremony. Bin Laden fathered between 20 and 26 children with his wives. Many of bin Laden's children fled to Iran following the September 11 attacks, and Iranian authorities reportedly continued to control their movements.
Nasser al-Bahri, who was bin Laden's personal bodyguard from 1997 to 2001, details bin Laden's personal life in his memoir. He describes him as a frugal man and strict father, who enjoyed taking his large family on shooting trips and picnics in the desert.
Bin Laden's father Mohammed died in 1967 in an airplane crash in Saudi Arabia when his American pilot Jim Harrington misjudged a landing. Bin Laden's eldest half-brother, Salem bin Laden, the subsequent head of the bin Laden family, was killed in 1988 near San Antonio, Texas, in the United States, when he accidentally flew a plane into power lines.
The FBI described bin Laden as an adult as tall and thin, between 6 ft 4 in and 6 ft 6 in (193 and 198 cm) in height and weighing about 160 pounds (73 kg), although the author Lawrence Wright, in his Pulitzer Prize-winning book on al-Qaeda, "The Looming Tower", writes that a number of bin Laden's close friends confirmed that reports of his height were greatly exaggerated, and that bin Laden was actually "just over" 6 feet (1.8 m) tall. Eventually, after his death, he was measured to be around 6 ft 4 in (193 cm). Bin Laden had an olive complexion and was left-handed, usually walking with a cane. He wore a plain white keffiyeh; he had stopped wearing the traditional Saudi male keffiyeh and instead wore the traditional Yemeni male style. Bin Laden was described as soft-spoken and mild-mannered in demeanor.
A major component of bin Laden's ideology was the concept that civilians from enemy countries, including women and children, were legitimate targets for jihadists to kill. According to former CIA analyst Michael Scheuer, who led the CIA's hunt for Osama bin Laden, the al-Qaeda leader was motivated by a belief that U.S. foreign policy has oppressed, killed, or otherwise harmed Muslims in the Middle East, condensed in the phrase, "They hate us for what we do, not who we are." Nonetheless, bin Laden criticized the U.S. for its secular form of governance, calling upon Americans to convert to Islam and "reject the immoral acts of fornication, homosexuality, intoxicants, gambling, and usury", in a letter published in late 2002.
Bin Laden believed that the Islamic world was in crisis and that the complete restoration of Sharia law would be the only way to "set things right" in the Muslim world. He opposed such alternatives as secular government, as well as "pan-Arabism, socialism, communism, democracy." He subscribed to the Athari (literalist) school of Islamic theology.
These beliefs, in conjunction with violent "jihad", have sometimes been called Qutbism after being promoted by Sayyid Qutb. Bin Laden believed that Afghanistan, under the rule of Mullah Omar's Taliban, was "the only Islamic country" in the Muslim world. Bin Laden consistently dwelt on the need for violent jihad to right what he believed were injustices against Muslims perpetrated by the United States and sometimes by other non-Muslim states. He also called for the elimination of Israel, and called upon the United States to withdraw all of its civilians and military personnel from the Middle East, as well as from every Islamic country of the world.
His viewpoints and methods of achieving them led to his being designated as a terrorist by scholars; by journalists from "The New York Times", the BBC, and the Qatari news station Al Jazeera; and by analysts such as Peter Bergen, Michael Scheuer, Marc Sageman, and Bruce Hoffman. He was indicted on terrorism charges by law enforcement agencies in Madrid, New York City, and Tripoli.
In 1997, he condemned the United States for its hypocrisy in not labeling the bombing of Hiroshima as terrorism. In November 2001, he maintained that revenge killing of Americans was justified because he claimed that Islamic law allows believers to attack invaders even when the enemy uses human shields. However, according to Rodenbeck, "this classical position was originally intended as a legal justification for the accidental killings of civilians under very limited circumstances — not as a basis for the intentional targeting of noncombatants." A few months later in a 2002 letter, he made no mention of this justification but claimed "that since the United States is a democracy, all citizens bear responsibility for its government's actions, and civilians are therefore fair targets."
Bin Laden's overall strategy for achieving his goals against much larger enemies such as the Soviet Union and United States was to lure them into a long war of attrition in Muslim countries, attracting large numbers of jihadists who would never surrender. He believed this would lead to economic collapse of the enemy countries, by "bleeding" them dry. Al-Qaeda manuals express this strategy. In a 2004 tape broadcast by Al Jazeera, bin Laden spoke of "bleeding America to the point of bankruptcy".
A number of errors and inconsistencies in bin Laden's arguments have been alleged by authors such as Max Rodenbeck and Noah Feldman. He invoked democracy both as an example of the deceit and fraudulence of the Western political system—American law being "the law of the rich and wealthy"—and as the reason civilians are responsible for their government's actions and so can be lawfully punished by death. He denounced democracy as a "religion of ignorance" that violates Islam by issuing man-made laws, but in a later statement compared the Western democracy of Spain favorably to the Muslim world—because "the ruler there is accountable." Rodenbeck states, "Evidently, [bin Laden] has never heard theological justifications for democracy, based on the notion that the will of the people must necessarily reflect the will of an all-knowing God."
Bin Laden was heavily anti-Semitic, stating that most of the negative events that occurred in the world were the direct result of Jewish actions. In a December 1998 interview with Pakistani journalist Rahimullah Yusufzai, bin Laden stated that Operation Desert Fox was proof that Israeli Jews controlled the governments of the United States and United Kingdom, directing them to kill as many Muslims as they could. In a letter released in late 2002, he stated that Jews controlled the civilian media outlets, politics, and economic institutions of the United States. In a May 1998 interview with ABC's John Miller, bin Laden stated that the Israeli state's ultimate goal was to annex the Arabian Peninsula and the Middle East into its territory and enslave its peoples, as part of what he called a "Greater Israel". He stated that Jews and Muslims could never get along and that war was "inevitable" between them, and further accused the U.S. of stirring up anti-Islamic sentiment. He claimed that the U.S. State Department and U.S. Department of Defense were controlled by Jews, for the sole purpose of serving the Israeli state's goals. He often delivered warnings against alleged Jewish conspiracies: "These Jews are masters of usury and leaders in treachery. They will leave you nothing, either in this world or the next." Shia Muslims have been listed along with "heretics, ... America, and Israel" as the four principal "enemies of Islam" at ideology classes of bin Laden's al-Qaeda organization.
Bin Laden was opposed to music on religious grounds, and his attitude towards technology was mixed. He was interested in "earth-moving machinery and genetic engineering of plants" on the one hand, but rejected "chilled water" on the other.
Bin Laden also believed climate change to be a serious threat and penned a letter urging Americans to work with President Barack Obama to make "a rational decision to save humanity from the harmful gases that threaten its destiny".
After leaving college in 1979, bin Laden went to Pakistan, joined Abdullah Azzam and used money and machinery from his own construction company to help the Mujahideen resistance in the Soviet–Afghan War. He later told a journalist: "I felt outraged that an injustice had been committed against the people of Afghanistan." Under the CIA's Operation Cyclone from 1979 to 1989, the United States and Saudi Arabia provided $40 billion worth of financial aid and weapons to almost 100,000 Mujahideen and "Afghan Arabs" from forty Muslim countries through Pakistan's ISI. British journalist Jason Burke wrote that "bin Laden's Office of Services, set up to recruit overseas for the war, received some US cash." Bin Laden met and built relations with Hamid Gul, a three-star general in the Pakistani army and head of the ISI. Although the United States provided the money and weapons, the training of militant groups was handled entirely by the Pakistani Armed Forces and the ISI.
By 1984, bin Laden and Azzam had established Maktab al-Khidamat, which funneled money, arms and fighters from around the Arab world into Afghanistan. Through al-Khidamat, bin Laden's inherited family fortune paid for air tickets and accommodation, covered paperwork with Pakistani authorities and provided other such services for the jihadi fighters. Bin Laden established camps inside Khyber Pakhtunkhwa in Pakistan and trained volunteers from across the Muslim world to fight against the Soviet-backed regime, the Democratic Republic of Afghanistan. Between 1986 and 1987, bin Laden set up a base in eastern Afghanistan for several dozen of his own Arab soldiers. From this base, bin Laden participated in some combat activity against the Soviets, such as the Battle of Jaji in 1987. Despite its minor strategic significance, the battle was lionized in the mainstream Arab press, and it was during this time that he became idolised by many Arabs.
In May 1988, in response to rumours that Shias had massacred Sunnis, large numbers of Shias from in and around Gilgit, Pakistan, were killed in a massacre; Shia civilians were also subjected to rape.
The massacre is alleged by B. Raman, a founder of India's Research and Analysis Wing, to have been in response to a revolt by the Shias of Gilgit during the rule of military dictator Zia-ul Haq. He alleged that the Pakistan Army induced Osama bin Laden to lead an armed group of Sunni tribals, from Afghanistan and the North-West Frontier Province, into Gilgit and its surrounding areas to suppress the revolt.
By 1988, bin Laden had split from Maktab al-Khidamat. While Azzam acted as support for Afghan fighters, bin Laden wanted a more military role. One of the main points leading to the split and the creation of al-Qaeda was Azzam's insistence that Arab fighters be integrated among the Afghan fighting groups instead of forming a separate fighting force. Notes of a meeting of bin Laden and others on August 20, 1988 indicate that al-Qaeda was a formal group by that time: "Basically an organized Islamic faction, its goal is to lift the word of God, to make his religion victorious." A list of requirements for membership itemized the following: listening ability, good manners, obedience, and making a pledge ("bayat") to follow one's superiors.
According to Wright, the group's real name was not used in public pronouncements because "its existence was still a closely held secret". His research suggests that al-Qaeda was formed at an August 11, 1988, meeting between "several senior leaders" of Egyptian Islamic Jihad, Abdullah Azzam, and bin Laden, where it was agreed to join bin Laden's money with the expertise of the Islamic Jihad organization and take up the jihadist cause elsewhere after the Soviets withdrew from Afghanistan.
Following the Soviet Union's withdrawal from Afghanistan in February 1989, Osama bin Laden returned to Saudi Arabia as a hero of jihad. Along with his Arab legion, he was thought to have "brought down the mighty superpower" of the Soviet Union. After his return to Saudi Arabia, bin Laden engaged in opposition movements to the Saudi monarchy while working for his family business. He was also angered by the internecine tribal fighting among the Afghans.
The Iraqi invasion of Kuwait under Saddam Hussein on August 2, 1990, put the Saudi kingdom and the royal family at risk. With Iraqi forces on the Saudi border, Saddam's appeal to pan-Arabism was potentially inciting internal dissent. Bin Laden met with King Fahd and Saudi Defense Minister Sultan, telling them not to depend on non-Muslim assistance from the United States and others, and offering to help defend Saudi Arabia with his Arab legion. Bin Laden's offer was rebuffed, and the Saudi monarchy invited the deployment of U.S. forces in Saudi territory. Bin Laden publicly denounced Saudi dependence on the U.S. military, arguing that the two holiest shrines of Islam, Mecca and Medina, the cities in which the Prophet Muhammad received and recited Allah's message, should be defended only by Muslims. Bin Laden's criticism of the Saudi monarchy led them to try to silence him. The U.S. 82nd Airborne Division landed in the north-eastern Saudi city of Dhahran and was deployed in the desert barely 400 miles from Medina.
Meanwhile, on November 8, 1990, the FBI raided the New Jersey home of El Sayyid Nosair, an associate of al-Qaeda operative Ali Mohamed. They discovered copious evidence of terrorist plots, including plans to blow up New York City skyscrapers. This marked the earliest discovery of al-Qaeda terrorist plans outside of Muslim countries. Nosair was eventually convicted in connection to the 1993 World Trade Center bombing, and later admitted guilt for the murder of Rabbi Meir Kahane in New York City on November 5, 1990.
In 1991, bin Laden was expelled from Saudi Arabia by its regime after repeatedly criticizing the Saudi alliance with the United States. He and his followers moved first to Afghanistan and then relocated to Sudan by 1992, in a deal brokered by Ali Mohamed. Bin Laden's personal security detail consisted of "bodyguards ... personally selected by him." Their "arsenal included SAM-7 and Stinger missiles, AK-47s, RPGs, and PK machine guns (similar to an M60)." Meanwhile, in March–April 1992, bin Laden tried to play a pacifying role in the escalating civil war in Afghanistan, by urging warlord Gulbuddin Hekmatyar to join the other mujahideen leaders negotiating a coalition government instead of trying to conquer Kabul for himself.
U.S. intelligence monitored bin Laden in Sudan, using operatives who passed by his compound daily to photograph activity there, and using an intelligence safe house and signals intelligence to surveil him and record his movements.
In Sudan, bin Laden established a new base for Mujahideen operations in Khartoum. He bought a house on Al-Mashtal Street in the affluent Al-Riyadh quarter and a retreat at Soba on the Blue Nile. During his time in Sudan, he invested heavily in infrastructure, agriculture and businesses. He was the Sudan agent for the British firm Hunting Surveys, and built roads using the same bulldozers he had employed to construct mountain tracks in Afghanistan. Many of his labourers were the same fighters who had been his comrades in the war against the Soviet Union. He was generous to the poor and popular with the people. He continued to criticize King Fahd of Saudi Arabia. In response, in 1994 Fahd stripped bin Laden of his Saudi citizenship and persuaded his family to cut off his $7 million a year stipend.
By that time, bin Laden was being linked with Egyptian Islamic Jihad (EIJ), which made up the core of al-Qaeda. In 1995 the EIJ attempted to assassinate the Egyptian President Hosni Mubarak. The attempt failed, and Sudan expelled the EIJ.
The U.S. State Department accused Sudan of being a "sponsor of international terrorism" and bin Laden of operating "terrorist training camps in the Sudanese desert". According to Sudan officials, however, this stance became obsolete as the Islamist political leader Hassan al-Turabi lost influence in their country. The Sudanese wanted to engage with the U.S. but American officials refused to meet with them even after they had expelled bin Laden. It was not until 2000 that the State Department authorized U.S. intelligence officials to visit Sudan.
The 9/11 Commission Report states:
In late 1995, when Bin Laden was still in Sudan, the State Department and the Central Intelligence Agency (CIA) learned that Sudanese officials were discussing with the Saudi government the possibility of expelling Bin Laden. CIA paramilitary officer Billy Waugh tracked down Bin Laden in Sudan and prepared an operation to apprehend him, but was denied authorization. U.S. Ambassador Timothy Carney encouraged the Sudanese to pursue this course. The Saudis, however, did not want Bin Laden, giving as their reason their revocation of his citizenship. Sudan's minister of defense, Fatih Erwa, has claimed that Sudan offered to hand Bin Laden over to the United States. The Commission has found no credible evidence that this was so. Ambassador Carney had instructions only to push the Sudanese to expel Bin Laden. Ambassador Carney had no legal basis to ask for more from the Sudanese since, at the time, there was no indictment outstanding against Bin Laden in any country.
The 9/11 Commission Report further states:
In February 1996, Sudanese officials began approaching officials from the United States and other governments, asking what actions of theirs might ease foreign pressure. In secret meetings with Saudi officials, Sudan offered to expel Bin Laden to Saudi Arabia and asked the Saudis to pardon him. U.S. officials became aware of these secret discussions, certainly by March. Saudi officials apparently wanted Bin Laden expelled from Sudan. They had already revoked his citizenship, however, and would not tolerate his presence in their country. Also Bin Laden may have no longer felt safe in Sudan, where he had already escaped at least one assassination attempt that he believed to have been the work of the Egyptian or Saudi regimes, and paid for by the CIA.
Due to the increasing pressure on Sudan from Saudi Arabia, Egypt, and the United States, bin Laden was permitted to leave for a country of his choice. He chose to return to Jalalabad, Afghanistan aboard a chartered flight on May 18, 1996; there he forged a close relationship with Mullah Mohammed Omar. According to the 9/11 Commission, the expulsion from Sudan significantly weakened bin Laden and his organization. Some African intelligence sources have argued that the expulsion left bin Laden without an option other than becoming a full-time radical, and that most of the 300 Afghan Arabs who left with him subsequently became terrorists. Various sources report that bin Laden lost between $20 million and $300 million in Sudan; the government seized his construction equipment, and bin Laden was forced to liquidate his businesses, land, and even his horses.
In August 1996, bin Laden declared war against the United States. Despite the assurance of President George H. W. Bush to King Fahd in 1990 that all U.S. forces based in Saudi Arabia would be withdrawn once the Iraqi threat had been dealt with, by 1996 the Americans were still there. Bush cited the necessity of dealing with the remnants of Saddam's regime (which Bush had chosen not to destroy). Bin Laden's view was that "the 'evils' of the Middle East arose from America's attempt to take over the region and from its support for Israel. Saudi Arabia had been turned into 'an American colony'."
He issued a fatwā against the United States, which was first published in "Al-Quds Al-Arabi", a London-based newspaper. It was entitled "Declaration of War against the Americans Occupying the Land of the Two Holy Places." Saudi Arabia is sometimes called "The Land of the Two Holy Mosques" in reference to Mecca and Medina, the two holiest places in Islam. The reference to "occupation" in the fatwā referred to US forces based in Saudi Arabia for the purpose of controlling air space in Iraq, known as Operation Southern Watch.
In Afghanistan, bin Laden and al-Qaeda raised money from "donors from the days of the Soviet jihad", and from the Pakistani ISI to establish more training camps for Mujahideen fighters. Bin Laden effectively took over Ariana Afghan Airlines, which ferried Islamic militants, arms, cash and opium through the United Arab Emirates and Pakistan, as well as provided false identifications to members of bin Laden's terrorist network. The arms smuggler Viktor Bout helped to run the airline, maintaining planes and loading cargo. Michael Scheuer, head of the CIA's bin Laden unit, concluded that Ariana was being used as a "terrorist taxi service".
It is believed that the first bombing attack involving bin Laden was the December 29, 1992, bombing of the Gold Mihor Hotel in Aden in which two people were killed.
It was after this bombing that al-Qaeda was reported to have developed its justification for the killing of innocent people. According to a fatwa issued by Mamdouh Mahmud Salim, the killing of someone standing near the enemy is justified because any innocent bystander will find a proper reward in death, going to "Jannah" (Paradise) if they were good Muslims and to "Jahannam" (hell) if they were bad or non-believers. The fatwa was issued to al-Qaeda members but not the general public.
In the 1990s, bin Laden's al-Qaeda assisted jihadis financially and sometimes militarily in Algeria, Egypt and Afghanistan. In 1992 or 1993, bin Laden sent an emissary, Qari el-Said, with $40,000 to Algeria to aid the Islamists and urge war rather than negotiation with the government. Their advice was heeded. The war that followed caused the deaths of 150,000–200,000 Algerians and ended with the Islamist surrender to the government. In January 1996, the CIA launched a new unit of its Counterterrorism Center (CTC) called Bin Laden Issue Station, code named "Alec Station," to track and to carry out operations against Bin Laden's activities. Bin Laden Issue Station was headed by Michael Scheuer, a veteran of the Islamic Extremism Branch of the CTC.
It has been claimed that bin Laden funded the Luxor massacre of November 17, 1997, which killed 62 civilians, and outraged the Egyptian public. In mid-1997, the Northern Alliance threatened to overrun Jalalabad, causing bin Laden to abandon his Najim Jihad compound and move his operations to Tarnak Farms in the south.
Another successful attack was carried out in the city of Mazar-i-Sharif in Afghanistan. Bin Laden helped cement his alliance with the Taliban by sending several hundred Afghan Arab fighters to help the Taliban kill between five and six thousand Hazaras while overrunning the city.
In February 1998, Osama bin Laden and Ayman al-Zawahiri co-signed a "fatwa" in the name of the World Islamic Front for Jihad Against Jews and Crusaders, which declared the killing of North Americans and their allies an "individual duty for every Muslim" to "liberate the al-Aqsa Mosque (in Jerusalem) and the holy mosque (in Mecca) from their grip". At the public announcement of the fatwa bin Laden announced that North Americans are "very easy targets". He told the attending journalists, "You will see the results of this in a very short time."
Bin Laden and al-Zawahiri organized an al-Qaeda congress on June 24, 1998. The 1998 U.S. Embassy bombings were a series of attacks that occurred on August 7, 1998, in which hundreds of people were killed in simultaneous truck bomb explosions at the United States embassies in the major East African cities of Dar es Salaam, Tanzania and Nairobi, Kenya. The attacks, which were linked to local members of the Egyptian Islamic Jihad, brought Osama bin Laden and Ayman al-Zawahiri to the attention of the United States public for the first time. Al-Qaeda later claimed responsibility for the bombings.
In retaliation for the embassy bombings, President Bill Clinton ordered a series of cruise missile strikes on bin Laden-related targets in Sudan and Afghanistan on August 20, 1998. In December 1998, the Director of Central Intelligence Counterterrorist Center reported to President Clinton that al-Qaeda was preparing for attacks in the United States of America, including the training of personnel to hijack aircraft. On June 7, 1999, the U.S. Federal Bureau of Investigation placed bin Laden on its Ten Most Wanted list.
At the end of 2000, Richard Clarke revealed that Islamic militants headed by bin Laden had planned a triple attack on January 3, 2000, which would have included bombings in Jordan of the Radisson SAS Hotel in Amman and of tourists at Mount Nebo and a site on the Jordan River, the sinking of the destroyer USS The Sullivans in Yemen, and an attack on a target within the United States. The plan was foiled by the arrest of the Jordanian terrorist cell, the sinking of the explosive-filled skiff intended to target the destroyer, and the arrest of Ahmed Ressam.
A former U.S. State Department official in October 2001 described Bosnia and Herzegovina as a safe haven for terrorists, and asserted that militant elements of the former Sarajevo government were protecting extremists, some with ties to Osama bin Laden. In 1997, "Rzeczpospolita", one of the largest Polish daily newspapers, had reported that intelligence services of the Nordic-Polish SFOR Brigade suspected that a center for training terrorists from Islamic countries was located in the Bocina Donja village near Maglaj in Bosnia and Herzegovina. In 1992, hundreds of volunteers joined an "all-mujahedeen unit" called El Moujahed in an abandoned hillside factory, a compound with a hospital and prayer hall.
According to Middle East intelligence reports, bin Laden financed small convoys of recruits from the Arab world through his businesses in Sudan. Among them was Karim Said Atmani, who was identified by authorities as the document forger for a group of Algerians accused of plotting bombings in the United States. He is a former roommate of Ahmed Ressam, the man arrested at the Canada–United States border in mid-December 1999 with a car full of nitroglycerin and bomb-making materials. He was convicted by a French court of colluding with Osama bin Laden.
A Bosnian government search of passport and residency records, conducted at the urging of the United States, revealed other former Mujahideen who were linked to the same Algerian group or to other groups of suspected terrorists, and had lived in the area north of Sarajevo, the capital, in the past few years. Khalil al-Deek was arrested in Jordan in late December 1999 on suspicion of involvement in a plot to blow up tourist sites. A second man with Bosnian citizenship, Hamid Aich, lived in Canada at the same time as Atmani and worked for a charity associated with Osama bin Laden. In its June 26, 1997, report on the Al Khobar bombing in Saudi Arabia, "The New York Times" noted that those arrested confessed to serving with Bosnian Muslim forces. Further, the captured men also admitted to ties with Osama bin Laden.
In 1999, the press reported that bin Laden and his Tunisian assistant Mehrez Aodouni were granted citizenship and Bosnian passports in 1993 by the government in Sarajevo. This information was denied by the Bosnian government following the September 11 attacks, but it was later found that Aodouni was arrested in Turkey and that at that time he possessed the Bosnian passport. Following this revelation, a new explanation was given that bin Laden "did not personally collect his Bosnian passport" and that officials at the Bosnian embassy in Vienna, which issued the passport, could not have known who bin Laden was at the time. The Bosnian daily "Oslobođenje" published in 2001 that three men, believed to be linked to bin Laden, were arrested in Sarajevo in July 2001. The three, one of whom was identified as Imad El Misri, were Egyptian nationals. The paper said that two of the suspects were holding Bosnian passports.
Fatos Klosi, the head of the Albanian intelligence service SHISH, said that bin Laden was running a terror network in Albania, reportedly set up in 1994, to take part in the Kosovo War under the guise of a humanitarian organisation. Claude Kader, a member of the network, testified to its existence during his trial. By 1998, four members of Egyptian Islamic Jihad (EIJ) had been arrested in Albania and extradited to Egypt. The mujahideen fighters were organised by Islamic leaders in Western Europe allied to bin Laden and al-Zawahiri.
During his trial at the International Criminal Tribunal for the Former Yugoslavia, former Serbian President Slobodan Milošević quoted from a purported FBI report stating that bin Laden's al-Qaeda had a presence in the Balkans and aided the Kosovo Liberation Army. He claimed bin Laden had used Albania as a "launchpad for violence" in the region and in Europe. He also claimed that they had informed Richard Holbrooke that the KLA was being aided by al-Qaeda, but that the US decided to cooperate with the KLA, and thus indirectly with bin Laden, despite the earlier 1998 United States embassy bombings. Milošević argued that the United States had thereby aided the terrorists, culminating in its backing of the 1999 NATO bombing of Yugoslavia during the Kosovo War.
After his initial denial,
Ontogeny
Ontogeny (also ontogenesis or morphogenesis) is the origination and development of an organism (both physical and psychological, e.g., moral development), usually from the time of fertilization of the egg to adult. The term can also be used to refer to the study of the entirety of an organism's lifespan.
Ontogeny is the developmental history of an organism within its own lifetime, as distinct from phylogeny, which refers to the evolutionary history of a species. In practice, writers on evolution often speak of species as "developing" traits or characteristics. This can be misleading. While developmental (i.e., ontogenetic) processes can influence subsequent evolutionary (e.g., phylogenetic) processes (see evolutionary developmental biology and recapitulation theory), individual organisms develop (ontogeny), while species evolve (phylogeny).
Ontogeny, embryology and developmental biology are closely related studies and those terms are sometimes used interchangeably. The term ontogeny has also been used in cell biology to describe the development of various cell types within an organism.
Ontogeny is a useful field of study in many disciplines, including developmental biology, developmental psychology, developmental cognitive neuroscience, and developmental psychobiology.
Ontogeny is also a concept used in anthropology as "the process through which each of us embodies the history of our own making."
The word "ontogeny" comes from the Greek ὄν, "on" (gen. ὄντος, "ontos"), i.e. "being; that which is", which is the present participle of the verb εἰμί, "eimi", i.e. "to be, I am", and from the suffix "-geny" from the Greek -γένεια -"geneia", which expresses the concept of "mode of production".
A seminal paper by Niko Tinbergen named ontogeny as one of the four primary questions of biology, along with Huxley's three others: causation, survival value and evolution. Tinbergen emphasized that the change of behavioral "machinery" during development was distinct from the change in behavior during development: "We can conclude that the thrush itself, i.e. its behavioral machinery, has changed only if the behavior change occurred while the environment was held constant...When we turn from description to causal analysis, and ask in what way the observed change in behavior machinery has been brought about, the natural first step is to try and distinguish between environmental influences and those within the animal...In ontogeny the conclusion that a certain change is internally controlled (is "innate") is reached by elimination." (p. 424) Tinbergen was concerned that the elimination of environmental factors is difficult to establish, and that the use of the word "innate" is often misleading.
Most organisms undergo allometric changes in shape as they grow and mature, while others engage in metamorphosis. Even "reptiles" (non-avian sauropsids, e.g., crocodilians, turtles, snakes, lizards), in which the offspring are often viewed as miniature adults, show a variety of ontogenetic changes in morphology and physiology.
Comparing ourselves to others is something humans do all the time. "In doing so we are acknowledging not so much our sameness to others or our difference, but rather the commonality that resides in our difference. In other words, because each one of us is at once remarkably similar to, and remarkably different from, all other humans, it makes little sense to think of comparison in terms of a list of absolute similarities and a list of absolute differences. Rather, in respect of all other humans, we find similarities in the ways we are different from one another and differences in the ways we are the same. That we are able to do this is a function of the genuinely historical process that is human ontogeny".
Ophiuchus
Ophiuchus is a large constellation straddling the celestial equator. Its name is from the Greek Ὀφιοῦχος ("Ophioukhos", "serpent-bearer"), and it is commonly represented as a man grasping a snake (symbol ⛎, Unicode U+26CE). The serpent is represented by the constellation Serpens. Ophiuchus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. It was formerly referred to as "Serpentarius" and "Anguitenens".
Ophiuchus lies between Aquila, Serpens, Scorpius, Sagittarius, and Hercules, northwest of the center of the Milky Way. The southern part lies between Scorpius to the west and Sagittarius to the east. In the northern hemisphere, it is best visible in summer. It is opposite Orion. Ophiuchus is depicted as a man grasping a serpent; the interposition of his body divides the snake constellation Serpens into two parts, Serpens Caput and Serpens Cauda. Ophiuchus straddles the equator with the majority of its area lying in the southern hemisphere. Rasalhague, its brightest star, lies near the northern edge of Ophiuchus at about 12½°N declination. The constellation extends southward to −30° declination. Segments of the ecliptic within Ophiuchus are south of −20° declination.
In contrast to Orion, Ophiuchus is in the daytime sky from November to January (summer in the Southern Hemisphere, winter in the Northern Hemisphere) and thus not visible at most latitudes. However, for much of the Arctic Circle in the Northern Hemisphere's winter months, the Sun is below the horizon even at midday; stars (and thus parts of Ophiuchus, especially Rasalhague) are then visible at twilight for a few hours around local noon, low in the south. In the Northern Hemisphere's spring and summer months, when Ophiuchus is normally visible in the night sky, the constellation is hidden at those times and places in the Arctic where the midnight sun obscures the stars. In countries close to the equator, Ophiuchus appears overhead in June around midnight and in the evening sky in October.
The brightest stars in Ophiuchus include α Ophiuchi, called Rasalhague ("head of the serpent charmer"), at magnitude 2.07, and η Ophiuchi, known as Sabik ("the preceding one"), at magnitude 2.43. Other bright stars in the constellation include β Ophiuchi, Cebalrai ("dog of the shepherd") and λ Ophiuchi, or Marfik ("the elbow").
RS Ophiuchi is part of a class called recurrent novae, whose brightness increases at irregular intervals by hundreds of times in a period of just a few days. It is thought to be on the brink of becoming a type Ia supernova. Barnard's Star, one of the nearest stars to the Solar System (the only stars closer are the Alpha Centauri binary star system and Proxima Centauri), lies in Ophiuchus. It is located to the left of β and just north of the V-shaped group of stars in an area that was once occupied by the now-obsolete constellation of Taurus Poniatovii (Poniatowski's Bull). In 2005, astronomers using data from the Green Bank Telescope discovered a superbubble so large that it extends beyond the plane of the galaxy. It is called the Ophiuchus Superbubble.
In April 2007, astronomers announced that the Swedish-built Odin satellite had made the first detection of clouds of molecular oxygen in space, following observations in the constellation Ophiuchus. The supernova of 1604 was first observed on 9 October 1604, near θ Ophiuchi. Johannes Kepler saw it first on 16 October and studied it so extensively that the supernova was subsequently called "Kepler's Supernova". He published his findings in a book titled "De stella nova in pede Serpentarii" ("On the New Star in Ophiuchus's Foot"). Galileo used its brief appearance to counter the Aristotelian dogma that the heavens are changeless. In 2009 it was announced that GJ 1214, a star in Ophiuchus, undergoes repeated, cyclical dimming with a period of about 1.5 days, consistent with the transit of a small orbiting planet. The planet's low density (about 40% that of Earth) suggests that the planet may have a substantial component of low-density gas—possibly hydrogen or steam. The proximity of this star to Earth (42 light-years) makes it a tempting target for further observations. In April 2010, the naked-eye star ζ Ophiuchi was occulted by the asteroid 824 Anastasia.
Ophiuchus contains several star clusters, such as IC 4665, NGC 6633, M9, M10, M12, M14, M19, M62, and M107, as well as the nebula IC 4603-4604.
M10 is a fairly close globular cluster, only 20,000 light-years from Earth. It has a magnitude of 6.6 and is a Shapley class VII cluster. This means that it has "intermediate" concentration; it is only somewhat concentrated towards its center.
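As a rough worked example (an illustration added here, not a figure from the article): combining M10's apparent magnitude of 6.6 with its distance of about 20,000 light-years (roughly 6,100 parsecs) in the distance modulus, and neglecting interstellar extinction, gives its approximate absolute magnitude:

$$M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) \approx 6.6 - 5\log_{10}(613) \approx 6.6 - 13.9 \approx -7.3,$$

that is, the cluster shines with the light of tens of thousands of Suns, as expected for a bright globular cluster.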
The unusual galaxy merger remnant and starburst galaxy NGC 6240 is also in Ophiuchus. At a distance of 400 million light-years, this "butterfly-shaped" galaxy has two supermassive black holes 3,000 light-years apart. Confirmation of the fact that both nuclei contain black holes was obtained by spectra from the Chandra X-ray Observatory. Astronomers estimate that the black holes will merge in another billion years. NGC 6240 also has an unusually high rate of star formation, classifying it as a starburst galaxy. This is likely due to the heat generated by the orbiting black holes and the aftermath of the collision.
In 2006, a new nearby star cluster was discovered associated with the 4th magnitude star Mu Ophiuchi. The Mamajek 2 cluster appears to be a poor cluster remnant analogous to the Ursa Major Moving Group, but 7 times more distant (approximately 170 parsecs away). Mamajek 2 appears to have formed in the same star-forming complex as the NGC 2516 cluster roughly 135 million years ago.
Barnard 68 is a large dark nebula, located 410 light-years from Earth. Despite its diameter of 0.4 light-years, Barnard 68 only has twice the mass of the Sun, making it both very diffuse and very cold, with a temperature of about 16 kelvins. Though it is currently stable, Barnard 68 will eventually collapse, inciting the process of star formation. One unusual feature of Barnard 68 is its vibrations, which have a period of 250,000 years. Astronomers speculate that this phenomenon is caused by the shock wave from a supernova.
The space probe Voyager 1 is travelling in the direction of Ophiuchus.
There is no evidence of the constellation preceding the classical era, and in Babylonian astronomy, a "Sitting Gods" constellation seems to have been located in the general area of Ophiuchus. However, Gavin White proposes that Ophiuchus may in fact be remotely descended from this Babylonian constellation, representing Nirah, a serpent-god who was sometimes depicted with his upper half human but with serpents for legs.
The earliest mention of the constellation is in Aratus, informed by the lost catalogue of Eudoxus of Cnidus (4th century BCE).
To the ancient Greeks, the constellation represented the god Apollo struggling with a huge snake that guarded the Oracle of Delphi.
Later myths identified Ophiuchus with Laocoön, the Trojan priest of Poseidon, who warned his fellow Trojans about the Trojan Horse and was later slain by a pair of sea serpents sent by the gods to punish him. According to Roman era mythography, the figure represents the healer Asclepius, who learned the secrets of keeping death at bay after observing one serpent bringing another healing herbs. To prevent the entire human race from becoming immortal under Asclepius' care, Jupiter killed him with a bolt of lightning, but later placed his image in the heavens to honor his good works. In medieval Islamic astronomy (Azophi's "Uranometry", 10th century), the constellation was known as "Al-Ḥawwa"', "the snake-charmer".
Aratus describes Ophiuchus as trampling on Scorpius with his feet. This is depicted in Renaissance to Early Modern star charts, beginning with Albrecht Dürer in 1515; in some depictions (such as that of Johannes Kepler in "De Stella Nova", 1606), Scorpius also seems to threaten to sting Serpentarius in the foot. This is consistent with Azophi, who already included ψ Oph and ω Oph as the snake-charmer's "left foot", and θ Oph and ο Oph as his "right foot", making Ophiuchus a zodiacal constellation at least as regards his feet. This arrangement has been taken as symbolic in later literature and placed in relation to the words spoken by God to the serpent in the Garden of Eden (Genesis 3:15).
Ophiuchus is one of thirteen constellations that cross the ecliptic. It has therefore been called the "13th sign of the zodiac". However, this confuses sign with constellation. The signs of the zodiac are a twelve-fold division of the ecliptic, so that each sign spans 30° of celestial longitude, approximately the distance the Sun travels in a month, and (in the Western tradition) are aligned with the seasons so that the March equinox always falls on the boundary between Pisces and Aries.
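To make the sign-versus-constellation distinction concrete, the sketch below (in Python; an illustration added here, with the function name and the simplified 0° = March equinox convention as assumptions, not part of the article) shows how a tropical sign follows purely from ecliptic longitude in 30° steps:

SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def tropical_sign(ecliptic_longitude_deg):
    # Longitude is measured eastward from the March equinox (0 degrees = start of Aries);
    # each sign spans exactly 30 degrees, regardless of the constellation boundaries.
    return SIGNS[int(ecliptic_longitude_deg % 360.0) // 30]

print(tropical_sign(250.0))  # the early-December Sun -> "Sagittarius"

By contrast, which constellation the Sun actually stands in must be looked up against the irregular IAU constellation boundaries, which is why the early-December Sun is in the sign Sagittarius while standing among the stars of Ophiuchus.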
Constellations, on the other hand, are unequal in size and are based on the positions of the stars. The constellations of the zodiac have only a loose association with the signs of the zodiac, and do not in general coincide with them. In Western astrology the constellation of Aquarius, for example, largely corresponds to the sign of Pisces. Similarly, the constellation of Ophiuchus occupies most (November 29 – December 18) of the sign of Sagittarius (November 23 – December 21). The differences are due to the fact that the time of year that the sun passes through a particular zodiac constellation's position has slowly changed (because of the precession of the equinoxes) over the centuries from when the Greeks, Babylonians, and Dacians through Zamolxis originally developed the Zodiac.
Owain Glyndŵr
Owain ap Gruffydd, lord of Glyndyfrdwy (c. 1359 – c. 1415), or simply Owain Glyndŵr or Glyn Dŵr (anglicized to Owen Glendower), was a Welsh leader who instigated a fierce and long-running yet ultimately unsuccessful war of independence with the aim of ending English rule in Wales during the Late Middle Ages. He was the last native Welshman to hold the title Prince of Wales ("Tywysog Cymru").
Glyndŵr was a descendant of the Princes of Powys through his father Gruffudd Fychan II, hereditary "Tywysog" of Powys Fadog and Lord of Glyndyfrdwy, and of those of Deheubarth through his mother Elen ferch Tomas ap Llywelyn ab Owen. On 16 September 1400, Glyndŵr instigated the Welsh Revolt against the rule of Henry IV of England. The uprising was initially very successful and rapidly gained control of large areas of Wales, but it suffered from key weaknesses – particularly a lack of artillery, which made capturing defended fortresses difficult, and of ships, which made rebel-controlled coastlands vulnerable. The uprising was eventually suppressed by the superior resources of the English. Glyndŵr was driven from his last remaining strongholds in 1409, but he avoided capture; the last documented sighting of him was in 1412. He twice ignored offers of a pardon from his military nemesis, the new king Henry V of England, and despite the large rewards offered, Glyndŵr was never betrayed to the English. His death was recorded by a former follower in the year 1415.
With his death Owain acquired a mythical status along with Cadwaladr, Cynan and Arthur as a folk hero awaiting the call to return and liberate his people. In William Shakespeare's play "Henry IV, Part 1", the character of Owen Glendower is a wild and exotic king ruled by magic and emotion. In the late 19th century, the Cymru Fydd movement recreated him as the father of Welsh nationalism.
Glyndŵr was born around 1349 or 1359 to a prosperous landed family, part of the Anglo-Welsh gentry of the Welsh Marches (the border between England and Wales) in northeast Wales. This group moved easily between Welsh and English societies and languages, occupying important offices for the Marcher Lords while maintaining their position as "uchelwyr" — nobles descended from the pre-conquest Welsh royal dynasties — in traditional Welsh society. His father, Gruffydd Fychan II, hereditary Tywysog of Powys Fadog and Lord of Glyndyfrdwy, died some time before 1370, leaving Glyndŵr's mother Elen ferch Tomas ap Llywelyn of Deheubarth a widow and Owain a young man of 16 years at most.
The young Owain ap Gruffydd was possibly fostered at the home of David Hanmer, a rising lawyer shortly to be a justice of the King's Bench, or at the home of Richard FitzAlan, 3rd Earl of Arundel. Owain is then thought to have been sent to London to study law at the Inns of Court. He probably studied as a legal apprentice for seven years. He was possibly in London during the Peasants' Revolt of 1381. By 1383, he had returned to Wales, where he married David Hanmer's daughter, Margaret, started his large family and established himself as the Squire of Sycharth and Glyndyfrdwy, with all the responsibilities that entailed.
Glyndŵr entered the English king's military service in 1384 when he undertook garrison duty under the renowned Welshman Sir Gregory Sais, or Sir Degory Sais, on the English–Scottish border at Berwick-upon-Tweed. In August 1385, he served King Richard II under the command of John of Gaunt, again in Scotland. On 3 September 1386, he was called to give evidence in the "Scrope v Grosvenor" trial at Chester. In March 1387, Owain was in southeast England under Richard FitzAlan, 11th Earl of Arundel, in the English Channel at the defeat of a Franco-Spanish-Flemish fleet off the coast of Kent. Upon the death in late 1387 of his father-in-law, Sir David Hanmer, knighted earlier that same year by Richard II, Glyndŵr returned to Wales as executor of his estate. He possibly served as a squire to Henry Bolingbroke (later Henry IV of England), son of John of Gaunt, at the short, sharp Battle of Radcot Bridge in December 1387. He had gained three years' concentrated military experience in different theatres and seen at first hand some key events and people.
King Richard was distracted by a growing conflict with the Lords Appellant from this time on. Glyndŵr's opportunities were further limited by the death of Sir Gregory Sais in 1390 and the sidelining of Richard FitzAlan, Earl of Arundel, and he probably returned to his stable Welsh estates, living there quietly for ten years during his forties. The bard Iolo Goch ("Red Iolo"), himself a Welsh lord, visited Glyndŵr in the 1390s and wrote a number of odes to Owain, praising Owain's liberality, and writing of Sycharth, "Rare was it there / to see a latch or a lock."
The names and number of Owain Glyndŵr's siblings cannot be known with certainty. A fuller list is given by Jacob Youde William Lloyd, while the more cautious R. R. Davies gives only Tudur, Isabel and Lowri as his siblings. That Owain Glyndŵr had another brother, Gruffudd, is likely; that he had a third, Maredudd, is suggested by one reference.
In the late 1390s, a series of events began to push Owain towards rebellion, in what was later to be called the Welsh Revolt, the Glyndŵr Rising or (within Wales) the Last War of Independence. His neighbour, Baron Grey de Ruthyn, had seized control of some land, for which Glyndŵr appealed to the English Parliament. Owain's petition for redress was ignored. Later, in 1400, Lord Grey informed Glyndŵr too late of a royal command to levy feudal troops for Scottish border service, thus enabling him to call the Welshman a traitor in London court circles. Lord Grey was a personal friend of King Henry IV, and the English courts refused to hear the case or, at the very least, delayed it. However, an alternative source states that Glyndŵr was under threat because he had written an angry letter to Lord Grey, boasting that he had stolen some of Lord Grey's horses, and believing Lord Grey had threatened to "burn and slay" within his lands, he threatened retaliation in the same manner. Lord Grey then denied making the initial threat to burn and slay, and replied that he would take the incriminating letter to Henry IV's council, and that Glyndŵr would hang for the admission of theft and treason contained within the letter. The deposed king, Richard II, had support in Wales, and in January 1400 serious civil disorder broke out in the English border city of Chester after the public execution of an officer of Richard II.
These events led to Owain formally assuming his ancestral title of Prince of Wales on 16 September 1400 at his Glyndyfrdwy estate. With a small band of followers which included his eldest son, his brothers-in-law, and the Bishop of St Asaph in the town of Corwen, possibly in the church of SS Mael & Sulien, he launched an assault on Lord Grey's territories. After a number of initial confrontations between King Henry IV and Owain's followers in September and October 1400, the revolt began to spread. Much of northern and central Wales went over to Owain. Henry IV appointed Henry Percy – the famous "Hotspur" – to bring the country to order. Hotspur issued an amnesty in March which applied to all rebels with the exception of Owain and his cousins, Rhys ap Tudur and Gwilym ap Tudur, sons of Tudur ap Gronw (forefather of King Henry VII of England). Both the Tudurs were pardoned after their capture of Edward I's great castle at Conwy. In June, Owain scored his first major victory in the field at Mynydd Hyddgen on Pumlumon. Retaliation by Henry IV on the Strata Florida Abbey followed, but eventually led to Henry's retreat.
In 1402, the English Parliament issued the Penal Laws against Wales, designed to establish English dominance in Wales, but actually pushing many Welshmen into the rebellion. In the same year, Owain captured his archenemy, Baron Grey de Ruthyn. He held him for almost a year until he received a substantial ransom from Henry. In June 1402, Owain defeated an English force led by Sir Edmund Mortimer at the Battle of Bryn Glas, where Mortimer was captured. Glyndŵr offered to release Mortimer for a large ransom but, in sharp contrast to his attitude to de Grey, Henry IV refused to pay. Mortimer's nephew could be said to have had a greater claim to the English throne than Henry himself, so his speedy release was not an option. In response, Mortimer negotiated an alliance with Owain and married one of Owain's daughters. It is also in 1402 that mention of the French and Bretons helping Owain was first heard. The French were certainly hoping to use Wales as they had used Scotland: as a base from which to fight the English.
In 1403 the revolt became truly national in Wales. Royal officials reported that Welsh students at Oxford University were leaving their studies to join Owain, and Welsh labourers and craftsmen were abandoning their employers in England and returning to Wales. Owain could also draw on Welsh troops seasoned by the English campaigns in France and Scotland. Hundreds of Welsh archers and experienced men-at-arms left English service to join the rebellion.
In 1404, Owain held court at Harlech and appointed Gruffydd Young as his Chancellor. Soon afterwards, he called his first Parliament (or "gathering") of all Wales at Machynlleth, where he was crowned Prince of Wales and announced his national programme. He declared his vision of an independent Welsh state with a parliament and separate Welsh church. There would be two national universities (one in the south and one in the north) and a return to the traditional law of Hywel Dda. Senior churchmen and important members of society flocked to his banner. English resistance was reduced to a few isolated castles, walled towns and fortified manor houses.
In February 1405, Owain negotiated the "Tripartite Indenture" with Edmund Mortimer and Henry Percy, Earl of Northumberland. The Indenture agreed to divide England and Wales among the three of them. Wales would extend as far as the rivers Severn and Mersey, including most of Cheshire, Shropshire and Herefordshire. The Mortimer Lords of March would take all of southern and western England and the Percys would take the north of England. R. R. Davies noted that certain internal features underscore the roots of Glyndŵr's political philosophy in Welsh mythology: in it, the three men invoke prophecy, and the boundaries of Wales are defined according to Merlinic literature.
Although negotiations with the lords of Ireland were unsuccessful, Owain had reason to hope that the French and Bretons might be more welcoming. He dispatched Gruffydd Young and his brother-in-law (Margaret's brother), John Hanmer, to negotiate with the French. The result was a formal treaty that promised French aid to Owain and the Welsh. The immediate effect seems to have been that joint Welsh and Franco-Breton forces attacked and laid siege to Kidwelly Castle. The Welsh could also count on semi-official fraternal aid from their fellow Celts in the then independent Brittany and Scotland. Scots and French privateers were operating around Wales throughout Owain's war. Scottish ships had raided English settlements on the Llŷn Peninsula in 1400 and 1401. In 1403, a Breton squadron defeated the English in the Channel and devastated Jersey, Guernsey and Plymouth, while the French made a landing on the Isle of Wight. By 1404, they were raiding the coast of England, with Welsh troops on board, setting fire to Dartmouth and devastating the coast of Devon.
1405 was the "Year of the French" in Wales. A formal treaty between Wales and France was negotiated. On the continent the French pressed the English as the French army invaded English Plantagenet Aquitaine. Simultaneously, the French landed in force at Milford Haven in west Wales. They marched through Herefordshire and on into Worcestershire, meeting the English army just ten miles from Worcester. The armies took up battle positions daily and viewed each other from a mile apart, without any major action, for eight days. Then, for reasons that have never become clear, the Welsh retreated, and the French withdrew shortly afterwards.
By 1406, most French forces had withdrawn after politics in Paris shifted toward the peace party. The Welsh forces, who had until then won several easy victories, had suffered a series of defeats early in 1405. English forces landed in Anglesey from Ireland and would over time push the Welsh back, until the resistance in Anglesey formally ended toward the end of 1406.
At the same time, the English changed their strategy. Rather than focusing on punitive expeditions as favoured by his father, the young Prince Henry adopted a strategy of economic blockade. Using the castles that remained in English control, he gradually began to retake Wales while cutting off trade and the supply of weapons. By 1407 this strategy was beginning to bear fruit, even though by this time Owain's rebel soldiers had achieved victories over the King's men as far as Birmingham, where the English were in retreat. In the autumn, Owain's Aberystwyth Castle surrendered while he was away fighting. In 1409, it was the turn of Harlech Castle. Edmund Mortimer died in the final battle, and Owain's wife Margaret along with two of his daughters (including Catrin) and three of Mortimer's granddaughters were imprisoned in the Tower of London. They were all to die in the Tower before 1415.
Owain remained free, but he had lost his ancestral home and was a hunted prince. He continued the rebellion, particularly wanting to avenge his wife. In 1410, after a suicidal raid into Shropshire which took many English lives, some of the leading rebels are thought to have been captured.
In 1412, Owain led one of his final successful raiding parties with his most faithful soldiers and cut through the King's men; in an ambush in Brecon he captured, and later ransomed, Dafydd Gam ("Crooked David"), a leading Welsh supporter of King Henry. This was the last time that Owain was seen alive by his enemies. As late as 1414, there were rumours that the Herefordshire-based Lollard leader Sir John Oldcastle was communicating with Owain, and reinforcements were sent to the major castles in the north and south.
But by then things were changing. Henry IV died in 1413 and his son King Henry V began to adopt a more conciliatory attitude to the Welsh. Royal pardons were offered to the major leaders of the revolt and other opponents of his father's regime.
Nothing certain is known of Owain after 1412. Despite enormous rewards being offered, he was neither captured nor betrayed. He ignored royal pardons. Tradition has it that he died and was buried possibly in the church of Saints Mael and Sulien at Corwen close to his home, possibly on his estate in Sycharth, or possibly on the estates of his daughters' husbands: Kentchurch in south Herefordshire or Monnington in west Herefordshire.
In his book "The Mystery of Jack of Kent and the Fate of Owain Glyndŵr", Alex Gibbon argues that the folk hero Jack of Kent, also known as Siôn Cent – the family chaplain of the Scudamore family – was in fact Owain Glyndŵr himself. Gibbon points out a number of similarities between Siôn Cent and Glyndŵr (including physical appearance, age, education, and character) and claims that Owain spent his last years living with his daughter Alys, passing himself off as an aging Franciscan friar and family tutor. There are many folk tales of Glyndŵr donning disguises to gain advantage over opponents during the rebellion.
Adam of Usk, a one-time supporter of Glyndŵr, made the following entry in his Chronicle under the year 1415: "After four years in hiding, from the king and the realm, Owain Glyndŵr died, and was buried by his followers in the darkness of night. His grave was discovered by his enemies, however, so he had to be re-buried, though it is impossible to discover where he was laid."
In 1875, the Rev. Francis Kilvert wrote in his diary that he saw the grave of "Owen Glendower" in the churchyard at Monnington "[h]ard by the church porch and on the western side of it ... It is a flat stone of whitish grey shaped like a rude obelisk figure, sunk deep into the ground in the middle of an oblong patch of earth from which the turf has been pared away, and, alas, smashed into several fragments."
In 2006, Adrien Jones, the president of the Owain Glyndŵr Society, said, "Four years ago we visited a direct descendant of Glyndŵr, a John Skidmore, at Kentchurch Court, near Abergavenny. He took us to Mornington Straddle, in Herefordshire, where one of Glyndŵr's daughters, Alice, lived. Mr Skidmore told us that he (Glyndŵr) spent his last days there and eventually died there... It was a family secret for 600 years and even Mr. Skidmore's mother, who died shortly before we visited, refused to reveal the secret. There's even a mound where he is believed to be buried at Mornington Straddle." Renowned historian Gruffydd Aled Williams
Octans
Octans is a faint constellation located in the deep southern sky. Its name is Latin for the eighth part of a circle, but it is named after the octant, a navigational instrument. The constellation was devised by French astronomer Nicolas Louis de Lacaille in 1752, and it remains one of the 88 modern constellations.
Octans was one of 14 constellations created by French astronomer Nicolas Louis de Lacaille during his expedition to the Cape of Good Hope, where he observed and catalogued almost 10,000 southern stars during a two-year stay. He originally named it "l'Octans de Reflexion" ("the reflecting octant") in 1752. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment.
It was part of his catalogue of the southern sky, the "Coelum Australe Stelliferum", which was published posthumously in 1763. In Europe, it became more widely known as "Octans Hadleianus", in honor of English mathematician John Hadley, who invented the octant in 1730. There is no real mythology related to Octans, partially due to its faintness and relative recentness, but mostly because of its extreme southerly latitude.
Octans is a very faint constellation; its brightest member is Nu Octantis, a spectral class K1 III giant star with an apparent magnitude of 3.73. It is 63.3 ± 0.8 light-years distant from Earth.
Beta Octantis is the second brightest star in the constellation.
Sigma Octantis, the southern pole star, is a magnitude 5.4 star just over 1 degree away from the true South Celestial Pole. Its relative faintness means that it is not practical for navigation. Conveniently for navigators, there are other, much easier methods for locating the southern celestial pole.
For example, the constellation Crux, the Southern Cross, currently points toward the South Celestial Pole, if one draws a line from Gamma Crucis to Alpha Crucis. Another method includes an asterism made up of Sigma, Chi, Tau, and Upsilon Octantis, which form a distinctive trapezoid shape.
In addition to having the current southern pole star of Earth, Octans also contains the southern pole star of the planet Saturn, which is the magnitude 4.3 Delta Octantis.
The Astronomical Society of Southern Africa in 2003 reported that observations of the Mira variable stars R and T Octantis were urgently needed.
Three star systems are known to have planets. Mu2 Octantis is a binary star system, the brighter component of which has a planet. HD 142022 is a binary system, one component of which is a sunlike star with a massive planet with an orbital period of 1928 ± 46 days. HD 212301 is a yellow-white main sequence star with a hot Jupiter that completes an orbit every 2.2 days.
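As a rough worked estimate (an illustration added here, not a value from the article), Kepler's third law converts the reported orbital period of the HD 142022 planet into an orbital size, assuming the sunlike host star has about one solar mass:

$$a \approx \left(\frac{M}{M_{\odot}}\, P_{\mathrm{yr}}^{2}\right)^{1/3} \mathrm{AU} \approx \left(1 \times 5.28^{2}\right)^{1/3} \mathrm{AU} \approx 3.0\ \mathrm{AU},$$

where P = 1928 days ≈ 5.28 years; the planet would thus orbit roughly three times farther from its star than Earth does from the Sun.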
NGC 2573 (also known as Polarissima Australis) is a faint barred spiral galaxy that happens to be the closest NGC object to the South Celestial Pole. NGC 7095 and NGC 7098 are two barred spiral galaxies that are 115 million and 95 million light-years distant from Earth respectively. The sparse open cluster Collinder 411 is also located in the constellation.
was a stores ship used by the United States Navy during World War II. | https://en.wikipedia.org/wiki?curid=22475 |
Okinawa Prefecture
is the southernmost prefecture of Japan. It encompasses two thirds of the Ryukyu Islands in a chain over long. The Ryukyu Islands extend southwest from Kagoshima Prefecture on Kyushu (the southwesternmost of Japan's four main islands) to Taiwan. Naha, Okinawa's capital, is located in the southern part of Okinawa Island.
Although Okinawa Prefecture comprises just 0.6 percent of Japan's total land mass, about 75 percent of all United States military personnel stationed in Japan are assigned to installations in the prefecture. Currently about 26,000 U.S. troops are based in the prefecture.
The indigenous people of Okinawa Prefecture are the Ryukyuan people, comprising the Okinawan, Miyako, Yaeyama and Yonaguni subgroups. The Amami people, who are another Ryukyuan subgroup, live in the Amami Islands of Kagoshima Prefecture.
The oldest evidence of human existence on the Ryukyu islands is from the Stone Age and was discovered in Naha and Yaeyama. Some human bone fragments from the Paleolithic era were unearthed from a site in Naha, but the artifacts were lost in transit before they could be verified as Paleolithic. Japanese Jōmon influences are dominant on the Okinawa Islands, although clay vessels on the Sakishima Islands have a commonality with those in Taiwan.
The first mention of the word "Ryukyu" appears in the "Book of Sui". "Okinawa" was the Japanese word identifying the islands, first seen in the biography of Jianzhen, written in 779. Agricultural societies that emerged in the 8th century developed slowly until the 12th century. Since the islands are located at the eastern perimeter of the East China Sea, relatively close to Japan, China and South-East Asia, the Ryukyu Kingdom became a prosperous trading nation. Also during this period, many gusuku, fortifications similar to castles, were constructed. The Ryukyu Kingdom entered into the Imperial Chinese tributary system under the Ming dynasty beginning in the 15th century, which established economic relations between the two nations.
In 1609, the Shimazu clan, which controlled the region that is now Kagoshima Prefecture, invaded the Ryukyu Kingdom. The Ryukyu Kingdom was obliged to agree to form a suzerain-vassal relationship with the Satsuma and the Tokugawa shogunate, while maintaining its previous role within the Chinese tributary system; Ryukyuan sovereignty was maintained since complete annexation would have created a conflict with China. The Satsuma clan earned considerable profits from trade with China during a period in which foreign trade was heavily restricted by the shogunate.
Although Satsuma maintained strong influence over the islands, the Ryukyu Kingdom maintained a considerable degree of domestic political freedom for over two hundred years. Four years after the 1868 Meiji Restoration, the Japanese government, through military incursions, officially annexed the kingdom and renamed it Ryukyu han. At the time, the Qing Empire asserted a nominal suzerainty over the islands of the Ryukyu Kingdom, since the Ryūkyū Kingdom was also a member state of the Chinese tributary system. Ryukyu han became Okinawa Prefecture of Japan in 1879, even though all other hans had become prefectures of Japan in 1872. In 1912, Okinawans first obtained the right to vote for representatives to the Imperial Diet, which had been established in 1890.
Near the end of World War II, in 1945, the US Army and Marine Corps invaded Okinawa with 185,000 troops. A third of Okinawa's civilian population died during the war; a quarter of the civilian population died during the 1945 Battle of Okinawa alone. The dead, of all nationalities, are commemorated at the Cornerstone of Peace.
After the end of World War II, the United States set up the United States Military Government of the Ryukyu Islands administration, which ruled Okinawa for 27 years. During this "trusteeship rule", the United States established numerous military bases on the Ryukyu islands. The Ryukyu independence movement was an Okinawan movement that agitated against U.S. rule.
During the Korean War, B-29 Superfortresses flew bombing missions over Korea from Kadena Air Base on Okinawa. The military buildup on the island during the Cold War increased a division between local inhabitants and the American military. Under the 1952 Treaty of Mutual Cooperation and Security between the United States and Japan, the United States Forces Japan (USFJ) have maintained a large military presence.
During the mid-1950s, the U.S. seized land from Okinawans to build new bases or expand existing ones. According to the Melvin Price Report, by 1955, the military had displaced 250,000 residents.
Since 1960, the U.S. and Japan have maintained an agreement that allows the U.S. to secretly bring nuclear weapons into Japanese ports. The Japanese people tended to oppose the introduction of nuclear arms into Japanese territory, and the Japanese government's assertion of a non-nuclear policy, stated in the Three Non-Nuclear Principles, reflected this popular opposition. Most of the weapons were alleged to be stored in ammunition bunkers at Kadena Air Base. Between 1954 and 1972, 19 different types of nuclear weapons were deployed in Okinawa, but with fewer than around 1,000 warheads at any one time. In the fall of 1960, U.S. Green Light Team commandos carried actual small nuclear weapons on secret training missions along the east coast of Okinawa Island.
Between 1965 and 1972, Okinawa was a key staging point for the United States in its military operations directed towards North Vietnam. Along with Guam, it presented a geographically strategic launch pad for covert bombing missions over Cambodia and Laos. Anti-Vietnam War sentiment became linked politically to the movement for reversion of Okinawa to Japan.
In 1965, the US military bases, earlier viewed as paternal post-war protection, were increasingly seen as aggressive. The Vietnam War highlighted the differences between the United States and Okinawa, but showed a commonality between the islands and mainland Japan.
As controversy grew regarding the alleged placement of nuclear weapons on Okinawa, fears intensified over the escalation of the Vietnam War. Okinawa was then perceived, by some inside Japan, as a potential target for China, should the communist government feel threatened by the United States. American military secrecy blocked any local reporting on what was actually occurring at bases such as Kadena Air Base. As information leaked out, and images of air strikes were published, the local population began to fear the potential for retaliation.
Political leaders such as Oda Makoto, a major figure in the Beheiren movement (Foundation of Citizens for Peace in Vietnam), believed that the return of Okinawa to Japan would lead to the removal of U.S. forces, ending Japan's involvement in Vietnam. In a speech delivered in 1967, Oda was critical of Prime Minister Sato's unilateral support of America's war in Vietnam, claiming "Realistically we are all guilty of complicity in the Vietnam War". The Beheiren became a more visible anti-war movement on Okinawa as the American involvement in Vietnam intensified. The movement employed tactics ranging from demonstrations to handing leaflets directly to soldiers, sailors, airmen and Marines, warning of the implications of a third world war.
The US military bases on Okinawa became a focal point for anti-Vietnam War sentiment. By 1969, over 50,000 American military personnel were stationed on Okinawa. The United States Department of Defense began referring to Okinawa as "The Keystone of the Pacific". This slogan was imprinted on local U.S. military license plates.
In 1969, chemical weapons leaked from the US storage depot at Chibana in central Okinawa, under Operation Red Hat. Evacuations of residents took place over a wide area for two months. Even two years later, government investigators found that Okinawans and the environment near the leak were still suffering because of the depot.
In 1972, the U.S. government handed over the islands to Japanese administration.
In a 1981 interview with the "Mainichi Shimbun", Edwin O. Reischauer, former U.S. ambassador to Japan, said that U.S. naval ships armed with nuclear weapons routinely stopped at Japanese ports, with the approval of the Japanese government.
The 1995 rape of a 12-year-old girl by U.S. servicemen triggered large protests in Okinawa. Reports by the local media of accidents and crimes committed by U.S. servicemen have reduced the local population's support for the U.S. military bases. A strong emotional response has emerged from certain incidents. As a result, these incidents have drawn renewed media attention to the Ryukyu independence movement.
Documents declassified in 1997 proved that both tactical and strategic nuclear weapons had been maintained in Okinawa. In 1999 and 2002, the "Japan Times" and the "Okinawa Times" reported speculation that not all weapons were removed from Okinawa. On October 25, 2005, after a decade of negotiations, the governments of the US and Japan officially agreed to move Marine Corps Air Station Futenma from its location in the densely populated city of Ginowan to the more northerly and remote Camp Schwab in Nago by building a heliport with a shorter runway, partly on Camp Schwab land and partly running into the sea. The move is partly an attempt to relieve tensions between the people of Okinawa and the Marine Corps.
Okinawa Prefecture constitutes 0.6 percent of Japan's land surface, yet 75 percent of all USFJ bases were located on Okinawa, and U.S. military bases occupied 18 percent of the main island.
According to a 2007 "Okinawa Times" poll, 85 percent of Okinawans opposed the presence of the U.S. military, because of noise pollution from military drills, the risk of aircraft accidents, environmental degradation, and crowding from the number of personnel there, although 73.4 percent of Japanese citizens appreciated the mutual security treaty with the U.S. and the presence of the USFJ. In another poll conducted by the "Asahi Shimbun" in May 2010, 43 percent of the Okinawan population wanted the complete closure of the U.S. bases, 42 percent wanted reduction and 11 percent wanted the maintenance of the status quo. Okinawan feelings about the U.S. military are complex, and some of the resentment towards the U.S. bases is directed towards the government in Tokyo, perceived as being insensitive to Okinawan needs and using Okinawa to house bases not desired elsewhere in Japan.
In early 2008, U.S. Secretary of State Condoleezza Rice apologized after a series of crimes involving American troops in Japan, including the rape of a 14-year-old girl by a Marine on Okinawa. The U.S. military also imposed a temporary 24-hour curfew on military personnel and their families to ease the anger of local residents. Some cited statistics showing that the crime rate of military personnel is consistently lower than that of the general Okinawan population. However, some criticized the statistics as unreliable, since violence against women is under-reported.
Between 1972 and 2009, U.S. servicemen committed 5,634 criminal offenses, including 25 murders, 385 burglaries, 25 arsons, 127 rapes, 306 assaults and 2,827 thefts. Yet, per Marine Corps Installations Pacific data, U.S. service members are convicted of far fewer crimes than local Okinawans.
In 2009, a new Japanese government came to power and froze the US forces relocation plan, but in April 2010 indicated its interest in resolving the issue by proposing a modified plan.
A 2010 study found that prolonged exposure to aircraft noise around Kadena Air Base and other military bases causes health issues such as disrupted sleep patterns, high blood pressure, weakened immune function in children, and hearing loss.
In 2011, it was reported that the U.S. military—contrary to repeated denials by the Pentagon—had kept tens of thousands of barrels of Agent Orange on the island. The Japanese and American governments have angered some U.S. veterans, who believe they were poisoned by Agent Orange while serving on the island, by characterizing their statements regarding Agent Orange as "dubious", and ignoring their requests for compensation. Reports that more than a third of the barrels developed leaks have led Okinawans to ask for environmental investigations, but both Tokyo and Washington refused such action. Jon Mitchell has reported concern that the U.S. used American Marines as chemical-agent guinea pigs.
On September 30, 2018, Denny Tamaki was elected as the next governor of Okinawa prefecture, after a campaign focused on sharply reducing the U.S. military presence on the island.
One ongoing issue is the relocation of Marine Corps Air Station Futenma. First promised to be moved off the island and then later within the island, the future of any relocation is uncertain following the election of base opponent Onaga as Okinawa governor. Onaga won against the incumbent Nakaima, who had earlier approved landfill work to move the base to Camp Schwab in Henoko. However, Onaga has promised to veto the landfill work needed for the new base to be built and insisted Futenma should be moved outside of Okinawa.
Some 8,000 U.S. Marines were removed from the island and relocated to Guam. In November 2008, U.S. Pacific Command Commander Admiral Timothy Keating stated the move to Guam would probably not be completed before 2015.
In 2009, Japan's former foreign minister Katsuya Okada stated that he wanted to review the deployment of U.S. troops in Japan to ease the burden on the people of Okinawa (Associated Press, October 7, 2009). Of the 9,000 Marines slated to leave, 5,000 will be deployed to Guam and the rest to Hawaii and Australia. Japan will pay $3.1 billion in cash for the move and for developing joint training ranges on Guam and on Tinian and Pagan in the U.S.-controlled Northern Mariana Islands.
The US still maintains Air Force, Marine, Navy, and Army military installations on the islands. These bases include Kadena Air Base, Camp Foster, Marine Corps Air Station Futenma, Camp Hansen, Camp Schwab, Torii Station, Camp Kinser, and Camp Gonsalves. The 14 U.S. bases occupy 18 percent of the main island. Okinawa hosts about two-thirds of the 50,000 American forces in Japan although the islands account for less than one percent of Japan's total land area.
Suburbs have grown towards and now surround two historic major bases, Futenma and Kadena. One third of the land used by the U.S. military is the Marine Corps Northern Training Area (known also as Camp Gonsalves or JWTC) in the north of the island.
On December 21, 2016, 10,000 acres of Okinawa Northern Training Area was returned to Japan.
On June 25, 2018, Okinawa residents held a protest demonstration at sea against scheduled land reclamation work for the relocation of a U.S. military base within Japan's southernmost island prefecture; the protest gathered hundreds of people.
Since the early 2000s, Okinawans have opposed the presence of U.S. military helipads in the Takae zone of the Yanbaru forest near Higashi and Kunigami. This opposition grew in July 2016 after the construction of six new helipads.
The islands comprising the prefecture are the southern two thirds of the Ryukyu archipelago. Okinawa's inhabited islands are typically divided into three geographical archipelagos. From northeast to southwest:
Eleven cities are located within Okinawa Prefecture. Okinawan names are in parentheses:
These are the towns and villages in each district:
As of 31 March 2019, 36 percent of the total land area of the prefecture was designated as Natural Parks, namely the Iriomote-Ishigaki, Kerama Shotō, and Yanbaru National Parks; Okinawa Kaigan and Okinawa Senseki Quasi-National Parks; and Irabu, Kumejima, Tarama, and Tonaki Prefectural Natural Parks.
The dugong is an endangered marine mammal related to the manatee. Iriomote is home to one of the world's rarest and most endangered cat species, the Iriomote cat. The region is also home to at least one endemic pit viper, "Trimeresurus elegans". Coral reefs found in this region of Japan provide an environment for a diverse marine fauna. The sea turtles return yearly to the southern islands of Okinawa to lay their eggs. The summer months carry warnings to swimmers regarding venomous jellyfish and other dangerous sea creatures.
Okinawa is a major producer of sugar cane, pineapple, papaya, and other tropical fruit, and the Southeast Botanical Gardens represent tropical plant species.
The island is largely composed of coral, and rainwater filtering through that coral has given the island many caves, which played an important role in the Battle of Okinawa. Gyokusendo is an extensive limestone cave in the southern part of Okinawa's main island.
The island experiences temperatures above for most of the year. The climate of the islands ranges from humid subtropical climate (Köppen climate classification "Cfa") in the north, such as Okinawa Island, to tropical rainforest climate (Köppen climate classification "Af") in the south, such as Iriomote Island. The islands of Okinawa are surrounded by some of the most abundant coral reefs in the world. The world's largest colony of rare blue coral is found off Ishigaki Island. Snowfall is unheard of at sea level. However, on January 24, 2016, sleet was reported in Nago on Okinawa Island for the first time on record.
Although unrecognized by the Japanese government, the indigenous Ryukyuan people make up the majority of Okinawa Prefecture's population. There is also a sizable Japanese minority there.
Having been a separate nation until 1879, Okinawan language and culture differ in many ways from those of mainland Japan.
There remain six Ryukyuan languages which, although related, are incomprehensible to speakers of Japanese. One of the Ryukyuan languages is spoken in Kagoshima Prefecture, rather than in Okinawa Prefecture. These languages are in decline as the younger generation of Okinawans uses Standard Japanese. Mainland Japanese, and some Okinawans themselves, generally perceive the Ryukyuan languages as "dialects". Standard Japanese is almost always used in formal situations. In informal situations, the "de facto" everyday language among Okinawans under age 60 is Okinawa-accented mainland Japanese ("Okinawan Japanese"), which is often misunderstood as the Okinawan language proper. The actual traditional Okinawan language is still used in traditional cultural activities, such as folk music and folk dance. There is a radio news program in the language as well.
Okinawans have traditionally followed Ryukyuan religious beliefs, generally characterized by ancestor worship and the respecting of relationships between the living, the dead, and the gods and spirits of the natural world.
Okinawan culture bears traces of its various trading partners. One can find Chinese, Thai and Austronesian influences in the island's customs. Perhaps Okinawa's most famous cultural export is karate, probably a product of the close ties with and influence of China on Okinawan culture. Karate is thought to be a synthesis of Chinese kung fu with traditional Okinawan martial arts. Okinawans' reputation as wily resisters of being influenced by conquerors is depicted in the 1956 Hollywood film, "The Teahouse of the August Moon", which takes place immediately after World War II.
Another traditional Okinawan product that owes its existence to Okinawa's trading history is awamori—an Okinawan distilled spirit made from "indica" rice imported from Thailand.
Other prominent examples of Okinawan culture include the sanshin—a three-stringed Okinawan instrument, closely related to the Chinese sanxian, and ancestor of the Japanese shamisen, somewhat similar to a banjo. Its body is often bound with snakeskin (from pythons, imported from elsewhere in Asia, rather than from Okinawa's venomous Trimeresurus flavoviridis, which are too small for this purpose). Okinawan culture also features the eisa dance, a traditional drumming dance. A traditional craft, the fabric named bingata, is made in workshops on the main island and elsewhere.
The Okinawan diet consists of low-fat, low-salt foods, such as whole fruits and vegetables, legumes, tofu, and seaweed. Okinawans are particularly well known for consuming purple sweet potatoes, known as Okinawan sweet potatoes. Okinawans are known for their longevity. Okinawa is a so-called Blue Zone, an area where the people live longer than most others elsewhere in the world. Five times as many Okinawans live to be 100 as in the rest of Japan, and Japanese are already the longest-lived ethnic group globally. There were 34.7 centenarians for every 100,000 inhabitants, the highest ratio worldwide. Possible explanations are the diet, low-stress lifestyle, caring community, activity, and spirituality of the inhabitants of the island.
A cultural feature of the Okinawans is the forming of moais. A moai is a social gathering of a group whose members come together to provide financial and emotional support through emotional bonding, advice giving, and social funding. This provides a sense of security for community members and, as mentioned in the Blue Zone studies, may be a contributing factor in the longevity of Okinawa's people.
In recent years, Okinawan literature has been appreciated outside of the Ryukyu archipelago. Two Okinawan writers have received the Akutagawa Prize: Matayoshi Eiki in 1995 for and Medoruma Shun in 1997 for "A Drop of Water" ("Suiteki"). The prize was also won by Okinawans in 1967 by Tatsuhiro Oshiro for "Cocktail Party" ("Kakuteru Pāti") and in 1971 by Mineo Higashi for "Okinawan Boy" ("Okinawa no Shōnen").
Karate originated in Okinawa. Over time, it developed into several styles and sub-styles. On Okinawa, the three main styles are considered to be Shōrin-ryū, Gōjū-ryū and Uechi-ryū. Internationally, the various styles and sub-styles include Matsubayashi-ryū, Wadō-ryū, Isshin-ryū, Shōrinkan, Shotokan, Shitō-ryū, Shōrinjiryū Kenkōkan, Shorinjiryu Koshinkai, and Shōrinji-ryū.
Despite widespread destruction during World War II, many remains of a unique type of castle or fortress known as "gusuku" survive; the most significant are now inscribed on the UNESCO World Heritage List (Gusuku Sites and Related Properties of the Kingdom of Ryukyu). In addition, forty historic sites have been designated for protection by the national government. Shuri Castle in Naha is a UNESCO World Heritage Site.
Whereas most homes in Japan are made from wood and allow free flow of air to combat humidity, typical modern homes in Okinawa are made from concrete with barred windows to protect from flying plant debris and to withstand regular typhoons. Roofs are designed with strong winds in mind: each tile is cemented on and not merely layered, as seen with many homes in Japan. The Nakamura House is an original 18th-century farmhouse in Kitanakagusuku.
Many roofs also display a lion-dog statue, called a "shisa", which is said to protect the home from danger. Roofs are typically red in color and are inspired by Chinese design.
The public schools in Okinawa are overseen by the Okinawa Prefectural Board of Education. The agency directly operates several public high schools including Okinawa Shogaku High School. The U.S. Department of Defense Dependents Schools (DoDDS) operates 13 schools total in Okinawa. Seven of these schools are located on Kadena Air Base.
Okinawa has many types of private schools. Some of them are cram schools, also known as juku. Others, such as Nova, solely teach language. People also attend small language schools.
There are 10 colleges/universities in Okinawa, including the University of the Ryukyus, the only national university in the prefecture, and the Okinawa Institute of Science and Technology, a new international research institute. Okinawa's American military bases also host the Asian Division of the University of Maryland University College.
On July 18, 2019, it was announced that the BASE Okinawa Baseball Club would form the first professional baseball team based in Okinawa, the Ryukyu Blue Oceans. The team is expected to be fully organized by January 2020 and intends to join the Nippon Professional Baseball league.
In addition, various baseball teams from Japan hold winter training in Okinawa Prefecture, as it is the warmest prefecture in Japan, with no snow and higher temperatures than the other prefectures.
There are numerous golf courses in the prefecture, and there was formerly a professional tournament called the Okinawa Open.
The major ports of Okinawa include:
The 34 US military installations on Okinawa are financially supported by the U.S. and Japan. The bases provide jobs for Okinawans, both directly and indirectly; in 2011, the U.S. military employed over 9,800 Japanese workers in Okinawa.
The Okinawa Convention and Visitors Bureau is exploring the possibility of using facilities on the military bases for large-scale meetings, incentives, conferences, and exhibitions (MICE) events. | https://en.wikipedia.org/wiki?curid=22477 |
Olive oil
Olive oil is a liquid fat obtained from olives (the fruit of "Olea europaea"; family Oleaceae), a traditional tree crop of the Mediterranean Basin. The oil is produced by pressing whole olives. It is commonly used in cooking, for frying foods or as a salad dressing. It is also used in cosmetics, pharmaceuticals, and soaps, and as a fuel for traditional oil lamps, and has additional uses in some religions. There is limited evidence of its possible health benefits. The olive is one of three core food plants in Mediterranean cuisine; the other two are wheat and grapes. Olive trees have been grown around the Mediterranean since the 8th millennium BC.
The top five producers of olive oil by volume are Spain, Morocco, Turkey, Greece, and Italy. However, per capita national consumption is highest in Greece, followed by Spain and Italy.
The composition of olive oil varies with the cultivar, altitude, time of harvest and extraction process. It consists mainly of oleic acid (up to 83%), with smaller amounts of other fatty acids including linoleic acid (up to 21%) and palmitic acid (up to 20%). Extra virgin olive oil is required to have no more than 0.8% free acidity and is considered to have favorable flavor characteristics.
Olive oil has long been a common ingredient in Mediterranean cuisine, including ancient Greek and Roman cuisine. Wild olives, which originated in Asia Minor, were collected by Neolithic people as early as the 8th millennium BC. Besides food, olive oil has been used for religious rituals, medicines, as a fuel in oil lamps, for soap-making, and for skin care. The Spartans and other Greeks used oil to rub themselves while exercising in the gymnasia. From its beginnings early in the 7th century BC, the cosmetic use of olive oil quickly spread to all of the Hellenic city states, together with athletes training in the nude, and lasted close to a thousand years despite its great expense. Olive oil was also popular as a form of birth control; Aristotle in his "History of Animals" recommends applying a mixture of olive oil combined with either oil of cedar, ointment of lead, or ointment of frankincense to the cervix to prevent pregnancy.
It is not clear when and where olive trees were first domesticated. According to an article published by "Reviews in Environmental Science and Biotechnology" the modern olive tree most likely originated in ancient Persia and Mesopotamia spreading towards Syria and Israel in the Mediterranean Basin where it was cultivated and later introduced to North Africa. Some scholars have argued that olive cultivation originated with the Ancient Egyptians.
The olive tree reached Greece, Carthage and Libya sometime in the 28th century BC, having been spread westward by the Phoenicians. Until around 1500 BC, eastern coastal areas of the Mediterranean were most heavily cultivated. Evidence also suggests that olives were being grown in Crete as long ago as 2500 BC. The earliest surviving olive oil amphorae date to 3500 BC (Early Minoan times), though the production of olive oil is assumed to have started before 4000 BC. Olive trees were certainly cultivated by the Late Minoan period (1500 BC) in Crete, and perhaps as early as the Early Minoan. The cultivation of olive trees in Crete became particularly intense in the post-palatial period and played an important role in the island's economy, as it did across the Mediterranean. Later, as Greek colonies were established in other parts of the Mediterranean, olive farming was introduced to places like Spain and continued to spread throughout the Roman Empire.
Olive trees were introduced to the Americas in the 16th century AD when cultivation began in areas that enjoyed a climate similar to the Mediterranean such as Chile, Argentina and California.
Recent genetic studies suggest that species used by modern cultivators descend from multiple wild populations, but a detailed history of domestication is not yet forthcoming.
Archaeological evidence shows that olives were being turned into olive oil by 6000 BC, and by 4500 BC at a now-submerged prehistoric settlement south of Haifa.
Olive trees and oil production in the Eastern Mediterranean can be traced to archives of the ancient city-state Ebla (2600–2240 BC), which was located on the outskirts of the Syrian city of Aleppo. Here some dozen documents dated to 2400 BC describe lands of the king and the queen. These belonged to a library of clay tablets perfectly preserved by having been baked in the fire that destroyed the palace. A later source is the frequent mention of oil in the Tanakh.
Dynastic Egyptians before 2000 BC imported olive oil from Crete, Syria and Canaan and oil was an important item of commerce and wealth. Remains of olive oil have been found in jugs over 4,000 years old in a tomb on the island of Naxos in the Aegean Sea. Sinuhe, the Egyptian exile who lived in northern Canaan about 1960 BC, wrote of abundant olive trees. The Minoans used olive oil in religious ceremonies. The oil became a principal product of the Minoan civilization, where it is thought to have represented wealth.
Olive oil was also a major export of Mycenaean Greece (c. 1450–1150 BC). Scholars believe the oil was made by a process in which olives were placed in woven mats and squeezed, with the oil collecting in vats. The process, known since the Bronze Age, was used by the Egyptians and continued in use through the Hellenistic period.
The importance of olive oil as a commercial commodity increased after the Roman conquest of Egypt, Greece and Asia Minor led to more trade along the Mediterranean. Olive trees were planted throughout the entire Mediterranean basin during the evolution of the Roman Republic and Empire. According to the historian Pliny the Elder, Italy had "excellent olive oil at reasonable prices" by the 1st century AD—"the best in the Mediterranean". As olive production expanded in the 5th century AD, the Romans began to employ more sophisticated production techniques such as the olive press and "trapetum". Many ancient presses still exist in the Eastern Mediterranean region, and some dating to the Roman period are still in use today. Productivity was greatly improved by Joseph Graham's hydraulic pressing system, developed in 1795.
The olive tree has historically been a symbol of peace between nations. It has played a religious and social role in Greek mythology, especially concerning the name of the city of Athens where the city was named after the goddess Athena because her gift of an olive tree was held to be more precious than rival Poseidon's gift of a salt spring.
There are many olive cultivars, each with a particular flavor, texture, and shelf life that make them more or less suitable for different applications, such as direct human consumption on bread or in salads, indirect consumption in domestic cooking or catering, or industrial uses such as animal feed or engineering applications. During the stages of maturity, olive fruit changes color from green to violet, and then black. Olive oil taste characteristics depend on which stage of ripeness olive fruits are collected.
Olive oil is an important cooking oil in countries surrounding the Mediterranean, and it forms one of the three staple food plants of Mediterranean cuisine, the other two being wheat (as in pasta, bread, and couscous) and the grape, used as a dessert fruit and for wine.
Extra virgin olive oil is mostly used raw: on salads, as an ingredient in salad dressings, and with foods to be eaten cold. If uncompromised by heat, the flavor is stronger. It can also be used for sautéing.
When extra virgin olive oil is heated above , depending on its free fatty acid content, the unrefined particles within the oil are burned. This leads to deteriorated taste. Also, most consumers do not like the pronounced taste of extra virgin olive oil for deep fried foods. Refined olive oils are suited for deep frying because of the higher smoke point and milder flavour. Extra virgin oils have a smoke point around 180–215 °C (356–419 °F) whereas refined light olive oil has a smoke point up to 230 °C (446 °F). "Contrary to popular myths, high quality EVOO [extra virgin olive oil] is an excellent choice for cooking. High quality EVOO has a smoke point well above the standard temperatures required for cooking, and its resistance to oxidation is higher than most cooking oils due to the antioxidant and mono-unsaturated fat content." "The smoke point of a good extra-virgin olive oil is 210 °C (420 °F). The smoke point is higher in good extra-virgin olive oil and lower in low-quality virgin olive oil."
Choosing a cold-pressed olive oil can be similar to selecting a wine. The flavor of these oils varies considerably and a particular oil may be more suited for a particular dish.
Fresh oil, as available in an oil-producing region, tastes noticeably different from the older oils available elsewhere. In time, oils deteriorate and become stale. One-year-old oil may be still pleasant to the taste, but it is less fragrant than fresh oil. After the first year, olive oil is more suitable for cooking than serving raw.
The taste of the olive oil is influenced by the varietals used to produce the oil and by the moment when the olives are harvested and ground (less ripe olives give more bitter and spicy flavors – riper olives give a sweeter sensation in the oil).
The Roman Catholic, Orthodox and Anglican churches use olive oil for the Oil of Catechumens (used to bless and strengthen those preparing for Baptism) and Oil of the Sick (used to confer the Sacrament of Anointing of the Sick or Unction). Olive oil mixed with a perfuming agent such as balsam is consecrated by bishops as Sacred Chrism, which is used to confer the sacrament of Confirmation (as a symbol of the strengthening of the Holy Spirit), in the rites of Baptism and the ordination of priests and bishops, in the consecration of altars and churches, and, traditionally, in the anointing of monarchs at their coronation.
Eastern Orthodox Christians still use oil lamps in their churches, home prayer corners and in the cemeteries. A vigil lamp consists of a votive glass containing a half-inch of water, filled the rest of the way with olive oil. The glass has a metal holder that hangs from a bracket on the wall or sits on a table. A cork float with a lit wick floats on the oil. To douse the flame, the float is carefully pressed down into the oil. Makeshift oil lamps can easily be made by soaking a ball of cotton in olive oil and forming it into a peak. The peak is lit and burns until all the oil is consumed, whereupon the rest of the cotton burns out. Olive oil is a usual offering to churches and cemeteries.
The Church of Jesus Christ of Latter-day Saints uses virgin olive oil that has been blessed by the priesthood. This consecrated oil is used for anointing the sick.
Iglesia ni Cristo uses olive oil to anoint the sick (in Filipino: "Pagpapahid ng Langis"); it is blessed through prayer by a minister or deacon before the anointing. After the anointing, the elder offers a prayer of thanksgiving.
In Jewish observance, olive oil was the only fuel allowed to be used in the seven-branched menorah in the Mishkan service during the Exodus of the tribes of Israel from Egypt, and later in the permanent Temple in Jerusalem. It was obtained by using only the first drop from a squeezed olive and was consecrated for use only in the Temple by the priests and stored in special containers. Although candles can be used to light the menorah at Hanukkah, oil containers are preferred, to imitate the original menorah. Another use of oil in Jewish religion was for anointing the kings of the Kingdom of Israel, originating from King David. Tzidkiyahu was the last anointed King of Israel.
Olive oil has a long history of use as a home skincare remedy. The Egyptians used it alongside beeswax as a cleanser, moisturizer, and antibacterial agent from pharaonic times. In ancient Greece, olive oil was used during massage, to prevent sports injuries and relieve muscle fatigue. In 2000, Japan was the top importer of olive oil in Asia (13,000 tons annually) because consumers there believe both the ingestion and topical application of olive oil to be good for skin and health.
Olive oil is popular for use in massaging infants and toddlers, but scientific evidence of its efficacy is mixed. One analysis of olive oil versus mineral oil found that, when used for infant massage, olive oil can be considered a safe alternative to sunflower, grapeseed and fractionated coconut oils. This stands true particularly when it is mixed with a lighter oil like sunflower, which "would have the further effect of reducing the already low levels of free fatty acids present in olive oil". Another trial stated that olive oil lowered the risk of dermatitis for infants in all gestational stages when compared with emollient cream. However, yet another study on adults found that topical treatment with olive oil "significantly damages the skin barrier" when compared to sunflower oil, and that it may make existing atopic dermatitis worse. The researchers concluded that due to the negative outcome in adults, they do not recommend the use of olive oil for the treatment of dry skin and infant massage.
Applying olive oil to the skin does not help prevent or reduce stretch marks.
Olive oil is also a natural and safe lubricant, and can be used to lubricate kitchen machinery (grinders, blenders, cookware, etc.). It can also be used for illumination (oil lamps) or as the base for soaps and detergents. Some cosmetics also use olive oil as their base, and it can be used as a substitute for machine oil. Olive oil has also been used as both solvent and ligand in the synthesis of cadmium selenide quantum dots.
The Ranieri Filo della Torre is an international literary prize for writing about extra virgin olive oil. It annually honors poetry, fiction and non-fiction on the subject.
Olive oil is produced by grinding olives and extracting the oil by mechanical or chemical means. Green olives usually produce more bitter oil, and overripe olives can produce oil that is rancid, so for good extra virgin olive oil care is taken to make sure the olives are perfectly ripened. The process is generally as follows:
The remaining semi-solid waste, called pomace, retains a small quantity (about 5–10%) of oil that cannot be extracted by further pressing, but only with chemical solvents. This is done in specialized chemical plants, not in the oil mills. The resulting oil is not "virgin" but "pomace oil".
Handling of olive waste is an environmental challenge because the wastewater, which amounts to millions of tons (billions of liters) annually in the European Union, is not biodegradable, is toxic to plants, and cannot be processed through conventional water treatment systems. Traditionally, olive pomace would be used as compost or developed as a possible biofuel, although these uses introduce concern due to chemicals present in the pomace. A process called "valorization" of olive pomace is under research and development, consisting of additional processing to obtain value-added byproducts, such as animal feed, food additives for human products, and phenolic and fatty acid extracts for potential human use.
In 2016/17, world production of virgin olive oil was 2,586,500 tonnes, an 18.6% decrease from 2015/16 global production. Spain produced 1,290,600 tonnes, or 50% of world production. The next six largest producers – Greece, Italy, Turkey, Morocco, Syria and Tunisia – collectively produced 70% of Spain's annual total.
In the EU, Eurostat reported in 2007 that there were 1.9 million farms with olive groves. The olive sector is characterised by a large number of small operations. The largest holdings are in Andalucía (8 ha/holding on average) in Spain and Alentejo (7.5 ha/holding) in Portugal while the smallest are located in Cyprus (0.5 ha/holding), Apulia and Crete (1.7 ha/holding).
Some 75% of Spain's production derives from the region of Andalucía, particularly within Jaén province which produces 70% of olive oil in Spain. The world's largest olive oil mill (almazara, in Spanish), capable of processing 2,500 tonnes of olives per day, is in the town of Villacarrillo, Jaén.
In 2016/2017, Greece was the second largest producer of olive oil, with 195,000 tons produced. As of 2009, there were 531,000 farms cultivating some 132 million trees, producing 310,000–350,000 tons of olive oil.
Italy produced 182,300 tonnes in 2016/17, or 7.6% of the world's production. Even though production can change from year to year, the major Italian producers are usually the regions of Calabria and, above all, Apulia. Many PDO and PGI extra-virgin olive oils are produced in these regions. In Apulia, among the villages of Carovigno, Ostuni and Fasano, is the Plain of Olive Trees, which counts some specimens as old as 3000 years; it has been proposed to add this plain to the UNESCO Heritage List. Excellent extra-virgin olive oil is also produced in Tuscany, in cities like Lucca, Florence and Siena, which are also included in the association "Città dell'Olio". Italy imports about 65% of Spanish olive oil exports. Some Italian companies are known to mix the imported olive oil with alternate oils (such as soy) and falsely market the blend as authentic olive oil "Made in Italy", creating a fraud that the European Commission has attempted to overcome by offering a 5 million Euro reward to stimulate better methods of authentication.
Turkey is the largest producer outside the EU (table), with 178,000 tons produced in 2016/2017 from roughly 175 million cultivated trees.
Tunisia is the fourth largest producer outside the EU (table), with 100,000 tons produced in 2016 to 2017, among which 73% was exported to Europe. Because of the arid climate, pesticides and herbicides are largely unnecessary in Tunisia.
While the majority (between 60–70%) of olive oil consumed in Australia is imported from Europe, a smaller domestic industry does exist. Many Australian producers only make premium small-batch oils, while a number of corporate growers operate groves of a million trees or more and produce oils for the general market. 11% of Australian production is exported, mostly to Asia.
In North America, Italian and Spanish olive oils are the best-known, and top-quality extra virgin olive oil from Italy, Spain, Portugal and Greece are sold at high prices, often in prestige packaging. A large part of U.S. olive oil imports come from Italy, Spain, and Turkey.
The United States produces olive oil in California, Hawaii, Texas, Georgia, and Oregon.
Greece has by far the largest per capita consumption of olive oil worldwide, over per person per year; Spain and Italy, around 14 l; Tunisia, Portugal, Syria, Jordan and Lebanon, around 8 l. Northern Europe and North America consume far less, around 0.7 l, but the consumption of olive oil outside its home territory has been rising steadily.
The International Olive Council (IOC) is an intergovernmental organisation of states that produce olives or products derived from olives, such as olive oil. The IOC officially governs 95% of international production and holds great influence over the rest. The EU regulates the use of different protected designation of origin labels for olive oils.
The United States is not a member of the IOC and is not subject to its authority, but on October 25, 2010, the U.S. Department of Agriculture adopted new voluntary olive oil grading standards that closely parallel those of the IOC, with some adjustments for the characteristics of olives grown in the U.S. Additionally, U.S. Customs regulations on "country of origin" state that if a non-origin nation is shown on the label, then the real origin must be shown on the same side of the label and in comparable size letters so as not to mislead the consumer. Yet most major U.S. brands continue to put "imported from Italy" on the front label in large letters and other origins on the back in very small print. "In fact, olive oil labeled 'Italian' often comes from Turkey, Tunisia, Morocco, Spain, and Greece." This makes it unclear what percentage of the olive oil is really of Italian origin.
All production begins by transforming the olive fruit into olive paste by crushing or pressing. This paste is then malaxed (slowly churned or mixed) to allow the microscopic oil droplets to agglomerate. The oil is then separated from the watery matter and fruit pulp with the use of a press (traditional method) or centrifugation (modern method). After extraction the remnant solid substance, called pomace, still contains a small quantity of oil.
To classify its organoleptic qualities, olive oil is judged by a panel of trained tasters in a blind taste test.
One parameter used to characterise an oil is its acidity. In this context, "acidity" is not chemical acidity in the sense of pH, but the percent (measured by weight) of free oleic acid. Measured by quantitative analysis, acidity is a measure of the hydrolysis of the oil's triglycerides: as the oil degrades, more fatty acids are freed from the glycerides, increasing the level of free acidity and thereby increasing hydrolytic rancidity. Another measure of the oil's chemical degradation is the peroxide value, which measures the degree to which the oil is oxidized by free radicals, leading to "oxidative" rancidity. Phenolic acids present in olive oil also add acidic sensory properties to aroma and flavor.
The grades of oil extracted from the olive fruit can be classified as:
In countries that adhere to the standards of the International Olive Council, as well as in Australia, and under the voluntary United States Department of Agriculture labeling standards in the United States:
Extra virgin olive oil is the highest grade of virgin oil, derived by cold mechanical extraction without use of solvents or refining methods. It contains no more than 0.8% free acidity, and is judged to have a superior taste, having some fruitiness and no defined sensory defects. Extra virgin olive oil accounts for less than 10% of oil in many producing countries; the percentage is far higher in the Mediterranean countries (Greece: 80%, Italy: 65%, Spain: 50%).
Virgin olive oil is a lesser grade of virgin oil, with free acidity of up to 2.0%, and is judged to have a good taste, but may include some sensory defects.
Refined olive oil is virgin oil that has been refined using charcoal and other chemical and physical filters, methods which do not alter the glyceridic structure. It has a free acidity, expressed as oleic acid, of not more than 0.3 grams per 100 grams (0.3%) and its other characteristics correspond to those fixed for this category in this standard. It is obtained by refining virgin oils to eliminate high acidity or organoleptic defects. Oils labeled as "Pure olive oil" or "Olive oil" are primarily refined olive oil, with a small addition of virgin for taste.
Olive pomace oil is refined pomace olive oil, often blended with some virgin oil. It is fit for consumption, but may not be described simply as "olive oil". It has a more neutral flavor than pure or virgin olive oil, making it unfashionable among connoisseurs; however, it has the same fat composition as regular olive oil, giving it the same health benefits. It also has a high smoke point, and thus is widely used in restaurants as well as home cooking in some countries.
As the United States is not a member, the IOC retail grades have no legal meaning there, but on October 25, 2010, the United States Department of Agriculture (USDA) established Standards for Grades of Olive Oil and Olive-Pomace Oil, which closely parallel the IOC standards:
These grades are voluntary. Certification is available, for a fee, from the USDA.
Several olive producer associations, such as the North American Olive Oil Association and the California Olive Oil Council, also offer grading and certification within the United States. Oleologist Nicholas Coleman suggests that the California Olive Oil Council certification is the most stringent of the voluntary grading schemes in the United States.
There have been allegations, particularly in Italy and Spain, that regulation is sometimes lax and corrupt. Major shippers are claimed to routinely adulterate olive oil so that only about 40% of olive oil sold as "extra virgin" in Italy actually meets the specification. In some cases, colza oil (extracted from rapeseed) with added color and flavor has been labeled and sold as olive oil. This extensive fraud prompted the Italian government to mandate a new labeling law in 2007 for companies selling olive oil, under which every bottle of Italian olive oil would have to declare the farm and press on which it was produced, as well as display a precise breakdown of the oils used, for blended oils. In February 2008, however, EU officials took issue with the new law, stating that under EU rules such labeling should be voluntary rather than compulsory. Under EU rules, olive oil may be sold as Italian even if it only contains a small amount of Italian oil.
Extra virgin olive oil has strict requirements and is checked for "sensory defects" that include: rancid, fusty, musty, winey (vinegary) and muddy sediment. These defects can occur for different reasons. The most common are:
In March 2008, 400 Italian police officers conducted "Operation Golden Oil", arresting 23 people and confiscating 85 farms after an investigation revealed a large-scale scheme to relabel oils from other Mediterranean nations as Italian. In April 2008, another operation impounded seven olive oil plants and arrested 40 people in nine provinces of northern and southern Italy for adding chlorophyll to sunflower and soybean oil, and selling it as extra virgin olive oil, both in Italy and abroad; 25,000 liters of the fake oil were seized and prevented from being exported.
On March 15, 2011, the prosecutor's office in Florence, Italy, working in conjunction with the forestry department, indicted two managers and an officer of Carapelli, one of the brands of the Spanish company Grupo SOS (which recently changed its name to Deoleo). The charges involved falsified documents and food fraud. Carapelli lawyer Neri Pinucci said the company was not worried about the charges and that "the case is based on an irregularity in the documents."
In February 2012, Spanish authorities investigated an international olive oil scam in which palm, avocado, sunflower and other cheaper oils were passed off as Italian olive oil. Police said the oils were blended in an industrial biodiesel plant and adulterated in a way to hide markers that would have revealed their true nature. The oils were not toxic and posed no health risk, according to a statement by the Guardia Civil. Nineteen people were arrested following the year-long joint probe by the police and Spanish tax authorities, part of what they call Operation Lucerna.
Stating the origin of blended oil in tiny print serves as a legal loophole for manufacturers of adulterated and mixed olive oil.
Journalist Tom Mueller has investigated crime and adulteration in the olive oil business, publishing the article "Slippery Business" in "The New Yorker", followed by the 2011 book "Extra Virginity". On 3 January 2016, Bill Whitaker presented a program on CBS News including interviews with Mueller and with Italian authorities. It was reported that in the previous month 5,000 tons of adulterated olive oil had been sold in Italy, and that organised crime was heavily involved; the term "Agrimafia" was used. Mueller made the point that the profit margin on adulterated olive oil was three times that on the illegal narcotic drug cocaine. He said that over 50% of olive oil sold in Italy was adulterated, as was 75–80% of that sold in the US. Whitaker reported that three samples of "extra virgin olive oil" had been bought in a US supermarket and tested; two of the three samples did not meet the required standard, and one of them—with a top-selling US brand—was exceptionally poor.
In early February 2017, the Carabinieri arrested 33 suspects in the Calabrian mafia's Piromalli 'ndrina ('Ndrangheta) which was allegedly exporting fake extra virgin olive oil to the U.S.; the product was actually inexpensive olive pomace oil fraudulently labeled. Less than a year earlier, the American television program 60 Minutes had warned that "the olive oil business has been corrupted by the Mafia" and that "Agromafia" was a $16-billion per year enterprise. A Carabinieri investigator interviewed on the program said that "olive oil fraud has gone on for the better part of four millennia" but today, it's particularly "easy for the bad guys to either introduce adulterated olive oils or mix in lower quality olive oils with extra-virgin olive oil". Weeks later, a report by Forbes stated that "it's reliably reported that 80% of the Italian olive oil on the market is fraudulent" and that "a massive olive oil scandal is being uncovered in Southern Italy (Puglia, Umbria and Campania)".
Olive oil is composed mainly of the mixed triglyceride esters of oleic acid, linoleic acid, palmitic acid and of other fatty acids, along with traces of squalene (up to 0.7%) and sterols (about 0.2% phytosterol and tocosterols). The composition varies by cultivar, region, altitude, time of harvest, and extraction process.
Olive oil contains traces of phenolics (about 0.5%), such as esters of tyrosol, hydroxytyrosol, oleocanthal and oleuropein, which give extra virgin olive oil its bitter, pungent taste, and are also implicated in its aroma. Olive oil is a source of at least 30 phenolic compounds, among which are elenolic acid, a marker for maturation of olives, and alpha-tocopherol, one of the eight members of the Vitamin E family. Oleuropein, together with other closely related compounds such as 10-hydroxyoleuropein, ligstroside and 10-hydroxyligstroside, are tyrosol esters of elenolic acid.
Other phenolic constituents include flavonoids, lignans and pinoresinol.
One tablespoon of olive oil (13.5 g) contains the following nutritional information according to the USDA:
In the United States, the FDA allows producers of olive oil to place the following qualified health claim on product labels:
In a review by the European Food Safety Authority (EFSA) in 2011, health claims on olive oil were approved for protection by its polyphenols against oxidation of blood lipids, and for maintenance of normal blood LDL-cholesterol levels by replacing saturated fats in the diet with oleic acid. (Commission Regulation (EU) 432/2012 of 16 May 2012). Despite its approval, the EFSA has noted that a definitive cause-and-effect relationship has not been adequately established for consumption of olive oil and maintaining normal (fasting) blood concentrations of triglycerides, normal blood HDL-cholesterol concentrations, and normal blood glucose concentrations.
A 2014 meta-analysis concluded that increased consumption of olive oil was associated with reduced risk of all-cause mortality, cardiovascular events and stroke, while monounsaturated fatty acids of mixed animal and plant origin showed no significant effects. Another meta-analysis in 2018 found high-polyphenol olive oil intake was associated with improved measures of total cholesterol, HDL cholesterol, malondialdehyde, and oxidized LDL when compared to low-polyphenol olive oils. | https://en.wikipedia.org/wiki?curid=22478 |
Olive
The olive, known by the botanical name Olea europaea, meaning "European olive", is a species of small tree in the family Oleaceae, found traditionally in the Mediterranean Basin. The species is cultivated in all the countries of the Mediterranean, as well as in South America, South Africa, Australia, New Zealand and the United States. "Olea europaea" is the type species for the genus "Olea".
The olive's fruit, also called the olive, is of major agricultural importance in the Mediterranean region as the source of olive oil; it is one of the core ingredients in Mediterranean cuisine. The tree and its fruit give their name to the plant family, which also includes species such as lilacs, jasmine, "Forsythia", and the true ash trees ("Fraxinus").
The word "olive" derives from Latin ' ("olive fruit", "olive tree"), possibly through Etruscan 𐌄𐌋𐌄𐌉𐌅𐌀 (eleiva) from the archaic Proto-Greek form *ἐλαίϝα (*elaíwa) (Classic Greek ', "olive fruit", "olive tree").
The word "oil" originally meant "olive oil", from ', (', "olive oil"). Also in multiple other languages the word for "oil" ultimately derives from the name of this tree and its fruit.
The oldest attested forms of the Greek words are the Mycenaean "e-ra-wa" ("olive tree") and "e-ra-wo" or "e-ra3-wo" ("olive oil"), written in the Linear B syllabic script.
The olive tree, "Olea europaea", is an evergreen tree or shrub native to the Mediterranean, Asia, and Africa. It is short and squat, and rarely exceeds in height. 'Pisciottana', a unique variety comprising 40,000 trees found only in the area around Pisciotta in the Campania region of southern Italy often exceeds this, with correspondingly large trunk diameters. The silvery green leaves are oblong, measuring long and wide. The trunk is typically gnarled and twisted.
The small, white, feathery flowers, with ten-cleft calyx and corolla, two stamens, and bifid stigma, are borne generally on the previous year's wood, in racemes springing from the axils of the leaves.
The fruit is a small drupe long, thinner-fleshed and smaller in wild plants than in orchard cultivars. Olives are harvested in the green to purple stage. Canned black olives have often been artificially blackened (see below on processing) and may contain the chemical ferrous gluconate to improve the appearance. "Olea europaea" contains a seed commonly referred to in American English as a pit, and in British English as a stone.
The six natural subspecies of "Olea europaea" are distributed over a wide range:
The subspecies "O. e. maroccana" and "O. e. cerasiformis" are respectively hexaploid and tetraploid.
Wild growing forms of the olive are sometimes treated as the species "Olea oleaster".
The trees referred to as white and black olives in Southeast Asia are not actually olives, but species of "Canarium".
Hundreds of cultivars of the olive tree are known. An olive's cultivar has a significant impact on its colour, size, shape, and growth characteristics, as well as the qualities of olive oil. Olive cultivars may be used primarily for oil, eating, or both. Olives cultivated for consumption are generally referred to as table olives.
Since many olive cultivars are self-sterile or nearly so, they are generally planted in pairs with a single primary cultivar and a secondary cultivar selected for its ability to fertilize the primary one. In recent times, efforts have been directed at producing hybrid cultivars with qualities useful to farmers, such as resistance to disease, quick growth, and larger or more consistent crops.
Fossil evidence indicates the olive tree had its origins some 20–40 million years ago in the Oligocene, in the region corresponding to present-day Italy and the eastern Mediterranean Basin. The olive was first cultivated some 7,000 years ago in Mediterranean regions.
The edible olive seems to have coexisted with humans for about 5,000 to 6,000 years, going back to the early Bronze Age (3150 to 1200 BC). Its origin can be traced to the Levant based on written tablets, olive pits, and wood fragments found in ancient tombs.
The immediate ancestry of the cultivated olive is unknown. Fossil "Olea" pollen has been found in Macedonia and other places around the Mediterranean, indicating that this genus is an original element of the Mediterranean flora. Fossilized leaves of "Olea" were found in the palaeosols of the volcanic Greek island of Santorini (Thera) and were dated to about 37,000 BP. Imprints of larvae of the olive whitefly "Aleurolobus (Aleurodes) olivinus" were found on the leaves. The same insect is commonly found today on olive leaves, showing that the plant-animal co-evolutionary relations have not changed since that time. Other leaves found on the same island have been dated to 60,000 BP, making them the oldest known olive remains from the Mediterranean.
As far back as 3000 BC, olives were grown commercially in Crete; they may have been the source of the wealth of the Minoan civilization.
Olives are not native to the Americas. Spanish colonists brought the olive to the New World, where its cultivation prospered in present-day Peru, Chile and Argentina. The first seedlings from Spain were planted in Lima by Antonio de Rivera in 1560. Olive tree cultivation quickly spread along the valleys of South America's dry Pacific coast, where the climate was similar to the Mediterranean. Spanish missionaries established the tree in the 18th century in California; it was first cultivated at Mission San Diego de Alcalá in 1769, or possibly later, around 1795. Orchards were started at other missions, but in 1838, an inspection found only two olive orchards in California. Cultivation for oil gradually became a highly successful commercial venture from the 1860s onward. In Japan, the first successful planting of olive trees took place in 1908 on Shodo Island, which became the cradle of olive cultivation there. An estimated 865 million olive trees were in the world as of 2005, the vast majority of them in Mediterranean countries, with traditionally marginal areas accounting for no more than 25% of olive-planted area and 10% of oil production.
Olive oil has long been considered sacred. The olive branch was often a symbol of abundance, glory, and peace. The leafy branches of the olive tree were ritually offered to deities and powerful figures as emblems of benediction and purification, and they were used to crown the victors of friendly games and bloody wars. Today, olive oil is still used in many religious ceremonies. Over the years, the olive has also been used to symbolize wisdom, fertility, power, and purity.
The olive was one of the main elements in ancient Israelite cuisine. Olive oil was used not only for food and cooking, but also for lighting, sacrificial offerings, ointment, and anointment for priestly or royal office.
The olive tree is one of the first plants mentioned in the Hebrew Bible (the Christian Old Testament), and one of the most significant. An olive branch (or leaf, depending on translation) was brought back to Noah by a dove to demonstrate that the flood was over (Book of Genesis, 8:11). The olive is listed in Deuteronomy 8:8 as one of the seven species that are noteworthy products of the Land of Israel.
Olives are thought to have been domesticated in the third millennium BC at the latest, at which point they, along with grain and grapes, became part of Colin Renfrew's triad of Greek staple crops that fueled the emergence of more complex societies. Olives, and especially (perfumed) olive oil, became a major export product during the Minoan and Mycenaean period. Dutch archaeologist Jorrit Kelder proposed that the Mycenaeans sent shipments of olive oil, probably alongside live olive branches, to the court of the Egyptian pharaoh Akhenaten as a diplomatic gift. In Egypt, these imported olive branches may have acquired ritual meaning, as they are depicted as offerings on the wall of the Aten temple and were used in wreaths for the burial of Tutankhamen. It is likely that, as well as being used for culinary purposes, olive oil served various other ends, including as a perfume.
The ancient Greeks smeared olive oil on their bodies and hair as a matter of grooming and good health.
Olive oil was used to anoint kings and athletes in ancient Greece. It was burnt in the sacred lamps of temples and was the "eternal flame" of the original Olympic games. Victors in these games were crowned with its leaves.
In Homer's "Odyssey", Odysseus crawls beneath two shoots of olive that grow from a single stock, and in the "Iliad", (XVII.53ff) there is a metaphoric description of a lone olive tree in the mountains, by a spring; the Greeks observed that the olive rarely thrives at a distance from the sea, which in Greece invariably means up mountain slopes. Greek myth attributed to the primordial culture-hero Aristaeus the understanding of olive husbandry, along with cheese-making and bee-keeping. Olive was one of the woods used to fashion the most primitive Greek cult figures, called "xoana", referring to their wooden material; they were reverently preserved for centuries. It was purely a matter of local pride that the Athenians claimed that the olive grew first in Athens. In an archaic Athenian foundation myth, Athena won the patronage of Attica from Poseidon with the gift of the olive. According to the fourth-century BC father of botany, Theophrastus, olive trees ordinarily attained an age around 200 years, he mentions that the very olive tree of Athena still grew on the Acropolis; it was still to be seen there in the second century AD; and when Pausanias was shown it, c. 170 AD, he reported "Legend also says that when the Persians fired Athens the olive was burnt down, but on the very day it was burnt it grew again to the height of two cubits." Indeed, olive suckers sprout readily from the stump, and the great age of some existing olive trees shows that it was perfectly possible that the olive tree of the Acropolis dated to the Bronze Age. The olive was sacred to Athena and appeared on the Athenian coinage.
Theophrastus, in "On the Causes of Plants", does not give as systematic and detailed an account of olive husbandry as he does of the vine, but he makes clear (in 1.16.10) that the cultivated olive must be vegetatively propagated; indeed, the pits give rise to thorny, wild-type olives, spread far and wide by birds. Theophrastus reports how the bearing olive can be grafted on the wild olive, for which the Greeks had a separate name, "kotinos". In his "Enquiry into Plants" (2.1.2–4) he states that the olive can be propagated from a piece of the trunk, the root, a twig, or a stake.
According to Pliny the Elder, a vine, a fig tree, and an olive tree grew in the middle of the Roman Forum; the latter was planted to provide shade (the garden plot was recreated in the 20th century). The Roman poet Horace mentions it in reference to his own diet, which he describes as very simple: "As for me, olives, endives, and smooth mallows provide sustenance." Lord Monboddo comments on the olive in 1779 as one of the foods preferred by the ancients and as one of the most perfect foods.
Vitruvius describes the use of charred olive wood in tying together walls and foundations in his "De architectura":
The thickness of the wall should, in my opinion, be such that armed men meeting on top of it may pass one another without interference. In the thickness there should be set a very close succession of ties made of charred olive wood, binding the two faces of the wall together like pins, to give it lasting endurance. For that is a material which neither decay, nor the weather, nor time can harm, but even though buried in the earth or set in the water it keeps sound and useful forever. And so not only city walls but substructures in general and all walls that require a thickness like that of a city wall, will be long in falling to decay if tied in this manner.
The Mount of Olives east of Jerusalem is mentioned several times in the New Testament. The Allegory of the Olive Tree in St. Paul's Epistle to the Romans refers to the scattering and gathering of Israel. It compares the Israelites to a tame olive tree and the Gentiles to a wild olive branch. The olive tree itself, as well as olive oil and olives, play an important role in the Bible.
The olive tree and olive oil are mentioned seven times in the Quran, and the olive is praised as a precious fruit. Olive tree and olive-oil health benefits have been propounded in Prophetic medicine. Muhammad is reported to have said: "Take oil of olive and massage with it – it is a blessed tree" (Sunan al-Darimi, 69:103).
Olives are substitutes for dates (if not available) during Ramadan fasting, and olive tree leaves are used as incense in some Muslim Mediterranean countries.
Olive trees in the groves around the Mediterranean Sea are centuries old, with some dated to 2000 years. An olive tree on the island of Brijuni (Brioni), Istria in Croatia, has a radiocarbon dating age of about 1,600 years. It still gives fruit (about per year), which is made into olive oil.
An olive tree in west Athens, named "Plato's Olive Tree", is thought to be a remnant of the grove where Plato's Academy was situated, making it an estimated 2,400 years old. The tree comprised a cavernous trunk from which a few branches were still sprouting in 1975, when a traffic accident caused a bus to uproot it. Following that, the trunk was preserved and displayed in the nearby Agricultural University of Athens. In 2013, it was reported that the remaining part of the trunk was uprooted and stolen, allegedly to serve as firewood. A supposedly older tree, the "Peisistratos Tree", is located by the banks of the Cephisus River, in the municipality of Agioi Anargyroi, and is said to be a remnant of an olive grove that was planted by Athenian tyrant Peisistratos in the sixth century BC. Numerous ancient olive trees also exist near Pelion in Greece. The age of an olive tree in Crete, the Finix Olive, is claimed to be over 2,000 years old; this estimate is based on archaeological evidence around the tree. The olive tree of Vouves, also in Crete, has an age estimated between 2000 and 4000 years. An olive tree called Farga d'Arió in Ulldecona, Catalonia, Spain, has been estimated (with laser-perimetry methods) to date back to 314 AD, which would mean that it was planted when Constantine the Great was Roman emperor.
Some Italian olive trees are believed to date back to Ancient Rome (8th century BC to 5th century AD), although identifying progenitor trees in ancient sources is difficult. Several other trees of about 1,000 years old are within the same garden. The 15th-century trees of Olivo della Linza, at Alliste in the Province of Lecce in Apulia on the Italian mainland, were noted by Bishop Ludovico de Pennis during his pastoral visit to the Diocese of Nardò-Gallipoli in 1452.
The town of Bshaale, Lebanon, claims to have the oldest olive trees in the world (4000 BC for the oldest), but no scientific study supports these claims. Other trees, in the town of Amioun, appear to be at least 1,500 years old.
Throughout Israel and Palestine, dozens of ancient olive trees are found with estimated ages of 1,600–2,000 years; however, these estimates are not supported by current scientific methods. Ancient trees include two giant olive trees in Arraba and five trees in Deir Hanna, both in the Galilee region, which have been claimed to be over 3,000 years old, although no available data support the credibility of the study that produced these estimates, so the 3,000-year figure cannot be considered valid. All seven trees continue to produce olives.
Several trees in the Garden of Gethsemane (from the Hebrew words "gat shemanim", or olive press) in Jerusalem are claimed to date back to the time of Jesus. A study conducted by the National Research Council of Italy in 2012 used carbon dating on older parts of the trunks of three trees from Gethsemane and came up with the dates of 1092, 1166, and 1198 AD, while DNA tests showed that the trees were originally planted from the same parent plant. According to molecular analysis, the tested trees showed the same allelic profile at all microsatellite loci analyzed, which may furthermore indicate an attempt to keep the lineage of an older specimen intact. However, Bernabei writes, "All the tree trunks are hollow inside so that the central, older wood is missing . . . In the end, only three from a total of eight olive trees could be successfully dated. The dated ancient olive trees do, however, not allow any hypothesis to be made with regard to the age of the remaining five giant olive trees." Babcox concludes, "The roots of the eight oldest trees are possibly much older. Visiting guides to the garden often state that they are two thousand years old."
The 2,000-year-old Bidni olive trees on the island of Malta, which have been confirmed through carbon dating, have been protected since 1933, and are also listed in UNESCO's Database of National Cultural Heritage Laws. In 2011, after recognising their historical and landscape value, and in recognition of the fact that "only 20 trees remain from 40 at the beginning of the 20th century", Maltese authorities declared the ancient Bidni olive grove at Bidnija, limits of Mosta, as a Tree Protected Area, in accordance with the provisions of the Trees and Woodlands Protection Regulations, 2011, as per Government Notice number 473/11.
The olive tree, "Olea europaea", has been cultivated for olive oil, fine wood, olive leaf, and the olive fruit. About 90% of all harvested olives are turned into oil, while about 10% are used as table olives. The olive is one of the "trinity" or "triad" of basic ingredients in Mediterranean cuisine, the other two being wheat for bread, pasta, and couscous, and the grape for wine.
Table olives are classified by the International Olive Council (IOC) into three groups according to the degree of ripeness achieved before harvesting: green olives, semi-ripe (turning-colour) olives, and ripe (black) olives.
Raw or fresh olives are naturally very bitter; to make them palatable, olives must be cured and fermented, thereby removing oleuropein, a bitter phenolic compound that can reach levels of 14% of dry matter in young olives. In addition to oleuropein, other phenolic compounds render freshly picked olives unpalatable and must also be removed or lowered in quantity through curing and fermentation. Generally speaking, phenolics reach their peak in young fruit and are converted as the fruit matures. Once ripening occurs, the levels of phenolics sharply decline through their conversion to other organic products, which renders some cultivars edible immediately. One example is the "throubes" black olive, native to the island of Thasos, which becomes edible once it is allowed to ripen in the sun, shrivel, and fall from the tree.
The curing process may take from a few days, with lye, to a few months with brine or salt packing. With the exception of California-style and salt-cured olives, all methods of curing involve a major fermentation by bacteria and yeast that is of equal importance to the final table olive product. Traditional cures, using the natural microflora on the fruit to induce fermentation, lead to two important outcomes: the leaching out and breakdown of oleuropein and other unpalatable phenolic compounds, and the generation of favourable metabolites from bacteria and yeast, such as organic acids, probiotics, glycerol, and esters, which affect the sensory properties of the final table olives. Mixed bacterial/yeast olive fermentations may have probiotic qualities. Lactic acid is the most important metabolite, as it lowers the pH, acting as a natural preservative against the growth of unwanted pathogenic species. The result is table olives which can be stored without refrigeration. Fermentations dominated by lactic acid bacteria are, therefore, the most suitable method of curing olives. Yeast-dominated fermentations produce a different suite of metabolites which provide poorer preservation, so they are corrected with an acid such as citric acid in the final processing stage to provide microbial stability.
The many types of preparations for table olives depend on local tastes and traditions. The most important commercial examples are listed below.
Lebanese or Phoenician type (olives with fermentation): Applied to green, semiripe, or ripe olives. Olives are soaked in salt water for 24–48 hours. Then, they are slightly crushed with a rock to hasten the fermentation process. The olives are stored for a period of up to a year in a container with salt water, fresh lemon juice, lemon peels, laurel and olive leaves, and rosemary. Some recipes may contain white vinegar or olive oil.
Spanish or Sevillian type (olives with fermentation): Most commonly applied to green olive preparation, around 60% of all the world's table olives are produced with this method. Olives are soaked in lye (dilute NaOH, 2–4%) for 8–10 hours to hydrolyse the oleuropein. They are usually considered "treated" when the lye has penetrated two-thirds of the way into the fruit. They are then washed once or several times in water to remove the caustic solution and transferred to fermenting vessels full of brine at typical concentrations of 8–12% NaCl. The brine is changed on a regular basis to help remove the phenolic compounds. Fermentation is carried out by the natural microbiota present on the olives that survive the lye treatment process. Many organisms are involved, usually reflecting the local conditions or "Terroir" of the olives. During a typical fermentation gram-negative enterobacteria flourish in small numbers at first, but are rapidly outgrown by lactic acid bacteria species such as "Leuconostoc mesenteroides, Lactobacillus plantarum, Lactobacillus brevis" and "Pediococcus damnosus". These bacteria produce lactic acid to help lower the pH of the brine and therefore stabilize the product against unwanted pathogenic species. A diversity of yeasts then accumulate in sufficient numbers to help complete the fermentation alongside the lactic acid bacteria. Yeasts commonly mentioned include the teleomorphs "Pichia anomala, Pichia membranifaciens, Debaryomyces hansenii" and "Kluyveromyces marxianus". Once fermented, the olives are placed in fresh brine and acid corrected, to be ready for market.
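The brine strengths quoted for these preparations (8–12% NaCl) are percentages by weight, so the salt required for a batch is straightforward to compute. A minimal Python sketch, assuming percent-by-weight brine and ignoring the small volume change on dissolution:

```python
# Compute grams of salt and water for a brine of a given strength,
# assuming "8-12% NaCl" means percent salt by total weight.
def brine(total_g: float, pct_nacl: float) -> tuple[float, float]:
    salt = total_g * pct_nacl / 100.0
    water = total_g - salt
    return salt, water

# e.g. 5 kg of 10% brine for a Sevillian-style fermentation vessel
salt_g, water_g = brine(5000, 10)
print(f"{salt_g:.0f} g NaCl + {water_g:.0f} g water")  # 500 g + 4500 g
```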
Sicilian or Greek type (olives with fermentation): Applied to green, semiripe and ripe olives, they are almost identical to the Spanish type fermentation process, but the lye treatment process is skipped and the olives are placed directly in fermentation vessels full of brine (8–12% NaCl). The brine is changed on a regular basis to help remove the phenolic compounds. As the caustic treatment is avoided, lactic acid bacteria are only present in similar numbers to yeast and appear to be outdone by the abundant yeasts found on untreated olives. As very little acid is produced by the yeast fermentation, lactic, acetic, or citric acid is often added to the fermentation stage to stabilize the process.
Picholine or directly-brined type (olives with fermentation): Applied to green, semiripe, or ripe olives, they are soaked in lye typically for longer periods than Spanish style (e.g. 10–72 hours) until the solution has penetrated three-quarters of the way into the fruit. They are then washed and immediately brined and acid corrected with citric acid to achieve microbial stability. Fermentation still occurs carried out by acidogenic yeast and bacteria, but is more subdued than other methods. The brine is changed on a regular basis to help remove the phenolic compounds and a series of progressively stronger concentrations of salt are added until the product is fully stabilized and ready to be eaten.
Water-cured type (olives with fermentation): Applied to green, semiripe, or ripe olives, these are soaked in water or weak brine, and this solution is changed on a daily basis for 10–14 days. The oleuropein is naturally dissolved and leached into the water and removed during a continual soak-wash cycle. Fermentation takes place during the water treatment stage and involves a mixed yeast/bacteria ecosystem. Sometimes, the olives are lightly cracked with a hammer or a stone to trigger and speed up fermentation. Once debittered, the olives are brined to concentrations of 8–12% NaCl and acid corrected, and are then ready to eat.
Salt-cured type (olives with minor fermentation): Applied only to ripe olives, they are usually produced in Morocco, Turkey, and other eastern Mediterranean countries. Once picked, the olives are vigorously washed and packed in alternating layers with salt. The high concentrations of salt draw the moisture out of olives, dehydrating and shriveling them until they look somewhat analogous to a raisin. Once packed in salt, fermentation is minimal and only initiated by the most halophilic yeast species such as "Debaryomyces hansenii". Once cured, they are sold in their natural state without any additives. So-called oil-cured olives are cured in salt, and then soaked in oil.
California or "artificial ripening" type (olives without fermentation): Applied to green and semiripe olives, they are placed in lye and soaked. Upon their removal, they are washed in water injected with compressed air. This process is repeated several times until both oxygen and lye have soaked through to the pit. The repeated, saturated exposure to air oxidises the skin and flesh of the fruit, turning it black in an artificial process that mimics natural ripening. Once fully oxidised or "blackened", they are brined and acid corrected and are then ready for eating.
Olive wood is very hard and is prized for its durability, colour, high combustion temperature, and interesting grain patterns. Because of the commercial importance of the fruit, and the slow growth and relatively small size of the tree, olive wood and its products are relatively expensive. Common uses of the wood include: kitchen utensils, carved wooden bowls, cutting boards, fine furniture, and decorative items.
The yellow or light greenish-brown wood is often finely veined with a darker tint; being very hard and close-grained, it is valued by woodworkers.
In modern landscape design olive trees are frequently used as ornamental features for their distinctively gnarled trunks and "evergreen" silvery gray foliage.
The earliest evidence for the domestication of olives comes from the Chalcolithic period archaeological site of Teleilat el Ghassul in what is today modern Jordan. Farmers in ancient times believed that olive trees would not grow well if planted more than a certain distance from the sea; Theophrastus gives 300 stadia as the limit. Modern experience does not always confirm this, and, though showing a preference for the coast, olives have long been grown further inland in some areas with suitable climates, particularly in the southwestern Mediterranean (Iberia, northwest Africa), where winters are mild.
Olives are cultivated in many regions of the world with Mediterranean climates, such as South Africa, Chile, Peru, Australia, Oregon, and California, and in areas with temperate climates such as New Zealand. They are also grown in the Córdoba Province, Argentina, which has a temperate climate with rainy summers and dry winters.
Olive trees show a marked preference for calcareous soils, flourishing best on limestone slopes and crags, and for coastal climate conditions. They grow in any light soil, even on clay if well drained, but in rich soils they are predisposed to disease and produce poorer oil than in poorer soil (as noted by Pliny the Elder). Olives like hot weather and sunny positions without any shade, while temperatures below may injure even a mature tree. They tolerate drought well, due to their sturdy and extensive root systems. Olive trees can live for several centuries and can remain productive just as long if they are pruned correctly and regularly.
Only a handful of olive varieties can be used to cross-pollinate. 'Pendolino' olive trees are partially self-fertile, but pollenizers are needed for a large fruit crop. Other compatible olive tree pollenizers include 'Leccino' and 'Maurino'. 'Pendolino' olive trees are used extensively as pollenizers in large olive tree groves.
Olives are propagated by various methods. The preferred ways are cuttings and layers; the tree roots easily in favourable soil and throws up suckers from the stump when cut down. However, yields from trees grown from suckers or seeds are poor; they must be budded or grafted onto other specimens to do well. Branches of various thicknesses, cut into lengths and planted deeply in manured ground, soon vegetate. Shorter pieces are sometimes laid horizontally in shallow trenches and, when covered with a few centimetres of soil, rapidly throw up sucker-like shoots. In Greece, grafting the cultivated tree onto the wild tree is a common practice. In Italy, embryonic buds, which form small swellings on the stems, are carefully excised and planted under the soil surface, where they soon form a vigorous shoot.
The olive is also sometimes grown from seed. To facilitate germination, the oily pericarp is first softened by slight rotting, or soaked in hot water or in an alkaline solution.
In situations where extreme cold has damaged or killed the olive tree, the rootstock can survive and produce new shoots which in turn become new trees. In this way, olive trees can regenerate themselves. In Tuscany in 1985, a very severe frost destroyed many productive and aged olive trees and ruined many farmers' livelihoods. However, new shoots appeared in the spring and, once the dead wood was removed, became the basis for new fruit-producing trees. Thus an olive tree can live for centuries or even millennia.
Olives grow very slowly, and over many years, the trunk can attain a considerable diameter. A. P. de Candolle recorded one exceeding in girth. The trees rarely exceed in height, and are generally confined to much more limited dimensions by frequent pruning.
"Olea europaea" is very hardy: drought-, disease- and fire-resistant, it can live to a great age. Its root system is robust and capable of regenerating the tree even if the above-ground structure is destroyed. The older the olive tree, the broader and more gnarled the trunk becomes. Many olive trees in the groves around the Mediterranean are said to be hundreds of years old, while an age of 2,000 years is claimed for a number of individual trees; in some cases, this has been scientifically verified. See paragraph dealing with the topic.
The crop from old trees is sometimes enormous, but they seldom bear well two years in succession, and in many cases, a large harvest occurs every sixth or seventh season.
Where the olive is carefully cultivated, as in Languedoc and Provence, the trees are regularly pruned. The pruning preserves the flower-bearing shoots of the preceding year, while keeping the tree low enough to allow the easy gathering of the fruit.
The spaces between the trees are regularly fertilized.
Various pathologies can affect olives. The most serious pest is the olive fruit fly ("Dacus oleae" or "Bactrocera oleae"), which lays its eggs in the olive, most commonly just before it becomes ripe in the autumn. The region surrounding the puncture rots, becomes brown, and takes on a bitter taste, making the olive unfit for eating or for oil. For controlling the pest, the practice has been to spray with insecticides (organophosphates, e.g. dimethoate). Classic organic methods, such as trapping, applying the bacterium "Bacillus thuringiensis", and spraying with kaolin, have now been applied. Such methods are obligatory for organic olives.
A fungus, "Cycloconium oleaginum", can infect the trees for several successive seasons, causing great damage to plantations. A species of bacterium, "Pseudomonas savastanoi" pv. "oleae", induces tumour growth in the shoots. Certain lepidopterous caterpillars feed on the leaves and flowers.
"Xylella fastidiosa" bacteria, which can also infect citrus fruit and vines, has attacked olive trees in the Lecce province, Salento, Southern Italy causing the olive quick decline syndrome (OQDS). The main vector is "Philaenus spumarius" (meadow spittlebug).
A pest which spreads through olive trees is the black scale, a small scale insect that resembles a small black spot. Black scales attach themselves firmly to olive trees and reduce the quality of the fruit; their main predators are wasps. The curculio beetle eats the edges of leaves, leaving sawtooth damage.
Rabbits eat the bark of olive trees and can do considerable damage, especially to young trees. If the bark is removed around the entire circumference of a tree, it is likely to die. Voles and mice also do damage by eating the roots of olives.
At the northern edge of their cultivation zone, for instance in Southern France and north-central Italy, olive trees suffer occasionally from frost. Gales and long-continued rains during the gathering season also cause damage.
Since its first domestication, "O. europaea" has been spreading back to the wild from planted groves. Its original wild populations in southern Europe have been largely swamped by feral plants.
In some other parts of the world where it has been introduced, most notably South Australia, the olive has become a major woody weed that displaces native vegetation. In South Australia, its seeds are spread by the introduced red fox and by many bird species, including the European starling and the native emu, into woodlands, where they germinate and eventually form a dense canopy that prevents regeneration of native trees. As the climate of South Australia is very dry and bushfire prone, the oil-rich feral olive tree substantially increases the fire hazard of native sclerophyll woodlands.
Olives are harvested in the autumn and winter. More specifically in the Northern Hemisphere, green olives are picked from the end of September to about the middle of November. Blond olives are picked from the middle of October to the end of November, and black olives are collected from the middle of November to the end of January or early February. In southern Europe, harvesting is done for several weeks in winter, but the time varies in each country, and with the season and the cultivar.
Most olives today are harvested by shaking the boughs or the whole tree. Using olives found lying on the ground can result in poor quality oil, due to damage. Another method involves standing on a ladder and "milking" the olives into a sack tied around the harvester's waist. This method produces high quality oil. A third method uses a device called an oli-net that wraps around the tree trunk and opens to form an umbrella-like catcher from which workers collect the fruit. Another method uses an electric tool, the oliviera, that has large tongs that spin around quickly, removing fruit from the tree. Olives harvested by this method are used for oil.
Table olive varieties are more difficult to harvest, as workers must take care not to damage the fruit; baskets that hang around the worker's neck are used. In some places in Italy, Croatia, and Greece, olives are harvested by hand because the terrain is too mountainous for machines. As a result, the fruit is not bruised, which leads to a superior finished product. The method also involves sawing off branches, which is healthy for future production.
The amount of oil contained in the fruit differs greatly by cultivar; the pericarp is usually 60–70% oil. Typical yields are of oil per tree per year.
Olives are processed by curing and fermentation, or by drying, to make them edible. Lye and salt brine are used to cure olives, removing the bitter oleuropein compound. Olives are then fermented by yeasts, while the brine allows bacteria to add flavour and act as a natural preservative: by lowering the pH it suppresses other bacteria that would lead to spoilage.
Olives are one of the most extensively cultivated fruit crops in the world. In 2011, about were planted with olive trees, which is more than twice the amount of land devoted to apples, bananas, or mangoes. Only coconut trees and oil palms command more space. Cultivation area tripled from between 1960 and 1998 and reached a peak of in 2008. The 10 largest producing countries, according to the Food and Agriculture Organization, are all located in the Mediterranean region and produce 95% of the world's olives.
One hundred grams of cured green olives provide 146 calories, are a rich source of vitamin E (25% of the Daily Value, DV), and contain a large amount of sodium (104% DV); other nutrients are insignificant. Green olives are 75% water, 15% fat, 4% carbohydrates and 1% protein (table).
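The stated macronutrient breakdown is roughly consistent with the stated energy value, as a quick check with the standard Atwater factors (9 kcal per gram of fat, 4 kcal per gram of carbohydrate or protein) shows; the sketch below uses only the figures quoted above.

```python
# Sanity-check the quoted 146 kcal per 100 g of cured green olives
# using standard Atwater factors; gram figures are those given in the text.
fat_g, carb_g, protein_g = 15, 4, 1  # per 100 g
kcal = 9 * fat_g + 4 * carb_g + 4 * protein_g
print(kcal)  # 155 kcal -- close to the quoted 146, given rounding and fibre
```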
The polyphenol composition of olive fruits varies during fruit ripening and during processing by fermentation when olives are immersed whole in brine or crushed to produce oil. In raw fruit, total polyphenol contents, as measured by the Folin method, are 117 mg/100 g in black olives and 161 mg/100 g in green olives, compared to 55 and 21 mg/100 g for extra virgin and virgin olive oil, respectively. Olive fruit contains several types of polyphenols, mainly tyrosols, phenolic acids, flavonols and flavones, and for black olives, anthocyanins. The main bitter flavor of olives before curing results from oleuropein and its aglycone which total in content, respectively, 72 and 82 mg/100 g in black olives, and 56 and 59 mg/100 g in green olives.
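A short sketch makes the processing losses implied by these Folin figures explicit; the numbers are the mg per 100 g totals quoted above, and the choice of raw green olives as the baseline is illustrative only.

```python
# Total polyphenols (Folin method), mg per 100 g, as quoted in the text.
polyphenols = {
    "black olives (raw)": 117,
    "green olives (raw)": 161,
    "extra virgin olive oil": 55,
    "virgin olive oil": 21,
}

baseline = polyphenols["green olives (raw)"]
for item, mg in polyphenols.items():
    print(f"{item}: {mg} mg/100 g ({100 * mg / baseline:.0f}% of raw green olives)")
```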
During the crushing, kneading and extraction of olive fruit to obtain olive oil, oleuropein, demethyloleuropein and ligstroside are hydrolyzed by endogenous beta-glucosidases to form aldehydic aglycones.
Polyphenol content also varies with olive cultivar (Spanish Manzanillo highest) and the manner of presentation, with plain olives having higher contents than those that are pitted or stuffed.
Olive tree pollen is extremely allergenic, with an OPALS allergy scale rating of 10 out of 10. "Olea europaea" is primarily wind-pollinated, and its light, buoyant pollen is a strong trigger for asthma. One popular variety, "Swan Hill", is widely sold as an "allergy-free" olive tree; however, this variety does bloom and produce allergenic pollen. | https://en.wikipedia.org/wiki?curid=22479 |
Otho
Otho (Marcus Otho Caesar Augustus; 28 April 32 – 16 April 69) was Roman emperor for three months, from 15 January to 16 April 69. He was the second emperor of the Year of the Four Emperors.
A member of a noble Etruscan family, Otho was initially a friend and courtier of the young emperor Nero until he was effectively banished to the governorship of the remote province of Lusitania in 58 following his wife Poppaea Sabina's affair with Nero. After a period of moderate rule in the province, he allied himself with Galba, the governor of neighbouring Hispania Tarraconensis, during the revolts of 68. He accompanied Galba on his march to Rome, but revolted and murdered Galba at the start of the next year.
Inheriting the problem of the rebellion of Vitellius, commander of the army in Germania Inferior, Otho led a sizeable force which met Vitellius' army at the Battle of Bedriacum. After initial fighting resulted in 40,000 casualties and a retreat of his forces, Otho committed suicide rather than fight on, and Vitellius was proclaimed emperor.
Otho was born on 28 April 32 AD. His grandfather had been a senator, and Claudius granted Otho's father patrician status. Greenhalgh writes that "he was addicted to luxury and pleasure to a degree remarkable even in a Roman". An aged freedwoman brought him into the company of the emperor Nero. Otho married the emperor's mistress Poppaea Sabina; Nero forced Otho to divorce Poppaea so that he himself could marry her. He exiled Otho to the province Lusitania in 58 or 59 by appointing him to be its governor.
Otho proved to be capable as governor of Lusitania. Yet, he never forgave Nero for marrying Poppaea. He allied himself with Galba, governor of neighboring Hispania Tarraconensis, in the latter's rebellion against Nero in 68. Nero committed suicide later that year and Galba was proclaimed emperor by the Senate. Otho accompanied the new emperor to Rome in October 68. Before they entered the city, Galba's army fought against a legion that Nero had organized.
On 1 January 69, the day Galba took the office of consul alongside Titus Vinius, the fourth and twenty-second legions of Upper Germany refused to swear loyalty to the emperor. They toppled the statues of Galba and demanded that a new emperor be chosen. On the following day, the soldiers of Lower Germany also refused to swear their loyalty and proclaimed the governor of the province, Aulus Vitellius, as emperor. Galba tried to ensure his authority as emperor was recognized by adopting the nobleman Lucius Calpurnius Piso Licinianus as his successor, an action that gained resentment from Otho. Galba was killed by the Praetorians on 15 January, followed shortly by Vinius and Piso. Their heads were placed on poles and Otho was proclaimed emperor.
He accepted, or appeared to accept, the cognomen of Nero conferred upon him by the shouts of the populace, whom his comparative youth and the effeminacy of his appearance reminded of their lost favourite. Nero's statues were again set up, his freedmen and household officers were reinstalled (including the young castrated boy Sporus, whom Nero had taken in marriage and with whom Otho would also live intimately), and the intended completion of the Golden House was announced.
At the same time, the fears of the more sober and respectable citizens were relieved by Otho's liberal professions of his intention to govern equitably, and by his judicious clemency towards Aulus Marius Celsus, a consul-designate and devoted adherent of Galba. Otho soon realized that it is much easier to overthrow an emperor than to rule as one: according to Suetonius, Otho once remarked that "Playing the Long Pipes is hardly my trade" (i.e., undertaking something beyond one's ability).
Any further development of Otho's policy was checked once Otho had read through Galba's private correspondence and realized the extent of the revolution in Germany, where several legions had declared for Vitellius, the commander of the legions on the lower Rhine River, and were already advancing upon Italy. After a vain attempt to conciliate Vitellius by the offer of a share in the Empire, Otho, with unexpected vigor, prepared for war. From the much more remote provinces, which had quietly accepted his accession, little help was to be expected, but the legions of Dalmatia, Pannonia and Moesia were eager in his cause, the Praetorian cohorts were a formidable force and an efficient fleet gave him the mastery of the Italian seas.
The fleet was at once dispatched to secure Liguria, and on 14 March Otho, undismayed by omens and prophecies, started northwards at the head of his troops in the hopes of preventing the entry of Vitellius' troops into Italy. But for this he was too late, and all that could be done was to throw troops into Placentia and hold the line of the Po. Otho's advanced guard successfully defended Placentia against Aulus Caecina Alienus, and compelled that general to fall back on Cremona, but the arrival of Fabius Valens altered the aspect of affairs.
Vitellius' commanders now resolved to bring on a decisive battle, the Battle of Bedriacum, and their designs were assisted by the divided and irresolute counsels which prevailed in Otho's camp. The more experienced officers urged the importance of avoiding a battle until at least the legions from Dalmatia had arrived. However, the rashness of the emperor's brother Titianus and of Proculus, prefect of the Praetorian Guards, added to Otho's feverish impatience, overruled all opposition, and an immediate advance was decided upon.
Otho remained behind with a considerable reserve force at Brixellum on the southern bank of the Po. When this decision was taken, Otho's army already had crossed the Po and were encamped at Bedriacum (or Betriacum), a small village on the "Via Postumia", and on the route by which the legions from Dalmatia would naturally arrive.
Leaving a strong detachment to hold the camp at Bedriacum, the Othonian forces advanced along the "Via Postumia" in the direction of Cremona. At a short distance from that city they unexpectedly encountered the Vitellian troops. The Othonians, though taken at a disadvantage, fought desperately, but finally were forced to fall back in disorder upon their camp at Bedriacum. There on the next day the victorious Vitellians followed them, but only to come to terms at once with their disheartened enemy, and to be welcomed into the camp as friends.
More unexpected still was the effect produced at Brixellum by the news of the battle. Otho was still in command of a formidable force: the Dalmatian legions had reached Aquileia, and the spirit of his soldiers and their officers was unbroken. Yet he was resolved to accept the verdict of the battle that his own impatience had hastened. In a dignified speech, he bade farewell to those about him, declaring: "It is far more just to perish one for all, than many for one", and then retired to rest soundly for some hours. Early in the morning he stabbed himself in the heart with a dagger, which he had concealed under his pillow, and died as his attendants entered the tent.
Otho's ashes were placed within a modest monument. He had reigned only three months. His funeral was celebrated at once as he had wished. A plain tomb was erected in his honour at Brixellum, with the simple inscription "Diis Manibus Marci Othonis". His 91-day reign would be the shortest until that of Pertinax, whose reign lasted 86 days in 193 during the tumultuous Year of the Five Emperors.
It has been thought that Otho committed suicide in order to steer his country away from the path to civil war. Just as he had come to power, many Romans learned to respect Otho in his death. Few could believe that a renowned former companion of Nero had chosen such an honourable end. Tacitus wrote that some of the soldiers committed suicide beside his funeral pyre "because they loved their emperor and wished to share his glory".
Writing during the reign of the Emperor Domitian (AD 81–96), the Roman poet Martial expressed his admiration for Otho's choice to spare the empire from civil war through sacrificing himself:
Suetonius, in "The Lives of the Caesars", comments on Otho's appearance and personal hygiene.
Juvenal, in a passage in the Satire II ridiculing male homosexuality, specifically mentions Otho as being vain and effeminate, looking at himself in the mirror before going into battle, and "plaster[ing] his face with dough" in order to look good. | https://en.wikipedia.org/wiki?curid=22481 |
Orsini family
The Orsini family is an Italian noble family that was one of the most influential princely families in medieval Italy and Renaissance Rome. Members of the Orsini family include three popes: Celestine III (1191–1198), Nicholas III (1277–1280), and Benedict XIII (1724–1730). In addition, the family membership includes 34 cardinals, numerous "condottieri", and other significant political and religious figures.
According to their family legend, the Orsini are descended from the Julio-Claudian dynasty of ancient Rome. The Orsini carried on a political feud with the Colonna family for centuries in Rome, until it was stopped by Papal Bull in 1511. In 1571, the heads of both families married nieces of Pope Sixtus V.
The Orsini were related to the Bobone family existing in Rome in the 11th century. The first members used the surname of Bobone-Orsini. The first known family member is one Bobone, in the early 11th century, father of Pietro, in turn father of Giacinto Bobone (1110–1198), who in 1191 became pope as Celestine III. One of the first great nepotist popes, he made two of his nephews cardinals and allowed his cousin Giovanni Gaetano (Giangaetano, died 1232) to buy the fiefs of Vicovaro, Licenza, Roccagiovine and Nettuno, which formed the nucleus of the future territorial power of the family.
The Bobone surname was lost with his children, who were called "de domo filiorum Ursi". Two of them, Napoleone and Matteo Rosso the Great (1178–1246), considerably increased the prestige of the family. The former was the founder of the first southern line, which disappeared with Camillo Pardo in 1553. He obtained the city of Manoppello, later a countship, and was "gonfaloniere" of the Papal States. Matteo Rosso, called the Great, was the effective lord of Rome from 1241, when he defeated the Imperial troops, to 1243, holding the title of Senator. Two of his sons, including Napoleone, were also Senators. Matteo ousted the traditional rivals, the Colonna family, from Rome and extended the Orsini territories southwards up to Avellino and northwards to Pitigliano. During his life, the family aligned firmly with the Guelph party. He had some ten sons, who divided the fiefs after his death: Gentile (died 1246) originated the Pitigliano line and the second southern line, Rinaldo that of Monterotondo, Napoleone (died 1267) that of Bracciano, and another Matteo Rosso that of Montegiordano, from the name of the district in Rome housing the family's fortress. The most distinguished of his sons was Giovanni Gaetano (died 1280): elected pope as Nicholas III, he named his nephew Bertoldo (d. 1289) as count of Romagna, and had two nephews and a brother created cardinals.
The rise of the Orsini did not stop after Nicholas' death. Bertoldo's son, Gentile II (1250–1318), was twice Senator of Rome, podestà of Viterbo and, from 1314, "Gran Giustiziere" ("Great Justicer") of the Kingdom of Naples. He married Clarice Ruffo, daughter of the counts of Catanzaro, forming an alliance with the most powerful Calabrian dynasty. His son Romano (1268–1327), called Romanello, was Royal Vicar of Rome in 1326, and inherited the countship of Soana through his marriage with Anastasia de Montfort, Countess of Nola. Romano's stance was markedly Guelph. After his death, his two sons divided his fiefs, forming the Pitigliano and the second southern line.
Roberto (1295–1345), Gentile II's grandson, married Sibilla del Balzo, daughter of the Great Senechal of the Kingdom of Naples. Among his sons, Giacomo (died 13 August 1379; Dean of Salisbury, Archdeacon of Leicester and Archdeacon of Durham) was created cardinal by Gregory XI in 1371, while Nicola (August 27, 1331 – February 14, 1399) obtained the counties of Ariano and Celano. The latter was also Senator of Rome and enlarged the family territories in Lazio and Tuscany.
His second son, Raimondello Orsini del Balzo, supported Charles III's "coup d'état" in Naples against Queen Joan I. Under King Ladislaus he was among the few Neapolitan feudatories who were able to maintain their territorial power after the royal war against them. However, at his death in 1406 the southern Orsini fiefs were confiscated. Relationships with the royal family remained cold under Joan II; however, when Raimondello's son Giannantonio (1386–1453) sent his troops to help her against the usurpation attempt of James of Bourbon, he received in exchange the Principality of Taranto.
The links with the court increased further under Sergianni Caracciolo, Joan's lover and Great Senechal. A younger brother of Giannantonio married one of Sergianni's daughters. However, the Orsini changed sides when Alfonso V of Aragon started his conquest of the Kingdom of Naples. Giannantonio was awarded the duchy of Bari, the position of Great Constable and an appanage of 100,000 "ducati". Giannantonio remained faithful to Alfonso's heir, Ferdinand I, but was killed during a revolt of nobles. As he died without legitimate sons, much of his possessions were absorbed into the Royal Chamber.
This line was initiated by Guido Orsini, second son of Romano, who inherited the county of Soana, on the western side of Lake Bolsena in southern Tuscany. He and his descendants ruled over the fiefs of Soana, Pitigliano and Nola, but in the early 15th century wars against the Republic of Siena and the Colonnas caused the loss of several territories. Bertoldo (died 1417) managed to keep only Pitigliano, while his grandson Orso (died July 5, 1479) was count of Nola and fought as a condottiere under the Duke of Milan and the Republic of Venice. Later he entered the service of Ferdinand I of Naples and, not having taken part in the Barons' conspiracy, was rewarded with the fiefs of Ascoli and Atripalda. He took part in the Aragonese campaign in Tuscany and was killed at the siege of Viterbo.
The most outstanding member of the Pitigliano line was Niccolò, one of the major condottieri of the time. His son Ludovico (died January 27, 1534) and his nephew Enrico (died 1528) participated in the Italian Wars in the service of both France and Spain, often changing sides with the typical ease of the Italian military leaders of the time. Two of Ludovico's daughters married notable figures: Geronima married Pier Luigi Farnese, illegitimate son of Pope Paul III, and Marzia married Gian Giacomo Medici of Marignano, an important general of the Spanish army.
The line started to decay after the loss of Nola by Ludovico, who was also forced to accept the Sienese suzerainty over Pitigliano. Under his son Giovan Francesco (died May 8, 1567) the county entered the orbit of the Grand Duke of Tuscany. Later, the attempt of Alessandro (died February 9, 1604) to obtain the title of Monterotondo was thwarted by Pope Gregory XIII. His son Giannantonio (March 25, 1569 – 1613) sold Pitigliano to Tuscany, in exchange for the marquisate of Monte San Savino.
The line became extinct in 1640 with the death of Alessandro.
This line was founded by Rinaldo, third son of Matteo Rosso the Great. His son, Napoleone, became a cardinal in 1288 and remained a prominent member of the Curia until his death at Avignon in 1342.
This branch of the family was often involved in the baronial struggles of late medieval Rome, at least three members of the family being elected as Senators, while others fought as condottieri. Francesco in 1370 took part in the war of Florence against the Visconti of Milan. Orso (died July 24, 1424) died fighting for the king of Naples in the Battle of Zagonara against the Milanese. His sons Giacomo (died 1482) and Lorenzo (1452) battled for the Papal States, Naples and Florence. One of Giacomo's daughters, Clarice (1453–July 30, 1488), became Lorenzo de' Medici's wife. Franciotto Orsini was created cardinal by Leo X in 1517.
The most important member of the Monterotondo Orsini was Giovanni Battista Orsini, who became cardinal under Sixtus IV (1483). He was probably among the promoters of the failed plot against Cesare Borgia in 1502, and was assassinated on February 22, 1503 in retaliation, together with other members of the family: Giulio survived captivity under Cesare, while Paolo and Francesco, 4th Duke of Gravina, were strangled to death on January 18, 1503.
The line decayed from the late 16th century, when several members were assassinated or lost their lands for various reasons. Its last representatives Enrico (died September 12, 1643) and Francesco (1592 - September 21, 1650) sold Monterotondo to the Barberini in 1641.
Napoleone, another son of Matteo Rosso the Great, received Bracciano, Nerola and other lands in what is now northern Lazio. In 1259 he was Senator of Rome. Thanks to the strategic positions of their fiefs, and to their famous castle built at Bracciano in 1426, they were the most powerful Orsini line in Lazio. Count Carlo (died after 1485), son of another Napoleone (died October 3, 1480), was Papal Gonfaloniere. His marriage to Francesca Orsini of Monterotondo produced Gentile Virginio Orsini, one of the most prominent figures of Italian politics in the late 15th century. After Carlo's death, Gentile Virginio enlarged the family's holdings with lands inherited through his wife, another Orsini, from Salerno, and above all he was among the favourites of Ferdinand I of Naples, who appointed him Great Constable of Naples. Together with his cousin, the Cardinal Giovanni Battista, he was among the fiercest opponents of popes Innocent VIII and Alexander VI. In 1492 Gentile Virginio bought the county of Anguillara from Franceschetto Cybo.
During Charles VIII of France's descent into Italy, he managed to keep Bracciano. Ferdinand II had his fiefs confiscated and imprisoned him in Castel dell'Ovo, where he was poisoned in 1497. The family recovered from this setback under the more friendly Medici popes of the early 16th century. His son Giangiordano was Prince Assistant to the Papal Throne. His nephew Virginio was a famous admiral for the Papal States and France, but in 1539 he had his fiefs confiscated on the charge of treason.
Paolo Giordano was created first Duke of Bracciano in 1560. The son of Girolamo Orsini and Francesca Sforza, he was the grandson, on his father's side, of Felice della Rovere (illegitimate daughter of Pope Julius II) and Gian Giordano Orsini and, on his mother's side, of Count Bosio Sforza and Costanza Farnese, an illegitimate daughter of Pope Paul III. An accomplished condottiero, he was, however, also a ruthless figure who had his wife Isabella de' Medici murdered. For this and other homicides he had to flee to northern Italy. He was succeeded by Virginio, whose heir Paolo Giordano II married the princess of Piombino and was created Prince of the Holy Roman Empire. His brother Alessandro was a cardinal and Papal legate, and another brother, Ferdinando (died March 4, 1660), acquired the assets of the other line of San Gemini. In the 17th century the Dukes of Bracciano moved their residence to Rome. This, along with a general economic decline, damaged the dukedom, and the last Duke and Prince, Flavio (March 4, 1620 – April 5, 1698), was forced by huge debts to sell it to Livio Odescalchi.
The line of Gravina, named after the eponymous city in Apulia, is the only surviving line of the Orsini. It descends from Francesco (died 1456), a son of Count Carlo of Bracciano. Most of his fiefs were located in northern Lazio, but he entered the Neapolitan orbit when, in 1418, he was called by Sergianni Caracciolo to fight against the Angevin troops, whom he defeated. By marriage he obtained the title of Count of Gravina, and he was made Duke of Gravina by King Alfonso, a title definitively assigned to his son Giacomo (died 1472), to which were added the counties of Conversano, Campagna and Copertino. Two of Francesco's sons, Marino (died 1471) and Giovanni Battista (died June 8, 1476), were respectively archbishop of Taranto and Grand Master of the Knights of Rhodes.
The fourth duke, Francesco, took part in a conspiracy against Cesare Borgia along with his brothers Giulio and Paolo; the plot was discovered, and Francesco and Paolo were strangled on January 18, 1503. One of Francesco's nephews, Flavio Orsini, was created cardinal in 1565. The fifth duke, Ferdinando (died December 6, 1549), had all his fiefs confiscated by the Spaniards, but regained them after a payment of 40,000 scudi.
After the heirless death of Duke Michele Antonio (January 26, 1627), his lands passed to his cousin Pietro Orsini, count of Muro Lucano (died 1641). The latter's nephew Pier Francesco, who had renounced the succession in favour of his brother Domenico to become a Dominican, was later elected pope with the name of Benedict XIII.
His successor raised Benedict XIII's nephew, Prince Beroaldo Orsini, to the dignity of Prince Assistant to the Papal Throne (a title held until 1958), after Emperor Charles VI had already, in 1724, made him a prince of the Holy Roman Empire. The last cardinal from the family was Domenico.
The family moved to Rome in the 18th century, where Duke Domenico (November 23, 1790 – April 28, 1874) married Maria Luisa Torlonia in 1823. In 1850 he was Minister of War and Lieutenant General of the Papal Armies, and also Senator of Rome.
The remaining princely family is represented by Prince Domenico Napoleone Orsini, Duke of Gravina (b. 1948). As he has no sons or other male-line descendants, the heir to the dukedom of Gravina is his unmarried brother Don Benedetto Orsini (b. 1956), followed by his cousin Prince Raimondo Orsini d'Aragona (b. 1931), who is married to Princess Khetevan Bagration-Mukhransky.
Apart from the Bracciano castle, a number of other notable buildings and structures are associated with the Orsini.
The Orsini family is briefly mentioned in Boccaccio's "The Decameron" (fifth day, third story), in which soldiers of a rival family attack a fictional character named Pietro after he becomes lost in the woods about eight miles from Rome; Boccaccio describes the soldiers as acting to spite the Orsini. The text also mentions a castle named Campo de' Fiori. | https://en.wikipedia.org/wiki?curid=22482 |
Optics
Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.
Most optical phenomena can be accounted for by using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation.
Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems.
Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry). Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics.
Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 2000 BC from Crete (Archaeological Museum of Heraklion, Greece). Lenses from Rhodes date around 700 BC, as do Assyrian lenses such as the Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses. These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word "optics" comes from the ancient Greek word ὀπτική ("optikē"), meaning "appearance, look".
Greek philosophy on optics broke down into two opposing theories on how vision worked, the intromission theory and the emission theory. The intromission approach saw vision as coming from objects casting off copies of themselves (called eidola) that were captured by the eye. With many proponents including Democritus, Epicurus, Aristotle and their followers, this theory bears some resemblance to modern theories of what vision really is, but it remained only speculation lacking any experimental foundation.
Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in "Timaeus". Some hundred years later, Euclid (4th–3rd century BC) wrote a treatise entitled "Optics" where he linked vision to geometry, creating "geometrical optics". He based his work on Plato's emission theory wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Euclid stated the principle of shortest trajectory of light, and considered multiple reflections on flat and spherical mirrors.
Ptolemy, in his treatise "Optics", held an extramission-intromission theory of vision: the rays (or flux) from the eye formed a cone, the vertex being within the eye, and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarized much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. Plutarch (1st–2nd century AD) described multiple reflections on spherical mirrors and discussed the creation of magnified and reduced images, both real and imaginary, including the case of chirality of the images.
During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi (c. 801–873) who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena. In 984, the Persian mathematician Ibn Sahl wrote the treatise "On burning mirrors and lenses", correctly describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for lenses and curved mirrors. In the early 11th century, Alhazen (Ibn al-Haytham) wrote the "Book of Optics" ("Kitab al-manazir") in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to correctly explain how the eye captured the rays. Alhazen's work was largely ignored in the Arabic world but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo making it a standard text on optics in Europe for the next 400 years.
In the 13th century in medieval Europe, English bishop Robert Grosseteste wrote on a wide range of scientific topics, and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing it on the works of Aristotle and Platonism. Grosseteste's most famous disciple, Roger Bacon, wrote works citing a wide range of recently translated optical and philosophical works, including those of Alhazen, Aristotle, Avicenna, Averroes, Euclid, al-Kindi, Ptolemy, Tideus, and Constantine the African. Bacon was able to use parts of glass spheres as magnifying glasses to demonstrate that light reflects from objects rather than being released from them.
The first wearable eyeglasses were invented in Italy around 1286.
This was the start of the optical industry of grinding and polishing lenses for these "spectacles", first in Venice and Florence in the thirteenth century, and later in the spectacle making centres in both the Netherlands and Germany. Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses rather than using the rudimentary optical theory of the day (theory which for the most part could not even adequately explain how spectacles worked). This practical development, mastery, and experimentation with lenses led directly to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle making centres in the Netherlands.
In the early 17th century, Johannes Kepler expanded on geometric optics in his writings, covering lenses, reflection by flat and curved mirrors, the principles of pinhole cameras, the inverse-square law governing the intensity of light, and the optical explanations of astronomical phenomena such as lunar and solar eclipses and astronomical parallax. He was also able to correctly deduce the role of the retina as the actual organ that recorded images, finally being able to scientifically quantify the effects of different types of lenses that spectacle makers had been observing over the previous 300 years. After the invention of the telescope, Kepler set out the theoretical basis of how telescopes worked and described an improved version, known as the "Keplerian telescope", using two convex lenses to produce higher magnification.
Optical theory progressed in the mid-17th century with treatises written by philosopher René Descartes, which explained a variety of optical phenomena including reflection and refraction by assuming that light was emitted by objects which produced it. This differed substantively from the ancient Greek emission theory. In the late 1660s and early 1670s, Isaac Newton expanded Descartes' ideas into a corpuscle theory of light, famously determining that white light was a mix of colours which can be separated into its component parts with a prism. In 1690, Christiaan Huygens proposed a wave theory for light based on suggestions that had been made by Robert Hooke in 1664. Hooke himself publicly criticised Newton's theories of light and the feud between the two lasted until Hooke's death. In 1704, Newton published "Opticks" and, at the time, partly because of his success in other areas of physics, he was generally considered to be the victor in the debate over the nature of light.
Newtonian optics was generally accepted until the early 19th century when Thomas Young and Augustin-Jean Fresnel conducted experiments on the interference of light that firmly established light's wave nature. Young's famous double slit experiment showed that light followed the law of superposition, which is a wave-like property not predicted by Newton's corpuscle theory. This work led to a theory of diffraction for light and opened an entire area of study in physical optics. Wave optics was successfully unified with electromagnetic theory by James Clerk Maxwell in the 1860s.
The next development in optical theory came in 1900 when Max Planck correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called "quanta". In 1905, Albert Einstein published the theory of the photoelectric effect that firmly established the quantization of light itself. In 1913, Niels Bohr showed that atoms could only emit discrete amounts of energy, thus explaining the discrete lines seen in emission and absorption spectra. The understanding of the interaction between light and matter that followed from these developments not only formed the basis of quantum optics but also was crucial for the development of quantum mechanics as a whole. The ultimate culmination, the theory of quantum electrodynamics, explains all optics and electromagnetic processes in general as the result of the exchange of real and virtual photons. Quantum optics gained practical importance with the inventions of the maser in 1953 and of the laser in 1960.
Following the work of Paul Dirac in quantum field theory, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light.
Classical optics is divided into two main branches: geometrical (or ray) optics and physical (or wave) optics. In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave.
Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled.
"Geometrical optics", or "ray optics", describes the propagation of light in terms of "rays" which travel in straight lines, and whose paths are governed by the laws of reflection and refraction at interfaces between different media. These laws were discovered empirically as far back as 984 AD and have been used in the design of optical components and instruments from then until the present day. They can be summarised as follows:
When a ray of light hits the boundary between two transparent materials, it is divided into a reflected and a refracted ray.
The sine of the angle of incidence divided by the sine of the angle of refraction is a constant:

$$\frac{\sin\theta_1}{\sin\theta_2} = n,$$

where $n$ is a constant for any two materials and a given colour of light. If the first material is air or vacuum, $n$ is the refractive index of the second material.
The laws of reflection and refraction can be derived from Fermat's principle which states that "the path taken between two points by a ray of light is the path that can be traversed in the least time."
Geometric optics is often simplified by making the paraxial approximation, or "small angle approximation". The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and "paraxial ray tracing", which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications.
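As a sketch of how the paraxial approximation is used in practice, the following Python fragment builds the standard 2×2 ray-transfer ("ABCD") matrices for free space and a thin lens and traces a single ray; the focal length and distances are invented for illustration.

```python
import numpy as np

# Paraxial (ABCD) ray-transfer matrices: a ray is represented by the
# column vector (y, theta) -- its height and angle relative to the axis.
# The matrix forms are standard Gaussian optics; the numeric values
# below are made up for the example.

def free_space(d):
    """Propagation over a distance d through a uniform medium."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Refraction by a thin lens of focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Trace a ray entering parallel to the axis at height 5 mm through a
# 100 mm thin lens, then 100 mm of free space: the ray should cross
# the axis at the rear focal point (height ~ 0).
ray_in = np.array([5.0, 0.0])                    # (height mm, angle rad)
system = free_space(100.0) @ thin_lens(100.0)    # rightmost acts first
print(system @ ray_in)                           # ~[0.0, -0.05]
```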
Reflections can be divided into two types: specular reflection and diffuse reflection. Specular reflection describes the gloss of surfaces such as mirrors, which reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. Diffuse reflection describes non-glossy materials, such as paper or rock. The reflections from these surfaces can only be described statistically, with the exact distribution of the reflected light depending on the microscopic structure of the material. Many diffuse reflectors are described or can be approximated by Lambert's cosine law, which describes surfaces that have equal luminance when viewed from any angle. Glossy surfaces can give both specular and diffuse reflection.
In specular reflection, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays and the normal lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. This is known as the Law of Reflection.
For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. The law also implies that mirror images are parity inverted, which we perceive as a left-right inversion. Images formed from reflection in two (or any even number of) mirrors are not parity inverted. Corner reflectors produce reflected rays that travel back in the direction from which the incident rays came. This is called retroreflection.
Mirrors with curved surfaces can be modelled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the magnification can be negative, indicating that the image is inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen.
Refraction occurs when light travels through an area of space that has a changing index of refraction; this principle allows for lenses and the focusing of light. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction $n_1$ and another medium with index of refraction $n_2$. In such situations, Snell's Law describes the resulting deflection of the light ray:

$$n_1\sin\theta_1 = n_2\sin\theta_2,$$

where $\theta_1$ and $\theta_2$ are the angles between the normal (to the interface) and the incident and refracted waves, respectively.
The index of refraction of a medium is related to the speed, $v$, of light in that medium by

$$n = \frac{c}{v},$$

where $c$ is the speed of light in vacuum.
Snell's Law can be used to predict the deflection of light rays as they pass through linear media as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. In most materials, the index of refraction varies with the frequency of the light. Taking this into account, Snell's Law can be used to predict how a prism will disperse light into a spectrum. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton.
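The deflection predicted by Snell's law is straightforward to compute. The sketch below is a minimal illustration assuming an air-to-glass interface with representative indices (1.000 and 1.52 for a typical crown glass); the function name is ours.

```python
import math

# Snell's law: n1*sin(theta1) = n2*sin(theta2).

def refraction_angle(n1, n2, theta1_deg):
    """Angle of the refracted ray, in degrees, for a given incidence angle."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Light passing from air into glass at 30 degrees bends toward the normal:
print(refraction_angle(1.000, 1.52, 30.0))   # ~19.2 degrees
```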
Some media have an index of refraction which varies gradually with position and, therefore, light rays in the medium are curved. This effect is responsible for mirages seen on hot days: a change in index of refraction air with height causes light rays to bend, creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Optical materials with varying index of refraction are called gradient-index (GRIN) materials. Such materials are used to make gradient-index optics.
For light rays travelling from a material with a high index of refraction to a material with a low index of refraction, Snell's law predicts that there is no $\theta_2$ when $\theta_1$ is large: beyond a critical angle of incidence, no transmission occurs and all the light is reflected. This phenomenon is called total internal reflection and allows for fibre optics technology. As light travels down an optical fibre, it undergoes total internal reflection, allowing essentially no light to be lost over the length of the cable.
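A minimal sketch of the critical-angle calculation implied here, using representative (illustrative, not authoritative) core and cladding indices for an optical fibre:

```python
import math

# Critical angle for total internal reflection: theta_c = arcsin(n2/n1),
# valid when n1 > n2. Indices ~1.47 (core) and ~1.44 (cladding) are
# typical illustrative values for a silica fibre.

def critical_angle(n_core, n_cladding):
    return math.degrees(math.asin(n_cladding / n_core))

print(critical_angle(1.47, 1.44))
# ~78.4 degrees: rays striking the core-cladding boundary at angles
# beyond this (measured from the normal) are totally internally reflected.
```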
A device which produces converging or diverging light rays due to refraction is known as a "lens". Lenses are characterized by their focal length: a converging lens has positive focal length, while a diverging lens has negative focal length. Smaller focal length indicates that the lens has a stronger converging or diverging effect. The focal length of a simple lens in air is given by the lensmaker's equation.
Ray tracing can be used to show how images are formed by a lens. For a thin lens in air, the location of the image is given by the simple equation

$$\frac{1}{S_1} + \frac{1}{S_2} = \frac{1}{f},$$

where $S_1$ is the distance from the object to the lens, $S_2$ is the distance from the lens to the image, and $f$ is the focal length of the lens. In the sign convention used here, the object and image distances are positive if the object and image are on opposite sides of the lens.
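The thin-lens equation can be rearranged to give the image distance and magnification directly. The sketch below assumes the sign convention stated above; the object distance and focal length are invented for illustration.

```python
# Thin-lens equation: 1/S1 + 1/S2 = 1/f, solved for the image distance
# S2 given the object distance S1 and focal length f.

def image_distance(s1, f):
    return 1.0 / (1.0 / f - 1.0 / s1)

def magnification(s1, s2):
    return -s2 / s1   # negative value: the real image is inverted

s1, f = 300.0, 100.0          # object 300 mm from a 100 mm lens
s2 = image_distance(s1, f)    # 150 mm: a real image beyond the focal point
print(s2, magnification(s1, s2))   # 150.0, -0.5
```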
Incoming parallel rays are focused by a converging lens onto a spot one focal length from the lens, on the far side of the lens. This is called the rear focal point of the lens. Rays from an object at finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens.
With diverging lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at a spot one focal length in front of the lens. This is the lens's front focal point. Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal point, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. As with mirrors, upright images produced by a single lens are virtual, while inverted images are real.
Lenses suffer from aberrations that distort images. "Monochromatic aberrations" occur because the geometry of the lens does not perfectly direct rays from each object point to a single point on the image, while chromatic aberration occurs because the index of refraction of the lens varies with the wavelength of the light.
In physical optics, light is considered to propagate as a wave. This model predicts phenomena such as interference and diffraction, which are not explained by geometric optics. The speed of light waves in air is approximately 3.0×10⁸ m/s (exactly 299,792,458 m/s in vacuum). The wavelength of visible light waves varies between 400 and 700 nm, but the term "light" is also often applied to infrared (0.7–300 μm) and ultraviolet radiation (10–400 nm).
The wave model can be used to make predictions about how an optical system will behave without requiring an explanation of what is "waving" in what medium. Until the middle of the 19th century, most physicists believed in an "ethereal" medium in which the light disturbance propagated. The existence of electromagnetic waves was predicted in 1865 by Maxwell's equations. These waves propagate at the speed of light and have varying electric and magnetic fields which are orthogonal to one another, and also to the direction of propagation of the waves. Light waves are now generally treated as electromagnetic waves except when quantum mechanical effects have to be considered.
Many simplified approximations are available for analysing and designing optical systems. Most of these use a single scalar quantity to represent the electric field of the light wave, rather than using a vector model with orthogonal electric and magnetic vectors.
The Huygens–Fresnel equation is one such model. This was derived empirically by Fresnel in 1815, based on Huygens' hypothesis that each point on a wavefront generates a secondary spherical wavefront, which Fresnel combined with the principle of superposition of waves. The Kirchhoff diffraction equation, which is derived using Maxwell's equations, puts the Huygens-Fresnel equation on a firmer physical foundation. Examples of the application of Huygens–Fresnel principle can be found in the articles on diffraction and Fraunhofer diffraction.
More rigorous models, involving the modelling of both electric and magnetic fields of the light wave, are required when dealing with materials whose electric and magnetic properties affect the interaction of light with the material. For instance, the behaviour of a light wave interacting with a metal surface is quite different from what happens when it interacts with a dielectric material. A vector model must also be used to model polarised light.
Numerical modeling techniques such as the finite element method, the boundary element method and the transmission-line matrix method can be used to model the propagation of light in systems which cannot be solved analytically. Such models are computationally demanding and are normally only used to solve small-scale problems that require accuracy beyond that which can be achieved with analytical solutions.
All of the results from geometrical optics can be recovered using the techniques of Fourier optics which apply many of the same mathematical and analytical techniques used in acoustic engineering and signal processing.
Gaussian beam propagation is a simple paraxial physical optics model for the propagation of coherent radiation such as laser beams. This technique partially accounts for diffraction, allowing accurate calculations of the rate at which a laser beam expands with distance, and the minimum size to which the beam can be focused. Gaussian beam propagation thus bridges the gap between geometric and physical optics.
In the absence of nonlinear effects, the superposition principle can be used to predict the shape of interacting waveforms through the simple addition of the disturbances. This interaction of waves to produce a resulting pattern is generally termed "interference" and can result in a variety of outcomes. If two waves of the same wavelength and frequency are "in phase", both the wave crests and wave troughs align. This results in constructive interference and an increase in the amplitude of the wave, which for light is associated with a brightening of the waveform in that location. Alternatively, if the two waves of the same wavelength and frequency are out of phase, then the wave crests will align with wave troughs and vice versa. This results in destructive interference and a decrease in the amplitude of the wave, which for light is associated with a dimming of the waveform at that location. See below for an illustration of this effect.
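A minimal numerical sketch of this superposition: adding two equal sinusoids in phase doubles the amplitude, while a 180° phase difference cancels them. The 5 Hz frequency is arbitrary.

```python
import numpy as np

# Superposition of two equal-amplitude waves of the same frequency,
# differing only in phase.

t = np.linspace(0.0, 1.0, 1000)
wave = np.sin(2 * np.pi * 5 * t)
in_phase = wave + np.sin(2 * np.pi * 5 * t)              # phase difference 0
out_of_phase = wave + np.sin(2 * np.pi * 5 * t + np.pi)  # difference of pi

print(np.max(np.abs(in_phase)))      # ~2.0: constructive interference
print(np.max(np.abs(out_of_phase)))  # ~0.0: destructive interference
```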
Since the Huygens–Fresnel principle states that every point of a wavefront is associated with the production of a new disturbance, it is possible for a wavefront to interfere with itself constructively or destructively at different locations producing bright and dark fringes in regular and predictable patterns. Interferometry is the science of measuring these patterns, usually as a means of making precise determinations of distances or angular resolutions. The Michelson interferometer was a famous instrument which used interference effects to accurately measure the speed of light.
The appearance of thin films and coatings is directly affected by interference effects. Antireflective coatings use destructive interference to reduce the reflectivity of the surfaces they coat, and can be used to minimise glare and unwanted reflections. The simplest case is a single layer with thickness one-fourth the wavelength of incident light. The reflected wave from the top of the film and the reflected wave from the film/material interface are then exactly 180° out of phase, causing destructive interference. The waves are only exactly out of phase for one wavelength, which would typically be chosen to be near the centre of the visible spectrum, around 550 nm. More complex designs using multiple layers can achieve low reflectivity over a broad band, or extremely low reflectivity at a single wavelength.
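A one-line calculation of the quarter-wave thickness described above. The 550 nm design wavelength comes from the text; the film index of 1.38 (typical of magnesium fluoride coatings) is a representative assumption, not a prescription.

```python
# Quarter-wave antireflective coating: the film thickness is a quarter
# of the wavelength *inside the film*, t = lambda0 / (4 * n_film).

def quarter_wave_thickness(wavelength_nm, n_film):
    return wavelength_nm / (4.0 * n_film)

print(quarter_wave_thickness(550.0, 1.38))   # ~99.6 nm of coating
```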
Constructive interference in thin films can create strong reflection of light in a range of wavelengths, which can be narrow or broad depending on the design of the coating. These films are used to make dielectric mirrors, interference filters, heat reflectors, and filters for colour separation in colour television cameras. This interference effect is also what causes the colourful rainbow patterns seen in oil slicks.
Diffraction is the process by which light interference is most commonly observed. The effect was first described in 1665 by Francesco Maria Grimaldi, who also coined the term from the Latin "diffringere", 'to break into pieces'. Later that century, Robert Hooke and Isaac Newton also described phenomena now known to be diffraction in Newton's rings while James Gregory recorded his observations of diffraction patterns from bird feathers.
The first physical optics model of diffraction that relied on the Huygens–Fresnel principle was developed in 1803 by Thomas Young in his experiments with the interference patterns of two closely spaced slits. Young showed that his results could only be explained if the two slits acted as two unique sources of waves rather than corpuscles. In 1815 and 1818, Augustin-Jean Fresnel firmly established the mathematics of how wave interference can account for diffraction.
The simplest physical models of diffraction use equations that describe the angular separation of light and dark fringes due to light of a particular wavelength (λ). In general, the equation takes the form

$$m\lambda = d\sin\theta,$$

where $d$ is the separation between two wavefront sources (in the case of Young's experiments, it was two slits), $\theta$ is the angular separation between the central fringe and the $m$th-order fringe, and the central maximum corresponds to $m = 0$.
This equation is modified slightly to take into account a variety of situations such as diffraction through a single gap, diffraction through multiple slits, or diffraction through a diffraction grating that contains a large number of slits at equal spacing. More complicated models of diffraction require working with the mathematics of Fresnel or Fraunhofer diffraction.
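As a sketch of the unmodified two-source equation above, the following computes the first few fringe angles for illustrative values (a 650 nm source and slits 10 μm apart):

```python
import math

# Fringe angles from d*sin(theta) = m*lambda for a double slit.

def fringe_angle_deg(d, wavelength, m):
    s = m * wavelength / d
    if abs(s) > 1.0:
        return None          # that diffraction order does not exist
    return math.degrees(math.asin(s))

d, lam = 10e-6, 650e-9       # slit separation and wavelength, in metres
for m in range(4):
    print(m, fringe_angle_deg(d, lam, m))
# m=0: 0.0 (central maximum); m=1: ~3.73; m=2: ~7.47; m=3: ~11.25 degrees
```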
X-ray diffraction makes use of the fact that atoms in a crystal have regular spacing at distances that are on the order of one angstrom. To see diffraction patterns, X-rays with similar wavelengths to that spacing are passed through the crystal. Since crystals are three-dimensional objects rather than two-dimensional gratings, the associated diffraction pattern varies in two directions according to Bragg reflection, with the associated bright spots occurring in unique patterns and $d$ being twice the spacing between atoms.
Diffraction effects limit the ability of an optical detector to optically resolve separate light sources. In general, light that is passing through an aperture will experience diffraction and the best images that can be created (as described in diffraction-limited optics) appear as a central spot with surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk. The size of such a disk is given by

$$\sin\theta = 1.22\,\frac{\lambda}{D},$$
where "θ" is the angular resolution, "λ" is the wavelength of the light, and "D" is the diameter of the lens aperture. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius (measured to first null, that is, to the first place where no light is seen) can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the finer the resolution. Interferometry, with its ability to mimic extremely large baseline apertures, allows for the greatest angular resolution possible.
For astronomical imaging, the atmosphere prevents optimal resolution from being achieved in the visible spectrum due to the atmospheric scattering and dispersion which cause stars to twinkle. Astronomers refer to this effect as the quality of astronomical seeing. Techniques known as adaptive optics have been used to eliminate the atmospheric disruption of images and achieve results that approach the diffraction limit.
Refractive processes take place in the physical optics limit, where the wavelength of light is similar to other distances, as a kind of scattering. The simplest type of scattering is Thomson scattering, which occurs when electromagnetic waves are deflected by single particles. In the limit of Thomson scattering, in which the wavelike nature of light is evident, light is dispersed independently of frequency, in contrast to Compton scattering, which is frequency-dependent and strictly a quantum mechanical process, involving the nature of light as particles. In a statistical sense, elastic scattering of light by numerous particles much smaller than the wavelength of the light is a process known as Rayleigh scattering, while the similar process for scattering by particles that are similar to or larger than the wavelength is known as Mie scattering, with the Tyndall effect being a commonly observed result. A small proportion of light scattering from atoms or molecules may undergo Raman scattering, wherein the frequency changes due to excitation of the atoms and molecules. Brillouin scattering occurs when the frequency of light changes due to local changes with time and movements of a dense material.
Dispersion occurs when different frequencies of light have different phase velocities, due either to material properties ("material dispersion") or to the geometry of an optical waveguide ("waveguide dispersion"). The most familiar form of dispersion is a decrease in index of refraction with increasing wavelength, which is seen in most transparent materials. This is called "normal dispersion". It occurs in all dielectric materials, in wavelength ranges where the material does not absorb light. In wavelength ranges where a medium has significant absorption, the index of refraction can increase with wavelength. This is called "anomalous dispersion".
The separation of colours by a prism is an example of normal dispersion. At the surfaces of the prism, Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin θ / n). Thus, blue light, with its higher refractive index, is bent more strongly than red light, resulting in the well-known rainbow pattern.
Material dispersion is often characterised by the Abbe number, which gives a simple measure of dispersion based on the index of refraction at three specific wavelengths. Waveguide dispersion is dependent on the propagation constant. Both kinds of dispersion cause changes in the group characteristics of the wave, the features of the wave packet that change with the same frequency as the amplitude of the electromagnetic wave. "Group velocity dispersion" manifests as a spreading-out of the signal "envelope" of the radiation and can be quantified with a group dispersion delay parameter:

$$D = \frac{d}{d\lambda}\left(\frac{1}{v_g}\right),$$

where $v_g$ is the group velocity. For a uniform medium, the group velocity is

$$v_g = c\left(n - \lambda\frac{dn}{d\lambda}\right)^{-1},$$

where $n$ is the index of refraction and $c$ is the speed of light in a vacuum. This gives a simpler form for the dispersion delay parameter:

$$D = -\frac{\lambda}{c}\,\frac{d^2 n}{d\lambda^2}.$$
If "D" is less than zero, the medium is said to have "positive dispersion" or normal dispersion. If "D" is greater than zero, the medium has "negative dispersion". If a light pulse is propagated through a normally dispersive medium, the result is the higher frequency components slow down more than the lower frequency components. The pulse therefore becomes "positively chirped", or "up-chirped", increasing in frequency with time. This causes the spectrum coming out of a prism to appear with red light the least refracted and blue/violet light the most refracted. Conversely, if a pulse travels through an anomalously (negatively) dispersive medium, high frequency components travel faster than the lower ones, and the pulse becomes "negatively chirped", or "down-chirped", decreasing in frequency with time.
The result of group velocity dispersion, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fibres, since if dispersion is too high, a group of pulses representing information will each spread in time and merge, making it impossible to extract the signal.
Polarization is a general property of waves that describes the orientation of their oscillations. For transverse waves such as many electromagnetic waves, it describes the orientation of the oscillations in the plane perpendicular to the wave's direction of travel. The oscillations may be oriented in a single direction (linear polarization), or the oscillation direction may rotate as the wave travels (circular or elliptical polarization). Circularly polarised waves can rotate rightward or leftward in the direction of travel, and which of those two rotations is present in a wave is called the wave's chirality.
The typical way to consider polarization is to keep track of the orientation of the electric field vector as the electromagnetic wave propagates. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled "x" and "y" (with z indicating the direction of travel). The shape traced out in the x-y plane by the electric field vector is a Lissajous figure that describes the "polarization state". The following figures show some examples of the evolution of the electric field vector (blue), with time (the vertical axes), at a particular point in space, along with its "x" and "y" components (red/left and green/right), and the path traced by the vector in the plane (purple): The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation.
"Linear"
"Circular"
"Elliptical polarization"
In the leftmost figure above, the x and y components of the light wave are in phase. In this case, the ratio of their strengths is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarization. The direction of this line depends on the relative amplitudes of the two components.
In the middle figure, the two orthogonal components have the same amplitudes and are 90° out of phase. In this case, one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the "x" component can be 90° ahead of the "y" component or it can be 90° behind the "y" component. In this special case, the electric vector traces out a circle in the plane, so this polarization is called circular polarization. The rotation direction in the circle depends on which of the two phase relationships exists and corresponds to "right-hand circular polarization" and "left-hand circular polarization".
In all other cases, where the two components either do not have the same amplitudes and/or their phase difference is neither zero nor a multiple of 90°, the polarization is called elliptical polarization because the electric vector traces out an ellipse in the plane (the "polarization ellipse"). This is shown in the above figure on the right. Detailed mathematics of polarization is done using Jones calculus and is characterised by the Stokes parameters.
Media that have different indexes of refraction for different polarization modes are called "birefringent". Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarised images of whatever is viewed through them. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarization state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colours and rainbow-like effects. In mineralogy, such properties, known as pleochroism, are frequently exploited for the purpose of identifying minerals using polarization microscopes. Additionally, many plastics that are not normally birefringent will become so when subject to mechanical stress, a phenomenon which is the basis of photoelasticity. Non-birefringent methods, to rotate the linear polarization of light beams, include the use of prismatic polarization rotators which use total internal reflection in a prism set designed for efficient collinear transmission.
Media that reduce the amplitude of certain polarization modes are called "dichroic", with devices that block nearly all of the radiation in one mode known as "polarizing filters" or simply "polarisers". Malus' law, which is named after Étienne-Louis Malus, says that when a perfect polariser is placed in a linearly polarised beam of light, the intensity, $I$, of the light that passes through is given by

$$I = I_0\cos^2\theta_i,$$

where $I_0$ is the initial intensity and $\theta_i$ is the angle between the light's initial polarization direction and the axis of the polariser.
A beam of unpolarised light can be thought of as containing a uniform mixture of linear polarizations at all possible angles. Since the average value of $\cos^2\theta_i$ is 1/2, the transmission coefficient becomes

$$\frac{I}{I_0} = \frac{1}{2}.$$
In practice, some light is lost in the polariser and the actual transmission of unpolarised light will be somewhat lower than this, around 38% for Polaroid-type polarisers but considerably higher (>49.9%) for some birefringent prism types.
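Malus' law and the one-half factor for unpolarised light are easy to check numerically; this sketch assumes an ideal polariser (real ones transmit less, as noted above).

```python
import math

# Malus' law: I = I0 * cos^2(theta) for an ideal polariser. Averaging
# cos^2 over all angles gives the factor 1/2 for unpolarised light.

def transmitted_intensity(i0, theta_deg):
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(transmitted_intensity(1.0, 0.0))    # 1.0: axis aligned with the beam
print(transmitted_intensity(1.0, 60.0))   # 0.25: cos^2(60 deg) = 1/4
print(transmitted_intensity(1.0, 90.0))   # ~0.0: crossed polarisers
```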
In addition to birefringence and dichroism in extended media, polarization effects can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on angle of incidence and the angle of refraction. In this way, physical optics recovers Brewster's angle. When light reflects from a thin film on a surface, interference between the reflections from the film's surfaces can produce polarization in the reflected and transmitted light.
Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be "unpolarised". If there is partial correlation between the emitters, the light is "partially polarised". If the polarization is consistent across the spectrum of the source, partially polarised light can be described as a superposition of a completely unpolarised component, and a completely polarised one. One may then describe the light in terms of the degree of polarization, and the parameters of the polarization ellipse.
Light reflected by shiny transparent materials is partly or fully polarised, except when the light is normal (perpendicular) to the surface. It was this effect that allowed the mathematician Étienne-Louis Malus to make the measurements that allowed for his development of the first mathematical models for polarised light. Polarization occurs when light is scattered in the atmosphere. The scattered light produces the brightness and colour in clear skies. This partial polarization of scattered light can be taken advantage of using polarizing filters to darken the sky in photographs. Optical polarization is principally of importance in chemistry due to circular dichroism and optical rotation ("circular birefringence") exhibited by optically active (chiral) molecules.
"Modern optics" encompasses the areas of optical science and engineering that became popular in the 20th century. These areas of optical science typically relate to the electromagnetic or quantum properties of light but do include other topics. A major subfield of modern optics, quantum optics, deals with specifically quantum mechanical properties of light. Quantum optics is not just theoretical; some modern devices, such as lasers, have principles of operation that depend on quantum mechanics. Light detectors, such as photomultipliers and channeltrons, respond to individual photons. Electronic image sensors, such as CCDs, exhibit shot noise corresponding to the statistics of individual photon events. Light-emitting diodes and photovoltaic cells, too, cannot be understood without quantum mechanics. In the study of these devices, quantum optics often overlaps with quantum electronics.
Specialty areas of optics research include the study of how light interacts with specific materials as in crystal optics and metamaterials. Other research focuses on the phenomenology of electromagnetic waves as in singular optics, non-imaging optics, non-linear optics, statistical optics, and radiometry. Additionally, computer engineers have taken an interest in integrated optics, machine vision, and photonic computing as possible components of the "next generation" of computers.
Today, the pure science of optics is called optical science or optical physics to distinguish it from applied optical sciences, which are referred to as optical engineering. Prominent subfields of optical engineering include illumination engineering, photonics, and optoelectronics with practical applications like lens design, fabrication and testing of optical components, and image processing. Some of these fields overlap, with nebulous boundaries between the subjects' terms, which mean slightly different things in different parts of the world and in different areas of industry. A professional community of researchers in nonlinear optics has developed in the last several decades due to advances in laser technology.
A laser is a device that emits light (electromagnetic radiation) through a process called "stimulated emission". The term "laser" is an acronym for "Light Amplification by Stimulated Emission of Radiation". Laser light is usually spatially coherent, which means that the light either is emitted in a narrow, low-divergence beam, or can be converted into one with the help of optical components such as lenses. Because the microwave equivalent of the laser, the "maser", was developed first, devices that emit microwave and radio frequencies are usually called "masers".
The first working laser was demonstrated on 16 May 1960 by Theodore Maiman at Hughes Research Laboratories. When first invented, they were called "a solution looking for a problem". Since then, lasers have become a multibillion-dollar industry, finding utility in thousands of highly varied applications. The first application of lasers visible in the daily lives of the general population was the supermarket barcode scanner, introduced in 1974. The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become truly common in consumers' homes, beginning in 1982. These optical storage devices use a semiconductor laser less than a millimetre wide to scan the surface of the disc for data retrieval. Fibre-optic communication relies on lasers to transmit large amounts of information at the speed of light. Other common applications of lasers include laser printers and laser pointers. Lasers are used in medicine in areas such as bloodless surgery, laser eye surgery, and laser capture microdissection and in military applications such as missile defence systems, electro-optical countermeasures (EOCM), and lidar. Lasers are also used in holograms, bubblegrams, laser light shows, and laser hair removal.
The Kapitsa–Dirac effect causes beams of particles to diffract as the result of meeting a standing wave of light. Light can be used to position matter using various phenomena (see optical tweezers).
Optics is part of everyday life. The ubiquity of visual systems in biology indicates the central role optics plays as the science of one of the five senses. Many people benefit from eyeglasses or contact lenses, and optics are integral to the functioning of many consumer goods including cameras. Rainbows and mirages are examples of optical phenomena. Optical communication provides the backbone for both the Internet and modern telephony.
The human eye functions by focusing light onto a layer of photoreceptor cells called the retina, which forms the inner lining of the back of the eye. The focusing is accomplished by a series of transparent media. Light entering the eye passes first through the cornea, which provides much of the eye's optical power. The light then continues through the fluid just behind the cornea—the anterior chamber, then passes through the pupil. The light then passes through the lens, which focuses the light further and allows adjustment of focus. The light then passes through the main body of fluid in the eye—the vitreous humour, and reaches the retina. The cells in the retina line the back of the eye, except for where the optic nerve exits; this results in a blind spot.
There are two types of photoreceptor cells, rods and cones, which are sensitive to different aspects of light. Rod cells are sensitive to the intensity of light over a wide frequency range, thus are responsible for black-and-white vision. Rod cells are not present on the fovea, the area of the retina responsible for central vision, and are not as responsive as cone cells to spatial and temporal changes in light. There are, however, twenty times more rod cells than cone cells in the retina because the rod cells are present across a wider area. Because of their wider distribution, rods are responsible for peripheral vision.
In contrast, cone cells are less sensitive to the overall intensity of light, but come in three varieties that are sensitive to different frequency-ranges and thus are used in the perception of colour and photopic vision. Cone cells are highly concentrated in the fovea and have a high visual acuity meaning that they are better at spatial resolution than rod cells. Since cone cells are not as sensitive to dim light as rod cells, most night vision is limited to rod cells. Likewise, since cone cells are in the fovea, central vision (including the vision needed to do most reading, fine detail work such as sewing, or careful examination of objects) is done by cone cells.
Ciliary muscles around the lens allow the eye's focus to be adjusted. This process is known as accommodation. The near point and far point define the nearest and farthest distances from the eye at which an object can be brought into sharp focus. For a person with normal vision, the far point is located at infinity. The near point's location depends on how much the muscles can increase the curvature of the lens, and how inflexible the lens has become with age. Optometrists, ophthalmologists, and opticians usually consider an appropriate near point to be closer than normal reading distance—approximately 25 cm.
Defects in vision can be explained using optical principles. As people age, the lens becomes less flexible and the near point recedes from the eye, a condition known as presbyopia. Similarly, people suffering from hyperopia cannot decrease the focal length of their lens enough to allow for nearby objects to be imaged on their retina. Conversely, people who cannot increase the focal length of their lens enough to allow for distant objects to be imaged on the retina suffer from myopia and have a far point that is considerably closer than infinity. A condition known as astigmatism results when the cornea is not spherical but instead is more curved in one direction. This causes horizontally extended objects to be focused on different parts of the retina than vertically extended objects, and results in distorted images.
All of these conditions can be corrected using corrective lenses. For presbyopia and hyperopia, a converging lens provides the extra curvature necessary to bring the near point closer to the eye while for myopia a diverging lens provides the curvature necessary to send the far point to infinity. Astigmatism is corrected with a cylindrical surface lens that curves more strongly in one direction than in another, compensating for the non-uniformity of the cornea.
The optical power of corrective lenses is measured in diopters, a value equal to the reciprocal of the focal length measured in metres, with a positive focal length corresponding to a converging lens and a negative focal length corresponding to a diverging lens. For lenses that correct for astigmatism as well, three numbers are given: one for the spherical power, one for the cylindrical power, and one for the angle of orientation of the astigmatism.
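A small sketch of the diopter convention just described, with illustrative prescription values:

```python
# Lens power in diopters is the reciprocal of the focal length in
# metres: converging lenses are positive, diverging lenses negative.

def power_diopters(focal_length_m):
    return 1.0 / focal_length_m

def focal_length_m(power_d):
    return 1.0 / power_d

print(power_diopters(0.5))      # +2.0 D: a typical reading-glasses power
print(focal_length_m(-4.0))     # -0.25 m: a diverging lens for myopia
```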
Optical illusions (also called visual illusions) are characterized by visually perceived images that differ from objective reality. The information gathered by the eye is processed in the brain to give a percept that differs from the object being imaged. Optical illusions can be the result of a variety of phenomena including physical effects that create images that are different from the objects that make them, the physiological effects on the eyes and brain of excessive stimulation (e.g. brightness, tilt, colour, movement), and cognitive illusions where the eye and brain make unconscious inferences.
Cognitive illusions include some which result from the unconscious misapplication of certain optical principles. For example, the Ames room, Hering, Müller-Lyer, Orbison, Ponzo, Sander, and Wundt illusions all rely on the suggestion of the appearance of distance by using converging and diverging lines, in the same way that parallel light rays (or indeed any set of parallel lines) appear to converge at a vanishing point at infinity in two-dimensionally rendered images with artistic perspective. This suggestion is also responsible for the famous moon illusion where the moon, despite having essentially the same angular size, appears much larger near the horizon than it does at zenith. This illusion so confounded Ptolemy that he incorrectly attributed it to atmospheric refraction when he described it in his treatise, "Optics".
Another type of optical illusion exploits broken patterns to trick the mind into perceiving symmetries or asymmetries that are not present. Examples include the café wall, Ehrenstein, Fraser spiral, Poggendorff, and Zöllner illusions. Related, but not strictly illusions, are patterns that occur due to the superimposition of periodic structures. For example, transparent tissues with a grid structure produce shapes known as moiré patterns, while the superimposition of periodic transparent patterns comprising parallel opaque lines or curves produces line moiré patterns.
Single lenses have a variety of applications including photographic lenses, corrective lenses, and magnifying glasses, while single mirrors are used in parabolic reflectors and rear-view mirrors. Combining a number of mirrors, prisms, and lenses produces compound optical instruments which have practical uses. For example, a periscope is simply two plane mirrors aligned to allow for viewing around obstructions. The most famous compound optical instruments in science are the microscope and the telescope, both invented by the Dutch in the late 16th and early 17th centuries.
Microscopes were first developed with just two lenses: an objective lens and an eyepiece. The objective lens is essentially a magnifying glass and was designed with a very small focal length while the eyepiece generally has a longer focal length. This has the effect of producing magnified images of close objects. Generally, an additional source of illumination is used since magnified images are dimmer due to the conservation of energy and the spreading of light rays over a larger surface area. Modern microscopes, known as "compound microscopes", have many lenses in them (typically four) to optimize the functionality and enhance image stability. A slightly different variety of microscope, the comparison microscope, looks at side-by-side images to produce a stereoscopic binocular view that appears three-dimensional when used by humans.
The first telescopes, called refracting telescopes, were also developed with a single objective and eyepiece lens. In contrast to the microscope, the objective lens of the telescope was designed with a large focal length to avoid optical aberrations. The objective focuses an image of a distant object at its focal point which is adjusted to be at the focal point of an eyepiece of a much smaller focal length. The main goal of a telescope is not necessarily magnification, but rather collection of light which is determined by the physical size of the objective lens. Thus, telescopes are normally indicated by the diameters of their objectives rather than by the magnification which can be changed by switching eyepieces. Because the magnification of a telescope is equal to the focal length of the objective divided by the focal length of the eyepiece, smaller focal-length eyepieces cause greater magnification.
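Because the magnification relation is a simple ratio, it is easy to check numerically. In the sketch below the focal lengths are invented example values, not figures from the article.

```python
# Angular magnification of a simple refracting telescope: the focal
# length of the objective divided by the focal length of the eyepiece.

def telescope_magnification(f_objective_mm: float, f_eyepiece_mm: float) -> float:
    return f_objective_mm / f_eyepiece_mm

print(telescope_magnification(1200, 25))  # 48x
print(telescope_magnification(1200, 10))  # 120x: shorter eyepiece, more power
```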
Since crafting large lenses is much more difficult than crafting large mirrors, most modern telescopes are "reflecting telescopes", that is, telescopes that use a primary mirror rather than an objective lens. The same general optical considerations apply to reflecting telescopes that applied to refracting telescopes, namely, the larger the primary mirror, the more light collected, and the magnification is still equal to the focal length of the primary mirror divided by the focal length of the eyepiece. Professional telescopes generally do not have eyepieces and instead place an instrument (often a charge-coupled device) at the focal point.
The optics of photography involves both lenses and the medium in which the electromagnetic radiation is recorded, whether it be a plate, film, or charge-coupled device. Photographers must consider the reciprocity of the camera and the shot, which is summarized by the relation: exposure ∝ aperture area × exposure time × scene luminance.
In other words, the smaller the aperture (giving greater depth of focus), the less light coming in, so the length of time has to be increased (leading to possible blurriness if motion occurs). An example of the use of the law of reciprocity is the Sunny 16 rule, which gives a rough estimate of the settings needed for proper exposure in daylight.
A camera's aperture is measured by a unitless number called the f-number or f-stop, f/#, often notated as N, and given by f/# = f/D,
where f is the focal length and D is the diameter of the entrance pupil. By convention, "f/#" is treated as a single symbol, and specific values of f/# are written by replacing the number sign with the value. The two ways to increase the f-stop are to either decrease the diameter of the entrance pupil or change to a longer focal length (in the case of a zoom lens, this can be done by simply adjusting the lens). Higher f-numbers also have a larger depth of field due to the lens approaching the limit of a pinhole camera, which is able to focus all images perfectly, regardless of distance, but requires very long exposure times.
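The following sketch ties together the f-number definition and the reciprocity rule discussed above, using the Sunny 16 rule as a starting point; all specific values are illustrative assumptions.

```python
# f-number and exposure reciprocity; a minimal sketch.

def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    return focal_length_mm / pupil_diameter_mm

print(f_number(50, 25))  # 50 mm lens with a 25 mm entrance pupil -> f/2.0

# Reciprocity: each stop closed (f-number multiplied by sqrt(2)) halves
# the light, so the exposure time must double to compensate.
base_time = 1.0 / 100     # Sunny 16 at ISO 100: roughly 1/100 s at f/16
stops_closed = 2          # f/16 -> f/22 -> f/32
print(base_time * 2 ** stops_closed)  # 0.04 s, i.e. 1/25 s at f/32
```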
The field of view that the lens will provide changes with the focal length of the lens. There are three basic classifications based on the relationship of the focal length of the lens to the diagonal size of the film or sensor: the normal lens, whose focal length is roughly equal to the diagonal; the wide-angle lens, whose shorter focal length gives a wider angle of view; and the long-focus (including telephoto) lens, whose longer focal length gives a narrower angle of view. Modern zoom lenses may have some or all of these attributes.
The absolute value for the exposure time required depends on how sensitive to light the medium being used is (measured by the film speed, or, for digital media, by the quantum efficiency). Early photography used media that had very low light sensitivity, and so exposure times had to be long even for very bright shots. As technology has improved, so has the light sensitivity of film and digital cameras.
Other results from physical and geometrical optics apply to camera optics. For example, the maximum resolution capability of a particular camera set-up is determined by the diffraction limit associated with the pupil size and given, roughly, by the Rayleigh criterion.
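As a rough numerical illustration, the sketch below evaluates the Rayleigh criterion in its standard small-angle form, θ ≈ 1.22 λ/D; the pupil size and wavelength are illustrative assumptions, not values from the article.

```python
# Diffraction-limited angular resolution via the Rayleigh criterion.
import math

def rayleigh_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    return 1.22 * wavelength_m / aperture_m

theta = rayleigh_limit_rad(550e-9, 0.005)  # green light, 5 mm pupil
print(math.degrees(theta) * 3600)          # ~27.7 arcseconds
```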
The unique optical properties of the atmosphere cause a wide range of spectacular optical phenomena. The blue colour of the sky is a direct result of Rayleigh scattering which redirects higher frequency (blue) sunlight back into the field of view of the observer. Because blue light is scattered more easily than red light, the sun takes on a reddish hue when it is observed through a thick atmosphere, as during a sunrise or sunset. Additional particulate matter in the sky can scatter different colours at different angles creating colourful glowing skies at dusk and dawn. Scattering off of ice crystals and other particles in the atmosphere is responsible for halos, afterglows, coronas, rays of sunlight, and sun dogs. The variation in these kinds of phenomena is due to different particle sizes and geometries.
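The wavelength dependence behind the sky's colour can be made quantitative with the standard Rayleigh λ⁻⁴ scattering law, a textbook result that the passage above states only qualitatively; the wavelengths below are illustrative.

```python
# Relative Rayleigh scattering strength, using the standard 1/lambda^4 law.

def relative_scatter(wavelength_nm: float, reference_nm: float = 550.0) -> float:
    return (reference_nm / wavelength_nm) ** 4

print(relative_scatter(450))  # blue: ~2.2x the 550 nm reference
print(relative_scatter(700))  # red:  ~0.38x; blue scatters ~5.8x more than red
```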
Mirages are optical phenomena in which light rays are bent due to thermal variations in the refractive index of air, producing displaced or heavily distorted images of distant objects. Other dramatic optical phenomena associated with this include the Novaya Zemlya effect where the sun appears to rise earlier than predicted with a distorted shape. A spectacular form of refraction occurs with a temperature inversion called the Fata Morgana where objects on the horizon or even beyond the horizon, such as islands, cliffs, ships or icebergs, appear elongated and elevated, like "fairy tale castles".
Rainbows are the result of a combination of internal reflection and dispersive refraction of light in raindrops. A single reflection off the backs of an array of raindrops produces a rainbow with an angular size on the sky that ranges from 40° to 42° with red on the outside. Double rainbows are produced by two internal reflections with angular size of 50.5° to 54° with violet on the outside. Because rainbows are seen with the sun 180° away from the centre of the rainbow, rainbows are more prominent the closer the sun is to the horizon.
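The 40° to 42° figure for the primary bow can be recovered numerically from the classical Descartes construction (refraction into the drop, one internal reflection, refraction out) by minimising the total deviation over incidence angles. This is a sketch under standard assumptions; the refractive indices are typical values for water, not figures from the article.

```python
# Primary rainbow angle from minimum deviation in a spherical water drop.
import math

def primary_rainbow_angle(n: float) -> float:
    """Angular radius (degrees) of the primary bow for refractive index n."""
    best = 180.0
    for k in range(1, 9000):                # incidence angles 0.01..89.99 deg
        i = math.radians(k / 100.0)
        r = math.asin(math.sin(i) / n)      # Snell's law at entry
        deviation = math.degrees(2 * i - 4 * r) + 180.0
        best = min(best, deviation)
    return 180.0 - best                     # angle from the antisolar point

print(primary_rainbow_angle(1.331))  # red:    ~42.4 degrees (outer edge)
print(primary_rainbow_angle(1.343))  # violet: ~40.6 degrees (inner edge)
```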
Olbers' paradox
In astrophysics and physical cosmology, Olbers' paradox, named after the German astronomer Heinrich Wilhelm Olbers (1758–1840), also known as the "dark night sky paradox", is the argument that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. In the hypothetical case that the universe is static, homogeneous at a large scale, and populated by an infinite number of stars, then any line of sight from Earth must end at the (very bright) surface of a star and hence the night sky should be completely illuminated and very bright. This contradicts the observed darkness and non-uniformity of the night.
The darkness of the night sky is one of the pieces of evidence for a dynamic universe, such as the Big Bang model. That model explains the observed non-uniformity of brightness by invoking spacetime's expansion, which lengthens the light originating from the Big Bang to microwave levels via a process known as redshift; this microwave radiation background has wavelengths much longer than those of visible light, so appears dark to the naked eye. Other explanations for the paradox have been offered, but none have wide acceptance in cosmology.
The first to address the problem of an infinite number of stars and the resulting heat in the cosmos was Cosmas Indicopleustes, a Greek monk from Alexandria, who stated in his "Topographia Christiana": "The crystal-made sky sustains the heat of the Sun, the moon, and the infinite number of stars; otherwise, it would have been full of fire, and it could melt or set on fire."
Edward Robert Harrison's "Darkness at Night: A Riddle of the Universe" (1987) gives an account of the dark night sky paradox, seen as a problem in the history of science. According to Harrison, the first to conceive of anything like the paradox was Thomas Digges, who was also the first to expound the Copernican system in English and also postulated an infinite universe with infinitely many stars. Kepler also posed the problem in 1610, and the paradox took its mature form in the 18th-century work of Halley and Cheseaux. The paradox is commonly attributed to the German amateur astronomer Heinrich Wilhelm Olbers, who described it in 1823, but Harrison shows convincingly that Olbers was far from the first to pose the problem, nor was his thinking about it particularly valuable. Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin, in a little-known 1901 paper, and that Edgar Allan Poe's essay "Eureka" (1848) curiously anticipated some qualitative aspects of Kelvin's argument:
The paradox is that a static, infinitely old universe with an infinite number of stars distributed in an infinitely large space would be bright rather than dark.
To show this, we divide the universe into a series of concentric shells, 1 light year thick. A certain number of stars will be in the shell 1,000,000,000 to 1,000,000,001 light years away. If the universe is homogeneous at a large scale, then there would be four times as many stars in a second shell, which is between 2,000,000,000 and 2,000,000,001 light years away. However, the second shell is twice as far away, so each star in it would appear one quarter as bright as the stars in the first shell. Thus the total light received from the second shell is the same as the total light received from the first shell.
Thus each shell of a given thickness will produce the same net amount of light regardless of how far away it is. That is, the light of each shell adds to the total amount. Thus the more shells, the more light; and with infinitely many shells, there would be a bright night sky.
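The shell argument can be made concrete in a few lines of Python. The sketch below uses arbitrary units (one star per unit of shell area at unit distance) purely for illustration: every shell contributes the same flux, so the total grows without bound.

```python
# Numerical version of the concentric-shell argument for Olbers' paradox.

def flux_from_shell(k: int) -> float:
    stars = float(k**2)                 # stars per shell grow as distance squared
    brightness_per_star = 1.0 / k**2    # inverse-square dimming
    return stars * brightness_per_star  # constant: each shell contributes equally

for n_shells in (10, 1000, 100000):
    print(sum(flux_from_shell(k) for k in range(1, n_shells + 1)))
# 10.0, 1000.0, 100000.0 -- sky brightness diverges with the number of shells
```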
While dark clouds could obstruct the light, these clouds would heat up, until they were as hot as the stars, and then radiate the same amount of light.
Kepler saw this as an argument for a finite observable universe, or at least for a finite number of stars. In general relativity theory, it is still possible for the paradox to hold in a finite universe: though the sky would not be infinitely bright, every point in the sky would still be like the surface of a star.
The poet Edgar Allan Poe suggested that the finite size of the observable universe resolves the apparent paradox. More specifically, because the universe is finitely old and the speed of light is finite, only finitely many stars can be observed from Earth (although the whole universe can be infinite in space). The density of stars within this finite volume is sufficiently low that any line of sight from Earth is unlikely to reach a star.
However, the Big Bang theory seems to introduce a new problem: it states that the sky was much brighter in the past, especially at the end of the recombination era, when it first became transparent. All points of the local sky at that era were comparable in brightness to the surface of the Sun, due to the high temperature of the universe in that era; and most light rays will originate not from a star but from the relic radiation of the Big Bang.
This problem is addressed by the fact that the Big Bang theory also involves the expansion of space, which can cause the energy of emitted light to be reduced via redshift. More specifically, the extremely energetic radiation from the Big Bang has been redshifted to microwave wavelengths (1100 times the length of its original wavelength) as a result of the cosmic expansion, and thus forms the cosmic microwave background radiation. This explains the relatively low light densities and energy levels present in most of our sky today despite the assumed bright nature of the Big Bang. The redshift also affects light from distant stars and quasars, but this diminution is minor, since the most distant galaxies and quasars have redshifts of only around 5 to 8.6.
The redshift hypothesised in the Big Bang model would by itself explain the darkness of the night sky even if the universe were infinitely old. In the steady-state theory the universe is infinitely old and uniform in time as well as space. There is no Big Bang in this model, but there are stars and quasars at arbitrarily great distances. The expansion of the universe causes the light from these distant stars and quasars to redshift, so that the total light flux from the sky remains finite. Thus the observed radiation density (the sky brightness of extragalactic background light) can be independent of finiteness of the universe. Mathematically, the total electromagnetic energy density (radiation energy density) in thermodynamic equilibrium from Planck's law is u = (4σ/c)·T^4, where σ is the Stefan–Boltzmann constant and c is the speed of light.
For example, for a temperature of 2.7 K this gives an energy density of 40 fJ/m3 (a mass-equivalent density of 4.5×10−31 kg/m3), and for a visible temperature of 6000 K it gives 1 J/m3 (1.1×10−17 kg/m3). But the total radiation emitted by a star (or other cosmic object) is at most equal to the total nuclear binding energy of isotopes in the star. For the density of the observable universe of about 4.6×10−28 kg/m3 and given the known abundance of the chemical elements, the corresponding maximal radiation energy density is 9.2×10−31 kg/m3, i.e. a temperature of 3.2 K (matching the value observed for the optical radiation temperature by Arthur Eddington). This is close to the summed energy density of the cosmic microwave background (CMB) and the cosmic neutrino background. The Big Bang hypothesis predicts that the CMB should have the same energy density as the binding energy density of the primordial helium, which is much greater than the binding energy density of the non-primordial elements; so it gives almost the same result. However, the steady-state model does not predict the angular distribution of the microwave background temperature accurately (as the standard ΛCDM paradigm does). Nevertheless, modified gravitation theories (without metric expansion of the universe) cannot be ruled out by CMB and BAO observations.
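The quoted figures follow directly from u = (4σ/c)·T^4 and can be checked in a few lines of Python; this is a verification sketch, not part of the original argument.

```python
# Blackbody radiation energy density u = (4*sigma/c) * T^4.

SIGMA = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 2.99792458e8      # speed of light, m/s

def radiation_energy_density(T: float) -> float:
    """Equilibrium radiation energy density in J/m^3 at temperature T."""
    return 4 * SIGMA / C * T**4

print(radiation_energy_density(2.7))         # ~4.0e-14 J/m^3, i.e. 40 fJ/m^3
print(radiation_energy_density(2.7) / C**2)  # mass equivalent ~4.5e-31 kg/m^3
print(radiation_energy_density(6000))        # ~1.0 J/m^3
print(radiation_energy_density(3.2) / C**2)  # ~8.8e-31 kg/m^3, near the 9.2e-31
                                             # figure quoted for T ~ 3.2 K
```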
Stars have a finite age and a finite power, thereby implying that each star has a finite impact on a sky's light field density. Edgar Allan Poe suggested that this idea could provide a resolution to Olbers' paradox; a related theory was also proposed by Jean-Philippe de Chéseaux. However, stars are continually being born as well as dying. As long as the density of stars throughout the universe remains constant, regardless of whether the universe itself has a finite or infinite age, there would be infinitely many other stars in the same angular direction, with an infinite total impact. So the finite age of the stars does not explain the paradox.
Suppose that the universe were not expanding and always had the same stellar density; then the temperature of the universe would continually increase as the stars put out more radiation. Eventually, it would reach 3000 K (corresponding to a typical photon energy of 0.3 eV and so a frequency of 7.5×1013 Hz), and the photons would begin to be absorbed by the hydrogen plasma filling most of the universe, rendering outer space opaque. This maximal radiation density corresponds to an energy density of roughly 0.06 J/m3 (computed from u = (4σ/c)·T^4 at 3000 K), which is vastly greater than the energy density actually observed in the night sky. So the sky is about five hundred billion times darker than it would be if the universe were neither expanding nor too young to have reached equilibrium yet. However, recent observations increasing the lower bound on the number of galaxies suggest UV absorption by hydrogen and reemission in near-IR (not visible) wavelengths also plays a role.
A different resolution, which does not rely on the Big Bang theory, was first proposed by Carl Charlier in 1908 and later rediscovered by Benoît Mandelbrot in 1974. They both postulated that if the stars in the universe were distributed in a hierarchical fractal cosmology (e.g., similar to Cantor dust)—the average density of any region diminishes as the region considered increases—it would not be necessary to rely on the Big Bang theory to explain Olbers' paradox. This model would not rule out a Big Bang, but would allow for a dark sky even if the Big Bang had not occurred.
Mathematically, the light received from stars as a function of star distance in a hypothetical fractal cosmos is

light = ∫_{r0}^{∞} L(r) N(r) dr

where: r0 is the distance of the nearest star (r0 > 0); r is the variable measuring distance from the Earth; L(r) is the average luminosity per star at distance r; and N(r) is the number of stars at distance r.

The function of luminosity from a given distance, L(r)N(r), determines whether the light received is finite or infinite. For any luminosity from a given distance proportional to r^a, the integral is infinite for a ≥ −1 but finite for a < −1. So if L(r) is proportional to r^−2 (the inverse-square dimming of starlight), then for the received light to be finite, N(r) must be proportional to r^b, where b < 1. For b = 1, the number of stars at a given distance is proportional to that distance, so the total number of stars within radius r grows as r^2. This would correspond to a fractal dimension of 2. Thus the fractal dimension of the universe would need to be less than 2 for this explanation to work.
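A crude numerical check of the convergence claim: with L(r)N(r) ∝ r^(b−2), the tail of the integral is finite only for b < 1. The function below is an illustration only, with arbitrary cutoffs chosen for the example.

```python
# Convergence of the fractal-distribution integral: integrand ~ r**(b - 2).

def tail_integral(b: float, r0: float = 1.0, r_max: float = 1e6,
                  steps: int = 10**6) -> float:
    """Midpoint-rule estimate of the integral of r**(b - 2) from r0 to r_max."""
    dr = (r_max - r0) / steps
    return sum((r0 + (k + 0.5) * dr) ** (b - 2.0) * dr for k in range(steps))

print(tail_integral(0.5))  # b < 1: converges (close to the exact limit 2)
print(tail_integral(1.0))  # b = 1: grows like log(r_max) -- diverges
print(tail_integral(2.0))  # b = 2 (homogeneous case): grows like r_max
```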
This explanation is not widely accepted among cosmologists, since the evidence suggests that the fractal dimension of the universe is at least 2. Moreover, the majority of cosmologists accept the cosmological principle, which assumes that matter at the scale of billions of light years is distributed isotropically. By contrast, fractal cosmology requires an anisotropic matter distribution at the largest scales, whereas the cosmic microwave background radiation shows only a cosine (dipole) anisotropy.
Occult
The term "occult sciences" was used in the 16th century to refer to astrology, alchemy, and natural magic. The term occultism emerged in 19th-century France, where it came to be associated with various French esoteric groups connected to Éliphas Lévi and Papus, and in 1875 was introduced into the English language by the esotericist Helena Blavatsky. Throughout the 20th century, the term was used idiosyncratically by a range of different authors, but by the 21st century was commonly employed – including by academic scholars of esotericism – to refer to a range of esoteric currents that developed in the mid-19th century and their descendants. "Occultism" is thus often used to categorise such esoteric traditions as Spiritualism, Theosophy, Anthroposophy, the Hermetic Order of the Golden Dawn, and New Age.
Particularly since the late twentieth century, various authors have used "the occult" as a substantivized adjective. In this usage, "the occult" is a category into which varied beliefs and practices are placed if they are considered to fit into neither religion nor science. "The occult" in this sense is very broad, encompassing such phenomena as beliefs in vampires or fairies and movements like ufology and parapsychology. In that same period, "occult" and "culture" were combined to form the neologism "occulture".
The idea of "occult sciences" developed in the sixteenth century. The term usually encompassed three practices—astrology, alchemy, and natural magic—although sometimes various forms of divination were also included rather than being subsumed under natural magic. These were grouped together because, according to the Dutch scholar of hermeticism Wouter Hanegraaff, "each one of them engaged in a systematic investigation of nature and natural processes, in the context of theoretical frameworks that relied heavily on a belief in occult qualities, virtues or forces."
Although there are areas of overlap between these different occult sciences, they are separate and in some cases practitioners of one would reject the others as being illegitimate.
During the Age of Enlightenment, the term "occult" increasingly came to be seen as intrinsically incompatible with the concept of "science". From that point on, use of the term "occult science(s)" implied a conscious polemic against mainstream science. Nevertheless, the philosopher and card game historian Michael Dummett, whose analysis of the historical evidence suggested that fortune-telling and occult interpretations using cards were unknown before the 18th century, said that the term 'occult science' was not misplaced because "people who believe in the possibility of unveiling the future or of exercising supernormal powers do so because the efficacy of the methods they employ coheres with some systematic conception which they hold of the way the universe functions...however flimsy its empirical basis."
In his 1871 book "Primitive Culture", the anthropologist Edward Tylor used the term "occult science" as a synonym for "magic".
Occult qualities are properties that have no known rational explanation; in the Middle Ages, for example, magnetism was considered an occult quality, as was the aether. Newton's contemporaries severely criticized his theory of gravity acting through "action at a distance" as occult.
In the English-speaking world, prominent figures in the development of occultism included Helena Blavatsky and other figures associated with her Theosophical Society, senior figures in the Hermetic Order of the Golden Dawn like William Wynn Westcott and Samuel Liddell MacGregor Mathers, as well as other individuals such as Paschal Beverly Randolph, Emma Hardinge Britten, Arthur Edward Waite, and—in the early twentieth century—Aleister Crowley, Dion Fortune, and Israel Regardie. By the end of the nineteenth century, occultist ideas had also spread into other parts of Europe, such as the German Empire, Austria-Hungary, and the Kingdom of Italy.
Unlike older forms of esotericism, occultism does not reject "scientific progress or modernity". Lévi had stressed the need to solve the conflict between science and religion, something that he believed could be achieved by turning to what he thought was the ancient wisdom found in magic. The French scholar of Western esotericism Antoine Faivre noted that rather than outright accepting "the triumph of scientism", occultists sought "an alternative solution", trying to integrate "scientific progress or modernity" with "a global vision that will serve to make the vacuousness of materialism more apparent". The Dutch scholar of hermeticism Wouter Hanegraaff remarked that occultism was "essentially an attempt to adapt esotericism" to the "disenchanted world", a post-Enlightenment society in which growing scientific discovery had eradicated the "dimension of irreducible mystery" previously present. In doing so, he noted, occultism distanced itself from the "traditional esotericism" which accepted the premise of an "enchanted" world. According to the British historian of Western esotericism Nicholas Goodrick-Clarke, occultist groups typically seek "proofs and demonstrations by recourse to scientific tests or terminology".
In his work about Lévi, the German historian of religion Julian Strube has argued that the occultist wish for a "synthesis" of religion, science, and philosophy directly resulted from the context of contemporary socialism and progressive Catholicism. Similar to spiritualism, but in declared opposition to it, the emergence of occultism should thus be seen within the context of radical social reform, which was often concerned with establishing new forms of "scientific religion" while at the same time propagating the revival of an ancient tradition of "true religion". Indeed, the emergence of both modern esotericism and socialism in July Monarchy France was inherently intertwined.
Another feature of occultists is that—unlike earlier esotericists—they often openly distanced themselves from Christianity, in some cases (like that of Crowley) even adopting explicitly anti-Christian stances. This reflected how pervasive the influence of secularisation had been on all areas of European society. In rejecting Christianity, these occultists sometimes turned towards pre-Christian belief systems and embraced forms of Modern Paganism, while others instead took influence from the religions of Asia, such as Hinduism and Buddhism. In various cases, certain occultists did both. Another characteristic of these occultists was the emphasis that they placed on "the spiritual realization of the individual", an idea that would strongly influence the twentieth-century New Age and Human Potential Movement. This spiritual realization was encouraged both through traditional Western 'occult sciences' like alchemy and ceremonial magic, but by the start of the twentieth century had also begun to include practices drawn from non-Western contexts, such as yoga.
Although occultism is distinguished from earlier forms of esotericism, many occultists have also been involved in older esoteric currents. For instance, occultists like François-Charles Barlet and Rudolf Steiner were also theosophers, adhering to the ideas of the early modern Christian thinker Jakob Böhme, and seeking to integrate ideas from Böhmian theosophy and occultism. It has been noted, however, that this distancing from the Theosophical Society should be understood in the light of polemical identity formations amongst esotericists towards the end of the nineteenth century.
The earliest known usage of the term "occultism" is in the French language, as "l'occultisme". In this form it appears in A. de Lestrange's article that was published in Jean-Baptiste Richard de Randonvilliers' "Dictionnaire des mots nouveaux" ("Dictionary of new words") in 1842. However, it was not related, at this point, to the notion of "Ésotérisme chrétien", as has been claimed by Hanegraaff, but to describe a political "system of occulticity" that was directed against priests and aristocrats. The French esotericist Éliphas Lévi then used the term in his influential book on ritual magic, "Dogme et rituel de la haute magie", first published in 1856. In 1853, the Freemasonic author Jean-Marie Ragon had already used "occultisme" in his popular work "Maçonnerie occulte", relating it to earlier practices that, since the Renaissance, had been termed "occult sciences" or "occult philosophy", but also to the recent socialist teachings of Charles Fourier. Lévi was familiar with that work and might have borrowed the term from there. In any case, Lévi also claimed to be a representative of an older tradition of occult science or occult philosophy. It was from his usage of the term "occultisme" that it gained wider usage; according to Faivre, Lévi was "the principal exponent of esotericism in Europe and the United States" at that time.
The earliest use of the term "occultism" in the English language appears to be in "A Few Questions to 'Hiraf'", an 1875 article published in the American Spiritualist magazine, "Spiritual Scientist". The article had been written by Helena Blavatsky, a Russian émigré living in the United States who founded the religion of Theosophy.
Various twentieth-century writers on the subject used the term "occultism" in different ways. Some writers, such as the German philosopher Theodor W. Adorno in his "Theses Against Occultism", employed the term as a broad synonym for irrationality. In his 1950 book "L'occultisme", Robert Amadou used the term as a synonym for esotericism, an approach that the later scholar of esotericism Marco Pasi suggested left the term "superfluous". Unlike Amadou, other writers saw "occultism" and "esotericism" as different, albeit related, phenomena. In the 1970s, the sociologist Edward Tiryakian distinguished between "occultism", which he used in reference to practices, techniques, and procedures, and "esotericism", which he defined as the religious or philosophical belief systems on which such practices are based. This division was initially adopted by the early academic scholar of esotericism, Antoine Faivre, although he later abandoned it; it has been rejected by most scholars who study esotericism.
A different division was used by the Traditionalist author René Guénon, who used "esotericism" to describe what he believed was the Traditionalist, inner teaching at the heart of most religions, while "occultism" was used pejoratively to describe new religions and movements that he disapproved of, such as Spiritualism, Theosophy, and various secret societies. Guénon's use of this terminology was adopted by later writers like Serge Hutin and Luc Benoist. As noted by Hanegraaff, Guénon's use of these terms are rooted in his Traditionalist beliefs and "cannot be accepted as scholarly valid".
The term "occultism" derives from the older term "occult", much as the term "esotericism" derives from the older term "esoteric". However, the historian of esotericism Wouter Hanegraaff stated that it was important to distinguish between the meanings of the term "occult" and "occultism". Occultism is not a homogenous movement and is widely diverse.
Over the course of its history, the term "occultism" has been used in various different ways. However, in contemporary uses, "occultism" commonly refers to forms of esotericism that developed in the nineteenth century and their twentieth-century derivations. In a descriptive sense, it has been used to describe forms of esotericism which developed in nineteenth-century France, especially in the Neo-Martinist environment. According to the historian of esotericism Antoine Faivre, it is with the esotericist Éliphas Lévi that "the occultist current properly so-called" first appears. Other prominent French esotericists involved in developing occultism included Papus, Stanislas de Guaita, Joséphin Péladan, Georges-Albert Puyou de Pouvourville, and Jean Bricaud.
In the mid-1990s, a new definition of "occultism" was put forth by Wouter Hanegraaff. According to Hanegraaff, the term "occultism" can be used not only for the nineteenth-century groups which openly self-described using that term but can also be used in reference to "the "type" of esotericism that they represent".
Seeking to define "occultism" so that the term would be suitable "as an etic category" for scholars, Hanegraaff devised the following definition: "a category in the study of religions, which comprises "all attempts by esotericists to come to terms with a disenchanted world or, alternatively, by people in general to make sense of esotericism from the perspective of a disenchanted secular world"". Hanegraaff noted that this etic usage of the term would be independent of emic usages of the term employed by occultists and other esotericists themselves.
In this definition, "occultism" covers many esoteric currents that have developed from the mid-nineteenth century onward, including Spiritualism, Theosophy, the Hermetic Order of the Golden Dawn, and the New Age. Employing this etic understanding of "occultism", Hanegraaff argued that its development could begin to be seen in the work of the Swedish esotericist Emanuel Swedenborg and in the Mesmerist movement of the eighteenth century, although he added that occultism only emerged in "fully-developed form" as Spiritualism, a movement that developed in the United States during the mid-nineteenth century.
Marco Pasi suggested that the use of Hanegraaff's definition might cause confusion by presenting a group of nineteenth-century esotericists who called themselves "occultists" as just one part of a broader category of esotericists whom scholars would call "occultists".
Following these discussions, Julian Strube argued that Lévi and other contemporary authors who would now be regarded as esotericists developed their ideas not against the background of an "esoteric tradition" in the first place. Rather, Lévi's notion of occultism emerged in the context of highly influential radical socialist movements and widespread progressive, so-called neo-Catholic ideas. This further complicates Hanegraaff's characteristics of occultism, since, throughout the nineteenth century, they apply to these reformist movements rather than to a supposed group of esotericists.
The term "occult" has also been used as a substantivized adjective as "the occult", a term that has been particularly widely used among journalists and sociologists. This term was popularised by the publication of Colin Wilson's 1971 book "".
This term has been used as an "intellectual waste-basket" into which a wide array of beliefs and practices have been placed because they do not fit readily into the categories of religion or science. According to Hanegraaff, "the occult" is a category into which is placed a range of beliefs from "spirits or fairies to parapsychological experiments, from UFO-abductions to Oriental mysticism, from vampire legends to channelling, and so on".
The neologism "occulture" was used within the industrial music scene of the late twentieth century, and was probably coined by one of its central figures, the musician and occultist Genesis P-Orridge.
It was in this scene that the scholar of religion Christopher Partridge encountered the term.
Partridge used the term in an academic sense. He stated that occulture was "the new spiritual environment in the West; the reservoir feeding new spiritual springs; the soil in which new spiritualities are growing".
Recently, scholars have offered perspectives on the occult as intertwined with media and technology. Examples include the work of film and media theorist Jeffrey Sconce and religious studies scholar John Durham Peters, both of whom suggest that occult movements historically utilize media and apparatus as tools to reveal hidden aspects of reality or laws of nature. Erik Davis in his book "Techgnosis" gives an overview of occultism both ancient and modern from the perspective of cybernetics and information technologies. Philosopher Eugene Thacker discusses Agrippa's "occult philosophy" in his book "In The Dust Of This Planet", where he shows how the horror genre utilizes occult themes to reveal hidden realities.
Oklahoma
Oklahoma () is a state in the South Central region of the United States, bordered by the state of Texas on the south and west, Kansas on the north, Missouri on the northeast, Arkansas on the east, New Mexico on the west, and Colorado on the northwest. It is the 20th-most extensive and the 28th-most populous of the 50 United States. Its residents are known as Oklahomans (or colloquially, "Okies"), and its capital and largest city is Oklahoma City.
The state's name is derived from the Choctaw words "okla" and "humma", meaning "red people". It is also known informally by its nickname, "The Sooner State", in reference to the non-Native settlers who staked their claims on land before the official opening date of lands in the western Oklahoma Territory or before the Indian Appropriations Act of 1889, which increased European-American settlement in the eastern Indian Territory. Oklahoma Territory and Indian Territory were merged into the State of Oklahoma when it became the 46th state to enter the union on November 16, 1907.
With ancient mountain ranges, prairie, mesas, and eastern forests, most of Oklahoma lies in the Great Plains, Cross Timbers, and the U.S. Interior Highlands, all regions prone to severe weather. Oklahoma is on a confluence of three major American cultural regions and historically served as a route for cattle drives, a destination for Southern settlers, and a government-sanctioned territory for Native Americans. More than 25 Native American languages are spoken in Oklahoma.
A major producer of natural gas, oil, and agricultural products, Oklahoma relies on an economic base of aviation, energy, telecommunications, and biotechnology. Oklahoma City and Tulsa serve as Oklahoma's primary economic anchors, with nearly two-thirds of Oklahomans living within their metropolitan statistical areas.
The name "Oklahoma" comes from the Choctaw phrase "okla" "humma", literally meaning "red people". Choctaw Nation Chief Allen Wright suggested the name in 1866 during treaty negotiations with the federal government on the use of Indian Territory, in which he envisioned an all-Indian state controlled by the United States Superintendent of Indian Affairs. Equivalent to the English word "Indian", "okla humma" was a phrase in the Choctaw language that described Native American people as a whole. "Oklahoma" later became the de facto name for Oklahoma Territory, and it was officially approved in 1890, two years after the area was opened to white settlers.
In the Chickasaw language, the state is known as "Oklahomma'", and in Arapaho as "bo'oobe" (literally meaning "red earth").
Oklahoma is the 20th-largest state in the United States, covering about 69,900 square miles (181,000 km2) of land and water. It lies partly in the Great Plains near the geographical center of the 48 contiguous states. It is bounded on the east by Arkansas and Missouri, on the north by Kansas, on the northwest by Colorado, on the far west by New Mexico, and on the south and near-west by Texas.
Oklahoma is between the Great Plains and the Ozark Plateau in the Gulf of Mexico watershed, generally sloping from the high plains of its western boundary to the low wetlands of its southeastern boundary. Its highest and lowest points follow this trend, with its highest peak, Black Mesa, at 4,973 feet (1,516 m) above sea level, situated near its far northwest corner in the Oklahoma Panhandle. The state's lowest point is on the Little River near its far southeastern boundary near the town of Idabel, which dips to 289 feet (88 m) above sea level.
Among the most geographically diverse states, Oklahoma is one of four to harbor more than 10 distinct ecological regions, with 11 within its borders, more per square mile than in any other state. Its western and eastern halves, however, are marked by extreme differences in geographical diversity: eastern Oklahoma touches eight ecological regions and its western half contains three. Although it has fewer ecological regions, western Oklahoma contains many rare, relict species.
Oklahoma has four primary mountain ranges: the Ouachita Mountains, the Arbuckle Mountains, the Wichita Mountains, and the Ozark Mountains. Contained within the U.S. Interior Highlands region, the Ozark and Ouachita Mountains are the only major mountainous region between the Rocky Mountains and the Appalachians. A portion of the Flint Hills stretches into north-central Oklahoma, and near the state's eastern border, the Oklahoma Tourism & Recreation Department regards Cavanal Hill as the world's tallest hill; at 1,999 feet (609 m), it fails their definition of a mountain by one foot.
The semi-arid high plains in the state's northwestern corner harbor few natural forests; the region has a rolling to flat landscape with intermittent canyons and mesa ranges like the Glass Mountains. Partial plains interrupted by small, sky island mountain ranges like the Antelope Hills and the Wichita Mountains dot southwestern Oklahoma; transitional prairie and oak savannas cover the central portion of the state. The Ozark and Ouachita Mountains rise from west to east over the state's eastern third, gradually increasing in elevation in an eastward direction.
More than 500 named creeks and rivers make up Oklahoma's waterways, and with 200 lakes created by dams, it holds the nation's highest number of artificial reservoirs. Most of the state lies in two primary drainage basins belonging to the Red and Arkansas rivers, though the Lee and Little rivers also contain significant drainage basins.
Due to Oklahoma's location at the confluence of many geographic regions, the state's climatic regions have a high rate of biodiversity. Forests cover 24 percent of Oklahoma and prairie grasslands composed of shortgrass, mixed-grass, and tallgrass prairie, harbor expansive ecosystems in the state's central and western portions, although cropland has largely replaced native grasses. Where rainfall is sparse in the state's western regions, shortgrass prairie and shrublands are the most prominent ecosystems, though pinyon pines, red cedar (junipers), and ponderosa pines grow near rivers and creek beds in the panhandle's far western reaches. Southwestern Oklahoma contains many rare, disjunct species including sugar maple, bigtooth maple, nolina and southern live oak.
Marshlands, cypress forests and mixtures of shortleaf pine, loblolly pine, blue palmetto, and deciduous forests dominate the state's southeastern quarter, while mixtures of largely post oak, elm, red cedar ("Juniperus virginiana") and pine forests cover northeastern Oklahoma.
The state holds populations of white-tailed deer, mule deer, antelope, coyotes, mountain lions, bobcats, elk, and birds such as quail, doves, cardinals, bald eagles, red-tailed hawks, and pheasants. In prairie ecosystems, American bison, greater prairie chickens, badgers, and armadillo are common, and some of the nation's largest prairie dog towns inhabit shortgrass prairie in the state's panhandle. The Cross Timbers, a region transitioning from prairie to woodlands in Central Oklahoma, harbors 351 vertebrate species. The Ouachita Mountains are home to black bear, red fox, gray fox, and river otter populations, which coexist with 328 vertebrate species in southeastern Oklahoma. Also, in southeastern Oklahoma lives the American alligator.
Oklahoma has fifty state parks, six national parks or protected regions, two national protected forests or grasslands, and a network of wildlife preserves and conservation areas. Six percent of the state's 10 million acres (40,000 km2) of forest is public land, including the western portions of the Ouachita National Forest, the largest and oldest national forest in the Southern United States.
The Tallgrass Prairie Preserve in north-central Oklahoma is the largest protected area of tallgrass prairie in the world and is part of an ecosystem that encompasses only ten percent of its former land area, which once covered fourteen states. In addition, the Black Kettle National Grassland covers prairie in southwestern Oklahoma. The Wichita Mountains Wildlife Refuge, founded in 1901, is the oldest and largest of nine National Wildlife Refuges in the state.
Of Oklahoma's federally protected parks or recreational sites, the Chickasaw National Recreation Area is the largest. Other sites include the Santa Fe and Trail of Tears national historic trails, the Fort Smith and Washita Battlefield national historic sites, and the Oklahoma City National Memorial.
Oklahoma is in a humid subtropical region. Oklahoma lies in a transition zone between semi-arid further to the west, humid continental to the north, and humid subtropical to the east and southeast. Most of the state lies in an area known as Tornado Alley characterized by frequent interaction between cold, dry air from Canada, warm to hot, dry air from Mexico and the Southwestern U.S., and warm, moist air from the Gulf of Mexico. The interactions between these three contrasting air currents produces severe weather (severe thunderstorms, damaging thunderstorm winds, large hail and tornadoes) with a frequency virtually unseen anywhere else on planet Earth. An average 62 tornadoes strike the state per year—one of the highest rates in the world.
Because of Oklahoma's position between zones of differing prevailing temperature and winds, weather patterns within the state can vary widely over relatively short distances, and they can change drastically in a short time. On November 11, 1911, the temperature at Oklahoma City reached 83 °F (28 °C) (the record high for that date), then a cold front of unprecedented intensity slammed across the state, causing the temperature to plunge to 17 °F (−8 °C) (the record low for that date) by midnight. This type of phenomenon is also responsible for many of the tornadoes in the area, such as the 1912 Oklahoma tornado outbreak, when a warm front traveled along a stalled cold front, resulting in an average of about one tornado per hour.
The humid subtropical climate (Köppen "Cfa") of central, southern and eastern Oklahoma is influenced heavily by southerly winds bringing moisture from the Gulf of Mexico. Traveling westward, the climate transitions progressively toward a semi-arid zone (Köppen "BSk") in the high plains of the Panhandle and other western areas from about Lawton westward, less frequently touched by southern moisture. Precipitation and temperatures decline from east to west accordingly: the southeast is the warmest and wettest part of the state, while the higher-elevation panhandle is cooler and markedly drier.
Over almost all of Oklahoma, winter is the driest season. Average monthly precipitation increases dramatically in the spring to a peak in May, the wettest month over most of the state, with its frequent and not uncommonly severe thunderstorm activity. Early June can still be wet, but most years see a marked decrease in rainfall during June and early July. Mid-summer (July and August) represents a secondary dry season over much of Oklahoma, with long stretches of hot weather with only sporadic thunderstorm activity not uncommon many years. Severe drought is common in the hottest summers, such as those of 1934, 1954, 1980 and 2011, all of which featured weeks on end of virtual rainlessness and highs well over 100 °F (38 °C). Average precipitation rises again from September to mid-October, representing a secondary wetter season, then declines from late October through December.
The entire state frequently experiences temperatures above 100 °F (38 °C) or below 0 °F (−18 °C), though below-zero temperatures are rare in south-central and southeastern Oklahoma. Snowfall ranges from an average of less than 4 inches (10 cm) in the south to just over 20 inches (51 cm) on the border of Colorado in the panhandle. The state is home to the Storm Prediction Center, the National Severe Storms Laboratory, and the Warning Decision Training Division, all part of the National Weather Service and located in Norman.
Evidence suggests indigenous peoples traveled through Oklahoma as early as the last ice age. Ancestors of the Wichita, Kichai, Teyas, Escanjaques, and Caddo lived in what is now Oklahoma. Southern Plains villagers lived in the central and west of the state, with a subgroup, the Panhandle culture people living in the panhandle region. Caddoan Mississippian culture peoples lived in the eastern part of the state. Spiro Mounds, in what is now Spiro, Oklahoma, was a major Mississippian mound complex that flourished between AD 850 and 1450.
The Spaniard Francisco Vázquez de Coronado traveled through the state in 1541, but French explorers claimed the area in the 1700s. In the 18th century, Kiowa, Apache, and Comanche entered the region from the west and Quapaw and Osage peoples moved into what is now eastern Oklahoma. French colonists claimed the region until 1803, when all the French territory west of the Mississippi River was acquired by the United States in the Louisiana Purchase.
The territory now known as Oklahoma was first a part of the Arkansas Territory from 1819 until 1828.
During the 19th century, thousands of Native Americans were expelled from their ancestral homelands from across North America and transported to the area including and surrounding present-day Oklahoma. The Choctaw was the first of the Five Civilized Tribes to be removed from the Southeastern United States. The phrase "Trail of Tears" originated from a description of the removal of the Choctaw Nation in 1831, although the term is usually used for the Cherokee removal.
Seventeen thousand Cherokees and 2,000 of their black slaves were deported. The area, already occupied by Osage and Quapaw tribes, was set aside for the Choctaw Nation until revised Native American and later American policy redefined the boundaries to include other Native Americans. By 1890, more than 30 Native American nations and tribes had been concentrated on land within Indian Territory or "Indian Country".
All Five Civilized Tribes supported and signed treaties with the Confederate military during the American Civil War. The Cherokee Nation had an internal civil war. Slavery in Indian Territory was not abolished until 1866.
In the period between 1866 and 1899, cattle ranches in Texas strove to meet the demands for food in eastern cities and railroads in Kansas promised to deliver in a timely manner. Cattle trails and cattle ranches developed as cowboys either drove their product north or settled illegally in Indian Territory. In 1881, four of five major cattle trails on the western frontier traveled through Indian Territory.
Increased presence of white settlers in Indian Territory prompted the United States Government to establish the Dawes Act in 1887, which divided the lands of individual tribes into allotments for individual families, encouraging farming and private land ownership among Native Americans but expropriating land to the federal government. In the process, railroad companies took nearly half of Indian-held land within the territory for outside settlers and for purchase.
Major land runs, including the Land Run of 1889, were held for settlers where certain territories were opened to settlement starting at a precise time. Usually land was open to settlers on a first come first served basis. Those who broke the rules by crossing the border into the territory before the official opening time were said to have been crossing the border "sooner", leading to the term "sooners", which eventually became the state's official nickname.
Deliberations to make the territory into a state began near the end of the 19th century, when the Curtis Act continued the allotment of Indian tribal land.
Attempts to create an all-Indian state named "Oklahoma" and a later attempt to create an all-Indian state named "Sequoyah" failed, but the Sequoyah Statehood Convention of 1905 eventually laid the groundwork for the Oklahoma Statehood Convention, which took place two years later. On June 16, 1906, Congress enacted a statute authorizing the people of the Oklahoma and Indian Territories (as well as what would become the states of Arizona and New Mexico) to form a constitution and state government in order to be admitted as a state. On November 16, 1907, President Theodore Roosevelt issued a presidential proclamation establishing Oklahoma as the 46th state in the Union.
The new state became a focal point for the emerging oil industry, as discoveries of oil pools prompted towns to grow rapidly in population and wealth. Tulsa eventually became known as the "Oil Capital of the World" for most of the 20th century and oil investments fueled much of the state's early economy. In 1927, Oklahoman businessman Cyrus Avery, known as the "Father of Route 66", began the campaign to create U.S. Route 66. Using a stretch of highway from Amarillo, Texas to Tulsa, Oklahoma to form the original portion of Highway 66, Avery spearheaded the creation of the U.S. Highway 66 Association to oversee the planning of Route 66, based in his hometown of Tulsa.
Oklahoma also has a rich African-American history. Many black towns thrived in the early 20th century because of black settlers moving from neighboring states, especially Kansas. The politician Edward P. McCabe encouraged black settlers to come to what was then Indian Territory. He discussed with President Theodore Roosevelt the possibility of making Oklahoma a majority-black state.
By the early 20th century, the Greenwood neighborhood of Tulsa was one of the most prosperous African-American communities in the United States. Jim Crow laws had established racial segregation since before the start of the 20th century, but Greenwood's black residents had created a thriving area.
Social tensions were exacerbated by the revival of the Ku Klux Klan after 1915. The Tulsa race massacre broke out in 1921, with whites attacking blacks. In one of the costliest episodes of racial violence in American history, sixteen hours of rioting resulted in 35 city blocks destroyed, $1.8 million in property damage, and a death toll estimated to be as high as 300 people. By the late 1920s, the Ku Klux Klan had declined to negligible influence within the state.
During the 1930s, parts of the state began suffering the consequences of poor farming practice. This period was known as the Dust Bowl, throughout which areas of Kansas, Texas, New Mexico and northwestern Oklahoma were hampered by long periods of little rainfall, strong winds, and abnormally high temperatures, sending thousands of farmers into poverty and forcing them to relocate to more fertile areas of the western United States. Over a twenty-year period ending in 1950, the state saw its only historical decline in population, dropping 6.9 percent as impoverished families migrated out of the state after the Dust Bowl.
Soil and water conservation projects markedly changed practices in the state and led to the construction of massive flood control systems and dams; they built hundreds of reservoirs and man-made lakes to supply water for domestic needs and agricultural irrigation. By the 1960s, Oklahoma had created more than 200 lakes, the most in the nation.
In 1995, Oklahoma City was the site of the most destructive act of domestic terrorism in American history. The Oklahoma City bombing of April 19, 1995, in which Timothy McVeigh detonated a large, crude explosive device outside the Alfred P. Murrah Federal Building, killed 168 people, including 19 children. For his crime, McVeigh was executed by the federal government on June 11, 2001. His accomplice, Terry Nichols, is serving life in prison without parole for helping plan the attack and prepare the explosive.
On May 31, 2016, several cities experienced record-setting flooding.
The United States Census Bureau estimates Oklahoma's population was 3,956,971 on July 1, 2019, a 5.48% increase since the 2010 United States Census.
At the 2010 Census, 68.7% of the population was non-Hispanic white, down from 88% in 1970, 7.3% non-Hispanic Black or African American, 8.2% non-Hispanic American Indian and Alaska Native, 1.7% non-Hispanic Asian, 0.1% non-Hispanic Native Hawaiian and Other Pacific Islander, 0.1% from some other race (non-Hispanic) and 5.1% of two or more races (non-Hispanic). 8.9% of Oklahoma's population was of Hispanic, Latino, or Spanish origin (they may be of any race).
Among Oklahomans younger than age one, 47.3% were minorities, meaning they had at least one parent who was not non-Hispanic white.
The state had the second-highest number of Native Americans in 2002, estimated at 395,219, as well as the second-highest percentage among all states.
In 2011, U.S. Census Bureau American Community Survey data from 2005 to 2009 indicated about 5% of Oklahoma's residents were born outside the United States. This is lower than the national figure (about 12.5% of U.S. residents were foreign-born).
The center of population of Oklahoma is in Lincoln County near the town of Sparks.
The state's 2006 per capita personal income ranked 37th at $32,210, though it has the third-fastest-growing per capita income in the nation and ranks consistently among the lowest states in cost of living index. The Oklahoma City suburb Nichols Hills is first on Oklahoma locations by per capita income at $73,661, though Tulsa County holds the highest average. In 2011, 7.0% of Oklahomans were under the age of 5, 24.7% under 18, and 13.7% were 65 or older. Females made up 50.5% of the population.
The state is in the U.S. Census' Southern region. According to the 2010 United States Census, Oklahoma is the 28th-most populous state and the 19th-largest by land area. Oklahoma is divided into 77 counties and contains 597 incorporated municipalities consisting of cities and towns.
In Oklahoma, cities are incorporated communities with populations of 1,000 or more that have incorporated as cities; towns are limited to a town-board type of municipal government. Cities may choose among aldermanic, mayoral, council-manager, and home-rule charter types of government. Cities may also petition to incorporate as towns.
The English language has been official in the state of Oklahoma since 2010. The variety of North American English spoken is called Oklahoma English, and this dialect is quite diverse with its uneven blending of features of North Midland, South Midland, and Southern dialects. In 2000, 2,977,187 Oklahomans—92.6% of the resident population five years or older—spoke only English at home, a decrease from 95% in 1990. 238,732 Oklahoma residents reported speaking a language other than English in the 2000 census, about 7.4% of the state's population.
The two most commonly spoken native North American languages are Cherokee and Choctaw with 10,000 Cherokee speakers living within the Cherokee Nation tribal jurisdiction area of eastern Oklahoma, and another 10,000 Choctaw speakers living in the Choctaw Nation directly south of the Cherokees. Cherokee is an official language in the Cherokee Nation tribal jurisdiction area and in the United Keetoowah Band of Cherokee Indians.
In addition to Cherokee, more than 25 Native American languages are spoken in Oklahoma, second only to California. However, only Cherokee exhibits any significant language vitality at present, and Ethnologue classifies even Cherokee as moribund, because the only remaining active users of the language are members of the grandparent generation and older.
Spanish is the second-most commonly spoken language in the state, with 141,060 speakers counted in 2000. German has 13,444 speakers representing about 0.4% of the state's population, and Vietnamese is spoken by 11,330 people, or about 0.4% of the population, many of whom live in the Asia District of Oklahoma City. Other languages include French with 8,258 speakers (0.3%), Chinese with 6,413 (0.2%), Korean with 3,948 (0.1%), Arabic with 3,265 (0.1%), other Asian languages with 3,134 (0.1%), Tagalog with 2,888 (0.1%), Japanese with 2,546 (0.1%), and African languages with 2,546 (0.1%).
Oklahoma is part of a geographical region characterized by conservative and Evangelical Christianity known as the "Bible Belt". Spanning the southern and eastern parts of the United States, the area is known for politically and socially conservative views, with the Republican Party having the greater number of voters registered between the two parties. Tulsa, the state's second-largest city, home to Oral Roberts University, is sometimes called the "buckle of the Bible Belt".
According to the Pew Research Center, the majority of Oklahoma's religious adherents are Christian, accounting for about 80 percent of the population. The percentage of Catholics is half the national average, while the percentage of Evangelical Protestants is more than twice the national average (tied with Arkansas for the largest percentage of any state).
In 2010, the state's largest church memberships were in the Southern Baptist Convention (886,394 members), the United Methodist Church (282,347), the Roman Catholic Church (178,430), the Assemblies of God (85,926), and The Church of Jesus Christ of Latter-day Saints (47,349). Other religions represented in the state include Buddhism, Hinduism, and Islam.
In 2000, there were about 5,000 Jews and 6,000 Muslims in the state, each group with about ten congregations.
Oklahoma has been described as "the world's prison capital": 1,079 of every 100,000 residents were imprisoned in 2018, the highest incarceration rate of any state and, by comparison, higher than that of any country in the world.
Oklahoma hosts a diverse range of sectors, including aviation, energy, transportation equipment, food processing, electronics, and telecommunications. Oklahoma is an important producer of natural gas, aircraft, and food. The state ranks third in the nation in natural gas production, is the 27th-most agriculturally productive state, and ranks fifth in wheat production. Four Fortune 500 companies and six Fortune 1000 companies are headquartered in Oklahoma, and it has been rated one of the most business-friendly states in the nation, with the seventh-lowest tax burden in 2007.
In 2010, Oklahoma City-based Love's Travel Stops & Country Stores ranked 18th on the Forbes list of largest private companies, Tulsa-based QuikTrip ranked 37th, and Oklahoma City-based Hobby Lobby ranked 198th. Oklahoma's gross domestic product grew from $131.9 billion in 2006 to $147.5 billion in 2010, a jump of 10.6 percent. Oklahoma's gross domestic product per capita was $35,480 in 2010, ranked 40th among the states.
Though oil has historically dominated the state's economy, a collapse in the energy industry during the 1980s led to the loss of nearly 90,000 energy-related jobs between 1980 and 2000, severely damaging the local economy. Oil accounted for $35 billion in Oklahoma's economy in 2007, and employment in the state's oil industry was outpaced by five other industries that year. The state's most recently reported unemployment rate is 4.4%.
In mid-2011, Oklahoma had a civilian labor force of 1.7 million and non-farm employment fluctuated around 1.5 million. The government sector provides the most jobs, with 339,300 in 2011, followed by the transportation and utilities sector, providing 279,500 jobs, and the sectors of education, business, and manufacturing, providing 207,800, 177,400, and 132,700 jobs, respectively. Among the state's largest industries, the aerospace sector generates $11 billion annually.
Tulsa is home to the largest airline maintenance base in the world, which serves as the global maintenance and engineering headquarters for American Airlines. In total, aerospace accounts for more than 10 percent of Oklahoma's industrial output, and it is one of the top 10 states in aerospace engine manufacturing. Because of its position in the center of the United States, Oklahoma is also among the top states for logistic centers, and a major contributor to weather-related research.
The state is the top manufacturer of tires in North America and contains one of the fastest-growing biotechnology industries in the nation. In 2005, international exports from Oklahoma's manufacturing industry totaled $4.3 billion, accounting for 3.6 percent of its economic impact. Tire manufacturing, meat processing, oil and gas equipment manufacturing, and air conditioner manufacturing are the state's largest manufacturing industries.
Oklahoma is the nation's third-largest producer of natural gas and its fifth-largest producer of crude oil. The state also has the second-greatest number of active drilling rigs and ranks fifth in crude oil reserves. While the state ranked eighth for installed wind energy capacity in 2011, it is near the bottom of states in usage of renewable energy, with 94% of its electricity generated by non-renewable sources in 2009, including 25% from coal and 46% from natural gas. Oklahoma has no nuclear power. Ranking 13th for total energy consumption per capita in 2009, the state had the eighth-lowest energy costs in the nation.
As a whole, the oil energy industry contributes $35 billion to Oklahoma's gross domestic product (GDP), and employees of the state's oil-related companies earn an average of twice the state's typical yearly income. In 2009, the state had 83,700 commercial oil wells producing crude oil. Eight and a half percent of the nation's natural gas supply is held in Oklahoma.
The Oklahoma STACK play is a geographically referenced area in the Anadarko Basin. The "Sooner Trend" oil field, the Anadarko Basin, and the counties of Kingfisher and Canadian form the basis of the "Oklahoma STACK" name. Other plays, such as the Eagle Ford, are geological rather than geographical.
According to "Forbes" magazine, Oklahoma City-based Devon Energy Corporation, Chesapeake Energy Corporation, and SandRidge Energy Corporation are the largest private oil-related companies in the nation, and all Oklahoma's Fortune 500 companies are energy-related. Tulsa's ONEOK and Williams Companies are the state's largest and second-largest companies respectively, also ranking as the nation's second- and third-largest companies in the field of energy, according to "Fortune" magazine. The magazine also placed Devon Energy as the second-largest company in the mining and crude oil-producing industry in the nation, while Chesapeake Energy ranks seventh respectively in that sector and Oklahoma Gas & Electric ranks as the 25th-largest gas and electric utility company.
Oklahoma Gas & Electric, commonly referred to as OG&E (NYSE: OGE) operates four base electric power plants in Oklahoma. Two of them are coal-fired power plants: one in Muskogee, and the other in Red Rock. Two are gas-fired power plants: one in Harrah and the other in Konawa. OG&E was the first electric company in Oklahoma to generate electricity from wind farms in 2003.
The 27th-most agriculturally productive state, Oklahoma is fifth in cattle production and fifth in production of wheat. Approximately 5.5 percent of American beef comes from Oklahoma, while the state produces 6.1 percent of American wheat, 4.2 percent of American pig products, and 2.2 percent of dairy products.
The state had 85,500 farms in 2012, collectively producing $4.3 billion in animal products and fewer than one billion dollars in crop output with more than $6.1 billion added to the state's gross domestic product. Poultry and swine are its second- and third-largest agricultural industries.
With an educational system made up of public school districts and independent private institutions, Oklahoma had 638,817 students enrolled in 1,845 public primary, secondary, and vocational schools in 533 school districts. Oklahoma has the highest enrollment of Native American students in the nation, with 126,078 students in the 2009–10 school year. Oklahoma spent $7,755 per student in 2008 and ranked 47th in the nation in expenditures per student, though its growth in total education expenditures between 1992 and 2002 ranked 22nd.
The state is among the best in pre-kindergarten education, and the National Institute for Early Education Research rated it first in the United States with regard to standards, quality, and access to pre-kindergarten education in 2004, calling it a model for early childhood schooling. The high school dropout rate decreased from 3.1 to 2.5 percent between 2007 and 2008, placing Oklahoma among the 18 states with a dropout rate of 3 percent or less. In 2004, the state ranked 36th in the nation for the relative number of adults with high school diplomas, though at 85.2 percent, it had the highest rate among Southern states. According to a study conducted by the Pell Institute, Oklahoma ranks 48th in college participation for low-income students.
The University of Oklahoma, Oklahoma State University, the University of Central Oklahoma, and Northeastern State University are the largest public institutions of higher education in Oklahoma, each operating through one primary campus and satellite campuses throughout the state. The two flagship state universities, along with Oklahoma City University and the University of Tulsa, rank among the country's best in undergraduate business programs.
Oklahoma City University School of Law, the University of Oklahoma College of Law, and the University of Tulsa College of Law are the state's only ABA-accredited law schools. Both the University of Oklahoma and the University of Tulsa are Tier 1 institutions, with the University of Oklahoma ranked 68th and the University of Tulsa ranked 86th in the nation.
Oklahoma has eleven public regional universities, including Northeastern State University, the second-oldest institution of higher education west of the Mississippi River, which houses the only College of Optometry in Oklahoma and has the largest enrollment of Native American students in the nation, both by percentage and by number. Langston University is Oklahoma's only historically black college. Six of the state's universities were placed in the Princeton Review's list of best 122 regional colleges in 2007, and three made the list of top colleges for best value. The state has 55 post-secondary technical institutions operated by Oklahoma's CareerTech program for training in specific fields of industry or trade.
In the 2007–2008 school year, there were 181,973 undergraduate students, 20,014 graduate students, and 4,395 first-professional degree students enrolled in Oklahoma colleges. Of these students, 18,892 received a bachelor's degree, 5,386 received a master's degree, and 462 received a first professional degree. This means the state of Oklahoma produces an average of 38,278 degree-holders per completions component (i.e., July 1, 2007 to June 30, 2008). The national average is 68,322 total degrees awarded per completions component.
Beginning on April 2, 2018, tens of thousands of K–12 public school teachers went on strike over lack of funding. According to the National Education Association, teachers in Oklahoma ranked 49th out of the 50 states in teacher pay in 2016. The Oklahoma Legislature had passed a measure a week earlier to raise teacher salaries by $6,100, but it fell short of the $10,000 raise for teachers, $5,000 raise for other school employees, and $200 million increase in extra education funding many had sought. A 2019 survey found that the pay raise obtained by the strike lifted the state's teacher pay ranking to 34th in the nation.
The Cherokee Nation initiated a ten-year plan that involved raising new speakers of the Cherokee language from childhood and speaking the language exclusively at home. The plan was part of an ambitious goal of achieving at least 80% fluency among the Cherokee people within fifty years. The Cherokee Preservation Foundation has invested $3 million into opening schools, training teachers, and developing curricula for language education, as well as initiating community gatherings where the language can be actively used.
A Cherokee language immersion school in Tahlequah, Oklahoma educates students from pre-school through eighth grade.
Oklahoma is placed in the South by the United States Census Bureau, but other definitions place the state at least partly in the Southwest, Midwest, Upland South, and Great Plains. Oklahomans have a high rate of English, Scotch-Irish, German, and Native American ancestry, with 25 different native languages spoken.
Because many Native Americans were forced to move to Oklahoma when White settlement in North America increased, Oklahoma has much linguistic diversity. Mary Linn, an associate professor of anthropology at the University of Oklahoma and the associate curator of Native American languages at the Sam Noble Museum, notes Oklahoma also has high levels of language endangerment.
Sixty-seven Native American tribes are represented in Oklahoma, including 39 federally recognized tribes, who are headquartered and have tribal jurisdictional areas in the state. Western ranchers, Native American tribes, Southern settlers, and eastern oil barons have shaped the state's cultural predisposition, and its largest cities have been named among the most underrated cultural destinations in the United States.
Residents of Oklahoma are associated with traits of Southern hospitality—the 2006 Catalogue for Philanthropy (with data from 2004) ranks Oklahomans 7th in the nation for overall generosity. The state has also been associated with a negative cultural stereotype first popularized by John Steinbeck's novel "The Grapes of Wrath", which described the plight of uneducated, poverty-stricken Dust Bowl-era farmers deemed "Okies". However, the term is often used in a positive manner by Oklahomans.
In the state's largest urban areas, pockets of jazz culture flourish, and Native American, Mexican American, and Asian American communities produce music and art of their respective cultures. The Oklahoma Mozart Festival in Bartlesville is one of the largest classical music festivals on the southern plains, and Oklahoma City's Festival of the Arts has been named one of the top fine arts festivals in the nation.
The state has a rich history in ballet with five Native American ballerinas attaining worldwide fame. These were Yvonne Chouteau, sisters Marjorie and Maria Tallchief, Rosella Hightower and Moscelyne Larkin, known collectively as the Five Moons. "The New York Times" rates the Tulsa Ballet as one of the top ballet companies in the United States. The Oklahoma City Ballet and University of Oklahoma's dance program were formed by ballerina Yvonne Chouteau and husband Miguel Terekhov. The University program was founded in 1962 and was the first fully accredited program of its kind in the United States.
In Sand Springs, an outdoor amphitheater called "Discoveryland!" is the official performance headquarters for the musical "Oklahoma!" Ridge Bond, a native of McAlester, Oklahoma, starred in the Broadway and international touring productions of "Oklahoma!", playing the role of Curly McLain in more than 2,600 performances. In 1953 he was featured along with the "Oklahoma!" cast on a CBS Omnibus television broadcast. Bond was instrumental in the "Oklahoma!" title song becoming the Oklahoma state song, and is also featured on the U.S. postage stamp commemorating the musical's 50th anniversary. Historically, the state has produced musical styles such as the Tulsa Sound and western swing, which was popularized at Cain's Ballroom in Tulsa. The building, known as the "Carnegie Hall of Western Swing", served as the performance headquarters of Bob Wills and the Texas Playboys during the 1930s. Stillwater is known as the epicenter of Red Dirt music, whose best-known proponent was the late Bob Childers.
Prominent theatre companies in Oklahoma include, in the capital city, Oklahoma City Theatre Company, Carpenter Square Theatre, Oklahoma Shakespeare in the Park, and CityRep. CityRep is a professional company that offers Equity points to performers and technical theatre professionals. In Tulsa, Oklahoma's oldest resident professional company is American Theatre Company, and Theatre Tulsa is the oldest community theatre company west of the Mississippi. Other companies in Tulsa include Heller Theatre and Tulsa Spotlight Theater. The cities of Norman, Lawton, and Stillwater, among others, also host well-reviewed community theatre companies.
Oklahoma is in the nation's middle percentile in per capita spending on the arts, ranking 17th, and contains more than 300 museums. The Philbrook Museum of Tulsa is considered one of the top 50 fine art museums in the United States, and the Sam Noble Oklahoma Museum of Natural History in Norman, one of the largest university-based art and history museums in the country, documents the natural history of the region. The collections of Thomas Gilcrease are housed in the Gilcrease Museum of Tulsa, which also holds the world's largest, most comprehensive collection of art and artifacts of the American West.
The Egyptian art collection at the Mabee-Gerrer Museum of Art in Shawnee is considered to be the finest Egyptian collection between Chicago and Los Angeles. The Oklahoma City Museum of Art contains the most comprehensive collection of glass sculptures by artist Dale Chihuly in the world, and Oklahoma City's National Cowboy & Western Heritage Museum documents the heritage of the American Western frontier. With remnants of the Holocaust and artifacts relevant to Judaism, the Sherwin Miller Museum of Jewish Art of Tulsa preserves the largest collection of Jewish art in the Southwest United States.
Oklahoma's centennial celebration was named the top event in the United States for 2007 by the American Bus Association, and consisted of multiple celebrations culminating with the 100th anniversary of statehood on November 16, 2007. Annual ethnic festivals and events take place throughout the state, such as Native American powwows and ceremonial events, as well as festivals in Scottish, Irish, German, Italian, Vietnamese, Chinese, Czech, Jewish, Arab, Mexican, and African-American communities celebrating their cultural heritage or traditions.
Oklahoma City is home to several recurring events and festivals. During its ten-day run, the State Fair of Oklahoma attracts roughly one million people; the annual Festival of the Arts is another major draw. Large national powwows, various Latin and Asian heritage festivals, and cultural festivals such as Juneteenth celebrations are held in Oklahoma City each year. The Oklahoma City Pride Parade has been held annually in late June since 1987 in the city's gay district at 39th and Penn. The First Friday Art Walk in the Paseo Arts District is an art appreciation festival held the first Friday of every month, and an annual art festival is held in the Paseo on Memorial Day weekend.
The Tulsa State Fair attracts more than a million people each year during its ten-day run, and the city's Mayfest festival entertained more than 375,000 in four days during 2007. In 2006, Tulsa's Oktoberfest was named one of the top 10 in the world by "USA Today" and one of the top German food festivals in the nation by "Bon Appétit" magazine.
Norman plays host to the Norman Music Festival, which highlights native Oklahoma bands and musicians. Norman also hosts the Medieval Fair of Norman, held annually since 1976 and Oklahoma's first medieval fair. The fair was first held on the south oval of the University of Oklahoma campus, moved in its third year to the Duck Pond in Norman, and, after growing too big, moved to Reaves Park in 2003. The Medieval Fair of Norman is Oklahoma's largest weekend event and the third-largest event in the state, and was selected by Events Media Network as one of the top 100 events in the nation.
Oklahoma has teams in basketball, football, arena football, baseball, soccer, hockey, and wrestling in Oklahoma City, Tulsa, Enid, Norman, and Lawton. The Oklahoma City Thunder of the National Basketball Association (NBA) is the state's only major league sports franchise. The state had a team in the Women's National Basketball Association, the Tulsa Shock, from 2010 through 2015, but the team relocated to Dallas–Fort Worth after that season and became the Dallas Wings.
Oklahoma has teams in several minor leagues, including Minor League Baseball at the AAA and AA levels (the Oklahoma City Dodgers and Tulsa Drillers, respectively), hockey's ECHL with the Tulsa Oilers, and a number of indoor football leagues. In indoor football, the state's most notable team was the Tulsa Talons, which played in the Arena Football League until 2012, when the team moved to San Antonio. The Oklahoma Defenders replaced the Talons as Tulsa's only professional arena football team, playing in the CPIFL. The Oklahoma City Blue of the NBA G League relocated from Tulsa to Oklahoma City in 2014; the team was formerly known as the Tulsa 66ers. Tulsa is the base of the Tulsa Revolution, which plays in the American Indoor Soccer League. Enid and Lawton host professional basketball teams in the USBL and the CBA.
The NBA's New Orleans Hornets became the first major league sports franchise based in Oklahoma when the team was forced to relocate to Oklahoma City's Ford Center, now known as Chesapeake Energy Arena, for two seasons following Hurricane Katrina in 2005. In July 2008, the Seattle SuperSonics relocated to Oklahoma City and began to play at the Ford Center as the Oklahoma City Thunder, becoming the state's first permanent major league franchise.
Collegiate athletics are a popular draw in the state. The state has four schools that compete at the highest level of college sports, NCAA Division I. The most prominent are the state's two members of the Big 12 Conference, one of the so-called Power Five conferences of the top tier of college football, Division I FBS. The University of Oklahoma and Oklahoma State University average well over 50,000 fans attending their football games, and Oklahoma's football program ranked 12th in attendance among American colleges in 2010, with an average of 84,738 people attending its home games. The two universities meet several times each year in rivalry matches known as the Bedlam Series, which are some of the greatest sporting draws to the state. "Sports Illustrated" magazine rates Oklahoma and Oklahoma State among the top colleges for athletics in the nation.
Two private institutions in Tulsa, the University of Tulsa and Oral Roberts University, are also Division I members. Tulsa competes in FBS football and other sports in the American Athletic Conference, while Oral Roberts, which does not sponsor football, is a member of the Summit League. In addition, 12 of the state's smaller colleges and universities compete in NCAA Division II as members of three different conferences, and eight other Oklahoma institutions participate in the NAIA, mostly within the Sooner Athletic Conference.
Regular LPGA tournaments are held at Cedar Ridge Country Club in Tulsa, and major championships for the PGA or LPGA have been played at Southern Hills Country Club in Tulsa, Oak Tree Country Club in Oklahoma City, and Cedar Ridge Country Club. Rated one of the top golf courses in the nation, Southern Hills has hosted four PGA Championships, including one in 2007, and three U.S. Opens, the most recent in 2001. Rodeos are popular throughout the state, and Guymon, in the state's panhandle, hosts one of the largest in the nation.
Oklahoma was the 21st-largest recipient of medical funding from the federal government in 2005, with health-related federal expenditures in the state totaling $75,801,364; immunizations, bioterrorism preparedness, and health education were the top three most funded medical items. Instances of major diseases are near the national average in Oklahoma, and the state ranks at or slightly above the rest of the country in percentage of people with asthma, diabetes, cancer, and hypertension.
In 2000, Oklahoma ranked 45th in physicians per capita and slightly below the national average in nurses per capita, but was slightly above the national average in hospital beds per 100,000 people and above the national average in net growth of health services over a twelve-year period. Oklahoma is one of the worst states for the share of residents with health insurance: nearly 25 percent of Oklahomans between the ages of 18 and 64 did not have health insurance in 2005, the fifth-highest rate in the nation.
Oklahomans are in the upper half of Americans in terms of obesity prevalence, and the state is the 5th most obese in the nation, with 30.3 percent of its population at or near obesity. Oklahoma ranked last among the 50 states in a 2007 study by the Commonwealth Fund on health care performance.
The OU Medical Center, Oklahoma's largest collection of hospitals, is the only hospital in the state designated a Level I trauma center by the American College of Surgeons. OU Medical Center is on the grounds of the Oklahoma Health Center in Oklahoma City, the state's largest concentration of medical research facilities.
The Cancer Treatment Centers of America at Southwestern Regional Medical Center in Tulsa is one of four such regional facilities nationwide, offering cancer treatment to the entire southwestern United States, and is one of the largest cancer treatment hospitals in the country. The largest osteopathic teaching facility in the nation, Oklahoma State University Medical Center at Tulsa, also rates as one of the largest facilities in the field of neuroscience.
On June 26, 2018, Oklahoma made marijuana legal for medical purposes. This was a milestone for a state in the Bible Belt.
Oklahoma City and Tulsa are the 45th- and 61st-largest media markets in the United States as ranked by Nielsen Media Research. The state's third-largest media market, Lawton-Wichita Falls, Texas, is ranked 149th nationally by the agency. Broadcast television in Oklahoma began in 1949 when KFOR-TV (then WKY-TV) in Oklahoma City and KOTV-TV in Tulsa began broadcasting a few months apart. Currently, all major American broadcast networks have affiliated television stations in the state.
The state has two primary newspapers. "The Oklahoman", based in Oklahoma City, is the largest newspaper in the state and 54th-largest in the nation by circulation, with a weekday readership of 138,493 and a Sunday readership of 202,690. The "Tulsa World", the second-most widely circulated newspaper in Oklahoma and 79th in the nation, holds a Sunday circulation of 132,969 and a weekday readership of 93,558. Oklahoma's first newspaper was established in 1844, called the "Cherokee Advocate", and was written in both Cherokee and English. In 2006, there were more than 220 newspapers in the state, including 177 with weekly publications and 48 with daily publications.
The state's first radio station, WKY in Oklahoma City, signed on in 1920, followed by KRFU in Bristow, which later moved to Tulsa and became KVOO in 1927. In 2006, there were more than 500 radio stations in Oklahoma broadcasting with various locally or nationally owned networks. Five universities in Oklahoma operate non-commercial, public radio stations or networks.
Oklahoma has a few ethnic-oriented TV stations broadcasting in Spanish and Asian languages, and there is some Native American programming. TBN, a Christian religious television network, has a studio in Tulsa, and built its first entirely TBN-owned affiliate in Oklahoma City in 1980.
Transportation in Oklahoma is anchored by a system of Interstate Highways, inter-city rail lines, airports, inland ports, and mass transit networks. Situated at an integral point in the United States Interstate network, Oklahoma contains three primary and four auxiliary Interstate Highways. In Oklahoma City, Interstate 35 intersects with Interstate 44 and Interstate 40, forming one of the most important intersections in the United States highway system.
The state's major highway skeleton comprises state-operated highways, ten turnpikes or major toll roads, and the longest drivable stretch of Route 66 in the nation. In 2008, Interstate 44 in Oklahoma City was Oklahoma's busiest highway, with a daily traffic volume of 123,300 cars. In 2010, the state had the nation's third-highest number of bridges classified as structurally deficient, with some 5,212 bridges in disrepair, including 235 National Highway System bridges.
Oklahoma's largest commercial airport is Will Rogers World Airport in Oklahoma City, averaging a yearly passenger count of more than 3.5 million (1.7 million boardings) in 2010. Tulsa International Airport, the state's second-largest commercial airport, served more than 1.3 million boardings in 2010. Between the two, six airlines operate in Oklahoma. In terms of traffic, R. L. Jones Jr. (Riverside) Airport in Tulsa is the state's busiest airport, with 335,826 takeoffs and landings in 2008. Oklahoma has more than 150 public-use airports.
Oklahoma is connected to the nation's rail network via Amtrak's "Heartland Flyer", its only regional passenger rail line. It currently stretches from Oklahoma City to Fort Worth, Texas, though lawmakers began seeking funding in early 2007 to connect the "Heartland Flyer" to Tulsa.
Two inland ports on rivers serve Oklahoma: the Port of Muskogee and the Tulsa Port of Catoosa. The state's only port handling international cargo, the Tulsa Port of Catoosa is the most inland ocean-going port in the nation and ships over two million tons of cargo each year. Both ports are on the McClellan–Kerr Arkansas River Navigation System, which connects barge traffic from Tulsa and Muskogee to the Mississippi River via the Verdigris and Arkansas rivers, contributing to one of the busiest waterways in the world.
Oklahoma is a constitutional republic with a government modeled after the federal government of the United States, with executive, legislative, and judicial branches. The state has 77 counties with jurisdiction over most local government functions within each respective domain, five congressional districts, and a voting base with a plurality in the Republican Party. State officials are elected by plurality voting.
Oklahoma has capital punishment as a legal sentence, and from 1976 through mid-2011 the state had the highest per capita execution rate in the nation.
The Legislature of Oklahoma consists of the Senate and the House of Representatives. As the lawmaking branch of the state government, it is responsible for raising and distributing the money necessary to run the government. The Senate has 48 members serving four-year terms, while the House has 101 members with two-year terms. The state has a term limit for its legislature that restricts any one person to twelve cumulative years of service between the two legislative chambers.
Oklahoma's judicial branch consists of the Oklahoma Supreme Court, the Oklahoma Court of Criminal Appeals, and 77 District Courts that each serve one county. The Oklahoma judiciary also contains two independent courts: a Court of Impeachment and the Oklahoma Court on the Judiciary. Oklahoma has two courts of last resort: the state Supreme Court hears civil cases, and the state Court of Criminal Appeals hears criminal cases (this split system exists only in Oklahoma and neighboring Texas). Judges of those two courts, as well as the Court of Civil Appeals are appointed by the Governor upon the recommendation of the state Judicial Nominating Commission, and are subject to a non-partisan retention vote on a six-year rotating schedule.
The executive branch consists of the Governor, their staff, and other elected officials. The principal head of government, the Governor is the chief executive of the Oklahoma executive branch, serving as the ex officio Commander-in-chief of the Oklahoma National Guard when not called into Federal use and reserving the power to veto bills passed through the Legislature. The responsibilities of the Executive branch include submitting the budget, ensuring state laws are enforced, and ensuring peace within the state is preserved.
The state is divided into 77 counties that govern locally, each headed by a three-member council of elected commissioners, a tax assessor, clerk, court clerk, treasurer, and sheriff. While each municipality operates as a separate and independent local government with executive, legislative, and judicial power, county governments maintain jurisdiction over both incorporated cities and unincorporated areas within their boundaries, and have executive power but no legislative or judicial power. Both county and municipal governments collect taxes, employ a separate police force, hold elections, and operate emergency response services within their jurisdiction. Other local government units include school districts, technology center districts, community college districts, rural fire departments, rural water districts, and other special-use districts.
Thirty-nine Native American tribal governments are based in Oklahoma, each holding limited powers within designated areas. While Indian reservations typical in most of the United States are not present in Oklahoma, tribal governments hold land granted during the Indian Territory era, but with limited jurisdiction and no control over state governing bodies such as municipalities and counties. Tribal governments are recognized by the United States as quasi-sovereign entities with executive, judicial, and legislative powers over tribal members and functions, but are subject to the authority of the United States Congress to revoke or withhold certain powers. The tribal governments are required to submit a constitution and any subsequent amendments to the United States Congress for approval.
Oklahoma has 11 substate districts including the two large Councils of Governments, INCOG in Tulsa (Indian Nations Council of Governments) and ACOG (Association of Central Oklahoma Governments).
During the first half-century of statehood, Oklahoma was considered a Democratic stronghold, carried by the Republican Party in only two presidential elections (1920 and 1928). After the 1948 election, the state turned firmly Republican: although registered Republicans were a minority in the state until 2015, Oklahoma has been carried by Republican presidential candidates in every election since 1952 except one (1964).
Generally, Republicans are strongest in the suburbs of Oklahoma City and Tulsa, as well as the Panhandle. Democrats are strongest in the eastern part of the state and Little Dixie, as well as the most heavily African American and inner parts of Oklahoma City and Tulsa. Native Americans, who make up 8.6% of the state's population, mostly vote Democratic, by margins exceeded only by African Americans.
Following the 2000 census, the Oklahoma delegation to the U.S. House of Representatives was reduced from six to five representatives, each serving one congressional district. In the current Congress, all but one member of Oklahoma's delegation are Republicans.
Oklahoma had 598 incorporated places in 2010, including four cities over 100,000 in population and 43 over 10,000. Two of the fifty largest cities in the United States are in Oklahoma, Oklahoma City and Tulsa, and sixty-five percent of Oklahomans live within their metropolitan areas, or spheres of economic and social influence defined by the United States Census Bureau as a metropolitan statistical area. Oklahoma City, the state's capital and largest city, had the largest metropolitan area in the state in 2010, with 1,252,987 people, and the metropolitan area of Tulsa had 937,478 residents. Between 2000 and 2010, the leading cities in population growth were Blanchard (172.4%), Elgin (78.2%), Jenks (77.0%), Piedmont (56.7%), Bixby (56.6%), and Owasso (56.3%).
In descending order of population, Oklahoma's largest cities in 2010 were: Oklahoma City (579,999, +14.6%), Tulsa (391,906, −0.3%), Norman (110,925, +15.9%), Broken Arrow (98,850, +32.0%), Lawton (96,867, +4.4%), Edmond (81,405, +19.2%), Moore (55,081, +33.9%), Midwest City (54,371, +0.5%), Enid (49,379, +5.0%), and Stillwater (45,688, +17.0%). Of the state's ten largest cities, three are outside the metropolitan areas of Oklahoma City and Tulsa, and only Lawton has a metropolitan statistical area of its own as designated by the United States Census Bureau, though the metropolitan statistical area of Fort Smith, Arkansas extends into the state.
Under Oklahoma law, municipalities are divided into two categories: cities, defined as having more than 1,000 residents, and towns, with under 1,000 residents. Both have legislative, judicial, and public power within their boundaries, but cities can choose between a mayor–council, council–manager, or strong mayor form of government, while towns operate through an elected officer system.
State law codifies Oklahoma's state emblems and honorary positions; the Oklahoma Senate or House of Representatives may adopt resolutions designating others for special events and to benefit organizations. In 2012 the House passed HCR 1024, which would change the state motto from "Labor Omnia Vincit" to "Oklahoma—In God We Trust!" The author of the resolution stated a constituent researched the Oklahoma Constitution and found no "official" vote regarding "Labor Omnia Vincit", therefore opening the door for an entirely new motto.
Orhan
Orhan Ghazi (; , also spelled Orkhan, c. 1281 – March 1362) was the second bey of the Ottoman Beylik from 1323/4 to 1362. He was born in Söğüt, as the son of Osman Gazi and Malhun Hatun. His grandfather was Ertuğrul.
In the early stages of his reign, Orhan focused his energies on conquering most of northwestern Anatolia. The majority of these areas were under Byzantine rule and he won his first battle at Pelekanon against the Byzantine Emperor Andronikos III Palaiologos. Orhan also occupied the lands of the Karasids of Balıkesir and the Ahis of Ankara.
A series of civil wars surrounding the ascension of the nine-year-old Byzantine emperor John V Palaiologos greatly benefited Orhan. In the Byzantine civil war of 1341–1347, the regent John VI Kantakouzenos married his daughter Theodora to Orhan and employed Ottoman warriors against the rival forces of the empress dowager, allowing them to loot Thrace. In the Byzantine civil war of 1352–1357, Kantakouzenos used Ottoman forces against John V, granting them the use of a European fortress at Çimpe around 1352. A major earthquake devastated Gallipoli (modern Gelibolu) two years later, after which Orhan's son, Süleyman Pasha, occupied the town, giving the Ottomans a strong bridgehead into mainland Europe.
According to Ibn Battuta, Orhan was "the greatest of the Turkmen kings and the richest in wealth, lands, and military forces".
Born in Söğüt around 1281, Orhan was the first son of Osman I. Orhan's grandfather, Ertuğrul Gazi, named his grandson after Orhan Alp. Orhan's early childhood and young adulthood are largely undocumented, but he grew very close to his father. Some historical accounts claim that when Orhan was 20 years old, his father sent him to the small Ottoman province of Nakihir, but Orhan returned to the Ottoman capital, Söğüt, in 1309.
Osman Gazi died in either 1323 or 1324, and Orhan succeeded him. According to Ottoman tradition, when Orhan succeeded his father, he proposed to his brother, Alaeddin, that they should share the emerging empire. The latter refused on the grounds that their father had designated Orhan as sole successor, and that the empire should not be divided. He only accepted as his share the revenues of a single village near Bursa.
Orhan then told him, "Since, my brother, thou will not take the flocks and the herds that I offer thee, be thou the shepherd of my people; be my Vizier." The word vizier, "vezir" in the Ottoman language, from Arabic "wazīr", meant "the bearer of a burden". Alaeddin, in accepting the office, accepted his brother's burden of power, according to oriental historians. Alaeddin, like many of his successors in that office, did not often command the armies in person, but he occupied himself with the foundation and management of the civil and military institutions of the state.
According to some authorities, it was in Alaeddin's time, and by his advice, that the Ottomans ceased acting like vassals to the Seljuk ruler: they no longer stamped money with his image or used his name in public prayers. These changes are attributed by others to Osman himself, but the vast majority of the oriental writers concur in attributing to Alaeddin the introduction of laws respecting the costume of the various subjects of the empire, and the creation and funding of a standing army of regular troops. It was by his advice and that of a contemporary Turkish statesman that the celebrated corps of Janissaries was formed, an institution which European writers erroneously fix at a later date, and ascribe to Murad I.
Alaeddin, by his military legislation, may truly be said to have organized victory for the Ottoman dynasty. He organised for the Ottoman Beylik a standing army of regularly paid and disciplined infantry and cavalry, a full century before Charles VII of France established his fifteen permanent companies of men-at-arms, which are generally regarded as the first modern standing army.
Orhan's predecessors, Ertuğrul and Osman I, had made war at the head of armed vassals and volunteers, who rode to their prince's banner when summoned for each expedition and were disbanded as soon as the campaign was over. Alaeddin determined to ensure future success by forming a corps of paid infantry, kept in constant readiness for service. These troops were called Yaya, or piyade, and were divided into tens, hundreds, and thousands under their commanders. Their pay was high, and their pride soon caused their sovereign some anxiety. Wishing to provide a check on them, Orhan took counsel with his brother Alaeddin and with Kara Khalil Çandarlı (of the House of Candar), who was connected to the royal house by marriage. Çandarlı laid before his master and the vizier a project, out of which arose the renowned corps of Janissaries, long considered the scourge of the Balkans and Central Europe, until it was abolished by Sultan Mahmud II in 1826.
Çandarlı proposed that Orhan create an army composed entirely of children from conquered places.
He further claimed that forming the Janissaries from conquered children would attract recruits not only from among the children of the conquered nations, but also from a crowd of their friends and relations, who would come as volunteers to join the Ottoman ranks. Acting on this advice, Orhan selected a thousand of the finest boys from conquered Christian families. The recruits were trained according to their individual abilities and employed in posts ranging from professional soldier to Grand Vizier. This practice continued for centuries, until the reign of Sultan Mehmet IV.
Orhan, with gazi commanders at the head of his light cavalry forces, started a series of conquests of Byzantine territories in northwest Anatolia. First, in 1321, Mudanya, the port of Bursa on the Sea of Marmara, was captured. He then sent a column under Konur Alp towards the western Black Sea coast, another column under Aqueda to capture Kocaeli, and finally a column to capture the southeast coast of the Sea of Marmara. He then took the city of Bursa through diplomatic negotiations alone. The Byzantine commander of the Bursa fort, called Evronos Bey, became a commander of a light cavalry force, and his sons and grandsons served the Ottoman Beylik in this capacity, helping to conquer and hold many areas in the Balkans. Once Bursa was captured, Orhan sent cavalry troops towards the Bosphorus, capturing the Byzantine coastal towns of the Marmara; Ottoman light cavalry was even sighted along the Bosphorus coast.
The Byzantine Emperor Andronicus III gathered a mercenary army and set off into Anatolia on the peninsular lands of Kocaeli. Near the present town of Darıca, at a site then called Pelekanon, not far from Üsküdar, he met Orhan's troops. In the ensuing Battle of Pelekanon, the Byzantine forces were routed by Orhan's disciplined troops. Thereafter Andronicus abandoned the idea of regaining the Kocaeli lands and never again fought a field battle against Ottoman forces.
The city of Nicaea (second only to Constantinople in the Byzantine Empire) surrendered to him after a three-year siege that concluded in 1331. The city of Nicomedia (now İzmit) was captured in 1337. Orhan gave its command to his eldest son, Suleyman Pasha, who had directed the operations of the siege.
With the capture of Scutari (now Üsküdar) in 1338, most of northwest Anatolia was in Ottoman hands. The Byzantines still controlled the coastal strip from Şile on the Black Sea to Scutari, and the city of Amastris (now Amasra) in Paphlagonia, but these holdings were so scattered and isolated as to pose no threat to the Ottomans.
Then, in 1345, there was a change of strategy. Instead of aiming to gain land from non-Muslims, Orhan took over a Turkish principality, Karesi (present-day Balıkesir and its surroundings). According to the Islamic philosophy of war, areas under Islamic rule were "abodes of peace" and other areas "abodes of war"; waging war in the "abodes of war" was considered a good deed. The Karesi principality was governed by a Turkish emir and its main inhabitants were Turkish, so it was an "abode of peace", and the Ottomans needed special justification for conquering a fellow Muslim Turkish principality.
In the case of Karesi, the ruler had died leaving two sons with equally valid claims to the post of emir, and fighting broke out between the armed supporters of the two claimants. Orhan's pretext for invasion was that he was acting as a bringer of peace. By the end of the Ottoman invasion, the two brothers had been pushed back to the castle of their capital city of Pergamum (now Bergama); one was killed and the other captured. The territories around Pergamum and Palaeocastro (Balıkesir) were annexed to Orhan's domains. This conquest was particularly important because it extended Orhan's territories to Çanakkale, on the Anatolian side of the Dardanelles Straits.
With the conquest of Karesi, nearly the whole of northwestern Anatolia was included in the Ottoman Beylik, and the four cities of Bursa, Nicomedia (İzmit), Nicaea (İznik), and Pergamum (Bergama) had become strongholds of its power. At this stage of his conquests, Orhan's Ottoman principality comprised four provinces.
A twenty-year period of peace followed the acquisition of Karesi. During this time, the Ottoman sovereign was actively occupied in perfecting the civil and military institutions which his brother had introduced, in securing internal order, in founding and endowing mosques and schools, and in the construction of vast public edifices, many of which still stand. Orhan did not continue with any other conquests in Anatolia except taking over Ankara from the commercial-religious fraternity guild of Ahis.
The general diffusion of Turkish populations across Anatolia before Osman's time was in large part a consequence of the Mongol conquest of Central Asia, Iran, and then eastern Anatolia. Turkish peoples had founded a number of principalities after the demise of the Anatolian Sultanate of Rum following its defeat by the Ilkhanate Mongols. Although they were all of Turkish stock, they were rivals for dominant status in Anatolia.
After the Byzantine defeat of the Battle of Pelekanon, Orhan developed friendly relations with Andronicus III Palaeologus, and maintained them with some of his successors. Therefore, the Ottoman power experienced a twenty-year period of general repose.
However, as the Byzantine civil war of 1341–1347 dissipated the last resources of the Byzantine Empire, the auxiliary armies of the emirs of the Turkish principalities were frequently called over and employed in Europe. In 1346, the Emperor John VI Cantacuzene recognised Orhan as the most powerful sovereign of the Turks. He aspired to attach the Ottoman forces permanently to his interests, and hoped to achieve this by giving his second daughter, Theodora, in marriage to their ruler, despite differences of creed and the disparity of age. In Byzantine and Western European history, however, dynastic marriages were quite usual, and there are far stranger examples.
The splendour of the wedding between Orhan and Theodora at Selymbria (Silivri) is elaborately described by Byzantine writers. In the following year, Orhan and Theodora visited his imperial father-in-law at Üsküdar (then Chrysopolis), the suburb of Constantinople on the Asiatic side of the Bosporus, amid a display of festive splendour. However, this close relationship soured when the Byzantines suffered from marauding migrant Turcoman bands that crossed the Marmara Sea and the Dardanelles and pillaged several towns in Thrace. After a series of such raids, the Byzantines had to deploy superior forces to deal with them.
During Orhan's reign as Ottoman emir, the Byzantine Empire declined, partly due to the ambitions of the Italian maritime states and the aggression of the Turcomans and other Turks, but also due to civil wars within the empire itself.
During these years the Byzantine Empire became so weak that commercial supremacy in the surrounding seas became a bone of contention among the Italian maritime city-states. The Republic of Genoa possessed Galata, a separate Genoese city across the Golden Horn from Constantinople itself. The Genoese had fought the Byzantines as early as 1348, when the Byzantines decreased their customs tariffs in order to attract trade to the Byzantine side of the Golden Horn. In 1352 the rivalry for trade led to a war between Genoa and Venice. The Genoese, in trying to repel a Venetian fleet from destroying their ships in the Golden Horn, bombarded the sea walls of Constantinople and pushed the Byzantines into alliance with the Venetians. The Venetians assembled a large naval force, including hired fleets from Peter IV of Aragon and from the Byzantine Empire of John VI Cantacuzene. The sea battle between the Venetian fleet under the command of Niccolo Pisani and the Genoese fleet under Paganino Doria ended in the defeat of the Venetians and their Byzantine allies. Orhan opposed the Venetians, whose fleets and piratical raids were disrupting his seaward provinces, and who had met his diplomatic overtures with contempt. Since the Venetians were allies of John VI, Orhan sent an auxiliary force across the straits to Galata, where it co-operated with the Genoese.
In the midst of the distress and confusion that the Byzantine Empire now suffered, Orhan's eldest son, Suleyman Pasha, captured the Castle of Tzympe (Cinbi) in a bold move which gave the Turks a permanent foothold on the European side of the Dardanelles Straits. He also started to settle migrant Turcomans and town-dwelling Turks in the strategic city and castle of Gelibolu (Gallipoli), which had been devastated by a severe earthquake and was therefore evacuated by its inhabitants. Suleyman refused various financial inducements offered by John VI to empty the castle and the city. The emperor pleaded with his son-in-law Orhan to meet personally and discuss the matter, but the request was either rejected or could not be carried out due to Orhan's age and ill-health.
This military situation remained unresolved, in part because of the eruption of hostilities between John VI and his co-emperor and son-in-law John V Palaeologus. John V was dismissed from his imperial post and exiled to Tenedos; Cantacuzene's son Matthew was crowned as the co-emperor. But very soon John V returned from exile with Venetian help and conducted a coup, taking over the government of Constantinople. Although the two men came to an agreement to share power, John VI resigned from his imperial post and became a monk. Each of these two contestants for power was continually soliciting Orhan's aid against the other, and Orhan supported whichever side would benefit the Ottomans.
Orhan was the longest-lived and one of the longest-reigning rulers of the Ottoman dynasty. In his last years he left most of the powers of state in the hands of his second son Murad and lived a secluded life in Bursa.
In 1356 Orhan and Theodora's son, Khalil, was abducted somewhere on the Bay of Izmit. A Genoese boat captain, who conducted acts of piracy alongside commercial activity, captured the young prince and took him to Phocaea on the Aegean Sea, then under Genoese rule. Orhan was greatly upset by the kidnapping and held talks with his brother-in-law, now the sole Byzantine Emperor, John V Palaeologus. Under the resulting agreement, John V sailed with a Byzantine naval fleet to Phocaea, paid the demanded ransom of 100,000 "hyperpyra", and brought Khalil back to Ottoman territory.
In 1357 Orhan's eldest and most experienced son and likely heir, Suleyman Pasha, died after injuries sustained from a fall from a horse near Bolayir on the coast of the sea of Marmara. The horse that Suleyman fell from was buried alongside him and their tombs can still be seen today. Orhan was said to have been greatly affected by the death of his son.
Orhan died soon after, likely from natural causes, though the death of his son probably took a toll on his health. He died in 1362, in Bursa, at the age of eighty, after a reign of thirty-six years. He is buried with his wife and children in the türbe (tomb) called "Gümüşlü Kumbet" in Bursa.
In 1351, Orhan and Stefan Uroš IV Dušan of Serbia negotiated a potential alliance, including a proposal to marry Dušan's daughter Theodora to Orhan or to one of his sons. However, the Serbian diplomats were attacked by Nikephoros Orsini, after which the negotiations broke down, the marriage did not take place, and Serbia and the Ottoman state resumed hostilities.
Osman II
Osman II ( "‘Osmān-i sānī"; 3 November 1604 – 20 May 1622), commonly known in Turkey as Genç Osman (meaning "Osman the Young"), was the Sultan of the Ottoman Empire from 1618 until his regicide on 20 May 1622.
Osman II was born at Topkapı Palace, Constantinople, the son of Sultan Ahmed I (1603–17) and one of his consorts, Mahfiruz Hatun. According to later traditions, his mother paid a great deal of attention to Osman's education from a young age, as a result of which Osman II became a known poet and was said to have mastered many languages, including Arabic, Persian, Greek, Latin, and Italian; this has since been refuted. Osman was born eleven months after his father Ahmed's accession to the throne. He was educated in the palace, and according to foreign observers he was one of the most cultured of Ottoman princes.
Osman's failure to capture the throne at the death of his father Ahmed may have been caused by the absence of a mother to lobby in his favor; his own mother was probably already dead or in exile.
Osman II ascended the throne at the age of 14 as the result of a coup d'état against his uncle Mustafa I (1617–18, 1622–23). Despite his youth, Osman II soon sought to assert himself as a ruler, and after securing the empire's eastern border by signing the Treaty of Serav with Safavid Persia, he personally led the Ottoman invasion of Poland during the Moldavian Magnate Wars. Forced to sign a peace treaty with the Poles after the Battle of Chotin (Chocim) of September–October 1621 (in fact a siege of Chotin, defended by the Polish–Lithuanian hetman Jan Karol Chodkiewicz), Osman II returned home to Constantinople in shame, blaming the cowardice of the Janissaries and the insufficiency of his statesmen for his humiliation.
The basic and exceptional weakness from which Osman II suffered was the conspicuous absence of a female power base in the harem. From 1620 until Osman's death, a governess ("daye hatun", literally wet-nurse) was appointed as a stand-in valide, but she could not counterbalance the intrigues of Mustafa I's mother in the Old Palace. Although Osman did have a loyal chief black eunuch at his side, this could not compensate for the absence of what, in the politics of that period, was a winning combination, the valide sultan and chief black eunuch acting together, especially in the case of a young and very ambitious ruler. According to Piterberg, Osman II had no haseki sultan; Peirce, by contrast, claims that Ayşe was Osman's haseki. It is clear, however, that Ayşe could not take on the valide's role during her spouse's reign.
In the autumn of 1620, İskender Pasha, the beylerbey of Özi, seized a secret letter that the Transylvanian prince Bethlen Gabor had sent towards Istanbul and forwarded it to Poland, and Osman, encouraged by those around him, decided to embark on a Polish expedition. Preparations for the campaign continued; neither cold nor famine nor the British ambassador John Eyre could dissuade Osman. The ambassador of Sigismund, the King of Poland, was brought to Istanbul despite the severe cold. The janissaries and the army, whatever their condition, were not willing to go on campaign.
Following the murder of Şehzade Mehmed on 12 January 1621, severe snow fell on Istanbul. The people of Istanbul were drastically affected by the cold, which from 24 January 1621 preoccupied the city even more than the palace murder had. It was the greatest natural disaster to strike the capital during Osman's short four-year reign. Bostanzade Yahya Efendi, who lived through this cold, relates that the Golden Horn and the Bosphorus were covered with ice from the end of January to the beginning of February, so that people walked between Üsküdar and Beşiktaş and came from Istanbul to Üsküdar on foot, and that the year turned to dearth and famine.
It snowed for fifteen days and the frost was severe, though the channel between Sarayburnu and Üsküdar remained open; the poet Haşimi Çelebi likewise recorded that the sea froze so hard that the crossing to Üsküdar became a road. Because the grain ships could not sail, a complete famine followed in Istanbul: 75 dirhams of bread jumped to one akçe, and an okka of meat to 15 akçes.
Seeking a counterweight to Janissary influence, Osman II closed their coffee shops (the gathering points for conspiracies against the throne) and started planning to create a new and more loyal army consisting of Anatolian sekbans. The result was a palace uprising by the Janissaries, who promptly imprisoned the young sultan in Yedikule Fortress in Istanbul, where Osman II was strangled to death. More specifically, and according to chronicles, his testicles were "crushed" by Pehlivan the Oil Wrestler. After the death of Osman II, his ear was cut off and presented to Halime Sultan and Sultan Mustafa I to confirm his death, so that Mustafa would no longer need to fear his nephew. It was the first time in Ottoman history that a sultan had been executed by the Janissaries.
This disaster is one of the most discussed topics in Ottoman history. It is related in the chronicles of Hasanbegzade, Karaçelebizade, Solakzade, Peçevi, Müneccimbaşı and Naima, and in the Fezleke of Katip Çelebi, in some cases in detail and in others in a story-like style.
Osman had three consorts:
Osman had one son:
In the 2015 Turkish television series "", Osman II was portrayed by actor Taner Ölmez.
Oberon (programming language)
Oberon is a general-purpose programming language first published in 1987 by Niklaus Wirth and the latest member of the Wirthian family of ALGOL-like languages (Euler, Algol-W, Pascal, Modula, and Modula-2). Oberon was the result of a concentrated effort to increase the power of Modula-2, the direct successor of Pascal, and simultaneously to reduce its complexity. Its principal new feature is the concept of type extension of record types: it permits constructing new data types on the basis of existing ones and relating them, departing from the dogma of strictly static data typing. Type extension is Wirth's form of inheritance, reflecting the viewpoint of the parent type. Oberon was developed as part of the implementation of the Oberon operating system at ETH Zurich in Switzerland. The name comes from the moon of Uranus, Oberon.
Oberon is still maintained by Wirth, and the latest Project Oberon compiler update is dated March 6, 2020.
Oberon is designed with a motto attributed to Albert Einstein in mind: “Make things as simple as possible, but not simpler.” The principal guideline was to concentrate on features that are basic and essential and to omit ephemeral issues. Another factor was recognition of the growth of complexity in languages such as C++ and Ada: in contrast to these, Oberon emphasizes the use of the library concept for extending the language. Enumeration and subrange types, which were present in Modula-2, have been removed; similarly, set types have been limited to small sets of integers, and the number of low-level facilities has been sharply reduced (most particularly, type transfer functions have been eliminated). Elimination of the remaining potentially-unsafe facilities concludes the most essential step toward obtaining a truly high-level language. Very close type-checking even across modules, strict index-checking at run time, null-pointer checking, and the safe type extension concept largely allow the programmer to rely on the language rules alone.
The intent of this strategy was to produce a language that is easier to learn, simpler to implement, and very efficient. Oberon compilers have been viewed as compact and fast, while providing adequate code quality compared to commercial compilers.
The following features characterize the Oberon language:
Oberon supports extension of record types for the construction of abstractions and heterogeneous structures. In contrast to the later dialects, Oberon-2 and Active Oberon, the original Oberon does not have a dispatch mechanism as a language feature; it is instead provided as a programming technique or design pattern. This gives great flexibility in object-oriented programming. In the Oberon operating system, two programming techniques are used in conjunction for the dispatch call: the method suite and the message handler.
In this technique a table of procedure variables is defined and a global variable of this type is declared in the extended module and assigned back in the generic module:
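The original Oberon listing is not reproduced here; what follows is a minimal sketch of the idea in C, using a struct of function pointers in place of Oberon procedure variables (the names Figure, Methods, and Figures_Draw are illustrative assumptions, not the original code):

    /* Figures side (the generic module): a "method suite" is a record of
       function pointers shared by every extension of Figure. */
    typedef struct Figure Figure;

    typedef struct {
        void (*draw)(Figure *f);   /* one procedure variable per method */
        void (*clear)(Figure *f);
        void (*mark)(Figure *f);
    } Methods;

    struct Figure {
        Methods *methods;          /* filled in ("assigned back") by the extending module */
    };

    /* The generic module dispatches only through the table. */
    void Figures_Draw(Figure *f)  { f->methods->draw(f); }
    void Figures_Clear(Figure *f) { f->methods->clear(f); }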
We extend the generic type Figure to a specific shape:
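Continuing the same hedged C sketch, the extending module embeds the base record, fills in the method table once, and stores it where the generic module can reach it:

    #include <stdio.h>

    /* Rectangles side, reusing Figure and Methods from the previous sketch. */
    typedef struct {
        Figure base;               /* type extension: embed the base record first */
        float w, h;
    } Rectangle;

    static void draw(Figure *f) {
        Rectangle *r = (Rectangle *)f;           /* safe: base is the first field */
        printf("rectangle %.1f x %.1f\n", r->w, r->h);
    }
    static void clear(Figure *f) { (void)f; }
    static void mark(Figure *f)  { (void)f; }

    static Methods rectangleMethods = { draw, clear, mark };

    void Rectangle_Init(Rectangle *r, float w, float h) {
        r->base.methods = &rectangleMethods;     /* the "assignment back" */
        r->w = w;
        r->h = h;
    }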
Dynamic dispatch is performed only via procedures in the Figures module, which is the generic module.
This technique consists of replacing the set of methods with a single procedure, which discriminates among the various methods:
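Again as a hedged C sketch rather than the original Oberon code (this sketch is self-contained and independent of the previous one; the message kinds are invented for illustration): each extended type installs a single handler procedure, and the generic module forwards every message to it.

    /* Message-handler variant of dispatch. */
    typedef enum { MSG_DRAW, MSG_CLEAR, MSG_ROTATE } MsgKind;

    typedef struct {
        MsgKind kind;
        double angle;              /* payload used only by MSG_ROTATE */
    } Message;

    typedef struct Figure Figure;
    struct Figure {
        void (*handle)(Figure *f, Message *m);   /* the single dispatch slot */
    };

    /* The generic module forwards every message without inspecting it. */
    void Figures_Handle(Figure *f, Message *m) { f->handle(f, m); }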
We extend the generic type Figure to a specific shape:
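The extension's handler then discriminates among the messages, so a newly invented message needs no new table slot (continuing the same sketch):

    #include <stdio.h>

    /* Rectangles side of the message-handler sketch. */
    typedef struct {
        Figure base;
        double angle;
    } Rectangle;

    static void handle(Figure *f, Message *m) {
        Rectangle *r = (Rectangle *)f;
        switch (m->kind) {                 /* discriminate among the "methods" */
        case MSG_DRAW:   printf("draw rectangle\n");  break;
        case MSG_CLEAR:  printf("clear rectangle\n"); break;
        case MSG_ROTATE: r->angle += m->angle; break; /* a later-added method */
        }
    }

    void Rectangle_Init(Rectangle *r) {
        r->base.handle = handle;
        r->angle = 0.0;
    }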
In the Oberon operating system both of these techniques are used for dynamic dispatch. The first one is used for a known set of methods; the second is used for any new methods declared in the extension module. For example, if the extension module Rectangles were to implement a new Rotate() procedure, within the Figures module it could only be called via a message handler.
No-cost implementations of Oberon (the language) and Oberon (the operating system) can be found on the Internet (several are from ETHZ itself).
A few changes were made to the first released specification (object-oriented programming features were added and the 'FOR' loop was reinstated, for instance); the result was Oberon-2. There is a release called "Native Oberon" which includes an operating system and can boot directly on PC-class hardware. A .NET implementation of Oberon, with the addition of some minor .NET-related extensions, has also been developed at ETHZ. In 1993, an ETHZ spin-off company brought a dialect of Oberon-2 to market under the name Oberon/L, which was renamed Component Pascal in 1997.
Oberon-2 compilers developed by ETH include versions for Microsoft Windows, Linux, Solaris, and classic Mac OS. Furthermore, there are implementations for various other operating systems, such as Atari-TOS or AmigaOS.
There is an Oberon-2 Lex scanner and Yacc parser by Stephen J Bevan of Manchester University, UK, based on the one in the Mössenböck and Wirth reference. It is at version 1.4.
There is also the Oxford Oberon-2 Compiler, which also understands Oberon-07, and Vishap Oberon. The latter is based on Josef Templ's Oberon-to-C transpiler, called Ofront, which in turn is based on the OP2 compiler developed by Regis Crelier at ETHZ.
Oberon-07, defined by Niklaus Wirth in 2007 and revised in 2011, 2013, 2014, 2015 and 2016, is based on the original version of Oberon rather than Oberon-2. The main changes are: explicit numeric conversion functions (e.g. FLOOR and FLT) must be used, the LOOP and EXIT statements have been eliminated, WHILE statements have been extended, CASE statements can be used for type extension tests, RETURN statements can only appear at the end of a function, imported variables and structured value parameters are read-only, and arrays can be assigned without using COPY. For full details, see The Programming Language Oberon-07.
Oberon-07 compilers have been developed for use with several different computer systems. Wirth's compiler targets a RISC processor of his own design that was used to implement the 2013 version of the Project Oberon operating system on a Xilinx FPGA Spartan-3 board. Ports of the RISC processor to FPGA Spartan-6, Spartan-7, Artix-7 and a RISC emulator for Windows (compilable on Linux and OS X, as well as binaries available for Windows) also exist. OBNC compiles via C and can be used on any POSIX compatible operating system. The commercial Astrobe implementation targets 32-bit ARM Cortex-M3, M4 and M7 microcontrollers. The Patchouli compiler produces 64-bit Windows binaries. Oberon-07M produces 32-bit Windows binaries and implements revision 2008 of the language. Akron's produces binaries for both Windows and Linux. oberonjs translates Oberon to JavaScript, while oberonc is an implementation for the Java virtual machine.
Active Oberon is yet another variant of Oberon, which adds objects (with object-centered access protection and local activity control), system-guarded assertions, preemptive priority scheduling, and a changed syntax for methods (type-bound procedures, in Oberon terminology). Objects may be active, which means that they may be threads or processes. Additionally, Active Oberon has a way to implement operators (including overloading), an advanced syntax for using arrays (see the OberonX language extensions and the Proceedings of the 7th Joint Modular Languages Conference 2006, Oxford, UK), and supports namespaces (see Proposal for Module Contexts). The operating system A2 (Bluebottle), especially the kernel, synchronizes and coordinates different active objects.
ETHZ has released Active Oberon which supports active objects, and the Bluebottle operating system and environment (JDK, HTTP, FTP, etc.) for the language. As with many prior designs from ETHZ, versions of both are available for download on the Internet. As this is written, both single and dual x86 CPUs and the StrongARM family are supported.
Development has continued on languages in this family. A further extension of Oberon-2, originally named Oberon/L but later renamed to Component Pascal, was developed for Windows and classic Mac OS by Oberon microsystems, a commercial company spin-off from ETHZ, and for .NET by Queensland University of Technology. In addition, the Lagoona and Obliq languages carry the Oberon spirit into specialized areas.
Recent .NET development efforts at ETHZ have been focused on a new language called Zonnon. This includes the features of Oberon and restores some from Pascal (enumerated types, built-in IO) but has some syntactic differences. Additional features include support for active objects, operator overloading and exception handling. Zonnon is available as a plug-in language for the Microsoft Visual Studio for .NET development environment.
Oberon-V (originally called Seneca, after Seneca the Younger) is a descendant of Oberon designed for numerical applications on supercomputers, especially vector or pipelined architectures. It includes array constructors and an ALL statement. (See "Seneca - A Language for Numerical Applications on Vectorcomputers", Proc CONPAR 90 - VAPP IV Conf. R. Griesemer, Diss Nr. 10277, ETH Zurich.)
OpenGL
OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
Silicon Graphics, Inc. (SGI) began developing OpenGL in 1991 and released it on June 30, 1992; applications use it extensively in the fields of computer-aided design (CAD), virtual reality, scientific visualization, information visualization, flight simulation, and video games. Since 2006, OpenGL has been managed by the non-profit technology consortium Khronos Group.
The OpenGL specification describes an abstract API for drawing 2D and 3D graphics. Although it is possible for the API to be implemented entirely in software, it is designed to be implemented mostly or entirely in hardware.
The API is defined as a set of functions which may be called by the client program, alongside a set of named integer constants (for example, the constant GL_TEXTURE_2D, which corresponds to the decimal number 3553). Although the function definitions are superficially similar to those of the programming language C, they are language-independent. As such, OpenGL has many language bindings, some of the most noteworthy being the JavaScript binding WebGL (API, based on OpenGL ES 2.0, for 3D rendering from within a web browser); the C bindings WGL, GLX and CGL; the C binding provided by iOS; and the Java and C bindings provided by Android.
In addition to being language-independent, OpenGL is also cross-platform. The specification says nothing on the subject of obtaining and managing an OpenGL context, leaving this as a detail of the underlying windowing system. For the same reason, OpenGL is purely concerned with rendering, providing no APIs related to input, audio, or windowing.
OpenGL is an evolving API. New versions of the OpenGL specifications are regularly released by the Khronos Group, each of which extends the API to support various new features. The details of each version are decided by consensus between the Group's members, including graphics card manufacturers, operating system designers, and general technology companies such as Mozilla and Google.
In addition to the features required by the core API, graphics processing unit (GPU) vendors may provide additional functionality in the form of "extensions". Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions. Vendors can use extensions to expose custom APIs without needing support from other vendors or the Khronos Group as a whole, which greatly increases the flexibility of OpenGL. All extensions are collected in, and defined by, the OpenGL Registry.
Each extension is associated with a short identifier, based on the name of the company which developed it. For example, Nvidia's identifier is NV, which is part of the extension name GL_NV_half_float, the constant GL_HALF_FLOAT_NV, and the function glVertex2hNV(). If multiple vendors agree to implement the same functionality using the same API, a shared extension may be released, using the identifier EXT. In such cases, it could also happen that the Khronos Group's Architecture Review Board gives the extension their explicit approval, in which case the identifier ARB is used.
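For illustration, a program can test at run time whether a particular extension is present. A minimal C sketch using the indexed query introduced in OpenGL 3.0 (a current context and a loaded function pointer for glGetStringi are assumed; the GL types come from the loader's headers):

    #include <string.h>

    /* Assumes an OpenGL >= 3.0 context is current and that a loader such as
       glad or GLEW has declared glGetStringi, GLint and GLuint. */
    int hasExtension(const char *name) {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; i++) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
            if (ext && strcmp(ext, name) == 0)
                return 1;              /* e.g. name = "GL_NV_half_float" */
        }
        return 0;
    }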
The features introduced by each new version of OpenGL are typically formed from the combined features of several widely implemented extensions, especially extensions of type ARB or EXT.
OpenGL's popularity is partially due to the quality of its official documentation. The OpenGL Architecture Review Board released a series of manuals along with the specification which have been updated to track changes in the API. These are commonly referred to by the colors of their covers:
Historic books (pre-OpenGL 2.0):
The earliest versions of OpenGL were released with a companion library called the OpenGL Utility Library (GLU). It provided simple, useful features which were unlikely to be supported in contemporary hardware, such as tessellating, and generating mipmaps and primitive shapes. The GLU specification was last updated in 1998 and depends on OpenGL features which are now deprecated.
Given that creating an OpenGL context is quite a complex process, and given that it varies between operating systems, automatic OpenGL context creation has become a common feature of several game-development and user-interface libraries, including SDL, Allegro, SFML, FLTK, and Qt. A few libraries have been designed solely to produce an OpenGL-capable window. The first such library was OpenGL Utility Toolkit (GLUT), later superseded by freeglut. GLFW is a newer alternative.
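A minimal sketch of such automatic context creation using GLFW's documented API (error handling abbreviated; the window size and title are arbitrary):

    #include <GLFW/glfw3.h>

    int main(void) {
        if (!glfwInit())
            return 1;
        GLFWwindow *window = glfwCreateWindow(640, 480, "OpenGL demo", NULL, NULL);
        if (!window) {
            glfwTerminate();
            return 1;
        }
        glfwMakeContextCurrent(window);     /* an OpenGL context now exists */
        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT);   /* ordinary GL calls are valid here */
            glfwSwapBuffers(window);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }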
Given the high workload involved in identifying and loading OpenGL extensions, a few libraries have been designed which load all available extensions and functions automatically. Examples include GLEE, GLEW and glbinding. Extensions are also loaded automatically by most language bindings, such as JOGL and PyOpenGL.
Mesa 3D is an open-source implementation of OpenGL. It can do pure software rendering, and it may also use hardware acceleration on BSD, Linux, and other platforms by taking advantage of the Direct Rendering Infrastructure. As of version 13.0, it implements version 4.5 of the OpenGL standard.
In the 1980s, developing software that could function with a wide range of graphics hardware was a real challenge. Software developers wrote custom interfaces and drivers for each piece of hardware. This was expensive and resulted in duplication of effort.
By the early 1990s, Silicon Graphics (SGI) was a leader in 3D graphics for workstations. Their IRIS GL API was considered state-of-the-art and became the de facto industry standard, overshadowing the open standards-based PHIGS. This was because IRIS GL was considered easier to use, and because it supported immediate mode rendering. By contrast, PHIGS was considered difficult to use and outdated in functionality.
SGI's competitors (including Sun Microsystems, Hewlett-Packard and IBM) were also able to bring to market 3D hardware supported by extensions made to the PHIGS standard, which pressured SGI to open source a version of IrisGL as a public standard called OpenGL.
However, SGI had many customers for whom the change from IrisGL to OpenGL would demand significant investment. Moreover, IrisGL had API functions that were irrelevant to 3D graphics. For example, it included a windowing, keyboard and mouse API, in part because it was developed before the X Window System and Sun's NeWS. And, IrisGL libraries were unsuitable for opening due to licensing and patent issues. These factors required SGI to continue to support the advanced and proprietary Iris Inventor and Iris Performer programming APIs while market support for OpenGL matured.
One of the restrictions of IrisGL was that it only provided access to features supported by the underlying hardware. If the graphics hardware did not support a feature natively, then the application could not use it. OpenGL overcame this problem by providing software implementations of features unsupported by hardware, allowing applications to use advanced graphics on relatively low-powered systems. OpenGL standardized access to hardware, pushed the development responsibility of hardware interface programs (device drivers) to hardware manufacturers, and delegated windowing functions to the underlying operating system. With so many different kinds of graphics hardware, getting them all to speak the same language in this way had a remarkable impact by giving software developers a higher level platform for 3D-software development.
In 1992, SGI led the creation of the OpenGL Architecture Review Board (OpenGL ARB), the group of companies that would maintain and expand the OpenGL specification in the future.
In 1994, SGI played with the idea of releasing something called "OpenGL++" which included elements such as a scene-graph API (presumably based on their Performer technology). The specification was circulated among a few interested parties – but never turned into a product.
Microsoft released Direct3D in 1995, which eventually became the main competitor of OpenGL. Over 50 game developers signed an open letter to Microsoft, released on June 12, 1997, calling on the company to actively support OpenGL. On December 17, 1997, Microsoft and SGI initiated the Fahrenheit project, which was a joint effort with the goal of unifying the OpenGL and Direct3D interfaces (and adding a scene-graph API too). In 1998, Hewlett-Packard joined the project. It initially showed some promise of bringing order to the world of interactive 3D computer graphics APIs, but on account of financial constraints at SGI, strategic reasons at Microsoft, and a general lack of industry support, it was abandoned in 1999.
In July 2006, the OpenGL Architecture Review Board voted to transfer control of the OpenGL API standard to the Khronos Group.
In June 2018, Apple deprecated OpenGL APIs on all of their platforms (iOS, macOS and tvOS), strongly encouraging developers to use their proprietary Metal API, which has been available for a few years.
The first version of OpenGL, version 1.0, was released on June 30, 1992 by Mark Segal and Kurt Akeley. Since then, OpenGL has occasionally been extended by releasing a new version of the specification. Such releases define a baseline set of features which all conforming graphics cards must support, and against which new extensions can more easily be written. Each new version of OpenGL tends to incorporate several extensions which have widespread support among graphics-card vendors, although the details of those extensions may be changed.
"Release date": September 7, 2004
OpenGL 2.0 was originally conceived by 3Dlabs to address concerns that OpenGL was stagnating and lacked a strong direction. 3Dlabs proposed a number of major additions to the standard. Most of these were, at the time, rejected by the ARB or otherwise never came to fruition in the form that 3Dlabs proposed. However, their proposal for a C-style shading language was eventually completed, resulting in the current formulation of the OpenGL Shading Language (GLSL or GLslang). Like the assembly-like shading languages it was replacing, it allowed replacing the fixed-function vertex and fragment pipe with shaders, though this time written in a C-like high-level language.
The design of GLSL was notable for making relatively few concessions to the limits of the hardware then available. This hearkened back to the earlier tradition of OpenGL setting an ambitious, forward-looking target for 3D accelerators rather than merely tracking the state of currently available hardware. The final OpenGL 2.0 specification includes support for GLSL.
Before the release of OpenGL 3.0, the new revision had the codename Longs Peak. At the time of its original announcement, Longs Peak was presented as the first major API revision in OpenGL's lifetime. It consisted of an overhaul to the way that OpenGL works, calling for fundamental changes to the API.
The draft introduced a change to object management. The GL 2.1 object model was built upon the state-based design of OpenGL. That is, to modify an object or to use it, one needs to bind the object to the state system, then make modifications to the state or perform function calls that use the bound object.
Because of OpenGL's use of a state system, objects must be mutable. That is, the basic structure of an object can change at any time, even if the rendering pipeline is asynchronously using that object. A texture object can be redefined from 2D to 3D. This requires any OpenGL implementation to add a degree of complexity to internal object management.
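The bind-to-modify pattern looks like this in practice; a short C fragment using standard OpenGL calls (a current context is assumed):

    /* An object must be bound into the state system before it can be edited. */
    GLuint tex;
    glGenTextures(1, &tex);                    /* create a name */
    glBindTexture(GL_TEXTURE_2D, tex);         /* bind it to a state target */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,    /* storage is (re)definable at any time */
                 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindTexture(GL_TEXTURE_2D, 0);           /* unbind; the object persists */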
Under the Longs Peak API, object creation would become atomic, using templates to define the properties of an object which would be created with one function call. The object could then be used immediately across multiple threads. Objects would also be immutable; however, they could have their contents changed and updated. For example, a texture could change its image, but its size and format could not be changed.
To support backwards compatibility, the old state based API would still be available, but no new functionality would be exposed via the old API in later versions of OpenGL. This would have allowed legacy code bases, such as the majority of CAD products, to continue to run while other software could be written against or ported to the new API.
Longs Peak was initially due to be finalized in September 2007 under the name OpenGL 3.0, but the Khronos Group announced on October 30 that it had run into several issues that it wished to address before releasing the specification. As a result, the spec was delayed, and the Khronos Group went into a media blackout until the release of the final OpenGL 3.0 spec.
The final specification proved far less revolutionary than the Longs Peak proposal. Instead of removing all immediate mode and fixed functionality (non-shader mode), the spec included them as deprecated features. The proposed object model was not included, and no plans have been announced to include it in any future revisions. As a result, the API remained largely the same with a few existing extensions being promoted to core functionality.
Among some developer groups this decision caused something of an uproar, with many developers professing that they would switch to DirectX in protest. Most complaints revolved around the lack of communication by Khronos to the development community and multiple features being discarded that were viewed favorably by many. Other frustrations included the requirement of DirectX 10 level hardware to use OpenGL 3.0 and the absence of geometry shaders and instanced rendering as core features.
Other sources reported that the community reaction was not quite as severe as originally presented, with many vendors showing support for the update.
"Release date": August 11, 2008
OpenGL 3.0 introduced a deprecation mechanism to simplify future revisions of the API. Certain features, marked as deprecated, could be completely disabled by requesting a "forward-compatible context" from the windowing system. OpenGL 3.0 features could still be accessed alongside these deprecated features, however, by requesting a "full context".
Deprecated features include:
"Release date": March 24, 2009
OpenGL 3.1 fully removed all of the features which were deprecated in version 3.0, with the exception of wide lines. From this version onwards, it is not possible to access new features using a "full context", or to access deprecated features using a "forward-compatible context". An exception to the former rule is made if the implementation supports the ARB_compatibility extension, but this is not guaranteed.
"Release date": August 3, 2009
OpenGL 3.2 further built on the deprecation mechanisms introduced by OpenGL 3.0, by dividing the specification into a "core profile" and "compatibility profile". Compatibility contexts include the previously-removed fixed-function APIs, equivalent to the ARB_compatibility extension released alongside OpenGL 3.1, while core contexts do not. OpenGL 3.2 also included an upgrade to GLSL version 1.50.
"Release date": March 11, 2010
OpenGL 4.0 was released alongside version 3.3. It was designed for hardware able to support Direct3D 11.
As in OpenGL 3.0, this version of OpenGL contains a high number of fairly inconsequential extensions, designed to thoroughly expose the abilities of Direct3D 11-class hardware. Only the most influential extensions are listed below.
Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer.
"Release date": July 26, 2010
Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer.
"Release date:" August 8, 2011
Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), and Intel HD Graphics in Intel Haswell processors and newer. (Linux Mesa: Ivy Bridge and newer)
"Release date:" August 6, 2012
Hardware support: AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Haswell processors and newer. (Linux Mesa: Ivy Bridge without stencil texturing, Haswell and newer), Nvidia GeForce 400 series and newer.
"Release date:" July 22, 2013
Hardware support: AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1.
"Release date:" August 11, 2014
Hardware support: AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1, and Tegra X1.
"Release date:" July 31, 2017
Hardware support: AMD Radeon HD 5000 Series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel Haswell and newer, Nvidia GeForce 400 series and newer.
Driver support:
OpenGL ES (and OpenGL) is deprecated in Apple's operating systems, but still works up to at least iOS 12.
Vulkan, formerly named the "Next Generation OpenGL Initiative" (glNext), is a ground-up redesign effort to unify OpenGL and OpenGL ES into one common API that is not backwards compatible with existing OpenGL versions.
The initial version of the Vulkan API was released on February 16, 2016.
Orbit
In physics, an orbit is the gravitationally curved trajectory of an object, such as the trajectory of a planet around a star or a natural satellite around a planet. Normally, orbit refers to a regularly repeating trajectory, although it may also refer to a non-repeating trajectory. To a close approximation, planets and satellites follow elliptic orbits, with the center of mass being orbited at a focal point of the ellipse, as described by Kepler's laws of planetary motion.
For most situations, orbital motion is adequately approximated by Newtonian mechanics, which explains gravity as a force obeying an inverse-square law. However, Albert Einstein's general theory of relativity, which accounts for gravity as due to curvature of spacetime, with orbits following geodesics, provides a more accurate calculation and understanding of the exact mechanics of orbital motion.
Historically, the apparent motions of the planets were described by European and Arabic philosophers using the idea of celestial spheres. This model posited the existence of perfect moving spheres or rings to which the stars and planets were attached. It assumed the heavens were fixed apart from the motion of the spheres, and was developed without any understanding of gravity. After the planets' motions were more accurately measured, theoretical mechanisms such as deferents and epicycles were added. Although the model was capable of reasonably accurately predicting the planets' positions in the sky, more and more epicycles were required as the measurements became more accurate, hence the model became increasingly unwieldy. Originally geocentric, it was modified by Copernicus to place the Sun at the centre to help simplify the model. The model was further challenged during the 16th century, as comets were observed traversing the spheres.
The basis for the modern understanding of orbits was first formulated by Johannes Kepler, whose results are summarised in his three laws of planetary motion. First, he found that the orbits of the planets in our Solar System are elliptical, not circular (or epicyclic), as had previously been believed, and that the Sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed depends on the planet's distance from the Sun. Third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the Sun: the cubes of their distances from the Sun are proportional to the squares of their orbital periods. Jupiter and Venus, for example, are respectively about 5.2 and 0.723 AU distant from the Sun, and their orbital periods are respectively about 11.86 and 0.615 years. The proportionality is seen in the fact that the ratio for Jupiter, $5.2^3/11.86^2$, is practically equal to that for Venus, $0.723^3/0.615^2$, in accord with the relationship. Idealised orbits meeting these rules are known as Kepler orbits.
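As a quick numerical check of the third law with the figures quoted above (distances in AU, periods in years), the two ratios agree to within rounding:

$$\frac{5.2^3}{11.86^2} \approx \frac{140.6}{140.7} \approx 1.0, \qquad \frac{0.723^3}{0.615^2} \approx \frac{0.378}{0.378} \approx 1.0$$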
Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies subject to gravity were conic sections (this assumes that the force of gravity propagates instantaneously). Newton showed that, for a pair of bodies, the orbits' sizes are in inverse proportion to their masses, and that those bodies orbit their common center of mass. Where one body is much more massive than the other (as is the case of an artificial satellite orbiting a planet), it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body.
Advances in Newtonian mechanics were then used to explore variations from the simple assumptions behind Kepler orbits, such as the perturbations due to other bodies, or the impact of spheroidal rather than spherical bodies. Lagrange (1736–1813) developed a new approach to Newtonian mechanics emphasizing energy more than force, and made progress on the three body problem, discovering the Lagrangian points. In a dramatic vindication of classical mechanics, in 1846 Urbain Le Verrier was able to predict the position of Neptune based on unexplained perturbations in the orbit of Uranus.
Albert Einstein (1879–1955) in his 1916 paper "The Foundation of the General Theory of Relativity" explained that gravity was due to curvature of space-time and removed Newton's assumption that changes propagate instantaneously. This led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy in understanding orbits. In relativity theory, orbits follow geodesic trajectories which are usually approximated very well by the Newtonian predictions (except where there are very strong gravity fields and very high speeds) but the differences are measurable. Essentially all the experimental evidence that can distinguish between the theories agrees with relativity theory to within experimental measurement accuracy. The original vindication of general relativity is that it was able to account for the remaining unexplained amount in precession of Mercury's perihelion first noted by Le Verrier. However, Newton's solution is still used for most short term purposes since it is significantly easier to use and sufficiently accurate.
Within a planetary system, planets, dwarf planets, asteroids and other minor planets, comets, and space debris orbit the system's barycenter in elliptical orbits. A comet in a parabolic or hyperbolic orbit about a barycenter is not gravitationally bound to the star and therefore is not considered part of the star's planetary system. Bodies which are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about a barycenter near or within that planet.
Owing to mutual gravitational perturbations, the eccentricities of the planetary orbits vary over time. Mercury, the smallest planet in the Solar System, has the most eccentric orbit. At the present epoch, Mars has the next largest eccentricity while the smallest orbital eccentricities are seen with Venus and Neptune.
As two objects orbit each other, the periapsis is that point at which the two objects are closest to each other and the apoapsis is that point at which they are the farthest. (More specific terms are used for specific bodies. For example, "perigee" and "apogee" are the lowest and highest parts of an orbit around Earth, while "perihelion" and "aphelion" are the closest and farthest points of an orbit around the Sun.)
In the case of planets orbiting a star, the mass of the star and all its satellites are calculated to be at a single point called the barycenter. The paths of all the star's satellites are elliptical orbits about that barycenter. Each satellite in that system will have its own elliptical orbit with the barycenter at one focal point of that ellipse. At any point along its orbit, any satellite will have a certain value of kinetic and potential energy with respect to the barycenter, and that energy is a constant value at every point along its orbit. As a result, as a planet approaches periapsis, the planet will increase in speed as its potential energy decreases; as a planet approaches apoapsis, its velocity will decrease as its potential energy increases.
There are a few common ways of understanding orbits:
As an illustration of an orbit around a planet, the Newton's cannonball model may prove useful (see image below). This is a 'thought experiment', in which a cannon on top of a tall mountain is able to fire a cannonball horizontally at any chosen muzzle speed. The effects of air friction on the cannonball are ignored (or perhaps the mountain is high enough that the cannon is above the Earth's atmosphere, which is the same thing).
If the cannon fires its ball with a low initial speed, the trajectory of the ball curves downward and hits the ground (A). As the firing speed is increased, the cannonball hits the ground farther away from the cannon (B), because while the ball is still falling towards the ground, the ground is increasingly curving away from it (see first point, above). All these motions are actually "orbits" in a technical sense – they are describing a portion of an elliptical path around the center of gravity – but the orbits are interrupted by striking the Earth.
If the cannonball is fired with sufficient speed, the ground curves away from the ball at least as much as the ball falls – so the ball never strikes the ground. It is now in what could be called a non-interrupted, or circumnavigating, orbit. For any specific combination of height above the center of gravity and mass of the planet, there is one specific firing speed (unaffected by the mass of the ball, which is assumed to be very small relative to the Earth's mass) that produces a circular orbit, as shown in (C).
As the firing speed is increased beyond this, non-interrupted elliptic orbits are produced; one is shown in (D). If the initial firing is above the surface of the Earth as shown, there will also be non-interrupted elliptical orbits at slower firing speed; these will come closest to the Earth at the point half an orbit beyond, and directly opposite the firing point, below the circular orbit.
At a specific horizontal firing speed called escape velocity, dependent on the mass of the planet, an open orbit (E) is achieved that has a parabolic path. At even greater speeds the object will follow a range of hyperbolic trajectories. In a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space" never to return.
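These firing-speed regimes can be made quantitative. For a body at distance $r$ from the center of a planet with standard gravitational parameter $\mu = GM$, the circular speed of case (C) and the escape speed of case (E) are, in the standard two-body idealisation:

$$v_{\text{circ}} = \sqrt{\frac{\mu}{r}}, \qquad v_{\text{esc}} = \sqrt{\frac{2\mu}{r}} = \sqrt{2}\,v_{\text{circ}}$$

Horizontal firing speeds between these two values yield the non-interrupted ellipses of case (D).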
The velocity relationship of two moving objects with mass can thus be considered in four practical classes, with subtypes:
It is worth noting that orbital rockets are launched vertically at first to lift the rocket above the atmosphere (which causes frictional drag), and then slowly pitch over and finish firing the rocket engine parallel to the atmosphere to achieve orbital speed.
Once in orbit, their speed keeps them in orbit above the atmosphere. If, for example, an elliptical orbit dips into dense air, the object will lose speed and re-enter (i.e., fall). Occasionally a spacecraft will intentionally intercept the atmosphere, in an act commonly referred to as an aerobraking maneuver.
In most situations relativistic effects can be neglected, and Newton's laws give a sufficiently accurate description of motion. The acceleration of a body is equal to the sum of the forces acting on it, divided by its mass, and the gravitational force acting on a body is proportional to the product of the masses of the two attracting bodies and decreases inversely with the square of the distance between them. To this Newtonian approximation, for a system of two point masses or spherical bodies, only influenced by their mutual gravitation (called a two-body problem), their trajectories can be exactly calculated. If the heavier body is much more massive than the smaller, as in the case of a satellite or small moon orbiting a planet or for the Earth orbiting the Sun, it is accurate enough and convenient to describe the motion in terms of a coordinate system that is centered on the heavier body, and we say that the lighter body is in orbit around the heavier. For the case where the masses of two bodies are comparable, an exact Newtonian solution is still available, and can be had by placing the coordinate system at the center of mass of the system.
Energy is associated with gravitational fields. A stationary body far from another can do external work if it is pulled towards it, and therefore has gravitational "potential energy". Since work is required to separate two bodies against the pull of gravity, their gravitational potential energy increases as they are separated, and decreases as they approach one another. For point masses the gravitational energy decreases to zero as they approach zero separation. It is convenient and conventional to assign the potential energy as having zero value when they are an infinite distance apart, and hence it has a negative value (since it decreases from zero) for smaller finite distances.
When only two gravitational bodies interact, their orbits follow a conic section. The orbit can be open (implying the object never returns) or closed (returning). Which it is depends on the total energy (kinetic + potential energy) of the system. In the case of an open orbit, the speed at any position of the orbit is at least the escape velocity for that position, in the case of a closed orbit, the speed is always less than the escape velocity. Since the kinetic energy is never negative, if the common convention is adopted of taking the potential energy as zero at infinite separation, the bound orbits will have negative total energy, the parabolic trajectories zero total energy, and hyperbolic orbits positive total energy.
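These energy classes can be stated compactly. With the zero-at-infinity convention just described, the specific orbital energy (energy per unit mass of the small body) is

$$\varepsilon = \frac{v^2}{2} - \frac{\mu}{r}$$

where $\mu = GM$; $\varepsilon < 0$ gives a bound elliptic orbit, $\varepsilon = 0$ a parabolic trajectory, and $\varepsilon > 0$ a hyperbolic one.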
An open orbit will have a parabolic shape if it has velocity of exactly the escape velocity at that point in its trajectory, and it will have the shape of a hyperbola when its velocity is greater than the escape velocity. When bodies with escape velocity or greater approach each other, they will briefly curve around each other at the time of their closest approach, and then separate, forever.
All closed orbits have the shape of an ellipse. A circular orbit is a special case, wherein the foci of the ellipse coincide. The point where the orbiting body is closest to Earth is called the perigee, and is called the periapsis (less properly, "perifocus" or "pericentron") when the orbit is about a body other than Earth. The point where the satellite is farthest from Earth is called the apogee, apoapsis, or sometimes apifocus or apocentron. A line drawn from periapsis to apoapsis is the line-of-apsides. This is the major axis of the ellipse, the line through its longest part.
Bodies following closed orbits repeat their paths with a certain time called the period. This motion is described by the empirical laws of Kepler, which can be mathematically derived from Newton's laws. These can be formulated as follows:
Note that while bound orbits of a point mass or a spherical body with a Newtonian gravitational field are closed ellipses, which repeat the same path exactly and indefinitely, any non-spherical or non-Newtonian effects (such as caused by the slight oblateness of the Earth, or by relativistic effects, thereby changing the gravitational field's behavior with distance) will cause the orbit's shape to depart from the closed ellipses characteristic of Newtonian two-body motion. The two-body solutions were published by Newton in Principia in 1687. In 1912, Karl Fritiof Sundman developed a converging infinite series that solves the three-body problem; however, it converges too slowly to be of much use. Except for special cases like the Lagrangian points, no method is known to solve the equations of motion for a system with four or more bodies.
Rather than an exact closed form solution, orbits with many bodies can be approximated with arbitrarily high accuracy. These approximations take two forms:
Differential simulations with large numbers of objects perform the calculations in a hierarchical pairwise fashion between centers of mass. Using this scheme, galaxies, star clusters and other large assemblages of objects have been simulated.
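As a concrete illustration, here is a minimal sketch of such a differential simulation in C, assuming point masses and a fixed step size (the body count, units, and field names are illustrative; the integrator is a standard kick-drift-kick leapfrog):

    #include <math.h>

    #define N 3
    static const double G = 6.674e-11;  /* SI units assumed for this sketch */

    typedef struct { double x[3], v[3], m; } Body;

    /* Sum gravitational accelerations pairwise, once per pair. */
    static void accelerations(const Body b[N], double a[N][3]) {
        for (int i = 0; i < N; i++)
            for (int k = 0; k < 3; k++) a[i][k] = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++) {
                double d[3], r2 = 0.0;
                for (int k = 0; k < 3; k++) { d[k] = b[j].x[k] - b[i].x[k]; r2 += d[k]*d[k]; }
                double inv_r3 = 1.0 / (r2 * sqrt(r2));
                for (int k = 0; k < 3; k++) {
                    a[i][k] += G * b[j].m * d[k] * inv_r3;  /* pull of j on i */
                    a[j][k] -= G * b[i].m * d[k] * inv_r3;  /* equal and opposite */
                }
            }
    }

    void leapfrog_step(Body b[N], double dt) {
        double a[N][3];
        accelerations(b, a);
        for (int i = 0; i < N; i++)
            for (int k = 0; k < 3; k++) b[i].v[k] += 0.5 * dt * a[i][k]; /* kick */
        for (int i = 0; i < N; i++)
            for (int k = 0; k < 3; k++) b[i].x[k] += dt * b[i].v[k];     /* drift */
        accelerations(b, a);
        for (int i = 0; i < N; i++)
            for (int k = 0; k < 3; k++) b[i].v[k] += 0.5 * dt * a[i][k]; /* kick */
    }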
The Earth follows an ellipse round the Sun; but unlike the ellipse followed by a pendulum or an object attached to a spring, the Sun is at a focal point of the ellipse and not at its centre. The following derivation applies to such an elliptical orbit.
We start only with the Newtonian law of gravitation, stating that the gravitational acceleration towards the central body is related to the inverse of the square of the distance between them, namely

$$F_2 = -\frac{G m_1 m_2}{r^2}\hat{\mathbf{r}} \qquad (1)$$

where $F_2$ is the force acting on the mass $m_2$ caused by the gravitational attraction mass $m_1$ has for $m_2$, $G$ is the universal gravitational constant, and $r$ is the distance between the centers of the two masses.
From Newton's second law, the summation of the forces acting on $m_2$ is related to that body's acceleration:

$$F_2 = m_2 A_2 \qquad (2)$$

where $A_2$ is the acceleration of $m_2$ caused by the force of gravitational attraction $F_2$ of $m_1$ acting on $m_2$.
Combining Eq. 1 and 2:

$$m_2 A_2 = -\frac{G m_1 m_2}{r^2}\hat{\mathbf{r}}$$

Solving for the acceleration, $A_2$:

$$A_2 = -\frac{\mu}{r^2}\hat{\mathbf{r}}$$

where $\mu$ is the standard gravitational parameter, in this case $G m_1$. It is understood that the system being described is $m_2$, hence the subscripts can be dropped.
We assume that the central body is massive enough that it can be considered to be stationary and we ignore the more subtle effects of general relativity.
When a pendulum or an object attached to a spring swings in an ellipse, the inward acceleration/force is proportional to the distance $r$. Due to the way vectors add, the components of the force in the $\hat{\mathbf{x}}$ or in the $\hat{\mathbf{y}}$ directions are also proportional to the respective components of the distances, $x$ and $y$. Hence, the entire analysis can be done separately in these dimensions. This results in the harmonic equations $x = A\cos(t)$ and $y = B\sin(t)$ of the ellipse. In contrast, with the decreasing relationship $1/r^2$, the dimensions cannot be separated.
The location of the orbiting object at the current time $t$ is located in the plane using vector calculus in polar coordinates, both with the standard Euclidean basis and with the polar basis, with the origin coinciding with the center of force. Let $r$ be the distance between the object and the center and $\theta$ be the angle it has rotated. Let $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ be the standard Euclidean bases and let $\hat{\mathbf{r}} = \cos(\theta)\hat{\mathbf{x}} + \sin(\theta)\hat{\mathbf{y}}$ and $\hat{\boldsymbol{\theta}} = -\sin(\theta)\hat{\mathbf{x}} + \cos(\theta)\hat{\mathbf{y}}$ be the radial and transverse polar basis, the first being the unit vector pointing from the central body to the current location of the orbiting object and the second being the orthogonal unit vector pointing in the direction that the orbiting object would travel if orbiting in a counterclockwise circle. Then the vector to the orbiting object is

$$\mathbf{O} = r\hat{\mathbf{r}}$$

We use $\dot{r}$ and $\dot{\theta}$ to denote the standard derivatives of how this distance and angle change over time. We take the derivative of a vector to see how it changes over time by subtracting its location at time $t$ from that at time $t + \delta t$ and dividing by $\delta t$. The result is also a vector. Because our basis vector $\hat{\mathbf{r}}$ moves as the object orbits, we start by differentiating it. From time $t$ to $t + \delta t$, the vector $\hat{\mathbf{r}}$ keeps its beginning at the origin and rotates from angle $\theta$ to $\theta + \dot{\theta}\,\delta t$, which moves its head a distance $\dot{\theta}\,\delta t$ in the perpendicular direction $\hat{\boldsymbol{\theta}}$, giving a derivative of $\dot{\theta}\hat{\boldsymbol{\theta}}$:

$$\frac{d\hat{\mathbf{r}}}{dt} = \dot{\theta}\hat{\boldsymbol{\theta}}, \qquad \frac{d\hat{\boldsymbol{\theta}}}{dt} = -\dot{\theta}\hat{\mathbf{r}}$$

We can now find the velocity and acceleration of our orbiting object:

$$\dot{\mathbf{O}} = \dot{r}\hat{\mathbf{r}} + r\dot{\theta}\hat{\boldsymbol{\theta}}$$
$$\ddot{\mathbf{O}} = \left(\ddot{r} - r\dot{\theta}^2\right)\hat{\mathbf{r}} + \left(r\ddot{\theta} + 2\dot{r}\dot{\theta}\right)\hat{\boldsymbol{\theta}}$$

The coefficients of $\hat{\mathbf{r}}$ and $\hat{\boldsymbol{\theta}}$ give the accelerations in the radial and transverse directions. As said, Newton gives the first as $-\mu/r^2$ due to gravity and the second as zero:

$$\ddot{r} - r\dot{\theta}^2 = -\frac{\mu}{r^2} \qquad (1)$$
$$r\ddot{\theta} + 2\dot{r}\dot{\theta} = 0 \qquad (2)$$

Equation (2) can be rearranged by recognising it as the derivative of a product:

$$r\ddot{\theta} + 2\dot{r}\dot{\theta} = \frac{1}{r}\frac{d}{dt}\left(r^2\dot{\theta}\right) = 0$$

We can multiply through by $r$ because it is not zero unless the orbiting object crashes. Then having the derivative be zero gives that the function is a constant:

$$r^2\dot{\theta} = h$$

which is actually the theoretical proof of Kepler's second law (a line joining a planet and the Sun sweeps out equal areas during equal intervals of time). The constant of integration, $h$, is the angular momentum per unit mass.
In order to get an equation for the orbit from equation (1), we need to eliminate time. (See also Binet equation.) In polar coordinates, this would express the distance $r$ of the orbiting object from the center as a function of its angle $\theta$. However, it is easier to introduce the auxiliary variable $u = 1/r$ and to express $u$ as a function of $\theta$. Derivatives of $r$ with respect to time may be rewritten as derivatives of $u$ with respect to angle:

$$\dot{r} = -\frac{1}{u^2}\frac{du}{d\theta}\dot{\theta} = -h\frac{du}{d\theta}, \qquad \ddot{r} = -h\frac{d^2u}{d\theta^2}\dot{\theta} = -h^2u^2\frac{d^2u}{d\theta^2}$$

Plugging these into (1) gives

$$-h^2u^2\frac{d^2u}{d\theta^2} - h^2u^3 = -\mu u^2$$
$$\frac{d^2u}{d\theta^2} + u = \frac{\mu}{h^2}$$

So for the gravitational force – or, more generally, for "any" inverse square force law – the right hand side of the equation becomes a constant and the equation is seen to be the harmonic equation (up to a shift of origin of the dependent variable). The solution is:

$$u(\theta) = \frac{\mu}{h^2} + A\cos(\theta - \theta_0)$$

where $A$ and $\theta_0$ are arbitrary constants.
This resulting equation of the orbit of the object is that of an ellipse in polar form relative to one of the focal points. This is put into a more standard form by letting $e \equiv h^2 A/\mu$ be the eccentricity and $a \equiv \dfrac{h^2/\mu}{1 - e^2}$ be the semi-major axis. Finally, letting $\theta_0 = 0$ so the long axis of the ellipse is along the positive $x$ coordinate, we obtain

$$r(\theta) = \frac{a\left(1 - e^2\right)}{1 + e\cos\theta}$$
The above classical (Newtonian) analysis of orbital mechanics assumes that the more subtle effects of general relativity, such as frame dragging and gravitational time dilation, are negligible. Relativistic effects cease to be negligible when near very massive bodies (as with the precession of Mercury's orbit about the Sun), or when extreme precision is needed (as with calculations of the orbital elements and time signal references for GPS satellites).
The analysis so far has been two dimensional; it turns out that an unperturbed orbit is two-dimensional in a plane fixed in space, and thus the extension to three dimensions requires simply rotating the two-dimensional plane into the required angle relative to the poles of the planetary body involved.
The rotation needed to do this in three dimensions requires three numbers to determine uniquely; traditionally these are expressed as three angles.
The orbital period is simply how long an orbiting body takes to complete one orbit.
Six parameters are required to specify a Keplerian orbit about a body. For example, the three numbers that specify the body's initial position, and the three values that specify its velocity will define a unique orbit that can be calculated forwards (or backwards) in time. However, traditionally the parameters used are slightly different.
The traditionally used set of orbital elements is called the set of Keplerian elements, after Johannes Kepler and his laws. The six Keplerian elements are the eccentricity, the semi-major axis, the inclination, the longitude of the ascending node, the argument of periapsis, and the mean anomaly at epoch.
In principle once the orbital elements are known for a body, its position can be calculated forward and backwards indefinitely in time. However, in practice, orbits are affected or perturbed, by other forces than simple gravity from an assumed point source (see the next section), and thus the orbital elements change over time.
An orbital perturbation occurs when a force or impulse which is much smaller than the overall force or average impulse of the main gravitating body, and which is external to the two orbiting bodies, causes an acceleration that changes the parameters of the orbit over time.
A small radial impulse given to a body in orbit changes the eccentricity, but not the orbital period (to first order). A prograde or retrograde impulse (i.e. an impulse applied along the orbital motion) changes both the eccentricity and the orbital period. Notably, a prograde impulse at periapsis raises the altitude at apoapsis, and vice versa, and a retrograde impulse does the opposite. A transverse impulse (out of the orbital plane) causes rotation of the orbital plane without changing the period or eccentricity. In all instances, a closed orbit will still intersect the perturbation point.
If an orbit is about a planetary body with significant atmosphere, its orbit can decay because of drag. Particularly at each periapsis, the object experiences atmospheric drag, losing energy. Each time, the orbit grows less eccentric (more circular) because the object loses kinetic energy precisely when that energy is at its maximum. This is similar to the effect of slowing a pendulum at its lowest point; the highest point of the pendulum's swing becomes lower. With each successive slowing more of the orbit's path is affected by the atmosphere and the effect becomes more pronounced. Eventually, the effect becomes so great that the maximum kinetic energy is not enough to return the orbit above the limits of the atmospheric drag effect. When this happens the body will rapidly spiral down and intersect the central body.
The bounds of an atmosphere vary wildly. During a solar maximum, the Earth's atmosphere causes drag up to a hundred kilometres higher than during a solar minimum.
Some satellites with long conductive tethers can also experience orbital decay because of electromagnetic drag from the Earth's magnetic field. As the wire cuts the magnetic field it acts as a generator, moving electrons from one end to the other. The orbital energy is converted to heat in the wire.
Orbits can be artificially influenced through the use of rocket engines which change the kinetic energy of the body at some point in its path. This is the conversion of chemical or electrical energy to kinetic energy. In this way changes in the orbit shape or orientation can be facilitated.
Another method of artificially influencing an orbit is through the use of solar sails or magnetic sails. These forms of propulsion require no propellant or energy input other than that of the Sun, and so can be used indefinitely. See statite for one such proposed use.
Orbital decay can occur due to tidal forces for objects below the synchronous orbit for the body they're orbiting. The gravity of the orbiting object raises tidal bulges in the primary, and since below the synchronous orbit the orbiting object is moving faster than the body's surface the bulges lag a short angle behind it. The gravity of the bulges is slightly off of the primary-satellite axis and thus has a component along the satellite's motion. The near bulge slows the object more than the far bulge speeds it up, and as a result the orbit decays. Conversely, the gravity of the satellite on the bulges applies torque on the primary and speeds up its rotation. Artificial satellites are too small to have an appreciable tidal effect on the planets they orbit, but several moons in the Solar System are undergoing orbital decay by this mechanism. Mars' innermost moon Phobos is a prime example, and is expected to either impact Mars' surface or break up into a ring within 50 million years.
Orbits can decay via the emission of gravitational waves. This mechanism is extremely weak for most stellar objects, only becoming significant in cases where there is a combination of extreme mass and extreme acceleration, such as with black holes or neutron stars that are orbiting each other closely.
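For a circular binary, the standard quadrupole-radiation result (Peters 1964; not quoted in the text above) gives the decay rate da/dt = -(64/5) G^3 m1 m2 (m1 + m2) / (c^5 a^3), so the time to coalescence from separation a is a^4/(4β), where β is the constant in that rate. A rough Python sketch:

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg

    def inspiral_time(m1, m2, a):
        """Coalescence time (s) of a circular binary starting at separation a (m),
        using the Peters (1964) quadrupole formula."""
        beta = (64 / 5) * G**3 * m1 * m2 * (m1 + m2) / c**5
        return a**4 / (4 * beta)

    # two 1.4-solar-mass neutron stars separated by one million kilometres
    t = inspiral_time(1.4 * M_SUN, 1.4 * M_SUN, 1.0e9)
    print(f"inspiral time ~ {t / 3.156e7:.1e} years")   # on the order of 10^8 years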
The standard analysis of orbiting bodies assumes that all bodies consist of uniform spheres, or more generally, concentric shells each of uniform density. It can be shown that such bodies are gravitationally equivalent to point sources.
However, in the real world many bodies rotate, and this introduces oblateness, distorts the gravity field, and gives a quadrupole moment to the gravitational field that is significant at distances comparable to the radius of the body. In the general case, the gravitational potential of a rotating body such as a planet is expanded in multipoles accounting for its departures from spherical symmetry. From the point of view of satellite dynamics, of particular relevance are the so-called even zonal harmonic coefficients, or even zonals, since they induce secular orbital perturbations which are cumulative over time spans longer than the orbital period. They depend on the orientation of the body's symmetry axis in space and affect, in general, the whole orbit, with the exception of the semi-major axis.
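The Earth's dominant even zonal is J2, whose classic secular effect on the ascending node is dΩ/dt = -(3/2) J2 (R/p)^2 n cos i, where p = a(1 - e^2) and n is the mean motion (a standard astrodynamics formula, included here as an illustration rather than taken from the text). For ISS-like values it reproduces the familiar westward nodal drift of roughly 5 degrees per day.

    import math

    MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    R = 6378.137e3        # Earth's equatorial radius, m
    J2 = 1.08263e-3       # Earth's dominant even zonal harmonic

    def node_rate(a, e, i):
        """Secular rate of the ascending node (rad/s) caused by J2."""
        n = math.sqrt(MU / a**3)   # mean motion
        p = a * (1 - e**2)         # semi-latus rectum
        return -1.5 * J2 * (R / p)**2 * n * math.cos(i)

    # roughly ISS-like orbit: 6,780 km semi-major axis, 51.6 degrees inclination
    rate = node_rate(6.78e6, 0.0, math.radians(51.6))
    print(f"{math.degrees(rate) * 86_400:.2f} deg/day")   # about -5.0 deg/day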
The effects of other gravitating bodies can be significant. For example, the orbit of the Moon cannot be accurately described without allowing for the action of the Sun's gravity as well as the Earth's. One approximate result is that bodies will usually have reasonably stable orbits around a heavier planet or moon, in spite of these perturbations, provided they are orbiting well within the heavier body's Hill sphere.
When there are more than two gravitating bodies it is referred to as an n-body problem. Most n-body problems have no closed form solution, although some special cases have been formulated.
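In practice such systems are integrated numerically. Below is a minimal sketch of my own (arbitrary units, G = 1) using the symplectic kick-drift-kick leapfrog scheme for any number of point masses; none of the names or values come from the text above.

    import numpy as np

    def leapfrog(masses, pos, vel, dt, steps, G=1.0):
        """Integrate an n-body system with the kick-drift-kick leapfrog scheme.
        pos and vel are (n, 2) float arrays."""
        def accel(p):
            a = np.zeros_like(p)
            for i in range(len(masses)):
                d = p - p[i]   # vectors pointing from body i to every body
                r3 = (np.einsum('ij,ij->i', d, d) + 1e-12) ** 1.5
                a += G * masses[i] * d / r3[:, None]   # attraction toward body i
            return a
        for _ in range(steps):
            vel += 0.5 * dt * accel(pos)   # kick
            pos += dt * vel                # drift
            vel += 0.5 * dt * accel(pos)   # kick
        return pos, vel

    # a star with two light planets on roughly circular orbits (arbitrary units)
    m = np.array([1.0, 1e-3, 1e-4])
    p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])
    v = np.array([[0.0, 0.0], [0.0, 1.0], [-0.82, 0.0]])
    p, v = leapfrog(m, p, v, dt=1e-3, steps=10_000)
    print(p)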
For smaller bodies particularly, light and stellar wind can cause significant perturbations to the attitude and direction of motion of the body, and over time these can be significant. Of the planetary bodies, the motion of asteroids is particularly affected over long periods when the asteroids are rotating relative to the Sun.
Mathematicians have discovered that it is possible in principle to have multiple bodies in non-elliptical orbits that repeat periodically, although most such orbits are not stable regarding small perturbations in mass, position, or velocity. However, some special stable cases have been identified, including a planar figure-eight orbit occupied by three moving bodies. Further studies have discovered that nonplanar orbits are also possible, including one involving 12 masses moving in 4 roughly circular, interlocking orbits topologically equivalent to the edges of a cuboctahedron.
Finding such orbits naturally occurring in the universe is thought to be extremely unlikely, because of the improbability of the required conditions occurring by chance.
Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and Newton's law of universal gravitation. It is a core discipline within space mission design and control. Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbit plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers. General relativity is a more exact theory than Newton's laws for calculating orbits, and is sometimes necessary for greater accuracy or in high-gravity situations (such as orbits close to the Sun).
All geostationary orbits are also geosynchronous, but not all geosynchronous orbits are geostationary. A geostationary orbit stays exactly above the equator, whereas a geosynchronous orbit may swing north and south to cover more of the Earth's surface. Both complete one full orbit of Earth per sidereal day (relative to the stars, not the Sun).
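The geostationary radius itself follows from Kepler's third law with the sidereal day as the period, r = (μT^2 / 4π^2)^(1/3); the short computation below (an illustration using standard Earth constants) recovers the familiar altitude of about 35,786 km.

    import math

    MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    T = 86_164.0905       # sidereal day, s (not the 86,400 s solar day)

    r = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
    print(f"geostationary radius  ~ {r / 1e3:,.0f} km")                  # ~42,164 km
    print(f"altitude over equator ~ {(r - 6_378.137e3) / 1e3:,.0f} km")  # ~35,786 km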
The gravitational constant "G" has been calculated as approximately 6.674 × 10^−11 (kg/m^3)^−1 s^−2 (equivalently, 6.674 × 10^−11 m^3 kg^−1 s^−2).
Thus the constant has dimension density^−1 time^−2. This corresponds to the following properties.
Scaling of distances (including sizes of bodies, while keeping the densities the same) gives similar orbits without scaling the time: if for example distances are halved, masses are divided by 8, gravitational forces by 16 and gravitational accelerations by 2. Hence velocities are halved and orbital periods remain the same. Similarly, when an object is dropped from a tower, the time it takes to fall to the ground remains the same with a scale model of the tower on a scale model of the Earth.
Scaling of distances while keeping the masses the same (in the case of point masses, or by reducing the densities) gives similar orbits; if distances are multiplied by 4, gravitational forces and accelerations are divided by 16, velocities are halved and orbital periods are multiplied by 8.
When all densities are multiplied by 4, orbits are the same; gravitational forces are multiplied by 16 and accelerations by 4, velocities are doubled and orbital periods are halved.
When all densities are multiplied by 4, and all sizes are halved, orbits are similar; masses are divided by 2, gravitational forces are the same, gravitational accelerations are doubled. Hence velocities are the same and orbital periods are halved.
In all these cases of scaling: if densities are multiplied by 4, times are halved; if velocities are doubled, forces are multiplied by 16.
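These scaling rules can be spot-checked with Kepler's third law, T = 2π sqrt(d^3 / Gm) (a quick sanity check of my own, not part of the original text): halving all distances at fixed density leaves the period unchanged, while quadrupling densities halves it.

    import math

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

    def period(d, rho, r):
        """Orbital period at distance d around a sphere of radius r and density rho."""
        m = rho * (4 / 3) * math.pi * r**3
        return 2 * math.pi * math.sqrt(d**3 / (G * m))

    base = period(1e7, 5_500.0, 6.4e6)
    halved = period(0.5e7, 5_500.0, 3.2e6)     # all distances halved, same density
    denser = period(1e7, 4 * 5_500.0, 6.4e6)   # same sizes, densities times four

    print(f"{halved / base:.6f}")   # 1.000000 -> period unchanged
    print(f"{denser / base:.6f}")   # 0.500000 -> period halved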
These properties are illustrated in the formula (derived from the formula for the orbital period)

GT^2 ρ = 3π (a/r)^3

for an elliptical orbit with semi-major axis "a", of a small body around a spherical body with radius "r" and average density "ρ", where "T" is the orbital period. See also Kepler's Third Law.
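Plugging rough figures for the Moon's orbit into this relation illustrates it (values are approximate and for illustration only): with a ≈ 384,400 km, r ≈ 6,371 km and ρ ≈ 5,514 kg/m^3 for the Earth, solving GT^2 ρ = 3π (a/r)^3 for T recovers the sidereal month.

    import math

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    a = 384_400e3   # semi-major axis of the Moon's orbit, m
    r = 6_371e3     # Earth's mean radius, m
    rho = 5_514.0   # Earth's mean density, kg/m^3

    T = math.sqrt(3 * math.pi * (a / r)**3 / (G * rho))
    print(f"T ~ {T / 86_400:.1f} days")   # ~27.4 days, close to the sidereal month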
The application of certain orbits or orbital maneuvers to specific useful purposes has been the subject of patents.
Some bodies are tidally locked with other bodies, meaning that one side of the celestial body permanently faces its host object. This is the case for the Earth–Moon and Pluto–Charon systems. | https://en.wikipedia.org/wiki?curid=22498 |
Notary public
A notary public (or notary or public notary) of the common law is a public officer constituted by law to serve the public in non-contentious matters usually concerned with estates, deeds, powers-of-attorney, and foreign and international business. A notary's main functions are to administer oaths and affirmations, take affidavits and statutory declarations, witness and authenticate the execution of certain classes of documents, take acknowledgments of deeds and other conveyances, protest notes and bills of exchange, provide notice of foreign drafts, prepare marine or ship's protests in cases of damage, provide exemplifications and notarial copies, and perform certain other official acts depending on the jurisdiction. Any such act is known as a notarization. The term "notary public" only refers to common-law notaries and should not be confused with civil-law notaries.
With the exceptions of Louisiana, Puerto Rico, Quebec (whose private law is based on civil law), and British Columbia (whose notarial tradition stems from scrivener notary practice), a notary public in the rest of the United States and most of Canada has powers that are far more limited than those of civil-law or other common-law notaries, both of whom are qualified lawyers admitted to the bar: such notaries may be referred to as notaries-at-law or lawyer notaries. Therefore, at common law, notarial service is distinctly different from the practice of law, and giving legal advice and preparing legal instruments is forbidden to lay notaries such as those appointed throughout most of the United States of America.
Notaries are appointed by a government authority, such as a court or lieutenant governor, or by a regulating body often known as a society or faculty of notaries public. For lawyer notaries, an appointment may be for life, while lay notaries are usually commissioned for a briefer term, with the possibility of renewal.
In most common law countries, appointments and their number for a given notarial district are highly regulated. However, since the majority of American notaries are lay persons who provide officially required services, commission numbers are not regulated, which is part of the reason why there are far more notaries in the United States than in other countries (4.5 million vs. approx. 740 in England and Wales and approx. 1,250 in Australia and New Zealand). Furthermore, all U.S. and some Canadian notarial functions are applied to domestic affairs and documents, where fully systematized attestations of signatures and acknowledgment of deeds are a universal requirement for document authentication. By contrast, outside North American common law jurisdictions, notarial practice is restricted to international legal matters or where a foreign jurisdiction is involved, and almost all notaries are also qualified lawyers.
For the purposes of authentication, most countries require commercial or personal documents which originate from or are signed in another country to be notarized before they can be used or officially recorded or before they can have any legal effect. To these documents a notary affixes a notarial certificate which attests to the execution of the document, usually by the person who appears before the notary, known as an appearer or constituent (U.S.). In places where lawyer notaries are the norm, a notary may also draft legal instruments known as notarial acts or deeds which have probative value and executory force, as they do in civil law jurisdictions. Originals or secondary originals are then filed and stored in the notary's archives, or protocol.
Notaries are generally required to undergo special training in the performance of their duties. Some must also first serve as an apprentice before being commissioned or licensed to practice their profession. In many countries, even licensed lawyers, e.g., barristers or solicitors, must follow a prescribed specialized course of study and be mentored for two years before being allowed to practice as a notary (e.g., British Columbia, England). However, notaries public in the U.S., of which the vast majority are lay people, require only a brief training seminar and are expressly forbidden to engage in any activities that could be construed as the unlicensed practice of law unless they are also qualified attorneys. Notarial practice is universally considered to be distinct and separate from that of an attorney (solicitor/barrister). In England and Wales, there is a course of study for notaries which is conducted under the auspices of the University of Cambridge and the Society of Notaries of England and Wales. In the State of Victoria, Australia, applicants for appointment must first complete a Graduate Diploma of Notarial Practice which is administered by the Sir Zelman Cowen Centre in Victoria University, Melbourne.
In bi-juridical jurisdictions, such as South Africa or Louisiana, the office of notary public is a legal profession with educational requirements similar to those for attorneys. Many even have institutes of higher learning that offer degrees in notarial law. Therefore, despite their name, "notaries public" in these jurisdictions are in effect civil law notaries.
Notaries public (also called "notaries", "notarial officers", or "public notaries") hold an office that can trace its origins back to the ancient Roman Republic, when they were called "scribae" ("scribes"), "tabelliones forenses", or "personae publicae".
The history of notaries is set out in detail in Chapter 1 of "Brooke's Notary" (13th edition):
A collection of articles on notary history, including Ancient Egypt, Phoenicia, Babylonia, Rome, Greece, medieval Europe, the Renaissance, Columbus, Spanish Conquistadors, French Louisiana, New England colonial notaries, Republic of Texas notaries and Colorado Old West notaries, is available in the notary history section of the Colorado Notary Blog.
The duties and functions of notaries public are described in "Brooke's Notary" on page 19 in these terms:
A notary, in almost all common law jurisdictions other than most of North America, is a practitioner trained in the drafting and execution of legal documents. Notaries traditionally recorded matters of judicial importance as well as private transactions or events where an officially authenticated record or a document drawn up with professional skill or knowledge was required. The functions of notaries specifically include the preparation of certain types of documents (including international contracts, deeds, wills, and powers of attorney) and certification of their due execution, administering of oaths, witnessing affidavits and statutory declarations, certification of copy documents, noting and protesting of bills of exchange, and the preparation of ships' protests.
Documents certified by notaries are sealed with the notary's seal or stamp and are recorded by the notary in a register (also called a "protocol") maintained and permanently kept by him or her. These are known as "notarial acts".
In countries subscribing to the Hague Convention Abolishing the Requirement of Legalization for Foreign Public Documents or Apostille Convention, only one further act of certification is required, known as an apostille, and is issued by a government department (usually the Foreign Affairs Department or similar). For countries which are not subscribers to that convention, an "authentication" or "legalization" must be provided by one of a number of methods, including by the Foreign Affairs Ministry of the country from which the document is being sent or the embassy, Consulate-General, consulate or High Commission of the country to which it is being sent.
In all Australian states and territories (except Queensland) notaries public are appointed by the Supreme Court of the relevant state or territory. Very few have been appointed as a notary for more than one state or territory.
Queensland, like New Zealand, continues the practice of appointment by the Archbishop of Canterbury acting through the Master of the Faculties.
Australian notaries are lawyers and are members of the Australian and New Zealand College of Notaries, the Society of Notaries of New South Wales Inc., the Public Notaries Society of Western Australia Inc, and other state-based societies. The overall number of lawyers who choose to become a notary is relatively low. For example, in South Australia (a state with a population of 1.5 million), of the over 2,500 lawyers in that state only about 100 are also notaries, and most of those do not actively practise as such. In Melbourne, Victoria, in 2002 there were only 66 notaries for a city with a population of 3.5 million and only 90 for the entire state. In Western Australia, there were approximately 58 notaries as of 2017, serving a population of about 2.07 million people. Compare this with the United States, where it has been estimated that there are nearly 5 million notaries for a nation with a population of 296 million.
As Justice Debelle of the Supreme Court of South Australia said in the case of "In The Matter of an Application by Marilyn Reys Bos to be a Public Notary" [2003] SASC 320, delivered 12 September 2003, in refusing the application by a non-lawyer for appointment as a notary:
Historically there have been some very rare examples of patent attorneys or accountants being appointed, but that now seems to have ceased.
However, there are three significant differences between notaries and other lawyers.
Their principal duties include:
It is usual for Australian notaries to use an embossed seal with a red wafer, and now some notaries also use an inked stamp replicating the seal. It is also common for the seal or stamp to include the notary's chosen logo or symbol.
In South Australia and Scotland, it is acceptable for a notary to use the letters "NP" after their name. Thus a South Australian notary may have "John Smith LLB NP" or similar on his business card or letterhead.
Australian notaries do not hold "commissions" which can expire. Generally, once appointed they are authorized to act as a notary for life and can only be "struck off" the Roll of Notaries for proven misconduct. In certain states, for example, New South Wales and Victoria, they cease to be qualified to continue as a notary once they cease to hold a practicing certificate as a legal practitioner. Even judges, who do not hold practicing certificates, are not eligible to continue to practice as notaries.
Notaries in some states of Australia are regulated by legislation. In New South Wales the Public Notaries Act 1997 applies and in Victoria the Public Notaries Act 2001 applies.
There are also notary societies throughout Australia, and the societies keep searchable lists of their members: in New South Wales, The Society of Notaries of New South Wales Inc.; in Queensland, The Society of Notaries Queensland Inc.; in South Australia, the Notaries' Society of South Australia Inc.; and in Victoria, The Society of Notaries of Victoria Inc.
Notaries collecting information for the purposes of verification of the signature of the deponent might retain the details of documents which identify the deponent, and this information is subject to the Privacy Act 1988. A notary must protect the personal information the notary holds from misuse and loss and from unauthorised access, modification or disclosure.
All Australian jurisdictions also have justices of the peace (JP) or commissioners for affidavits and other unqualified persons who are qualified to take affidavits or statutory declarations and to certify documents. However, they can only do so if the relevant affidavit, statutory declaration or copy document is to be used only in Australia and not in a foreign country, with the possible exception of a few Commonwealth countries (not including the United Kingdom or New Zealand, except for very limited purposes). Justices of the peace (JPs) are usually laypersons who have minimal, if any, training (depending on the jurisdiction) but are of proven good character. Therefore, a US notary resembles an Australian JP rather than an Australian notary.
Canadian notaries public (except in the Province of British Columbia and Quebec) are very much like their American counterparts, generally restricted to administering oaths, witnessing signatures on affidavits and statutory declarations, providing acknowledgements, certifying true copies, and so forth.
In British Columbia, a notary public is more like a British or Australian notary. Notaries are appointed for life by the Supreme Court of British Columbia, and, as a self-regulating profession, the Society of Notaries Public of British Columbia is the regulatory body that oversees the profession and sets standards to maintain public confidence. A BC notary is also a Commissioner for Taking Affidavits for British Columbia by reason of office. Furthermore, BC notaries exercise far greater powers, being able to dispense legal advice and draft public instruments including:
In Nova Scotia a person may be a notary public, a commissioner of oaths, or both. A notary public and a commissioner of oaths are regulated by the provincial Notaries and Commissioners Act. Individuals hold a commission granted to them by the Minister of Justice.
Under the Act a notary public in Nova Scotia has the "power of drawing, passing, keeping and issuing all deeds and contracts, charter-parties and other mercantile transactions in this Province, and also of attesting all commercial instruments brought before him for public protestation, and otherwise of acting as is usual in the office of notary, and may demand, receive and have all the rights, profits and emoluments rightfully appertaining and belonging to the said calling of notary during pleasure."
Under the Act a commissioner of oaths is "authorized to administer oaths and take and receive affidavits, declarations and affirmations within the Province in and concerning any cause, matter or thing, depending or to be had in the Supreme Court, or any other court in the Province."
Every barrister of the Supreme Court of Nova Scotia is a commissioner of oaths but must receive an additional commission to act as a notary public.
"A Commissioner of Oaths is deemed to be an officer of the Supreme Court of Nova Scotia. Commissioners take declarations concerning any matter to come before a court in the Province.". Additionally, individuals with other specific qualifications, such as being a current Member of the Legislative Assembly, commissioned officer of the Royal Canadian Mounted Police or Canadian Forces make act as if explicitly being a Commissioner of Oaths.
In Quebec, civil-law notaries ("notaires") are full lawyers licensed to practice notarial law and regulated by the Chamber of Notaries of Quebec. Quebec notaries draft and prepare major legal instruments (notarial acts), provide complex legal advice, represent clients (out of court) and make appearances on their behalf, act as arbitrator, mediator, or conciliator, and even act as a court commissioner in non-contentious matters. To become a notary in Quebec, a candidate must hold a bachelor's degree in civil law and a one-year Master's in notarial law and serve a traineeship ("stage") before being admitted to practice.
The common-law notary public does not exist in Quebec. Instead, the province has Commissioners of Oaths ("Commissaires à l'assermentation"), who serve to authenticate legal documents at a fixed maximum rate of CAD $5.00.
The Commissioner of Oaths is empowered to administer and witness the swearing of oaths or solemn affirmations in the taking of an affidavit for any potential legal matter under the provincial or state legislation.
Witnessing the signature process and certification service are common tasks for the Commissioner of Oaths. Documents and attachments may need authentication, attestation, certification or notarization.
In India, the central government appoints notaries for the whole or any part of the country. State governments, too, appoint notaries for the whole or any part of their states. On an application being made, any person who has been practising as a lawyer for at least ten years is eligible to be appointed a notary. An applicant who is not a legal practitioner should be a member of the Indian Legal Service, or have held an office under the central or state government requiring special knowledge of law after enrolment as an advocate, or have held an office in the department of the Judge Advocate-General or in the armed forces.
In Iran, a notary public is a trained lawyer who must pass special examinations before being able to open an office and begin work. The Persian title denotes the head of the office, who is assisted by a deputy. Both of these persons must hold a bachelor's degree in law or a master's degree in civil law.
There is archival evidence showing that public notaries, acting pursuant to papal and imperial authority, practised in Ireland in the 13th century, and it is reasonable to assume that notaries functioned here before that time. In Ireland, public notaries were at various times appointed by the Archbishop of Canterbury and the Archbishop of Armagh. The position remained so until the Reformation.
After the Reformation, persons appointed to the office of public notary either in Great Britain or Ireland received the faculty by royal authority, and appointments under faculty from the Pope and the emperor ceased.
In 1871, under the Matrimonial Causes and Marriage Law (Ireland) Amendment Act 1870, the jurisdiction previously exercised by the Archbishop of Armagh in the appointment of notaries was vested in and became exercisable by the Lord Chancellor of Ireland.
In 1920, the power to appoint notaries public was transferred to the Lord Lieutenant of Ireland. The position in Ireland changed once again in 1924 following the establishment of the Irish Free State. Under the Courts of Justice Act, 1924 the jurisdiction over notaries public was transferred to the Chief Justice of the Irish Free State.
In 1961, under the Courts (Supplemental Provisions) Act of that year, the power to appoint notaries public became exercisable by the Chief Justice. This remains the position in Ireland, where notaries are appointed on petition to the Supreme Court, after passing prescribed examinations. The governing body is the Faculty of Notaries Public in Ireland. The vast majority of notaries in Ireland are also solicitors. A non-solicitor who was successful in the examinations set by the governing body applied in the standard way to the Chief Justice to be appointed a notary public. The Chief Justice heard the adjourned application on 3 March 2009 and appointed the non-solicitor as a notary on 18 July 2011.
In Ireland notaries public cannot agree on a standard fee due to competition law. In practice the price per signature appears to be €85. A cheaper alternative is to visit a commissioner for oaths who will charge less per signature, but that is only possible where whoever is to receive a document will recognize the signature of a commissioner for oaths.
In Malaysia, a notary public is a lawyer authorized by the Attorney General. The fees are regulated by the Notary Public (Fees) Rules 1954.
A commissioner for oaths is a person appointed by the Chief Justice under section 11 of Court of Judicature Act 1964, and Commissioners for Oaths Rules 1993.
A notary public in New Zealand is a lawyer authorised by the Archbishop of Canterbury in England to officially witness signatures on legal documents, collect sworn statements, administer oaths and certify the authenticity of legal documents usually for use overseas.
The Master of the Faculties appoints notaries in the exercise of the general authorities granted by s 3 of the Ecclesiastical Licences Act 1533 and the Public Notaries Act 1833. Recommendations are made by the New Zealand Society of Notaries, which normally requires an applicant to have 10 years' experience post admission as a lawyer and 5 years as a law firm partner or equivalent.
Also, because of Te Tiriti o Waitangi 1840 (a protectorate treaty between Her Majesty the Queen of England and the Māori tribes), each tribe is considered an independent sovereign authority with its own form of governance within a confederation, pursuant to its constitution or Declaration of Independence, He Wakaputanga o te Rangatiratanga o Nu Tireni 1835. Tribal chiefs (rangatira), or tribal government administrators delegated the position of notary public, may notarize legal documents, witness signatures, collect sworn statements, administer oaths and certify the authenticity of legal documents for use overseas. They may certify under the jurisdiction of Nu Tireni, Aotearoa, Te Ika a Māui, or Te Waipounamu.
Notaries in Sri Lanka are more akin to civil-law notaries; their main functions are conveyancing, the drafting of legal instruments, and the like. They are appointed under the Notaries Ordinance No. 1 of 1907. They must pass an examination held by the Ministry of Justice and serve an apprenticeship under a senior notary for a period of two years. Alternatively, attorneys-at-law who pass the conveyancing examination are also admitted as notaries public under warrant of the Minister. The Minister of Justice may appoint any attorney-at-law as a Commissioner for Oaths, authorized to certify and authenticate affidavits, documents and any such other certificates that are submitted by the general public with the intention of certification by a Commissioner for Oaths.
After the passage of the Ecclesiastical Licences Act 1533, which was a direct result of the Reformation in England, all notary appointments were issued directly through the Court of Faculties. The Court of Faculties is attached to the office of the Archbishop of Canterbury.
In England and Wales there are two main classes of notaries – general notaries and scrivener notaries. Their functions are almost identical. All notaries, like solicitors, barristers, legal executives, costs lawyers and licensed conveyancers, are also commissioners for oaths. They also acquire the same powers as solicitors and other law practitioners, with the exception of the right to represent others before the courts (unless also members of the bar or admitted as a solicitor) once they are commissioned notaries. In practice almost all English notaries, and all Scottish ones, are also solicitors, and usually practise as solicitors.
Commissioners of oaths are able to undertake the bulk of routine domestic attestation work within the UK. Many documents, including signatures for normal property transactions, do not need professional attestation of signature at all, a lay witness being sufficient.
In practice the need for notaries in purely English legal matters is very small; for example they are not involved in normal property transactions. Since a great many solicitors also perform the function of commissioners for oaths and can witness routine declarations etc. (all are qualified to do so, but not all offer the service), most work performed by notaries relates to international matters in some way. They witness or authenticate documents to be used abroad. Many English notaries have strong foreign language skills and often a foreign legal qualification. The work of notaries and solicitors in England is separate although most notaries are solicitors. The Notaries Society gives the number of notaries in England and Wales as "about 1,000," all but seventy of whom are solicitors.
Scrivener notaries get their name from the Scriveners' Company. Until 1999, when they lost this monopoly, they were the only notaries permitted to practise in the City of London. They did not have to first qualify as solicitors, but they were required to have knowledge of foreign laws and languages.
Currently to qualify as a notary public in England and Wales it is necessary to have earned a law degree or qualified as a solicitor or barrister in the past five years, and then to take a two-year distance-learning course styled the Postgraduate Diploma in Notarial Practice. At the same time, any applicant must also gain practical experience. The few who go on to become scrivener notaries require further study of two foreign languages and foreign law and a two-year mentorship under an active Scrivener notary.
The other notaries in England are either ecclesiastical notaries whose functions are limited to the affairs of the Church of England or other qualified persons who are not trained as solicitors or barristers but satisfy the Master of the Faculties of the Archbishop of Canterbury that they possess an adequate understanding of the law. Both the latter two categories are required to pass examinations set by the Master of Faculties.
The regulation of notaries was modernised by section 57 of the Courts and Legal Services Act 1990.
Notarial services generally include:
Notaries public have existed in Scotland since the 13th century and developed as a distinct element of the Scottish legal profession. Those who wish to practice as a notary must petition the Court of Session. This petition is usually presented at the same time as a petition to practice as a solicitor, but can sometimes be earlier or later. However, to qualify, a notary must hold a current Practising Certificate from the Law Society of Scotland, a new requirement from 2007, before which all Scottish solicitors were automatically notaries.
Whilst notaries in Scotland are always solicitors, the profession remains separate in that there are additional rules and regulations governing notaries and it is possible to be a solicitor, but not a notary. Since 2007 an additional Practising Certificate is required, so now most, but not all, solicitors in Scotland are notaries – a significant difference from the English profession. They are also separate from notaries in other jurisdictions of the United Kingdom.
The profession is administered by the Council of the Law Society of Scotland under the Law Reform (Miscellaneous Provisions) (Scotland) Act 1990.
In Scotland, the duties and services provided by the notary are similar to those in England and Wales, although notaries are needed for some declarations in divorce matters for which they are not required in England. Their role declined following the Law Agents (Scotland) Amendment Act 1896, which stipulated that only enrolled law agents could become notaries, and the Conveyancing (Scotland) Act 1924, which extended notarial execution to law agents. The primary functions of a Scottish notary are:
In the United States, a notary public is a person appointed by a state government (e.g., the governor, lieutenant governor, state secretary, or in some cases the state legislature) and whose primary role is to serve the public as an impartial witness when important documents are signed. Since the notary is a state officer, a notary's duties may vary widely from state to state and in most cases bars a notary from acting outside their home state unless they also have a commission there as well.
In 32 states the main requirements are to fill out a form and pay a fee; many states have restrictions concerning notaries with criminal histories, but the requirements vary from state to state. Notaries in 18 states and the District of Columbia are required to take a course, pass an exam, or both; the education or exam requirements in Delaware and Kansas only apply to notaries who will perform electronic notarizations.
A notary is almost always permitted to notarize a document anywhere in the state where their commission is issued. Some states simply issue a commission "at large" meaning no indication is made as to from what county the person's commission was issued, but some states do require the notary include the county of issue of their commission as part of the jurat, or where seals are required, to indicate the county of issue of their commission on the seal. Merely because a state requires indicating the county where the commission was issued does not necessarily mean that the notary is restricted to notarizing documents in that county, although some states may impose this as a requirement.
Some states (Montana, Wyoming, North Dakota, among others) allow a notary who is commissioned in a state bordering that state to also act as a notary in the state if the other allows the same. Thus someone who was commissioned in Montana could notarize documents in Wyoming and North Dakota, and a notary commissioned in Wyoming could notarize documents in Montana. A notary from Wyoming could not notarize documents while in North Dakota (or the inverse) unless they had a commission from North Dakota or a state bordering North Dakota that also allowed North Dakota notaries to practice in that state as well.
Notaries in the United States are much less closely regulated than notaries in most other common-law countries, typically because U.S. notaries have little legal authority. In the United States, a lay notary may not offer legal advice or prepare documents – except in Louisiana and Puerto Rico – and in most cases cannot recommend how a person should sign a document or what type of notarization is necessary. There are some exceptions; for example, Florida notaries may take affidavits, draft inventories of safe deposit boxes, draft protests for payment of dishonored checks and promissory notes, and solemnize marriages. In most states, a notary can also certify or attest a copy or facsimile.
The most common notarial acts in the United States are the taking of acknowledgements and oaths. Many professions may require a person to double as a notary public, which is why US court reporters are often notaries, as this enables them to swear in witnesses (deponents) when they are taking depositions; secretaries, bankers, and some lawyers are also commonly notaries public. Despite their limited role, some American notaries may also perform a number of far-ranging acts not generally found anywhere else. Depending on the jurisdiction, they may: take depositions, certify any and all petitions (ME), witness third-party absentee ballots (ME), provide no-impediment marriage licenses, solemnize civil marriages (ME, FL, SC), witness the opening of a safe deposit box or safe and take an official inventory of its contents, take a renunciation of dower or inheritance (SC), and so on.
"An acknowledgment is a formal [oral] declaration before an authorized public officer. It is made by a person executing [signing] an instrument who states that it was their free act and deed." That is, the person signed it without undue influence and for the purposes detailed in it. A certificate of acknowledgment is a written statement signed (and in some jurisdictions, sealed) by the notary or other authorized official that serves to prove that the acknowledgment occurred. The form of the certificate varies from jurisdiction to jurisdiction, but will be similar to the following:
Before me, the undersigned authority, on this ______ day of ___________, 20__ personally appeared _________________________, to me well known to be the person who executed the foregoing instrument, and he/she acknowledged before me that he/she executed the same as his/her voluntary act and deed.
A jurat is the official written statement by a notary public that they have administered and witnessed an oath or affirmation for an oath of office, or on an affidavit – that is, that a person has sworn to or affirmed the truth of information contained in a document, under penalty of perjury, whether that document is a lengthy deposition or a simple statement on an application form. The simplest form of jurat and the oath or affirmation administered by a notary are:
In the U.S., notarial acts normally include what is called a venue or caption, that is, an official listing of the place where a notarization occurred, usually in the form of the state and county and with the abbreviation "ss." (for Latin "scilicet", "to wit") normally referred to as a "subscript", often in these forms:
The venue is usually set forth at the beginning of the instrument or at the top of the notary’s certificate. If at the head of the document, it is usually referred to as a caption. In times gone by, the notary would indicate the street address at which the ceremony was performed, and this practice, though unusual today, is occasionally encountered.
The laws throughout the United States vary on the requirement for a notary to keep and maintain records. Some states require records, some suggest or encourage records, or do not require or recommend records at all.
The California Secretary of State, Notary Public & Special Filings Section, is responsible for appointing and commissioning qualified persons as notaries public for four-year terms.
Prior to sitting for the notary exam, one must complete a mandatory six-hour course of study. This required course of study is conducted either in an online, home study, or in-person format via an approved notary education vendor. Both prospective notaries as well as current notaries seeking reappointment must undergo an "expanded" FBI and California Department of Justice background check.
Various statutes, rules, and regulations govern notaries public. California law sets maximum, but not minimum, fees for services related to notarial acts (e.g., per signature: acknowledgment $15, jurat $15, certified power of attorney $15, et cetera). A fingerprint (typically the right thumb) may be required in the notary journal based on the transaction in question (e.g., a deed, quitclaim deed, or deed of trust affecting real property, a power of attorney document, et cetera). Documents with blank spaces cannot be notarized (a further anti-fraud measure). California explicitly prohibits notaries public from using a literal foreign-language translation of their title.
The use of a notary seal is required.
Notarial acts performed in Colorado are governed under the Notaries Public Act, 12-55-101, et seq. Pursuant to the Act, notaries are appointed by the Secretary of State for a term not to exceed four years. Notaries may apply for appointment or reappointment online at the Secretary of State's website. A notary may apply for reappointment to the notary office 90 days before their commission expires. Since May 2010, all new notaries and expired notaries have been required to take an approved training course and pass an examination to ensure minimal competence with the Notaries Public Act. A course of instruction approved by the Secretary of State may be administered by approved vendors and shall bear an emblem with a certification number assigned by the Secretary of State's office. An approved course of instruction covers relevant provisions of the Colorado Notaries Public Act, the Model Notary Act, and widely accepted best practices. In addition to courses offered by approved vendors, the Secretary of State offers free certification courses at the Secretary of State's office. Third parties seeking to verify the status of a Colorado notary, and constituents seeking an apostille or certificate of magistracy, may do so through the Secretary of State's website.
Florida notaries public are appointed by the Governor to serve a four-year term. New applicants and commissioned notaries public must be bona fide residents of the State of Florida, and first time applicants must complete a mandatory three-hour education course administered by an approved educator. Florida state law also requires that a notary public post bond in the amount of $7,500.00. A bond is required in order to compensate an individual harmed as a result of a breach of duty by the notary. Applications are submitted and processed through an authorized bonding agency. Florida is one of three states (Maine and South Carolina are the others) where a notary public can solemnize the rites of matrimony (perform a marriage ceremony).
The Florida Department of State appoints civil law notaries, also called "Florida International Notaries", who must be Florida attorneys who have practiced law for five or more years. Applicants must attend a seminar and pass an exam administered by the Florida Department of State or any private vendor approved by the department. Such civil law notaries are appointed for life and may perform all of the acts of a notary public in addition to preparing authentic acts.
Notaries public in Illinois are appointed by the Secretary of State for a four-year term. Also, residents of a state bordering Illinois (Iowa, Kentucky, Missouri, Wisconsin) who work or have a place of business in Illinois can be appointed for a one-year term. Notaries must be United States citizens (though the requirement that a notary public must be a United States citizen is unconstitutional; see "Bernal v. Fainter"), or aliens lawfully admitted for permanent residence; be able to read and write the English language; be residents of (or employed within) the State of Illinois for at least 30 days; be at least 18 years old; not be convicted of a felony; and not had a notary commission revoked or suspended during the past 10 years.
An applicant for the notary public commission must also post a $5,000 bond, usually with an insurance company and pay an application fee of $10. The application is usually accompanied with an oath of office. If the Secretary of State's office approves the application, the Secretary of State then sends the commission to the clerk of the county where the applicant resides. If the applicant records the commission with the county clerk, they then receive the commission. Illinois law prohibits notaries from using the literal Spanish translation in their title and requires them to use a rubber stamp seal for their notarizations. The notary public can then perform their duties anywhere in the state, as long as the notary resides (or works or does business) in the county where they were appointed.
A notary public in Kentucky is appointed by either the Secretary of State or the Governor to administer oaths and take proof of execution and acknowledgements of instruments. Notaries public fulfill their duties to deter fraud and ensure proper execution. There are two separate types of notaries public that are commissioned in Kentucky. They are Notary Public: State at Large and Notary Public: Special Commission. They have two distinct sets of duties and two different routes of commissioning. For both types of commissions, applicants must be eighteen (18) years of age, of good moral character (not a convicted felon) and capable of discharging the duties imposed upon him/her by law. In addition, the application must be approved by one of the following officials in the county of application: a Circuit Judge, the Circuit Court Clerk, the county Judge/Executive, the County Clerk, a county Magistrate or member of the Kentucky General Assembly. The term of office for both types of notary public is four years.
A "Notary Public: State at Large" is either a resident or non-resident of Kentucky who is commissioned to perform notarial acts anywhere within the physical borders of the Commonwealth of Kentucky that may be recorded either in-state or in another state. In order to become a Notary Public: State at Large, the applicant must be a resident of the county from which he/she makes application or be principally employed in the county from which he/she makes the application. A completed application is sent to the Secretary of State's office with the required fee. Once the application is approved by the Secretary of State, the commission is sent to the county clerk in the county of application and a notice of appointment is sent to the applicant. The applicant will have thirty days to go to the county clerk's office where they will be required to 1.) Post either a surety or property bond (bonding requirements and amounts vary by county) 2.) Take the Oath/Affirmation of Office and 3.) File and record the commission with the county clerk.
A "Notary Public: Special Commission" is either a resident or non-resident of Kentucky who is commissioned to perform notarial acts either inside or outside the borders of the Commonwealth on documents that must be recorded in Kentucky. The main difference in the appointment process is that, unlike a Notary Public: State at Large, a Notary Public: Special Commission is not required to post bond before taking the oath/affirmation nor are they required to be a resident or employed in Kentucky. In addition, where a Notary Public: State at Large is commissioned directly by the Secretary of State, a Notary Public: Special Commission is appointed by the Governor on the recommendation of the Secretary of State. It is permitted to hold a commission as both a Notary Public: State at Large and a Notary Public: Special Commission, however separate applications and filing fees are required.
A Kentucky Notary Public is not required to use a seal or stamp and a notarization with just the signature of the notary is considered to be valid. It is, however, recommended that a seal or stamp be used as they may be required on documents recorded or used in another state. If a seal or stamp is used, it is required to have the name of the notary as listed on their commission as well as their full title of office (Notary Public: State at Large or Notary Public: Special Commission). A notary journal is also recommended but not required (except in the case of recording protests, which must be recorded in a well-bound and indexed journal).
Louisiana notaries public are commissioned by the Governor. They are the only notaries to be appointed for life. The Louisiana notary public is a civil law notary with broad powers, as authorized by law, usually reserved for the American style combination "barrister/solicitor" lawyers and other legally authorized practitioners in other states. A commissioned notary in Louisiana is a civil law notary that can perform/prepare many civil law notarial acts usually associated with attorneys and other legally authorized practitioners in other states, except represent another person or entity before a court of law for a fee (unless they are also admitted to the bar). Notaries are not allowed to give "legal" advice, but they are allowed to give "notarial" advice – i.e., explain or recommend what documents are needed or required to perform a certain act – and do all things necessary or incidental to the performance of their civil law notarial duties. They can prepare any document a civil law notary can prepare (to include inventories, appraisements, partitions, wills, protests, matrimonial contracts, conveyances, and, generally, all contracts and instruments in writing) and, if ordered or requested to by a judge, prepare certain notarial legal documents, in accordance with law, to be returned and filed with that court of law.
Maine Notaries Public are appointed by the Secretary of State to serve a seven-year term. In 1981, the process to merge the office of Justice of the Peace into that of Notary Public began, with all the duties of a Justice of the Peace fully transferred to a Notary Public in 1988. Because of this, Maine is one of three states (Florida and South Carolina are the others) where a Notary Public has the authority to solemnize the rites of matrimony (perform a marriage ceremony).
Maryland notaries public are appointed by the governor on the recommendation of the secretary of state to serve a four-year term. New applicants and commissioned notaries public must be bona fide residents of the State of Maryland or work in the state. An application must be approved by a state senator before it is submitted to the secretary of state. The official document of appointment is imprinted with the signatures of the governor and the secretary of state as well as the Great Seal of Maryland. Before exercising the duties of a notary public, an appointee must appear before the clerk of one of Maryland's 24 circuit courts to take an oath of office.
A bond is not required. Seals are required, and the notary is required to keep a log of all notarial acts, indicating the name of the person, their address, the type of document being notarized, the type of ID used to authenticate them (or that they are known personally to the notary), and the person's signature. The notary's log is the only document for which a notary may write their own certificate.
When having a person make an affidavit, state law requires the person to state the phrase "under penalty of perjury."
Minnesota notaries public are commissioned by the Governor with the advice and consent of the Senate for a five-year term. All commissions expire on 31 January of the fifth year following the year of issue. Citizens and resident aliens over the age of 18 years apply to the Secretary of State for appointment and reappointment. Residents of adjoining counties in adjoining states may also apply for a notary commission in Minnesota. Notaries public have the power to administer all oaths required or authorized to be administered in the state; take and certify all depositions to be used in any of the courts of the state; take and certify all acknowledgments of deeds, mortgages, liens, powers of attorney and other instruments in writing or electronic records; and receive, make out and record notarial protests. The Secretary of State's website provides more information about the duties, requirements and appointments of notaries public.
Montana notaries public are appointed by the Secretary of State and serve a four-year term. A Montana notary public has jurisdiction throughout the states of Montana, North Dakota, and Wyoming. These states permit notaries from neighboring states to act in the state in the same manner as one from that state under reciprocity, i.e., as long as that state grants notaries from neighboring states the same right to act in their state. [Montana Code 1-5-605]
The Secretary of State is charged with the responsibility of appointing notaries by the provisions of Chapter 240 of the Nevada Revised Statutes. Nevada notaries public who are not also practicing attorneys are prohibited by law from using "notario", "notario publico" or any non-English term to describe their services. (2005 Changes to NRS 240)
Notaries are commissioned by the State Treasurer for a period of five years. Notaries must also be sworn in by the clerk of the county in which they reside. A person can become a notary in the state of New Jersey if they: (1) are over the age of 18; (2) are a resident of New Jersey or are regularly employed in New Jersey and live in an adjoining state; (3) have never been convicted of a crime under the laws of any state or the United States for an offense involving dishonesty, or a crime of the first or second degree, unless the person has met the requirements of the Rehabilitated Convicted Offenders Act. Notary applications must be endorsed by a state legislator.
Notaries in the state of New Jersey serve as impartial witnesses to the signing of documents, attest to the signatures on those documents, and may also administer oaths and affirmations. Seals are not required, but many people prefer them, and as a result most notaries have seals in addition to stamps. Notaries may administer oaths and affirmations to public officials and officers of various organizations. They may also administer oaths and affirmations in order to execute jurats for affidavits/verifications, and to swear in witnesses.
Notaries are prohibited from predating actions; lending notary equipment to someone else (stamps, seals, journals, etc.); preparing legal documents or giving legal advice; appearing as a representative of another person in a legal proceeding. Notaries should also refrain from notarizing documents in which they have a personal interest.
Pursuant to state law, attorneys licensed in New Jersey may administer oaths and affirmations.
New York notaries are empowered to administer oaths and affirmations (including oaths of office), to take affidavits and depositions, to receive and certify acknowledgments or proofs (of execution) of deeds, mortgages and powers of attorney and other instruments in writing; to demand acceptance or payment of foreign and inland bills of exchange, promissory notes and obligations in writing, and to protest these (that is, certify them) for non-acceptance or non-payment. Additional powers include required presence at a forced opening of an abandoned safe deposit box and certain election law privileges regarding petitioning. They are not authorized to perform a civil marriage ceremony, nor certify "true copies" of certain publicly recorded documents. Every county clerk's office in New York State (including within the City of New York) must have a notary public available to serve the public free of charge, during business hours with no limit on quantity or type of document.
Attorneys admitted to the New York Bar are eligible to apply for and receive an appointment as a notary public in the State of New York. "Nota bene": they are not "automatically" appointed as a notary public because they are a member of the New York Bar. An interested attorney is required to follow the same appointment process as a non-attorney; however, the proctored, written state examination requirement is waived by statute for members of the bar in good standing.
New York notaries initially must pass a test and then renew their status every 4 years.
A notary in the Commonwealth of Pennsylvania is empowered to perform seven distinct official acts: take affidavits, verifications, acknowledgments and depositions, certify copies of documents, administer oaths and affirmations, and protest dishonored negotiable instruments. A notary is strictly prohibited from giving legal advice or drafting legal documents such as contracts, mortgages, leases, wills, powers of attorney, liens or bonds. Pennsylvania is one of the few states with a successful Electronic Notarization Initiative.
South Carolina notaries public are appointed by the Governor to serve a ten-year term. All applicants must first have their application endorsed by a state legislator before submitting it to the Secretary of State. South Carolina is one of three states (Florida and Maine are the others) where a notary public can solemnize the rites of matrimony (perform a marriage ceremony) (2005). Residents of South Carolina who work in North Carolina, Georgia or Washington, DC may become notaries public for those jurisdictions; South Carolina does not offer this provision to out-of-state residents who work in South Carolina (2012).
Utah notaries public are appointed by the Lieutenant Governor to serve a four-year term. Utah used to require impression seals, but they are now optional; if a seal is used, it must be in purple ink.
A Virginia notary must either be a resident of Virginia or work in Virginia, and is authorized to acknowledge signatures, take oaths, and certify copies of non-government documents which are not otherwise available; for example, a notary cannot certify a copy of a birth or death certificate, since a certified copy can be obtained from the issuing agency. Changes to the law effective 1 July 2008 impose certain new requirements: while seals are still not required, if they are used they must be photographically reproducible, and the notary's registration number must appear on any document notarized. The same changes permit notarization of electronic signatures.
On 1 July 2012, Virginia became the first state to authorize a signer to be in a remote location and have a document notarized electronically by an approved Virginia electronic notary using audio-visual conference technology by passing the bills SB 827 and HB 2318.
In Washington any adult resident of the state, or resident of Oregon or Idaho who is employed in Washington or member of the United States military or their spouse, may apply to become a notary public. Applicants for commissioning as a Notary Public must: (a) be literate in the English language, (b) be endorsed by three adult residents of Washington who are not related to the applicant, (c) pay $30, (d) possess a surety bond in the amount of $10,000, (e) swear under oath to act in accordance with the state's laws governing the practice of notaries. In addition, the director of licensing is authorized to deny a commission to any applicant who has had a professional license revoked, has been convicted of a serious crime, or who has been found culpable of misconduct during a previous term as a notary public.
A notary public is appointed for a term of four years.
Notaries public in this state are also referred to under law as a Conservator of the Peace, per an Attorney General decision of 4 June 1921.
Wyoming notaries public are appointed by the Secretary of State and serve a four-year term. A Wyoming notary public has jurisdiction throughout the states of Wyoming and Montana, under a reciprocity arrangement: each state permits notaries from the neighboring state to act within its borders in the same manner as its own, so long as the other state grants the same privilege in return.
A Maryland requirement that a notary declare a belief in God in order to obtain a commission, as imposed by the Maryland Constitution, was found unconstitutional by the United States Supreme Court in "Torcaso v. Watkins". Historically, some states required that a notary be a citizen of the United States. However, the U.S. Supreme Court, in the case of "Bernal v. Fainter", declared that requirement to be impermissible.
In the U.S., there are reports of notaries (or people claiming to be notaries) having taken advantage of the differing roles of notaries in common law and civil law jurisdictions to engage in the unauthorized practice of law. The victims of such scams are typically illegal immigrants from civil law countries who need assistance with, for example, their immigration papers and want to avoid hiring an attorney. Confusion often results from the mistaken premise that a notary public in the United States serves the same function as a "Notario Publico" in Spanish-speaking countries (which are civil law countries, "see below"). For this reason, some states, like Texas, require that notaries specify that they are not "Notario Publico" when advertising services in languages other than English. Prosecutions in such cases are difficult, as the victims are often deported and thus unavailable to testify.
Certain members of the United States Armed Forces are given the powers of a notary under federal law. Some military members have authority to certify documents or administer oaths, without being given all notarial powers. In addition to the powers granted by the federal government, some states have enacted laws granting notarial powers to commissioned officers.
Certain personnel at U.S. embassies and consulates may be given the powers of a notary under federal law.
The role of notaries in civil law countries is much greater than in common law countries. Civilian notaries are full-time lawyers and holders of a public office who routinely undertake non-contentious transactional work done in common law countries by attorneys/solicitors, as well as, in some countries, the work of government registries, title offices, and public recorders. The qualifications imposed by civil law countries are much greater, generally requiring an undergraduate law degree, a graduate degree in notarial law and practice, three or more years of practical training ("articles") under an established notary, and passage of a national examination. Typically, notaries work in private practice and are fee earners, but a small minority of countries have salaried public service (or "government"/"state") notaries (e.g., Ukraine, Russia, Baden-Württemberg in Germany (until 2017), certain cantons of Switzerland, and Portugal).
Notaries in civil law countries have had a critical historical role in providing archives. A considerable amount of historical data of tremendous value is available in France, Spain and Italy thanks to notarial minutes, contracts and conveyances, some of great antiquity, which have reached us in spite of loss, deterioration and willful destruction.
Civil law notaries have jurisdiction over strictly non-contentious domestic civil-private law in the areas of property law, family law, agency, wills and succession, and company formation. The extent to which a country's notarial profession monopolizes these areas can vary greatly. On one extreme is France (and French-derived systems), which statutorily gives notaries a monopoly over their reserved areas of practice; at the other is Austria, where there is no discernible monopoly whatsoever and notaries are in direct competition with attorneys/solicitors.
In the few United States jurisdictions where trained notaries are allowed (such as Louisiana, Puerto Rico), the practice of these legal practitioners is limited to legal advice on purely non-contentious matters that fall within the purview of a notary's reserved areas of practice.
Thailand is a mixed law country with a strong civil law tradition. Public notaries in Thailand are Thai lawyers who hold a special license.
Upon the death of President Warren G. Harding in 1923, Calvin Coolidge was sworn in as President by his father, John Calvin Coolidge, Sr., a Vermont notary public. However, as there was some controversy as to whether a state notary public had the authority to administer the presidential oath of office, Coolidge took the oath again upon returning to Washington.
Nairobi
Nairobi is the capital and the largest city of Kenya. The name comes from the Maasai phrase "Enkare Nairobi", which translates to "cool water", a reference to the Nairobi River which flows through the city. The city proper had a population of 4,397,073 in the 2019 census, while the metropolitan area has a population of 9,354,580. The city is popularly referred to as the Green City in the Sun.
Nairobi was founded in 1899 by the colonial authorities in British East Africa, as a rail depot on the Uganda Railway. The town quickly grew to replace Mombasa as the capital of Kenya in 1907. After independence in 1963, Nairobi became the capital of the Republic of Kenya. During Kenya's colonial period, the city became a centre for the colony's coffee, tea and sisal industry. The city lies on the River Athi in the southern part of the country, at an elevation of about 1,795 metres above sea level.
According to the 2019 census, 4,397,073 inhabitants lived within the administrative area of Nairobi.
Home to thousands of Kenyan businesses and over 100 major international companies and organizations, including the United Nations Environment Programme (UN Environment) and the United Nations Office at Nairobi (UNON), Nairobi is an established hub for business and culture. The Nairobi Securities Exchange (NSE) is one of the largest in Africa and the second-oldest exchange on the continent. It is Africa's fourth-largest exchange in terms of trading volume, capable of making 10 million trades a day.
Nairobi lies within the Greater Nairobi Metropolitan Region, which consists of 5 of Kenya's 47 counties and generates about 60% of the entire nation's GDP.
The site of Nairobi was originally part of an uninhabited swamp. The name Nairobi itself comes from the Maasai expression meaning "cool waters", referring to the cold water stream which flowed through the area. With the arrival of the Uganda Railway, the site was identified by Sir George Whitehouse for a store depot, shunting ground and camping ground for the Indian labourers working on the railway. Whitehouse, chief engineer of the railway, favoured the site as an ideal resting place due to its high elevation, temperate climate and being situated before the steep ascent of the Limuru escarpments. His choice was however criticised by officials within the Protectorate government who felt the site was too flat, poorly drained and relatively infertile.
In 1898, Arthur Church was commissioned to design the first town layout for the railway depot. It constituted two streets – Victoria Street and Station Street – ten avenues, staff quarters and an Indian commercial area. The railway arrived at Nairobi on 30 May 1899, and soon Nairobi replaced Machakos as the headquarters of the provincial administration for Ukamba province. On the arrival of the railway, Whitehouse remarked that "Nairobi itself will in the course of the next two years become a large and flourishing place and already there are many applications for sites for hotels, shops and houses." The town's early years were, however, beset with problems of malaria, leading to at least one attempt to have the town moved. In the early 1900s, Bazaar Street (now Biashara Street) was completely rebuilt after an outbreak of plague and the burning of the original town.
Between 1902 and 1910, the town's population rose from 5,000 to 16,000 and grew around administration and tourism, initially in the form of big game hunting. In 1907, Nairobi replaced Mombasa as the capital of the East Africa Protectorate. In 1908, a further outbreak of the plague led to Europeans concluding that the cause was unhygienic conditions in the Indian Bazaar. The government responded by restricting lower class Indians and African natives to specific quarters for residence and trade, setting a precedent for racial segregation in the commercial sphere. By the outset of the First World War, Nairobi was well established as a European settler colony through immigration and land alienation. In 1919, Nairobi was declared to be a municipality.
In 1921, Nairobi had 24,000 residents, of whom 12,000 were native Africans. The next decade saw the growth of native African communities in Nairobi, where they would go on to constitute a majority for the first time. In February 1926, colonial officer Eric Dutton passed through Nairobi on his way to Mount Kenya.
The continuous expansion of the city began to anger the Maasai, as the city was devouring their land to the south. It also angered the Kikuyu people, who wanted the land returned to them. After the end of World War II, this friction developed into the Mau Mau rebellion. Jomo Kenyatta, Kenya's future president, was jailed for his involvement even though there was no evidence linking him to the rebellion. The pressure exerted from the locals onto the British resulted in Kenyan independence in 1963, with Nairobi as the capital of the new republic.
After independence, Nairobi grew rapidly and this growth put pressure on the city's infrastructure. Power cuts and water shortages were a common occurrence, though in the past few years better city planning has helped to put some of these problems in check.
On 11 September 1973, the Kenyatta International Conference Centre (KICC) was opened to the public. The 28-storey building was designed by the Norwegian architect Karl Henrik Nøstvik and Kenyan David Mutiso. Construction was done in three phases: Phase I was the podium, Phase II the main tower, and Phase III the Plenary. The opening ceremony was presided over by Kenya's founding father, President Kenyatta. It is the only building within the city with a helipad that is open to the public. Of the buildings built in the Seventies, the KICC was the most environmentally conscious structure: its main frame was constructed with locally available materials (gravel, sand, cement and wood), and it had wide open spaces which allowed for natural aeration and natural lighting. Cuboids made up the plenary hall, the tower consisted of a cylinder composed of several cuboids, and the amphitheatre and helipad both resembled cones. The tower was built around a concrete core and had no walls but glass windows, which allowed for maximum natural lighting. It had the largest halls in eastern and central Africa.
In 1972, the World Bank approved funds for further expansion of the then Nairobi Airport (now Jomo Kenyatta International Airport), including a new international and domestic passenger terminal building, the airport's first dedicated cargo and freight terminal, new taxiways, associated aprons, internal roads, car parks, police and fire stations, a State Pavilion, airfield and roadway lighting, a fire hydrant system, water, electrical, telecommunications and sewage systems, a dual carriageway passenger access road, security, drainage, and the building of the main access road to the airport (Airport South Road). The total cost of the project was more than US$29 million (US$111.8 million in 2013 dollars). On 14 March 1978, construction of the terminal building was completed on the other side of the airport's single runway, and it was opened by President Jomo Kenyatta less than five months before his death. The airport was later renamed Jomo Kenyatta International Airport in memory of Kenya's first president.
The United States Embassy, then located in downtown Nairobi, was bombed in August 1998 by Al-Qaida, as one of a series of US embassy bombings. It is now the site of a memorial park.
On 9 November 2012, President Mwai Kibaki opened the KES 31 billion Thika Superhighway. This mega-project began in 2009 and was completed in 2011. It involved expanding the four-lane carriageway to eight lanes, building underpasses, erecting flyovers, and providing interchanges at roundabouts to ease congestion. The 50.4-kilometre road was built in three phases: Uhuru Highway to Muthaiga Roundabout; Muthaiga Roundabout to Kenyatta University; and Kenyatta University to Thika Town.
On 31 May 2017, President Uhuru Kenyatta inaugurated the Standard Gauge Railway, which runs between Nairobi and Mombasa. It was built primarily by a Chinese firm, with about 90% of the funding from China and about 10% from the Kenyan government. A second phase is also being built, which will link Naivasha to the existing route and eventually to the Uganda border.
The city is situated at approximately 1°17′ south and 36°49′ east, and occupies an area of about 696 square kilometres.
Nairobi is situated between the cities of Kampala and Mombasa. As Nairobi is adjacent to the eastern edge of the Rift Valley, minor earthquakes and tremors occasionally occur. The Ngong Hills, located to the west of the city, are the most prominent geographical feature of the Nairobi area. Mount Kenya is situated north of Nairobi, and Mount Kilimanjaro is towards the south-east.
The Nairobi River and its tributaries traverse Nairobi County and join the larger Athi River on the eastern edge of the county.
Nobel Peace Prize laureate Wangari Maathai fought fiercely to save the indigenous Karura Forest in northern Nairobi which was under threat of being replaced by housing and other infrastructure.
Nairobi's western suburbs stretch from the Kenyatta National Hospital in the south to the United Nations headquarters in the Gigiri suburb in the north. The city is centred on the City Square, which is located in the Central Business District. The Kenyan Parliament buildings, the Holy Family Cathedral, Nairobi City Hall, the Nairobi Law Courts, and the Kenyatta Conference Centre all surround the square.
Under the Köppen climate classification, Nairobi has a subtropical highland climate (Cwb). At about 1,795 metres above sea level, evenings may be cool, especially in the June/July season, when temperatures can drop markedly. The sunniest and warmest part of the year is from December to March, when temperatures average in the mid-twenties Celsius during the day.
There are two rainy seasons, but rainfall can be moderate. The cloudiest part of the year is just after the first rainy season, when, until September, conditions are usually overcast with drizzle. As Nairobi is situated close to the equator, the differences between the seasons are minimal; the seasons are referred to simply as the wet season and the dry season. The timing of sunrise and sunset varies little throughout the year for the same reason.
Nairobi is divided into a series of constituencies, each represented by a member of Parliament in the National Assembly: Makadara, Kamukunji, Starehe, Langata, Dagoretti, Westlands, Kasarani, and Embakasi. The main administrative divisions of Nairobi are Central, Dagoretti, Embakasi, Kasarani, Kibera, Makadara, Pumwani, and Westlands. Most of the upmarket suburbs are situated to the west and north-central of Nairobi, where most European settlers resided during colonial times, an area colloquially known as 'Ubabini'. These include Karen, Langata, Lavington, Gigiri, Muthaiga, Brookside, Spring Valley, Loresho, Kilimani, Kileleshwa, Hurlingham, Runda, Kitisuru, Nyari, Kyuna, Lower Kabete, Westlands, and Highridge, although Kangemi, Kawangware, and Dagoretti are lower income areas close to these affluent suburbs. The city's colonial past is commemorated by many English place-names.
Most lower-middle and upper-middle income neighbourhoods are located in the north-central areas, such as Highridge, Parklands, Ngara, and Pangani, and in areas to the southwest and southeast of the metropolitan area near Jomo Kenyatta International Airport. The most notable include Avenue Park, Fedha, Pipeline, Donholm, Greenfields, Nyayo, Taasia, Baraka, Nairobi West, Madaraka, Siwaka, South B, South C, Mugoya, Riverbank, Hazina, Buru Buru, Uhuru, Harambee Civil Servants', Akiba, Kimathi, Pioneer, and Koma Rock to the centre-east, and Kasarani to the north-east, among others. The low and lower income estates are located mainly in far eastern Nairobi. These include Umoja, Kariokor, Dandora, Kariobangi, Kayole, Embakasi, and Huruma. The suburbs of Kitengela (further southeast), Ongata Rongai and Kiserian (further southwest), and Ngong/Embulbul (to the far west, also known as 'Diaspora') are considered part of the Greater Nairobi Metropolitan area. More than 90% of Nairobi residents work within the Nairobi Metropolitan area, in the formal and informal sectors. Many Somali immigrants have also settled in Eastleigh, nicknamed "Little Mogadishu".
The Kibera slum in Nairobi (with an estimated population of at least 500,000 to over 1,000,000 people) was long thought to be Africa's second-largest slum. However, recent census results have shown that Kibera is considerably smaller than originally thought.
Nairobi has many parks and open spaces throughout the city. Much of the city has dense tree-cover and plenty of green spaces. The most famous park in Nairobi is Uhuru Park. The park borders the central business district and the neighbourhood Upper Hill. Uhuru ("Freedom" in Swahili) Park is a centre for outdoor speeches, services, and rallies. The park was to be built over by former President Daniel arap Moi, who wanted the 62-storey headquarters of his party, the Kenya African National Union, situated in the park. However, the park was saved following a campaign by Nobel Peace Prize winner Wangari Maathai.
Central Park is adjacent to Uhuru Park, and includes a memorial for Jomo Kenyatta, the first president of Kenya, and the Moi Monument, built in 1988 to commemorate the second president's first decade in power. Other notable open spaces include Jeevanjee Gardens, City Park, 7 August Memorial Park, and Nairobi Arboretum.
The colonial 1948 Master Plan for Nairobi still acts as the governing mechanism when it comes to making decisions related to urban planning. The Master Plan, which was designed for 250,000 people, allocated 28% of Nairobi's land to public space, but because of rapid population growth, much of the vitality of public spaces within the city is increasingly threatened. City Park, the only natural park in Nairobi, for example, was originally 150 acres, but has since lost approximately 50 acres of land to private development through squatting and illegal alienation, which began in the 1980s.
The City of Nairobi enjoys the status of a full administrative County.
The Nairobi province differs in several ways from other Kenyan regions. The county is entirely urban. It has only one local council, Nairobi City Council. Nairobi Province was not divided into "districts" until 2007, when three districts were created. In 2010, along with the new constitution, Nairobi was renamed a county.
Nairobi County has 17 constituencies. Constituency names may differ from division names; for instance, Starehe Constituency corresponds to Central Division, Lang'ata Constituency to Kibera Division, and Kamukunji Constituency to Pumwani Division in terms of boundaries.
Nairobi is divided into 17 constituencies and 85 wards, mostly named after residential estates. Kibera Division, for example, includes Kibera (Kenya's largest slum) as well as affluent estates of Karen and Langata.
Nairobi is home to the Nairobi Securities Exchange (NSE), one of Africa's largest stock exchanges. The NSE was officially recognised as an overseas stock exchange by the London Stock Exchange in 1953. The exchange is Africa's 4th largest in terms of trading volumes, and 5th largest in terms of Market Capitalization as a percentage of GDP.
Nairobi is the regional headquarters of several international companies and organisations. In 2007, General Electric, Young & Rubicam, Google, Coca-Cola, IBM Services, and Cisco Systems relocated their African headquarters to the city. The United Nations Office at Nairobi hosts UN Environment and UN-Habitat headquarters.
Several of Africa's largest companies are headquartered in Nairobi. Safaricom, the largest company in Kenya by assets and profitability, is headquartered in Nairobi. KenGen, the largest African stock outside South Africa, is also based in the city. Kenya Airways, Africa's fourth-largest airline, uses Nairobi's Jomo Kenyatta International Airport as a hub.
Nairobi has not been left behind by the worldwide FinTech phenomenon. It has produced technology firms such as Craft Silicon, Kangai Technologies, and Jambo Pay, which have been at the forefront of technology, innovation and cloud-based computing services. Their products are widely used and hold considerable market share within Kenya and beyond its borders.
Goods manufactured in Nairobi include clothing, textiles, building materials, processed foods, beverages, and cigarettes. Several foreign companies have factories based in and around the city. These include Goodyear, General Motors, Toyota Motors, and Coca-Cola.
Nairobi has a large tourist industry, being both a tourist destination and a transport hub.
Nairobi has grown around its central business district, which takes a rectangular shape bounded by Uhuru Highway, Haile Selassie Avenue, Moi Avenue, and University Way. It features many of Nairobi's important buildings, including City Hall and the Parliament Building. The City Square is also located within this perimeter.
Most of the skyscrapers in this region are the headquarters of businesses and corporations, such as I&M and the Kenyatta International Conference Centre. The United States Embassy bombing took place in this district, prompting the building of a new embassy building in the suburbs.
A large beautification project took place in the Central Business District as the city prepared to host the 2006 Afri-Cities summit: iconic buildings such as the Kenyatta International Conference Centre had their exteriors cleaned and repainted. In 2011, the city was considered to have about 4 million residents.
Nairobi downtown area or central business district is bordered to the southwest by Uhuru Park and Central Park. The Mombasa to Kampala railway runs to the southeast of the district.
Two areas outside the Central Business District that are seeing growth in companies and office space are Upper Hill and Westlands, both located a short distance from the city centre.
Companies that have moved from the Central Business District to Upper Hill include Citibank, and in 2007 Coca-Cola began construction of its East and Central African headquarters in Upper Hill, cementing the district as the preferred location for office space in Nairobi. The largest office development in this area is UAP Tower, completed in 2015 and officially opened for business on 4 July 2016; it is a 33-storey tower reaching a height of 163 metres. The World Bank and the International Finance Corporation (part of the World Bank Group) are also located in Upper Hill, at the Delta Center on Menengai Road. They were previously located in the Hill Park Building and the CBA Building respectively (both also in Upper Hill), and prior to that in View Park Towers in the Central Business District.
To accommodate the large demand for floor space in Nairobi, various commercial projects are being constructed. New business parks are being built in the city, including the flagship Nairobi Business Park.
Construction boom and real estate development projects
Nairobi is undergoing a construction boom, with major real estate projects and skyscrapers coming up in the city. Among them are The Pinnacle twin towers, which will rise to 314 m, Britam Tower (200 m), the Avic International Africa headquarters (176 m), Prism Tower (140 m), the Pan Africa Insurance towers, the Palazzo offices, and many other projects. Shopping malls are also being constructed, such as the recently completed Garden City Mall, Centum's Two Rivers Mall, The Hub in Karen, Karen Waterfront, Thika Greens, and the recently reconstructed Westgate Mall. High-end residential apartments are also coming up, such as Le Mac Towers, a 23-floor residential tower in Westlands. Avic International is also putting up four residential towers on Waiyaki Way: a 28-level tower, two 24-level towers, and a 25-level tower. Hotel towers are also being erected in the city: Avic International is putting up a 30-level, 141 m hotel tower in Westlands, to be operated by the Marriott group, and Jabavu Limited is constructing a 35-floor hotel tower in Upper Hill which will rise over 140 metres into the city skyline. Arcon Group Africa has also announced plans to erect a 66-floor skyscraper in Upper Hill towering over 290 metres, further cementing Upper Hill as the preferred metropolis for multinational corporations launching their operations in the Kenyan capital.
See also: List of tallest buildings in Kenya.
Population of Nairobi between 1906 and 2019
Nairobi has experienced one of the highest growth rates of any city in Africa. Since its foundation in 1899, Nairobi has grown to become the second-largest city in the African Great Lakes region, despite being one of the youngest cities in the region. The growth rate of Nairobi is 4.1% a year. It is estimated that Nairobi's population will reach 5 million in 2025.
These data fit remarkably closely (r² = 0.9994) to a logistic curve with t₀ = 1900, P₀ = 8,500, r = 0.059 and K = 8,000,000. This suggests a 2011 growth rate of 3.5% (the CIA estimate of 4.5% cited above would have been true in 2005). According to this curve, the population of the city will be below 4 million in 2015, and will reach 5 million in 2025.
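To make the arithmetic behind these figures easy to verify, here is a minimal Python sketch, assuming the standard logistic form P(t) = K / (1 + A·e^(−r(t − t₀))) with A = (K − P₀)/P₀; the function and variable names are illustrative only, not from any cited source:

```python
import math

# Parameters of the logistic fit quoted in the text: t0 = 1900, P0 = 8,500,
# r = 0.059, K = 8,000,000, assuming the standard logistic form
#   P(t) = K / (1 + A * exp(-r * (t - t0))),  where A = (K - P0) / P0.
T0, P0, R, K = 1900, 8_500, 0.059, 8_000_000
A = (K - P0) / P0

def population(year: float) -> float:
    """Population predicted by the logistic curve for a given year."""
    return K / (1 + A * math.exp(-R * (year - T0)))

def growth_rate(year: float) -> float:
    """Instantaneous relative growth rate r * (1 - P/K) implied by the model."""
    return R * (1 - population(year) / K)

for year in (2011, 2015, 2025):
    print(year, f"{population(year) / 1e6:.2f} million", f"{growth_rate(year):.1%}")
# Consistent with the text: about 3.41 million growing at roughly 3.4%
# (close to the 3.5% quoted) in 2011, just under 4 million in 2015,
# and about 5 million in 2025.
```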
Given this high population growth, driven both by urban migration and by high birth rates, the economy has yet to catch up. Unemployment is estimated at 40% within the city, concentrated mainly in the high-density, low-income areas, which can make them seem even denser than the higher-income neighbourhoods.
Cultural venues in Nairobi include the Kenya National Theatre and the Kenya National Archives. Art galleries in Nairobi include the Rahimtulla Museum of Modern Art (Ramoma), the Mizizi Arts Centre, and the Nairobi National Museum.
There is also the Karen Blixen Museum, and the Kuona Art Centre serves visual artists in Nairobi.
By the mid-twentieth century, many foreigners settled in Nairobi from other parts of the British Empire, primarily India and parts of (present-day) Pakistan. These immigrants were workers who arrived to construct the Kampala–Mombasa railway, settling in Nairobi after its completion, and also merchants from Gujarat. Nairobi also has established communities from Somalia and Sudan.
Nairobi has two informal nicknames. The first is "The Green City in the Sun", which is derived from the city's foliage and warm climate. The second is the "Safari Capital of the World", which is used due to Nairobi's prominence as a hub for safari tourism.
"Kwani?" is Kenya's first literary journal and was established by writers living in Nairobi. Nairobi's publishing houses have also produced the works of some of Kenya's authors, including Ngũgĩ wa Thiong'o and Meja Mwangi who were part of post-colonial writing.
Many filmmakers also practice their craft out of Nairobi. Film-making is still young in the country, but people like producer Njeri Karago and director Judy Kibinge are paving the way for others.
Perhaps the most famous book and film set in Nairobi is "Out of Africa". The book was written by Karen Blixen, whose pseudonym was Isak Dinesen, and it is her account of living in Kenya. Karen Blixen lived in the Nairobi area from 1917 to 1931. The neighbourhood in which she lived, Karen, is named after her.
In 1985, "Out of Africa" was made into a film, directed by Sydney Pollack. The film won 28 awards, including seven Academy Awards. The popularity of the film prompted the opening of Nairobi's Karen Blixen Museum.
Nairobi is also the setting of many of the novels of Ngũgĩ wa Thiong'o, Kenya's foremost writer.
Nairobi has been the setting of several other American and British films. The most recent of these was "The Constant Gardener" (2005), a large part of which was filmed in the city. The story revolves around a British diplomat in Nairobi whose wife is murdered in northern Kenya. Much of the filming was in the Kibera slum.
Among the latest Kenyan actors in Hollywood who identify with Nairobi is Lupita Nyong'o, who received the Academy Award for Best Supporting Actress for her role as Patsey in the film "12 Years a Slave" at the 86th Academy Awards at the Dolby Theatre in Los Angeles. Lupita is the daughter of Kenyan politician Peter Anyang' Nyong'o.
Most new Hollywood films are nowadays screened at Nairobi's cinemas. Up until the early 1990s, there were only a few film theatres and the repertoire was limited. There are also two drive-in cinemas in Nairobi.
In 2015 and 2016, Nairobi was a focal point for the American television series "Sense8", which shot its first and second seasons partly in the city. The series is highly rated on the Internet Movie Database (IMDb).
In 2015, Nairobi was also featured in the British thriller film "Eye in the Sky", the story of a lieutenant general and a colonel who face political opposition after ordering a drone missile strike against a group of suicide bombers in Nairobi.
In 2017, the name "Nairobi" was adopted as a code name by a female main character in the popular Spanish TV series "Money Heist".
Nairobi has a wide range of restaurants. Besides being home to "nyama choma" (a local term for roasted meat), the city hosts popular American fast food chains such as KFC, Subway, Domino's Pizza, Pizza Hut, Hardee's and Burger King, as well as the longer-established South African chains Galito's, Steers, PizzaMojo and Spur Steak Ranches. Coffee houses doubling as restaurants, mostly frequented by the upper middle classes, such as Artcaffe, Nairobi Java House and Dormans, have become increasingly popular in recent years. Traditional food joints specialising in African delicacies, such as the popular K'osewe's in the city centre and Amaica, are also widespread. The Kenchic franchise, which specialised in old-school chicken-and-chips meals, was also popular, particularly among the lower classes and students, with restaurants all over the city and its suburbs; however, as of February 2016, Kenchic stopped operating its eatery business. Upscale restaurants often specialise in specific cuisines such as Italian, Lebanese, Ethiopian and French, and are more likely to be found in five-star hotels and the wealthier suburbs in the west and south of the city.
Nairobi has an annual restaurant week (NRW) at the beginning of the year, January–February. Nairobi's restaurants offer dining packages at reduced prices. NRW is managed by Eatout Kenya which is an online platform that lists and reviews restaurants in Nairobi, and provides a platform for Kenyan foodies to congregate and share.
Nairobi is the centre of Kenya's music scene. Benga is a Kenyan genre which was developed in Nairobi. The style is a fusion of jazz and Luo music forms. Mugithi is another popular genre in Kenya, with its origins in the central parts of the country. A majority of music videos of leading local musicians are also filmed in the city.
In the 1970s, Nairobi became the prominent centre for music in the African Great Lakes region. During this period, Nairobi was established as a hub of soukous music, a genre originally developed in Kinshasa and Brazzaville. After the political climate in the region deteriorated, many Congolese artists relocated to Nairobi. Artists such as Orchestra Super Mazembe moved from Congo to Nairobi and found great success. Virgin Records became aware of the popularity of the genre and signed recording contracts with several soukous artists.
More recently, Nairobi has become the centre of the Kenyan hip hop scene, with Kalamashaka and Gidi Gidi Maji Maji being the pioneers of urban music in Kenya. The genre has become very popular amongst local youth, and domestic musicians have become some of the most popular in the region. Successful artists based in Nairobi include Jua Cali, Nonini, Camp Mulla, Juliani, Eric Wainaina, Suzanna Owiyo and Nameless. Popular record labels include Ogopa DJs, Grand Pa Records, Main Switch, Red Black and Green Republik, Calif Records and Bornblack Music Group.
Many foreign musicians who tour Africa perform in Nairobi; Bob Marley's first-ever visit to Africa started in Nairobi. Acts that have performed in Nairobi include Lost Boyz, Wyclef Jean, Shaggy, Akon, Eve, T.O.K, Sean Paul, Wayne Wonder, Alaine, Konshens, Ja Rule, Morgan Heritage and Cabo Snoop. Other international musicians who have performed in Nairobi include Don Carlos, Demarco, Busy Signal, Mr. Vegas and the Elephant Man crew.
Nairobi, along with the coastal towns of Mombasa and Diani, has recently become a centre of Electronic Dance Music (EDM) in Kenya, producing DJs and producers such as Suraj, Jack Rooster, Euggy, DJ Fita, Noise on Demand, DJ Vidza and DJ Coco EM. Prominent international composers and DJs have also toured Nairobi, including Diplo, Major Lazer, Kyau & Albert, Solarity, Ronski Speed, and Boom Jinx.
Many nightclubs in and around the city have witnessed growth in the number of patrons who listen exclusively to Electronic Dance Music, especially among the younger generations. These youth also support many local EDM producers and DJs, such as Jahawi, Mikhail Kuzi, Barney Barrow, Jack Rooster, HennessyLive and Trancephilic5, as well as up-and-comers such as L.A Dave, Eric K, Raj El Rey, Tom Parker and more.
Gospel music is also very popular in Nairobi, as in the rest of Kenya, with gospel artistes having a great impact in the mostly Christian city. Artistes such as Esther Wahome, Eunice Njeri, Daddy Owen, Emmy Kosgei and the late Angela Chibalonza have a great pull on the general population, while others like MOG, Juliani, Ecko Dyda and DK Kwenye Beat have great influence over the younger generation. Their concerts are as popular and influential as those of the great secular artistes; the most popular are the Groove tours and the TSO (Totally Sold Out) New Year concerts.
Musical group Sauti Sol performed for U.S. President Barack Obama when he was in the city for the 2015 Global Entrepreneurship Summit.
Nairobi is the African Great Lakes region's sporting centre. The premier sports facility in Nairobi, and in Kenya generally, is the Moi International Sports Centre in the suburb of Kasarani. The complex was completed in 1987 and was used to host the 1987 All Africa Games. It comprises a 60,000-seat stadium, the second largest in the African Great Lakes region (after Tanzania's new national stadium), a 5,000-seat gymnasium, and a 2,000-seat aquatics centre.
The Nyayo National Stadium is Nairobi's second-largest stadium, renowned for hosting the Safaricom Sevens, a global rugby event. Completed in 1983, the stadium has a capacity of 30,000 and is primarily used for football. The facility is located close to the Central Business District, which makes it a convenient location for political gatherings.
Nairobi City Stadium is the city's first stadium, and used for club football. Nairobi Gymkhana is the home of the Kenyan cricket team, and was a venue for the 2003 Cricket World Cup. Notable annual events staged in Nairobi include Safari Rally (although it lost its World Rally Championship status in 2003), Safari Sevens rugby union tournament, and Nairobi Marathon.
Football is the most popular sport in the city by viewership and participation. This is highlighted by the number of football clubs in the city, including Kenyan Premier League sides Gor Mahia, A.F.C. Leopards, Tusker and Mathare United.
There are six golf courses within a 20 km radius of Nairobi. The oldest 18-hole golf course in the city is the Royal Nairobi Golf Club. It was established in 1906 by the British, just seven years after the city was founded. Other notable golf clubs include the Windsor Country Club, Karen Country Club, and Muthaiga Golf Club. The Kenya Open golf tournament, which is part of the European Tour, takes place in Nairobi. The Ngong Racecourse in Nairobi is the centre of horse racing in Kenya.
Rugby is also a popular sport in Nairobi, with 8 of the 12 top-flight clubs based in the city.
Basketball is also a popular sport, played in the city's primary, secondary and college leagues. Many of the city's urban youth are basketball fans and follow the American NBA.
The places of worship are predominantly Christian churches: the Roman Catholic Archdiocese of Nairobi (Catholic Church), the Anglican Church of Kenya (Anglican Communion), the Presbyterian Church of East Africa (World Communion of Reformed Churches), the Baptist Convention of Kenya (Baptist World Alliance), and the Assemblies of God. There are also Muslim mosques, including Jamia Mosque.
The majority of schools follow either the Kenyan curriculum or the British curriculum. There are also the International School of Kenya and Rosslyn Academy, both of which follow the North American curriculum, as well as the Swedish School in Ngong and the German School in Gigiri.
Nairobi is home to several universities and colleges.
Numerous other universities have also opened satellite campuses in Nairobi. The Railways Training Institute, established in 1956, is also a notable institution of higher learning with a campus in Nairobi.
Major plans are being implemented to decongest the city's traffic, and the completion of Thika Road has given the city a much-needed face-lift, bringing the road up to global standards. Several projects have been completed (the Syokimau Rail Station, the Eastern and Northern Bypasses) while numerous others are still underway. The country's head of state announced, when he opened the Syokimau Rail Service, that Kenya was collaborating with other countries in the region to develop railway infrastructure to improve regional connectivity under the ambitious LAPSSET project, the single largest and most expensive infrastructure project on the continent.
Kenya signed a bilateral agreement with Uganda to facilitate joint development of the Mombasa-Malaba-Kampala standard gauge railway. A branch line will also be extended to Kisumu.
Similarly, Kenya signed a Memorandum of Understanding with the Government of Ethiopia for the development of Lamu-Addis Ababa standard gauge railway. Under the Lamu-South Sudan and Ethiopia Transport Corridor Project, the development of a railway component is among the priority projects.
The development of these critical transport facilities will, besides reducing transport costs due to faster movement of goods and people within the region, also increase trade, improve the socio-economic welfare of Northern Kenya and boost the country's potential in attracting investments from all over the world.
The first phase of the standard gauge railway project was launched on 31 May 2017 by the President of Kenya, Uhuru Kenyatta, in a ceremony that saw thousands of Kenyans ride the inaugural trip free of charge. The passenger service, christened "Madaraka Express", operates daily trips between Nairobi and Mombasa.
Jomo Kenyatta International Airport (JKIA) is the largest airport in Kenya, with more than 7 million passengers passing through it in 2016. Domestic travellers made up 40% of overall passengers in 2016, an increase of 32% in the five years since 2012. In February 2017, JKIA received Category One status from the FAA, boosting the airport's standing as a regional aviation hub.
Wilson Airport is a general-aviation airport handling smaller, mostly propeller-driven aircraft. In July 2016, construction of a new air traffic control tower commenced at a cost of KES 163 million (approximately US$1.63 million).
Eastleigh Airport is a military base airport. In its earlier years, it was utilised as a landing strip in the pre-jet airline era, serving a British passenger and mail route from Southampton to Cape Town in the 1930s and 1940s. This route was served by flying boats between Britain and Kisumu, and then by land-based aircraft on the routes to the south.
Matatus are the most common form of public transport in Nairobi.
Matatus, whose name literally translates to "three cents for a ride" (fares are nowadays much higher), are privately owned minibuses and the most popular form of local transport. They generally seat fourteen to twenty-four passengers and operate within Nairobi, its environs and suburbs, and from Nairobi to other towns around the country. A matatu's route is imprinted along a yellow stripe on the side of the bus, and matatus plying specific routes have specific route numbers. However, in November 2014 President Uhuru Kenyatta lifted the ban and allowed matatus to maintain their colourful graphics, in an effort to support youth employment. Matatus in Nairobi were easily distinguishable by their extravagant paint schemes, as owners would paint their vehicles with colourful decorations, such as their favourite football team or hip-hop artist; some have even painted Barack Obama's face on their vehicles. They are notorious for their poor safety records, a result of overcrowding and reckless driving. Due to the intense competition between matatus, many are equipped with powerful sound systems and television screens to attract more customers.
However, in 2004, a law was passed requiring all matatus to include seat belts and speed governors and to be painted with a yellow stripe. At first, this caused a furore among matatu operators, but they were pressured by the government and the public to make the changes. Matatus are now limited to 80 km/h. However, many matatus have had their speed governors disabled, as is evident from their travelling at speeds well over that limit.
Buses are increasingly common in the city, with some operators even installing complimentary Wi-Fi in partnership with the leading mobile service provider. There are four major bus companies operating the city routes: the traditional Kenya Bus Service (KBS), and the newer private operators Citi Hoppa, Compliant MOA and Double M. The Citi Hoppa buses are distinguishable by their green livery, the Double M buses are painted purple, the Compliant MOA buses by their distinctive names and white-and-blue colours, while the KBS buses are painted blue.
Companies such as Easy Coach, Crown Bus, Coast Bus, Modern Coast, Eldoret Express, Chania, the Guardian Angel, Spanish and Mash Poa run scheduled buses and luxury coaches to other cities and towns.
Nairobi was founded as a railway town, and the main headquarters of Kenya Railways (KR) is still situated at Nairobi railway station, located near the city centre. The line runs through Nairobi, from Mombasa to Kampala; its main use is freight traffic connecting Nairobi to Mombasa and Kisumu. A number of morning and evening commuter trains connect the centre with the suburbs, but the city has no proper light rail, tramway, or rapid transit lines. A proposal has been passed for the construction of a commuter rail line. On 13 November 2012, President Mwai Kibaki, the country's third president since independence, launched the Syokimau Rail Service, marking a major milestone in the history of railway development in the country and another step towards the projects envisaged under the Vision 2030 economic blueprint. The new station has a train that ferries passengers from Syokimau to the city centre, cutting travel time by half. The opening of the station marked the completion of the first phase of the Sh24b Nairobi Commuter Rail Network, which is geared at easing the traffic congestion in Nairobi blamed for huge economic losses. Other modern stations include Imara Daima Railway Station and Makadara Railway Station.
The new Mombasa–Nairobi Standard Gauge Railway connects the port city of Mombasa and Nairobi. The new railway line has virtually replaced the old metre-gauge railway. The Nairobi terminus is located at Syokimau, some 20 km from the city centre; passengers travelling from Mombasa are transferred the short distance into the CBD by metre-gauge trains.
Nairobi is served by highways that link Mombasa to Kampala in Uganda and Arusha in Tanzania. These are earmarked to ease the daily motor traffic within and surrounding the metro area. However, driving in Nairobi is chaotic. Most of the roads are tarmacked and there are signs showing directions to certain neighbourhoods. The city is connected to the Jomo Kenyatta International Airport by the Mombasa Highway, which passes through Industrial Area, South B, South C and Embakasi. Ongata Rongai, Langata and Karen are connected to the city centre by Langata Road, which runs to the south. Lavington, Riverside, and Westlands are connected by Waiyaki Way. Kasarani, Eastlands, and Embakasi are connected by Thika Road, Jogoo Road, and Outer Ring Road.
Highways connect the city with other major towns such as Mombasa, Machakos and Voi (A109), and Eldoret, Kisumu, Nakuru, Naivasha, and Namanga on the Tanzanian border (A104).
Nairobi is undergoing major road construction to update its infrastructure network. The new system of roads, flyovers, and bridges is intended to cut the severe traffic congestion caused by the inability of the current infrastructure to cope with the soaring economic growth of the past few years. It is also a major component of Kenya's Vision 2030 and Nairobi Metropolis plans. Most roads are now well lit and surfaced, with adequate signage.
94% of the piped water supply for Nairobi comes from rivers and reservoirs in the Aberdare Range north of the city, of which the reservoir of the Thika Dam is the most important one. Water distribution losses – technically called non-revenue water – are 40%, and only 40% of those with house connections receive water continuously. Slum residents receive water through water kiosks and end up paying much higher water prices than those fortunate enough to have access to piped water at their residence.
Standards of living in Nairobi vary widely. Most wealthy Kenyans live in Nairobi, but the majority of Nairobians have average or low incomes. Half of the population has been estimated to live in slums which cover just 5% of the city area. The growth of these slums is a result of urbanisation, poor town planning, and the unavailability of loans for low income earners.
Kibera is one of the largest slums in Africa, and is situated to the west of Nairobi. (Kibera comes from the Nubian word Kibra, meaning "forest" or "jungle"). The slums cover two square kilometres and are on government land. Kibera has been the setting for several films, the most recent being "The Constant Gardener".
Other notable slums include Mathare and Korogocho. Altogether, 66 areas are counted as slums within Nairobi.
Many Nairobi non-slum-dwellers live in relatively good housing conditions. Large houses can be found in many of the upmarket neighbourhoods, especially to the west of Nairobi. Historically, British settlers occupied Gigiri, Muthaiga, Langata and Karen. Other middle and high income estates include Parklands, Westlands, Hurlingham, Kilimani, Milimani, Spring Valley, Lavington, Rosslyn, Kitisuru, and Nairobi Hill.
To accommodate the growing middle class, many new apartments and housing developments are being built in and around the city. The most notable development is "Greenpark", at Athi River in Machakos County, outside Nairobi's Central Business District. Over 5,000 houses, villas and apartments are being constructed at this development, including leisure, retail and commercial facilities. The development is being marketed to families, as are most others within the city. Eastlands also houses most of the city's middle class and includes South C, South B, Embakasi, Buru Buru, Komarock, Donholm, Umoja, and various others.
Throughout the 2000s, Nairobi struggled with rising crime, earning a reputation for being a dangerous city and the nickname "Nairobbery", a name which persists today. On 7 August 1998, the US Embassy was bombed, killing 224 people and injuring 4,000. In 2001, the United Nations International Civil Service Commission rated Nairobi as among the most insecure cities in the world, classifying the city as "status C". The United Nations report stated that in 2001 nearly one third of all Nairobi residents experienced some form of robbery in the city. The head of one development agency cited the notoriously high levels of violent armed robberies, burglaries, and carjackings. Crime had risen in Nairobi as a result of unplanned urbanisation, with a minimal number of police stations and a lack of proper security infrastructure. However, many claim that the biggest factor in the city's alarming crime rate is police corruption, which leaves many criminals unpunished. As a security precaution, most large houses have a watch guard, burglar grilles, and dogs to patrol the grounds at night. Most crimes, however, occur in the poor neighbourhoods, which become dangerous after dark.
In 2006, crime decreased in the city due to increased security and an improved police presence. Despite this, in 2007 the Kenyan government and the US State Department announced that Nairobi was experiencing a greater level of violent crime than in previous years. Since then, the government has taken measures to combat crime with a heavy police presence in and around the city, while the US government has updated its travel warning for the country.
Following a grenade attack in October 2011 by a local Kenyan man with terrorist links, the city saw a heightened security presence. Fears spread over further promised retaliation by the Al-Shabaab rebel group over Kenya's involvement in a coordinated operation with the Somali military against the insurgent outfit.
A spate of blasts in Nairobi began on 10 March 2012, when assailants threw grenades at a busy bus station and a blue-collar bar, killing nine and injuring more than 50. On 28 May 2012, 28 people were injured in an explosion in a shopping complex in downtown Nairobi, near Moi Avenue. On 21 September 2013, Al-Shabaab-associated militants attacked the Westgate Mall, killing 67 people.
On 15 January 2019, five gunmen attacked the DusitD2 hotel in Nairobi's Westlands neighbourhood. The attack began with a suicide bombing in the hotel lobby and was followed by gunfire. The terror group Al-Shabaab claimed responsibility for the attack, which killed 21 people. The attack was unexpected because the area in which it took place is generally considered very safe; citizens of many countries were inside the hotel, Nairobi being East Africa's economic hub.
Nairobi is home to most of Kenya's news and media organisations. The city is also home to the region's largest newspapers: the "Daily Nation" and "The Standard". These are circulated within Kenya and cover a range of domestic and regional issues. Both newspapers are published in English.
Kenya Broadcasting Corporation, a state-run television and radio station, is headquartered in the city. Kenya Television Network is part of the Standard Group and was Kenya's first privately owned TV station. The Nation Media Group runs NTV which is based in Nairobi. There are also a number of prominent radio stations located in Kenya's capital including KISS 100, Capital FM, East FM, Kameme FM, Metro FM, and Family FM, among others.
Several multinational media organisations have their regional headquarters in Nairobi. These include the BBC, CNN, Agence France-Presse, Reuters, Deutsche Welle, and the Associated Press. The East African bureau of CNBC Africa is located in Nairobi's city centre, while the Nairobi bureau of "The New York Times" is located in the suburb of Gigiri. The broadcast headquarters of CCTV Africa are located in Nairobi.
Nairobi has grown continuously since 1899; population projections for the 21st century are listed below.
Nairobi is twinned with a number of cities around the world.
Numeral (linguistics)
In linguistics, a numeral (or number word) in the broadest sense is a word or phrase that describes a numerical quantity. Some theories of grammar use the word "numeral" to refer to cardinal numbers that act as a determiner to specify the quantity of a noun, for example the "two" in "two hats". Some theories of grammar do not include determiners as a part of speech and consider "two" in this example to be an adjective. Some theories consider "numeral" to be a synonym for "number" and assign all numbers (including ordinal numbers like the compound word "seventy-fifth") to a part of speech called "numerals". Numerals in the broad sense can also be analyzed as a noun ("three is a small number"), as a pronoun ("the two went to town"), or for a small number of words as an adverb ("I rode the slide twice").
Numerals can express relationships like quantity (cardinal numbers), sequence (ordinal numbers), frequency (once, twice), and part (fraction).
Numerals may be attributive, as in "two dogs", or pronominal, as in "I saw two (of them)".
Many words of different parts of speech indicate number or quantity. Such words are called quantifiers. Examples are words such as "every", "most", "least", "some", etc. Numerals are distinguished from other quantifiers by the fact that they designate a specific number, for example "five", "ten", "fifty", "one hundred", etc. They may or may not be treated as a distinct part of speech; this may vary, not only with the language, but with the choice of word. For example, "dozen" serves the function of a noun, "first" serves the function of an adjective, and "twice" serves the function of an adverb. In Old Church Slavonic, the cardinal numbers 5 to 10 were feminine nouns; when quantifying a noun, that noun was declined in the genitive plural like other nouns that followed a noun of quantity (one would say the equivalent of "five of people"). In English grammar, the classification "numeral" (viewed as a part of speech) is reserved for those words which have distinct grammatical behavior: when a numeral modifies a noun, it may replace the article: "the/some dogs played in the park" → "twelve dogs played in the park". (Note that *"dozen dogs played in the park" is not grammatical, so "dozen" is not a numeral in this sense.) English numerals indicate cardinal numbers. However, not all words for cardinal numbers are necessarily numerals. For example, "million" is grammatically a noun, and must be preceded by an article or numeral itself.
Numerals may be simple, such as 'eleven', or compound, such as 'twenty-three'.
In linguistics, however, numerals are classified according to purpose: examples are ordinal numbers ("first", "second", "third", etc.; from 'third' up, these are also used for fractions), multiplicative numbers ("once", "twice", and "thrice"), multipliers ("single", "double", and "triple"), and distributive numbers ("singly", "doubly", and "triply"). Georgian, Latin, and Romanian (see Romanian distributive numbers) have regular distributive numbers, such as Latin "singuli" "one-by-one", "bini" "in pairs, two-by-two", "terni" "three each", etc. In languages other than English, there may be other kinds of number words. For example, in Slavic languages there are collective numbers which describe sets, such as "pair" or "dozen" in English (see Russian numerals, Polish numerals).
Some languages have a very limited set of numerals, and in some cases they arguably do not have any numerals at all, but instead use more generic quantifiers, such as 'pair' or 'many'. However, by now most such languages have borrowed the numeral system or part of the numeral system of a national or colonial language, though in a few cases (such as Guarani), a numeral system has been invented internally rather than borrowed. Other languages had an indigenous system but borrowed a second set of numerals anyway. An example is Japanese, which uses either native or Chinese-derived numerals depending on what is being counted.
In many languages, such as Chinese, numerals require the use of numeral classifiers. Many sign languages, such as ASL, incorporate numerals.
English has derived numerals for multiples of its base ("fifty", "sixty", etc.), and some languages have simplex numerals for these, or even for numbers between the multiples of the base. Balinese, for example, currently has a decimal system, with words for 10, 100, and 1000, but has additional simplex numerals for 25 (with a second word for 25 found only in a compound for 75), 35, 45, 50, 150, 175, 200 (with a second word found only in a compound for 1200), 400, 900, and 1600. In Hindustani, the numerals between 10 and 100 have developed to the extent that they need to be learned independently.
In many languages, numerals up to the base are a distinct part of speech, while the words for powers of the base belong to one of the other word classes. In English, these higher words are hundred (10^2), thousand (10^3), million (10^6), and higher powers of a thousand (short scale) or of a million (long scale—see names of large numbers). These words cannot modify a noun without being preceded by an article or numeral (*"hundred dogs played in the park"), and so are nouns.
In East Asia, the higher units are hundred, thousand, myriad (10^4), and powers of myriad. In India, they are hundred, thousand, lakh (10^5), crore (10^7), and so on. The Mesoamerican system, still used to some extent in Mayan languages, was based on powers of 20: "bak’" 400 (20^2), "pik" 8000 (20^3), "kalab" 160,000 (20^4), etc.
English has specific numerals for the cardinal numbers. In the following tables, [and] indicates that the word "and" is used in some dialects (such as British English), and omitted in other dialects (such as American English).
This table demonstrates the standard English construction of some cardinal numbers. (See next table for names of larger cardinals.)
This table compares the English names of cardinal numbers according to various American, British, and Continental European conventions. See English numerals or names of large numbers for more information on naming numbers.
There is no consistent and widely accepted way to extend cardinals beyond centillion (centilliard).
The following table details the myriad, octad, Chinese myriad, Chinese long, and -yllion names for powers of 10.
There is also a system of number notation proposed by Donald Knuth, named the -yllion system. For instance, in this system, 10^32 would be represented as 1'0000,0000;0000,0000:0000,0000;0000,0000.
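As an illustration, the separator pattern can be generated programmatically: commas mark groups of four digits, with semicolons, colons, and apostrophes at successively doubled group sizes. The following Python sketch (the function name is ours, and levels beyond the apostrophe are not handled) reproduces the example above:

```python
def yllion_format(n: int) -> str:
    """Group digits with -yllion-style separators: comma at every 4th
    digit, semicolon at every 8th, colon at every 16th, apostrophe at
    every 32nd. Larger numbers would need further separator levels."""
    s = str(n)
    out = []
    for i, d in enumerate(s):
        out.append(d)
        r = len(s) - i - 1          # digits remaining to the right
        if r > 0 and r % 4 == 0:
            if r % 32 == 0:
                out.append("'")
            elif r % 16 == 0:
                out.append(":")
            elif r % 8 == 0:
                out.append(";")
            else:
                out.append(",")
    return "".join(out)

print(yllion_format(10**32))
# -> 1'0000,0000;0000,0000:0000,0000;0000,0000
```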
This is a table of English names for non-negative rational numbers less than or equal to 1. It also lists alternative names, but there is no widespread convention for the names of extremely small positive numbers.
Note that rational numbers like 0.12 can be represented in infinitely many ways, e.g. "zero-point-one-two" (0.12), "twelve percent" (12%), "three twenty-fifths" (3/25), "nine seventy-fifths" (9/75), "six fiftieths" (6/50), "twelve hundredths" (12/100), "twenty-four two-hundredths" (24/200), etc.
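A minimal check of these equivalences using Python's standard-library fractions module:

```python
from fractions import Fraction

# Each spelling above names the same rational number:
forms = [Fraction(12, 100), Fraction(3, 25), Fraction(9, 75),
         Fraction(6, 50), Fraction(24, 200), Fraction("0.12")]
assert all(f == forms[0] for f in forms)
print(forms[0])   # 3/25 -- Fraction normalizes to lowest terms
```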
Various terms have arisen to describe commonly used measured quantities.
Not all languages have numeral systems. Specifically, there is not much need for numeral systems among hunter-gatherers who do not engage in commerce. Many languages around the world have no numerals above two to four—or at least did not before contact with colonial societies—and speakers of these languages may have no tradition of using the numerals they did have for counting. Indeed, several languages from the Amazon have been independently reported to have no specific number words other than 'one'. These include Nadëb, pre-contact Mocoví and Pilagá, Culina and pre-contact Jarawara, Jabutí, Canela-Krahô, Botocudo (Krenák), Chiquitano, the Campa languages, Arabela, and Achuar. Some languages of Australia, such as Warlpiri, do not have words for quantities above two, as was the case for many Khoisan languages at the time of European contact. Such languages do not have a word class of 'numeral'.
Most languages with both numerals and counting use base 8, 10, 12, or 20. Base 10 appears to come from counting one's fingers, base 20 from the fingers and toes, base 8 from counting the spaces between the fingers (attested in California), and base 12 from counting the knuckles (3 each for the four fingers).
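For experimenting with the bases discussed in the sections that follow, here is a small general-purpose conversion sketch in Python (the function name and digit alphabet are our own choices, not anything standardized):

```python
def to_base(n: int, base: int) -> str:
    """Express a non-negative integer in the given base (2..36)."""
    if n == 0:
        return "0"
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(799, 20))  # '1jj' -- 1*400 + 19*20 + 19
```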
Many languages of Melanesia have (or once had) counting systems based on parts of the body which do not have a numeric base; there are (or were) no numerals, but rather nouns for the relevant parts of the body—or simply pointing to the relevant spots—were used for quantities. For example, 1–4 may be the fingers, 5 'thumb', 6 'wrist', 7 'elbow', 8 'shoulder', etc., across the body and down the other arm, so that the opposite little finger represents a number from 17 (Torres Islands) to 23 (Eleman). For numbers beyond this, the torso, legs and toes may be used, or one might count back up the other arm and back down the first, depending on the people.
Binary systems are base 2, using two symbols, typically 0 and 1. With only two symbols, binary lends itself naturally to logical systems such as digital computers.
Base 3 counting has practical use in some analog logic, in baseball scoring, and in self-similar mathematical structures.
Some Austronesian and Melanesian ethnic groups, some Sulawesi peoples, and some Papua New Guineans count with the base number four, using the terms "asu" and "aso", the word for dog, because the ubiquitous village dog has four legs. Anthropologists argue this is also based on early humans noting the body feature of two arms and two legs shared by humans and animals, as well as on the system's ease in simple arithmetic and counting. As an example of that ease, a realistic scenario could involve a farmer returning from the market with fifty "asu" heads of pig (200), less 30 "asu" (120) of pig bartered for 10 "asu" (40) of goats, noting his new pig count as twenty "asu": 80 pigs remaining. The system correlates with the dozen counting system and is still in common use in these areas as a natural and easy method of simple arithmetic.
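The farmer's arithmetic above can be checked directly; a minimal sketch, assuming only that one "asu" stands for four animals:

```python
ASU = 4  # one asu = four animals (the legs of a village dog)

def asu_to_count(asu: int) -> int:
    """Convert a quantity expressed in asu to a head count."""
    return asu * ASU

pigs = asu_to_count(50)          # 200 pigs brought from market
pigs -= asu_to_count(30)         # 30 asu (120 pigs) bartered for goats
assert pigs == asu_to_count(20)  # 20 asu = 80 pigs remaining
```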
Quinary systems are based on the number 5. It is almost certain the quinary system developed from counting by fingers (five fingers per hand). An example is the Epi languages of Vanuatu, where 5 is "luna" 'hand', 10 "lua-luna" 'two hand', 15 "tolu-luna" 'three hand', etc.; 11 is then "lua-luna tai" 'two-hand one', and 17 "tolu-luna lua" 'three-hand two'.
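The compounding pattern can be sketched in code using only the Epi words attested above (the helper name and the restriction to attested forms are ours):

```python
ONES = {1: "tai", 2: "lua", 3: "tolu"}              # forms attested above
HANDS = {1: "luna", 2: "lua-luna", 3: "tolu-luna"}  # 5, 10, 15

def epi_numeral(n: int) -> str:
    """Compose a quinary numeral, hands first, then the remainder.
    Only words attested in the text are included, so n is limited."""
    q, r = divmod(n, 5)
    parts = ([HANDS[q]] if q else []) + ([ONES[r]] if r else [])
    return " ".join(parts)

print(epi_numeral(17))  # tolu-luna lua  ("three-hand two")
print(epi_numeral(11))  # lua-luna tai   ("two-hand one")
```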
5 is a common "auxiliary base", or "sub-base", where 6 is 'five and one', 7 'five and two', etc. Aztec was a vigesimal (base-20) system with sub-base 5.
The Morehead-Maro languages of Southern New Guinea are examples of the rare base 6 system with monomorphemic words running up to 6^6. Examples are Kanum and Kómnzo. The Sko languages on the North Coast of New Guinea follow a base-24 system with a sub-base of 6.
Septenary systems are very rare, as few natural objects consistently have seven distinctive features. Traditionally, base-seven counting occurs in week-related timing. It has been suggested that the Palikur language has a base-seven system, but this is dubious.
Octal counting systems are based on the number 8. Examples can be found in the Yuki language of California and in the Pamean languages of Mexico, because the Yuki and Pame keep count by using the four spaces between their fingers rather than the fingers themselves.
It has been suggested that Nenets has a base-nine system.
A majority of traditional number systems are decimal. This dates back at least to the ancient Egyptians, who used a wholly decimal system. Anthropologists hypothesize this may be due to humans having five digits per hand, ten in total. There are many regional variations of the decimal system.
Duodecimal systems, based on 12, are found in a number of languages.
Duodecimal numeric systems have some practical advantages over decimal. Twelve, the base digit, is a highly composite number, evenly divisible by 2, 3, 4, and 6, which is convenient in market and trade settings.
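A one-line comparison in Python of the proper divisors of twelve and ten illustrates the point:

```python
print([d for d in range(2, 12) if 12 % d == 0])  # [2, 3, 4, 6]
print([d for d in range(2, 10) if 10 % d == 0])  # [2, 5]
```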
Because of several measurements based on twelve, many Western languages have words for base-twelve units such as "dozen", "gross" and "great gross", which allow for rudimentary duodecimal nomenclature, such as "two gross six dozen" for 360. Ancient Romans used a decimal system for integers, but switched to duodecimal for fractions, and correspondingly Latin developed a rich vocabulary for duodecimal-based fractions (see Roman numerals). A notable fictional duodecimal system is found in J. R. R. Tolkien's Elvish languages, which use both decimal and duodecimal numerals.
Hexadecimal systems are based on 16.
The traditional Chinese units of measurement were base-16. For example, one jīn (斤) in the old system equals sixteen taels. The suanpan (Chinese abacus) can be used to perform hexadecimal calculations such as additions and subtractions.
South Asian monetary systems were base-16. One rupee in Pakistan and India was divided into 16 annay. A single anna was subdivided into four paisa or twelve pies (thus there were 64 paise or 192 pies in a rupee). The anna was demonetised as a currency unit when India decimalised its currency in 1957, followed by Pakistan in 1961.
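The subdivisions described above are easy to verify; a minimal consistency check (the constant names are ours):

```python
ANNAS_PER_RUPEE = 16
PAISA_PER_ANNA = 4
PIES_PER_ANNA = 12

# Consistency check of the subdivisions described above:
assert ANNAS_PER_RUPEE * PAISA_PER_ANNA == 64    # paise per rupee
assert ANNAS_PER_RUPEE * PIES_PER_ANNA == 192    # pies per rupee
```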
Vigesimal numbers use the number 20 as the base number for counting. Anthropologists are convinced the system originated from digit counting, as did bases five and ten, twenty being the number of human fingers and toes combined.
The system is in widespread use across the world. Some include the classical Mesoamerican cultures, still in use today in the modern indigenous languages of their descendants, namely the Nahuatl and Mayan languages (see Maya numerals). A modern national language which uses a full vigesimal system is Dzongkha in Bhutan.
Partial vigesimal systems are found in some European languages: Basque, Celtic languages, French (from Celtic), Danish, and Georgian. In these languages the systems are vigesimal up to 99, then decimal from 100 up. That is, 140 is 'one hundred two score', not *seven score, and there is no numeral for 400.
The term "score" originates from tally sticks, and is perhaps a remnant of Celtic vigesimal counting. It was widely used to learn the pre-decimal British currency in this idiom: "a dozen pence and a score of bob", referring to the 20 shillings in a pound. For Americans the term is most known from the opening of the Gettysburg Address: ""Four score and seven years ago our fathers..."".
The Sko languages have a base-24 system with a sub-base of 6.
Ngiti has base 32.
Ekari has a base-60 system. Sumer had a base-60 system with a decimal sub-base (perhaps a conflation of the decimal and duodecimal systems of its constituent peoples), which was the origin of the numbering of modern degrees, minutes, and seconds.
Supyire is said to have a base-80 system; it counts in twenties (with 5 and 10 as sub-bases) up to 80, then by eighties up to 400, and then by 400s (great scores).
For example, 799 is expressed as 400 + (4 × 80) + (3 × 20) + (10 + (5 + 4)).
The database "Numeral Systems of the World's Languages", compiled by Eugene S. L. Chan of Hong Kong, is hosted by the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. The database currently contains data for about 4000 languages. | https://en.wikipedia.org/wiki?curid=21483 |
Neutrino
A neutrino (denoted by the Greek letter ν) is a fermion (an elementary particle with spin of ½) that interacts only via the weak subatomic force and gravity. The neutrino is so named because it is electrically neutral and because its rest mass is so small ("-ino") that it was long thought to be zero. The mass of the neutrino is much smaller than that of the other known elementary particles. The weak force has a very short range, the gravitational interaction is extremely weak, and neutrinos do not participate in the strong interaction. Thus, neutrinos typically pass through normal matter unimpeded and undetected.
Weak interactions create neutrinos in one of three leptonic flavors: electron neutrinos (νe), muon neutrinos (νμ), or tau neutrinos (ντ), in association with the corresponding charged lepton. Although neutrinos were long believed to be massless, it is now known that there are three discrete neutrino masses with different tiny values, but they do not correspond uniquely to the three flavors. A neutrino created with a specific flavor has an associated specific quantum superposition of all three mass states. As a result, neutrinos oscillate between different flavors in flight. For example, an electron neutrino produced in a beta decay reaction may interact in a distant detector as a muon or tau neutrino. Although only differences between squares of the three mass values are known as of 2019, cosmological observations imply that the sum of the three masses must be less than one millionth that of the electron.
For each neutrino, there also exists a corresponding antiparticle, called an "antineutrino", which also has spin of ½ and no electric charge. Antineutrinos are distinguished from the neutrinos by having opposite signs of lepton number and right-handed instead of left-handed chirality. To conserve total lepton number (in nuclear beta decay), electron neutrinos only appear together with positrons (anti-electrons) or electron-antineutrinos, whereas electron antineutrinos only appear with electrons or electron neutrinos.
Neutrinos are created by various radioactive decays, most notably beta decay, as well as by other processes such as nuclear reactions in stars and particle decays.
The majority of neutrinos detected near the Earth come from nuclear reactions inside the Sun.
At the surface of our planet, the flux is about 65 billion (6.5 × 10^10) solar neutrinos per second per square centimeter.
Neutrinos can also be used for tomography of the interior of the Earth.
Research is intense in the hunt to elucidate the essential nature of neutrinos, including their absolute masses, whether they are their own antiparticles, and whether they violate CP symmetry.
The neutrino was postulated first by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin). In contrast to Niels Bohr, who proposed a statistical version of the conservation laws to explain the observed continuous energy spectra in beta decay, Pauli hypothesized an undetected particle that he called a "neutron", using the same "-on" ending employed for naming both the proton and the electron. He considered that the new particle was emitted from the nucleus together with the electron or beta particle in the process of beta decay.
James Chadwick discovered a much more massive neutral nuclear particle in 1932 and named it a neutron also, leaving two kinds of particles with the same name. Earlier (in 1930) Pauli had used the term "neutron" for both the neutral particle that conserved energy in beta decay, and a presumed neutral particle in the nucleus; initially he did not consider these two neutral particles as distinct from each other. The word "neutrino" entered the scientific vocabulary through Enrico Fermi, who used it during a conference in Paris in July 1932 and at the Solvay Conference in October 1933, where Pauli also employed it. The name (the Italian equivalent of "little neutral one") was jokingly coined by Edoardo Amaldi during a conversation with Fermi at the Institute of Physics of via Panisperna in Rome, in order to distinguish this light neutral particle from Chadwick's heavy neutron.
In Fermi's theory of beta decay, Chadwick's large neutral particle could decay to a proton, electron, and the smaller neutral particle (now called an "electron antineutrino"): n → p + e⁻ + ν̄e.
Fermi's paper, written in 1934, unified Pauli's neutrino with Paul Dirac's positron and Werner Heisenberg's neutron–proton model and gave a solid theoretical basis for future experimental work. The journal "Nature" rejected Fermi's paper, saying that the theory was "too remote from reality". He submitted the paper to an Italian journal, which accepted it, but the general lack of interest in his theory at that early date caused him to switch to experimental physics.
By 1934, there was experimental evidence against Bohr's idea that energy conservation is invalid for beta decay: At the Solvay conference of that year, measurements of the energy spectra of beta particles (electrons) were reported, showing that there is a strict limit on the energy of electrons from each type of beta decay. Such a limit is not expected if the conservation of energy is invalid, in which case any amount of energy would be statistically available in at least a few decays. The natural explanation of the beta decay spectrum as first measured in 1934 was that only a limited (and conserved) amount of energy was available, and a new particle was sometimes taking a varying fraction of this limited energy, leaving the rest for the beta particle. Pauli made use of the occasion to publicly emphasize that the still-undetected "neutrino" must be an actual particle.
In 1942, Wang Ganchang first proposed the use of beta capture to experimentally detect neutrinos. In the 20 July 1956 issue of "Science", Clyde Cowan, Frederick Reines, F. B. Harrison, H. W. Kruse, and A. D. McGuire published confirmation that they had detected the neutrino, a result that was rewarded almost forty years later with the 1995 Nobel Prize.
In this experiment, now known as the Cowan–Reines neutrino experiment, antineutrinos created in a nuclear reactor by beta decay reacted with protons to produce neutrons and positrons: ν̄e + p → n + e⁺.
The positron quickly finds an electron, and they annihilate each other. The two resulting gamma rays (γ) are detectable. The neutron can be detected by its capture on an appropriate nucleus, releasing a gamma ray. The coincidence of both events – positron annihilation and neutron capture – gives a unique signature of an antineutrino interaction.
In February 1965, the first neutrino found in nature was identified in one of South Africa's gold mines by a group which included Friedel Sellschop. The experiment was performed in a specially prepared chamber at a depth of 3 km in the ERPM mine near Boksburg. A plaque in the main building commemorates the discovery. The experiments also implemented a primitive neutrino astronomy and looked at issues of neutrino physics and weak interactions.
The antineutrino discovered by Cowan and Reines is the antiparticle of the electron neutrino.
In 1962, Leon M. Lederman, Melvin Schwartz and Jack Steinberger showed that more than one type of neutrino exists by first detecting interactions of the muon neutrino (already hypothesised with the name "neutretto"), which earned them the 1988 Nobel Prize in Physics.
When the third type of lepton, the tau, was discovered in 1975 at the Stanford Linear Accelerator Center, it was also expected to have an associated neutrino (the tau neutrino). First evidence for this third neutrino type came from the observation of missing energy and momentum in tau decays analogous to the beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 by the DONUT collaboration at Fermilab; its existence had already been inferred by both theoretical consistency and experimental data from the Large Electron–Positron Collider.
In the 1960s, the now-famous Homestake experiment made the first measurement of the flux of electron neutrinos arriving from the core of the Sun and found a value that was between one third and one half the number predicted by the Standard Solar Model. This discrepancy, which became known as the solar neutrino problem, remained unresolved for some thirty years, while possible problems with both the experiment and the solar model were investigated, but none could be found. Eventually it was realized that both were actually correct, and that the discrepancy between them was due to neutrinos being more complex than was previously assumed. It was postulated that the three neutrinos had nonzero and slightly different masses, and could therefore oscillate into undetectable flavors on their flight to the Earth. This hypothesis was investigated by a new series of experiments, thereby opening a new major field of research that still continues. Eventual confirmation of the phenomenon of neutrino oscillation led to two Nobel prizes, to Raymond Davis, Jr., who conceived and led the Homestake experiment, and to Art McDonald, who led the SNO experiment, which could detect all of the neutrino flavors and found no deficit.
A practical method for investigating neutrino oscillations was first suggested by Bruno Pontecorvo in 1957 using an analogy with kaon oscillations; over the subsequent 10 years he developed the mathematical formalism and the modern formulation of vacuum oscillations. In 1985 Stanislav Mikheyev and Alexei Smirnov (expanding on 1978 work by Lincoln Wolfenstein) noted that flavor oscillations can be modified when neutrinos propagate through matter. This so-called Mikheyev–Smirnov–Wolfenstein effect (MSW effect) is important to understand because many neutrinos emitted by fusion in the Sun pass through the dense matter in the solar core (where essentially all solar fusion takes place) on their way to detectors on Earth.
Starting in 1998, experiments began to show that solar and atmospheric neutrinos change flavors (see Super-Kamiokande and Sudbury Neutrino Observatory). This resolved the solar neutrino problem: the electron neutrinos produced in the Sun had partly changed into other flavors which the experiments could not detect.
Although individual experiments, such as the set of solar neutrino experiments, are consistent with non-oscillatory mechanisms of neutrino flavor conversion, taken altogether, neutrino experiments imply the existence of neutrino oscillations. Especially relevant in this context are the reactor experiment KamLAND and the accelerator experiments such as MINOS. The KamLAND experiment has indeed identified oscillations as the neutrino flavor conversion mechanism involved in the solar electron neutrinos. Similarly MINOS confirms the oscillation of atmospheric neutrinos and gives a better determination of the mass squared splitting. Takaaki Kajita of Japan, and Arthur B. McDonald of Canada, received the 2015 Nobel Prize for Physics for their landmark finding, theoretical and experimental, that neutrinos can change flavors.
Raymond Davis, Jr. and Masatoshi Koshiba were jointly awarded the 2002 Nobel Prize in Physics. Both conducted pioneering work on solar neutrino detection, and Koshiba's work also resulted in the first real-time observation of neutrinos from the SN 1987A supernova in the nearby Large Magellanic Cloud. These efforts marked the beginning of neutrino astronomy.
SN 1987A represents the only verified detection of neutrinos from a supernova. However, many stars have gone supernova in the universe, leaving a theorized diffuse supernova neutrino background.
Neutrinos have half-integer spin (½); therefore they are fermions. Neutrinos are leptons. They have only been observed to interact through the weak force, although it is assumed that they also interact gravitationally.
Weak interactions create neutrinos in one of three leptonic flavors: electron neutrinos (νe), muon neutrinos (νμ), or tau neutrinos (ντ), associated with the corresponding charged leptons, the electron (e⁻), muon (μ⁻), and tau (τ⁻), respectively.
Although neutrinos were long believed to be massless, it is now known that there are three discrete neutrino masses; each neutrino flavor state is a linear combination of the three discrete mass eigenstates. Although only differences of squares of the three mass values are known as of 2016, experiments have shown that these masses are tiny in magnitude. From cosmological measurements, it has been calculated that the sum of the three neutrino masses must be less than one millionth that of the electron.
More formally, neutrino flavor eigenstates (creation and annihilation combinations) are not the same as the neutrino mass eigenstates (simply labelled “1”, “2”, and “3”). As of 2016, it is not known which of these three is the heaviest. In analogy with the mass hierarchy of the charged leptons, the configuration with mass 2 being lighter than mass 3 is conventionally called the “normal hierarchy”, while in the “inverted hierarchy”, the opposite would hold. Several major experimental efforts are underway to help establish which is correct.
A neutrino created in a specific flavor eigenstate is in an associated specific quantum superposition of all three mass eigenstates. This is possible because the three masses differ so little that they cannot be experimentally distinguished within any practical flight path, due to the uncertainty principle. The proportion of each mass state in the produced pure flavor state has been found to depend profoundly on that flavor. The relationship between flavor and mass eigenstates is encoded in the PMNS matrix. Experiments have established values for the elements of this matrix.
A non-zero mass allows neutrinos to possibly have a tiny magnetic moment; if so, neutrinos would interact electromagnetically, although no such interaction has ever been observed.
Neutrinos oscillate between different flavors in flight. For example, an electron neutrino produced in a beta decay reaction may interact in a distant detector as a muon or tau neutrino, as defined by the flavor of the charged lepton produced in the detector. This oscillation occurs because the three mass state components of the produced flavor travel at slightly different speeds, so that their quantum mechanical wave packets develop relative phase shifts that change how they combine to produce a varying superposition of three flavors. Each flavor component thereby oscillates as the neutrino travels, with the flavors varying in relative strengths. The relative flavor proportions when the neutrino interacts represent the relative probabilities for that flavor of interaction to produce the corresponding flavor of charged lepton.
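In the simplified two-flavor case, the oscillation probability described above has a standard closed form; the following is the textbook expression (not derived in this article), where θ is the mixing angle, Δm² the mass-squared splitting, L the flight distance, and E the neutrino energy. The numerical factor 1.27 applies when Δm² is in eV², L in km, and E in GeV:

```latex
P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,
    \sin^2\!\left(\frac{\Delta m^2\,L}{4E}\right)
  \;\approx\; \sin^2(2\theta)\,
    \sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)
```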
There are other possibilities in which neutrinos could oscillate even if they were massless: if Lorentz symmetry were not an exact symmetry, neutrinos could experience Lorentz-violating oscillations.
Neutrinos traveling through matter, in general, undergo a process analogous to light traveling through a transparent material. This process is not directly observable because it does not produce ionizing radiation, but gives rise to the MSW effect. Only a small fraction of the neutrino's energy is transferred to the material.
For each neutrino, there also exists a corresponding antiparticle, called an "antineutrino", which also has no electric charge and half-integer spin. They are distinguished from the neutrinos by having opposite signs of lepton number and opposite chirality. As of 2016, no evidence has been found for any other difference. In all observations so far of leptonic processes (despite extensive and continuing searches for exceptions), there is never any change in overall lepton number; for example, if total lepton number is zero in the initial state, electron neutrinos appear in the final state together with only positrons (anti-electrons) or electron-antineutrinos, and electron antineutrinos with electrons or electron neutrinos.
Antineutrinos are produced in nuclear beta decay together with a beta particle, in which, e.g., a neutron decays into a proton, electron, and antineutrino. All antineutrinos observed thus far possess right-handed helicity (i.e. only one of the two possible spin states has ever been seen), while neutrinos are left-handed. Nevertheless, as neutrinos have mass, their helicity is frame-dependent, so it is the related frame-independent property of chirality that is relevant here.
Antineutrinos were first detected as a result of their interaction with protons in a large tank of water. This was installed next to a nuclear reactor as a controllable source of the antineutrinos (See: Cowan–Reines neutrino experiment).
Researchers around the world have begun to investigate the possibility of using antineutrinos for reactor monitoring in the context of preventing the proliferation of nuclear weapons.
Because antineutrinos and neutrinos are neutral particles, it is possible that they are the same particle. Particles that have this property are known as Majorana particles, named after the Italian physicist Ettore Majorana who first proposed the concept. For the case of neutrinos this theory has gained popularity as it can be used, in combination with the seesaw mechanism, to explain why neutrino masses are so small compared to those of the other elementary particles, such as electrons or quarks. Majorana neutrinos would have the property that the neutrino and antineutrino could be distinguished only by chirality; what experiments observe as a difference between the neutrino and antineutrino could simply be due to one particle with two possible chiralities.
It is not yet known whether neutrinos are Majorana or Dirac particles. It is possible to test this property experimentally. For example, if neutrinos are indeed Majorana particles, then lepton-number violating processes such as neutrinoless double beta decay would be allowed, while they would not if neutrinos are Dirac particles. Several experiments have been and are being conducted to search for this process, e.g. GERDA, EXO, and SNO+. The cosmic neutrino background is also a probe of whether neutrinos are Majorana particles, since there should be a different number of cosmic neutrinos detected in either the Dirac or Majorana case.
Neutrinos can interact with a nucleus, changing it to another nucleus. This process is used in radiochemical neutrino detectors. In this case, the energy levels and spin states within the target nucleus have to be taken into account to estimate the probability for an interaction. In general the interaction probability increases with the number of neutrons and protons within a nucleus.
It is very hard to uniquely identify neutrino interactions among the natural background of radioactivity. For this reason, in early experiments a special reaction channel was chosen to facilitate the identification: the interaction of an antineutrino with one of the hydrogen nuclei in the water molecules. A hydrogen nucleus is a single proton, so simultaneous nuclear interactions, which would occur within a heavier nucleus, don't need to be considered for the detection experiment. Within a cubic metre of water placed right outside a nuclear reactor, only relatively few such interactions can be recorded, but the setup is now used for measuring the reactor's plutonium production rate.
Very much like neutrons do in nuclear reactors, neutrinos can induce fission reactions within heavy nuclei. So far, this reaction has not been measured in a laboratory, but is predicted to happen within stars and supernovae. The process affects the abundance of isotopes seen in the universe. Neutrino fission of deuterium nuclei has been observed in the Sudbury Neutrino Observatory, which uses a heavy water detector.
There are three known types ("flavors") of neutrinos: electron neutrino (νe), muon neutrino (νμ), and tau neutrino (ντ), named after their partner leptons in the Standard Model. The current best measurement of the number of neutrino types comes from observing the decay of the Z boson. This particle can decay into any light neutrino and its antineutrino, and the more available types of light neutrinos, the shorter the lifetime of the Z boson. Measurements of the Z lifetime have shown that three light neutrino flavors couple to the Z. The correspondence between the six quarks in the Standard Model and the six leptons, among them the three neutrinos, suggests to physicists' intuition that there should be exactly three types of neutrino.
There are several active research areas involving the neutrino. Some are concerned with testing predictions of neutrino behavior. Other research is focused on measurement of unknown properties of neutrinos; there is special interest in experiments that determine their masses and rates of CP violation, which cannot be predicted from current theory.
International scientific collaborations install large neutrino detectors near nuclear reactors or in neutrino beams from particle accelerators to better constrain the neutrino masses and the values for the magnitude and rates of oscillations between neutrino flavors. These experiments are thereby searching for the existence of CP violation in the neutrino sector; that is, whether or not the laws of physics treat neutrinos and antineutrinos differently.
The KATRIN experiment in Germany began to acquire data in June 2018 to determine the value of the mass of the electron neutrino, with other approaches to this problem in the planning stages.
Despite their tiny masses, neutrinos are so numerous that their gravitational force can influence other matter in the universe.
The three known neutrino flavors are the only established elementary particle candidates for dark matter, specifically hot dark matter, although the conventional neutrinos seem to be essentially ruled out as a substantial proportion of dark matter based on observations of the cosmic microwave background. It still seems plausible that heavier, sterile neutrinos might compose warm dark matter, if they exist.
Other efforts search for evidence of a sterile neutrino – a fourth neutrino flavor that does not interact with matter like the three known neutrino flavors. The possibility of "sterile" neutrinos is unaffected by the Z boson decay measurements described above: If their mass is greater than half the Z boson's mass, they could not be a decay product. Therefore, heavy sterile neutrinos would have a mass of at least 45.6 GeV.
The existence of such particles is in fact hinted at by experimental data from the LSND experiment. On the other hand, the currently running MiniBooNE experiment suggested that sterile neutrinos are not required to explain the experimental data, although the latest research into this area is ongoing and anomalies in the MiniBooNE data may allow for exotic neutrino types, including sterile neutrinos. A recent re-analysis of reference electron spectra data from the Institut Laue-Langevin has also hinted at a fourth, sterile neutrino.
According to an analysis published in 2010, data from the Wilkinson Microwave Anisotropy Probe of the cosmic background radiation is compatible with either three or four types of neutrinos.
Another hypothesis concerns "neutrinoless double-beta decay", which, if it exists, would violate lepton number conservation. Searches for this mechanism are underway but have not yet found evidence for it. If such a decay were found, then what are now called antineutrinos could not be true antiparticles.
Cosmic ray neutrino experiments detect neutrinos from space to study both the nature of neutrinos and the cosmic sources producing them.
Before neutrinos were found to oscillate, they were generally assumed to be massless, propagating at the speed of light. According to the theory of special relativity, the question of neutrino velocity is closely related to their mass: If neutrinos are massless, they must travel at the speed of light, and if they have mass they cannot reach the speed of light. Due to their tiny mass, the predicted speed is extremely close to the speed of light in all experiments, and current detectors are not sensitive to the expected difference.
Some Lorentz-violating variants of quantum gravity might also allow faster-than-light neutrinos. A comprehensive framework for Lorentz violations is the Standard-Model Extension (SME).
In the early 1980s, first measurements of neutrino speed were done using pulsed pion beams (produced by pulsed proton beams hitting a target). The pions decayed producing neutrinos, and the neutrino interactions observed within a time window in a detector at a distance were consistent with the speed of light. This measurement was repeated in 2007 using the MINOS detectors, which found a speed consistent, at the 99% confidence level, with the speed of light. The central value of 1.000051 "c" is higher than the speed of light but is also consistent with a velocity of exactly "c" or even slightly less. This measurement also set an upper bound on the mass of the muon neutrino at 99% confidence. After the detectors for the project were upgraded in 2012, MINOS refined their initial result and found agreement with the speed of light, with the difference in the arrival time of neutrinos and light of −0.0006% (±0.0012%).
A similar observation was made, on a much larger scale, with supernova 1987A (SN 1987A). 10 MeV antineutrinos from the supernova were detected within a time window that was consistent with the speed of light for the neutrinos. So far, all measurements of neutrino speed have been consistent with the speed of light.
In September 2011, the OPERA collaboration released calculations showing velocities of 17 GeV and 28 GeV neutrinos exceeding the speed of light in their experiments. In November 2011, OPERA repeated its experiment with changes so that the speed could be determined individually for each detected neutrino. The results showed the same faster-than-light speed. In February 2012, reports came out that the results may have been caused by a loose fiber optic cable attached to one of the atomic clocks which measured the departure and arrival times of the neutrinos. An independent recreation of the experiment in the same laboratory by ICARUS found no discernible difference between the speed of a neutrino and the speed of light.
In June 2012, CERN announced that new measurements conducted by all four Gran Sasso experiments (OPERA, ICARUS, Borexino and LVD) found agreement between the speed of light and the speed of neutrinos, finally refuting the initial OPERA claim.
The Standard Model of particle physics assumed that neutrinos are massless. The experimentally established phenomenon of neutrino oscillation, which mixes neutrino flavour states with neutrino mass states (analogously to CKM mixing), requires neutrinos to have nonzero masses. Massive neutrinos were originally conceived by Bruno Pontecorvo in the 1950s. Enhancing the basic framework to accommodate their mass is straightforward: a right-handed neutrino field can be added to the Lagrangian.
Providing for neutrino mass can be done in two ways, and some proposals use both: through a Dirac mass term, as for the other fermions (which requires right-handed neutrino states), or through a Majorana mass term, which is possible because the neutrino carries no electric charge.
The strongest upper limit on the masses of neutrinos comes from cosmology: the Big Bang model predicts that there is a fixed ratio between the number of neutrinos and the number of photons in the cosmic microwave background. If the total energy of all three types of neutrinos exceeded an average of 50 eV per neutrino, there would be so much mass in the universe that it would collapse. This limit can be circumvented by assuming that the neutrino is unstable, but there are limits within the Standard Model that make this difficult. A much more stringent constraint comes from a careful analysis of cosmological data, such as the cosmic microwave background radiation, galaxy surveys, and the Lyman-alpha forest. These indicate that the summed masses of the three neutrinos must be less than 0.3 eV.
The 2015 Nobel Prize in Physics was awarded to Takaaki Kajita and Arthur B. McDonald for their experimental discovery of neutrino oscillations, which demonstrates that neutrinos have mass.
In 1998, research results at the Super-Kamiokande neutrino detector determined that neutrinos can oscillate from one flavor to another, which requires that they must have a nonzero mass. While this shows that neutrinos have mass, the absolute neutrino mass scale is still not known. This is because neutrino oscillations are sensitive only to the difference in the squares of the masses. The best estimate of the difference in the squares of the masses of mass eigenstates 1 and 2 was published by KamLAND in 2005: |Δm²₂₁| = 0.000079 eV². In 2006, the MINOS experiment measured oscillations from an intense muon neutrino beam, determining the difference in the squares of the masses between neutrino mass eigenstates 2 and 3. The initial results indicate |Δm²₃₂| = 0.0027 eV², consistent with previous results from Super-Kamiokande. Since |Δm²₃₂| is the difference of two squared masses, at least one of them must be at least the square root of this value. Thus, there exists at least one neutrino mass eigenstate with a mass of at least 0.05 eV.
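A quick numerical check of that last step, using the MINOS value quoted above (variable names are ours):

```python
import math

dm2_32 = 0.0027                 # eV^2, MINOS value quoted above
m_min = math.sqrt(dm2_32)       # lower bound on the heavier eigenstate
print(round(m_min, 3))          # 0.052 -> "at least ~0.05 eV"
```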
In 2009, lensing data of a galaxy cluster were analyzed to predict a surprisingly large neutrino mass. This high value requires that the three neutrino masses be nearly equal, with oscillation splittings on the order of milli-electronvolts. The analysis was updated in 2016; it predicts three sterile neutrinos of the same mass, is consistent with the Planck dark matter fraction, and is consistent with the non-observation of neutrinoless double beta decay. The masses lie below the Mainz–Troitsk upper bound of 2 eV for the electron antineutrino. The latter has been under test since June 2018 in the KATRIN experiment, which searches for a mass between 0.2 eV and 2 eV.
A number of efforts are under way to directly determine the absolute neutrino mass scale in laboratory experiments. The methods applied involve nuclear beta decay (KATRIN and MARE).
On 31 May 2010, OPERA researchers observed the first tau neutrino candidate event in a muon neutrino beam, the first time this transformation in neutrinos had been observed, providing further evidence that they have mass.
In July 2010, the 3-D MegaZ DR7 galaxy survey reported that they had measured a limit of the combined mass of the three neutrino varieties to be less than 0.28 eV. A tighter upper bound yet for this sum of masses, 0.23 eV, was reported in March 2013 by the Planck collaboration, whereas a February 2014 result estimates the sum as 0.320 ± 0.081 eV based on discrepancies between the cosmological consequences implied by Planck's detailed measurements of the cosmic microwave background and predictions arising from observing other phenomena, combined with the assumption that neutrinos are responsible for the observed weaker gravitational lensing than would be expected from massless neutrinos.
If the neutrino is a Majorana particle, the mass may be calculated by finding the half-life of neutrinoless double-beta decay of certain nuclei. The current lowest upper limit on the Majorana mass of the neutrino has been set by KamLAND-Zen: 0.060–0.161 eV.
Standard Model neutrinos are fundamental point-like particles, without any width or volume. Since the neutrino is an elementary particle it does not have a size in the same sense as everyday objects. Properties associated with conventional "size" are absent: There is no minimum distance between them, and neutrinos cannot be condensed into a separate uniform substance that occupies a finite volume.
Experimental results show that within the margin of error, all produced and observed neutrinos have left-handed helicities (spins antiparallel to momenta), and all antineutrinos have right-handed helicities. In the massless limit, that means that only one of two possible chiralities is observed for either particle. These are the only chiralities included in the Standard Model of particle interactions.
It is possible that their counterparts (right-handed neutrinos and left-handed antineutrinos) simply do not exist. If they do, their properties are substantially different from observable neutrinos and antineutrinos. It is theorized that they are either very heavy (on the order of GUT scale—see "Seesaw mechanism"), do not participate in weak interaction (so-called "sterile neutrinos"), or both.
The existence of nonzero neutrino masses somewhat complicates the situation. Neutrinos are produced in weak interactions as chirality eigenstates. Chirality of a massive particle is not a constant of motion; helicity is, but the chirality operator does not share eigenstates with the helicity operator. Free neutrinos propagate as mixtures of left- and right-handed helicity states, with mixing amplitudes on the order of mc²/E. This does not significantly affect the experiments, because the neutrinos involved are nearly always ultrarelativistic, and thus the mixing amplitudes are vanishingly small. Effectively, they travel so quickly and time passes so slowly in their rest frames that they do not have enough time to change over any observable path. For example, most solar neutrinos have energies on the order of 0.1 MeV to 1 MeV, so the fraction of neutrinos with "wrong" helicity among them is vanishingly small.
An unexpected series of experimental results for the rate of decay of heavy highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The rates of weak decay of two radioactive species with half lives of about 40 seconds and 200 seconds are found to have a significant oscillatory modulation, with a period of about 7 seconds.
The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. As the decay process produces an electron neutrino, some of the proposed explanations for the observed oscillation rate invoke neutrino properties. Initial ideas related to flavour oscillation were met with skepticism. A more recent proposal involves mass differences between neutrino mass eigenstates.
Nuclear reactors are the major source of human-generated neutrinos. The majority of energy in a nuclear reactor is generated by fission (the four main fissile isotopes in nuclear reactors are uranium-235, uranium-238, plutonium-239 and plutonium-241), and the resultant neutron-rich daughter nuclides rapidly undergo additional beta decays, each converting one neutron to a proton and an electron and releasing an electron antineutrino (ν̄e). Including these subsequent decays, the average nuclear fission releases about 200 MeV of energy, of which roughly 95.5% is retained in the core as heat, and roughly 4.5% (or about 9 MeV) is radiated away as antineutrinos. For a typical nuclear reactor with a thermal power of 4000 MW, the total power production from fissioning atoms is actually 4185 MW, of which 185 MW is radiated away as antineutrino radiation and never appears in the engineering. This is to say, 185 MW of fission energy is "lost" from this reactor and does not appear as heat available to run turbines, since antineutrinos penetrate all building materials practically without interaction.
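As a rough sanity check of the budget just described, under the approximate percentages above (an illustration only, not a precise reactor model):

```python
# Rough check of the reactor energy budget described above; ~4.5% of the
# ~200 MeV released per fission is carried off by antineutrinos.
NU_FRACTION = 0.045

p_thermal_mw = 4000.0                        # heat retained in the core
p_total_mw = p_thermal_mw / (1 - NU_FRACTION)
p_nu_mw = p_total_mw - p_thermal_mw
print(round(p_total_mw), round(p_nu_mw))     # 4188 188 -- close to the
                                             # 4185/185 MW quoted above
```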
The antineutrino energy spectrum depends on the degree to which the fuel is burned (plutonium-239 fission antineutrinos on average have slightly more energy than those from uranium-235 fission), but in general, the detectable antineutrinos from fission have a peak energy between about 3.5 and 4 MeV, with a maximum energy of about 10 MeV. There is no established experimental method to measure the flux of low-energy antineutrinos. Only antineutrinos with an energy above the threshold of 1.8 MeV can trigger inverse beta decay and thus be unambiguously identified (see below). An estimated 3% of all antineutrinos from a nuclear reactor carry an energy above this threshold. Thus, an average nuclear power plant may generate over 10^20 antineutrinos per second above this threshold, and a much larger number below it, which cannot be seen with present detector technology.
Some particle accelerators have been used to make neutrino beams. The technique is to collide protons with a fixed target, producing charged pions or kaons. These unstable particles are then magnetically focused into a long tunnel where they decay while in flight. Because of the relativistic boost of the decaying particle, the neutrinos are produced as a beam rather than isotropically. Efforts to design an accelerator facility where neutrinos are produced through muon decays are ongoing. Such a setup is generally known as a "neutrino factory".
Nuclear weapons also produce very large quantities of neutrinos. Fred Reines and Clyde Cowan considered the detection of neutrinos from a bomb prior to their search for reactor neutrinos; a fission reactor was recommended as a better alternative by Los Alamos physics division leader J.M.B. Kellogg. Fission weapons produce antineutrinos (from the fission process), and fusion weapons produce both neutrinos (from the fusion process) and antineutrinos (from the initiating fission explosion).
Neutrinos are produced together with the natural background radiation. In particular, the decay chains of uranium-238 and thorium-232, as well as potassium-40, include beta decays which emit antineutrinos. These so-called geoneutrinos can provide valuable information on the Earth's interior. A first indication for geoneutrinos was found by the KamLAND experiment in 2005; updated results have been presented by KamLAND and Borexino. The main background in the geoneutrino measurements is the antineutrinos coming from reactors.
Atmospheric neutrinos result from the interaction of cosmic rays with atomic nuclei in the Earth's atmosphere, creating showers of particles, many of which are unstable and produce neutrinos when they decay. A collaboration of particle physicists from Tata Institute of Fundamental Research (India), Osaka City University (Japan) and Durham University (UK) recorded the first cosmic ray neutrino interaction in an underground laboratory in Kolar Gold Fields in India in 1965.
Solar neutrinos originate from the nuclear fusion powering the Sun and other stars.
The details of the operation of the Sun are explained by the Standard Solar Model. In short: when four protons fuse to become one helium nucleus, two of them have to convert into neutrons, and each such conversion releases one electron neutrino.
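Written out, the net fusion reaction summarized above is the standard result (the roughly 26.7 MeV energy release is the textbook figure, not stated explicitly in this article):

```latex
4\,{}^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_e
  + \text{energy} \quad (\approx 26.7\ \mathrm{MeV})
```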
The Sun sends enormous numbers of neutrinos in all directions. Each second, about 65 billion (6.5 × 10^10) solar neutrinos pass through every square centimeter on the part of the Earth orthogonal to the direction of the Sun. Since neutrinos are insignificantly absorbed by the mass of the Earth, the surface area on the side of the Earth opposite the Sun receives about the same number of neutrinos as the side facing the Sun.
In 1966, Colgate and White calculated that neutrinos carry away most of the gravitational energy released by the collapse of massive stars, events now categorized as Type Ib and Ic and Type II supernovae. When such stars collapse, matter densities at the core become so high (about 10^17 kg/m³) that the degeneracy of electrons is not enough to prevent protons and electrons from combining to form a neutron and an electron neutrino. A second and more profuse neutrino source is the thermal energy (100 billion kelvins) of the newly formed neutron core, which is dissipated via the formation of neutrino–antineutrino pairs of all flavors.
Colgate and White's theory of supernova neutrino production was confirmed in 1987, when neutrinos from Supernova 1987A were detected. The water-based detectors Kamiokande II and IMB detected 11 and 8 antineutrinos (lepton number = −1) of thermal origin, respectively, while the scintillator-based Baksan detector found 5 neutrinos (lepton number = +1) of either thermal or electron-capture origin, in a burst less than 13 seconds long. The neutrino signal from the supernova arrived at Earth several hours before the arrival of the first electromagnetic radiation, as expected from the evident fact that the latter emerges along with the shock wave. The exceptionally feeble interaction with normal matter allowed the neutrinos to pass through the churning mass of the exploding star, while the electromagnetic photons were slowed.
Because neutrinos interact so little with matter, it is thought that a supernova's neutrino emissions carry information about the innermost regions of the explosion. Much of the "visible" light comes from the decay of radioactive elements produced by the supernova shock wave, and even light from the explosion itself is scattered by dense and turbulent gases, and thus delayed. The neutrino burst is expected to reach Earth before any electromagnetic waves, including visible light, gamma rays, or radio waves. The exact time delay of the electromagnetic waves' arrivals depends on the velocity of the shock wave and on the thickness of the outer layer of the star. For a Type II supernova, astronomers expect the neutrino flood to be released seconds after the stellar core collapse, while the first electromagnetic signal may emerge hours later, after the explosion shock wave has had time to reach the surface of the star. The Supernova Early Warning System project uses a network of neutrino detectors to monitor the sky for candidate supernova events; the neutrino signal will provide a useful advance warning of a star exploding in the Milky Way.
Although neutrinos pass through the outer gases of a supernova without scattering, they provide information about the deeper supernova core, where even neutrinos scatter to a significant extent. In a supernova core the densities are those of a neutron star (which is expected to be formed in this type of supernova), becoming large enough to influence the duration of the neutrino signal by delaying some neutrinos. The 13-second-long neutrino signal from SN 1987A lasted far longer than it would take for unimpeded neutrinos to cross through the neutrino-generating core of a supernova, expected to be only 3200 kilometers in diameter for SN 1987A.
The number of neutrinos counted was also consistent with a total neutrino energy of 2.2 × 10^46 joules, which was estimated to be nearly all of the total energy of the supernova.
For an average supernova, approximately 10^57 (an octodecillion) neutrinos are released, but the actual number detected at a terrestrial detector N will be far smaller, at the level of
N ≈ 10^4 (M / 25 kton) (10 kpc / d)²,
where M is the mass of the detector (with e.g. Super-Kamiokande having a mass of 50 kton) and d is the distance to the supernova. Hence in practice it will only be possible to detect neutrino bursts from supernovae within or nearby the Milky Way (our own galaxy). In addition to the detection of neutrinos from individual supernovae, it should also be possible to detect the diffuse supernova neutrino background, which originates from all supernovae in the Universe.
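Evaluating that scaling for a Super-Kamiokande-sized detector and a galactic-distance supernova gives a feel for the numbers (a minimal sketch; the helper name is ours):

```python
def expected_events(mass_kton: float, distance_kpc: float) -> float:
    """Evaluate the scaling relation above (helper name is ours)."""
    return 1e4 * (mass_kton / 25.0) * (10.0 / distance_kpc) ** 2

# Super-Kamiokande (~50 kton) for a supernova at 10 kpc:
print(f"{expected_events(50.0, 10.0):.0f}")  # ~20000 events
```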
The energy of supernova neutrinos ranges from a few to several tens of MeV. The sites where cosmic rays are accelerated are expected to produce neutrinos that are at least one million times more energetic, produced from turbulent gaseous environments left over by supernova explosions: the supernova remnants. The origin of the cosmic rays was attributed to supernovas by Walter Baade and Fritz Zwicky; this hypothesis was refined by Vitaly L. Ginzburg and Sergei I. Syrovatsky, who attributed the origin to supernova remnants and supported their claim by the crucial remark that the cosmic ray losses of the Milky Way are compensated if the efficiency of acceleration in supernova remnants is about 10 percent. Ginzburg and Syrovatskii's hypothesis is supported by the specific mechanism of "shock wave acceleration" happening in supernova remnants, which is consistent with the original theoretical picture drawn by Enrico Fermi, and is receiving support from observational data. The very-high-energy neutrinos are still to be seen, but this branch of neutrino astronomy is just in its infancy. The main existing or forthcoming experiments that aim at observing very-high-energy neutrinos from our galaxy are Baikal, AMANDA, IceCube, ANTARES, NEMO and Nestor. Related information is provided by very-high-energy gamma ray observatories, such as VERITAS, HESS and MAGIC. Indeed, the collisions of cosmic rays are supposed to produce charged pions, whose decays give neutrinos, and also neutral pions, whose decays give gamma rays: the environment of a supernova remnant is transparent to both types of radiation.
Still-higher-energy neutrinos, resulting from the interactions of extragalactic cosmic rays, could be observed with the Pierre Auger Observatory or with the dedicated experiment named ANITA.
It is thought that, just like the cosmic microwave background radiation left over from the Big Bang, there is a background of low-energy neutrinos in our Universe. In the 1980s it was proposed that these may be the explanation for the dark matter thought to exist in the universe. Neutrinos have one important advantage over most other dark matter candidates: They are known to exist. This idea also has serious problems.
From particle experiments, it is known that neutrinos are very light. This means that they easily move at speeds close to the speed of light. For this reason, dark matter made from neutrinos is termed "hot dark matter". The problem is that, being fast-moving, neutrinos would have tended to spread out evenly in the universe before cosmological expansion made them cold enough to congregate in clumps. This would cause the part of dark matter made of neutrinos to be smeared out and unable to cause the large galactic structures that we see.
These same galaxies and groups of galaxies appear to be surrounded by dark matter that is not fast enough to escape from them. Presumably this matter provided the gravitational nucleus for their formation. This implies that neutrinos cannot make up a significant part of the total amount of dark matter.
From cosmological arguments, relic background neutrinos are estimated to have a density of 56 of each type per cubic centimeter and a temperature of 1.95 K if they are massless, much colder if their mass exceeds roughly 10^−3 eV. Although their density is quite high, they have not yet been observed in the laboratory, as their energy is below the thresholds of most detection methods, and because neutrino interaction cross-sections at sub-eV energies are extremely small. In contrast, boron-8 solar neutrinos—which are emitted with a higher energy—have been detected definitively despite having a space density that is lower than that of relic neutrinos by some 6 orders of magnitude.
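The quoted temperature follows from the standard cosmological relation T_ν = (4/11)^(1/3) T_CMB, which reflects the heating of photons (but not neutrinos) by electron–positron annihilation after neutrino decoupling. A one-line check in Python, assuming T_CMB = 2.725 K:

    T_cmb = 2.725                        # present-day CMB temperature, kelvin
    T_nu = (4 / 11) ** (1 / 3) * T_cmb   # relic neutrino temperature if massless
    print(f"T_nu = {T_nu:.2f} K")        # -> T_nu = 1.95 K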
Neutrinos as such cannot be detected directly, because they do not ionize the materials they are passing through (they do not carry electric charge and other proposed effects, like the MSW effect, do not produce traceable radiation). A unique reaction to identify antineutrinos, sometimes referred to as inverse beta decay, as applied by Reines and Cowan (see below), requires a very large detector to detect a significant number of neutrinos. All detection methods require the neutrinos to carry a minimum threshold energy. So far, there is no detection method for low-energy neutrinos, in the sense that potential neutrino interactions (for example by the MSW effect) cannot be uniquely distinguished from other causes. Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation.
Antineutrinos were first detected in the 1950s near a nuclear reactor. Reines and Cowan used two targets containing a solution of cadmium chloride in water. Two scintillation detectors were placed next to the cadmium targets. Antineutrinos with an energy above the threshold of 1.8 MeV caused charged-current interactions with the protons in the water, producing positrons and neutrons. This is very much like β+ decay, where energy is used to convert a proton into a neutron while a positron (e+) and an electron neutrino (ν_e) are emitted:
From known β+ decay:

    energy + p → n + e+ + ν_e
In the Cowan and Reines experiment, instead of an outgoing neutrino, you have an incoming antineutrino (ν̄_e) from a nuclear reactor:

    ν̄_e + p → n + e+
The resulting positron annihilation with electrons in the detector material created photons with an energy of about 0.5 MeV each. Pairs of photons in coincidence could be detected by the two scintillation detectors above and below the target. The neutrons were captured by cadmium nuclei, resulting in gamma rays of about 8 MeV that were detected a few microseconds after the photons from a positron annihilation event.
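A minimal Python sketch of this delayed-coincidence signature — a prompt pair of ~0.5 MeV photons followed within microseconds by an ~8 MeV capture gamma. The event format, tolerances and time window here are illustrative assumptions, not the experiment's actual parameters:

    # Each hit is (time in microseconds, deposited energy in MeV).
    def find_candidates(hits, max_delay_us=10.0):
        """Flag antineutrino candidates: two ~0.5 MeV annihilation photons
        in prompt coincidence, then an ~8 MeV capture gamma within ~10 us."""
        prompts = [t for t, e in hits if abs(e - 0.5) < 0.2]
        captures = [t for t, e in hits if abs(e - 8.0) < 1.0]
        candidates = set()
        for t in prompts:
            pair = sum(abs(t2 - t) < 0.1 for t2 in prompts) >= 2
            delayed = any(0 < tc - t <= max_delay_us for tc in captures)
            if pair and delayed:
                candidates.add(t)
        return sorted(candidates)

    event = [(100.0, 0.51), (100.0, 0.51), (103.7, 8.1)]  # one synthetic event
    print(find_candidates(event))  # -> [100.0]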
Since then, various detection methods have been used. Super-Kamiokande is a large volume of water surrounded by photomultiplier tubes that watch for the Cherenkov radiation emitted when an incoming neutrino creates an electron or muon in the water. The Sudbury Neutrino Observatory was similar, but used heavy water as the detecting medium; it exploited the same effects, but also allowed the additional reaction of any-flavor neutrino photo-dissociation of deuterium, resulting in a free neutron which is then detected from gamma radiation after chlorine capture. Other detectors have consisted of large volumes of chlorine or gallium which are periodically checked for excesses of argon or germanium, respectively, which are created by electron neutrinos interacting with the original substance. MINOS used a solid plastic scintillator coupled to photomultiplier tubes, while Borexino uses a liquid pseudocumene scintillator also watched by photomultiplier tubes, and the NOνA detector uses liquid scintillator watched by avalanche photodiodes. The IceCube Neutrino Observatory uses a cubic kilometre of the Antarctic ice sheet near the South Pole with photomultiplier tubes distributed throughout the volume.
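Water-Cherenkov detectors of this kind only see charged particles moving faster than light's phase velocity in the medium, which sets a kinetic-energy threshold. A short illustrative calculation for electrons in water (n ≈ 1.33, electron rest energy 0.511 MeV):

    n = 1.33                    # refractive index of water
    m_e = 0.511                 # electron rest energy, MeV
    beta_min = 1 / n            # minimum speed (fraction of c) for Cherenkov light
    gamma_min = 1 / (1 - beta_min ** 2) ** 0.5
    threshold = (gamma_min - 1) * m_e
    print(f"{threshold:.2f} MeV")  # ~0.26 MeV kinetic energy for electrons in water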
The University of Liverpool ND280 detector employs the novel use of gadolinium-encased light detectors in a temperature-controlled magnetic field to capture double light-pulse events. The T2K experiment developed the technology, and practical experiments were successful in both Japan and at Wylfa power station.
Neutrinos' low mass and neutral charge mean they interact exceedingly weakly with other particles and fields. This feature of weak interaction interests scientists because it means neutrinos can be used to probe environments that other radiation (such as light or radio waves) cannot penetrate.
Using neutrinos as a probe was first proposed in the mid-20th century as a way to detect conditions at the core of the Sun. The solar core cannot be imaged directly because electromagnetic radiation (such as light) is diffused by the great amount and density of matter surrounding the core. On the other hand, neutrinos pass through the Sun with few interactions. Whereas photons emitted from the solar core may require 40,000 years to diffuse to the outer layers of the Sun, neutrinos generated in stellar fusion reactions at the core cross this distance practically unimpeded at nearly the speed of light.
Neutrinos are also useful for probing astrophysical sources beyond the Solar System because they are the only known particles that are not significantly attenuated by their travel through the interstellar medium. Optical photons can be obscured or diffused by dust, gas, and background radiation. High-energy cosmic rays, in the form of swift protons and atomic nuclei, are unable to travel more than about 100 megaparsecs due to the Greisen–Zatsepin–Kuzmin limit (GZK cutoff). Neutrinos, in contrast, can travel even greater distances barely attenuated.
The galactic core of the Milky Way is fully obscured by dense gas and numerous bright objects. Neutrinos produced in the galactic core might be measurable by Earth-based neutrino telescopes.
Another important use of the neutrino is in the observation of supernovae, the explosions that end the lives of highly massive stars. The core collapse phase of a supernova is an extremely dense and energetic event. It is so dense that no known particles are able to escape the advancing core front except for neutrinos. Consequently, supernovae are known to release approximately 99% of their radiant energy in a short (10 second) burst of neutrinos. These neutrinos are a very useful probe for core collapse studies.
The rest mass of the neutrino is an important test of cosmological and astrophysical theories (see "Dark matter"). The neutrino's significance in probing cosmological phenomena is as great as that of any other method, and it is thus a major focus of study in the astrophysical community.
The study of neutrinos is important in particle physics because neutrinos typically have the lowest mass, and hence are examples of the lowest-energy particles theorized in extensions of the Standard Model of particle physics.
In November 2012, American scientists used a particle accelerator to send a coherent neutrino message through 780 feet of rock. This marked the first use of neutrinos for communication, and future research may permit binary neutrino messages to be sent immense distances through even the densest materials, such as the Earth's core.
In July 2018, the IceCube Neutrino Observatory announced that it had traced an extremely-high-energy neutrino that hit its Antarctica-based research station in September 2017 back to its point of origin in the blazar TXS 0506+056, located 3.7 billion light-years away in the direction of the constellation Orion. This was the first time that a neutrino detector had been used to locate an object in space and the first time that a source of cosmic rays had been identified. | https://en.wikipedia.org/wiki?curid=21485 |
Nanotechnology
Nanotechnology (or "nanotech") is manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter which occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size.
Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, energy storage, microfabrication, molecular engineering, etc. The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.
Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in nanomedicine, nanoelectronics, biomaterials, energy production, and consumer products. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.
The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard Feynman in his talk "There's Plenty of Room at the Bottom", in which he described the possibility of synthesis via direct manipulation of atoms.
In 1960, Egyptian engineer Mohamed Atalla and Korean engineer Dawon Kahng at Bell Labs fabricated the first MOSFET (metal–oxide–semiconductor field-effect transistor) with a gate oxide thickness of 100 nm, along with a gate length of 20 µm. In 1962, Atalla and Kahng fabricated a nanolayer-base metal–semiconductor junction (M–S junction) transistor that used gold (Au) thin films with a thickness of 10 nm.
The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book "Engines of Creation: The Coming Era of Nanotechnology", which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications.
The emergence of nanotechnology as a field in the 1980s occurred through convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter. Since the popularity spike in the 1980s, most of nanotechnology has involved investigation of several approaches to making mechanical devices out of a small number of atoms.
In the 1980s, two major breakthroughs sparked the growth of nanotechnology in the modern era. First, the invention of the scanning tunneling microscope in 1981 provided unprecedented visualization of individual atoms and bonds, and it was successfully used to manipulate individual atoms in 1989. The microscope's developers Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory received a Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.
Second, fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry. C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related graphene tubes (called carbon nanotubes and sometimes called Bucky tubes) which suggested potential applications for nanoscale electronics and devices. The discovery of carbon nanotubes is largely attributed to Sumio Iijima of NEC in 1991, for which Iijima won the inaugural 2008 Kavli Prize in Nanoscience.
In 1987, Bijan Davari led an IBM research team that demonstrated the first MOSFET with a 10 nm gate oxide thickness, using tungsten-gate technology. Multi-gate MOSFETs enabled scaling below 20 nm gate length, starting with the FinFET (fin field-effect transistor), a three-dimensional, non-planar, double-gate MOSFET. The FinFET originates from the research of Digh Hisamoto at Hitachi Central Research Laboratory in 1989. At UC Berkeley, FinFET devices were fabricated by a group consisting of Hisamoto along with TSMC's Chenming Hu and other international researchers including Tsu-Jae King Liu, Jeffrey Bokor, Hideki Takeuchi, K. Asano, Jakub Kedzierski, Xuejue Huang, Leland Chang, Nick Lindert, Shibly Ahmed and Cyrus Tabery. The team fabricated FinFET devices down to a 17 nm process in 1998, and then 15 nm in 2001. In 2002, a team including Yu, Chang, Ahmed, Hu, Liu, Bokor and Tabery fabricated a 10 nm FinFET device.
In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology. Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.
Meanwhile, commercialization of products based on advancements in nanoscale technologies began emerging. These products are limited to bulk applications of nanomaterials and do not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based transparent sunscreens, carbon fiber strengthening using silica nanoparticles, and carbon nanotubes for stain-resistant textiles.
Governments moved to promote and fund research into nanotechnology, such as in the U.S. with the National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established funding for research on the nanoscale, and in Europe via the European Framework Programmes for Research and Technological Development.
By the mid-2000s new and serious scientific attention began to flourish. Projects emerged to produce nanotechnology roadmaps which center on atomically precise manipulation of matter and discuss existing and projected capabilities, goals, and applications.
In 2006, a team of Korean researchers from the Korea Advanced Institute of Science and Technology (KAIST) and the National Nano Fab Center developed a 3 nm MOSFET, the world's smallest nanoelectronic device. It was based on gate-all-around (GAA) FinFET technology.
Over sixty countries created nanotechnology research and development (R&D) government programs between 2001 and 2004. Government funding was exceeded by corporate spending on nanotechnology R&D, with most of the funding coming from corporations based in the United States, Japan and Germany. The five organizations that filed the most patents on nanotechnology R&D between 1970 and 2011 were Samsung Electronics (2,578 first patents), Nippon Steel (1,490 first patents), IBM (1,360 first patents), Toshiba (1,298 first patents) and Canon (1,162 first patents). The five organizations that published the most scientific papers on nanotechnology research between 1970 and 2012 were the Chinese Academy of Sciences, the Russian Academy of Sciences, the Centre national de la recherche scientifique, the University of Tokyo and Osaka University.
Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high performance products.
One nanometer (nm) is one billionth, or 10^−9, of a meter. By comparison, typical carbon–carbon bond lengths, or the spacing between these atoms in a molecule, are in the range of 0.12–0.15 nm, and a DNA double helix has a diameter around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus "Mycoplasma", are around 200 nm in length. By convention, nanotechnology is taken as the scale range of 1 to 100 nm, following the definition used by the National Nanotechnology Initiative in the US. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, which are approximately a quarter of a nm in kinetic diameter), since nanotechnology must build its devices from atoms and molecules. The upper limit is more or less arbitrary, but is around the size below which phenomena not observed in larger structures start to become apparent and can be made use of in the nano device. These new phenomena make nanotechnology distinct from devices which are merely miniaturised versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology.
To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the earth. Or another way of putting it: a nanometer is the amount an average man's beard grows in the time it takes him to raise the razor to his face.
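Both comparisons are easy to sanity-check numerically (the sizes used here are rough, for illustration only):

    marble = 0.01            # marble diameter, ~1 cm, in meters
    earth = 1.27e7           # Earth diameter in meters
    print(marble / earth)    # ~7.9e-10, close to the 1e-9 nanometer-to-meter ratio

    growth_per_s = 0.4 / 86400 / 1000   # beard growth, ~0.4 mm/day, in m/s
    print(growth_per_s)                 # ~5e-9 m/s: a few nanometers per second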
Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control.
Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved during the last few decades to provide a basic scientific foundation of nanotechnology.
Several phenomena become pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the "quantum size effect" where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play by going from macro to micro dimensions. However, quantum effects can become significant when the nanometer size range is reached, typically at distances of 100 nanometers or less, the so-called quantum realm. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio altering mechanical, thermal and catalytic properties of materials. Diffusion and reactions at the nanoscale, nanostructured materials, and nanodevices with fast ion transport are generally referred to as nanoionics. The mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
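The surface-area-to-volume point is simple to quantify for an idealized spherical particle, where the ratio equals 3/r and so grows as the particle shrinks. A minimal illustration:

    import math

    def surface_to_volume(radius_nm):
        """Surface-area-to-volume ratio of a sphere, in nm^-1 (equal to 3/r)."""
        area = 4 * math.pi * radius_nm ** 2
        volume = (4 / 3) * math.pi * radius_nm ** 3
        return area / volume

    for r in (1000, 100, 10, 1):  # from a micron-scale particle down to 1 nm
        print(f"r = {r:>4} nm -> SA/V = {surface_to_volume(r):.3f} /nm")
    # the ratio increases 1000-fold as the radius falls from 1000 nm to 1 nm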
Materials reduced to the nanoscale can show different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances can become transparent (copper); stable materials can turn combustible (aluminium); insoluble materials may become soluble (gold). A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these quantum and surface phenomena that matter exhibits at the nanoscale.
Modern synthetic chemistry has reached the point where it is possible to prepare small molecules of almost any structure. These methods are used today to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner.
These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry, in which components automatically arrange themselves into some useful conformation through a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick base-pairing rules are a direct result of this, as is the specificity of an enzyme targeting a single substrate, or the specific folding of a protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.
Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson–Crick basepairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer new constructs in addition to natural ones.
Molecular nanotechnology, sometimes called molecular manufacturing, describes engineered nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is especially associated with the molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.
When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the time was unaware of the earlier usage by Norio Taniguchi), it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated that molecular machines were possible: the countless examples found in biology show that sophisticated, stochastically optimised biological machines can be produced.
It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification. The physics and engineering performance of exemplar designs were analyzed in Drexler's book "Nanosystems".
In general it is very difficult to assemble devices on the atomic scale, as one has to position atoms on other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno, is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.
This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop with changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator. See nanotube nanomotor for more examples.
An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.
The nanomaterials field includes subfields which develop or study materials having unique properties arising from their nanoscale dimensions.
Bottom-up approaches seek to arrange smaller components into more complex assemblies.
Top-down approaches seek to create smaller devices by using larger ones to direct their assembly.
Functional approaches seek to develop components of a desired functionality without regard to how they might be assembled.
Speculative subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry might progress. These often take a big-picture view of nanotechnology, with more emphasis on its societal implications than on the details of how such inventions could actually be created.
Nanomaterials can be classified as 0D, 1D, 2D and 3D nanomaterials. Dimensionality plays a major role in determining the characteristics of nanomaterials, including their physical, chemical and biological properties. As dimensionality decreases, the surface-to-volume ratio increases, meaning that lower-dimensional nanomaterials have a higher surface area than 3D nanomaterials. Recently, two-dimensional (2D) nanomaterials have been extensively investigated for electronic, biomedical, drug-delivery and biosensor applications.
There are several important modern developments. The atomic force microscope (AFM) and the scanning tunneling microscope (STM) are two early versions of scanning probes that launched nanotechnology. There are other types of scanning probe microscopy. Although conceptually similar to the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, newer scanning probe microscopes have much higher resolution, since they are not limited by the wavelength of sound or light.
The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). Feature-oriented scanning methodology may be a promising way to implement these nanomanipulations in automatic mode. However, this is still a slow process because of the low scanning velocity of the microscope.
Various techniques of nanolithography such as optical lithography, X-ray lithography, dip pen nanolithography, electron beam lithography and nanoimprint lithography were also developed. Lithography is a top-down fabrication technique in which a bulk material is reduced in size to a nanoscale pattern.
Another group of nanotechnological techniques includes those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further includes molecular self-assembly techniques such as those employing di-block copolymers. The precursors of these techniques preceded the nanotech era and are extensions of earlier scientific advances, rather than techniques devised with the sole purpose of creating nanotechnology or resulting from nanotechnology research.
The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, feature-oriented scanning approach, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques. At present, it is expensive and time-consuming for mass production but very suitable for laboratory experimentation.
In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Dual polarisation interferometry is one tool suitable for characterisation of self-assembled thin films. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.
New therapeutic products based on responsive nanomaterials, such as the ultradeformable, stress-sensitive Transfersome vesicles, are under development and already approved for human use in some countries.
Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research. Prior to 2012, the USA invested $3.7 billion through its National Nanotechnology Initiative, the European Union invested $1.2 billion, and Japan invested $750 million. Over sixty countries created nanotechnology research and development (R&D) programs between 2001 and 2004. In 2012, the US and EU remained the largest government investors in nanotechnology research, followed by Japan. Government funding was exceeded by corporate R&D spending on nanotechnology research. The largest corporate R&D spenders were from the US, Japan and Germany.
As of August 21, 2008, the Project on Emerging Nanotechnologies estimated that over 800 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week. The project lists all of the products in a publicly accessible online database. Most applications are limited to the use of "first generation" passive nanomaterials, which include titanium dioxide in sunscreen, cosmetics, surface coatings, and some food products; carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide as a fuel catalyst.
Further applications allow tennis balls to last longer, golf balls to fly straighter, and even bowling balls to become more durable and have a harder surface. Trousers and socks have been infused with nanotechnology so that they will last longer and keep people cool in the summer. Bandages are being infused with silver nanoparticles to heal cuts faster. Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology. Nanotechnology may also be used to build structures for on-chip computing with light, for example for on-chip optical quantum information processing and picosecond transmission of information.
Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like the general practitioner's office and at home. Cars are being manufactured with nanomaterials so they may need fewer metals and less fuel to operate in the future.
Scientists are now turning to nanotechnology in an attempt to develop diesel engines with cleaner exhaust fumes. Platinum is currently used as the diesel engine catalyst in these engines. The catalyst is what cleans the exhaust fume particles. First a reduction catalyst is employed to take nitrogen atoms from NOx molecules in order to free oxygen. Next the oxidation catalyst oxidizes the hydrocarbons and carbon monoxide to form carbon dioxide and water. Platinum is used in both the reduction and the oxidation catalysts. Using platinum, though, is inefficient in that it is expensive and unsustainable. The Danish innovation fund Innovationsfonden invested DKK 15 million in a search for new catalyst substitutes using nanotechnology. The goal of the project, launched in the autumn of 2014, is to maximize surface area and minimize the amount of material required. Objects tend to minimize their surface energy; two drops of water, for example, will join to form one drop and decrease surface area. If the catalyst's surface area that is exposed to the exhaust fumes is maximized, the efficiency of the catalyst is maximized. The team working on this project aims to create nanoparticles that will not merge. Every time the surface is optimized, material is saved. Thus, creating these nanoparticles will increase the effectiveness of the resulting diesel engine catalyst—in turn leading to cleaner exhaust fumes—and will decrease cost. If successful, the team hopes to reduce platinum use by 25%.
Nanotechnology also has a prominent role in the fast-developing field of tissue engineering. When designing scaffolds, researchers attempt to mimic the nanoscale features of a cell's microenvironment to direct its differentiation down a suitable lineage. For example, when creating scaffolds to support the growth of bone, researchers may mimic osteoclast resorption pits.
Researchers have successfully used DNA origami-based nanobots capable of carrying out logic functions to achieve targeted drug delivery in cockroaches. It is said that the computational power of these nanobots can be scaled up to that of a Commodore 64.
Commercial nanoelectronic semiconductor device fabrication began in the 2010s. In 2013, SK Hynix began commercial mass-production of a 16 nm process, TSMC began production of a 16 nm FinFET process, and Samsung Electronics began production of a 10 nm process. TSMC began production of a 7 nm process in 2017, and Samsung began production of a 5 nm process in 2018. In 2019, Samsung announced plans for the commercial production of a 3 nm GAAFET process by 2021.
Commercial production of nanoelectronic semiconductor memory also began in the 2010s. In 2013, SK Hynix began mass-production of 16 nm NAND flash memory, and Samsung began production of 10 nm multi-level cell (MLC) NAND flash memory. In 2017, TSMC began production of SRAM memory using a 7 nm process.
An area of concern is the effect that industrial-scale manufacturing and use of nanomaterials would have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated by governments. Others counter that overregulation would stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health, are actively conducting research on potential health effects stemming from exposures to nanoparticles.
Some nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are being released in the wash. These particles are then flushed into the waste water stream and may destroy bacteria which are critical components of natural ecosystems, farms, and waste treatment processes.
Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability.
Experts, including director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies David Rejeski, have testified that successful commercialization depends on adequate oversight, risk research strategy, and public engagement. Berkeley, California is currently the only city in the United States to regulate nanotechnology; Cambridge, Massachusetts in 2008 considered enacting a similar law, but ultimately rejected it. Over the next several decades, applications of nanotechnology will likely include much higher-capacity computers, active materials of various kinds, and cellular-scale biomedical devices.
Nanofibers are used in several areas and in different products, in everything from aircraft wings to tennis rackets. Inhaling airborne nanoparticles and nanofibers may lead to a number of pulmonary diseases, e.g. fibrosis. Researchers have found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, leading to significant increases in biomarkers for inflammation and stress response. Nanoparticles have also been found to induce skin aging through oxidative stress in hairless mice.
A two-year study at UCLA's School of Public Health found lab mice consuming nano-titanium dioxide showed DNA and chromosome damage to a degree "linked to all the big killers of man, namely cancer, heart disease, neurological disease and aging".
A major study published more recently in Nature Nanotechnology suggests some forms of carbon nanotubes – a poster child for the "nanotechnology revolution" – could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes, said "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully." In the absence of specific regulation forthcoming from governments, Paull and Lyons (2008) have called for an exclusion of engineered nanoparticles from food. A newspaper article reports that workers in a paint factory developed serious lung disease and that nanoparticles were found in their lungs.
Calls for tighter regulation of nanotechnology have occurred alongside a growing debate related to the human health and safety risks of nanotechnology. There is significant debate about who is responsible for the regulation of nanotechnology. Although some regulatory agencies currently cover some nanotechnology products and processes (to varying degrees) by "bolting on" nanotechnology to existing regulations, there are clear gaps in these regimes. Davies (2008) has proposed a regulatory road map describing steps to deal with these shortcomings.
Stakeholders concerned by the lack of a regulatory framework to assess and control risks associated with the release of nanoparticles and nanotubes have drawn parallels with bovine spongiform encephalopathy ("mad cow" disease), thalidomide, genetically modified food, nuclear energy, reproductive technologies, biotechnology, and asbestosis. Dr. Andrew Maynard, chief science advisor to the Woodrow Wilson Center's Project on Emerging Nanotechnologies, concludes that there is insufficient funding for human health and safety research, and as a result there is currently limited understanding of the human health and safety risks associated with nanotechnology. As a result, some academics have called for stricter application of the precautionary principle, with delayed marketing approval, enhanced labelling and additional safety data development requirements in relation to certain forms of nanotechnology.
The Royal Society report identified a risk of nanoparticles or nanotubes being released during disposal, destruction and recycling, and recommended that "manufacturers of products that fall under extended producer responsibility regimes such as end-of-life regulations publish procedures outlining how these materials will be managed to minimize possible human and environmental exposure" (p. xiii).
The Center for Nanotechnology in Society has found that people respond to nanotechnologies differently, depending on application – with participants in public deliberations more positive about nanotechnologies for energy than health applications – suggesting that any public calls for nano regulations may differ by technology sector. | https://en.wikipedia.org/wiki?curid=21488 |
NetHack
NetHack is an open source single-player roguelike video game, first released in 1987 and maintained by the NetHack DevTeam. The game is a software fork of the 1982 game "Hack", itself inspired by the 1980 game "Rogue". The player takes the role of one of several pre-defined character classes and descends through multiple dungeon floors, fighting monsters and collecting treasure, to recover the "Amulet of Yendor" on the lowest floor and then escape. As a traditional roguelike, "NetHack" features procedurally generated dungeons and treasure, hack and slash combat, tile-based gameplay (using ASCII graphics by default but with optional graphical tilesets), and permadeath, forcing the player to restart anew should their character die. While "Rogue", "Hack" and other earlier roguelikes stayed true to a high fantasy setting, "NetHack" introduced humorous and anachronistic elements over time, including popular culture references to works such as "Discworld" and "Raiders of the Lost Ark".
Comparing it with "Rogue", "Engadget"'s Justin Olivetti wrote that it took its exploration aspect and "made it far richer with an encyclopedia of objects, a larger vocabulary, a wealth of pop culture mentions, and a puzzler's attitude." In 2000, "Salon" described it as "one of the finest gaming experiences the computing world has to offer".
Before starting a game, players choose their character's race, role, sex, and alignment, or allow the game to assign the attributes randomly. There are traditional fantasy roles such as knight, wizard, rogue, and priest; but there are also unusual roles, including archaeologist, tourist, and caveman. The player character's role and alignment dictate which deity the character serves in the game, "how other monsters react toward you", as well as character skills and attributes.
After the player character is created, the main objective is introduced. To win the game, the player must retrieve the Amulet of Yendor, found at the lowest level of the dungeon, and offer it to their deity. Successful completion of this task rewards the player with the gift of immortality, and the player is said to "ascend", attaining the status of demigod. Along the path to the amulet, a number of sub-quests must be completed, including one class-specific quest.
The player's character is, unless they opt otherwise, accompanied by a pet animal, typically a kitten or little dog, although knights begin with a saddled pony. Pets grow from fighting, and they can be changed by various means. Most other monsters may also be tamed using magic or food.
"NetHack"'s dungeon spans about fifty primary levels, most of which are procedurally generated when the player character enters them for the first time. A typical level contains a way "up" and "down" to other levels. These may be stairways, ladders, trapdoors, etc. Levels also contain several "rooms" joined by corridors. These rooms are randomly generated rectangles (as opposed to the linear corridors) and may contain features such as altars, shops, fountains, traps, thrones, pools of water, and sinks based on the randomly generated features of the room. Some specific levels follow one of many fixed designs or contain fixed elements. Later versions of the game added special branches of dungeon levels. These are optional routes that may feature more challenging monsters but can reward more desirable treasure to complete the main dungeon. Levels, once generated, remained persistent, in contrast to games that followed "Moria"-style of level generation.
"NetHack" features a variety of items: weapons (melee or ranged), armor to protect the player, scrolls and spellbooks to read, potions to quaff, wands, rings, amulets, and an assortment of tools, such as keys and lamps.
"NetHack"'s identification of items is almost identical to "Rogue"'s. For example, a newly discovered potion may be referred to as a "pink potion" with no other clues as to its identity. Players can perform a variety of actions and tricks to deduce, or at least narrow down, the identity of the potion. The most obvious is the somewhat risky tactic of simply drinking it. All items of a certain type will have the same description. For instance, all "scrolls of enchant weapon" may be labeled "TEMOV", and once one has been identified, all "scrolls of enchant weapon" found later will be labeled unambiguously as such. Starting a new game will scramble the items descriptions again, so the "silver ring" that is a "ring of levitation" in one game might be a "ring of hunger" in another.
As in many other roguelike games, all items in "NetHack" are either "blessed", "uncursed", or "cursed". The majority of items are found uncursed, but the blessed or cursed status of an item is unknown until it is identified or detected through other means.
Generally, a blessed item will be more powerful than an uncursed item, and a cursed item will be less powerful, with the added disadvantage that once it has been equipped by the player, it cannot be easily unequipped. Where an object would bestow an effect upon the character, a curse will generally make the effect harmful, or increase the amount of harm done. However, there are very specific exceptions. For example, drinking a cursed "potion of gain level" will make the character literally rise through the ceiling to the level above, instead of gaining an experience level.
As in other roguelike games, "NetHack" features permadeath: expired characters cannot be revived.
Although "NetHack" can be completed by new or intermediate players without any artificial limitations, experienced players can attempt "conducts" for an additional challenge. These are voluntary restrictions on actions taken, such as using no wishes, following a vegetarian or vegan diet, or even killing no monsters. While conducts are generally tracked by the game and are displayed at death or ascension, unofficial conducts are practiced within the community.
When a player dies, the cause of death and final score are recorded and added to a list in which the player's character is ranked against previous characters. The prompt "Do you want your possessions identified?" is given by default at the end of any game, allowing the player to learn any unknown properties of the items in their inventory at death. The player's attributes (such as resistances, luck, and others), conduct (usually self-imposed challenges, such as playing as an atheist or a vegetarian), and a tally of creatures killed may also be displayed.
The game sporadically saves a level on which a character has died and then integrates that level into a later game. This is done via "bones files", which are saved on the computer hosting the game. A player using a publicly hosted copy of the game can thus encounter the remains and possessions of many other players, although many of these possessions may have become cursed.
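The mechanism can be sketched as: on death, serialize the level to a per-depth bones file; when a later game generates that depth, occasionally load the file instead of generating the level fresh. A minimal sketch (the file layout, directory, and probability here are illustrative assumptions, not NetHack's actual format):

    import os, pickle, random

    BONES_DIR = "bones"   # hypothetical directory on the hosting machine

    def save_bones(depth, level_state):
        """On character death, sometimes leave the level behind for later games."""
        os.makedirs(BONES_DIR, exist_ok=True)
        with open(os.path.join(BONES_DIR, f"bon{depth}.pkl"), "wb") as f:
            pickle.dump(level_state, f)

    def maybe_load_bones(depth, chance=0.25):
        """When generating a level, occasionally reuse a dead player's level."""
        path = os.path.join(BONES_DIR, f"bon{depth}.pkl")
        if os.path.exists(path) and random.random() < chance:
            with open(path, "rb") as f:
                state = pickle.load(f)
            os.remove(path)   # a bones level is consumed once encountered
            return state
        return None           # fall back to normal procedural generation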
Because of the numerous ways that a player-character can die, through a combination of their own actions and the reactions of the game's interacting systems, players frequently refer to untimely deaths as "Yet Another Stupid Death" (YASD). Such deaths are considered part of learning to play "NetHack", teaching the player to avoid the conditions under which the same death could happen again.
"NetHack" does allow players to save the game so that one does not have to complete the game in one session, but on opening a new game, the previous save file is subsequently wiped as to enforce the permadeath option. One option some players use is to make a backup copy of the save game file before playing a game, and should their character die, restoring from the copied version, a practice known as "save scumming". Additionally, players can also manipulate the "bones files" in a manner not intended by the developers. While these help the player to learn the game and get around limits of permadeath, both are considered forms of cheating the game.
"NetHack" is largely based on discovering secrets and tricks during gameplay. It can take years for one to become well-versed in them, and even experienced players routinely discover new ones. A number of "NetHack" fan sites and discussion forums offer lists of game secrets known as "spoilers".
"NetHack" was originally created with only a simple ASCII text-based user interface, although the option to use something more elaborate was added later in its development. Interface elements such as the environment, entities, and objects are represented by arrangements of ASCII or Extended ASCII glyphs, "DEC graphics", or "IBM graphics" mode. In addition to the environment, the interface also displays character and situational information.
A detailed example:
You see here a silver ring.
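An illustrative reconstruction of such a screen follows (the exact layout varies from game to game; here '@' is the player, 'f' a pet kitten, '<' the stairs, and the status lines use representative values):

    You see here a silver ring.

        ------------
        |..........|
        |.<...@....|
        |.....f....|
        |..........|
        ------------

    Agent the Evoker        St:11 Dx:13 Co:12 In:16 Wi:10 Ch:11  Neutral
    Dlvl:1  $:0  HP:14(14)  Pw:7(7)  AC:9  Exp:1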
The player (the '@' sign, a wizard in this case) has entered the level via the stairs (the '<' sign, which lead back up to the previous level).
Apart from the original termcap interface shown above, there are other interfaces that replace standard screen representations with two-dimensional images, or tiles, collectively known as "tiles mode". Graphic interfaces of this kind have been successfully implemented on the Amiga, the X Window System, the Microsoft Windows GUI, the Qt toolkit, and the GNOME libraries.
Enhanced graphical options also exist, such as the isometric perspective of "Falcon's Eye" and "Vulture's Eye", or the three-dimensional rendering that noegnud offers. "Vulture's Eye" is a fork of the now defunct Falcon's Eye project. "Vulture's Eye" adds additional graphics, sounds, bug fixes and performance enhancements and is under active development in an open collaborative environment.
"NetHack" is a software derivative of "Hack", which itself was inspired by "Rogue". "Hack" was created by students Jay Fenlason, Kenny Woodland, Mike Thome, and Jonathan Payne at Lincoln-Sudbury Regional High School as part of a computer class, after seeing and playing "Rogue" at the University of California Berkeley computer labs. The group had tried to get the source code of "Rogue" from Glenn Wichman and Michael Toy to build upon, but Wichman and Toy had refused, forcing the students to build the dungeon-creation routines on their own. As such, the game was named "Hack" in part for the hack-and-slash gameplay and that the code to generate the dungeons was considered a programming hack. After their classes ended, the students' work on the program also ended, though they had a working game. Fenlason provided the source code to a local USENIX conference, and eventually it was uploaded to USENET newsgroups. The code drew the attention of many players who started working to modify and improve the game as well as port it to other computer systems. "Hack" did not have any formal maintainer and while one person was generally recognized to hold the main code to the current version of "Hack", many software forks emerged from the unorganized development of the game.
Eventually, Mike Stephenson took on the role as maintainer of the "Hack" source code. At this point, he decided to create a new fork of the game, bringing in novel ideas from Izchak Miller, a philosophy professor at University of Pennsylvania, and Janet Walz, another computer hacker. They called themselves the DevTeam and renamed their branch "NetHack" since their collaboration work was done over the Internet. They expanded the bestiary and other objects in the game, and drew from other sources outside of the high fantasy setting, such as from "Discworld" with the introduction of the tourist character class. Knowing of the multiple forks of "Hack" that existed, the DevTeam established a principle that while the game was open source and anyone could create a fork as a new project, only a few select members in the DevTeam could make modifications to the main source repository of the game, so that players could be assured that the DevTeam's release was the legitimate version of "NetHack".
The DevTeam's first release of "NetHack" was on 28 July 1987.
The core DevTeam had expanded with the release of "NetHack" 3.0 in July 1989. By that point, they had established a tight-lipped culture, revealing little, if anything, between releases. Owing to the ever-increasing depth and complexity found in each release, the development team enjoys a near-mythical status among fans. This perceived omniscience is captured in the initialism TDTTOE, "The DevTeam Thinks of Everything", in that many of the possible emergent gameplay elements that could occur due to the behavior of the complex game systems had already been programmed in by the DevTeam. Since version 3.0, the DevTeam has typically kept to minor bug fix updates, represented by a change in the third version number (e.g. v3.0.1 over v3.0.0), and only releases major updates (v3.1.0 over v3.0.0) when significant new features are added to the game, including support for new platforms. Many of those from the community that helped with the ports to other systems were subsequently invited to be part of the DevTeam as the team's needs grew, with Stephenson remaining the key member currently.
Updates to the game were generally regular from around 1987 through 2003, with the DevTeam releasing v3.4.3 in December 2003. Subsequent updates from the DevTeam included new tilesets and compatibility with variants of Mac OS, but no major updates to the game had been made. In the absence of new releases from the developers, several community-made updates to the code and variants developed by fans emerged.
On 7 December 2015, version 3.6.0 was released, the first major release in over a decade. While the patch did not add major new gameplay features, the update was designed to prepare the game for expansion in the future, with the DevTeam's patch notes stating "This release consists of a series of foundational changes in the team, underlying infrastructure and changes to the approach to game development". Stephenson said that despite the number of roguelike titles that had emerged since the v3.4.3 release, they saw that "NetHack" was still being talked about online, in part due to its high degree of portability, and decided to continue its development. According to DevTeam member Paul Winner, they looked to evaluate what community features had been introduced in the prior decade to improve the game while maintaining the necessary balance. The update came shortly after the death of Terry Pratchett, whose "Discworld" had been influential on the game, and the new update included a tribute to him. With the v3.6.0 release, "NetHack" remains "one of the oldest games still being developed".
A public read-only mirror of the "NetHack" git repository was made available on 10 February 2016. Since v3.6.0, the DevTeam has continued to push updates to the title, the latest being v3.6.6 on 8 March 2020. Version 3.7.0 is currently in development.
The official source release supports the following systems: Windows, Linux, macOS, Windows CE, OS/2, Unix (BSD, System V, Solaris, HP-UX), BeOS, and VMS.
"NetHack" is released under the NetHack General Public License, which was written in 1989 by Mike Stephenson, patterned after the GNU bison license (which was written by Richard Stallman in 1988). Like the Bison license, and Stallman's later GNU General Public License, the "NetHack" license was written to allow the free sharing and modification of the source code under its protection. At the same time, the license explicitly states that the source code is not covered by any warranty, thus protecting the original authors from litigation. The NetHack General Public License is a copyleft software license certified as an open source license by the Open Source Initiative.
The NetHack General Public License allows anyone to port the game to a platform not supported by the official DevTeam, provided that they use the same license. Over the years this licensing has led to a large number of ports and internationalized versions in German, Japanese, and Spanish. The license also allows for software forks as long as they are distributed under the same license, except that the creator of a derivative work is allowed to offer warranty protection on the new work. The derivative work is required to indicate the modifications made and the dates of changes. In addition, the source code of the derivative work must be made available, free of charge except for nominal distribution fees. This has also allowed source code forks of "NetHack" including "Slash'EM" and "UnNetHack"
Bugs, humorous messages, stories, experiences, and ideas for the next version are discussed on the Usenet newsgroup rec.games.roguelike.nethack.
A public server at nethack.alt.org, commonly known as "NAO", gives players access to NetHack through a Telnet or SSH interface. A browser-based client is also available on the same site. Ebonhack connects to NAO with a graphical tiles-based interface.
The annual /dev/null NetHack Tournament ran for the whole month of November every year from 1999 to 2016. The Junethack Cross-Variant Summer Tournament has taken place annually since 2011.
The Facebook artificial intelligence (AI) research team, along with researchers at the University of Oxford, New York University, Imperial College London, and University College London, developed an open-source platform called the NetHack Learning Environment, designed to teach AI agents to play "NetHack". The base environment lets an agent maneuver and fight its way through dungeons, but the team seeks community help to build AI that can handle the complexities of the game's interconnected systems, using the implicit knowledge captured in player-made resources; to that end, the platform gives programmers a means to hook additional resources into the environment.
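A minimal sketch of how an agent loop might look against the NetHack Learning Environment's Gym-style Python interface; the environment id, call signatures, and random action choice below follow the project's published examples, but should be treated as assumptions that may vary between versions:

    import gym
    import nle  # importing nle registers the NetHack environments with Gym

    # "NetHackScore-v0" is one of the task variants named in the project's examples.
    env = gym.make("NetHackScore-v0")
    obs = env.reset()  # observation is a dict of arrays (dungeon glyphs, stats, ...)
    done = False
    total_reward = 0.0
    while not done:
        # A real agent would choose actions from the observation; this picks randomly.
        obs, reward, done, info = env.step(env.action_space.sample())
        total_reward += reward
    print("episode score:", total_reward)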
Nylon
Nylon is a generic designation for a family of synthetic polymers, based on aliphatic or semi-aromatic polyamides.
Nylon is a silky thermoplastic material that can be melt-processed into fibers, films, or shapes. It is made of repeating units linked by amide links similar to the peptide bonds in proteins.
Nylon polymers can be mixed with a wide variety of additives to achieve many different property variations.
Nylon polymers have found significant commercial applications in fabric and fibers (apparel, flooring and rubber reinforcement), in shapes (molded parts for cars, electrical equipment, etc.), and in films (mostly for food packaging).
Nylon was the first commercially successful synthetic thermoplastic polymer. DuPont began its research project in 1927.
The first example of nylon (nylon 6,6) was produced using diamines on February 28, 1935, by Wallace Hume Carothers at DuPont's research facility at the DuPont Experimental Station. In response to Carothers' work, Paul Schlack at IG Farben developed nylon 6, a different molecule based on caprolactam, on January 29, 1938.
Nylon was first used commercially in a nylon-bristled toothbrush in 1938, followed more famously in women's stockings or "nylons" which were shown at the 1939 New York World's Fair and first sold commercially in 1940. During World War II, almost all nylon production was diverted to the military for use in parachutes and parachute cord. Wartime uses of nylon and other plastics greatly increased the market for the new materials.
DuPont, founded by Éleuthère Irénée du Pont, first produced gunpowder and later cellulose-based paints. Following WWI, DuPont produced synthetic ammonia and other chemicals. DuPont began experimenting with the development of cellulose based fibers, eventually producing the synthetic fiber rayon. DuPont's experience with rayon was an important precursor to its development and marketing of nylon.
DuPont's invention of nylon spanned an eleven-year period, ranging from the initial research program in polymers in 1927 to its announcement in 1938, shortly before the opening of the 1939 New York World's Fair. The project grew from a new organizational structure at DuPont, suggested by Charles Stine in 1927, in which the chemical department would be composed of several small research teams that would focus on "pioneering research" in chemistry and would "lead to practical applications". Harvard instructor Wallace Hume Carothers was hired to direct the polymer research group. Initially he was allowed to focus on pure research, building on and testing the theories of German chemist Hermann Staudinger. He was very successful, as research he undertook greatly improved the knowledge of polymers and contributed to science.
In the spring of 1930, Carothers and his team had already synthesized two new polymers. One was neoprene, a synthetic rubber greatly used during World War II. The other was a white elastic but strong paste that would later become nylon. After these discoveries, Carothers' team was made to shift its research from a more pure research approach investigating general polymerization to a more practically-focused goal of finding "one chemical combination that would lend itself to industrial applications".
It wasn't until the beginning of 1935 that a polymer called "polymer 6-6" was finally produced. Carothers' coworker, Washington University alumnus Julian W. Hill had used a cold drawing method to produce a polyester in 1930. This cold drawing method was later used by Carothers in 1935 to fully develop nylon. The first example of nylon (nylon 6,6) was produced on February 28, 1935, at DuPont's research facility at the DuPont Experimental Station. It had all the desired properties of elasticity and strength.
However, it also required a complex manufacturing process that would become the basis of industrial production in the future. DuPont obtained a patent for the polymer in September 1938, and quickly achieved a monopoly of the fiber. Carothers died 16 months before the announcement of nylon, and so never saw the commercial success of his invention.
The production of nylon required interdepartmental collaboration between three departments at DuPont: the Department of Chemical Research, the Ammonia Department, and the Department of Rayon. Some of the key ingredients of nylon had to be produced using high pressure chemistry, the main area of expertise of the Ammonia Department. Nylon was considered a “godsend to the Ammonia Department”, which had been in financial difficulties. The reactants of nylon soon constituted half of the Ammonia department's sales and helped them come out of the period of the Great Depression by creating jobs and revenue at DuPont.
DuPont's nylon project demonstrated the importance of chemical engineering in industry, helped create jobs, and furthered the advancement of chemical engineering techniques. In fact, it developed a chemical plant that provided 1800 jobs and used the latest technologies of the time, which are still used as a model for chemical plants today. The ability to acquire a large number of chemists and engineers quickly was a huge contribution to the success of DuPont's nylon project. The first nylon plant was located at Seaford, Delaware, beginning commercial production on December 15, 1939. On October 26, 1995, the Seaford plant was designated a National Historic Chemical Landmark by the American Chemical Society.
An important part of nylon's popularity stems from DuPont's marketing strategy. DuPont promoted the fiber to increase demand before the product was available to the general market. Nylon's commercial announcement occurred on October 27, 1938, at the final session of the "Herald Tribune"'s yearly "Forum on Current Problems", on the site of the approaching New York City world's fair. The "first man-made organic textile fiber", derived from "coal, water and air" and promising to be "as strong as steel, as fine as the spider's web", was received enthusiastically by the audience, many of them middle-class women, and made the headlines of most newspapers. Nylon was introduced as part of "The world of tomorrow" at the 1939 New York World's Fair and was featured at DuPont's "Wonder World of Chemistry" at the Golden Gate International Exposition in San Francisco in 1939. Actual nylon stockings were not shipped to selected stores in the national market until May 15, 1940. However, a limited number were released for sale in Delaware before that: the first public sale of nylon stockings occurred on October 24, 1939, in Wilmington, Delaware, where 4,000 pairs of stockings were available, all of which were sold within three hours.
Another added bonus to the campaign was that it meant reducing silk imports from Japan, an argument that won over many wary customers. Nylon was even mentioned by President Roosevelt's cabinet, which addressed its "vast and interesting economic possibilities" five days after the material was formally announced.
However, the early excitement over nylon also caused problems. It fueled unreasonable expectations that nylon would be better than silk, a miracle fabric as strong as steel that would last forever and never run. Realizing the danger of claims such as "New Hosiery Held Strong as Steel" and "No More Runs", DuPont scaled back the terms of the original announcement, especially those stating that nylon would possess the strength of steel.
Also, DuPont executives marketing nylon as a revolutionary man-made material did not at first realize that some consumers experienced a sense of unease and distrust, even fear, towards synthetic fabrics.
A particularly damaging news story, drawing on DuPont's 1938 patent for the new polymer, suggested that one method of producing nylon might be to use cadaverine (pentamethylenediamine), a chemical extracted from corpses. Although scientists asserted that cadaverine was also extracted by heating coal, the public often refused to listen. A woman confronted one of the lead scientists at DuPont and refused to accept that the rumour was not true.
DuPont changed its campaign strategy, emphasizing that nylon was made from "coal, air and water", and started focusing on the personal and aesthetic aspects of nylon, rather than its intrinsic qualities. Nylon was thus domesticated, and attention shifted to the material and consumer aspect of the fiber with slogans like "If it's nylon, it's prettier, and oh! How fast it dries!".
After nylon's nationwide release in 1940, production was increased. 1300 tons of the fabric were produced during 1940. During their first year on the market, 64 million pairs of nylon stockings were sold. In 1941, a second plant was opened in Martinsville, Virginia due to the success of the fabric.
While nylon was marketed as the durable and indestructible material of the people, it was sold at almost twice the price of silk stockings ($4.27 per pound of nylon versus $2.79 per pound of silk). Sales of nylon stockings were strong in part due to changes in women's fashion. As Lauren Olds explains: "by 1939 [hemlines] had inched back up to the knee, closing the decade just as it started off". The shorter skirts were accompanied by a demand for stockings that offered fuller coverage without the use of garters to hold them up.
However, as of February 11, 1942, nylon production was redirected from being a consumer material to one used by the military. DuPont's production of nylon stockings and other lingerie stopped, and most manufactured nylon was used to make parachutes and tents for World War II. Although nylon stockings already made before the war could be purchased, they were generally sold on the black market for as high as $20.
Once the war ended, the return of nylon was awaited with great anticipation. Although DuPont projected yearly production of 360 million pairs of stockings, there were delays in converting back to consumer rather than wartime production. In 1946, the demand for nylon stockings could not be satisfied, which led to the Nylon riots. In one case, an estimated 40,000 people lined up in Pittsburgh to buy 13,000 pairs of nylons. In the meantime, women cut up nylon tents and parachutes left from the war in order to make blouses and wedding dresses. Between the end of the war and 1952, production of stockings and lingerie used 80% of the world's nylon. DuPont put a lot of focus on catering to the civilian demand, and continually expanded its production.
As pure nylon hosiery was sold in a wider market, problems became apparent. Nylon stockings were found to be fragile, in the sense that the thread often tended to unravel lengthwise, creating 'runs'. People also reported that pure nylon textiles could be uncomfortable due to nylon's lack of absorbency. Moisture stayed inside the fabric near the skin under hot or moist conditions instead of being "wicked" away. Nylon fabric could also be itchy, and tended to cling and sometimes spark as a result of static electrical charge built up by friction.
Also, under some conditions stockings could decompose turning back into nylon's original components of air, coal, and water. Scientists explained this as a result of air pollution, attributing it to London smog in 1952, as well as poor air quality in New York and Los Angeles.
The solution found to problems with pure nylon fabric was to blend nylon with other existing fibers or polymers such as cotton, polyester, and spandex. This led to the development of a wide array of blended fabrics. The new nylon blends retained the desirable properties of nylon (elasticity, durability, ability to be dyed) and kept clothes prices low and affordable.
As of 1950, the New York Quartermaster Procurement Agency (NYQMPA), which developed and tested textiles for the army and navy, had committed to developing a wool-nylon blend. They were not the only ones to introduce blends of both natural and synthetic fibers. "America's Textile Reporter" referred to 1951 as the "Year of the blending of the fibers". Fabric blends included mixes like "Bunara" (wool-rabbit-nylon) and "Casmet" (wool-nylon-fur). In Britain in November 1951, the inaugural address of the 198th session of the Royal Society for the Encouragement of Arts, Manufactures and Commerce focused on the blending of textiles.
DuPont's Fabric Development Department cleverly targeted French fashion designers, supplying them with fabric samples. In 1955, designers such as Coco Chanel, Jean Patou, and Christian Dior showed gowns created with DuPont fibers, and fashion photographer Horst P. Horst was hired to document their use of DuPont fabrics. "American Fabrics" credited blends with providing "creative possibilities and new ideas for fashions which had been hitherto undreamed of."
DuPont went through an extensive process to generate names for its new product.
In 1940, John W. Eckelberry of DuPont stated that the letters "nyl" were arbitrary and the "on" was copied from the suffixes of other fibers such as cotton and rayon. A later publication by DuPont ("Context", vol. 7, no. 2, 1978) explained that the name was originally intended to be "No-Run" ("run" meaning "unravel"), but was modified to avoid making such an unjustified claim. Since the products were not really run-proof, the vowels were swapped to produce "nuron", which was changed to "nilon" "to make it sound less like a nerve tonic". For clarity in pronunciation, the "i" was changed to "y".
In spite of oil shortages in the 1970s, consumption of nylon textiles continued to grow by 7.5 per cent per annum between the 1960s and 1980s.
Overall production of synthetic fibers, however, dropped from 63% of the world's textile production in 1965 to 45% in the early 1970s. The appeal of "new" technologies wore off, and nylon fabric "was going out of style in the 1970s". Consumers also became concerned about environmental costs throughout the production cycle: obtaining the raw materials (oil), energy use during production, waste produced during creation of the fiber, and eventual waste disposal of materials that were not biodegradable.
Synthetic fibers have not dominated the market since the 1950s and 1960s. Nylon has nonetheless continued to represent about 12% (8 million pounds) of the world's production of synthetic fibers. As one of the largest engineering polymer families, the global demand for nylon resins and compounds was valued at roughly US$20.5 billion in 2013, and the market was expected to reach US$30 billion by 2020 at an average annual growth rate of 5.5%.
Although pure nylon has many flaws and is now rarely used, its derivatives have greatly influenced and contributed to society. From scientific discoveries relating to the production of plastics and polymerization, to economic impact during the depression and the changing of women's fashion, nylon was a revolutionary product. The Lunar Flag Assembly, the first flag planted on the moon in a symbolic gesture of celebration, was made of nylon. The flag itself cost $5.50, but had to have a specially-designed flagpole with a horizontal bar so that it would appear to "fly".
One historian describes nylon as "an object of desire", comparing the invention to Coca-Cola in the eyes of 20th century consumers.
Nylons are condensation polymers or copolymers, formed by reacting difunctional monomers containing equal parts of amine and carboxylic acid, so that amides are formed at both ends of each monomer in a process analogous to polypeptide biopolymers. Most nylons are made from the reaction of a dicarboxylic acid with a diamine (e.g. PA66) or a lactam or amino acid with itself (e.g. PA6). In the first case, the "repeating unit" consists of one of each monomer, so that they alternate in the chain, similar to the so-called ABAB structure of polyesters and polyurethanes. Since each monomer in this copolymer has the same reactive group on both ends, the direction of the amide bond reverses between each monomer, unlike natural polyamide proteins, which have overall directionality: C terminal → N terminal. In the second case (so called AA), the repeating unit corresponds to the single monomer.
In common usage, the prefix "PA" (polyamide) or the name "Nylon" are used interchangeably and are equivalent in meaning.
The nomenclature used for nylon polymers was devised during the synthesis of the first simple aliphatic nylons and uses numbers to describe the number of carbons in each monomer unit, including the carbon(s) of the carboxylic acid(s). Subsequent use of cyclic and aromatic monomers required the use of letters or sets of letters. One number after "PA" or "Nylon" indicates a homopolymer which is "monadic" or based on one amino acid (minus H2O) as monomer:
Two numbers or sets of letters indicate a "dyadic" homopolymer formed from two monomers: one diamine and one dicarboxylic acid. The first number indicates the number of carbons in the diamine, and the second the number of carbons in the diacid (counting the carboxyl carbons). The two numbers should be separated by a comma for clarity, but the comma is often omitted.
For copolymers the comonomers or pairs of comonomers are separated by slashes:
The term polyphthalamide (abbreviated to PPA) is used when 60% or more moles of the carboxylic acid portion of the repeating unit in the polymer chain is composed of a combination of terephthalic acid (TPA) and isophthalic acid (IPA).
Wallace Carothers at DuPont patented nylon 66 using amides.
In the case of nylons that involve reaction of a diamine and a dicarboxylic acid, it is difficult to get the proportions exactly correct, and deviations can lead to chain termination at molecular weights less than a desirable 10,000 daltons (u). To overcome this problem, a crystalline, solid "nylon salt" can be formed at room temperature, using an exact 1:1 ratio of the acid and the base to neutralize each other. The salt is crystallized to purify it and obtain the desired precise stoichiometry. Heated to 285 °C (545 °F), the salt reacts to form nylon polymer with the production of water.
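As a concrete sketch of the nylon 6,6 salt route (a schematic only, with the polymer written as its repeating unit):

n HOOC(CH2)4COOH + n H2N(CH2)6NH2 → nylon salt → [–OC(CH2)4CO–NH(CH2)6NH–]n + 2n H2O

Each amide bond formed releases one molecule of water, so each repeating unit, which contains two amide bonds, accounts for two.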
The synthetic route using lactams (cyclic amides) was developed by Paul Schlack at IG Farben, leading to nylon 6, or polycaprolactam — formed by a ring-opening polymerization. The peptide bond within the caprolactam is broken with the exposed active groups on each side being incorporated into two new bonds as the monomer becomes part of the polymer backbone.
The 428 °F (220 °C) melting point of nylon 6 is lower than the 509 °F (265 °C) melting point of nylon 66.
Nylon 510, made from pentamethylene diamine and sebacic acid, was studied by Carothers even before nylon 66 and has superior properties, but is more expensive to make. In keeping with this naming convention, "nylon 6,12" or "PA 612" is a copolymer of a 6C diamine and a 12C diacid; the same applies to PA 510, PA 611, PA 1012, and so on. Other nylons include copolymerized dicarboxylic acid/diamine products that are "not" based upon the monomers listed above. For example, some fully aromatic nylons (known as "aramids") are polymerized with the addition of diacids like terephthalic acid (→ Kevlar, Twaron) or isophthalic acid (→ Nomex), more commonly associated with polyesters. There are copolymers of PA 66/6; copolymers of PA 66/6/12; and others. In general, linear polymers are the most useful, but it is possible to introduce branches in nylon by the condensation of dicarboxylic acids with polyamines having three or more amino groups.
The general reaction is:
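Written schematically (a reconstruction from the description below, with R and R′ standing for the hydrocarbon cores of the diacid and diamine respectively):

n HOOC–R–COOH + n H2N–R′–NH2 → [–OC–R–CO–NH–R′–NH–]n + 2n H2O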
Two molecules of water are given off and the nylon is formed. Its properties are determined by the R and R' groups in the monomers. In nylon 6,6, R = 4C and R' = 6C alkanes, but one also has to include the two carboxyl carbons in the diacid to get the number it donates to the chain. In Kevlar, both R and R' are benzene rings.
Industrial synthesis is usually done by heating the acids, amines or lactams to remove water, but in the laboratory, diacid chlorides can be reacted with diamines. For example, a popular demonstration of interfacial polymerization (the "nylon rope trick") is the synthesis of nylon 66 from adipoyl chloride and hexamethylene diamine.
Nylons can also be synthesized from dinitriles using acid catalysis. For example, this method is applicable for the preparation of nylon 1,6 from adiponitrile, formaldehyde, and water. Nylons can additionally be synthesized from diols and dinitriles by this method.
Nylon monomers are manufactured by a variety of routes, starting in most cases from crude oil but sometimes from biomass. Those in current production are described below.
Various diamine components can be used, which are derived from a variety of sources. Most are petrochemicals, but bio-based materials are also being developed.
Due to the large number of diamines, diacids and aminoacids that can be synthesized, many nylon polymers have been made experimentally and characterized to varying degrees. A smaller number have been scaled up and offered commercially, and these are detailed below.
Commercial nylons fall into two homopolymer families: those derived from one monomer, and polyamides derived from pairs of diamines and diacids (or diacid derivatives). Polymers in both families are or have been offered commercially, either as homopolymers or as part of a copolymer.
It is easy to make mixtures of the monomers or sets of monomers used to make nylons to obtain copolymers. This lowers crystallinity and can therefore lower the melting point.
Some copolymers that have been or are commercially available are listed below:
Most nylon polymers are miscible with each other allowing a range of blends to be made. The two polymers can react with one another by transamidation to form random copolymers.
According to their crystallinity, polyamides can be semi-crystalline or amorphous.
According to this classification, PA66, for example, is an aliphatic semi-crystalline homopolyamide.
All nylons are susceptible to hydrolysis, especially by strong acids, a reaction essentially the reverse of the synthetic reaction shown above. The molecular weight of nylon products so attacked drops, and cracks form quickly at the affected zones. Lower members of the nylons (such as nylon 6) are affected more than higher members such as nylon 12. This means that nylon parts cannot be used in contact with sulfuric acid for example, such as the electrolyte used in lead–acid batteries.
When being molded, nylon must be dried to prevent hydrolysis in the molding machine barrel since water at high temperatures can also degrade the polymer. The reaction is of the type:
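Schematically, the hydrolysis is the reverse of the condensation shown earlier, cleaving an amide link in the chain:

~CO–NH~ + H2O → ~COOH + H2N~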
Berners-Lee calculates the average greenhouse gas footprint of nylon in manufacturing carpets at 5.43 kg CO2 equivalent per kg, when produced in Europe. This gives it almost the same carbon footprint as wool, but with greater durability and therefore a lower overall carbon footprint.
Data published by PlasticsEurope indicates for nylon 66 a greenhouse gas footprint of 6.4 kg CO2 equivalent per kg, and an energy consumption of 138 MJ/kg. When considering the environmental impact of nylon, it is important to consider the use phase. In particular, when nylon parts are used to make cars lighter, significant savings in fuel consumption and CO2 emissions are achieved.
Various nylons break down in fire and form hazardous smoke and toxic fumes or ash, typically containing hydrogen cyanide. Incinerating nylons to recover the high energy used to create them is usually expensive, so most nylon ends up in landfills, decaying slowly. Discarded nylon fabric takes 30–40 years to decompose. Nylon is a robust polymer and lends itself well to recycling. Much nylon resin is recycled directly in a closed loop at the injection molding machine, by grinding sprues and runners and mixing them with the virgin granules being consumed by the molding machine.
Nylon can be recycled, but only a few companies do so. Aquafil has demonstrated recycling fishing nets lost in the ocean into apparel. Vanden recycles nylon and other polyamides (PA) and has operations in the UK, Australia, Hong Kong, UAE, Turkey, and Finland.
Above their melting temperatures, Tm, thermoplastics like nylon are amorphous solids or viscous fluids in which the chains approximate random coils. Below Tm, amorphous regions alternate with regions which are lamellar crystals. The amorphous regions contribute elasticity and the crystalline regions contribute strength and rigidity. The planar amide (-CO-NH-) groups are very polar, so nylon forms multiple hydrogen bonds among adjacent strands. Because the nylon backbone is so regular and symmetrical, especially if all the amide bonds are in the "trans" configuration, nylons often have high crystallinity and make excellent fibers. The amount of crystallinity depends on the details of formation, as well as on the kind of nylon.
Nylon 66 can have multiple parallel strands aligned with their neighboring peptide bonds at coordinated separations of exactly 6 and 4 carbons for considerable lengths, so the carbonyl oxygens and amide hydrogens can line up to form interchain hydrogen bonds repeatedly, without interruption. Nylon 510 can have coordinated runs of 5 and 8 carbons. Thus parallel (but not antiparallel) strands can participate in extended, unbroken, multi-chain β-pleated sheets, a strong and tough supermolecular structure similar to that found in natural silk fibroin and the β-keratins in feathers. (Proteins have only an amino acid α-carbon separating sequential -CO-NH- groups.) Nylon 6 will form uninterrupted H-bonded sheets with mixed directionalities, but the β-sheet wrinkling is somewhat different. The three-dimensional disposition of each alkane hydrocarbon chain depends on rotations about the 109.47° tetrahedral bonds of singly bonded carbon atoms.
When extruded into fibers through pores in an industrial spinneret, the individual polymer chains tend to align because of viscous flow. If subjected to cold drawing afterwards, the fibers align further, increasing their crystallinity, and the material acquires additional tensile strength. In practice, nylon fibers are most often drawn using heated rolls at high speeds.
Block nylon tends to be less crystalline, except near the surfaces due to shearing stresses during formation. Nylon is clear and colorless, or milky, but is easily dyed. Multistranded nylon cord and rope is slippery and tends to unravel. The ends can be melted and fused with a heat source such as a flame or electrode to prevent this.
Nylons are hygroscopic, and will absorb or desorb moisture as a function of the ambient humidity. Variations in moisture content have several effects on the polymer. Firstly, the dimensions will change; more importantly, moisture acts as a plasticizer, lowering the glass transition temperature (Tg) and consequently the elastic modulus at temperatures below the Tg.
When dry, polyamide is a good electrical insulator. However, polyamide is hygroscopic. The absorption of water will change some of the material's properties such as its electrical resistance. Nylon is less absorbent than wool or cotton.
The characteristic features of nylon 6,6 include:
On the other hand, nylon 6 is easy to dye but fades more readily; it has a higher impact resistance, more rapid moisture absorption, and greater elasticity and elastic recovery.
Nylon clothing tends to be less flammable than cotton and rayon, but nylon fibers may melt and stick to skin.
Nylon was first used commercially in a nylon-bristled toothbrush in 1938, followed more famously by women's stockings or "nylons", which were shown at the 1939 New York World's Fair and first sold commercially in 1940. Its use increased dramatically during World War II, when demand for fabrics surged.
Bill Pittendreigh, DuPont, and other individuals and corporations worked diligently during the first few months of World War II to find a way to replace Asian silk and hemp with nylon in parachutes. It was also used to make tires, tents, ropes, ponchos, and other military supplies. It was even used in the production of a high-grade paper for U.S. currency. At the outset of the war, cotton accounted for more than 80% of all fibers used and manufactured, and wool fibers accounted for nearly all of the rest. By August 1945, manufactured fibers had taken a market share of 25%, at the expense of cotton. After the war, because of shortages of both silk and nylon, nylon parachute material was sometimes repurposed to make dresses.
Nylon 6 and 66 fibers are used in carpet manufacture.
Nylon is one kind of fiber used in tire cord. Herman E. Schroeder pioneered the application of nylon in tires.
Nylon resins are widely used in the automobile industry especially in the engine compartment.
Molded nylon is used in hair combs and mechanical parts such as machine screws, gears, gaskets, and other low- to medium-stress components previously cast in metal. Engineering-grade nylon is processed by extrusion, casting, and injection molding. Type 6,6 Nylon 101 is the most common commercial grade of nylon, and Nylon 6 is the most common commercial grade of molded nylon. For use in tools such as spudgers, nylon is available in glass-filled variants which increase structural and impact strength and rigidity, and molybdenum disulfide-filled variants which increase lubricity. Nylon can be used as the matrix material in composite materials, with reinforcing fibers like glass or carbon fiber; such a composite has a higher density than pure nylon. Such thermoplastic composites (25% to 30% glass fiber) are frequently used in car components next to the engine, such as intake manifolds, where the good heat resistance of such materials makes them feasible competitors to metals.
Nylon was used to make the stock of the Remington Nylon 66 rifle. The frame of the modern Glock pistol is made of a nylon composite.
Nylon resins are used as a component of food packaging films where an oxygen barrier is needed. Some of the terpolymers based upon nylon are used every day in packaging. Nylon has been used for meat wrappings and sausage sheaths. The high temperature resistance of nylon makes it useful for oven bags.
Nylon filaments are primarily used in brushes especially toothbrushes and string trimmers. They are also used as monofilaments in fishing line. Nylon 610 and 612 are the most used polymers for filaments.
Its various properties also make it very useful as a material in additive manufacturing; specifically as a filament in consumer and professional grade fused deposition modeling 3D printers.
Nylon resins can be extruded into rods, tubes and sheets.
Nylon powders are used to powder coat metals. Nylon 11 and nylon 12 are the most widely used.
In the mid-1940s, classical guitarist Andrés Segovia mentioned the shortage of good guitar strings in the United States, particularly his favorite Pirastro catgut strings, to a number of foreign diplomats at a party, including General Lindeman of the British Embassy. A month later, the General presented Segovia with some nylon strings which he had obtained via some members of the DuPont family. Segovia found that although the strings produced a clear sound, they had a faint metallic timbre which he hoped could be eliminated.
Nylon strings were first tried on stage by Olga Coelho in New York in January 1944.
In 1946, Segovia and string maker Albert Augustine were introduced by their mutual friend Vladimir Bobri, editor of Guitar Review. On the basis of Segovia's interest and Augustine's past experiments, they decided to pursue the development of nylon strings. DuPont, skeptical of the idea, agreed to supply the nylon if Augustine would endeavor to develop and produce the actual strings. After three years of development, Augustine demonstrated a nylon first string whose quality impressed guitarists, including Segovia, in addition to DuPont.
Wound strings, however, were more problematic. Eventually, after experimenting with various types of metal and smoothing and polishing techniques, Augustine was able to produce high-quality nylon wound strings.
Nerd
A nerd is a person seen as overly intellectual, obsessive, introverted or lacking social skills. Such a person may spend inordinate amounts of time on unpopular, little known, or non-mainstream activities, which are generally either highly technical, abstract, or relating to topics of science fiction or fantasy, to the exclusion of more mainstream activities. Additionally, many so-called nerds are described as being shy, quirky, pedantic, and unattractive.
Originally derogatory, the term "nerd" was a stereotype, but as with other pejoratives, it has been reclaimed and redefined by some as a term of pride and group identity. However, the augmentative terms, geek and dork, have not experienced a similar positive drift in meaning and usage.
The first documented appearance of the word "nerd" is as the name of a creature in Dr. Seuss's book "If I Ran the Zoo" (1950), in which the narrator Gerald McGrew claims that he would collect "a Nerkle, a Nerd, and a Seersucker too" for his imaginary zoo. The slang meaning of the term dates to 1951. That year, "Newsweek" magazine reported on its popular use as a synonym for "drip" or "square" in Detroit, Michigan. By the early 1960s, usage of the term had spread throughout the United States, and even as far as Scotland. At some point, the word took on connotations of bookishness and social ineptitude.
An alternate spelling, as "nurd" or "gnurd", also began to appear in the mid-1960s or early 1970s. Author Philip K. Dick claimed to have coined the "nurd" spelling in 1973, but its first recorded use appeared in a 1965 student publication at Rensselaer Polytechnic Institute (RPI). Oral tradition there holds that the word is derived from "knurd" ("drunk" spelled backward), which was used to describe people who studied rather than partied. The term "gnurd" (spelled with the "g") was in use at the Massachusetts Institute of Technology (MIT) by 1965. The term "nurd" was also in use at MIT as early as 1971.
According to "Online Etymology Dictionary", the word is an alteration of the 1940s term ""nert"" (meaning "stupid or crazy person"), which is itself an alteration of "nut" (nutcase).
The term was popularized in the 1970s by its heavy use in the sitcom "Happy Days".
Because of the nerd stereotype, many smart people are often thought of as nerdy. This belief can be harmful, as it can cause high-school students to "switch off their lights" out of fear of being branded as a nerd, and cause otherwise appealing people to be considered nerdy simply for their intellect. It was once thought that intellectuals were nerdy because they were envied. However, Paul Graham stated in his essay "Why Nerds are Unpopular" that intellect is neutral, meaning that one is neither loved nor despised for it. He also states that it is only the correlation that makes smart teens automatically seem nerdy, and that a nerd is someone who is not socially adept. Additionally, he says that the reason why many smart kids are unpopular is that they "don't have time for the activities required for popularity."
Stereotypical nerd appearance, often lampooned in caricatures, can include very large glasses, braces, buck teeth, severe acne and pants worn high at the waist. Following suit of popular use in emoticons, Unicode released in 2015 its "Nerd Face" character, featuring some of those stereotypes: 🤓 (code point U+1F913). In the media, many nerds are males, portrayed as being physically unfit, either overweight or skinny due to lack of physical exercise. It has been suggested by some, such as linguist Mary Bucholtz, that being a nerd may be a state of being "hyperwhite" and rejecting African-American culture and slang that "cool" white children use. However, after the "Revenge of the Nerds" movie franchise (with multicultural nerds), and the introduction of the Steve Urkel character on the television series "Family Matters", nerds have been seen in all races and colors as well as more recently being a frequent young East Asian or Indian male stereotype in North America. Portrayal of "nerd girls", in films such as "She's Out of Control", "Welcome to the Dollhouse" and "She's All That" depicts that smart but nerdy women might suffer later in life if they do not focus on improving their physical attractiveness.
In the United States, a 2010 study published in the "Journal of International and Intercultural Communication" indicated that Asian Americans are perceived as most likely to be nerds, followed by White Americans, while non-White Hispanics and Black Americans were perceived as least likely to be nerds. These stereotypes stem from concepts of Orientalism and Primitivism, as discussed in Ron Eglash's essay "Race, Sex, and Nerds: From Black Geeks to Asian American Hipsters".
Some of the stereotypical behaviors associated with the "nerd" stereotype have correlations with the traits of Asperger's Syndrome or other autism-spectrum conditions.
The rise of Silicon Valley and the American computer industry at large has allowed many so-called "nerdy people" to accumulate large fortunes and influence media culture. Many stereotypically nerdy interests, such as superhero, fantasy and science fiction works, are now international popular culture hits. Some measures of nerdiness are now allegedly considered desirable, as, to some, it suggests a person who is intelligent, respectful, interesting, and able to earn a large salary. Stereotypical nerd qualities are evolving, going from awkwardness and social ostracism to an allegedly more widespread acceptance and sometimes even celebration of their differences.
Johannes Grenzfurthner, researcher, self-proclaimed nerd and director of nerd documentary "Traceroute", reflects on the emergence of nerds and nerd culture:
In the 1984 film "Revenge of the Nerds" Robert Carradine worked to embody the nerd stereotype; in doing so, he helped create a definitive image of nerds. Additionally, the storyline presaged, and may have helped inspire, the "nerd pride" that emerged in the 1990s. "American Splendor" regular Toby Radloff claims this was the movie that inspired him to become "The Genuine Nerd from Cleveland, Ohio." In the "American Splendor" film, Toby's friend, "American Splendor" author Harvey Pekar, was less receptive to the movie, believing it to be hopelessly idealistic, explaining that Toby, an adult low income file clerk, had nothing in common with the middle class kids in the film who would eventually attain college degrees, success, and cease being perceived as nerds. Many, however, seem to share Radloff's view, as "nerd pride" has become more widespread in the years since. MIT professor Gerald Sussman, for example, seeks to instill pride in nerds:
The popular computer-related news website Slashdot uses the tagline "News for nerds. Stuff that matters." The Charles J. Sykes quote "Be nice to nerds. Chances are you'll end up working for one" has been popularized on the Internet and incorrectly attributed to Bill Gates. In Spain, Nerd Pride Day has been observed on May 25 since 2006, the same day as Towel Day, another somewhat nerdy holiday. The date was picked as it is the anniversary of the release of "Star Wars" in 1977.
An episode from the animated series "Freakazoid", titled "Nerdator", includes the use of nerds to power the mind of a Predator-like enemy, who delivers a speech about nerds towards the middle of the show.
The Danish reality TV show "FC Zulu", known in the internationally franchised format as "FC Nerds", established a format wherein a team of nerds, after two or three months of training, competes with a professional soccer team.
Some commentators consider that the word is devalued when applied to people who adopt a sub-cultural pattern of behaviour, rather than being reserved for people with a marked ability.
Although originally a predominantly American stereotype, nerd culture has grown across the globe and is now more acceptable and common than ever. Australian events such as Oz Comic-Con (a large comic book and cosplay convention, similar to San Diego Comic-Con International) and Supanova are incredibly popular events among people who identify themselves as nerds. In 2016, Oz Comic-Con in Perth saw almost 20,000 cosplayers and comic book fans meet to celebrate the event, leading it to be described as a "professionally organised Woodstock for geeks".
Individuals who are labeled as "nerds" are often the target of bullying due to a range of reasons that may include physical appearance or social background. Paul Graham has suggested that the reason nerds are frequently singled out for bullying is their indifference to popularity or social context, in the face of a youth culture that views popularity as paramount. However, research findings suggest that bullies are often as socially inept as their academically better-performing victims, and that popularity fails to confer protection from bullying. Other commentators have pointed out that pervasive harassment of intellectually-oriented youth began only in the mid-twentieth century and some have suggested that its cause involves jealousy over future employment opportunities and earning potential.
Nucleic acid
Nucleic acids are the biopolymers, or large biomolecules, essential to all known forms of life. The term "nucleic acid" is the overall name for DNA and RNA. They are composed of nucleotides, which are monomers made of three components: a 5-carbon sugar, a phosphate group, and a nitrogenous base. If the sugar is ribose, the polymer is RNA (ribonucleic acid); if the sugar is the ribose derivative deoxyribose, the polymer is DNA (deoxyribonucleic acid).
Nucleic acids are among the most important of all biomolecules. They are found in abundance in all living things, where they create, encode, and store the information of every living cell of every life-form on Earth. In turn, they transmit and express that information inside and outside the cell nucleus—to the interior operations of the cell and ultimately to the next generation of each living organism. The encoded information is contained and conveyed via the nucleic acid sequence, which provides the 'ladder-step' ordering of nucleotides within the molecules of RNA and DNA.
Strings of nucleotides are bonded to form helical backbones—typically, one for RNA, two for DNA—and assembled into chains of base-pairs selected from the five primary, or canonical, nucleobases: adenine, cytosine, guanine, thymine, and uracil. Thymine occurs only in DNA and uracil only in RNA. Using amino acids and the process known as protein synthesis, the specific sequencing in DNA of these nucleobase-pairs enables storing and transmitting coded instructions as genes. In RNA, base-pair sequencing provides for manufacturing new proteins that determine most of the structures, parts, and chemical processes of all life forms.
Experimental studies of nucleic acids constitute a major part of modern biological and medical research, and form a foundation for genome and forensic science, and the biotechnology and pharmaceutical industries.
Naked nucleic acid refers to nucleic acid that is not associated with proteins, lipids, or any other molecules that help protect it. Naked DNA can be found when transcriptional bursting is occurring.
The term "nucleic acid" is the overall name for DNA and RNA, members of a family of biopolymers, and is synonymous with "polynucleotide". Nucleic acids were named for their initial discovery within the nucleus, and for the presence of phosphate groups (related to phosphoric acid). Although first discovered within the nucleus of eukaryotic cells, nucleic acids are now known to be found in all life forms including within bacteria, archaea, mitochondria, chloroplasts, and viruses (There is debate as to whether viruses are living or non-living). All living cells contain both DNA and RNA (except some cells such as mature red blood cells), while viruses contain either DNA or RNA, but usually not both.
The basic component of biological nucleic acids is the nucleotide, each of which contains a pentose sugar (ribose or deoxyribose), a phosphate group, and a nucleobase.
Nucleic acids are also generated within the laboratory, through the use of enzymes (DNA and RNA polymerases) and by solid-phase chemical synthesis. The chemical methods also enable the generation of altered nucleic acids that are not found in nature, for example peptide nucleic acids.
Nucleic acids are generally very large molecules. Indeed, DNA molecules are probably the largest individual molecules known. Well-studied biological nucleic acid molecules range in size from 21 nucleotides (small interfering RNA) to large chromosomes (human chromosome 1 is a single molecule that contains 247 million base pairs).
In most cases, naturally occurring DNA molecules are double-stranded and RNA molecules are single-stranded. There are numerous exceptions, however—some viruses have genomes made of double-stranded RNA and other viruses have single-stranded DNA genomes, and, in some circumstances, nucleic acid structures with three or four strands can form.
Nucleic acids are linear polymers (chains) of nucleotides. Each nucleotide consists of three components: a purine or pyrimidine nucleobase (sometimes termed "nitrogenous base" or simply "base"), a pentose sugar, and a phosphate group. The substructure consisting of a nucleobase plus sugar is termed a nucleoside. Nucleic acid types differ in the structure of the sugar in their nucleotides–DNA contains 2'-deoxyribose while RNA contains ribose (where the only difference is the presence of a hydroxyl group). Also, the nucleobases found in the two nucleic acid types are different: adenine, cytosine, and guanine are found in both RNA and DNA, while thymine occurs in DNA and uracil occurs in RNA.
The sugars and phosphates in nucleic acids are connected to each other in an alternating chain (sugar-phosphate backbone) through phosphodiester linkages. In conventional nomenclature, the carbons to which the phosphate groups attach are the 3'-end and the 5'-end carbons of the sugar. This gives nucleic acids directionality, and the ends of nucleic acid molecules are referred to as 5'-end and 3'-end. The nucleobases are joined to the sugars via an N-glycosidic linkage involving a nucleobase ring nitrogen (N-1 for pyrimidines and N-9 for purines) and the 1' carbon of the pentose sugar ring.
Non-standard nucleosides are also found in both RNA and DNA and usually arise from modification of the standard nucleosides within the DNA molecule or the primary (initial) RNA transcript. Transfer RNA (tRNA) molecules contain a particularly large number of modified nucleosides.
Double-stranded nucleic acids are made up of complementary sequences, in which extensive Watson-Crick base pairing results in a highly repeated and quite uniform double-helical three-dimensional structure. In contrast, single-stranded RNA and DNA molecules are not constrained to a regular double helix, and can adopt highly complex three-dimensional structures that are based on short stretches of intramolecular base-paired sequences including both Watson-Crick and noncanonical base pairs, and a wide range of complex tertiary interactions.
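As a minimal illustration of Watson-Crick complementarity (an editor's sketch, not drawn from any cited source), the complementary strand of a DNA sequence can be computed mechanically from the pairing rules A-T and G-C, reversing the result because paired strands run antiparallel:

    # Watson-Crick pairing rules for DNA: A pairs with T, G pairs with C.
    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def reverse_complement(strand: str) -> str:
        """Return the paired strand, reported 5'->3'.

        Paired strands are antiparallel, so the complement of a 5'->3'
        sequence is reversed to keep the conventional orientation.
        """
        return "".join(COMPLEMENT[base] for base in reversed(strand))

    assert reverse_complement("ATGC") == "GCAT"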
Nucleic acid molecules are usually unbranched and may occur as linear and circular molecules. For example, bacterial chromosomes, plasmids, mitochondrial DNA, and chloroplast DNA are usually circular double-stranded DNA molecules, while chromosomes of the eukaryotic nucleus are usually linear double-stranded DNA molecules. Most RNA molecules are linear, single-stranded molecules, but both circular and branched molecules can result from RNA splicing reactions. In double-stranded DNA, the total amount of pyrimidines is equal to the total amount of purines, and the diameter of the helix is about 20 Å.
One DNA or RNA molecule differs from another primarily in the sequence of nucleotides. Nucleotide sequences are of great importance in biology since they carry the ultimate instructions that encode all biological molecules, molecular assemblies, subcellular and cellular structures, organs, and organisms, and directly enable cognition, memory, and behavior ("see Genetics"). Enormous efforts have gone into the development of experimental methods to determine the nucleotide sequence of biological DNA and RNA molecules, and today hundreds of millions of nucleotides are sequenced daily at genome centers and smaller laboratories worldwide. In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, https://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site.
Deoxyribonucleic acid (DNA) is a nucleic acid containing the genetic instructions used in the development and functioning of all known living organisms. The DNA segments carrying this genetic information are called genes. Likewise, other DNA sequences have structural purposes or are involved in regulating the use of this genetic information. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life.
DNA consists of two long polymers of simple units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands run in opposite directions to each other and are, therefore, anti-parallel. Attached to each sugar is one of four types of molecules called nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA in a process called transcription.
Within cells, DNA is organized into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
Ribonucleic acid (RNA) functions in converting genetic information from genes into the amino acid sequences of proteins. The three universal types of RNA include transfer RNA (tRNA), messenger RNA (mRNA), and ribosomal RNA (rRNA). Messenger RNA acts to carry genetic sequence information between DNA and ribosomes, directing protein synthesis. Ribosomal RNA is a major component of the ribosome, and catalyzes peptide bond formation. Transfer RNA serves as the carrier molecule for amino acids to be used in protein synthesis, and is responsible for decoding the mRNA. In addition, many other classes of RNA are now known.
Artificial nucleic acid analogues have been designed and synthesized by chemists, and include peptide nucleic acid, morpholino- and locked nucleic acid, glycol nucleic acid, and threose nucleic acid. Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecules.
Nitrate
Nitrate is an anion (negative ion) with the molecular formula NO3−, or a salt with that anion. The name is also used for organic compounds that contain the nitrate ester functional group –ONO2.
Nitrates are common components of fertilizers and explosives. Almost all nitrate salts are soluble in water. A common example of an inorganic nitrate salt is potassium nitrate (saltpeter).
Removal of one electron yields the nitrate radical, also called nitrogen trioxide, NO3.
The anion is the conjugate base of nitric acid, consisting of one central nitrogen atom surrounded by three identically bonded oxygen atoms in a trigonal planar arrangement. The nitrate ion carries a formal charge of −1. This charge results from a combination of formal charges in which each of the three oxygens carries a −2⁄3 charge, whereas the nitrogen carries a +1 charge, all these adding up to the formal charge of the polyatomic nitrate ion. This arrangement is commonly used as an example of resonance. Like the isoelectronic carbonate ion, the nitrate ion can be represented by resonance structures:
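A text-only schematic of the three equivalent forms (the formal N=O double bond rotates among the three oxygens, each structure carrying N+ and two O−):

−O–N+(=O)–O− ⟷ O=N+(–O−)–O− ⟷ −O–N+(–O−)=O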
A rich source of inorganic nitrate in the human diet comes from leafy green foods, such as spinach and arugula. Inorganic nitrate (NO3−) is the active component of beetroot juice and other vegetables. Drinking water is also a dietary source.
Dietary nitrate supplementation has been shown to improve endurance exercise performance.
Ingestion of large doses of nitrate, either as pure sodium nitrate or as beetroot juice, rapidly increases plasma nitrate concentration about 2-3 fold in young healthy individuals, and this elevated nitrate concentration can be maintained for at least 2 weeks. Increased plasma nitrate stimulates the production of nitric oxide, an important physiological signalling molecule that is used in, among other things, regulation of muscle blood flow and mitochondrial respiration.
Nitrite consumption is primarily determined by the amount of processed meats eaten, and the concentration of nitrates in these meats. Although nitrites are the nitrogen compound chiefly used in meat curing, nitrates are used as well. Nitrates lead to the formation of nitrosamines. The production of carcinogenic nitrosamines may be inhibited by the use of the antioxidants vitamin C and the alpha-tocopherol form of vitamin E during curing.
Anti-hypertensive diets, such as the DASH diet, typically contain high levels of nitrates, which are first reduced to nitrite in the saliva, as detected in saliva testing, prior to forming nitric oxide.
Nitrate salts are found naturally on Earth in large deposits, particularly of nitratine, a major source of sodium nitrate.
Nitrates are produced by a number of species of nitrifying bacteria; in the absence of mineral nitrate sources, the nitrate compounds for gunpowder were historically produced by means of various fermentation processes using urine and dung.
Nitric acid is also produced as a byproduct of lightning strikes in Earth's nitrogen- and oxygen-rich atmosphere, when nitrogen dioxide reacts with water vapour.
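The overall transformation is conventionally written as the disproportionation of nitrogen dioxide in water (a standard textbook equation supplied here for illustration; the article itself gives no formula):

\[ 3\,\mathrm{NO_2} + \mathrm{H_2O} \longrightarrow 2\,\mathrm{HNO_3} + \mathrm{NO} \]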
Nitrates are produced industrially from nitric acid.
Nitrates are mainly produced for use as fertilizers in agriculture because of their high solubility and biodegradability. The main nitrate fertilizers are ammonium, sodium, potassium, calcium, and magnesium salts. Several million kilograms are produced annually for this purpose.
The second major application of nitrates is as oxidizing agents, most notably in explosives where the rapid oxidation of carbon compounds liberates large volumes of gases (see gunpowder for an example). Sodium nitrate is used to remove air bubbles from molten glass and some ceramics. Mixtures of the molten salt are used to harden some metals.
Almost all methods for detection of nitrate rely on its conversion to nitrite followed by nitrite-specific tests. The reduction of nitrate to nitrite is effected by copper-cadmium material. The sample is introduced with a flow injection analyzer, and the resulting nitrite-containing effluent is then combined with a reagent for colorimetric or electrochemical detection. The most popular of these assays is the Griess test, whereby nitrite is converted to a deeply colored azo dye, suited for UV-vis spectroscopic analysis. The method exploits the reactivity of nitrous acid derived from acidification of nitrite. Nitrous acid selectively reacts with aromatic amines to give diazonium salts, which in turn couple with a second reagent to give the azo dye. The detection limit is 0.02 to 2 μM. These methods have been adapted for biological samples.
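A sketch of that sequence, assuming the sulfanilamide / N-(1-naphthyl)ethylenediamine (NED) reagent pair commonly used in Griess assays (the article itself does not name the reagents):

\[ \mathrm{NO_2^-} + \mathrm{H^+} \longrightarrow \mathrm{HNO_2} \]
\[ \mathrm{HNO_2} + \text{sulfanilamide} \longrightarrow \text{diazonium salt} \]
\[ \text{diazonium salt} + \text{NED} \longrightarrow \text{azo dye (typically read near 540 nm)} \]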
The acute toxicity of nitrate is low. "Substantial disagreement" exists about the long-term risks of nitrate exposure. The two areas of possible concern are that (i) nitrate could be a precursor to nitrite in the lower gut, and nitrite is a precursor to nitrosamines, which are implicated in carcinogenesis, and (ii) nitrate is implicated in methemoglobinemia, a disorder of the hemoglobin in red blood cells.
Nitrates do not affect infants and pregnant women. Blue baby syndrome is caused by a number of other factors, such as gastric upset (for example, a diarrheal infection), protein intolerance, and heavy metal toxicity, with nitrates playing a minor role.
Through the Safe Drinking Water Act, the United States Environmental Protection Agency has set a maximum contaminant level of 10 mg/L or 10 ppm of nitrates in drinking water.
An acceptable daily intake (ADI) for nitrate ions was established in the range of 0–3.7 mg (kg body weight)−1 day−1 by the Joint FAO/WHO Expert Committee on Food Additives (JECFA).
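For illustration only (the 70 kg body weight is an assumption for the example, not a figure from the article), the upper end of this ADI works out to

\[ 3.7\ \mathrm{mg\,kg^{-1}\,day^{-1}} \times 70\ \mathrm{kg} \approx 259\ \mathrm{mg\ of\ nitrate\ ion\ per\ day} \]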
In freshwater or estuarine systems close to land, nitrate can reach concentrations that are lethal to fish. While nitrate is much less toxic than ammonia, levels over 30 ppm of nitrate can inhibit growth, impair the immune system and cause stress in some aquatic species. Nitrate toxicity remains the subject of debate.
In most cases of excess nitrate concentrations in aquatic systems, the primary source is surface runoff from agricultural or landscaped areas that have received excess nitrate fertilizer. The resulting eutrophication and algal blooms lead to anoxia and dead zones. As a consequence, and because nitrate forms a component of total dissolved solids, nitrate is widely used as an indicator of water quality.
Symptoms of nitrate poisoning in domestic animals include increased heart rate and respiration; in advanced cases blood and tissue may turn a blue or brown color. Feed can be tested for nitrate; treatment consists of supplementing or substituting existing supplies with lower-nitrate material. Safe levels of nitrate, expressed on a dry (moisture-free) basis, have been established for the various types of livestock.
(Figure caption: Nitrate formation with elements of the periodic table.) | https://en.wikipedia.org/wiki?curid=21497 |
Nevis
Nevis is a small island in the Caribbean Sea that forms part of the inner arc of the Leeward Islands chain of the West Indies. Nevis and the neighbouring island of Saint Kitts constitute one country: the Federation of Saint Kitts and Nevis. Nevis is located near the northern end of the Lesser Antilles archipelago, about 350 km east-southeast of Puerto Rico and 80 km west of Antigua. Its area is about 93 km2 (36 sq mi) and its capital is Charlestown.
Saint Kitts and Nevis are separated by a shallow channel known as "The Narrows". Nevis is roughly conical in shape, with a volcano known as Nevis Peak at its centre. The island is fringed on its western and northern coastlines by sandy beaches composed of a mixture of white coral sand with brown and black sand eroded and washed down from the volcanic rocks that make up the island. The gently sloping coastal plain has natural freshwater springs as well as non-potable volcanic hot springs, especially along the western coast.
The island was named "Oualie" ("Land of Beautiful Waters") by the Caribs and "Dulcina" ("Sweet Island") by the early British settlers. The name "Nevis" is derived from the Spanish "Nuestra Señora de las Nieves" (which means Our Lady of the Snows); the name first appears on maps in the 16th century. Nevis is also known by the sobriquet "Queen of the Caribees", which it earned in the 18th century when its sugar plantations created much wealth for the British.
Nevis is of particular historical significance to Americans because it was the birthplace and early childhood home of Alexander Hamilton. For the British, Nevis is the place where Horatio Nelson was stationed as a young sea captain, and is where he met and married a Nevisian, Frances Nisbet, the young widow of a plantation-owner.
The majority of the approximately 12,000 Nevisians are of primarily African descent, with notable British, Portuguese and Lebanese minority communities. English is the official language, and the literacy rate, 98 percent, is one of the highest in the Western Hemisphere.
In 1498, Christopher Columbus gave the island the name "San Martín" (Saint Martin). However, the confusion of numerous poorly-charted small islands in the Leeward Island chain meant that this name ended up being accidentally transferred to another island, which is still known as Saint-Martin/Sint Maarten.
The current name "Nevis" was derived from a Spanish name "Nuestra Señora de las Nieves" by a process of abbreviation and anglicisation. The Spanish name means Our Lady of the Snows. It is not known who chose this name for the island, but it is a reference to the story of a 4th-century Catholic miracle: a snowfall on the Esquiline Hill in Rome. Presumably the white clouds that usually cover the top of Nevis Peak reminded someone of this story of a miraculous snowfall in a hot climate.
Nevis was part of the Spanish claim to the Caribbean islands, a claim pursued until the Treaty of Madrid (1670), even though there were no Spanish settlements on the island. According to Vincent Hubbard, author of "Swords, Ships & Sugar: History of Nevis", the Spanish ruling caused many of the Arawak groups who were not ethnically Caribs to "be redefined as Caribs overnight". Records indicate that the Spanish enslaved large numbers of the native inhabitants on the more accessible of the Leeward Islands and sent them to Cubagua, Venezuela to dive for pearls. Hubbard suggests that the reason the first European settlers found so few "Caribs" on Nevis is that they had already been rounded up by the Spanish and shipped off to be used as slaves.
Nevis had been settled for more than two thousand years by Amerindian people prior to being sighted by Columbus in 1493. The indigenous people of Nevis during these periods belonged to the Leeward Island Amerindian groups popularly referred to as Arawaks and Caribs, a complex mosaic of ethnic groups with similar culture and language. Dominican anthropologist Lennox Honychurch traces the European use of the term "Carib" for the Leeward Island aborigines to Columbus, who picked it up from the Taínos on Hispaniola. It was not a name the Caribs called themselves. "Carib Indians" was the generic name used for all groups believed to be involved in cannibalistic war rituals, more particularly the consumption of parts of a killed enemy's body.
The Amerindian name for Nevis was "Oualie", land of beautiful waters. The structure of the Island Carib language has been linguistically identified as Arawakan.
In spite of the Spanish claim, Nevis continued to be a popular stop-over point for English and Dutch ships on their way to the North American continent. Captain Bartholomew Gilbert of Plymouth visited the island in 1603, spending two weeks to cut twenty tons of lignum vitae wood. Gilbert sailed on to Virginia to seek out survivors of the Roanoke settlement in what is now North Carolina. Captain John Smith visited Nevis also on his way to Virginia in 1607. This was the voyage which founded Jamestown, the first permanent English settlement in the New World.
On 30 August 1620 James VI and I of Scotland and England asserted sovereignty over Nevis by giving a Royal Patent for colonisation to the Earl of Carlisle. However, actual European settlement did not happen until 1628, when Anthony Hilton moved from nearby Saint Kitts following a murder plot against him. He was accompanied by 80 other settlers, soon to be boosted by a further 100 settlers from London who had originally hoped to settle Barbuda. Hilton became the first Governor of Nevis. After the Treaty of Madrid (1670) between Spain and England, Nevis became the seat of the British colony and the Admiralty Court also sat in Nevis. Between 1675 and 1730, the island was the headquarters for the slave trade for the Leeward Islands, with approximately 6,000–7,000 enslaved West Africans passing through en route to other islands each year. The Royal African Company brought all its ships through Nevis. A 1678 census shows a community of Irish people – 22% of the population – existing as either indentured servants or freemen.
Due to the profitable slave trade and the high quality of Nevisian sugar cane, the island soon became a dominant source of wealth for Great Britain and the slave-owning British plantocracy. When the Leeward Islands were separated from Barbados in 1671, Nevis became the seat of the Leeward Islands colony and was given the nickname "Queen of the Caribees". It remained the colonial capital for the Leeward Islands until the seat was transferred to Antigua for military reasons in 1698. During this period, Nevis was the richest of the British Leeward Islands, outranking larger islands like Jamaica in sugar production in the late 17th century. The wealth of the planters on the island is evident in the tax records preserved in the Calendar of State Papers in the British Colonial Office Public Records, where the amount of tax collected on the Leeward Islands was recorded. The sums recorded for 1676 as "head tax on slaves", a tax payable in sugar, amounted to 384,600 pounds in Nevis, as opposed to 67,000 each in Antigua and Saint Kitts, 62,500 in Montserrat, and 5,500 total in the other five islands. The profits on sugar cultivation in Nevis were enhanced by the fact that the cane juice from Nevis yielded an unusually high amount of sugar: a gallon (3.79 litres) of cane juice from Nevis yielded 24 ounces (0.71 litres) of sugar, whereas a gallon from Saint Kitts yielded 16 ounces (0.47 litres). Twenty percent of the British Empire's total sugar production in 1700 was derived from Nevisian plantations. Exports from West Indian colonies like Nevis were worth more than all the exports from all the mainland Thirteen Colonies of North America combined at the time of the American Revolution.
The enslaved families formed the large labour force required to work the sugar plantations. After the 1650s the supply of white indentured servants began to dry up, due to increased wages in England and less incentive to migrate to the colonies. By the end of the 17th century, the population of Nevis consisted of a small, rich planter elite in control, a marginal population of poor Whites, a great majority of African-descended slaves, and an unknown number of Maroons, escaped slaves living in the mountains. In 1780, 90 percent of the 10,000 people living on Nevis were Black. Some of the Maroons joined with the few remaining Caribs in Nevis to form a resistance force. Memories of the Nevisian Maroons' struggle under the plantation system are preserved in place names such as Maroon Hill, an early centre of resistance.
The great wealth generated by the colonies of the West Indies led to wars among Spain, Britain, and France. The formation of the United States can be said to be a partial by-product of these wars and the strategic trade aims that often ignored North America. Three privateers (William Kidd being one of them) were employed by the British Crown to help protect ships in Nevis' waters.
During the 17th century, the French, based on Saint Kitts, launched many attacks on Nevis, sometimes assisted by the Island Caribs, who in 1667 sent a large fleet of canoes along in support. In the same year a Franco–Dutch invasion fleet was repelled off Nevis by an English fleet. Letters and other records from the era indicate that the English on Nevis hated and feared the Amerindians. In 1674 and 1683 they participated in attacks on Carib villages in Dominica and St. Vincent, in spite of a lack of official approval from the Crown for the attack.
On Nevis, the English built Fort Charles and a series of smaller fortifications to aid in defending the island. This included Saddle Hill Battery, built in 1740 to replace a deodand on Mount Nevis.
In 1706, Pierre Le Moyne d'Iberville, the French Canadian founder of Louisiana in North America, decided to drive the English out of Nevis and thus also stop pirate attacks on French ships; he considered Nevis the region's headquarters for piracy against French trade. During d'Iberville's invasion of Nevis, French buccaneers were used in the front line; they were infamous as ruthless killers after the pillaging of the wars with Spain, during which they had gained a reputation for torturing and murdering non-combatants. In the face of the invading force, the English militiamen of Nevis fled. Some planters burned their plantations rather than letting the French have them, and hid in the mountains. It was the enslaved Africans who held the French at bay, taking up arms to defend their families and the island. The slave quarters had been looted and burned as well, as the main reward promised to the men fighting on the French side was the right to capture as many slaves as possible and resell them in Martinique.
During the fighting, 3,400 enslaved Nevisians were captured and sent off to Martinique, but about 1,000 more, poorly armed and militarily untrained, held the French troops at bay with "murderous fire", according to an eyewitness account by an English militiaman. He wrote that "the slaves' brave behaviour and defence there shamed what some of their masters did, and they do not shrink to tell us so." After 18 days of fighting, the French were driven off the island. Among the Nevisian men, women and children carried away on d'Iberville's ships, six ended up in Louisiana, the first persons of African descent to arrive there.
One consequence of the French attack was a collapsed sugar industry and during the ensuing hardship on Nevis, small plots of land on the plantations were made available to the enslaved families in order to control the loss of life due to starvation. With less profitability for the absentee plantation owners, the import of food supplies for the plantation workers dwindled. Between 1776 and 1783, when the food supplies failed to arrive altogether due to the rebellion in North America, 300–400 enslaved Nevisians starved to death. On 1 August 1834, slavery was abolished in the British Empire. In Nevis, 8,815 slaves were freed. The first Monday in August is celebrated as Emancipation Day and is part of the annual Nevis Culturama festival.
A four-year apprenticeship programme followed the abolition of slavery on the plantations. In spite of the continued use of the labour force, the Nevisian slave owners were paid over £150,000 in compensation by the British Government for the loss of property, whereas the enslaved families received nothing for 200 years of labour. One of the wealthiest planter families in Nevis, the Pinneys of Montravers Plantation, claimed £36,396 (worth close to £1,800,000 today) in compensation for the slaves on the family-owned plantations around the Caribbean.
Because of the early distribution of plots and because many of the planters departed from the island when sugar cultivation became unprofitable, a relatively large percentage of Nevisians already owned or controlled land at emancipation. Others settled on crown land. This early development of a society with a majority of small, landowning farmers and entrepreneurs created a stronger middle class in Nevis than in Saint Kitts, where the sugar industry continued until 2006. Even though the 15 families in the wealthy planter elite no longer control the arable land, Saint Kitts still has a large, landless working class population.
Nevis was united with Saint Kitts and Anguilla in 1882, and they became an associated state with full internal autonomy in 1967, though Anguilla seceded in 1971. Together, Saint Kitts and Nevis became independent on 19 September 1983. On 10 August 1998, a referendum on Nevis to separate from Saint Kitts had 2,427 votes in favour and 1,498 against, falling short of the two-thirds majority needed.
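The arithmetic behind "falling short", computed from the vote totals above:

\[ \frac{2427}{2427 + 1498} = \frac{2427}{3925} \approx 61.8\% \;<\; 66.7\% \ (\text{two-thirds}) \]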
Before 1967, the local government of Saint Kitts was also the government of Nevis and Anguilla. Nevis had two seats and Anguilla one seat in the government. The economic and infrastructural development of the two smaller islands was not a priority to the colonial federal government.
When the hospital in Charlestown was destroyed in a hurricane in 1899, planting of trees in the squares of Saint Kitts and refurbishing of government buildings, also in Saint Kitts, took precedence over the rebuilding of the only hospital in Nevis. After five years without any proper medical facilities, the leaders in Nevis initiated a campaign, threatening to seek independence from Saint Kitts. The British Administrator in Saint Kitts, Charles Cox, was unmoved. He stated that Nevis did not need a hospital, since there had been no significant rise in the number of deaths during the time Nevisians had been without one; therefore, no action was needed on behalf of the government. Besides, Cox continued, the Legislative Council regarded "Nevis and Anguilla as a drag on St. Kitts and would willingly see a separation". Finally, a letter of complaint to the metropolitan British Foreign Office produced results, and the federal government in Saint Kitts was ordered by its superiors in London to take speedy action. The Legislative Council took another five years to consider its options. The final decision by the federal government was not to rebuild the old hospital after all, but instead to convert the old Government House in Nevis into a hospital, named Alexandra Hospital after Queen Alexandra, wife of King Edward VII. A majority of the funds assigned for the hospital could thus be spent on the construction of a new official residence in Nevis.
After d'Iberville's invasion in 1706, records show Nevis' sugar industry in ruins and a decimated population begging the English Parliament and relatives for loans and monetary assistance to stave off island-wide starvation. The sugar industry on the island never fully recovered, and during the general depression that followed the loss of the West Indian sugar monopoly, Nevis fell on hard times and became one of the poorest islands in the region. The island remained poorer than Saint Kitts until 1991, when the fiscal performance of Nevis edged ahead of that of Saint Kitts for the first time since the French invasion.
Electricity was introduced in Nevis in 1954 when two generators were shipped in to provide electricity to the area around Charlestown. In this regard, Nevis fared better than Anguilla, where there were no paved roads, no electricity and no telephones until 1967. However, electricity did not become available island-wide on Nevis until 1971.
An ambitious infrastructure development programme was introduced in the early 2000s which included a transformation of the Charlestown port, construction of a new deep-water harbour, resurfacing and widening the Island Main Road, a new airport terminal and control tower, and a major airport expansion, which required the relocation of an entire village in order to make room for the runway extension.
Modernised classrooms and better-equipped schools, as well as improvements in the educational system, have contributed to a leap in academic performance on the island. The pass rate among the Nevisian students sitting for the Caribbean Examination Council (CXC) exams, the Cambridge General Certificate of Education Examination (GCE) and the Caribbean Advance Proficiency Examinations is now consistently among the highest in the English-speaking Caribbean.
The formation of the island began in mid-Pliocene times, approximately 3.45 million years ago. Nine distinct eruptive centres from different geological ages, ranging from mid-Pliocene to Pleistocene, have contributed to the formation. No single model of the island's geological evolution can, therefore, be ascertained.
Nevis Peak (984 m) is the dormant remnant of one of these ancient stratovolcanoes. The last activity took place about 100,000 years ago, but active fumaroles and hot springs are still found on the island, the most recent formed in 1953. The composite cone of the Nevis volcano has two overlapping summit craters that are partially filled by a lava dome, created in recent, pre-Columbian time. Pyroclastic flows and mudflows were deposited on the lower slopes of the cone simultaneously. Nevis Peak is located on the outer crater rim. Four other lava domes were constructed on the flanks of the volcano: one on the northeast flank (Madden's Mount), one on the eastern flank (Butlers Mountain), one on the northwest coast (Mount Lily) and one on the south coast (Saddle Hill, with a height of 375 metres). The southernmost point on the island is Dogwood Point, which is also the southernmost point of the Federation of Saint Kitts and Nevis.
During the last ice age, when the sea level was 60 m lower, the three islands of Saint Kitts, Nevis and Sint Eustatius (also known as Statia) were connected as one island. Saba, however, is separated from these three by a deeper channel.
There are visible wave-breaking reefs along the northern and eastern shorelines. To the south and west, the reefs are located in deeper water and are suitable for scuba diving. The most developed beach on Nevis is the 6.5 km long Pinney's Beach, on the western or Caribbean coast. There are sheltered swimming beaches in Oualie Bay and Cades Bay. The eastern coast of the island faces the Atlantic Ocean and can have strong surf along parts of the shore which are unprotected by fringing coral reefs. The colour of the sand on the beaches of Nevis is variable: on many of the larger beaches the sand is yellow-grey in colour, but some beaches on the southern coast have darker, reddish, or even black sand. Under a microscope it becomes clear that Nevis sand is a mixture of tiny fragments of coral, many foraminifera, and small crystals of the various mineral constituents of the volcanic rock of which the island is made.
Seven volcanic centers make up Nevis. These include Round Hill (3.43 Ma), Cades Bay (3.22 Ma), Hurricane Hill (2.7 Ma), Saddle Hill (1.8 Ma), Butlers Mountain (1.1 Ma), Red Cliff and Nevis Peak (0.98 Ma). These are mainly andesite and dacite lava domes, with associated block and ash flows, plus lahars. Nevis Peak has the highest elevation, at 984 m. Cades Bay and Farm Estate Soufriere are noted areas of hydrothermal activity.
Water has been piped since 1911 from a spring called the "Source", located 1800 feet up the mountain, to storage tanks at Rawlins Village, and since 1912, to Butler's Village. Additional drinking water comes from Nelson's Spring near Cotton Ground and Bath Spring. Groundwater has been extracted since the 1990s, and mixed with the Source water.
During the 17th and 18th centuries, massive deforestation was undertaken by the planters as the land was initially cleared for sugar cultivation. This intense land exploitation by the sugar and cotton industry lasted almost 300 years, and greatly changed the island's ecosystem.
In some places along the windswept southeast or "Windward" coast of the island, the landscape is radically altered compared with how it was in pre-colonial times. Due to extreme land erosion, the topsoil was swept away, and in some places at the coast sheer cliffs have developed.
Thick forest once covered the eastern coastal plain, where the Amerindians built their first settlements during the Aceramic period, complementing the ecosystem surrounding the coral reef just offshore. It was the easy access to fresh water on the island and the rich food source represented by the ocean life sheltered by the reef that made it feasible for the Amerindians to settle this area around 600 BC. With the loss of the natural vegetation, the balance in runoff nutrients to the reef was disturbed, eventually causing as much as 80 percent of the large eastern fringing reef to become inactive. As the reef broke apart, it, in turn, provided less protection for the coastline.
During times of maximum cultivation, sugar cane fields stretched from the coastline of Nevis up to an altitude at which the mountain slopes were too steep and rocky to farm. Nonetheless, once the sugar industry was finally abandoned, vegetation on the leeward side of the island regrew reasonably well, as scrub and secondary forest.
Nevis has several natural freshwater springs (including Nelson's Spring). The island also has numerous non-potable volcanic hot springs, including most notably the Bath Spring near Bath village, just south of the capital Charlestown.
After heavy rains, powerful rivers of rainwater pour down the numerous ravines (known as ghauts). When the water reaches the coastline, the corresponding coastal ponds, both freshwater and brackish, fill to capacity and beyond, spilling over into the sea.
With modern development, the existing freshwater springs are no longer enough to supply water to the whole island. The water supply now comes mostly from Government wells. The major source of potable water for the island is groundwater, obtained from 14 active wells. Water is pumped from the wells, stored and allowed to flow by gravity to the various locations.
The climate is tropical with little variation, tempered all year round (but particularly from December through February) by the steady north-easterly winds, called the trade winds. There is a slightly hotter and somewhat rainier season from May to November.
Nevis lies within the track area of tropical storms and occasional hurricanes. These storms can develop between August and October. This time of year has the heaviest rainfalls.
The official currency is the Eastern Caribbean dollar (EC$), which is shared by eight other territories in the region.
The European Commission's Delegation in Barbados and the Eastern Caribbean estimates the annual per capita Gross Domestic Product (GDP) on Nevis to be about 10 percent higher than on St. Kitts.
The major source of revenue for Nevis today is tourism. During the 2003–2004 season, approximately 40,000 tourists visited Nevis. A five-star hotel (the Four Seasons Resort Nevis, West Indies), four exclusive restored plantation inns, and several smaller hotels, including Oualie Beach Resort, are currently in operation. Larger developments along the west coast have recently been approved and are in the process of being developed.
The introduction of secrecy legislation has made offshore financial services a rapidly growing economic sector in Nevis. Incorporation of companies, international insurance and reinsurance, and the presence of several international banks, trust companies and asset-management firms have boosted the economy. During 2005, the Nevis Island Treasury collected $94.6 million in annual revenue, compared to $59.8 million during 2001. In 1998, 17,500 international banking companies were registered in Nevis. Registration and annual filing fees paid in 1999 by these entities amounted to over 10 percent of Nevis' revenues. The offshore financial industry gained importance during the financial disaster of 1999, when Hurricane Lenny damaged the major resort on the island, causing the hotel to be closed down for a year and 400 of the 700 employees to be laid off.
In 2000, the Financial Action Task Force, part of the Organisation for Economic Co-operation and Development (OECD), issued a blacklist of 35 nations which were said to be non-cooperative in the campaign against tax evasion and money laundering. The list included the Federation of Saint Kitts and Nevis.
The political structure for the Federation of Saint Kitts and Nevis is based on the Westminster parliamentary system, but it is a unique structure in that Nevis has its own unicameral legislature, consisting of Her Majesty's representative (the Deputy Governor General) and the members of the Nevis Island Assembly. Nevis has considerable autonomy in its legislative branch. The constitution actually empowers the Nevis Island Legislature to make laws that cannot be abrogated by the National Assembly. In addition, Nevis has a constitutionally protected right to secede from the federation, should a two-thirds majority of the island's population vote for independence in a local referendum. Section 113.(1) of the constitution states: "The Nevis Island Legislature may provide that the island of Nevis shall cease to be federated with the island of Saint Christopher and accordingly that this Constitution shall no longer have effect in the island of Nevis."
Nevis has its own premier and its own government, the Nevis Island Administration. It collects its own taxes and has a separate budget, with a current account surplus. According to a statement released by the Nevis Ministry of Finance in 2005, Nevis had one of the highest growth rates in gross national product and per capita income in the Caribbean at that point.
Nevis elections are scheduled every five years. The Nevis elections of 2013, called on 23 January 2013, were won by the party in opposition, the Concerned Citizens Movement (CCM), led by Vance Amory. The CCM won three of the five seats in the Nevis Island Assembly, while the incumbent party, the Nevis Reformation Party (NRP), won two.
In the federal elections of 2010, the CCM won two of the three Nevis assigned Federal seats, while the NRP won one. Of the eight Saint Kitts assigned federal seats, the St Kitts-Nevis Labour Party won six and the People's Action Movement (PAM) two.
Joseph Parry, leader of the opposition, has indicated that he favours constitutional reform over secession for Nevis. His party, the NRP, has historically been the strongest and most ardent proponent for Nevis independence; the party came to power with secession as the main campaign issue. In 1975, the NRP manifesto declared that: "The Nevis Reformation Party will strive at all costs to gain secession for Nevis from St. Kitts – a privilege enjoyed by the island of Nevis prior to 1882."
A cursory proposal for constitutional reform was presented by the NRP in 1999, but the issue was not prominent in the 2006 election campaign and it appears a detailed proposal has yet to be worked out and agreed upon within the party.
In "Handbook of Federal Countries" published by Forum of Federations, the authors consider the constitution problematic because it does not "specifically outline" the federal financial arrangements or the means by which the central government and Nevis Island Administration can raise revenue: "In terms of the NIA, the constitution only states (in s. 108(1)) that 'all revenues...raised or received by the Administration...shall be paid into and form a fund styled the Nevis Island Consolidated Fund.' [...] Section 110(1) states that the proceeds of all 'takes' collected in St. Kitts and Nevis under any law are to be shared between the federal government and the Nevis Island Administration based on population. The share going to the NIA, however, is subject to deductions (s. 110(2)), such as the cost of common services and debt charges, as determined by the Governor-General (s.110(3)) on the advice of the Prime Minister who can also take advice from the Premier of Nevis (s.110(4))."
According to a 1995 report by the Commonwealth Observer Group of the Commonwealth Secretariat, "the federal government is also the local government of St Kitts and this has resulted in a perception among the political parties in Nevis that the interests of the people of Nevis are being neglected by the federal government which is more concerned with the administration of St Kitts than with the federal administration."
Simeon Daniel, Nevis' first Premier and former leader of the Nevis Reformation Party (NRP) and Vance Amory, Premier and leader of the Concerned Citizens Movement (CCM), made sovereign independence for Nevis from the Federation of Saint Kitts and Nevis part of their parties' agenda. Since independence from the United Kingdom in 1983, the Nevis Island Administration and the Federal Government have been involved in several conflicts over the interpretation of the new constitution which came into effect at independence. During an interview on Voice of America in March 1998, repeated in a government-issued press release headlined "PM Douglas Maintains 1983 Constitution is Flawed", Prime Minister Denzil Douglas called the constitution a "recipe for disaster and disharmony among the people of both islands".
A crisis developed in 1984 when the People's Action Movement (PAM) won a majority in the Federal elections and temporarily ceased honouring the Federal Government's financial obligations to Nevis. Consequently, cheques issued by the Nevis Administration were not honoured by the Bank, public servants in Nevis were not paid on time and the Nevis Island Administration experienced difficulties in meeting its financial obligations.
There is also substantial support in Nevis for British Overseas Territory status similar to Anguilla's, which was formerly the third of the tri-state Saint Christopher-Nevis-Anguilla colony.
In 1996, four new bills were introduced in the National Assembly in Saint Kitts, one of which made provisions to have revenue derived from activities in Nevis paid directly to the treasury in Saint Kitts instead of to the treasury in Nevis. Another bill, The Financial Services Committee Act, contained provisions that all investments in Saint Kitts and Nevis would require approval by an investment committee in Saint Kitts. This was controversial, because ever since 1983 the Nevis Island Administration had approved all investments for Nevis, on the basis that the constitution vests legislative authority for industries, trades and businesses and economic development in Nevis to the Nevis Island Administration.
All three representatives from Nevis, including the leader of the opposition in the Nevis Island Assembly, objected to the introduction of these bills into the National Assembly in Saint Kitts, arguing that the bills would affect the ability of Nevis to develop its offshore financial services sector and would be detrimental to the Nevis economy. All the representatives in opposition in the National Assembly shared the conviction that the bills, if passed into law, would be unconstitutional and would undermine the constitutional and legislative authority of the Nevis Island Administration, as well as result in the destruction of the economy of Nevis.
The constitutional crisis initially developed when the newly appointed Attorney General refused to grant permission for the Nevis Island Administration to assert its legal right in the Courts. After a decision of the High Court in favour of the Nevis Island Administration, the Prime Minister gave newspaper interviews stating that he "refused to accept the decision of the High Court". Due to the deteriorating relationship between the Nevis Island Administration and the Federal Government, a Constitutional Committee was appointed in April 1996 to advise on whether or not the present constitutional arrangement between the islands should continue. The committee recommended constitutional reform and the establishment of an island administration for Saint Kitts, separate from the Federal Government.
The Federal Government in Saint Kitts fills both functions today, and Saint Kitts does not have an equivalent to the Nevis Island Administration. Disagreements between the political parties in Nevis, and between the Nevis Island Administration and the Federal Government, have prevented the recommendations of the constitutional committee from being implemented. The problematic political arrangement between the two islands therefore continues to date.
Nevis has continued developing its own legislation, such as The Nevis International Insurance Ordinance and the Nevis International Mutual Funds Ordinance of 2004, but calls for secession are often based on concerns that the legislative authority of the Nevis Island Administration might be challenged again in the future.
The issues of political dissension between Saint Kitts and Nevis are often centred around perceptions of imbalance in the economic structure. As noted by many scholars, Nevisians have often referred to a structural imbalance in Saint Kitts' favour in how funds are distributed between the two islands, and this issue has made the movement for Nevis secession a constant presence in the island's political arena, with many articles appearing in the local press expressing concerns such as those compiled by Everton Powell in "What Motivates Our Call for Independence".
A referendum on secession from the Federation of Saint Kitts and Nevis was held in 1998. Although 62% voted in favour of secession, this fell short of the two-thirds majority necessary for the referendum to succeed.
The island of Nevis is divided into five administrative subdivisions called parishes, each of which has an elected representative in the Nevis Island Assembly. The division of this almost round island into parishes was done in a circular sector pattern, so each parish is shaped like a pie slice, reaching from the highest point of Nevis Peak down to the coastline.
The parishes have double names, for example Saint George Gingerland. The first part of the name is the name of the patron saint of the parish church, and the second part of the name is the traditional common name of the parish. Often the parishes are referred to simply by their common names. The religious part of a parish name is sometimes written or pronounced in the possessive: Saint George's Gingerland.
The five parishes of Nevis are Saint George Gingerland, Saint James Windward, Saint John Figtree, Saint Paul Charlestown, and Saint Thomas Lowland.
"Culturama", the annual cultural festival of Nevis, is celebrated during the Emancipation Day weekend, the first week of August. The festivities include many traditional folk dances, such as the masquerade, the Moko jumbies on stilts, Cowboys and Indians, and Plait the Ribbon, a May pole dance. The celebration was given a more organised form in 1974, including a Miss Culture Show and a Calypso Competition, as well as drama performances, old fashion Troupes (including Johnny Walkers, Giant and Spear, Bulls, Red Cross and Blue Ribbon), arts and crafts exhibitions and recipe competitions. According to the Nevis Department of Culture, the aim is to protect and encourage indigenous folklore, in order to make sure that the uniquely Caribbean culture can "reassert itself and flourish".
The official language is English, yet Saint Kitts Creole (known on the island as 'Nevisian' or 'Nevis creole') is also widely spoken. The local creole is actually more widely spoken on Nevis than on the neighbouring island.
Nevisian culture has since the 17th century incorporated African, European and East Indian cultural elements, creating a distinct Afro-Caribbean culture. Several historical anthropologists have done field research on Nevis and in Nevisian migrant communities in order to trace the creation and constitution of a Nevisian cultural community. Karen Fog Olwig published her research about Nevis in 1993, writing that the areas where the Afro-Caribbean traditions were especially strong and flourishing relate to kinship and subsistence farming. However, she adds, Afro-Caribbean cultural impulses were not recognised or valued in the colonial society and were therefore often expressed through Euro-Caribbean cultural forms. Examples of European forms appropriated to express Afro-Caribbean culture are the Nevisian and Kittitian "Tea Meetings" and "Christmas Sports". According to anthropologist Roger D. Abrahams, these traditional performance art forms are a "Nevisian approximation of British performance codes, techniques, and patterns". He writes that the Tea Meetings were staged as theatrical "battles between decorum and chaos", with decorum represented by the ceremony chairmen and chaos by the hecklers in the audience, and a diplomatic King or Queen presiding over the battle to ensure fairness.
The Christmas Sports included a form of comedy and satire based on local events and gossip. They were historically an important part of the Christmas celebrations in Nevis, performed on Christmas Eve by small troupes consisting of five or six men accompanied by string bands from different parts of the island. One of the men in the troupe was dressed as a woman, playing all the female parts in the dramatisations. The troupes moved from yard to yard to perform their skits, using props, face paint and costumes to play the roles of well-known personalities in the community. Examples of gossip about undesired behaviour that could surface in the skits for comic effect were querulous neighbours, adulterous affairs, planters mistreating workers, domestic disputes or abuse, crooked politicians and any form of stealing or cheating experienced in the society. Even though no names were mentioned in these skits, the audience would usually be able to guess who the heckling message in the troupe's dramatised portrayals was aimed at, as it was played out right on the person's own front yard. The acts thus functioned as social and moral commentaries on current events and behaviours in Nevisian society. This particular form is called "Bazzarding" by many locals. Abrahams theorises that Christmas Sports are rooted in the pre-emancipation Christmas and New Year holiday celebrations, when the enslaved population had several days off.
American folklorist and musicologist Alan Lomax visited Nevis in 1962 in order to conduct long-term research into the black folk culture of the island. His field trip to Nevis and surrounding islands resulted in the anthology "Lomax Caribbean Voyage" series.
Among the Nevisians recorded were chantey-singing fishermen in a session organised in a rum shop in Newcastle; Santoy, the Calypsonian, performing calypsos by Nevisian ballader and local legend Charles Walters to guitar and cuatro; and string bands, fife players and drummers from Gingerland, performing quadrilles.
The island is also known for "Jamband music", the kind of music performed by local bands during the "Culturama Festival", which is key to "Jouvert" dancing. The sounds of the so-called "Iron Band" are also popular within the culture: many locals come together using old pans, sinks, or other items of any sort, which they use to create sounds and music. This form of music is played throughout the villages during the Christmas and carnival seasons.
A series of earthquakes during the 18th century severely damaged most of the colonial-era stone buildings of Charlestown. The Georgian stone buildings in Charlestown that are visible today had to be partially rebuilt after the earthquakes, and this led to the development of a new architectural style, consisting of a wooden upper floor over a stone ground floor; the new style resisted earthquake damage much more effectively.
Two famous Nevisian buildings from the 18th century are Hermitage Plantation, built of lignum vitae wood in 1740, the oldest surviving wooden house still in use in the Caribbean today, and the Bath Hotel, the first hotel in the Caribbean, a luxury hotel and spa built by John Huggins in 1778. The soothing waters of the hotel's hot spring and the lively social life on Nevis attracted many famous Europeans including Antigua-based Admiral Nelson, and Prince William Henry, Duke of Clarence, (future William IV of the United Kingdom), who attended balls and private parties at the Bath Hotel. Today, the building serves as government offices, and there are two outdoor hot-spring bathing spots which were specially constructed in recent years for public use.
An often-repeated legend suggests that a destructive 1680 or 1690 earthquake and tsunami destroyed the buildings of the original capital, Jamestown, on the west coast. Folk tales say that the town sank beneath the ocean, and the tsunami is blamed for the escape of the (possibly fictional) pirate Red Legs Greaves. However, archaeologists from the University of Southampton who have done excavations in the area have found no evidence to indicate that the story is true. They state that this story may originate with an over-excited Victorian letter writer sharing somewhat exaggerated accounts of his exotic life in the tropical colony with a British audience back home. One such letter recounts that so much damage was done to the town that it was completely evacuated, and was engulfed by the sea. Early maps do not, however, actually show a settlement called "Jamestown", only "Morton's Bay", and later maps show that all that was left of Jamestown/Morton's Bay in 1818 was a building labelled "Pleasure House". Very old bricks that wash up on Pinney's Beach after storms may have contributed to this legend of a sunken town; however, these bricks are thought to be dumped ballast from 17th- and 18th-century sailing ships. | https://en.wikipedia.org/wiki?curid=21503 |
Nicole Kidman
Nicole Mary Kidman (born 20 June 1967) is an Australian actress, philanthropist and producer. Her awards include an Academy Award, two Primetime Emmy Awards, and four Golden Globe Awards. She was listed among the highest-paid actresses in the world in 2006, 2018, and 2019. "Time" magazine twice named her one of the 100 most influential people in the world, in 2004 and 2018.
Kidman began her acting career in Australia with the 1983 films "Bush Christmas" and "BMX Bandits". Her breakthrough came in 1989 with the thriller film "Dead Calm" and the miniseries "Bangkok Hilton". In 1990, she made her Hollywood debut in the racing film "Days of Thunder", opposite Tom Cruise. She went on to achieve wider recognition with lead roles in "Far and Away" (1992), "Batman Forever" (1995), "To Die For" (1995) and "Eyes Wide Shut" (1999). Kidman won the Academy Award for Best Actress for portraying the writer Virginia Woolf in the drama "The Hours" (2002). Her other Oscar-nominated roles were as a courtesan in the musical "Moulin Rouge!" (2001) and emotionally troubled mothers in the dramas "Rabbit Hole" (2010) and "Lion" (2016).
Kidman's other film credits include "The Others" (2001), "Cold Mountain" (2003), "Dogville" (2003), "Birth" (2004), "The Stepford Wives" (2004), "Australia" (2008), "The Paperboy" (2012), "Paddington" (2014), "Destroyer" (2018), "Aquaman" (2018) and "Bombshell" (2019). Her television roles include two projects for HBO, the biopic "Hemingway & Gellhorn" (2012) and the drama series "Big Little Lies" (2017–2019). The latter earned Kidman the Primetime Emmy Awards for Outstanding Lead Actress and Outstanding Limited Series.
Kidman has been a Goodwill ambassador for UNICEF since 1994 and for UNIFEM since 2006. In 2006, she was appointed Companion of the Order of Australia. Since she was born to Australian parents in Hawaii, Kidman has dual citizenship of Australia and the United States. In 2010, she founded the production company Blossom Films. She has been married to singer Keith Urban since 2006, and was earlier married to Tom Cruise.
Kidman was born on 20 June 1967, in Honolulu, Hawaii, while her Australian parents were temporarily in the United States on student visas. Her mother, Janelle Ann (née Glenny), is a nursing instructor who edited her husband's books and was a member of the Women's Electoral Lobby; her father, Antony Kidman, was a biochemist, clinical psychologist and author. Kidman's ancestry includes Irish and Scottish heritage.
Being born in Hawaii, she was given the Hawaiian name "Hōkūlani", meaning "heavenly star". The inspiration came from a baby elephant born around the same time at the Honolulu Zoo.
At the time of Kidman's birth, her father was a graduate student at the University of Hawaiʻi at Mānoa; he later became a visiting fellow at the National Institute of Mental Health in the United States. Opposed to the war in Vietnam, Kidman's parents participated in anti-war protests while living in Washington, D.C. The family returned to Australia when Kidman was four; her mother now lives on Sydney's North Shore. Kidman has a younger sister, Antonia Kidman, a journalist and TV presenter.
Kidman grew up in Sydney and attended Lane Cove Public School and North Sydney Girls' High School. She was enrolled in ballet at three and showed her natural talent for acting in her primary and high school years. She says that she was first inspired to become an actress upon seeing Margaret Hamilton's performance as the Wicked Witch of the West in "The Wizard of Oz". Kidman has revealed that she was timid as a child, saying, "I am very shy – really shy – I even had a stutter as a kid, which I slowly got over, but I still regress into that shyness. So I don't like walking into a crowded restaurant by myself; I don't like going to a party by myself."
She initially studied at the Phillip Street Theatre in Sydney, alongside Naomi Watts, who had attended the same high school. She also attended the Australian Theatre for Young People, where she took up drama, mime and performing in her teens, finding acting to be a refuge. Owing to her fair skin and naturally red hair, the young Kidman had to avoid the Australian sun and often rehearsed in the halls of the theatre. A regular at the Phillip Street Theatre, she received praise and encouragement to pursue acting full-time.
In 1983, aged 16, Kidman made her film debut in a remake of the Australian holiday season favourite "Bush Christmas". By the end of 1983, she had a supporting role in the television series "Five Mile Creek". In 1984, her mother was diagnosed with breast cancer, which caused Kidman to halt her acting work temporarily while she studied massage so she could help her mother with physical therapy. She began gaining popularity in the mid-1980s after appearing in several film roles, including "BMX Bandits" (1983), "Watch the Shadows Dance" (1987, also known as "Nightmaster"), and the romantic comedy "Windrider" (1986), which earned Kidman attention due to her racy scenes. Also during the decade, she appeared in several Australian productions, including the soap opera "A Country Practice" and the 1987 miniseries "Vietnam", and made guest appearances on Australian television programs and TV movies.
In 1988, Kidman appeared in "Emerald City", based on the play of the same name; the Australian film earned her an Australian Film Institute award for Best Supporting Actress. Kidman next starred with Sam Neill in "Dead Calm" (1989) as Rae Ingram, the wife of a naval officer. The thriller brought Kidman to international recognition; "Variety" commented: "Throughout the film, Kidman is excellent. She gives the character of Rae real tenacity and energy." Critic Roger Ebert, meanwhile, noted the excellent chemistry between the leads, stating, "Kidman and Zane do generate real, palpable hatred in their scenes together." She followed that up with the Australian miniseries "Bangkok Hilton". She next starred alongside her then-boyfriend and future husband, Tom Cruise, in the 1990 auto racing film "Days of Thunder", as a young doctor who falls in love with a NASCAR driver. It was Kidman's American debut and was among the highest-grossing films of the year.
In 1991, she co-starred with Thandie Newton and former classmate Naomi Watts in the Australian independent film "Flirting". They portrayed high school girls in this coming of age story, which won the Australian Film Institute Award for Best Film. That same year, her work in the film "Billy Bathgate" earned Kidman her first Golden Globe Award nomination, for Best Supporting Actress. "The New York Times", in its film review, called her "a beauty with, it seems, a sense of humor". The following year, she and Cruise re-teamed for Ron Howard's Irish epic "Far and Away" (1992), which was a modest critical and commercial success. In 1993, she starred in the thriller "Malice" opposite Alec Baldwin and the drama "My Life" opposite Michael Keaton.
In 1995, Kidman played Dr. Chase Meridian, the damsel in distress, in the superhero film "Batman Forever", opposite Val Kilmer as the film's title character. The same year, she starred in Gus Van Sant's critically acclaimed dark comedy "To Die For", in which she played the murderous newscaster Suzanne Stone. Of Kidman's Golden Globe Award-winning performance, Mick LaSalle of the "San Francisco Chronicle" said "[she] brings to the role layers of meaning, intention and impulse. Telling her story in close-up – as she does throughout the film – Kidman lets you see the calculation, the wheels turning, the transparent efforts to charm that succeed in charming all the same." Kidman next appeared, alongside Barbara Hershey and John Malkovich, in "The Portrait of a Lady" (1996), based on the novel of the same name, and starred in "The Peacemaker" (1997) as White House nuclear expert Dr. Julia Kelly, opposite George Clooney. The latter film grossed US$110 million worldwide. Kidman then starred in the comedy "Practical Magic" (1998) with Sandra Bullock, as two witch sisters who face a curse which threatens to prevent them from ever finding lasting love. While the film opened atop the chart on its North American opening weekend, it flopped at the box office. She returned to her work on stage the same year in the David Hare play "The Blue Room", which opened in London.
In 1999, Kidman reunited with then husband, Tom Cruise, to portray a Manhattan couple on a sexual odyssey, in "Eyes Wide Shut", the final film of director Stanley Kubrick. It was subject to censorship controversies due to the explicit nature of its sex scenes. After a brief hiatus and a highly publicised divorce from Cruise, Kidman returned to the screen to play a mail-order bride in the British-American drama "Birthday Girl". In 2001, Kidman played the cabaret actress and courtesan Satine in Baz Luhrmann's musical "Moulin Rouge!", opposite Ewan McGregor. Her performance and her singing received positive reviews; Paul Clinton of "CNN.com" called it her best work since "To Die For", and wrote "[she] is smoldering and stunning as Satine. She moves with total confidence throughout the film [...] Kidman seems to specialize in 'ice queen' characters, but with Satine, she allows herself to thaw, just a bit." Subsequently, Kidman received her second Golden Globe Award, for Best Actress in a Motion Picture Musical or Comedy, as well as many other acting awards and nominations. She also received her first Academy Award nomination, for Best Actress.
Kidman also starred in Alejandro Amenábar's horror film "The Others" (2001), as Grace Stewart, a mother living in the Channel Islands during World War II who suspects her house is haunted. Grossing over US$210 million worldwide, the film also earned several Goya Award nominations, including a Best Actress nomination for Kidman. She also received her second BAFTA Award nomination and her fifth Golden Globe Award nomination. Roger Ebert commented that "Alejandro Amenábar has the patience to create a languorous, dreamy atmosphere, and Nicole Kidman succeeds in convincing us that she is a normal person in a disturbing situation, and not just a standard-issue horror movie hysteric." Around this time, Kidman was named the World's Most Beautiful Person by "People" magazine.
In 2002, Kidman won critical praise for her portrayal of Virginia Woolf in Stephen Daldry's "The Hours", which also starred Meryl Streep and Julianne Moore. Kidman famously wore a prosthetic nose that made her almost unrecognisable as the author, portraying Woolf during her time in 1920s England and her bouts with depression and mental illness while trying to write her novel "Mrs. Dalloway". The film earned positive notices and several nominations, including an Academy Award nomination for Best Picture. "The New York Times" wrote that, "Ms. Kidman, in a performance of astounding bravery, evokes the savage inner war waged by a brilliant mind against a system of faulty wiring that transmits a searing, crazy static into her brain". Kidman won numerous critics' awards, including her first BAFTA Award, third Golden Globe Award, and the Academy Award for Best Actress. As the first Australian actress to win an Academy Award, Kidman made a teary acceptance speech about the importance of art, even during times of war, saying, "Why do you come to the Academy Awards when the world is in such turmoil? Because art is important. And because you believe in what you do and you want to honour that, and it is a tradition that needs to be upheld."
Following her Oscar win, Kidman appeared in three very different films in 2003. The first, a leading role in "Dogville", by Danish director Lars von Trier, was an experimental film set on a bare soundstage. Though the film divided critics in the United States, Kidman still earned praise for her performance. Peter Travers of "Rolling Stone" magazine stated: "Kidman gives the most emotionally bruising performance of her career in Dogville, a movie that never met a cliche it didn't stomp on." The second was an adaptation of Philip Roth's novel "The Human Stain", opposite Anthony Hopkins. Her third film was Anthony Minghella's war drama "Cold Mountain". Kidman appeared opposite Jude Law and Renée Zellweger, playing Southerner Ada Monroe, who is in love with Law's character but separated from him by the Civil War. "TIME" magazine wrote, "Kidman takes strength from Ada's plight and grows steadily, literally luminous. Her sculptural pallor gives way to warm radiance in the firelight". The film garnered several award nominations and wins for its actors; Kidman received her sixth Golden Globe Award nomination at the 61st Golden Globe Awards for Best Actress.
In 2004, she appeared in the film "Birth", which attracted controversy over a scene in which Kidman's character shares a bath with her co-star, 10-year-old Cameron Bright. At a press conference at the Venice Film Festival, Kidman addressed the controversy saying, "It wasn't that I wanted to make a film where I kiss a 10-year-old boy. I wanted to make a film where you understand love". Kidman earned her seventh Golden Globe nomination, for Best Actress – Motion Picture Drama. That same year, she appeared as a successful producer in the black comedy-science-fiction film "The Stepford Wives", a remake of the 1975 film of the same name, directed by Frank Oz. In 2005, Kidman appeared opposite Sean Penn in the Sydney Pollack thriller "The Interpreter", playing UN translator Silvia Broome, and with Will Ferrell in the romantic comedy "Bewitched", based on the 1960s TV sitcom of the same name. While neither film fared well in the United States, both were international successes. Kidman and Ferrell earned the Razzie Award for Worst Screen Couple.
In conjunction with her success in the film industry, Kidman became the face of the "Chanel No. 5" perfume brand. She starred in a campaign of television and print ads with Rodrigo Santoro, directed by "Moulin Rouge!" director Baz Luhrmann, to promote the fragrance during the holiday seasons of 2004, 2005, 2006, and 2008. The three-minute commercial produced for "Chanel No. 5" made Kidman the record holder for the most money paid per minute to an actor, after she reportedly earned US$12 million for the advert. During this time, Kidman was also listed as the 45th most powerful celebrity on the 2005 "Forbes" Celebrity 100 list. She made a reported US$14.5 million in 2004–2005. On "People" magazine's list of 2005's highest-paid actresses, Kidman was second behind Julia Roberts, with a US$16–17 million per-film price tag. In 2007, Nintendo announced that Kidman would be the new face of its advertising campaign for the Nintendo DS game "More Brain Training" in the European market.
In 2006, Kidman portrayed photographer Diane Arbus in the biographical film "Fur", opposite Robert Downey Jr., and lent her voice to the animated film "Happy Feet", which grossed over US$384 million worldwide. In 2007, she starred in the science-fiction movie "The Invasion" directed by Oliver Hirschbiegel, a remake of the 1956 "Invasion of the Body Snatchers", and starred opposite Jennifer Jason Leigh and Jack Black in Noah Baumbach's comedy-drama "Margot at the Wedding", which earned her a Satellite Award nomination for Best Actress – Musical or Comedy. She also starred in the fantasy-adventure, "The Golden Compass" (2007), playing the villainous Marisa Coulter.
In 2008, she reunited with "Moulin Rouge!" director Baz Luhrmann in the Australian period film "Australia", set in the remote Northern Territory during the Japanese attack on Darwin during World War II. Kidman played opposite Hugh Jackman as an Englishwoman feeling overwhelmed by the continent. The acting was praised and the movie was a box office success worldwide. Kidman appeared in the 2009 Rob Marshall musical "Nine", portraying the Federico Fellini-like character's muse, Claudia Jenssen, with fellow Oscar winners Daniel Day-Lewis, Judi Dench, Marion Cotillard, Penélope Cruz and Sophia Loren. Kidman, whose screen time was brief compared to the other actresses, performed the musical number "Unusual Way", alongside Day-Lewis. The film received several Golden Globe Award and Academy Award nominations, and earned Kidman a fourth Screen Actors Guild Award nomination, as part of the Outstanding Performance by a Cast in a Motion Picture.
In 2010, Kidman starred with Aaron Eckhart in the film adaptation of the Pulitzer Prize-winning play "Rabbit Hole", for which she vacated her role in the Woody Allen picture "You Will Meet a Tall Dark Stranger". Her portrayal of a grieving mother in the film earned her critical acclaim, along with nominations for the Academy Award, Golden Globe Award, and Screen Actors Guild Award. She lent her voice to a promotional video supporting Australia's bid to host the 2018 FIFA World Cup. In 2011, she starred alongside Nicolas Cage in director Joel Schumacher's action-thriller "Trespass", with the stars playing a married couple taken hostage, and appeared with Adam Sandler and Jennifer Aniston in Dennis Dugan's romantic comedy "Just Go with It", as a trophy wife.
In 2012, Kidman and Clive Owen starred in "Hemingway & Gellhorn", an HBO film about Ernest Hemingway and his relationship with Martha Gellhorn. In Lee Daniels' adaptation of the Pete Dexter novel "The Paperboy" (2012), she portrayed death row groupie Charlotte Bless, and performed sex scenes that she claims not to have remembered until seeing the finished film. The film competed at the 2012 Cannes Film Festival, and Kidman's performance drew nominations for the Screen Actors Guild Award and the Saturn Award for Best Supporting Actress, and gave Kidman her second Golden Globe Award nomination for Best Supporting Actress and her tenth nomination overall. In 2012, Kidman's audiobook recording of Virginia Woolf's "To the Lighthouse" was released by Audible.com. Kidman starred as an unstable mother in Park Chan-wook's "Stoker" (2013), to a positive response and a Saturn Award nomination for Best Supporting Actress. In April 2013, she was selected as a member of the main competition jury at the 2013 Cannes Film Festival.
In 2014, Kidman starred in the title role of the biographical film "Grace of Monaco", which chronicles the 1962 crisis in which Charles de Gaulle blockaded the tiny principality, angered by Monaco's status as a tax haven for wealthy French subjects, while Kelly contemplated a return to Hollywood to star in Alfred Hitchcock's "Marnie". Opening out of competition at the 2014 Cannes Film Festival, the film received largely negative reviews. Kidman also starred in two films with Colin Firth that year, the first being the British-Australian historical drama "The Railway Man", in which Kidman played an officer's wife. Katherine Monk of the Montreal Gazette said of Kidman's performance, "It's a truly masterful piece of acting that transcends Teplitzky's store-bought framing, but it's Kidman who delivers the biggest surprise: For the first time since her eyebrows turned into solid marble arches, the Australian Oscar winner is truly terrific". Her second film with Firth was the British thriller "Before I Go To Sleep", in which she portrayed a car crash survivor with brain damage. She also appeared in the family film "Paddington" (2014) as a villain.
In 2015, Kidman starred in the drama "Strangerland", which opened at the 2015 Sundance Film Festival, and the Jason Bateman-directed "The Family Fang", produced by Kidman's production company, Blossom Films, which premiered at the 2015 Toronto International Film Festival. In her other 2015 film release, the biographical drama "Queen of the Desert", she portrayed writer, traveller, political officer, administrator, and archaeologist Gertrude Bell. Kidman played a district attorney, opposite Julia Roberts and Chiwetel Ejiofor, in the little-seen film "Secret in Their Eyes" (also 2015), a remake of the 2009 Argentine film of the same name, both based on the novel "La pregunta de sus ojos" by author Eduardo Sacheri. After more than 15 years, Kidman returned to the West End in the UK premiere of "Photograph 51" at the Noël Coward Theatre. She starred as British scientist Rosalind Franklin, working toward the discovery of the structure of DNA, in the production from 5 September to 21 November 2015, directed by Michael Grandage. Her return to the West End was hailed as a success, especially after she won an acting award for her portrayal in the play.
In 2016's "Lion", Kidman portrayed Sue, the adoptive mother of Saroo Brierley, an Indian boy who was separated from his birth family, a role she felt connected to as she herself is the mother of adopted children. She earned favorable reviews for her performance, as well as nominations for the Academy Award for Best Supporting Actress, her fourth nomination overall, and her eleventh Golden Globe Award nomination, among others. Richard Roeper of the "Chicago Sun-Times" thought that "Kidman gives a powerful and moving performance as Saroo's adoptive mother, who loves her son with every molecule of her being, but comes to understand his quest. It's as good as anything she's done in the last decade." Budgeted at US$12 million, "Lion" earned over US$140 million globally. She also gave a voice-over performance for the English version of the animated film "The Guardian Brothers."
In 2017, Kidman returned to television for "Big Little Lies", a drama series based on Liane Moriarty's novel, which premiered on HBO. She also served as producer alongside her co-star, Reese Witherspoon, and the show's director, Jean-Marc Vallée. She played Celeste Wright, a former lawyer and housewife, who is concealing her abusive relationship with her husband, played by Alexander Skarsgård. Matthew Jacobs of "The Huffington Post" considered that she "delivered a career-defining performance", while Ann Hornaday of "The Washington Post" wrote that "Kidman belongs in the pantheon of great actresses". She won the Primetime Emmy Award for Outstanding Lead Actress in a Limited Series or Movie for her performance, as well as winning the Primetime Emmy Award for Outstanding Limited Series as a producer. She also won a Critics' Choice Television Award, Golden Globe Award, and Screen Actors Guild Award.
Kidman next played Martha Farnsworth, the headmistress of an all-girls school during the American Civil War, in Sofia Coppola's drama "The Beguiled", a remake of the 1971 film of the same name, which premiered at the 2017 Cannes Film Festival, competing for the Palme d'Or. Both films were adaptations of a novel by Thomas P. Cullinan. The film was an arthouse success, and Katie Walsh of "Tribune News Service" found Kidman to be "particularly, unsurprisingly excellent in her performance as the steely Miss Martha. She is controlled and in control, unflappable. Her genteel manners and femininity co-exist easily with her toughness." Kidman had two other films premiere at the festival, the science-fiction romantic comedy "How to Talk to Girls at Parties", reuniting her with director John Cameron Mitchell, and the psychological thriller "The Killing of a Sacred Deer", directed by Yorgos Lanthimos, which also competed for the Palme d'Or. Also in 2017, Kidman played supporting roles in the television series "Top of the Lake: China Girl" and in the comedy-drama "The Upside", a remake of the 2011 French comedy "The Intouchables", starring Bryan Cranston and Kevin Hart.
Kidman starred in two 2018 dramas, "Destroyer" and "Boy Erased". In the former, she played a detective troubled by a case for two decades. Peter Debruge of "Variety" and Brooke Marine of "W" both found her "unrecognizable" in the role; Debruge added that "she disappears into an entirely new skin, rearranging her insides to fit the character's tough hide", whereas Marine highlighted Kidman's method acting. The latter film is based on Garrard Conley's memoir "Boy Erased", and features Russell Crowe and Kidman as socially conservative parents who send their son (played by Lucas Hedges) to a gay conversion program. Richard Lawson of "Vanity Fair" credited all three performers for "elevating the fairly standard-issue material to poignant highs". Also that year, Kidman played Queen Atlanna, the mother of the title character, in the DC Extended Universe superhero film "Aquaman".
"Forbes" ranked her as the fourth highest-paid actress in the world in 2019, with an annual income of $34 million. She took on the supporting part of a rich socialite in John Crowley's drama "The Goldfinch", an adaptation of the novel of the same name by Donna Tartt, starring Ansel Elgort. Although it was poorly received, Owen Gleiberman commended Kidman for playing her part with "elegant affection". She next starred as Gretchen Carlson in the drama "Bombshell", directed by Jay Roach, about sexual harassment at Fox News. For her work, she received a nomination for the Screen Actors Guild Award for Outstanding Performance by a Female Actor in a Supporting Role.
Kidman has signed on to star in and serve as an executive producer on three television miniseries. First, she will headline the HBO miniseries "The Undoing", based on the novel "You Should Have Known" by Jean Hanff Korelitz; it was previously set to premiere in May 2020, but due to the COVID-19 pandemic HBO pushed the premiere to the fall of 2020. Second, she will headline the Hulu miniseries "Nine Perfect Strangers", based on the novel of the same name by Liane Moriarty and set to premiere sometime in 2021. Third, she will headline an Amazon Prime Video thriller miniseries based on the upcoming novel "Pretty Things" by Janelle Brown. Furthermore, Kidman will serve as an executive producer of a television series adaptation of "The Expatriates", based upon the novel of the same name by Janice Y.K. Lee, also for Amazon Prime Video.
Kidman has been married twice: first to actor Tom Cruise, and later to country singer Keith Urban. Kidman met Cruise in November 1989, while filming "Days of Thunder"; they were married on Christmas Eve in Telluride, Colorado. The couple adopted a daughter, Isabella Jane Cruise (born 1992), and a son, Connor Antony (born 1995). On 5 February 2001, the couple's spokesperson announced their separation. Cruise filed for divorce two days later, and the marriage was dissolved in August of that year, with Cruise citing irreconcilable differences. In a 2007 interview with "Marie Claire", Kidman noted that an ectopic pregnancy early in her marriage had been incorrectly reported: "It was wrongly reported as miscarriage, by everyone who picked up the story ... So it's huge news, and it didn't happen."
In the June 2006 issue of "Ladies' Home Journal", she said she still loved Cruise: "He was huge; still is. To me, he was just Tom, but to everybody else, he is huge. But he was lovely to me and I loved him. I still love him." In addition, she has expressed shock about their divorce. In 2015, former Church of Scientology executive Mark Rathbun claimed in a documentary film that he was instructed to "facilitate [Cruise's] break-up with Nicole Kidman". Cruise's auditor further claimed Kidman had been wiretapped on Cruise's suggestion.
Prior to marrying Cruise, Kidman had been involved in relationships with Australian actor Marcus Graham and "Windrider" (1986) co-star Tom Burlinson. She was also said to be involved with Adrien Brody. The film "Cold Mountain" brought rumours that an affair between Kidman and co-star Jude Law was responsible for the break-up of his marriage. Both denied the allegations, and Kidman won an undisclosed sum from the British tabloids that published the story. She met musician Lenny Kravitz in 2003, and dated him into 2004. Kidman was also romantically linked to rapper Q-Tip. Robbie Williams claims he had a short romance with Kidman on her yacht in summer 2004.
In a 2007 "Vanity Fair" interview, Kidman revealed that she had been secretly engaged to someone prior to her present relationship to New Zealand-Australian country singer Keith Urban, whom she met at G'Day LA, an event honouring Australians, in January 2005. Kidman married Urban on 25 June 2006, at Cardinal Cerretti Memorial Chapel in the grounds of St Patrick's Estate, Manly in Sydney. In an interview in 2015, Kidman said, "We didn't really know each other – we got to know each other during our marriage." They maintain homes in Sydney, Sutton Forest (New South Wales, Australia); Los Angeles; Nashville (Tennessee, U.S.); and a condominium in Manhattan purchased for US$10 million. The couple's first daughter Sunday Rose was born in 2008, in Nashville. In 2010, Kidman and Urban had their second daughter Faith Margaret via gestational surrogacy at Nashville's Centennial Women's Hospital. In an interview by Tina Brown at the 2015 Women in the World conference, she stated that her attention turned to her career after her divorce from Cruise: "Out of my divorce came work that was applauded so that was an interesting thing for me", leading to her Academy Award in 2003.
Kidman is Catholic and even considered becoming a nun at one point. She attended Mary Mackillop Chapel in North Sydney. Following criticism of "The Golden Compass" by Catholic leaders as anti-Catholic, Kidman told "Entertainment Weekly" that the Catholic Church is part of her "essence", and that her religious beliefs would prevent her from taking a role in a film she perceived as anti-Catholic. During her divorce from Tom Cruise, she stated that she did not want their children raised as Scientologists. She has been reluctant to discuss Scientology since her divorce.
A supporter of women's rights, Kidman testified before the United States House of Representatives Committee on Foreign Affairs to support the International Violence Against Women Act in 2009. In January 2017, she stated her support for the legalisation of same-sex marriage in Australia. Kidman has also donated to U.S. Democratic party candidates.
In 2002, Kidman first appeared on the Australian rich list published annually in the "Business Review Weekly" with an estimated net worth of A$122 million. In the 2011 published list, Kidman's wealth was estimated at A$304 million, down from A$329 million in 2010. Kidman has raised money for, and drawn attention to, disadvantaged children around the world. In 1994, she was appointed a goodwill ambassador for UNICEF, and in 2004, she was honoured as a "Citizen of the World" by the United Nations. Kidman joined the Little Tee Campaign for breast cancer care to design T-shirts or vests to raise money to fight the disease, motivated by her mother's own battle with breast cancer in 1984.
In the 2006 Australia Day Honours, Kidman was appointed a Companion of the Order of Australia (AC) for "service to the performing arts as an acclaimed motion picture performer, to health care through contributions to improve medical treatment for women and children and advocacy for cancer research, to youth as a principal supporter of young performing artists, and to humanitarian causes in Australia and internationally". However, due to film commitments and her wedding to Urban, she was not presented with the honour until 13 April 2007. It was presented by the Governor-General of Australia, Major General Michael Jeffery, in a ceremony at Government House, Canberra.
Kidman was appointed goodwill ambassador of the United Nations Development Fund for Women (UNIFEM) in 2006. She visited Kosovo in 2006 to learn about women's experiences of conflict and UNIFEM's support efforts. She is also the international spokesperson for UNIFEM's Say NO – UNiTE to End Violence against Women initiative. Kidman and the UNIFEM executive director presented more than five million signatures collected during the first phase of the initiative to the UN Secretary-General on 25 November 2008. In 2016, Kidman donated $50,000 to UN Women.
At the beginning of 2009, Kidman appeared in a series of postage stamps featuring Australian actors. She, Geoffrey Rush, Russell Crowe and Cate Blanchett each appear twice in the series: once as themselves and once as their Academy Award-nominated character; Kidman's second stamp showed her as Satine from "Moulin Rouge!". On 8 January 2010, alongside Nancy Pelosi, Joan Chen and Joe Torre, Kidman attended the ceremony to help the Family Violence Prevention Fund break ground on a new international centre located in the Presidio of San Francisco. In 2015, Kidman became the brand ambassador for Etihad Airways.
Kidman supports the Nashville Predators and is regularly seen and photographed at games throughout the season. Additionally, she supports the Sydney Swans in the Australian Football League and once served as a club ambassador.
Kidman's discography consists of one spoken word album, one extended play, three singles, three music videos, ten other appearances, a number of unreleased tracks and two tribute songs recorded by various artists.
Kidman, primarily known in the field of acting, entered the music industry in the 2000s after recording a number of tracks for the soundtrack album to Baz Luhrmann's 2001 motion picture "Moulin Rouge!", in which she starred. Her duet with Ewan McGregor entitled "Come What May" was released as her debut single, and as the second single from the soundtrack, through Interscope on 24 September 2001. The composition became the eighth-highest-selling single by an Australian artist for that year, being certified Gold by the Australian Recording Industry Association, while reaching number twenty-seven on the UK Singles Chart. In addition, the song received a nomination at the 59th Golden Globe Awards for Best Original Song, and was listed eighty-fifth in the American Film Institute's AFI's 100 Years...100 Songs.
"Somethin' Stupid", a cover version of Frank and Nancy Sinatra followed soon. The track, recorded as a duet with English singer-songwriter Robbie Williams, was issued on 14 December 2001 by Chrysalis Records as the lead single of his fourth studio album, "Swing When You're Winning". Kidman's second single topped the official music charts in Italy, | https://en.wikipedia.org/wiki?curid=21504 |
Nucleotide
Nucleotides are organic molecules consisting of a nucleoside and a phosphate. They serve as monomeric units of the nucleic acid polymers deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), both of which are essential biomolecules within all life-forms on Earth. Nucleotides are composed of three subunit molecules: a nitrogenous base (also known as nucleobase), a five-carbon sugar (ribose or deoxyribose), and a phosphate group consisting of one to three phosphates. The four nitrogenous bases in DNA are guanine, adenine, cytosine and thymine; in RNA, uracil is used in place of thymine.
Nucleotides also play a central role in metabolism at a fundamental, cellular level. They provide chemical energy—in the form of the nucleoside triphosphates, adenosine triphosphate (ATP), guanosine triphosphate (GTP), cytidine triphosphate (CTP) and uridine triphosphate (UTP)—throughout the cell for the many cellular functions that demand energy, including: amino acid, protein and cell membrane synthesis, moving the cell and cell parts (both internally and intercellularly), cell division, etc. In addition, nucleotides participate in cell signaling (cyclic guanosine monophosphate or cGMP and cyclic adenosine monophosphate or cAMP), and are incorporated into important cofactors of enzymatic reactions (e.g. coenzyme A, FAD, FMN, NAD, and NADP+).
In experimental biochemistry, nucleotides can be radiolabeled using radionuclides to yield radionucleotides.
A nucleotide is composed of three distinctive chemical sub-units: a five-carbon sugar molecule, a nitrogenous base—which two together are called a nucleoside—and one phosphate group. With all three joined, a nucleotide is also termed a "nucleoside monophosphate", "nucleoside diphosphate" or "nucleoside triphosphate", depending on how many phosphates make up the phosphate group.
In nucleic acids, nucleotides contain either a purine or a pyrimidine base—i.e., the nitrogenous base molecule, also known as a nucleobase—and are termed ribonucleotides if the sugar is ribose, or deoxyribonucleotides if the sugar is deoxyribose. Individual phosphate molecules repetitively connect the sugar-ring molecules in two adjacent nucleotide monomers, thereby connecting the nucleotide monomers of a nucleic acid end-to-end into a long chain. These chain-joins of sugar and phosphate molecules create a 'backbone' strand for a single strand or a double helix. In any one strand, the chemical orientation (directionality) of the chain-joins runs from the 5'-end to the 3'-end (read: 5 prime-end to 3 prime-end)—referring to the five carbon sites on sugar molecules in adjacent nucleotides. In a double helix, the two strands are oriented in opposite directions, which permits base pairing and complementarity between the base-pairs, all of which is essential for replicating or transcribing the encoded information found in DNA.
Nucleic acids then are polymeric macromolecules assembled from nucleotides, the monomer-units of nucleic acids. The purine bases adenine and guanine and pyrimidine base cytosine occur in both DNA and RNA, while the pyrimidine bases thymine (in DNA) and uracil (in RNA) occur in just one. Adenine forms a base pair with thymine with two hydrogen bonds, while guanine pairs with cytosine with three hydrogen bonds.
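As a worked illustration of these pairing rules, here is a minimal Python sketch (the helper names are illustrative, not from any standard library) that builds the reverse complement of a DNA strand and tallies the hydrogen bonds holding it to its partner strand:

```python
# Illustrative sketch of Watson-Crick pairing; names are hypothetical.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
H_BONDS = {"A": 2, "T": 2, "G": 3, "C": 3}  # hydrogen bonds per base pair

def reverse_complement(strand: str) -> str:
    """Return the antiparallel complementary strand, read 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def duplex_hydrogen_bonds(strand: str) -> int:
    """Total hydrogen bonds holding this strand to its complement."""
    return sum(H_BONDS[base] for base in strand)

print(reverse_complement("ATGC"))     # GCAT
print(duplex_hydrogen_bonds("ATGC"))  # 2 + 2 + 3 + 3 = 10
```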
In addition to being building blocks for construction of nucleic acid polymers, singular nucleotides play roles in cellular energy storage and provision, cellular signaling, as a source of phosphate groups used to modulate the activity of proteins and other signaling molecules, and as enzymatic cofactors, often carrying out redox reactions. Signaling cyclic nucleotides are formed by binding the phosphate group twice to the same sugar molecule, bridging the 5'- and 3'- hydroxyl groups of the sugar. Some signaling nucleotides differ from the standard single-phosphate group configuration, in having multiple phosphate groups attached to different positions on the sugar. Nucleotide cofactors include a wider range of chemical groups attached to the sugar via the glycosidic bond, including nicotinamide and flavin, and in the latter case, the ribose sugar is linear rather than forming the ring seen in other nucleotides.
Nucleotides can be synthesized by a variety of means both in vitro and in vivo.
In vitro, protecting groups may be used during laboratory production of nucleotides. A purified nucleoside is protected to create a phosphoramidite, which can then be used to obtain analogues not found in nature and/or to synthesize an oligonucleotide.
In vivo, nucleotides can be synthesized de novo or recycled through salvage pathways. The components used in de novo nucleotide synthesis are derived from biosynthetic precursors of carbohydrate and amino acid metabolism, and from ammonia and carbon dioxide. The liver is the major organ of de novo synthesis of all four nucleotides. De novo synthesis of pyrimidines and purines follows two different pathways. Pyrimidines are synthesized first from aspartate and carbamoyl-phosphate in the cytoplasm to the common precursor ring structure orotic acid, onto which a phosphorylated ribosyl unit is covalently linked. Purines, however, are first synthesized from the sugar template onto which the ring synthesis occurs. For reference, the syntheses of the purine and pyrimidine nucleotides are carried out by several enzymes in the cytoplasm of the cell, not within a specific organelle. Nucleotides undergo breakdown such that useful parts can be reused in synthesis reactions to create new nucleotides.
The synthesis of the pyrimidines CTP and UTP occurs in the cytoplasm and starts with the formation of carbamoyl phosphate from glutamine and CO2. Next, aspartate carbamoyltransferase catalyzes a condensation reaction between aspartate and carbamoyl phosphate to form carbamoyl aspartic acid, which is cyclized into 4,5-dihydroorotic acid by dihydroorotase. The latter is converted to orotate by dihydroorotate oxidase. The net reaction is:
(S)-dihydroorotate + O2 → orotate + H2O2
Orotate is covalently linked with a phosphorylated ribosyl unit. The covalent linkage between the ribose and pyrimidine occurs at position C1 of the ribose unit, which contains a pyrophosphate, and N1 of the pyrimidine ring. Orotate phosphoribosyltransferase (PRPP transferase) catalyzes the net reaction yielding orotidine monophosphate (OMP):
orotate + 5-phospho-α-D-ribose 1-diphosphate (PRPP) → orotidine 5'-monophosphate (OMP) + pyrophosphate (PPi)
Orotidine 5'-monophosphate is decarboxylated by orotidine-5'-phosphate decarboxylase to form uridine monophosphate (UMP). PRPP transferase catalyzes both the ribosylation and decarboxylation reactions, forming UMP from orotic acid in the presence of PRPP. It is from UMP that other pyrimidine nucleotides are derived. UMP is phosphorylated by two kinases to uridine triphosphate (UTP) via two sequential reactions with ATP. First the diphosphate form UDP is produced, which in turn is phosphorylated to UTP. Both steps are fueled by ATP hydrolysis:
UMP + ATP → UDP + ADP
UDP + ATP → UTP + ADP
CTP is subsequently formed by amination of UTP by the catalytic activity of CTP synthetase. Glutamine is the NH3 donor and the reaction is fueled by ATP hydrolysis, too:
UTP + glutamine + ATP + H2O → CTP + glutamate + ADP + Pi
Cytidine monophosphate (CMP) is derived from cytidine triphosphate (CTP) with subsequent loss of two phosphates.
The atoms that are used to build the purine nucleotides come from a variety of sources: N1 arises from the amine group of aspartate; C2 and C8 originate from formate (delivered by the folate coenzyme N10-formyl-THF); N3 and N9 are contributed by the amide group of glutamine; C4, C5 and N7 are derived from glycine; and C6 comes from CO2 (as bicarbonate).
The de novo synthesis of purine nucleotides by which these precursors are incorporated into the purine ring proceeds by a 10-step pathway to the branch-point intermediate IMP, the nucleotide of the base hypoxanthine. AMP and GMP are subsequently synthesized from this intermediate via separate, two-step pathways. Thus, purine moieties are initially formed as part of the ribonucleotides rather than as free bases.
Six enzymes take part in IMP synthesis. Three of them are multifunctional: GART (which catalyzes reactions 2, 3 and 5 of the pathway), PAICS (reactions 6 and 7) and ATIC (reactions 9 and 10).
The pathway starts with the formation of PRPP. PRPS1 is the enzyme that activates R5P, which is formed primarily by the pentose phosphate pathway, to PRPP by reacting it with ATP (R5P + ATP → PRPP + AMP). The reaction is unusual in that a pyrophosphoryl group is directly transferred from ATP to C1 of R5P and that the product has the α configuration about C1. This reaction is also shared with the pathways for the synthesis of Trp, His, and the pyrimidine nucleotides. Being on a major metabolic crossroad and requiring much energy, this reaction is highly regulated.
In the first reaction unique to purine nucleotide biosynthesis, PPAT catalyzes the displacement of PRPP's pyrophosphate group (PPi) by an amide nitrogen donated from either glutamine (N), glycine (N&C), aspartate (N), folic acid (C1), or CO2. This is the committed step in purine synthesis. The reaction occurs with the inversion of configuration about ribose C1, thereby forming β-5-phosphorybosylamine (5-PRA) and establishing the anomeric form of the future nucleotide.
Next, a glycine is incorporated, fueled by ATP hydrolysis, and the carboxyl group forms an amine bond to the NH2 previously introduced. A one-carbon unit from the folic acid coenzyme N10-formyl-THF is then added to the amino group of the substituted glycine, followed by the closure of the imidazole ring. Next, a second NH2 group is transferred from a glutamine to the first carbon of the glycine unit. A carboxylation of the second carbon of the glycine unit is concomitantly added. This new carbon is modified by the addition of a third NH2 unit, this time transferred from an aspartate residue. Finally, a second one-carbon unit from formyl-THF is added to the nitrogen group and the ring is covalently closed to form the common purine precursor inosine monophosphate (IMP).
Inosine monophosphate is converted to adenosine monophosphate in two steps. First, GTP hydrolysis fuels the addition of aspartate to IMP by adenylosuccinate synthase, substituting the carbonyl oxygen for a nitrogen and forming the intermediate adenylosuccinate. Fumarate is then cleaved off forming adenosine monophosphate. This step is catalyzed by adenylosuccinate lyase.
Inosine monophosphate is converted to guanosine monophosphate by the oxidation of IMP forming xanthylate, followed by the insertion of an amino group at C2. NAD+ is the electron acceptor in the oxidation reaction. The amide group transfer from glutamine is fueled by ATP hydrolysis.
In humans, pyrimidine rings (C, T, U) can be degraded completely to CO2 and NH3 (urea excretion). That having been said, purine rings (G, A) cannot. Instead they are degraded to the metabolically inert uric acid which is then excreted from the body. Uric acid is formed when GMP is split into the base guanine and ribose. Guanine is deaminated to xanthine which in turn is oxidized to uric acid. This last reaction is irreversible. Similarly, uric acid can be formed when AMP is deaminated to IMP from which the ribose unit is removed to form hypoxanthine. Hypoxanthine is oxidized to xanthine and finally to uric acid. Instead of uric acid secretion, guanine and IMP can be used for recycling purposes and nucleic acid synthesis in the presence of PRPP and aspartate (NH3 donor).
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, published a report that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides were named d5SICS and dNaM. More technically, these artificial nucleotides, which bear hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. In 2014, the same team from the Scripps Research Institute reported that they had synthesized a stretch of circular DNA known as a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium "E. coli", which successfully replicated the unnatural base pairs through multiple generations. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into "E. coli" bacteria. Then, the natural bacterial replication pathways use them to accurately replicate the plasmid containing d5SICS–dNaM.
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 21 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses.
Nucleotide (abbreviated "nt") is a common unit of length for single-stranded nucleic acids, similar to how base pair is a unit of length for double-stranded nucleic acids.
A study by the Department of Sports Science at the University of Hull, UK, has shown that nucleotides have a significant impact on cortisol levels in saliva. Post-exercise, the experimental nucleotide group had lower cortisol levels than the control or the placebo group. Additionally, post-supplement values of immunoglobulin A were significantly higher than in either the placebo or the control group. The study concluded that "nucleotide supplementation blunts the response of the hormones associated with physiological stress."
Another study conducted in 2013 looked at the impact nucleotide supplementation had on the immune system in athletes. In the study, all athletes were male and were highly skilled in taekwondo. Out of the twenty athletes tested, half received a placebo and half received 480 mg per day of nucleotide supplement. After thirty days, the study concluded that nucleotide supplementation may counteract the impairment of the body's immune function after heavy exercise.
The IUPAC has designated the symbols for nucleotides. Apart from the five (A, G, C, T/U) bases, often degenerate bases are used, especially for designing PCR primers. These nucleotide codes are listed below. Some primer sequences may also include the character "I", which codes for the non-standard nucleotide inosine. Inosine occurs in tRNAs and will pair with adenine, cytosine, or thymine. This character does not appear in the table below, however, because it does not represent a degeneracy. While inosine can serve a similar function as the degeneracy "D", it is an actual nucleotide, rather than a representation of a mix of nucleotides that covers each possible pairing needed.
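Since the degeneracy table itself did not survive in this text, the sketch below reconstructs the standard IUPAC ambiguity codes and shows how a degenerate primer expands into the concrete sequences it represents (expand_primer is a hypothetical helper name):

```python
from itertools import product

# Standard IUPAC nucleotide codes: the four bases plus ambiguity symbols.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_primer(degenerate: str) -> list:
    """List every concrete sequence a degenerate primer stands for."""
    pools = [IUPAC[symbol] for symbol in degenerate]
    return ["".join(seq) for seq in product(*pools)]

print(expand_primer("ARY"))  # ['AAC', 'AAT', 'AGC', 'AGT']
```
| https://en.wikipedia.org/wiki?curid=21505 |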
Numerical analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical analysis naturally finds application in all fields of engineering and the physical sciences, but in the 21st century also the life sciences, social sciences, medicine, business and even the arts have adopted elements of scientific computations. The growth in computing power has revolutionized the use of realistic mathematical models in science and engineering, and subtle numerical analysis is required to implement these detailed models of the world. For example, ordinary differential equations appear in celestial mechanics (predicting the motions of planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
Before the advent of modern computers, numerical methods often depended on hand interpolation formulas applied to data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas nevertheless continue to be used as part of the software algorithms.
The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.
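The tablet's sexagesimal digits read 1;24,51,10, and the quality of the approximation can be checked by converting from base 60:

$$1 + \frac{24}{60} + \frac{51}{60^2} + \frac{10}{60^3} = 1.41421296\ldots \approx \sqrt{2} = 1.41421356\ldots,$$

an error of less than one part in a million.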
Numerical analysis continues this long tradition: rather than exact symbolic answers, which can only be applied to real-world measurements by translation into digits, it gives approximate solutions within specified error bounds.
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, whose variety is suggested by the examples discussed throughout this section.
The rest of this section outlines several important themes of numerical analysis.
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
Consider the problem of solving 3x³ + 4 = 28 for the unknown quantity x.
For the iterative method, apply the bisection method to f(x) = 3x³ − 24 (the original equation rearranged so that its root makes f vanish). The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57. Halving the bracket four times gives:

a        b        mid      f(mid)
0        3        1.5      −13.875
1.5      3        2.25     10.171875
1.5      2.25     1.875    −4.224609
1.875    2.25     2.0625   2.321045

From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
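A minimal Python sketch of this bisection run (illustrative, not from any particular library) reproduces the bracket above:

```python
# Bisection for f(x) = 3x^3 - 24, starting from the bracket [0, 3].
def f(x):
    return 3 * x**3 - 24

a, b = 0.0, 3.0              # f(a) = -24 < 0 < 57 = f(b)
for _ in range(4):
    mid = (a + b) / 2
    if f(a) * f(mid) < 0:    # sign change on [a, mid]: the root is there
        b = mid
    else:                    # otherwise it lies in [mid, b]
        a = mid
print(a, b)                  # 1.875 2.0625
```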
In a two-hour race, the speed of the car is measured at three instants and recorded in the following table:

Time   0:20   1:00   1:40
km/h   140    150    180
A discretization would be to say that the speed of the car was constant from 0:00 to 0:40, then from 0:40 to 1:20 and finally from 1:20 to 2:00. For instance, the total distance traveled in the first 40 minutes is approximately (2/3 h × 140 km/h) ≈ 93.3 km. This would allow us to estimate the total distance traveled as 93.3 km + 100 km + 120 km = 313.3 km, which is an example of numerical integration (see below) using a Riemann sum, because displacement is the integral of velocity.
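The same Riemann-sum estimate can be sketched in Python, using the speeds assumed in the reconstructed table above:

```python
# Left-endpoint-style Riemann sum for the race example; each measured
# speed (assumed values) is treated as constant over a 40-minute block.
speeds_kmh = [140, 150, 180]
interval_h = 40 / 60

distance_km = sum(v * interval_h for v in speeds_kmh)
print(round(distance_km, 1))  # 313.3
```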
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, evaluating the same function near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x).
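The contrast can be checked numerically; a small sketch:

```python
# Conditioning of f(x) = 1/(x - 1) near x = 1 versus near x = 10.
def f(x):
    return 1 / (x - 1)

print(f(1.1), f(1.001))  # 10.0 and ~1000: a tiny input change explodes
print(f(10), f(11))      # ~0.111 and 0.1: a similar change stays modest
```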
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
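A small sketch contrasting the two families on a 2-by-2 system (NumPy assumed; the matrix is made diagonally dominant so that the Jacobi iteration converges):

```python
import numpy as np

# Direct solve versus Jacobi iteration on a diagonally dominant system.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 13.0])

x_direct = np.linalg.solve(A, b)   # direct method: finitely many steps

x = np.zeros(2)                    # iterative method: successive guesses
D = np.diag(A)                     # diagonal entries of A
R = A - np.diagflat(D)             # off-diagonal part
for _ in range(50):
    x = (b - R @ x) / D            # Jacobi update: x = D^{-1} (b - R x)

print(x_direct, x)                 # both approximate [16/9, 17/9]
```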
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated, and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. For instance, in the iteration above for computing the solution of 3x³ + 4 = 28, after 10 or so iterations it can be concluded that the root is roughly 1.99 (for example). Therefore, there is a truncation error of 0.01.
Once an error is generated, it will generally propagate through the calculation. For instance, as already noted, the operation + on a calculator (or a computer) is inexact. It follows that a calculation of the type a + b + c + d + e is even more inexact.
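A classic Python demonstration of round-off: 0.1 has no exact binary representation, so the representation error accumulates as additions pile up:

```python
# Summing 0.1 ten times does not give exactly 1.0 in binary floating point.
total = sum(0.1 for _ in range(10))
print(total == 1.0)  # False
print(total)         # 0.9999999999999999
```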
The truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, the sum of infinitely many trapezoids would be required, but numerically only the sum of finitely many trapezoids can be found, hence the approximation of the mathematical procedure. Similarly, to differentiate a function, the differential element should approach zero, but numerically only a nonzero (if small) value of the differential element can be chosen.
Numerical stability is a notion in numerical analysis. An algorithm is called 'numerically stable' if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is 'well-conditioned', meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be 'well-conditioned' or 'ill-conditioned', and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x0 to √2, for instance x0 = 1.4, and then computing improved guesses x1, x2, etc. One such method is the famous Babylonian method, which is given by xk+1 = xk/2 + 1/xk. Another method, called 'method X', is given by xk+1 = (xk² − 2)² + xk. A few iterations of each scheme are calculated in table form below, with initial guesses x0 = 1.4 and x0 = 1.42.
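Since the iteration table did not survive in this text, a minimal Python sketch can regenerate it (the function names are illustrative):

```python
# Babylonian iteration versus 'method X' for sqrt(2), from the two
# starting guesses quoted above.
def babylonian(x):
    return x / 2 + 1 / x

def method_x(x):
    return (x**2 - 2) ** 2 + x

for x0 in (1.4, 1.42):
    xb = xm = x0
    print(f"x0 = {x0}")
    for k in range(1, 7):
        xb, xm = babylonian(xb), method_x(xm)
        print(k, xb, xm)
```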
Observe that the Babylonian method converges quickly regardless of the initial guess, whereas method X converges extremely slowly with initial guess x0 = 1.4 and diverges for initial guess x0 = 1.42. Hence, the Babylonian method is numerically stable, while method X is numerically unstable.
The field of numerical analysis includes many sub-disciplines. Some of the major ones are outlined below.
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
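A minimal sketch of the Horner scheme (illustrative helper name), reusing the polynomial from the bisection example:

```python
# Horner scheme: evaluates a_n x^n + ... + a_0 with one multiplication
# and one addition per coefficient.
def horner(coeffs, x):
    """coeffs run from the highest power down to the constant term."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([3, 0, 0, -24], 2.0))  # 3x^3 - 24 at x = 2 -> 0.0
```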
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.
Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The least-squares method is one way to achieve this.
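A short NumPy sketch contrasting interpolation with least-squares regression on the same made-up data:

```python
import numpy as np

# Interpolation passes through the data; regression only follows its trend.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.8, 2.1, 2.9])      # imprecise measurements

print(np.interp(1.5, x, y))             # piecewise-linear interpolation: 1.45
slope, intercept = np.polyfit(x, y, 1)  # degree-1 least-squares fit
print(slope, intercept)                 # regression line y = slope*x + intercept
```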
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not.
Much effort has been put in the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
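A minimal sketch of Newton's method, again for f(x) = 3x³ − 24 (whose derivative is 9x²); each step jumps to the root of the local tangent line:

```python
# Newton iteration: x <- x - f(x) / f'(x), starting from x = 3.
x = 3.0
for _ in range(5):
    x = x - (3 * x**3 - 24) / (9 * x**2)
print(x)  # converges rapidly to the exact root, 2
```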
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
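A sketch of the low-rank idea behind SVD-based compression (NumPy assumed; the matrix here is random stand-in data rather than a real image):

```python
import numpy as np

# Keep only the k largest singular values and discard the rest.
A = np.random.default_rng(0).random((8, 8))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-k approximation
print(np.linalg.norm(A - A_k))  # Frobenius error: sqrt of the dropped s_i^2 summed
```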
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
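A small linear program solved with SciPy's linprog (the objective and constraints below are made up for illustration; linprog minimizes, so a maximization objective is negated):

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x <= 3, and x, y >= 0.
result = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3],
                 bounds=[(0, None), (0, None)])
print(result.x)  # (0, 4): all weight goes to the variable with larger payoff
```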
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
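A sketch of composite quadrature on a known integral (NumPy assumed; the exact value of the integral of sin(x) over [0, π] is 2):

```python
import numpy as np

# Composite midpoint and trapezoidal rules on n = 100 subintervals.
n = 100
edges = np.linspace(0, np.pi, n + 1)
h = edges[1] - edges[0]

midpoint = h * np.sum(np.sin((edges[:-1] + edges[1:]) / 2))
trapezoid = np.trapz(np.sin(edges), edges)
print(midpoint, trapezoid)  # both close to 2, with O(h^2) error
```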
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
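For ordinary differential equations, the simplest such time-stepper is Euler's method; a minimal sketch on the made-up test equation y' = −y:

```python
import math

# Euler's method for y' = -y with y(0) = 1; the exact solution is exp(-t).
h, t, y = 0.1, 0.0, 1.0
while t < 2.0 - 1e-12:        # march from t = 0 to t = 2 in steps of h
    y += h * (-y)             # y_{n+1} = y_n + h * f(t_n, y_n)
    t += h
print(y, math.exp(-2.0))      # 0.1216... versus the true 0.1353...
```

Halving the step size h roughly halves the final error, the signature of a first-order method.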
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
Since the late twentieth century, most algorithms have been implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library.
There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS) and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis. | https://en.wikipedia.org/wiki?curid=21506 |
Noosphere
The noosphere is a philosophical concept developed and popularized by the French philosopher and Jesuit priest Pierre Teilhard de Chardin and the biogeochemist Vladimir Vernadsky. Vernadsky defined the noosphere as the new state of the biosphere and described it as the planetary "sphere of reason". The noosphere represents the highest stage of biospheric development, its defining factor being the development of humankind's rational activities.
The word is derived from the Greek νόος ("mind", "reason") and σφαῖρα ("sphere"), in lexical analogy to "atmosphere" and "biosphere". The concept, however, cannot be attributed to a single author. The founding authors Vladimir Ivanovich Vernadsky and Pierre Teilhard de Chardin developed two related but starkly different concepts, the former grounded in the geological sciences and the latter in theology. Both conceptions of the noosphere share the common thesis that together human reason and scientific thought have created, and will continue to create, the next evolutionary geological layer. This geological layer is part of the evolutionary chain. Second-generation authors, predominantly of Russian origin, have further developed the Vernadskian concept, creating the related concepts noocenosis and noocenology.
The term noosphere was first used in the publications of Pierre Teilhard de Chardin in 1922, in his "Cosmogenesis". Vernadsky was most likely introduced to the term by a common acquaintance, Édouard Le Roy, during a stay in Paris. Some sources claim Édouard Le Roy actually first proposed the term. Vernadsky himself wrote that he was first introduced to the concept by Le Roy in his 1927 lectures at the Collège de France, and that Le Roy had emphasized a mutual exploration of the concept with Teilhard de Chardin. According to Vernadsky's own letters, he took Le Roy's ideas on the noosphere from Le Roy's article "Les origines humaines et l'évolution de l'intelligence", part III: "La noosphère et l'hominisation", before reworking the concept within his own field, biogeochemistry. The historian Bailes concludes that Vernadsky and Teilhard de Chardin were mutual influences on each other, as Teilhard de Chardin also attended Vernadsky's lectures on biogeochemistry before creating his concept of the noosphere.
One account states that Le Roy and Teilhard were not aware of the concept of the biosphere when forming their noosphere concept, and that it was Vernadsky who introduced them to the notion, which gave their conceptualization a grounding in the natural sciences. Both Teilhard de Chardin and Vernadsky base their conceptions of the noosphere on the term 'biosphere', developed by Eduard Suess in 1875. Despite the differing backgrounds, approaches and focuses of Teilhard and Vernadsky, they have a few fundamental themes in common. Both scientists overstepped the boundaries of natural science and attempted to create all-embracing theoretical constructions founded in philosophy, the social sciences and authorized interpretations of evolutionary theory. Moreover, both thinkers were convinced of the teleological character of evolution. They also argued that human activity becomes a geological power and that the manner in which it is directed can influence the environment. There are, however, fundamental differences in the two conceptions.
In the theory of Vernadsky, the noosphere is the third in a succession of phases of development of the Earth, after the geosphere (inanimate matter) and the biosphere (biological life). Just as the emergence of life fundamentally transformed the geosphere, the emergence of human cognition fundamentally transforms the biosphere. In contrast to the conceptions of the Gaia theorists, or the promoters of cyberspace, Vernadsky's noosphere emerges at the point where humankind, through the mastery of nuclear processes, begins to create resources through the transmutation of elements. It is also currently being researched as part of the Global Consciousness Project.
Teilhard perceived a directionality in evolution along an axis of increasing "Complexity/Consciousness". For Teilhard, the noosphere is the sphere of thought encircling the earth that has emerged through evolution as a consequence of this growth in complexity / consciousness. The noosphere is therefore as much part of nature as the barysphere, lithosphere, hydrosphere, atmosphere, and biosphere. As a result, Teilhard sees the "social phenomenon [as] the culmination of and not the attenuation of the biological phenomenon." These social phenomena are part of the noosphere and include, for example, legal, educational, religious, research, industrial and technological systems. In this sense, the noosphere emerges through and is constituted by the interaction of human minds. The noosphere thus grows in step with the organization of the human mass in relation to itself as it populates the earth. Teilhard argued the noosphere evolves towards ever greater personalisation, individuation and unification of its elements. He saw the Christian notion of love as being the principal driver of noogenesis. Evolution would culminate in the Omega Point—an apex of thought/consciousness—which he identified with the eschatological return of Christ.
One of the original aspects of the noosphere concept deals with evolution. Henri Bergson, with his "L'évolution créatrice" (1907), was one of the first to propose that evolution is "creative" and cannot necessarily be explained solely by Darwinian natural selection. "L'évolution créatrice" is upheld, according to Bergson, by a constant vital force which animates life and fundamentally connects mind and body, an idea opposing the dualism of René Descartes. In 1923, C. Lloyd Morgan took this work further, elaborating on an "emergent evolution" which could explain increasing complexity (including the evolution of mind). Morgan found that many of the most interesting changes in living things have been largely discontinuous with past evolution. Therefore, these living things did not necessarily evolve through a gradual process of natural selection. Rather, he posited, the process of evolution experiences jumps in complexity (such as the emergence of a self-reflective universe, or noosphere), in a sort of qualitative punctuated equilibrium. Finally, the complexification of human cultures, particularly language, facilitated a quickening of evolution in which cultural evolution occurs more rapidly than biological evolution. Recent understanding of human ecosystems and of human impact on the biosphere has led to a link between the notion of sustainability and the "co-evolution" and harmonization of cultural and biological evolution.
Niccolò Paganini
Niccolò (or Nicolò) Paganini (27 October 1782 – 27 May 1840) was an Italian violinist, violist, guitarist, and composer. He was the most celebrated violin virtuoso of his time, and left his mark as one of the pillars of modern violin technique. His 24 Caprices for Solo Violin Op. 1 are among the best known of his compositions, and have served as an inspiration for many prominent composers.
Niccolò Paganini was born in Genoa, then capital of the Republic of Genoa, the third of the six children of Antonio and Teresa (née Bocciardo) Paganini. Paganini's father was an unsuccessful trader, but he managed to supplement his income through playing music on the mandolin. At the age of five, Paganini started learning the mandolin from his father, and moved to the violin by the age of seven. His musical talents were quickly recognized, earning him numerous scholarships for violin lessons. The young Paganini studied under various local violinists, including Giovanni Servetto and Giacomo Costa, but his progress quickly outpaced their abilities. Paganini and his father then traveled to Parma to seek further guidance from Alessandro Rolla. But upon listening to Paganini's playing, Rolla immediately referred him to his own teacher, Ferdinando Paer and, later, Paer's own teacher, Gasparo Ghiretti. Though Paganini did not stay long with Paer or Ghiretti, the two had considerable influence on his composition style.
The French invaded northern Italy in March 1796, and Genoa was not spared. The Paganinis sought refuge in their country property in Romairone, near Bolzaneto. It was in this period that Paganini is thought to have developed his relationship with the guitar. He mastered the guitar, but preferred to play it in intimate settings rather than in public concerts. He later described the guitar as his "constant companion" on his concert tours. By 1800, Paganini and his father traveled to Livorno, where Paganini played in concerts and his father resumed his maritime work. In 1801, the 18-year-old Paganini was appointed first violin of the Republic of Lucca, but a substantial portion of his income came from freelancing. His fame as a violinist was matched only by his reputation as a gambler and womanizer.
In 1805, Lucca was annexed by Napoleonic France, and the region was ceded to Napoleon's sister, Elisa Baciocchi. Paganini became a violinist for the Baciocchi court, while giving private lessons to Elisa's husband, Felice. In 1807, Baciocchi became the Grand Duchess of Tuscany and her court was transferred to Florence. Paganini was part of the entourage, but, towards the end of 1809, he left Baciocchi to resume his freelance career.
For the next few years, Paganini returned to touring in the areas surrounding Parma and Genoa. Though he was very popular with the local audience, he was still not very well known in the rest of Europe. His first break came from an 1813 concert at La Scala in Milan. The concert was a great success. As a result, Paganini began to attract the attention of other prominent, though more conservative, musicians across Europe. His early encounters with Charles Philippe Lafont and Louis Spohr created intense rivalry. His concert activities, however, were still limited to Italy for the next few years.
In 1827, Pope Leo XII honoured Paganini with the Order of the Golden Spur. His fame spread across Europe with a concert tour that started in Vienna in August 1828, stopping in every major European city in Germany, Poland, and Bohemia until February 1831 in Strasbourg. This was followed by tours in Paris and Britain. His technical ability and his willingness to display it received much critical acclaim. In addition to his own compositions, theme and variations being the most popular, Paganini also performed modified versions of works (primarily concertos) written by his early contemporaries, such as Rodolphe Kreutzer and Giovanni Battista Viotti.
Paganini's travels also brought him into contact with eminent guitar virtuosi of the day, including Ferdinando Carulli in Paris and Mauro Giuliani in Vienna. But this experience did not inspire him to play public concerts with guitar, and even performances of his own guitar trios and quartets were private to the point of being behind closed doors.
Throughout his life, Paganini was no stranger to chronic illnesses. Although no definite medical proof exists, he was reputed to have been affected by Marfan syndrome or Ehlers–Danlos syndrome. In addition, his frequent concert schedule, as well as his extravagant lifestyle, took their toll on his health. He was diagnosed with syphilis as early as 1822, and his remedy, which included mercury and opium, came with serious physical and psychological side effects. In 1834, while still in Paris, he was treated for tuberculosis. Though his recovery was reasonably quick, after the illness his career was marred by frequent cancellations due to various health problems, from the common cold to depression, which lasted from days to months.
In September 1834, Paganini put an end to his concert career and returned to Genoa. Contrary to popular beliefs involving his wishing to keep his music and techniques secret, Paganini devoted his time to the publication of his compositions and violin methods. He accepted students, of whom two enjoyed moderate success: violinist Camillo Sivori and cellist Gaetano Ciandelli. Neither, however, considered Paganini helpful or inspirational. In 1835, Paganini returned to Parma, this time under the employ of Archduchess Marie Louise of Austria, Napoleon's second wife. He was in charge of reorganizing her court orchestra. However, he eventually conflicted with the players and court, so his visions never saw completion. In Paris, he befriended the 11-year-old Polish virtuoso Apollinaire de Kontski, giving him some lessons and a signed testimonial. It was widely put about, falsely, that Paganini was so impressed with de Kontski's skills that he bequeathed him his violins and manuscripts.
In 1836, Paganini returned to Paris to set up a casino. Its immediate failure left him in financial ruin, and he auctioned off his personal effects, including his musical instruments, to recoup his losses. At Christmas of 1838, he left Paris for Marseilles and, after a brief stay, travelled to Nice where his condition worsened. In May 1840, the Bishop of Nice sent Paganini a local parish priest to perform the last rites. Paganini assumed the sacrament was premature, and refused.
A week later, on 27 May 1840, Paganini died from internal hemorrhaging before a priest could be summoned. Because of this, and his widely rumored association with the devil, the Church denied his body a Catholic burial in Genoa. It took four years and an appeal to the Pope before the Church let his body be transported to Genoa, but it was still not buried. His body was finally buried in 1876, in a cemetery in Parma. In 1893, the Czech violinist František Ondříček persuaded Paganini's grandson, Attila, to allow a viewing of the violinist's body. After this episode, Paganini's body was finally reinterred in a new cemetery in Parma in 1896.
Though having no shortage of romantic conquests, Paganini was seriously involved with a singer named Antonia Bianchi from Como, whom he met in Milan in 1813. The two gave concerts together throughout Italy. They had a son, Achille Ciro Alessandro, born on 23 July 1825 in Palermo and baptized at San Bartolomeo's. They never legalized their union and it ended around April 1828 in Vienna. Paganini brought Achille on his European tours, and Achille later accompanied his father until the latter's death. He was instrumental in dealing with his father's burial, years after his death.
Throughout his career, Paganini also became close friends with composers Gioachino Rossini and Hector Berlioz. Rossini and Paganini met in Bologna in the summer of 1818. In January 1821, on his return from Naples, Paganini met Rossini again in Rome, just in time to become the substitute conductor for Rossini's opera "Matilde di Shabran", upon the sudden death of the original conductor. Paganini's efforts earned gratitude from Rossini.
Paganini met Berlioz in Paris, and the two kept up a frequent correspondence as penfriends. He commissioned a piece from the composer, but was not satisfied with the resulting four-movement piece for orchestra and viola obbligato, "Harold en Italie". He never performed it; it was instead premiered a year later by the violist Christian Urhan. He did, however, write his own "Sonata per Gran Viola" Op. 35 (with orchestra or guitar accompaniment). Despite his alleged lack of interest in "Harold", Paganini often referred to Berlioz as the resurrection of Beethoven and, towards the end of his life, he gave large sums to the composer. They shared an active interest in the guitar, which they both played and used in compositions. Paganini gave Berlioz a guitar, which they both signed on its sound box.
Paganini was in possession of a number of fine stringed instruments. More legendary than these were the circumstances under which he obtained (and lost) some of them. While Paganini was still a teenager in Livorno, a wealthy businessman named Livron lent him a violin, made by the master luthier Giuseppe Guarneri, for a concert. Livron was so impressed with Paganini's playing that he refused to take it back. This particular violin came to be known as "Il Cannone Guarnerius". On a later occasion in Parma, he won another valuable violin (also by Guarneri) after a difficult sight-reading challenge from a man named Pasini.
Other instruments associated with Paganini include the "Antonio Amati" 1600, the "Nicolò Amati" 1657, the "Paganini-Desaint" 1680 Stradivari, the Guarneri-filius "Andrea" 1706, the "Le Brun" 1712 Stradivari, the "Vuillaume" c. 1720 Bergonzi, the "Hubay" 1726 Stradivari, and the "Comte Cozio di Salabue" 1727 violins; the "Countess of Flanders" 1582 da Salò-di Bertolotti, and the "Mendelssohn" 1731 Stradivari violas; the "Piatti" 1700 Goffriller, the "Stanlein" 1707 Stradivari, and the "Ladenburg" 1736 Stradivari cellos; and the "Grobert of Mirecourt" 1820 (guitar). Four of these instruments were played by the Tokyo String Quartet.
Of his guitars, there is little evidence remaining of his various choices of instrument. The aforementioned guitar that he gave to Berlioz is a French instrument made by one Grobert of Mirecourt. The luthier made his instrument in the style of René Lacôte, a better-known Paris-based guitar maker. It is preserved and on display in the Musée de la Musique in Paris.
Of the guitars he owned through his life, there was an instrument by Gennaro Fabricatore that he had refused to sell even in his periods of financial stress, and was among the instruments in his possession at the time of his death. There is an unsubstantiated rumour that he also played Stauffer guitars; he may certainly have come across these in his meetings with Giuliani in Vienna.
Paganini composed his own works to play exclusively in his concerts, all of which profoundly influenced the evolution of violin technique. His 24 Caprices were likely composed in the period between 1805 and 1809, while he was in the service of the Baciocchi court. Also during this period, he composed the majority of the solo pieces, duo-sonatas, trios and quartets for the guitar, either as a solo instrument or with strings. These chamber works may have been inspired by the publication, in Lucca, of the guitar quintets of Boccherini. Many of his variations, including "Le Streghe", "The Carnival of Venice", and "Nel cor più non mi sento", were composed, or at least first performed, before his European concert tour.
Generally speaking, Paganini's compositions were technically imaginative, and the timbre of the instrument was greatly expanded as a result of these works. Sounds of different musical instruments and animals were often imitated. One such composition was titled "Il Fandango Spanolo" (The Spanish Dance), which featured a series of humorous imitations of farm animals. Even more outrageous was a solo piece "Duetto Amoroso", in which the sighs and groans of lovers were intimately depicted on the violin. There survives a manuscript of the "Duetto", which has been recorded. The existence of the "Fandango" is known only through concert posters.
However, his works were criticized for lacking characteristics of true polyphonism, as pointed out by Eugène Ysaÿe. Yehudi Menuhin, on the other hand, suggested that this might have been the result of his reliance on the guitar (in lieu of the piano) as an aid in composition. The orchestral parts for his concertos were often polite, unadventurous, and clearly supportive of the soloist. In this, his style is consistent with that of other Italian composers such as Giovanni Paisiello, Gioachino Rossini and Gaetano Donizetti, who were influenced by the guitar-song milieu of Naples during this period.
Paganini was also the inspiration of many prominent composers. Both "La Campanella" and the A minor Caprice (No. 24) have been an object of interest for a number of composers. Franz Liszt, Robert Schumann, Johannes Brahms, Sergei Rachmaninoff, Boris Blacher, Andrew Lloyd Webber, George Rochberg and Witold Lutosławski, among others, wrote well-known variations on these themes.
The Israeli violinist Ivry Gitlis once referred to Paganini as a phenomenon rather than a development. Though some of the techniques frequently employed by Paganini were already present, most accomplished violinists of the time focused on intonation and bowing techniques. Arcangelo Corelli (1653–1713) was considered a pioneer in transforming the violin from an ensemble instrument to a solo instrument. In the meantime, the polyphonic capability of the violin was firmly established through the Sonatas and Partitas BWV 1001–1006 of Johann Sebastian Bach (1685–1750). Other notable violinists included Antonio Vivaldi (1678–1741) and Giuseppe Tartini (1692–1770), who, in their compositions, reflected the increasing technical and musical demands on the violinist. Although the role of the violin in music drastically changed through this period, progress in violin technique was steady but slow. Techniques requiring agility of the fingers and the bow were still considered unorthodox and discouraged by the established community of violinists.
Much of Paganini's playing (and his violin composition) was influenced by two violinists, Pietro Locatelli (1693–1746) and August Duranowski (Auguste Frédéric Durand) (1770–1834). During Paganini's study in Parma, he came across the 24 Caprices of Locatelli (entitled "L'arte di nuova modulazione – Capricci enigmatici" or "The art of the new style – the enigmatic caprices"). Published in the 1730s, they were shunned by the musical authorities for their technical innovations, and were forgotten by the musical community at large. Around the same time, Durand, a former student of Giovanni Battista Viotti (1755–1824), became a celebrated violinist. He was renowned for his use of harmonics, both natural and artificial (which had previously not been attempted in performance), and the left hand pizzicato in his performance. Paganini was impressed by Durand's innovations and showmanship, which later also became the hallmarks of the young violin virtuoso. Paganini was instrumental in the revival and popularization of these violinistic techniques, which are now incorporated into regular compositions.
Another aspect of Paganini's violin techniques concerned his flexibility. He had exceptionally long fingers and was capable of playing three octaves across four strings in a hand span, an extraordinary feat even by today's standards. His seemingly unnatural ability may have been a result of Marfan syndrome.
Many notable works have been inspired by Paganini's compositions.
The "Caprice No. 24 in A minor", Op. 1, ("Tema con variazioni") has been the basis of works by many other composers. Notable examples include Brahms's "Variations on a Theme of Paganini" and Rachmaninoff's "Rhapsody on a Theme of Paganini".
The Paganini Competition ("Premio Paganini") is an international violin competition created in 1954 in his home city of Genoa and named in his honour.
In 1972 the State of Italy purchased a large collection of Niccolò Paganini manuscripts from the W. Heyer Library of Cologne. They are housed at the Biblioteca Casanatense in Rome.
In 1982 the city of Genoa commissioned a thematic catalogue of music by Paganini, edited by Maria Rosa Moretti and Anna Sorrento, hence the abbreviation "MS" assigned to his catalogued works.
A minor planet 2859 Paganini discovered in 1978 by Soviet astronomer Nikolai Chernykh is named after him.
Although no photographs of Paganini are known to exist, in 1900 the Italian violin maker Giuseppe Fiorini forged the now famous fake daguerreotype of the celebrated violinist. The forgery was so convincing that even the classical music author and conversationalist Arthur M. Abell was led to believe it genuine, reprinting the image in the 22 January 1901 issue of the "Musical Courier".
Paganini has been portrayed by a number of actors in film and television productions, including Stewart Granger in the 1946 biographical portrait "The Magic Bow", Roxy Roth in "A Song to Remember" (1945), Klaus Kinski in "Kinski Paganini" (1989) and David Garrett in "The Devil's Violinist" (2013).
In the Soviet 1982 miniseries "Niccolo Paganini", the musician was portrayed by the Armenian actor Vladimir Msryan. The series focuses on Paganini's relationship with the Roman Catholic Church. Another Soviet actor, Armen Dzhigarkhanyan, played Paganini's fictionalized arch-rival, an insidious Jesuit official. The information in the series is generally spurious, and it also plays to some of the myths and legends rampant during the musician's lifetime. One memorable scene shows Paganini's adversaries sabotaging his violin before a high-profile performance, causing all strings but one to break during the concert. An undeterred Paganini continues to perform on three, two, and finally on a single string. In actuality, Paganini himself occasionally broke strings during his performances on purpose so he could further display his virtuosity. He did this by carefully filing notches into them to weaken them, so that they would break when in use.
In Don Nigro's satirical comedy play "Paganini" (1995), the great violinist seeks vainly for his salvation, claiming that he unknowingly sold his soul to the Devil. "Variation upon variation," he cries at one point, "but which variation leads to salvation and which to damnation? Music is a question for which there is no answer." Paganini is portrayed as having killed three of his lovers and sinking repeatedly into poverty, prison, and drink. Each time he is "rescued" by the Devil, who appears in different guises, returning Paganini's violin so he can continue playing. In the end, Paganini's salvation—administered by a god-like Clockmaker—turns out to be imprisonment in a large bottle where he plays his music for the amusement of the public through all eternity. "Do not pity him, my dear," the Clockmaker tells Antonia, one of Paganini's murdered wives. "He is alone with the answer for which there is no question. The saved and the damned are the same."
| https://en.wikipedia.org/wiki?curid=21511 |
Nanomedicine
Nanomedicine is the medical application of nanotechnology. Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines. Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter).
Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles.
Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future. The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging. Nanomedicine research is receiving funding from the US National Institutes of Health Common Fund program, supporting four nanomedicine development centers.
Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013. As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy.
Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles. The overall drug consumption and side-effects may be lowered significantly by depositing the active agent in the morbid region only and in no higher dose than needed. Targeted drug delivery is intended to reduce the side effects of drugs with concomitant decreases in consumption and treatment expenses. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting by nanoengineered devices. A benefit of using nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, plus biochemical reaction times are much shorter. These devices are faster and more sensitive than typical drug delivery. The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drugs, b) successful delivery of drug to the targeted region of the body, and c) successful release of the drug.
Drug delivery systems, lipid- or polymer-based nanoparticles, can be designed to improve the pharmacokinetics and biodistribution of the drug. However, the pharmacokinetics and pharmacodynamics of nanomedicine are highly variable among different patients. When designed to avoid the body's defence mechanisms, nanoparticles have beneficial properties that can be used to improve drug delivery. Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently. Drugs are placed in the body and only activate on encountering a particular signal. For example, a drug with poor solubility will be replaced by a drug delivery system where both hydrophilic and hydrophobic environments exist, improving the solubility. Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect due to the host's complex reactions to nano- and microsized materials and the difficulty in targeting specific organs in the body. Nevertheless, much work is still ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. While advancing research shows that targeting and distribution can be augmented by nanoparticles, understanding the dangers of nanotoxicity becomes an important next step in further understanding of their medical uses.
Nanoparticles are under research for their potential to decrease antibiotic resistance or for various antimicrobial uses. Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms.
Advances in lipid nanotechnology were instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications. Another system for microRNA delivery under preliminary research is nanoparticles formed by the self-assembly of two different microRNAs deregulated in cancer. One potential application is based on small electromechanical systems, such as nanoelectromechanical systems being investigated for the active release of drugs and sensors for possible cancer treatment with iron nanoparticles or gold shells.
Some nanotechnology-based drugs are commercially available or in human clinical trials.
Existing and potential drug nanocarriers have been reviewed.
Nanoparticles have high surface area to volume ratio. This allows for many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (5 to 100 nanometers), allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system). Limitations to conventional cancer chemotherapy include drug resistance, lack of selectivity, and lack of solubility.
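The scaling claim can be made concrete with a back-of-envelope calculation: for a sphere, surface area over volume reduces to 3/r, so the ratio grows as the radius shrinks. The radii in the sketch below are illustrative.

```python
# Back-of-envelope sketch: surface-area-to-volume ratio of a sphere is 3 / r.
import math

def sa_to_volume(r_m):
    """Ratio (4*pi*r^2) / ((4/3)*pi*r^3) for a sphere of radius r_m metres."""
    return (4.0 * math.pi * r_m**2) / ((4.0 / 3.0) * math.pi * r_m**3)

# A 10 nm particle has a ratio 100x that of a 1 micrometre particle.
for r in (1e-6, 100e-9, 10e-9):
    print(f"r = {r:.0e} m  ->  SA/V = {sa_to_volume(r):.2e} per metre")
```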
"In vivo" imaging is another area where tools and devices are being developed. Using nanoparticle contrast agents, images such as ultrasound and MRI have a favorable distribution and improved contrast. In cardiovascular imaging, nanoparticles have potential to aid visualization of blood pooling, ischemia, angiogenesis, atherosclerosis, and focal areas where inflammation is present.
The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and only need one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher contrast image and at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements, but this concern may be addressed by use of fluorescent dopants.
Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order for them to light up. While different color dyes absorb different frequencies of light, there was a need for as many light sources as cells. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes. The dots can be random in size, can be made of bio-inert material, and they demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make one group of quantum dots fluoresce is an even multiple of the frequency required to make another group fluoresce. Then both groups can be lit with a single light source. Researchers have also found a way to insert nanoparticles into the affected parts of the body so that those parts will glow, showing tumor growth or shrinkage, as well as organ trouble.
Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. In particular silica nanoparticles are inert from the photophysical point of view and might accumulate a large number of dye(s) within the nanoparticle shell. Gold nanoparticles tagged with short segments of DNA can be used for detection of genetic sequence in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.
Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in the early stages from a few drops of a patient's blood. Nanotechnology is helping to advance the use of arthroscopes, pencil-sized devices with lights and cameras that allow surgeons to operate through smaller incisions. The smaller the incision, the faster the healing time, which is better for patients. Researchers are also working towards an arthroscope smaller than a strand of hair.
Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. Such devices could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity a thousand times better than that of a conventional laboratory test. These devices are built with nanowires to detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker. The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device. Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer: treatment can now be tailored to each individual's tumor for better performance. Researchers have also found ways to target the specific part of the body affected by cancer.
Magnetic micro particles are proven research instruments for the separation of cells and proteins from complex media. The technology is available under the name Magnetic-activated cell sorting or Dynabeads among others. More recently it was shown in animal models that magnetic nanoparticles can be used for the removal of various noxious compounds including toxins, pathogens, and proteins from whole blood in an extracorporeal circuit similar to dialysis. In contrast to dialysis, which works on the principle of the size related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, the purification with nanoparticles allows specific targeting of substances. Additionally larger compounds which are commonly not dialyzable can be removed.
The purification process is based on functionalized iron oxide or carbon coated metal nanoparticles with ferromagnetic or superparamagnetic properties. Binding agents such as proteins, antibodies, antibiotics, or synthetic ligands are covalently linked to the particle surface. These binding agents are able to interact with target species forming an agglomerate. Applying an external magnetic field gradient allows exerting a force on the nanoparticles. Hence the particles can be separated from the bulk fluid, thereby cleaning it from the contaminants.
The small size (< 100 nm) and large surface area of functionalized nanomagnets leads to advantageous properties compared to hemoperfusion, which is a clinically used technique for the purification of blood and is based on surface adsorption. These advantages are a high loading capacity and an accessible surface for binding agents, high selectivity towards the target compound, fast diffusion, small hydrodynamic resistance, and low dosage.
This approach offers new therapeutic possibilities for the treatment of systemic infections such as sepsis by directly removing the pathogen. It can also be used to selectively remove cytokines or endotoxins or for the dialysis of compounds which are not accessible by traditional dialysis methods. However, the technology is still in a preclinical phase and the first clinical trials are not expected before 2017.
Nanotechnology may be used as part of tissue engineering to help reproduce or repair or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. Tissue engineering if successful may replace conventional treatments like organ transplants or artificial implants. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles in the polymer matrix at low concentrations (~0.2 weight %) leads to significant improvements in the compressive and flexural mechanical properties of polymeric nanocomposites. Potentially, these nanocomposites may be used as a novel, mechanically strong, light weight composite as bone implants.
For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery.
Another example is nanonephrology, the use of nanomedicine on the kidney.
Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to be joined and linked to the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer. A refuelable strategy implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a nonrefuelable strategy implies that all power is drawn from internal energy storage which would stop when all energy is drained. A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed that uses glucose from biofluids including human blood and watermelons. One limitation of this innovation is that electrical interference, leakage, or overheating from power consumption is possible. The wiring of the structure is extremely difficult because the structures must be positioned precisely in the nervous system. The structures that will provide the interface must also be compatible with the body's immune system.
Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots are far beyond current capabilities. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular machines, in his 1986 book "Engines of Creation", with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999. Raymond Kurzweil, a futurist and transhumanist, stated in his book "The Singularity Is Near" that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a "medical" use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay "There's Plenty of Room at the Bottom." | https://en.wikipedia.org/wiki?curid=21514 |
Null set
In mathematical analysis, a null set $N \subseteq \mathbb{R}$ is a set that can be covered by a countable union of intervals of arbitrarily small total length. The notion of null set in set theory anticipates the development of Lebesgue measure, since a null set necessarily has measure zero. More generally, on a given measure space $(X, \Sigma, \mu)$ a null set is a set $N \in \Sigma$ such that $\mu(N) = 0$.
Every countable subset of the real numbers (i.e. every finite or countably infinite set) is null. For example, the set of natural numbers, which is countable with cardinality $\aleph_0$ ("aleph-zero" or "aleph-null"), is null. Another example is the set of rational numbers, which is also countable, and hence null.
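The standard covering argument behind these examples can be sketched explicitly: given a countable set $\{q_1, q_2, \ldots\}$ and any $\varepsilon > 0$, cover the $n$-th point with an interval of length $\varepsilon / 2^n$.

```latex
% Covering argument: a countable set has arbitrarily small total covering length.
\[
  q_n \in U_n := \left( q_n - \frac{\varepsilon}{2^{n+1}},\; q_n + \frac{\varepsilon}{2^{n+1}} \right),
  \qquad |U_n| = \frac{\varepsilon}{2^n},
\]
\[
  \sum_{n=1}^{\infty} |U_n| \;=\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} \;=\; \varepsilon .
\]
% Since epsilon > 0 was arbitrary, the set is null.
```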
However, there are some uncountable sets, such as the Cantor set, that are null.
Suppose $A$ is a subset of the real line $\mathbb{R}$ such that for every $\varepsilon > 0$ there exists a sequence $U_1, U_2, \ldots$ of intervals with $A \subseteq \bigcup_{n} U_n$ and $\sum_{n} |U_n| < \varepsilon$, where $|U|$ is the length of the interval $U$; then $A$ is a null set, also known as a set of zero-content.
In the terminology of mathematical analysis, this definition requires that there be a sequence of open covers of $A$ for which the limit of the total lengths of the covers is zero.
Null sets include all finite sets, all countable sets, and even some uncountable sets such as the Cantor set.
The empty set is always a null set. More generally, any countable union of null sets is null. Any measurable subset of a null set is itself a null set. Together, these facts show that the "m"-null sets of "X" form a sigma-ideal on "X". Similarly, the measurable "m"-null sets form a sigma-ideal of the sigma-algebra of measurable sets. Thus, null sets may be interpreted as negligible sets, defining a notion of almost everywhere.
The Lebesgue measure is the standard way of assigning a length, area or volume to subsets of Euclidean space.
A subset "N" of formula_7 has null Lebesgue measure and is considered to be a null set in formula_7 if and only if:
This condition can be generalised to formula_12, using "n"-cubes instead of intervals. In fact, the idea can be made to make sense on any Riemannian manifold, even if there is no Lebesgue measure there.
For instance, all singleton sets are null in $\mathbb{R}$, as are all countable sets and the Cantor set.
If $\lambda$ is Lebesgue measure for $\mathbb{R}$ and $\pi$ is Lebesgue measure for $\mathbb{R}^2$, then the product measure $\lambda \times \lambda = \pi$. In terms of null sets, the following equivalence has been styled a Fubini's theorem: for $A \subseteq \mathbb{R}^2$ and $A_x = \{y : (x, y) \in A\}$, $\pi(A) = 0$ if and only if $\lambda(\{x : \lambda(A_x) > 0\}) = 0$.
Null sets play a key role in the definition of the Lebesgue integral: if functions "f" and "g" are equal except on a null set, then "f" is integrable if and only if "g" is, and their integrals are equal.
A measure in which all subsets of null sets are measurable is "complete". Any non-complete measure can be completed to form a complete measure by asserting that subsets of null sets have measure zero. Lebesgue measure is an example of a complete measure; in some constructions, it is defined as the completion of a non-complete Borel measure.
The Borel measure is not complete. One simple construction is to start with the standard Cantor set "K", which is closed hence Borel measurable, and which has measure zero, and to find a subset "F" of "K" which is not Borel measurable. (Since the Lebesgue measure is complete, this "F" is of course Lebesgue measurable.)
First, we have to know that every set of positive measure contains a nonmeasurable subset. Let "f" be the Cantor function, a continuous function which is locally constant on "K"c (the complement of "K"), and monotonically increasing on [0, 1], with "f"(0) = 0 and "f"(1) = 1. Obviously, "f"("K"c) is countable, since it contains one point per component of "K"c. Hence "f"("K"c) has measure zero, so "f"("K") has measure one. We need a strictly monotonic function, so consider "g"("x") = "f"("x") + "x". Since "g"("x") is strictly monotonic and continuous, it is a homeomorphism. Furthermore, "g"("K") has measure one. Let "E" ⊂ "g"("K") be non-measurable, and let "F" = "g"−1("E"). Because "g" is injective, we have that "F" ⊂ "K", and so "F" is a null set. However, if it were Borel measurable, then "g"("F") would also be Borel measurable (here we use the fact that the preimage of a Borel set by a continuous function is measurable; "g"("F") = ("g"−1)−1("F") is the preimage of "F" through the continuous function "h" = "g"−1.) Therefore, "F" is a null, but non-Borel measurable set.
In a separable Banach space ("X", +), the group operation moves any subset "A" ⊂ "X" to the translates "A" + "x" for any "x" ∈ "X". When there is a probability measure μ on the σ-algebra of Borel subsets of "X", such that for all "x", μ("A" + "x") = 0, then "A" is a Haar null set.
The term refers to the null invariance of the measures of translates, associating it with the complete invariance found with Haar measure.
Some algebraic properties of topological groups have been related to the size of subsets and Haar null sets.
Haar null sets have been used in Polish groups to show that when "A" is not a meagre set then "A"−1"A" contains an open neighborhood of the identity element. This property is named for Hugo Steinhaus since it is the conclusion of the Steinhaus theorem. | https://en.wikipedia.org/wiki?curid=21520 |
Artificial neural network
Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge of cats, for example that they have fur, tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics from the examples that they process.
An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it.
In ANN implementations, the "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called "edges". Neurons and edges typically have a "weight" that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
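A minimal sketch of one such neuron follows; the sigmoid activation and the particular weights, inputs, and bias are illustrative assumptions rather than features of any specific network.

```python
# Sketch of a single artificial neuron: a non-linear function of the
# weighted sum of its inputs plus a bias.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    z = np.dot(weights, inputs) + bias   # aggregate incoming signals
    return sigmoid(z)                    # non-linear output signal

x = np.array([0.5, -1.0, 2.0])           # signals from predecessor neurons
w = np.array([0.4, 0.6, -0.1])           # connection weights (edges)
print(neuron(x, w, bias=0.2))            # signal passed to connected neurons
```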
The original goal of the ANN approach was to solve problems in the same way that a human brain would. But over time, attention moved to performing specific tasks, leading to deviations from biology. ANNs have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, medical diagnosis, and even activities that have traditionally been considered reserved to humans, like painting.
Warren McCulloch and Walter Pitts (1943) opened the subject by creating a computational model for neural networks. In the late 1940s, D. O. Hebb created a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. Farley and Wesley A. Clark (1954) first used computational machines, then called "calculators", to simulate a Hebbian network. Rosenblatt (1958) created the perceptron. The first functional networks with many layers were published by Ivakhnenko and Lapa in 1965, as the Group Method of Data Handling. The basics of continuous backpropagation were derived in the context of control theory by Kelley in 1960 and by Bryson in 1961, using principles of dynamic programming.
In 1970, Seppo Linnainmaa published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions. In 1973, Dreyfus used backpropagation to adapt parameters of controllers in proportion to error gradients. Werbos's (1975) backpropagation algorithm enabled practical training of multi-layer networks. In 1982, he applied Linnainmaa's AD method to neural networks in the way that became widely used. Earlier, research had stagnated following Minsky and Papert (1969), who discovered that basic perceptrons were incapable of processing the exclusive-or circuit and that computers lacked sufficient power to process useful neural networks.
Increasing transistor count in digital electronics provided more processing power that enabled the development of practical artificial neural networks in the 1980s.
In 1992, max-pooling was introduced to help with least-shift invariance and tolerance to deformation to aid 3D object recognition. Schmidhuber adopted a multi-level hierarchy of networks (1992) pre-trained one level at a time by unsupervised learning and fine-tuned by backpropagation.
Geoffrey Hinton et al. (2006) proposed learning a high-level representation using successive layers of binary or real-valued latent variables with a restricted Boltzmann machine to model each layer. In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Ciresan and colleagues (2010) showed that despite the vanishing gradient problem, GPUs make backpropagation feasible for many-layered feedforward neural networks. Between 2009 and 2012, ANNs began winning prizes in ANN contests, approaching human level performance on various tasks, initially in pattern recognition and machine learning. For example, the bi-directional and multi-dimensional long short-term memory (LSTM) of Graves et al. won three competitions in connected handwriting recognition in 2009 without any prior knowledge about the three languages to be learned.
Ciresan and colleagues built the first pattern recognizers to achieve human-competitive/superhuman performance on benchmarks such as traffic sign recognition (IJCNN 2012).
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, mostly abandoning attempts to remain true to their biological precursors. Neurons are connected to each other in various patterns, to allow the output of some neurons to become the input of others. The network forms a directed, weighted graph.
An artificial neural network consists of a collection of simulated neurons. Each neuron is a node which is connected to other nodes via links that correspond to biological axon-synapse-dendrite connections. Each link has a weight, which determines the strength of one node's influence on another.
ANNs are composed of artificial neurons which retain the biological concept of neurons, which receive input, combine the input with their internal state ("activation") and an optional "threshold" using an "activation function", and produce output using an "output function". The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image. The important characteristic of the activation function is that it provides a smooth, differentiable transition as input values change, i.e. a small change in input produces a small change in output.
The network consists of connections, each connection providing the output of one neuron as an input to another neuron. Each connection is assigned a weight that represents its relative importance. A given neuron can have multiple input and output connections.
The "propagation function" computes the input to a neuron from the outputs of its predecessor neurons and their connections as a weighted sum. A "bias" term can be added to the result of the propagation.
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the "input layer". The layer that produces the ultimate result is the "output layer". In between them are zero or more "hidden layers". Single-layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be "fully connected", with every neuron in one layer connecting to every neuron in the next layer. They can be "pooling", where a group of neurons in one layer connect to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as "feedforward networks". Alternatively, networks that allow connections between neurons in the same or previous layers are known as "recurrent networks".
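The sketch below traces a fully connected feedforward pass through such layers; the layer sizes, tanh activation, and random weights are illustrative assumptions.

```python
# Sketch of a layered feedforward pass: input layer -> hidden layers -> output layer.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 5, 3, 2]          # input, two hidden, output (illustrative)
weights = [rng.normal(size=(m, n))  # one weight matrix per pair of adjacent layers
           for n, m in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

def forward(x):
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)      # weighted sum, bias, then non-linearity
    return a                        # activations of the output layer

print(forward(np.array([1.0, 0.0, -1.0, 0.5])))
```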
A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
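The interaction of learning rate and momentum can be written as a one-line update rule; the sketch below applies it to a toy quadratic cost, with all constants illustrative.

```python
# Sketch of a momentum update: the new change blends the current gradient
# with the previous change, as described above.
def momentum_step(w, grad, prev_delta, learning_rate=0.1, momentum=0.9):
    delta = -learning_rate * grad + momentum * prev_delta
    return w + delta, delta          # updated weight and the change to remember

w, prev = 1.0, 0.0
for step in range(5):
    grad = 2.0 * w                   # gradient of the toy cost w**2
    w, prev = momentum_step(w, grad, prev)
    print(step, w)
```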
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) or because it arises from the model (e.g., in a probabilistic model the model's posterior probability can be used as an inverse cost).
Backpropagation is a method to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods; alternatives to backpropagation include Extreme Learning Machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
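As a hedged illustration of the gradient calculation, the following Python sketch applies the chain rule to a single sigmoid neuron with a squared-error cost; it is a toy example, not a full multilayer implementation or any particular library's API:

```python
import math

# Illustrative backpropagation for a single sigmoid neuron with a
# squared-error cost C = (y - t)^2 / 2 (a toy sketch, not a full network).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_single_neuron(x, w, b, t, lr=0.5):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b   # forward pass
    y = sigmoid(z)
    # Chain rule: dC/dw_i = (y - t) * y * (1 - y) * x_i
    delta = (y - t) * y * (1.0 - y)
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b = b - lr * delta
    return w, b, (y - t) ** 2 / 2

w, b = [0.1, -0.2], 0.0
for _ in range(100):
    w, b, cost = backprop_single_neuron([1.0, 2.0], w, b, t=1.0)
print(round(cost, 6))   # the cost shrinks as the weights adapt
```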
The three major learning paradigms are supervised learning, unsupervised learning and reinforcement learning. They each correspond to a particular learning task.
Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
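A minimal Python sketch of this cost (the outputs and targets are invented for the example):

```python
# Mean-squared error between network outputs and desired outputs.
def mse(outputs, targets):
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

print(mse([0.9, 0.2, 0.8], [1.0, 0.0, 1.0]))  # 0.03
```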
In unsupervised learning, input data is given along with the cost function, some function of the data x and the network's output f(x). The cost function is dependent on the task (the model domain) and any "a priori" assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a where a is a constant and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
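The trivial example can be verified directly: gradient descent on the constant a under the cost C = E[(x − a)²] converges to the mean of the data. A small Python sketch with invented data:

```python
# Gradient descent on the constant model f(x) = a with cost E[(x - a)^2]:
# the minimizer is the mean of the data.
data = [2.0, 4.0, 9.0]
a, lr = 0.0, 0.1
for _ in range(200):
    grad = sum(2 * (a - x) for x in data) / len(data)   # dC/da
    a -= lr * grad
print(a)   # approaches mean(data) = 5.0
```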
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
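A simple way to code the explore-versus-exploit decision is an epsilon-greedy rule, sketched below with invented action names and cost estimates (one common approach, not the only one):

```python
import random

# Epsilon-greedy choice between exploring new actions and exploiting
# the action with the lowest estimated cost (illustrative values).
estimated_cost = {"left": 1.2, "right": 0.7}

def choose_action(epsilon=0.1):
    if random.random() < epsilon:                       # explore
        return random.choice(list(estimated_cost))
    return min(estimated_cost, key=estimated_cost.get)  # exploit lowest cost

print(choose_action())
```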
Formally the environment is modeled as a Markov decision process (MDP) with states s₁, ..., s_n ∈ S and actions a₁, ..., a_m ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of the ability of ANNs to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named Crossbar Adaptive Array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix W = ||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation: in situation s perform action a; receive the consequence situation s'; compute the emotion of being in the consequence situation v(s'); and update the crossbar memory w'(a,s) = w(a,s) + v(s').
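The following Python sketch is one possible reading of a single crossbar iteration; the concrete update rule w(a, s) += v(s') and the toy environment are assumptions made for illustration, not the CAA's original formulation:

```python
# Hedged sketch of one crossbar iteration: act, observe the consequence
# situation, feel it, update memory. The update w(a,s) += v(s') is an
# assumption made for this illustration.
W = [[0.0, 0.0], [0.0, 0.0]]   # memory matrix w(a, s): 2 actions x 2 situations
v = [0.5, -1.0]                # genome vector: initial emotion toward each situation

def caa_step(s, transition):
    a = max(range(len(W)), key=lambda act: W[act][s])   # decide action in situation s
    s_next = transition(s, a)                           # consequence situation
    W[a][s] += v[s_next]                                # backpropagate the emotion v(s')
    return s_next

s = 0
s = caa_step(s, lambda s, a: (s + a + 1) % 2)           # toy environment
print(W)
```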
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment where it behaves, and the other is the genetic environment, from which it initially and only once receives initial emotions about situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA will learn a goal-seeking behavior in the behavioral environment that contains both desirable and undesirable situations.
In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation-maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
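A sketch of the mini-batch compromise in Python (the data set and batch size are invented; one weight update per batch would go where the comment indicates):

```python
import random

# Mini-batch selection: small batches drawn stochastically from the
# entire data set.
dataset = list(range(100))

def minibatches(data, batch_size=8):
    shuffled = random.sample(data, len(data))   # shuffled copy
    for i in range(0, len(shuffled), batch_size):
        yield shuffled[i:i + batch_size]

for batch in minibatches(dataset):
    pass   # one weight update per batch would go here
```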
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter are much more complicated, but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include: convolutional neural networks that have proven particularly successful in processing visual and other two-dimensional data; long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that have a mix of low and high frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads; and competitive networks such as generative adversarial networks, in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or deceiving the opponent about the authenticity of an input.
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras.
Design issues include deciding the number, type and connectedness of network layers, as well as the size of each and the connection type (full, pooling, ...).
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc.
Using artificial neural networks requires an understanding of their characteristics.
ANN capabilities fall within the following broad categories: function approximation, or regression analysis (including time series prediction, fitness approximation and modeling); classification (including pattern and sequence recognition, novelty detection and sequential decision making); data processing (including filtering, clustering, blind source separation and compression); and robotics (including directing manipulators and prostheses).
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. Application areas include system identification and control (vehicle control, trajectory prediction, process control, natural resource management), quantum chemistry, general game playing, pattern recognition (radar systems, face identification, signal classification, 3D reconstruction, object recognition and more), sequence recognition (gesture, speech, handwritten and printed text recognition), medical diagnosis, finance (e.g. automated trading systems), data mining, visualization, machine translation, social network filtering and e-mail spam filtering. ANNs have been used to diagnose cancers, including lung cancer, prostate cancer and colorectal cancer, and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate and malicious activities. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing and for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, how the dynamics of neural circuitry arise from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are known by the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory and finds the maximum capacity under the best possible circumstances, that is, with input data given in a specific form. The VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.
Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
The convergence behavior of certain types of ANN architectures is better understood than that of others. As the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example: when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method.
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters. Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters to minimize the generalization error.
The second is to use some form of "regularization". This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly corresponds to the error over the training set and the predicted error in unseen data due to overfitting.
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
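A minimal sketch of such a confidence analysis in Python, assuming normally distributed errors and using 1.96 standard deviations for an approximate 95% interval (all numbers are invented for the example):

```python
import math

# Treat the validation MSE as a variance estimate and build an
# approximate 95% confidence interval around a network output,
# assuming normally distributed errors.
validation_errors = [0.1, -0.2, 0.05, 0.15, -0.1]
mse = sum(e * e for e in validation_errors) / len(validation_errors)
std = math.sqrt(mse)

network_output = 3.2
print((network_output - 1.96 * std, network_output + 1.96 * std))
```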
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is y_i = exp(x_i) / Σ_j exp(x_j), where x_i is the net input to output unit i; the outputs y_i are positive and sum to 1.
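A common implementation subtracts the maximum input before exponentiating, which improves numerical stability without changing the result; a minimal Python sketch:

```python
import math

# Softmax: subtracting max(x) before exponentiating avoids overflow
# and leaves the result unchanged.
def softmax(x):
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

print(softmax([2.0, 1.0, 0.1]))  # components sum to 1, readable as probabilities
```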
A common criticism of neural networks, particularly in robotics, is that they require too much training for real-world operation. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too-large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and introducing a recursive least squares algorithm for CMAC.
A fundamental objection is that ANNs do not sufficiently reflect neuronal function. Backpropagation is a critical step, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known. Sensory neurons fire action potentials more frequently with sensor activation, and muscle cells pull more strongly when their associated motor neurons receive action potentials more frequently. Other than the case of relaying information from a sensory neuron to a motor neuron, almost nothing of the principles of how information is handled by biological neural networks is known.
A central claim of ANNs is that they embody new and powerful general principles for processing information. Unfortunately, these principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. Alexander Dewdney commented that, as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything". One response to Dewdney is that neural networks handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons which require enormous CPU power and time.
Schmidhuber noted that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful, for example, local vs. non-local learning and shallow vs. deep architecture.
Advocates of hybrid models (combining neural networks and symbolic approaches), claim that such a mixture can better capture the mechanisms of the human mind. | https://en.wikipedia.org/wiki?curid=21523 |
Nutrition
Nutrition is the science that interprets the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism. It includes ingestion, absorption, assimilation, biosynthesis, catabolism and excretion.
The diet of an organism is what it eats, which is largely determined by the availability and palatability of foods. For humans, a healthy diet includes preparation of food and storage methods that preserve nutrients from oxidation, heat or leaching, and that reduce the risk of foodborne illness. The seven major classes of human nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients (needed in large quantities) or micronutrients (needed in small quantities).
In humans, an unhealthy diet can cause deficiency-related diseases such as blindness, anemia, scurvy, preterm birth, stillbirth and cretinism, or nutrient excess health-threatening conditions such as obesity and metabolic syndrome; and such common chronic systemic diseases as cardiovascular disease, diabetes, and osteoporosis. Undernutrition can lead to wasting in acute cases, and the stunting of marasmus in chronic cases of malnutrition.
Carnivore and herbivore diets contrast, with basic nitrogen and carbon proportions varying for their particular foods. Many herbivores rely on bacterial fermentation to create digestible nutrients from indigestible plant cellulose, while obligate carnivores must eat animal meats to obtain certain vitamins or nutrients their bodies cannot otherwise synthesize. Animals generally have a higher requirement of energy in comparison to plants.
Plant nutrition is the study of the chemical elements that are necessary for plant growth. There are several principles that apply to plant nutrition. One is that some elements are directly involved in plant metabolism. However, this principle does not account for the so-called beneficial elements, whose presence, while not required, has clear positive effects on plant growth.
A nutrient that is able to limit plant growth according to Liebig's law of the minimum is considered an essential plant nutrient if the plant cannot complete its full life cycle without it. There are 16 essential plant soil nutrients, besides the three major elemental nutrients: carbon and oxygen, which are obtained by photosynthetic plants from carbon dioxide in air, and hydrogen, which is obtained from water.
Plants uptake essential elements from the soil through their roots and from the air (consisting of mainly nitrogen and oxygen) through their leaves. Green plants obtain their carbohydrate supply from the carbon dioxide in the air by the process of photosynthesis. Carbon and oxygen are absorbed from the air, while other nutrients are absorbed from the soil. Nutrient uptake in the soil is achieved by cation exchange, wherein root hairs pump hydrogen ions (H+) into the soil through proton pumps. These hydrogen ions displace cations attached to negatively charged soil particles so that the cations are available for uptake by the root. In the leaves, stomata open to take in carbon dioxide and expel oxygen. The carbon dioxide molecules are used as the carbon source in photosynthesis.
Although nitrogen is plentiful in the Earth's atmosphere, very few plants can use this directly. Most plants, therefore, require nitrogen compounds to be present in the soil in which they grow. This is made possible by the fact that largely inert atmospheric nitrogen is changed in a nitrogen fixation process to biologically usable forms in the soil by bacteria.
Plant nutrition is a difficult subject to understand completely, partially because of the variation between different plants and even between different species or individuals of a given clone. Elements present at low levels may cause deficiency symptoms, and toxicity is possible at levels that are too high. Furthermore, deficiency of one element may present as symptoms of toxicity from another element, and vice versa. | https://en.wikipedia.org/wiki?curid=21525 |
Number theory
Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." Number theorists study prime numbers as well as the properties of objects made out of integers (for example, rational numbers) or defined as generalizations of the integers (for example, algebraic integers).
Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory are often best understood through the study of analytical objects (for example, the Riemann zeta function) that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, for example, as approximated by the latter (Diophantine approximation).
The older term for number theory is "arithmetic". By the early twentieth century, it had been superseded by "number theory". (The word "arithmetic" is used by the general public to mean "elementary calculations"; it has also acquired other meanings in mathematical logic, as in "Peano arithmetic", and computer science, as in "floating point arithmetic".) The use of the term "arithmetic" for "number theory" regained some ground in the second half of the 20th century, arguably in part due to French influence. In particular, "arithmetical" is preferred as an adjective to "number-theoretic".
The earliest historical find of an arithmetical nature is a fragment of a table: the broken clay tablet Plimpton 322 (Larsa, Mesopotamia, ca. 1800 BCE) contains a list of "Pythagorean triples", that is, integers (a, b, c) such that a² + b² = c².
The triples are too many and too large to have been obtained by brute force. The heading over the first column reads: "The "takiltum" of the diagonal which has been subtracted such that the width..."
The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity ((x − 1/x)/2)² + 1 = ((x + 1/x)/2)², which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by c/a, presumably for actual use as a "table", for example, with a view to applications.
It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own only later. It has been suggested instead that the table was a source of numerical examples for school problems.
While Babylonian number theory—or what survives of Babylonian mathematics that can be called thus—consists of this single, striking fragment, Babylonian algebra (in the secondary-school sense of "algebra") was exceptionally well developed. Late Neoplatonic sources state that Pythagoras learned mathematics from the Babylonians. Much earlier sources state that Thales and Pythagoras traveled and studied in Egypt.
Euclid IX 21–34 is very probably Pythagorean; it is very simple material ("odd times even is even", "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it"), but it is all that is needed to prove that √2
is irrational. Pythagorean mystics gave great importance to the odd and the even.
The discovery that √2 is irrational is credited to the early Pythagoreans (pre-Theodorus). By revealing (in modern terms) that numbers could be irrational, this discovery seems to have provoked the first foundational crisis in mathematical history; its proof or its divulgation are sometimes credited to Hippasus, who was expelled or split from the Pythagorean sect. This forced a distinction between "numbers" (integers and the rationals—the subjects of arithmetic), on the one hand, and "lengths" and "proportions" (which we would identify with real numbers, whether rational or not), on the other hand.
The Pythagorean tradition spoke also of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums
of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th century).
We know of no clearly arithmetical material in ancient Egyptian or Vedic sources, though there is some algebra in both. The Chinese remainder theorem appears as an exercise in "Sunzi Suanjing" (3rd, 4th or 5th century CE). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's Kuṭṭaka – see below.)
There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere. Like the Pythagoreans' perfect numbers, magic squares have passed from superstition into recreation.
Aside from a few fragments, the mathematics of Classical Greece is known to us either through the reports of contemporary non-mathematicians or through mathematical works from the early Hellenistic period. In the case of number theory, this means, by and large, "Plato" and "Euclid", respectively.
While Asian mathematics influenced Greek and Hellenistic learning, it seems to be the case that Greek mathematics is also an indigenous tradition.
Eusebius, PE X, chapter 4 mentions of Pythagoras:
"In fact the said Pythagoras, while busily studying the wisdom of each nation, visited Babylon, and Egypt, and all Persia, being instructed by the Magi and the priests: and in addition to these he is related to have studied under the Brahmans (these are Indian philosophers); and from some he gathered astrology, from others geometry, and arithmetic and music from others, and different things from different nations, and only from the wise men of Greece did he get nothing, wedded as they were to a poverty and dearth of wisdom: so on the contrary he himself became the author of instruction to the Greeks in the learning which he had procured from abroad."
Aristotle claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: "Platonem ferunt didicisse Pythagorea omnia" ("They say Plato learned all things Pythagorean").
Plato had a keen interest in mathematics, and distinguished clearly between arithmetic and calculation. (By "arithmetic" he meant, in part, theorising on number, rather than what "arithmetic" or "number theory" have come to mean.) It is through one of Plato's dialogues—namely, "Theaetetus"—that we know that Theodorus had proven that √3, √5, ..., √17 are irrational. Theaetetus was, like Plato, a disciple of Theodorus's; he worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. (Book X of Euclid's Elements is described by Pappus as being largely based on Theaetetus's work.)
Euclid devoted part of his "Elements" to prime numbers and divisibility, topics that belong unambiguously to number theory and are basic to it (Books VII to IX of Euclid's Elements). In particular, he gave an algorithm for computing the greatest common divisor of two numbers (the Euclidean algorithm; "Elements", Prop. VII.2) and the first known proof of the infinitude of primes ("Elements", Prop. IX.20).
In 1773, Lessing published an epigram he had found in a manuscript during his work as a librarian; it claimed to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as
Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as we know, such equations were first successfully treated by the Indian school. It is not known whether Archimedes himself had a method of solution.
Very little is known about Diophantus of Alexandria; he probably lived in the third century CE, that is, about five hundred years after Euclid. Six out of the thirteen books of Diophantus's "Arithmetica" survive in the original Greek; four more books survive in an Arabic translation. The "Arithmetica" is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form f(x, y) = z² or f(x, y, z) = w². Thus, nowadays, we speak of "Diophantine equations" when we speak of polynomial equations to which rational or integer solutions must be found.
One may say that Diophantus was studying rational points—that is, points whose coordinates are rational—on curves and algebraic varieties; however, unlike the Greeks of the Classical period, who did what we would now call basic algebra in geometrical terms, Diophantus did what we would now call basic algebraic geometry in purely algebraic terms. In modern language, what Diophantus did was to find rational parametrizations of varieties; that is, given an equation of the form (say) f(x₁, x₂, x₃) = 0, his aim was to find (in essence) three rational functions g₁, g₂, g₃ such that, for all values of r and s, setting xᵢ = gᵢ(r, s) for i = 1, 2, 3 gives a solution to f(x₁, x₂, x₃) = 0.
Diophantus also studied the equations of some non-rational curves, for which no rational parametrisation is possible. He managed to find some rational points on these curves (elliptic curves, as it happens, in what seems to be their first known occurrence) by means of what amounts to a tangent construction: translated into coordinate geometry (which did not exist in Diophantus's time), his method would be visualised as drawing a tangent to a curve at a known rational point and then finding the other point of intersection of the tangent with the curve; that other point is a new rational point.
While Diophantus was concerned largely with rational solutions, he assumed some results on integer numbers, in particular that every integer is the sum of four squares (though he never stated as much explicitly).
While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an indigenous tradition; in particular, there is no evidence that Euclid's Elements reached India before the 18th century.
Āryabhaṭa (476–550 CE) showed that pairs of simultaneous congruences n ≡ a₁ (mod m₁), n ≡ a₂ (mod m₂) could be solved by a method he called "kuṭṭaka", or "pulveriser"; this is a procedure close to (a generalisation of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations.
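In modern notation, such pairs can be solved with the extended Euclidean algorithm; the Python sketch below is a modern reformulation in the spirit of the kuṭṭaka, not a reproduction of Āryabhaṭa's own procedure:

```python
# Solve x ≡ a1 (mod m1), x ≡ a2 (mod m2) via the extended Euclidean
# algorithm (a modern sketch in the spirit of the kuttaka).
def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_congruences(a1, m1, a2, m2):
    g, p, q = egcd(m1, m2)            # m1*p + m2*q == g
    assert (a2 - a1) % g == 0, "no solution"
    lcm = m1 // g * m2
    x = (a1 + (a2 - a1) // g * p % (m2 // g) * m1) % lcm
    return x, lcm

print(solve_congruences(2, 3, 3, 5))  # x ≡ 8 (mod 15)
```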
Brahmagupta (628 CE) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century).
Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke.
In the early ninth century, the caliph Al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the "Sindhind", which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta). Centuries later in Europe, Pierre de Fermat stated what is now called Fermat's little theorem: if "a" is not divisible by a prime "p", then a^(p−1) ≡ 1 (mod p).
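The theorem is easy to check computationally and motivates the standard Fermat primality test; the sketch below is a common textbook version, which some composites (the Carmichael numbers) can fool:

```python
import random

# Fermat's little theorem in code: for prime p and a not divisible by p,
# pow(a, p - 1, p) == 1. Used as a probabilistic primality check.
def fermat_probably_prime(n, trials=20):
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False    # definitely composite
    return True             # probably prime

# 97 is prime; 91 = 7 * 13 is (almost surely) rejected.
print(fermat_probably_prime(97), fermat_probably_prime(91))
```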
The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following:
Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations—for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them). He also studied quadratic forms in full generality (as opposed to forms of the shape mX² + nY²)—defining their equivalence relation, showing how to put them in reduced form, etc.
Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also
conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation ax² + by² + cz² = 0 and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove "Fermat's last theorem" for n = 5 (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain).
In his "Disquisitiones Arithmeticae" (1798), Carl Friedrich Gauss (1777–1855) proved the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the "Disquisitiones" established a link between roots of unity and number theory:
The theory of the division of the circle...which is treated in sec. 7 does not belong
by itself to arithmetic, but its principles can only be drawn from higher arithmetic.
In this way, Gauss arguably made a first foray towards both Évariste Galois's work and algebraic number theory.
Starting early in the nineteenth century, the following developments gradually took place:
Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable. The first use of analytic ideas in number theory actually
goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of "complex" analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms).
The history of each subfield is briefly addressed in its own section below; see the main article of each subfield for fuller treatments. Many of the most interesting questions in each area remain open and are being actively worked on.
The term "elementary" generally denotes a method that does not use complex analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous: for example, proofs based on complex Tauberian theorems (for example, Wiener–Ikehara) are often seen as quite enlightening but not elementary, in spite of using Fourier analysis, rather than complex analysis as such. Here as elsewhere, an "elementary" proof may be longer and more difficult for most readers than a non-elementary one.
Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics.
"Analytic number theory" may be defined
Some subjects generally considered to be part of analytic number theory, for example, sieve theory, are better covered by the second rather than the first definition: some of sieve theory, for instance, uses little analysis, yet it does belong to analytic number theory.
The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture (or the twin prime conjecture, or the Hardy–Littlewood conjectures), the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory.
One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function.
An "algebraic number" is any complex number that is a solution to some polynomial equation formula_37 with rational coefficients; for example, every solution formula_38 of formula_39 (say) is an algebraic number. Fields of algebraic numbers are also called "algebraic number fields", or shortly "number fields". Algebraic number theory studies algebraic number fields. Thus, analytic and algebraic number theory can and do overlap: the former is defined by its methods, the latter by its objects of study.
It could be argued that the simplest kind of number fields (viz., quadratic fields) were already studied by Gauss, as the discussion of quadratic forms in "Disquisitiones arithmeticae" can be restated in terms of ideals and
norms in quadratic fields. (A "quadratic field" consists of all numbers of the form a + b√d, where a and b are rational numbers and d is a fixed rational number whose square root is not rational.)
For that matter, the 11th-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such.
The grounds of the subject as we know it were set in the late nineteenth century, when "ideal numbers", the "theory of ideals" and "valuation theory" were developed; these are three complementary ways of dealing with the lack of unique factorisation in algebraic number fields. (For example, in the field generated by the rationals
and √−5, the number 6 can be factorised both as 6 = 2 · 3 and as 6 = (1 + √−5)(1 − √−5); all of 2, 3, 1 + √−5 and 1 − √−5 are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalisations of quadratic reciprocity.
Number fields are often studied as extensions of smaller number fields: a field "L" is said to be an "extension" of a field "K" if "L" contains "K".
Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions "L" of "K" such that the Galois group Gal("L"/"K") of "L" over "K" is an abelian group—are relatively well understood.
Their classification was the object of the programme of class field theory, which was initiated in the late 19th century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950.
An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields.
The central problem of "Diophantine geometry" is to determine when a Diophantine equation has solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object.
For example, an equation in two variables defines a curve in the plane. More generally, an equation, or system of equations, in two or more variables defines a curve, a surface or some other such object in "n"-dimensional space. In Diophantine geometry, one asks whether there are any "rational points" (points all of whose coordinates are rationals) or
"integral points" (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is if there are finitely
or infinitely many rational points on a given curve (or surface).
In the Pythagorean equation x² + y² = 1, we would like to study its rational solutions, that is, its solutions (x, y) such that x and y are both rational. This is the same as asking for all integer solutions to a² + b² = c²; any solution to the latter equation gives us a solution x = a/c, y = b/c to the former. It is also the same as asking for all points with rational coordinates on the curve described by x² + y² = 1. (This curve happens to be a circle of radius 1 around the origin.)
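One classical way to produce such solutions is the parametrization (a, b, c) = (m² − n², 2mn, m² + n²); the Python sketch below generates the corresponding rational points on the unit circle:

```python
from fractions import Fraction

# Rational points on x^2 + y^2 = 1 from integer solutions of
# a^2 + b^2 = c^2, via (a, b, c) = (m^2 - n^2, 2mn, m^2 + n^2).
def rational_points(limit):
    for m in range(2, limit):
        for n in range(1, m):
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            assert a * a + b * b == c * c
            yield Fraction(a, c), Fraction(b, c)

for x, y in rational_points(4):
    print(x, y, x * x + y * y)   # the last column is always 1
```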
The rephrasing of questions on equations in terms of points on curves turns out to be felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve—that is, rational or integer solutions to an equation f(x, y) = 0, where f is a polynomial in two variables—turns out to depend crucially on the "genus" of the curve. The "genus" can be defined as follows: allow the variables in f(x, y) = 0 to be complex numbers; then f(x, y) = 0 defines a 2-dimensional surface in (projective) 4-dimensional space (since two complex variables can be decomposed into four real variables, that is, four dimensions). The number of (doughnut) holes in this surface is called the "genus" of f(x, y) = 0. Other geometrical notions turn out to be just as crucial.
There is also the closely linked area of Diophantine approximations: given a number x, how well can it be approximated by rationals? (We are looking for approximations that are good relative to the amount of space that it takes to write the rational: call a/q (with gcd(a, q) = 1) a good approximation to x if |x − a/q| < 1/q^c, where c is large.) This question is of special interest if x is an algebraic number. If x cannot be well approximated, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) turn out to be critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be better approximated than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental.
Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. "Arithmetic geometry", however, is a contemporary term
for much the same domain as that covered by the term "Diophantine geometry". The term "arithmetic geometry" is arguably used
most often when one wishes to emphasise the connections to modern algebraic geometry (as in, for instance, Faltings's theorem) rather than to techniques in Diophantine approximations.
The areas below date from no earlier than the mid-twentieth century, even if they are based on older material. For example, as is explained below, the matter of algorithms in number theory is very old, in some sense older than the concept of proof; at the same time, the modern study of computability dates only from the 1930s and 1940s, and computational complexity theory from the 1970s.
Much of probabilistic number theory can be seen as an important special case of the study of variables that are almost, but not quite, mutually independent. For example, the event that a random integer between one and a million be divisible by two and the event that it be divisible by three are almost independent, but not quite.
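This near-independence is easy to check numerically; the Python sketch below compares the product of the densities of divisibility by 2 and by 3 with the density of divisibility by 6 among the first million integers:

```python
# Divisibility by 2 and by 3 are almost, but not exactly, independent
# over 1..N; the joint density closely matches the product.
N = 10**6
div2 = sum(1 for n in range(1, N + 1) if n % 2 == 0) / N
div3 = sum(1 for n in range(1, N + 1) if n % 3 == 0) / N
both = sum(1 for n in range(1, N + 1) if n % 6 == 0) / N
print(div2 * div3, both)   # both are approximately 1/6
```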
It is sometimes said that probabilistic combinatorics uses the fact that whatever happens with probability greater than 0 must happen sometimes; one may say with equal justice that many applications of probabilistic number theory hinge on the fact that whatever is unusual must be rare. If certain algebraic objects (say, rational or integer solutions to certain equations) can be shown to be in the tail of certain sensibly defined distributions, it follows that there must be few of them; this is a very concrete non-probabilistic statement following from a probabilistic one.
At times, a non-rigorous, probabilistic approach leads to a number of heuristic algorithms and open problems, notably Cramér's conjecture.
If we begin from a fairly "thick" infinite set A, does it contain many elements in arithmetic progression: a, a + b, a + 2b, a + 3b, ..., a + 10b, say? Should it be possible to write large integers as sums of elements of A?
These questions are characteristic of "arithmetic combinatorics". This is a presently coalescing field; it subsumes "additive number theory" (which concerns itself with certain very specific sets A of arithmetic significance, such as the primes or the squares) and, arguably, some of the "geometry of numbers", together with some rapidly developing new material. Its focus on issues of growth and distribution accounts in part for its developing links with ergodic theory, finite group theory, model theory, and other fields. The term "additive combinatorics" is also used; however, the sets A being studied need not be sets of integers, but rather subsets of non-commutative groups, for which the multiplication symbol, not the addition symbol, is traditionally used; they can also be subsets of rings, in which case the growth of A + A and A · A may be compared.
While the word "algorithm" goes back only to certain readers of al-Khwārizmī, careful descriptions of methods of solution are older than proofs: such methods (that is, algorithms) are as old as any recognisable mathematics—ancient Egyptian, Babylonian, Vedic, Chinese—whereas proofs appeared only with the Greeks of the classical period.
An interesting early case is that of what we now call the Euclidean algorithm. In its basic form (namely, as an algorithm for computing the greatest common divisor) it appears as Proposition 2 of Book VII in "Elements", together with a proof of correctness. However, in the form that is often used in number theory (namely, as an algorithm for finding integer solutions to an equation ax + by = c,
or, what is the same, for finding the quantities whose existence is assured by the Chinese remainder theorem) it first appears in the works of Āryabhaṭa (5th–6th century CE) as an algorithm called
"kuṭṭaka" ("pulveriser"), without a proof of correctness.
There are two main questions: "Can we compute this?" and "Can we compute it rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. We now know fast algorithms for testing primality, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring.
The difficulty of a computation can be useful: modern protocols for encrypting messages (for example, RSA) depend on functions that are known to all, but whose inverses are known only to a chosen few, and would take one too long a time to figure out on one's own. For example, these functions can be such that their inverses can be computed only if certain large integers are factorized. While many difficult computational problems outside number theory are known, most working encryption protocols nowadays are based on the difficulty of a few number-theoretical problems.
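As a toy illustration of the trapdoor idea (with deliberately tiny, insecure numbers invented for the sketch): encryption uses only the public pair (n, e), while computing the private exponent d requires the factorization of n. Note that pow(e, -1, phi) needs Python 3.8 or later.

```python
# Toy RSA-style trapdoor (insecure sizes; numbers made up for the sketch).
p, q = 61, 53
n, e = p * q, 17                 # public: n and e
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private: needs the factors of n

message = 42
cipher = pow(message, e, n)      # anyone can encrypt with (n, e)
print(pow(cipher, d, n))         # 42 again -- recoverable only with d
```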
Some things may not be computable at all; in fact, this can be proven in some instances. For instance, in 1970, it was proven, as a solution to Hilbert's 10th problem, that there is no Turing machine which can solve all Diophantine equations. In particular, this means that, given a computably enumerable set of axioms, there are Diophantine equations for which there is no proof, starting from the axioms, of whether the set of equations has or does not have integer solutions. (We would necessarily be speaking of Diophantine equations for which there are no integer solutions, since, given a Diophantine equation with at least one solution, the solution itself provides a proof of the fact that a solution exists. We cannot prove that a particular Diophantine equation is of this kind, since this would imply that it has no solutions.)
The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory. In 1974, Donald Knuth said "...virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations".
Elementary number theory is taught in discrete mathematics courses for computer scientists; on the other hand, number theory also has applications to the continuous in numerical analysis. As well as the well-known applications to cryptography, there are also applications to many other areas of mathematics.
The American Mathematical Society awards the "Cole Prize in Number Theory". Moreover, number theory is one of the three mathematical subdisciplines rewarded by the "Fermat Prize".
Two of the most popular introductions to the subject are:
Hardy and Wright's book is a comprehensive classic, though its clarity sometimes suffers due to the authors' insistence on elementary methods (Apostol n.d.).
Vinogradov's main attraction consists in its set of problems, which quickly lead to Vinogradov's own research interests; the text itself is very basic and close to minimal. Other popular first introductions are:
Popular choices for a second textbook include: | https://en.wikipedia.org/wiki?curid=21527 |
Nitroglycerin
Nitroglycerin (NG), also known as nitroglycerine, trinitroglycerin (TNG), nitro, glyceryl trinitrate (GTN), or 1,2,3-trinitroxypropane, is a dense, colorless, oily, explosive liquid most commonly produced by nitrating glycerol with white fuming nitric acid under conditions appropriate to the formation of the nitric acid ester. Chemically, the substance is an organic nitrate compound rather than a nitro compound, yet the traditional name is often retained. Invented in 1847, nitroglycerin has been used ever since as an active ingredient in the manufacture of explosives, mostly dynamite, and as such it is employed in the construction, demolition, and mining industries. Since the 1880s, it has been used by the military as an active ingredient, and a gelatinizer for nitrocellulose, in some solid propellants, such as cordite and ballistite.
Nitroglycerin is a major component in double-base smokeless powders used by reloaders. Combined with nitrocellulose, it appears in hundreds of powder combinations used by rifle, pistol, and shotgun reloaders.
Nitroglycerin has been used in medicine for over 130 years as a potent vasodilator (an agent that widens blood vessels) to treat heart conditions, such as angina pectoris and chronic heart failure. Though it was previously known that these beneficial effects are due to nitroglycerin being converted to nitric oxide, a potent venodilator, the enzyme for this conversion was not discovered to be mitochondrial aldehyde dehydrogenase (ALDH2) until 2002. Nitroglycerin is available in sublingual tablets, sprays, ointments, and patches.
Nitroglycerin was the first practical explosive produced that was stronger than black powder. It was first synthesized by the Italian chemist Ascanio Sobrero in 1847, working under Théophile-Jules Pelouze at the University of Turin. Sobrero initially called his discovery "pyroglycerine" and warned vigorously against its use as an explosive. | https://en.wikipedia.org/wiki?curid=21530 |
Navy
A navy or sea force is the branch of a nation's armed forces principally designated for naval and amphibious warfare; namely, lake-borne, riverine, littoral, or ocean-borne combat operations and related functions. It includes anything conducted by surface ships, amphibious ships, submarines, and seaborne aviation, as well as ancillary support, communications, training, and other fields. The strategic offensive role of a navy is projection of force into areas beyond a country's shores (for example, to protect sea-lanes, deter or confront piracy, ferry troops, or attack other navies, ports, or shore installations). The strategic defensive purpose of a navy is to frustrate seaborne projection-of-force by enemies. The strategic task of the navy also may incorporate nuclear deterrence by use of submarine-launched ballistic missiles. Naval operations can be broadly divided between riverine and littoral applications (brown-water navy), open-ocean applications (blue-water navy), and something in between (green-water navy), although these distinctions are more about strategic scope than tactical or operational division.
In most nations, the term "naval", as opposed to "navy", is interpreted as encompassing all maritime military forces, e.g., navy, naval infantry/marine corps, and coast guard forces.
First attested in English in the early 14th century, the word "navy" came via Old French "navie", "fleet of ships", from the Latin "navigium", "a vessel, a ship, bark, boat", from "navis", "ship". The word "naval" came from Latin "navalis", "pertaining to ship"; cf. Greek ("naus"), "ship", ("nautes"), "seaman, sailor". The earliest attested form of the word is in the Mycenaean Greek compound word , "na-u-do-mo" (*), "shipbuilders", written in Linear B syllabic script.
The word formerly denoted fleets of both commercial and military nature. In modern usage "navy" used alone always denotes a military fleet, although the term "merchant navy" for a commercial fleet still incorporates the non-military word sense. This overlap in word senses between commercial and military fleets grew out of the inherently dual-use nature of fleets; centuries ago, nationality was a trait that unified a fleet across both civilian and military uses. Although nationality of commercial vessels has little importance in peacetime trade other than for tax avoidance, it can have greater meaning during wartime, when supply chains become matters of patriotic attack and defense, and when in some cases private vessels are even temporarily converted to military vessels. The latter was especially important, and common, before 20th-century military technology existed, when merely adding artillery and naval infantry to any sailing vessel could render it fully as martial as any military-owned vessel. Such privateering has been rendered obsolete in blue-water strategy since modern missile and aircraft systems grew to leapfrog over artillery and infantry in many respects; but privateering nevertheless remains potentially relevant in littoral warfare of a limited and asymmetric nature.
Naval warfare developed when humans first fought from water-borne vessels. Prior to the introduction of the cannon and ships with sufficient capacity to carry the large guns, naval warfare primarily involved ramming and boarding actions. In the time of ancient Greece and the Roman Empire, naval warfare centered on long, narrow vessels powered by banks of oarsmen (such as triremes and quinqueremes) designed to ram and sink enemy vessels or come alongside the enemy vessel so its occupants could be attacked hand-to-hand. Naval warfare continued in this vein through the Middle Ages until the cannon became commonplace and capable of being reloaded quickly enough to be reused in the same battle. The Chola Dynasty of Tamil Nadu was known as one of the greatest naval powers of its time, from 300 BC to 1279 AD. The Chola Navy ("Chola kadarpadai") comprised the naval forces of the Chola Empire along with several other naval arms of the country. The Chola navy played a vital role in the expansion of the Chola Tamil kingdom, including the conquest of the Sri Lanka islands, Kadaaram (present-day Burma) and Sri Vijaya (present-day Southeast Asia), the spread of Hinduism, Tamil architecture and Tamil culture to Southeast Asia, and the curbing of piracy in Southeast Asia around 900 CE. In ancient China, large naval battles were known since the Qin dynasty ("also see" Battle of Red Cliffs, 208), employing the war junk during the Han dynasty. However, China's first official standing navy was not established until the Southern Song dynasty in the 12th century, a time when gunpowder was a revolutionary new application to warfare.
Nusantaran thalassocracies made extensive use of naval power and technologies. This enabled the seafaring Malay people to attack as far as the coasts of Tanganyika and Mozambique with 1,000 boats and to attempt to take the citadel of Qanbaloh, about 7,000 km to their west, in 945–946 AD. In 1350 AD Majapahit launched its largest military expedition, the invasion of Pasai, with 400 large jong and innumerable smaller vessels. In its second-largest military expedition, the invasion of Singapura in 1398, Majapahit deployed 300 jong and no fewer than 200,000 men.
The mass and deck space required to carry a large number of cannon made oar-based propulsion impossible, and ships came to rely primarily on sails. Warships were designed to carry increasing numbers of cannon and naval tactics evolved to bring a ship's firepower to bear in a broadside, with ships-of-the-line arranged in a line of battle.
The development of large capacity, sail-powered ships carrying cannon led to a rapid expansion of European navies, especially the Spanish and Portuguese navies which dominated in the 16th and early 17th centuries, and helped propel the age of exploration and colonialism. The repulsion of the Spanish Armada (1588) by the English fleet revolutionized naval warfare by the success of a guns-only strategy and caused a major overhaul of the Spanish Navy, partly along English lines, which resulted in even greater dominance by the Spanish. From the beginning of the 17th century the Dutch cannibalized the Portuguese Empire in the East and, with the immense wealth gained, challenged Spanish hegemony at sea. From the 1620s, Dutch raiders seriously troubled Spanish shipping and, after a number of battles which went both ways, the Dutch Navy finally broke the long dominance of the Spanish Navy in the Battle of the Downs (1639). England emerged as a major naval power in the mid-17th century in the first Anglo-Dutch war with a technical victory. Successive decisive Dutch victories in the second and third Anglo-Dutch Wars confirmed the Dutch mastery of the seas during the Dutch Golden Age, financed by the expansion of the Dutch Empire. The French Navy won some important victories near the end of the 17th century but a focus upon land forces led to the French Navy's relative neglect, which allowed the Royal Navy to emerge with an ever-growing advantage in size and quality, especially in tactics and experience, from 1695. In response to the growing naval influence of the Portuguese, the Maratha warrior king Chhatrapati Shivaji Maharaj laid the foundation of the Maratha Navy in 1654.
Throughout the 18th century the Royal Navy gradually gained ascendancy over the French Navy, with victories in the War of Spanish Succession (1701–1714), inconclusive battles in the War of Austrian Succession (1740–1748), victories in the Seven Years' War (1754–1763), a partial reversal during the American War of Independence (1775–1783), and consolidation into uncontested supremacy during the 19th century from the Battle of Trafalgar in 1805. These conflicts saw the development and refinement of tactics which came to be called the line of battle.
The next stage in the evolution of naval warfare was the introduction of metal plating along the hull sides. The increased mass required steam-powered engines, resulting in an arms race between armor and weapon thickness and firepower. The first armored vessels, the French "Gloire" and British HMS "Warrior", made wooden vessels obsolete. Another significant improvement came with the invention of the rotating turret, which allowed the guns to be aimed independently of ship movement. The battle between USS "Monitor" and CSS "Virginia" during the American Civil War (1861–1865) is often cited as the beginning of this age of maritime conflict. The Russian Navy was considered the third strongest in the world on the eve of the Russo-Japanese War, which turned out to be a catastrophe for the Russian military in general and the Russian Navy in particular. Although neither party lacked courage, the Russians were defeated by the Japanese in the Battle of Port Arthur, which was the first time in warfare that mines were used for offensive purposes. The warships of the Baltic Fleet sent to the Far East were lost in the Battle of Tsushima. A further step change in naval firepower occurred when the United Kingdom launched HMS "Dreadnought" in 1906, but naval tactics still emphasized the line of battle.
The first practical military submarines were developed in the late 19th century and by the end of World War I had proven to be a powerful arm of naval warfare. During World War II, Nazi Germany's submarine fleet of U-boats almost starved the United Kingdom into submission and inflicted tremendous losses on U.S. coastal shipping. The German battleship "Tirpitz", a sister ship of "Bismarck", was almost put out of action by miniature submarines known as X-Craft. The X-Craft severely damaged her and kept her in port for some months.
A major paradigm shift in naval warfare occurred with the introduction of the aircraft carrier. First at Taranto in 1940 and then at Pearl Harbor in 1941, the carrier demonstrated its ability to strike decisively at enemy ships out of sight and range of surface vessels. The Battle of Leyte Gulf (1944) was arguably the largest naval battle in history; it was also the last battle in which battleships played a significant role. By the end of World War II, the carrier had become the dominant force of naval warfare.
World War II also saw the United States become by far the largest naval power in the world. In the late 20th and early 21st centuries, the United States Navy possessed over 70% of the world's total numbers and total tonnage of naval vessels of 1,000 tons or greater. Throughout the rest of the 20th century, the United States Navy would maintain a tonnage greater than that of the next 17 largest navies combined. During the Cold War, the Soviet Navy became a significant armed force, with large numbers of large, heavily armed ballistic missile submarines and extensive use of heavy, long-ranged antisurface missiles to counter the numerous United States carrier battle groups. Only three nations (United States, France, and Brazil) presently operate CATOBAR carriers of any size, while Russia, China and India operate sizeable STOBAR carriers (although all three are originally of Russian design). The United Kingdom is also currently constructing two carriers, which will be the largest STOVL vessels in service, and India is currently building one aircraft carrier, INS "Vikrant", and considering another. France is also looking at a new carrier, probably using a CATOBAR system and possibly based on the British "Queen Elizabeth" design.
A navy typically operates from one or more naval bases. The base is a port that is specialized in naval operations, and often includes housing, a munitions depot, docks for the vessels, and various repair facilities. During times of war temporary bases may be constructed in closer proximity to strategic locations, as it is advantageous in terms of patrols and station-keeping. Nations with historically strong naval forces have found it advantageous to obtain basing rights in other countries in areas of strategic interest.
Navy ships can operate independently or with a group, which may be a small squadron of comparable ships, or a larger naval fleet of various specialized ships. The commander of a fleet travels in the flagship, which is usually the most powerful vessel in the group. Prior to the invention of radio, commands from the flagship were communicated by means of flags. At night signal lamps could be used for a similar purpose. Later these were replaced by the radio transmitter, or the flashing light when radio silence was needed.
A "blue water navy" is designed to operate far from the coastal waters of its home nation. These are ships capable of maintaining station for long periods of time in deep ocean, and will have a long logistical tail for their support. Many are also nuclear powered to save having to refuel. By contrast a "brown water navy" operates in the coastal periphery and along inland waterways, where larger ocean-going naval vessels can not readily enter. Regional powers may maintain a "green water navy" as a means of localized force projection. Blue water fleets may require specialized vessels, such as minesweepers, when operating in the littoral regions along the coast.
A basic tradition is that all ships commissioned in a navy are referred to as ships rather than vessels, with the exception of destroyers and submarines, which are known as boats. The prefix on a ship's name indicates that it is a commissioned ship.
An important tradition on board naval vessels of some nations has been the ship's bell. This was historically used to mark the passage of time, as warning devices in heavy fog, and for alarms and ceremonies.
The ship's captain, and more senior officers are "piped" aboard the ship using a Boatswain's call.
In the United States, the First Navy Jack is a flag bearing the words "Don't Tread on Me".
By English tradition, ships have been referred to as a "she". However, it was long considered bad luck to permit women to sail on board naval vessels. To do so would invite a terrible storm that would wreck the ship. The only women that were welcomed on board were figureheads mounted on the prow of the ship.
Firing a cannon salute partially disarmed the ship, so firing a cannon for no combat reason showed respect and trust. As the tradition evolved, the number of cannon fired became an indication of the rank of the official being saluted.
Historically, navy ships were primarily intended for warfare. They were designed to withstand damage and to inflict the same, but only carried munitions and supplies for the voyage (rather than merchant cargo). Often, other ships which were not built specifically for warfare, such as the galleon or the armed merchant ships in World War II, did carry armaments. In more recent times, navy ships have become more specialized and have included supply ships, troop transports, repair ships, oil tankers and other logistics support ships as well as combat ships.
Modern navy combat ships are generally divided into seven main categories: aircraft carriers, cruisers, destroyers, frigates, corvettes, submarines, and amphibious assault ships. There are also support and auxiliary ships, including the oiler, minesweeper, patrol boat, hydrographic and oceanographic survey ship and tender. During the age of sail, the ship categories were divided into the ship of the line, frigate, and sloop-of-war.
Naval ship names are typically prefixed by an abbreviation indicating the national navy in which they serve. For a list of the prefixes used with ship names (HMS, USS, LÉ, etc.) see ship prefix.
Today ships are significantly faster than in former times, thanks to much improved propulsion systems. Engine efficiency has also improved, both in fuel consumption and in the number of sailors needed to operate the machinery. In World War II, ships needed to refuel very often; today they can make very long journeys without refueling. In World War II the engine room needed about a dozen sailors to work the many engines, whereas today only about 4–5 are needed (depending on the class of the ship). Today, naval strike groups on longer missions are always followed by a range of support and replenishment ships supplying them with everything from fuel and munitions to medical treatment and postal services. This allows strike groups and combat ships to remain at sea for several months at a time.
The term "boat" refers to small craft limited in their use by size and usually not capable of making lengthy independent voyages at sea. The old navy adage to differentiate between ships and boats is that boats are capable of being carried by ships. (Submarines by this rule are ships rather than boats, but are customarily referred to as boats reflecting their previous smaller size.)
Navies use many types of boat, ranging from dinghies to landing craft. They are powered by either diesel engines, out-board gasoline engines, or waterjets. Most boats are built of aluminum, fiberglass, or steel. Rigid-hulled inflatable boats are also used.
Patrol boats are used for patrols of coastal areas, lakes and large rivers.
Landing craft are designed to carry troops, vehicles, or cargo from ship to shore under combat conditions, to unload, to withdraw from the beach, and to return to the ship. They are rugged, with powerful engines, and usually armed. There are many types in today's navies including hovercraft. They will typically have a power-operated bow ramp, a cargo well and after structures that house engine rooms, pilot houses, and stowage compartments. These boats are sometimes carried by larger ships.
Special operations craft are high-speed craft used for insertion and extraction of special forces personnel and some may be transportable (and deployed) by air.
Boats used in non-combat roles include lifeboats, mail boats, line handling boats, buoy boats, aircraft rescue boats, torpedo retrievers, explosive ordnance disposal craft, utility boats, dive boats, targets, and work boats. Boats are also used for survey work, tending divers, and minesweeping operations. Boats for carrying cargo and personnel are sometimes known as launches, gigs, barges or shore party boats.
Naval forces are typically arranged into units based on the number of ships included, a single ship being the smallest operational unit. Ships may be combined into squadrons or flotillas, which may be formed into fleets. The largest unit size may be the whole Navy or Admiralty.
A task force can be assembled using ships from different fleets for an operational task.
Despite their acceptance in many areas of naval service, female sailors were not permitted to serve on board U.S. submarines until the U.S. Navy lifted the ban in April 2010. The major reasons historically cited by the U.S. Navy were the extended duty tours and close conditions which afford almost no privacy. The United Kingdom's Royal Navy has had similar restrictions. Australia, Canada, Norway, and Spain previously opened submarine service to women sailors.
A navy will typically have two sets of ranks, one for enlisted personnel and one for officers.
Typical ranks for commissioned officers include the following, in ascending order (Commonwealth ranks are listed first on each line; USA ranks are listed second in those instances where they differ from Commonwealth ranks):
"Flag officers" include any rank that includes the word "admiral" (or commodore in services other than the US Navy), and are generally in command of a battle group, strike group or similar flotilla of ships, rather than a single ship or aspect of a ship. However, commodores can also be temporary or honorary positions. For example, during World War II, a Navy captain was assigned duty as a convoy commodore, which meant that he was still a captain, but in charge of all the merchant vessels in the convoy.
The most senior rank employed by a navy will tend to vary depending on the size of the navy and whether it is wartime or peacetime. For example, few people have ever held the rank of Fleet Admiral in the U.S. Navy; the chief of the Royal Australian Navy holds the rank of Vice Admiral; and the chief of the Irish Naval Service holds the rank of Commodore.
Naval infantry, commonly known as marines, are a category of infantry that form part of a state's naval forces and perform roles on land and at sea, including amphibious operations, as well as other naval roles. They also perform other tasks, including land warfare separate from naval operations.
During the era of the Roman empire, naval forces included marine legionaries for maritime boarding actions. These were troops primarily trained in land warfare, and did not need to be skilled at handling a ship. Much later during the age of sail, a component of marines served a similar role, being ship-borne soldiers who were used either during boarding actions, as sharp-shooters, or in raids along shorelines.
The Spanish "Infantería de Marina" was formed in 1537, making it the oldest, current marine force in the world. The British Royal Marines combine being both a ship-based force and also being specially trained in commando-style operations and tactics, operating in some cases separately from the rest of the Royal Navy. The Royal Marines also have their own special forces unit.
In the majority of countries, the marine force is an integral part of the navy. The United States Marine Corps is a separate armed service within the United States Department of the Navy, with its own leadership structure.
Naval aviation is the application of military air power by navies, whether from warships that embark aircraft, or land bases.
In World War I several navies used floatplanes and flying boats, mainly for scouting. By World War II, aircraft carriers could carry bomber aircraft capable of attacking naval and land targets, as well as fighter aircraft for defence. Since World War II helicopters have been embarked on smaller ships in roles such as anti-submarine warfare and transport. Some navies have also operated land-based aircraft in roles such as maritime patrol and training.
Naval aviation forces primarily perform naval roles at sea. However, they are also used in a variety of other roles. | https://en.wikipedia.org/wiki?curid=21533 |
Normed vector space
In mathematics, a normed vector space or normed space is a vector space over the real or complex numbers, on which a norm is defined. A norm is the formalization and the generalization to real vector spaces of the intuitive notion of "length" in the real world. A norm is a real-valued function defined on the vector space that is commonly denoted ‖·‖ and has the following properties:
1. It is nonnegative, and positive on nonzero vectors: ‖x‖ ≥ 0, with ‖x‖ = 0 if and only if x = 0.
2. It is absolutely homogeneous: ‖αx‖ = |α| ‖x‖ for every vector x and every scalar α.
3. It satisfies the triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all vectors x and y.
A norm induces a distance between two vectors u and v by the formula d(u, v) = ‖u − v‖.
Therefore, a normed vector space is a metric space, and thus a topological vector space.
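As a standard one-line verification, the triangle inequality for the norm yields the triangle inequality for this distance:
\[ d(u, w) = \|u - w\| = \|(u - v) + (v - w)\| \le \|u - v\| + \|v - w\| = d(u, v) + d(v, w). \]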
An inner product space is a normed space, where the norm of a vector is the square root of the inner product of the vector by itself, ‖x‖ = √⟨x, x⟩. The Euclidean distance in a Euclidean space is related to the norm of the associated vector space (which is an inner product space) by the formula d(x, y) = ‖x − y‖ = √⟨x − y, x − y⟩.
Two norms on the same vector space are called equivalent if they define the same topology. On a finite-dimensional vector space, all norms are equivalent. This is not true in infinite dimensions, and this makes the study of normed vector spaces fundamental in functional analysis.
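A standard illustration of equivalence: on the space of real n-tuples, the maximum norm and the Euclidean norm satisfy
\[ \|x\|_{\infty} \le \|x\|_{2} \le \sqrt{n}\,\|x\|_{\infty}, \]
so each is bounded by a constant multiple of the other and the two norms induce the same topology. In infinite dimensions no such pair of constants need exist; for example, the supremum norm and the integral norm on the continuous functions on [0, 1] are inequivalent.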
A normed vector space is a pair ("V", ‖·‖) where "V" is a vector space and ‖·‖ a norm on "V".
A seminormed vector space is a pair ("V", "p") where "V" is a vector space and "p" a seminorm on "V".
We often omit "p" or ‖·‖ and just write "V" for a space if it is clear from the context what (semi)norm we are using.
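A minimal standard example separating the two notions: on the real plane, the function
\[ p(x_{1}, x_{2}) = |x_{1}| \]
is a seminorm but not a norm, since p(0, x₂) = 0 for every x₂, so p vanishes on some nonzero vectors.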
In a more general sense, a vector norm can be taken to be any real-valued function that satisfies the three properties above.
A useful variation of the triangle inequality is ‖u − v‖ ≥ | ‖u‖ − ‖v‖ | for any vectors u and v.
This also shows that a vector norm is a continuous function.
Note that property 2 depends on a choice of norm |·| on the field of scalars. When the scalar field is the real numbers (or more generally a subset of the complex numbers), this is usually taken to be the ordinary absolute value, but other choices are possible. For example, for a vector space over the rationals one could take |·| to be the "p"-adic norm, which gives rise to a different class of normed vector spaces.
If ("V", ‖·‖) is a normed vector space, the norm ‖·‖ induces a metric (a notion of "distance") and therefore a topology on "V". This metric is defined in the natural way: the distance between two vectors u and v is given by ‖u−v‖. This topology is precisely the weakest topology which makes ‖·‖ continuous and which is compatible with the linear structure of "V" in the following sense:
Similarly, for any semi-normed vector space we can define the distance between two vectors u and v as ‖u−v‖. This turns the seminormed space into a pseudometric space (notice this is weaker than a metric) and allows the definition of notions such as continuity and convergence.
To put it more abstractly, every semi-normed vector space is a topological vector space and thus carries a topological structure which is induced by the semi-norm.
Of special interest are complete normed spaces called Banach spaces. Every normed vector space "V" sits as a dense subspace inside a Banach space; this Banach space is essentially uniquely defined by "V" and is called the "completion" of "V".
All norms on a finite-dimensional vector space are equivalent from a topological viewpoint as they induce the same topology (although the resulting metric spaces need not be the same). And since any Euclidean space is complete, we can thus conclude that all finite-dimensional normed vector spaces are Banach spaces. A normed vector space "V" is locally compact if and only if the unit ball "B" = {"x" : ‖"x"‖ ≤ 1} is compact, which is the case if and only if "V" is finite-dimensional; this is a consequence of Riesz's lemma. (In fact, a more general result is true: a topological vector space is locally compact if and only if it is finite-dimensional.
The point here is that we don't assume the topology comes from a norm.)
The topology of a seminormed vector space has many nice properties. Given a neighbourhood system formula_26 around 0 we can construct all other neighbourhood systems as
with
Moreover, there exists a neighbourhood basis for 0 consisting of absorbing and convex sets. As this property is very useful in functional analysis, generalizations of normed vector spaces with this property are studied under the name locally convex spaces.
A topological vector space ("X", τ) is called normable if there exists a norm ‖·‖ on "X" such that the canonical metric d(x, y) = ‖y − x‖ induces the topology τ on "X".
The following theorem is due to Kolmogorov:
Theorem. A Hausdorff topological vector space is normable if and only if there exists a convex, von Neumann bounded neighborhood of 0.
A product of a family of normable spaces is normable if and only if only finitely many of the spaces are non-trivial (i.e. not reduced to {0}). Furthermore, the quotient of a normable space "X" by a closed vector subspace "C" is normable, and if in addition "X"'s topology is given by a norm ‖·‖ then the map "X"/"C" → ℝ given by x + C ↦ inf { ‖x + c‖ : c ∈ C } is a well-defined norm on "X"/"C" that induces the quotient topology on "X"/"C".
The most important maps between two normed vector spaces are the continuous linear maps. Together with these maps, normed vector spaces form a category.
The norm is a continuous function on its vector space. All linear maps between finite dimensional vector spaces are also continuous.
An "isometry" between two normed vector spaces is a linear map "f" which preserves the norm (meaning ‖"f"(v)‖ = ‖v‖ for all vectors v). Isometries are always continuous and injective. A surjective isometry between the normed vector spaces "V" and "W" is called an "isometric isomorphism", and "V" and "W" are called "isometrically isomorphic". Isometrically isomorphic normed vector spaces are identical for all practical purposes.
When speaking of normed vector spaces, we augment the notion of dual space to take the norm into account. The dual "V" ' of a normed vector space "V" is the space of all "continuous" linear maps from "V" to the base field (the complexes or the reals) — such linear maps are called "functionals". The norm of a functional φ is defined as the supremum of |φ(v)| where v ranges over all unit vectors (i.e. vectors of norm 1) in "V". This turns "V" ' into a normed vector space. An important theorem about continuous linear functionals on normed vector spaces is the Hahn–Banach theorem.
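As a standard worked example in the finite-dimensional setting: equipping the space of real n-tuples with the norm ‖v‖₁ = Σᵢ |vᵢ|, the functional φ(v) = Σᵢ aᵢvᵢ has dual norm
\[ \|\varphi\| = \sup_{\|v\|_{1} \le 1} |\varphi(v)| = \max_{i} |a_{i}|, \]
so the dual of the space with the sum norm is isometrically isomorphic to the same space with the maximum norm.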
The definition of many normed spaces (in particular, Banach spaces) involves a seminorm defined on a vector space and then the normed space is defined as the quotient space by the subspace of elements of seminorm zero. For instance, with the L"p" spaces, the function defined by ‖f‖_p = ( ∫ |f(x)|^p dx )^(1/p)
is a seminorm on the vector space of all functions on which the Lebesgue integral on the right hand side is defined and finite. However, the seminorm is equal to zero for any function supported on a set of Lebesgue measure zero. These functions form a subspace which we "quotient out", making them equivalent to the zero function.
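A standard example of this identification: the indicator function of the rationals on [0, 1] is nonzero only on a set of Lebesgue measure zero, so
\[ \big\| \mathbf{1}_{\mathbb{Q} \cap [0,1]} \big\|_{p} = 0, \]
and it is identified with the zero function in the quotient space.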
Given "n" seminormed spaces "X""i" with seminorms "q""i" we can define the product space as
with vector addition defined as
and scalar multiplication defined as
We define a new function "q"
for example as
which is a seminorm on "X". The function "q" is a norm if and only if all "q""i" are norms.
More generally, for each real "p" ≥ 1 we have the seminorm: q(x1, ..., xn) = ( q1(x1)^p + ⋯ + qn(xn)^p )^(1/p).
For each p this defines the same topological space.
A straightforward argument involving elementary linear algebra shows that the only finite-dimensional seminormed spaces are those arising as the product space of a normed space and a space with trivial seminorm. Consequently, many of the more interesting examples and applications of seminormed spaces occur for infinite-dimensional vector spaces. | https://en.wikipedia.org/wiki?curid=21538 |
Nicene Creed
The Nicene Creed (Greek: or, , Latin: ) is a statement of belief widely used in Christian liturgy. It is called "Nicene" because it was originally adopted in the city of Nicaea (present day İznik, Turkey) by the First Council of Nicaea in 325. In 381, it was amended at the First Council of Constantinople, and the amended form is referred to as the Nicene or the Niceno-Constantinopolitan Creed. It defines Nicene Christianity.
The Oriental Orthodox and Assyrian churches use this profession of faith with the verbs in the original plural ("we believe"), but the Eastern Orthodox and Catholic churches convert those verbs to the singular ("I believe"). The Anglican and many Protestant denominations generally use the singular form, sometimes the plural.
The Apostles' Creed is also used in the Latin West, but not in the Eastern liturgies. On Sundays and solemnities, one of these two creeds is recited in the Roman Rite Mass after the homily. The Nicene Creed is also part of the profession of faith required of those undertaking important functions within the Catholic Church.
In the Byzantine Rite, the Nicene Creed is sung or recited at the Divine Liturgy, immediately preceding the Anaphora (Eucharistic Prayer), and is also recited daily at compline.
The actual purpose of a creed is to provide a doctrinal statement of correct belief or orthodoxy. The creeds of Christianity have been drawn up at times of conflict about doctrine: acceptance or rejection of a creed served to distinguish believers and deniers of particular doctrines. For that reason, a creed was called in Greek a σύμβολον ("symbolon"), which originally meant half of a broken object which, when fitted to the other half, verified the bearer's identity. The Greek word passed through Latin "symbolum" into English "symbol", which only later took on the meaning of an outward sign of something.
The Nicene Creed was adopted to resolve the Arian controversy, whose leader, Arius, a clergyman of Alexandria, "objected to Alexander's (the bishop of the time) apparent carelessness in blurring the distinction of nature between the Father and the Son by his emphasis on eternal generation". In reply, Alexander accused Arius of denying the divinity of the Son and also of being too "Jewish" and "Greek" in his thought. Alexander and his supporters created the Nicene Creed to clarify the key tenets of the Christian faith in response to the widespread adoption of Arius' doctrine, which was henceforth marked as heresy.
The Nicene Creed of 325 explicitly affirms the co-essential divinity of the Son, applying to him the term "consubstantial". The 381 version speaks of the Holy Spirit as worshipped and glorified with the Father and the Son. The later Athanasian Creed (not used in Eastern Christianity) describes in much greater detail the relationship between Father, Son and Holy Spirit. The Apostles' Creed does not explicitly affirm the divinity of the Son and the Holy Spirit, but in the view of many who use it, this doctrine is implicit in it.
The original Nicene Creed was first adopted at the First Council of Nicaea, which opened on 19 June 325. The text ended with anathemas against Arian propositions, and these were preceded by the words "We believe in the Holy Spirit" which terminated the statements of belief.
F. J. A. Hort and Adolf von Harnack argued that the Nicene creed was the local creed of Caesarea (an important center of Early Christianity) recited in the council by Eusebius of Caesarea. Their case relied largely on a very specific interpretation of Eusebius' own account of the Council's proceedings. More recent scholarship has not been convinced by their arguments. The large number of secondary divergences from the text of the creed quoted by Eusebius make it unlikely that it was used as a starting point by those who drafted the conciliar creed. Their initial text was probably a local creed from a Syro–Palestinian source into which they awkwardly inserted phrases to define the Nicene theology. The Eusebian Creed may thus have been either a second or one of many nominations for the Nicene Creed.
The 1911 "Catholic Encyclopedia" says that, soon after the Council of Nicaea, new formulae of faith were composed, most of them variations of the Nicene Symbol, to meet new phases of Arianism, of which there were at least four before the Council of Sardica (341), at which a new form was presented and inserted in its acts, although the council did not accept it.
What is known as the "Niceno-Constantinopolitan Creed" or the "Nicene–Constantinopolitan Creed" received this name because of a belief that it was adopted at the Second Ecumenical Council held in Constantinople in 381 as a modification of the original Nicene Creed of 325. In that light, it also came to be very commonly known simply as the "Nicene Creed". It is the only authoritative "ecumenical" statement of the Christian faith accepted by the Catholic Church, the Eastern Orthodox Church, Oriental Orthodoxy, the Church of the East, and much of Protestantism, including the Anglican Communion. (The Apostles' and Athanasian creeds are not as widely accepted.)
It differs in a number of respects, both by addition and omission, from the creed adopted at the First Council of Nicaea. The most notable difference is the additional section "And [we believe] in the Holy Ghost, the Lord and Giver-of-Life, who proceedeth from the Father, who with the Father and the Son together is worshipped and glorified, who spake by the prophets. And [we believe] in one, holy, Catholic and Apostolic Church. We acknowledge one Baptism for the remission of sins, [and] we look for the resurrection of the dead and the life of the world to come. Amen."
Since the end of the 19th century, scholars have questioned the traditional explanation of the origin of this creed, which has been passed down in the name of the council, whose official acts have been lost over time. A local council of Constantinople in 382 and the third ecumenical council (Ephesus, 431) made no mention of it, with the latter affirming the 325 creed of Nicaea as a valid statement of the faith and using it to denounce Nestorianism. Though some scholarship claims that hints of the later creed's existence are discernible in some writings, no extant document gives its text or makes explicit mention of it earlier than the fourth ecumenical council at Chalcedon in 451. Many of the bishops of the 451 council themselves had never heard of it and initially greeted it skeptically, but it was then produced from the episcopal archives of Constantinople, and the council accepted it "not as supplying any omission but as an authentic interpretation of the faith of Nicaea". In spite of the questions raised, it is considered most likely that this creed was in fact adopted at the 381 second ecumenical council.
On the basis of evidence both internal and external to the text, it has been argued that this creed originated not as an editing of the original Creed proposed at Nicaea in 325, but as an independent creed (probably an older baptismal creed) modified to make it more like the Nicene Creed. Some scholars have argued that the creed may have been presented at Chalcedon as "a precedent for drawing up new creeds and definitions to supplement the Creed of Nicaea, as a way of getting round the ban on new creeds in Canon 7 of Ephesus". It is generally agreed that the Niceno-Constantinopolitan Creed is not simply an expansion of the Creed of Nicaea, and was probably based on another traditional creed independent of the one from Nicaea.
The third Ecumenical Council (Council of Ephesus of 431) reaffirmed the original 325 version of the Nicene Creed and declared that "it is unlawful for any man to bring forward, or to write, or to compose a different (ἑτέραν) faith as a rival to that established by the holy Fathers assembled with the Holy Ghost in Nicaea" (i.e., the 325 creed). The word ἑτέραν is more accurately translated as used by the Council to mean "different", "contradictory", rather than "another". This statement has been interpreted as a prohibition against changing this creed or composing others, but not all accept this interpretation. This question is connected with the controversy whether a creed proclaimed by an Ecumenical Council is definitive in excluding not only excisions from its text but also additions to it.
In one respect, the Eastern Orthodox Church's received text of the Niceno-Constantinopolitan Creed differs from the earliest text, which is included in the acts of the Council of Chalcedon of 451: The Eastern Orthodox Church uses the singular forms of verbs such as "I believe", in place of the plural form ("we believe") used by the council. Byzantine Rite Eastern Catholic Churches use exactly the same form of the Creed, since the Catholic Church teaches that it is wrong to add "and the Son" to the Greek verb "ἐκπορευόμενον", though correct to add it to the Latin "qui procedit", which does not have precisely the same meaning. The form generally used in Western churches does add "and the Son" and also the phrase "God from God", which is found in the original 325 Creed.
The following table, which indicates by [square brackets] the portions of the 325 text that were omitted or moved in 381, and uses "italics" to indicate what phrases, absent in the 325 text, were added in 381, juxtaposes the earlier (AD 325) and later (AD 381) forms of this Creed in the English translation given in Philip Schaff's compilation "The Creeds of Christendom" (1877).
In the late 6th century, some Latin-speaking churches added the words "and from the Son" ("Filioque") to the description of the procession of the Holy Spirit, in what many Eastern Orthodox Christians have at a later stage argued is a violation of Canon VII of the Third Ecumenical Council, since the words were not included in the text by either the Council of Nicaea or that of Constantinople. This was incorporated into the liturgical practice of Rome in 1014. "Filioque" eventually became one of the main causes for the East-West Schism in 1054, and the failures of the repeated union attempts.
The Vatican stated in 1995 that, while the words καὶ τοῦ Υἱοῦ ("and the Son") would indeed be heretical if used with the Greek verb ἐκπορεύομαι (from ἐκ, "out of" and πορεύομαι "to come or go") – which is one of the terms used by St. Gregory of Nazianzus and the one adopted by the Council of Constantinople— the word "Filioque" is not heretical when associated with the Latin verb "procedo" and the related word "processio." Whereas the verb ἐκπορεύομαι in Gregory and other Fathers necessarily means "to originate from a cause or principle," the Latin term "procedo" (from "pro", "forward;" and "cedo", "to go") has no such connotation and simply denotes the communication of the Divine Essence or Substance. In this sense, "processio" is similar in meaning to the Greek term προϊέναι, used by the Fathers from Alexandria (especially Cyril of Alexandria) as well as others. Partly due to the influence of the Latin translations of the New Testament (especially of John 15:26), the term ἐκπορευόμενον (the present participle of ἐκπορεύομαι) in the creed was translated into Latin as "procedentem". In time, the Latin version of the Creed came to be interpreted in the West in the light of the Western concept of "processio", which required the affirmation of the "Filioque" to avoid the heresy of Arianism.
The view that the Nicene Creed can serve as a touchstone of true Christian faith is reflected in the name "symbol of faith", which was given to it in Greek and Latin, when in those languages the word "symbol" meant a "token for identification (by comparison with a counterpart)".
In the Roman Rite Mass, the Latin text of the Niceno-Constantinopolitan Creed, with "Deum de Deo" (God from God) and "Filioque" (and from the Son), phrases absent in the original text, was previously the only form used for the "profession of faith". The Roman Missal now refers to it jointly with the Apostles' Creed as "the Symbol or Profession of Faith or Creed", describing the second as "the baptismal Symbol of the Roman Church, known as the Apostles' Creed".
The liturgies of the ancient Churches of Eastern Christianity (Eastern Orthodox Church, Oriental Orthodoxy, Church of the East and the Eastern Catholic Churches), use the Niceno-Constantinopolitan Creed, never the Western Apostles' Creed.
While in certain places where the Byzantine Rite is used, the choir or congregation sings the Creed at the Divine Liturgy, in many places the Creed is typically recited by the cantor, who in this capacity represents the whole congregation although many, and sometimes all, members of the congregation may join in rhythmic recitation. Where the latter is the practice, it is customary to invite, as a token of honor, any prominent lay member of the congregation who happens to be present, e.g., royalty, a visiting dignitary, the Mayor, etc., to recite the Creed in lieu of the cantor. This practice stems from the tradition that the prerogative to recite the Creed belonged to the Emperor, speaking for his populace.
Some evangelical and other Christians consider the Nicene Creed helpful and to a certain extent authoritative, but not infallibly so in view of their belief that only Scripture is truly authoritative. Non-Trinitarian groups, such as the Church of the New Jerusalem, The Church of Jesus Christ of Latter-day Saints and the Jehovah's Witnesses, explicitly reject some of the statements in the Nicene Creed.
There are several designations for the two forms of the Nicene creed, some with overlapping meanings:
In musical settings, particularly when sung in Latin, this Creed is usually referred to by its first word, "Credo".
This section is not meant to collect the texts of all liturgical versions of the Nicene Creed, and provides only three of special interest: the Greek, the Latin, and the Armenian. Others are mentioned separately, but without the texts. All ancient liturgical versions, even the Greek, differ at least to some small extent from the text adopted by the First Councils of Nicaea and Constantinople. The Creed was originally written in Greek, owing to the location of the two councils.
But though the councils' texts have "Πιστεύομεν ... ὁμολογοῦμεν ... προσδοκοῦμεν" ("we" believe ... confess ... await), the Creed that the Churches of Byzantine tradition use in their liturgy has "Πιστεύω ... ὁμολογῶ ... προσδοκῶ" ("I" believe ... confess ... await), accentuating the personal nature of recitation of the Creed. The Latin text, as well as using the singular, has two additions: "Deum de Deo" (God from God) and "Filioque" (and from the Son). The Armenian text has many more additions, and is included as showing how that ancient church has chosen to recite the Creed with these numerous elaborations of its contents.
An English translation of the Armenian text is added; English translations of the Greek and Latin liturgical texts are given at English versions of the Nicene Creed in current use.
The Latin text adds "Deum de Deo" and "Filioque" to the Greek. On the latter see The Filioque Controversy above. Inevitably also, the overtones of the terms used, such as "παντοκράτορα" ("pantokratora") and "omnipotentem", differ ("pantokratora" meaning ruler of all; "omnipotentem" meaning omnipotent, almighty). The implications of the difference in overtones of "ἐκπορευόμενον" and "qui ... procedit" were the object of the study "The Greek and the Latin Traditions regarding the Procession of the Holy Spirit" published by the Pontifical Council for Promoting Christian Unity in 1996.
Again, the terms "ὁμοούσιον" and "consubstantialem", translated as "of one being" or "consubstantial", have different overtones, being based respectively on Greek οὐσία (stable being, immutable reality, substance, essence, true nature), and Latin "substantia" (that of which a thing consists, the being, essence, contents, material, substance).
"Credo", which in classical Latin is used with the accusative case of the thing held to be true (and with the dative of the person to whom credence is given), is here used three times with the preposition "in", a literal translation of the Greek "" (in unum Deum ..., in unum Dominum ..., in Spiritum Sanctum ...), and once in the classical preposition-less construction (unam, sanctam, catholicam et apostolicam Ecclesiam).
English translation of the Armenian version
The version in the Church Slavonic language, used by several Eastern Orthodox Churches is practically identical with the Greek liturgical version.
This version is used also by some Byzantine Rite Eastern Catholic Churches. Although the Union of Brest excluded addition of the "Filioque", this was sometimes added by Ruthenian Catholics, whose older liturgical books also show the phrase in brackets, and by Ukrainian Catholics. Writing in 1971, the Ruthenian Scholar Fr. Casimir Kucharek noted, "In Eastern Catholic Churches, the "Filioque" may be omitted except when scandal would ensue. Most of the Eastern Catholic Rites use it." However, in the decades that followed 1971 it has come to be used more rarely.
The versions used by Oriental Orthodoxy and the Church of the East differ from the Greek liturgical version in having "We believe", as in the original text, instead of "I believe". | https://en.wikipedia.org/wiki?curid=21541 |
Nuclear fusion
Nuclear fusion is a reaction in which two or more atomic nuclei are combined to form one or more different atomic nuclei and subatomic particles (neutrons or protons). The difference in mass between the reactants and products is manifested as either the release or absorption of energy. This difference in mass arises due to the difference in atomic "binding energy" between the atomic nuclei before and after the reaction. Fusion is the process that powers active or "main sequence" stars, or other high magnitude stars.
A fusion process that produces nuclei lighter than iron-56 or nickel-62 will generally release energy. These elements have relatively small mass per nucleon and large binding energy per nucleon. Fusion of nuclei lighter than these releases energy (an exothermic process), while fusion of heavier nuclei results in energy retained by the product nucleons, and the resulting reaction is endothermic. The opposite is true for the reverse process, nuclear fission. This means that the lighter elements, such as hydrogen and helium, are in general more fusible; while the heavier elements, such as uranium, thorium and plutonium, are more fissionable. The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements heavier than iron.
In 1920, Arthur Eddington suggested hydrogen-helium fusion could be the primary source of stellar energy. Quantum tunneling was discovered by Friedrich Hund in 1929, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to show that large amounts of energy could be released by fusing small nuclei. Building on the early experiments in nuclear transmutation by Ernest Rutherford, laboratory fusion of hydrogen isotopes was accomplished by Mark Oliphant in 1932. In the remainder of that decade, the theory of the main cycle of nuclear fusion in stars was worked out by Hans Bethe. Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project. Fusion was accomplished in 1951 with the Greenhouse Item nuclear test. Nuclear fusion on a large scale in an explosion was first carried out on 1 November 1952, in the Ivy Mike hydrogen bomb test.
Research into developing controlled fusion inside fusion reactors has been ongoing since the 1940s, but the technology is still in its development phase.
The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force, which combines together protons and neutrons, and the Coulomb force, which causes protons to repel each other. Protons are positively charged and repel each other by the Coulomb force, but they can nonetheless stick together, demonstrating the existence of another, short-range, force referred to as nuclear attraction. Light nuclei (or nuclei smaller than iron and nickel) are sufficiently small and proton-poor allowing the nuclear force to overcome repulsion. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei, however, no energy is released, since the nuclear force is short-range and cannot continue to act across longer nuclear length scales. Thus, energy is not released with the fusion of such nuclei; instead, energy is required as input for such processes.
Fusion powers stars and produces virtually all elements in a process called nucleosynthesis. The Sun is a main-sequence star, and, as such, generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen and makes 616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always accompanies it. For example, in the fusion of two hydrogen nuclei to form helium, 0.7% of the mass is carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation.
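As a rough consistency check (the solar-luminosity value of about 3.8×10^26 W is a standard figure, not quoted in this article), applying the 0.7% mass deficit to 620 million metric tons of hydrogen per second gives
\[ \Delta m \approx 0.007 \times 6.2\times 10^{11}\ \mathrm{kg/s} \approx 4.3\times 10^{9}\ \mathrm{kg/s}, \qquad P = \Delta m\, c^{2} \approx 4.3\times 10^{9} \times (3.0\times 10^{8})^{2} \approx 3.9\times 10^{26}\ \mathrm{W}, \]
in agreement with the Sun's observed output.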
It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be brought close enough such that the attractive nuclear force is greater than the repulsive Coulomb force. The strong force grows rapidly once the nuclei are close enough, and the fusing nucleons can essentially "fall" into each other and the result is fusion and net energy produced. The fusion of lighter nuclei, which creates a heavier nucleus and often a free neutron or proton, generally releases more energy than it takes to force the nuclei together; this is an exothermic process that can produce self-sustaining reactions.
Energy released in most nuclear reactions is much larger than in chemical reactions, because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV, less than one-millionth of the 17.6 MeV released in the deuterium–tritium (D–T) reaction described below. Fusion reactions have an energy density many times greater than nuclear fission; the reactions produce far greater energy per unit of mass even though "individual" fission reactions are generally much more energetic than "individual" fusion ones, which are themselves millions of times more energetic than chemical reactions. Only direct conversion of mass into energy, such as that caused by the annihilatory collision of matter and antimatter, is more energetic per unit of mass than nuclear fusion. (The complete conversion of one gram of matter would release 9×10^13 joules of energy.)
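A worked comparison, using the 17.6 MeV figure that appears later in this article and standard unit conversions: a single D–T reaction consumes about 5 atomic mass units of fuel, so the energy density is about
\[ \frac{17.6\ \mathrm{MeV}}{5\ \mathrm{u}} \approx \frac{2.8\times 10^{-12}\ \mathrm{J}}{8.3\times 10^{-27}\ \mathrm{kg}} \approx 3.4\times 10^{14}\ \mathrm{J/kg}, \]
a few tenths of a percent of the 9×10^16 J/kg that complete conversion of mass into energy would yield.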
Research into using fusion for the production of electricity has been pursued for over 60 years. Although controlled fusion is generally manageable with current technology (e.g. fusors), successful accomplishment of economic fusion has been stymied by scientific and technological difficulties; nonetheless, important progress has been made. At present, controlled fusion reactions have been unable to produce break-even (self-sustaining) controlled fusion. The two most advanced approaches for it are magnetic confinement (toroid designs) and inertial confinement (laser designs).
Workable designs for a toroidal reactor that theoretically will deliver ten times more fusion energy than the amount needed to heat plasma to the required temperatures are in development (see ITER). The ITER facility is expected to finish its construction phase in 2025. It will start commissioning the reactor that same year and initiate plasma experiments, but is not expected to begin full deuterium-tritium fusion until 2035.
Similarly, Canadian-based General Fusion, which is developing a magnetized target fusion nuclear energy system, aims to build its demonstration plant by 2025.
The US National Ignition Facility, which uses laser-driven inertial confinement fusion, was designed with a goal of break-even fusion; the first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early 2011.
An important fusion process is the stellar nucleosynthesis that powers stars, including the Sun. In the 20th century, it was recognized that the energy released from nuclear fusion reactions accounted for the longevity of stellar heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei as a byproduct of the fusion process. Different reaction chains are involved, depending on the mass of the star (and therefore the pressure and temperature in its core).
Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered. Eddington's paper, based on knowledge at the time, reasoned that:
All of these speculations were proven correct in the following decades.
The primary source of solar energy, and that of similar-size stars, is the fusion of hydrogen to form helium (the proton-proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle, with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle and other processes are more important. As a star uses up a substantial fraction of its hydrogen, it begins to synthesize heavier elements. The heaviest elements are synthesized by fusion that occurs as a more massive star undergoes a violent supernova at the end of its life, a process known as supernova nucleosynthesis.
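The net proton-proton result described above can be summarized as a single overall reaction; the total energy release of about 26.7 MeV, including annihilation of the positrons, is a standard value not stated in this article:
\[ 4\,{}^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e}, \qquad Q \approx 26.7\ \mathrm{MeV}. \]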
A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through the Coulomb barrier.
When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbours due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum objects. So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the inclusion of quantum mechanics is therefore necessary for proper calculations.
The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from "all" the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as the atomic number of the nucleus grows.
The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are ⁶²Ni, ⁵⁸Fe, ⁵⁶Fe, and ⁶⁰Ni. Even though the nickel isotope ⁶²Ni is more stable, the iron isotope ⁵⁶Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create ⁶²Ni through the alpha process.
An exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium, the next heaviest element. This is because protons and neutrons are fermions, which according to the Pauli exclusion principle cannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons (it is a doubly magic nucleus), so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus is so tightly bound that it is commonly treated as a single quantum mechanical particle in nuclear physics, namely, the alpha particle.
The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Not until the two nuclei actually come close enough for long enough so the strong nuclear force can take over (by way of tunneling) is the repulsive electrostatic force overcome. Consequently, even when the final energy state is lower, there is a large energy barrier that must first be overcome. It is called the Coulomb barrier.
The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products.
Using deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV, about 7500 times less energy. The (intermediate) result of the fusion is an unstable ⁵He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining ⁴He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier.
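The 14.1/3.5 MeV split follows from momentum conservation: the two products carry equal and opposite momenta, so each takes a share of the energy inversely proportional to its mass. A minimal sketch, with masses rounded in atomic mass units:

```python
# Energy split of D + T -> 4He + n by momentum conservation.
E_TOTAL = 17.6   # MeV liberated per reaction
M_N     = 1.009  # neutron mass, u (rounded)
M_HE4   = 4.003  # helium-4 mass, u (rounded)

e_neutron = E_TOTAL * M_HE4 / (M_N + M_HE4)  # lighter product carries more energy
e_alpha   = E_TOTAL * M_N / (M_N + M_HE4)
print(e_neutron, e_alpha)  # ~14.1 MeV and ~3.5 MeV
```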
The reaction cross section (σ) is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross section and velocity. This average is called the 'reactivity', denoted ⟨σv⟩. The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities: f = n₁n₂⟨σv⟩.
If a species of nuclei is reacting with a nucleus like itself, such as the DD reaction, then the product n₁n₂ must be replaced by n²/2.
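A minimal sketch of this rate formula follows; the densities and reactivity below are illustrative assumptions, not values from the text:

```python
# Volumetric fusion rate f = n1 * n2 * <sigma v>, with the n^2/2 correction
# for a species reacting with itself (avoids double-counting identical pairs).
def fusion_rate(n1: float, n2: float, sigma_v: float, same_species: bool = False) -> float:
    """Fusions per m^3 per second; densities in m^-3, reactivity in m^3/s."""
    if same_species:
        return 0.5 * n1 * n1 * sigma_v
    return n1 * n2 * sigma_v

# Example: 50/50 D-T plasma at total density 1e20 m^-3, assumed <sigma v> = 1e-22 m^3/s.
print(fusion_rate(0.5e20, 0.5e20, 1e-22))
```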
⟨σv⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.
The significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This criterion sets an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach its current advanced technical state.
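A minimal statement of a Lawson-type ignition condition, in the standard textbook form (a sketch, not a quotation from this article), assumes that heating by the charged fusion products, of energy E_ch, must balance losses over the energy confinement time:

```latex
% Lawson-type triple-product condition: density x temperature x confinement time
% must exceed a bound set by the reactivity and the charged-product energy E_ch.
\[
  n \, T \, \tau_E \;\ge\; \frac{12 \, k_B \, T^2}{E_{\mathrm{ch}} \, \langle\sigma v\rangle}
\]
```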
If matter is sufficiently heated (hence being plasma) and confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. Thermonuclear weapons produce what amounts to an uncontrolled release of fusion energy. Controlled thermonuclear fusion concepts use magnetic fields to confine the plasma.
Inertial confinement fusion (ICF) is a method aimed at releasing fusion energy by heating and compressing a fuel target, typically a pellet containing deuterium and tritium.
Inertial electrostatic confinement is a set of devices that use an electric field to heat ions to fusion conditions. The best known is the fusor. Starting in 1999, a number of amateurs have achieved fusion using these homemade devices. Other IEC devices include the Polywell, MIX POPS and Marble concepts.
If the energy to initiate the reaction comes from accelerating one of the nuclei, the process is called "beam-target" fusion; if both nuclei are accelerated, it is "beam-beam" fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy and can be done efficiently, requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross sections. Therefore, the vast majority of ions expend their energy emitting bremsstrahlung radiation and ionizing atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry, where they are used in measurement equipment for locating and mapping oil reserves.
To overcome the problem of bremsstrahlung radiation in beam-target fusion, a combined approach has been suggested by the companies Tri Alpha and Helion Energy, based on the interpenetration of two oppositely directed plasmoids. Theoretical work indicates that by creating and heating two accelerated, head-on-colliding plasmoids to thermal energies of some kiloelectronvolts, low in comparison with those required for thermonuclear fusion, net fusion gain is possible even with aneutronic fuels such as p-¹¹B. To attain the necessary conditions for break-even by this method, the accelerated plasmoids must have colliding velocities of the order of some thousands of kilometers per second (10⁶ m/s), depending on the kind of fusion fuel. In addition, the plasmoid density must be between the inertial and magnetic fusion criteria.
Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Attempts at net energy production from this reaction have been unsuccessful because of the high energy required to create muons, the muon's short mean lifetime of 2.2 µs, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.
Some other confinement principles have been investigated.
At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature ("T" ≈ 15 MK) and density (160 g/cm³), the energy release rate is only 276 μW/cm³, about a quarter of the volumetric rate at which a resting human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature, and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The steep dependence of the fusion rate on temperature (roughly exp(−"E"/"kT")) leads to the need for temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: "T" ≈ 0.1–1.0×10⁹ K.
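As a quick sanity check on that comparison, one can use assumed round numbers for a resting human, roughly 100 W dissipated over roughly 10⁵ cm³ of body volume (assumptions of this example, not figures from the text):

```python
# Back-of-envelope check of the solar-core power-density comparison above.
body_power_uW = 100e6      # ~100 W resting metabolic output, in microwatts
body_volume_cm3 = 1e5      # ~0.1 m^3 of body volume
human_uW_per_cm3 = body_power_uW / body_volume_cm3   # ~1000 uW/cm^3
print(276 / human_uW_per_cm3)                        # ~0.28, i.e. about a quarter
```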
In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. Another concern is the production of neutrons, which activate the reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as "aneutronic".
To be a useful energy source, a fusion reaction must satisfy several criteria. It must be exothermic; involve low atomic number ("Z") nuclei, so that the Coulomb barrier is small; have only two reactants, since three-body collisions are far less probable; have two or more products, so that energy and momentum can be conserved simultaneously; and conserve both protons and neutrons, avoiding the small cross sections of the weak interaction.
Few reactions meet these criteria. The following are those with the largest cross sections:
For reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given.
Some reaction candidates can be eliminated at once. The D-⁶Li reaction has no advantage compared to p-¹¹B because it is roughly as difficult to burn but produces substantially more neutrons through D-D side reactions. There is also a p-⁷Li reaction, but its cross section is far too low, except possibly when "T"i > 1 MeV, and at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p-⁹Be reaction, which is not only difficult to burn, but ⁹Be can be easily induced to split into two alpha particles and a neutron.
In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium in "dry" fusion bombs and some proposed fusion reactors:
The latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954. Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo "Shrimp" had understood the usefulness of ⁶Li in tritium production, but had failed to recognize that ⁷Li fission would greatly increase the yield of the bomb. While ⁷Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout.
To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the nuclear cross section. Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨σv⟩/"T"² is a maximum. This is also the temperature at which the value of the triple product "nT"τ required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/"T"² (see Lawson criterion). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨σv⟩/"T"² at that temperature are given for a few of these reactions in the following table.
Note that many of the reactions form chains. For instance, a reactor fueled with T and ³He creates some D, which is then possible to use in the D-³He reaction if the energies are "right". An elegant idea is to combine the reactions (8) and (9). The ³He from reaction (8) can react with ⁶Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate.
Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products "E"fus, the energy of the charged fusion products "E"ch, and the atomic number "Z" of the non-hydrogenic reactant.
Specification of the D-D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the T and ³He products. T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The D-³He reaction is optimized at a much higher temperature, so the burnup at the optimum D-D temperature may be low. Therefore, it seems reasonable to assume the T but not the ³He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1):
For calculating the power of a reactor (in which the reaction rate is determined by the D-D step), we count the D-D fusion energy "per D-D reaction" as "E"fus = (4.03 MeV + 17.6 MeV)×50% + (3.27 MeV)×50% = 12.5 MeV and the energy in charged particles as "E"ch = (4.03 MeV + 3.5 MeV)×50% + (0.82 MeV)×50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium).
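The bookkeeping in this paragraph is straightforward to reproduce. The sketch below redoes the arithmetic with the values given above; the divisor 2.5 counts deuterons consumed per D-D reaction (two in the D-D step plus, half the time, one more in the follow-on D-T reaction):

```python
# Reproduces the catalyzed D-D energy bookkeeping above. All energies in MeV.
E_DD_T  = 4.03          # branch (2i): D + D -> T + p
E_DT    = 17.6          # follow-on reaction (1): D + T -> 4He + n
E_DD_H3 = 3.27          # branch (2ii): D + D -> 3He + n
E_CH_T  = 4.03 + 3.5    # charged energy of branch (2i) plus the follow-on alpha
E_CH_H3 = 0.82          # charged product of branch (2ii): the 3He itself

e_fus = 0.5 * (E_DD_T + E_DT) + 0.5 * E_DD_H3   # ~12.5 MeV per D-D reaction
e_ch  = 0.5 * E_CH_T + 0.5 * E_CH_H3            # ~4.2 MeV in charged particles
print(e_fus, e_ch, e_fus / 2.5)                 # per-deuteron figure: ~5.0 MeV
```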
Another unique aspect of the D-D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate.
With this choice, we tabulate parameters for four of the most important reactions:
The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons like radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as ("E"fus-"E"ch)/"E"fus. For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium.
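A one-line check of this definition for the first two fuels (D-T values as given earlier; catalyzed D-D values from the preceding bookkeeping):

```python
# Neutronicity (E_fus - E_ch) / E_fus, as defined above.
def neutronicity(e_fus_MeV: float, e_ch_MeV: float) -> float:
    return (e_fus_MeV - e_ch_MeV) / e_fus_MeV

print(neutronicity(17.6, 3.5))   # D-T: ~0.80
print(neutronicity(12.5, 4.2))   # catalyzed D-D: ~0.66
```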
Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that the particle density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/("Z"+1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/"T"². On the other hand, because the D-D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus making the reaction more efficient.
Thus there is a "penalty" of (2/(Z+1)) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode", the "penalty" would not apply.) There is at the same time a "bonus" of a factor 2 for - because each ion can react with any of the other ions, not just a fraction of them.
We can now compare these reactions in the following table.
The maximum value of ⟨σv⟩/"T"² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "inverse reactivity" are found by dividing 1.24×10⁻²⁴ by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the D-T reaction under comparable conditions. The column "Lawson criterion" weights these results with "E"ch and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D-T reaction. The next-to-last column is labeled "power density" and weights the practical reactivity by "E"fus. The final column indicates how much lower the fusion power density of the other reactions is compared to the D-T reaction and can be considered a measure of the economic potential.
The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that in aggregate neutralize the ions' bulk electrical charge and form a plasma. The electrons will generally have a temperature comparable to or greater than that of the ions, so they will collide with the ions and emit x-ray radiation of 10–30 keV energy, a process known as Bremsstrahlung.
The huge size of the Sun and stars means that the x-rays produced in this process will not escape and will deposit their energy back into the plasma. They are said to be opaque to x-rays. But any terrestrial fusion reactor will be optically thin for x-rays of this energy range. X-rays are difficult to reflect, but they are effectively absorbed (and converted into heat) in less than a millimetre's thickness of stainless steel (which is part of a reactor's shield). This means the bremsstrahlung process is carrying energy out of the plasma, cooling it.
The ratio of fusion power produced to x-ray radiation lost to walls is an important figure of merit. This ratio is generally maximized at a much higher temperature than that which maximizes the power density (see the previous subsection). The following table shows estimates of the optimum temperature and the power ratio at that temperature for several reactions:
The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However, because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products themselves "must" remain in the plasma until they have given up their energy, and "will" remain some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too.
The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product. This will not change the optimum operating point for D-T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D-T is even lower and the required confinement even more difficult to achieve. For D-D and D-³He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For ³He-³He, p-⁶Li and p-¹¹B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma are considered, and rejected, in "fundamental limitations on plasma fusion systems not in thermodynamic equilibrium". This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with.
In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometer, the energy needed for the fusion of two hydrogen nuclei is:
"E"thresh ≈ (1/4πε₀)("e"²/"r") ≈ 1.4 MeV
This would imply that for the core of the Sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that a hydrogen nucleus would reach this threshold is suppressed by a Boltzmann factor of order exp(−1000); that is, fusion would classically never occur. However, fusion in the Sun does occur due to quantum mechanics.
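A short sketch of this classical estimate, using standard physical constants (the precise radius is an assumption of the example; only the order of magnitude matters):

```python
# Classical Coulomb barrier between two protons at r ~ 1 fm, and the exponent
# of the corresponding Boltzmann suppression at the solar-core temperature.
import math

E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
R = 1e-15                    # m, ~nuclear radius (assumed)

barrier_J = E_CHARGE**2 / (4 * math.pi * EPS0 * R)
barrier_MeV = barrier_J / E_CHARGE / 1e6
print(barrier_MeV)                    # ~1.4 MeV

kT_keV = 1.4                          # solar core, from the text
print(barrier_MeV * 1e3 / kT_keV)     # E/kT ~ 1000 -> suppression ~ exp(-1000)
```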
The probability that fusion occurs is greatly increased compared to the classical picture, thanks to the smearing of the effective radius by the de Broglie wavelength as well as quantum tunnelling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section, which describes the probability that particles will fuse by giving a characteristic area of interaction. An estimation of the fusion cross-sectional area is often broken into three pieces:
σ ≈ σgeometry × "T" × "R",
where σgeometry is the geometric cross section, "T" is the barrier transparency and "R" represents the reaction characteristics of the reaction.
σgeometry is of the order of the square of the de Broglie wavelength, σgeometry ~ λ², with λ = ħ/√(2"m"r"ε"), where "m"r is the reduced mass of the system and "ε" is the center-of-mass energy of the system.
More detailed forms of the cross section can be derived through nuclear physics based models and R-matrix theory.
The Naval Research Lab's plasma physics formulary gives the total cross section in barns, as a function of the energy (in keV) of the incident particle towards a target ion at rest, fit by a five-coefficient formula of the Duane form:
σ("ε") = [A₅ + A₂/((A₄ − A₃"ε")² + 1)] / ["ε"(exp(A₁/√"ε") − 1)]
with the following coefficient values:
Bosch and Hale also report R-matrix-calculated cross sections fitted to observational data with Padé rational approximating coefficients. With energy in units of keV and cross sections in units of millibarn, the Bosch-Hale S-factor has the form:
S("ε") = (A₁ + "ε"(A₂ + "ε"(A₃ + "ε"(A₄ + "ε"A₅)))) / (1 + "ε"(B₁ + "ε"(B₂ + "ε"(B₃ + "ε"B₄))))
with the coefficient values:
In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution, meaning the particles have a range of energies centered around the plasma temperature. The Sun, magnetically confined plasmas, and inertial confinement fusion systems are well modeled as being in thermal equilibrium. In these cases, the value of interest is the fusion cross section averaged across the Maxwell–Boltzmann distribution. The Naval Research Lab's plasma physics formulary tabulates Maxwell-averaged fusion reactivities ⟨σv⟩ in cm³/s.
For temperatures "T" ≤ 25 keV, the data can be represented by:
⟨σv⟩DD = 2.33×10⁻¹⁴ · "T"^(−2/3) · exp(−18.76 "T"^(−1/3)) cm³/s
⟨σv⟩DT = 3.68×10⁻¹² · "T"^(−2/3) · exp(−19.94 "T"^(−1/3)) cm³/s
with "T" in units of keV.
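A small script evaluating these low-temperature fits (a sketch based on the fits as quoted above; outside their stated range of validity the full tabulated data should be used):

```python
# NRL-formulary-style Maxwell-averaged reactivity fits; T in keV, result in cm^3/s.
import math

def sigma_v_dd(T_keV: float) -> float:
    return 2.33e-14 * T_keV**(-2/3) * math.exp(-18.76 * T_keV**(-1/3))

def sigma_v_dt(T_keV: float) -> float:
    return 3.68e-12 * T_keV**(-2/3) * math.exp(-19.94 * T_keV**(-1/3))

for T in (2.0, 5.0, 10.0, 20.0):
    print(f"T = {T:>4} keV: DD {sigma_v_dd(T):.2e}  DT {sigma_v_dt(T):.2e} cm^3/s")
```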
National Geographic Society
The National Geographic Society (NGS), headquartered in Washington, D.C., United States, is one of the largest non-profit scientific and educational organizations in the world. Founded in 1888, its interests include geography, archaeology, and natural science, the promotion of environmental and historical conservation, and the study of world culture and history. The National Geographic Society's logo is a yellow portrait frame—rectangular in shape—which appears on the margins surrounding the front covers of its magazines and as its television channel logo. Through National Geographic Partners (a joint venture with The Walt Disney Company), the Society operates the magazine, TV channels, a website, worldwide events, and other media operations.
The National Geographic Society was founded in 1888 "to increase and diffuse geographic knowledge". It is governed by a board of trustees, whose 21 members include distinguished educators, business executives, former government officials and conservationists. The organization sponsors and funds scientific research and exploration. National Geographic maintains a museum for the public in its Washington, D.C., headquarters.
It has helped to sponsor popular traveling exhibits, such as the early 2010s "King Tut" exhibit featuring artifacts from the tomb of the young Egyptian Pharaoh. Its Education Foundation gives grants to education organizations and individuals to improve geography education. Its Committee for Research and Exploration has awarded more than 11,000 grants for scientific research and exploration.
National Geographic has retail stores in Washington, D.C., London, Sydney, and Panama. The locations outside of the United States are operated by Worldwide Retail Store S.L., a Spanish holding company.
The Society's media arm is National Geographic Partners, a joint venture between The Walt Disney Company and the Society, which publishes a journal, "National Geographic" in English and nearly 40 local-language editions. It also publishes other magazines, books, school products, maps, and Web and film products in numerous languages and countries. National Geographic's various media properties reach more than 280 million people monthly.
The National Geographic Society began as a club for an elite group of academics and wealthy patrons interested in travel and exploration. On January 13, 1888, 33 explorers and scientists gathered at the Cosmos Club, a private club then located on Lafayette Square in Washington, D.C., to organize "a society for the increase and diffusion of geographical knowledge." After preparing a constitution and a plan of organization, the National Geographic Society was incorporated two weeks later on January 27. Gardiner Greene Hubbard became its first president and his son-in-law, Alexander Graham Bell, succeeded him in 1897.
In 1899, Bell's son-in-law Gilbert Hovey Grosvenor was named the first full-time editor of National Geographic magazine and served the organization for fifty-five years (until 1954), and members of the Grosvenor family have played important roles in the organization since. Bell and Gilbert Hovey Grosvenor devised the successful marketing notion of Society membership and the first major use of photographs to tell stories in magazines.
The chairman of the National Geographic Society is Jean Case. Michael Ulica is interim president and chief executive. The editor-in-chief of National Geographic magazine is Susan Goldberg. Gilbert Melville Grosvenor, a former chairman, received the Presidential Medal of Freedom in 2005 for his leadership in geography education.
In 2004, the National Geographic Society headquarters in Washington, D.C., was one of the first buildings to receive a "Green" certification from Global Green USA. The Society received the prestigious Prince of Asturias Award for Communication and Humanities in October 2006 in Oviedo, Spain.
National Geographic Expeditions was launched in 1999 to further the Society's mission, with the proceeds going towards that mission.
In 2006, the society purchased Hampton-Brown, a publisher of English-as-a-second-language educational materials, using a good part of its endowment. However, the publisher did not generate much profit. By 2009, the society's endowment was about $200 million.
National Geographic Ventures, its commercial arm, launched a music division, National Geographic Music and Radio, in 2007. In October 2007, the society formed the National Geographic Entertainment division to house its entertainment units.
In 2013, the society was investigated for possible violation of the Foreign Corrupt Practices Act relating to its close association with an Egyptian government official responsible for antiquities.
On September 9, 2015, the Society announced that it would re-organize its media properties and publications into a new company known as National Geographic Partners, which would be majority-owned by 21st Century Fox (21CF) with a 73% stake. This new, for-profit corporation, would own "National Geographic" and other magazines, as well as its affiliated television networks—most of which were already owned in joint ventures with 21CF. As a consequence, the Society and 21st Century Fox announced on November 2, 2015, that 9 percent of National Geographic's 2,000 employees, approximately 180 people, would be laid off, constituting the biggest staff reduction in the Society's history. Later, The Walt Disney Company assumed 21CF's share in National Geographic Partners, following the completion of Disney's acquisition of most of 21CF assets on March 20, 2019.
The Society has helped sponsor many expeditions and research projects over the years.
The Hubbard Medal is awarded by the National Geographic Society for distinction in exploration, discovery, and research. The medal is named for Gardiner Greene Hubbard, the first National Geographic Society president. The Hubbard Medal has been presented 44 times, the most recent award going to Peter H. Raven.
The National Geographic Society also awards, rarely, the Alexander Graham Bell Medal, for exceptional contributions to geographic research. The award is named after Alexander Graham Bell, scientist, inventor and the second president of the NGS. Up to mid-2011, the medal had been presented only twice:
The Society operates the National Geographic Museum, located at 1145 17th Street, NW (17th and M), in Washington, D.C. The museum features changing exhibitions featuring the work of National Geographic explorers, photographers, and scientists. There are also changing exhibits related to natural history, culture, history or society. Permanent exhibits include artifacts like the camera Robert Peary used at the North Pole and pottery that Jacques Cousteau recovered from a shipwreck.
National Geographic Partners, a for-profit joint venture between 21st Century Fox (which owns a 73% stake) and the Society (which owns 27%), was established in 2015 to handle commercial activities of the Society, including television channels worldwide (which were already co-owned by the Society and Fox) and magazine publications. The Walt Disney Company assumed 21CF’s share of National Geographic Partners in March 2019.
Most of National Geographic Partners' businesses predate the joint venture's establishment in 2015, and even the launch of the National Geographic Channel in Asia and Europe by the original News Corporation (of which 21st Century Fox is one of the successors) in the late 1990s.
In October 2007, the society formed the National Geographic Entertainment division to encompass the Cinema Ventures, Feature Films, Kids Entertainment, Home Entertainment and Music & Radio divisions. Music and Radio division president David Beal was appointed head of Nat Geo Entertainment.
"The National Geographic Magazine", later shortened to "National Geographic", published its first issue in October 1888, nine months after the Society was founded, as the Society's official journal, a benefit for joining the tax-exempt National Geographic Society. Starting with the February 1910 (Vol XXI, No. 2) issue, the magazine began using its now famous trademarked yellow border around the edge of its covers.
There are 12 monthly issues of "National Geographic" per year. The magazine contains articles about geography, popular science, world history, culture, current events and photography of places and things all over the world and universe. "National Geographic" magazine is currently published in 40 local-language editions in many countries around the world. Combined English and other language circulation is around 6.8 million monthly, with some 60 million readers.
In addition to its flagship magazine, the Society publishes several other periodicals:
The Society also ran an online daily news outlet called "National Geographic News".
Additionally, the Society publishes atlases, books, and maps. It previously published and co-published other magazines, including "National Geographic Adventure", "National Geographic Research" (a scientific journal), and others, and continues to publish special issues of various magazines.
The Society publishes a series of books about natural remedies and medicinal herbs. Titles include "Guide to Medicinal Herbs," "Complete Guide to Natural Home Remedies," "Nature's Best Remedies," "Healing Remedies," and "Natural Home Remedies." The books claim to describe, among other things, plants, herbs, and essential oils purported to help treat diseases and ailments. While some appropriate warnings about such concerns as anecdotal evidence and side effects are given, the books have been criticized from a medical perspective for a number of reasons. These include making recommendations that lack scientific evidence, inconsistent claims from one book to the next as well as internal contradictions, and failure to mention effective and safe alternatives.
National Geographic Films was a wholly owned taxable subsidiary of the National Geographic Society.
National Geographic Films appointed Adam Leipzig as president in 2004. In October 2007, the society formed the National Geographic Entertainment division, which included Cinema Ventures and Feature Films. In 2008, the film division and Imagenation formed a $100 million fund to develop, produce, finance and acquire 10–15 films over five years. The first film the fund invested in was "The Way Back".
Leipzig left the company in January 2010. On March 15, 2010, former Miramax president Daniel Battsek started as National Geographic Films president. Battsek also ended up overseeing Nat Geo Cinema Ventures' distribution and big-screen production before he left in 2012 to become president of Cohen Media Group.
Films it has produced include:
In 2005, the National Geographic Society acquired the film distribution arm of Destination Cinema and entered the film distribution business.
National Geographic Cinema Ventures (NGCV) was a giant-screen, 3D and specialty films production and distribution company operated under National Geographic Entertainment.
At the late 2011 American Alliance of Museums conference, National Geographic Cinema Ventures launched the Museum Partnership Program in response to museums' desire for a brand for their giant-screen theaters. Starting on February 1, 2018, Cosmic Pictures gained distribution rights to a number of titles in the NGCV library.
The Museum Partnership Program is a branding and content program of National Geographic Cinema Ventures. Partner museums would receive immediate market exclusivity on two new digital 3D films per year and gain access to the National Geographic organization, from members to exhibition to television.
There were nine partner museums as of 2012:
The National Geographic Society also produces television programs. National Geographic television specials and series have been aired on PBS and other networks in the United States and globally for many years. The "Geographic" series in the U.S. started on CBS in 1964, moved to ABC in 1973, shifted to PBS (produced by WQED, Pittsburgh) in 1975, shifted to NBC in 1995, and returned to PBS in 2000. It moved to the National Geographic Channel in 2005.
It has featured stories on numerous scientific figures such as Jacques Cousteau, Jane Goodall, and Louis Leakey that not only featured their work but also helped make them world-famous and accessible to millions. Most of the specials were narrated by various actors, including Glenn Close, Linda Hunt, Stacy Keach, Richard Kiley, Burgess Meredith, Susan Sarandon, Alexander Scourby, Martin Sheen, and Peter Strauss. The specials' theme music, by Elmer Bernstein, was also adopted by the National Geographic Channel.
Another long-running show is "National Geographic Explorer".
The original News Corporation launched National Geographic Channel in Asia and Europe in the late 1990s, in partnership with the Society. The Society provides programming to the National Geographic-branded channels worldwide, while, as of March 2019, The Walt Disney Company's subsidiaries (Walt Disney Television in the United States and Fox Networks Group outside the United States) handle distribution of the channels and advertisement sales. The National Geographic Channel has begun to launch a number of sub-branded channels in international markets, such as Nat Geo Wild, Nat Geo People and Nat Geo Kids.
The U.S. domestic version of National Geographic Channel was launched in January 2001 as a joint venture of National Geographic and Fox Cable Networks.
National Geographic Music and Radio (NGMR) is the music and radio division of National Geographic Ventures. The scope of the division includes National Geographic Live! events, digital music distribution, music publishing, radio content, Nat Geo Music TV channel (available in parts of Asia and Europe) and film and TV music. Clear Channel, Salem Communications and NPR were distribution partners.
In early August 2007, National Geographic Ventures announced the existence of the then-recently formed division. The division was already creating music for its feature film and kids units. Initially hired to run the division were Mark Bauman, executive vice president of radio and video production, and David Beal, head of music labels, publishing and radio operations. Together with National Geographic Channels, Music and Radio launched the Nat Geo Music channel in Italy on October 15, 2007.
In October 2007, the society formed the National Geographic Entertainment division, which included the Music & Radio division, and division president David Beal was promoted to head of Nat Geo Entertainment. In 2009, the division became a full-service record label, Nat Geo Music, with Mat Whittington appointed as president.
Norns
The Norns (Old Norse: "norn", plural: "nornir") in Norse mythology are female beings who rule the destiny of gods and men. They roughly correspond to other controllers of humans' destiny, such as the Fates, elsewhere in European mythology.
In Snorri Sturluson's interpretation of the "Völuspá", Urðr (Wyrd), Verðandi and Skuld, the three most important of the Norns, come out from a hall standing at the Well of Urðr or Well of Fate. They draw water from the well and take sand that lies around it, which they pour over the Yggdrasill tree so that its branches will not rot. These three Norns are described as powerful maiden giantesses (Jotuns) whose arrival from Jötunheimr ended the golden age of the gods. They may be the same as the maidens of Mögþrasir who are described in "Vafþrúðnismál" (see below).
Beside these three famous Norns, there are many others who appear at a person's birth in order to determine his or her future. In the pre-Christian Norse societies, Norns were thought to have visited newborn children. There were both malevolent and benevolent Norns: the former caused all the malevolent and tragic events in the world while the latter were kind and protective goddesses.
The origin of the name "norn" is uncertain; it may derive from a word meaning "to twine", which would refer to their twining the thread of fate. Bek-Pedersen suggests that the word "norn" is related to the Swedish dialect word "norna" ("nyrna"), a verb that means "to secretly communicate". This relates to the perception of norns as shadowy, background figures who only ever really reveal their fateful secrets to men as their fates come to pass.
The name "Urðr" (Old English Wyrd, Weird) means "fate". "Wyrd" and "urðr" are etymological cognates, which does not guarantee that "wyrd" and "urðr" share the same semantic quality of "fate" over time. Both "Urðr" and "Verðandi" are derived from the Old Norse verb "verða", "to be". It is commonly asserted that while "Urðr" derives from the past tense ("that which became or happened"), "Verðandi" derives from the present tense of "verða" ("that which is happening"). "Skuld" is derived from the Old Norse verb "skulu", "need/ought to be/shall be"; its meaning is "that which should become, or that needs to occur". Due to this, it has often been inferred that the three norns are in some way connected with the past, present and future respectively, but it has been disputed that their names really imply a temporal distinction and it has been emphasised that the words do not in themselves denote chronological periods in Old Norse.
There is no clear distinction between norns, fylgjas, hamingjas and valkyries, nor with the generic term dísir. Moreover, artistic license permitted such terms to be used for mortal women in Old Norse poetry. To quote Snorri Sturluson's "Skáldskaparmál" on the various names used for women:
These unclear distinctions among norns and other Germanic female deities are discussed in Bek-Pedersen's book "Norns in Old Norse Mythology."
There are a number of surviving Old Norse sources that relate to the norns. The most important sources are the Prose Edda and the Poetic Edda. The latter contains pagan poetry where the norns are frequently referred to, while the former contains, in addition to pagan poetry, retellings, descriptions and commentaries by the 12th and 13th century Icelandic chieftain and scholar Snorri Sturluson.
A skaldic reference to the norns appears in Hvini's poem in "Ynglingatal" 24 found in "Ynglingasaga" 47, where King Halfdan is put to rest by his men at Borró. This reference brings in the phrase ""norna dómr"" which means "judgment of the nornir". In most cases, when the norns pass judgment, it means death to those who have been judged - in this case, Halfdan. Along with being associated with being bringers of death, Bek-Pedersen suggests that this phrase brings in a quasi-legal aspect to the nature of the norns. This legal association is employed quite frequently within skaldic and eddic sources. This phrase can also be seen as a threat, as death is the final and inevitable decision that the norns can make with regard to human life.
The Poetic Edda is valuable in representing older material in poetry from which Snorri tapped information in the "Prose Edda". Like "Gylfaginning", the "Poetic Edda" mentions the existence of many lesser norns beside the three main norns. Moreover, it also agrees with "Gylfaginning" by telling that they were of several races and that the dwarven norns were the daughters of Dvalin. It also suggests that the three main norns were giantesses (female Jotuns).
"Fáfnismál" contains a discussion between the hero Sigurd and the dragon Fafnir who is dying from a mortal wound from Sigurd. The hero asks Fafnir of many things, among them the nature of the norns. Fafnir explains that they are many and from several races:
It appears from "Völuspá" and "Vafþrúðnismál" that the three main norns were not originally goddesses but giants (Jotuns), and that their arrival ended the early days of bliss for the gods, but that they come for the good of humankind.
"Völuspá" relates that three giants of huge might are reported to have arrived to the gods from Jotunheim:
"Vafþrúðnismál" probably refers to the norns when it talks of maiden giants who arrive to protect the people of earth as protective spirits (hamingjas):
The "Völuspá" contains the names of the three main Norns referring to them as maidens like "Vafþrúðnismál" probably does:
The norns visited each newly born child to allot his or her future, and in "Helgakviða Hundingsbana I", the hero Helgi Hundingsbane has just been born and norns arrive at the homestead:
In "Helgakviða Hundingsbana II", Helgi Hundingsbane blames the norns for the fact that he had to kill Sigrún's father Högni and brother Bragi in order to wed her:
Like Snorri Sturluson stated in "Gylfaginning", people's fate depended on the benevolence or the malevolence of particular norns. In "Reginsmál", the water dwelling dwarf Andvari blames his plight on an evil norn, presumably one of the daughters of Dvalin:
Another instance of Norns being blamed for an undesirable situation appears in "Sigurðarkviða hin skamma", where the valkyrie Brynhild blames malevolent norns for her long yearning for the embrace of Sigurd:
Brynhild's solution was to have Gunnarr and his brothers, the lords of the Burgundians, kill Sigurd and afterwards to commit suicide in order to join Sigurd in the afterlife. Her brother Atli (Attila the Hun) avenged her death by killing the lords of the Burgundians, but since he was married to their sister Guðrún, Atli would soon be killed by her. In "Guðrúnarkviða II", the Norns actively enter the series of events by informing Atli in a dream that his wife would kill him. The description of the dream begins with this stanza:
After having killed both her husband Atli and their sons, Guðrún blames the Norns for her misfortunes, as in "Guðrúnarhvöt", where Guðrún talks of having tried to escape the wrath of the norns by killing herself:
"Guðrúnarhvöt" deals with how Guðrún incited her sons to avenge the cruel death of their sister Svanhild. In "Hamðismál", her sons' expedition to the Gothic king Ermanaric to exact vengeance is fateful. Knowing that he is about to die at the hands of the Goths, her son Sörli talks of the cruelty of the norns:
Since the norns were beings of ultimate power who were working in the dark, it should be no surprise that they could be referred to in charms, as they are by Sigrdrífa in "Sigrdrífumál":
In the part of Snorri Sturluson's "Prose Edda" which is called "Gylfaginning", Gylfi, the king of Sweden, has arrived at Valhalla calling himself Gangleri. There, he receives an education in Norse mythology from what is Odin in the shape of three men. They explain to Gylfi that there are three main norns, but also many others of various races, æsir, elves and dwarves:
The three main norns take water out of the well of Urd and water Yggdrasil:
Snorri furthermore informs the reader that the youngest norn, Skuld, is in effect also a valkyrie, taking part in the selection of warriors from the slain:
Some of the legendary sagas also contain references to the norns. The "Hervarar saga" contains a poem named "Hlöðskviða", where the Gothic king Angantýr defeats a Hunnish invasion led by his Hunnish half-brother Hlöðr. Knowing that his sister, the shieldmaiden Hervör, is one of the casualties, Angantýr looks at his dead brother and laments the cruelty of the norns:
In younger legendary sagas, such as "Norna-Gests þáttr" and "Hrólfs saga kraka", the norns appear to have been synonymous with völvas (witches, female shamans). In "Norna-Gests þáttr", where they arrive at the birth of the hero to shape his destiny, the norns are not described as weaving the web of fate; instead, "norna" appears to be interchangeable with and possibly a synonym of "vala" (völva).
One of the last legendary sagas to be written down, the "Hrólfs saga kraka" talks of the norns simply as evil witches. When the evil half-elven princess Skuld assembles her army to attack Hrólfr Kraki, it contains in addition to undead warriors, elves and norns.
The belief in the norns as bringers of both gain and loss would last beyond Christianization, as testifies the runic inscription N 351 M from the Borgund stave church:
Three women carved on the right panel of Franks Casket, an Anglo-Saxon whalebone chest from the eighth century, have been identified by some scholars as being three norns.
A number of theories have been proposed regarding the norns.
The Germanic Matres and Matrones, female deities depicted on votive objects and altars almost entirely in groups of three and venerated in North-West Europe from the first to the fifth century AD, have been proposed as connected with the later Germanic dísir, valkyries, and norns, which potentially stem from them.
Theories have been proposed that there is no foundation in Norse mythology for the notion that the three main norns should each be associated exclusively with the past, the present, and the future; rather, all three represent "destiny" as it is twined with the flow of time. Moreover, theories have been proposed that the idea that there are three main norns may be due to a late influence from Greek and Roman mythology, where there are also spinning fate goddesses (Moirai and Parcae).
The Swedish death metal band Amon Amarth released an album named "Fate of Norns", containing the title track "Fate of Norns", in 2004.
The Norns are the main characters of the popular anime "Ah! My Goddess".
Jack and Annie meet the Norns on one of their missions in the "Magic Tree House" series.
Norns are present in Philip K. Dick's "Galactic Pot-Healer", as entities keeping a book where the future is already written.
In Neil Gaiman's "American Gods", Norns are shown as three women (one very tall, one average height, the last a dwarf) who assist Shadow in his vigil for Wednesday (Odin) on the ash tree, then stay in a croft nearby; they revive Shadow's dead wife Laura by means of the water from the pit of Urd; and they prophesy to Mr. Town, an associate of Mr. World, that his neck will be broken.
In "Steins;Gate", the Norns are referenced as the titles of missions: "Operation Urd (Urðr), Verthandi (Verðandi) and Skuld."
The Norns are alluded to in 2018's "God of War", the eighth installment in the "God of War" series, developed by Santa Monica Studio and published by Sony Interactive Entertainment (SIE), which began the franchise's foray into the lore of Norse mythology. As the story's protagonist Kratos and his young son, Atreus, set off on a journey through the realm of Midgard, they continuously encounter chests known as Nornir Chests, each of which can be opened by locating three hidden rune-seals and quickly striking all three with the Leviathan Axe. Each of the Nornir Chests contains collectibles that gradually upgrade Kratos' Health and/or Rage meters.
Nasdaq
The Nasdaq Stock Market, also known as Nasdaq or NASDAQ, is an American stock exchange located at One Liberty Plaza in New York City. It is ranked second on the list of stock exchanges by market capitalization of shares traded, behind only the New York Stock Exchange. The exchange platform is owned by Nasdaq, Inc., which also owns the Nasdaq Nordic stock market network and several U.S. stock and options exchanges.
"Nasdaq" was initially an acronym for the National Association of Securities Dealers Automated Quotations.
It was founded in 1971 by the National Association of Securities Dealers (NASD), now known as the Financial Industry Regulatory Authority (FINRA).
On February 8, 1971, the Nasdaq stock market began operations as the world's first electronic stock market. At first, it was merely a "quotation system" and did not provide a way to perform electronic trades. The Nasdaq Stock Market helped lower the bid–ask spread (the difference between the bid price and the ask price of the stock), but was unpopular among brokers as it reduced their profits.
The NASDAQ Stock Market eventually assumed the majority of major trades that had been executed by the over-the-counter (OTC) system of trading, but there are still many securities traded in this fashion. As late as 1987, the Nasdaq exchange was still commonly referred to as "OTC" in media reports and also in the monthly Stock Guides (stock guides and procedures) issued by Standard & Poor's Corporation.
Over the years, the Nasdaq Stock Market became more of a stock market by adding trade and volume reporting and automated trading systems.
In 1981, Nasdaq traded 37% of the U.S. securities markets' total of 21 billion shares. By 1991, Nasdaq's share had grown to 46%.
In 1998, it was the first stock market in the United States to trade online, using the slogan "the stock market for the next hundred years". The Nasdaq Stock Market attracted many companies during the dot-com bubble.
Its main index is the NASDAQ Composite, which has been published since its inception. The QQQ exchange-traded fund tracks the large-cap NASDAQ-100 index, which was introduced in 1985 alongside the NASDAQ Financial-100 Index, which tracks the largest 100 companies in terms of market capitalization.
In 1992, the Nasdaq Stock Market joined with the London Stock Exchange to form the first intercontinental linkage of capital markets.
In 2000, the National Association of Securities Dealers spun off the Nasdaq Stock Market to form a public company.
On March 10, 2000, the NASDAQ Composite stock market index peaked at 5,132.52, but fell to 3,227 by April 17, and, in the following 30 months, fell 78% from its peak.
In a series of sales in 2000 and 2001, FINRA sold its stake in the Nasdaq.
On July 2, 2002, Nasdaq Inc. became a public company via an initial public offering.
In 2006, the status of the Nasdaq Stock Market was changed from a stock market to a licensed national securities exchange.
In 2008, Nasdaq merged with OMX, a leading exchange operator in the Nordic countries, expanded its global footprint, and changed its name to the NASDAQ OMX Group.
To qualify for listing on the exchange, a company must be registered with the United States Securities and Exchange Commission (SEC), must have at least three market makers (financial firms that act as brokers or dealers for specific securities) and must meet minimum requirements for assets, capital, public shares, and shareholders.
In February 2011, in the wake of an announced merger of NYSE Euronext with Deutsche Börse, speculation developed that NASDAQ OMX and Intercontinental Exchange (ICE) could mount a counter-bid of their own for NYSE. NASDAQ OMX could be looking to acquire the American exchange's cash equities business, ICE the derivatives business. At the time, "NYSE Euronext's market value was $9.75 billion. Nasdaq was valued at $5.78 billion, while ICE was valued at $9.45 billion." Late in the month, Nasdaq was reported to be considering asking either ICE or the Chicago Mercantile Exchange to join in what would probably have to be, if it proceeded, an $11–12 billion counterbid.
In December 2005, NASDAQ acquired Instinet for $1.9 billion, retaining the Inet ECN and subsequently selling the agency brokerage business to Silver Lake Partners and Instinet management.
The European Association of Securities Dealers Automatic Quotation System (EASDAQ) was founded as a European equivalent to the Nasdaq Stock Market. It was purchased by NASDAQ in 2001 and became NASDAQ Europe. In 2003, operations were shut down as a result of the burst of the dot-com bubble. In 2007, NASDAQ Europe was revived first as Equiduct, and later that year, it was acquired by Börse Berlin.
On June 18, 2012, Nasdaq OMX became a founding member of the United Nations Sustainable Stock Exchanges Initiative on the eve of the United Nations Conference on Sustainable Development (Rio+20).
In November 2016, chief operating officer Adena Friedman was promoted to chief executive officer, becoming the first woman to run a major exchange in the U.S.
In 2016, Nasdaq earned $272 million in listings-related revenues.
In October 2018, the SEC ruled that the New York Stock Exchange and Nasdaq did not justify the continued price increases when selling market data.
Nasdaq quotes are available at three levels:
The Nasdaq Stock Market sessions, with times in the Eastern Time Zone, are:
4:00 a.m. to 9:30 a.m.: extended-hours trading session (premarket)
9:30 a.m. to 4:00 p.m.: normal trading session
4:00 p.m. to 8:00 p.m.: extended-hours trading session (postmarket)
The Nasdaq Stock Market averages about 253 trading days per year.
The Nasdaq Stock Market has three different market tiers:
New York Stock Exchange
The New York Stock Exchange (NYSE, nicknamed "The Big Board") is an American stock exchange located at 11 Wall Street, Lower Manhattan, New York City, New York. It is by far the world's largest stock exchange by market capitalization of its listed companies at US$30.1 trillion as of February 2018. The average daily trading value was approximately US$169 billion in 2013. The NYSE trading floor is located at 11 Wall Street and is composed of 21 rooms used for the facilitation of trading. An additional trading room, located at 30 Broad Street, was closed in February 2007. The main building and the 11 Wall Street building were designated National Historic Landmarks in 1978.
The NYSE is owned by Intercontinental Exchange, an American holding company that it also lists (). Previously, it was part of NYSE Euronext (NYX), which was formed by the NYSE's 2007 merger with Euronext.
The earliest recorded organization of securities trading in New York among brokers directly dealing with each other can be traced to the Buttonwood Agreement. Previously, securities exchange had been intermediated by the auctioneers, who also conducted more mundane auctions of commodities such as wheat and tobacco. On May 17, 1792, twenty-four brokers signed the Buttonwood Agreement, which set a floor commission rate charged to clients and bound the signers to give preference to the other signers in securities sales. The earliest securities traded were mostly governmental securities such as War Bonds from the Revolutionary War and First Bank of the United States stock, although Bank of New York stock was a non-governmental security traded in the early days. The Bank of North America, along with the First Bank of the United States and the Bank of New York, were the first shares traded on the New York Stock Exchange.
In 1817, the stockbrokers of New York, operating under the Buttonwood Agreement, instituted new reforms and reorganized. After sending a delegation to Philadelphia to observe the organization of their board of brokers, restrictions on manipulative trading were adopted, as well as formal organs of governance. After re-forming as the New York Stock and Exchange Board, the broker organization began renting out space exclusively for securities trading, which previously had been taking place at the Tontine Coffee House. Several locations were used between 1817 and 1865, when the present location was adopted.
The invention of the electrical telegraph consolidated markets and New York's market rose to dominance over Philadelphia after weathering some market panics better than other alternatives. The Open Board of Stock Brokers was established in 1864 as a competitor to the NYSE. With 354 members, the Open Board of Stock Brokers rivaled the NYSE in membership (which had 533) "because it used a more modern, continuous trading system superior to the NYSE’s twice-daily call sessions". The Open Board of Stock Brokers merged with the NYSE in 1869. Robert Wright of "Bloomberg" writes that the merger increased the NYSE's members as well as trading volume, as "several dozen regional exchanges were also competing with the NYSE for customers. Buyers, sellers and dealers all wanted to complete transactions as quickly and cheaply as technologically possible and that meant finding the markets with the most trading, or the greatest liquidity in today’s parlance. Minimizing competition was essential to keep a large number of orders flowing, and the merger helped the NYSE maintain its reputation for providing superior liquidity." The Civil War greatly stimulated speculative securities trading in New York. By 1869, membership had to be capped, and has been sporadically increased since. The latter half of the nineteenth century saw rapid growth in securities trading.
Securities trade in the latter nineteenth and early twentieth centuries was prone to panics and crashes. Government regulation of securities trading was eventually seen as necessary, with arguably the most dramatic changes occurring in the 1930s after a major stock market crash precipitated the Great Depression.
The Stock Exchange Luncheon Club was situated on the seventh floor from 1898 until its closure in 2006.
The main building, located at 18 Broad Street, between the corners of Wall Street and Exchange Place, was designated a National Historic Landmark in 1978, as was the 11 Wall Street building.
On April 21, 2005, the NYSE announced its plans to merge with Archipelago in a deal intended to reorganize the NYSE as a publicly traded company. NYSE's governing board voted to merge with rival Archipelago on December 6, 2005, and became a for-profit, public company. It began trading under the name NYSE Group on March 8, 2006. On April 4, 2007, the NYSE Group completed its merger with Euronext, the European combined stock market, thus forming NYSE Euronext, the first transatlantic stock exchange.
Wall Street is the leading US money center for international financial activities and the foremost US location for the conduct of wholesale financial services. "It comprises a matrix of wholesale financial sectors, financial markets, financial institutions, and financial industry firms" (Robert, 2002). The principal sectors are securities industry, commercial banking, asset management, and insurance.
Prior to the acquisition of NYSE Euronext by the ICE in 2013, Marsh Carter was the Chairman of the NYSE and the CEO was Duncan Niederauer. Currently, the chairman is Jeffrey Sprecher. In 2016, NYSE owner Intercontinental Exchange Inc. earned $419 million in listings-related revenues.
The exchange was closed shortly after the beginning of World War I (July 31, 1914), but it partially re-opened on November 28 of that year in order to help the war effort by trading bonds, and completely reopened for stock trading in mid-December.
On September 16, 1920, a bomb exploded on Wall Street outside the NYSE building, killing 33 people and injuring more than 400. The perpetrators were never found. The NYSE building and some buildings nearby, such as the JP Morgan building, still have marks on their façades caused by the bombing.
The Black Thursday crash of the Exchange on October 24, 1929, and the sell-off panic which started on Black Tuesday, October 29, are often blamed for precipitating the Great Depression. In an effort to restore investor confidence, the Exchange unveiled a fifteen-point program aimed at upgrading protection for the investing public on October 31, 1938.
On October 1, 1934, the exchange was registered as a national securities exchange with the U.S. Securities and Exchange Commission, with a president and a thirty-three-member board. On February 18, 1971, the non-profit corporation was formed, and the number of board members was reduced to twenty-five.
One of Abbie Hoffman's well-known publicity stunts took place in 1967, when he led members of the Yippie movement to the Exchange's gallery. The provocateurs hurled fistfuls of dollars toward the trading floor below. Some traders booed, and some laughed and waved. Three months later the stock exchange enclosed the gallery with bulletproof glass. Hoffman wrote a decade later, "We didn't call the press; at that time we really had no notion of anything called a media event."
On October 19, 1987, the Dow Jones Industrial Average (DJIA) dropped 508 points, a 22.6% loss in a single day, the second-biggest one-day drop the exchange had experienced. Black Monday was followed by Terrible Tuesday, a day in which the Exchange's systems did not perform well and some people had difficulty completing their trades.
Subsequently, there was another major drop for the Dow on October 13, 1989—the Mini-Crash of 1989. The crash was apparently caused by a reaction to a news story of a $6.75 billion leveraged buyout deal for UAL Corporation, the parent company of United Airlines, which broke down. When the UAL deal fell through, it helped trigger the collapse of the junk bond market causing the Dow to fall 190.58 points, or 6.91 percent.
Similarly, there was a panic in the financial world during the year of 1997: the Asian financial crisis. Like many foreign markets, the Dow suffered a steep decline, dropping 7.18% in value (554.26 points) on October 27, 1997, in what later became known as the 1997 Mini-Crash, but from which the DJIA recovered quickly. This was the first time that the "circuit breaker" rule had operated.
On January 26, 2000, an altercation during filming of the music video for Rage Against the Machine's "Sleep Now in the Fire", directed by Michael Moore, caused the doors of the exchange to be closed and the band to be escorted from the site by security after the members attempted to gain entry into the exchange.
In the aftermath of the September 11 attacks, the NYSE was closed for four trading sessions, resuming on Monday, September 17, one of the rare times the NYSE was closed for more than one session and only the third time since March 1933. On the first day, the NYSE suffered a 7.1% drop in value (684 points); after a week, it dropped by 14% (1,370 points). An estimated $1.4 trillion was lost within five days of trading. The NYSE was only five blocks from Ground Zero.
On May 6, 2010, the Dow Jones Industrial Average posted its largest intraday percentage drop since the crash on October 19, 1987, with a 998-point loss later being called the 2010 Flash Crash (as the drop occurred in minutes before rebounding). The SEC and CFTC published a report on the event, although it did not come to a conclusion as to the cause. The regulators found no evidence that the fall was caused by erroneous ("fat finger") orders.
On October 29, 2012, the stock exchange was shut down for two days due to Hurricane Sandy. The last time the stock exchange was closed due to weather for a full two days was on March 12 and 13, 1888.
On May 1, 2014, the stock exchange was fined $4.5 million by the Securities and Exchange Commission to settle charges that it had violated market rules.
On August 14, 2014, Berkshire Hathaway's A Class shares, the highest priced shares on the NYSE, hit $200,000 a share for the first time.
On July 8, 2015, technical issues affected the stock exchange, halting trading at 11:32 am ET. The NYSE reassured stock traders that the outage was "not a result of a cyber breach", and the Department of Homeland Security confirmed that there was "no sign of malicious activity". Trading eventually resumed at 3:10 pm ET the same day.
On May 25, 2018, Stacey Cunningham, the NYSE's chief operating officer, became the Big Board's 67th president, succeeding Thomas Farley. She is the first female leader in the exchange's 226-year history.
The NYSE plans to temporarily move to all-electronic trading on March 23, 2020, due to the COVID-19 pandemic in the United States.
The New York Stock Exchange is closed on New Year's Day, Martin Luther King, Jr. Day, Washington's Birthday, Good Friday, Memorial Day, Fourth of July, Labor Day, Thanksgiving, and Christmas. When those holidays occur on a weekend, the holiday is observed on the closest weekday. In addition, the Stock Exchange closes early on the day before Independence Day, the day after Thanksgiving, and Christmas Eve. The NYSE averages about 253 trading days per year.
The New York Stock Exchange (sometimes referred to as "the Big Board") provides a means for buyers and sellers to trade shares of stock in companies registered for public trading. The NYSE is open for trading Monday through Friday from 9:30 am – 4:00 pm ET, with the exception of holidays declared by the Exchange in advance.
The NYSE trades in a continuous auction format, where traders can execute stock transactions on behalf of investors. They gather around the appropriate post where a specialist broker, who is employed by a NYSE member firm (that is, he/she is not an employee of the New York Stock Exchange), acts as an auctioneer in an open outcry auction market environment to bring buyers and sellers together and to manage the actual auction. Specialists do on occasion (approximately 10% of the time) facilitate the trades by committing their own capital and, as a matter of course, disseminate information to the crowd that helps to bring buyers and sellers together. The auction process moved toward automation in 1995 through the use of wireless handheld computers (HHC). The system enabled traders to receive and execute orders electronically via wireless transmission. On September 25, 1995, NYSE member Michael Einersen, who designed and developed this system, executed 1,000 shares of IBM through this HHC, ending a 203-year process of paper transactions and ushering in an era of automated trading.
As of January 24, 2007, all NYSE stocks can be traded via its electronic hybrid market (except for a small group of very high-priced stocks). Customers can now send orders for immediate electronic execution, or route orders to the floor for trade in the auction market. In the first three months of 2007, in excess of 82% of all order volume was delivered to the floor electronically. NYSE works with US regulators such as the SEC and CFTC to coordinate risk management measures in the electronic trading environment through the implementation of mechanisms like circuit breakers and liquidity replenishment points.
Until 2005, the right to directly trade shares on the exchange was conferred upon owners of the 1,366 "seats". The term comes from the fact that up until the 1870s NYSE members sat in chairs to trade. In 1868, the number of seats was fixed at 533, and this number was increased several times over the years. In 1953, the number of seats was set at 1,366. These seats were a sought-after commodity, as they conferred the ability to directly trade stock on the NYSE, and seat holders were commonly referred to as members of the NYSE. The Barnes family is the only known lineage to have five generations of NYSE members: Winthrop H. Barnes (admitted 1894), Richard W.P. Barnes (admitted 1926), Richard S. Barnes (admitted 1951), Robert H. Barnes (admitted 1972), and Derek J. Barnes (admitted 2003). Seat prices varied widely over the years, generally falling during recessions and rising during economic expansions. The most expensive inflation-adjusted seat was sold in 1929 for $625,000, which today would be over six million dollars. In recent times, seats sold for as much as $4 million in the late 1990s and as little as $1 million in 2001. In 2005, seat prices shot up to $3.25 million as the exchange entered into an agreement to merge with Archipelago and became a for-profit, publicly traded company. Seat owners received $500,000 in cash per seat and 77,000 shares of the newly formed corporation. The NYSE now sells one-year licenses to trade directly on the exchange. Licenses for floor trading are available for $40,000, and a license for bond trading is available for as little as $1,000 as of 2010. Neither license is resellable, but both may be transferred during a change of ownership of a corporation holding a trading license.
Following the Black Monday market crash in 1987, NYSE imposed trading curbs to reduce market volatility and massive panic sell-offs. Following the 2011 rule change, at the start of each trading day, the NYSE sets three circuit breaker levels at levels of 7% (Level 1), 13% (Level 2), and 20% (Level 3) of the average closing price of the S&P 500 for the preceding trading day. Level 1 and Level 2 declines result in a 15-minute trading halt unless they occur after 3:25 pm, when no trading halts apply. A Level 3 decline results in trading being suspended for the remainder of the day. (The biggest one-day decline in the S&P 500 since 1987 was the 11.98% drop on March 16, 2020.)
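To make the mechanics concrete, here is a minimal illustrative Python sketch of how those three thresholds could be computed and applied. The function names, the use of the prior day's close as the sole input, and the omission of the rule that each level can trigger only once per day are simplifying assumptions of this example, not the NYSE's actual implementation.

```python
from datetime import time

def breaker_levels(prior_close: float) -> dict:
    """Compute the three circuit-breaker trigger prices from the prior
    trading day's S&P 500 closing value (7%, 13%, and 20% declines)."""
    return {level: prior_close * (1 - pct)
            for level, pct in ((1, 0.07), (2, 0.13), (3, 0.20))}

def halt_action(prior_close: float, current: float, now: time) -> str:
    """Map an intraday S&P 500 value to the halt prescribed by the rule:
    Level 1/2 declines -> 15-minute halt (unless after 3:25 pm ET);
    a Level 3 decline -> trading suspended for the rest of the day."""
    levels = breaker_levels(prior_close)
    if current <= levels[3]:
        return "suspend trading for the remainder of the day"
    if current <= levels[1]:  # Level 1 or Level 2 breached
        if now >= time(15, 25):
            return "no halt (decline occurred after 3:25 pm ET)"
        return "halt trading for 15 minutes"
    return "no halt"

# Example: a prior close of 2711.02 gives Level 1/2/3 triggers of
# roughly 2521.25, 2358.59, and 2168.82 respectively.
print(breaker_levels(2711.02))
print(halt_action(2711.02, 2300.0, time(13, 0)))  # Level 2 breach -> 15-minute halt
```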
In the mid-1960s, the NYSE Composite Index (NYSE: NYA) was created, with a base value of 50 points equal to the 1965 yearly close. This was done to reflect the value of all stocks trading at the exchange instead of just the 30 stocks included in the Dow Jones Industrial Average. To raise the profile of the composite index, in 2003, the NYSE set its new base value of 5,000 points equal to the 2002 yearly close. Its close at the end of 2013 was 10,400.32.
In October 2008, NYSE Euronext completed acquisition of the American Stock Exchange (AMEX) for $260 million in stock.
On February 15, 2011, NYSE and Deutsche Börse announced their merger to form a new company, as yet unnamed, wherein Deutsche Börse shareholders would have 60% ownership of the new entity, and NYSE Euronext shareholders would have 40%.
On February 1, 2012, the European Commission blocked the merger of NYSE with Deutsche Börse, after commissioner Joaquín Almunia stated that the merger "would have led to a near-monopoly in European financial derivatives worldwide". Instead, Deutsche Börse and NYSE would have to sell either their Eurex derivatives or LIFFE shares in order to not create a monopoly. On February 2, 2012, NYSE Euronext and Deutsche Börse agreed to scrap the merger.
In April 2011, Intercontinental Exchange (ICE), an American futures exchange, and NASDAQ OMX Group had together made an unsolicited proposal to buy NYSE Euronext for approximately $11 billion, a deal in which NASDAQ would have taken control of the stock exchanges. NYSE Euronext rejected this offer twice, and the bid was finally abandoned after the United States Department of Justice indicated its intention to block the deal due to antitrust concerns.
In December 2012, ICE proposed to buy NYSE Euronext in a stock swap with a valuation of $8 billion. NYSE Euronext shareholders would receive either $33.12 in cash, or $11.27 in cash and approximately a sixth of a share of ICE. Jeffrey Sprecher, the chairman and CEO of ICE, would retain those positions, while four members of the NYSE board of directors would be added to the ICE board.
The NYSE's opening and closing bells mark the beginning and the end of each trading day. The opening bell is rung at 9:30 am ET to mark the start of the day's trading session. At 4 pm ET the closing bell is rung and trading for the day stops. There are bells located in each of the four main sections of the NYSE that all ring at the same time once a button is pressed. There are three buttons that control the bells, located on the control panel behind the podium which overlooks the trading floor. The main bell, which is rung at the beginning and end of the trading day, is controlled by a green button. The second button, colored orange, activates a single-stroke bell that is used to signal a moment of silence. A third, red button controls a backup bell which is used in case the main bell fails to ring.
The signal to start and stop trading was not always a bell. The original signal was a gavel (which is still in use today along with the bell), but during the late 1800s, the NYSE decided to switch the gavel for a gong to signal the day's beginning and end. After the NYSE changed to its present location at 18 Broad Street in 1903, the gong was switched to the bell format that is currently being used.
A common sight today is the highly publicized events in which a celebrity or executive from a corporation stands behind the NYSE podium and pushes the button that signals the bells to ring. Due to the amount of coverage that the opening/closing bells receive, many companies coordinate new product launches and other marketing-related events to start on the same day as when the company's representatives ring the bell. It was only in 1995 that the NYSE began having special guests ring the bells on a regular basis; prior to that, ringing the bells was usually the responsibility of the exchange's floor managers.
Many of the people who ring the bell are business executives whose companies trade on the exchange. However, there have also been many famous people from outside the world of business that have rung the bell. Athletes such as Joe DiMaggio of the New York Yankees and Olympic swimming champion Michael Phelps, entertainers such as rapper Snoop Dogg, members of ESPN’s College GameDay crew, singer and actress Liza Minnelli and members of the band Kiss, and politicians such as Mayor of New York City Rudy Giuliani and President of South Africa Nelson Mandela have all had the honor of ringing the bell. Two United Nations Secretaries General have also rung the bell. On April 27, 2006, Secretary-General Kofi Annan rang the opening bell to launch the United Nations Principles for Responsible Investment. On July 24, 2013, Secretary-General Ban Ki-moon rang the closing bell to celebrate the NYSE joining the United Nations Sustainable Stock Exchanges Initiative.
In addition, there have been many bell-ringers who are famous for heroic deeds, such as members of the New York police and fire departments following the events of 9/11, members of the United States Armed Forces serving overseas, and participants in various charitable organizations.
There have also been several fictional characters that have rung the bell, including Mickey Mouse, the Pink Panther, Mr. Potato Head, the Aflac Duck, Gene of The Emoji Movie, and Darth Vader. | https://en.wikipedia.org/wiki?curid=21560 |
Nanoengineering
Nanoengineering is the practice of engineering on the nanoscale. It derives its name from the nanometre, a unit of measurement equalling one billionth of a meter.
Nanoengineering is largely a synonym for nanotechnology, but emphasizes the engineering rather than the pure science aspects of the field.
The first nanoengineering program was started at the University of Toronto within the Engineering Science program as one of the options of study in the final years. In 2003, the Lund Institute of Technology started a program in Nanoengineering. In 2004, the College of Nanoscale Science and Engineering at SUNY Polytechnic Institute was established on the campus of the University at Albany. In 2005, the University of Waterloo established a unique program which offers a full degree in Nanotechnology Engineering. Louisiana Tech University started the first program in the U.S. in 2005. In 2006, the University of Duisburg-Essen started a Bachelor and a Master program in NanoEngineering. Unlike these early offerings, which were options or programs within broader departments, the first dedicated NanoEngineering Department in the world, offering both undergraduate and graduate degrees, was established by the University of California, San Diego in 2007.
In 2009, the University of Toronto began offering all Options of study in Engineering Science as degrees, bringing the second nanoengineering degree to Canada. Rice University established in 2016 a Department of Materials Science and NanoEngineering (MSNE).
DTU Nanotech - the Department of Micro- and Nanotechnology - is a department at the Technical University of Denmark established in 1990.
In 2013, Wayne State University began offering a Nanoengineering Undergraduate Certificate Program, which is funded by a Nanoengineering Undergraduate Education (NUE) grant from the National Science Foundation. The primary goal is to offer specialized undergraduate training in nanotechnology. Other goals are: 1) to teach emerging technologies at the undergraduate level, 2) to train a new adaptive workforce, and 3) to retrain working engineers and professionals. | https://en.wikipedia.org/wiki?curid=21561 |
NP (complexity)
In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine.
An equivalent definition of NP is the set of decision problems "solvable" in polynomial time by a non-deterministic Turing machine. This definition is the basis for the abbreviation NP: "nondeterministic, polynomial time". These two definitions are equivalent because the algorithm based on the non-deterministic Turing machine consists of two phases: the first consists of a guess about the solution, which is generated in a non-deterministic way, while the second consists of a deterministic algorithm that verifies whether the guess is a solution to the problem.
Decision problems are assigned complexity classes (such as NP) based on the fastest known algorithms. Therefore, decision problems may change classes if faster algorithms are discovered.
It is easy to see that the complexity class P (all problems solvable, deterministically, in polynomial time) is contained in NP (problems where solutions can be verified in polynomial time), because if a problem is solvable in polynomial time, then a solution is also verifiable in polynomial time by simply solving the problem. But NP contains many more problems, the hardest of which are called NP-complete problems. An algorithm solving such a problem in polynomial time would also be able to solve any other NP problem in polynomial time. The most important open question, the P versus NP ("P = NP?") problem, asks whether polynomial-time algorithms exist for solving NP-complete problems and, by corollary, all NP problems. It is widely believed that this is not the case.
The complexity class NP is related to the complexity class co-NP for which the answer "no" can be verified in polynomial time. Whether or not NP = co-NP is another outstanding question in complexity theory.
The complexity class NP can be defined in terms of NTIME as follows:
$$\mathrm{NP} = \bigcup_{k \in \mathbb{N}} \mathrm{NTIME}(n^k),$$
where $\mathrm{NTIME}(n^k)$ is the set of decision problems that can be solved by a non-deterministic Turing machine in $O(n^k)$ time.
Alternatively, NP can be defined using deterministic Turing machines as verifiers. A language "L" is in NP if and only if there exist polynomials "p" and "q", and a deterministic Turing machine "M", such that: for all "x" and "y", the machine "M" runs in time "p"(|"x"|) on input ("x", "y"); for all "x" in "L", there exists a string "y" of length at most "q"(|"x"|) such that "M"("x", "y") = 1; and for all "x" not in "L" and all strings "y" of length at most "q"(|"x"|), "M"("x", "y") = 0.
Many computer science problems are contained in NP, like decision versions of many search and optimization problems.
In order to explain the verifier-based definition of NP, consider the subset sum problem:
Assume that we are given some integers, {−7, −3, −2, 5, 8}, and we wish to know whether some of these integers sum up to zero. Here the answer is "yes", since the integers {−3, −2, 5} correspond to the sum (−3) + (−2) + 5 = 0. The task of deciding whether such a subset with zero sum exists is called the "subset sum problem".
To answer whether some of the integers add to zero, we can create an algorithm that obtains all the possible subsets. As the number of integers that we feed into the algorithm becomes larger, both the number of subsets and the computation time grow exponentially.
But notice that if we are given a particular subset, we can "efficiently verify" whether the subset sum is zero, by summing the integers of the subset. If the sum is zero, that subset is a "proof" or witness that the answer is "yes". An algorithm that verifies whether a given subset has sum zero is a "verifier". Clearly, summing the integers of a subset can be done in polynomial time, and the subset sum problem is therefore in NP.
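As an illustration, the following Python sketch contrasts the cheap verifier with the exponential brute-force search; the function names and the input encoding are assumptions made for this example, not part of the formal definition.

```python
from itertools import chain, combinations

def verify_zero_sum(subset):
    """Polynomial-time verifier: accept iff the witness (a non-empty
    subset of the input integers) sums to zero."""
    return len(subset) > 0 and sum(subset) == 0

def brute_force_zero_sum(numbers):
    """Exponential-time solver: try every non-empty subset.
    For n integers this examines 2**n - 1 candidate witnesses."""
    all_subsets = chain.from_iterable(
        combinations(numbers, r) for r in range(1, len(numbers) + 1))
    return any(verify_zero_sum(s) for s in all_subsets)

print(verify_zero_sum((-3, -2, 5)))              # True: a valid witness
print(brute_force_zero_sum([-7, -3, -2, 5, 8]))  # True, but exponential work
```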
The above example can be generalized for any decision problem. Given any instance "I" of problem $\Pi$ and witness "W", if there exists a "verifier" "V" so that, given the ordered pair ("I", "W") as input, "V" returns "yes" in polynomial time if the witness proves that the answer is "yes" and "no" in polynomial time otherwise, then $\Pi$ is in NP.
The "no"-answer version of this problem is stated as: "given a finite set of integers, does every non-empty subset have a nonzero sum?". The verifier-based definition of NP does "not" require an efficient verifier for the "no"-answers. The class of problems with such verifiers for the "no"-answers is called co-NP. In fact, it is an open question whether all problems in NP also have verifiers for the "no"-answers and thus are in co-NP.
In some literature the verifier is called the "certifier" and the witness the "certificate".
Equivalent to the verifier-based definition is the following characterization: NP is the class of decision problems solvable by a non-deterministic Turing machine that runs in polynomial time. That is to say, a decision problem $Q$ is in NP whenever $Q$ is recognized by some polynomial-time non-deterministic Turing machine $M$ with an existential acceptance condition, meaning that $w \in Q$ if and only if some computation path of $M(w)$ leads to an accepting state. This definition is equivalent to the verifier-based definition because a non-deterministic Turing machine could solve an NP problem in polynomial time by non-deterministically selecting a certificate and running the verifier on the certificate. Similarly, if such a machine exists, then a polynomial-time verifier can naturally be constructed from it.
In this light, we can define co-NP dually as the class of decision problems recognizable by polynomial-time non-deterministic Turing machines with an existential rejection condition. Since an existential rejection condition is exactly the same thing as a universal acceptance condition, we can understand the "NP vs. co-NP" question as asking whether the existential and universal acceptance conditions have the same expressive power for the class of polynomial-time non-deterministic Turing machines.
NP is closed under union, intersection, concatenation, Kleene star and reversal. It is not known whether NP is closed under complement (this question is the so-called "NP versus co-NP" question).
Because of the many important problems in this class, there have been extensive efforts to find polynomial-time algorithms for problems in NP. However, there remain a large number of problems in NP that defy such attempts, seeming to require super-polynomial time. Whether these problems are not decidable in polynomial time is one of the greatest open questions in computer science (see P versus NP ("P=NP") problem for an in-depth discussion).
An important notion in this context is the set of NP-complete decision problems, which is a subset of NP and might be informally described as the "hardest" problems in NP. If there is a polynomial-time algorithm for even "one" of them, then there is a polynomial-time algorithm for "all" the problems in NP. Because of this, and because dedicated research has failed to find a polynomial algorithm for any NP-complete problem, once a problem has been proven to be NP-complete this is widely regarded as a sign that a polynomial algorithm for this problem is unlikely to exist.
However, in practical uses, instead of spending computational resources looking for an optimal solution, a good enough (but potentially suboptimal) solution may often be found in polynomial time. Also, the real life applications of some problems are easier than their theoretical equivalents.
The two definitions of NP as the class of problems solvable by a nondeterministic Turing machine (TM) in polynomial time and the class of problems verifiable by a deterministic Turing machine in polynomial time are equivalent. The proof is described by many textbooks, for example Sipser's "Introduction to the Theory of Computation", section 7.3.
To show this, first suppose we have a deterministic verifier. A nondeterministic machine can simply nondeterministically run the verifier on all possible proof strings (this requires only polynomially many steps because it can nondeterministically choose the next character in the proof string in each step, and the length of the proof string must be polynomially bounded). If any proof is valid, some path will accept; if no proof is valid, the string is not in the language and it will reject.
Conversely, suppose we have a nondeterministic TM called A accepting a given language L. At each of its polynomially many steps, the machine's computation tree branches in at most a finite number of directions. There must be at least one accepting path, and the string describing this path is the proof supplied to the verifier. The verifier can then deterministically simulate A, following only the accepting path, and verifying that it accepts at the end. If A rejects the input, there is no accepting path, and the verifier will always reject.
NP contains all problems in P, since one can verify any instance of the problem by simply ignoring the proof and solving it. NP is contained in PSPACE—to show this, it suffices to construct a PSPACE machine that loops over all proof strings and feeds each one to a polynomial-time verifier. Since a polynomial-time machine can only read polynomially many bits, it cannot use more than polynomial space, nor can it read a proof string occupying more than polynomial space (so we do not have to consider proofs longer than this). NP is also contained in EXPTIME, since the same algorithm operates in exponential time.
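These containments can be made concrete with a small sketch: a deterministic procedure that enumerates every candidate proof string up to the polynomial length bound and feeds each one to the verifier takes exponential time overall, yet holds only one polynomially sized candidate at a time. The helper names and the toy verifier below are assumptions made for this illustration.

```python
from itertools import product

def exists_accepting_proof(x, verifier, proof_len_bound, alphabet="01"):
    """Deterministic simulation illustrating NP within EXPTIME (and PSPACE):
    loop over all proof strings up to the polynomial bound and accept
    iff the polynomial-time verifier accepts some (x, proof) pair.
    Time is exponential in the bound; space stays polynomial, since
    only one candidate proof is held at a time."""
    for length in range(proof_len_bound(len(x)) + 1):
        for proof in product(alphabet, repeat=length):
            if verifier(x, "".join(proof)):
                return True
    return False

# Toy instance: the "language" of strings containing a '1', with the
# witness being the index (written in binary) of some '1' bit.
def toy_verifier(x, proof):
    if not proof:
        return False
    i = int(proof, 2)
    return i < len(x) and x[i] == "1"

print(exists_accepting_proof("00100", toy_verifier, lambda n: n))  # True
```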
co-NP contains those problems which have a simple proof for "no" instances, sometimes called counterexamples. For example, primality testing trivially lies in co-NP, since one can refute the primality of an integer by merely supplying a nontrivial factor. NP and co-NP together form the first level in the polynomial hierarchy, higher only than P.
NP is defined using only deterministic machines. If we permit the verifier to be probabilistic (this however, is not necessarily a BPP machine), we get the class MA solvable using an Arthur-Merlin protocol with no communication from Arthur to Merlin.
NP is a class of decision problems; the analogous class of function problems is FNP.
The only known strict inclusions come from the time hierarchy theorem and the space hierarchy theorem; respectively, they are $\mathrm{NP} \subsetneq \mathrm{NEXPTIME}$ and $\mathrm{NP} \subsetneq \mathrm{EXPSPACE}$.
In terms of descriptive complexity theory, NP corresponds precisely to the set of languages definable by existential second-order logic (Fagin's theorem).
NP can be seen as a very simple type of interactive proof system, where the prover comes up with the proof certificate and the verifier is a deterministic polynomial-time machine that checks it. It is complete because the right proof string will make it accept if there is one, and it is sound because the verifier cannot accept if there is no acceptable proof string.
A major result of complexity theory is that NP can be characterized as the problems solvable by probabilistically checkable proofs where the verifier uses O(log "n") random bits and examines only a constant number of bits of the proof string (the class PCP(log "n", 1)). More informally, this means that the NP verifier described above can be replaced with one that just "spot-checks" a few places in the proof string, and using a limited number of coin flips can determine the correct answer with high probability. This allows several results about the hardness of approximation algorithms to be proven.
This is a list of some problems that are in NP:
All problems in P, since $\mathrm{P} \subseteq \mathrm{NP}$: given a certificate for a problem in P, we can ignore the certificate and just solve the problem in polynomial time.
The decision version of the travelling salesman problem is in NP. Given an input matrix of distances between "n" cities, the problem is to determine if there is a route visiting all cities with total distance less than "k".
A proof can simply be the list of the cities in the order visited. Verification can then clearly be done in polynomial time: it simply adds the matrix entries corresponding to the legs between consecutive cities and compares the total with "k".
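A minimal Python sketch of such a verifier follows, assuming the instance is given as a distance matrix; the representation and the function name are illustrative choices, not part of the formal definition.

```python
def verify_tour(dist, tour, k):
    """Polynomial-time verifier for the decision version of TSP:
    accept iff `tour` visits each of the n cities exactly once and
    the total distance of the closed route is less than k."""
    n = len(dist)
    if sorted(tour) != list(range(n)):      # every city exactly once
        return False
    total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total < k

dist = [[0, 2, 9],   # symmetric 3-city distance matrix
        [2, 0, 6],
        [9, 6, 0]]
print(verify_tour(dist, [0, 1, 2], 18))  # True: 2 + 6 + 9 = 17 < 18
print(verify_tour(dist, [0, 1, 2], 17))  # False: 17 is not < 17
```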
A non-deterministic Turing machine can find such a route as follows: at each city it "guesses" the next city to visit, repeating until every city has been visited, and then it checks in polynomial time whether the total distance of the guessed route is less than "k".
One can think of each guess as "forking" a new copy of the Turing machine to follow each of the possible paths forward, and if at least one machine finds a route of distance less than "k", that machine accepts the input. (Equivalently, this can be thought of as a single Turing machine that always guesses correctly.)
A binary search on the range of possible distances can convert the decision version of Traveling Salesman to the optimization version, by calling the decision version repeatedly (a polynomial number of times).
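Assuming a polynomial-time decision oracle were available, the reduction could be sketched as follows in Python; the oracle interface, the restriction to integer distances, and the helper name are assumptions of this illustration.

```python
def shortest_tour_length(decide, lo, hi):
    """Binary-search the optimal tour length using a decision oracle.
    `decide(k)` answers: is there a tour of total distance < k?
    Integer distances are assumed; returns the minimum achievable
    length using O(log(hi - lo)) oracle calls."""
    # Invariant: no tour has length < lo; some tour has length < hi.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if decide(mid):
            hi = mid          # a tour shorter than mid exists
        else:
            lo = mid          # every tour has length >= mid
    return lo                 # lo = hi - 1, and some tour has length < hi

# Toy oracle for an instance whose optimal tour length is 17:
decide = lambda k: 17 < k
print(shortest_tour_length(decide, 0, 100))  # 17
```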
The decision problem version of the integer factorization problem: given integers "n" and "k", is there a factor "f" with 1 < "f" < "k" and "f" dividing "n"?
The subgraph isomorphism problem of determining whether a graph "G" contains a subgraph that is isomorphic to a graph "H".
The boolean satisfiability problem, where we want to know whether or not a certain formula in propositional logic with boolean variables is true for some value of the variables. | https://en.wikipedia.org/wiki?curid=21562 |
Noam Chomsky
Avram Noam Chomsky (born December 7, 1928) is an American linguist, philosopher, cognitive scientist, historian, social critic, and political activist. Sometimes called "the father of modern linguistics", Chomsky is also a major figure in analytic philosophy and one of the founders of the field of cognitive science. He holds a joint appointment as Institute Professor Emeritus at the Massachusetts Institute of Technology (MIT) and Laureate Professor at the University of Arizona, and is the author of more than 100 books on topics such as linguistics, war, politics, and mass media. Ideologically, he aligns with anarcho-syndicalism and libertarian socialism.
Born to Ashkenazi Jewish immigrants in Philadelphia, Chomsky developed an early interest in anarchism from alternative bookstores in New York City. He studied at the University of Pennsylvania. During his postgraduate work in the Harvard Society of Fellows, Chomsky developed the theory of transformational grammar for which he earned his doctorate in 1955. That year he began teaching at MIT, and in 1957 emerged as a significant figure in linguistics with his landmark work "Syntactic Structures", which played a major role in remodeling the study of language. From 1958 to 1959 Chomsky was a National Science Foundation fellow at the Institute for Advanced Study. He created or co-created the universal grammar theory, the generative grammar theory, the Chomsky hierarchy, and the minimalist program. Chomsky also played a pivotal role in the decline of linguistic behaviorism, and was particularly critical of the work of B. F. Skinner.
An outspoken opponent of U.S. involvement in the Vietnam War, which he saw as an act of American imperialism, in 1967 Chomsky rose to national attention for his antiwar essay "The Responsibility of Intellectuals". Associated with the New Left, he was arrested multiple times for his activism and placed on President Richard Nixon's Enemies List. While expanding his work in linguistics over subsequent decades, he also became involved in the linguistics wars. In collaboration with Edward S. Herman, Chomsky later articulated the propaganda model of media criticism in "Manufacturing Consent" and worked to expose the Indonesian occupation of East Timor. His defense of freedom of speech, including Holocaust denial, generated significant controversy in the Faurisson affair of the 1980s. Since retiring from MIT, he has continued his vocal political activism, including opposing the 2003 invasion of Iraq and supporting the Occupy movement. Chomsky began teaching at the University of Arizona in 2017.
One of the most cited scholars alive, Chomsky has influenced a broad array of academic fields. He is widely recognized as having helped to spark the cognitive revolution in the human sciences, contributing to the development of a new cognitivistic framework for the study of language and the mind. In addition to his continued scholarship, he remains a leading critic of U.S. foreign policy, neoliberalism and contemporary state capitalism, the Israeli–Palestinian conflict, and mainstream news media. His ideas have proven highly influential in the anti-capitalist and anti-imperialist movements, but have also drawn criticism, with some accusing Chomsky of anti-Americanism.
Avram Noam Chomsky was born on December 7, 1928, in the East Oak Lane neighborhood of Philadelphia, Pennsylvania. His parents, Ze'ev "William" Chomsky and Elsie Simonofsky, were Jewish immigrants. William had fled the Russian Empire in 1913 to escape conscription and worked in Baltimore sweatshops and Hebrew elementary schools before attending university. After moving to Philadelphia, William became principal of the Congregation Mikveh Israel religious school and joined the Gratz College faculty. He placed great emphasis on educating people so that they would be "well integrated, free and independent in their thinking, concerned about improving and enhancing the world, and eager to participate in making life more meaningful and worthwhile for all", a mission that shaped and was subsequently adopted by his son. Elsie was a teacher and activist born in Belarus. They met at Mikveh Israel, where they both worked.
Noam was the Chomskys' first child. His younger brother, David Eli Chomsky, was born five years later, in 1934. The brothers were close, though David was more easygoing while Noam could be very competitive. Chomsky and his brother were raised Jewish, being taught Hebrew and regularly discussing the political theories of Zionism; the family was particularly influenced by the Left Zionist writings of Ahad Ha'am. Chomsky faced antisemitism as a child, particularly from Philadelphia's Irish and German communities.
Chomsky attended the independent, Deweyite Oak Lane Country Day School and Philadelphia's Central High School, where he excelled academically and joined various clubs and societies, but was troubled by the school's hierarchical and regimented teaching methods. He also attended Hebrew High School at Gratz College, where his father taught.
Chomsky has described his parents as "normal Roosevelt Democrats" with center-left politics, but other relatives involved in the International Ladies' Garment Workers' Union exposed him to socialism and far-left politics. He was substantially influenced by his uncle and the Jewish leftists who frequented his New York City newspaper stand to debate current affairs. Chomsky frequented left-wing and anarchist bookstores when visiting his uncle in the city, voraciously reading political literature. He wrote his first article at age 10 on the spread of fascism following the fall of Barcelona during the Spanish Civil War and, from the age of 12 or 13, identified with anarchist politics. He later described his discovery of anarchism as "a lucky accident" that made him critical of Stalinism and other forms of Marxism–Leninism.
In 1945, aged 16, Chomsky began a general program of study at the University of Pennsylvania, where he explored philosophy, logic, and languages and developed a primary interest in learning Arabic. Living at home, he funded his undergraduate degree by teaching Hebrew. Frustrated with his experiences at the university, he considered dropping out and moving to a kibbutz in Mandatory Palestine, but his intellectual curiosity was reawakened through conversations with the Russian-born linguist Zellig Harris, whom he first met in a political circle in 1947. Harris introduced Chomsky to the field of theoretical linguistics and convinced him to major in the subject. Chomsky's BA honors thesis, "Morphophonemics of Modern Hebrew", applied Harris's methods to the language. Chomsky revised this thesis for his MA, which he received from the University of Pennsylvania in 1951; it was subsequently published as a book. He also developed his interest in philosophy while at university, in particular under the tutelage of Nelson Goodman.
From 1951 to 1955 Chomsky was a member of the Society of Fellows at Harvard University, where he undertook research on what became his doctoral dissertation. Having been encouraged by Goodman to apply, Chomsky was attracted to Harvard in part because the philosopher Willard Van Orman Quine was based there. Both Quine and a visiting philosopher, J. L. Austin of the University of Oxford, strongly influenced Chomsky. In 1952 Chomsky published his first academic article, "Systems of Syntactic Analysis", which appeared not in a journal of linguistics but in "The Journal of Symbolic Logic". Highly critical of the established behaviorist currents in linguistics, in 1954 he presented his ideas at lectures at the University of Chicago and Yale University. He had not been registered as a student at Pennsylvania for four years, but in 1955 he submitted a thesis setting out his ideas on transformational grammar; he was awarded a Doctor of Philosophy degree for it, and it was privately distributed among specialists on microfilm before being published in 1975 as part of "The Logical Structure of Linguistic Theory". Harvard professor George Armitage Miller was impressed by Chomsky's thesis and collaborated with him on several technical papers in mathematical linguistics. Chomsky's doctorate exempted him from compulsory military service, which was otherwise due to begin in 1955.
In 1947 Chomsky began a romantic relationship with Carol Doris Schatz, whom he had known since early childhood. They married in 1949. After Chomsky was made a Fellow at Harvard, the couple moved to the Allston area of Boston and remained there until 1965, when they relocated to the suburb of Lexington. In 1953 the couple took a Harvard travel grant to Europe, from the United Kingdom through France, Switzerland into Italy, and Israel, where they lived in Hashomer Hatzair's HaZore'a kibbutz. Despite enjoying himself, Chomsky was appalled by the country's Jewish nationalism, anti-Arab racism and, within the kibbutz's leftist community, pro-Stalinism.
On visits to New York City, Chomsky continued to frequent the office of the Yiddish anarchist journal "Fraye Arbeter Shtime" and became enamored with the ideas of Rudolf Rocker, a contributor whose work introduced Chomsky to the link between anarchism and classical liberalism. Chomsky also read other political thinkers: the anarchists Mikhail Bakunin and Diego Abad de Santillán, democratic socialists George Orwell, Bertrand Russell, and Dwight Macdonald, and works by Marxists Karl Liebknecht, Karl Korsch, and Rosa Luxemburg. His readings convinced him of the desirability of an anarcho-syndicalist society, and he became fascinated by the anarcho-syndicalist communes set up during the Spanish Civil War, as documented in Orwell's "Homage to Catalonia" (1938). He read the leftist journal "Politics", which furthered his interest in anarchism, and the council communist periodical "Living Marxism", though he rejected the orthodoxy of its editor, Paul Mattick. He was also greatly interested in the Marlenite ideas of the Leninist League of the United States, an anti-Stalinist Marxist–Leninist group, sharing their view that the Second World War was orchestrated by Western capitalists and the Soviet Union's "state capitalists" to crush Europe's proletariat.
Chomsky befriended two linguists at the Massachusetts Institute of Technology (MIT), Morris Halle and Roman Jakobson, the latter of whom secured him an assistant professor position there in 1955. At MIT, Chomsky spent half his time on a mechanical translation project and half teaching a course on linguistics and philosophy. He described MIT as "a pretty free and open place, open to experimentation and without rigid requirements. It was just perfect for someone of my idiosyncratic interests and work." In 1957 MIT promoted him to the position of associate professor, and from 1957 to 1958 he was also employed by Columbia University as a visiting professor. The Chomskys had their first child that same year, a daughter named Aviva. He also published his first book on linguistics, "Syntactic Structures", a work that radically opposed the dominant Harris–Bloomfield trend in the field. Responses to Chomsky's ideas ranged from indifference to hostility, and his work proved divisive and caused "significant upheaval" in the discipline. The linguist John Lyons later asserted that "Syntactic Structures" "revolutionized the scientific study of language". From 1958 to 1959 Chomsky was a National Science Foundation fellow at the Institute for Advanced Study in Princeton, New Jersey.
In 1959, Chomsky published a review of B. F. Skinner's 1957 book "Verbal Behavior" in the academic journal "Language", in which he argued against Skinner's view of language as learned behavior. The review argued that Skinner ignored the role of human creativity in linguistics and helped to establish Chomsky as an intellectual. With Halle, Chomsky proceeded to found MIT's graduate program in linguistics. In 1961 he was awarded tenure, becoming a full professor in the Department of Modern Languages and Linguistics. Chomsky went on to be appointed plenary speaker at the Ninth International Congress of Linguists, held in 1962 in Cambridge, Massachusetts, which established him as the "de facto" spokesperson of American linguistics. Between 1963 and 1965 he consulted on a military-sponsored project "to establish natural language as an operational language for command and control"; Barbara Partee, a collaborator on this project and then-student of Chomsky, has said this research was justified to the military on the basis that "in the event of a nuclear war, the generals would be underground with some computers trying to manage things, and that it would probably be easier to teach computers to understand English than to teach the generals to program."
Chomsky continued to publish his linguistic ideas throughout the decade, including in "Aspects of the Theory of Syntax" (1965), "Topics in the Theory of Generative Grammar" (1966), and "Cartesian Linguistics" (1966). Along with Halle, he also edited the "Studies in Language" series of books for Harper and Row. As he began to accrue significant academic recognition and honors for his work, Chomsky lectured at the University of California, Berkeley, in 1966. His Beckman lectures at Berkeley were assembled and published as "Language and Mind" in 1968. Despite his growing stature, an intellectual falling-out between Chomsky and some of his early colleagues and doctoral students—including Paul Postal, John "Haj" Ross, George Lakoff, and James D. McCawley—triggered a series of academic debates that came to be known as the "Linguistics Wars", although they revolved largely around philosophical issues rather than linguistics proper.
Chomsky joined protests against U.S. involvement in the Vietnam War in 1962, speaking on the subject at small gatherings in churches and homes. His 1967 critique of U.S. involvement, "The Responsibility of Intellectuals", among other contributions to "The New York Review of Books", debuted Chomsky as a public dissident. This essay and other political articles were collected and published in 1969 as part of Chomsky's first political book, "American Power and the New Mandarins". He followed this with further political books, including "At War with Asia" (1971), "The Backroom Boys" (1973), "For Reasons of State" (1973), and "Peace in the Middle East?" (1975), published by Pantheon Books. These publications led to Chomsky's association with the American New Left movement, though he thought little of prominent New Left intellectuals Herbert Marcuse and Erich Fromm and preferred the company of activists to that of intellectuals. Chomsky remained largely ignored by the mainstream press throughout this period.
He also became involved in left-wing activism. Chomsky refused to pay half his taxes, publicly supported students who refused the draft, and was arrested while participating in an antiwar teach-in outside the Pentagon. During this time, Chomsky co-founded the antiwar collective RESIST with Mitchell Goodman, Denise Levertov, William Sloane Coffin, and Dwight Macdonald. Although he questioned the objectives of the 1968 student protests, Chomsky gave many lectures to student activist groups and, with his colleague Louis Kampf, ran undergraduate courses on politics at MIT independently of the conservative-dominated political science department. When student activists campaigned to stop weapons and counterinsurgency research at MIT, Chomsky was sympathetic but felt that the research should remain under MIT's oversight and limited to systems of deterrence and defense. In 1970 he visited southeast Asia to lecture at Vietnam's Hanoi University of Science and Technology and toured war refugee camps in Laos. In 1973 he helped lead a committee commemorating the 50th anniversary of the War Resisters League.
Because of his antiwar activism, Chomsky was arrested on multiple occasions and included on President Richard Nixon's master list of political opponents. Chomsky was aware of the potential repercussions of his civil disobedience and his wife began studying for her own doctorate in linguistics to support the family in the event of Chomsky's imprisonment or joblessness. Chomsky's scientific reputation insulated him from administrative action based on his beliefs.
His work in linguistics continued to gain international recognition as he received multiple honorary doctorates. He delivered public lectures at the University of Cambridge, Columbia University (Woodbridge Lectures), and Stanford University. His appearance in a 1971 debate with French continental philosopher Michel Foucault positioned Chomsky as a symbolic figurehead of analytic philosophy. He continued to publish extensively on linguistics, producing "Studies on Semantics in Generative Grammar" (1972), an enlarged edition of "Language and Mind" (1972), and "Reflections on Language" (1975). In 1974 Chomsky became a corresponding fellow of the British Academy.
In the late 1970s and 1980s, Chomsky's linguistic publications expanded and clarified his earlier work, addressing his critics and updating his grammatical theory. His political talks often generated considerable controversy, particularly when he criticized the Israeli government and military. In the early 1970s Chomsky began collaborating with Edward S. Herman, who had also published critiques of the U.S. war in Vietnam. Together they wrote "Counter-Revolutionary Violence", a book that criticized U.S. military involvement in Southeast Asia and the mainstream media's failure to cover it. Warner Modular published it in 1973, but its parent company disapproved of the book's contents and ordered all copies destroyed.
While mainstream publishing options proved elusive, Chomsky found support from Michael Albert's South End Press, an activist-oriented publishing company. In 1979, South End published Chomsky and Herman's revised "Counter-Revolutionary Violence" as the two-volume "The Political Economy of Human Rights", which compares U.S. media reactions to the Cambodian genocide and the Indonesian occupation of East Timor. It argues that because Indonesia was a U.S. ally, U.S. media ignored the East Timorese situation while focusing on events in Cambodia, a U.S. enemy. Chomsky's response included two testimonials before the United Nations' Special Committee on Decolonization, successful encouragement for American media to cover the occupation, and meetings with refugees in Lisbon. The Marxist academic Steven Lukes publicly accused Chomsky of betraying his anarchist ideals and acting as an apologist for Cambodian leader Pol Pot. The controversy damaged Chomsky's reputation, and he maintains that his critics deliberately printed lies to defame him.
Chomsky had long publicly criticized Nazism, and totalitarianism more generally, but his commitment to freedom of speech led him to defend the right of French historian Robert Faurisson to advocate a position widely characterized as Holocaust denial. Without Chomsky's knowledge, his plea for Faurisson's freedom of speech was published as the preface to the latter's 1980 book "Mémoire en défense". Chomsky was widely condemned for defending Faurisson, and France's mainstream press accused Chomsky of being a Holocaust denier himself, refusing to publish his rebuttals to their accusations. Critiquing Chomsky's position, sociologist Werner Cohn later published an analysis of the affair titled "Partners in Hate: Noam Chomsky and the Holocaust Deniers". The Faurisson affair had a lasting, damaging effect on Chomsky's career, especially in France.
In 1985, during the Nicaraguan Contra War—in which the U.S. supported the contra militia against the Sandinista government—Chomsky traveled to Managua to meet with workers' organizations and refugees of the conflict, giving public lectures on politics and linguistics. Many of these lectures were published in 1987 as "On Power and Ideology: The Managua Lectures". In 1983 he published "The Fateful Triangle", which argued that the U.S. had continually used the Israeli–Palestinian conflict for its own ends. In 1988, Chomsky visited the Palestinian territories to witness the impact of Israeli occupation.
In 1988, Chomsky and Herman published "Manufacturing Consent: The Political Economy of the Mass Media", in which they outlined their propaganda model for understanding mainstream media. They argued that, even in countries without official censorship, the news is censored through five filters that have great impact on what stories are reported and how they are presented. The book was inspired by the work of Alex Carey and was adapted into a 1992 documentary film of the same name. In 1989, Chomsky published "Necessary Illusions: Thought Control in Democratic Societies", in which he suggests that, to make a worthwhile democracy, democratic citizens must undertake intellectual self-defense against the media and elite intellectual culture that seeks to control them. By the 1980s, Chomsky's students had become prominent linguists who, in turn, expanded and revised his linguistic theories.
In the 1990s, Chomsky embraced political activism to a greater degree than before. Retaining his commitment to the cause of East Timorese independence, in 1995 he visited Australia to talk on the issue at the behest of the East Timorese Relief Association and the National Council for East Timorese Resistance. The lectures he gave on the subject were published as "Powers and Prospects" in 1996. As a result of the international publicity Chomsky generated, his biographer Wolfgang Sperlich opined that he did more to aid the cause of East Timorese independence than anyone but the investigative journalist John Pilger. After East Timor attained independence from Indonesia in 1999, the Australian-led International Force for East Timor arrived as a peacekeeping force; Chomsky was critical of this, believing it was designed to secure Australian access to East Timor's oil and gas reserves under the Timor Gap Treaty.
After the September 11 attacks in 2001, Chomsky was widely interviewed; Seven Stories Press collated and published these interviews that October. Chomsky argued that the ensuing War on Terror was not a new development but a continuation of U.S. foreign policy and concomitant rhetoric since at least the Reagan era. He gave the D.T. Lakdawala Memorial Lecture in New Delhi in 2001, and in 2003 visited Cuba at the invitation of the Latin American Association of Social Scientists. Chomsky's 2003 "Hegemony or Survival" articulated what he called the United States' "imperial grand strategy" and critiqued the Iraq War and other aspects of the War on Terror. Chomsky toured internationally with greater regularity during this period.
Chomsky retired from MIT in 2002, but continued to conduct research and seminars on campus as an emeritus. That same year he visited Turkey to attend the trial of a publisher who had been accused of treason for printing one of Chomsky's books; Chomsky insisted on being a co-defendant and amid international media attention the Security Courts dropped the charge on the first day. During that trip Chomsky visited Kurdish areas of Turkey and spoke out in favor of the Kurds' human rights. A supporter of the World Social Forum, he attended its conferences in Brazil in both 2002 and 2003, also attending the Forum event in India.
Chomsky supported the Occupy movement, delivering talks at encampments and producing two works that chronicled its influence: "Occupy" (2012), a pamphlet, and "Occupy: Reflections on Class War, Rebellion and Solidarity" (2013). He attributed Occupy's growth to a perception that the Democratic Party had abandoned the interests of the white working class. In March 2014, Chomsky joined the advisory council of the Nuclear Age Peace Foundation, an organization that advocates the global abolition of nuclear weapons, as a senior fellow. The 2016 documentary "Requiem for the American Dream" summarizes his views on capitalism and economic inequality through a "75-minute teach-in".
In 2017, Chomsky taught a short-term politics course at the University of Arizona in Tucson and was later hired as a part-time professor in the linguistics department there, with his duties including teaching and public seminars. His salary is covered by philanthropic donations.
Chomsky signed the Declaration on the Common Language of the Croats, Serbs, Bosniaks and Montenegrins in 2018.
The basis of Chomsky's linguistic theory lies in biolinguistics, the linguistic school that holds that the principles underpinning the structure of language are biologically preset in the human mind and hence genetically inherited. He thus argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences. In adopting this position Chomsky rejects the radical behaviorist psychology of B. F. Skinner, who viewed behavior (including talking and thinking) as a completely learned product of the interactions between organisms and their environments. Accordingly, Chomsky argues that language is a unique evolutionary development of the human species, distinct from the modes of communication used by any other animal species. Chomsky's nativist, internalist view of language is consistent with the philosophical school of "rationalism" and contrasts with the anti-nativist, externalist view of language consistent with the philosophical school of "empiricism", which contends that all knowledge, including language, comes from external stimuli.
Since the 1960s Chomsky has maintained that syntactic knowledge is at least partially inborn, implying that children need only learn certain language-specific features of their native languages. He bases his argument on observations about human language acquisition and describes a "poverty of the stimulus": an enormous gap between the linguistic stimuli to which children are exposed and the rich linguistic competence they attain. For example, although children are exposed to only a very small and finite subset of the allowable syntactic variants within their first language, they somehow acquire the highly organized and systematic ability to understand and produce an infinite number of sentences, including ones that have never before been uttered, in that language. To explain this, Chomsky reasoned that the primary linguistic data must be supplemented by an innate linguistic capacity. Furthermore, while a human baby and a kitten are both capable of inductive reasoning, if they are exposed to exactly the same linguistic data, the human will always acquire the ability to understand and produce language, while the kitten will never acquire either ability. Chomsky labeled whatever relevant capacity the human has that the cat lacks the language acquisition device, and suggested that one of linguists' tasks should be to determine what that device is and what constraints it imposes on the range of possible human languages. The universal features that result from these constraints would constitute "universal grammar". Multiple scholars have challenged universal grammar on the grounds of the evolutionary infeasibility of a genetic basis for language, the lack of universal characteristics across languages, and the unproven link between innate or universal structures and the structures of specific languages. Michael Tomasello has challenged Chomsky's theory of innate syntactic knowledge on the grounds that it rests on logical argument rather than empirical evidence.
Transformational-generative grammar is a broad theory used to model, encode, and deduce a native speaker's linguistic capabilities. These models, or "formal grammars", show the abstract structures of a specific language as they may relate to structures in other languages. Chomsky developed transformational grammar in the mid-1950s, whereupon it became the dominant syntactic theory in linguistics for two decades. "Transformations" refers to syntactic relationships within language, e.g., the ability to recognize that an active sentence and its passive counterpart share the same underlying subject. Chomsky's theory posits that language consists of both deep structures and surface structures: outward-facing surface structures relate to sound via phonetic rules, while inward-facing deep structures relate words to conceptual meaning. Transformational-generative grammar uses mathematical notation to express the rules that govern the connection between meaning and sound (deep and surface structures, respectively). By this theory, linguistic principles can mathematically generate potential sentence structures in a language.
Based on this rule-based notation of grammars, Chomsky grouped formal languages into a series of four nested, increasingly complex types, together known as the Chomsky hierarchy. This classification was and remains foundational to formal language theory, and relevant to theoretical computer science, especially programming language theory, compiler construction, and automata theory.
Following transformational grammar's heyday through the mid-1970s, a derivative framework, government and binding theory, became the dominant research paradigm through the early 1990s, when linguists turned to a "minimalist" approach to grammar; government and binding remains an influential theory. This research focused on the principles and parameters framework, which explained children's ability to learn any language as the setting of open parameters, within a fixed set of universal grammar principles, in response to the linguistic data the child encounters. The minimalist program, initiated by Chomsky, asks which formulation of principles and parameters theory accounts for language most elegantly, naturally, and simply. In an attempt to reduce language to a system that relates meaning and sound using the minimum possible faculties, Chomsky dispenses with concepts such as "deep structure" and "surface structure" and instead emphasizes the plasticity of the brain's neural circuits, with which come an infinite number of concepts, or "logical forms". When exposed to linguistic data, a hearer-speaker's brain associates sound and meaning, and the rules of grammar we observe are in fact only the consequences, or side effects, of the way language works. Thus, while much of Chomsky's prior research focused on the rules of language, he now focuses on the mechanisms the brain uses to generate these rules and regulate speech.
Chomsky is a prominent political dissident. His political views have changed little since his childhood, when he was influenced by the emphasis on political activism that was ingrained in Jewish working-class tradition. He usually identifies as an anarcho-syndicalist or a libertarian socialist. He views these positions not as precise political theories but as ideals that he thinks best meet human needs: liberty, community, and freedom of association. Unlike some other socialists, such as Marxists, Chomsky believes that politics lies outside the remit of science, but he still roots his ideas about an ideal society in empirical data and empirically justified theories.
In Chomsky's view, the truth about political realities is systematically distorted or suppressed by an elite corporatocracy, which uses corporate media, advertising, and think tanks to promote its own propaganda. His work seeks to reveal such manipulations and the truths they obscure. Chomsky believes this web of falsehood can be broken by "common sense", critical thinking, and an understanding of the roles of self-interest and self-deception, and that intellectuals abdicate their moral responsibility to tell the truth about the world for fear of losing prestige and funding. He argues that, as such an intellectual, it is his duty to use his social privilege, resources, and training to aid popular democracy movements in their struggles.
Although he has joined protest marches and organized activist groups, Chomsky's primary political outlets are education and publication. He offers a wide range of political writings as well as free lessons and lectures to encourage wider political consciousness. He is a member of the Industrial Workers of the World international union.
Chomsky has been a prominent critic of American imperialism; he believes that the basic principle of the foreign policy of the United States is the establishment of "open societies" that are economically and politically controlled by the United States and where U.S.-based businesses can prosper. He argues that the U.S. seeks to suppress any movements within these countries that are not compliant with U.S. interests and to ensure that U.S.-friendly governments are placed in power. When discussing current events, he emphasizes their place within a wider historical perspective. He believes that official, sanctioned historical accounts of U.S. and British extraterritorial operations have consistently whitewashed these nations' actions in order to present them as having benevolent motives in either spreading democracy or, in older instances, spreading Christianity; criticizing these accounts, he seeks to correct them. Prominent examples he regularly cites are the actions of the British Empire in India and Africa and the actions of the U.S. in Vietnam, the Philippines, Latin America, and the Middle East.
Chomsky's political work has centered heavily on criticizing the actions of the United States. He has said he focuses on the U.S. because the country has militarily and economically dominated the world during his lifetime and because its liberal democratic electoral system allows the citizenry to influence government policy. His hope is that, by spreading awareness of the impact U.S. foreign policies have on the populations affected by them, he can sway the populations of the U.S. and other countries into opposing the policies. He urges people to criticize their governments' motivations, decisions, and actions, to accept responsibility for their own thoughts and actions, and to apply the same standards to others as to themselves.
Chomsky has been critical of U.S. involvement in the Israeli–Palestinian conflict, arguing that it has consistently blocked a peaceful settlement. Chomsky also criticizes the U.S.'s close ties with Saudi Arabia and involvement in Saudi Arabian-led intervention in Yemen, highlighting that Saudi Arabia has "one of the most grotesque human rights records in the world".
In his youth, Chomsky developed a dislike of capitalism and the pursuit of material wealth. At the same time, he developed a disdain for authoritarian socialism, as represented by the Marxist–Leninist policies of the Soviet Union. Rather than accepting the common view among U.S. economists that a spectrum exists between total state ownership of the economy and total private ownership, he instead suggests that a spectrum should be understood between total democratic control of the economy and total autocratic control (whether state or private). He argues that Western capitalist countries are not really democratic, because, in his view, a truly democratic society is one in which all persons have a say in public economic policy. He has stated his opposition to ruling elites, among them institutions like the IMF, World Bank, and GATT (precursor to the WTO).
Chomsky highlights that, since the 1970s, the U.S. has become increasingly economically unequal as a result of the repeal of various financial regulations and the rescinding of the Bretton Woods financial control agreement. He characterizes the U.S. as a "de facto" one-party state, viewing both the Republican Party and the Democratic Party as manifestations of a single "Business Party" controlled by corporate and financial interests. He further notes that, within Western capitalist liberal democracies, at least 80% of the population has no control over economic decisions, which are instead in the hands of a management class and ultimately of a small, wealthy elite.
Noting the entrenchment of such an economic system, Chomsky believes that change is possible through the organized cooperation of large numbers of people who understand the problem and know how they want to reorganize the economy more equitably. Acknowledging that corporate domination of media and government stifles any significant change to this system, he sees reason for optimism in historical examples such as the social rejection of slavery as immoral, the advances in women's rights, and the forcing of government to justify invasions. He views violent revolution to overthrow a government as a last resort to be avoided if possible, citing the example of historical revolutions where the population's welfare has worsened as a result of upheaval.
Chomsky sees libertarian socialist and anarcho-syndicalist ideas as the descendants of the classical liberal ideas of the Age of Enlightenment, arguing that his ideological position revolves around "nourishing the libertarian and creative character of the human being". He envisions an anarcho-syndicalist future with direct worker control of the means of production and government by workers' councils, who would select representatives to meet together at general assemblies. The point of this self-governance is to make each citizen, in Thomas Jefferson's words, "a direct participator in the government of affairs." He believes that there will be no need for political parties. By controlling their productive life, he believes that individuals can gain job satisfaction and a sense of fulfillment and purpose. He argues that unpleasant and unpopular jobs could be fully automated, carried out by workers who are specially remunerated, or shared among everyone.
Chomsky has written prolifically on the Israeli-Palestinian conflict, aiming to raise public awareness of it. He has long endorsed a left binationalist program in Israel and Palestine, seeking to create a democratic state in the Levant that is home to both Jews and Arabs. Nevertheless, given the realpolitik of the situation, he has also considered a two-state solution on the condition that the nation-states exist on equal terms. Chomsky was denied entry to the West Bank in 2010 because of his criticisms of Israel. He had been invited to deliver a lecture at Bir Zeit University and was to meet with Palestinian Prime Minister Salam Fayyad. An Israeli Foreign Ministry spokesman later said that Chomsky was denied entry by mistake.
Chomsky's political writings have largely focused on ideology, social and political power, the media, and state policy. One of his best-known works, "Manufacturing Consent", dissects the media's role in reinforcing and acquiescing to state policies across the political spectrum while marginalizing contrary perspectives. Chomsky asserts that this version of censorship, by government-guided "free market" forces, is subtler and harder to undermine than was the equivalent propaganda system in the Soviet Union. As he argues, the mainstream press is corporate-owned and thus reflects corporate priorities and interests. Acknowledging that many American journalists are dedicated and well-meaning, he argues that the mass media's choices of topics and issues, the unquestioned premises on which that coverage rests, and the range of opinions expressed are all constrained to reinforce the state's ideology: although mass media will criticize individual politicians and political parties, it will not undermine the wider state-corporate nexus of which it is a part. As evidence, he highlights that the U.S. mass media does not employ any socialist journalists or political commentators. He also points to examples of important news stories that the U.S. mainstream media has ignored because reporting on them would reflect badly upon the country, including the murder of Black Panther Fred Hampton with possible FBI involvement, the massacres in Nicaragua perpetrated by U.S.-funded Contras, and the constant reporting on Israeli deaths without equivalent coverage of the far larger number of Palestinian deaths in that conflict. To remedy this situation, Chomsky calls for grassroots democratic control and involvement of the media.
Chomsky considers most conspiracy theories fruitless, distracting substitutes for thinking about policy formation in an institutional framework, where individual manipulation is secondary to broader social imperatives. While not dismissing them outright, he considers them unproductive to challenging power in a substantial way. In response to the labeling of his own ideas as a conspiracy theory, Chomsky has said that it is very rational for the media to manipulate information in order to sell it, like any other business. He asks whether General Motors would be accused of conspiracy if it deliberately selected what it used or discarded to sell its product.
Chomsky has also been active in a number of philosophical fields, including philosophy of mind, philosophy of language, and philosophy of science. In these fields he is credited with ushering in the "cognitive revolution", a significant paradigm shift that rejected logical positivism, the prevailing philosophical methodology of the time, and reframed how philosophers think about language and the mind. Chomsky views the cognitive revolution as rooted in 17th-century rationalist ideals. His position—the idea that the mind contains inherent structures to understand language, perception, and thought—has more in common with rationalism (Enlightenment and Cartesian) than behaviorism. He named one of his key works "Cartesian Linguistics: A Chapter in the History of Rationalist Thought" (1966). In philosophy of language, Chomsky is particularly known for his criticisms of the notion of reference and meaning in human language and his perspective on the nature and function of mental representations.
Chomsky's famous 1971 debate on human nature with the French philosopher Michel Foucault was symbolic in positioning Chomsky as the prototypical analytic philosopher against Foucault, a stalwart of the continental tradition. It showed what appeared to be irreconcilable differences between two moral and intellectual luminaries of the 20th century. Foucault's position was that of critique, that human nature could not be conceived in terms foreign to present understanding, while Chomsky held that human nature contained universalities such as a common standard of moral justice as deduced through reason based on what rationally serves human necessity. Chomsky criticized postmodernism and French philosophy generally, arguing that the obscure language of postmodern, leftist philosophers gives little aid to the working classes. He has also debated analytic philosophers, including Tyler Burge, Donald Davidson, Michael Dummett, Saul Kripke, Thomas Nagel, Hilary Putnam, Willard Van Orman Quine, and John Searle.
Chomsky's contributions span intellectual and world history, including the history of philosophy. Irony is a recurring characteristic of his writing: he often implies that his readers know better, which can make them more invested in weighing the veracity of his claims.
Chomsky endeavors to keep his family life, linguistic scholarship, and political activism strictly separate from one another, calling himself "scrupulous at keeping my politics out of the classroom". An intensely private person, he is uninterested in appearances and the fame his work has brought him. He also has little interest in modern art and music. McGilvray suggests that Chomsky was never motivated by a desire for fame, but impelled to tell what he perceived as the truth and a desire to aid others in doing so. Chomsky acknowledges that his income affords him a privileged life compared to the majority of the world's population; nevertheless, he characterizes himself as a "worker", albeit one who uses his intellect as his employable skill. He reads four or five newspapers daily; in the US, he subscribes to "The Boston Globe", "The New York Times", "The Wall Street Journal", "Financial Times", and "The Christian Science Monitor". Chomsky is non-religious, but has expressed approval of forms of religion such as liberation theology.
Chomsky has attracted controversy for calling established political and academic figures "corrupt", "fascist", and "fraudulent". His colleague Steven Pinker has said that he "portrays people who disagree with him as stupid or evil, using withering scorn in his rhetoric", and that this contributes to the extreme reactions he receives from critics. Chomsky avoids attending academic conferences, including left-oriented ones such as the Socialist Scholars Conference, preferring to speak to activist groups or hold university seminars for mass audiences. His approach to academic freedom has led him to support MIT academics whose actions he deplores; in 1969, when Chomsky heard that Walt Rostow, a major architect of the Vietnam war, wanted to return to work at MIT, Chomsky threatened "to protest publicly" if Rostow was denied a position at MIT. In 1989, when Pentagon adviser John Deutch applied to be president of MIT, Chomsky supported his candidacy. Later, when Deutch became head of the CIA, "The New York Times" quoted Chomsky as saying, "He has more honesty and integrity than anyone I've ever met. ... If somebody's got to be running the CIA, I'm glad it's him."
Chomsky was married to Carol (née Schatz) from 1949 until her death in 2008. They had three children together: Aviva (b. 1957), Diane (b. 1960), and Harry (b. 1967). In 2014, Chomsky married Valeria Wasserman.
Chomsky has been a defining Western intellectual figure, central to the field of linguistics and definitive in cognitive science, computer science, philosophy, and psychology. In addition to being known as one of the most important intellectuals of his time, Chomsky carries a dual legacy as both a "leader in the field" of linguistics and "a figure of enlightenment and inspiration" for political dissenters. Despite his academic success, his political viewpoints and activism have resulted in his being distrusted by the mainstream media apparatus, and he is regarded as being "on the outer margin of acceptability". The reception of his work is intertwined with his public image as an anarchist, a gadfly, an historian, a Jew, a linguist, and a philosopher.
McGilvray observes that Chomsky inaugurated the "cognitive revolution" in linguistics, and that he is largely responsible for establishing the field as a formal, natural science, moving it away from the procedural form of structural linguistics dominant during the mid-20th century. As such, some have called Chomsky "the father of modern linguistics". Linguist John Lyons further remarked that within a few decades of publication, Chomskyan linguistics had become "the most dynamic and influential" school of thought in the field. By the 1970s his work had also come to exert a considerable influence on philosophy, and a Minnesota State University Moorhead poll ranked "Syntactic Structures" as the single most important work in cognitive science. In addition, his work in automata theory and the Chomsky hierarchy have become well known in computer science, and he is much cited in computational linguistics.
Chomsky's criticisms of behaviorism contributed substantially to the decline of behaviorist psychology; in addition, he is generally regarded as one of the primary founders of the field of cognitive science. Some arguments in evolutionary psychology are derived from his research results; Nim Chimpsky, a chimpanzee who was the subject of a study in animal language acquisition at Columbia University, was named after Chomsky in reference to his view of language acquisition as a uniquely human ability.
ACM Turing Award winner Donald Knuth credited Chomsky's work with helping him combine his interests in mathematics, linguistics, and computer science. IBM computer scientist John Backus, another Turing Award winner, used some of Chomsky's concepts to help him develop FORTRAN, the first widely used high-level computer programming language. The laureates of the 1984 Nobel Prize in Physiology or Medicine—Georges J. F. Köhler, César Milstein, and Niels Kaj Jerne—used Chomsky's generative model to explain the human immune system, equating "components of a generative grammar ... with various features of protein structures." Chomsky's theory of generative grammar has also influenced work in music theory and analysis.
An MIT press release stated that Chomsky was cited within the Arts and Humanities Citation Index more often than any other living scholar from 1980 to 1992. Chomsky was also extensively cited in the Social Sciences Citation Index and Science Citation Index during the same time period, with the librarian who conducted the research commenting that the statistics show that "he is very widely read across disciplines and that his work is used by researchers across disciplines ... it seems that you can't write a paper without citing Noam Chomsky." As a result of his influence, there are dueling camps of Chomskyan and non-Chomskyan linguistics, with the disputes between the two camps often acrimonious.
Chomsky's status as the "most-quoted living author" is credited to his political writings, which vastly outnumber his writings on linguistics. Chomsky biographer Wolfgang B. Sperlich characterizes him as "one of the most notable contemporary champions of the people"; journalist John Pilger has described him as a "genuine people's hero; an inspiration for struggles all over the world for that basic decency known as freedom. To a lot of people in the margins—activists and movements—he's unfailingly supportive." Arundhati Roy has called him "one of the greatest, most radical public thinkers of our time", and Edward Said thought him "one of the most significant challengers of unjust power and delusions". Fred Halliday has said that by the start of the 21st century Chomsky had become a "guru" for the world's anti-capitalist and anti-imperialist movements. The propaganda model of media criticism that he and Herman developed has been widely accepted in radical media critiques and adopted to some level in mainstream criticism of the media, also exerting a significant influence on the growth of alternative media, including radio, publishers, and the Internet, which in turn have helped to disseminate his work.
Sperlich also notes that Chomsky has been vilified by corporate interests, particularly in the mainstream press. University departments devoted to history and political science rarely include Chomsky's work on their undergraduate syllabi. Critics have argued that despite publishing widely on social and political issues, Chomsky has no formal expertise in these areas; he has responded that such issues are not as complex as many social scientists claim and that almost everyone is able to comprehend them regardless of whether they have been academically trained to do so. According to McGilvray, many of Chomsky's critics "do not bother quoting his work or quote out of context, distort, and create straw men that cannot be supported by Chomsky's text".
Chomsky drew criticism for declining to call the Srebrenica massacre during the Bosnian War a "genocide", a term he said such use would devalue, and for appearing to deny Ed Vulliamy's reporting on the existence of Bosnian concentration camps. The subsequent editorial correction of his comments, viewed by some as a capitulation, was criticized by multiple observers of the Balkans.
Chomsky's far-reaching criticisms of U.S. foreign policy and the legitimacy of U.S. power have raised controversy. A document obtained pursuant to a Freedom of Information Act (FOIA) request from the U.S. government revealed that the Central Intelligence Agency (CIA) monitored his activities and for years denied doing so. The CIA also destroyed its files on Chomsky at some point, possibly in violation of federal law. He has often received undercover police protection at MIT and when speaking on the Middle East, but has refused uniformed police protection. German newspaper "Der Spiegel" described Chomsky as "the Ayatollah of anti-American hatred", while conservative commentator David Horowitz called him "the most devious, the most dishonest and ... the most treacherous intellect in America", whose work is infused with "anti-American dementia" and evidences his "pathological hatred of his own country". Writing in "Commentary" magazine, the journalist Jonathan Kay described Chomsky as "a hard-boiled anti-American monomaniac who simply refuses to believe anything that any American leader says".
Chomsky's criticism of Israel has led to his being called a traitor to the Jewish people and an anti-Semite. Criticizing Chomsky's defense of the right of individuals to engage in Holocaust denial on the grounds that freedom of speech must be extended to all viewpoints, Werner Cohn called Chomsky "the most important patron" of the neo-Nazi movement. The Anti-Defamation League (ADL), called him a Holocaust denier, describing him as a "dupe of intellectual pride so overweening that he is incapable of making distinctions between totalitarian and democratic societies, between oppressors and victims". In turn, Chomsky has claimed that the ADL is dominated by "Stalinist types" who oppose democracy in Israel. The lawyer Alan Dershowitz has called Chomsky a "false prophet of the left"; Chomsky called Dershowitz "a complete liar" who is on "a crazed jihad, dedicating much of his life to trying to destroy my reputation". In early 2016 President Recep Tayyip Erdoğan of Turkey publicly rebuked Chomsky after he signed an open letter condemning Erdoğan for his anti-Kurdish repression and double standards on terrorism. Chomsky accused Erdoğan of hypocrisy, noting that Erdoğan supports al-Qaeda's Syrian affiliate, the al-Nusra Front.
In February 2020, before attending the 2020 Hay Festival in Abu Dhabi, United Arab Emirates, Chomsky signed a letter of condemnation of the violation of freedom of speech in the emirate, referring to the arrest of human rights activist Ahmed Mansoor. Other signers included authors Stephen Fry and Jung Chang.
In 1970, the London "Times" named Chomsky one of the "makers of the twentieth century". He was voted the world's leading public intellectual in the 2005 Global Intellectuals Poll, jointly conducted by the American magazine "Foreign Policy" and the British magazine "Prospect". "New Statesman" readers listed Chomsky among the world's foremost heroes in 2006.
In the United States he is a Member of the National Academy of Sciences, the American Academy of Arts and Sciences, the Linguistic Society of America, the American Philosophical Association, and the American Association for the Advancement of Science. Abroad he is a corresponding fellow of the British Academy, an honorary member of the British Psychological Society, a member of the Deutsche Akademie der Naturforscher Leopoldina, and a foreign member of the Department of Social Sciences of the Serbian Academy of Sciences and Arts. He received a 1971 Guggenheim Fellowship, the 1984 American Psychological Association Award for Distinguished Contributions to Psychology, the 1988 Kyoto Prize in Basic Sciences, the 1996 Helmholtz Medal, the 1999 Benjamin Franklin Medal in Computer and Cognitive Science, the 2010 Erich Fromm Prize, and the British Academy's 2014 Neil and Saras Smith Medal for Linguistics. He is also a two-time winner of the NCTE George Orwell Award for Distinguished Contribution to Honesty and Clarity in Public Language (1987 and 1989). He has also received the Rabindranath Tagore Centenary Award from The Asiatic Society.
Chomsky received the 2004 Carl-von-Ossietzky Prize from the city of Oldenburg, Germany, in acknowledgment of his body of work as a political analyst and media critic. He received an honorary fellowship in 2005 from the Literary and Historical Society of University College Dublin, and the 2008 President's Medal from the Literary and Debating Society of the National University of Ireland, Galway. Since 2009, he has been an honorary member of the International Association of Professional Translators and Interpreters (IAPTI). He received the University of Wisconsin's A.E. Havens Center's Award for Lifetime Contribution to Critical Scholarship and was inducted into IEEE Intelligent Systems' AI's Hall of Fame for "significant contributions to the field of AI and intelligent systems." Chomsky has an Erdős number of four.
In 2011, the US Peace Memorial Foundation awarded Chomsky the US Peace Prize for antiwar activities over five decades. For his work in human rights, peace, and social criticism, he received the 2011 Sydney Peace Prize, the 2017 Seán MacBride Peace Prize and the Dorothy Eldridge Peacemaker Award.
Chomsky has received honorary doctorates from institutions including the University of London and the University of Chicago (1967), Loyola University Chicago and Swarthmore College (1970), Bard College (1971), Delhi University (1972), and the University of Massachusetts (1973) among others. His public lectures have included the 1969 John Locke Lectures, 1975 Whidden Lectures, 1977 Huizinga Lecture, and 1988 Massey Lectures, among others.
Various tributes to Chomsky have been dedicated over the years. He is the eponym for a bee species, a frog species, and a building complex at the Indian university Jamia Millia Islamia. Actor Viggo Mortensen and avant-garde guitarist Buckethead dedicated their 2003 album "Pandemoniumfromamerica" to Chomsky.