Montreal Protocol
The Montreal Protocol is an international treaty designed to protect the ozone layer by phasing out the production of numerous substances that are responsible for ozone depletion. It was agreed on 16 September 1987, and entered into force on 1 January 1989. Since then, it has undergone nine revisions, in 1990 (London), 1991 (Nairobi), 1992 (Copenhagen), 1993 (Bangkok), 1995 (Vienna), 1997 (Montreal), 1998 (Australia), 1999 (Beijing) and 2016 (Kigali). As a result of the international agreement, the ozone hole in Antarctica is slowly recovering. Climate projections indicate that the ozone layer will return to 1980 levels between 2050 and 2070. Due to its widespread adoption and implementation, it has been hailed as an example of exceptional international co-operation, with Kofi Annan quoted as saying that "perhaps the single most successful international agreement to date has been the Montreal Protocol". Effective burden sharing and solution proposals that mitigated regional conflicts of interest have been among the success factors for the ozone depletion challenge, whereas global regulation based on the Kyoto Protocol has failed in this respect. In the case of the ozone depletion challenge, global regulation was already being put in place before a scientific consensus was established, and overall public opinion was convinced of possible imminent risks.
The two ozone treaties have been ratified by 197 parties (196 states and the European Union), making them the first universally ratified treaties in United Nations history.
These truly universal treaties have also been remarkable for the speed of the policy-making process at the global scale: only 14 years elapsed between the basic scientific research discovery (1973) and the signing of the international agreements (1985 and 1987).
The treaty is structured around several groups of halogenated hydrocarbons that deplete stratospheric ozone. All of the ozone-depleting substances controlled by the Montreal Protocol contain either chlorine or bromine (substances containing only fluorine do not harm the ozone layer). Some ozone-depleting substances (ODSs) are not yet controlled by the Montreal Protocol, including nitrous oxide (N2O).
For each group of ODSs, the treaty provides a timetable on which the production of those substances must be phased out and eventually eliminated. This included a 10-year phase-in for developing countries identified in Article 5 of the treaty.
The stated purpose of the treaty is that the signatory states
There was a faster phase-out of halon-1211, -2402, and -1301. There was a slower phase-out (to zero by 2010) of other substances (CFCs 13, 111, 112, etc.), and some chemicals were given individual attention (carbon tetrachloride; 1,1,1-trichloroethane). The phase-out of the less damaging HCFCs only began in 1996 and will continue until a complete phase-out is achieved by 2030.
There were a few exceptions for "essential uses" where no acceptable substitutes were initially found (for example, in the past, metered dose inhalers commonly used to treat asthma and chronic obstructive pulmonary disease were exempt) or for halon fire suppression systems used in submarines and aircraft (but not in general industry).
The substances in Group I of Annex A are:
The provisions of the Protocol include the requirement that the Parties to the Protocol base their future decisions on the current scientific, environmental, technical, and economic information that is assessed through panels drawn from the worldwide expert communities. To provide that input to the decision-making process, advances in understanding on these topics were assessed in 1989, 1991, 1994, 1998 and 2002 in a series of reports entitled Scientific assessment of ozone depletion, by the Scientific Assessment Panel (SAP).
In 1990 a Technology and Economic Assessment Panel was also established as the technology and economics advisory body to the Montreal Protocol Parties. The Technology and Economic Assessment Panel (TEAP) provides, at the request of Parties, technical information related to the alternative technologies that have been investigated and employed to make it possible to virtually eliminate use of Ozone Depleting Substances (such as CFCs and Halons), that harm the ozone layer. The TEAP is also tasked by the Parties every year to assess and evaluate various technical issues including evaluating nominations for essential use exemptions for CFCs and halons, and nominations for critical use exemptions for methyl bromide. TEAP's annual reports are a basis for the Parties’ informed decision-making.
Numerous reports have been published by various inter-governmental, governmental and non-governmental organizations to catalogue and assess alternatives to the ozone depleting substances, since the substances have been used in various technical sectors, like in refrigeration, air conditioning, flexible and rigid foam, fire protection, aerospace, electronics, agriculture, and laboratory measurements.
Under the Montreal Protocol on Substances that Deplete the Ozone Layer, especially Executive Committee (ExCom) decisions 53/37 and 54/39, Parties to the Protocol agreed to set the year 2013 as the time to freeze the consumption and production of HCFCs for developing countries. For developed countries, reduction of HCFC consumption and production began in 2004 and 2010, respectively, with 100% reduction set for 2020. Developing countries agreed to start reducing their consumption and production of HCFCs by 2015, with 100% reduction set for 2030.
Hydrochlorofluorocarbons, commonly known as HCFCs, are a group of man-made compounds containing hydrogen, chlorine, fluorine and carbon. They are not found anywhere in nature. HCFC production began to take off after countries agreed in the 1980s to phase out the use of CFCs, which were found to be destroying the ozone layer. Like CFCs, HCFCs are used for refrigeration, aerosol propellants, foam manufacture and air conditioning. Unlike CFCs, however, most HCFCs are broken down in the lowest part of the atmosphere and pose a much smaller risk to the ozone layer. Nevertheless, HCFCs are very potent greenhouse gases, despite their very low atmospheric concentrations, measured in parts per trillion (million million).
The HCFCs are transitional CFC replacements, used as refrigerants, solvents, blowing agents for plastic foam manufacture, and fire extinguishers. In terms of ozone depletion potential (ODP), in comparison to CFCs, which have ODPs of 0.6–1.0, HCFCs have lower ODPs (0.01–0.5). In terms of global warming potential (GWP), in comparison to CFCs, which have GWPs of 4,680–10,720, HCFCs have lower GWPs (76–2,270).
On January 1, 2019, the Kigali Amendment to the Montreal Protocol came into force. Under the Kigali Amendment, countries promised to reduce the use of hydrofluorocarbons (HFCs) by more than 80% over the next 30 years. By December 27, 2018, 65 countries had ratified the Amendment.
Produced mostly in developed countries, hydrofluorocarbons (HFCs) replaced CFCs and HCFCs. HFCs pose no harm to the ozone layer because, unlike CFCs and HCFCs, they do not contain chlorine. They are, however, greenhouse gases, with a high global warming potential (GWP), comparable to that of CFCs and HCFCs. In 2009, a study calculated that a fast phasedown of high-GWP HFCs could potentially prevent the equivalent of up to 8.8 Gt CO2-eq per year in emissions by 2050. A proposed phasedown of HFCs was hence projected to avoid up to 0.5 °C of warming by 2100 under the high-HFC growth scenario, and up to 0.35 °C under the low-HFC growth scenario. Recognizing the opportunity presented for fast and effective phasing down of HFCs through the Montreal Protocol, starting in 2009 the Federated States of Micronesia proposed an amendment to phase down high-GWP HFCs, with the U.S., Canada, and Mexico following with a similar proposal in 2010.
After seven years of negotiations, in October 2016 at the 28th Meeting of the Parties to the Montreal Protocol in Kigali, the Parties adopted the Kigali Amendment, whereby they agreed to phase down HFCs under the Montreal Protocol. The amendment to the legally binding Montreal Protocol will ensure that industrialised countries bring down their HFC production and consumption by at least 85 per cent compared to their annual average values in the period 2011–2013. A group of developing countries including China, Brazil and South Africa are mandated to reduce their HFC use by 85 per cent of their average value in 2020–22 by the year 2045. India and some other developing countries (Iran, Iraq, Pakistan, and some oil economies like Saudi Arabia and Kuwait) will cut down their HFCs by 85 per cent of their values in 2024–26 by the year 2047.
On 17 November 2017, ahead of the 29th Meeting of the Parties of the Montreal Protocol, Sweden became the 20th Party to ratify the Kigali Amendment, pushing the Amendment over its ratification threshold and ensuring that it would enter into force on 1 January 2019.
In 1973, the chemists Frank Sherwood Rowland and Mario Molina, who were then at the University of California, Irvine, began studying the impacts of CFCs in the Earth's atmosphere. They discovered that CFC molecules were stable enough to remain in the atmosphere until they got up into the middle of the stratosphere where they would finally (after an average of 50–100 years for two common CFCs) be broken down by ultraviolet radiation releasing a chlorine atom. Rowland and Molina then proposed that these chlorine atoms might be expected to cause the breakdown of large amounts of ozone (O3) in the stratosphere. Their argument was based upon an analogy to contemporary work by Paul J. Crutzen and Harold Johnston, which had shown that nitric oxide (NO) could catalyze the destruction of ozone. (Several other scientists, including Ralph Cicerone, Richard Stolarski, Michael McElroy, and Steven Wofsy had independently proposed that chlorine could catalyze ozone loss, but none had realized that CFCs were a potentially large source of chlorine.) Crutzen, Molina and Rowland were awarded the 1995 Nobel Prize for Chemistry for their work on this problem.
The environmental consequence of this discovery was that, since stratospheric ozone absorbs most of the ultraviolet-B (UV-B) radiation reaching the surface of the planet, depletion of the ozone layer by CFCs would lead to an increase in UV-B radiation at the surface, resulting in an increase in skin cancer and other impacts such as damage to crops and to marine phytoplankton.
But the Rowland-Molina hypothesis was strongly disputed by representatives of the aerosol and halocarbon industries. The chair of the board of DuPont was quoted as saying that the ozone depletion theory was "a science fiction tale...a load of rubbish...utter nonsense". Robert Abplanalp, the president of Precision Valve Corporation (and inventor of the first practical aerosol spray can valve), wrote to the Chancellor of UC Irvine to complain about Rowland's public statements (Roan, p. 56).
After publishing their pivotal paper in June 1974, Rowland and Molina testified at a hearing before the U.S. House of Representatives in December 1974. As a result, significant funding was made available to study various aspects of the problem and to confirm the initial findings. In 1976, the U.S. National Academy of Sciences (NAS) released a report that confirmed the scientific credibility of the ozone depletion hypothesis. NAS continued to publish assessments of related science for the next decade.
Then, in 1985, British Antarctic Survey scientists Joe Farman, Brian Gardiner and Jon Shanklin published results of abnormally low ozone concentrations above Halley Bay near the South Pole. They speculated that this was connected to increased levels of CFCs in the atmosphere. It took several other attempts to establish the Antarctic losses as real and significant, especially after NASA had retrieved matching data from its satellite recordings. The impact of these studies, the metaphor 'ozone hole', and the colourful visual representation in a time lapse animation proved shocking enough for negotiators in Montreal, Canada to take the issue seriously.
Also in 1985, 20 nations, including most of the major CFC producers, signed the Vienna Convention, which established a framework for negotiating international regulations on ozone-depleting substances. After the discovery of the ozone hole by SAGE 2, it took only 18 months to reach a binding agreement in Montreal, Canada.
But the CFC industry did not give up that easily. As late as 1986, the Alliance for Responsible CFC Policy (an association representing the CFC industry founded by DuPont) was still arguing that the science was too uncertain to justify any action. In 1987, DuPont testified before the US Congress that "We believe there is no imminent crisis that demands unilateral regulation." And even in March 1988, Du Pont Chair Richard E. Heckert would write in a letter to the United States Senate, "we will not produce a product unless it can be made, used, handled and disposed of safely and consistent with appropriate safety, health and environmental quality criteria. At the moment, scientific evidence does not point to the need for dramatic CFC emission reductions. There is no available measure of the contribution of CFCs to any observed ozone change..."
The main objective of the "Multilateral Fund for the Implementation of the Montreal Protocol" is to assist developing country parties to the Montreal Protocol whose annual per capita consumption and production of ozone depleting substances (ODS) is less than 0.3 kg to comply with the control measures of the Protocol. Currently, 147 of the 196 Parties to the Montreal Protocol meet these criteria (they are referred to as Article 5 countries).
It embodies the principle agreed at the United Nations Conference on Environment and Development in 1992 that countries have a common but differentiated responsibility to protect and manage the global commons.
The Fund is managed by an Executive Committee with an equal representation of seven industrialized and seven Article 5 countries, which are elected annually by a Meeting of the Parties. The Committee reports annually to the Meeting of the Parties on its operations. The work of the Multilateral Fund on the ground in developing countries is carried out by four Implementing Agencies, which have contractual agreements with the Executive Committee:
Up to 20 percent of the contributions of contributing parties can also be delivered through their bilateral agencies in the form of eligible projects and activities.
The fund is replenished on a three-year basis by the donors. Pledges amounted to US$3.1 billion over the period 1991 to 2005. Funds are used, for example, to finance the conversion of existing manufacturing processes, train personnel, pay royalties and patent rights on new technologies, and establish national ozone offices.
As of 23 June 2015, all countries in the United Nations, the Cook Islands, Holy See, Niue as well as the European Union have ratified the original Montreal Protocol (see external link below), with South Sudan being the last country to ratify the agreement, bringing the total to 197. These countries have also ratified the London, Copenhagen, Montreal, and Beijing amendments.
Since the Montreal Protocol came into effect, the atmospheric concentrations of the most important chlorofluorocarbons and related chlorinated hydrocarbons have either leveled off or decreased. Halon concentrations have continued to increase, as the halons presently stored in fire extinguishers are released, but their rate of increase has slowed and their abundances are expected to begin to decline by about 2020. Also, the concentration of HCFCs increased drastically, at least partly because in many uses (e.g. as solvents or refrigerants) CFCs were substituted with HCFCs. While there have been reports of attempts by individuals to circumvent the ban, e.g. by smuggling CFCs from undeveloped to developed nations, the overall level of compliance has been high. Statistical analysis from 2010 shows a clear positive signal from the Montreal Protocol on stratospheric ozone. In consequence, the Montreal Protocol has often been called the most successful international environmental agreement to date. In a 2001 report, NASA found the ozone thinning over Antarctica had remained the same thickness for the previous three years; however, in 2003 the ozone hole grew to its second largest size. The most recent (2006) scientific evaluation of the effects of the Montreal Protocol states, "The Montreal Protocol is working: There is clear evidence of a decrease in the atmospheric burden of ozone-depleting substances and some early signs of stratospheric ozone recovery." However, a more recent study seems to point to a relative increase in CFCs due to an unknown source.
As reported in 1997, significant production of CFCs occurred in Russia for sale on the black market to the EU throughout the 1990s. Related US production and consumption was enabled by fraudulent reporting due to poor enforcement mechanisms. Similar illegal markets for CFCs were detected in Taiwan, Korea, and Hong Kong.
The Montreal Protocol is also expected to have effects on human health. A 2015 report by the U.S. Environmental Protection Agency estimates that the protection of the ozone layer under the treaty will prevent over 280 million cases of skin cancer, 1.5 million skin cancer deaths, and 45 million cataracts in the United States.
However, the hydrochlorofluorocarbons, or HCFCs, and hydrofluorocarbons, or HFCs, are now thought to contribute to anthropogenic global warming. On a molecule-for-molecule basis, these compounds are up to 10,000 times more potent greenhouse gases than carbon dioxide. The Montreal Protocol currently calls for a complete phase-out of HCFCs by 2030, but does not place any restriction on HFCs. Since the CFCs themselves are equally powerful greenhouse gases, the mere substitution of HFCs for CFCs does not significantly increase the rate of anthropogenic climate change, but over time a steady increase in their use could increase the danger that human activity will change the climate.
Policy experts have advocated for increased efforts to link ozone protection efforts to climate protection efforts. Policy decisions in one arena affect the costs and effectiveness of environmental improvements in the other.
In 2018, scientists monitoring the atmosphere following the 2010 phaseout date have reported evidence of continuing industrial production of CFC-11, likely in eastern Asia, with detrimental global effects on the ozone layer. A monitoring study detected fresh atmospheric releases of carbon tetrachloride from China's Shandong province, beginning sometime after 2012, and accounting for a large part of emissions exceeding global estimates under the Montreal Protocol.
The year 2012 marked the 25th anniversary of the signing of the Montreal Protocol. Accordingly, the Montreal Protocol community organized a range of celebrations at the national, regional and international levels to publicize its considerable success to date and to consider the work ahead for the future.
Among its accomplishments are: The Montreal Protocol was the first international treaty to address a global environmental regulatory challenge; the first to embrace the "precautionary principle" in its design for science-based policymaking; the first treaty where independent experts on atmospheric science, environmental impacts, chemical technology, and economics, reported directly to Parties, without edit or censorship, functioning under norms of professionalism, peer review, and respect; the first to provide for national differences in responsibility and financial capacity to respond by establishing a multilateral fund for technology transfer; the first MEA with stringent reporting, trade, and binding chemical phase-out obligations for both developed and developing countries; and, the first treaty with a financial mechanism managed democratically by an Executive Board with equal representation by developed and developing countries.
Within 25 years of signing, parties to the Montreal Protocol celebrated significant milestones. Significantly, the world has phased out 98% of the ozone-depleting substances (ODS) contained in nearly 100 hazardous chemicals worldwide; every country is in compliance with stringent obligations; and the Protocol has achieved the status of the first global regime with universal ratification; even the newest member state, South Sudan, ratified in 2013. UNEP received accolades for achieving global consensus that "demonstrates the world's commitment to ozone protection, and more broadly, to global environmental protection".
Moncton
Moncton is one of three major urban centres in the Canadian province of New Brunswick, along with Saint John and the capital city of Fredericton. Situated in the Petitcodiac River Valley, Moncton lies at the geographic centre of the Maritime Provinces. The city has earned the nickname "Hub City" due to its central inland location in the region and its history as a railway and land transportation hub for the Maritimes.
The city proper has a population of 71,889 (2016). Greater Moncton has a population of 144,810 (2016), making it the largest city and census metropolitan area (CMA) in New Brunswick, and the second-largest city and CMA in the Maritime Provinces. The CMA includes the neighbouring city of Dieppe and the town of Riverview, as well as adjacent suburban areas in Westmorland and Albert counties.
Although the Moncton area was first settled in 1733, Moncton was officially founded in 1766 with the arrival of Pennsylvania Dutch immigrants from Philadelphia. Initially an agricultural settlement, the community developed a significant wooden shipbuilding industry by the mid-1840s, which allowed for its civic incorporation as a town in 1855. The city was named for Lt. Col. Robert Monckton, the British officer who had captured nearby Fort Beauséjour a century earlier. However, the shipbuilding economy collapsed in the 1860s, causing the town to lose its civic charter in 1862. Moncton regained its charter in 1875 after the community's economy rebounded, mainly due to a growing railway industry. In 1871, the Intercolonial Railway of Canada had chosen Moncton as its headquarters, and Moncton remained a railway town for well over a century until the closure of the Canadian National Railway (CNR) locomotive shops in the late 1980s.
Although the economy of Moncton was traumatized twice—by the collapse of the shipbuilding industry in the 1860s and by the closure of the CNR locomotive shops in the 1980s—the city was able to rebound strongly on both occasions. The city adopted the motto "Resurgo" (Latin: I rise again) after its rebirth as a railway town. The city's economy is stable and diversified, primarily based on its traditional transportation, distribution, retailing, and commercial heritage, and supplemented by strength in the educational, health care, financial, information technology, and insurance sectors. The strength of Moncton's economy has received national recognition and the local unemployment rate is consistently less than the national average.
Acadians settled the head of the Bay of Fundy in the 1670s. The first reference to the "Petcoucoyer River" was on the De Meulles map of 1686. Settlement of the Petitcodiac and Memramcook river valleys began about 1700, gradually extending inland and reaching the site of present-day Moncton in 1733. The first Acadian settlers in the Moncton area established a marshland farming community and chose to name their settlement "Le Coude" (The Elbow), an allusion to the 90° bend in the river near the site of the settlement.
In 1755, nearby Fort Beausejour was captured by British forces under the command of Lt. Col. Robert Monckton. The Beaubassin region including the Memramcook and Petitcodiac river valleys subsequently fell under English control. Later that year, Governor Charles Lawrence issued a decree ordering the expulsion of the Acadian population from Nova Scotia (including recently captured areas of Acadia such as le Coude). This action came to be known as the "Great Upheaval".
The reaches of the upper Petitcodiac River valley then came under the control of the Philadelphia Land Company (one of the principals of which was Benjamin Franklin). In 1766, Pennsylvania Dutch settlers arrived to re-establish the pre-existing farming community at Le Coude. The settlers consisted of eight families: Heinrick Stief (Steeves); Jacob Treitz (Trites); Matthias Sommer (Somers); Jacob Reicker (Ricker); Charles Jones (Schantz); George Wortmann (Wortman); Michael Lutz (Lutes); and George Koppel (Copple). There is a plaque dedicated in their honour at the mouth of Hall's Creek. They renamed the settlement "The Bend".
The Bend remained an agricultural settlement for nearly 80 more years. Even by 1836, there were only 20 households in the community. At this time, the Westmorland Road became open to year-round travel and a regular mail coach service was established between Saint John and Halifax. The Bend became an important transfer and rest station along the route. Over the next decade, lumbering and then shipbuilding would become important industries in the area.
The turning point for the community came when Joseph Salter took over (and expanded) a shipyard at the Bend in 1847. The expanded shipyard ultimately grew to employ about 400 workers. The Bend subsequently developed a service-based economy to support the shipyard and gradually began to acquire all the amenities of a growing town. The prosperity engendered by the wooden shipbuilding industry allowed The Bend to incorporate as the town of Moncton in 1855. Although the town was named for Lt. Col. Robert Monckton, a clerical error at the time the town was incorporated resulted in the misspelling of the community's name, which has remained to the present day. The first mayor of Moncton was the shipbuilder Joseph Salter.
Two years later, in 1857, the European and North American Railway opened its line from Moncton to nearby Shediac. This was followed by a line from Moncton to Saint John opening in 1859. At about the time of the arrival of the railway, the popularity of steam-powered ships forced an end to the era of wooden shipbuilding. The Salter shipyard closed in 1858. The resulting industrial collapse caused Moncton to surrender its civic charter in 1862.
Moncton's economic depression did not last long and a second era of prosperity came to the area in 1871 when Moncton was selected to be the headquarters of the Intercolonial Railway of Canada (ICR). The arrival of the ICR in Moncton was a seminal event for the community. For the next 120 years, the history of the city would be firmly linked with that of the railway. In 1875, Moncton was able to reincorporate as a town and one year later, the ICR line to Quebec was opened. The railway boom that emanated from this and the associated employment growth allowed Moncton to achieve city status on April 23, 1890.
Moncton grew rapidly during the early 20th century, particularly after provincial lobbying helped the city become the eastern terminus of the massive National Transcontinental Railway project in 1912. In 1918, the ICR and National Transcontinental Railway (NTR) were merged by the federal government into the newly formed Canadian National Railways (CNR) system. The ICR shops would become CNR's major locomotive repair facility for the Maritimes, and Moncton became the headquarters for CNR's Maritime division. The T. Eaton Company's catalogue warehouse moved to the city in the early 1920s, employing over 700 people. Transportation and distribution became increasingly important to the Moncton economy throughout the middle part of the 20th century. The first scheduled air service out of Moncton was established in 1928. During the Second World War, the Canadian Army built a large military supply base in the city to service the Maritime military establishment. The CNR continued to dominate the economy of the city, with railway employment in Moncton peaking at nearly six thousand workers in the 1950s before beginning a slow decline.
Moncton was placed on the Trans-Canada Highway network in the early 1960s after Route 2 was built along the northern perimeter of the city. Later, Route 15 was built between the city and Shediac. At the same time, the Petitcodiac River Causeway was constructed. The Université de Moncton was founded in 1963. This institution became an important resource in the development of Acadian culture in the area.
The late 1970s and the 1980s were a period of economic hardship for the city as several major employers closed or restructured. The Eaton's catalogue division, CNR's locomotive shops and CFB Moncton were closed during this time, throwing thousands of citizens out of work.
The city diversified in the early 1990s with the rise of information technology, led by call centres which made use of the city's bilingual workforce. By the late 1990s, retail, manufacturing and service expansion began to occur in all sectors and within a decade of the closure of the CNR locomotive shops Moncton had more than made up for its employment losses. This dramatic turnaround in the fortunes of the city has been termed the "Moncton Miracle".
The growth of the community has continued unabated since the 1990s and has actually been accelerating. The confidence of the community has been bolstered by its ability to host major events such as the Francophonie Summit in 1999, a Rolling Stones concert in 2005, the Memorial Cup in 2006 and both the IAAF World Junior Championships in Athletics and a neutral site regular season CFL football game in 2010. Positive developments include the Atlantic Baptist University (later renamed Crandall University) achieving full university status and relocating to a new campus in 1996, the Greater Moncton Roméo LeBlanc International Airport opening a new terminal building and becoming a designated international airport in 2002, and the opening of the new Gunningsville Bridge to Riverview in 2005. In 2002, Moncton became Canada's first officially bilingual city. In the 2006 census, Moncton was designated a Census Metropolitan Area and became the largest metropolitan area in the province of New Brunswick.
Moncton lies in southeastern New Brunswick, at the geographic centre of the Maritime Provinces. The city is located along the north bank of the Petitcodiac River at a point where the river bends acutely from a west−east to north−south flow. This geographical feature has contributed significantly to historical names given to the community. "Petitcodiac" in the Mi'kmaq language has been translated as meaning "bends like a bow". The early Acadian settlers in the region named their community "Le Coude" which means "the elbow". Subsequent English immigrants changed the name of the settlement to "The Bend of the Petitcodiac" (or simply The Bend).
The Petitcodiac River valley at Moncton is broad and relatively flat, bounded by a long ridge to the north (Lutes Mountain) and by the rugged Caledonia Highlands to the south. Moncton lies at the original head of navigation on the river; however, a causeway to Riverview (constructed in 1968) resulted in extensive sedimentation of the river channel downstream and rendered the Moncton area of the waterway unnavigable. On April 14, 2010, the causeway gates were opened in an effort to restore the silt-laden river.
The Petitcodiac River exhibits one of North America's few tidal bores: a regularly occurring wave that travels up the river on the leading edge of the incoming tide. The bore results from the extreme tides of the Bay of Fundy. Originally, the bore was very impressive, sometimes reaching considerable heights and extending across the full width of the Petitcodiac River in the Moncton area. This wave occurred twice a day at high tide, travelling upriver and producing an audible roar. Unsurprisingly, the bore became a very popular early tourist attraction for the city, but when the Petitcodiac causeway was built in the 1960s, the river channel quickly silted in and the bore was reduced to a fraction of its former height. Since the causeway gates were opened on April 14, 2010, tidal bores of a size unseen for many years have again been observed.
Despite Moncton's proximity to both the Bay of Fundy and the Northumberland Strait, the climate tends to be more continental than maritime during the summer and winter seasons, with maritime influences somewhat tempering the transitional seasons of spring and autumn.
Moncton has a warm summer continental climate (Köppen climate classification "Dfb") with uniform precipitation distribution. Winter days are typically cold but generally sunny with solar radiation generating some warmth. Daytime high temperatures usually range a few degrees below the freezing point. Major snowfalls can result from Nor'easter ocean storms moving up the east coast of North America. These major snowfalls typically average 20–30 cm (8–12 in) and are frequently mixed with rain or freezing rain. Spring is frequently delayed because the sea ice that forms in the nearby Gulf of St. Lawrence during the previous winter requires time to melt, and this will cool onshore winds, which can extend inland as far as Moncton. The ice burden in the gulf has diminished considerably over the course of the last decade (which may be a consequence of global warming), and the springtime cooling effect has weakened as a result. Daytime temperatures above freezing are typical by late February. Trees are usually in full leaf by late May. Summers are warm, sometimes hot, and humid due to the seasonal prevailing westerly winds strengthening the continental tendencies of the local climate. Daytime highs sometimes reach more than 30 °C (86 °F). Rainfall is generally modest, especially in late July and August, and periods of drought are not uncommon. Autumn daytime temperatures remain mild until late October. First snowfalls usually do not occur until late November and consistent snow cover on the ground does not happen until late December. The Fundy coast of New Brunswick occasionally experiences the effects of post-tropical storms. The stormiest weather of the year, with the greatest precipitation and the strongest winds, usually occurs during the fall/winter transition (November to mid-January).
Moncton's record high temperature was set on August 18 and 19, 1935, and its record low on February 5, 1948.
Moncton generally remains a "low rise" city. The city's skyline nevertheless encompasses many buildings and structures in architectural styles from many periods. The most dominant structure in the city is the Bell Aliant Tower, a microwave communications tower built in 1971. When it was constructed, it was the tallest microwave communications tower of its kind in North America. It remains the tallest structure in Moncton, dwarfing the neighbouring Place L'Assomption, and is also the tallest free-standing structure in all four Atlantic provinces. Assumption Place is a 20-story office building and the headquarters of Assumption Mutual Life Insurance; it is tied with Brunswick Square (Saint John) as the tallest building in the province. The Blue Cross Centre is a large nine-story building in Downtown Moncton. Although only nine stories tall, the building is architecturally distinctive, encompasses a full city block, and is the largest office building in the city in terms of square footage. It is the home of Medavie Blue Cross and the Moncton Public Library. About a half dozen other buildings in Moncton range between eight and twelve stories in height, including the Delta Beausejour and Brunswick Crowne Plaza Hotels and the Terminal Plaza office complex.
The most popular park in the area is Centennial Park, which contains an artificial beach, lighted cross country skiing and hiking trails, the city's largest playground, lawn bowling and tennis facilities, a boating pond, a treetop adventure course, and Rocky Stone Field, a city owned 2,500 seat football stadium with artificial turf, and home to the Moncton Minor Football Association.
The city's other main parks are Mapleton Park in the city's north end, Irishtown Nature Park (one of the largest urban nature parks in Canada) and St. Anselme Park (located in Dieppe). The numerous neighbourhood parks throughout the metro Moncton area include Bore View Park (which overlooks the Petitcodiac River), and the downtown Victoria Park, which features a bandshell, flower gardens, fountain, and the city's cenotaph. There is an extensive system of hiking and biking trails in Metro Moncton. The Riverfront Trail is part of the Trans Canada Trail system, and various monuments and pavilions can be found along its length.
The population of Moncton is 71,889 (2016 Census). Along with Fredericton and Halifax, Moncton is one of only three Maritime cities to register a population increase in recent years. The median age in Moncton is 41.4, close to the national median age of 41.2.
Moncton is a bilingual city. About two-thirds of its residents are native English speakers, while the remaining third is French-speaking. Almost all Monctonians speak English (64.6%) or French (31.9%) as first languages; 1.6% speak both languages as a first language, and 6.9% speak another language. About 46% of the city population is bilingual and understands both English and French; the only other Canadian cities that approach this level of linguistic duality are Ottawa, Sudbury, and Montreal. Moncton became the first officially bilingual city in the country in 2002. The adjacent city of Dieppe is about 73% Francophone and has benefited from an ongoing rural depopulation of the Acadian Peninsula and areas in northern and eastern New Brunswick. The town of Riverview meanwhile is heavily (95%) Anglophone.
As of 2016, approximately 87.6% of Moncton's residents were white, while 7.4% were visible minorities and 5% were aboriginal. The largest visible minority groups in Moncton were Black (2.6%), Arab (1.3%), Chinese (0.9%), and Korean, Southeast Asian, South Asian and Filipino (0.5% each).
The Moncton census metropolitan area (CMA) had a population of 144,810 in 2016, ranking it as the 29th largest CMA in Canada.
The underpinnings of the local economy are based on Moncton's heritage as a commercial, distribution, transportation, and retailing centre. This is due to Moncton's central location in the Maritimes: it has the largest catchment area in Atlantic Canada with 1.6 million people living within a three-hour drive of the city. The insurance, information technology, educational, and health care sectors also are major factors in the local economy with the city's two hospitals alone employing over five thousand people.
Moncton has garnered national attention because of the strength of its economy. The local unemployment rate averages around 6%, which is below the national average. In 2004 Canadian Business Magazine named it "The best city for business in Canada", and in 2007 FDi magazine named it the fifth most business friendly small-sized city in North America.
A number of nationally or regionally prominent corporations have their head offices in Moncton including Atlantic Lottery Corporation, Assumption Life Insurance, Medavie Blue Cross Insurance, Armour Transportation Systems and Major Drilling Group International. Moncton also has federal public service employment, with regional head offices for Corrections Canada, Transport Canada, the Gulf Fisheries Centre and the Atlantic Canada Opportunities Agency.
There are 37 call centres in the city which employ over 5,000 people. Some of the larger centres include Asurion, Numeris (formerly BBM Canada), Exxon Mobil, Royal Bank of Canada, Tangerine Bank (formerly ING Direct), UPS, Fairmont Hotels and Resorts, Rogers Communications, and Nordia Inc. A growing high-tech sector includes companies such as Gtech, Nanoptix, International Game Technology, OAO Technology Solutions, BMM Test Labs, TrustMe, and BelTek Systems Design. In 2018, TD Bank announced a new banking services centre to be located in Moncton which will employ over 1,000 people (including a previously announced customer contact centre).
Several arms of the Irving corporation have their head offices and/or major operations in greater Moncton. These include Midland Transport, Majesta/Royale Tissues, Irving Personal Care, Master Packaging, Brunswick News, and Cavendish Farms. Kent Building Supplies (an Irving subsidiary) opened their main distribution centre in the Caledonia Industrial Park in 2014. The Irving group of companies employs several thousand people in the Moncton region.
There are three large industrial parks in the metropolitan area. The Irving operations are concentrated in the Dieppe Industrial Park. The Moncton Industrial Park in the city's west end has been expanded. Molson/Coors opened a brewery in the Caledonia Industrial Park in 2007, its first new brewery in over fifty years. All three industrial parks also have large concentrations of warehousing and regional trucking facilities.
A new four-lane Gunningsville Bridge was opened in 2005, connecting downtown Riverview directly with downtown Moncton. On the Moncton side, the bridge connects with an extension of Vaughan Harvey Boulevard as well as with Assumption Boulevard, and it has served as a catalyst for economic growth in the downtown area. This has already become evident: an expansion to the Blue Cross Centre was completed in 2006, a Marriott Residence Inn opened in 2008, the new regional law courts on Assumption Blvd opened in 2011, and a new 8,800-seat downtown arena (the Avenir Centre) opened in September 2018. On the Riverview side, the Gunningsville Bridge now connects to a new ring road around the town and is expected to serve as a catalyst for development in east Riverview.
The retail sector in Moncton has become one of the most important pillars of the local economy. Major retail projects such as Champlain Place in Dieppe and the Wheeler Park Power Centre on Trinity Drive have become major destinations for locals and for tourists alike.
Tourism is an important industry in Moncton and historically owes its origins to the presence of two natural attractions, the tidal bore of the Petitcodiac River (see above) and the optical illusion of Magnetic Hill. The tidal bore was the first phenomenon to become an attraction but the construction of the Petitcodiac causeway in the 1960s effectively extirpated the attraction. Magnetic Hill, on the city's northwest outskirts, is the city's most famous attraction. The Magnetic Hill area includes (in addition to the phenomenon itself), a golf course, major water park, zoo, and an outdoor concert facility. A $90 million casino/hotel/entertainment complex opened at Magnetic Hill in 2010.
Moncton's Capitol Theatre, an 800-seat restored 1920s-era vaudeville house on Main Street, is the main centre for cultural entertainment for the city. The theatre hosts a performing arts series and provides a venue for various theatrical performances as well as Symphony New Brunswick and the Atlantic Ballet Theatre of Canada. The adjacent Empress Theatre offers space for smaller performances and recitals. The Molson Canadian Centre at Casino New Brunswick provides a 2,000 seat venue for major touring artists and performing groups.
The Moncton-based Atlantic Ballet Theatre tours mainly in Atlantic Canada but also tours nationally and internationally on occasion. Théâtre l'Escaouette is a Francophone live theatre company which has its own auditorium and performance space on Botsford Street. The Anglophone Live Bait Theatre is based in the nearby university town of Sackville. There are several private dance and music academies in the metropolitan area, including the Capitol Theatre's own performing arts school.
The Aberdeen Cultural Centre is a major Acadian cultural cooperative containing multiple studios and galleries. Among other tenants, the centre houses the Galerie Sans Nom, the principal private art gallery in the city.
The city's two main museums are the Moncton Museum at Resurgo Place on Mountain Road and the Musée acadien at the Université de Moncton. The Moncton Museum reopened following major renovations and an expansion that added the Transportation Discovery Centre, which includes many hands-on exhibits highlighting the city's transportation heritage. The city also has several recognized historical sites. The Free Meeting House, built in 1821, is a New England-style meeting house located adjacent to the Moncton Museum. The Thomas Williams House, a former home of a city industrialist built in 1883, is now maintained in period style and serves as a genealogical research centre; it is also home to several multicultural organizations. The Treitz Haus, located on the riverfront adjacent to Bore View Park, has been dated to 1769 both by architectural style and by dendrochronology. It is the only surviving building from the Pennsylvania Dutch era and the oldest surviving building in the province of New Brunswick.
In film production, the city has since 1974 been home to the National Film Board of Canada's French-language Studio Acadie.
Moncton is home to the Frye Festival, an annual bilingual literary celebration held in honour of world-renowned literary critic and favourite son Northrop Frye. This event attracts noted writers and poets from around the world and takes place in the month of April.
The Atlantic Nationals Automotive Extravaganza, held each July, is the largest annual gathering of classic cars in Canada. Other notable events include the Atlantic Seafood Festival in August, as well as the HubCap Comedy Festival and the World Wine Festival, both held in the spring.
The Avenir Centre is an 8,800-seat arena which serves as a venue for major concerts and sporting events and is the home of the Moncton Wildcats of the Quebec Major Junior Hockey League and the Moncton Magic of the National Basketball League of Canada. The CN Sportplex is a major recreational facility built on the former CN Shops property. It includes ten ballfields, six soccer fields, an indoor rink complex with four ice surfaces (the Superior Propane Centre) and the Hollis Wealth Sports Dome, an indoor air-supported multi-use building large enough to allow year-round football, soccer and golf activities. A newly constructed YMCA near the CN Sportsplex has extensive cardio and weight training facilities, as well as three indoor pools. The CEPS at Université de Moncton contains an indoor track and a swimming pool with diving towers. The new Moncton Stadium, also located on the U de M campus, was built for the 2010 IAAF World Junior Track & Field Championships. It has permanent seating for 10,000 but is expandable to a capacity of over 20,000 for events such as professional Canadian football. Atlantic Canada's only velodrome, in Dieppe, closed in May 2018 after 17 years of operation due to safety concerns. The metro area has a total of 12 indoor hockey rinks and three curling clubs. Other public sporting and recreational facilities are scattered throughout the metropolitan area, including an $18 million aquatic centre in Dieppe opened in 2009.
The Moncton Wildcats play major junior hockey in the Quebec Major Junior Hockey League (QMJHL). They won the President's Cup, the QMJHL championship, in both 2006 and 2010. Historically there has been a longstanding presence of a Moncton-based team in the Maritime Junior A Hockey League, but the Dieppe Commandos (formerly the Moncton Beavers) relocated to Edmundston at the end of the 2017 season. Moncton was also home to a professional American Hockey League franchise from 1978 to 1994; the New Brunswick Hawks won the AHL's Calder Cup by defeating the Binghamton Whalers in 1981–82. The Moncton Mets played baseball in the New Brunswick Senior Baseball League and won the Canadian Senior Baseball Championship in 2006. In 2015, the Moncton Fisher Cats, formed by a merger of the Moncton Mets and the Hub City Brewers, began play in the New Brunswick Senior Baseball League. In 2011, the Moncton Miracles began play as one of the seven charter franchises of the professional National Basketball League of Canada. The franchise folded at the end of the 2016–17 season and was immediately replaced by a new NBL franchise, the Moncton Magic, who played their inaugural season in 2017–18. The Université de Moncton fields a number of active university sports programs, including hockey, soccer, and volleyball, as part of Canadian Interuniversity Sport (CIS).
Moncton has hosted many large sporting events. The 2006 Memorial Cup was held in Moncton with the hometown Moncton Wildcats losing in the championship final to rival Quebec Remparts. Moncton hosted the Canadian Interuniversity Sports (CIS) Men's University Hockey Championship in 2007 and 2008. The World Men's Curling Championship was held in Moncton in 2009; the second time this event has taken place in the city.
Moncton also hosted the 2010 IAAF World Junior Championships in Athletics. This was the largest sporting event ever held in Atlantic Canada, with athletes from over 170 countries in attendance. The new 10,000 seat capacity Moncton Stadium was built for this event on the Université de Moncton campus. The construction of this new stadium led directly to Moncton being awarded a regular season neutral site CFL game between the Toronto Argonauts and the Edmonton Eskimos, which was held on September 26, 2010. This was the first neutral site regular season game in the history of the Canadian Football League and was played before a capacity crowd of 20,750. Additional CFL regular season games were held in 2011 and 2013, and again on August 25, 2019.
Moncton was one of only six Canadian cities chosen to host the 2015 FIFA Women's World Cup.
The municipal government consists of a mayor and ten city councillors elected to four-year terms of office. The council is non-partisan with the mayor serving as the chairman, casting a ballot only in cases of a tie vote. There are four wards electing two councillors each with an additional two councillors selected at large by the general electorate. Day-to-day operation of the city is under the control of a City Manager.
Moncton is in the federal riding of Moncton—Riverview—Dieppe. Portions of Dieppe are in the federal riding of Beauséjour, and portions of Riverview are in the riding of Fundy Royal. In the current federal parliament, all three members from the metropolitan area belong to the Liberal party.
Aside from locally formed militia units, the military did not have a significant presence in the Moncton area until the beginning of the Second World War. In 1940, a large military supply base (later known as CFB Moncton) was constructed on a railway spur line north of downtown next to the CNR shops. This base served as the main supply depot for the large wartime military establishment in the Maritimes. In addition, two Commonwealth Air Training Plan bases were also built in the Moncton area during the war: No. 8 Service Flying Training School, RCAF, and No. 31 Personnel Depot, RAF. The RCAF also operated No. 5 Supply Depot in Moncton. A naval listening station was also constructed in Coverdale (Riverview) in 1941 to help in coordinating radar activities in the North Atlantic. Military flight training in the Moncton area terminated at the end of World War II and the naval listening station closed in 1971. CFB Moncton remained open to supply the maritime military establishment until just after the end of the Cold War.
With the closure of CFB Moncton in the early 1990s, the military presence in Moncton was significantly reduced. The northern portion of the former base property has been turned over to the Canada Lands Corporation and is slowly being redeveloped. The southern part of the former base remains an active DND property, now termed the Moncton Garrison, and is affiliated with CFB Gagetown. Resident components of the garrison include 1 Engineer Support Unit (Regular Force). The garrison also houses the 37 Canadian Brigade Group Headquarters (Reserve Force) and one of 37 Brigade's constituent units, the 8th Canadian Hussars (Princess Louise's), an armoured reconnaissance regiment. 3 Area Support Unit Det Moncton and 42 Canadian Forces Health Services Centre Det Moncton provide logistical support for the base. In 2013, the last Regular Force units left the Moncton base, but the reserve units remain active and Moncton remains the headquarters of 37 Canadian Brigade Group.
There are two major regional referral and teaching hospitals in Moncton. The Moncton Hospital has approximately 381 inpatient beds and is affiliated with Dalhousie University Medical School. It is home to the Northumberland family medicine residency training program and is a site for third and fourth year clinical training for medical students in the Dalhousie Medicine New Brunswick Training Program. The hospital hosts UNB degree programs in nursing and medical x-ray technology and professional internships in fields such as dietetics. Specialized medical services at the hospital include neurosurgery, peripheral and neuro-interventional radiology, vascular surgery, thoracic surgery, hepatobiliary surgery, orthopedics, trauma, burn unit, medical oncology, neonatal intensive care, and adolescent psychiatry. A $48 million expansion to the hospital was completed in 2009 and contains a new laboratory, ambulatory care centre, and provincial level one trauma centre. A new oncology clinic was built at the hospital and opened in late 2014. The Moncton Hospital is managed by Horizon Health Network (formerly the South East Regional Health Authority).
The Dr. Georges-L.-Dumont University Hospital Centre has about 302 beds and hosts a medical training program through the local CFMNB and distant Université de Sherbrooke Medical School. There are also degree programs in nursing, medical x-ray technology, medical laboratory technology and inhalotherapy which are administered by Université de Moncton. Specialized medical services include medical oncology, radiation oncology, orthopedics, vascular surgery, and nephrology.
A cardiac cath lab is being studied for the hospital and a new PET/CT scanner has been installed. A $75 million expansion for ambulatory care, expanded surgery suites, and medical training is currently under construction. The hospital is also the location of the Atlantic Cancer Research Institute. This hospital is managed by the francophone Vitalité Health Network.
The internal working languages of the hospitals are English for the Moncton Hospital (Horizon Health Network) and French for the Dumont Hospital (Vitalité). However both health networks and their hospitals are required to provide services to the public in both official languages, in accordance with the New Brunswick Official Languages Act.
Moncton is served by the Greater Moncton Roméo LeBlanc International Airport (YQM), renamed in 2016 for former Canadian Governor General (and native son) Roméo LeBlanc. A new airport terminal with an international arrivals area was opened in 2002 by Her Majesty Queen Elizabeth II. The GMIA handles about 677,000 passengers per year, making it the second busiest airport in the Maritimes in terms of passenger volume, and it is the 10th busiest airport in Canada in terms of freight. Regular scheduled destinations include Halifax, Montreal, Ottawa, and Toronto, with scheduled service providers including Air Canada, Air Canada Rouge, WestJet and Porter Airlines. Seasonal direct air service is provided to destinations in Cuba, Mexico, the Dominican Republic, Jamaica, and Florida, with operators including Sunwing Airlines, Air Transat, and WestJet. FedEx, UPS, and Purolator all have their Atlantic Canadian air cargo bases at the facility. The GMIA is the home of the Moncton Flight College, the largest pilot training institution in Canada, and is also the base for the regional RCMP air service, the New Brunswick Air Ambulance Service and the regional Transport Canada hangar and depot.
There is a second smaller aerodrome near Elmwood Drive. McEwen Airfield (CCG4) is a private airstrip used for general aviation. Skydive Moncton operates the province's only nationally certified sports parachute club out of this facility.
The Moncton Area Control Centre is one of only seven regional air traffic control centres in Canada. This centre monitors over 430,000 flights a year, 80% of which are either entering or leaving North American airspace.
Moncton lies on Route 2 of the Trans-Canada Highway, which leads to Nova Scotia in the east and to Fredericton and Quebec in the west. Route 15 intersects Route 2 at the eastern outskirts of Moncton and heads northeast toward Shediac and northern New Brunswick; Route 16 connects to Route 15 at Shediac and leads to Port Elgin and Prince Edward Island. Route 1 intersects Route 2 west of the city and leads to Saint John and the U.S. border. Wheeler Boulevard (Route 15) serves as an internal ring road, extending from the Petitcodiac River Causeway to Dieppe before exiting the city toward Shediac. Inside the city it is an expressway bounded at either end by traffic circles.
Greater Moncton is served by Codiac Transpo, which is operated by the City of Moncton. It operates 40 buses on 19 routes throughout Moncton, Dieppe, and Riverview.
Maritime Bus provides intercity service to the region. Moncton is the largest hub in the system. All other major centres in New Brunswick, as well as Charlottetown, Halifax, and Truro are served out of the Moncton terminal.
Freight rail transportation in Moncton is provided by Canadian National Railway. Although the presence of the CNR in Moncton has diminished greatly since the 1970s, the railway still maintains a large classification yard and intermodal facility in the west end of the city, and its regional headquarters for Atlantic Canada is still located here as well. Passenger rail transportation is provided by Via Rail Canada, whose train the "Ocean" serves the Moncton railway station three days per week in each direction, eastbound to Halifax and westbound to Montreal, Quebec. The downtown Via station has been refurbished and also serves as the terminal for the Maritime Bus intercity bus service.
The South School Board administers 10 Francophone schools, including high schools École Mathieu-Martin and École L'Odyssée. The East School Board administers 25 Anglophone schools including Moncton, Harrison Trimble, Bernice MacNaughton, and Riverview high schools.
Post secondary education in Moncton:
Moncton's daily newspaper is the "Times & Transcript", which has the highest circulation of any daily newspaper in New Brunswick. More than 60 percent of city households subscribe daily, and more than 90 percent of Moncton residents read the Times & Transcript at least once a week. The city's other publications include "L'Acadie Nouvelle", a French newspaper published in Caraquet in northern New Brunswick.
There are 16 broadcast radio stations in the city covering a variety of genres and interests, all on the FM dial. Ten of these stations are English and six are French.
Rogers Cable has its provincial headquarters and main production facilities in Moncton and broadcasts on two community channels, Cable 9 in French and Cable 10 in English. The French-language arm of the CBC, Radio-Canada, maintains its Atlantic Canadian headquarters in Moncton. There are three other broadcast television stations in Moncton and these represent all of the major national networks.
Moncton has been home to a number of notable people, including National Hockey League Hall of Famer and NHL scoring champion Gordie Drillon, World and Olympic champion curler Russ Howard, distinguished literary critic and theorist Northrop Frye, former Governor General of Canada Roméo LeBlanc, and former Supreme Court Justice Ivan Cleveland Rand, developer of the Rand Formula and Canada's representative on the UNSCOP commission. Trudy Mackay FRS, a renowned quantitative geneticist, member of the Royal Society and the National Academy of Sciences, and recipient of the 2016 Wolf Prize in Agriculture, was born in Moncton. Robb Wells, the actor who plays Ricky on the Showcase comedy "Trailer Park Boys", hails from Moncton, as do Chris Lee, Jacques Daigle, indie rock musician Julie Doiron, and Holly Dignard, the actress who plays Nicole Miller on the CTV series "Whistler". Harry Currie, a noted Canadian conductor, musician, educator, journalist and author, was born in Moncton and graduated from MHS. Antonine Maillet, a francophone author and recipient of the Order of Canada and the "Prix Goncourt", the highest honour in francophone literature, is also from Moncton. France Daigle, another acclaimed Acadian novelist and playwright, noted for her pioneering use of chiac in Acadian literature, was born in and resides in Moncton; she received the 2012 Governor General's Literary Award for French-language fiction for her novel "Pour Sûr" (translated into English as "For Sure"). Canadian hockey star Sidney Crosby graduated from Harrison Trimble High School in Moncton.
Model theory
In mathematics, model theory is the study of classes of mathematical structures (e.g. groups, fields, graphs, universes of set theory) from the perspective of mathematical logic. The objects of study are models of theories in a formal language. A set of sentences in a formal language is one of the components that form a theory. A model of a theory is a structure (e.g. an interpretation) that satisfies the sentences of that theory.
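To make the notions of theory and model concrete, here is a standard textbook example (an illustration added for this summary, not drawn from the article's own list of examples): the first-order theory of dense linear orders without endpoints, in a language with a single binary relation symbol $<$.

```latex
T_{\mathrm{DLO}} \;=\; \bigl\{\;
\forall x\,\neg(x < x),\quad
\forall x\,\forall y\,\forall z\,\bigl(x < y \wedge y < z \rightarrow x < z\bigr),\quad
\forall x\,\forall y\,\bigl(x < y \vee x = y \vee y < x\bigr),
\forall x\,\forall y\,\bigl(x < y \rightarrow \exists z\,(x < z \wedge z < y)\bigr),\quad
\forall x\,\exists y\,(x < y),\quad
\forall x\,\exists y\,(y < x)
\;\bigr\}
```

The ordered rationals satisfy every sentence of $T_{\mathrm{DLO}}$, written $(\mathbb{Q}, <) \models T_{\mathrm{DLO}}$, and are therefore a model of the theory. The ordered integers $(\mathbb{Z}, <)$ are not a model, since they violate the density axiom: there is no integer strictly between 0 and 1.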
Model theory recognizes and is intimately concerned with a duality: it examines semantical elements (meaning and truth) by means of syntactical elements (formulas and proofs) of a corresponding language. In a summary definition, dating from 1973: universal algebra + logic = model theory.
Model theory developed rapidly during the 1990s, and a more modern definition is provided by Wilfrid Hodges (1997): model theory = algebraic geometry − fields.
Other nearby areas of mathematics include combinatorics, number theory, arithmetic dynamics, analytic functions, and non-standard analysis.
In a similar way to proof theory, model theory is situated in an area of interdisciplinarity among mathematics, philosophy, and computer science. The most prominent professional organization in the field of model theory is the Association for Symbolic Logic.
This page focuses on finitary first order model theory of infinite structures. Finite model theory, which concentrates on finite structures, diverges significantly from the study of infinite structures in both the problems studied and the techniques used. Model theory in higher-order logics or infinitary logics is hampered by the fact that completeness and compactness do not in general hold for these logics. However, a great deal of study has also been done in such logics.
Informally, model theory can be divided into classical model theory, model theory applied to groups and fields, and geometric model theory. A missing subdivision is computable model theory, but this can arguably be viewed as an independent subfield of logic.
Examples of early theorems from classical model theory include Gödel's completeness theorem, the upward and downward Löwenheim–Skolem theorems, Vaught's two-cardinal theorem, Scott's isomorphism theorem, the omitting types theorem, and the Ryll-Nardzewski theorem. Examples of early results from model theory applied to fields are Tarski's elimination of quantifiers for real closed fields, Ax's theorem on pseudo-finite fields, and Robinson's development of non-standard analysis. An important step in the evolution of classical model theory occurred with the birth of stability theory (through Morley's theorem on uncountably categorical theories and Shelah's classification program), which developed a calculus of independence and rank based on syntactical conditions satisfied by theories.
During the last several decades applied model theory has repeatedly merged with the more pure stability theory. The result of this synthesis is called geometric model theory in this article (which is taken to include o-minimality, for example, as well as classical geometric stability theory). An example of a proof from geometric model theory is Hrushovski's proof of the Mordell–Lang conjecture for function fields. The ambition of geometric model theory is to provide a "geography of mathematics" by embarking on a detailed study of definable sets in various mathematical structures, aided by the substantial tools developed in the study of pure model theory.
Fundamental concepts in universal algebra are signatures σ and σ-algebras. Since these concepts are formally defined in the article on structures, the present article is an informal introduction which consists of examples of the way these terms are used.
This is a very efficient way to define most classes of algebraic structures, because there is also the concept of σ-homomorphism, which correctly specializes to the usual notions of homomorphism for groups, semigroups, magmas and rings. For this to work, the signature must be chosen well.
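The idea of a signature picking out a class of algebraic structures can be made concrete. The following is a hedged sketch (the structure Z/5Z and the identity checked are illustrative choices, not taken from the text): a finite σ-structure for a ring-style signature, together with a pointwise check that it satisfies a distributivity identity.

```python
from itertools import product

# Illustrative finite structure: Z/5Z with addition and multiplication mod 5.
n = 5
universe = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# Check the identity (u + v) * w = u*w + v*w at every point of the universe.
holds = all(mul(add(u, v), w) == add(mul(u, w), mul(v, w))
            for u, v, w in product(universe, repeat=3))
print(holds)
```

A class of all σ-structures satisfying a fixed set of such identities is exactly an equational class in the sense described below.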
Terms, such as a σring-term "t"("u","v","w") built from the operation symbols + and ×, are used to define identities, but also to construct free algebras. An equational class is a class of structures which, like the examples above and many others, is defined as the class of all σ-structures which satisfy a certain set of identities. Birkhoff's theorem states:
An important non-trivial tool in universal algebra is the ultraproduct ∏"i"∈"I" "Ai" / "U", where "I" is an infinite set indexing a system of σ-structures "Ai", and "U" is an ultrafilter on "I".
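The genuinely interesting ultraproducts arise from an infinite index set and a nonprincipal ultrafilter, which cannot be exhibited computationally. As a hedged finite illustration of the construction itself (an assumption for exposition only): over a finite index set every ultrafilter is principal, generated by some i0, and the quotient of the product by U-equivalence then collapses to the single factor A[i0].

```python
from itertools import product

I = [0, 1, 2]
A = {0: ['a', 'b'], 1: [0, 1, 2], 2: ['x', 'y']}   # a family of finite sets
i0 = 1
U_contains = lambda S: i0 in S                     # the principal ultrafilter at i0

def equivalent(f, g):
    # f ~ g iff the set of indices where they agree belongs to U.
    return U_contains({i for i in I if f[i] == g[i]})

tuples = list(product(*(A[i] for i in I)))
reps = []
for f in tuples:
    if not any(equivalent(f, r) for r in reps):
        reps.append(f)

# One equivalence class per element of A[i0]: the quotient is a copy of A[i0].
print(len(reps), len(A[i0]))
```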
While model theory is generally considered a part of mathematical logic, universal algebra, which grew out of Alfred North Whitehead's (1898) work on abstract algebra, is part of algebra. This is reflected by their respective MSC classifications. Nevertheless, model theory can be seen as an extension of universal algebra.
Finite model theory is the area of model theory which has the closest ties to universal algebra. Like some parts of universal algebra, and in contrast with the other areas of model theory, it is mainly concerned with finite algebras, or more generally, with finite σ-structures for signatures σ which may contain relation symbols, as in the following example: the signature σgrph of graphs consists of a single binary relation symbol "E", so that a graph is a σgrph-structure.
A σ-homomorphism is a map that commutes with the operations and preserves the relations in σ. This definition gives rise to the usual notion of graph homomorphism, which has the interesting property that a bijective homomorphism need not be invertible. Structures are also a part of universal algebra; after all, some algebraic structures such as ordered groups have a binary relation '<'. (A property of '<', such as transitivity, is written as a first-order sentence.)
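The remark that a bijective homomorphism need not be invertible can be made concrete with a minimal sketch (the two graphs here are illustrative choices): take G a path on {0, 1, 2} and H the same vertices with one extra edge; the identity map is a bijective homomorphism G → H, but its inverse fails to be one.

```python
def symm(edges):
    # Symmetric closure, so E is an undirected edge relation.
    return {(u, v) for u, v in edges} | {(v, u) for u, v in edges}

V = {0, 1, 2}
G = symm({(0, 1), (1, 2)})        # a path
H = G | symm({(0, 2)})            # the path plus the edge {0, 2}

def is_homomorphism(h, E_src, E_dst):
    # h preserves the edge relation: (u, v) in E_src implies (h(u), h(v)) in E_dst.
    return all((h[u], h[v]) in E_dst for (u, v) in E_src)

identity = {v: v for v in V}
forward = is_homomorphism(identity, G, H)    # bijective homomorphism G -> H
backward = is_homomorphism(identity, H, G)   # the inverse map: edge {0,2} has no image edge
print(forward, backward)
```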
The logics employed in finite model theory are often substantially more expressive than first-order logic, the standard logic for model theory of infinite structures.
Whereas universal algebra provides the semantics for a signature, logic provides the syntax. With terms, identities and quasi-identities, even universal algebra has some limited syntactic tools; first-order logic is the result of making quantification explicit and adding negation into the picture.
A first-order formula is built out of atomic formulas such as "R"("f"("x","y"),"z") or "y" = "x" + 1 by means of the Boolean connectives ¬, ∧, ∨, → and prefixing of quantifiers ∀"v" or ∃"v". A sentence is a formula in which each occurrence of a variable is in the scope of a corresponding quantifier. Examples of formulas are φ (or φ(x), to mark the fact that at most "x" is an unbound variable in φ) and ψ, defined as follows:
φ: ∀"u"∀"v" (∃"w" ("x" × "w" = "u" × "v") → (∃"w" ("x" × "w" = "u") ∨ ∃"w" ("x" × "w" = "v"))) ∧ "x" ≠ 0 ∧ "x" ≠ 1
ψ: ∀"u"∀"v" ("u" × "v" = "x" → ("u" = "x" ∨ "v" = "x")) ∧ "x" ≠ 0 ∧ "x" ≠ 1
(Note that the equality symbol has a double meaning here.) It is intuitively clear how to translate such formulas into mathematical meaning. In the σsmr-structure 𝒩 of the natural numbers, for example, an element "n" satisfies the formula φ if and only if "n" is a prime number. The formula ψ similarly defines irreducibility. Tarski gave a rigorous definition, sometimes called "Tarski's definition of truth", for the satisfaction relation ⊨, so that one easily proves: 𝒩 ⊨ φ("n") if and only if "n" is a prime number, and 𝒩 ⊨ ψ("n") if and only if "n" is irreducible.
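The definable set cut out by the irreducibility formula ψ can be computed over an initial segment of the natural numbers. A small sketch: the universal quantifiers can soundly be bounded by "x", since "u" × "v" = "x" with "x" ≥ 1 forces "u", "v" ≤ "x"; this bounding is an implementation choice, not part of the formula itself.

```python
def satisfies_psi(x):
    # psi(x): forall u forall v (u*v = x -> (u = x or v = x)), with x != 0, x != 1.
    if x in (0, 1):            # the conjuncts x != 0 and x != 1
        return False
    return all(u == x or v == x
               for u in range(x + 1) for v in range(x + 1)
               if u * v == x)

# The subset of {0, ..., 29} defined by psi: exactly the primes below 30.
definable_set = [x for x in range(30) if satisfies_psi(x)]
print(definable_set)
```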
A set "T" of sentences is called a (first-order) theory. A theory is satisfiable if it has a model ℳ, i.e. a structure (of the appropriate signature) which satisfies all the sentences in the set "T". Consistency of a theory is usually defined in a syntactical way, but in first-order logic by the completeness theorem there is no need to distinguish between satisfiability and consistency. Therefore, model theorists often use "consistent" as a synonym for "satisfiable".
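"Satisfiable" means "has a model", and for a theory with finite models this can be witnessed by brute force. A hedged sketch (the theory and universe here are illustrative assumptions): search every binary operation table on {0, 1} for one satisfying a tiny theory in the signature {*} consisting of associativity and the existence of an identity element.

```python
from itertools import product

U = [0, 1]   # candidate universe

def is_model(op):
    # op is a dict mapping (x, y) -> x * y.
    assoc = all(op[op[x, y], z] == op[x, op[y, z]]
                for x, y, z in product(U, repeat=3))
    has_identity = any(all(op[e, x] == x and op[x, e] == x for x in U)
                       for e in U)
    return assoc and has_identity

# All 2^4 = 16 binary operation tables on U, filtered down to models.
tables = [dict(zip(product(U, U), vals)) for vals in product(U, repeat=4)]
models = [op for op in tables if is_model(op)]
print(len(models) > 0)   # the theory is satisfiable: it has a finite model
```

For instance, logical AND on {0, 1} is associative with identity 1, so at least one model is found.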
A theory is called categorical if it determines a structure up to isomorphism, but it turns out that this definition is not useful, due to serious restrictions in the expressivity of first-order logic. The Löwenheim–Skolem theorem implies that if a theory "T" has a countable signature and has an infinite model, then it has a model of size κ for every infinite cardinal number κ. Since two models of different sizes cannot possibly be isomorphic, only finite structures can be described by a categorical theory.
Lack of expressivity (when compared to higher logics such as second-order logic) has its advantages, though. For model theorists, the Löwenheim–Skolem theorem is an important practical tool rather than the source of Skolem's paradox. In a certain sense made precise by Lindström's theorem, first-order logic is the most expressive logic for which both the Löwenheim–Skolem theorem and the compactness theorem hold.
The compactness theorem states that a set of first-order sentences has a model if every finite subset of it has a model; taking the contrapositive, every unsatisfiable first-order theory has a finite unsatisfiable subset. This theorem is of central importance in infinite model theory, where the words "by compactness" are commonplace. One way to prove it is by means of ultraproducts. An alternative proof uses the completeness theorem, which is otherwise reduced to a marginal role in most of modern model theory.
As observed in the section on first-order logic, first-order theories cannot be categorical, i.e. they cannot describe a unique model up to isomorphism, unless that model is finite. But two famous model-theoretic theorems deal with the weaker notion of κ-categoricity for a cardinal κ. A theory "T" is called κ-categorical if any two models of "T" that are of cardinality κ are isomorphic. It turns out that the question of κ-categoricity depends critically on whether κ is bigger than the cardinality of the language (i.e. ℵ0 + |σ|, where |σ| is the cardinality of the signature). For finite or countable signatures this means that there is a fundamental difference between ℵ0-categoricity and κ-categoricity for uncountable κ.
A few characterizations of ℵ0-categoricity include:
This result, due independently to Engeler, Ryll-Nardzewski and Svenonius, is sometimes referred to as the Ryll-Nardzewski theorem.
Further, ℵ0-categorical theories and their countable models have strong ties with oligomorphic groups. They are often constructed as Fraïssé limits.
Michael Morley's highly non-trivial result that (for countable languages) there is only "one" notion of uncountable categoricity was the starting point for modern model theory, and in particular classification theory and stability theory:
Uncountably categorical (i.e. κ-categorical for all uncountable cardinals κ) theories are from many points of view the most well-behaved theories. A theory that is both ℵ0-categorical and uncountably categorical is called totally categorical.
Set theory (which is expressed in a countable language), if it is consistent, has a countable model; this is known as Skolem's paradox, since there are sentences in set theory which postulate the existence of uncountable sets and yet these sentences are true in our countable model. Particularly the proof of the independence of the continuum hypothesis requires considering sets in models which appear to be uncountable when viewed from "within" the model, but are countable to someone "outside" the model.
The model-theoretic viewpoint has been useful in set theory; for example in Kurt Gödel's work on the constructible universe, which, along with the method of forcing developed by Paul Cohen, can be used to prove the (again philosophically interesting) independence of the axiom of choice and the continuum hypothesis from the other axioms of set theory.
In the other direction, model theory itself can be formalized within ZFC set theory. The development of the fundamentals of model theory (such as the compactness theorem) relies on the axiom of choice, or more exactly the Boolean prime ideal theorem. Other results in model theory depend on set-theoretic axioms beyond the standard ZFC framework. For example, if the Continuum Hypothesis holds then every countable model has an ultrapower which is saturated (in its own cardinality). Similarly, if the Generalized Continuum Hypothesis holds then every model has a saturated elementary extension. Neither of these results is provable in ZFC alone. Finally, some questions arising from model theory (such as compactness for infinitary logics) have been shown to be equivalent to large cardinal axioms.
A field or a vector space can be regarded as a (commutative) group by simply ignoring some of its structure. The corresponding notion in model theory is that of a reduct of a structure to a subset of the original signature. The opposite relation is called an "expansion" - e.g. the (additive) group of the rational numbers, regarded as a structure in the signature {+,0}, can be expanded to a field with the signature {×,+,1,0} or to an ordered group with the signature {+,0,<}.
The compactness theorem was implicit in early work of Thoralf Skolem ("All three commentators [i.e. Vaught, van Heijenoort and Dreben] agree that both the completeness and compactness theorems were implicit in Skolem 1923…."), but it was first published in 1930, as a lemma in Kurt Gödel's proof of his completeness theorem. The Löwenheim–Skolem theorem and the compactness theorem received their respective general forms in 1936 and 1941 from Anatoly Maltsev.
The development of model theory can be traced to Alfred Tarski, a member of the Lwów–Warsaw school during the interbellum. Tarski's work included logical consequence, deductive systems, the algebra of logic, the theory of definability, and the semantic definition of truth, among other topics. His semantic methods culminated in the model theory he and a number of his Berkeley students developed in the 1950s and '60s. These modern concepts of model theory influenced Hilbert's program and modern mathematics.
|
https://en.wikipedia.org/wiki?curid=19858
|
Vought F4U Corsair
The Vought F4U Corsair is an American fighter aircraft that saw service primarily in World War II and the Korean War.
Designed and initially manufactured by Chance Vought, the Corsair was soon in great demand; additional production contracts were given to Goodyear, whose Corsairs were designated FG, and Brewster, designated F3A.
The Corsair was designed and operated as a carrier-based aircraft, and entered service in large numbers with the U.S. Navy in late 1944 and early 1945. It quickly became one of the most capable carrier-based fighter-bombers of World War II. Some Japanese pilots regarded it as the most formidable American fighter of World War II and its naval aviators achieved an 11:1 kill ratio. Early problems with carrier landings and logistics led to it being eclipsed as the dominant carrier-based fighter by the Grumman F6F Hellcat, powered by the same Double Wasp engine first flown on the Corsair's first prototype in 1940. Instead, the Corsair's early deployment was to land-based squadrons of the U.S. Marines and U.S. Navy.
The Corsair served almost exclusively as a fighter-bomber throughout the Korean War and during the French colonial wars in Indochina and Algeria. In addition to its use by the U.S. and British, the Corsair was also used by the Royal New Zealand Air Force, French Naval Aviation, and other air forces until the 1960s.
From the first prototype delivery to the U.S. Navy in 1940, to final delivery in 1953 to the French, 12,571 F4U Corsairs were manufactured in 16 separate models. Its 1942–1953 production run was the longest of any U.S. piston-engined fighter.
In February 1938 the U.S. Navy Bureau of Aeronautics published two requests for proposal for twin-engined and single-engined fighters. For the single-engined fighter the Navy requested the maximum obtainable speed, and a stalling speed not higher than . A range of was specified. The fighter had to carry four guns, or three with increased ammunition. Provision had to be made for anti-aircraft bombs to be carried in the wing. These small bombs would, according to thinking in the 1930s, be dropped on enemy aircraft formations.
In June 1938, the U.S. Navy signed a contract with Vought for a prototype bearing the factory designation V-166B, the XF4U-1, BuNo 1443. The Corsair design team was led by Rex Beisel. After mock-up inspection in February 1939, construction of the XF4U-1 powered by an XR-2800-4 prototype of the Pratt & Whitney R-2800 Double Wasp twin-row, 18-cylinder radial engine, rated at went ahead quickly, as the very first airframe ever designed from the start to have a Double Wasp engine fitted for flight. When the prototype was completed it had the biggest and most powerful engine, largest propeller, and probably the largest wing on any naval fighter to date. The first flight of the XF4U-1 was made on 29 May 1940, with Lyman A. Bullard, Jr. at the controls. The maiden flight proceeded normally until a hurried landing was made when the elevator trim tabs failed because of flutter.
On 1 October 1940, the XF4U-1 became the first single-engine U.S. fighter to fly faster than by flying at an average ground speed of from Stratford to Hartford. The USAAC's twin engine Lockheed P-38 Lightning had flown over 400 mph in January–February 1939. The XF4U-1 also had an excellent rate of climb but testing revealed that some requirements would have to be rewritten. In full-power dive tests, speeds of up to were achieved, but not without damage to the control surfaces and access panels and, in one case, an engine failure. The spin recovery standards also had to be relaxed as recovery from the required two-turn spin proved impossible without resorting to an anti-spin chute. The problems clearly meant delays in getting the design into production.
Reports coming back from the war in Europe indicated that an armament of two synchronized engine cowling-mount machine guns, and two machine guns (one in each outer wing panel) was insufficient. The U.S. Navy's November 1940 production proposals specified heavier armament. The increased armament comprised three .50 caliber machine guns mounted in each wing panel. This improvement greatly increased the ability of the Corsair to shoot down enemy aircraft.
Formal U.S. Navy acceptance trials for the XF4U-1 began in February 1941. The Navy entered into a letter of intent on 3 March 1941, received Vought's production proposal on 2 April, and awarded Vought a contract for 584 F4U-1 fighters, which were given the name "Corsair" – inherited from the firm's late-1920s Vought O2U naval biplane scout which first bore the name – on 30 June of the same year. The first production F4U-1 performed its initial flight a year later, on 24 June 1942. It was a remarkable achievement for Vought; compared to land-based counterparts, carrier aircraft are "overbuilt" and heavier, to withstand the extreme stress of deck landings.
The F4U incorporated the largest engine available at the time, the 18-cylinder Pratt & Whitney R-2800 Double Wasp radial. To extract as much power as possible, a relatively large Hamilton Standard Hydromatic three-blade propeller of was used.
To accommodate a folding wing the designers considered retracting the main landing gear rearward but, for the chord of wing that was chosen, it was difficult to make the landing gear struts long enough to provide ground clearance for the large propeller. Their solution was an inverted gull wing, which considerably shortened the required length of the struts. The anhedral of the wing's center-section also permitted the wing and fuselage to meet at the optimum angle for minimizing drag, without using wing root fairings. The bent wing, however, was heavier and more difficult to construct, offsetting these benefits.
The Corsair's aerodynamics were an advance over those of contemporary naval fighters. The F4U was the first U.S. Navy aircraft to feature landing gear that retracted into a fully enclosed wheel well. The landing gear oleo struts—each with its own strut door enclosing it when retracted—rotated through 90° during retraction, with the wheel atop the lower end of the strut when retracted. A pair of rectangular doors enclosed each wheel well, leaving a streamlined wing. This swiveling, aft-retracting landing gear design was common to the Curtiss P-40 (and its predecessor, the P-36), as adopted for the F4U Corsair's main gear and its erstwhile Pacific War counterpart, the Grumman F6F Hellcat. The oil coolers were mounted in the heavily anhedraled center-section of the wings, alongside the supercharger air intakes, and used openings in the leading edges of the wings, rather than protruding scoops. The large fuselage panels were made of aluminum and were attached to the frames with the newly developed technique of spot welding, thus mostly eliminating the use of rivets. While employing this new technology, the Corsair was also the last American-produced fighter aircraft to feature fabric as the skinning for the top and bottom of each outer wing, aft of the main spar and armament bays, and for the ailerons, elevators, and rudder. The elevators were also constructed from plywood. The Corsair, even with its streamlining and high speed abilities, could fly slowly enough for carrier landings with full flap deployment of 60°.
In part because of its advances in technology and a top speed greater than existing Navy aircraft, numerous technical problems had to be solved before the Corsair entered service. Carrier suitability was a major development issue, prompting changes to the main landing gear, tail wheel, and tailhook. Early F4U-1s had difficulty recovering from developed spins, since the inverted gull wing's shape interfered with elevator authority. It was also found that the Corsair's right wing could stall and drop rapidly and without warning during slow carrier landings. In addition, if the throttle were suddenly advanced (for example, during an aborted landing) the left wing could stall and drop so quickly that the fighter could flip over with the rapid increase in power. These potentially lethal characteristics were later solved through the addition of a small, -long stall strip to the leading edge of the outer right wing, just outboard of the gun ports. This allowed the right wing to stall at the same time as the left.
Other problems were encountered during early carrier trials. The combination of an aft cockpit and the Corsair's long nose made landings hazardous for newly trained pilots. During landing approaches, it was found that oil from the opened hydraulically-powered cowl flaps could spatter onto the windscreen, severely reducing visibility, and the undercarriage oleo struts had bad rebound characteristics on landing, allowing the aircraft to bounce down the carrier deck. The first problem was solved by locking the top cowl flaps in front of the windscreen down permanently, then replacing them with a fixed panel. The undercarriage bounce took more time to solve, but eventually a "bleed valve" incorporated in the legs allowed the hydraulic pressure to be released gradually as the aircraft landed. The Corsair was not considered fit for carrier use until the wing stall problems and the deck bounce could be solved.
Meanwhile, the more docile and simpler-to-build F6F Hellcat had begun entering service in its intended carrier-based use. The Navy wanted to standardize on one type of carrier fighter, and the Hellcat, while slower than the Corsair, was considered simpler to land on a carrier by an inexperienced pilot and proved to be successful almost immediately after introduction. The Navy's decision to choose the Hellcat meant that the Corsair was released to the U.S. Marine Corps. With no initial requirement for carrier landings, the Marine Corps deployed the Corsair to devastating effect from land bases. Corsair deployment aboard U.S. carriers was delayed until late 1944, by which time the last of the carrier landing problems, relating to the Corsair's long nose, had been tackled by the British.
Production F4U-1s featured several major modifications from the XF4U-1. A change of armament to six wing-mounted M2 Browning machine guns (three in each outer wing panel) and their ammunition (400 rounds for the inner pair, 375 rounds for the outer) meant that the location of the wing fuel tanks had to be changed. In order to keep the fuel tank close to the center of gravity, the only available position was in the forward fuselage, ahead of the cockpit. Later on, different variants of the F4U were given different armaments. While most Corsair variants had the standard armament of six .50 caliber M2 Browning machine guns, some models (like the F4U-1C) were equipped with four 20 millimeter M2 cannons as their main armament. While these cannons were more powerful than the standard machine guns, they were not favored over the standard loadout, and only 200 F4U-1Cs were produced out of the total of 12,571 Corsairs. Other variants were capable of carrying mission-specific weapons such as rockets and bombs: the F4U could carry up to eight rockets (four under each wing) and up to four thousand pounds of explosive ordnance. This gave the Corsair a fighter-bomber role, making it a versatile ground-support aircraft as well as a fighter. Accordingly, as a self-sealing fuel tank replaced the fuselage mounted armament, the cockpit had to be moved back by and the fuselage lengthened. In addition, of armor plate was installed, along with a bullet-proof windscreen which was set internally, behind the curved Plexiglas windscreen. The canopy could be jettisoned in an emergency, and half-elliptical planform transparent panels, much like those of certain models of the Curtiss P-40, were inset into the sides of the fuselage's turtledeck structure behind the pilot's headrest, providing the pilot with a limited rear view over his shoulders.
A rectangular Plexiglas panel was inset into the lower center section to allow the pilot to see directly beneath the aircraft and assist with deck landings. The engine used was the more powerful R-2800-8 (B series) Double Wasp which produced . On the wings the flaps were changed to a NACA slotted type and the ailerons were increased in span to increase the roll rate, with a consequent reduction in flap span. IFF transponder equipment was fitted in the rear fuselage. These changes increased the Corsair's weight by several hundred pounds.
The performance of the Corsair was superior to most of its contemporaries. The F4U-1 was considerably faster than the Grumman F6F Hellcat and only slower than the Republic P-47 Thunderbolt; all three were powered by the R-2800. But while the P-47 achieved its highest speed at with the help of an intercooled turbocharger, the F4U-1 reached its maximum speed at , and used a mechanically supercharged engine.
The U.S. Navy received its first production F4U-1 on 31 July 1942, but getting it into service proved difficult. The framed "birdcage" style canopy provided inadequate visibility for deck taxiing, and the long "hose nose" and nose-up attitude of the Corsair made it difficult to see straight ahead. The enormous torque of the Double Wasp engine also made it a handful for inexperienced pilots if they were forced to bolter. Early Navy pilots called the F4U the "hog", "hosenose", or "bent-wing widow maker".
Carrier qualification trials on the training carrier USS "Wolverine" and escort carriers USS "Core" and USS "Charger" in 1942 found that, despite visibility issues and control sensitivity, the Corsair was "...an excellent carrier type and very easy to land aboard. It is no different than any other airplane." Two Navy units, VF-12 (October 1942) and later VF-17 (April 1943) were equipped with the F4U. By April 1943, VF-12 had successfully completed deck landing qualification.
At the time, the U.S. Navy also had the Grumman F6F Hellcat, which did not have the performance of the F4U, but was a better deck landing aircraft. The Corsair was declared "ready for combat" at the end of 1942, though qualified to operate only from land bases until the last of the carrier qualification issues were worked out. VF-17 went aboard the USS "Bunker Hill" in late 1943, and the Chief of Naval Operations wanted to equip four air groups with Corsairs by the end of 1943. The Commander, Air Forces, Pacific had a different opinion, stating that "In order to simplify spares problems and also to insure flexibility in carrier operations present practice in the Pacific is to assign all Corsairs to Marines and to equip FightRons [fighter squadrons] on medium and light carriers with Hellcats." VF-12 soon abandoned its aircraft to the Marines. VF-17 kept its Corsairs, but was removed from its carrier, "Bunker Hill", due to perceived difficulties in supplying parts at sea.
The Marines needed a better fighter than the F4F Wildcat. For them, it was not as important that the F4U could be recovered aboard a carrier, as they usually flew from land bases. Growing pains aside, Marine Corps squadrons readily took to the radical new fighter.
From February 1943 onward, the F4U operated from Guadalcanal and ultimately other bases in the Solomon Islands. A dozen USMC F4U-1s of VMF-124, commanded by Major William E. Gise, arrived at Henderson Field (code name "Cactus") on 12 February. The first recorded combat engagement was on 14 February 1943, when Corsairs of VMF-124 under Major Gise assisted P-40s and P-38s in escorting a formation of Consolidated B-24 Liberators on a raid against a Japanese aerodrome at Kahili. Japanese fighters contested the raid and the Americans got the worst of it, with four P-38s, two P-40s, two Corsairs, and two Liberators lost. No more than four Japanese Zeros were destroyed. A Corsair was responsible for one of the kills, albeit due to a midair collision. The fiasco was referred to as the "Saint Valentine's Day Massacre". Despite the debut, the Marines quickly learned how to make better use of the aircraft and started demonstrating its superiority over Japanese fighters. By May, the Corsair units were getting the upper hand, and VMF-124 had produced the first Corsair ace, Second Lieutenant Kenneth A. Walsh, who would rack up a total of 21 kills during the war. He remembered:
VMF-113 was activated on 1 January 1943 at Marine Corps Air Station El Toro as part of Marine Base Defense Air Group 41. They were soon given their full complement of 24 F4U Corsairs. On 26 March 1944, while escorting four B-25 bombers on a raid over Ponape, they recorded their first enemy kills, downing eight Japanese aircraft. In April of that year, VMF-113 was tasked with providing air support for the landings at Ujelang. Since the assault was unopposed, the squadron quickly returned to striking Japanese targets in the Marshall Islands for the remainder of 1944.
Corsairs were flown by the "Black Sheep" Squadron (VMF-214, led by Marine Major Gregory "Pappy" Boyington) in an area of the Solomon Islands called "The Slot". Boyington was credited with 22 kills in F4Us (of 28 total, including six in an AVG P-40, although his score with the AVG has been disputed). Other noted Corsair pilots of the period included VMF-124's Kenneth Walsh, James E. Swett, Archie Donahue and Bill "Casey" Case; VMF-215's Robert M. Hanson and Donald Aldrich; and VF-17's Tommy Blackburn, Roger Hedrick, and Ira Kepford. Nightfighter versions equipped Navy and Marine units afloat and ashore.
One particularly unusual kill was scored by Marine Lieutenant R. R. Klingman of VMF-312 (the "Checkerboards") over Okinawa. Klingman was in pursuit of a Japanese twin-engine aircraft at high altitude when his guns jammed due to the gun lubrication thickening from the extreme cold. He flew up and chopped off the enemy's tail with the big propeller of the Corsair. Despite missing five inches (127 mm) off the end of his propeller blades, he managed to land safely after this aerial ramming attack. He was awarded the Navy Cross.
At war's end, Corsairs were ashore on Okinawa, combating the "kamikaze", and also were flying from fleet and escort carriers. VMF-312, VMF-323, VMF-224, and a handful of others met with success in the Battle of Okinawa.
Since Corsairs were being operated from shore bases, while still awaiting approval for U.S. carrier operations, 965 FG-1As were built as "land planes" without their hydraulic wing folding mechanisms, hoping to improve performance by reducing aircraft weight, with the added benefit of minimizing complexity. (These Corsairs’ wings could still be manually folded.)
A second option was to remove the folding mechanism in the field using a kit, which could be done for Vought and Brewster Corsairs as well. On 6 December 1943, the Bureau of Aeronautics issued guidance on weight-reduction measures for the F4U-1, FG-1, and F3A. Corsair squadrons operating from land bases were authorized to remove catapult hooks, arresting hooks, and associated equipment, which eliminated 48 pounds of unnecessary weight. While there are no data to indicate to what extent these modifications were incorporated, there are numerous photos in evidence of Corsairs, of various manufacturers and models, on islands in the Pacific without tailhooks installed.
Corsairs also served well as fighter-bombers in the Central Pacific and the Philippines. By early 1944, Marine pilots were beginning to exploit the type's considerable capabilities in the close-support role in amphibious landings. Charles Lindbergh flew Corsairs with the Marines as a civilian technical advisor for United Aircraft Corporation in order to determine how best to increase the Corsair's payload and range in the attack role and to help evaluate future viability of single- versus twin-engine fighter design for Vought. Lindbergh managed to get the F4U into the air with of bombs, with a bomb on the centerline and a bomb under each wing. In the course of such experiments, he performed strikes on Japanese positions during the battle for the Marshall Islands.
By the beginning of 1945, the Corsair was a full-blown "mudfighter", performing strikes with high-explosive bombs, napalm tanks, and HVARs. It proved versatile, able to operate everything from Bat glide bombs to 11.75 in (300 mm) Tiny Tim rockets. The aircraft was a prominent participant in the fighting for the Palaus, Iwo Jima, and Okinawa.
In November 1943, while operating as a shore-based unit in the Solomon Islands, VF-17 reinstalled the tail hooks so its F4Us could land and refuel while providing top cover over the task force participating in the carrier raid on Rabaul. The squadron's pilots landed, refueled, and took off from their former home, "Bunker Hill", on 11 November 1943.
Twelve USMC F4U-1s arrived at Henderson Field (Guadalcanal) on 12 February 1943. The U.S. Navy did not get into combat with the type until September 1943. The work done by the Royal Navy's FAA meant that the type was cleared for carrier operations by the British first. The U.S. Navy finally accepted the F4U for shipboard operations in April 1944, after the longer oleo strut was fitted, which eliminated the tendency to bounce. The first U.S. Corsair unit to be based effectively on a carrier was the pioneer USMC squadron VMF-124, which joined "Essex" in December 1944. They were accompanied by VMF-213. The increasing need for fighter protection against "kamikaze" attacks resulted in more Corsair units being moved to carriers.
U.S. figures compiled at the end of the war indicate that the F4U and FG flew 64,051 operational sorties for the U.S. Marines and U.S. Navy through the conflict (44% of total fighter sorties), with only 9,581 sorties (15%) flown from carrier decks. F4U and FG pilots claimed 2,140 air combat victories against 189 losses to enemy aircraft, for an overall kill ratio of over 11:1. While this gave the Corsair the lowest loss rate of any fighter of the Pacific War, this was due in part to operational circumstances: it primarily faced air-to-air combat in the Solomon Islands and Rabaul campaigns (as well as at Leyte and for kamikaze interception), but as operations shifted north and its mission shifted to ground attack, the aircraft saw less exposure to enemy aircraft, while other fighter types were exposed to more air combat. Against the best Japanese opponents, the aircraft claimed a 12:1 kill ratio against the Mitsubishi A6M Zero and 6:1 against the Nakajima Ki-84, Kawanishi N1K-J, and Mitsubishi J2M combined during the last year of the war. The Corsair bore the brunt of U.S. fighter-bomber missions, delivering 70% of the total bombs dropped by U.S. fighters during the war.
In the early days of World War II, Royal Navy fighter requirements had been based on cumbersome two-seat designs, such as the Blackburn Skua (and its turreted derivative the Blackburn Roc) and the Fairey Fulmar, since it was expected that they would encounter only long-range bombers or flying boats and that navigation over featureless seas required the assistance of a radio operator/navigator. The Royal Navy hurriedly adopted higher-performance single-seat aircraft such as the Hawker Sea Hurricane and the less robust Supermarine Seafire alongside, but neither aircraft had sufficient range to operate at a distance from a carrier task force. The Corsair was welcomed as a more robust and versatile alternative.
In November 1943, the Royal Navy received its first batch of 95 Vought F4U-1s, which were given the designation "Corsair [Mark] I". The first squadrons were assembled and trained on the U.S. East Coast and then shipped across the Atlantic. The Royal Navy put the Corsair into carrier operations immediately. They found its landing characteristics dangerous, suffering a number of fatal crashes, but considered the Corsair to be the best option they had.
In Royal Navy service, because of the limited hangar deck height in several classes of British carrier, many Corsairs had their outer wings "clipped" to clear the deckhead. The change in span brought the added benefit of improving the sink rate, reducing the F4U's propensity to "float" in the final stages of landing. Despite the clipped wings and the shorter decks of British carriers, Royal Navy aviators found landing accidents less of a problem than they had been for U.S. Navy aviators, thanks to the curved approach they used: British units solved the landing visibility problem by approaching the carrier in a medium left-hand turn, which allowed the pilot to keep the carrier's deck in view over the anhedral in the left wing root. This technique was later adopted by U.S. Navy and Marine fliers for carrier use of the Corsair.
The Royal Navy developed a number of modifications to the Corsair that made carrier landings more practical. Among these were a bulged canopy (similar to the Malcolm Hood), a raised pilot's seat, and wiring shut the cowl flaps across the top of the engine compartment, diverting oil and hydraulic fluid spray around the sides of the fuselage.
The Royal Navy initially received 95 "birdcage" F4U-1s from Vought which were designated Corsair Mk I in Fleet Air Arm service. Next from Vought came 510 "blown-canopy" F4U-1A/-1Ds, which were designated Corsair Mk II (the final 150 equivalent to the F4U-1D, but not separately designated in British use). 430 Brewster Corsairs (334 F3A-1 and 96 F3A-1D), more than half of Brewster's total production, were delivered to Britain as the Corsair Mk III. 857 Goodyear Corsairs (400 FG-1/-1A and 457 FG-1D) were delivered and designated Corsair Mk IV. The Mk IIs and Mk IVs were the only versions to be used in combat.
The Royal Navy cleared the F4U for carrier operations well before the U.S. Navy and showed that the Corsair Mk II could be operated with reasonable success even from escort carriers. It was not without problems; one was excessive wear of the arrester wires, due both to the weight of the Corsair and the understandable tendency of the pilots to stay well above the stalling speed. A total of 2,012 Corsairs were supplied to the United Kingdom.
Fleet Air Arm (FAA) units were created and equipped in the United States, at Quonset Point or Brunswick, and then shipped to war theaters aboard escort carriers. The first FAA Corsair unit was 1830 NAS, created on 1 June 1943 and soon operating from HMS "Illustrious". At the end of the war, 18 FAA squadrons were operating the Corsair. British Corsairs served both in Europe and in the Pacific. The first, and also most important, European operations were the series of attacks (Operation Tungsten) in April, July, and August 1944 on the German battleship "Tirpitz", for which carrier-based Corsairs provided fighter cover. It appears the Corsairs did not encounter aerial opposition on these raids.
From April 1944, Corsairs from the British Pacific Fleet took part in several major air raids in South East Asia beginning with Operation Cockpit, an attack on Japanese targets at Sabang island, in the Dutch East Indies.
In July and August 1945, Corsair naval squadrons 1834, 1836, 1841, and 1842 took part in a series of strikes on the Japanese mainland, near Tokyo. These squadrons operated from "Victorious" and "Formidable." On 9 August 1945, days before the end of the war, Corsairs from "Formidable" attacked Shiogama harbor on the northeast coast of Japan. Royal Canadian Navy Volunteer Reserve pilot, Lieutenant Robert Hampton Gray, of 1841 Squadron was hit by flak but pressed home his attack on a Japanese destroyer, sinking it with a bomb but crashing into the sea. He was posthumously awarded Canada's last Victoria Cross, becoming the second fighter pilot of the war to earn a Victoria Cross as well as the final Canadian casualty of World War II.
FAA Corsairs originally fought in a camouflage scheme with a Dark Slate Grey/Extra Dark Sea Grey disruptive pattern on top and Sky undersides, but were later painted overall dark blue. It had become imperative for all Allied aircraft in the Pacific Theater of World War II to abandon any "red devices" in their national insignia, to prevent any chance of misidentification with Japanese military aircraft, all of which bore the circular, all-red "Hinomaru" insignia (nicknamed a "meatball" by Allied aircrew) that is still in use to this day. By 6 May 1942, the United States had removed all areas of red color from its national aircraft insignia (specifically the red center of the roundel) and deleted the fin/rudder markings, which at that time had seven horizontal red stripes. The British did likewise, at first simply painting over the red center of their "Type C" roundel with white, at about the time the U.S. Navy removed the red center from its roundel; later, a shade of slate gray replaced the white center color. When the Americans started using added white bars to either side of their blue/white star roundel on 28 June 1943, SEAC British Corsairs, almost all of which still used the earlier Type C roundel with the red center removed, added similar white bars to either side of their roundels to emulate the Americans.
In all, out of 18 carrier-based squadrons, eight saw combat, flying intensive ground attack/interdiction operations and claiming 47.5 aircraft shot down.
At the end of World War II, under the terms of the Lend-Lease agreement, the aircraft had to be paid for or returned to the U.S. As the UK did not have the means to pay for them, the Royal Navy Corsairs were pushed overboard into the sea in Moreton Bay off Brisbane, Australia.
Equipped with obsolete Curtiss P-40s, Royal New Zealand Air Force (RNZAF) squadrons in the South Pacific performed impressively, in particular in the air-to-air role. The American government accordingly decided to give New Zealand early access to the Corsair, especially as it was not initially being used from carriers. Some 424 Corsairs equipped 13 RNZAF squadrons, including No. 14 Squadron RNZAF and No. 15 Squadron RNZAF, replacing Douglas SBD Dauntlesses as well as P-40s. Most of the F4U-1s were assembled by Unit 60 with a further batch assembled and flown at RNZAF Hobsonville. In total there were 336 F4U-1s and 41 F4U-1Ds used by the RNZAF during the Second World War. Sixty FG-1Ds arrived late in the war.
The first deliveries of lend-lease Corsairs began in March 1944 with the arrival of 30 F4U-1s at the RNZAF Base Depot Workshops (Unit 60) on the island of Espiritu Santo in the New Hebrides. From April, these workshops became responsible for assembling all Corsairs for the RNZAF units operating the aircraft in the South West Pacific, and a Test and Despatch flight was set up to test the aircraft after assembly. By June 1944, 100 Corsairs had been assembled and test flown. The first squadrons to use the Corsair were 20 and 21 Squadrons on Espiritu Santo, operational in May 1944. The organization of the RNZAF in the Pacific and New Zealand meant that only the pilots and a small staff belonged to each squadron (the maximum strength of a squadron was 27 pilots): squadrons were assigned to several Servicing Units (SUs, composed of 5–6 officers, 57 NCOs, 212 airmen) which carried out aircraft maintenance and operated from fixed locations: hence F4U-1 "NZ5313" was first used by 20 Squadron/1 SU on Guadalcanal in May 1944; 20 Squadron was then relocated to 2 SU on Bougainville in November. In all there were ten front-line SUs plus another three based in New Zealand. Because each of the SUs painted its aircraft with distinctive markings and the aircraft themselves could be repainted in several different color schemes, the RNZAF Corsairs were far less uniform in appearance than their American and FAA contemporaries. By late 1944, the F4U had equipped all ten Pacific-based fighter squadrons of the RNZAF.
By the time the Corsairs arrived, there were very few Japanese aircraft left in New Zealand's allocated sectors of the Southern Pacific, and despite the RNZAF squadrons extending their operations to more northern islands, they were primarily used for close support of American, Australian, and New Zealand soldiers fighting the Japanese. At the end of 1945, all Corsair squadrons but one (No. 14) were disbanded. That last squadron was based in Japan, until the Corsair was retired from service in 1947.
No. 14 Squadron was given new FG-1Ds and in March 1946 transferred to Iwakuni, Japan as part of the British Commonwealth Occupation Force. Only one airworthy example of the 437 aircraft procured survives: FG-1D "NZ5648"/"ZK-COR", owned by the Old Stick and Rudder Company at Masterton, New Zealand.
On 18 July 1944, a British Corsair F4U-1A, "JT404" of 1841 Naval Air Squadron, was flying an anti-submarine patrol from HMS "Formidable", en route to Scapa Flow after the Operation Mascot attack on the German battleship "Tirpitz". It flew in company with a Fairey Barracuda. Due to technical problems, the Corsair made an emergency landing in a field on Hamarøy north of Bodø, Norway. The pilot, Lt Mattholie, was taken prisoner and the aircraft captured undamaged. Luftwaffe interrogators failed to get the pilot to explain how to fold the wings so as to transport the aircraft to Narvik, and the Corsair was instead ferried by boat for further investigation. Later the Corsair was taken to Germany and listed as one of the captured enemy aircraft ("Beuteflugzeug") based at "Erprobungsstelle Rechlin", the central German military aviation test facility and the equivalent of the Royal Aircraft Establishment, under repair in 1944. This was probably the only Corsair captured by the Germans.
In 1945, U.S. forces captured an F4U Corsair near the Kasumigaura flight school. The Japanese had repaired it, covering damaged parts on the wing with fabric and using spare parts from crashed F4Us. It seems Japan captured two force-landed Corsairs fairly late in the war and may have even tested one in flight.
During the Korean War, the Corsair was used mostly in the close-support role. The AU-1 Corsair was developed from the F4U-5 and was a ground-attack version which normally operated at low altitudes: as a consequence the Pratt & Whitney R-2800-83W engine used a single-stage, manually controlled supercharger, rather than the two-stage automatic supercharger of the -5. The versions of the Corsair used in Korea from 1950 to 1953 were the AU-1, F4U-4B, -4P, -5N, and -5NL. There were dogfights between F4Us and Soviet-built Yakovlev Yak-9 fighters early in the war, but when the enemy introduced the Mikoyan-Gurevich MiG-15, the Corsair was outmatched. On 10 September 1952, a MiG-15 made the mistake of getting into a turning contest with a Corsair piloted by Marine Captain Jesse G. Folmar, with Folmar shooting the MiG down with his four 20 mm cannons. In turn, four MiG-15s shot down Folmar minutes later; Folmar bailed out and was quickly rescued with little injury.
F4U-5N and -5NL Corsair night fighters were used to attack enemy supply lines, including truck convoys and trains, as well as to intercept night attack aircraft such as the Polikarpov Po-2 "Bedcheck Charlies", which were used to harass United Nations forces at night. The F4Us often operated with the help of C-47 'flare ships' which dropped hundreds of 1,000,000 candlepower magnesium flares to illuminate the targets. For many operations, detachments of U.S. Navy F4U-5Ns were posted to shore bases. The leader of one such unit, Lieutenant Guy Bordelon of VC-3 Det D (Detachment D), became the Navy's only ace of the war, as well as the only American ace in Korea to fly a piston-engined aircraft. Bordelon, nicknamed "Lucky Pierre", was credited with three Lavochkin La-9s or La-11s and two Yakovlev Yak-18s between 29 June and 16/17 July 1953. Navy and Marine Corsairs were credited with a total of 12 enemy aircraft.
More generally, Corsairs performed attacks with cannons, napalm tanks, various iron bombs, and unguided rockets. The 5 inch HVAR was a reliable standby; when sturdy Soviet-built armor proved resistant to the HVAR's punch, a new 6.5 in (16.5 cm) shaped-charge antitank warhead was developed. The result was called the "Anti-Tank Aircraft Rocket (ATAR)". The 11.75 in (29.85 cm) "Tiny Tim" was also used in combat, with two carried under the belly.
Lieutenant Thomas J. Hudner, Jr., flying an F4U-4 of VF-32, was awarded the Medal of Honor for crash landing his Corsair in an attempt to rescue his squadron mate, Ensign Jesse L. Brown, whose aircraft had been forced down by antiaircraft fire near Changjin. Brown, who did not survive the incident, was the U.S. Navy's first African American naval aviator.
After the war, the French Navy had an urgent requirement for a powerful carrier-borne close-air support aircraft to operate from the four aircraft carriers it acquired in the late 1940s (two former U.S. Navy and two former Royal Navy carriers). Secondhand U.S. Navy Douglas SBD Dauntless dive-bombers of Flotille 3F and 4F were used to attack enemy targets and support ground forces in the First Indochina War. Former U.S. Grumman F6F-5 Hellcats and Curtiss SB2C Helldivers were also used for close air support. A new and more capable aircraft was needed.
The last production Corsair was the F4U-7, which was built specifically for the French naval air arm, the Aéronavale. The XF4U-7 prototype made its first flight on 2 July 1952, and a total of 94 F4U-7s were built for the French Navy's "Aéronavale" (79 in 1952, 15 in 1953), with the last of the batch, the final Corsair built, rolled out on 31 January 1953. The F4U-7s were actually purchased by the U.S. Navy and passed on to the Aéronavale through the U.S. Military Assistance Program (MAP). The French Navy used its F4U-7s during the second half of the First Indochina War in the 1950s (12.F, 14.F, 15.F Flotillas), where they were supplemented by at least 25 ex-USMC AU-1s passed on to the French in 1954, after the end of the Korean War.
On 15 January 1953, Flotille 14F, based at Karouba Air Base near Bizerte in Tunisia, became the first Aéronavale unit to receive the F4U-7 Corsair. Flotille 14F pilots arrived at Da Nang on 17 April 1954, but without their aircraft. The next day, the carrier USS "Saipan" delivered 25 war-weary ground attack ex-USMC AU-1 Corsairs (flown by VMA-212 at the end of the Korean War). During three months operating over Dien Bien Phu and Viêt-Nam, the Corsairs flew 959 combat sorties totaling 1,335 flight hours. They dropped some 700 tons of bombs and fired more than 300 rockets and 70,000 20 mm rounds. Six aircraft were damaged and two shot down by Viet Minh.
In September 1954, the F4U-7 Corsairs were embarked and brought back to France in November. The surviving ex-USMC AU-1s were taken to the Philippines and returned to the U.S. Navy. In 1956, Flotille 15F returned to South Vietnam, equipped with F4U-7 Corsairs.
The 14.F and 15.F Flotillas also took part in the Anglo-French-Israeli seizure of the Suez Canal in October 1956, code-named Operation Musketeer. The Corsairs were painted with yellow and black recognition stripes for this operation. They were tasked with destroying Egyptian Navy ships at Alexandria, but the presence of U.S. Navy ships prevented the successful completion of the mission. On 3 November, 16 F4U-7s attacked airfields in the Delta, with one Corsair shot down by anti-aircraft fire. Two more Corsairs were damaged when landing back on the carriers. The Corsairs engaged in Operation Musketeer dropped a total of 25 tons of bombs, and fired more than 500 rockets and 16,000 20 mm rounds.
As soon as they disembarked from the carriers that took part in Operation Musketeer, at the end of 1956, all three Corsair Flotillas moved to Telergma and Oran airfields in Algeria from where they provided CAS and helicopter escort. They were joined by the new "Flottille 17F", established at Hyères in April 1958.
French F4U-7 Corsairs (with some borrowed AU-1s) of the 12F, 14F, 15F, and 17F Flotillas conducted missions during the Algerian War between 1955 and 1962. Between February and March 1958, several strikes and CAS missions were launched from the only French carrier involved in the Algerian War.
France recognized Tunisian independence and sovereignty in 1956 but continued to station military forces at Bizerte and planned to extend the airbase. In 1961, Tunisia asked France to evacuate the base. Tunisia imposed a blockade on the base on 17 July, hoping to force its evacuation. This resulted in a battle between militiamen and the French military which lasted three days. French paratroopers, escorted by Corsairs of the 12F and 17F Flotillas, were dropped to reinforce the base and the Aéronavale launched air strikes on Tunisian troops and vehicles between 19–21 July, carrying out more than 150 sorties. Three Corsairs were damaged by ground fire.
In early 1959, the "Aéronavale" experimented with the SS.11 wire-guided anti-tank missile on F4U-7 Corsairs. The 12.F pilots trained for this experimental program had to steer the missile manually from approximately two kilometers out, at low altitude, with a joystick in the right hand while keeping track of a flare on the missile's tail, and fly the aircraft with the left hand; an exercise that could be very tricky in a single-seat aircraft under combat conditions. Despite reportedly effective results during the tests, this armament was not used with Corsairs during the ongoing Algerian War.
The "Aéronavale" used 163 Corsairs (94 F4U-7s and 69 AU-1s); the last of them, flown by the Cuers-based 14.F Flotilla, were out of service by September 1964, with some surviving for museum display or as civilian warbirds. By the early 1960s, two new modern aircraft carriers had entered service with the French Navy, and with them a new generation of jet-powered combat aircraft.
Corsairs flew their final combat missions in 1969 during the "Football War" between Honduras and El Salvador, in service with both air forces. The conflict was allegedly triggered, though not really caused, by a disagreement over a soccer (association football) match. Captain Fernando Soto of the Honduran Air Force shot down three Salvadoran Air Force aircraft on 17 July 1969. In the morning he shot down a Cavalier Mustang, killing the pilot. In the afternoon, he shot down two FG-1s; the pilot of the second aircraft may have bailed out, but the third exploded in the air, killing the pilot. These were the last air-to-air combats between propeller-driven aircraft anywhere in the world, and they made Soto the only pilot credited with three kills in a war between American nations. El Salvador did not shoot down any Honduran aircraft. At the outset of the Football War, El Salvador enlisted the assistance of several American pilots with P-51 and F4U experience. Bob Love, a Korean War ace, Chuck Lyford, Ben Hall, and Lynn Garrison are believed to have flown combat missions, but this has never been confirmed. Lynn Garrison had purchased F4U-7 133693 from the French MAAG office when it was retired from French naval service in 1964. It was registered N693M and was later destroyed in a 1987 crash in San Diego, California.
The Corsair entered service in 1942. Although designed as a carrier fighter, initial operation from carrier decks proved to be troublesome. Its low-speed handling was tricky due to the left wing stalling before the right wing. This factor, together with poor visibility over the long nose (leading to one of its nicknames, "The Hose Nose"), made landing a Corsair on a carrier a difficult task. For these reasons, most Corsairs initially went to Marine Corps squadrons which operated off land-based runways, with some early Goodyear-built examples (designated FG-1A) being built with fixed wings. The USMC aviators welcomed the Corsair with open arms as its performance was far superior to the contemporary Brewster F2A Buffalo and Grumman F4F-3 and -4 Wildcat.
Moreover, the Corsair was able to outperform the primary Japanese fighter, the A6M Zero. While the Zero could outturn the F4U at low speed, the Corsair was faster and could outclimb and outdive the A6M.
This performance advantage, combined with the ability to take severe punishment, meant a pilot could place an enemy aircraft in the killing zone of the F4U's six .50 (12.7 mm) M2 Browning machine guns and keep him there long enough to inflict major damage. The 2,300 rounds carried by the Corsair gave just under 30 seconds of fire from each gun.
Beginning in 1943, the Fleet Air Arm (FAA) also received Corsairs and flew them successfully from Royal Navy carriers in combat with the British Pacific Fleet and in Norway. These were clipped-wing Corsairs, the wingtips shortened to clear the lower overhead height of RN carriers. FAA also developed a curving landing approach to overcome the F4U's deficiencies.
Infantrymen nicknamed the Corsair "The Sweetheart of the Marianas" and "The Angel of Okinawa" for its roles in these campaigns. Among Navy and Marine aviators, the aircraft was nicknamed "Ensign Eliminator" and "Bent-Wing Eliminator" because it required many more hours of flight training to master than other Navy carrier-borne aircraft. It was also called simply "U-bird" or "Bent Wing Bird". Although Allied World War II sources frequently make the claim that the Japanese called the Corsair the "Whistling Death", Japanese sources do not support this, and it was mainly known as the Sikorsky.
The Corsair has been named the official aircraft of Connecticut due to its multiple connections to Connecticut businesses, including airframe manufacturer Vought-Sikorsky Aircraft, engine manufacturer Pratt & Whitney, and propeller manufacturer Hamilton Standard.
During World War II, Corsair production expanded beyond Vought to include Brewster and Goodyear models. Allied forces flying the aircraft in World War II included the Fleet Air Arm and the Royal New Zealand Air Force. Eventually, more than 12,500 F4Us would be built, comprising 16 separate variants.
F4U-1 (called Corsair Mk I by the Fleet Air Arm):
The first production version of the Corsair, with the distinctive "birdcage" canopy and low seating position. It differed from the XF4U-1 in a number of details.
The Royal Navy's Fleet Air Arm received 95 Vought F4U-1s. These were all early "birdcage" Corsairs. Vought also built a single F4U-1 two-seat trainer; the Navy showed no interest.
F4U-1A (called Corsair Mk II by the Fleet Air Arm):
Mid-to-late production Corsairs incorporated a new, taller, wider canopy with only two frames, very similar in effect to the Malcolm hood used on British fighter aircraft, along with a simplified windscreen; the new canopy design meant that the semi-elliptical turtledeck "flank" windows could be omitted. The designation F4U-1A, used to differentiate these Corsairs from the earlier "birdcage" variants, was allowed to be used internally by the manufacturers. The pilot's seat was raised, which, combined with the new canopy and a 6-inch (152.4 mm) lengthening of the tailwheel strut, allowed the pilot better visibility over the long nose. In addition to these changes, the bombing window under the cockpit was omitted. These Corsairs introduced a small stall strip just outboard of the gun ports on the right wing leading edge and improved undercarriage oleo struts which eliminated bouncing on landing, making these the first truly "carrier capable" F4Us.
Three hundred and sixty F4U-1As were delivered to the Fleet Air Arm. In British service, they were modified with "clipped" wings (a short section was cut off each wingtip) for use on British aircraft carriers, although the Royal Navy had been successfully operating the Corsair Mk I since 1 June 1943, when No. 1830 Squadron NAS was commissioned and assigned to HMS "Illustrious". F4U-1s in many USMC squadrons had their arrester hooks removed. Additionally, an experimental R-2800-8W engine with water injection was fitted to one of the late F4U-1As. After satisfactory results, many F4U-1As were fitted with the new powerplant. The aircraft carried 237 gal (897 L) in the main fuel tank, located in front of the cockpit, as well as an unarmored, non-self-sealing 62 gal (235 L) fuel tank in each wing. This version of the Corsair was the first able to carry a drop tank under the center-section, which considerably extended its maximum ferry range.
F3A-1 and F3A-1D (called Corsair Mk III by the Fleet Air Arm):
This was the designation for the Brewster-built F4U-1. Labor troubles delayed production, and the Navy ordered the company's contract terminated; Brewster folded soon after. Poor-quality wing fittings meant that these aircraft were red-lined for speed and prohibited from aerobatics after several lost their wings. None of the Brewster-built Corsairs reached front-line units. 430 Brewster Corsairs (334 F3A-1 and 96 F3A-1D), more than half of Brewster's total production, were delivered to the Fleet Air Arm.
FG-1A and FG-1D (called Corsair Mk IV by the Fleet Air Arm):
This was the designation for Corsairs that were license-built by Goodyear, to the same specifications as Vought's Corsairs. The first Goodyear built FG-1 flew in February 1943 and Goodyear began delivery of FG-1 Corsairs in April 1943. The company continued production until the end of the war and delivered 4,007 FG-1 series Corsairs, including sixty FG-1Ds to the RNZAF and 857 (400 FG-1 and FG-1A, and 457 FG-1D) to the Royal Navy as Corsair Mk IVs.
F4U-1B: This was an unofficial post-war designation used to identify F4U-1s modified for Fleet Air Arm use.
F4U-1C:
The prototype F4U-1C appeared in August 1943 and was based on an F4U-1. A total of 200 of this variant were built from July to November 1944; all were based on the F4U-1D and were built in parallel with that variant. Intended for ground attack as well as fighter missions, the F4U-1C was similar to the F4U-1D, but its six machine guns were replaced by four 20 millimeter (0.79 in) AN/M2 cannons with 231 rounds of ammunition per gun. The F4U-1C was introduced to combat during 1945, most notably in the Okinawa campaign, where its 20 mm firepower was highly appreciated. The cannon was believed to be more effective than the .50 caliber machine gun for all types of combat work. However, despite the superior firepower, many Navy pilots preferred the .50 caliber machine guns in air combat due to the jamming and freezing problems of the 20 mm cannons. These problems were reduced as the ordnance crews gained experience, until the performance of the guns compared favorably with the .50 caliber, but freezing problems persisted at high altitude until gun heaters were installed.
F4U-1D (called Corsair Mk II by the Fleet Air Arm):
This variant was introduced in April 1944 and was built in parallel with the F4U-1C. It had the new R-2800-8W Double Wasp engine equipped with water injection, which gave the aircraft more power and, in turn, increased performance. Due to the U.S. Navy's need for fighter-bombers, it had double the -1A's rocket payload, carried on permanent launching rails, as well as twin pylons for bombs or drop tanks. These modifications caused extra drag, but the additional fuel carried by the two drop tanks still allowed the aircraft to fly relatively long missions despite heavy, un-aerodynamic loads. A single-piece "blown" clear-view canopy was adopted as standard equipment for the -1D model and all later F4U production aircraft. 150 F4U-1Ds were delivered to the Fleet Air Arm.
F4U-1P: A rare photo reconnaissance variant.
XF4U-2: Special night fighter variant, equipped with two auxiliary fuel tanks.
F4U-2: Experimental conversion of the F4U-1 Corsair into a carrier-borne nightfighter, armed with five .50 in (12.7 mm) machine guns (the outboard, right gun was deleted), and fitted with Airborne Intercept (AI) radar set in a radome placed outboard on the starboard wing. Since Vought was preoccupied with more important projects, only 32 were converted from existing F4U-1s by the Naval Aircraft Factory and another two by front line units.
The type saw combat with VF(N)-101 aboard carriers, including USS "Intrepid", in early 1944, VF(N)-75 in the Solomon Islands, and VMF(N)-532 on Tarawa.
XF4U-3: Experimental aircraft built to hold different engines in order to test the Corsair's performance with a variety of power plants. This variant never entered service. Goodyear also contributed a number of airframes, designated FG-3, to the project. A single sub-variant XF4U-3B with minor modifications was also produced for the FAA.
XF4U-4: New engine and cowling.
F4U-4: The last variant to see action during World War II. Deliveries to the U.S. Navy of the F4U-4 began in early 1945. It had the dual-stage-supercharged -18W engine; when the cylinders were injected with the water/alcohol mixture, power was boosted further. The aircraft required an air scoop under the nose, and the unarmored wing fuel tanks of 62 gal (234 L) capacity were removed for better maneuverability at the expense of maximum range. The propeller was changed to a four-blade type. Climb rate increased to over 4,500 ft/min (1,180 m/min), as opposed to the 2,900 ft/min (884 m/min) of the F4U-1A, and maximum speed also increased. The "4-Hog" retained the original armament and had all the external load (i.e., drop tanks, bombs) capabilities of the F4U-1D. The windscreen was now flat bullet-resistant glass to avoid optical distortion, a change from the curved Plexiglas windscreens with internal plate glass of the earlier Corsairs. Vought also tested the two F4U-4Xs (BuNos 49763 and 50301, prototypes for the new R-2800) with fixed wingtip tanks (the Navy showed no interest) and an Aeroproducts six-blade contraprop (not accepted for production).
F4U-4B: 300 F4U-4s ordered with alternate gun armament of four AN/M3 cannon.
F4U-4E and F4U-4N: Developed late in WWII, these nightfighters featured radar radomes projecting from the right wingtip. The -4E was fitted with the APS-4 search radar, while the -4N was fitted with the APS-6 type. In addition, these aircraft were often refitted with four 20 mm M2 cannons similar to those of the F4U-1C. Though these variants did not see combat during WWII, they saw extensive use during the Korean War.
F4U-4K: Experimental drone.
F4U-4P: F4U-4 equivalent to the -1P, a rare photo reconnaissance variant.
XF4U-5: New engine cowling, other extensive changes.
F4U-5: A 1945 design modification of the F4U-4, first flown on 21 December 1945, intended to increase the Corsair's overall performance and incorporate many Corsair pilots' suggestions. It featured a more powerful Pratt & Whitney R-2800-32(E) engine with a two-stage supercharger. Other improvements included automatic blower controls, cowl flaps, intercooler doors and oil cooler for the engine; spring tabs for the elevators and rudder; a completely modernized cockpit; a completely retractable tail wheel; and heated cannon bays and pitot head. The cowling was lowered two degrees to help with forward visibility, but perhaps most strikingly, this was the first variant to feature all-metal wings (223 units produced). Maximum rate of climb at sea level was 4,850 feet per minute.
F4U-5N: Radar equipped version (214 units produced)
F4U-5NL: Winterized version (72 units produced, 29 modified from F4U-5Ns (101 total)). Fitted with rubber de-icing boots on the leading edge of the wings and tail.
F4U-5P: Long-range photo-reconnaissance version (30 units produced)
F4U-6: Re-designated AU-1, this was a ground-attack version produced for the U.S. Marine Corps.
F4U-7 : AU-1 developed for the French Navy.
FG-1E: Goodyear FG-1 with radar equipment.
FG-1K: Goodyear FG-1 as drone.
FG-3: Turbosupercharger version converted from FG-1D.
FG-4: Goodyear F4U-4, never delivered.
AU-1: U.S. Marines attack variant with extra armor to protect the pilot and fuel tank, and with the oil coolers relocated inboard to reduce vulnerability to ground fire. The supercharger was simplified, as the design was intended for low-altitude operation. Extra racks were also fitted. Fully loaded for combat, the AU-1 weighed 20% more than a fully loaded F4U-4 and was capable of carrying 8,200 lb of bombs. The AU-1 had a maximum speed of 238 miles per hour at 9,500 ft when loaded with 4,600 lb of bombs and a 150-gallon drop tank. When loaded with eight rockets and two 150-gallon drop tanks, maximum speed was 298 mph at 19,700 ft. When not carrying external loads, maximum speed was 389 mph at 14,000 ft. First produced in 1952, it was used in Korea and retired in 1957. Re-designated from F4U-6.
In March 1944, Pratt & Whitney requested an F4U-1 Corsair from Vought Aircraft for evaluation of their new P&W R-4360 Wasp Major four-row, 28-cylinder "corncob" radial engine. The F2G-1 and F2G-2 were significantly different aircraft: the F2G-1 featured a manually folded wing and propeller, while the F2G-2 had hydraulically operated folding wings, propeller, and a carrier arresting hook for carrier use. There were five pre-production XF2G-1s: BuNo 14691, 14692, 14693 (Race 94), 14694 (Race 18), and 14695. There were ten production F2Gs: five F2G-1s, BuNo 88454 (Museum of Flight in Seattle, Washington), 88455, 88456, 88457 (Race 84), and 88458 (Race 57); and five F2G-2s, BuNo 88459, 88460, 88461, 88462, and 88463 (Race 74). Five F2Gs were sold as surplus and went on to racing success after the war (indicated by the "Race" number after the BuNo), winning the Thompson trophy races in 1947 and 1949. The only surviving F2G-1s are BuNos 88454 and 88458 (Race 57). The only surviving F2G-2 was BuNo 88463 (Race 74); it was destroyed in a crash in September 2012 after a full restoration had been completed in July 2011.
"World Air Forces." Retrieved: 28 September 2012.
According to the FAA there are 45 privately owned F4Us in the U.S.
There is an F4U-4 Corsair with serial number 96995 owned by the Flying Bulls.
https://en.wikipedia.org/wiki?curid=11721
International Formula 3000
The Formula 3000 International Championship was a motor racing series created by the Fédération Internationale de l'Automobile (FIA) in 1985 to become the final preparatory step for drivers hoping to enter Formula One. Formula Two had become too expensive, and was dominated by works-run cars with factory engines; the hope was that Formula 3000 would offer quicker, cheaper, more open racing. The series began as an open specification, then tyres were standardized from 1986 onwards, followed by engines and chassis in 1996. The series ran annually until 2004, and was replaced in 2005 by the GP2 Series.
The series was staged as the Formula 3000 European Championship in 1985, as the Formula 3000 Intercontinental Championship in 1986 and 1987 and then as the Formula 3000 International Championship from 1988 to 2004.
Formula 3000 replaced Formula Two, and was so named because the engines used were limited to a maximum capacity of 3,000 cc. Initially, the Cosworth DFV was a popular choice, having been made obsolete in Formula One by the adoption of 1.5 litre turbocharged engines. The rules permitted any 90-degree V8 engine, fitted with a rev-limiter to keep power output under control. As well as the Cosworth, a Honda engine based on an Indy V8 by John Judd also appeared; a rumoured Lamborghini V8 never raced. In later years, a Mugen-Honda V8 became the unit of choice, eclipsing the DFV; Cosworth responded with the brand new AC engine. Costs began to increase significantly.
The first chassis from March, Automobiles Gonfaronnaises Sportives (AGS) and Ralt were developments of their existing 1984 Formula Two designs, although Lola's entry was based on and looked very much like an IndyCar. A few smaller teams tried obsolete three-litre Formula One cars (from Tyrrell, Williams, Minardi, Arrows and RAM), with little success—the Grand Prix and Indycar-derived entries were too unwieldy as their fuel tanks were about twice the size of those needed for F3000 races, and the weight distribution was not ideal. The first few years of the championship saw March establishing a superiority over Ralt and Lola—there was little to choose between the chassis, but more Marches were sold and ended up in better hands. In 1988, the ambitious Reynard marque entered with a brand new chassis; Reynard had won their first race in every formula they had previously entered, and did so again in F3000. The next couple of years saw Lola improve slightly—their car was competitive with the Reynard in 1990—and March slip, but both were crushed by the Reynard teams, and by the mid-90s, F3000 was a virtual Reynard monopoly, although Lola did eventually return with a promising car and the Japanese Footwork and Dome chassis were seen in Europe. Dallara briefly tried the series before moving up to Formula One, and AGS moved up from Formula Two but never recaptured their occasional success. At least one unraced F3000 chassis existed—the Wagner fitted with a straight-six short-stroke BMW. This was converted into a sports car, however.
The series saw occasional controversy. Definitive rules for the 1985 season did not appear until the championship was well under way. In 1987 questions were asked about the ability of some of the drivers, given the high number of accidents in the formula. In 1989 the eligibility of the new Reynard chassis was challenged, as it was raced with a different nose to the one that had been crash tested. This season also saw problems with driver changes - the cost of F3000 was escalating to the point that teams were finding it difficult to run drivers for a whole season. A rule limiting driver changes to two per car per season meant that some cars had to sit idle while drivers with budgets could not race them. In 1991, some Italian teams started using Agip's so-called "jungle juice" Formula One fuel, worth an estimated 15 bhp, giving their drivers a significant advantage. In the early years of the formula there was much concern about safety, with a high number of accidents resulting in injuries to drivers. There was one fatality in the International Championship - Marco Campos in the very last round of the 1995 series.
Formula 3000 races during the "open chassis" era tended to be of about 100–120 miles in distance, held at major circuits, either headlining meetings or paired with other international events. The "jewel in the crown" of the F3000 season was traditionally the Pau Grand Prix street race, rivalled for a few years by the Birmingham round. Most major circuits in France, Italy, Spain, Germany and the United Kingdom saw the series visit at least once.
In 1996, new rules introduced a single engine (a detuned Judd V8 engine, re-engineered by and badged as a Zytek) and chassis (Lola), to go along with tyre standardization (Avon) introduced in 1986. The following year the calendar was combined with that of Formula One, so the series became support races for the Grand Prix. Several Grand Prix teams established formal links with F3000 teams to develop young drivers (and engineering talent); these relationships varied from formal "junior teams" (such as the one McLaren set up for Nick Heidfeld) to fairly distant relationships based mostly upon shared sponsors and the use of the 'parent' team's name. The series grew dramatically through the late nineties, reaching an entry of nearly 40 cars - although this in itself was problematic as it meant many drivers failed to qualify. In 2000, the series was restricted to 15 teams of two cars each.
However, by 2002 expenses were once more very high and the number of entries, and sponsors, rapidly dwindled. International Formula 3000 was experiencing tough competition with cheaper formulae, such as European F3000 (using ex-FIA 1999 and 2002 Lola chassis), World Series by Nissan (also known as Formula Nissan) and Formula Renault V6 Eurocup. By the end of 2003, car counts had fallen to new lows.
The 2004 season was the last F3000 campaign, due in part to dwindling field sizes. In 2005 it was replaced with a new series known as GP2, with Renault backing.
Three past F3000 champions (Müller, Junqueira and Wirdheim) have never been entered in an F1 race. Montoya and Bourdais became champions in North American open-wheel racing (CART and Champ Car respectively), with Fittipaldi, Moreno, Junqueira and Wilson also becoming race winners there, and Wirdheim also joining those ranks. Müller became a BMW driver in WTCC touring car racing after having been a test driver for the BMW-Williams F1 project in 1999, as well as a racer of the Le Mans-winning BMW V12 LMR. Sospiri attempted to qualify for one Formula One race but failed to make it, as part of the disastrous MasterCard Lola team. Wirdheim has been third driver in practice sessions for Jaguar Racing, but has never participated in a race.
Three past F3000 champions have won an F1 Grand Prix: Alesi, Panis and Montoya (who also won the Indy 500).
https://en.wikipedia.org/wiki?curid=11724
Flunitrazepam
Flunitrazepam, also known as Rohypnol among other names, is a benzodiazepine used to treat severe insomnia and assist with anesthesia. As with other hypnotics, flunitrazepam has been advised to be prescribed only on a short-term basis or by those with chronic insomnia on an occasional basis.
It was patented in 1962 and came into medical use in 1974. Flunitrazepam has been referred to as a date rape drug, though the percentage of reported rape cases in which it is involved is small.
In countries where this drug is available, it is prescribed for the treatment of severe sleeping problems and, in some countries, to begin anesthesia. These were also the uses for which it was originally studied.
Adverse effects of flunitrazepam include dependence, both physical and psychological; reduced sleep quality resulting in somnolence; and overdose, resulting in excessive sedation, impairment of balance and speech, respiratory depression or coma, and possibly death. Because of the latter, flunitrazepam is commonly used in suicide. When used in pregnancy, it might cause hypotonia.
Flunitrazepam as with other benzodiazepines can lead to drug dependence. Discontinuation may result in the appearance of withdrawal symptoms. Abrupt withdrawal may lead to a benzodiazepine withdrawal syndrome characterised by seizures, psychosis, insomnia, and anxiety. Rebound insomnia, worse than baseline insomnia, typically occurs after discontinuation of flunitrazepam even from short-term single nightly dose therapy.
Flunitrazepam may cause a paradoxical reaction in some individuals causing symptoms including anxiety, aggressiveness, agitation, confusion, disinhibition, loss of impulse control, talkativeness, violent behavior, and even convulsions. Paradoxical adverse effects may even lead to criminal behaviour.
Benzodiazepines such as flunitrazepam are lipophilic and rapidly penetrate membranes and, therefore, rapidly cross over into the placenta with significant uptake of the drug. Use of benzodiazepines including flunitrazepam in late pregnancy, especially high doses, may result in hypotonia, also known as floppy baby syndrome.
Flunitrazepam impairs cognitive functions. This may appear as lack of concentration, confusion and anterograde amnesia. It can be described as a hangover-like effect which can persist to the next day. It also impairs psychomotor functions similar to other benzodiazepines and nonbenzodiazepine hypnotic drugs; falls and hip fractures were frequently reported. The combination with alcohol increases these impairments. Partial, but incomplete tolerance develops to these impairments.
Other adverse effects include:
Benzodiazepines require special precaution if used in the elderly, during pregnancy, in children, in alcohol- or drug-dependent individuals, and in individuals with comorbid psychiatric disorders.
Impairment of driving skills with a resultant increased risk of road traffic accidents is probably the most important adverse effect. This side-effect is not unique to flunitrazepam but also occurs with other hypnotic drugs. Flunitrazepam seems to have a particularly high risk of road traffic accidents compared to other hypnotic drugs. Extreme caution should be exercised by drivers after taking flunitrazepam.
The use of flunitrazepam in combination with alcoholic beverages synergizes the adverse effects, and can lead to toxicity and death.
Flunitrazepam is a drug that is frequently involved in drug intoxication, including overdose. Overdose of flunitrazepam may result in excessive sedation, or impairment of balance or speech. This may progress in severe overdoses to respiratory depression or coma and possibly death. The risk of overdose is increased if flunitrazepam is taken in combination with CNS depressants such as ethanol (alcohol) and opioids. Flunitrazepam overdose responds to the benzodiazepine receptor antagonist flumazenil, which thus can be used as a treatment.
As of 2016, blood tests can identify flunitrazepam at concentrations as low as 4 ng/ml; the elimination half-life of the drug is 11–25 hours. In urine samples, metabolites can be identified for 60 hours to 28 days after ingestion, depending on the dose and analytical method used. Hair and saliva can also be analyzed; hair is useful when a long time has passed since ingestion, and saliva for workplace drug tests.
Flunitrazepam can be measured in blood or plasma to confirm a diagnosis of poisoning in hospitalized patients, provide evidence in an impaired driving arrest, or assist in a medicolegal death investigation. Blood or plasma flunitrazepam concentrations are usually in a range of 5–20 μg/L in persons receiving the drug therapeutically as a nighttime hypnotic, 10–50 μg/L in those arrested for impaired driving and 100–1000 μg/L in victims of acute fatal overdosage. Urine is often the preferred specimen for routine drug abuse monitoring purposes. The presence of 7-aminoflunitrazepam, a pharmacologically-active metabolite and "in vitro" degradation product, is useful for confirmation of flunitrazepam ingestion. In postmortem specimens, the parent drug may have been entirely degraded over time to 7-aminoflunitrazepam. Other metabolites include desmethylflunitrazepam and 3-hydroxydesmethylflunitrazepam.
The main pharmacological effects of flunitrazepam are the enhancement of GABA at various GABA receptors.
While 80% of flunitrazepam that is taken orally is absorbed, bioavailability in suppository form is closer to 50%.
Flunitrazepam has a long half-life of 18–26 hours, which means that flunitrazepam's effects after nighttime administration persist throughout the next day.
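A quick first-order elimination calculation (not from the source; the 18–26 hour range is the figure quoted above) illustrates why effects persist into the next day:

```python
def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of the original dose still present after first-order elimination:
    remaining = 0.5 ** (t / t_half)."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# With an 18-26 hour half-life, roughly 40-53% of a nighttime dose is still
# circulating a full 24 hours after administration.
low = fraction_remaining(24, 18)   # ~0.40
high = fraction_remaining(24, 26)  # ~0.53
print(f"{low:.2f}-{high:.2f} of the dose remains after 24 h")
```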
Flunitrazepam is lipophilic and is metabolised hepatically via oxidative pathways. The enzyme CYP3A4 is the main enzyme in its phase 1 metabolism in human liver microsomes.
Flunitrazepam is classed as a nitro-benzodiazepine. It is the fluorinated "N"-methyl derivative of nitrazepam. Other nitro-benzodiazepines include nitrazepam (the parent compound), nimetazepam (methylamino derivative) and clonazepam (2ʹ-chlorinated derivative).
Flunitrazepam was discovered at Roche as part of the benzodiazepine work led by Leo Sternbach; the patent application was filed in 1962 and it was first marketed in 1974.
Due to abuse of the drug for date rape and recreation, in 1998 Roche modified the formulation to give lower doses, make it less soluble, and add a blue dye for easier detection in drinks. It was never marketed in the US, and by 2016 had been withdrawn from the markets in Spain, France, Norway, Germany, and the UK.
A 1989 article in the "European Journal of Clinical Pharmacology" reported that benzodiazepines accounted for 52% of prescription forgeries, suggesting that benzodiazepines were a major prescription drug class of abuse. Nitrazepam accounted for 13% of forged prescriptions.
Flunitrazepam and other sedative hypnotic drugs are detected frequently in cases of people suspected of driving under the influence of drugs. Other benzodiazepines and nonbenzodiazepines (anxiolytic or hypnotic) such as zolpidem and zopiclone (as well as cyclopyrrolones, imidazopyridines, and pyrazolopyrimidines) are also found in high numbers of suspected drugged drivers. Many drivers have blood levels far exceeding the therapeutic dose range, suggesting a high degree of abuse potential for benzodiazepines and similar drugs.
In studies in Sweden, flunitrazepam was the second most common drug used in suicides, being found in about 16% of cases. In a retrospective Swedish study of 1,587 deaths, benzodiazepines were found in 159 cases. In suicides where benzodiazepines were implicated, flunitrazepam and nitrazepam occurred in significantly higher concentrations than in natural deaths. In 4 of the 159 cases in which benzodiazepines were found, benzodiazepines alone were the only cause of death. It was concluded that flunitrazepam and nitrazepam might be more toxic than other benzodiazepines.
Flunitrazepam is known to induce anterograde amnesia in sufficient doses; individuals are unable to remember certain events that they experienced while under the influence of the drug, which complicates investigations. This effect could be particularly dangerous if flunitrazepam is used to aid in the commission of sexual assault; victims may be unable to clearly recall the assault, the assailant, or the events surrounding the assault.
While use of flunitrazepam in sexual assault has been prominent in the media, as of 2015 it appears to be fairly rare; the use of alcohol and other benzodiazepine drugs in date rape appears to be a larger but underreported problem.
In the United Kingdom, the use of flunitrazepam and other "date rape" drugs has also been connected to stealing from sedated victims. An activist quoted by a British newspaper estimated that up to 2,000 individuals are robbed each year after being spiked with powerful sedatives, making drug-assisted robbery a more commonly reported problem than drug-assisted rape.
Flunitrazepam is a Schedule III drug under the international Convention on Psychotropic Substances of 1971.
Flunitrazepam is marketed under many brand names in the countries where it is legal. It also has many street names, including "roofie" and "ruffie".
https://en.wikipedia.org/wiki?curid=11725
Fuel cell
A fuel cell is an electrochemical cell that converts the chemical energy of a fuel (often hydrogen) and an oxidizing agent (often oxygen) into electricity through a pair of redox reactions. Fuel cells are different from most batteries in requiring a continuous source of fuel and oxygen (usually from air) to sustain the chemical reaction, whereas in a battery the chemical energy usually comes from metals and their ions or oxides that are commonly already present in the battery, except in flow batteries. Fuel cells can produce electricity continuously for as long as fuel and oxygen are supplied.
The first fuel cells were invented by Sir William Grove in 1838. The first commercial use of fuel cells came more than a century later following the invention of the hydrogen–oxygen fuel cell by Francis Thomas Bacon in 1932. The alkaline fuel cell, also known as the Bacon fuel cell after its inventor, has been used in NASA space programs since the mid-1960s to generate power for satellites and space capsules. Since then, fuel cells have been used in many other applications. Fuel cells are used for primary and backup power for commercial, industrial and residential buildings and in remote or inaccessible areas. They are also used to power fuel cell vehicles, including forklifts, automobiles, buses, boats, motorcycles and submarines.
There are many types of fuel cells, but they all consist of an anode, a cathode, and an electrolyte that allows ions, often positively charged hydrogen ions (protons), to move between the two sides of the fuel cell. At the anode a catalyst causes the fuel to undergo oxidation reactions that generate ions (often positively charged hydrogen ions) and electrons. The ions move from the anode to the cathode through the electrolyte. At the same time, electrons flow from the anode to the cathode through an external circuit, producing direct current electricity. At the cathode, another catalyst causes ions, electrons, and oxygen to react, forming water and possibly other products. Fuel cells are classified by the type of electrolyte they use and by the difference in startup time ranging from 1 second for proton exchange membrane fuel cells (PEM fuel cells, or PEMFC) to 10 minutes for solid oxide fuel cells (SOFC). A related technology is flow batteries, in which the fuel can be regenerated by recharging. Individual fuel cells produce relatively small electrical potentials, about 0.7 volts, so cells are "stacked", or placed in series, to create sufficient voltage to meet an application's requirements. In addition to electricity, fuel cells produce water, heat and, depending on the fuel source, very small amounts of nitrogen dioxide and other emissions. The energy efficiency of a fuel cell is generally between 40–60%; however, if waste heat is captured in a cogeneration scheme, efficiencies of up to 85% can be obtained.
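The efficiency figures above combine in a simple energy balance; the sketch below is illustrative only, and the fraction of waste heat actually recovered is an assumption, not a figure from the text:

```python
def chp_utilization(electrical_eff: float, heat_recovery_fraction: float) -> float:
    """Total fuel utilization when waste heat is partly captured (cogeneration).

    electrical_eff: fraction of fuel energy converted to electricity (0.40-0.60 per the text)
    heat_recovery_fraction: assumed fraction of the *waste* heat captured and used
    """
    waste_heat = 1.0 - electrical_eff
    return electrical_eff + heat_recovery_fraction * waste_heat

# A 50%-efficient cell capturing 70% of its waste heat reaches 85% total
# utilization, consistent with the "up to 85%" figure quoted above.
print(f"{chp_utilization(0.50, 0.70):.0%}")  # 85%
```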
The fuel cell market is growing, and in 2013 Pike Research estimated that the stationary fuel cell market will reach 50 GW by 2020.
The first references to hydrogen fuel cells appeared in 1838. In a letter dated October 1838 but published in the December 1838 edition of "The London and Edinburgh Philosophical Magazine and Journal of Science", Welsh physicist and barrister Sir William Grove wrote about the development of his first crude fuel cells. He used a combination of sheet iron, copper and porcelain plates, and a solution of sulphate of copper and dilute acid. In a letter to the same publication written in December 1838 but published in June 1839, German physicist Christian Friedrich Schönbein discussed the first crude fuel cell that he had invented. His letter discussed current generated from hydrogen and oxygen dissolved in water. Grove later sketched his design, in 1842, in the same journal. The fuel cell he made used similar materials to today's phosphoric acid fuel cell.
In 1932, English engineer Francis Thomas Bacon successfully developed a 5 kW stationary fuel cell. The alkaline fuel cell (AFC), also known as the Bacon fuel cell after its inventor, is one of the most developed fuel cell technologies, which NASA has used since the mid-1960s.
In 1955, W. Thomas Grubb, a chemist working for the General Electric Company (GE), further modified the original fuel cell design by using a sulphonated polystyrene ion-exchange membrane as the electrolyte. Three years later another GE chemist, Leonard Niedrach, devised a way of depositing platinum onto the membrane, which served as catalyst for the necessary hydrogen oxidation and oxygen reduction reactions. This became known as the "Grubb-Niedrach fuel cell". GE went on to develop this technology with NASA and McDonnell Aircraft, leading to its use during Project Gemini. This was the first commercial use of a fuel cell. In 1959, a team led by Harry Ihrig built a 15 kW fuel cell tractor for Allis-Chalmers, which was demonstrated across the U.S. at state fairs. This system used potassium hydroxide as the electrolyte and compressed hydrogen and oxygen as the reactants. Later in 1959, Bacon and his colleagues demonstrated a practical five-kilowatt unit capable of powering a welding machine. In the 1960s, Pratt & Whitney licensed Bacon's U.S. patents for use in the U.S. space program to supply electricity and drinking water (hydrogen and oxygen being readily available from the spacecraft tanks). In 1991, the first hydrogen fuel cell automobile was developed by Roger Billings.
UTC Power was the first company to manufacture and commercialize a large, stationary fuel cell system for use as a co-generation power plant in hospitals, universities and large office buildings.
In recognition of the fuel cell industry and America's role in fuel cell development, the US Senate recognized 8 October 2015 as National Hydrogen and Fuel Cell Day, passing S. RES 217. The date was chosen in recognition of the atomic weight of hydrogen (1.008).
Fuel cells come in many varieties; however, they all work in the same general manner. They are made up of three adjacent segments: the anode, the electrolyte, and the cathode. Two chemical reactions occur at the interfaces of the three different segments. The net result of the two reactions is that fuel is consumed, water or carbon dioxide is created, and an electric current is created, which can be used to power electrical devices, normally referred to as the load.
At the anode a catalyst oxidizes the fuel, usually hydrogen, turning the fuel into a positively charged ion and a negatively charged electron. The electrolyte is a substance specifically designed so ions can pass through it, but the electrons cannot. The freed electrons travel through a wire creating the electric current. The ions travel through the electrolyte to the cathode. Once reaching the cathode, the ions are reunited with the electrons and the two react with a third chemical, usually oxygen, to create water or carbon dioxide.
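For the common hydrogen–oxygen case, the anode and cathode reactions just described, and their sum, can be written as:

```latex
\begin{align*}
\text{Anode (oxidation):} \quad & \mathrm{H_2} \longrightarrow 2\,\mathrm{H^+} + 2\,e^- \\
\text{Cathode (reduction):} \quad & \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \longrightarrow \mathrm{H_2O} \\
\text{Overall:} \quad & \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \longrightarrow \mathrm{H_2O}
\end{align*}
```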
Design features in a fuel cell include the electrolyte substance, which usually defines the type of fuel cell; the fuel used, most commonly hydrogen; the anode catalyst, which breaks down the fuel into electrons and ions; and the cathode catalyst, which turns the ions into waste chemicals, most commonly water.
A typical fuel cell produces a voltage from 0.6–0.7 V at full rated load. Voltage decreases as current increases, due to several factors: activation loss; ohmic loss (the voltage drop due to resistance of the cell components and interconnections); and mass-transport loss (depletion of reactants at catalyst sites under high loads, causing rapid loss of voltage).
To deliver the desired amount of energy, the fuel cells can be combined in series to yield higher voltage, and in parallel to allow a higher current to be supplied. Such a design is called a "fuel cell stack". The cell surface area can also be increased, to allow higher current from each cell.
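The series/parallel stacking described above can be sketched numerically; the 0.7 V per-cell figure comes from the text, while the per-cell current limit and the target figures are illustrative assumptions:

```python
import math

CELL_VOLTAGE = 0.7       # volts per cell at rated load (figure from the text)
CELL_MAX_CURRENT = 50.0  # amps per cell -- assumption; in practice set by cell area

def stack_layout(target_voltage: float, target_current: float):
    """Cells in series to reach the voltage, parallel strings to reach the current."""
    in_series = math.ceil(target_voltage / CELL_VOLTAGE)
    in_parallel = math.ceil(target_current / CELL_MAX_CURRENT)
    return in_series, in_parallel

series, parallel = stack_layout(48.0, 120.0)  # e.g. a hypothetical 48 V, 120 A load
print(f"{series} cells in series x {parallel} strings = {series * parallel} cells")
```

As the text notes, increasing cell surface area is the usual alternative to adding parallel strings.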
In the archetypical hydrogen–oxygen proton-exchange membrane fuel cell design, a proton-conducting polymer membrane (typically Nafion) contains the electrolyte solution that separates the anode and cathode sides. This was called a "solid polymer electrolyte fuel cell" (SPEFC) in the early 1970s, before the proton exchange mechanism was well understood. (Note that the synonyms "polymer electrolyte membrane" and "proton exchange membrane" result in the same acronym.)
On the anode side, hydrogen diffuses to the anode catalyst, where it dissociates into protons and electrons. The protons are conducted through the membrane to the cathode, but the electrons are forced to travel in an external circuit (supplying power) because the membrane is electrically insulating. On the cathode catalyst, oxygen molecules react with the electrons (which have traveled through the external circuit) and protons to form water.
In addition to this pure hydrogen type, there are hydrocarbon fuels for fuel cells, including diesel, methanol ("see:" direct-methanol fuel cells and indirect methanol fuel cells) and chemical hydrides. The waste products with these types of fuel are carbon dioxide and water. When hydrogen is used, carbon dioxide is released when methane from natural gas is combined with steam, in a process called steam methane reforming, to produce the hydrogen. This can take place in a different location to the fuel cell, potentially allowing the hydrogen fuel cell to be used indoors—for example, in fork lifts.
The different components of a PEMFC are
The materials used for the different parts of fuel cells differ by type. The bipolar plates may be made of different types of materials, such as metal, coated metal, graphite, flexible graphite, C–C composite and carbon–polymer composites. The membrane electrode assembly (MEA) is referred to as the heart of the PEMFC and is usually made of a proton exchange membrane sandwiched between two catalyst-coated carbon papers. Platinum and similar noble metals are usually used as the catalyst for PEMFC. The electrolyte could be a polymer membrane.
Many companies are working on techniques to reduce cost in a variety of ways, including reducing the amount of platinum needed in each individual cell. Ballard Power Systems has experimented with a catalyst enhanced with carbon silk, which allows a 30% reduction (1.0–0.7 mg/cm²) in platinum usage without loss of performance. Monash University, Melbourne uses PEDOT as a cathode. A study published in 2011 documented the first metal-free electrocatalyst, using relatively inexpensive doped carbon nanotubes that are less than 1% the cost of platinum and of equal or superior performance. A more recently published article demonstrated how the environmental burdens change when carbon nanotubes are used as the carbon substrate for platinum.
Phosphoric acid fuel cells (PAFC) were first designed and introduced in 1961 by G. V. Elmore and H. A. Tanner. In these cells phosphoric acid is used as the electrolyte: it conducts positive hydrogen ions from the anode to the cathode but is electronically non-conductive, forcing electrons to travel from anode to cathode through an external electrical circuit. These cells commonly operate at temperatures of 150 to 200 degrees Celsius. This high temperature causes heat and energy loss if the heat is not removed and used properly; the heat can be used to produce steam for air conditioning systems or other thermal energy-consuming systems. Using this heat in cogeneration can enhance the efficiency of phosphoric acid fuel cells from 40–50% to about 80%. Since the hydrogen ion production rate at the anode is small, platinum is used as a catalyst to increase this ionization rate. A key disadvantage of these cells is the use of an acidic electrolyte, which increases the corrosion or oxidation of components exposed to phosphoric acid.
Solid acid fuel cells (SAFCs) are characterized by the use of a solid acid material as the electrolyte. At low temperatures, solid acids have an ordered molecular structure like most salts. At warmer temperatures (between 140–150°C for CsHSO4), some solid acids undergo a phase transition to become highly disordered "superprotonic" structures, which increases conductivity by several orders of magnitude. The first proof-of-concept SAFCs were developed in 2000 using cesium hydrogen sulfate (CsHSO4). Current SAFC systems use cesium dihydrogen phosphate (CsH2PO4) and have demonstrated lifetimes in the thousands of hours.
The alkaline fuel cell or hydrogen–oxygen fuel cell was designed and first demonstrated publicly by Francis Thomas Bacon in 1959. It was used as a primary source of electrical energy in the Apollo space program. The cell consists of two porous carbon electrodes impregnated with a suitable catalyst such as Pt, Ag or CoO. The space between the two electrodes is filled with a concentrated solution of KOH or NaOH, which serves as the electrolyte. H2 and O2 gas are bubbled into the electrolyte through the porous carbon electrodes, so the overall reaction is the combination of hydrogen gas and oxygen gas to form water. The cell runs continuously until the reactant supply is exhausted. This type of cell operates efficiently in the temperature range 343–413 K and provides a potential of about 0.9 V. The alkaline anion exchange membrane fuel cell (AAEMFC) is a type of AFC that employs a solid polymer electrolyte instead of aqueous potassium hydroxide (KOH), and it offers advantages over the aqueous AFC.
Solid oxide fuel cells (SOFCs) use a solid material, most commonly a ceramic material called yttria-stabilized zirconia (YSZ), as the electrolyte. Because SOFCs are made entirely of solid materials, they are not limited to the flat plane configuration of other types of fuel cells and are often designed as rolled tubes. They require high operating temperatures (800–1000 °C) and can be run on a variety of fuels including natural gas.
SOFCs are unique in that negatively charged oxygen ions travel from the cathode (positive side of the fuel cell) to the anode (negative side), instead of positively charged hydrogen ions travelling from the anode to the cathode, as is the case in all other types of fuel cells. Oxygen gas is fed through the cathode, where it absorbs electrons to create oxygen ions. The oxygen ions then travel through the electrolyte to react with hydrogen gas at the anode. The reaction at the anode produces electricity and water as by-products. Carbon dioxide may also be a by-product depending on the fuel, but the carbon emissions from an SOFC system are less than those from a fossil fuel combustion plant. The chemical reactions for the SOFC system can be expressed as follows:
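For a cell running on hydrogen, the standard textbook half-reactions and overall reaction are:

```
Anode:    H2 + O^2−   →  H2O + 2 e−
Cathode:  O2 + 4 e−   →  2 O^2−
Overall:  2 H2 + O2   →  2 H2O
```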
SOFC systems can run on fuels other than pure hydrogen gas. However, since hydrogen is necessary for the reactions listed above, the fuel selected must contain hydrogen atoms. For the fuel cell to operate, the fuel must be converted into pure hydrogen gas. SOFCs are capable of internally reforming light hydrocarbons such as methane (natural gas), propane and butane. These fuel cells are at an early stage of development.
Challenges exist in SOFC systems due to their high operating temperatures. One such challenge is the potential for carbon dust to build up on the anode, which slows down the internal reforming process. Research to address this "carbon coking" issue at the University of Pennsylvania has shown that the use of copper-based cermet (heat-resistant materials made of ceramic and metal) can reduce coking and the loss of performance. Another disadvantage of SOFC systems is slow start-up time, making SOFCs less useful for mobile applications. Despite these disadvantages, a high operating temperature provides an advantage by removing the need for a precious metal catalyst like platinum, thereby reducing cost. Additionally, waste heat from SOFC systems may be captured and reused, increasing the theoretical overall efficiency to as high as 80–85%.
The high operating temperature is largely due to the physical properties of the YSZ electrolyte. As temperature decreases, so does its ionic conductivity; therefore, to obtain optimum performance of the fuel cell, a high operating temperature is required. According to its website, Ceres Power, a UK SOFC manufacturer, has developed a method of reducing the operating temperature of its SOFC system to 500–600 degrees Celsius. It replaced the commonly used YSZ electrolyte with a CGO (cerium gadolinium oxide) electrolyte. The lower operating temperature allows the use of stainless steel instead of ceramic as the cell substrate, which reduces the cost and start-up time of the system.
Molten carbonate fuel cells (MCFCs) require a high operating temperature, similar to SOFCs. MCFCs use a lithium potassium carbonate salt as an electrolyte; this salt liquefies at high temperatures, allowing for the movement of charge within the cell – in this case, negative carbonate ions.
Like SOFCs, MCFCs are capable of converting fossil fuel to a hydrogen-rich gas in the anode, eliminating the need to produce hydrogen externally. The reforming process creates CO2 emissions. MCFC-compatible fuels include natural gas, biogas and gas produced from coal. The hydrogen in the gas reacts with carbonate ions from the electrolyte to produce water, carbon dioxide, electrons and small amounts of other chemicals. The electrons travel through an external circuit creating electricity and return to the cathode. There, oxygen from the air and carbon dioxide recycled from the anode react with the electrons to form carbonate ions that replenish the electrolyte, completing the circuit. The chemical reactions for an MCFC system can be expressed as follows:
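For hydrogen fuel, the standard textbook MCFC reactions are:

```
Anode:    H2 + CO3^2−         →  H2O + CO2 + 2 e−
Cathode:  ½ O2 + CO2 + 2 e−   →  CO3^2−
Overall:  H2 + ½ O2           →  H2O   (CO2 is recycled from anode to cathode)
```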
As with SOFCs, MCFC disadvantages include slow start-up times because of their high operating temperature. This makes MCFC systems not suitable for mobile applications, and this technology will most likely be used for stationary fuel cell purposes. The main challenge of MCFC technology is the cells' short life span. The high-temperature and carbonate electrolyte lead to corrosion of the anode and cathode. These factors accelerate the degradation of MCFC components, decreasing the durability and cell life. Researchers are addressing this problem by exploring corrosion-resistant materials for components as well as fuel cell designs that may increase cell life without decreasing performance.
MCFCs hold several advantages over other fuel cell technologies, including their resistance to impurities. They are not prone to "carbon coking", which refers to carbon build-up on the anode that results in reduced performance by slowing down the internal fuel reforming process. Therefore, carbon-rich fuels like gases made from coal are compatible with the system. The United States Department of Energy claims that coal, itself, might even be a fuel option in the future, assuming the system can be made resistant to impurities such as sulfur and particulates that result from converting coal into hydrogen. MCFCs also have relatively high efficiencies. They can reach a fuel-to-electricity efficiency of 50%, considerably higher than the 37–42% efficiency of a phosphoric acid fuel cell plant. Efficiencies can be as high as 65% when the fuel cell is paired with a turbine, and 85% if heat is captured and used in a combined heat and power (CHP) system.
FuelCell Energy, a Connecticut-based fuel cell manufacturer, develops and sells MCFC fuel cells. The company says that their MCFC products range from 300 kW to 2.8 MW systems that achieve 47% electrical efficiency and can utilize CHP technology to obtain higher overall efficiencies. One product, the DFC-ERG, is combined with a gas turbine and, according to the company, it achieves an electrical efficiency of 65%.
The electric storage fuel cell is a conventional battery chargeable by electric power input, using the conventional electro-chemical effect. However, the battery further includes hydrogen (and oxygen) inputs for alternatively charging the battery chemically.
Glossary of terms in table:
The energy efficiency of a system or device that converts energy is measured by the ratio of the amount of useful energy put out by the system ("output energy") to the total amount of energy that is put in ("input energy") or by useful output energy as a percentage of the total input energy. In the case of fuel cells, useful output energy is measured in electrical energy produced by the system. Input energy is the energy stored in the fuel. According to the U.S. Department of Energy, fuel cells are generally between 40–60% energy efficient. This is higher than some other systems for energy generation. For example, the typical internal combustion engine of a car is about 25% energy efficient. In combined heat and power (CHP) systems, the heat produced by the fuel cell is captured and put to use, increasing the efficiency of the system to up to 85–90%.
The theoretical maximum efficiency of any type of power generation system is never reached in practice, and it does not consider other steps in power generation, such as production, transportation and storage of fuel and conversion of the electricity into mechanical power. However, this calculation allows the comparison of different types of power generation. The maximum theoretical energy efficiency of a fuel cell is 83%, operating at low power density and using pure hydrogen and oxygen as reactants (assuming no heat recapture). According to the World Energy Council, this compares with a maximum theoretical efficiency of 58% for internal combustion engines.
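The 83% figure follows from thermodynamics: the electrical work a fuel cell can deliver is bounded by the Gibbs free energy change of the hydrogen–oxygen reaction, while the energy input is the enthalpy of combustion. A quick check, using standard tabulated values for liquid water (these values come from reference tables, not from this article):

```python
# Maximum thermodynamic efficiency of a hydrogen fuel cell.
# Electrical work is bounded by the Gibbs free energy change of
# H2 + 1/2 O2 -> H2O(l); energy input is the enthalpy of combustion
# (higher heating value). Standard-state values:
dG = -237.1  # kJ/mol, Gibbs free energy of formation of liquid water
dH = -285.8  # kJ/mol, enthalpy of formation of liquid water

max_efficiency = dG / dH
print(f"Maximum theoretical efficiency: {max_efficiency:.1%}")  # ~83.0%
```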
In a fuel-cell vehicle the tank-to-wheel efficiency is greater than 45% at low loads, with average values of about 36% when a driving cycle like the NEDC (New European Driving Cycle) is used as the test procedure. The comparable NEDC value for a diesel vehicle is 22%. In 2008 Honda released a demonstration fuel cell electric vehicle (the Honda FCX Clarity) with a fuel stack claiming 60% tank-to-wheel efficiency.
It is also important to take losses due to fuel production, transportation, and storage into account. Fuel cell vehicles running on compressed hydrogen may have a power-plant-to-wheel efficiency of 22% if the hydrogen is stored as high-pressure gas, and 17% if it is stored as liquid hydrogen. Fuel cells cannot store energy like a battery, except as hydrogen, but in some applications, such as stand-alone power plants based on discontinuous sources such as solar or wind power, they are combined with electrolyzers and storage systems to form an energy storage system. As of 2019, 90% of hydrogen is used for oil refining, chemicals and fertilizer production, and 98% of hydrogen is produced by steam methane reforming, which emits carbon dioxide. The overall efficiency (electricity to hydrogen and back to electricity) of such plants (known as "round-trip efficiency"), using pure hydrogen and pure oxygen can be "from 35 up to 50 percent", depending on gas density and other conditions. The electrolyzer/fuel cell system can store indefinite quantities of hydrogen, and is therefore suited for long-term storage.
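The round-trip figure is simply the product of the stage efficiencies in the electricity-to-hydrogen-to-electricity chain. A minimal sketch, using illustrative stage values (assumptions for demonstration, not figures from this article), lands near the low end of the quoted 35–50% band:

```python
# Round-trip efficiency (electricity -> hydrogen -> electricity) is the
# product of the stage efficiencies. These stage values are illustrative
# assumptions, not measurements quoted in the article.
stage_efficiency = {
    "electrolysis": 0.72,          # grid electricity -> hydrogen
    "compression_storage": 0.90,   # compressing and storing the gas
    "fuel_cell": 0.55,             # hydrogen -> electricity
}

round_trip = 1.0
for eff in stage_efficiency.values():
    round_trip *= eff

print(f"Round-trip efficiency: {round_trip:.1%}")  # ~35.6%
```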
Solid-oxide fuel cells produce heat from the recombination of oxygen and hydrogen. The ceramic can run as hot as 800 degrees Celsius. This heat can be captured and used to heat water in a micro combined heat and power (m-CHP) application. When the heat is captured, total efficiency can reach 80–90% at the unit, though this does not consider production and distribution losses. CHP units are being developed today for the European home market.
Professor Jeremy P. Meyers, in the Electrochemical Society journal "Interface" in 2008, wrote, "While fuel cells are efficient relative to combustion engines, they are not as efficient as batteries, due primarily to the inefficiency of the oxygen reduction reaction (and ... the oxygen evolution reaction, should the hydrogen be formed by electrolysis of water)... [T]hey make the most sense for operation disconnected from the grid, or when fuel can be provided continuously. For applications that require frequent and relatively rapid start-ups ... where zero emissions are a requirement, as in enclosed spaces such as warehouses, and where hydrogen is considered an acceptable reactant, a [PEM fuel cell] is becoming an increasingly attractive choice [if exchanging batteries is inconvenient]". In 2013 military organizations were evaluating fuel cells to determine if they could significantly reduce the battery weight carried by soldiers.
Stationary fuel cells are used for commercial, industrial and residential primary and backup power generation. Fuel cells are very useful as power sources in remote locations, such as spacecraft, remote weather stations, large parks, communications centers, rural locations including research stations, and in certain military applications. A fuel cell system running on hydrogen can be compact and lightweight, and have no major moving parts. Because fuel cells have no moving parts and do not involve combustion, in ideal conditions they can achieve up to 99.9999% reliability. This equates to less than one minute of downtime in a six-year period.
Since fuel cell electrolyzer systems do not store fuel in themselves, but rather rely on external storage units, they can be successfully applied in large-scale energy storage, rural areas being one example. There are many different types of stationary fuel cells so efficiencies vary, but most are between 40% and 60% energy efficient. However, when the fuel cell's waste heat is used to heat a building in a cogeneration system this efficiency can increase to 85%. This is significantly more efficient than traditional coal power plants, which are only about one third energy efficient. Assuming production at scale, fuel cells could save 20–40% on energy costs when used in cogeneration systems. Fuel cells are also much cleaner than traditional power generation; a fuel cell power plant using natural gas as a hydrogen source would create less than one ounce of pollution (other than CO2) for every 1,000 kW·h produced, compared to 25 pounds of pollutants generated by conventional combustion systems. Fuel cells also produce 97% less nitrogen oxide emissions than conventional coal-fired power plants.
One such pilot program is operating on Stuart Island in Washington State. There the Stuart Island Energy Initiative has built a complete, closed-loop system: solar panels power an electrolyzer, which makes hydrogen. The hydrogen is stored in a tank and runs a ReliOn fuel cell to provide full electric back-up to the off-the-grid residence. Another closed-loop system was unveiled in late 2011 in Hempstead, NY.
Fuel cells can be used with low-quality gas from landfills or waste-water treatment plants to generate power and lower methane emissions. A 2.8 MW fuel cell plant in California is said to be the largest of the type.
Combined heat and power (CHP) fuel cell systems, including micro combined heat and power (MicroCHP) systems, are used to generate both electricity and heat for homes (see home fuel cell), office buildings and factories. The system generates constant electric power (selling excess power back to the grid when it is not consumed) and at the same time produces hot air and water from the waste heat. As a result, CHP systems have the potential to save primary energy, as they can make use of waste heat which is generally rejected by thermal energy conversion systems. A typical home fuel cell has a capacity of 1–3 kW electrical and 4–8 kW thermal. CHP systems linked to absorption chillers use their waste heat for refrigeration.
The waste heat from fuel cells can be diverted during the summer directly into the ground, providing further cooling, while during winter it can be pumped directly into the building. The University of Minnesota owns the patent rights to this type of system.
Co-generation systems can reach 85% efficiency (40–60% electric and the remainder as thermal). Phosphoric-acid fuel cells (PAFC) comprise the largest segment of existing CHP products worldwide and can provide combined efficiencies close to 90%. Molten carbonate (MCFC) and solid-oxide fuel cells (SOFC) are also used for combined heat and power generation and have electrical energy efficiencies around 60%. Disadvantages of co-generation systems include slow ramp-up and ramp-down rates, high cost and short lifetime. Their need for a hot-water storage tank to smooth out thermal output is also a serious disadvantage in the domestic marketplace, where space in domestic properties is at a great premium.
Delta-ee consultants stated in 2013 that, with 64% of global sales, fuel cell micro-combined heat and power passed conventional systems in sales in 2012. The Japanese ENE FARM project was expected to pass 100,000 FC mCHP systems in 2014; 34,213 PEMFC and 2,224 SOFC units were installed in the period 2012–2014, 30,000 units on LNG and 6,000 on LPG.
As of 2017, about 6500 FCEVs have been leased or sold worldwide. Three fuel cell electric vehicles have been introduced for commercial lease and sale: the Honda Clarity, Toyota Mirai and the Hyundai ix35 FCEV. Additional demonstration models include the Honda FCX Clarity and Mercedes-Benz F-Cell. As of June 2011 demonstration FCEVs had driven more than , with more than 27,000 refuelings. Fuel cell electric vehicles feature an average range of 314 miles between refuelings. They can be refueled in less than 5 minutes. The U.S. Department of Energy's Fuel Cell Technology Program states that, as of 2011, fuel cells achieved 53–59% efficiency at one-quarter power and 42–53% vehicle efficiency at full power, and a durability of over with less than 10% degradation. In a Well-to-Wheels simulation analysis that "did not address the economics and market constraints", General Motors and its partners estimated that per mile traveled, a fuel cell electric vehicle running on compressed gaseous hydrogen produced from natural gas could use about 40% less energy and emit 45% less greenhouse gases than an internal combustion vehicle. A lead engineer from the Department of Energy whose team is testing fuel cell cars said in 2011 that the potential appeal is that "these are full-function vehicles with no limitations on range or refueling rate so they are a direct replacement for any vehicle. For instance, if you drive a full sized SUV and pull a boat up into the mountains, you can do that with this technology and you can't with current battery-only vehicles, which are more geared toward city driving."
In 2015, Toyota introduced its first fuel cell vehicle, the Mirai, at a price of $57,000. Hyundai introduced the limited production Hyundai ix35 FCEV under a lease agreement. In 2016, Honda started leasing the Honda Clarity Fuel Cell.
Some commentators believe that hydrogen fuel cell cars will never become economically competitive with other technologies or that it will take decades for them to become profitable. Elon Musk, CEO of battery-electric vehicle maker Tesla Motors, stated in 2015 that fuel cells for use in cars will never be commercially viable because of the inefficiency of producing, transporting and storing hydrogen and the flammability of the gas, among other reasons. Jeremy P. Meyers estimated in 2008 that cost reductions over a production ramp-up period will take about 20 years after fuel-cell cars are introduced before they will be able to compete commercially with current market technologies, including gasoline internal combustion engines. In 2011, the chairman and CEO of General Motors, Daniel Akerson, stated that while the cost of hydrogen fuel cell cars is decreasing: "The car is still too expensive and probably won't be practical until the 2020-plus period, I don't know."
In 2012, Lux Research, Inc. issued a report that stated: "The dream of a hydrogen economy ... is no nearer". It concluded that "Capital cost ... will limit adoption to a mere 5.9 GW" by 2030, providing "a nearly insurmountable barrier to adoption, except in niche applications". The analysis concluded that, by 2030, PEM stationary market will reach $1 billion, while the vehicle market, including forklifts, will reach a total of $2 billion. Other analyses cite the lack of an extensive hydrogen infrastructure in the U.S. as an ongoing challenge to Fuel Cell Electric Vehicle commercialization. In 2006, a study for the IEEE showed that for hydrogen produced via electrolysis of water: "Only about 25% of the power generated from wind, water, or sun is converted to practical use." The study further noted that "Electricity obtained from hydrogen fuel cells appears to be four times as expensive as electricity drawn from the electrical transmission grid. ... Because of the high energy losses [hydrogen] cannot compete with electricity." Furthermore, the study found: "Natural gas reforming is not a sustainable solution". "The large amount of energy required to isolate hydrogen from natural compounds (water, natural gas, biomass), package the light gas by compression or liquefaction, transfer the energy carrier to the user, plus the energy lost when it is converted to useful electricity with fuel cells, leaves around 25% for practical use."
In 2014, Joseph Romm, the author of "The Hype About Hydrogen" (2005), said that FCVs still had not overcome the high fueling cost, lack of fuel-delivery infrastructure, and pollution caused by producing hydrogen. "It would take several miracles to overcome all of those problems simultaneously in the coming decades." He concluded that renewable energy cannot economically be used to make hydrogen for an FCV fleet "either now or in the future." Greentech Media's analyst reached similar conclusions in 2014. In 2015, "Clean Technica" listed some of the disadvantages of hydrogen fuel cell vehicles. So did "Car Throttle".
A 2019 video by "Real Engineering" noted that, notwithstanding the introduction of vehicles that run on hydrogen, using hydrogen as a fuel for cars does not help to reduce carbon emissions from transportation. The 95% of hydrogen still produced from fossil fuels releases carbon dioxide, and producing hydrogen from water is an energy-consuming process. Storing hydrogen requires more energy either to cool it down to the liquid state or to put it into tanks under high pressure, and delivering the hydrogen to fueling stations requires more energy and may release more carbon. The hydrogen needed to move a FCV a kilometer costs approximately 8 times as much as the electricity needed to move a BEV the same distance. A 2020 assessment concluded that hydrogen vehicles are still only 38% efficient, while battery EVs are 80% efficient.
There were about 100 fuel cell buses running around the world, including in Whistler, Canada; San Francisco, United States; Hamburg, Germany; Shanghai, China; London, England; and São Paulo, Brazil. Most of these were manufactured by UTC Power, Toyota, Ballard, Hydrogenics, and Proton Motor. UTC buses had driven more than by 2011. Fuel cell buses have a 39% to 141% higher fuel economy than diesel buses and natural gas buses.
As of 2019, the NREL is evaluating several current and planned fuel cell bus projects in the U.S.
A fuel cell forklift (also called a fuel cell lift truck) is a fuel cell-powered industrial forklift truck used to lift and transport materials. In 2013 there were over 4,000 fuel cell forklifts used in material handling in the US, of which 500 received funding from DOE (2012). Fuel cell fleets are operated by various companies, including Sysco Foods, FedEx Freight, GENCO (at Wegmans, Coca-Cola, Kimberly Clark, and Whole Foods), and H-E-B Grocers. Europe demonstrated 30 fuel cell forklifts with Hylift and extended it with HyLIFT-EUROPE to 200 units, with other projects in France and Austria. Pike Research projected in 2011 that fuel cell-powered forklifts would be the largest driver of hydrogen fuel demand by 2020.
Most companies in Europe and the US do not use petroleum-powered forklifts, as these vehicles work indoors where emissions must be controlled and instead use electric forklifts. Fuel cell-powered forklifts can provide benefits over battery-powered forklifts as they can be refueled in 3 minutes and they can be used in refrigerated warehouses, where their performance is not degraded by lower temperatures. The FC units are often designed as drop-in replacements.
In 2005 a British manufacturer of hydrogen-powered fuel cells, Intelligent Energy (IE), produced the first working hydrogen-run motorcycle called the ENV (Emission Neutral Vehicle). The motorcycle holds enough fuel to run for four hours, and to travel in an urban area, at a top speed of . In 2004 Honda developed a fuel-cell motorcycle that utilized the Honda FC Stack.
Other examples of motorbikes and bicycles that use hydrogen fuel cells include the Taiwanese company APFCT's scooter using the fueling system from Italy's Acta SpA and the Suzuki Burgman scooter with an IE fuel cell that received EU Whole Vehicle Type Approval in 2011. Suzuki Motor Corp. and IE have announced a joint venture to accelerate the commercialization of zero-emission vehicles.
In 2003, the world's first propeller-driven airplane to be powered entirely by a fuel cell was flown. The fuel cell was a stack design that allowed the fuel cell to be integrated with the plane's aerodynamic surfaces. Fuel cell-powered unmanned aerial vehicles (UAV) include a Horizon fuel cell UAV that set the record distance flown for a small UAV in 2007. Boeing researchers and industry partners throughout Europe conducted experimental flight tests in February 2008 of a manned airplane powered only by a fuel cell and lightweight batteries. The fuel cell demonstrator airplane, as it was called, used a proton exchange membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which was coupled to a conventional propeller.
In 2009 the Naval Research Laboratory's (NRL's) Ion Tiger utilized a hydrogen-powered fuel cell and flew for 23 hours and 17 minutes. Fuel cells are also being tested and considered to provide auxiliary power in aircraft, replacing fossil fuel generators that were previously used to start the engines and power on board electrical needs, while reducing carbon emissions. In 2016 a Raptor E1 drone made a successful test flight using a fuel cell that was lighter than the lithium-ion battery it replaced. The flight lasted 10 minutes at an altitude of , although the fuel cell reportedly had enough fuel to fly for two hours. The fuel was contained in approximately 100 solid pellets composed of a proprietary chemical within an unpressurized cartridge. The pellets are physically robust and operate at temperatures as warm as . The cell was from Arcola Energy.
The Lockheed Martin Skunk Works Stalker is an electric UAV powered by a solid oxide fuel cell.
The world's first fuel-cell boat, HYDRA, used an AFC system with 6.5 kW net output. Iceland has committed to converting its vast fishing fleet to use fuel cells to provide auxiliary power by 2015 and, eventually, to provide primary power in its boats. Amsterdam recently introduced its first fuel cell-powered boat, which ferries people around the city's canals.
The Type 212 submarines of the German and Italian navies use fuel cells to remain submerged for weeks without the need to surface.
The U212A is a non-nuclear submarine developed by the German naval shipyard Howaldtswerke-Deutsche Werft. The system consists of nine PEM fuel cells, providing between 30 kW and 50 kW each. The ship is silent, giving it an advantage in the detection of other submarines. A naval paper has theorized about the possibility of a nuclear–fuel cell hybrid, whereby the fuel cell is used when silent operations are required and is then replenished from the nuclear reactor (and water).
Portable fuel cell systems are generally classified as weighing under 10 kg and providing power of less than 5 kW. The potential market for smaller fuel cells is quite large, with a potential growth rate of up to 40% per annum and a market size of around $10 billion, leading a great deal of research to be devoted to the development of portable power cells. Within this market, two groups have been identified. The first is the micro fuel cell market, in the 1–50 W range, for powering smaller electronic devices. The second is the 1–5 kW range of generators for larger-scale power generation (e.g. military outposts, remote oil fields).
Micro fuel cells are primarily aimed at penetrating the market for phones and laptops. This can be attributed primarily to the advantageous energy density fuel cells provide over lithium-ion batteries when the entire system is considered. For a battery, this system includes the charger as well as the battery itself; for the fuel cell, it includes the cell, the necessary fuel and peripheral attachments. Taking the full system into consideration, fuel cells have been shown to provide 530 Wh/kg compared to 44 Wh/kg for lithium-ion batteries. However, while the weight of fuel cell systems offers a distinct advantage, the current costs are not in their favor: while a battery system generally costs around $1.20 per Wh, fuel cell systems cost around $5 per Wh, putting them at a significant disadvantage.
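The trade-off described above can be made concrete using only the figures quoted in this paragraph (530 vs 44 Wh/kg, $5 vs $1.20 per Wh); the 100 Wh energy budget is an arbitrary illustration:

```python
# Compare full-system weight and cost for a fixed energy budget, using
# the figures quoted above: fuel cell system 530 Wh/kg at ~$5/Wh versus
# lithium-ion battery system 44 Wh/kg at ~$1.20/Wh. The 100 Wh budget
# is an arbitrary illustration, not a value from the article.
energy_wh = 100.0

fc_weight_kg = energy_wh / 530    # ~0.19 kg
batt_weight_kg = energy_wh / 44   # ~2.27 kg
fc_cost_usd = energy_wh * 5.00    # ~$500
batt_cost_usd = energy_wh * 1.20  # ~$120

print(f"Fuel cell system: {fc_weight_kg:.2f} kg, ${fc_cost_usd:.0f}")
print(f"Battery system:   {batt_weight_kg:.2f} kg, ${batt_cost_usd:.0f}")
```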
As power demands for cell phones increase, fuel cells could become much more attractive options for larger power generation. Longer run time for phones and computers is something consumers often demand, so fuel cells could start to make strides into the laptop and cell phone markets. Prices will continue to fall as development of fuel cells accelerates. Current strategies for improving micro fuel cells include the use of carbon nanotubes: Girishkumar et al. showed that depositing nanotubes on electrode surfaces provides a substantially greater surface area, increasing the oxygen reduction rate.
Fuel cells for use in larger-scale operations also show much promise. Portable power systems that use fuel cells can be used in the leisure sector (e.g. RVs, cabins, marine), the industrial sector (e.g. power for remote locations including gas/oil well sites, communication towers, security and weather stations), and in the military sector. SFC Energy is a German manufacturer of direct methanol fuel cells for a variety of portable power systems. Ensol Systems Inc. is an integrator of portable power systems, using the SFC Energy DMFC. The key advantage of fuel cells in this market is the high power generation per unit weight. While fuel cells can be expensive, for remote locations that require dependable energy they are a strong option. For a 72-hour excursion the difference in weight is substantial, with a fuel cell weighing only 15 pounds compared to the 29 pounds of batteries needed for the same energy.
In 2013, "The New York Times" reported that there were "10 hydrogen stations available to the public in the entire United States: one in Columbia, S.C., eight in Southern California and the one in Emeryville". Later counts put the number of publicly accessible hydrogen refueling stations in the US at 31, 28 of which were located in California.
A public hydrogen refueling station in Iceland operated from 2003 to 2007, serving three buses in the public transport network of Reykjavík. The station produced its own hydrogen with an electrolyzing unit. Germany planned to expand its 14 stations to 50 by 2015 through the public–private partnership NOW GmbH.
By May 2017, there were 91 hydrogen fueling stations in Japan. As of 2016, Norway planned to build a network of hydrogen stations between the major cities, starting in 2017.
In 2012, fuel cell industry revenues exceeded $1 billion worldwide, with Asia-Pacific countries shipping more than three-quarters of all fuel cell systems. However, as of January 2014, no public company in the industry had yet become profitable. There were 140,000 fuel cell stacks shipped globally in 2010, up from 11,000 shipments in 2007, and from 2011 to 2012 worldwide fuel cell shipments grew at an annual rate of 85%. Tanaka Kikinzoku expanded its manufacturing facilities in 2011. Approximately 50% of fuel cell shipments in 2010 were stationary fuel cells, up from about a third in 2009, and the four dominant producing countries were the United States, Germany, Japan and South Korea. The Department of Energy's Solid State Energy Conversion Alliance found that, as of January 2011, stationary fuel cells generated power at approximately $724 to $775 per kilowatt installed. In 2011, Bloom Energy, a major fuel cell supplier, said that its fuel cells generated power at 9–11 cents per kilowatt-hour, including the price of fuel, maintenance, and hardware.
Industry groups predict that there are sufficient platinum resources for future demand, and in 2007, research at Brookhaven National Laboratory suggested that platinum could be replaced by a gold-palladium coating, which may be less susceptible to poisoning and thereby improve fuel cell lifetime. Another method would use iron and sulphur instead of platinum, lowering the cost of a fuel cell, since platinum is far more expensive than the equivalent amount of iron. The concept was being developed by a coalition of the John Innes Centre and the University of Milan-Bicocca. PEDOT cathodes are immune to carbon monoxide poisoning.
In 2016, Samsung "decided to drop fuel cell-related business projects, as the outlook of the market isn't good".
https://en.wikipedia.org/wiki?curid=11729
Finlandization
Finlandization is the process by which one powerful country makes a smaller neighboring country abide by the former's foreign policy rules, while allowing it to keep its nominal independence and its own political system. The term means "to become like Finland", referring to the influence of the Soviet Union on Finland's policies during the Cold War.
The term is often considered pejorative. It originated in the West German political debate of the late 1960s and 1970s. As the term was used in Germany and other NATO countries, it referred to the decision of a country not to challenge a more powerful neighbour in foreign politics, while maintaining national sovereignty. It is commonly used in reference to Finland's policies in relation to the Soviet Union during the Cold War, but it can refer more generally to similar international relations, such as Denmark's attitude toward Germany between 1871 and 1945, or the policies of the Swiss government towards Nazi Germany until the end of World War II.
In Germany, the term was used mainly by proponents of closer adaptation to US policies, chiefly Franz Josef Strauss, but was initially coined in scholarly debate, and made known by the German political scientists Walter Hallstein and Richard Löwenthal, reflecting feared effects of withdrawal of US troops from Germany. It came to be used in the debate of the NATO countries in response to Willy Brandt's attempts to normalise relations with East Germany, and the following widespread scepticism in Germany against NATO's Dual-Track Decision. Later, after the fall of the Soviet Union, the term has been used in Finland for the post-1968 radicalisation in the latter half of the Urho Kekkonen era.
In Finland, the term "Finlandization" was perceived as blunt criticism, stemming from an inability to understand the practicalities of how a small nation must deal with an adjacent superpower without losing its sovereignty. These practicalities arose especially from the lingering effects of Russian rule before the Finns first gained sovereignty, and from the precarious balance of power to the east: a geographically extended yet sparsely populated state facing a traditionally imperialist superpower directly across its border.
The reason Finland engaged in Finlandization was primarily Realpolitik: to survive. On the other hand, the threat of the Soviet Union was also exploited in Finland's domestic politics, in a way that possibly deepened Finlandization. Finland made such a deal with Joseph Stalin's government in the late 1940s, and it was largely respected by both parties—and to the gain of both parties—until the fall of the Soviet Union in 1991. While the Finnish political and intellectual elite mostly understood the term to refer to the foreign policy problems of other countries, and as meant mostly for domestic consumption in the speaker's own country, many ordinary Finns considered the term highly offensive. The Finnish political cartoonist Kari Suomalainen once explained Finlandization as "the art of bowing to the East without mooning the West".
Finland's foreign politics before this deal had been varied: independence from Imperial Russia with support of Imperial Germany in 1917; participation in the Russian Civil War (without official declaration of war) alongside the Triple Entente 1918–1920; a non-ratified alliance with Poland in 1922; association with the neutralist and democratic Scandinavian countries in the 1930s ended by the Winter War (1939) which Finland lost against the Soviet Union; and finally in 1940, a rapprochement with Nazi Germany, the only power able and willing to help Finland against the expansionist Soviet Union, which led to Finland's re-entry into the Second World War in 1941.
The Wehrmacht's defeat in the Battle of Stalingrad led Finland to basically revert to its 19th-century traditions, which had been perceived as highly successful until the Russification of Finland (1899–1905). Finland's leaders realised that opposing the Soviets head-on was no longer feasible. No international power was able to give the necessary support. Nazi Germany, Finland's chief supporter against Russia, was losing the war. Sweden was not big enough, and its leadership was wary of confronting Russia. The western powers were allied with the Soviet Union. Thus Finland had to face its bigger neighbour on its own, without any great power's protection. As in the 19th century, Finland chose not to challenge Soviet Russia's foreign policy, but exerted caution to keep its independence.
After the Paris Peace Treaty of 1947, Finland succeeded in retaining democracy and parliamentarism, despite the heavy political pressure on Finland's foreign and internal affairs by the Soviet Union. Finland's foreign relations were guided by the doctrine formulated by Juho Kusti Paasikivi, emphasising the necessity to maintain a good and trusting relationship with the Soviet Union.
Finland signed an Agreement of Friendship, Cooperation, and Mutual Assistance with the Soviet Union in April 1948, under which Finland was obliged to resist armed attacks by "Germany or its allies" against Finland, or against the Soviet Union through Finland, and, if necessary, ask for Soviet military aid to do so. At the same time, the agreement recognised Finland's desire to remain outside great power conflicts, allowing the country to adopt a policy of neutrality during the Cold War.
As a consequence, Finland did not participate in the Marshall Plan and took neutral positions on Soviet overseas initiatives. By keeping its relations with NATO and Western military powers in general very cool, Finland could fend off Soviet pressure for affiliation with the Warsaw Pact.
However, in the political climate that followed the post-1968 radicalisation, this adaptation to the Soviet Union spread to the editors of the mass media, sparking strong forms of self-control, self-censorship and pro-Soviet attitudes. Most of the élite of the media and politics shifted their attitudes to match the values that the Soviets were thought to favor and approve.
Only after the ascent of Mikhail Gorbachev to Soviet leadership in 1985 did mass media in Finland gradually begin to criticise the Soviet Union more. When the Soviet Union allowed non-communist governments to take power in Eastern Europe, Gorbachev suggested they could look to Finland as an example to follow.
In the years immediately after the war (1944–1946), the Soviet part of the allied control commission demanded that Finnish public libraries should remove from circulation more than 1,700 books that were deemed anti-Soviet, and bookstores were given catalogs of banned books. The Finnish Board of Film Classification likewise banned movies that it considered to be anti-Soviet. Banned movies included "One, Two, Three" (1961), directed by Billy Wilder; "The Manchurian Candidate" (1962), directed by John Frankenheimer; "One Day in the Life of Ivan Denisovich" (1970), by Finnish director Caspar Wrede; and "Born American" (1986), by Finnish director Renny Harlin.
The censorship never took the form of purging. Possession or use of anti-Soviet books was not banned; it was the reprinting and distribution of such materials that was prohibited. Especially in the realm of radio and television self-censorship, it was sometimes hard to tell whether the motivations were even political: for example, once a system of blacklisting recordings had been introduced, individual policymakers within the Yleisradio also utilized it to censor songs they deemed inappropriate for other reasons, such as some of those featuring sexual innuendos or references to alcohol.
United States foreign policy experts consistently feared that Western Europe and Japan would be Finlandized by the Soviet Union, leading to a situation in which these key allies would no longer support the United States against the Soviet Union. The theory of bandwagoning lent support to the idea that if the United States could not provide strong and credible support for the anti-communist positions of its allies, NATO and the U.S.–Japan alliance could collapse.
However, foreign policy scholars such as Eric Nordlinger in his book "Isolationism Reconfigured" have argued that "a vision of Finlandization in America's absence runs up squarely against the European states' long-standing Communist antipathies and wariness of Moscow's peaceful wiles, valued national traditions and strong democratic institutions, as well as their size and wherewithal".
https://en.wikipedia.org/wiki?curid=11732
Fred Singer
Siegfried Fred Singer (September 27, 1924 – April 6, 2020) was an Austrian-born American physicist and emeritus professor of environmental science at the University of Virginia. Trained as an atmospheric physicist, Singer was known for climate change denial, as well as for rejecting the scientific consensus on the link between UV-B exposure and melanoma rates and on the role of chlorofluorocarbons in stratospheric ozone loss.
https://en.wikipedia.org/wiki?curid=11734
Frederik Pohl
Frederik George Pohl Jr. (; November 26, 1919 – September 2, 2013) was an American science-fiction writer, editor, and fan, with a career spanning more than 75 years—from his first published work, the 1937 poem "Elegy to a Dead Satellite: Luna", to the 2011 novel "All the Lives He Led" and articles and essays published in 2012.
From about 1959 until 1969, Pohl edited "Galaxy" and its sister magazine "If"; the latter won three successive annual Hugo Awards as the year's best professional magazine. His 1977 novel "Gateway" won four "year's best novel" awards: the Hugo voted by convention participants, the Locus voted by magazine subscribers, the Nebula voted by American science-fiction writers, and the juried academic John W. Campbell Memorial Award. He won the Campbell Memorial Award again for the 1984 collection of novellas "Years of the City", one of two repeat winners during the first 40 years. For his 1979 novel "Jem", Pohl won a U.S. National Book Award in the one-year category Science Fiction. It was a finalist for three other year's best novel awards. He won four Hugo and three Nebula Awards, including receiving both for the 1977 novel "Gateway".
The Science Fiction and Fantasy Writers of America named Pohl its 12th recipient of the Damon Knight Memorial Grand Master Award in 1993, and he was inducted into the Science Fiction and Fantasy Hall of Fame in 1998, its third class of two deceased and two living writers.
Pohl won the Hugo Award for Best Fan Writer in 2010, for his blog, "The Way the Future Blogs".
Pohl was the son of Frederik (originally Friedrich) George Pohl (a salesman of German descent) and Anna Jane Mason. Pohl Sr. held various jobs, and the Pohls lived in such wide-flung locations as Texas, California, New Mexico, and the Panama Canal Zone. The family settled in Brooklyn when Pohl was around seven.
He attended Brooklyn Technical High School, and dropped out at 17. In 2009, he was awarded an honorary diploma from Brooklyn Tech.
While a teenager, he co-founded the New York–based Futurians fan group, and began lifelong friendships with Donald Wollheim, Isaac Asimov, and others who would become important writers and editors. Pohl later said that other "friends came and went and were gone, [but] many of the ones I met through fandom were friends all their lives – Isaac, Damon Knight, Cyril Kornbluth, Dirk Wylie, [and] Dick Wilson. In fact, there are one or two – Jack Robins, Dave Kyle – whom I still count as friends, seventy-odd years later..." He published a science-fiction fanzine called "Mind of Man."
During 1936, Pohl joined the Young Communist League because of its positions for unions and against racial prejudice, Adolf Hitler, and Benito Mussolini. He became president of the local Flatbush III Branch of the YCL in Brooklyn. Pohl has said that after the Molotov–Ribbentrop Pact of 1939, the party line changed and he could no longer support it, at which point he left.
Pohl served in the United States Army from April 1943 until November 1945, rising to sergeant as an air corps weatherman. After training in Illinois, Oklahoma, and Colorado, he was mainly stationed in Italy with the 456th Bombardment Group.
Pohl was married five times. His first wife, Leslie Perri, was another Futurian; they were married in August 1940, and divorced in 1944. He then married Dorothy LesTina in Paris in August 1945 while both were serving in the military in Europe; the marriage ended in 1947. During 1948, he married Judith Merril; they had a daughter, Ann. Pohl and Merril divorced in 1952. In 1953, he married Carol M. Ulf Stanton, with whom he had three children and collaborated on several books; they separated in 1977 and were divorced in 1983. From 1984 until his death, Pohl was married to science-fiction expert and academic Elizabeth Anne Hull.
He fathered four children – Ann (m. Walter Weary), Frederik III (deceased), Frederik IV and Kathy. Grandchildren include Canadian writer Emily Pohl-Weary and chef Tobias Pohl-Weary.
From 1984 on, he lived in Palatine, Illinois, a suburb of Chicago. He was previously a longtime resident of Middletown, New Jersey.
Pohl began writing in the late 1930s, using pseudonyms for most of his early works. His first publication was the poem "Elegy to a Dead Satellite: Luna" under the name of Elton Andrews, in the October 1937 issue of "Amazing Stories", edited by T. O'Conor Sloane. (Pohl asked readers 30 years later, "we would take it as a personal favor if no one ever looked it up".) His first story, the collaboration with C.M. Kornbluth "Before the Universe", appeared in 1940 under the pseudonym S.D. Gottesman.
Pohl started a career as a literary agent in 1937, but it was a sideline for him until after World War II, when he began doing it full-time. He ended up "representing more than half the successful writers in science fiction", but his agency did not succeed financially, and he closed it down in the early 1950s.
Pohl stopped being Asimov's agent—the only one the latter ever had—when he became editor from 1939 to 1943 of two pulp magazines, "Astonishing Stories" and "Super Science Stories". Stories by Pohl often appeared in these science-fiction magazines, but never under his own name. Work written in collaboration with Cyril M. Kornbluth was credited to S. D. Gottesman or Scott Mariner; other collaborative work (with any combination of Kornbluth, Dirk Wylie, or Robert A. W. Lownes) was credited to Paul Dennis Lavond. For Pohl's solo work, stories were credited to James MacCreigh (or for one story only, Warren F. Howard.) Works by "Gottesman", "Lavond", and "MacCreigh" continued to appear in various science-fiction pulp magazines throughout the 1940s.
In his autobiography, Pohl said that he stopped editing the two magazines at roughly the time of the German invasion of the Soviet Union in 1941.
Pohl co-founded the Hydra Club, a loose collection of science-fiction professionals and fans who met during the late 1940s and 1950s.
From the early 1960s until 1969, Pohl served as editor of "Galaxy Science Fiction" and "Worlds of If" magazines, taking over after the ailing H. L. Gold could no longer continue working "around the end of 1960". Under his leadership, "If" won the Hugo Award for Best Professional Magazine for 1966, 1967 and 1968. Pohl hired Judy-Lynn del Rey as his assistant editor at "Galaxy" and "If". He also served as editor of "Worlds of Tomorrow" from its first issue in 1963 until it was merged into "If" in 1967.
In the mid-1970s, Pohl acquired and edited novels for Bantam Books, published as "Frederik Pohl Selections"; these included Samuel R. Delany's "Dhalgren" and Joanna Russ's "The Female Man". He also edited a number of science-fiction anthologies.
After World War II, Pohl worked as an advertising copywriter and then as a copywriter and book editor for "Popular Science". Following the war, Pohl began publishing material under his own name, much in collaboration with his fellow Futurian, Cyril Kornbluth.
Though the pen names of "Gottesman", "Lavond", and "MacCreigh" were retired by the early 1950s, Pohl still occasionally used pseudonyms, even after he began to publish work under his real name. These occasional pseudonyms, all of which date from the early 1950s to the early 1960s, included Charles Satterfield, Paul Flehr, Ernst Mason, Jordan Park (two collaborative novels with Kornbluth), and Edson McCann (one collaborative novel with Lester del Rey).
In the 1970s, Pohl re-emerged as a novel writer in his own right, with books such as "Man Plus" and the "Heechee" series. He won back-to-back Nebula Awards with "Man Plus" in 1976 and "Gateway", the first "Heechee" novel, in 1977. In 1978, "Gateway" swept the other two major novel honors, also winning the Hugo Award for Best Novel and John W. Campbell Memorial Award for the best science-fiction novel. Two of his stories have also earned him Hugo Awards: "The Meeting" (with Kornbluth) tied in 1973 and "Fermi and Frost" won in 1986. Another award-winning novel is "Jem" (1980), winner of the National Book Award.
His works include not only science fiction, but also articles for "Playboy" and "Family Circle" magazines and nonfiction books. For a time, he was the official authority for "Encyclopædia Britannica" on the subject of Emperor Tiberius. (He wrote a book on the subject of Tiberius, as "Ernst Mason".)
Some of his short stories take a satirical look at consumerism and advertising in the 1950s and 1960s: "The Wizards of Pung's Corners", where flashy, over-complex military hardware proved useless against farmers with shotguns, and "The Tunnel under the World", where an entire community of seeming-humans is held captive by advertising researchers. ("The Wizards of Pung's Corners" was freely translated into Chinese and then freely translated back into English as "The Wizard-Masters of Peng-Shi Angle" in the first edition of "Pohlstars" (1984)).
Pohl's Law is either "No one is ever ready for anything" or "Nothing is so good that somebody, somewhere will not hate it".
He was a frequent guest on Long John Nebel's radio show from the 1950s to the early 1970s, and an international lecturer.
Starting in 1995, when the Theodore Sturgeon Memorial Award became a juried award, Pohl served as a juror, first with James Gunn and Judith Merril, and later with several others, until retiring in 2013. Pohl had been associated with Gunn since the 1940s, becoming involved in 1975 with what later became Gunn's Center for the Study of Science Fiction at the University of Kansas. There, he presented many talks, recorded a discussion about "The Ideas in Science Fiction" in 1973 for the Literature of Science Fiction Lecture Series, and served at the Intensive Institute on Science Fiction and the Science Fiction Writing Workshop.
Pohl received the second annual J. W. Eaton Lifetime Achievement Award in Science Fiction from the University of California, Riverside Libraries at the 2009 Eaton Science Fiction Conference, "Extraordinary Voyages: Jules Verne and Beyond".
Pohl's work has been an influence on a wide variety of other science fiction writers, some of whom appear in the 2010 anthology, "", edited by Elizabeth Anne Hull.
Pohl's last novel, "All the Lives He Led", was released on April 12, 2011.
By the time of his death, he was working to finish a second volume of his autobiography, following "The Way the Future Was" (1979), along with an expanded version of that earlier book.
In addition to his solo writings, Pohl was also well known for his collaborations, beginning with his first published story. Before and following the war, Pohl did a series of collaborations with his friend Cyril Kornbluth, including a large number of short stories and several novels, among them "The Space Merchants," a dystopian satire of a world ruled by the advertising agencies.
In the mid-1950s, he began a long-running collaboration with Jack Williamson, eventually resulting in 10 collaborative novels over five decades.
Other collaborations included a novel with Lester del Rey, "Preferred Risk" (1955). Galaxy and Simon & Schuster solicited this novel after the judges of their contest did not think any of the submissions was good enough to win; it was published under the joint pseudonym Edson McCann. He also collaborated with Thomas T. Thomas on a sequel to his award-winning novel "Man Plus". He wrote two short stories with Isaac Asimov in the 1940s, both published in 1950.
He finished a novel begun by Arthur C. Clarke, "The Last Theorem", which was published on August 5, 2008.
Pohl went to the hospital in respiratory distress on the morning of September 2, 2013, and died that afternoon at the age of 93.
https://en.wikipedia.org/wiki?curid=11736
Forrest J Ackerman
Forrest James Ackerman (November 24, 1916 – December 4, 2008) was an American magazine editor, science fiction writer and literary agent, a founder of science fiction fandom, a leading expert on science fiction, horror, and fantasy films, and acknowledged as the world's most avid collector of genre books and movie memorabilia. He was based in Los Angeles, California.
During his career as a literary agent, Ackerman represented such science fiction authors as Ray Bradbury, Isaac Asimov, A.E. Van Vogt, Curt Siodmak, and L. Ron Hubbard. For more than seven decades, he was one of science fiction's staunchest spokesmen and promoters.
Ackerman was the editor and principal writer of the American magazine "Famous Monsters of Filmland", as well as an actor, from the 1950s into the 21st century. He appears in several documentaries related to this period in popular culture, like "Famous Monster: Forrest J Ackerman" (directed by Michael R. MacDonald and written by Ian Johnston), which premiered at the Egyptian Theatre in March 2009, during the Forrest J Ackerman tribute; "The Ackermonster Chronicles!" (a 2012 documentary about Ackerman by writer and filmmaker Jason V Brock); and "Charles Beaumont: The Short Life of Twilight Zone's Magic Man", about the late author Charles Beaumont, a former client of The Ackerman Agency.
Also called "Forry", "Uncle Forry", "The Ackermonster", "Dr. Acula", "Forjak", "4e" and "4SJ", Ackerman was central to the formation, organization and spread of science fiction fandom and a key figure in the wider cultural perception of science fiction as a literary, art, and film genre. Famous for his word play and neologisms, he coined the genre nickname "sci-fi". In 1953, he was voted "#1 Fan Personality" by the members of the World Science Fiction Society, a unique Hugo Award never granted to anyone else.
He was also among the first and most outspoken advocates of Esperanto in the science fiction community.
Ackerman was born Forrest James Ackerman (though he would refer to himself from the early 1930s on as "Forrest J Ackerman" with no period after the middle initial), on November 24, 1916, in Los Angeles, to Carroll Cridland (née Wyman; 1883–1977) and William Schilling Ackerman (1892–1951). His father, Chief Statistician for the Associated Oil Company, and assistant to the Vice-President in charge of transportation, was from New York and his mother was from Ohio (the daughter of architect George Wyman); she was nine years older than William.
Ackerman attended the University of California at Berkeley for a year (1934–1935), then worked as a movie projectionist and at odd jobs with fan friends prior to spending three years in the U.S. Army after enlisting on August 15, 1942, where he rose to the rank of staff sergeant, held the position of editor of his base's newspaper, and passed his entire time in service at Fort MacArthur, California.
Ackerman saw his first "imagi-movie" in 1922 ("One Glorious Day"), purchased his first science fiction magazine, "Amazing Stories", in 1926, created the Boys' Scientifiction Club in 1930 ("girl-fans were as rare as unicorn's horns in those days"). He contributed to both of the first science fiction fanzines, "The Time Traveller", and the "Science Fiction Magazine", published and edited by Shuster and Siegel of Superman fame, in 1932, and by 1933 had 127 correspondents around the world. His name was used for the character of the reporter in the original Superman story "The Reign of the Superman" in issue 3 of "Science Fiction" magazine. He was one of the early members of the Los Angeles Science Fantasy Society and remained active in it for many decades.
He attended the 1st World Science Fiction Convention in 1939, where he wore the first "futuristicostume" (designed and created by his girlfriend Myrtle R. Douglas, better known as Morojo), which sparked decades of fan costuming thereafter, the latest incarnation of which is cosplay. He attended every Worldcon but two thereafter during his lifetime. Ackerman invited Ray Bradbury to attend the Los Angeles Chapter of the Science Fiction League, then meeting weekly at Clifton's Cafeteria in downtown Los Angeles. The club changed its name to the Los Angeles Science Fantasy Society during the period it was meeting at the restaurant. (There never was a "Clifton's Cafeteria Science Fiction Club".) Among the writers frequenting the club were Robert A. Heinlein, Emil Petaja, Fredric Brown, Henry Kuttner, Leigh Brackett, and Jack Williamson. Bradbury often attended meetings with his friend Ray Harryhausen; the two Rays had been introduced to each other by Ackerman. With $90 from Ackerman and Morojo, Bradbury launched a fanzine, "Futuria Fantasia", in 1939, which ran for four issues.
Ackerman was an early member of the Los Angeles Chapter of the Science Fiction League and became so active in and important to the club that in essence he ran it, including (after the name change) the Los Angeles Science Fantasy Society, a prominent regional fan organization, as well as the National Fantasy Fan Federation (N3F). Together with Morojo, he edited and produced "Imagination!", later renamed "Voice of the Imagi-Nation" (which in 1996 would be awarded the Retro Hugo for Best Fanzine of 1946, and in 2014 for 1939), which was nominally the club fanzine for the LASFS.
In the decades that followed, Ackerman amassed an extremely large and complete collection of science fiction, fantasy, and horror film memorabilia, which, until 2002, he maintained in an 18-room home and museum known as the "Son of Ackermansion". (The original Ackermansion, where he lived from the early 1950s until the mid-1970s, was at 915 S. Sherbourne Drive in Los Angeles; the site is now an apartment building.) This second house, in the Los Feliz district of Los Angeles, contained some 300,000 books and pieces of film and science-fiction memorabilia. From 1951 to 2002, Ackerman entertained some 50,000 fans at open houses, including, on one such evening, a group of 186 fans and professionals that included astronaut Buzz Aldrin. Ackerman was a board member of the Seattle Science Fiction Museum and Hall of Fame, where many items of his collection are now displayed.
He knew most of the writers of science fiction in the first half of the twentieth century. As a literary agent, he represented some 200 writers, and he served as agent of record for many long-lost authors, thereby allowing their work to be reprinted in anthologies. He was Ed Wood's "illiterary" agent. Ackerman was credited with nurturing and even inspiring the careers of several early contemporaries like Ray Bradbury, Ray Harryhausen, Charles Beaumont, Marion Zimmer Bradley, and L. Ron Hubbard. He kept all of the stories submitted to his magazine, even the ones he rejected; Stephen King has stated that Ackerman showed up to a King book signing with a copy of a story King had submitted for publication when he was 11.
Ackerman had 50 stories published, including collaborations with A. E. van Vogt, Francis Flagg, Robert A. W. Lowndes, Marion Zimmer Bradley, Donald Wollheim and Catherine Moore, and the world's shortest – one letter of the alphabet. His stories have been translated into six languages. Ackerman named the sexy comic-book character Vampirella and wrote the origin story for the comic.
He also authored several lesbian stories under the name "Laurajean Ermayne" for "Vice Versa" and provided publishing assistance in the early days of the Daughters of Bilitis. He was dubbed an "honorary lesbian" at a DOB party.
Through his magazine, "Famous Monsters of Filmland" (1958–1983), Ackerman introduced the history of the science fiction, fantasy, and horror film genres to a generation of young readers. At a time when most film-related publications glorified the stars in front of the camera, "Uncle Forry", as he was referred to by many of his fans, promoted the behind-the-scenes artists involved in the magic of the movies. In this way, Ackerman provided inspiration to many who would later become successful artists, including Joe Dante, Peter Jackson, Steven Spielberg, Tim Burton, Stephen King, Donald F. Glut, Penn & Teller, Billy Bob Thornton, Gene Simmons (of the band Kiss), Rick Baker, George Lucas, Danny Elfman, Frank Darabont, Guillermo del Toro, Kirk Hammett (of the band Metallica), John Landis, television producer Kevin Burns and countless other writers, directors, artists, and craftsmen.
He also contributed to film magazines from all around the world, including the Spanish-language "" magazine from Argentina, where he had a monthly column for more than four years.
In the 1960s, Ackerman organized the publication of an English translation in the U.S. of the German science fiction series "Perry Rhodan", the longest-running science fiction series in history. These were published by Ace Books from 1969 through 1977. Ackerman's German-speaking wife Wendayne ("Wendy") did most of the translation. The American books were issued with varying frequency from one to as many as four per month. Ackerman also used the paperback series to promote science fiction short stories, including his own on occasion. These "magabooks" or "bookazines" also included a film review section, known as "Scientifilm World", and letters from readers. The American series came to an end when the management of Ace changed, and the new management decided that the series was too juvenile for their taste. The last Ace issue was #118, which corresponded to German issue #126 as some of the Ace editions contained two of the German issues, and three of the German issues had been skipped. Ackerman later published translations of German issues #127 through #145 on his own under the Master Publications imprint. (The original German series continues today and passed issue #2800 in 2015.)
A lifelong fan of science fiction "B-movies", Ackerman appeared in more than 210 films, including parts in many monster movies and science fiction films ("Dracula vs. Frankenstein", "The Howling", "The Aftermath", "Scalps", "Return of the Living Dead Part II", "Innocent Blood"), more traditional "imagi-movies" ("The Time Travelers", "Future War"), spoofs and comedies ("Amazon Women on the Moon", "The Wizard of Speed and Time", "Curse of the Queerwolf", "Transylvania Twist", "Hard to Die", "Nudist Colony of the Dead", "Attack of the 60 Foot Centerfold") and at least one major music video ("Michael Jackson's Thriller"). His Bacon number is 2.
In 1961, Ackerman narrated the record "Music for Robots" created by Frank Allison Coe. The cover featured Ackerman's face superimposed on the robot from the film "Tobor the Great". The record was reissued on CD in 2005.
Ackerman appears as a character in "The Vampire Affair" by David McDaniel (a novel in the "Man from U.N.C.L.E." series), and Philip José Farmer's novel "The Image Of The Beast", first published as the short story "Blown" in "Screw" magazine by Al Goldstein.
A character based on Ackerman and an analog to the Ackermansion appears in the collaborative novel "Fallen Angels" written jointly by Larry Niven, Jerry Pournelle, and Michael F. Flynn.
"Eccar the Man" is mentioned in "The Flying Sorcerers", a novel jointly written by Niven and David Gerrold, which features a number of characters based on notables from the science fiction community.
He appeared on the intro track of Ohio horror punk music group Manimals' 1999 album "Horrorcore".
In 2001, Ackerman played the part of an old wax museum caretaker in the camp comedy film "The Double-D Avenger" directed by William Winckler and starring Russ Meyer luminaries Kitten Natividad, Haji, and Raven De La Croix. Ackerman played a crazy old man who was in love with Kitten Natividad's character, The Double-D Avenger, and his character also talked to the Frankenstein figure and other wax monsters in the museum's chamber of horrors.
Ackerman appeared extensively on-screen discussing his life and the history of science fiction fandom in the 2006 documentary film "Finding the Future".
In 2007, Roadhouse Films of Canada released a documentary, "Famous Monster: Forrest J Ackerman". The documentary, available on DVD only in the UK, airs regularly on the BRAVO channel.
In the 2012 action film "Premium Rush", the character of the corrupt policeman Bobby Monday (played by Michael Shannon) repeatedly uses the alias "Forrest J Ackerman".
In 2013, the science fiction author Jason V Brock released a feature-length documentary about Ackerman called "The Ackermonster Chronicles!".
Ackerman had one sibling, a younger brother, Alden Lorraine Ackerman, who was killed at the Battle of the Bulge.
Ackerman was married to a German-born teacher and translator, Mathilda Wahrman (1912–1990), whom he met in the early 1950s while she was working in a book store he happened to visit. He eventually dubbed her "Wendayne" or, less formally, "Wendy", by which name she became most generally known within SF and film fandoms, after the character in "Peter Pan", his favorite fantasy. Although they went through a period of separation during the late 1950s and early 1960s, they remained officially married until her death: she suffered serious internal injuries when she was violently mugged while visiting Italy in 1990 and irreparable damage to her kidneys led to her death. They had no children of their own by choice, but Wahrman did have a son by an earlier marriage, Michael Porges, who did not get along with Ackerman and would not live in Ackerman's home.
Ackerman was fluent in the international language Esperanto, and claimed to have walked down Hollywood Boulevard arm-in-arm with Leo G. Carroll singing "La Espero", the hymn of Esperanto.
Ackerman became an atheist at age 15, but did not emphasize that fact in his public life and welcomed people of all faiths, as well as none, into his home and personal circle equally.
His first public stance on any political issue was in opposition to the Vietnam War.
On March 16, 1960, Ackerman was found guilty on five counts of mailing "letters which were obscene, lewd, indecent, lascivious and filthy in violation of Title 18, United States Code, Section 1461."
Beginning in January 2018, numerous allegations were made, some by named accusers, concerning Ackerman's long-term predatory and sexually inappropriate behaviour, much of it concerning minors. This was later widely reported and discussed in horror fan communities.
In 2003, Ackerman said, "I aim at hitting 100 and becoming the George Burns of science fiction". His health, however, had been failing. He was susceptible to infection in his later life and, after one final trip to the hospital in October 2008, informed his best friend and caregiver Joe Moe that he did not want to go on but hoped to live long enough to vote for Barack Obama in the November 2008 presidential election. Honoring his wishes, his friends brought him home to hospice care. However, it turned out that in order to get Ackerman home, the hospital had cured his infection with antibiotics. So Ackerman went on for a few more weeks, holding what he delighted in calling "a living funeral". In his final days he saw everyone he wanted to say goodbye to. Fans were encouraged to send messages of farewell by mail.
While there were several premature reports of his death in the month prior, Ackerman died a minute before midnight on December 4, 2008, at the age of 92. From his "Acker-mini-mansion" in Hollywood, he had entertained and inspired fans weekly with his collection of memorabilia and his stories.
Upon his death, the administration of Ackerman's estate was entrusted to his friend, television producer Kevin Burns. Burns was tasked with the sale and distribution of Mr. Ackerman's extensive collection of Science Fiction and Horror memorabilia. Included in this were Bela Lugosi's ring from "Abbott and Costello Meet Frankenstein" and Lon Chaney's teeth and top hat from "London After Midnight". There were eighteen beneficiaries named in Ackerman's will, including three waitresses from his favorite restaurant and hangout, "The House of Pies".
Ackerman is interred at Forest Lawn Memorial Park (Glendale) with his wife. His plaque simply reads, "Sci-Fi Was My High".
A 2013 rebroadcast of the PBS program "Visiting ... with Huell Howser," originally airing in 2000, which featured Ackerman and highlighted his memorabilia collection, was revised to indicate that Ackerman had since died and his collection had been auctioned.
On the morning of Thursday, November 17, 2016, the corner of Franklin and Vermont Avenues, in the heart of the neighborhood "Uncle Forry" had lived in for 30 years, was christened Forrest J Ackerman Square.
https://en.wikipedia.org/wiki?curid=11740
Fantasy film
Fantasy films are films that belong to the fantasy genre with fantastic themes, usually magic, supernatural events, mythology, folklore, or exotic fantasy worlds. The genre is considered a form of speculative fiction alongside science fiction films and horror films, although the genres do overlap. Fantasy films often have an element of magic, myth, wonder, escapism, and the extraordinary.
Several sub-categories of fantasy films can be identified, although the delineations between these subgenres, much as in fantasy literature, are somewhat fluid.
The most common fantasy subgenres depicted in movies are High Fantasy and Sword and Sorcery. Both categories typically employ quasi-medieval settings, wizards, magical creatures and other elements commonly associated with fantasy stories.
High Fantasy films tend to feature a more richly developed fantasy world, and may also be more character-oriented or thematically complex. Often, they feature a hero of humble origins and a clear distinction between good and evil set against each other in an epic struggle. Many scholars cite J. R. R. Tolkien's "The Lord of the Rings" novel as the prototypical modern example of High Fantasy in literature, and the recent Peter Jackson film adaptation of the books is a good example of the High Fantasy subgenre on the silver screen.
Sword and Sorcery movies tend to be more plot-driven than high fantasy and focus heavily on action sequences, often pitting a physically powerful but unsophisticated warrior against an evil wizard or other supernaturally endowed enemy. Although Sword and Sorcery films sometimes describe an epic battle between good and evil similar to those found in many High Fantasy movies, they may alternately present the hero as having more immediate motivations, such as the need to protect a vulnerable maiden or village, or even being driven by the desire for vengeance.
The 1982 film adaptation of Robert E. Howard's "Conan the Barbarian", for example, is a personal (non-epic) story concerning the hero's quest for revenge and his efforts to thwart a single megalomaniac—while saving a beautiful princess in the process. Some critics refer to such films by the term Sword and Sandal rather than Sword and Sorcery, although others would maintain that the Sword and Sandal label should be reserved only for the subset of fantasy films set in ancient times on the planet Earth, and still others would broaden the term to encompass films that have no fantastic elements whatsoever. To some, the term Sword and Sandal has pejorative connotations, designating a film with a low-quality script, bad acting, and poor production values.
Another important subgenre of fantasy films that has become more popular in recent years is contemporary fantasy. Such films feature magical effects or supernatural occurrences happening in the "real" world of today.
Films combining live action and animation, such as Disney's "Mary Poppins", "Pete's Dragon", "Enchanted", and the Robert Zemeckis film "Who Framed Roger Rabbit", are also fantasy films, although they are more often referred to as live-action/animation hybrids (two of these are also classified as musicals).
Fantasy films set in the afterlife, called Bangsian Fantasy, are less common, although films such as the 1991 Albert Brooks comedy "Defending Your Life" would likely qualify. Other uncommon subgenres include Historical Fantasy and Romantic Fantasy, although 2003's "" successfully incorporated elements of both.
As noted above, superhero movies and fairy tale films might each be considered subgenres of fantasy films, although most would classify them as altogether separate movie genres.
As a cinematic genre, fantasy has traditionally not been regarded as highly as the related genre of science fiction film. Undoubtedly, the fact that until recently fantasy films often suffered from the "Sword and Sandal" afflictions of inferior production values, over-the-top acting, and decidedly poor special effects was a significant factor in fantasy film's low regard.
Since the early 2000s, however, the genre has gained new respectability, driven principally by the successful adaptations of J. R. R. Tolkien's "The Lord of the Rings" and J. K. Rowling's "Harry Potter" series. Jackson's "The Lord of the Rings" trilogy is notable for its ambitious scope, serious tone, and thematic complexity. These pictures achieved phenomenal commercial and critical success, and the final installment of the trilogy became the first fantasy film ever to win the Academy Award for Best Picture. The "Harry Potter" series has been a tremendous financial success, has achieved critical acclaim for its design, thematic sophistication and emotional depth, grittier realism and darkness, narrative complexity, and characterization, and boasts an enormous and loyal fanbase.
Following the success of these ventures, Hollywood studios have greenlighted additional big-budget productions in the genre. These have included adaptations of the first, second, and third books in C. S. Lewis' "The Chronicles of Narnia" series and the teen novel "Eragon", as well as adaptations of Susan Cooper's "The Dark Is Rising", Cornelia Funke's "Inkheart", Philip Pullman's "The Golden Compass", Holly Black's "The Spiderwick Chronicles", Nickelodeon's TV show "", and the "Fantasia" segment (along with Johann Wolfgang von Goethe's original poem) "The Sorcerer's Apprentice".
Fantasy movies in recent years, such as "The Lord of the Rings" films, the first and third "Narnia" adaptations, and the first, second, fourth and seventh "Harry Potter" adaptations, have most often been released in November and December. This is in contrast to science fiction films, which are often released during the northern hemisphere summer (June–August). All three installments of the "Pirates of the Caribbean" fantasy films, however, were released in July 2003, July 2006, and May 2007 respectively, and the latest entries in the "Harry Potter" series arrived in July 2007 and July 2009. The huge commercial success of these pictures may indicate a change in Hollywood's approach to big-budget fantasy film releases.
Screenwriter and scholar Eric R. Williams identifies Fantasy Films as one of eleven super-genres in his screenwriters’ taxonomy, claiming that all feature length narrative films can be classified by these super-genres. The other ten super-genres are Action, Crime, Horror, Romance, Science Fiction, Slice of Life, Sports, Thriller, War and Western.
Fantasy films have a history almost as old as the medium itself. However, fantasy films were relatively few and far between until the 1980s, when high-tech filmmaking techniques and increased audience interest caused the genre to flourish.
What follows are some notable Fantasy films. For a more complete list see: List of fantasy films
In the era of silent film, the earliest fantasy films were those made by French film pioneer Georges Méliès, the most famous of which was 1902's "A Trip to the Moon". In the Golden Age of Silent film (1918–1926) the most outstanding fantasy films were Douglas Fairbanks' "The Thief of Bagdad" (1924), Fritz Lang's "Die Nibelungen" (1924), and "Destiny" (1921). Other notables in the genre were F. W. Murnau's romantic ghost story "Phantom", "Tarzan of the Apes" starring Elmo Lincoln, and D. W. Griffith's "The Sorrows of Satan".
Following the advent of sound films, audiences of all ages were introduced to fantasy classics ranging from 1937's "Snow White and the Seven Dwarfs" to 1939's "The Wizard of Oz". Also notable from the era, the iconic 1933 film "King Kong" borrows heavily from the Lost World subgenre of fantasy fiction, as do films such as the 1935 adaptation of H. Rider Haggard's novel "She", about an African expedition that discovers an immortal queen known as Ayesha, "She who must be obeyed". Frank Capra's 1937 picture "Lost Horizon" transported audiences to the Himalayan fantasy kingdom of Shangri-La, where the residents magically never age. Other noteworthy fantasy films of the 1930s include "Tarzan the Ape Man" (1932), starring Johnny Weissmuller, which started a successful series of talking pictures based on the fantasy-adventure novels of Edgar Rice Burroughs, and the G. W. Pabst-directed "The Mistress of Atlantis" (1932). The same year saw the release of the Universal Studios monster movie "The Mummy", which combined horror with a romantic fantasy twist. More light-hearted and comedic affairs from the decade include 1934's romantic drama "Death Takes a Holiday", in which Fredric March plays Death taking a human body to experience life for three days, and 1937's "Topper", in which a man is haunted by two fun-loving ghosts who try to make his life a little more exciting.
The 1940s then saw several full-color fantasy films produced by Alexander Korda, including "The Thief of Bagdad" (1940), a film on par with "The Wizard of Oz", and "Jungle Book" (1942). In 1946, Jean Cocteau's classic adaptation of "Beauty and the Beast" won praise for its surreal elements and for transcending the boundaries of the fairy tale genre. "Sinbad the Sailor" (1947), starring Douglas Fairbanks, Jr., has the feel of a fantasy film though it does not actually have any fantastic elements.
Several other pictures featuring supernatural encounters and aspects of Bangsian fantasy were produced in the 1940s during World War II. These include "Beyond Tomorrow", "The Devil and Daniel Webster", and "Here Comes Mr. Jordan", all from 1941; "Heaven Can Wait" (1943); the musical "Cabin in the Sky" (1943); the comedy "The Horn Blows at Midnight"; and romances such as "The Ghost and Mrs. Muir" (1947), and "One Touch of Venus" and "Portrait of Jennie", both from 1948.
An astonishing anticipation of the full "sword and sorcery" genre was made in Italy in 1941 by Alessandro Blasetti. "La Corona di Ferro" presents the struggles of two imaginary kingdoms over the legendary Iron Crown (historically the ancient crown of Italy), with war, cruelty, betrayal, heroism, sex, magic and mysticism: a whirl of events drawn from every fairy tale and legend source Blasetti could find. The film was unlike anything done before; finished fifteen years before the publication of "The Lord of the Rings", its invention of a vast national epic mythology was remarkably prescient. And while the storytelling is rough, owing to the need to fit everything in, and the resources limited, Blasetti shows how to make a little go a long way through beautifully staged and designed battle and crowd scenes.
Although it is not classified as a fantasy film, Gene Kelly's "Anchors Aweigh" includes a fantasy sequence called "The King Who Couldn't Dance", in which Kelly performs a song-and-dance number with Jerry Mouse from Tom and Jerry.
Because these movies do not feature elements common to high fantasy or sword and sorcery pictures, some modern critics do not consider them to be examples of the fantasy genre.
In the 1950s there were a few major fantasy films, including "Darby O'Gill and the Little People" and "The 5000 Fingers of Dr. T", the latter penned by Dr. Seuss. Jean Cocteau's Orphic Trilogy, begun in 1930 and completed in 1959, is based on Greek mythology and could be classified either as fantasy or surrealist film, depending on how the boundaries between these genres are drawn. Russian fantasy director Aleksandr Ptushko created three mythological epics from Russian fairytales, "Sadko" (1953), "Ilya Muromets" (1956), and "Sampo" (1959). Japanese director Kenji Mizoguchi's 1953 film "Ugetsu Monogatari" draws on Japanese classical ghost stories of love and betrayal.
Other notable pictures from the 1950s that feature fantastic elements and are sometimes classified as fantasy are "Harvey" (1950), featuring a púca of Celtic mythology; "Scrooge", the 1951 adaptation of Charles Dickens' "A Christmas Carol"; and Ingmar Bergman's 1957 masterpiece, "The Seventh Seal". Disney's 1951 animated film "Alice in Wonderland" is also a fantasy classic.
There were also a number of lower budget fantasies produced in the 1950s, typically based on Greek or Arabian legend. The most notable of these may be 1958's "The 7th Voyage of Sinbad", featuring special effects by Ray Harryhausen and music by Bernard Herrmann.
Harryhausen worked on a series of fantasy films in the 1960s, most importantly "Jason and the Argonauts" (1963). Many critics have identified this film as Harryhausen's masterwork for its stop-motion animated statues, skeletons, harpies, hydra, and other mythological creatures. Other Harryhausen fantasy and science fantasy collaborations from the decade include the 1961 adaptation of Jules Verne's "Mysterious Island", the critically panned "One Million Years B.C." starring Raquel Welch, and "The Valley of Gwangi" (1969).
Capitalising on the success of the sword and sandal genre, several Italian B-movies based on classical myth were made, including the "Maciste" series. Otherwise, the 1960s were almost entirely devoid of fantasy films. The fantasy picture "7 Faces of Dr. Lao", in which Tony Randall portrayed several characters from Greek mythology, was released in 1964. But the 1967 adaptation of the Broadway musical "Camelot" removed most of the fantasy elements from T. H. White's classic "The Once and Future King", on which the musical had been based. The decade also saw a new adaptation of Haggard's "She" in 1965, starring Ursula Andress as the immortal "She who must be obeyed", followed in 1968 by a sequel, "The Vengeance of She", based loosely on the novel ""; both were produced by Hammer Film Productions. 1968 also saw the release of "Chitty Chitty Bang Bang", based on a story by Ian Fleming with a script by Roald Dahl.
Fantasy elements of Arthurian legend were again featured, albeit absurdly, in 1975's "Monty Python and the Holy Grail". Harryhausen also returned to the silver screen in the 1970s with two additional "Sinbad" fantasies, "The Golden Voyage of Sinbad" (1974) and "Sinbad and the Eye of the Tiger" (1977). The animated movie "Wizards" (1977) had limited success at the box office but achieved status as a cult film. There was also "The Noah" (1975) which was never released theatrically but became a cult favorite when it was finally released on DVD in 2006. Some would consider 1977's "Oh God!", starring George Burns to be a fantasy film, and "Heaven Can Wait" (1978) was a successful Bangsian fantasy remake of 1941's "Here Comes Mr. Jordan" (not 1943's "Heaven Can Wait").
A few low budget "Lost World" pictures were made in the 1970s, such as 1975's "The Land That Time Forgot". Otherwise, the fantasy genre was largely absent from mainstream movies in this decade, although 1971's "Bedknobs and Broomsticks" and "Willy Wonka & the Chocolate Factory" were two fantasy pictures in the public eye: the former was made predominantly by the same team behind "Mary Poppins", while the latter was again a Roald Dahl work, in both script and source novel.
1980s fantasy films were initially characterised by directors finding a new spin on established mythologies. Ray Harryhausen brought the monsters of Greek legend to life in "Clash of the Titans" (1981), while Arthurian lore returned to the screen in John Boorman's 1981 "Excalibur". Films such as Ridley Scott's 1985 "Legend" and Terry Gilliam's 1981–1988 trilogy of fantasy epics ("Time Bandits", "Brazil", and "The Adventures of Baron Munchausen") explored a new artist-driven style featuring surrealist imagery and thought-provoking plots. The modern sword and sorcery boom began around the same time with 1982's "Conan the Barbarian", followed by "Krull" and "Fire and Ice" in 1983, as well as a boom in fairy tale-like fantasy films such as "Ladyhawke" (1985), "The Princess Bride" (1987), and "Willow" (1988).
The 1980s also started a trend in mixing modern settings and action movie effects with exotic fantasy-like concepts. "Big Trouble in Little China" (1986), directed by John Carpenter and starring Kurt Russell, combined humor, martial arts and classic Chinese folklore in a modern Chinatown setting. "Highlander", a film about immortal Scottish swordsmen, was released the same year.
Jim Henson produced two iconic fantasy films in the 80s, the solemn "The Dark Crystal" and the more whimsical and lofty "Labyrinth". Meanwhile, Robert Zemeckis helmed "Who Framed Roger Rabbit", featuring various famous cartoon characters from animation's "Golden Age," including Mickey Mouse, Minnie Mouse, Donald Duck, Bugs Bunny, Daffy Duck, Droopy, Wile E. Coyote and Road Runner, Sylvester the Cat, Tweety Pie, and Jiminy Cricket, among others.
(2012)
"Aladdin" (2019)
"Alice in Wonderland" (2010)
"Alice in Wonderland 2: Through the Looking Glass" (2016)
"Aquaman" (2018)
"A Wrinkle in Time" (2018)
"" (2014)
"" (2017)
"Beauty and the Beast" (2017)
"Black Panther" (2018)
"Brahmastra" (2019)
"Brave" (2012)
"Christopher Robin" (2018)
"Cinderella" (2015)
"Clash of the Titans" (2010) and its 2012 sequel, "Wrath of the Titans"
"Conan the Barbarian" (2011)
"Crimson Peak" (2015)
"Dark Shadows" (2012)
"Doctor Strange" (2016)
"" (2018)
"Fantastic Beasts and Where to Find Them" (2016)
"Frozen" (2013)
"Frozen II" (2019)
"Goosebumps" (2015)
"Gulliver's Travels" (2010)
"Harry Potter and the Deathly Hallows – Part 1" (2010)
"Harry Potter and the Deathly Hallows – Part 2" (2011)
"Hop" (2011)
"How to Train Your Dragon" (2010–19)
"Immortals" (2011)
"Into the Woods" (2014)
"Jack the Giant Slayer" (2010)
"John Carter" (2012)
"Life of Pi" (2012)
"Maleficent" (2014)
"Mary Poppins Returns" (2018)
"Maximum Shame" (2010)
"Midnight in Paris" (2011)
"Mirror Mirror" (2012)
"Miss Peregrine's Home for Peculiar Children" (2016)
"Oz the Great and Powerful" (2013)
"Paddington" (2014)
"Pan" (2015)
"" (2013)
"" (2010)
"Pete's Dragon" (2016)
"Peter Rabbit" (2018)
"Puss in Boots" (2011)
"Sardaar Ji" (2015) (Punjabi)
"Scott Pilgrim vs. the World" (2010)
"Snow White and the Huntsman" (2012)
"Song of the Sea" (2014)
"Sucker Punch" (2011)
"The Bastard Sword" (2018)
"The BFG" (2016)
"The Hobbit" (2012–14)
"The Jungle Book" (2016)
"The Kid Who Would Be King" (2019)
"The Last Airbender" (2010)
"The Lorax" (2012)
"The Muppets" (2011)
"The Nutcracker and the Four Realms" (2018)
"Trolls" (2016)
"The Shape of Water" (2017)
"The Sorcerer's Apprentice" (2010)
"" (2017)
"" (2013)
"Thor" (2011)
"Toy Story 3" (2010)
"Toy Story 4" (2019)
"Wonder Woman" (2017)
"Your Highness" (2011)
https://en.wikipedia.org/wiki?curid=11741
Finite set
In mathematics, a finite set is a set that has a finite number of elements. Informally, a finite set is a set which one could in principle count and finish counting. For example,
{2, 4, 6, 8, 10}
is a finite set with five elements. The number of elements of a finite set is a natural number (a non-negative integer) and is called the cardinality of the set. A set that is not finite is called infinite. For example, the set of all positive integers is infinite:
{1, 2, 3, ...}.
Finite sets are particularly important in combinatorics, the mathematical study of counting. Many arguments involving finite sets rely on the pigeonhole principle, which states that there cannot exist an injective function from a larger finite set to a smaller finite set.
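The pigeonhole principle can be checked exhaustively for small sets. The following Python sketch (the set sizes and names are illustrative choices, not from the article) enumerates every function from a 3-element set to a 2-element set and confirms that none is injective:

```python
from itertools import product

def is_injective(mapping):
    # A dict-as-function is injective if no two keys share a value.
    values = list(mapping.values())
    return len(values) == len(set(values))

larger = [1, 2, 3]    # domain: a 3-element set
smaller = ["a", "b"]  # codomain: a 2-element set

# Each function is one choice of image for every domain element,
# so there are 2**3 = 8 functions in total.
functions = [dict(zip(larger, images))
             for images in product(smaller, repeat=len(larger))]

# Pigeonhole principle: no injection from a larger into a smaller finite set.
assert len(functions) == 8
assert not any(is_injective(f) for f in functions)
```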
Formally, a set "S" is called finite if there exists a bijection
"f" : "S" → {1, ..., "n"}
for some natural number "n". The number "n" is the set's cardinality, denoted as |"S"|. The empty set {} or Ø is considered finite, with cardinality zero.
If a set is finite, its elements may be written — in many ways — in a sequence:
"x"_1, "x"_2, ..., "x"_"n".
In combinatorics, a finite set with "n" elements is sometimes called an ""n"-set" and a subset with "k" elements is called a ""k"-subset". For example, the set {5,6,7} is a 3-set – a finite set with three elements – and {6,7} is a 2-subset of it.
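The "k"-subsets of an "n"-set can be enumerated directly, and there are exactly C("n", "k") of them. A small Python sketch using the 3-set {5, 6, 7} from the text (the variable names are ours):

```python
from itertools import combinations
from math import comb

n_set = [5, 6, 7]  # the 3-set from the text, as a list for a fixed order

# Every 2-subset of the 3-set:
two_subsets = [set(c) for c in combinations(n_set, 2)]

assert {6, 7} in two_subsets           # {6, 7} is a 2-subset, as stated
assert len(two_subsets) == comb(3, 2)  # C(3, 2) = 3 two-subsets in total
```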
Any proper subset of a finite set "S" is finite and has fewer elements than "S" itself. As a consequence, there cannot exist a bijection between a finite set "S" and a proper subset of "S". Any set with this property is called Dedekind-finite. Using the standard ZFC axioms for set theory, every Dedekind-finite set is also finite, but this implication cannot be proved in ZF (Zermelo–Fraenkel axioms without the axiom of choice) alone.
The axiom of countable choice, a weak version of the axiom of choice, is sufficient to prove the equivalence of finiteness and Dedekind-finiteness.
Any injective function between two finite sets of the same cardinality is also a surjective function (a surjection). Similarly, any surjection between two finite sets of the same cardinality is also an injection.
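For sets this small the claim can be verified by brute force. The Python sketch below (example sets chosen here for illustration) enumerates all 27 functions between two 3-element sets and checks that each is injective exactly when it is surjective:

```python
from itertools import product

A = [1, 2, 3]
B = ["x", "y", "z"]  # same cardinality as A

checked = 0
for images in product(B, repeat=len(A)):  # all 3**3 = 27 functions A -> B
    f = dict(zip(A, images))
    injective = len(set(f.values())) == len(A)
    surjective = set(f.values()) == set(B)
    # Between finite sets of equal cardinality, the two notions coincide.
    assert injective == surjective
    checked += 1

assert checked == 27
```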
The union of two finite sets is finite, with
|"S" ∪ "T"| ≤ |"S"| + |"T"|.
In fact, by the inclusion–exclusion principle:
|"S" ∪ "T"| = |"S"| + |"T"| − |"S" ∩ "T"|.
More generally, the union of any finite number of finite sets is finite. The Cartesian product of finite sets is also finite, with:
|"S" × "T"| = |"S"| × |"T"|.
Similarly, the Cartesian product of finitely many finite sets is finite. A finite set with "n" elements has 2^"n" distinct subsets. That is, the power set of a finite set is finite, with cardinality 2^"n".
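These cardinality identities are easy to confirm on concrete sets. The sketch below (with example sets of our own choosing) checks inclusion–exclusion, the product rule, and the power-set count in Python:

```python
from itertools import chain, combinations

S = {1, 2, 3}
T = {3, 4}

# Inclusion–exclusion for the union: 4 == 3 + 2 - 1.
assert len(S | T) == len(S) + len(T) - len(S & T)

# The Cartesian product has |S| * |T| = 6 elements.
product_set = {(s, t) for s in S for t in T}
assert len(product_set) == len(S) * len(T)

# The power set of an n-element set has 2**n members (here 8).
power_set = list(chain.from_iterable(combinations(S, k)
                                     for k in range(len(S) + 1)))
assert len(power_set) == 2 ** len(S)
```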
Any subset of a finite set is finite. The set of values of a function when applied to elements of a finite set is finite.
All finite sets are countable, but not all countable sets are finite. (Some authors, however, use "countable" to mean "countably infinite", so do not consider finite sets to be countable.)
The free semilattice over a finite set is the set of its non-empty subsets, with the join operation being given by set union.
In Zermelo–Fraenkel set theory without the axiom of choice (ZF), the following conditions are all equivalent:
If the axiom of choice is also assumed (the axiom of countable choice is sufficient), then the following conditions are all equivalent:
Georg Cantor initiated his theory of sets in order to provide a mathematical treatment of infinite sets. Thus the distinction between the finite and the infinite lies at the core of set theory. Certain foundationalists, the strict finitists, reject the existence of infinite sets and thus recommend a mathematics based solely on finite sets. Mainstream mathematicians consider strict finitism too confining, but acknowledge its relative consistency: the universe of hereditarily finite sets constitutes a model of Zermelo–Fraenkel set theory with the axiom of infinity replaced by its negation.
Even for those mathematicians who embrace infinite sets, in certain important contexts, the formal distinction between the finite and the infinite can remain a delicate matter. The difficulty stems from Gödel's incompleteness theorems. One can interpret the theory of hereditarily finite sets within Peano arithmetic (and certainly also vice versa), so the incompleteness of the theory of Peano arithmetic implies that of the theory of hereditarily finite sets. In particular, there exists a plethora of so-called non-standard models of both theories. A seeming paradox is that there are non-standard models of the theory of hereditarily finite sets which contain infinite sets, but these infinite sets look finite from within the model. (This can happen when the model lacks the sets or functions necessary to witness the infinitude of these sets.) On account of the incompleteness theorems, no first-order predicate, nor even any recursive scheme of first-order predicates, can characterize the standard part of all such models. So, at least from the point of view of first-order logic, one can only hope to describe finiteness approximately.
More generally, informal notions like set, and particularly finite set, may receive interpretations across a range of formal systems varying in their axiomatics and logical apparatus. The best-known axiomatic set theories include Zermelo–Fraenkel set theory (ZF), Zermelo–Fraenkel set theory with the axiom of choice (ZFC), Von Neumann–Bernays–Gödel set theory (NBG), non-well-founded set theory, Bertrand Russell's type theory, and all the theories of their various models. One may also choose among classical first-order logic, various higher-order logics, and intuitionistic logic.
A formalist might see the meaning of "set" varying from system to system. Some kinds of Platonists might view particular formal systems as approximating an underlying reality.
In contexts where the notion of natural number sits logically prior to any notion of set, one can define a set "S" as finite if "S" admits a bijection to some set of natural numbers of the form formula_9. Mathematicians more typically choose to ground notions of number in set theory; for example, they might model natural numbers by the order types of finite well-ordered sets. Such an approach requires a structural definition of finiteness that does not depend on natural numbers.
Various properties that single out the finite sets among all sets in the theory ZFC turn out to be logically inequivalent in weaker systems such as ZF or intuitionistic set theories. Two definitions feature prominently in the literature, one due to Richard Dedekind, the other to Kazimierz Kuratowski. (Kuratowski's is the definition used above.)
A set "S" is called Dedekind infinite if there exists an injective, non-surjective function formula_10. Such a function exhibits a bijection between "S" and a proper subset of "S", namely the image of "f". Given a Dedekind infinite set "S", a function "f", and an element "x" that is not in the image of "f", we can form an infinite sequence of distinct elements of "S", namely formula_11. Conversely, given a sequence in "S" consisting of distinct elements formula_12, we can define a function "f" that satisfies formula_13 on elements of the sequence and behaves like the identity function otherwise. Thus Dedekind infinite sets contain subsets that correspond bijectively with the natural numbers. Dedekind finite naturally means that every injective self-map is also surjective.
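Dedekind's criterion can be checked mechanically on small sets. The following sketch (the function name and the brute-force approach are illustrative, not from the text) enumerates every self-map of a small set and confirms that each injective one is also surjective; by contrast, the successor map on the natural numbers witnesses their Dedekind infinitude.

```python
from itertools import product

def every_injection_is_surjection(s):
    """Brute-force check of Dedekind finiteness for a small set:
    every injective self-map of s must also be surjective."""
    elems = sorted(s)
    for images in product(elems, repeat=len(elems)):
        f = dict(zip(elems, images))        # one candidate self-map
        injective = len(set(f.values())) == len(elems)
        surjective = set(f.values()) == set(elems)
        if injective and not surjective:
            return False                    # a Dedekind-infinite witness
    return True

# On the naturals, f(n) = n + 1 is injective but misses 0, so the
# naturals are Dedekind infinite; no finite set admits such a map.
print(every_injection_is_surjection({1, 2, 3}))  # True
```

The check is exponential in the size of the set, which is harmless here since it only serves to make the definition concrete.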
In other words, "S" is finite when the set of all non-empty subsets of "S" is equal to the intersection of all classes "X" which satisfy:
Kuratowski showed that this is equivalent to the numerical definition of a finite set. Intuitively, "K"("S") consists of the finite subsets of "S". Crucially, one does not need induction, recursion or a definition of natural numbers to define "generated by" since one may obtain "K"("S") simply by taking the intersection of all sub-semilattices containing the empty set and the singletons.
Readers unfamiliar with semilattices and other notions of abstract algebra may prefer an entirely elementary formulation. Kuratowski finite means "S" lies in the set "K"("S"), constructed as follows. Write "M" for the set of all subsets "X" of "P"("S") such that:
Then "K"("S") may be defined as the intersection of "M".
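Because "K"("S") is characterized as an intersection of closed families, it can equally be computed from below as a least fixed point. A small illustrative sketch (the function name is mine, and the construction only makes computational sense for a finite "S"):

```python
def kuratowski_closure(s):
    """Least family of subsets of s that contains the empty set and
    every singleton and is closed under the union of two members."""
    family = {frozenset()} | {frozenset({x}) for x in s}
    while True:
        # close under pairwise unions until nothing new appears
        new = {a | b for a in family for b in family} - family
        if not new:
            return family
        family |= new

K = kuratowski_closure({1, 2, 3})
# For a finite set, K(S) is the whole powerset, so S itself is a
# member of K(S), i.e. S is Kuratowski finite:
print(frozenset({1, 2, 3}) in K)  # True
print(len(K))                     # 8 subsets = 2**3
```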
In ZF, Kuratowski finite implies Dedekind finite, but not vice versa. In the parlance of a popular pedagogical formulation, when the axiom of choice fails badly, one may have an infinite family of socks with no way to choose one sock from more than finitely many of the pairs. That would make the set of such socks Dedekind finite: there can be no infinite sequence of socks, because such a sequence would allow a choice of one sock for infinitely many pairs by choosing the first sock in the sequence. However, Kuratowski finiteness would fail for the same set of socks.
In ZF set theory without the axiom of choice, the following concepts of finiteness for a set "S" are distinct. They are arranged in strictly decreasing order of strength, i.e. if a set "S" meets a criterion in the list then it meets all of the following criteria. In the absence of the axiom of choice the reverse implications are all unprovable, but if the axiom of choice is assumed then all of these concepts are equivalent. (Note that none of these definitions need the set of finite ordinal numbers to be defined first; they are all pure "set-theoretic" definitions in terms of the equality and membership relations, not involving ω.)
The forward implications (from strong to weak) are theorems within ZF. Counter-examples to the reverse implications (from weak to strong) in ZF with urelements are found using model theory.
Most of these finiteness definitions and their names are attributed to . However, definitions I, II, III, IV and V were presented in , together with proofs (or references to proofs) for the forward implications. At that time, model theory was not sufficiently advanced to find the counter-examples.
Each of the properties I-finite through IV-finite is a notion of smallness in the sense that any subset of a set with such a property will also have the property. This is not true for V-finite through VII-finite, because those sets may have countably infinite subsets.
|
https://en.wikipedia.org/wiki?curid=11742
|
List of freshwater aquarium fish species
A vast number of aquatic species have successfully adapted to live in the freshwater aquarium. This list gives some examples of the most common species found in home aquariums.
Angelfish
|
https://en.wikipedia.org/wiki?curid=11748
|
Foresight Institute
The Foresight Institute is a Palo Alto, California-based research non-profit that promotes the development of nanotechnology and other emerging technologies. The institute holds conferences on molecular nanotechnology and awards yearly prizes for developments in the field.
The Foresight Institute and its founder Eric Drexler have been criticized for unrealistic expectations, ignoring quantum effects in their design, lack of practical output, and technical obsolescence.
The Foresight Institute was founded in 1986 by Christine Peterson, K. Eric Drexler, and James C. Bennett to support the development of nanotechnology. Many of the institute's initial members came to it from the L5 Society, who were hoping to form a smaller group more focused on nanotechnology. In 1991, the Foresight Institute created two suborganizations with funding from tech entrepreneur Mitch Kapor: the Institute for Molecular Manufacturing and the Center for Constitutional Issues in Technology. In the 1990s, the Foresight Institute launched several initiatives to provide funding to developers of nanotechnology. In 1993, it created the Feynman Prize in Nanotechnology, named after physicist Richard Feynman. In May 2005, the Foresight Institute changed its name to "Foresight Nanotech Institute", though it reverted to its original name in June 2009.
The Feynman Prize in Nanotechnology is an award given by the Foresight Institute for significant advances in nanotechnology. Between 1993 and 1997, one prize was given biennially. Since 1997, two prizes have been given each year, divided into the categories of theory and experimentation. The prize is named in honor of physicist Richard Feynman, whose 1959 talk "There's Plenty of Room at the Bottom" is considered to have inspired and informed the start of the field of nanotechnology. Author Colin Milburn refers to the prize as an example of "fetishizing" its namesake Feynman, due to his "prestige as a scientist and his fame among the broader public."
The Foresight Institute also offers the Feynman Grand Prize, a $250,000 award to the first persons to create both a nanoscale robotic arm capable of precise positional control and a nanoscale 8-bit adder, with both conditions conforming to given specifications. The Feynman Grand Prize is intended to emulate historical prizes such as the Longitude prize, Orteig Prize, Kremer prize, Ansari X Prize, and two prizes that were offered by Richard Feynman personally as challenges during his 1959 "There's Plenty of Room at the Bottom" talk. In 2004, X-Prize Foundation founder Peter Diamandis was selected to chair the Feynman Grand Prize committee.
|
https://en.wikipedia.org/wiki?curid=11751
|
List of freshwater aquarium invertebrate species
This is a list of invertebrates, animals without a backbone, that are commonly kept in freshwater aquaria by hobby aquarists. Numerous shrimp species of various kinds, crayfish, a number of freshwater snail species, and at least one freshwater clam species are found in freshwater aquaria.
|
https://en.wikipedia.org/wiki?curid=11752
|
List of freshwater aquarium plant species
Aquatic plants are used to give the freshwater aquarium a natural appearance, oxygenate the water, absorb ammonia, and provide habitat for fish, especially fry (babies) and for invertebrates. Some aquarium fish and invertebrates also eat live plants. Hobbyists use aquatic plants for aquascaping in several aesthetic styles.
Most of these plant species are found either partially or fully submerged in their natural habitat. Although there are a handful of obligate aquatic plants that must be grown entirely underwater, most can grow fully emersed if the soil is moist. Even species that normally live only at the water's margins can often survive fully submerged.
The taxonomy of most plant genera is not final. Scientific names listed here may, therefore, contradict other sources.
Common aquarium plant species:
Several species of terrestrial plants are frequently sold as "aquarium plants". While such plants are beautiful and can survive and even flourish for months under water, they will eventually die and must be removed so their decay does not contaminate the aquarium water. These plants lack the biology needed to live underwater.
|
https://en.wikipedia.org/wiki?curid=11753
|
Fasces
Fasces (a "plurale tantum", from the Latin word "fascis", meaning "bundle") is a bound bundle of wooden rods, sometimes including an axe with its blade emerging. The fasces had its origin in the Etruscan civilization and was passed on to ancient Rome, where it symbolized a magistrate's power and jurisdiction. The axe originally associated with the symbol, the labrys, a double-bitted axe originally from Crete, is one of the oldest symbols of Greek civilization. To the Romans, it was known as a "bipennis".
The image has survived in the modern world as a representation of magisterial or collective power, law and governance. The fasces frequently occurs as a charge in heraldry: it is present on the reverse of the US Mercury dime coin and behind the podium in the United States House of Representatives; and it was the origin of the name of the National Fascist Party in Italy (from which the term "fascism" is derived).
During the first half of the twentieth century both the swastika and the fasces (each symbol having its own unique ancient religious and mythological associations) became heavily identified with the authoritarian/fascist political movements of Adolf Hitler and Benito Mussolini. During this period the swastika became deeply stigmatized, but the fasces did not undergo a similar process.
The fasces may have remained in use in many societies after World War II because, prior to Mussolini, it had already been adopted and incorporated within the governmental iconography of many governments outside Italy. As such, its use persists as an accepted form of governmental and other iconography in various contexts, whilst the swastika remains in common usage only in Asia, where it originated as an ancient Hindu symbol whose religious purposes are entirely unrelated to, and pre-date, early twentieth-century European fascism.
The fasces is sometimes confused with the related term "fess", which in French heraldry is called a "fasce".
A few artifacts showing a thin bundle of rods surrounding a two-headed axe point to a possible Etruscan origin for the fasces, but little is known about the Etruscans themselves. Fasces symbolism might have been derived, via the Etruscans, from the eastern Mediterranean, with the labrys, the Anatolian and Minoan double-headed axe, later incorporated into the praetorial fasces. There is little archaeological evidence for precise claims.
By the time of the Roman Republic, the fasces had developed into a thicker bundle of birch rods, sometimes surrounding a single-headed axe and tied together with a red leather ribbon into a cylinder. On certain special occasions, the fasces might be decorated with a laurel wreath.
The symbolism of the fasces suggests strength through unity (see Unity makes strength); a single rod is easily broken, while the bundle is very difficult to break. This symbolism occurs in Aesop's fable "The Old Man and his Sons". A similar story is told about the Bulgar (pre-Bulgarian, proto-Bulgarian) Khan Kubrat, giving rise to the Bulgarian national motto "Union gives strength" (Съединението прави силата). However, bundled birch twigs could also symbolise corporal punishment (see birching).
The "fasces lictoriae" ("bundles of the lictors") symbolised power and authority ("imperium") in ancient Rome, beginning with the early Roman Kingdom and continuing through the republican and imperial periods. By republican times, use of the fasces was surrounded with tradition and protocol. A corps of "apparitores" (subordinate officials) called lictors each carried fasces before a magistrate, in a number corresponding to his rank. Lictors preceded consuls (and proconsuls), praetors (and propraetors), dictators, curule aediles, quaestors, and the Flamen Dialis during Roman triumphs (public celebrations held in Rome after a military conquest).
According to Livy, it is likely that the lictors were an Etruscan tradition, adopted by Rome. The highest magistrate, the "dictator", was entitled to twenty-four lictors and fasces, the consul to twelve, the proconsul eleven, the praetor six (two within the "pomerium"), the propraetor five, and the curule aediles two.
Another part of the symbolism developed in Republican Rome was the inclusion of just a single-headed axe in the fasces, with the blade projecting from the bundle. The axe indicated that the magistrate's judicial powers ("imperium") included capital punishment. Fasces carried within the "Pomerium"—the boundary of the sacred inner city of Rome—had their axe blades removed; within the city, the power of life and death rested with the people through their assemblies. During times of emergency, however, the Roman Republic might choose a dictator to lead for a limited time period, who was the only magistrate to be granted capital punishment authority within the Pomerium. Lictors attending the dictator kept the axes in their fasces even inside the Pomerium—a sign that the dictator had the ultimate power in his own hands. There were exceptions to this rule: in 48 BC, guards holding bladed fasces guided Vatia Isauricus to the tribunal of Marcus Caelius, and Vatia Isauricus used one to destroy Caelius's magisterial chair ("sella curulis").
An occasional variation on the fasces was the addition of a laurel wreath, symbolizing victory. This occurred during the celebration of a "Triumph" - essentially a victory parade through Rome by a returning victorious general. Previously, all Republican Roman commanding generals had held high office with imperium, and so, already were entitled to the lictors and fasces.
The modern Italian word "fascio", used in the twentieth century to designate peasant cooperatives and industrial workers' unions, is related to "fasces".
Numerous governments and other authorities have used the image of the "fasces" as a symbol of power since the end of the Roman Empire. It also has been used to hearken back to the Roman republic, particularly by those who see themselves as modern-day successors to the old republic or its ideals.
The Ecuadorian coat of arms incorporated the fasces in 1830, although it had already been in use in the coat of arms of Gran Colombia.
Italian Fascism, which derives its name from the "fasces", arguably used this symbolism the most in the twentieth century. The British Union of Fascists also used it in the 1930s. The "fasces", as a widespread and long-established symbol in the West, however, has avoided the stigma associated with much of fascist symbolism, and many authorities continue to display them, including the federal government of the United States.
A review of the images included in "Les Grands Palais de France : Fontainebleau" reveals that French architects used the Roman fasces ("faisceaux romains") as a decorative device as early as the reign of Louis XIII (1610–1643) and continued to employ it through the periods of Napoleon I's Empire (1804–1815).
The fasces typically appeared in a context reminiscent of the Roman Republic and of the Roman Empire. The French Revolution used many references to the ancient Roman Republic in its imagery. During the First Republic, the fasces, topped by the Phrygian cap, was a tribute to the Roman Republic and signified that power belongs to the people. It also symbolizes the "unity and indivisibility of the Republic", as stated in the French Constitution. In 1848 and after 1870, it appears on the seal of the French Republic, held by the figure of Liberty. The fasces also appears in the arms of the French Republic with the "RF" for "République française", surrounded by leaves of olive (a symbol of peace) and oak (a symbol of justice). While it is used widely by French officials, this symbol was never officially adopted by the government.
The fasces appears on the helmet and the buckle insignia of the French Army's Autonomous Corps of Military Justice, as well as on that service's distinct cap badges for the prosecuting and defending lawyers in a court-martial.
Since the founding of the United States in the 18th century, several offices and institutions in the United States have heavily incorporated representations of the "fasces" into much of their iconography.
The following cases all involve the adoption of the fasces as a symbol or icon, although no physical re-introduction has occurred.
|
https://en.wikipedia.org/wiki?curid=11755
|
Fast combat support ship
The fast combat support ship (US Navy hull classification symbol: AOE) is the United States Navy's largest combat logistics ship, designed as an oiler, ammunition and supply ship. All fast combat support ships currently in service are operated by Military Sealift Command. They can carry more than 177,000 barrels of oil, 2,150 tons of ammunition, 500 tons of dry stores and 250 tons of refrigerated stores. The ship receives petroleum products, ammunition and stores from various shuttle ships and redistributes these items as needed to other ships in the carrier battle group. This greatly reduces the number of service ships needed to travel with carrier battle groups.
The four ships of the "Sacramento" class were 53,000 tons at full load, 796 feet in overall length, and carried two Boeing Vertol CH-46 Sea Knight helicopters. The "Sacramento" class was retired in 2005.
The ships displace 48,800 tons full load and carried two Boeing Vertol CH-46 Sea Knight helicopters or two Sikorsky MH-60S Knighthawk helicopters.
Air defense includes the Sea Sparrow radar and infrared surface-to-air missile in eight-cell launchers, providing point defense at ranges of 15 km to 25 km. There are also two Phalanx Mk 15 20 mm Gatling gun close-in weapon systems (CIWS) and two 25 mm Raytheon Mk 88 guns.
China has developed the Type 901 fast combat support ship which serves a similar mission in their navy.
|
https://en.wikipedia.org/wiki?curid=11757
|
FASA
FASA Corporation was an American publisher of role-playing games, wargames and board games between 1980 and 2001, after which they closed publishing operations for several years, becoming an IP holding company under the name FASA Inc. In 2012, a wholly owned subsidiary called FASA Games Inc. went into operation, using the name and logo under license from the parent company. FASA Games Inc. works alongside Ral Partha Europe, also a subsidiary of FASA Corporation, to bring out new editions of existing properties such as Earthdawn and Demonworld, and to develop new properties within the FASA cosmology.
FASA first appeared as a "Traveller" licensee, producing supplements for that Game Designers' Workshop role-playing game, especially the work of the Keith Brothers. The company went on to establish itself as a major gaming company with the publication of the first licensed "Star Trek" RPG, then several successful original games. Noteworthy lines included "BattleTech" and "Shadowrun". Their "Star Trek" role-playing supplements and tactical ship game enjoyed popularity outside the wargaming community since, at the time, official descriptions of the "Star Trek" universe were not common, and the gaming supplements offered details fans craved.
The highly successful "BattleTech" line led to a series of video games, some of the first virtual reality gaming suites, called Virtual World (created by a subdivision of the company known at the time of development as ESP, an acronym for "Extremely Secret Project"), and a Saturday-morning cartoon.
Originally the name FASA was an acronym for "Freedonian Aeronautics and Space Administration", a joking allusion to the Marx Brothers film "Duck Soup". This tongue-in-cheek attitude was carried over in humorous self-references in its games. For example, in "Shadowrun", a tactical nuclear device was detonated near FASA's offices at 1026 W. Van Buren St in Chicago, Illinois.
FASA Corporation was founded by Jordan Weisman and L. Ross Babcock III in 1980 with a starting capital of $350. The two were fellow gamers at the United States Merchant Marine Academy. Mort Weisman, Jordan's father, joined the company in 1985 to lead the company's operational management having sold his book publishing business, Swallow Press.
Under the new commercial direction and with Mort's capital injection, the company diversified into books and miniature figures. After consulting their UK distributor, Chart Hobby Distributors, FASA licensed the manufacture of its "BattleTech" figurines to Miniature Figurines (also known as Minifigs). FASA would later acquire the U.S. figures manufacturer Ral Partha, which was the US manufacturer of Minifigs. While Mort ran the paper and metal based sides of the business, the company's founders focused on the development of computer-based games. They were particularly interested in virtual reality (particularly the BattleTech Centers / Virtual World) but also developed desktop computer games.
When Microsoft acquired the FASA Interactive subsidiary, Babcock went with that company. After the sale of Virtual World, Jordan turned his attention to the founding of a new games venture called WizKids.
FASA unexpectedly ceased active operations on April 30, 2001, but still exists as a corporation holding intellectual property rights, which it licenses to other publishers. Contrary to popular belief, the company did not go bankrupt. Allegedly the owners decided to quit while the company was still financially sound in a market they perceived as going downhill. Mort Weisman had been talking of retirement for some years and his confidence in the future of the paper-based games business was low. He considered the intellectual property of FASA to be of high value but did not wish to continue working as he had been for the last decade or more. Unwilling to wrestle with the complexities of dividing up the going concern, the owners issued a press release on January 25, 2001 announcing the immediate closure of the business.
The "BattleTech" and "Shadowrun" properties were sold to WizKids, who in turn licensed their publication to FanPro LLC and then to Catalyst Game Labs. The "Earthdawn" license was sold to WizKids, and then back to FASA. Living Room Games published "Earthdawn" (Second Edition), RedBrick published "Earthdawn" (Classic and Third Editions), but the license has now returned to FASA Corporation, and FASA Games, Inc. is the current license holder for new material. "Crimson Skies" was originally developed by Zipper Interactive under the FASA Interactive brand in late 2000 and used under license by FASA; FASA Interactive had been purchased by Microsoft, so rights to "Crimson Skies" stayed with Microsoft. Rights to the miniatures game "" reverted to the designer Mike "Skuzzy" Nielsen, but it has not been republished in any form due partly to legal difficulties. Microsoft officially closed the FASA team in the company's gaming division on September 12, 2007.
On December 6, 2007, FASA founder Jordan Weisman announced that his new venture, Smith & Tinker, had licensed the electronic gaming rights to "MechWarrior", "Shadowrun", and "Crimson Skies" from Microsoft.
On April 28, 2008 Mike "Skuzzy" Nielsen announced plans to create " 2.0".
At Gen Con 2012, FASA Games, Inc. was revealed, which includes FASA Corporation co-founder Ross Babcock on the Board of Directors. While FASA Corporation still owns and manages the FASA IP and brands, FASA Games, Inc. has announced its intention to develop new games under the FASA banner.
|
https://en.wikipedia.org/wiki?curid=11758
|
McDonnell Douglas F-4 Phantom II
The McDonnell Douglas F-4 Phantom II is a tandem two-seat, twin-engine, all-weather, long-range supersonic jet interceptor and fighter-bomber originally developed for the United States Navy by McDonnell Aircraft. It first entered service in 1960 with the Navy. Proving highly adaptable, it was also adopted by the United States Marine Corps and the United States Air Force, and by the mid-1960s had become a major part of their air arms.
The Phantom is a large fighter with a top speed of over Mach 2.2. It can carry more than 18,000 pounds (8,400 kg) of weapons on nine external hardpoints, including air-to-air missiles, air-to-ground missiles, and various bombs. The F-4, like other interceptors of its time, was initially designed without an internal cannon. Later models incorporated an M61 Vulcan rotary cannon. Beginning in 1959, it set 15 world records for in-flight performance, including an absolute speed record and an absolute altitude record.
The F-4 was used extensively during the Vietnam War. It served as the principal air superiority fighter for the U.S. Air Force, Navy, and Marine Corps and became important in the ground-attack and aerial reconnaissance roles late in the war. During the Vietnam War, one U.S. Air Force pilot, two weapon systems officers (WSOs), one U.S. Navy pilot and one radar intercept officer (RIO) became aces by achieving five aerial kills against enemy fighter aircraft. The F-4 continued to form a major part of U.S. military air power throughout the 1970s and 1980s, being gradually replaced by more modern aircraft such as the F-15 Eagle and F-16 Fighting Falcon in the U.S. Air Force, the F-14 Tomcat in the U.S. Navy, and the F/A-18 Hornet in the U.S. Navy and U.S. Marine Corps.
The F-4 Phantom II remained in use by the U.S. in the reconnaissance and Wild Weasel (Suppression of Enemy Air Defenses) roles in the 1991 Gulf War, finally leaving service in 1996. It was also the only aircraft used by both U.S. flight demonstration teams: the United States Air Force Thunderbirds (F-4E) and the United States Navy Blue Angels (F-4J). The F-4 was also operated by the armed forces of 11 other nations. Israeli Phantoms saw extensive combat in several Arab–Israeli conflicts, while Iran used its large fleet of Phantoms, acquired before the fall of the Shah, in the Iran–Iraq War. Phantom production ran from 1958 to 1981, with a total of 5,195 built, making it the most produced American supersonic military aircraft. As of 2020, 62 years after its first flight, the F-4 remains in service with Iran, Japan, South Korea, Greece, and Turkey. The aircraft has most recently been in service against the Islamic State group in the Middle East.
In 1952, McDonnell's Chief of Aerodynamics, Dave Lewis, was appointed by CEO Jim McDonnell to be the company's preliminary design manager. With no new aircraft competitions on the horizon, internal studies concluded the Navy had the greatest need for a new and different aircraft type: an attack fighter.
In 1953, McDonnell Aircraft began work on revising its F3H Demon naval fighter, seeking expanded capabilities and better performance. The company developed several projects, including a variant powered by a Wright J67 engine, and variants powered by two Wright J65 engines, or two General Electric J79 engines. The J79-powered version promised a top speed of Mach 1.97. On 19 September 1953, McDonnell approached the United States Navy with a proposal for the "Super Demon". Uniquely, the aircraft was to be modular, as it could be fitted with one- or two-seat noses for different missions, with different nose cones to accommodate radar, photo cameras, four 20 mm (.79 in) cannon, or 56 FFAR unguided rockets in addition to the nine hardpoints under the wings and the fuselage. The Navy was sufficiently interested to order a full-scale mock-up of the F3H-G/H, but felt that the upcoming Grumman XF9F-9 and Vought XF8U-1 already satisfied the need for a supersonic fighter.
The McDonnell design was therefore reworked into an all-weather fighter-bomber with 11 external hardpoints for weapons and on 18 October 1954, the company received a letter of intent for two YAH-1 prototypes. Then on 26 May 1955, four Navy officers arrived at the McDonnell offices and, within an hour, presented the company with an entirely new set of requirements. Because the Navy already had the Douglas A-4 Skyhawk for ground attack and F-8 Crusader for dogfighting, the project now had to fulfill the need for an all-weather fleet defense interceptor. A second crewman was added to operate the powerful radar; designers believed that air combat in the next war would overload solo pilots with information.
The XF4H-1 was designed to carry four semi-recessed AAM-N-6 Sparrow III radar-guided missiles, and to be powered by two J79-GE-8 engines. As in the McDonnell F-101 Voodoo, the engines sat low in the fuselage to maximize internal fuel capacity and ingested air through fixed geometry intakes. The thin-section wing had a leading edge sweep of 45° and was equipped with blown flaps for better low-speed handling.
Wind tunnel testing had revealed lateral instability, requiring the addition of 5° dihedral to the wings. To avoid redesigning the titanium central section of the aircraft, McDonnell engineers angled up only the outer portions of the wings by 12°, which averaged to the required 5° over the entire wingspan. The wings also received the distinctive "dogtooth" for improved control at high angles of attack. The all-moving tailplane was given 23° of anhedral to improve control at high angles of attack, while still keeping the tailplane clear of the engine exhaust. In addition, air intakes were equipped with variable geometry ramps to regulate airflow to the engines at supersonic speeds. All-weather intercept capability was achieved thanks to the AN/APQ-50 radar. To accommodate carrier operations, the landing gear was designed to withstand landings with a sink rate of , while the nose strut could extend by some to increase angle of attack at takeoff.
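The averaging of the outer-panel cant into an effective dihedral is a simple span-weighted mean. As a rough illustration (the 5/12 panel fraction below is back-solved from the 12° and 5° figures, not stated in the text, and the true dihedral effect depends on more than the geometric average):

```python
def effective_dihedral(outer_fraction, outer_angle_deg):
    """Span-weighted average dihedral when only the outer fraction of
    each wing panel is canted up and the inner section stays flat."""
    return outer_fraction * outer_angle_deg

# A 12-degree cant over roughly 5/12 of the span averages out to the
# required 5 degrees across the whole wing:
print(round(effective_dihedral(5 / 12, 12.0), 6))  # 5.0
```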
On 25 July 1955, the Navy ordered two XF4H-1 test aircraft and five YF4H-1 pre-production examples. The Phantom made its maiden flight on 27 May 1958 with Robert C. Little at the controls. A hydraulic problem precluded retraction of the landing gear, but subsequent flights went more smoothly. Early testing resulted in redesign of the air intakes, including the distinctive addition of 12,500 holes to "bleed off" the slow-moving boundary layer air from the surface of each intake ramp. Series production aircraft also featured splitter plates to divert the boundary layer away from the engine intakes. The aircraft soon squared off against the XF8U-3 Crusader III. Due to operator workload, the Navy wanted a two-seat aircraft and on 17 December 1958 the F4H was declared a winner. Delays with the J79-GE-8 engines meant that the first production aircraft were fitted with J79-GE-2 and −2A engines, each having 16,100 lbf (71.8 kN) of afterburning thrust. In 1959, the Phantom began carrier suitability trials with the first complete launch-recovery cycle performed on 15 February 1960 from .
There were proposals to name the F4H "Satan" and "Mithras". In the end, the aircraft was given the less controversial name "Phantom II", the first "Phantom" being another McDonnell jet fighter, the FH-1 Phantom. The Phantom II was briefly given the designation F-110A and the name "Spectre" by the USAF, but neither name was officially used.
Early in production, the radar was upgraded to the Westinghouse AN/APQ-72, an AN/APQ-50 with a larger radar antenna, necessitating the bulbous nose, and the canopy was reworked to improve visibility and make the rear cockpit less claustrophobic. During its career the Phantom underwent many changes in the form of the numerous variants developed.
The USN operated the F4H-1 (re-designated F-4A in 1962) with J79-GE-2 and -2A engines of 16,100 lbf (71.6 kN) thrust, with later builds receiving -8 engines. A total of 45 F-4As were built; none saw combat, and most ended up as test or training aircraft. The USN and USMC received the first definitive Phantom, the F-4B, which was equipped with the Westinghouse APQ-72 radar (pulse only), a Texas Instruments AAA-4 infrared search and track pod under the nose, and an AN/AJB-3 bombing system, and was powered by J79-GE-8, -8A, and -8B engines of 10,900 lbf (48.5 kN) dry and 16,950 lbf (75.4 kN) afterburning (reheat) thrust; it first flew on 25 March 1961. 649 F-4Bs were built, with deliveries beginning in 1961 and the VF-121 Pacemakers receiving the first examples at NAS Miramar.
The USAF received Phantoms as the result of Defense Secretary Robert McNamara's push to create a unified fighter for all branches of the US military. After an F-4B won the "Operation Highspeed" fly-off against the Convair F-106 Delta Dart, the USAF borrowed two Naval F-4Bs, temporarily designating them F-110A "Spectre" in January 1962, and developed requirements for its own version. Unlike the US Navy's focus on air-to-air interception in the Fleet Air Defense (FAD) mission, the USAF emphasized both an air-to-air and an air-to-ground fighter-bomber role. With McNamara's unification of designations on 18 September 1962, the Phantom became the F-4, with the naval version designated the F-4B and the USAF version the F-4C. The first Air Force Phantom flew on 27 May 1963, exceeding Mach 2 on its maiden flight.
The F-4J improved both air-to-air and ground-attack capability; deliveries began in 1966 and ended in 1972 with 522 built. It was equipped with J79-GE-10 engines of 17,844 lbf (79.4 kN) thrust, the Westinghouse AN/AWG-10 fire control system (making the F-4J the first fighter in the world with operational look-down/shoot-down capability), a new integrated missile control system, and the AN/AJB-7 bombing system for expanded ground attack capability.
The F-4N (updated F-4Bs) with smokeless engines and F-4J aerodynamic improvements started in 1972 under a U.S. Navy-initiated refurbishment program called "Project Bee Line", with 228 converted by 1978. The F-4S model resulted from the refurbishment of 265 F-4Js with J79-GE-17 smokeless engines of 17,900 lbf (79.6 kN), AWG-10B radar with digitized circuitry for improved performance and reliability, the Honeywell AN/AVG-8 Visual Target Acquisition Set or VTAS (the world's first operational helmet sighting system), classified avionics improvements, airframe reinforcement, and leading edge slats for enhanced maneuvering. The USMC also operated the RF-4B with reconnaissance cameras, of which 46 were built; pilots flew the RF-4B alone and unarmed, straight and level on predictable flight paths at 5,000 feet while taking photographs, hoping that steady velocity would keep them alive.
Phantom II production ended in the United States in 1979 after 5,195 had been built (5,057 by McDonnell Douglas and 138 in Japan by Mitsubishi). Of these, 2,874 went to the USAF, 1,264 to the Navy and Marine Corps, and the rest to foreign customers. The last U.S.-built F-4 went to South Korea, while the last F-4 built was an F-4EJ built by Mitsubishi Heavy Industries in Japan and delivered on 20 May 1981. As of 2008, 631 Phantoms were in service worldwide, while Phantoms remained in use as target drones (specifically QF-4Cs) operated by the U.S. military until 21 December 2016, when the Air Force officially ended use of the type.
To show off their new fighter, the Navy carried out a series of record-breaking flights early in Phantom development. In all, the Phantom set 16 world records. Except for Skyburner, all records were achieved in unmodified production aircraft. Five of the speed records remained unbeaten until the F-15 Eagle appeared in 1975.
The F-4 Phantom is a tandem-seat fighter-bomber designed as a carrier-based interceptor to fill the U.S. Navy's fleet defense fighter role. Innovations in the F-4 included an advanced pulse-Doppler radar and extensive use of titanium in its airframe.
Despite imposing dimensions and a maximum takeoff weight of over 60,000 lb (27,000 kg), the F-4 has a top speed of Mach 2.23 and an initial climb rate of over 41,000 ft/min (210 m/s). The F-4's nine external hardpoints have a capability of up to 18,650 pounds (8,480 kg) of weapons, including air-to-air and air-to-surface missiles, and unguided, guided, and thermonuclear weapons. Like other interceptors of its day, the F-4 was designed without an internal cannon.
The baseline performance of a Mach 2-class fighter with long range and a bomber-sized payload would become the template for the next generation of large and light/middle-weight fighters optimized for daylight air combat.
"Speed is life" was the F-4 pilots' slogan. The Phantom's greatest advantage in air combat was acceleration and thrust, which permitted a skilled pilot to engage and disengage from the fight at will. MiGs usually could outturn the F-4 because of the high drag on its airframe; as a massive fighter aircraft designed to fire radar-guided missiles from beyond visual range, the F-4 lacked the agility of its Soviet opponents and was subject to adverse yaw during hard maneuvering. Although the aircraft was subject to irrecoverable spins during aileron rolls, pilots reported it to be very responsive and easy to fly on the edge of its performance envelope. In 1972, the F-4E model was upgraded with leading edge slats on the wing, greatly improving high angle of attack maneuverability at the expense of top speed.
The J79 reacted instantly to throttle inputs, unlike earlier engines. During one carrier landing, John Cheshire's tailhook missed the arresting gear after he had fully idled the engines; by applying full throttle, the J79s went straight to afterburner, turning his bolter into a touch-and-go landing. The J79 produced noticeable amounts of black smoke (at mid-throttle/cruise settings), a severe disadvantage in that it made it easier for the enemy to spot the aircraft. Two decades after the aircraft entered service, this was solved on the F-4S, which was fitted with the -10A engine variant with a smokeless combustor.
The lack of an internal gun "was the biggest mistake on the F-4", Cheshire said; "Bullets are cheap and tend to go where you aim them. I needed a gun, and I really wished I had one". Marine Corps general John R. Dailey recalled that "everyone in RF-4s wished they had a gun on the aircraft". For a brief period, doctrine held that turning combat would be impossible at supersonic speeds, and little effort was made to teach pilots air combat maneuvering. In reality, engagements quickly became subsonic, as pilots would slow down in an effort to get behind their adversaries. Furthermore, the relatively new heat-seeking and radar-guided missiles of the time were frequently reported as unreliable, and pilots had to fire multiple missiles (known as ripple-firing) just to hit one enemy fighter. To compound the problem, rules of engagement in Vietnam precluded long-range missile attacks in most instances, as visual identification was normally required. Many pilots found themselves on the tail of an enemy aircraft but too close to fire short-range Falcons or Sidewinders. Although by 1965 USAF F-4Cs began carrying SUU-16 external gunpods containing a 20 mm (.79 in) M61A1 Vulcan Gatling cannon, USAF cockpits were not equipped with lead-computing gunsights until the introduction of the SUU-23, virtually assuring a miss in a maneuvering fight. Some Marine Corps aircraft carried two pods for strafing. In addition to the loss of performance due to drag, combat showed the externally mounted cannon to be inaccurate unless frequently boresighted, yet far more cost-effective than missiles. The lack of a cannon was finally addressed by adding an internally mounted 20 mm (.79 in) M61A1 Vulcan on the F-4E.
Note: Original amounts were in 1965 U.S. dollars. The figures in these tables have been adjusted for inflation to the current year.
In USAF service, the F-4 was initially designated the F-110 Spectre prior to the introduction of the 1962 United States Tri-Service aircraft designation system. The USAF quickly embraced the design and became the largest Phantom user. The first USAF Phantoms in Vietnam were F-4Cs from the 43rd Tactical Fighter Squadron, which arrived in December 1964.
Unlike the U.S. Navy and U.S. Marine Corps, which flew the Phantom with a Naval Aviator (pilot) in the front seat and a Naval Flight Officer as a radar intercept officer (RIO) in the back seat, the USAF initially flew its Phantoms with a rated Air Force Pilot in front and back seats. Pilots usually did not like flying in the back seat; while the GIB, or "guy in back", could fly and ostensibly land the aircraft, he had fewer flight instruments and a very restricted forward view. The Air Force later assigned a rated Air Force Navigator qualified as a weapon/targeting systems officer (later designated as weapon systems officer or WSO) in the rear seat instead of another pilot.
On 10 July 1965, F-4Cs of the 45th Tactical Fighter Squadron, 15th TFW, on temporary assignment in Ubon, Thailand, scored the USAF's first victories against North Vietnamese MiG-17s using AIM-9 Sidewinder air-to-air missiles. On 26 April 1966, an F-4C from the 480th Tactical Fighter Squadron scored the first aerial victory by a U.S. aircrew over a North Vietnamese MiG-21 "Fishbed". On 24 July 1965, another Phantom from the 45th Tactical Fighter Squadron became the first American aircraft to be downed by an enemy SAM, and on 5 October 1966 an 8th Tactical Fighter Wing F-4C became the first U.S. jet lost to an air-to-air missile, fired by a MiG-21.
Early aircraft suffered from leaks in wing fuel tanks that required re-sealing after each flight and 85 aircraft were found to have cracks in outer wing ribs and stringers. There were also problems with aileron control cylinders, electrical connectors, and engine compartment fires. Reconnaissance RF-4Cs made their debut in Vietnam on 30 October 1965, flying the hazardous post-strike reconnaissance missions. The USAF Thunderbirds used the F-4E from the 1969 season until 1974.
Although the F-4C was essentially identical to the Navy/Marine Corps F-4B in flight performance and carried the AIM-9 Sidewinder missiles, USAF-tailored F-4Ds initially arrived in June 1967 equipped with AIM-4 Falcons. However, the Falcon, like its predecessors, was designed to shoot down heavy bombers flying straight and level. Its reliability proved no better than that of its predecessors, and its complex firing sequence and limited seeker-head cooling time made it virtually useless in combat against agile fighters. The F-4Ds reverted to using Sidewinders under the "Rivet Haste" program in early 1968, and by 1972 the AIM-7E-2 "Dogfight Sparrow" had become the preferred missile for USAF pilots. Like other Vietnam War Phantoms, the F-4Ds were urgently fitted with radar warning receivers to detect the Soviet-built S-75 Dvina SAMs.
From the initial deployment of the F-4C to Southeast Asia, USAF Phantoms performed both air superiority and ground attack roles, supporting not only ground troops in South Vietnam, but also conducting bombing sorties in Laos and North Vietnam. As the F-105 force underwent severe attrition between 1965 and 1968, the bombing role of the F-4 proportionately increased until after November 1970 (when the last F-105D was withdrawn from combat) it became the primary USAF tactical ordnance delivery system. In October 1972 the first squadron of EF-4C Wild Weasel aircraft deployed to Thailand on temporary duty. The "E" prefix was later dropped and the aircraft was simply known as the F-4C Wild Weasel.
Sixteen squadrons of Phantoms were permanently deployed between 1965 and 1973, and 17 others deployed on temporary combat assignments. Peak numbers of combat F-4s occurred in 1972, when 353 were based in Thailand. A total of 445 Air Force Phantom fighter-bombers were lost: 370 in combat (33 to MiGs, 30 to SAMs, and 307 to AAA), 193 of them over North Vietnam.
The RF-4C was operated by four squadrons, and of the 83 losses, 72 were in combat (seven to SAMs and 65 to AAA), including 38 over North Vietnam. By war's end, the U.S. Air Force had lost a total of 528 F-4 and RF-4C Phantoms. When combined with U.S. Navy and Marine Corps losses of 233 Phantoms, 761 F-4/RF-4 Phantoms were lost in the Vietnam War.
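The combined totals in this paragraph follow directly from the component figures; a quick arithmetic check using only the numbers quoted in the text:

```python
# USAF Phantom losses in Southeast Asia, as quoted in the text.
usaf_f4_losses = 445     # Air Force F-4 fighter-bombers
usaf_rf4c_losses = 83    # Air Force RF-4C reconnaissance aircraft
usaf_total = usaf_f4_losses + usaf_rf4c_losses
print(usaf_total)        # 528 USAF F-4/RF-4C Phantoms lost

navy_marine_losses = 233
print(usaf_total + navy_marine_losses)  # 761 Phantoms lost in the war overall
```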
On 28 August 1972, Captain Steve Ritchie became the first USAF ace of the war. On 9 September 1972, WSO Capt Charles B. DeBellevue became the highest-scoring American ace of the war with six victories, and WSO Capt Jeffrey Feinstein became the last USAF ace of the war on 13 October 1972. Upon return to the United States, DeBellevue and Feinstein were assigned to undergraduate pilot training (Feinstein was given a vision waiver) and requalified as USAF pilots in the F-4. USAF F-4C/D/E crews claimed 107½ MiG kills in Southeast Asia (50 by Sparrow, 31 by Sidewinder, five by Falcon, 15.5 by gun, and six by other means).
On 31 January 1972, the 170th Tactical Fighter Squadron/183d Tactical Fighter Group of the Illinois Air National Guard became the first Air National Guard unit to transition to Phantoms, from Republic F-84F Thunderstreaks, which were found to have corrosion problems. Phantoms would eventually equip numerous tactical fighter and tactical reconnaissance units in the USAF active, National Guard, and reserve forces.
On 2 June 1972, a Phantom flying at supersonic speed shot down a MiG-19 over Thud Ridge in Vietnam with its cannon. At a recorded speed of Mach 1.2, Major Phil Handley's shoot down was the first and only recorded gun kill while flying at supersonic speeds.
On 15 August 1990, 24 F-4G Wild Weasel Vs and six RF-4Cs were deployed to Shaikh Isa AB, Bahrain, for Operation Desert Storm. The F-4G was the only aircraft in the USAF inventory equipped for the Suppression of Enemy Air Defenses (SEAD) role, and was needed to protect coalition aircraft from Iraq's extensive air defense system. The RF-4C was the only aircraft equipped with the ultra-long-range KS-127 LOROP (long-range oblique photography) camera, and was used for a variety of reconnaissance missions. Although the RF-4Cs flew almost daily missions, only one was lost, in a fatal accident before the start of hostilities. One F-4G was lost when enemy fire damaged the fuel tanks and the aircraft ran out of fuel near a friendly airbase. The last USAF Phantoms, F-4G Wild Weasel Vs from the 561st Fighter Squadron, were retired on 26 March 1996. The last operational flight of the F-4G Wild Weasel was from the 190th Fighter Squadron, Idaho Air National Guard, in April 1996. The last operational USAF/ANG F-4 to land was flown by Maj Mike Webb and Maj Gary Leeder of the Idaho ANG.
Like the Navy, the Air Force has operated QF-4 target drones, serving with the 82d Aerial Targets Squadron at Tyndall Air Force Base, Florida, and Holloman Air Force Base, New Mexico. It was expected that the F-4 would remain in the target role with the 82d ATRS until at least 2015, when they would be replaced by early versions of the F-16 Fighting Falcon converted to a QF-16 configuration. Several QF-4s also retained capability as manned aircraft and were maintained in historical color schemes, being displayed as part of Air Combat Command's Heritage Flight at air shows, base open houses, and other events while serving as non-expendable target aircraft during the week. On 19 November 2013, BAE Systems delivered the last QF-4 aerial target to the Air Force. The example had been in storage for over 20 years before being converted. Over 16 years, BAE had converted 314 F-4 and RF-4 Phantom IIs into QF-4s and QRF-4s, with each aircraft taking six months to adapt. As of December 2013, QF-4 and QRF-4 aircraft had flown over 16,000 manned and 600 unmanned training sorties, with 250 unmanned aircraft being shot down in firing exercises. The remaining QF-4s and QRF-4s held their training role until the first of 126 QF-16s were delivered by Boeing. The final flight of an Air Force QF-4 from Tyndall AFB took place on 27 May 2015 to Holloman AFB. After Tyndall AFB ceased operations, the 53d Weapons Evaluation Group at Holloman became the last remaining operator of the fleet of 22 QF-4s. The base continued using them to fly manned test and unmanned live fire test support and Foreign Military Sales testing, with the final unmanned flight taking place in August 2016. The type was officially retired from US military service with a four-ship flight at Holloman during an event on 21 December 2016. The remaining QF-4s were to be demilitarized after 1 January 2017.
On 30 December 1960, the VF-121 "Pacemakers" at NAS Miramar became the first Phantom operator with its F4H-1Fs (F-4As). The VF-74 "Be-devilers" at NAS Oceana became the first deployable Phantom squadron when it received its F4H-1s (F-4Bs) on 8 July 1961. The squadron completed carrier qualifications in October 1961 and the Phantom's first full carrier deployment between August 1962 and March 1963 aboard . The second deployable U.S. Atlantic Fleet squadron to receive F-4Bs was the VF-102 "Diamondbacks", who promptly took their new aircraft on the shakedown cruise of . The first deployable U.S. Pacific Fleet squadron to receive the F-4B was the VF-114 "Aardvarks", which participated in the September 1962 cruise aboard .
By the time of the Tonkin Gulf incident, 13 of 31 deployable Navy squadrons were armed with the type. F-4Bs from made the first Phantom combat sortie of the Vietnam War on 5 August 1964, flying bomber escort in Operation Pierce Arrow. Navy fighter pilots were unused to flying with a non-pilot RIO, but learned from air combat in Vietnam the benefits of the "guy in back" or "voice in the luggage compartment" helping with the workload. The first Phantom air-to-air victory of the war took place on 9 April 1965, when an F-4B from VF-96 "Fighting Falcons" piloted by Lieutenant (junior grade) Terence M. Murphy and his RIO, Ensign Ronald Fegan, shot down a Chinese MiG-17 "Fresco". The Phantom itself was then shot down; controversy continues over whether it fell to MiG guns or, as enemy reports later indicated, to an AIM-7 Sparrow III fired by one of Murphy's and Fegan's wingmen. On 17 June 1965, an F-4B from VF-21 "Freelancers" piloted by Commander Louis Page and Lieutenant John C. Smith shot down the first North Vietnamese MiG of the war.
On 10 May 1972, Lieutenant Randy "Duke" Cunningham and Lieutenant (junior grade) William P. Driscoll flying an F-4J, call sign "Showtime 100", shot down three MiG-17s to become the first American flying aces of the war. Their fifth victory was believed at the time to be over a mysterious North Vietnamese ace, Colonel Nguyen Toon, now considered mythical. On the return flight, the Phantom was damaged by an enemy surface-to-air missile. To avoid being captured, Cunningham and Driscoll flew their burning aircraft using only the rudder and afterburner (the damage to the aircraft rendered conventional control nearly impossible), until they could eject over water.
During the war, U.S. Navy F-4 Phantom squadrons participated in 84 combat tours with F-4Bs, F-4Js, and F-4Ns. The Navy claimed 40 air-to-air victories at a cost of 73 Phantoms lost in combat (seven to enemy aircraft, 13 to SAMs, and 53 to AAA). An additional 54 Phantoms were lost in mishaps.
In 1984, all Navy F-4Ns were retired from Fleet service in deployable USN squadrons, and by 1987 the last F-4Ss were retired from deployable USN squadrons. On 25 March 1986, an F-4S belonging to the VF-151 "Vigilantes" became the last active duty U.S. Navy Phantom to launch from an aircraft carrier, in this case, . On 18 October 1986, an F-4S from the VF-202 "Superheats", a Naval Reserve fighter squadron, made the last-ever Phantom carrier landing while operating aboard . In 1987, the last of the Naval Reserve-operated F-4S aircraft were replaced by F-14As. The last Phantoms in service with the Navy were QF-4N and QF-4S target drones operated by the Naval Air Warfare Center at NAS Point Mugu, California. These airframes were subsequently retired in 2004.
The Marine Corps received its first F-4Bs in June 1962, with the "Black Knights" of VMFA-314 at Marine Corps Air Station El Toro, California becoming the first operational squadron. Marine Phantoms from VMFA-531 "Gray Ghosts" were assigned to Da Nang airbase on South Vietnam's northeast coast on 10 May 1965 and were initially assigned to provide air defense for the USMC. They soon began close air support (CAS) missions, and VMFA-314 'Black Knights', VMFA-232 'Red Devils', VMFA-323 'Death Rattlers', and VMFA-542 'Bengals' soon arrived at the primitive airfield. Marine F-4 pilots claimed three enemy MiGs (two while on exchange duty with the USAF) at the cost of 75 aircraft lost in combat, mostly to ground fire, and four in accidents.
The VMCJ-1 Golden Hawks (later VMAQ-1 and VMAQ-4 which had the old RM tailcode) flew the first photo recon mission with an RF-4B variant on 3 November 1966 from Da Nang AB, South Vietnam and remained there until 1970 with no RF-4B losses and only one aircraft damaged by anti-aircraft artillery (AAA) fire. VMCJ-2 and VMCJ-3 (now VMAQ-3) provided aircraft for VMCJ-1 in Da Nang and VMFP-3 was formed in 1975 at MCAS El Toro, CA consolidating all USMC RF-4Bs in one unit that became known as "The Eyes of the Corps." VMFP-3 disestablished in August 1990 after the Advanced Tactical Airborne Reconnaissance System was introduced for the F/A-18D Hornet.
The F-4 continued to equip fighter-attack squadrons in both active and reserve Marine Corps units throughout the 1960s, 1970s and 1980s and into the early 1990s. In the early 1980s, these squadrons began to transition to the F/A-18 Hornet, starting with the same squadron that introduced the F-4 to the Marine Corps, VMFA-314 at MCAS El Toro, California. On 18 January 1992, the last Marine Corps Phantom, an F-4S in the Marine Corps Reserve, was retired by the "Cowboys" of VMFA-112 at NAS Dallas, Texas, after which the squadron was re-equipped with F/A-18 Hornets.
The USAF and the US Navy had high expectations of the F-4 Phantom, assuming that the massive firepower, the best available on-board radar, the highest speed and acceleration properties, coupled with new tactics, would provide Phantoms with an advantage over the MiGs. However, in confrontations with the lighter MiG-21, F-4s did not always succeed and began to suffer losses. Over the course of the air war in Vietnam, between 3 April 1965 and 8 January 1973, each side would ultimately claim favorable kill ratios.
During the war, U.S. Navy F-4 Phantoms scored 40 air-to-air victories at a loss of seven Phantoms to enemy aircraft. USMC F-4 pilots claimed three enemy MiGs at the cost of one aircraft in air combat. USAF F-4 Phantom crews scored 107½ MiG kills (including 33½ MiG-17s, eight MiG-19s, and 66 MiG-21s) at a cost of 33 Phantoms in air combat. F-4 pilots were credited with a total of 150½ MiG kills at a cost of 42 Phantoms in air combat.
According to the VPAF, 103 F-4 Phantoms were shot down by MiG-21s at a cost of 54 MiG-21s downed by F-4s. During the war, the VPAF lost 131 MiGs in air combat (63 MiG-17s, eight MiG-19s, and 60 MiG-21s), about half of them to F-4s.
From 1966 to November 1968, in 46 air battles conducted over North Vietnam between F-4s and MiG-21s, the VPAF claimed 27 F-4s shot down by MiG-21s at a cost of 20 MiG-21s. In 1970, one F-4 Phantom was shot down by a MiG-21. The struggle culminated on 10 May 1972, with VPAF aircraft completing 64 sorties, resulting in 15 air battles. The VPAF claimed seven F-4s shot down, while the U.S. confirmed five F-4s lost. The Phantoms, in turn, managed to destroy two MiG-21s, three MiG-17s, and one MiG-19. On 11 May, two MiG-21s acting as "bait" drew four F-4s toward two more MiG-21s circling at low altitude; the MiGs quickly engaged and shot down two F-4s. On 18 May, Vietnamese aircraft made 26 sorties in eight air engagements, which cost four F-4 Phantoms; the Vietnamese fighters suffered no losses that day.
The Phantom has served with the air forces of many countries, including Australia, Egypt, Germany, the United Kingdom, Greece, Iran, Israel, Japan, Spain, South Korea, and Turkey.
The Royal Australian Air Force (RAAF) leased 24 USAF F-4Es from 1970 to 1973 while waiting for their order for the General Dynamics F-111C to be delivered. They were so well-liked that the RAAF considered retaining the aircraft after the F-111Cs were delivered. They were operated from RAAF Amberley by No. 1 Squadron and No. 6 Squadron.
In 1979, the Egyptian Air Force purchased 35 former USAF F-4Es along with a number of Sparrow, Sidewinder, and Maverick missiles from the U.S. for $594 million as part of the "Peace Pharaoh" program. An additional seven surplus USAF aircraft were purchased in 1988. Three attrition replacements had been received by the end of the 1990s.
The German Air Force ("Luftwaffe") initially ordered the reconnaissance RF-4E in 1969, receiving a total of 88 aircraft from January 1971. In 1982, the initially unarmed RF-4Es were given a secondary ground attack capability; these aircraft were retired in 1994.
In 1973, under the "Peace Rhine" program, the "Luftwaffe" purchased the F-4F (a lightened and simplified version of the F-4E) which was upgraded in the mid-1980s. 24 German F-4F Phantom IIs were operated by the 49th Tactical Fighter Wing of the USAF at Holloman AFB to train "Luftwaffe" crews until December 2004. In 1975, Germany also received 10 F-4Es for training in the U.S. In the late 1990s, these were withdrawn from service after being replaced by F-4Fs. Germany also initiated the Improved Combat Efficiency (ICE) program in 1983. The 110 ICE-upgraded F-4Fs entered service in 1992, and were expected to remain in service until 2012. All the remaining Luftwaffe Phantoms were based at Wittmund with "Jagdgeschwader" 71 (fighter wing 71) in Northern Germany and WTD61 at Manching. Phantoms were deployed to NATO states under the Baltic Air Policing starting in 2005, 2008, 2009, 2011 and 2012. The German Air Force retired its last F-4Fs on 29 June 2013. German F-4Fs flew 279,000 hours from entering service on 31 August 1973 until retirement.
In 1971, the Hellenic Air Force ordered brand new F-4E Phantoms, with deliveries starting in 1974. In the early 1990s, the Hellenic AF acquired surplus RF-4Es and F-4Es from the "Luftwaffe" and U.S. ANG.
Following the success of the German ICE program, on 11 August 1997, a contract was signed between DASA of Germany and Hellenic Aerospace Industry for the upgrade of 39 aircraft to the very similar "Peace Icarus 2000" standard. The Hellenic AF operated 34 upgraded "F-4E-PI2000" (338 and 339 Squadrons) and 12 RF-4E aircraft (348 Squadron) as of September 2013.
On 5 May 2017, the Hellenic Air Force officially retired the RF-4E Phantom II during a public ceremony.
In the 1960s and 1970s when the U.S. and Iran were on friendly terms, the U.S. sold 225 F-4D, F-4E, and RF-4E Phantoms to Iran. The Imperial Iranian Air Force saw at least one engagement, resulting in a loss, after an RF-4C was rammed by a Soviet MiG-21 during Project Dark Gene, an ELINT operation during the Cold War.
The Islamic Republic of Iran Air Force Phantoms saw heavy action in the Iran–Iraq War in the 1980s and are kept operational by overhaul and servicing from Iran's aerospace industry. Notable operations of Iranian F-4s during the war included Operation Scorch Sword, an attack by two F-4s against the Iraqi Osirak nuclear reactor site near Baghdad on 30 September 1980, and the attack on H3, a 4 April 1981 strike by eight Iranian F-4s against the H-3 complex of air bases in the far west of Iraq, which resulted in many Iraqi aircraft being destroyed or damaged for no Iranian losses.
On 5 June 1984, two Saudi Arabian fighter pilots shot down two Iranian F-4 fighters. The Royal Saudi Air Force pilots were flying American-built F-15s and fired air-to-air missiles to bring down the Iranian planes. The Saudi fighter pilots had KC-135 aerial tanker planes and Boeing E-3 Sentry AWACS surveillance planes assist in the encounter. The aerial fight occurred in Saudi airspace over the Persian Gulf near the Saudi island Al Arabiyah, about 60 miles northeast of Jubail.
Iranian F-4s were in use as of late 2014; the aircraft reportedly conducted air strikes on ISIS targets in the eastern Iraqi province of Diyala.
The Israeli Air Force was the largest foreign operator of the Phantom, flying both newly built and ex-USAF aircraft, as well as several one-off special reconnaissance variants. The first F-4Es, nicknamed "Kurnass" (Sledgehammer), and RF-4Es, nicknamed "Orev" (Raven), were delivered in 1969 under the "Peace Echo I" program. Additional Phantoms arrived during the 1970s under the "Peace Echo II" through "Peace Echo V" and "Nickel Grass" programs. Israeli Phantoms saw extensive combat during Arab–Israeli conflicts, first seeing action during the War of Attrition. In the 1980s, Israel began the "Kurnass 2000" modernization program which significantly updated avionics. The last Israeli F-4s were retired in 2004.
From 1968, the Japan Air Self-Defense Force (JASDF) purchased a total of 140 F-4EJ Phantoms without aerial refueling, AGM-12 Bullpup missile system, nuclear control system or ground attack capabilities. Mitsubishi built 138 under license in Japan and 14 unarmed reconnaissance RF-4Es were imported. One of the aircraft (17-8440) was the very last of the 5,195 F-4 Phantoms to be produced. It was manufactured by Mitsubishi Heavy Industries on 21 May 1981. "The Final Phantom" served with 306th Tactical Fighter Squadron and later transferred to the 301st Tactical Fighter Squadron.
Of these, 96 F-4EJs were modified to the F-4EJ Kai standard. 15 F-4EJs and F-4EJ Kais were converted to reconnaissance aircraft, designated RF-4EJ. Japan had a fleet of 90 F-4s in service in 2007. After studying several replacement fighters, the F-35 Lightning II was chosen in 2011. The 302nd Tactical Fighter Squadron became the first JASDF F-35 squadron at Misawa Air Base when it converted from the F-4EJ Kai on 29 March 2019. The JASDF's sole aerial reconnaissance unit, the 501st Tactical Reconnaissance Squadron, retired their RF-4Es and RF-4EJs on 9 March 2020, and the unit itself dissolved on 26 March. The 301st Tactical Fighter Squadron is now the sole user of the F-4EJ in the Air Defense Command, with their retirement scheduled in 2021 along with the unit's transition to the F-35A. Some F-4s are also operated by the Air Development and Test Wing in Gifu Prefecture.
The Republic of Korea Air Force purchased its first batch of secondhand USAF F-4D Phantoms in 1968 under the "Peace Spectator" program. The F-4Ds continued to be delivered until 1988. The "Peace Pheasant II" program also provided new-built and former USAF F-4Es.
The Spanish Air Force acquired its first batch of ex-USAF F-4C Phantoms in 1971 under the "Peace Alfa" program. Designated C.12, the aircraft were retired in 1989. At the same time, the air arm received a number of ex-USAF RF-4Cs, designated CR.12. In 1995–1996, these aircraft received extensive avionics upgrades. Spain retired its RF-4s in 2002.
The Turkish Air Force (TAF) received 40 F-4Es in 1974, with a further 32 F-4Es and 8 RF-4Es in 1977–78 under the "Peace Diamond III" program, followed by 40 ex-USAF aircraft in "Peace Diamond IV" in 1987, and a further 40 ex-U.S. Air National Guard aircraft in 1991. A further 32 RF-4Es were transferred to Turkey after being retired by the Luftwaffe between 1992 and 1994. In 1995, Israel Aerospace Industries (IAI) implemented an upgrade similar to Kurnass 2000 on 54 Turkish F-4Es, which were dubbed the F-4E 2020 Terminator. Turkish F-4s and more modern F-16s have been used to strike Kurdish PKK bases in ongoing military operations in Northern Iraq. On 22 June 2012, a Turkish RF-4E was shot down by Syrian air defenses while flying a reconnaissance flight near the Turkish-Syrian border. Turkey has stated the reconnaissance aircraft was in international airspace when it was shot down, while Syrian authorities stated it was inside Syrian airspace. Turkish F-4s remained in use as of 2015.
On 24 February 2015, two RF-4Es crashed in the Malatya region in the southeast of Turkey, under as yet unknown circumstances, killing all four crew members. On 5 March 2015, an F-4E 2020 crashed in central Anatolia, killing both crew. After these accidents, the TAF withdrew the RF-4E from active service. Turkey reportedly used F-4 jets to attack PKK separatists and the ISIS capital on 19 September 2015. The Turkish Air Force has reportedly used the F-4E 2020s on heavy bombardment missions into Iraq during the more recent third phase of the PKK conflict, on 15 November 2015, 12 January 2016, and 12 March 2016.
The United Kingdom bought versions based on the U.S. Navy's F-4J for use with the Royal Air Force and the Royal Navy's Fleet Air Arm. The UK was the only country outside the United States to operate the Phantom at sea, launching them from HMS "Ark Royal". The main differences were the use of the British Rolls-Royce Spey engines and of British-made avionics. The RN and RAF versions were given the designation F-4K and F-4M respectively, and entered service with the British military aircraft designations Phantom FG.1 (fighter/ground attack) and Phantom FGR.2 (fighter/ground attack/reconnaissance).
Initially, the FGR.2 was used in the ground attack and reconnaissance role, primarily with RAF Germany, while 43 Squadron was formed in the air defence role using the FG.1s that had been intended for the Fleet Air Arm for use aboard HMS "Eagle". The superiority of the Phantom over the English Electric Lightning in terms of both range and weapon load, combined with the successful introduction of the SEPECAT Jaguar, meant that, during the mid-1970s, most of the ground attack Phantoms in Germany were redeployed to the UK to replace air defence Lightning squadrons. A second RAF squadron, 111 Squadron, was formed on the FG.1 in 1979 after the disbandment of 892 NAS.
In 1982, during the Falklands War, three Phantom FGR.2s of No. 29 Squadron were on active Quick Reaction Alert duty on Ascension Island to protect the base from air attack. After the Falklands War, 15 upgraded ex-USN F-4Js, known as the F-4J(UK), entered RAF service to compensate for the one interceptor squadron redeployed to the Falklands.
Around 15 RAF squadrons received various marks of Phantom, many of them based in Germany. The first to be equipped was No. 228 Operational Conversion Unit at RAF Coningsby in August 1968. One noteworthy operator was No. 43 Squadron, where Phantom FG.1s remained the squadron equipment for 20 years, arriving in September 1969 and departing in July 1989. During this period the squadron was based at Leuchars.
The interceptor Phantoms were replaced by the Panavia Tornado F3 from the late 1980s onwards, and the last British Phantoms were retired in October 1992 when No. 74 Squadron was disbanded.
Sandia National Laboratories used an F-4 mounted on a "rocket sled" in a crash test to see the results of an aircraft hitting a reinforced concrete structure, such as a nuclear power plant.
One aircraft, an F-4D (civilian registration N749CF), is operated by the Massachusetts-based non-profit organization Collings Foundation as a "living history" exhibit. Funds to maintain and operate the aircraft, which is based in Houston, Texas, are raised through donations/sponsorships from public and commercial parties.
After finding the Lockheed F-104 Starfighter inadequate, NASA used the F-4 to photograph and film Titan II missiles after launch from Cape Canaveral during the 1960s. Retired U.S. Air Force colonel Jack Petry described how he put his F-4 into a Mach 1.2 dive synchronized to the launch countdown, then "walked the (rocket's) contrail". Petry's Phantom stayed with the Titan for 90 seconds, reaching 68,000 feet, then broke away as the missile continued into space.
NASA's Dryden Flight Research Center acquired an F-4A on 3 December 1965. It made 55 flights in support of short programs, chase on X-15 missions and lifting body flights. The F-4 also supported a biomedical monitoring program involving 1,000 flights by NASA Flight Research Center aerospace research pilots and students of the USAF Aerospace Research Pilot School flying high-performance aircraft. The pilots were instrumented to record accurate and reliable data of electrocardiogram, respiration rate, and normal acceleration. In 1967, the Phantom supported a brief military-inspired program to determine whether an airplane's sonic boom could be directed and whether it could be used as a weapon of sorts, or at least an annoyance. NASA also flew an F-4C in a spanwise blowing study from 1983 to 1985, after which it was returned.
The Phantom gathered a number of nicknames during its career. Some of these names included "Snoopy", "Rhino", "Double Ugly", "Old Smokey", the "Flying Anvil", "Flying Footlocker", "Flying Brick", "Lead Sled", the "Big Iron Sled", and the "St. Louis Slugger". In recognition of its record of downing large numbers of Soviet-built MiGs, it was called the "World's Leading Distributor of MiG Parts". As a reflection of excellent performance in spite of its bulk, the F-4 was dubbed "the triumph of thrust over aerodynamics." German "Luftwaffe" crews called their F-4s the "Eisenschwein" ("Iron Pig"), "Fliegender Ziegelstein" ("Flying Brick") and "Luftverteidigungsdiesel" ("Air Defense Diesel").
Imitating the spelling of the aircraft's name, McDonnell issued a series of patches. Pilots became "Phantom Phlyers", backseaters became "Phantom Pherrets", and fans of the F-4 became "Phantom Phanatics", who call it the "Phabulous Phantom". Ground crewmen who worked on the aircraft are known as "Phantom Phixers".
Several active websites are devoted to sharing information on the F-4, and the aircraft is grudgingly admired as brutally effective by those who have flown it. Colonel (Ret.) Chuck DeBellevue reminisces, "The F-4 Phantom was the last plane that looked like it was made to kill somebody. It was a beast. It could go through a flock of birds and kick out barbeque from the back." It had "a reputation of being a clumsy bruiser reliant on brute engine power and obsolete weapons technology."
The aircraft's emblem is a whimsical cartoon ghost called "The Spook", which was created by McDonnell Douglas technical artist, Anthony "Tony" Wong, for shoulder patches. The name "Spook" was coined by the crews of either the 12th Tactical Fighter Wing or the 4453rd Combat Crew Training Wing at MacDill AFB. The figure is ubiquitous, appearing on many items associated with the F-4. The Spook has followed the Phantom around the world adopting local fashions; for example, the British adaptation of the U.S. "Phantom Man" is a Spook that sometimes wears a bowler hat and smokes a pipe.
As a result of its extensive number of operators and the large number of aircraft produced, many F-4 Phantom IIs of numerous variants are on display worldwide.
https://en.wikipedia.org/wiki?curid=11759
McDonnell FH Phantom
The McDonnell FH Phantom was a twinjet fighter aircraft designed and first flown during World War II for the United States Navy. The Phantom was the first purely jet-powered aircraft to land on an American aircraft carrier and the first jet deployed by the United States Marine Corps. Although only 62 FH-1s were built after the end of the war curtailed production, the type helped prove the viability of carrier-based jet fighters. As McDonnell's first successful fighter, it led to the development of the follow-on F2H Banshee, one of the two most important naval jet fighters of the Korean War, and established McDonnell as an important supplier of navy aircraft. When McDonnell chose to bring the name back with the Mach 2–class McDonnell Douglas F-4 Phantom II, it launched what would become the most versatile and widely used western combat aircraft of the Vietnam War era, adopted by the USAF and the US Navy and remaining in use with various countries to the present day.
The FH Phantom was originally designated the FD Phantom, but the designation was changed as the aircraft entered production.
In early 1943, aviation officials in the United States Navy were impressed with McDonnell's audacious XP-67 Bat project. The navy invited McDonnell to cooperate in the development of a shipboard jet fighter, powered by turbojet engines then under development by Westinghouse Electric Corporation. Three prototypes were ordered on 30 August 1943 and the designation XFD-1 was assigned. Under the 1922 United States Navy aircraft designation system, the letter "D" before the dash designated the aircraft's manufacturer. The Douglas Aircraft Company had previously been assigned this letter, but the USN elected to reassign it to McDonnell because Douglas had not provided any fighters for navy service in years.
McDonnell engineers evaluated a number of engine combinations, varying from eight 9.5 in (24 cm) diameter engines down to two engines of 19 inch (48 cm) diameter. The final design used the two 19 in (48 cm) engines after it was found to be the lightest and simplest configuration. The engines were buried in the wing root to keep intake and exhaust ducts short, offering greater aerodynamic efficiency than underwing nacelles, and the engines were angled slightly outwards to protect the fuselage from the hot exhaust blast. Placement of the engines in the middle of the airframe allowed the cockpit with its bubble-style canopy to be placed ahead of the wing, granting the pilot excellent visibility in all directions. This engine location also freed up space under the nose, allowing designers to use tricycle gear, thereby elevating the engine exhaust path and reducing the risk that the hot blast would damage the aircraft carrier deck. The construction methods and aerodynamic design of the Phantom were fairly conventional for the time; the aircraft had unswept wings, a conventional empennage, and an aluminum monocoque structure with flush riveted aluminum skin. Folding wings were used to reduce the width of the aircraft in storage configuration. Provisions for four .50-caliber (12.7 mm) machine guns were made in the nose, while racks for eight 5 in (127 mm) High Velocity Aircraft Rockets could be fitted under the wings, although these were seldom used in service. Adapting a jet to carrier use was a much greater challenge than producing a land-based fighter because of slower landing and takeoff speeds required on a small carrier deck. The Phantom used split flaps on both the folding and fixed wing sections to enhance low-speed landing performance, but no other high-lift devices were used. Provisions were also made for Rocket Assisted Take Off (RATO) bottles to improve takeoff performance.
When the first XFD-1, serial number "48235", was completed in January 1945, only one Westinghouse 19XB-2B engine was available for installation. Ground runs and taxi tests were conducted with the single engine, and such was the confidence in the aircraft that the first flight on 26 January 1945 was made with only the one turbojet engine. During flight tests, the Phantom became the first naval aircraft to exceed 500 mph (434 kn, 805 km/h). With successful completion of tests, a production contract was awarded on 7 March 1945 for 100 FD-1 aircraft. With the end of the war, the Phantom production contract was reduced to 30 aircraft, but was soon increased back to 60.
The first prototype was lost in a fatal crash on 1 November 1945, but the second and final Phantom prototype (serial number "48236") was completed early the next year and became the first purely jet-powered aircraft to operate from an American aircraft carrier, completing four successful takeoffs and landings on 21 July 1946 from USS "Franklin D. Roosevelt" near Norfolk, Virginia. At the time, she was the largest carrier serving with the U.S. Navy, allowing the aircraft to take off without assistance from a catapult. The second prototype crashed on 26 August 1946.
Production Phantoms incorporated a number of design improvements. These included provisions for a flush-fitting centerline drop tank, an improved gunsight, and the addition of speed brakes. Production models used Westinghouse J30-WE-20 engines with 1,600 lbf (7.1 kN) of thrust per engine. The top of the vertical tail had a more square shape than the rounder tail used on the prototypes, and a smaller rudder was used to resolve problems with control surface clearance discovered during test flights. The horizontal tail surfaces were shortened slightly, while the fuselage was stretched by 19 in (48 cm). The amount of framing in the windshield was reduced to enhance pilot visibility.
Halfway through the production run, the navy reassigned the designation letter "D" back to Douglas, with the Phantom being redesignated FH-1. Including the two prototypes, a total of 62 Phantoms were finally produced, with the last FH-1 rolling off the assembly line in May 1948.
Realizing that the production of more powerful jet engines was imminent, McDonnell engineers proposed a more powerful variant of the Phantom while the original aircraft was still under development – a proposal that would lead to the design of the Phantom's replacement, the F2H Banshee. Although the new aircraft was originally envisioned as a modified Phantom, the need for heavier armament, greater internal fuel capacity, and other improvements eventually led to a substantially heavier and bulkier aircraft that shared few parts with its agile predecessor. Despite this, the two aircraft were similar enough that McDonnell was able to complete its first F2H-1 in August 1948, a mere three months after the last FH-1 had rolled off the assembly line.
The first Phantoms were delivered to USN fighter squadron VF-17A (later redesignated VF-171) in August 1947; the squadron received a full complement of 24 aircraft on 29 May 1948. Beginning in November 1947, Phantoms were delivered to United States Marine Corps squadron VMF-122, making it the first USMC combat squadron to deploy jets. VF-17A became the USN's first fully operational jet carrier squadron when it deployed aboard USS "Saipan" on 5 May 1948.
The Phantom was one of the first jets used by the U.S. military for exhibition flying. Three Phantoms used by the Naval Air Test Center were used by a unique demonstration team called the Gray Angels, whose members consisted entirely of naval aviators holding the rank of rear admiral (Daniel V. Gallery, Apollo Soucek, and Edgar A. Cruise). The team's name was an obvious play on the name of the recently formed U.S. Navy Blue Angels, who were still flying propeller-powered Grumman F8F Bearcats at the time. The "Grays" flew in various air shows during the summer of 1947, but the team was abruptly disbanded after their poorly timed arrival at a September air show in Cleveland, Ohio, nearly caused a head-on low-altitude collision with a large formation of other aircraft; their Phantoms were turned over to test squadron VX-3. The VMF-122 Phantoms were later used for air show demonstrations until they were taken out of service in 1949, with the team being known alternately as the Marine Phantoms or the Flying Leathernecks.
The Phantom's service as a frontline fighter would be short-lived. Its limited range and light armament – notably, its inability to carry bombs – made it best suited for duty as a point-defence interceptor aircraft. However, its speed and rate of climb were only slightly better than existing propeller-powered fighters and fell short of other contemporary jets, such as the Lockheed P-80 Shooting Star, prompting concerns that the Phantom would be outmatched by future enemy jets it might soon face. Moreover, recent experience in World War II had demonstrated the value of naval fighters that could double as fighter-bombers, a capability the Phantom lacked. Finally, the aircraft exhibited some design deficiencies – its navigational avionics were poor, it could not accommodate newly developed ejection seats, and the location of the machine guns in the upper nose caused pilots to be dazzled by muzzle flash.
The F2H Banshee and Grumman F9F Panther, both of which began flight tests around the time of the Phantom's entry into service, better satisfied the navy's desire for a versatile, long-range, high-performance jet. Consequently, the FH-1 saw little weapons training, and was primarily used for carrier qualifications to transition pilots from propeller-powered fighters to jets in preparation for flying the Panther or Banshee. In June 1949, VF-171 (VF-17A) re-equipped with the Banshee, and their Phantoms were turned over to VF-172; this squadron, along with the NATC, VX-3, and VMF-122, turned over their Phantoms to the United States Naval Reserve by late 1949 after receiving F2H-1 Banshees. The FH-1 would see training duty with the USNR until being replaced by the F9F Panther in July 1954; none ever saw combat, having been retired from frontline service prior to the outbreak of the Korean War.
In 1964, Progressive Aero, Incorporated of Fort Lauderdale, Florida purchased three surplus Phantoms, intending to use them to teach civilians how to fly jets. A pair were stripped of military equipment and restored to flying condition, but the venture was unsuccessful, and the aircraft were soon retired once again.
https://en.wikipedia.org/wiki?curid=11761
Fricative consonant
Fricatives are consonants produced by forcing air through a narrow channel made by placing two articulators close together. These may be the lower lip against the upper teeth, in the case of [f]; the back of the tongue against the soft palate, in the case of German [x] (the final consonant of "Bach"); or the side of the tongue against the molars, in the case of Welsh [ɬ] (appearing twice in the name "Llanelli"). This turbulent airflow is called frication.
A particular subset of fricatives are the sibilants. When forming a sibilant, one still is forcing air through a narrow channel, but in addition, the tongue is curled lengthwise to direct the air over the edge of the teeth. English [s], [z], [ʃ], and [ʒ] are examples of sibilants.
The usage of two other terms is less standardized: "Spirant" is an older term for fricatives used by some American and European phoneticians and phonologists. "Strident" could mean just "sibilant", but some authors include also labiodental and uvular fricatives in the class.
All sibilants are coronal, but may be dental, alveolar, postalveolar, or palatal (retroflex) within that range. However, at the postalveolar place of articulation, the tongue may take several shapes: domed, laminal, or apical, and each of these is given a separate symbol and a separate name. Prototypical retroflexes are subapical and palatal, but they are usually written with the same symbol as the apical postalveolars. The alveolars and dentals may also be either apical or laminal, but this difference is indicated with diacritics rather than with separate symbols.
The IPA also has letters for epiglottal fricatives, with allophonic trilling, but these might be better analyzed as pharyngeal trills.
The lateral fricative occurs as the "ll" of Welsh, as in "Lloyd", "Llewelyn", and "Machynlleth" (, a town), as the unvoiced 'hl' and voiced 'dl' or 'dhl' in the several languages of Southern Africa (such as Xhosa and Zulu), and in Mongolian.
No language distinguishes voiced fricatives from approximants at these places, so the same symbol is used for both. For the pharyngeal, approximants are more numerous than fricatives. A fricative realization may be specified by adding the uptack to the letters, . Likewise, the downtack may be added to specify an approximant realization, .
In many languages, such as English, the glottal "fricatives" are unaccompanied phonation states of the glottis, without any accompanying manner, fricative or otherwise. However, in languages such as Arabic, they are true fricatives.
In addition, is usually called a "voiceless labial-velar fricative", but it is actually an approximant. True doubly articulated fricatives may not occur in any language; but see voiceless palatal-velar fricative for a putative (and rather controversial) example.
Fricatives are very commonly voiced, though cross-linguistically voiced fricatives are not nearly as common as tenuis ("plain") fricatives. Other phonations are common in languages that have those phonations in their stop consonants. However, phonemically aspirated fricatives are rare. contrasts with in Korean; aspirated fricatives are also found in a few Sino-Tibetan languages, in some Oto-Manguean languages, in the Siouan language Ofo ( and ), and in the (central?) Chumash languages ( and ). The record may be Cone Tibetan, which has four contrastive aspirated fricatives: , , and .
Phonemically nasalized fricatives are rare. Some South Arabian languages have , Umbundu has , and Kwangali and Souletin Basque have . In Coatzospan Mixtec, appear allophonically before a nasal vowel, and in Igbo nasality is a feature of the syllable; when occur in nasal syllables they are themselves nasalized.
Until its extinction, Ubykh may have been the language with the most fricatives (29 not including ), some of which did not have dedicated symbols or diacritics in the IPA. This number actually outstrips the number of all consonants in English (which has 24 consonants). By contrast, approximately 8.7% of the world's languages have no phonemic fricatives at all. This is a typical feature of Australian Aboriginal languages, where the few fricatives that exist result from changes to plosives or approximants, but also occurs in some indigenous languages of New Guinea and South America that have especially small numbers of consonants. However, whereas is "entirely" unknown in indigenous Australian languages, most of the other languages without true fricatives do have in their consonant inventory.
Voicing contrasts in fricatives are largely confined to Europe, Africa, and Western Asia. Languages of South and East Asia, such as Mandarin Chinese, Korean, the Dravidian and Austronesian languages, typically do not have such voiced fricatives as and , which are familiar to many European speakers. These voiced fricatives are also relatively rare in indigenous languages of the Americas. Overall, voicing contrasts in fricatives are much rarer than in plosives, being found only in about a third of the world's languages as compared to 60 percent for plosive voicing contrasts.
About 15 percent of the world's languages, however, have "unpaired voiced fricatives", i.e. a voiced fricative without a voiceless counterpart. Two-thirds of these, or 10 percent of all languages, have unpaired voiced fricatives but no voicing contrast between any fricative pair.
This phenomenon occurs because voiced fricatives have developed from lenition of plosives or fortition of approximants. This phenomenon of unpaired voiced fricatives is scattered throughout the world, but is confined to nonsibilant fricatives with the exception of a couple of languages that have but lack . (Relatedly, several languages have the voiced affricate but lack , and vice versa.) The fricatives that occur most often without a voiceless counterpart are – in order of ratio of unpaired occurrences to total occurrences – , , , and .
Fricatives appear in waveforms as random noise caused by the turbulent airflow, upon which a periodic pattern is overlaid if voiced. Fricatives produced in the front of the mouth tend to have energy concentration at higher frequencies than ones produced in the back. The centre of gravity, the average frequency in a spectrum weighted by the amplitude, may be used to determine the place of articulation of a fricative relative to that of another.
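As an illustrative sketch (not from the article), the centre of gravity described above can be computed from a signal's discrete magnitude spectrum. The function name and the toy signals are assumptions for demonstration only:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted average frequency (centre of gravity) of a spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))                  # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Toy stand-ins for frication noise: a front fricative like [s] concentrates
# energy higher in the spectrum than a back one like [ʃ].
fs = 16000
t = np.arange(fs) / fs
s_like = np.sin(2 * np.pi * 7000 * t)    # energy near 7 kHz
sh_like = np.sin(2 * np.pi * 3000 * t)   # energy near 3 kHz
print(spectral_centroid(s_like, fs) > spectral_centroid(sh_like, fs))  # True
```

A real comparison would use windowed stretches of recorded frication noise rather than pure tones; the point here is only that the centroid orders places of articulation from front (higher) to back (lower).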
https://en.wikipedia.org/wiki?curid=11762
Frost
Frost is a thin layer of ice on a solid surface. It forms when water vapor from an above-freezing atmosphere comes into contact with a solid surface whose temperature is below freezing, causing a phase change from water vapor (a gas) to ice (a solid). In temperate climates, it most commonly appears on surfaces near the ground as fragile white crystals; in cold climates, it occurs in a greater variety of forms. The propagation of crystal formation occurs by the process of nucleation.
The ice crystals of frost form as the result of fractal process development. The depth of frost crystals varies depending on the amount of time they have been accumulating, and the concentration of the water vapor (humidity). Frost crystals may be invisible (black), clear (translucent), or white; if a mass of frost crystals scatters light in all directions, the coating of frost appears white.
Types of frost include crystalline frost (hoar frost or radiation frost) from deposition of water vapor from air of low humidity, white frost in humid conditions, window frost on glass surfaces, advection frost from cold wind over cold surfaces, black frost without visible ice at low temperatures and very low humidity, and rime under supercooled wet conditions.
Plants that have evolved in warmer climates suffer damage when the temperature falls low enough to freeze the water in the cells that make up the plant tissue. The tissue damage resulting from this process is known as "frost damage". Farmers in those regions where frost damage is known to affect their crops often invest in substantial means to protect their crops from such damage.
If a solid surface is chilled below the dew point of the surrounding humid air and the surface itself is colder than freezing, ice will form on it. If the water deposits as a liquid that then freezes, it forms a coating that may look glassy, opaque, or crystalline, depending on its type. Depending on context, that process also may be called atmospheric icing. The ice it produces differs in some ways from crystalline frost, which consists of spicules of ice that typically project from the solid surface on which they grow.
The main difference between the ice coatings and frost spicules arises from the fact that the crystalline spicules grow directly from desublimation of water vapour from air, and desublimation is not a factor in icing of freezing surfaces. For desublimation to proceed the surface must be below the frost point of the air, meaning that it is sufficiently cold for ice to form without passing through the liquid phase. The air must be humid, but not sufficiently humid to permit the condensation of liquid water, or icing will result instead of desublimation. The size of the crystals depends largely on the temperature, the amount of water vapor available, and how long they have been growing undisturbed.
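As a rough numerical sketch of the frost-point condition above (not part of the article), a Magnus-type approximation over ice can estimate the frost point from air temperature and relative humidity. The constants below are assumed values from a common Sonntag-style fit and should be treated as illustrative:

```python
import math

# Magnus-type constants over ice (assumed values, Sonntag-style fit)
A_ICE = 22.46
B_ICE = 272.62  # degrees Celsius

def frost_point_c(temp_c, rel_humidity_pct):
    """Approximate frost point (deg C) of sub-freezing air.

    A surface colder than this temperature can grow frost crystals by
    desublimation, as described in the text above.
    """
    gamma = math.log(rel_humidity_pct / 100.0) + A_ICE * temp_c / (B_ICE + temp_c)
    return B_ICE * gamma / (A_ICE - gamma)

# Saturated air: the frost point equals the air temperature.
print(round(frost_point_c(-5.0, 100.0), 2))   # -5.0
# Drier air: the frost point falls below the air temperature,
# so the surface must be correspondingly colder for frost to form.
print(frost_point_c(-5.0, 70.0) < -5.0)       # True
```

The second case illustrates why frost can fail to appear on a cold night when the air is dry: the surface may be below freezing yet still warmer than the frost point.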
As a rule, except in conditions where supercooled droplets are present in the air, frost will form only if the deposition surface is colder than the surrounding air. For instance frost may be observed around cracks in cold wooden sidewalks when humid air escapes from the warmer ground beneath. Other objects on which frost commonly forms are those with low specific heat or high thermal emissivity, such as blackened metals; hence the accumulation of frost on the heads of rusty nails.
The apparently erratic occurrence of frost in adjacent localities is due partly to differences of elevation, the lower areas becoming colder on calm nights. Where static air settles above an area of ground in the absence of wind, the absorptivity and specific heat of the ground strongly influence the temperature that the trapped air attains.
Hoar frost, also hoarfrost, radiation frost, or pruina, refers to white ice crystals deposited on the ground or loosely attached to exposed objects, such as wires or leaves. They form on cold, clear nights when conditions are such that heat radiates out to the open air faster than it can be replaced from nearby sources, such as wind or warm objects. Under suitable circumstances, objects cool to below the frost point of the surrounding air, well below the freezing point of water. Such freezing may be promoted by effects such as flood frost or frost pocket. These occur when ground-level radiation losses cool air until it flows downhill and accumulates in pockets of very cold air in valleys and hollows. Hoar frost may freeze in such low-lying cold air even when the air temperature a few feet above ground is well above freezing.
The word "hoar" comes from an Old English adjective that means "showing signs of old age". In this context, it refers to the frost that makes trees and bushes look like white hair.
Hoar frost may have different names depending on where it forms:
When surface hoar covers sloping snowbanks, the layer of frost crystals may create an avalanche risk; when heavy layers of new snow cover the frosty surface, furry crystals standing out from the old snow hold off the falling flakes, forming a layer of voids that prevent the new snow layers from bonding strongly to the old snow beneath. Ideal conditions for hoarfrost to form on snow are cold clear nights, with very light, cold air currents conveying humidity at the right rate for growth of frost crystals. Wind that is too strong or warm destroys the furry crystals, and thereby may permit a stronger bond between the old and new snow layers. However, if the winds are strong enough and cold enough to lay the crystals flat and dry, carpeting the snow with cold, loose crystals without removing or destroying them or letting them warm up and become sticky, then the frost interface between the snow layers may still present an avalanche danger, because the texture of the frost crystals differs from the snow texture and the dry crystals will not stick to fresh snow. Such conditions still prevent a strong bond between the snow layers.
In very low temperatures where fluffy surface hoar crystals form without subsequently being covered with snow, strong winds may break them off, forming a dust of ice particles and blowing them over the surface. The ice dust then may form yukimarimo, as has been observed in parts of Antarctica, in a process similar to the formation of dust bunnies and similar structures.
Hoar frost and white frost also occur in man-made environments such as freezers or industrial cold-storage facilities. If such cold spaces, or the pipes serving them, are not well insulated and are exposed to ambient humidity, the moisture will freeze onto them at a rate that depends on the freezer temperature. The frost may coat pipes thickly, partly insulating them, but such inefficient insulation still is a source of heat loss.
Advection frost (also called wind frost) refers to tiny ice spikes that form when very cold wind blows over tree branches, poles, and other surfaces. It looks like rime on the edges of flowers and leaves and usually forms against the direction of the wind. It can occur at any hour, day or night.
Window frost (also called fern frost or ice flowers) forms when a glass pane is exposed to very cold air on the outside and warmer, moderately moist air on the inside. If the pane is not a good insulator (for example, if it is a single pane window), water vapour condenses on the glass forming frost patterns. With very low temperatures outside, frost can appear on the bottom of the window even with double pane energy efficient windows because the air convection between two panes of glass ensures that the bottom part of the glazing unit is colder than the top part. On unheated motor vehicles the frost will usually form on the outside surface of the glass first. The glass surface influences the shape of crystals, so imperfections, scratches, or dust can modify the way ice nucleates. The patterns in window frost form a fractal with a fractal dimension greater than one but less than two. This is a consequence of the nucleation process being constrained to unfold in two dimensions, unlike a snowflake which is shaped by a similar process but forms in three dimensions and has a fractal dimension greater than two.
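As a hedged illustration of the fractal-dimension claim (not from the article), a box-counting estimate applied to a two-dimensional pattern yields a value between 1 and 2 for branching, frost-like growth. The function below is a generic sketch with an illustrative name:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2-D boolean pattern by box counting."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        m = n - n % s                      # crop to a multiple of the box size
        boxed = mask[:m, :m].reshape(m // s, s, m // s, s)
        # count boxes of side s that contain at least one occupied pixel
        counts.append(np.count_nonzero(boxed.any(axis=(1, 3))))
    # slope of log(count) against log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square scales like dimension 2, a line like 1.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[0, :] = True
print(round(box_counting_dimension(square), 2))  # 2.0
print(round(box_counting_dimension(line), 2))    # 1.0
```

A binarized photograph of window frost fed to the same routine would be expected to land strictly between these two extremes, consistent with growth constrained to the plane of the glass.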
If the indoor air is very humid, rather than moderately so, water will first condense in small droplets and then freeze into clear ice.
Similar patterns of freezing may occur on other smooth vertical surfaces, but they seldom are as obvious or spectacular as on clear glass.
White frost is a solid deposition of ice that forms directly from water vapour contained in air.
White frost forms when the relative humidity is above 90% and the temperature is below −8 °C (18 °F). It grows against the wind direction, since air arriving from windward has a higher humidity than leeward air, but the wind must not be strong, or it damages the delicate icy structures as they begin to form. White frost resembles a heavy coating of hoar frost with big, interlocking crystals, usually needle-shaped.
Rime is a type of ice deposition that occurs quickly, often under very humid and windy conditions. Technically speaking, it is not a type of frost, since supercooled water drops are usually involved, in contrast to the formation of hoar frost, in which water vapour desublimates slowly and directly. Ships travelling through Arctic seas may accumulate large quantities of rime on their rigging. Unlike hoar frost, which has a feathery appearance, rime generally has an icy, solid appearance.
Black frost (or "killing frost") is not strictly speaking frost at all: it is the condition seen in crops when the humidity is too low for frost to form, but the temperature falls so low that plant tissues freeze and die, becoming blackened, hence the term. Black frost is often called "killing frost" because white frost tends to be less cold, partly because the latent heat released as the water freezes reduces the temperature drop.
Many plants can be damaged or killed by freezing temperatures or frost. This varies with the type of plant, the tissue exposed, and how low temperatures get: a "light frost" will damage fewer types of plants than a "hard frost".
Plants likely to be damaged even by a light frost include vines—such as beans, grapes, squashes, melons—along with nightshades such as tomatoes, eggplants and peppers. Plants that may tolerate or even benefit from frosts include:
Even plants that tolerate frost may be damaged once temperatures drop still lower. Hardy perennials, such as "Hosta", become dormant after the first frosts and regrow when spring arrives. The entire visible plant may turn completely brown until the spring warmth, or may drop all of its leaves and flowers, leaving only the stem and stalk. Evergreen plants, such as pine trees, withstand frost, although all or most growth stops. Frost crack is a bark defect caused by a combination of low temperatures and heat from the winter sun.
Vegetation is not necessarily damaged when leaf temperatures drop below the freezing point of their cell contents. In the absence of a site nucleating the formation of ice crystals, the leaves remain in a supercooled liquid state, safely reaching temperatures well below freezing. However, once frost forms, the leaf cells may be damaged by sharp ice crystals. Hardening is the process by which a plant becomes tolerant to low temperatures. See also Cryobiology.
Certain bacteria, notably "Pseudomonas syringae", are particularly effective at triggering frost formation, raising the temperature at which ice nucleates on plant surfaces. Bacteria lacking ice nucleation-active proteins (ice-minus bacteria) result in greatly reduced frost damage.
Typical measures to prevent frost or reduce its severity include one or more of:
Such measures need to be applied with discretion, because they may do more harm than good; for example, spraying crops with water can cause damage if the plants become overburdened with ice. An effective low-cost method for small crop farms and plant nurseries exploits the latent heat of freezing: a pulsed irrigation timer delivers water through existing overhead sprinklers at low volume. As the water freezes, it gives off its latent heat, preventing the temperature of the foliage from falling much below zero.
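The physics behind the sprinkler method can be checked with rough numbers. The application rate below is an assumption chosen for illustration, not a recommendation; the latent heat of fusion of water, about 334 kJ per kg, is the key figure.

```python
# Rough energy budget for frost protection by pulsed sprinkling.
LATENT_HEAT_FUSION = 334_000   # J released per kg of water that freezes

rate = 0.5                     # kg of water applied per m² per hour (assumed rate)
heat_per_hour = rate * LATENT_HEAT_FUSION   # J per m² per hour
power = heat_per_hour / 3600                # average W per m²
print(f"{power:.0f} W/m² released while the film of water freezes")  # ≈ 46 W/m²
```

On a clear, calm night the net radiative heat loss from exposed foliage is of a broadly similar order of magnitude, which suggests why even a modest, continuous application of freezing water can hold leaf temperature near 0 °C.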
Frost-free areas are found mainly in the tropics, where they cover almost all land except at higher altitudes, but also in areas with subtropical climates whose winters are tempered by strong oceanic influences. The most poleward frost-free areas are the lower altitudes of the Azores, Île Amsterdam, Île Saint-Paul, and Tristan da Cunha.
The only reliably frost-free areas in the contiguous United States are the Florida Keys and the coastal areas of the Channel Islands of California. The hardiness zones there are 11a and 11b.
Frost is personified in Russian culture as Ded Moroz. Indigenous peoples of Russia such as the Mordvins have their own traditions of frost deities.
English folklore tradition holds that Jack Frost, an elfish creature, is responsible for feathery patterns of frost found on windows on cold mornings.
Franz Schmidt
Franz Schmidt (22 December 1874 – 11 February 1939) was an Austro-Hungarian composer, cellist and pianist.
Schmidt was born in Pozsony (known in German as Pressburg), in the Hungarian part of the Austro-Hungarian Empire (the city is now Bratislava, capital of Slovakia). His father was half Hungarian and his mother entirely Hungarian. He was a Roman Catholic.
His earliest teacher was his mother, Mária Ravasz, an accomplished pianist, who gave him a systematic instruction in the keyboard works of J. S. Bach. He received a foundation in theory from Brother Felizian Moczik, the organist at the Franciscan church in Pressburg. He studied piano briefly with Theodor Leschetizky, with whom he clashed. He moved to Vienna with his family in 1888, and studied at the Vienna Conservatory (composition with Robert Fuchs, cello with Ferdinand Hellmesberger and counterpoint with Anton Bruckner), graduating "with excellence" in 1896.
He obtained a post as cellist with the Vienna Court Opera Orchestra, where he played until 1914, often under Gustav Mahler. Mahler habitually had Schmidt play all the cello solos, even though Friedrich Buxbaum was the principal cellist. Schmidt was also in demand as a chamber musician. Schmidt and Arnold Schoenberg maintained cordial relations despite their vast differences in style. Also a brilliant pianist, in 1914 Schmidt took up a professorship in piano at the Vienna Conservatory, which had been recently renamed Imperial Academy of Music and the Performing Arts. (Apparently, when asked who the greatest living pianist was, Leopold Godowsky replied, "The other one is Franz Schmidt.") In 1925 he became Director of the Academy, and from 1927 to 1931 its Rector.
As teacher of piano, cello and counterpoint and composition at the Academy, Schmidt trained numerous instrumentalists, conductors, and composers who later achieved fame. Among his best-known students were the pianist Friedrich Wührer and Alfred Rosé (son of Arnold Rosé, the founder of the Rosé Quartet, Konzertmeister of the Vienna Philharmonic and brother-in-law of Gustav Mahler). Among the composers were Walter Bricht (his favourite student), Theodor Berger, Marcel Rubin, Alfred Uhl and Ľudovít Rajter. He received many tokens of the high esteem in which he was held, notably the Franz-Josef Order, and an Honorary Doctorate from the University of Vienna.
Schmidt's private life was in stark contrast to the success of his distinguished professional career, and was overshadowed by tragedy. His first wife, Karoline Perssin (c. 1880–1943), was confined in the Vienna mental hospital Am Steinhof in 1919, and three years after his death was murdered under the euthanasia program of the Third Reich. Their daughter Emma Schmidt Holzschuh (1902–1932, married 1929) died unexpectedly after the birth of her first child. Schmidt experienced a spiritual and physical breakdown after this, and achieved an artistic revival and resolution in his Fourth Symphony of 1933 (which he inscribed as "Requiem for my Daughter") and, especially, in his oratorio "The Book with Seven Seals". His second marriage in 1923, to a successful young piano student Margarethe Jirasek (1891–1964), for the first time brought some desperately needed stability into the private life of the artist, who was plagued by many serious health problems.
Schmidt's worsening health forced his retirement from the Academy in early 1937. In the last year of his life Austria was brought into the German Reich by the Anschluss, and Schmidt was feted by the NSDAP authorities as the greatest living composer of the so-called Ostmark. He was given a commission to write a cantata entitled "The German Resurrection", which, after 1945, was taken by many as a reason to brand him as having been tainted by National Socialist sympathy. However, Schmidt left this composition unfinished, and in the summer and autumn of 1938, a few months before his death, set it aside to devote himself to two other commissioned works for the one-armed pianist Paul Wittgenstein: the Quintet in A major for piano left-hand, clarinet, and string trio; and the Toccata in D minor for solo piano.
Schmidt died on 11 February 1939.
As a composer, Schmidt was slow to develop, but his reputation, at least in Austria, saw a steady growth from the late 1890s until his death in 1939. In his music, Schmidt continued to develop the Viennese classic-romantic traditions he inherited from Schubert, Brahms and his own master, Bruckner. He also takes forward the exotic "gypsy" style of Liszt and Brahms. His works are monumental in form and firmly tonal in language, though quite often innovative in their designs and clearly open to some of the new developments in musical syntax initiated by Mahler and Schoenberg. Although Schmidt did not write a lot of chamber music, what he did write, in the opinion of such critics as Wilhelm Altmann, was important and of high quality. Although Schmidt's organ works may resemble others of the era in terms of length, complexity, and difficulty, they are forward-looking in being conceived for the smaller, clearer, classical-style instruments of the "Orgelbewegung", which he advocated. Schmidt worked mainly in large forms, including four symphonies (1899, 1913, 1928 and 1933) and two operas: "Notre Dame" (1904–6) and "Fredigundis" (1916–21). A CD recording of "Notre Dame" has been available for many years, starring Dame Gwyneth Jones and James King.
No fully adequate recording has been made of Schmidt's second and last opera, "Fredigundis". The only release, an "unauthorized" one, appeared in the early 1980s on the Voce label, taken from an Austrian Radio broadcast of a 1979 Vienna performance under the direction of Ernst Märzendorfer. Aside from numerous "royal fanfares" (Fredigundis held the French throne in the sixth century), the score contains some fine examples of Schmidt's transitional style between his earlier and later manner. Schmidt seldom ventured so far from traditional tonality again: his third and final period, the last decade and a half of his life, was generally one of at least partial retrenchment and consolidation, integrating the style of his opulently scored and melodious early compositions (the First Symphony, "Notre Dame") with elements of the overt experimentation seen in "Fredigundis", combined with an economy of utterance born of artistic maturity. The "New Grove" encyclopaedia states that "Fredigundis" was a critical and popular failure, which may be partly attributable to the fact that Fredigundis (Fredegund, the widow of Chilperic I) is presented as a murderous and sadistic feminine monster. Add to this some structural problems with the libretto, and the opera's failure to make headway, despite an admirable and impressive score, becomes comprehensible.
Aside from the mature symphonies (Nos. 2-4), Schmidt's crowning achievement was the oratorio "The Book with Seven Seals" (1935–37), a setting of passages from the Book of Revelation. His choice of subject was prophetic: with hindsight the work appears to foretell, in the most powerful terms, the disasters that were shortly to be visited upon Europe in the Second World War. Here his invention rises to a sustained pitch of genius. A narrative upon the text of the oratorio was provided by the composer.
Schmidt's oratorio stands in the Austro-German tradition stretching back to the time of J. S. Bach and Handel. He was one of relatively few composers to write an oratorio fully on the subject of the Book of Revelation (earlier works include Georg Philipp Telemann: "Der Tag des Gerichts", Schneider: "Das Weltgericht", Louis Spohr: "Die letzten Dinge", Joachim Raff: "Weltende", and Ralph Vaughan Williams: "Sancta Civitas"). Far from glorifying its subject, it is a mystical contemplation, a horrified warning, and a prayer for salvation. The premiere was held in Vienna on 15 June 1938, with the Vienna Symphony Orchestra under Oswald Kabasta: the soloists were Rudolf Gerlach (John), Erika Rokyta, Enid Szantho, Anton Dermota, Josef von Manowarda and Franz Schütz at the organ.
Schmidt is generally, if erroneously, regarded as a conservative composer (such labels rest upon yet-to-be-resolved aesthetic/stylistic arguments), but the rhythmic subtlety and harmonic complexity of much of his music belie this. His music is modern without being modernist, combining a reverence for the great Austro-German lineage of composers with very personal innovations in harmony and orchestration (showing an awareness of the output of composers such as Debussy and Ravel, whose piano music he greatly admired, along with a knowledge of more recent composers in his own German-speaking realm, such as Schoenberg, Berg, Hindemith, etc.). The considerable technical accomplishment of his music ought to compel respect, but he seems to have fallen between two stools: his works are too complex for the conservatively minded, yet too obviously traditional for the avant-garde (they are also notoriously difficult to perform). Since the 1970s his music has enjoyed a modest revival which looks set to continue as it is rediscovered and re-evaluated.
Schmidt's premiere of "The Book with Seven Seals" was made much of by the National Socialists (who had annexed Austria shortly before in the Anschluss), and Schmidt was seen to give the Nazi salute (according to a report by Georg Tintner, who revered Schmidt and whose intent to record his symphonies was never realised). His conductor Oswald Kabasta was apparently an enthusiastic Nazi who, being prohibited from conducting in 1946 during de-nazification, committed suicide. These facts long placed Schmidt's posthumous reputation under a cloud. His lifelong friend and colleague Oskar Adler, who fled the Nazis in 1938, wrote afterwards that Schmidt was never a Nazi and never antisemitic but was extremely naive about politics. Hans Keller gave a similar endorsement. Regarding Schmidt's political naivety, Michael Steinberg, in his book "The Symphony", tells of Schmidt's recommending "Variations on a Hebrew Theme" by his student Israel Brandmann to a musical group associated with the proto-Nazi German National Party. Most of Schmidt's principal musical friends were Jews, and they benefited from his generosity.
Schmidt's last listed work, the cantata "German Resurrection", was composed to a Nazi text. As one of the most famous living Austrian composers, Schmidt was well-known to Hitler and received this commission after the Anschluss. He left it unfinished, to be completed later by Robert Wagner. Already seriously ill, Schmidt worked instead on other compositions such as the Quintet in A major for piano (left hand), clarinet and string trio, intended for Paul Wittgenstein and incorporating a variation set based on a theme by Wittgenstein's old teacher, Josef Labor. His failure to complete the cantata is likely to be a further indication that he was not committed to the Nazi cause; such, at any rate, was the opinion of his friend Oskar Adler.
Finnish Civil War
The Finnish Civil War was a civil war in Finland in 1918, fought for the leadership and control of Finland during the country's transition from a Grand Duchy of the Russian Empire to an independent state. The clashes took place in the context of the national, political, and social turmoil caused by World War I (Eastern Front) in Europe. The war was fought between the "Reds", led by a section of the Social Democratic Party, and the "Whites", led by the conservative-based Senate and the German Imperial Army. The paramilitary Red Guards, composed of industrial and agrarian workers, controlled the cities and industrial centres of southern Finland. The paramilitary White Guards, composed of farmers, along with middle-class and upper-class social strata, controlled rural central and northern Finland.
In the years before the conflict, Finnish society had experienced rapid population growth, industrialisation, pre-urbanisation and the rise of a comprehensive labour movement. The country's political and governmental systems were in an unstable phase of democratisation and modernisation. The socio-economic condition and education of the population had gradually improved, and national thinking and cultural life had awakened.
World War I led to the collapse of the Russian Empire, causing a power vacuum in Finland, and a subsequent struggle for dominance led to militarisation and an escalating crisis between the left-leaning labour movement and the conservatives. The Reds carried out an unsuccessful general offensive in February 1918, supplied with weapons by Soviet Russia. A counteroffensive by the Whites began in March, reinforced by the German Empire's military detachments in April. The decisive engagements were the Battles of Tampere and Vyborg, won by the Whites, and the Battles of Helsinki and Lahti, won by German troops, leading to overall victory for the Whites and the German forces. Political violence became a part of this warfare. Around 12,500 Red prisoners of war died of malnutrition and disease in camps. About 39,000 people, of whom 36,000 were Finns, perished in the conflict.
In the aftermath, the Finns passed from Russian governance to the German sphere of influence with a plan to establish a German-led Finnish monarchy. The scheme was cancelled with the defeat of Germany in World War I and Finland instead emerged as an independent, democratic republic. The Civil War divided the nation for decades. Finnish society was reunited through social compromises based on a long-term culture of moderate politics and religion and the post-war economic recovery.
The main factor behind the Finnish Civil War was a political crisis arising out of World War I. Under the pressures of the Great War, the Russian Empire collapsed, leading to the February and October Revolutions in 1917. This breakdown caused a power vacuum and a subsequent struggle for power in Eastern Europe. Russia's Grand Duchy of Finland (1809–1917) became embroiled in the turmoil. Geopolitically less important than the continental Moscow–Warsaw gateway, Finland, isolated by the Baltic Sea, was a peaceful side front until early 1918. The war between the German Empire and Russia had only indirect effects on the Finns. Since the end of the 19th century, the Grand Duchy had become a vital source of raw materials, industrial products, food and labour for the growing Imperial Russian capital Petrograd (modern Saint Petersburg), and World War I emphasised that role. Strategically, the Finnish territory was the less important northern section of the Estonian–Finnish gateway and a buffer zone to and from Petrograd through the Narva area, the Gulf of Finland and the Karelian Isthmus.
The German Empire saw Eastern Europe—primarily Russia—as a major source of vital products and raw materials, both during World War I and for the future. Her resources overstretched by the two-front war, Germany attempted to divide Russia by providing financial support to revolutionary groups, such as the Bolsheviks and the Socialist Revolutionary Party, and to radical, separatist factions, such as the Finnish national activist movement leaning toward Germanism. Between 30 and 40 million marks were spent on this endeavour. Controlling the Finnish area would allow the Imperial German Army to penetrate Petrograd and the Kola Peninsula, an area rich in raw materials for the mining industry. Finland possessed large ore reserves and a well-developed forest industry.
From 1809 to 1898, a period called "Pax Russica", the peripheral authority of the Finns gradually increased, and Russo-Finnish relations were exceptionally peaceful in comparison with other parts of the Russian Empire. Russia's defeat in the Crimean War in the 1850s led to attempts to speed up the modernisation of the country. This caused more than 50 years of economic, industrial, cultural and educational progress in the Grand Duchy of Finland, including an improvement in the status of the Finnish language. All this encouraged Finnish nationalism and cultural unity through the birth of the Fennoman movement, which bound the Finns to the domestic administration and led to the idea that the Grand Duchy was an increasingly autonomous state of the Russian Empire.
In 1899, the Russian Empire initiated a policy of integration through the Russification of Finland. The strengthened, pan-Slavist central power tried to unite the "Russian Multinational Dynastic Union" as the military and strategic situation of Russia became more perilous due to the rise of Germany and Japan. Finns called the increased military and administrative control "the First Period of Oppression", and for the first time Finnish politicians drew up plans for disengagement from Russia or sovereignty for Finland. In the struggle against integration, activists drawn from sections of the working class and the Swedish-speaking intelligentsia carried out terrorist acts. During World War I and the rise of Germanism, the pro-Swedish Svecomans began their covert collaboration with Imperial Germany and, from 1915 to 1917, a Jäger battalion consisting of 1,900 Finnish volunteers was trained in Germany.
The major reasons for rising political tensions among Finns were the autocratic rule of the Russian czar and the undemocratic class system of the estates of the realm. The latter system originated in the regime of the Swedish Empire that preceded Russian governance and divided the Finnish people economically, socially and politically. Finland's population grew rapidly in the nineteenth century (from 860,000 in 1810 to 3,130,000 in 1917), and a class of agrarian and industrial workers, as well as crofters, emerged over the period. The Industrial Revolution was rapid in Finland, though it started later than in the rest of Western Europe. Industrialisation was financed by the state and some of the social problems associated with the industrial process were diminished by the administration's actions. Among urban workers, socio-economic problems steepened during periods of industrial depression. The position of rural workers worsened after the end of the nineteenth century, as farming became more efficient and market-oriented, and the development of industry was insufficiently vigorous to fully utilise the rapid population growth of the countryside.
The difference between Scandinavian-Finnish (Finno-Ugric peoples) and Russian-Slavic culture affected the nature of Finnish national integration. The upper social strata took the lead and gained domestic authority from the Russian czar in 1809. The estates planned to build an increasingly autonomous Finnish state, led by the elite and the intelligentsia. The Fennoman movement aimed to include the common people in a non-political role; the labour movement, youth associations and the temperance movement were initially led "from above".
Between 1870 and 1916 industrialisation gradually improved social conditions and the self-confidence of workers, but while the standard of living of the common people rose in absolute terms, the rift between rich and poor deepened markedly. The commoners' rising awareness of socio-economic and political questions interacted with the ideas of socialism, social liberalism and nationalism. The workers' initiatives and the corresponding responses of the dominant authorities intensified social conflict in Finland. The Finnish labour movement, which emerged at the end of the nineteenth century from temperance, religious movements and Fennomania, had a Finnish nationalist, working-class character. From 1899 to 1906, the movement became conclusively independent, shedding the paternalistic thinking of the Fennoman estates, and it was represented by the Finnish Social Democratic Party, established in 1899. Workers' activism was directed both toward opposing Russification and toward developing a domestic policy that tackled social problems and responded to the demand for democracy. This was a reaction to the domestic dispute, ongoing since the 1880s, between the Finnish nobility-bourgeoisie and the labour movement concerning voting rights for the common people.
Despite their obligations as obedient, peaceful and non-political inhabitants of the Grand Duchy (who had, only a few decades earlier, accepted the class system as the natural order of their life), the commoners began to demand their civil rights and citizenship in Finnish society. The power struggle between the Finnish estates and the Russian administration gave a concrete role model and free space for the labour movement. On the other side, due to an at-least century-long tradition and experience of administrative authority, the Finnish elite saw itself as the inherent natural leader of the nation. The political struggle for democracy was solved outside Finland, in international politics: the Russian Empire's failed 1904–1905 war against Japan led to the 1905 Revolution in Russia and to a general strike in Finland. In an attempt to quell the general unrest, the system of estates was abolished in the Parliamentary Reform of 1906. The general strike increased support for the social democrats substantially. The party encompassed a higher proportion of the population than any other socialist movement in the world.
The Reform of 1906 was a giant leap towards the political and social liberalisation of the common Finnish people, because the Russian House of Romanov had been the most autocratic and conservative ruler in Europe. The Finns adopted a unicameral parliamentary system, the Parliament of Finland, with universal suffrage. The number of voters increased from 126,000 to 1,273,000, including female citizens. The reform led to the social democrats obtaining about fifty percent of the popular vote, but the Czar regained his authority after the crisis of 1905. Subsequently, during the more severe programme of Russification, called "the Second Period of Oppression" by the Finns, the Czar neutralised the power of the Finnish Parliament between 1908 and 1917. He dissolved the assembly, ordered parliamentary elections almost annually, and determined the composition of the Finnish Senate, which did not correlate with the Parliament.
The capacity of the Finnish Parliament to solve socio-economic problems was stymied by confrontations between the largely uneducated commoners and the former estates. Another conflict festered as employers denied collective bargaining and the right of the labour unions to represent workers. The parliamentary process disappointed the labour movement, but as dominance in the Parliament and legislation was the workers' most likely way to obtain a more balanced society, they identified themselves with the state. Overall domestic politics led to a contest for leadership of the Finnish state during the ten years before the collapse of the Russian Empire.
The Second Period of Russification was halted on 15 March 1917 by the February Revolution, which removed the czar, Nicholas II. The collapse of Russia was caused by military defeats, war-weariness against the duration and hardships of the Great War, and the collision between the most conservative regime in Europe and a Russian people desiring modernisation. The Czar's power was transferred to the State Duma (Russian Parliament) and the right-wing Provisional Government, but this new authority was challenged by the Petrograd Soviet (city council), leading to dual power in the country.
The autonomous status of 1809–1899 was returned to the Finns by the March 1917 manifesto of the Russian Provisional Government. For the first time in history, "de facto" political power existed in the Parliament of Finland. The political left, consisting mainly of social democrats, covered a wide spectrum from moderate to revolutionary socialists. The political right was even more diverse, ranging from social liberals and moderate conservatives to rightist conservative elements. The four main parties were:
During 1917, a power struggle and social disintegration interacted. The collapse of Russia induced a chain reaction of disintegration, starting from the government, military and economy, and spreading to all fields of society, such as local administration, workplaces and to individual citizens. The social democrats wanted to retain the civil rights already achieved and to increase the socialists' power over society. The conservatives feared the loss of their long-held socio-economic dominance. Both factions collaborated with their equivalents in Russia, deepening the split in the nation.
The Social Democratic Party gained an absolute majority in the parliamentary elections of 1916. A new Senate was formed in March 1917 by Oskari Tokoi, but it did not reflect the socialists' large parliamentary majority: it comprised six social democrats and six non-socialists. In theory, the Senate consisted of a broad national coalition, but in practice (with the main political groups unwilling to compromise and top politicians remaining outside of it), it proved unable to solve any major Finnish problem. After the February Revolution, political authority descended to the street level: mass meetings, strike organisations and worker-soldier councils on the left and to active organisations of employers on the right, all serving to undermine the authority of the state.
The February Revolution halted the Finnish economic boom caused by the Russian war-economy. The collapse in business led to unemployment and high inflation, but the employed workers gained an opportunity to resolve workplace problems. The commoners' call for the eight-hour working day, better working conditions and higher wages led to demonstrations and large-scale strikes in industry and agriculture.
While the Finns had specialised in milk and butter production, the bulk of the food supply for the country depended on cereals produced in southern Russia. The cessation of cereal imports from disintegrating Russia led to food shortages in Finland. The Senate responded by introducing rationing and price controls. The farmers resisted the state control and thus a black market, accompanied by sharply rising food prices, formed. As a consequence, export to the free market of the Petrograd area increased. Food supply, prices and, in the end, the fear of starvation became emotional political issues between farmers and urban workers, especially those who were unemployed. Common people, their fears exploited by politicians and an incendiary, polarised political media, took to the streets. Despite the food shortages, no actual large-scale starvation hit southern Finland before the civil war and the food market remained a secondary stimulator in the power struggle of the Finnish state.
The passing of the Tokoi Senate bill called the "Law of Supreme Power" (more commonly known as "valtalaki") in July 1917 triggered one of the key crises in the power struggle between the social democrats and the conservatives. The fall of the Russian Empire opened the question of who would hold sovereign political authority in the former Grand Duchy. After decades of political disappointment, the February Revolution offered the Finnish social democrats an opportunity to govern; they held the absolute majority in Parliament. The conservatives were alarmed by the continuous increase of the socialists' influence since 1899, which reached a climax in 1917.
The "Law of Supreme Power" incorporated a plan by the socialists to substantially increase the authority of Parliament, as a reaction to the non-parliamentary and conservative leadership of the Finnish Senate between 1906 and 1916. The bill furthered Finnish autonomy in domestic affairs: the Russian Provisional Government was only allowed the right to control Finnish foreign and military policies. The Act was adopted with the support of the Social Democratic Party, the Agrarian League, part of the Young Finnish Party and some activists eager for Finnish sovereignty. The conservatives opposed the bill and some of the most right-wing representatives resigned from Parliament.
In Petrograd, the social democrats' plan had the backing of the Bolsheviks. They had been plotting a revolt against the Provisional Government since April 1917, and pro-Soviet demonstrations during the July Days brought matters to a head. The Helsinki Soviet and the Regional Committee of the Finnish Soviets, led by the Bolshevik Ivar Smilga, both pledged to defend the Finnish Parliament, were it threatened with attack. However, the Provisional Government still had sufficient support in the Russian army to survive and as the street movement waned, Vladimir Lenin fled to Karelia. In the aftermath of these events, the "Law of Supreme Power" was overruled and the social democrats eventually backed down; more Russian troops were sent to Finland and, with the co-operation and insistence of the Finnish conservatives, Parliament was dissolved and new elections announced.
In the October 1917 elections, the social democrats lost their absolute majority, which radicalised the labour movement and decreased support for moderate politics. The crisis of July 1917 did not bring about the Red Revolution of January 1918 on its own, but together with political developments based on the commoners' interpretation of the ideas of Fennomania and socialism, the events favoured a Finnish revolution. In order to win power, the socialists had to overcome Parliament.
The February Revolution resulted in a loss of institutional authority in Finland and the dissolution of the police force, creating fear and uncertainty. In response, both the right and left assembled their own security groups, which were initially local and largely unarmed. By late 1917, following the dissolution of Parliament, in the absence of a strong government and national armed forces, the security groups began assuming a broader and more paramilitary character. The Civil Guards and the later White Guards were organised by local men of influence: conservative academics, industrialists, major landowners, and activists. The Workers' Order Guards and the Red Guards were recruited through the local social democratic party sections and from the labour unions.
The Bolsheviks' and Vladimir Lenin's October Revolution of 7 November 1917 transferred political power in Petrograd to the radical, left-wing socialists. The German government's decision to arrange safe-conduct for Lenin and his comrades from exile in Switzerland to Petrograd in April 1917 had paid off. An armistice between Germany and the Bolshevik regime came into force on 6 December and peace negotiations began on 22 December 1917 at Brest-Litovsk.
November 1917 became another watershed in the 1917–1918 rivalry for the leadership of Finland. After the dissolution of the Finnish Parliament, polarisation between the social democrats and the conservatives increased markedly and the period witnessed the appearance of political violence. An agricultural worker was shot during a local strike on 9 August 1917 at Ypäjä and a Civil Guard member was killed in a local political crisis at Malmi on 24 September. The October Revolution disrupted the informal truce between the Finnish non-socialists and the Russian Provisional Government. After political wrangling over how to react to the revolt, the majority of the politicians accepted a compromise proposal by Santeri Alkio, the leader of the Agrarian League. Parliament seized the sovereign power in Finland on 15 November 1917 based on the socialists' "Law of Supreme Power" and ratified the proposals for an eight-hour working day and universal suffrage in local elections that the socialists had made in July 1917.
The purely non-socialist, conservative-led government of Pehr Evind Svinhufvud was appointed on 27 November. This nomination was both a long-term aim of the conservatives and a response to the challenges of the labour movement during November 1917. Svinhufvud's main aspirations were to separate Finland from Russia, to strengthen the Civil Guards, and to return a part of Parliament's new authority to the Senate. There were 149 Civil Guards on 31 August 1917 in Finland, counting local units and subsidiary White Guards in towns and rural communes; 251 on 30 September; 315 on 31 October; 380 on 30 November and 408 on 26 January 1918. The first attempt at serious military training among the Guards was the establishment of a 200-strong cavalry school at the Saksanniemi estate in the vicinity of the town of Porvoo, in September 1917. The vanguard of the Finnish Jägers and a consignment of German weaponry arrived in Finland during October–November 1917 aboard a freighter and a German U-boat; around 50 Jägers had returned by the end of 1917.
After political defeats in July and October 1917, the social democrats put forward an uncompromising program called "We Demand" on 1 November, in order to push for political concessions. They insisted upon a return to the political status before the dissolution of Parliament in July 1917, disbandment of the Civil Guards and elections to establish a Finnish Constituent Assembly. The program failed and the socialists initiated a general strike during 14–19 November to increase political pressure on the conservatives, who had opposed the "Law of Supreme Power" and the parliamentary proclamation of sovereign power on 15 November.
Revolution became the goal of the radicalised socialists after the loss of political control, and events in November 1917 offered momentum for a socialist uprising. In this phase, Lenin and Joseph Stalin, under threat in Petrograd, urged the social democrats to take power in Finland. The majority of Finnish socialists were moderate and preferred parliamentary methods, prompting the Bolsheviks to label them "reluctant revolutionaries". The reluctance diminished as the general strike appeared to offer a major channel of influence for the workers in southern Finland. The strike leadership voted by a narrow majority to start a revolution on 16 November, but the uprising had to be called off the same day due to the lack of active revolutionaries to execute it.
At the end of November 1917, the moderate socialists among the social democrats won a second vote over the radicals in a debate over revolutionary versus parliamentary means, but when they tried to pass a resolution to completely abandon the idea of a socialist revolution, the party representatives and several influential leaders voted it down. The Finnish labour movement wanted to sustain a military force of its own and to keep the revolutionary road open, too. The wavering Finnish socialists disappointed V. I. Lenin and, in turn, he began to encourage the Finnish Bolsheviks in Petrograd.
Among the labour movement, a more marked consequence of the events of 1917 was the rise of the Workers' Order Guards. There were 20–60 separate guards between 31 August and 30 September 1917, but on 20 October, after defeat in parliamentary elections, the Finnish labour movement proclaimed the need to establish more worker units. The announcement led to a rush of recruits: on 31 October the number of guards was 100–150; 342 on 30 November 1917 and 375 on 26 January 1918. Since May 1917, the paramilitary organisations of the left had grown in two phases, the majority of them as Workers' Order Guards. The minority were Red Guards: partly underground groups formed in industrialised towns and industrial centres, such as Helsinki, Kotka and Tampere, based on the original Red Guards that had been formed in Finland during 1905–1906.
The presence of the two opposing armed forces created a state of dual power and divided sovereignty in Finnish society. The decisive rift between the guards broke out during the general strike: the Reds executed several political opponents in southern Finland and the first armed clashes between the Whites and Reds took place. In total, 34 casualties were reported. Eventually, the political rivalries of 1917 led to an arms race and an escalation towards civil war.
The disintegration of Russia offered Finns an historic opportunity to gain national independence. After the October Revolution, the conservatives were eager for secession from Russia in order to control the left and minimise the influence of the Bolsheviks. The socialists were skeptical about sovereignty under conservative rule, but they feared a loss of support among nationalistic workers, particularly after having promised increased national liberty through the "Law of Supreme Power". Eventually, both political factions supported an independent Finland, despite strong disagreement over the composition of the nation's leadership.
Nationalism had become a "civic religion" in Finland by the end of the nineteenth century, but the goal during the general strike of 1905 was a return to the autonomy of 1809–1898, not full independence. In comparison to the unitary Swedish regime, the domestic power of the Finns had increased under the less uniform Russian rule. Economically, the Grand Duchy of Finland benefited from an independent domestic state budget, a central bank with a national currency, the markka (introduced in 1860), its own customs organisation, and the industrial progress of 1860–1916. The economy was dependent on the huge Russian market, and separation would disrupt the profitable Finnish financial zone. The economic collapse of Russia and the power struggle of the Finnish state in 1917 were among the key factors that brought sovereignty to the fore in Finland.
Svinhufvud's Senate introduced Finland's Declaration of Independence on 4 December 1917 and Parliament adopted it on 6 December. The social democrats voted against the Senate's proposal, while presenting an alternative declaration of sovereignty. The establishment of an independent state was not a guaranteed conclusion for the small Finnish nation. Recognition by Russia and other great powers was essential; Svinhufvud accepted that he had to negotiate with Lenin for the acknowledgement. The socialists, having been reluctant to enter talks with the Russian leadership in July 1917, sent two delegations to Petrograd to request that Lenin approve Finnish sovereignty.
In December 1917, Lenin was under intense pressure from the Germans to conclude peace negotiations at Brest-Litovsk and the Bolsheviks' rule was in crisis, with an inexperienced administration and the demoralised army facing powerful political and military opponents. Lenin calculated that the Bolsheviks could fight for central parts of Russia but had to give up some peripheral territories, including Finland in the geopolitically less important north-western corner. As a result, Svinhufvud's delegation won Lenin's concession of sovereignty on 31 December 1917.
By the beginning of the Civil War, Austria-Hungary, Denmark, France, Germany, Greece, Norway, Sweden and Switzerland had recognised Finnish independence. The United Kingdom and United States did not approve it; they waited and monitored the relations between Finland and Germany (the main enemy of the Allies), hoping to override Lenin's regime and to get Russia back into the war against the German Empire. In turn, the Germans hastened Finland's separation from Russia so as to move the country to within their sphere of influence.
The final escalation towards war began in early January 1918, as each military or political action of the Reds or the Whites resulted in a corresponding counteraction by the other. Both sides justified their activities as defensive measures, particularly to their own supporters. On the left, the vanguard of the movement was the urban Red Guards from Helsinki, Kotka and Turku; they led the rural Reds and convinced the socialist leaders who wavered between peace and war to support the revolution. On the right, the vanguard was the Jägers, who had transferred to Finland, and the volunteer Civil Guards of southwestern Finland, southern Ostrobothnia and Vyborg province in the southeastern corner of Finland. The first local battles were fought during 9–21 January 1918 in southern and southeastern Finland, mainly to win the arms race and to control Vyborg.
On 12 January 1918, Parliament authorised the Svinhufvud Senate to establish internal order and discipline on behalf of the state. On 15 January, Carl Gustaf Emil Mannerheim, a former Finnish general of the Imperial Russian Army, was appointed the commander-in-chief of the Civil Guards. The Senate appointed the Guards, henceforth called the White Guards, as the White Army of Finland. Mannerheim placed his Headquarters of the White Army in the Vaasa–Seinäjoki area. The White Order to engage was issued on 25 January. The Whites gained weaponry by disarming Russian garrisons during 21–28 January, in particular in southern Ostrobothnia.
The Red Guards, led by Ali Aaltonen, refused to recognise the Whites' hegemony and established a military authority of their own. Aaltonen installed his headquarters in Helsinki and nicknamed it Smolna, echoing the Smolny Institute, the Bolsheviks' headquarters in Petrograd. The Red Order of Revolution was issued on 26 January, and a red lantern, a symbolic indicator of the uprising, was lit in the tower of the Helsinki Workers' House. A large-scale mobilisation of the Reds began late in the evening of 27 January, with the Helsinki Red Guard and some of the Guards located along the Vyborg-Tampere railway having been activated between 23 and 26 January, in order to safeguard vital positions and escort a heavy railroad shipment of Bolshevik weapons from Petrograd to Finland. White troops tried to capture the shipment: 20–30 Finns, Red and White, died in the Battle of Kämärä at the Karelian Isthmus on 27 January 1918. The Finnish power struggle had reached its culmination.
At the beginning of the war, a discontinuous front line ran through southern Finland from west to east, dividing the country into White Finland and Red Finland. The Red Guards controlled the area to the south, including nearly all the major towns and industrial centres, along with the largest estates and farms with the highest numbers of crofters and tenant farmers. The White Army controlled the area to the north, which was predominantly agrarian and contained small or medium-sized farms and tenant farmers. The number of crofters was lower and they held a better social status than those in the south. Enclaves of the opposing forces existed on both sides of the front line: within the White area lay the industrial towns of Varkaus, Kuopio, Oulu, Raahe, Kemi and Tornio; within the Red area lay Porvoo, Kirkkonummi and Uusikaupunki. The elimination of these strongholds was a priority for both armies in February 1918.
Red Finland was led by the People's Delegation, established on 28 January 1918 in Helsinki. The delegation sought democratic socialism based on the Finnish Social Democratic Party's ethos; their visions differed from Lenin's dictatorship of the proletariat. Otto Ville Kuusinen formulated a proposal for a new constitution, influenced by those of Switzerland and the United States. With it, political power was to be concentrated in Parliament, with a lesser role for a government. The proposal included a multi-party system; freedom of assembly, speech and press; and the use of referenda in political decision-making. In order to ensure the authority of the labour movement, the common people would have a right to permanent revolution. The socialists planned to transfer a substantial part of property rights to the state and local administrations.
In foreign policy, Red Finland leaned on Bolshevist Russia. A Red-initiated Finno–Russian treaty and peace agreement was signed on 1 March 1918, in which Red Finland was named the Finnish Socialist Workers' Republic. The negotiations for the treaty implied that, as in World War I in general, nationalism was more important for both sides than the principles of international socialism. The Red Finns did not simply accept an alliance with the Bolsheviks, and major disputes appeared, for example, over the demarcation of the border between Red Finland and Soviet Russia. The significance of the Russo–Finnish Treaty evaporated quickly due to the signing of the Treaty of Brest-Litovsk between the Bolsheviks and the German Empire on 3 March 1918.
Lenin's policy on the right of nations to self-determination aimed at preventing the disintegration of Russia during the period of military weakness. He assumed that in war-torn, splintering Europe, the proletariat of free nations would carry out socialist revolutions and unite with Soviet Russia later. The majority of the Finnish labour movement supported Finland's independence. The Finnish Bolsheviks, influential, though few in number, favoured annexation of Finland by Russia.
The government of White Finland, Pehr Evind Svinhufvud's first senate, was called the Vaasa Senate after its relocation to the safer west-coast city of Vaasa, which acted as the capital of the Whites from 29 January to 3 May 1918. In domestic policy, the White Senate's main goal was to return the political right to power in Finland. The conservatives planned a monarchist political system, with a lesser role for Parliament. A section of the conservatives had always supported monarchy and opposed democracy; others had approved of parliamentarianism since the revolutionary reform of 1906, but after the crisis of 1917–1918, concluded that empowering the common people would not work. Social liberals and reformist non-socialists opposed any restriction of parliamentarianism. They initially resisted German military help, but the prolonged warfare changed their stance.
In foreign policy, the Vaasa Senate relied on the German Empire for military and political aid. Their objective was to defeat the Finnish Reds, end the influence of Bolshevist Russia in Finland and expand Finnish territory to East Karelia, a geopolitically significant region inhabited by people speaking Finno-Ugric languages. The weakness of Russia inspired an idea of Greater Finland among the expansionist factions of both the right and left: the Reds had claims concerning the same areas. General Mannerheim agreed on the need to take over East Karelia and to request German weapons, but opposed actual German intervention in Finland. Mannerheim recognised the Red Guards' lack of combat skill and trusted in the abilities of the German-trained Finnish Jägers. As a former Russian army officer, Mannerheim was well aware of the demoralisation of the Russian army. He co-operated with White-aligned Russian officers in Finland and Russia.
The number of Finnish troops on each side varied from 70,000 to 90,000 and both had around 100,000 rifles, 300–400 machine guns and a few hundred cannons. While the Red Guards consisted mostly of volunteers, with wages paid at the beginning of the war, the White Army consisted predominantly of conscripts with 11,000–15,000 volunteers. The main motives for volunteering were socio-economic factors, such as salary and food, as well as idealism and peer pressure. The Red Guards included 2,600 women, mostly girls recruited from the industrial centres and cities of southern Finland. Urban and agricultural workers constituted the majority of the Red Guards, whereas land-owning farmers and well-educated people formed the backbone of the White Army. Both armies used child soldiers, mainly between 14 and 17 years of age. The use of juvenile soldiers was not rare in World War I; children of the time were under the absolute authority of adults and were not shielded against exploitation.
Rifles and machine guns from Imperial Russia were the main armaments of the Reds and the Whites. The most commonly used rifle was the Russian Mosin–Nagant Model 1891. In total, around ten different rifle models were in service, causing problems for ammunition supply. The Maxim gun was the most-used machine gun, along with the less-used M1895 Colt–Browning, Lewis and Madsen guns. The machine guns caused a substantial part of the casualties in combat. Russian field guns were mostly used with direct fire.
The Civil War was fought primarily along railways: vital means for transporting troops and supplies, as well as for deploying armoured trains equipped with light cannons and heavy machine guns. The strategically most important railway junction was Haapamäki, northeast of Tampere, connecting eastern and western Finland as well as southern and northern Finland. Other critical junctions included Kouvola, Riihimäki, Tampere, Toijala and Vyborg. The Whites captured Haapamäki at the end of January 1918, leading to the Battle of Vilppula.
The Finnish Red Guards seized the early initiative in the war by taking control of Helsinki on 28 January 1918 and by undertaking a general offensive lasting from February till early March 1918. The Reds were relatively well-armed, but a chronic shortage of skilled leaders, both at the command level and in the field, left them unable to capitalise on this momentum, and most of the offensives came to nothing. The military chain of command functioned relatively well at company and platoon level, but leadership and authority remained weak as most of the field commanders were chosen by the vote of the troops. The common troops were more or less armed civilians, whose military training, discipline and combat morale were inadequate.
Ali Aaltonen was replaced on 28 January 1918 by Eero Haapalainen as commander-in-chief. He, in turn, was displaced by the Bolshevik triumvirate of Eino Rahja, Adolf Taimi and Evert Eloranta on 20 March. The last commander-in-chief of the Red Guard was Kullervo Manner, from 10 April until the last period of the war when the Reds no longer had a named leader. Some talented local commanders, such as Hugo Salmela in the Battle of Tampere, provided successful leadership, but could not change the course of the war. The Reds achieved some local victories as they retreated from southern Finland toward Russia, such as against German troops in the Battle of Syrjäntaka on 28–29 April in Tuulos.
The revolutions in Russia divided the Soviet army officers politically and their attitude towards the Finnish Civil War varied. Mikhail Svechnikov led Finnish Red troops in western Finland in February and Konstantin Yeremejev led Soviet forces on the Karelian Isthmus, while other officers were mistrustful of their revolutionary peers and instead co-operated with General Mannerheim in disarming Soviet garrisons in Finland. On 30 January 1918, Mannerheim proclaimed to Russian soldiers in Finland that the White Army did not fight against Russia, but that the objective of the White campaign was to beat the Finnish Reds and the Soviet troops supporting them.
The number of Soviet soldiers active in the civil war declined markedly once Germany attacked Russia on 18 February 1918. The German-Soviet Treaty of Brest-Litovsk of 3 March restricted the Bolsheviks' support for the Finnish Reds to weapons and supplies. The Soviets remained active on the south-eastern front, mainly in the Battle of Rautu on the Karelian Isthmus between February and April 1918, where they defended the approaches to Petrograd.
While the conflict has been called by some "the War of Amateurs", the White Army had two major advantages over the Red Guards: the professional military leadership of Gustaf Mannerheim and his staff, which included 84 Swedish volunteer officers and former Finnish officers of the czar's army; and 1,450 soldiers of the 1,900-strong Jäger battalion. The majority of the unit arrived in Vaasa on 25 February 1918. On the battlefield, the Jägers, battle-hardened on the Eastern Front, provided strong leadership that made disciplined combat of the common White troopers possible. The soldiers were similar to those of the Reds, having brief and inadequate training. At the beginning of the war, the White Guards' top leadership had little authority over volunteer White units, which obeyed only their local leaders. At the end of February, the Jägers started a rapid training of six conscript regiments.
The Jäger battalion was politically divided, too. Four hundred and fifty Jägers, mostly socialists, remained stationed in Germany, as it was feared they were likely to side with the Reds. White Guard leaders faced a similar problem when drafting young men to the army in February 1918: 30,000 obvious supporters of the Finnish labour movement never showed up. It was also uncertain whether common troops drafted from the small-sized and poor farms of central and northern Finland had strong enough motivation to fight the Finnish Reds. The Whites' propaganda promoted the idea that they were fighting a defensive war against Bolshevist Russians, and belittled the role of the Red Finns among their enemies. Social divisions appeared both between southern and northern Finland and within rural Finland. The economy and society of the north had modernised more slowly than that of the south. There was a more pronounced conflict between Christianity and socialism in the north, and the ownership of farmland conferred major social status, motivating the farmers to fight against the Reds.
Sweden declared neutrality both during World War I and the Finnish Civil War. General opinion, in particular among the Swedish elite, was divided between supporters of the Allies and the Central powers, Germanism being somewhat more popular. Three war-time priorities determined the pragmatic policy of the Swedish liberal-social democratic government: sound economics, with export of iron-ore and foodstuff to Germany; sustaining the tranquility of Swedish society; and geopolitics. The government accepted the participation of Swedish volunteer officers and soldiers in the Finnish White Army in order to block expansion of revolutionary unrest to Scandinavia.
A 1,000-strong paramilitary Swedish Brigade, led by Hjalmar Frisell, took part in the Battle of Tampere and in the fighting south of the town. In February 1918, the Swedish Navy escorted the German naval squadron transporting Finnish Jägers and German weapons and allowed it to pass through Swedish territorial waters. The Swedish socialists tried to open peace negotiations between the Whites and the Reds. The weakness of Finland offered Sweden a chance to take over the geopolitically vital Finnish Åland Islands, east of Stockholm, but the German army's Finland operation stalled this plan.
In March 1918, the German Empire intervened in the Finnish Civil War on the side of the White Army. Finnish activists leaning on Germanism had been seeking German aid in freeing Finland from Soviet hegemony since late 1917, but because of the pressure they were facing at the Western Front, the Germans did not want to jeopardise their armistice and peace negotiations with the Soviet Union. The German stance changed after 10 February when Leon Trotsky, despite the weakness of the Bolsheviks' position, broke off negotiations, hoping revolutions would break out in the German Empire and change everything. On 13 February, the German leadership decided to retaliate and send military detachments to Finland too. As a pretext for aggression, the Germans invited "requests for help" from the western neighbouring countries of Russia. Representatives of White Finland in Berlin duly requested help on 14 February.
The Imperial German Army attacked Russia on 18 February. The offensive led to a rapid collapse of the Soviet forces and to the signing of the first Treaty of Brest-Litovsk by the Bolsheviks on 3 March 1918. Finland, the Baltic countries, Poland and Ukraine were transferred to the German sphere of influence. The Finnish Civil War opened a low-cost access route to Fennoscandia, whose geopolitical status was altered when a British naval squadron invaded the Soviet harbour of Murmansk on the Arctic Ocean on 9 March 1918. The leader of the German war effort, General Erich Ludendorff, wanted to keep Petrograd under threat of attack via the Vyborg-Narva area and to install a German-led monarchy in Finland.
On 5 March 1918, a German naval squadron landed on the Åland Islands (in mid-February 1918, the islands had been occupied by a Swedish military expedition, which departed from there in May). On 3 April 1918, the 10,000-strong Baltic Sea Division (), led by General Rüdiger von der Goltz, launched the main attack at Hanko, west of Helsinki. It was followed on 7 April by Colonel Otto von Brandenstein's 3,000-strong Detachment Brandenstein () taking the town of Loviisa east of Helsinki. The larger German formations advanced eastwards from Hanko and took Helsinki on 12–13 April, while Detachment Brandenstein overran the town of Lahti on 19 April. The main German detachment proceeded northwards from Helsinki and took Hyvinkää and Riihimäki on 21–22 April, followed by Hämeenlinna on 26 April. The final blow to the cause of the Finnish Reds was dealt when the Bolsheviks broke off the peace negotiations at Brest-Litovsk, leading to the German eastern offensive in February 1918.
In February 1918, General Mannerheim deliberated on where to focus the general offensive of the Whites. There were two strategically vital enemy strongholds: Tampere, Finland's major industrial town in the south-west, and Vyborg, Karelia's main city. Although seizing Vyborg offered many advantages, his army's lack of combat skills and the potential for a major counterattack by the Reds in the area or in the south-west made it too risky.
Mannerheim decided to strike first at Tampere. He launched the main assault on 16 March 1918, at Längelmäki north-east of the town, through the right flank of the Reds' defence. At the same time, the Whites attacked through the north-western frontline Vilppula–Kuru–Kyröskoski–Suodenniemi. Although the Whites were unaccustomed to offensive warfare, some Red Guard units collapsed and retreated in panic under the weight of the offensive, while other Red detachments defended their posts to the last and were able to slow the advance of the White troops. Eventually, the Whites laid siege to Tampere. They cut off the Reds' southward connection at Lempäälä on 24 March and westward ones at Siuro, Nokia, and Ylöjärvi on 25 March.
The Battle for Tampere was fought between 16,000 White and 14,000 Red soldiers. It was Finland's first large-scale urban battle and one of the four most decisive military engagements of the war. The fight for the area of Tampere began on 28 March, on the eve of Easter 1918, later called "Bloody Maundy Thursday", in the Kalevankangas cemetery. The White Army did not achieve a decisive victory in the fierce combat, suffering more than 50 percent losses in some of their units. The Whites had to re-organise their troops and battle plans, managing to raid the town centre in the early hours of 3 April.
After a heavy, concentrated artillery barrage, the White Guards advanced from house to house and street to street, as the Red Guards retreated. In the late evening of 3 April, the Whites reached the eastern banks of the Tammerkoski rapids. The Reds' attempts to break the siege of Tampere from the outside along the Helsinki-Tampere railway failed. The Red Guards lost the western parts of the town between 4 and 5 April. The Tampere City Hall was among the last strongholds of the Reds. The battle ended on 6 April 1918 with the surrender of Red forces in the Pyynikki and Pispala sections of Tampere.
The Reds, now on the defensive, showed increased motivation to fight during the battle. General Mannerheim was compelled to deploy some of the best-trained Jäger detachments, initially meant to be conserved for later use in the Vyborg area. The Battle of Tampere was the bloodiest action of the Civil War. The White Army lost 700–900 men, including 50 Jägers, the highest number of deaths the Jäger battalion suffered in a single battle of the 1918 war. The Red Guards lost 1,000–1,500 soldiers, with a further 11,000–12,000 captured. 71 civilians died, mainly due to artillery fire. The eastern parts of the city, consisting mostly of wooden buildings, were completely destroyed.
After peace talks between the Germans and the Finnish Reds broke off on 11 April 1918, the battle for the capital of Finland began. At 05:00 on 12 April, around 2,000–3,000 German Baltic Sea Division soldiers, led by Colonel Hans von Tschirsky und von Bögendorff, attacked the city from the north-west, supported via the Helsinki–Turku railway. The Germans broke through the area between Munkkiniemi and Pasila, and advanced on the central-western parts of the town. The German naval squadron led by Vice Admiral Hugo Meurer blocked the city harbour, bombarded the southern town area, and landed "Seebataillon" marines at Katajanokka.
Around 7,000 Finnish Reds defended Helsinki, but their best troops fought on other fronts of the war. The main strongholds of the Red defence were the Workers' Hall, the Helsinki railway station, the Red Headquarters at Smolna, the Senate Palace–Helsinki University area and the former Russian garrisons. By the late evening of 12 April, most of the southern parts and all of the western area of the city had been occupied by the Germans. Local Helsinki White Guards, having hidden in the city during the war, joined the battle as the Germans advanced through the town.
On 13 April, German troops took over the Market Square, the Smolna, the Presidential Palace and the Senate–Ritarihuone area. Toward the end, a German brigade of 2,000–3,000 soldiers, led by Colonel Konrad Wolf, joined the battle. The unit rushed from the north to the eastern parts of Helsinki, pushing into the working-class neighborhoods of Hermanni, Kallio and Sörnäinen. German artillery bombarded and destroyed the Workers' Hall and put out the red lantern of the Finnish revolution. The eastern parts of the town surrendered around 14:00 on 13 April, when a white flag was raised in the tower of the Kallio Church. Sporadic fighting lasted until the evening. In total, 60 Germans, 300–400 Reds and 23 White Guard troopers were killed in the battle. Around 7,000 Reds were captured. The German army celebrated the victory with a military parade in the centre of Helsinki on 14 April 1918.
On 19 April 1918, Detachment Brandenstein took over the town of Lahti. The German troops advanced from the east-southeast via Nastola, through the Mustankallio graveyard in Salpausselkä and the Russian garrisons at Hennala. The battle was minor but strategically important as it cut the connection between the western and eastern Red Guards. Local engagements broke out in the town and the surrounding area between 22 April and 1 May 1918 as several thousand western Red Guards and Red civilian refugees tried to push through on their way to Russia. The German troops were able to hold major parts of the town and halt the Red advance. In total, 600 Reds and 80 German soldiers perished, and 30,000 Reds were captured in and around Lahti.
After the defeat in Tampere, the Red Guards began a slow retreat eastwards. As the German army seized Helsinki, the White Army shifted its military focus to the Vyborg area, where 18,500 Whites advanced against 15,000 defending Reds. General Mannerheim's war plan had been revised as a result of the Battle for Tampere, a civilian, industrial town. He aimed to avoid new, complex city combat in Vyborg, an old military fortress. The Jäger detachments tried to tie down and destroy the Red force outside the town. The Whites were able to cut the Reds' connection to Petrograd and weaken the troops on the Karelian Isthmus on 20–26 April, but the decisive blow remained to be dealt in Vyborg. The final attack began late on 27 April with a heavy Jäger artillery barrage. The Reds' defence collapsed gradually, and eventually the Whites conquered Patterinmäki—the Reds' symbolic last stand of the 1918 uprising—in the early hours of 29 April 1918. In total, 400 Whites died, while 500–600 Reds perished and 12,000–15,000 were captured.
Both Whites and Reds carried out political violence through executions, respectively termed White Terror and Red Terror. The threshold of political violence had already been crossed by the Finnish activists during the First Period of Russification. Large-scale terror operations were born and bred in Europe during World War I, the first total war. The February and October Revolutions initiated similar violence in Finland: at first by Russian army troops executing their officers, later between the Finnish Reds and Whites.
The terror consisted, on the one hand, of a calculated aspect of general warfare and, on the other, of local, personal murders and corresponding acts of revenge. In the former, the commanding staff planned and organised the actions and gave orders to the lower ranks. At least a third of the Red terror and most of the White terror was centrally led. In February 1918, a "Desk of Securing Occupied Areas" was established by the highest-ranking White staff, and the White troops were given "Instructions for Wartime Judicature", later called the Shoot on the Spot Declaration. This order authorised field commanders to execute essentially anyone they saw fit. No order by the less-organised, highest Red Guard leadership authorising Red Terror has been found; either the papers were destroyed or the commands were given orally.
The main goals of the terror were to destroy the command structure of the enemy; to clear and secure the areas governed and occupied by armies; and to create shock and fear among the civil population and the enemy soldiers. Additionally, the common troops' paramilitary nature and their lack of combat skills drove them to use political violence as a military weapon. Most of the executions were carried out by cavalry units called Flying Patrols, consisting of 10 to 80 soldiers aged 15 to 20 and led by an experienced, adult leader with absolute authority. The patrols, specialised in search and destroy operations and death squad tactics, were similar to German Sturmbattalions and Russian Assault units organized during World War I. The terror achieved some of its objectives but also gave additional motivation to fight against an enemy perceived to be inhuman and cruel. Both Red and White propaganda made effective use of their opponents' actions, increasing the spiral of revenge.
The Red Guards executed influential Whites, including politicians, major landowners, industrialists, police officers, civil servants and teachers as well as White Guards. Ten priests of the Evangelical Lutheran Church and 90 moderate socialists were killed. The number of executions varied over the war months, peaking in February as the Reds secured power, but March saw low counts because the Reds could not seize new areas outside of the original frontlines. The numbers rose again in April as the Reds aimed to leave Finland. The two major centres for Red Terror were Toijala and Kouvola, where 300–350 Whites were executed between February and April 1918.
The White Guards executed Red Guard and party leaders, Red troops, socialist members of the Finnish Parliament and local Red administrators, and those active in implementing Red Terror. The numbers varied over the months as the Whites conquered southern Finland. Comprehensive White Terror started with their general offensive in March 1918 and increased constantly. It peaked at the end of the war and declined and ceased after the enemy troops had been transferred to prison camps. During the high point of the executions, between the end of April and the beginning of May, 200 Reds were shot per day. White Terror was decisive against Russian soldiers who assisted the Finnish Reds, and several Russian non-socialist civilians were killed in the Vyborg massacre, the aftermath of the Battle of Vyborg.
In total, 1,650 Whites died as a result of Red Terror, while around 10,000 Reds perished by White Terror, which turned into political cleansing. White victims have been recorded exactly, while the number of Red troops executed immediately after battles remains unclear. Together with the harsh prison-camp treatment of the Reds during 1918, the executions inflicted the deepest mental scars on the Finns, regardless of their political allegiance. Some of those who carried out the killings were traumatised, a phenomenon that was later documented.
On 8 April 1918, after the defeat in Tampere and the German army intervention, the People's Delegation retreated from Helsinki to Vyborg. The loss of Helsinki pushed them to Petrograd on 25 April. The escape of the leadership embittered many Reds, and thousands of them tried to flee to Russia, but most of the refugees were encircled by White and German troops. In the Lahti area they surrendered on 1–2 May. The long Red caravans included women and children, who experienced a desperate, chaotic escape with severe losses due to White attacks. The scene was described as a "road of tears" for the Reds, but for the Whites, the sight of long, enemy caravans heading east was a victorious moment. The Red Guards' last strongholds between the Kouvola and Kotka area fell by 5 May, after the Battle of Ahvenkoski. The war of 1918 ended on 15 May 1918, when the Whites took over Fort Ino, a Russian coastal artillery base on the Karelian Isthmus, from the Russian troops. White Finland and General Mannerheim celebrated the victory with a large military parade in Helsinki on 16 May 1918.
The Red Guards had been defeated. The initially pacifist Finnish labour movement had lost the Civil War, several military leaders committed suicide and a majority of the Reds were sent to prison camps. The Vaasa Senate returned to Helsinki on 4 May 1918, but the capital was under the control of the German army. White Finland had become a protectorate of the German Empire and General Rüdiger von der Goltz was called "the true Regent of Finland". No armistice or peace negotiations were carried out between the Whites and Reds and an official peace treaty to end the Finnish Civil War was never signed.
The White Army and German troops captured around 80,000 Red prisoners of war (POWs), including 5,000 women, 1,500 children and 8,000 Russians. The largest prison camps were Suomenlinna (an island facing Helsinki), Hämeenlinna, Lahti, Riihimäki, Tammisaari, Tampere and Vyborg. The Senate decided to keep the POWs detained until each individual's role in the Civil War had been investigated. Legislation making provision for a Treason Court was enacted on 29 May 1918. The judicature of the 145 inferior courts led by the Supreme Treason Court did not meet the standards of impartiality, due to the condemnatory atmosphere of White Finland. In total, 76,000 cases were examined and 68,000 Reds were convicted, primarily for treason; 39,000 were released on parole, while the mean length of punishment for the rest was two to four years in jail. 555 people were sentenced to death, of whom 113 were executed. The trials revealed that some innocent adults had been imprisoned.
Combined with the severe food shortages caused by the Civil War, mass imprisonment led to high mortality rates in the POW camps, and the catastrophe was compounded by the angry, punitive and uncaring mentality of the victors. Many prisoners felt that they had been abandoned by their own leaders, who had fled to Russia. The physical and mental condition of the POWs declined in May 1918. Many prisoners had been sent to the camps in Tampere and Helsinki in the first half of April, and food supplies were disrupted during the Reds' eastward retreat. Consequently, 2,900 prisoners starved to death or died of diseases caused by malnutrition or the Spanish flu in June; 5,000 in July; 2,200 in August; and 1,000 in September. The mortality rate was highest in the Tammisaari camp at 34 percent, while the rate varied between 5 percent and 20 percent in the others. In total, around 12,500 Finns perished (3,000–4,000 due to the Spanish flu) while detained. The dead were buried in mass graves near the camps. Moreover, 700 severely weakened POWs died soon after release from the camps.
Most POWs were paroled or pardoned by the end of 1918, after a shift in the political situation. There were 6,100 Red prisoners left at the end of the year and 4,000 at the end of 1919. In January 1920, 3,000 POWs were pardoned and civil rights were returned to 40,000 former Reds. In 1927, the Social Democratic Party government led by Väinö Tanner pardoned the last 50 prisoners. The Finnish government paid reparations to 11,600 POWs in 1973. The traumatic hardships of the prison camps increased support for communism in Finland.
The Civil War was a catastrophe for Finland: around 36,000 people – 1.2 percent of the population – perished. The war left approximately 15,000 children orphaned. Most of the casualties occurred outside the battlefields: in the prison camps and the terror campaigns. Many Reds fled to Russia at the end of the war and during the period that followed. The fear, bitterness and trauma caused by the war deepened the divisions within Finnish society and many moderate Finns identified themselves as "citizens of two nations."
The conflict caused disintegration within both socialist and non-socialist factions. The rightward shift of power caused a dispute between conservatives and liberals on the best system of government for Finland to adopt: the former demanded monarchy and restricted parliamentarianism; the latter demanded a democratic republic. Both sides justified their views on political and legal grounds. The monarchists leaned on the Swedish regime's 1772 monarchist constitution (accepted by Russia in 1809), belittled the Declaration of Independence of 1917, and proposed a modernised, monarchist constitution for Finland. The republicans argued that the 1772 law lost validity in the February Revolution, that the authority of the Russian czar was assumed by the Finnish Parliament on 15 November 1917, and that the Republic of Finland had been adopted on 6 December that year. The republicans were able to halt the passage of the monarchists' proposal in Parliament. The royalists responded by applying the 1772 law to select a new monarch for the country without reference to Parliament.
The Finnish labour movement was divided into three parts: moderate social democrats in Finland; radical socialists in Finland; and communists in Soviet Russia. The Social Democratic Party had its first official party meeting after the Civil War on 25 December 1918, at which the party proclaimed a commitment to parliamentary means and disavowed Bolshevism and communism. The leaders of Red Finland, who had fled to Russia, established the Communist Party of Finland in Moscow on 29 August 1918. After the power struggle of 1917 and the bloody civil war, the former Fennomans and the social democrats who had supported "ultra-democratic" means in Red Finland declared a commitment to revolutionary Bolshevism–communism and to the dictatorship of the proletariat, under the control of Lenin.
In May 1918, a conservative-monarchist Senate was formed by J. K. Paasikivi, and the Senate asked the German troops to remain in Finland. The Treaty of Brest-Litovsk of 3 March 1918 and the German–Finnish agreements of 7 March bound White Finland to the German Empire's sphere of influence. General Mannerheim resigned his post on 25 May after disagreements with the Senate about German hegemony over Finland, and about his planned attack on Petrograd to repulse the Bolsheviks and capture Russian Karelia. The Germans opposed these plans due to their peace treaties with Lenin. The Civil War weakened the Finnish Parliament; it became a Rump Parliament that included only three socialist representatives.
On 9 October 1918, under pressure from Germany, the Senate and Parliament elected a German prince, Friedrich Karl, the brother-in-law of German Emperor William II, to become the King of Finland. The German leadership was thus able to exploit the breakdown of Russia for the geopolitical benefit of the German Empire in Fennoscandia as well. The Civil War and its aftermath diminished the independence of Finland, compared with the status it had held at the turn of the year 1917–1918.
The economic condition of Finland deteriorated drastically from 1918; recovery to pre-conflict levels was achieved only in 1925. The most acute crisis was in food supply, already deficient in 1917, though large-scale starvation had been avoided that year. The Civil War caused marked starvation in southern Finland. Late in 1918, Finnish politician Rudolf Holsti appealed for relief to Herbert Hoover, the American chairman of the Commission for Relief in Belgium. Hoover arranged for the delivery of food shipments and persuaded the Allies to relax their blockade of the Baltic Sea, which had obstructed food supplies to Finland, and to allow food into the country.
On 15 March 1917, the fate of Finns had been decided outside Finland, in Petrograd. On 11 November 1918, the future of the nation was determined in Berlin, as a result of Germany's surrender to end World War I. The German Empire collapsed in the German Revolution of 1918–19, caused by lack of food, war-weariness and defeat in the battles of the Western Front. General Rüdiger von der Goltz and his division left Helsinki on 16 December 1918, and Prince Friedrich Karl, who had not yet been crowned, abandoned his role four days later. Finland's status shifted from a monarchist protectorate of the German Empire to an independent republic. The new system of government was confirmed by the Constitution Act on 17 July 1919.
The first local elections based on universal suffrage in Finland were held during 17–28 December 1918, and the first free parliamentary election took place after the Civil War on 3 March 1919. The United States and the United Kingdom recognised Finnish sovereignty on 6–7 May 1919. The Western powers demanded the establishment of democratic republics in post-war Europe, to lure the masses away from widespread revolutionary movements. The Finno–Russian Treaty of Tartu was signed on 14 October 1920, with the aim of stabilizing political relations between Finland and Russia and settling the border question.
In April 1918, the leading Finnish social liberal and the eventual first President of Finland, Kaarlo Juho Ståhlberg wrote: "It is urgent to get the life and development in this country back on the path that we had already reached in 1906 and which the turmoil of war turned us away from." Moderate social democrat Väinö Voionmaa agonised in 1919: "Those who still trust in the future of this nation must have an exceptionally strong faith. This young independent country has lost almost everything due to the war." Voionmaa was a vital companion for the leader of the reformed Social Democratic Party, Väinö Tanner.
Santeri Alkio supported moderate politics. His party colleague, Kyösti Kallio urged in his Nivala address of 5 May 1918: "We must rebuild a Finnish nation, which is not divided into the Reds and Whites. We have to establish a democratic Finnish republic, where all the Finns can feel that we are true citizens and members of this society." In the end, many of the moderate Finnish conservatives followed the thinking of National Coalition Party member Lauri Ingman, who wrote in early 1918: "A political turn more to the right will not help us now, instead it would strengthen the support of socialism in this country."
Together with other broad-minded Finns, the new partnership constructed a Finnish compromise which eventually delivered a stable and broad parliamentary democracy. The compromise was based both on the defeat of the Reds in the Civil War and the fact that most of the Whites' political goals had not been achieved. After foreign forces left Finland, the militant factions of the Reds and the Whites lost their backing, while the pre-1918 cultural and national integrity and the legacy of Fennomania stood out among the Finns.
The weakness of both Germany and Russia after World War I empowered Finland and made a peaceful, domestic Finnish social and political settlement possible. A reconciliation process led to a slow and painful, but steady, national unification. In the end, the power vacuum and interregnum of 1917–1919 gave way to the Finnish compromise. From 1919 to 1991, the democracy and sovereignty of the Finns withstood challenges from right-wing and left-wing political radicalism, the crisis of World War II and pressure from the Soviet Union during the Cold War.
Between 1918 and the 1950s, mainstream literature and poetry presented the 1918 war from the White victors' point of view, with works such as the "Psalm of the Cannons" by Arvi Järventaus in 1918. In poetry, Bertel Gripenberg, who had volunteered for the White Army, celebrated its cause in "The Great Age" in 1928 and V. A. Koskenniemi in "Young Anthony" in 1918. The war tales of the Reds were passed over in silence.
The first neutrally critical books were written soon after the war, notably "Devout Misery" by the Nobel Prize laureate Frans Emil Sillanpää in 1919; "Dead Apple Trees" by Joel Lehtonen in 1918; and "Homecoming" by Runar Schildt in 1919. These were followed by Jarl Hemmer in 1931 with the book "A Man and His Conscience" and Oiva Paloheimo in 1942 with "Restless Childhood". Lauri Viita's book "Scrambled Ground" from 1950 presented the life and experiences of a worker family in the Tampere of 1918, including a point of view from outsiders to the Civil War.
Between 1959 and 1962, Väinö Linna described in his trilogy "Under the North Star" the Civil War and World War II from the viewpoint of the common people. Part II of Linna's work opened a larger view of these events and included tales of the Reds in the 1918 war. At the same time, a new outlook on the war was opened by Paavo Haavikko's book "Private Matters", Veijo Meri's "The Events of 1918" and Paavo Rintala's "My Grandmother and Mannerheim", all published in 1960. In poetry, Viljo Kajava, who had experienced the Battle of Tampere at the age of nine, presented a pacifist view of the Civil War in his "Poems of Tampere" in 1966. The same battle is described in the novel "Corpse Bearer" by Antti Tuuri from 2007. Jenni Linturi's multilayered "Malmi 1917" (2013) describes contradictory emotions and attitudes in a village drifting towards civil war.
Väinö Linna's trilogy turned the general tide, and after it, several books were written mainly from the Red viewpoint: the Tampere trilogy by Erkki Lepokorpi in 1977; Juhani Syrjä's "Juho 18" in 1998; "The Command" by Leena Lander in 2003; and "Sandra" by Heidi Köngäs in 2017. Kjell Westö's epic novel "Where We Once Went", published in 2006, deals with the period of 1915–1930 from both the Red and the White sides. Westö's book "Mirage 38" from 2013 describes post-war traumas of the 1918 war and the Finnish mentality in the 1930s. Many of the stories have been utilised in motion pictures and in theatre.
Flynn effect
The Flynn effect is the substantial and long-sustained increase in both fluid and crystallized intelligence test scores measured in many parts of the world over the 20th century. When intelligence quotient (IQ) tests are initially standardized using a sample of test-takers, by convention the average of the test results is set to 100 and their standard deviation is set to 15 or 16 IQ points. When IQ tests are revised, they are again standardized using a new sample of test-takers, usually born more recently than the first. Again, the average result is set to 100. However, when the new test subjects take the older tests, in almost every case their average scores are significantly above 100.
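The renorming convention described above can be sketched in a few lines. The raw scores below are hypothetical; only the mean-100, SD-15 scaling is from the text:

```python
import statistics

# Hypothetical 1950 standardization sample: by construction, its own
# mean raw score maps to IQ 100 and its SD to 15 IQ points.
norm_sample_1950 = [38, 42, 45, 47, 50, 53, 55, 58, 62]
mu = statistics.mean(norm_sample_1950)
sigma = statistics.pstdev(norm_sample_1950)

def to_iq(raw):
    """Map a raw score to an IQ score on the 1950 norms."""
    return 100 + 15 * (raw - mu) / sigma

iqs_1950 = [to_iq(x) for x in norm_sample_1950]
print(round(statistics.mean(iqs_1950)))  # 100 by construction

# A later cohort taking the *old* test scores higher in raw terms
# (a uniform +5 gain here), so on the 1950 norms its mean IQ
# comes out above 100 -- the Flynn effect in miniature.
later_cohort = [x + 5 for x in norm_sample_1950]
print(round(statistics.mean([to_iq(x) for x in later_cohort]), 1))
```

The key point is that each standardization pins the contemporary sample's mean to 100, so gains only become visible when one cohort is scored against another cohort's norms.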
Test score increases have been continuous and approximately linear from the earliest years of testing to the present. For the Raven's Progressive Matrices test, a study published in the year 2009 found that British children's average scores rose by 14 IQ points from 1942 to 2008. Similar gains have been observed in many other countries in which IQ testing has long been widely used, including other Western European countries, Japan, and South Korea.
There are numerous proposed explanations of the Flynn effect, as well as some skepticism about its implications. Similar improvements have been reported for other cognitive abilities, such as semantic and episodic memory. Research suggests that there is an ongoing reversed Flynn effect, i.e. a decline in IQ scores, in Norway, Denmark, Australia, Britain, the Netherlands, Sweden, Finland, France and German-speaking countries, a development which appears to have started in the 1990s.
The Flynn effect is named for James R. Flynn, who did much to document it and promote awareness of its implications. The term itself was coined by Richard Herrnstein and Charles Murray, authors of "The Bell Curve". Although the general term for the phenomenon—referring to no researcher in particular—continues to be "secular rise in IQ scores", many textbooks on psychology and IQ testing have now followed the lead of Herrnstein and Murray in calling the phenomenon the Flynn effect.
IQ tests are updated periodically. For example, the Wechsler Intelligence Scale for Children (WISC), originally developed in 1949, was updated in 1974, 1991, 2003 and again in 2014. The revised versions are standardized based on the performance of test-takers in standardization samples. A standard score of IQ 100 is defined as the median performance of the standardization sample. Thus one way to see changes in norms over time is to conduct a study in which the same test-takers take both an old and new version of the same test. Doing so confirms IQ gains over time. Some IQ tests, for example tests used for military draftees in NATO countries in Europe, report raw scores, and those also confirm a trend of rising scores over time. The average rate of increase seems to be about three IQ points per decade in the United States, as scaled by the Wechsler tests. The increasing test performance over time appears on every major test, in every age range, at every ability level, and in every modern industrialized country, although not necessarily at the same rate as in the United States. The increase was continuous and roughly linear from the earliest days of testing to the mid-1990s. Though the effect is most associated with IQ increases, a similar effect has been found with increases in attention and of semantic and episodic memory.
Ulric Neisser estimated that using the IQ values of 1997, the average IQ of the United States in 1932, according to the first Stanford–Binet Intelligence Scales standardization sample, was 80. Neisser states that "Hardly any of them would have scored 'very superior', but nearly one-quarter would have appeared to be 'deficient.'" He also wrote that "Test scores are certainly going up all over the world, but whether intelligence itself has risen remains controversial."
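Neisser's backward projection is consistent with the roughly three-points-per-decade US rate mentioned above; a quick arithmetic sketch (the constant-rate assumption is an illustration, not a claim from the sources):

```python
# Back-project the 1932 US mean IQ onto 1997 norms, assuming a constant
# norm drift of about 3 IQ points per decade (the US rate for the
# Wechsler tests cited above).
rate_per_year = 3 / 10           # IQ points of norm drift per year
years = 1997 - 1932              # span between the two reference points
drift = rate_per_year * years    # total drift: 19.5 points
print(100 - drift)               # 80.5 -- close to Neisser's estimate of 80
```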
Trahan et al. (2014) found that the effect was about 2.93 points per decade, based on both Stanford–Binet and Wechsler tests; they also found no evidence the effect was diminishing. In contrast, Pietschnig and Voracek (2015) reported, in their meta-analysis of studies involving nearly 4 million participants, that the Flynn effect had decreased in recent decades. They also reported that the magnitude of the effect was different for different types of intelligence ("0.41, 0.30, 0.28, and 0.21 IQ points annually for fluid, spatial, full-scale, and crystallized IQ test performance, respectively"), and that the effect was stronger for adults than for children.
Raven (2000) found that, as Flynn suggested, data interpreted as showing a decrease in many abilities with increasing age must be re-interpreted as showing that there has been a dramatic increase of these abilities with date of birth. On many tests this occurs at all levels of ability.
Some studies have found the gains of the Flynn effect to be particularly concentrated at the lower end of the distribution. Teasdale and Owen (1989), for example, found the effect primarily reduced the number of low-end scores, resulting in an increased number of moderately high scores, with no increase in very high scores. In another study, two large samples of Spanish children were assessed with a 30-year gap. Comparison of the IQ distributions indicated that the mean IQ scores on the test had increased by 9.7 points (the Flynn effect), the gains were concentrated in the lower half of the distribution and negligible in the top half, and the gains gradually decreased as the IQ of the individuals increased. Some studies have found a reverse Flynn effect with declining scores for those with high IQ.
In 1987, Flynn took the position that the very large increase indicates that IQ tests do not measure intelligence but only a minor sort of "abstract problem-solving ability" with little practical significance. He argued that if IQ gains do reflect intelligence increases, there would have been consequent changes of our society that have not been observed (a presumed non-occurrence of a "cultural renaissance"). Flynn no longer endorses this view of intelligence and has since elaborated and refined his view of what rising IQ scores mean.
Earlier investigators had discovered rises in raw IQ test scores in some study populations, but had not published general investigations of that issue in particular. Historian Daniel C. Calhoun cited earlier psychology literature on IQ score trends in his book "The Intelligence of a People" (1973). R. L. Thorndike drew attention to rises in Stanford-Binet scores in a 1975 review of the history of intelligence testing.
There is debate about whether the rise in IQ scores also corresponds to a rise in general intelligence, or only a rise in special skills related to taking IQ tests. Because children attend school longer now and have become much more familiar with the testing of school-related material, one might expect the greatest gains to occur on such school content-related tests as vocabulary, arithmetic or general information. Just the opposite is the case: abilities such as these have experienced relatively small gains and even occasional decreases over the years. Meta-analytic findings indicate that Flynn effects occur for tests assessing both fluid and crystallized abilities. For example, Dutch conscripts gained 21 points during only 30 years, or 7 points per decade, between 1952 and 1982. But this rise in IQ test scores is not wholly explained by an increase in general intelligence. Studies have shown that while test scores have improved over time, the improvement is not fully correlated with latent factors related to intelligence. Rushton has shown that the gains in IQ over time (the Lynn-Flynn effect) are unrelated to "g". Other researchers have shown that the IQ gains described by the Flynn effect are due in part to increasing intelligence, and in part to increases in test-specific skills. In parallel with the measured gains in IQ scores, secular declines have been found for "mental speed, digit span backwards, the use of difficult words, and color acuity, all of which are related to intelligence".
A 2017 survey of 75 experts in the field of intelligence research suggested four key causes of the Flynn effect: Better health, better nutrition, more and better education, and rising standards of living. Genetic changes were seen as not important. The experts' views agreed with an independently performed meta-analysis on published Flynn effect data, except that the latter found life history speed to be the most important factor.
The experts surveyed attributed the possible end or decline of the Flynn effect to asymmetric fertility (through both genetic and socialization effects), migration, declines in education, and the influence of media.
Duration of average schooling has increased steadily. One problem with this explanation is that, when comparing older and more recent US subjects with similar educational levels, the IQ gains appear almost undiminished in each such group considered individually.
Many studies find that children who do not attend school score drastically lower on the tests than their regularly attending peers. During the 1960s, when some Virginia counties closed their public schools to avoid racial integration, compensatory private schooling was available only for Caucasian children. On average, the scores of African-American children who received no formal education during that period decreased at a rate of about six IQ points per year.
Another explanation is an increased familiarity of the general population with tests and testing. For example, children who take the very same IQ test a second time usually gain five or six points. However, this seems to set an upper limit on the effects of test sophistication. One problem with this explanation and others related to schooling is that in the US, the groups with greater test familiarity show smaller IQ increases.
Early intervention programs have shown mixed results. Some preschool (ages 3–4) intervention programs like "Head Start" do not produce lasting changes of IQ, although they may confer other benefits. The "Abecedarian Early Intervention Project", an all-day program that provided various forms of environmental enrichment to children from infancy onward, showed IQ gains that did not diminish over time. The IQ difference between the groups, although only five points, was still present at age 12. Not all such projects have been successful. Also, such IQ gains can diminish until age 18.
Citing a high correlation between rising literacy rates and gains in IQ, David Marks has argued that the Flynn effect is caused by changes in literacy rates.
Still another theory is that the general environment today is much more complex and stimulating. One of the most striking 20th-century changes of the human intellectual environment has come from the increase of exposure to many types of visual media. From pictures on the wall to movies to television to video games to computers, each successive generation has been exposed to richer optical displays than the one before and may have become more adept at visual analysis. This would explain why visual tests like the Raven's have shown the greatest increases. An increase only of particular forms of intelligence would explain why the Flynn effect has not caused a "cultural renaissance too great to be overlooked."
In 2001, Dickens and Flynn presented a model for resolving several contradictory findings regarding IQ. They argue that the measure "heritability" includes both a direct effect of the genotype on IQ and also indirect effects such that the genotype changes the environment, thereby affecting IQ. That is, those with a greater IQ tend to seek stimulating environments that further increase IQ. These reciprocal effects result in gene environment correlation. The direct effect could initially have been very small, but feedback can create large differences of IQ. In their model, an environmental stimulus can have a very great effect on IQ, even for adults, but this effect also decays over time unless the stimulus continues (the model could be adapted to include possible factors, like nutrition during early childhood, that may cause permanent effects). The Flynn effect can be explained by a generally more stimulating environment for all people. The authors suggest that any program designed to increase IQ may produce long-term IQ gains if that program teaches children how to replicate the types of cognitively demanding experiences that produce IQ gains outside the program. To maximize lifetime IQ, the programs should also motivate them to continue searching for cognitively demanding experiences after they have left the program.
Flynn in his 2007 book "What Is Intelligence?" further expanded on this theory. Environmental changes resulting from modernization—such as more intellectually demanding work, greater use of technology and smaller families—have meant that a much larger proportion of people are more accustomed to manipulating abstract concepts such as hypotheses and categories than a century ago. Substantial portions of IQ tests deal with these abilities. Flynn gives, as an example, the question 'What do a dog and a rabbit have in common?' A modern respondent might say they are both mammals (an abstract, or "a priori" answer, which depends only on the meanings of the words "dog" and "rabbit"), whereas someone a century ago might have said that humans catch rabbits with dogs (a concrete, or "a posteriori" answer, which depended on what happened to be the case at that time).
Improved nutrition is another possible explanation. Today's average adult from an industrialized nation is taller than a comparable adult of a century ago. That increase of stature, likely the result of general improvements of nutrition and health, has been at a rate of more than a centimeter per decade. Available data suggest that these gains have been accompanied by analogous increases of head size, and by an increase in the average size of the brain. This argument had been thought to suffer the difficulty that groups who tend to be of smaller overall body size (e.g. women, or people of Asian ancestry) do not have lower average IQs.
A 2005 study presented data supporting the nutrition hypothesis, which predicts that gains will occur predominantly at the low end of the IQ distribution, where nutritional deprivation is probably most severe. An alternative interpretation of skewed IQ gains could be that improved education has been particularly important for this group. Richard Lynn makes the case for nutrition, arguing that cultural factors cannot typically explain the Flynn effect because its gains are observed even at infant and preschool levels, with rates of IQ test score increase about equal to those of school students and adults. Lynn states that this "rules out improvements in education, greater test sophistication, etc. and most of the other factors that have been proposed to explain the Flynn effect", and he proposes that the most probable factor has been improvements in pre-natal and early post-natal nutrition.
A century ago, nutritional deficiencies may have limited body and organ functionality, including skull volume. The first two years of life are a critical time for nutrition, and the consequences of malnutrition can be irreversible, including poor cognitive development, educability, and future economic productivity. On the other hand, Flynn has pointed to 20-point gains on Dutch military (Raven's-type) IQ tests administered in 1952, 1962, 1972, and 1982. He observes that the Dutch 18-year-olds of 1962 had a major nutritional handicap: they were either in the womb, or recently born, during the great Dutch famine of 1944—when German troops monopolized food and 18,000 people died of starvation. Yet, concludes Flynn, "they do not show up even as a blip in the pattern of Dutch IQ gains. It is as if the famine had never occurred." It appears that the effects of diet are gradual, taking effect over decades (affecting mother as well as child) rather than over a few months.
In support of the nutritional hypothesis, it is known that, in the United States, the average height before 1900 was about 10 cm (∼4 inches) shorter than it is today. Possibly related to the Flynn effect is a similar change of skull size and shape during the last 150 years. Though the idea that brain size is unrelated to race and intelligence was popularized in the 1980s, studies continue to show significant correlations.
A Norwegian study found that height gains were strongly correlated with intelligence gains until the cessation of height gains in military conscript cohorts towards the end of the 1980s. Both height and skull size increases probably result from a combination of phenotypic plasticity and genetic selection over this period. With only five or six human generations in 150 years, time for natural selection has been very limited, suggesting that increased skeletal size resulting from changes in population phenotypes is more likely than recent genetic evolution.
It is well known that micronutrient deficiencies change the development of intelligence. For instance, one study has found that iodine deficiency causes a fall, on average, of 12 IQ points in China.
Scientists James Feyrer, Dimitra Politi, and David N. Weil have found in the U.S. that the proliferation of iodized salt increased IQ by 15 points in some areas. Journalist Max Nisen has stated that, with this type of salt becoming popular, "the aggregate effect has been extremely positive."
Daley et al. (2003) found a significant Flynn effect among children in rural Kenya, and concluded that nutrition was one of the hypothesized explanations that best explained their results (the others were parental literacy and family structure).
Eppig, Fincher, and Thornhill (2009) argue that "From an energetics standpoint, a developing human will have difficulty building a brain and fighting off infectious diseases at the same time, as both are very metabolically costly tasks" and that "the Flynn effect may be caused in part by the decrease in the intensity of infectious diseases as nations develop." They suggest that improvements in gross domestic product (GDP), education, literacy, and nutrition may have an effect on IQ mainly through reducing the intensity of infectious diseases.
Eppig, Fincher, and Thornhill (2011) in a similar study instead looking at different US states found that states with a higher prevalence of infectious diseases had lower average IQ. The effect remained after controlling for the effects of wealth and educational variation.
Atheendar Venkataramani (2010) studied the effect of malaria on IQ in a sample of Mexicans. Malaria eradication during the birth year was associated with increases in IQ. It also increased the probability of employment in a skilled occupation. The author suggests that this may be one explanation for the Flynn effect and that this may be an important explanation for the link between national malaria burden and economic development. A literature review of 44 papers states that cognitive abilities and school performance were shown to be impaired in sub-groups of patients (with either cerebral malaria or uncomplicated malaria) when compared with healthy controls. Studies comparing cognitive functions before and after treatment for acute malarial illness continued to show significantly impaired school performance and cognitive abilities even after recovery. Malaria prophylaxis was shown to improve cognitive function and school performance in clinical trials when compared to placebo groups.
Heterosis, or hybrid vigor associated with historical reductions of the levels of inbreeding, has been proposed by Michael Mingroni as an alternative explanation of the Flynn effect. However, James Flynn has pointed out that even if everyone mated with a sibling in 1900, subsequent increases in heterosis would not be a sufficient explanation of the observed IQ gains.
Jon Martin Sundet and colleagues (2004) examined scores on intelligence tests given to Norwegian conscripts between the 1950s and 2002. They found that the increase of scores of general intelligence stopped after the mid-1990s and declined in numerical reasoning sub-tests.
Teasdale and Owen (2005) examined the results of IQ tests given to Danish male conscripts. Between 1959 and 1979 the gains were 3 points per decade. Between 1979 and 1989 the increase approached 2 IQ points. Between 1989 and 1998 the gain was about 1.3 points. Between 1998 and 2004 IQ declined by about the same amount as it gained between 1989 and 1998. They speculate that "a contributing factor in this recent fall could be a simultaneous decline in proportions of students entering 3-year advanced-level school programs for 16–18-year-olds." The same authors in a more comprehensive 2008 study, again on Danish male conscripts, found that there was a 1.5-point increase between 1988 and 1998, but a 1.5-point decrease between 1998 and 2003/2004. A possible contributing factor to the more recent decline may be changes in the Danish educational system. Another may be the rising proportion of immigrants or their immediate descendants in Denmark. This is supported by data on Danish draftees where first or second generation immigrants with Danish nationality score below average.
In Australia, the IQ of 6–12 year olds as measured by the Coloured Progressive Matrices showed no increase from 1975 to 2003.
In the United Kingdom, a study by Flynn (2009) found that tests carried out in 1980 and again in 2008 show that the IQ score of an average 14-year-old dropped by more than two points over the period. For the upper half of the results, the decline was even steeper: average IQ scores dropped by six points. However, children aged between five and 10 saw their IQs increase by up to half a point a year over the three decades. Flynn argues that the abnormal drop in British teenage IQ could be due to youth culture having "stagnated" or even dumbed down. He also states that the youth culture is more oriented towards computer games than towards reading and holding conversations. Researcher Richard Gray, commenting on the study, also mentions the computer culture diminishing reading books, as well as a tendency towards teaching to the test.
Lynn and Harvey argued in 2008 that the causes of these declines are difficult to interpret, since the countries concerned had experienced significant recent immigration from countries with lower average national IQs. Nevertheless, they expect that similar patterns will occur, or have occurred, first in other developed nations and then in the developing world, as there is a limit to how much environmental factors can improve intelligence. Furthermore, over the last century there has been a negative correlation between fertility and intelligence, although there is not yet any conclusive evidence of an association between the two. They estimate that there has been a dysgenic decline in the world's genotypic IQ (masked by the Flynn effect for the phenotype) of 0.86 IQ points per decade for the years 1950–2000.
Stefansson et al. (2017) similarly argue for a decline in polygenic scores pertaining to educational attainment in Icelandic individuals born from 1910 to 1990, stating that the decline could be occurring at a rate two to three times faster than the captured rate of 0.30 IQ points per decade.
Bratsberg & Rogeberg (2018) present evidence that the Flynn effect in Norway has reversed, and that both the original rise in mean IQ scores and their subsequent decline were caused by environmental factors. They conclude that environmental factors explain all or almost all of the decline, and that the hypothesised decline in genotypic IQ is negligible, although they "cannot rule out the theoretical possibility of negative selection on a genetic component that is masked when assessed using environmentally influenced measures", i.e. they cannot rule out the decline posited by Stefansson et al.
One possible explanation of a worldwide decline in intelligence, suggested by the World Health Organization and the Forum of International Respiratory Societies' Environmental Committee, is an increase in air pollution, which now affects over 90% of the world's population.
If the Flynn effect has ended in developed nations, then this may possibly allow national differences in IQ scores to diminish if the Flynn effect continues in nations with lower average national IQs.
Also, if the Flynn effect has ended for the majority in developed nations, it may still continue for minorities, especially for groups like immigrants where many may have received poor nutrition during early childhood or have had other disadvantages. A study in the Netherlands found that children of non-Western immigrants had improvements for "g", educational achievements, and work proficiency compared to their parents, although there were still remaining differences compared to ethnic Dutch.
There is a controversy as to whether the US racial gap in IQ scores is diminishing. If that is the case then this may or may not be related to the Flynn effect. Flynn has commented that he never claimed that the Flynn effect has the same causes as the black-white gap, but that it shows that environmental factors can create IQ differences of a magnitude similar to the gap. Research that has examined whether g factor and IQ gains from the Flynn effect are related have found there is a negative correlation between the two, which may indicate that group differences and the Flynn effect are possibly due to differing causes.
The Flynn effect has also been part of the discussions regarding Spearman's hypothesis, which states that differences in the g factor are the major source of differences between blacks and whites observed in many studies of race and intelligence.
https://en.wikipedia.org/wiki?curid=11773
Field ion microscope
The field ion microscope (FIM) was invented by Erwin Müller in 1951. It is a type of microscope that can be used to image the arrangement of individual atoms at the surface of a sharp metal tip.
On October 11, 1955, Erwin Müller and his Ph.D. student, Kanwar Bahadur (Pennsylvania State University), observed individual tungsten atoms on the surface of a sharply pointed tungsten tip by cooling it to 21 K and employing helium as the imaging gas. Müller and Bahadur were the first people to observe individual atoms directly.
In FIM, a sharp (<50 nm tip radius) metal tip is produced and placed in an ultra high vacuum chamber, which is backfilled with an imaging gas such as helium or neon. The tip is cooled to cryogenic temperatures (20–100 K). A positive voltage of 5 to 10 kilovolts is applied to the tip. Gas atoms adsorbed on the tip are ionized by the strong electric field in the vicinity of the tip (thus, "field ionization"), becoming positively charged and being repelled from the tip. The curvature of the surface near the tip causes a natural magnification — ions are repelled in a direction roughly perpendicular to the surface (a "point projection" effect). A detector is placed so as to collect these repelled ions; the image formed from all the collected ions can be of sufficient resolution to image individual atoms on the tip surface.
Unlike conventional microscopes, where the spatial resolution is limited by the wavelength of the particles which are used for imaging, the FIM is a projection type microscope with atomic resolution and an approximate magnification of a few million times.
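The magnification figure follows from the point-projection geometry: an ion leaving a tip of radius r and flying to a detector at distance R is magnified by roughly M ≈ R/(βr), where β (about 1.5) is an image-compression factor accounting for the non-spherical tip shank. A rough sketch under assumed, illustrative values (neither the function name nor the specific numbers come from the text):

```python
def fim_magnification(screen_distance_m: float, tip_radius_m: float,
                      beta: float = 1.5) -> float:
    """Approximate point-projection magnification, M ~ R / (beta * r)."""
    return screen_distance_m / (beta * tip_radius_m)

# A 50 nm tip with the detector 10 cm away gives a magnification
# on the order of a million times, consistent with the text.
M = fim_magnification(0.10, 50e-9)
print(f"{M:.1e}")
```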
FIM, like field emission microscopy (FEM), consists of a sharp sample tip and a fluorescent screen (now replaced by a multichannel plate) as its key elements. However, the two techniques differ in some essential respects.
As in FEM, the field strength at the tip apex is typically a few volts per ångström (V/Å). The experimental set-up and image formation in FIM are illustrated in the accompanying figures.
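The apex field can be estimated with the commonly used approximation F ≈ V/(kr), where V is the applied voltage, r the tip radius and k (typically around 5) a field-reduction factor reflecting that the tip is not an isolated sphere. A sketch using values from the ranges given earlier in the text (the function name and the choice k = 5 are illustrative assumptions):

```python
def apex_field(voltage_v: float, tip_radius_nm: float, k: float = 5.0) -> float:
    """Approximate apex field in V/Angstrom, using F = V / (k * r)."""
    tip_radius_angstrom = tip_radius_nm * 10.0  # 1 nm = 10 Angstrom
    return voltage_v / (k * tip_radius_angstrom)

# 10 kV applied to a 50 nm tip:
print(apex_field(10_000, 50))  # 4.0 V/Angstrom, i.e. "a few V/Å"
```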
In FIM the presence of a strong field is critical. The imaging gas atoms (He, Ne) near the tip are polarized by the field and since the field is non-uniform the polarized atoms are attracted towards the tip surface. The imaging atoms then lose their kinetic energy performing a series of hops and accommodate to the tip temperature. Eventually, the imaging atoms are ionized by tunneling electrons into the surface and the resulting positive ions are accelerated along the field lines to the screen to form a highly magnified image of the sample tip.
In FIM, the ionization takes place close to the tip, where the field is strongest. The electron that tunnels from the atom is picked up by the tip. There is a critical distance, x_c, at which the tunneling probability is a maximum; this distance is typically about 0.4 nm. The very high spatial resolution and high contrast for features on the atomic scale arise from the fact that the electric field is enhanced in the vicinity of the surface atoms because of the higher local curvature. The resolution of FIM is limited by the thermal velocity of the imaging ion. Resolution of the order of 1 Å (atomic resolution) can be achieved by effective cooling of the tip.
Application of FIM, like FEM, is limited to materials which can be fabricated in the shape of a sharp tip, can be used in an ultra-high-vacuum (UHV) environment, and can tolerate high electrostatic fields. For these reasons, refractory metals with high melting temperature (e.g. W, Mo, Pt, Ir) are conventional objects for FIM experiments. Metal tips for FEM and FIM are prepared by electropolishing (electrochemical polishing) of thin wires. However, these tips usually contain many asperities. The final preparation procedure involves the in situ removal of these asperities by field evaporation, simply by raising the tip voltage. Field evaporation is a field-induced process which involves the removal of atoms from the surface itself at very high field strengths and typically occurs in the range 2–5 V/Å. The effect of the field in this case is to reduce the effective binding energy of the atom to the surface and so to give a greatly increased evaporation rate relative to that expected at the same temperature at zero field. This process is self-regulating, since atoms at positions of high local curvature, such as adatoms or ledge atoms, are removed preferentially. The tips used in FIM are sharper (tip radius 100–300 Å) than those used in FEM experiments (tip radius ~1000 Å).
FIM has been used to study the dynamical behavior of surfaces and the behavior of adatoms on surfaces. The problems studied include adsorption-desorption phenomena, surface diffusion of adatoms and clusters, adatom-adatom interactions, step motion, equilibrium crystal shape, etc. However, there is the possibility of the results being affected by the limited surface area (i.e. edge effects) and by the presence of large electric fields.
https://en.wikipedia.org/wiki?curid=11774
First Battle of El Alamein
The First Battle of El Alamein (1–27 July 1942) was a battle of the Western Desert Campaign of the Second World War, fought in Egypt between the Axis forces (Germany and Italy) of the Panzer Army Africa ("Panzerarmee Afrika"), which included the "Afrika Korps" under Field Marshal ("Generalfeldmarschall") Erwin Rommel, and the Allied (British Imperial and Commonwealth) forces (Britain, British India, Australia, South Africa and New Zealand) of the Eighth Army (General Claude Auchinleck).
The British prevented a second advance by the Axis forces into Egypt. The Axis positions near El Alamein, dangerously close to Alexandria, threatened the ports and cities of Egypt, the base facilities of the Commonwealth forces and the Suez Canal. However, the Axis forces were too far from their base at Tripoli in Libya to remain at El Alamein indefinitely, which led both sides to accumulate supplies for further offensives, against the constraints of time and distance.
Following their defeat at the Battle of Gazala in eastern Libya in June 1942, the British Eighth Army, commanded by Lieutenant-General Neil Ritchie, had retreated east from the Gazala line into north-western Egypt as far as Mersa Matruh, well inside the Egyptian border. Ritchie had decided not to hold the defences on the Egyptian border, because the defensive plan there called for infantry to hold defended localities with a strong armoured force behind them to meet any attempt to penetrate or outflank the fixed defences. Since Ritchie had virtually no armoured units left fit to fight, the infantry positions would have been defeated in detail. The Matruh defence plan also included an armoured reserve, but in its absence Ritchie believed he could organise his infantry to cover the minefields between the defended localities and prevent Axis engineers from having undisturbed access to them.
To defend the Matruh line, Ritchie placed the 10th Indian Infantry Division (in Matruh itself) and the 50th (Northumbrian) Infantry Division (down the coast at Gerawla) under X Corps HQ, newly arrived from Syria. Inland from X Corps would be XIII Corps, with the 5th Indian Infantry Division (with only one infantry brigade, the 29th Indian, and two artillery regiments) around Sidi Hamza, some distance inland, the newly arrived 2nd New Zealand Division at Minqar Qaim (on the escarpment inland) and the 1st Armoured Division in the open desert to the south. (The New Zealand division was short one brigade, the 6th, which had been left out of combat so that, should the division be captured, it could serve as the nucleus of a new division.) The 1st Armoured Division had taken over the 4th and 22nd Armoured Brigades from the 7th Armoured Division, which by this time had only three tank regiments (battalions) between them.
On 25 June, General Claude Auchinleck, Commander-in-Chief (C-in-C) Middle East Command, relieved Ritchie and assumed direct command of the Eighth Army himself. He decided not to seek a decisive confrontation at the Mersa Matruh position, concluding that his inferiority in armour after the Gazala defeat meant he would be unable to prevent Rommel either breaking through his centre or enveloping his open left flank to the south, as Rommel had done at Gazala. He decided instead to employ delaying tactics while withdrawing further east to a more defensible position near El Alamein on the Mediterranean coast. To the south of El Alamein, the steep slopes of the Qattara Depression ruled out the possibility of Axis armour moving around the southern flank of his defences and limited the width of the front he had to defend.
While preparing the Alamein positions, Auchinleck fought strong delaying actions, first at Mersa Matruh on 26–27 June and then at Fuka on 28 June. The late change of orders caused some confusion in the forward formations (X Corps and XIII Corps) between the desire to inflict damage on the enemy and the intention not to get trapped in the Matruh position but to retreat in good order. The result was poor co-ordination between the two forward corps and the units within them. Late on 26 June, the German 90th Light and 21st "Panzer" Divisions managed to find their way through the minefields in the centre of the front. Early on 27 June, resuming its advance, the 90th Light was checked by the British 50th Division's artillery. Meanwhile, the 15th and 21st "Panzer" Divisions advanced east above and below the escarpment. The 15th "Panzer" was blocked by the 4th Armoured and 7th Motor Brigades, but the 21st "Panzer" was ordered on to attack Minqar Qaim. Rommel ordered the 90th Light to resume its advance, requiring it to cut the coast road behind the 50th Division by the evening. As the 21st "Panzer" moved on Minqar Qaim, the 2nd New Zealand Division found itself surrounded, but broke out on the night of 27/28 June without serious losses and withdrew east.
Auchinleck had planned a second delaying position at Fuka, east of Matruh, and at 21:20 he issued the orders for a withdrawal to Fuka. Confusion in communication led to the division withdrawing immediately to the El Alamein position. X Corps, having made an unsuccessful attempt to secure a position on the escarpment, was out of touch with Eighth Army from 19:30 until 04:30 the next morning. Only then did it discover that the withdrawal order had been given. The withdrawal of XIII Corps had left the southern flank of X Corps on the coast at Matruh exposed and its line of retreat compromised by the cutting of the coastal road east of Matruh. X Corps was ordered to break out southwards into the desert and then make its way east. Auchinleck ordered XIII Corps to provide support, but it was in no position to do so. At 21:00 on 28 June, X Corps, organised into brigade groups, headed south. In the darkness there was considerable confusion as they came across enemy units laagered for the night. In the process, the 5th Indian Division in particular sustained heavy casualties, including the destruction of the 29th Indian Infantry Brigade at Fuka. Axis forces captured more than 6,000 prisoners, in addition to 40 tanks and an enormous quantity of supplies.
Alamein itself was an inconsequential railway station on the coast. To the south lay the Ruweisat Ridge, a low stony prominence that gave excellent observation for many miles over the surrounding desert; further south still was the Qattara Depression. The line the British chose to defend stretched between the sea and the Depression, which meant that Rommel could outflank it only by taking a significant detour to the south and crossing the Sahara Desert. The British Army in Egypt had recognised this before the war and had the Eighth Army begin construction of several "boxes" (localities with dug-outs, surrounded by minefields and barbed wire), the most developed being around the railway station at Alamein. Most of the "line" was open, empty desert. Lieutenant-General William Norrie (General Officer Commanding [GOC] XXX Corps) organised the position and started to construct three defended "boxes". The first and strongest, at El Alamein on the coast, had been partly wired and mined by the 1st South African Division. The Bab el Qattara box, inland from the coast and south-west of the Ruweisat Ridge, had been dug but not wired or mined, while at the Naq Abu Dweis box, on the edge of the Qattara Depression farthest from the coast, very little work had been done.
The British position in Egypt was desperate; the rout from Mersa Matruh had created a panic in the British headquarters at Cairo, something later called "the Flap". On what came to be referred to as "Ash Wednesday", at British headquarters, rear-echelon units and the British Embassy, papers were hurriedly burned in anticipation of the fall of the city. Auchinleck, although believing he could stop Rommel at Alamein, felt he could not ignore the possibility that he might once more be outmanoeuvred or outfought. To maintain his army, plans had to be made for a possible further retreat, while maintaining morale and retaining the support and co-operation of the Egyptians. Defensive positions were constructed west of Alexandria and on the approaches to Cairo, while considerable areas of the Nile delta were flooded. The Axis, too, believed that the capture of Egypt was imminent; Italian leader Benito Mussolini, sensing a historic moment, flew to Libya to prepare for his triumphal entry into Cairo.
The scattering of X Corps at Mersa Matruh disrupted Auchinleck's plan for occupying the Alamein defences. On 29 June, he ordered XXX Corps (the 1st South African, 5th Indian and 10th Indian Divisions) to take the coastal sector on the right of the front and XIII Corps (the 2nd New Zealand and 4th Indian Divisions) the left. The remains of the 1st Armoured Division and the 7th Armoured Division were to be held as a mobile army reserve. His intention was for the fixed defensive positions to channel and disorganise the enemy's advance while the mobile units attacked their flanks and rear.
On 30 June, Rommel's "Panzerarmee Afrika" approached the Alamein position. The Axis forces were exhausted and understrength. Rommel had driven them forward ruthlessly, being confident that, provided he struck quickly before Eighth Army had time to settle, his momentum would take him through the Alamein position and he could then advance to the Nile with little further opposition. Supplies remained a problem because the Axis staff had originally expected a pause of six weeks after the capture of Tobruk. German air units were also exhausted and providing little help against the RAF's all-out attack on the Axis supply lines which, with the arrival of United States Army Air Forces (USAAF) heavy bombers, could reach as far as Benghazi. Although captured supplies proved useful, water and ammunition were constantly in short supply, while a shortage of transport impeded the distribution of the supplies that the Axis forces did have.
Rommel's plan was for the 90th Light Division and the 15th and 21st "Panzer" divisions of the "Afrika Korps" to penetrate the Eighth Army lines between the Alamein box and Deir el Abyad (which he believed was defended). The 90th Light Division was then to veer north to cut the coastal road and trap the defenders of the Alamein box (which Rommel thought was occupied by the remains of the 50th Infantry Division) and the "Afrika Korps" would veer right to attack the rear of XIII Corps.
An Italian division was to attack the Alamein box from the west and another was to follow the 90th Light Division. The Italian XX Corps was to follow the "Afrika Korps" and deal with the Qattara box while the 133rd Armoured Division "Littorio" and German reconnaissance units would protect the right flank. Rommel had planned to attack on 30 June but supply and transport difficulties had resulted in a day's delay, vital to the defending forces reorganising on the Alamein line. On 30 June, the 90th Light Division was still short of its start line, 21st "Panzer" Division was immobilised through lack of fuel and the promised air support had yet to move into its advanced airfields.
At 03:00 on 1 July, the 90th Light Division advanced east but strayed too far north, ran into the 1st South African Division's defences and was pinned down. The 15th and 21st "Panzer" Divisions of the "Afrika Korps" were delayed by a sandstorm and then a heavy air attack. It was broad daylight by the time they circled round the back of Deir el Abyad, where they found the feature to the east of it occupied by 18th Indian Infantry Brigade which, after a hasty journey from Iraq, had occupied the exposed position just west of Ruweisat Ridge and east of Deir el Abyad at Deir el Shein late on 28 June to create one of Norrie's additional defensive boxes.
At about 10:00 on 1 July, 21st "Panzer" Division attacked Deir el Shein. 18th Indian Infantry Brigade—supported by twenty-three 25-pounder gun-howitzers, sixteen of the new 6-pounder anti-tank guns and nine Matilda tanks—held out the whole day in desperate fighting but by evening the Germans succeeded in overrunning them. The time they bought allowed Auchinleck to organise the defence of the western end of Ruweisat Ridge. The 1st Armoured Division had been sent to intervene at Deir el Shein. They ran into 15th "Panzer" Division just south of Deir el Shein and drove it west. By the end of the day's fighting, the "Afrika Korps" had 37 tanks left out of its initial complement of 55.
During the early afternoon, 90th Light had extricated itself from the El Alamein box defences and resumed its move eastward. It came under artillery fire from the three South African brigade groups and was forced to dig in.
On 2 July, Rommel ordered the resumption of the offensive. Once again, 90th Light failed to make progress so Rommel ordered the "Afrika Korps" to abandon its planned sweep southward and instead join the effort to break through to the coast road by attacking east toward Ruweisat Ridge. The British defence of Ruweisat Ridge relied on an improvised formation called "Robcol", comprising a regiment each of field artillery and light anti-aircraft artillery and a company of infantry. Robcol—in line with normal British Army practice for "ad hoc" formations—was named after its commander, Brigadier Robert Waller, the Commander Royal Artillery of the 10th Indian Infantry Division. Robcol was able to buy time, and by late afternoon the two British armoured brigades joined the battle, 4th Armoured Brigade engaging 15th "Panzer" and 22nd Armoured Brigade engaging 21st "Panzer". They drove back repeated attacks by the Axis armour, which then withdrew before dusk. The British reinforced Ruweisat on the night of 2 July. The now enlarged Robcol became "Walgroup". Meanwhile, the Royal Air Force (RAF) made heavy air attacks on the Axis units.
The next day, 3 July, Rommel ordered the "Afrika Korps" to resume its attack on the Ruweisat ridge with the Italian XX Motorised Corps on its southern flank. Italian X Corps, meanwhile, was to hold El Mreir. By this stage the "Afrika Korps" had only 26 operational tanks. There was a sharp armoured exchange south of Ruweisat ridge during the morning and the main Axis advance was held. On 3 July, the RAF flew 780 sorties.
To relieve the pressure on the right and centre of the Eighth Army line, XIII Corps on the left advanced from the Qattara box (known to the New Zealanders as the Kaponga box). The plan was that the New Zealand 2nd Division—with the remains of Indian 5th Division and 7th Motor Brigade under its command—would swing north to threaten the Axis flank and rear. This force encountered the "Ariete" Armoured Division's artillery, which was driving on the southern flank of the division as it attacked Ruweisat. The Italian commander ordered his battalions to fight their way out independently but the "Ariete" lost 531 men (about 350 were prisoners), 36 pieces of artillery, six tanks and 55 trucks. By the end of the day, the "Ariete" Division had only five tanks. The day ended once again with the "Afrika Korps" and "Ariete" coming off second best to the superior numbers of the British 22nd Armoured and 4th Armoured Brigades, frustrating Rommel's attempts to resume his advance. The RAF once again played its part, flying 900 sorties during the day.
To the south, on 5 July the New Zealand group resumed its advance northwards towards El Mreir, intending to cut the rear of the "Ariete" Division. Heavy fire from the Italian "Brescia" Motorised Division at El Mreir, north of the Qattara box, however, checked their progress and led XIII Corps to call off its attack.
At this point, Rommel decided his exhausted forces could make no further headway without resting and regrouping. He reported to the German High Command that his three German divisions numbered just 1,200–1,500 men each and resupply was proving highly problematic because of enemy interference from the air. He expected to have to remain on the defensive for at least two weeks.
Rommel was by this time suffering from the extended length of his supply lines. The Allied Desert Air Force (DAF) was concentrating fiercely on his fragile and elongated supply routes while British mobile columns moving west and striking from the south were causing havoc in the Axis rear echelons. Rommel could afford these losses even less since shipments from Italy had been substantially reduced: in June he received far less supply tonnage than in May, and only 400 vehicles compared with 2,000 in May. Meanwhile, the Eighth Army was reorganising and rebuilding, benefiting from its short lines of communication. By 4 July, the Australian 9th Division had entered the line in the north, and on 9 July the Indian 5th Infantry Brigade also returned, taking over the Ruweisat position. At the same time, the fresh Indian 161st Infantry Brigade reinforced the depleted Indian 5th Infantry Division.
On 8 July, Auchinleck ordered the new XXX Corps commander—Lieutenant-General William Ramsden—to capture the low ridges at Tel el Eisa and Tel el Makh Khad and then to push mobile battle groups south toward Deir el Shein and raiding parties west toward the airfields at El Daba. Meanwhile, XIII Corps would prevent the Axis from moving troops north to reinforce the coastal sector. Ramsden tasked the Australian 9th Division with 44th Royal Tank Regiment under command with the Tel el Eisa objective and the South African 1st Division with eight supporting tanks, Tel el Makh Khad. The raiding parties were to be provided by 1st Armoured Division.
Following a bombardment which started at 03:30 on 10 July, the Australian 26th Brigade launched an attack against the ridge north of Tel el Eisa station along the coast (Trig 33). The bombardment was the heaviest barrage yet experienced in North Africa, which created panic in the inexperienced soldiers of the Italian 60th Infantry Division "Sabratha" who had only just occupied sketchy defences in the sector. The Australian attack took more than 1,500 prisoners, routed an Italian Division and overran the German Signals Intercept Company 621. Meanwhile, the South Africans had by late morning taken Tel el Makh Khad and were in covering positions.
Elements of the German 164th Light Division and Italian 101st Motorised Division "Trieste" arrived to plug the gap torn in the Axis defences. That afternoon and evening, tanks from the German 15th "Panzer" and Italian "Trieste" Divisions launched counter-attacks against the Australian positions, the counter-attacks failing in the face of overwhelming Allied artillery and the Australian anti-tank guns.
At first light on 11 July, the Australian 2/24th Battalion supported by tanks from 44th Royal Tank Regiment attacked the western end of Tel el Eisa hill (Point 24). By early afternoon, the feature was captured and was then held against a series of Axis counter-attacks throughout the day. A small column of armour, motorised infantry, and guns then set off to raid Deir el Abyad and caused a battalion of Italian infantry to surrender. Its progress was checked at the Miteirya ridge and it was forced to withdraw that evening to the El Alamein box. During the day, more than 1,000 Italian prisoners were taken.
On 12 July, the 21st "Panzer" Division launched a counter-attack against Trig 33 and Point 24, which was beaten off after a 2½-hour fight, with more than 600 German dead and wounded left strewn in front of the Australian positions. The next day, 21st "Panzer" Division launched an attack against Point 33 and South African positions in the El Alamein box. The attack was halted by intense artillery fire from the defenders. Rommel was still determined to drive the British forces from the northern salient. Although the Australian defenders had been forced back from Point 24, heavy casualties had been inflicted on 21st "Panzer" Division. Another attack was mounted on 15 July but made no ground against tenacious resistance. On 16 July, the Australians—supported by British tanks—launched an attack to try to take Point 24 but were forced back by German counter-attacks, suffering nearly fifty percent casualties.
After seven days of fierce fighting, the battle in the north for Tel el Eisa salient petered out. Australian 9th Division estimated at least 2,000 Axis troops had been killed and more than 3,700 prisoners of war taken in the battle. Possibly the most important feature of the battle, however, was that the Australians had captured Signals Intercept Company 621. This unit had provided Rommel with priceless intelligence, gleaned from intercepting British radio communications. That source of intelligence was now lost to Rommel.
As the Axis forces dug in, Auchinleck—having drawn a number of German units to the coastal sector during the Tel el Eisa fighting—developed a plan—codenamed Operation Bacon—to attack the Italian "Pavia" and "Brescia" Divisions in the centre of the front at the Ruweisat ridge. Signals intelligence was giving Auchinleck clear details of the Axis order of battle and force dispositions. His policy was to "...hit the Italians wherever possible in view of their low morale and because the Germans cannot hold extended fronts without them."
The intention was for the 4th New Zealand Brigade and 5th New Zealand Brigade (on 4th Brigade's right) to attack north-west to seize the western part of the ridge and on their right the Indian 5th Infantry Brigade to capture the eastern part of the ridge in a night attack. Then 2nd Armoured Brigade would pass through the centre of the infantry objectives to exploit toward Deir el Shein and the Miteirya Ridge. On the left, the 22nd Armoured Brigade would be ready to move forward to protect the infantry as they consolidated on the ridge.
The attack commenced at 23:00 on 14 July. The two New Zealand brigades took their objectives shortly before dawn on 15 July, but minefields and pockets of resistance created disarray among the attackers. A number of pockets of resistance were left behind the forward troops' advance, which impeded the move forward of reserves, artillery and support arms. As a result, the New Zealand brigades occupied exposed positions on the ridge without support weapons except for a few anti-tank guns. More significantly, communications with the two British armoured brigades failed, and the British armour did not move forwards to protect the infantry. At first light, a detachment from 15th "Panzer" division's 8th "Panzer" Regiment launched a counter-attack against New Zealand 4th Brigade's 22nd Battalion. A sharp exchange knocked out their anti-tank guns and the infantry found themselves exposed in the open with no alternative but to surrender. About 350 New Zealanders were taken prisoner.
While the 2nd New Zealand Division attacked the western slopes of Ruweisat Ridge, the Indian 5th Brigade made small gains on Ruweisat ridge to the east. By 07:00, word finally reached 2nd Armoured Brigade, which started to move north-west. Two regiments became embroiled in a minefield but the third was able to join Indian 5th Infantry Brigade as it renewed its attack. With the help of the armour and artillery, the Indians were able to take their objectives by early afternoon. Meanwhile, the 22nd Armoured Brigade had been engaged at Alam Nayil by 90th Light Division and the "Ariete" Armoured Division, advancing from the south. While—with help from mobile infantry and artillery columns from 7th Armoured Division—they pushed back the Axis probe with ease, they were prevented from advancing north to protect the New Zealand flank.
Seeing the "Brescia" and "Pavia" under pressure, Rommel rushed German troops to Ruweisat. By 15:00, the 3rd Reconnaissance Regiment and part of 21st "Panzer" Division from the north and 33rd Reconnaissance Regiment and the Baade Group comprising elements from 15th "Panzer" Division from the south were in place under Lieutenant-General ("General der Panzertruppe") Walther Nehring. At 17:00, Nehring launched his counter-attack. 4th New Zealand Brigade were still short of support weapons and also, by this time, ammunition. Once again, the anti-tank defences were overwhelmed and about 380 New Zealanders were taken prisoner including Captain Charles Upham who gained a second Victoria Cross for his actions including destroying a German tank and several guns and vehicles with grenades despite being shot through the elbow by a machine gun bullet and having his arm broken. At about 18:00, the brigade HQ was overrun. At about 18:15, 2nd Armoured Brigade engaged the German armour and halted the Axis eastward advance. At dusk, Nehring broke off the action.
Early on 16 July, Nehring renewed his attack. The 5th Indian Infantry Brigade pushed them back but it was clear from intercepted radio traffic that a further attempt would be made. Strenuous preparations to dig in anti-tank guns were made, artillery fire plans organised and a regiment from the 22nd Armoured Brigade was sent to reinforce the 2nd Armoured Brigade. When the attack resumed late in the afternoon, it was repulsed. After the battle, the Indians counted 24 knocked out tanks, as well as armoured cars and numerous anti-tank guns left on the battlefield.
In three days' fighting, the Allies took more than 2,000 Axis prisoners, mostly from the Italian "Brescia" and "Pavia" Divisions; the New Zealand division suffered 1,405 casualties. The fighting at Tel el Eisa and Ruweisat had caused the destruction of three Italian divisions, forced Rommel to redeploy his armour from the south, made it necessary to lay minefields in front of the remaining Italian divisions and stiffen them with detachments of German troops.
To relieve pressure on Ruweisat ridge, Auchinleck ordered the Australian 9th Division to make another attack from the north. In the early hours of 17 July, the Australian 24th Brigade—supported by 44th Royal Tank Regiment (RTR) and strong fighter cover from the air—assaulted Miteirya ridge (known as "Ruin ridge" to the Australians). The initial night attack went well, with 736 prisoners taken, mostly from the Italian "Trento" and "Trieste" motorised divisions. Once again, however, a critical situation for the Axis forces was retrieved by vigorous counter-attacks from hastily assembled German and Italian forces, which forced the Australians to withdraw back to their start line with 300 casualties. Although the Australian Official History describes the counter-attack force that overran the 24th Brigade's 2/32nd Battalion as "German", the Australian historian Mark Johnston reports that German records indicate it was the "Trento" Division.
The Eighth Army now enjoyed a massive superiority in material over the Axis forces: 1st Armoured Division had 173 tanks and more in reserve or in transit, including 61 Grants while Rommel possessed only 38 German tanks and 51 Italian tanks although his armoured units had some 100 tanks awaiting repair.
Auchinleck's plan was for the Indian 161st Infantry Brigade to attack along Ruweisat ridge to take Deir el Shein, while the New Zealand 6th Brigade attacked from south of the ridge to the El Mreir depression. At daylight, two British armoured brigades—2nd Armoured Brigade and the fresh 23rd Armoured Brigade—would sweep through the gap created by the infantry. The plan was complicated and ambitious.
The infantry attack began at 16:30 on 21 July. The New Zealand attack took its objectives in the El Mreir depression but, once again, many vehicles failed to arrive and the infantry were short of support arms in an exposed position. At daybreak on 22 July, the British armoured brigades again failed to advance, and Nehring's 5th and 8th "Panzer" Regiments responded with a rapid counter-attack which quickly overran the New Zealand infantry in the open, inflicting more than 900 casualties. 2nd Armoured Brigade sent forward two regiments to help but they were halted by mines and anti-tank fire.
The attack by Indian 161st Brigade had mixed fortunes. On the left, the initial attempt to clear the western end of Ruweisat failed but at 08:00 a renewed attack by the reserve battalion succeeded. On the right, the attacking battalion broke into the Deir el Shein position but was driven back in hand-to-hand fighting.
Compounding the disaster at El Mreir, at 08:00 the commander of 23rd Armoured Brigade ordered his brigade forward, intent on following his orders to the letter. Major-General Gatehouse—commanding 1st Armoured Division—had been unconvinced that a path had been adequately cleared in the minefields and had suggested the advance be cancelled. However, XIII Corps commander—Lieutenant-General William Gott—rejected this and ordered the attack but on a centre line south of the original plan which he incorrectly believed was mine-free. These orders failed to get through and the attack went ahead as originally planned. The brigade found itself mired in minefields and under heavy fire. They were then counter-attacked by 21st "Panzer" at 11:00 and forced to withdraw. The 23rd Armoured Brigade was effectively destroyed, with 40 tanks destroyed and 47 badly damaged.
At 17:00, Gott ordered 5th Indian Infantry Division to execute a night attack to capture the western half of Ruweisat ridge and Deir el Shein. 3/14th Punjab Regiment from 9th Indian Infantry Brigade attacked at 02:00 on 23 July but failed as they lost their direction. A further attempt in daylight succeeded in breaking into the position but intense fire from three sides resulted in control being lost as the commanding officer was killed, and four of his senior officers were wounded or went missing.
To the north, Australian 9th Division continued its attacks. At 06:00 on 22 July, Australian 26th Brigade attacked Tel el Eisa and Australian 24th Brigade attacked Tel el Makh Khad toward Miteirya (Ruin Ridge). It was during this fighting that Arthur Stanley Gurney performed the actions for which he was posthumously awarded the Victoria Cross. The fighting for Tel el Eisa was costly, but by the afternoon the Australians controlled the feature. That evening, Australian 24th Brigade attacked Tel el Makh Khad with the tanks of 50th RTR in support. The tank unit had not been trained in close infantry support and failed to co-ordinate with the Australian infantry. The result was that the infantry and armour advanced independently, and on reaching the objective 50th RTR lost 23 tanks for lack of infantry support.
Once more, the Eighth Army had failed to destroy Rommel's forces, despite its overwhelming superiority in men and equipment. On the other hand, for Rommel the situation continued to be grave as, despite successful defensive operations, his infantry had suffered heavy losses and he reported that "the situation is critical in the extreme".
On 26/27 July, Auchinleck launched Operation Manhood in the northern sector in a final attempt to break the Axis forces. XXX Corps was reinforced with 1st Armoured Division (less 22nd Armoured Brigade), 4th Light Armoured Brigade, and 69th Infantry Brigade. The plan was to break the enemy line south of Miteirya ridge and exploit north-west. The South Africans were to make and mark a gap in the minefields to the south-east of Miteirya by midnight of 26/27 July. By 01:00 on 27 July, 24th Australian Infantry Brigade was to have captured the eastern end of the Miteirya ridge and would exploit toward the north-west. The 69th Infantry Brigade would pass through the minefield gap created by the South Africans to Deir el Dhib and clear and mark gaps in further minefields. The 2nd Armoured Brigade would then pass through to El Wishka and would be followed by 4th Light Armoured Brigade which would attack the Axis lines of communication.
This was the third attempt to break through in the northern sector, and the Axis defenders were expecting the attack. Like the previous attacks, it was hurriedly and therefore poorly planned. The Australian 24th Brigade managed to take their objectives on Miteirya Ridge by 02:00 on 27 July. To the south, the British 69th Brigade set off at 01:30 and managed to take their objectives by about 08:00. However, the supporting anti-tank units became lost in the darkness or delayed by minefields, leaving the attackers isolated and exposed when daylight came. There followed a period during which reports from the battlefront regarding the minefield gaps were confused and conflicting. As a consequence, the advance of 2nd Armoured Brigade was delayed. Rommel launched an immediate counter-attack and the German armoured battlegroups overran the two forward battalions of 69th Brigade. Meanwhile, 50th RTR supporting the Australians was having difficulty locating the minefield gaps made by Australian 2/24th Battalion. They failed to find a route through and in the process were caught by heavy fire and lost 13 tanks. The unsupported 2/28th Australian battalion on the ridge was overrun. The 69th Brigade suffered 600 casualties and the Australians 400 for no gain.
The Eighth Army was exhausted, and on 31 July Auchinleck ordered an end to offensive operations and the strengthening of the defences to meet a major counter-offensive.
Rommel later blamed the failure to break through to the Nile on the drying-up of his army's sources of supply. He complained bitterly about the failure of important Italian convoys to get through with desperately needed tanks and supplies, always blaming the Italian Supreme Command and never suspecting British code-breaking.
According to Dr James Sadkovich and others, Rommel often displayed a distinct tendency to blame and scapegoat his Italian allies to cover up his own mistakes and deficiencies as a commander in the field. For example, while Rommel was a very good tactical commander, the Italian and German High Commands were concerned that he lacked operational awareness and a sense of strategic objectives. Dr Sadkovich points out that he would often out-run his logistics and squander valuable (mostly Italian) military hardware and resources in battle after battle without clear strategic goals and an appreciation of the limited logistics his Italian allies were desperately trying to provide him.
The battle was a stalemate, but it had halted the Axis advance on Alexandria (and then Cairo and ultimately the Suez Canal). The Eighth Army had suffered over 13,000 casualties in July, including 4,000 in the 2nd New Zealand Division, 3,000 in the 5th Indian Infantry Division and 2,552 battle casualties in the 9th Australian Division but had taken 7,000 prisoners and inflicted heavy damage on Axis men and machines. In his appreciation of 27 July, Auchinleck wrote that the Eighth Army would not be ready to attack again until mid-September at the earliest. He believed that because Rommel understood that with the passage of time the Allied situation would only improve, he was compelled to attack as soon as possible and before the end of August when he would have superiority in armour. Auchinleck therefore made plans for a defensive battle.
In early August, Winston Churchill and General Sir Alan Brooke—the Chief of the Imperial General Staff (CIGS)—visited Cairo on their way to meet Joseph Stalin in Moscow. They decided to replace Auchinleck, appointing the XIII Corps commander, William Gott, to the Eighth Army command and General Sir Harold Alexander as C-in-C Middle East Command. Persia and Iraq were to be split from Middle East Command as a separate Persia and Iraq Command and Auchinleck was offered the post of C-in-C (which he refused). Gott was killed on the way to take up his command when his aircraft was shot down. Lieutenant-General Bernard Montgomery was appointed in his place and took command on 13 August.
https://en.wikipedia.org/wiki?curid=11775
First Italo-Ethiopian War
The First Italo-Ethiopian War was fought between Italy and Ethiopia from 1895 to 1896. It originated from the disputed Treaty of Wuchale, which the Italians claimed turned Ethiopia into an Italian protectorate. Full-scale war broke out in 1895, with Italian troops from Italian Eritrea having initial success until Ethiopian troops counterattacked Italian positions and besieged the Italian fort of Mekele, forcing its surrender.
Italian defeat came about after the Battle of Adwa, where the Ethiopian army dealt the heavily outnumbered Italian soldiers and Eritrean askaris a decisive blow and forced their retreat back into Eritrea. Some Eritreans, regarded as traitors by the Ethiopians, were captured and mutilated. The war concluded with the Treaty of Addis Ababa. As one of the first decisive victories by African forces over a European colonial power, the war became a preeminent symbol of pan-Africanism and secured Ethiopia's sovereignty until 1936.
The Khedive of Egypt, Isma'il Pasha, better known as "Isma'il the Magnificent", had conquered Eritrea as part of his efforts to give Egypt an African empire. Isma'il had tried to follow that conquest by subduing Ethiopia as well, but the Egyptian attempts to conquer that realm ended in humiliating defeat. After Egypt's bankruptcy in 1876, followed by the "Ansar" revolt under the leadership of the Mahdi in 1881, the Egyptian position in Eritrea was hopeless, with the Egyptian forces cut off and unpaid for years. By 1884 the Egyptians had begun to pull out of both Sudan and Eritrea.
Egypt had been very much in the French sphere of influence until 1882 when Britain occupied Egypt. A major goal of French foreign policy until 1904 was to diminish British power in Egypt and restore it to its place in the French sphere of influence, and in 1883 the French created the colony of French Somaliland which allowed for the establishment of a French naval base at Djibouti on the Red Sea. The opening of the Suez Canal in 1869 had turned the Horn of Africa into a very strategic region as a navy based in the Horn could interdict any shipping going up and down the Red Sea. By building naval bases on the Red Sea that could intercept British shipping in the Red Sea, the French hoped to reduce the value of the Suez Canal for the British, and thus lever them out of Egypt. A French historian in 1900 wrote: "The importance of Djibouti lies almost solely in the uniqueness of its geographic position, which makes it a port of transit and natural entrepôt for areas infinitely more populated than its own territory...the rich provinces of central Ethiopia." The British historian Harold Marcus noted that for the French: "Ethiopia represented the entrance to the Nile valley; if she could obtain hegemony over Ethiopia, her dream of a west to east French African empire would be closer to reality". In response, Britain consistently supported Italian ambitions in the Horn of Africa as the best way of keeping the French out.
On 3 June 1884, the Hewett Treaty was signed between Britain, Egypt and Ethiopia that allowed the Ethiopians to occupy parts of Eritrea and allowed the Ethiopian goods to pass in and out of Massawa duty-free. From the viewpoint of Britain, it was highly undesirable that the French replace the Egyptians in Eritrea as that would allow the French to have more naval bases on the Red Sea that could interfere with British shipping using the Suez Canal, and as the British did not want the financial burden of ruling Eritrea, they looked for another power to replace the Egyptians. The Hewett treaty seemed to suggest that Eritrea would fall into the Ethiopian sphere of influence as the Egyptians pulled out. After initially encouraging the Emperor Yohannes IV to move into Eritrea to replace the Egyptians, London decided to have the Italians move into Eritrea. In his history of Ethiopia, Augustus Wylde wrote: "England made use of King John [Emperor Yohannes] as long as he was of any service and then threw him over to the tender mercies of Italy...It is one of our worst bits of business out of the many we have been guilty of in Africa...one of the vilest bits of treachery". After the French had unexpectedly made Tunis into their protectorate in 1881, outraging opinion in Italy over the so-called ""Schiaffo di Tunisi"" (the "slap of Tunis"), Italian foreign policy had been extremely anti-French, and from the British viewpoint the best way of ensuring the Eritrean ports on the Red Sea stayed out of French hands was by having the staunchly anti-French Italians move in. In 1882, Italy had joined the Triple Alliance, allying herself with Austria and Germany against France.
On 5 February 1885 Italian troops landed at Massawa to replace the Egyptians. The Italian government for its part was more than happy to embark upon an imperialist policy to distract its people from the failings in post "Risorgimento" Italy. In 1861, the unification of Italy was supposed to mark the beginning of a glorious new era in Italian life, and many Italians were gravely disappointed to find that not much had changed in the new Kingdom of Italy with the vast majority of Italians still living in abject poverty. To compensate, a chauvinist mood was rampant among the upper classes in Italy with the newspaper "Il Diritto" writing in an editorial: "Italy must be ready. The year 1885 will decide her fate as a great power. It is necessary to feel the responsibility of the new era; to become again strong men afraid of nothing, with the sacred love of the fatherland, of all Italy, in our hearts". On the Ethiopian side, the wars that Emperor Yohannes had waged first against the invading Egyptians in the 1870s and then more so against the Sudanese "Mahdiyya" state in the 1880s had been presented by him to his subjects as holy wars in defense of Orthodox Christianity against Islam, reinforcing the Ethiopian belief that their country was an especially virtuous and holy land. The struggle against the "Ansar" from Sudan complicated Yohannes's relations with the Italians: he sometimes asked them to provide him with guns to fight the "Ansar", while at other times he resisted the Italians and proposed a truce with the "Ansar".
On 18 January 1887, at a village named Saati, an advancing Italian Army detachment defeated the Ethiopians in a skirmish, but the engagement ended with the numerically superior Ethiopians surrounding the Italians in Saati after the detachment fell back in the face of the enemy's numbers. Some 500 Italian soldiers under Colonel de Christoforis together with 50 Eritrean auxiliaries were sent to support the besieged garrison at Saati. At Dogali on his way to Saati, de Christoforis was ambushed by an Ethiopian force under "Ras" Alula, whose men armed with spears skillfully encircled the Italians, who retreated to one hill and then to another higher hill. After the Italians ran out of ammunition, "Ras" Alula ordered his men to charge and the Ethiopians swiftly overwhelmed the Italians in an action that featured bayonets against spears. The Battle of Dogali ended with the Italians losing 23 officers and 407 other ranks killed. As a result of the defeat at Dogali, the Italians abandoned Saati and retreated back to the Red Sea coast. Italian newspapers called the battle a "massacre" and excoriated the "Regio Esercito" for not assigning de Christoforis enough ammunition. Having at first encouraged Emperor Yohannes to move into Eritrea, and then having encouraged the Italians to do the same, London realised a war was brewing and decided to try to mediate, largely out of fear that the Italians might actually lose.
The British consul in Zanzibar, Gerald Portal, was sent in 1887 to mediate between the Ethiopians and Italians before war broke out. Portal set sail on an Egyptian ship, the "Narghileh", which he called a "small, dirty, greasy steamer bound for Jeddah, Suakin and Massawa, in which we very soon discovered that our traveling companions consisted of cockroaches and other smaller animals innumerable, a flock of sheep, a few cows, many cocks, hens, turkeys and geese, and a dozen of the evil-looking Greek adventurers who always appear like vultures around a dead carcass whenever there is a possibility of a campaign in North Africa." Portal upon meeting the Emperor Yohannes on 4 December 1887 presented him with gifts and a letter from Queen Victoria urging him to settle with the Italians. Portal reported: "What might have been possible in August or September was impossible in December, when the whole of the immense available forces in the country were already under arms; and that there now remains no hope of a satisfactory adjustment of the difficulties between Italy and Abyssinia [Ethiopia] until the question of the relative supremacy of these two nations has been decided by an appeal to the fortunes of war... No one who has once seen the nature of the gorges, ravines and mountain passes near the Abyssinian frontier can doubt for a moment that any advance by a civilised army in the face of the hostile Abyssinian hordes would be accomplished at the price of a fearful loss of life on both sides. ... The Abyssinians are savage and untrustworthy, but they are also redeemed by the possession of an unbounded courage, by a disregard of death, and by a national pride which leads them to look down on every human being who has not had the good fortune to be born an Abyssinian". 
Portal ended by writing that the Italians were making a mistake in preparing to go to war against Ethiopia: "It is the old, old story, contempt of a gallant enemy because his skin happens to be chocolate or brown or black, and because his men have not gone through orthodox courses of field-firing, battalion drill, or 'autumn maneuvers'".
The defeat at Dogali made the Italians cautious for a moment, but on 10 March 1889, Emperor Yohannes died after being wounded in battle against the "Ansar" and on his deathbed admitted that "Ras" Mengesha, the supposed son of his brother, was actually his own son and asked that he succeed him. The revelation that the emperor had slept with his brother's wife scandalised intensely Orthodox Ethiopia, and instead the "Negus" Menelik was proclaimed emperor on 26 March 1889. "Ras" Mengesha, one of the most powerful Ethiopian noblemen, was unhappy about being by-passed in the succession and for a time allied himself with the Italians against the Emperor Menelik. Under the feudal Ethiopian system, there was no standing army, and instead, the nobility raised up armies on behalf of the Emperor. In December 1889, the Italians advanced inland again and took the cities of Asmara and Keren and in January 1890 took Adowa.
On March 25, 1889, the Shewa ruler Menelik II, having conquered Tigray and Amhara, declared himself Emperor of Ethiopia (or "Abyssinia", as it was commonly called in Europe at the time). Barely a month later, on May 2, he signed the Treaty of Wuchale with the Italians, which apparently gave them control over Eritrea, the Red Sea coast to the northeast of Ethiopia, in return for recognition of Menelik's rule. Menelik II continued the policy of Tewodros II of integrating Ethiopia.
However, the bilingual treaty did not say the same thing in Italian and Amharic; the Italian version did not give the Ethiopians the "significant autonomy" written into the Amharic translation. The former text established an Italian protectorate over Ethiopia, but the Amharic version merely stated that Menelik could contact foreign powers and conduct foreign affairs through Italy if he so chose. Italian diplomats, however, claimed that the original Amharic text included the clause and that Menelik knowingly signed a modified copy of the Treaty. In October 1889, the Italians informed all of the other European governments that, under the Treaty of Wuchale, Ethiopia was now an Italian protectorate and that the other European nations therefore could not conduct diplomatic relations with Ethiopia. With the exceptions of the Ottoman Empire, which still maintained its claim to Eritrea, and Russia, which disliked the idea of an Orthodox nation being subjugated to a Roman Catholic nation, all of the European powers accepted the Italian claim to a protectorate.
The Italian claim that Menelik was aware of Article XVII turning his nation into an Italian protectorate seems unlikely, given that the Emperor Menelik sent letters to Queen Victoria and Emperor Wilhelm II in late 1889 and was informed in the replies in early 1890 that neither Britain nor Germany could have diplomatic relations with Ethiopia on account of Article XVII of the Treaty of Wuchale, a revelation that came as a great shock to the Emperor. Victoria's letter was polite whereas Wilhelm's was somewhat ruder, saying that King Umberto I was a great friend of Germany and Menelik's violation of the supposed Italian protectorate was a grave insult to Umberto, adding that he never wanted to hear from Menelik again. Moreover, Menelik did not know Italian and only signed the Amharic text of the treaty, having been assured that there were no differences between the Italian and Amharic texts before he signed. The differences between the Italian and Amharic texts were due to the Italian minister in Addis Ababa, Count Pietro Antonelli, who had been instructed by his government to gain as much territory as possible in negotiating with the Emperor Menelik. However, knowing Menelik was now enthroned as the King of Kings and had a strong position, Antonelli was in the unenviable situation of negotiating a treaty that his own government might disallow. Therefore, he inserted the statement making Ethiopia give up its right to conduct its foreign affairs to Italy as a way of pleasing his superiors, who might otherwise have fired him for only making small territorial gains. Antonelli was fluent in Amharic, and given that Menelik only signed the Amharic text, he could not have been unaware that the Amharic version of Article XVII only stated that the King of Italy placed the services of his diplomats at the disposal of the Emperor of Ethiopia to represent him abroad if he so wished.
When his subterfuge was exposed in 1890, with Menelik indignantly saying he would never sign away his country's independence to anybody, Antonelli, who left Addis Ababa in mid-1890, resorted to racism, telling his superiors in Rome that as Menelik was a black man, he was thus intrinsically dishonest, and that it was only natural the Emperor would lie about the protectorate into which he had supposedly willingly turned his nation.
Francesco Crispi, the Italian Prime Minister, was an ultra-imperialist who believed the newly unified Italian state required "the grandeur of a second Roman empire". Crispi believed that the Horn of Africa was the best place for the Italians to start building the new Roman empire. The American journalist James Perry wrote that "Crispi was a fool, a bigot and a very dangerous man". Because of the Ethiopian refusal to abide by the Italian version of the treaty, and despite economic handicaps at home, the Italian government decided on a military solution to force Ethiopia to abide by the Italian version of the treaty. In doing so, they believed that they could exploit divisions within Ethiopia and rely on tactical and technological superiority to offset any inferiority in numbers. The efforts of Emperor Menelik, viewed as pro-French in London, to unify Ethiopia and thus bring the source of the Blue Nile under his rule were perceived in Whitehall as a threat to keeping Egypt in the British sphere of influence. As Menelik became increasingly successful in unifying Ethiopia, London brought more pressure to bear on Rome for the Italians to move inland and conquer Ethiopia once and for all.
There was a broader, European background as well: the Triple Alliance of Germany, Austria–Hungary, and Italy was under some stress, with Italy being courted by England. Two secret Anglo-Italian protocols in 1891 left most of Ethiopia in Italy's sphere of influence. France, one of the members of the opposing Franco-Russian Alliance, had its own claims on Eritrea and was bargaining with Italy over giving up those claims in exchange for a more secure position in Tunisia. Meanwhile, Russia was supplying weapons and other aid to Ethiopia. It had been trying to gain a foothold in Ethiopia, and in 1894, after denouncing the Treaty of Wuchale in July, it received an Ethiopian mission in St. Petersburg and sent arms and ammunition to Ethiopia. This support continued after the war ended. The Russian travel writer Alexander Bulatovich, who went to Ethiopia to serve as a Red Cross volunteer with the Emperor Menelik, made a point of emphasizing in his books that the Ethiopians converted to Christianity before any of the Europeans did, described the Ethiopians as a deeply religious people like the Russians, and argued the Ethiopians did not have the "low cultural level" of the other African peoples, making them equal to the Europeans. Germany and Austria supported their Triple Alliance ally Italy, while France and Russia supported Ethiopia.
In 1893, judging that his power over Ethiopia was secure, Menelik repudiated the treaty; in response the Italians ramped up the pressure on his domain in a variety of ways, including the annexation of small territories bordering their original claim under the Treaty of Wuchale, finally culminating in a military campaign across the Mareb River into Tigray (on the border with Eritrea) in December 1894. The Italians expected disaffected potentates like Negus Tekle Haymanot of Gojjam, Ras Mengesha Yohannes, and the Sultan of Aussa to join them; instead, all of the ethnic Tigrayan and Amharic peoples flocked to the Emperor Menelik's side in a display of both nationalism and anti-Italian feeling, while other peoples of dubious loyalty (e.g. the Sultan of Aussa) were watched by Imperial garrisons. In June 1894, "Ras" Mengesha and his generals had appeared in Addis Ababa carrying large stones which they dropped before the Emperor Menelik (a gesture that is a symbol of submission in Ethiopian culture). In Ethiopia, the popular saying at the time was: "Of a black snake's bite, you may be cured, but from the bite of a white snake, you will never recover." There was an overwhelming national unity in Ethiopia as various feuding noblemen rallied behind the emperor, who insisted that Ethiopia, unlike the other African nations, would retain its freedom and not be subjected to Italy. The ethnic rivalries between the Tigrayans and the Amhara that the Italians were counting upon did not prove to be a factor, as Menelik pointed out that the Italians held all ethnic Africans, regardless of their individual ethnic backgrounds, in contempt, noting that the segregation policies in Eritrea applied to all ethnic Africans.
Further, Menelik had spent much of the previous four years building up a supply of modern weapons and ammunition, acquired from the French, British, and the Italians themselves, as the European colonial powers sought to keep each other's North African aspirations in check. They also used the Ethiopians as a proxy army against the Sudanese Mahdists.
In December 1894, Bahta Hagos led a rebellion against the Italians in Akkele Guzay, claiming the support of Mengesha. Units of General Oreste Baratieri's army under Major Pietro Toselli crushed the rebellion and killed Bahta at the Battle of Halai. The Italian army then occupied the Tigrayan capital, Adwa. Baratieri suspected that Mengesha would invade Eritrea, and met him at the Battle of Coatit in January 1895. The victorious Italians chased the retreating Mengesha, capturing weapons and important documents proving his complicity with Menelik. The victory in this campaign, along with previous victories against the Sudanese Mahdists, led the Italians to underestimate the difficulties they would face in a campaign against Menelik. At this point, Emperor Menelik turned to France, offering a treaty of alliance; the French response was to abandon the Emperor in order to win Italian approval of the Treaty of Bardo, which would secure French control of Tunisia. Virtually alone, on 17 September 1895, Emperor Menelik issued a proclamation calling up the men of Shewa to join his army at Were Ilu.
As the Italians were poised to enter Ethiopian territory, the Ethiopians mobilised en masse all over the country, aided by the newly updated imperial fiscal and taxation system. As a result, a hastily mobilised army of 196,000 men gathered from all parts of Abyssinia, more than half of whom were armed with modern rifles, rallied at Addis Ababa in support of the Emperor and in defence of their country.
The only European ally of Ethiopia was Russia. The Ethiopian emperor sent his first diplomatic mission to St. Petersburg in 1895. In June 1895, the newspapers in St. Petersburg wrote, "Along with the expedition, Menelik II sent his diplomatic mission to Russia, including his princes and his bishop". Many citizens of the capital came to meet the train that brought Prince Damto, General Genemier, Prince Belyakio, Bishop of Harer Gabraux Xavier and other members of the delegation to St. Petersburg. On the eve of war, an agreement providing military help for Ethiopia was concluded.
The next clash came at Amba Alagi on 7 December 1895, when Ethiopian soldiers overran the Italian positions dug in on the natural fortress, and forced the Italians to retreat back to Eritrea. The remaining Italian troops under General Giuseppe Arimondi reached the unfinished Italian fort at Mekele. Arimondi left there a small garrison of approximately 1,150 Askaris and 200 Italians, commanded by Major Giuseppe Galliano, and took the bulk of his troops to Adigrat, where Oreste Baratieri, the Italian Commander, was concentrating the Italian Army.
The first Ethiopian troops reached Mekele in the following days. Ras Makonnen surrounded the fort at Mekele on 18 December, but the Italian Commander adroitly used promises of a negotiated surrender to prevent the Ras from attacking the fort. By the first days of January, Emperor Menelik, accompanied by his Queen Taytu Betul, had led large forces into Tigray, and besieged the Italians for sixteen days (6–21 January 1896), making several unsuccessful attempts to carry the fort by storm, until the Italians surrendered with permission from the Italian Headquarters. Menelik allowed them to leave Mekele with their weapons, and even provided the defeated Italians mules and pack animals to rejoin Baratieri. While some historians read this generous act as a sign that Emperor Menelik still hoped for a peaceful resolution to the war, Harold Marcus points out that this escort allowed him a tactical advantage: "Menelik craftily managed to establish himself in Hawzien, at Gendepata, near Adwa, where the mountain passes were not guarded by Italian fortifications."
Heavily outnumbered, Baratieri refused to engage, knowing that due to their lack of infrastructure the Ethiopians could not keep large numbers of troops in the field much longer. However, Baratieri never learned the true numerical strength of the Ethiopian army that was to face his own, so he instead further fortified his positions in Tigray. But the Italian government of Francesco Crispi was unable to accept being stymied by non-Europeans. The prime minister specifically ordered Baratieri to advance deep into enemy territory and bring about a battle.
The decisive battle of the war was the Battle of Adwa on March 1, 1896, which took place in the mountainous country north of the actual town of Adwa (or Adowa). The Italian army comprised four brigades totaling approximately 17,700 men, with fifty-six artillery pieces; the Ethiopian army comprised several brigades numbering between 73,000 and 120,000 men (80–100,000 with firearms: according to Richard Pankhurst, the Ethiopians were armed with approximately 100,000 rifles of which about half were quick-firing), with almost fifty artillery pieces.
General Baratieri planned to surprise the larger Ethiopian force with an early morning attack, expecting his enemy to be asleep. However, the Ethiopians had risen early for Church services and, upon learning of the Italian advance, promptly attacked. The Italian forces were hit by wave after wave of attacks, until Menelik released his reserve of 25,000 men, destroying an Italian brigade. Another brigade was cut off, and destroyed by a cavalry charge. The last two brigades were destroyed piecemeal. By noon, the Italian survivors were in full retreat.
While Menelik's victory was in large part due to the sheer force of numbers, his troops were well-armed because of his careful preparations. The Ethiopian army had only a feudal system of organisation but proved capable of properly executing the strategic plan drawn up in Menelik's headquarters. However, the Ethiopian army also had its problems. The first was the quality of its arms, as the Italian and British colonial authorities were able to sabotage the transport of 30,000–60,000 modern Mosin–Nagant rifles and Berdan rifles from Russia into landlocked Ethiopia. The rest of the Ethiopian army was equipped with swords and spears. Secondly, the Ethiopian army's feudal organisation meant that nearly the entire force was composed of peasant militia. Russian military experts advising Menelik II suggested a full-contact battle with the Italians to neutralise the Italian fire superiority, instead of a campaign of harassment designed to nullify problems with arms, training, and organisation.
Some Russian councillors of Menelik II and a team of fifty Russian volunteers participated in the battle, among them Nikolay Leontiev, an officer of the Kuban Cossack army. Russian support for Ethiopia also led to a Russian Red Cross mission, which arrived in Addis Ababa some three months after Menelik's Adwa victory.
The Italians suffered about 7,000 killed and 1,500 wounded in the battle and subsequent retreat back into Eritrea, with 3,000 taken prisoner; Ethiopian losses have been estimated around 4,000 killed and 8,000 wounded. In addition, 2,000 Eritrean Askaris were killed or captured. Italian prisoners were treated as well as possible under difficult circumstances, but 800 captured Askaris, regarded as traitors by the Ethiopians, had their right hands and left feet amputated. Menelik, knowing that the war was very unpopular in Italy with the Italian Socialists in particular condemning the policy of the Crispi government, chose to be a magnanimous victor, making it clear that he saw a difference between the Italian people and Crispi.
Menelik was a well-respected ruler whose lineage was allegedly traced back to King Solomon and the Queen of Sheba. He used that status and its power to peacefully create alliances and to conquer those who opposed him. He was such a skillful negotiator that he was able to unify almost all of the Northern, Western, and Central territories peacefully. He made Ras Mengesha Yohannes the prince of Tigray and, along with the threat of the Italians, convinced him to join him. Menelik not only conquered large groups of people like the Oromo, Gurage, and Wolayta, he also managed to incorporate leaders from those groups into his own government and war council. Whether conquered peacefully or militarily, almost all groups had a voice under Menelik.
From 1888 to 1892, one third of the Ethiopian population died from what would become known as the Great Famine. On the heels of this disaster, Menelik used his relationship with the Europeans to help modernise Ethiopia. The Europeans soon flooded the Ethiopian economy looking for business opportunities. Meanwhile, Menelik established the first national bank, a national currency, a postal system, railroads, modern roads, and electricity. The bank and currency unified the people economically and helped establish economic stability. The railways, roads, and postal system connected the people and tribes physically as well as symbolically as a nation. Possibly his greatest achievement in creating a national identity was the creation of Addis Ababa. This was an important psychological component in the establishment of a nation: it provided a metaphorical 'head' for the nation, a permanent location for the entire country to look to for support and guidance.
Menelik retired in good order to his capital, Addis Ababa, and waited for the fallout of the victory to hit Italy. Riots broke out in several Italian cities, and within two weeks, the Crispi government collapsed amidst Italian disenchantment with "foreign adventures".
Menelik secured the Treaty of Addis Ababa in October, which delineated the borders of Eritrea and forced Italy to recognise the independence of Ethiopia. Delegations from the United Kingdom and France—whose colonial possessions lay next to Ethiopia—soon arrived in the Ethiopian capital to negotiate their own treaties with this newly proven power. Owing to its diplomatic support of a fellow Orthodox nation, Russia's prestige in Ethiopia greatly increased. The adventuresome Seljan brothers, Mirko and Stjepan, who were actually Catholic Croats, were warmly welcomed when they arrived in Ethiopia in 1899, having misinformed their hosts by saying they were Russians. As France supported Ethiopia with weapons, French influence increased markedly. Prince Henri of Orléans, the French traveller, wrote: "France gave rifles to this country and taking the hand of its Emperor like an elder sister has explained to him the old motto which has guided her across the centuries of greatness and glory: Honor and Country!". In December 1896, a French diplomatic mission arrived in Addis Ababa, and on 20 March 1897 it signed a treaty that was described as a "véritable traité d'alliance" ("a true treaty of alliance"). In turn, the increase in French influence in Ethiopia led to fears in London that the French would gain control of the Blue Nile and would be able to "lever" the British out of Egypt. To keep control of the Nile in Egypt, the British decided in March 1896 to advance down the Nile from Egypt into the Sudan to liquidate the "Mahdiyya" state. On 12 March 1896, upon hearing of the Italian defeat at the Battle of Adwa, the Prime Minister, Lord Salisbury, gave instructions for the British forces in Egypt to occupy the Sudan before the French could liquidate the "Mahdiyya" state, stating that no hostile power would be allowed to control the Nile.
In 1935, Italy launched a second invasion, which resulted in an Italian victory and the annexation of Ethiopia into Italian East Africa, until the Italians were defeated in the Second World War and expelled by the British, with some assistance from the Ethiopian Arbegnoch. The Italians subsequently waged a guerrilla war in some areas of northern Ethiopia until 1943, supporting the rebellion of the Galla in 1942.
Frederick Soddy
Frederick Soddy FRS (2 September 1877 – 22 September 1956) was an English radiochemist who explained, with Ernest Rutherford, that radioactivity is due to the transmutation of elements, now known to involve nuclear reactions. He also proved the existence of isotopes of certain radioactive elements.
Soddy was born at 5 Bolton Road, Eastbourne, England, the son of Benjamin Soddy, corn merchant, and his wife Hannah Green. He went to school at Eastbourne College, before going on to study at University College of Wales at Aberystwyth and at Merton College, Oxford, where he graduated in 1898 with first class honours in chemistry. He was a researcher at Oxford from 1898 to 1900.
In 1900 he became a demonstrator in chemistry at McGill University in Montreal, Quebec, where he worked with Ernest Rutherford on radioactivity.
He and Rutherford realized that the anomalous behaviour of radioactive elements was because they decayed into other elements.
This decay also produced alpha, beta, and gamma radiation. When radioactivity was first discovered, no one was sure what the cause was. It needed careful work by Soddy and Rutherford to prove that atomic transmutation was in fact occurring.
In 1903, with Sir William Ramsay at University College London, Soddy showed that the decay of radium produced helium gas. In the experiment a sample of radium was enclosed in a thin-walled glass envelope sited within an evacuated glass bulb. After leaving the experiment running for a long period of time, a spectral analysis of the contents of the former evacuated space revealed the presence of helium. Later in 1907, Rutherford and Thomas Royds showed that the helium was first formed as positively charged nuclei of helium (He2+) which were identical to alpha particles, which could pass through the thin glass wall but were contained within the surrounding glass envelope.
From 1904 to 1914, Soddy was a lecturer at the University of Glasgow. Ruth Pirret worked as his research assistant during this time.
In May 1910 Soddy was elected a Fellow of the Royal Society. In 1914 he was appointed to a chair at the University of Aberdeen, where he worked on research related to World War I.
The work that Soddy and his research assistant Ada Hitchins did at Glasgow and Aberdeen showed that uranium decays to radium. It also showed that a radioactive element may have more than one atomic mass though the chemical properties are identical. Soddy named this concept "isotope", meaning "same place". The word was initially suggested to him by Margaret Todd. Later, J. J. Thomson showed that non-radioactive elements can also have multiple isotopes.
In 1913, Soddy also showed that an atom moves lower in atomic number by two places on alpha emission, higher by one place on beta emission. This was discovered at about the same time by Kazimierz Fajans, and is known as the radioactive displacement law of Fajans and Soddy, a fundamental step toward understanding the relationships among families of radioactive elements. Soddy published "The Interpretation of Radium" (1909) and "Atomic Transmutation" (1953).
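The displacement law amounts to simple arithmetic on the atomic number Z and mass number A: alpha emission takes (Z, A) to (Z − 2, A − 4), while beta (minus) emission takes it to (Z + 1, A). A minimal illustrative sketch (the decay chain shown is the standard uranium series):

```python
def alpha_decay(z, a):
    """Alpha emission: the nucleus loses 2 protons and 2 neutrons."""
    return z - 2, a - 4

def beta_decay(z, a):
    """Beta-minus emission: a neutron becomes a proton; mass number unchanged."""
    return z + 1, a

# Uranium-238 (Z=92) alpha-decays to thorium-234 (Z=90), which
# beta-decays twice, via protactinium-234 (Z=91), back to uranium.
z, a = alpha_decay(92, 238)   # (90, 234)  thorium-234
z, a = beta_decay(z, a)       # (91, 234)  protactinium-234
z, a = beta_decay(z, a)       # (92, 234)  uranium-234, an isotope of the parent
```

Note how the chain returns to Z = 92 with a different mass number, which is exactly the "same place" in the periodic table that the word isotope describes.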
In 1918, working with John Arnold Cranston, he announced the discovery of a stable isotope of protactinium. This slightly post-dated its discovery by German counterparts; however, it is said their own discovery was actually made in 1915 but its announcement was delayed because Cranston's notes were locked away whilst he was on active service in the First World War.
In 1919 he moved to the University of Oxford as Dr Lee's Professor of Chemistry, where, in the period up till 1936, he reorganized the laboratories and the syllabus in chemistry. He received the 1921 Nobel Prize in chemistry for his research in radioactive decay and particularly for his formulation of the theory of isotopes.
His work and essays popularising the new understanding of radioactivity were the main inspiration for H. G. Wells's "The World Set Free" (1914), which features atomic bombs dropped from biplanes in a war set many years in the future. Wells's novel is also known as "The Last War" and imagines a peaceful world emerging from the chaos. In "Wealth, Virtual Wealth and Debt" Soddy praises Wells's "The World Set Free". He also says that radioactive processes probably power the stars.
In four books written from 1921 to 1934, Soddy carried on a "campaign for a radical restructuring of global monetary relationships", offering a perspective on economics rooted in physics – the laws of thermodynamics, in particular – and was "roundly dismissed as a crank". While most of his proposals – "to abandon the gold standard, let international exchange rates float, use federal surpluses and deficits as macroeconomic policy tools that could counter cyclical trends, and establish bureaus of economic statistics (including a consumer price index) in order to facilitate this effort" – are now conventional practice, his critique of fractional-reserve banking still "remains outside the bounds of conventional wisdom" although a recent paper by the IMF reinvigorated his proposals. Soddy wrote that financial debts grew exponentially at compound interest but the real economy was based on exhaustible stocks of fossil fuels. Energy obtained from the fossil fuels could not be used again. This criticism of economic growth is echoed by his intellectual heirs in the now emergent field of ecological economics.
In "Wealth, Virtual Wealth and Debt" Soddy cited the (fraudulent) Protocols of the Learned Elders of Zion as evidence for the belief, which was relatively widespread at the time, of a "financial conspiracy to enslave the world". He used the imagery of a Jewish conspiracy to buttress his claim that "A corrupt monetary system strikes at the very life of the nation." In the same document, he made reference to "the semi-Oriental" who is "supreme" in "high finance" and to an "iridescent bubble of beliefs blown around the world by the Hebraic hierarchy". Later in life he published a pamphlet "Abolish Private Money, or Drown in Debt" (1939) with a noted publisher of anti-Semitic texts. The influence of his writing can be gauged, for example, in this quote from Ezra Pound:
"Professor Frederick Soddy states that the Gold Standard monetary system has wrecked a scientific age! ... The world's bankers ... have not been content to take their share of modern wealth production – great as it has been – but they have refused to allow the masses of mankind to receive theirs."
He rediscovered Descartes' theorem in 1936 and published it as a poem, "The Kiss Precise", quoted at Problem of Apollonius. The kissing circles in this problem are sometimes known as Soddy circles.
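Descartes' theorem, the subject of "The Kiss Precise", states that the curvatures (signed reciprocal radii, negative for an enclosing circle) of four mutually tangent circles satisfy (k1 + k2 + k3 + k4)² = 2(k1² + k2² + k3² + k4²); solving for k4 gives k4 = k1 + k2 + k3 ± 2√(k1k2 + k2k3 + k3k1). A short sketch of that computation:

```python
import math

def soddy_curvatures(k1, k2, k3):
    """Given three mutually tangent circles' curvatures, return the two
    possible curvatures of a fourth circle tangent to all three
    (Descartes' theorem, solved for the fourth curvature)."""
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return k1 + k2 + k3 - root, k1 + k2 + k3 + root

# Three tangent circles with curvatures 2, 2, 3: the two Soddy circles
# have curvature -1 (an enclosing circle of radius 1) and 15 (a small
# circle nestled between the three).
print(soddy_curvatures(2, 2, 3))  # -> (-1.0, 15.0)
```

The negative curvature signals a circle that encloses the other three, the convention Soddy's poem describes as a "bend" taken with a minus sign.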
He received the Nobel Prize in Chemistry in 1921 and the same year he was elected member of the International Atomic Weights Committee. A small crater on the far side of the Moon as well as the radioactive uranium mineral soddyite are named after him.
In 1908, Soddy married Winifred Moller Beilby (1885–1936), the daughter of industrial chemist Sir George Beilby and Lady Emma Beilby, a philanthropist to women's causes. The couple worked together and co-published a paper in 1910 on the absorption of gamma rays from radium. He died in Brighton, England in 1956, twenty days after his 79th birthday.
Federico Fellini
Federico Fellini (20 January 1920 – 31 October 1993) was an Italian film director and screenwriter known for his distinctive style, which blends fantasy and baroque images with earthiness. He is recognized as one of the greatest and most influential filmmakers of all time. His films have ranked highly in polls such as those of "Cahiers du cinéma" and "Sight & Sound", the latter of which lists his 1963 film "8½" as the 10th-greatest film.
Fellini won the Palme d'Or for "La Dolce Vita", was nominated for twelve Academy Awards, and won four in the category of Best Foreign Language Film, the most for any director in the history of the Academy. He received an honorary award for Lifetime Achievement at the 65th Academy Awards in Los Angeles. His other well-known films include "La Strada" (1954), "Nights of Cabiria" (1957), "Juliet of the Spirits" (1965), "Satyricon" (1969), "Roma" (1972), "Amarcord" (1973), and "Fellini's Casanova" (1976).
Fellini was born on 20 January 1920, to middle-class parents in Rimini, then a small town on the Adriatic Sea. On 25 January, at the San Nicolò church he was baptized Federico Domenico Marcello Fellini. His father, Urbano Fellini (1894–1956), born to a family of Romagnol peasants and small landholders from Gambettola, moved to Rome in 1915 as a baker apprenticed to the Pantanella pasta factory. His mother, Ida Barbiani (1896–1984), came from a bourgeois Catholic family of Roman merchants. Despite her family's vehement disapproval, she had eloped with Urbano in 1917 to live at his parents' home in Gambettola. A civil marriage followed in 1918 with the religious ceremony held at Santa Maria Maggiore in Rome a year later.
The couple settled in Rimini where Urbano became a traveling salesman and wholesale vendor. Fellini had two siblings: Riccardo (1921–1991), a documentary director for RAI Television, and Maria Maddalena (m. Fabbri; 1929–2002).
In 1924, Fellini started primary school in an institute run by the nuns of San Vincenzo in Rimini, attending the Carlo Tonni public school two years later. An attentive student, he spent his leisure time drawing, staging puppet shows and reading "Il corriere dei piccoli", the popular children's magazine that reproduced traditional American cartoons by Winsor McCay, George McManus and Frederick Burr Opper. (Opper's "Happy Hooligan" would provide the visual inspiration for Gelsomina in Fellini's 1954 film "La Strada"; McCay's "Little Nemo" would directly influence his 1980 film "City of Women".) In 1926, he discovered the world of Grand Guignol, the circus with Pierino the Clown and the movies. Guido Brignone’s "Maciste all’Inferno" (1926), the first film he saw, would mark him in ways linked to Dante and the cinema throughout his entire career.
Enrolled at the Ginnasio Giulio Cesare in 1929, he made friends with Luigi "Titta" Benzi, later a prominent Rimini lawyer (and the model for young Titta in "Amarcord" (1973)). In Mussolini’s Italy, Fellini and Riccardo became members of the "Avanguardista", the compulsory Fascist youth group for males. He visited Rome with his parents for the first time in 1933, the year of the maiden voyage of the transatlantic ocean liner "SS Rex" (which is shown in "Amarcord"). The sea creature found on the beach at the end of "La Dolce Vita" (1960) has its basis in a giant fish marooned on a Rimini beach during a storm in 1934.
Although Fellini adapted key events from his childhood and adolescence in films such as "I Vitelloni" (1953), "8½" (1963), and "Amarcord" (1973), he insisted that such autobiographical memories were inventions:
In 1937, Fellini opened Febo, a portrait shop in Rimini, with the painter Demos Bonini. His first humorous article appeared in the "Postcards to Our Readers" section of Milan's "Domenica del Corriere". Deciding on a career as a caricaturist and gag writer, Fellini travelled to Florence in 1938, where he published his first cartoon in the weekly "420". According to a biographer, Fellini found school "exasperating" and, in one year, had 67 absences. Failing his military culture exam, he graduated from high school in July 1938 after retaking the exam.
In September 1939, he enrolled in law school at the University of Rome to please his parents. Biographer Hollis Alpert reports that "there is no record of his ever having attended a class". Installed in a family "pensione", he met another lifelong friend, the painter Rinaldo Geleng. Desperately poor, they unsuccessfully joined forces to draw sketches of restaurant and café patrons. Fellini eventually found work as a cub reporter on the dailies "Il Piccolo" and "Il Popolo di Roma", but quit after a short stint, bored by the local court news assignments.
Four months after publishing his first article in "Marc’Aurelio", the highly influential biweekly humour magazine, he joined the editorial board, achieving success with a regular column titled "But Are You Listening?" Described as “the determining moment in Fellini’s life”, the magazine gave him steady employment between 1939 and 1942, when he interacted with writers, gagmen, and scriptwriters. These encounters eventually led to opportunities in show business and cinema. Among his collaborators on the magazine's editorial board were the future director Ettore Scola, Marxist theorist and scriptwriter Cesare Zavattini, and Bernardino Zapponi, a future Fellini screenwriter. Conducting interviews for "CineMagazzino" also proved congenial: when asked to interview Aldo Fabrizi, Italy's most popular variety performer, he established such immediate personal rapport with the man that they collaborated professionally. Specializing in humorous monologues, Fabrizi commissioned material from his young protégé.
Retained on business in Rimini, Urbano sent his wife and family to Rome in 1940 to share an apartment with his son. Fellini and Ruggero Maccari, also on the staff of "Marc’Aurelio", began writing radio sketches and gags for films.
Not yet twenty and with Fabrizi's help, Fellini obtained his first screen credit as a comedy writer on Mario Mattoli’s "Il pirata sono io" ("The Pirate's Dream"). Progressing rapidly to numerous collaborations on films at Cinecittà, his circle of professional acquaintances widened to include novelist Vitaliano Brancati and scriptwriter Piero Tellini. In the wake of Mussolini’s declaration of war against France and Britain on 10 June 1940, Fellini discovered Kafka’s "The Metamorphosis", Gogol, John Steinbeck and William Faulkner along with French films by Marcel Carné, René Clair, and Julien Duvivier. In 1941 he published "Il mio amico Pasqualino", a 74-page booklet in ten chapters describing the absurd adventures of Pasqualino, an alter ego.
Writing for radio while attempting to avoid the draft, Fellini met his future wife Giulietta Masina in a studio office at the Italian public radio broadcaster EIAR in the autumn of 1942. Well-paid as the voice of Pallina in Fellini's radio serial, "Cico and Pallina", Masina was also well known for her musical-comedy broadcasts which cheered an audience depressed by the war. In November 1942, Fellini was sent to Libya, occupied by Fascist Italy, to work on the screenplay of "I cavalieri del deserto" ("Knights of the Desert", 1942), directed by Osvaldo Valenti and Gino Talamo. Fellini welcomed the assignment as it allowed him "to secure another extension on his draft order". Responsible for emergency re-writing, he also directed the film's first scenes. When Tripoli fell under siege by British forces, he and his colleagues made a narrow escape by boarding a German military plane flying to Sicily. His African adventure, later published in "Marc’Aurelio" as "The First Flight", marked “the emergence of a new Fellini, no longer just a screenwriter, working and sketching at his desk, but a filmmaker out in the field”.
The apolitical Fellini was finally freed of the draft when an Allied air raid over Bologna destroyed his medical records. Fellini and Giulietta hid in her aunt's apartment until Mussolini's fall on 25 July 1943. After dating for nine months, the couple were married on 30 October 1943. Several months later, Masina fell down the stairs and suffered a miscarriage. She gave birth to a son, Pierfederico, on 22 March 1945, but the child died of encephalitis a month later on 24 April 1945. The tragedy had enduring emotional and artistic repercussions.
After the Allied liberation of Rome on 4 June 1944, Fellini and Enrico De Seta opened the Funny Face Shop where they survived the postwar recession drawing caricatures of American soldiers. He became involved with Italian Neorealism when Roberto Rossellini, at work on "Stories of Yesteryear" (later "Rome, Open City"), met Fellini in his shop, and proposed he contribute gags and dialogue for the script. Aware of Fellini's reputation as Aldo Fabrizi's “creative muse”, Rossellini also requested that he try to convince the actor to play the role of Father Giuseppe Morosini, the parish priest executed by the SS on 4 April 1944.
In 1947, Fellini and Sergio Amidei received an Oscar nomination for the screenplay of "Rome, Open City".
Working as both screenwriter and assistant director on Rossellini's "Paisà" ("Paisan") in 1946, Fellini was entrusted to film the Sicilian scenes in Maiori. In February 1948, he was introduced to Marcello Mastroianni, then a young theatre actor appearing in a play with Giulietta Masina. Establishing a close working relationship with Alberto Lattuada, Fellini co-wrote the director's "Senza pietà" ("Without Pity") and "Il mulino del Po" ("The Mill on the Po"). Fellini also worked with Rossellini on the anthology film "L'Amore" (1948), co-writing the screenplay and acting opposite Anna Magnani in one segment titled "The Miracle". To play the role of a vagabond rogue mistaken by Magnani for a saint, Fellini had to bleach his black hair blond.
In 1950 Fellini co-produced and co-directed with Alberto Lattuada "Variety Lights" ("Luci del varietà"), his first feature film. A backstage comedy set among the world of small-time travelling performers, it featured Giulietta Masina and Lattuada's wife, Carla Del Poggio. Its release to poor reviews and limited distribution proved disastrous for all concerned. The production company went bankrupt, leaving both Fellini and Lattuada with debts to pay for over a decade. In February 1950, "Paisà" received an Oscar nomination for the screenplay by Rossellini, Sergio Amidei, and Fellini.
After travelling to Paris for a script conference with Rossellini on "Europa '51", Fellini began production on "The White Sheik" in September 1951, his first solo-directed feature. Starring Alberto Sordi in the title role, the film is a revised version of a treatment first written by Michelangelo Antonioni in 1949 and based on the "fotoromanzi", the photographed cartoon strip romances popular in Italy at the time. Producer Carlo Ponti commissioned Fellini and Tullio Pinelli to write the script but Antonioni rejected the story they developed. With Ennio Flaiano, they re-worked the material into a light-hearted satire about newlywed couple Ivan and Wanda Cavalli (Leopoldo Trieste, Brunella Bovo) in Rome to visit the Pope. Ivan's prissy mask of respectability is soon demolished by his wife's obsession with the White Sheik. Highlighting the music of Nino Rota, the film was selected at Cannes (among the films in competition was Orson Welles’s "Othello") and then retracted. Screened at the 13th Venice International Film Festival, it was razzed by critics in "the atmosphere of a soccer match”. One reviewer declared that Fellini had “not the slightest aptitude for cinema direction".
In 1953, "I Vitelloni" found favour with the critics and public. Winning the Silver Lion Award in Venice, it secured Fellini his first international distributor.
Fellini directed "La Strada" based on a script completed in 1952 with Pinelli and Flaiano. During the last three weeks of shooting, Fellini experienced the first signs of severe clinical depression. Aided by his wife, he undertook a brief period of therapy with Freudian psychoanalyst Emilio Servadio.
Fellini cast American actor Broderick Crawford to interpret the role of an aging swindler in "Il Bidone". Based partly on stories told to him by a petty thief during production of "La Strada", Fellini developed the script into a con man's slow descent towards a solitary death. To incarnate the role's "intense, tragic face", Fellini's first choice had been Humphrey Bogart, but after learning of the actor's lung cancer, he chose Crawford after seeing his face on the theatrical poster of "All the King’s Men" (1949). The film shoot was fraught with difficulties stemming from Crawford's alcoholism. Savaged by critics at the 16th Venice International Film Festival, the film did miserably at the box office and did not receive international distribution until 1964.
During the autumn, Fellini researched and developed a treatment based on a film adaptation of Mario Tobino’s novel, "The Free Women of Magliano". Set in a mental institution for women, the project was abandoned when financial backers considered the subject had no potential.
While preparing "Nights of Cabiria" in spring 1956, Fellini learned of his father’s death by cardiac arrest at the age of sixty-two. Produced by Dino De Laurentiis and starring Giulietta Masina, the film took its inspiration from news reports of a woman’s severed head retrieved in a lake and stories by Wanda, a shantytown prostitute Fellini met on the set of "Il Bidone". Pier Paolo Pasolini was hired to translate Flaiano and Pinelli’s dialogue into Roman dialect and to supervise research in the vice-afflicted suburbs of Rome. The movie won the Academy Award for Best Foreign Language Film at the 30th Academy Awards and brought Masina the Best Actress Award at Cannes for her performance.
With Pinelli, he developed "Journey with Anita" for Sophia Loren and Gregory Peck. An "invention born out of intimate truth", the script was based on Fellini's return to Rimini with a mistress to attend his father's funeral. Due to Loren's unavailability, the project was shelved and resurrected twenty-five years later as "Lovers and Liars" (1981), a comedy directed by Mario Monicelli with Goldie Hawn and Giancarlo Giannini. For Eduardo De Filippo, he co-wrote the script of "Fortunella", tailoring the lead role to accommodate Masina's particular sensibility.
The Hollywood on the Tiber phenomenon of 1958 in which American studios profited from the cheap studio labour available in Rome provided the backdrop for photojournalists to steal shots of celebrities on the via Veneto. The scandal provoked by Turkish dancer Haish Nana's improvised striptease at a nightclub captured Fellini's imagination: he decided to end his latest script-in-progress, "Moraldo in the City", with an all-night "orgy" at a seaside villa. Pierluigi Praturlon’s photos of Anita Ekberg wading fully dressed in the Trevi Fountain provided further inspiration for Fellini and his scriptwriters.
Changing the title of the screenplay to "La Dolce Vita", Fellini soon clashed with his producer on casting: the director insisted on the relatively unknown Mastroianni while De Laurentiis wanted Paul Newman as a hedge on his investment. Reaching an impasse, De Laurentiis sold the rights to publishing mogul Angelo Rizzoli. Shooting began on 16 March 1959 with Anita Ekberg climbing the stairs to the cupola of Saint Peter’s in a mammoth décor constructed at Cinecittà. The statue of Christ flown by helicopter over Rome to Saint Peter's Square was inspired by an actual media event on 1 May 1956, which Fellini had witnessed. The film wrapped August 15 on a deserted beach at Passo Oscuro with a bloated mutant fish designed by Piero Gherardi.
"La Dolce Vita" broke all box office records. Despite scalpers selling tickets at 1000 lire, crowds queued in line for hours to see an “immoral movie” before the censors banned it. At an exclusive Milan screening on 5 February 1960, one outraged patron spat on Fellini while others hurled insults. Denounced in parliament by right-wing conservatives, undersecretary Domenico Magrì of the Christian Democrats demanded tolerance for the film's controversial themes. The Vatican's official press organ, "l'Osservatore Romano", lobbied for censorship while the Board of Roman Parish Priests and the Genealogical Board of Italian Nobility attacked the film. In one documented instance involving favourable reviews written by the Jesuits of San Fedele, defending "La Dolce Vita" had severe consequences. In competition at Cannes alongside Antonioni's "L’Avventura", the film won the Palme d'Or awarded by presiding juror Georges Simenon. The Belgian writer was promptly “hissed at” by the disapproving festival crowd.
A major discovery for Fellini after his Italian neorealism period (1950–1959) was the work of Carl Jung. After meeting Jungian psychoanalyst Dr. Ernst Bernhard in early 1960, he read Jung's autobiography, "Memories, Dreams, Reflections" (1963) and experimented with LSD. Bernhard also recommended that Fellini consult the "I Ching" and keep a record of his dreams. What Fellini formerly accepted as "his extrasensory perceptions" were now interpreted as psychic manifestations of the unconscious. Bernhard's focus on Jungian depth psychology proved to be the single greatest influence on Fellini's mature style and marked the turning point in his work from neorealism to filmmaking that was "primarily oneiric". As a consequence, Jung's seminal ideas on the "anima" and the "animus", the role of archetypes and the collective unconscious directly influenced such films as "8½" (1963), "Juliet of the Spirits" (1965), "Fellini Satyricon" (1969), "Casanova" (1976), and "City of Women" (1980). Other key influences on his work include Luis Buñuel, Charlie Chaplin, Sergei Eisenstein, Buster Keaton, Laurel and Hardy, the Marx Brothers, and Roberto Rossellini.
Exploiting "La Dolce Vita"’s success, financier Angelo Rizzoli set up Federiz in 1960, an independent film company, for Fellini and production manager Clemente Fracassi to discover and produce new talent. Despite the best intentions, their overcautious editorial and business skills forced the company to close down soon after cancelling Pasolini’s project, "Accattone" (1961).
Condemned as a "public sinner", for "La Dolce Vita", Fellini responded with "The Temptations of Doctor Antonio", a segment in the omnibus "Boccaccio '70". His second colour film, it was the sole project green-lighted at Federiz. Infused with the surrealistic satire that characterized the young Fellini's work at "Marc’Aurelio", the film ridiculed a crusader against vice, interpreted by Peppino De Filippo, who goes insane trying to censor a billboard of Anita Ekberg espousing the virtues of milk.
In an October 1960 letter to his colleague Brunello Rondi, Fellini first outlined his film ideas about a man suffering creative block: "Well then - a guy (a writer? any kind of professional man? a theatrical producer?) has to interrupt the usual rhythm of his life for two weeks because of a not-too-serious disease. It’s a warning bell: something is blocking up his system." Unclear about the script, its title, and his protagonist's profession, he scouted locations throughout Italy “looking for the film”, in the hope of resolving his confusion. Flaiano suggested "La bella confusione" (literally "The Beautiful Confusion") as the movie's title. Under pressure from his producers, Fellini finally settled on "8½", a self-referential title referring principally (but not exclusively) to the number of films he had directed up to that time.
Giving the order to start production in spring 1962, Fellini signed deals with his producer Rizzoli, fixed dates, had sets constructed, cast Mastroianni, Anouk Aimée, and Sandra Milo in lead roles, and did screen tests at the Scalera Studios in Rome. He hired cinematographer Gianni Di Venanzo, among key personnel. But apart from naming his hero Guido Anselmi, he still couldn't decide what his character did for a living. The crisis came to a head in April when, sitting in his Cinecittà office, he began a letter to Rizzoli confessing he had "lost his film" and had to abandon the project. Interrupted by the chief machinist requesting he celebrate the launch of "8½", Fellini put aside the letter and went on the set. Raising a toast to the crew, he "felt overwhelmed by shame… I was in a no exit situation. I was a director who wanted to make a film he no longer remembers. And lo and behold, at that very moment everything fell into place. I got straight to the heart of the film. I would narrate everything that had been happening to me. I would make a film telling the story of a director who no longer knows what film he wanted to make". This self-mirroring structure makes the entire film inseparable from its reflexive construction.
Shooting began on 9 May 1962. Perplexed by the seemingly chaotic, incessant improvisation on the set, Deena Boyer, the director's American press officer at the time, asked for a rationale. Fellini told her that he hoped to convey the three levels "on which our minds live: the past, the present, and the conditional - the realm of fantasy". After shooting wrapped on 14 October, Nino Rota composed various circus marches and fanfares that would later become signature tunes of the maestro's cinema. Nominated for four Oscars, "8½" won awards for best foreign language film and best costume design in black-and-white. In California for the ceremony, Fellini toured Disneyland with Walt Disney the day after.
Increasingly attracted to parapsychology, Fellini met the Turin magician Gustavo Rol in 1963. Rol, a former banker, introduced him to the world of Spiritism and séances. In 1964, Fellini took LSD under the supervision of Emilio Servadio, his psychoanalyst during the 1954 production of "La Strada". For years reserved about what actually occurred that Sunday afternoon, he admitted in 1992 that
objects and their functions no longer had any significance. All I perceived was perception itself, the hell of forms and figures devoid of human emotion and detached from the reality of my unreal environment. I was an instrument in a virtual world that constantly renewed its own meaningless image in a living world that was itself perceived outside of nature. And since the appearance of things was no longer definitive but limitless, this paradisiacal awareness freed me from the reality external to my self. The fire and the rose, as it were, became one.
Fellini's hallucinatory insights were given full flower in his first colour feature "Juliet of the Spirits" (1965), depicting Giulietta Masina as Juliet, a housewife who rightly suspects her husband's infidelity and succumbs to the voices of spirits summoned during a séance at her home. Her sexually voracious next door neighbor Suzy (Sandra Milo) introduces Juliet to a world of uninhibited sensuality but Juliet is haunted by childhood memories of her Catholic guilt and a teenaged friend who committed suicide. Complex and filled with psychological symbolism, the film is set to a jaunty score by Nino Rota.
To help promote "Satyricon" in the United States, Fellini flew to Los Angeles in January 1970 for interviews with Dick Cavett and David Frost. He also met with film director Paul Mazursky who wanted to star him alongside Donald Sutherland in his new film, "Alex in Wonderland". In February, Fellini scouted locations in Paris for "The Clowns", a docufiction both for cinema and television, based on his childhood memories of the circus and a "coherent theory of clowning." As he saw it, the clown "was always the caricature of a well-established, ordered, peaceful society. But today all is temporary, disordered, grotesque. Who can still laugh at clowns?... All the world plays a clown now."
In March 1971, Fellini began production on "Roma", a seemingly random collection of episodes informed by the director's memories and impressions of Rome. The "diverse sequences," writes Fellini scholar Peter Bondanella, "are held together only by the fact that they all ultimately originate from the director’s fertile imagination." The film's opening scene anticipates "Amarcord" while its most surreal sequence involves an ecclesiastical fashion show in which nuns and priests roller skate past shipwrecks of cobwebbed skeletons.
Over a period of six months between January and June 1973, Fellini shot the Oscar-winning "Amarcord". Loosely based on the director's 1968 autobiographical essay "My Rimini", the film depicts the adolescent Titta and his friends working out their sexual frustrations against the religious and Fascist backdrop of a provincial town in Italy during the 1930s. Produced by Franco Cristaldi, the seriocomic movie became Fellini's second biggest commercial success after "La Dolce Vita". Circular in form, "Amarcord" avoids plot and linear narrative in a way similar to "The Clowns" and "Roma". The director's overriding concern with developing a poetic form of cinema was first outlined in a 1965 interview he gave to "The New Yorker" journalist Lillian Ross: "I am trying to free my work from certain constrictions – a story with a beginning, a development, an ending. It should be more like a poem with metre and cadence."
Organized by his publisher Diogenes Verlag in 1982, the first major exhibition of 63 drawings by Fellini was held in Paris, Brussels, and the Pierre Matisse Gallery in New York. A gifted caricaturist, much of the inspiration for his sketches was derived from his own dreams while the films-in-progress both originated from and stimulated drawings for characters, decor, costumes and set designs. Under the title, "I disegni di Fellini" (Fellini's Designs), he published 350 drawings executed in pencil, watercolours, and felt pens.
On 6 September 1985 Fellini was awarded the Golden Lion for lifetime achievement at the 42nd Venice Film Festival. That same year, he became the first non-American to receive the Film Society of Lincoln Center’s annual award for cinematic achievement.
Long fascinated by the writings of Carlos Castaneda, Fellini accompanied the Peruvian author on a journey to the Yucatán to assess the feasibility of a film. After first meeting Castaneda in Rome in October 1984, Fellini drafted a treatment with Pinelli titled "Viaggio a Tulun". Producer Alberto Grimaldi, prepared to buy film rights to all of Castaneda's work, then paid for pre-production research taking Fellini and his entourage from Rome to Los Angeles and the jungles of Mexico in October 1985. When Castaneda inexplicably disappeared and the project fell through, Fellini's mystico-shamanic adventures were scripted with Pinelli and serialized in "Corriere della Sera" in May 1986. A barely veiled satirical interpretation of Castaneda's work, "Viaggio a Tulun" was published in 1989 as a graphic novel with artwork by Milo Manara and as "Trip to Tulum" in America in 1990.
For "Intervista", produced by Ibrahim Moussa and RAI Television, Fellini intercut memories of the first time he visited Cinecittà in 1939 with present-day footage of himself at work on a screen adaptation of Franz Kafka’s "Amerika". A meditation on the nature of memory and film production, it won the special 40th Anniversary Prize at Cannes and the 15th Moscow International Film Festival Golden Prize. In Brussels later that year, a panel of thirty professionals from eighteen European countries named Fellini the world’s best director and "8½" the best European film of all time.
In early 1989 Fellini began production on "The Voice of the Moon", based on Ermanno Cavazzoni’s novel, "Il poema dei lunatici" ("The Lunatics' Poem"). A small town was built at Empire Studios on the via Pontina outside Rome. Starring Roberto Benigni as Ivo Salvini, a madcap poetic figure newly released from a mental institution, the character is a combination of "La Strada"'s Gelsomina, Pinocchio, and Italian poet Giacomo Leopardi. Fellini improvised as he filmed, using as a guide a rough treatment written with Pinelli. Despite its modest critical and commercial success in Italy, and its warm reception by French critics, it failed to interest North American distributors.
Fellini won the "Praemium Imperiale", the equivalent of the Nobel Prize in the visual arts, awarded by the Japan Art Association in 1990.
In July 1991 and April 1992, Fellini worked in close collaboration with Canadian filmmaker Damian Pettigrew to establish "the longest and most detailed conversations ever recorded on film". Described as the "Maestro's spiritual testament” by his biographer Tullio Kezich, excerpts culled from the conversations later served as the basis of their feature documentary, "" (2002) and the book, "". Finding it increasingly difficult to secure financing for feature films, Fellini developed a suite of television projects whose titles reflect their subjects: "Attore", "Napoli", "L’Inferno", "L'opera lirica", and "L’America".
In April 1993 Fellini received his fifth Oscar, for lifetime achievement, "in recognition of his cinematic accomplishments that have thrilled and entertained audiences worldwide". On 16 June, he entered the Cantonal Hospital in Zürich for an angioplasty on his femoral artery but suffered a stroke at the Grand Hotel in Rimini two months later. Partially paralyzed, he was first transferred to Ferrara for rehabilitation and then to the Policlinico Umberto I in Rome to be near his wife, also hospitalized. He suffered a second stroke and fell into an irreversible coma.
Fellini died in Rome on 31 October 1993, at the age of 73, a day after his 50th wedding anniversary, following a heart attack he had suffered a few weeks earlier. The memorial service, in Studio 5 at Cinecittà, was attended by an estimated 70,000 people. At Giulietta Masina's request, trumpeter Mauro Maur played Nino Rota's "Improvviso dell'Angelo" during the ceremony.
Five months later, on 23 March 1994, Masina died of lung cancer. Fellini, Masina and their son, Pierfederico, are buried in a bronze sepulchre sculpted by Arnaldo Pomodoro. Designed as a ship's prow, the tomb is at the main entrance to the Cemetery of Rimini. The Federico Fellini Airport in Rimini is named in his honour.
Fellini was raised in a Roman Catholic family and considered himself a Catholic, but avoided formal activity in the Catholic Church. Fellini's films include Catholic themes; some celebrate Catholic teachings, while others criticize or ridicule church dogma.
While Fellini was for the most part indifferent to politics, he had a general dislike of authoritarian institutions, and is interpreted by Bondanella as believing in "the dignity and even the nobility of the individual human being". In a 1966 interview, he said, "I make it a point to see if certain ideologies or political attitudes threaten the private freedom of the individual. But for the rest, I am not prepared nor do I plan to become interested in politics."
Despite various famous Italian actors favouring the Communists, Fellini was not left-wing. It is rumored that he supported Christian Democracy (DC). Bondanella writes that DC "was far too aligned with an extremely conservative and even reactionary pre-Vatican II church to suit Fellini's tastes", but Fellini opposed the '68 Movement and befriended Giulio Andreotti.
Apart from satirizing Silvio Berlusconi and mainstream television in "Ginger and Fred", Fellini rarely expressed political views in public and never directed an overtly political film. He directed two electoral television spots during the 1990s: one for DC and another for the Italian Republican Party (PRI). His slogan "Non si interrompe un'emozione" ("Don't interrupt an emotion") was directed against the excessive use of TV advertisements. The Democratic Party of the Left also used the slogan in the referendums of 1995.
Personal and highly idiosyncratic visions of society, Fellini's films are a unique combination of memory, dreams, fantasy and desire. The adjectives "Fellinian" and "Felliniesque" are "synonymous with any kind of extravagant, fanciful, even baroque image in the cinema and in art in general". "La Dolce Vita" contributed the term "paparazzi" to the English language, derived from Paparazzo, the photographer friend of journalist Marcello Rubini (Marcello Mastroianni).
Contemporary filmmakers such as Tim Burton, Terry Gilliam, Emir Kusturica, and David Lynch have cited Fellini's influence on their work.
Polish director Wojciech Has, whose two best-received films, "The Saragossa Manuscript" (1965) and "The Hour-Glass Sanatorium" (1973), are examples of modernist fantasies, has been compared to Fellini for the sheer "luxuriance of his images".
"I Vitelloni" inspired European directors Juan Antonio Bardem, Marco Ferreri, and Lina Wertmüller and influenced Martin Scorsese's "Mean Streets" (1973), George Lucas's "American Graffiti" (1973), Joel Schumacher's "St. Elmo's Fire" (1985), and Barry Levinson's "Diner" (1982), among many others. When the American magazine "Cinema" asked Stanley Kubrick in 1963 to name his ten favorite films, he ranked "I Vitelloni" number one.
"Nights of Cabiria" was adapted as the Broadway musical "Sweet Charity" and the movie "Sweet Charity" (1969) by Bob Fosse starring Shirley MacLaine. "City of Women" was adapted for the Berlin stage by Frank Castorf in 1992.
"8½" inspired, among others, "Mickey One" (Arthur Penn, 1965), "Alex in Wonderland" (Paul Mazursky, 1970), "Beware of a Holy Whore" (Rainer Werner Fassbinder, 1971), "Day for Night" (François Truffaut, 1973), "All That Jazz" (Bob Fosse, 1979), "Stardust Memories" (Woody Allen, 1980), "Sogni d'oro" (Nanni Moretti, 1981), "Parad Planet" (Vadim Abdrashitov, 1984), "La Pelicula del rey" (Carlos Sorin, 1986), "Living in Oblivion" (Tom DiCillo, 1995), "8½ Women" (Peter Greenaway, 1999), "Falling Down" (Joel Schumacher, 1993), and the Broadway musical "Nine" (Maury Yeston and Arthur Kopit, 1982). "Yo-Yo Boing!" (1998), a Spanish novel by Puerto Rican writer Giannina Braschi, features a dream sequence with Fellini inspired by "8½".
Fellini's work is referenced on the albums "Fellini Days" (2001) by Fish, "Another Side of Bob Dylan" (1964) by Bob Dylan with "Motorpsycho Nitemare", "Funplex" (2008) by the B-52's with the song "Juliet of the Spirits", and in the opening traffic jam of the music video "Everybody Hurts" by R.E.M. American singer Lana Del Rey has cited Fellini as an influence. His work also influenced the American TV shows "Northern Exposure" and "Third Rock from the Sun". Wes Anderson's short film "Castello Cavalcanti" (2013) is in many places a direct homage to Fellini.
Various film-related material and personal papers of Fellini are in the Wesleyan University Cinema Archives, to which scholars and media experts have full access. In October 2009, the Jeu de Paume in Paris opened an exhibit devoted to Fellini that included ephemera, television interviews, behind-the-scenes photographs, "Book of Dreams" (based on 30 years of the director's illustrated dreams and notes), along with excerpts from "La dolce vita" and "8½".
In 2014, the Blue Devils Drum and Bugle Corps of Concord, California, performed "Felliniesque", a show themed around Fellini's work, with which they won a record 16th Drum Corps International World Class championship with a record score of 99.650. That same year, the weekly entertainment-trade magazine "Variety" announced that French director Sylvain Chomet was moving forward with "The Thousand Miles", a project based on various Fellini works, including his unpublished drawings and writings.
Fleetwood Mac
Fleetwood Mac are a British-American rock band, formed in London in 1967. They have sold more than 120 million records worldwide, making them one of the world's best-selling bands. As early as 1979, Fleetwood Mac were honoured with a star on the Hollywood Walk of Fame. In 1998 the band was inducted into the Rock and Roll Hall of Fame and received the Brit Award for Outstanding Contribution to Music.
Fleetwood Mac was founded by guitarist Peter Green, drummer Mick Fleetwood and guitarist Jeremy Spencer. Bassist John McVie completed the lineup for their self-titled debut album. Danny Kirwan joined as a third guitarist in 1968. Keyboardist Christine Perfect, who contributed as a session musician from the second album, married McVie and joined in 1970. At this time it was primarily a British blues band, scoring a UK number one with "Albatross", and also had other hits such as the singles "Oh Well" and "Man of the World". All three guitarists left in succession during the early 1970s, to be replaced by guitarists Bob Welch and Bob Weston and vocalist Dave Walker. By 1974, all three had either departed or been dismissed, leaving the band without a male lead vocalist or guitarist.
In late 1974, while Fleetwood was scouting studios in Los Angeles, he was introduced to folk-rock duo Lindsey Buckingham and Stevie Nicks. Fleetwood Mac soon asked Buckingham to be their new lead guitarist, and Buckingham agreed on condition that Nicks would also join the band. The addition of Buckingham and Nicks gave the band a more pop rock sound, and their 1975 self-titled album, "Fleetwood Mac", reached No. 1 in the United States. "Rumours" (1977), Fleetwood Mac's second album after the arrival of Buckingham and Nicks, produced four U.S. Top 10 singles and remained at number one on the American albums chart for 31 weeks. It also reached the top spot in various countries around the world and won a Grammy Award for Album of the Year in 1978. "Rumours" has sold over 40 million copies worldwide, making it the eighth-highest-selling album in history. The band went through personal turmoil while recording the album, as both the romantic partnerships in the band (one being John and Christine McVie, and the other being Buckingham and Nicks) separated while continuing to make music together.
The band's personnel remained stable through three more studio albums, but by the late 1980s began to disintegrate. After Buckingham and Nicks each left the band, they were replaced by a number of other guitarists and vocalists. A 1993 one-off performance for the first inauguration of Bill Clinton featured the lineup of Fleetwood, John McVie, Christine McVie, Nicks, and Buckingham back together for the first time in six years. A full reunion occurred four years later, and the group released their fourth U.S. No. 1 album, "The Dance" (1997), a live compilation of their work. Christine McVie left the band in 1998, but continued to work with the band in a session capacity. Meanwhile, the group remained together as a four-piece, releasing their most recent studio album, "Say You Will", in 2003. Christine McVie rejoined the band full-time in 2014. In 2018, Buckingham was fired from the band and was replaced by Mike Campbell, formerly of Tom Petty and the Heartbreakers, and Neil Finn of Split Enz and Crowded House.
Fleetwood Mac were formed in July 1967 in London, England, when Peter Green left the British blues band John Mayall & the Bluesbreakers. Green had previously replaced guitarist Eric Clapton in the Bluesbreakers and had received critical acclaim for his work on their album "A Hard Road". Green had been in two bands with Mick Fleetwood, Peter B's Looners and the subsequent Shotgun Express (which featured a young Rod Stewart as vocalist), and suggested Fleetwood as a replacement for drummer Aynsley Dunbar when Dunbar left the Bluesbreakers to join the new Jeff Beck/Rod Stewart band. John Mayall agreed and Fleetwood joined the Bluesbreakers.
The Bluesbreakers then consisted of Green, Fleetwood, John McVie and Mayall. Mayall gave Green free recording time as a gift, in which Fleetwood, McVie and Green recorded five songs. The fifth song was an instrumental that Green named after the rhythm section, "Fleetwood Mac" ("Mac" being short for McVie).
Soon after this, Green suggested to Fleetwood that they form a new band. The pair wanted McVie on bass guitar and named the band 'Fleetwood Mac' to entice him, but McVie opted to keep his steady income with Mayall rather than take a risk with a new band. In the meantime Peter Green and Mick Fleetwood had teamed up with slide guitarist Jeremy Spencer and bassist Bob Brunning. Brunning was in the band on the understanding that he would leave if McVie agreed to join. The Green, Fleetwood, Spencer, Brunning version of the band made its debut on 13 August 1967 at the Windsor Jazz and Blues Festival as 'Peter Green's Fleetwood Mac, also featuring Jeremy Spencer'. Brunning played only a few gigs with Fleetwood Mac. Within weeks of this show, John McVie agreed to join the band as permanent bassist.
Fleetwood Mac's self-titled debut album was a no-frills blues album and was released by the Blue Horizon label in February 1968. There were no other players on the album (except on the song "Long Grey Mare", which was recorded with Brunning on bass). The album was successful in the UK and reached No. 4, although it did not have any singles on it. The band soon released two singles: Green's "Black Magic Woman" (later a big hit for Santana) and "Need Your Love So Bad".
The band's second studio album, "Mr. Wonderful", was released in August 1968. Like their first album, it was all blues. The album was recorded live in the studio with miked amplifiers and a PA system, rather than being plugged into the board. They also added horns and featured a friend of the band on keyboards, Christine Perfect of Chicken Shack.
Shortly after the release of their second album, Fleetwood Mac added 18-year-old guitarist Danny Kirwan to their line-up. He was recruited from the South London blues trio Boilerhouse, which consisted of Kirwan on guitar, Trevor Stevens on bass and Dave Terrey on drums. Green and Fleetwood had watched Boilerhouse rehearse in a basement boiler-room, and Green had been so impressed that he invited the band to play support slots for Fleetwood Mac. Green wanted Boilerhouse to become a professional band but Stevens and Terrey were not prepared to turn professional, so Green tried to find another rhythm section for Kirwan by placing an ad in Melody Maker. There were over 300 applicants, but when Green and Fleetwood ran auditions at the Nag's Head in Battersea (home of the Mike Vernon Blue Horizon Club) the hard-to-please Green could not find anyone good enough. Fleetwood invited Kirwan to join Fleetwood Mac as a third guitarist.
Green had been frustrated that Jeremy Spencer did not contribute to his songs. Kirwan, a talented self-taught guitarist, had a signature vibrato and a unique style that added a new dimension to the band's sound. In November 1968, with Kirwan in the band, they released their first number one single in Europe, "Albatross", on which Kirwan duetted with Green. Green said later that the success of "Albatross" was thanks to Kirwan. "If it wasn't for Danny, I would never have had a number one hit record." Around this time they released the compilation album "English Rose", which contained half of "Mr Wonderful" plus new songs from Kirwan. Their second compilation album, "The Pious Bird of Good Omen", contained a collection of singles, B-sides and a selection of work the band had done with Eddie Boyd.
On tour in the US in January 1969, the band recorded an album at the soon-to-close Chess Records Studio with some of the blues legends of Chicago, including Willie Dixon, Buddy Guy and Otis Spann. These were Fleetwood Mac's last all-blues recordings. Along with the change of style the band was also going through label changes. Up until that point they had been on the Blue Horizon label, but with Kirwan in the band the musical possibilities had become too diverse for a blues-only label. The band signed with Immediate Records and released the single "Man of the World", which became another British and European hit. For the B-side Spencer fronted Fleetwood Mac as "Earl Vince and the Valiants" and recorded "Somebody's Gonna Get Their Head Kicked In Tonite", typifying the more raucous rock 'n' roll side of the band. Immediate Records was in bad shape, however, and the band shopped around for a new deal. The Beatles wanted the band on Apple Records (Mick Fleetwood and George Harrison were brothers-in-law), but the band's manager Clifford Davis decided to go with Warner Bros. Records (through Reprise Records, a Frank Sinatra-founded label), the label they have stayed with ever since.
Under the wing of Reprise, Fleetwood Mac released their third studio album, "Then Play On", in September 1969. Although the initial pressing of the American release of this album was the same as the British version, it was altered to contain the song "Oh Well", which featured consistently in live performances from the time of its release through 1997 and again starting in 2009. "Then Play On", the band's first rock album, featured only the songs of Kirwan and Green. Jeremy Spencer, meanwhile, had recorded a solo album of 1950s-style rock and roll songs, backed by the rest of the band except Green.
By 1970 Peter Green, the frontman of the band, had become a user of LSD. During the band's European tour, Green experienced a bad acid trip at a hippie commune in Munich. Clifford Davis, the band's manager, singled out this incident as the crucial point in Green's mental decline. He said: "The truth about Peter Green and how he ended up how he did is very simple. We were touring Europe in late 1969. When we were in Germany, Peter told me he had been invited to a party. I knew there were going to be a lot of drugs around and I suggested that he didn't go. But he went anyway and I understand from him that he took what turned out to be very bad, impure LSD. He was never the same again." German author and filmmaker Rainer Langhans stated in his autobiography that he and Uschi Obermaier met Green in Munich and invited him to their Highfisch-Kommune, where the drinks were spiked with acid. Langhans and Obermaier were planning to organise an open-air "Bavarian Woodstock", for which they wanted Jimi Hendrix and The Rolling Stones to be the main acts, and they hoped Green would help them to get in contact with The Rolling Stones.
Green's last hit with Fleetwood Mac was "The Green Manalishi (With the Two-Prong Crown)". The track was recorded at Warner-Reprise's studios in Hollywood on the band's third US tour in April 1970, a few weeks before Green left the band. A live performance was recorded at the Boston Tea Party in February 1970, and the song was later recorded by Judas Priest. "Green Manalishi" was released as Green's mental stability deteriorated. He wanted the band to give all their money to charity, but the other members of the band disagreed.
In April, Green announced his decision to quit the band after the completion of their European tour. His last show with Fleetwood Mac was on 20 May 1970. During that show the band went past their allotted time and the power was shut off, although Mick Fleetwood kept drumming. Some of the Boston Tea Party recordings (5/6/7 February 1970) were eventually released in the 1980s as the "Live in Boston" album. A more complete remastered three-volume compilation was released by Snapper Music in the late 1990s.
Kirwan and Spencer were left with the task of replacing Green in their live shows and on their recordings. In September 1970 Fleetwood Mac released their fourth studio album, "Kiln House." Kirwan's songs on the album moved the band in the direction of rock, while Spencer's contributions focused on re-creating the country-tinged "Sun Sound" of the late 1950s. Christine Perfect, who had retired from the music business after one unsuccessful solo album, contributed (uncredited) to "Kiln House", singing backup vocals and playing keyboards. She also drew the album cover. After "Kiln House", Fleetwood Mac were progressing and developing a new sound, and she was invited to join the band to help fill out the rhythm section. They released a single, Danny Kirwan's "Dragonfly" b/w "The Purple Dancer" in the UK and certain European countries, but despite good notices in the press it was not a success. The B-side has been reissued only once, on a Reprise German and Dutch-only "Best of" album.
Christine Perfect, who by this point had married bassist John McVie, made her first appearance with the band as Christine McVie at Bristol University, England, in May 1969, just as she was leaving Chicken Shack. She had had success with the Etta James classic "I'd Rather Go Blind" and was twice voted female artist of the year in England. Christine McVie played her first gig as an official member of Fleetwood Mac on 1 August 1970 in New Orleans, Louisiana. CBS Records, which now owned Blue Horizon (except in the US and Canada), released the band's fifth compilation album, "The Original Fleetwood Mac", containing previously unreleased material. The album was relatively successful, and the band continued to gain popularity.
While on tour in February 1971, Jeremy Spencer said he was going out to "get a magazine" but never returned. After several days of frantic searching the band discovered that Spencer had joined a religious group, the Children of God. The band were liable for the remaining shows on the tour and asked Peter Green to step in as a replacement. Green brought along his friend Nigel Watson, who played the congas. (Twenty-five years later Green and Watson collaborated again to form the Peter Green Splinter Group.) Green was only back with Fleetwood Mac temporarily and the band began a search for a new guitarist. Green insisted on playing only new material and none he had written. He and Watson played only the last week of shows. The San Bernardino show on 20 February was taped.
In the summer of 1971 the band held auditions for a replacement guitarist at their large country home, "Benifold", which they had jointly bought with their manager Davis for £23,000 prior to the "Kiln House" tour. A friend of the band, Judy Wong, recommended her high school friend Bob Welch, who was living in Paris, France, at the time. The band held a few meetings with Welch and decided to hire him, without actually playing with him, after they heard a tape of his songs.
In September 1971 the band released their fifth studio album, "Future Games". As a result of Welch's arrival and Spencer's departure, the album was different from anything they had done previously. While it became the band's first studio album to miss the charts in the UK, it helped to expand the band's appeal in the United States. In Europe CBS released Fleetwood Mac's first Greatest Hits album, which mostly consisted of songs by Peter Green, with one song by Spencer and one by Kirwan.
In 1972, six months after the release of "Future Games", the band released their sixth studio album, "Bare Trees". Mostly composed by Kirwan, "Bare Trees" featured the Welch-penned single "Sentimental Lady", which would be a much bigger hit for Welch five years later when he re-recorded it for his solo album "French Kiss", backed by Mick Fleetwood and Christine McVie. "Bare Trees" also featured "Spare Me a Little of Your Love", a bright Christine McVie song that became a staple of the band's live act throughout the early to mid-1970s.
While the band was doing well in the studio, their tours started to be problematic. By 1972 Danny Kirwan had developed an alcohol dependency and was becoming alienated from Welch and the McVies. When Kirwan smashed his Gibson Les Paul Custom guitar before a concert on a US tour in August 1972, refused to go on stage and criticised the band afterwards, Fleetwood fired him. Fleetwood said later that the pressure had become too much for Kirwan, and he had suffered a breakdown.
The band's line-up changed constantly across the three albums released in this period. In September 1972 the band added guitarist Bob Weston and vocalist Dave Walker, formerly of Savoy Brown and Idle Race. Bob Weston was well known as a slide guitarist and had known the band from his touring period with Long John Baldry. Fleetwood Mac also hired Savoy Brown's road manager, John Courage. Fleetwood, the McVies, Welch, Weston and Walker recorded the band's seventh studio album, "Penguin", which was released in January 1973. After the tour the band fired Walker because they felt his vocal style and attitude did not fit well with the rest of the band.
The remaining five members carried on and recorded the band's eighth studio album, "Mystery to Me", six months later. This album contained Welch's song "Hypnotized", which received a great amount of airplay on the radio and became one of the band's most successful songs to date in the US. The band was proud of the new album and anticipated that it would be a smash hit. While it did eventually go Gold, personal problems within the band emerged. The McVies' marriage was under a lot of stress, which was aggravated by their constant working with each other and by John McVie's considerable alcohol abuse. Subsequent lack of touring meant that the album was unable to chart as high as the previous one.
During the 1973 US tour to promote "Mystery to Me", Weston had an affair with Fleetwood's wife Jenny Boyd Fleetwood, sister of Pattie Boyd Harrison. Fleetwood was emotionally devastated by this and could not continue with the tour. Courage fired Weston, and two weeks into the tour, with another twenty-six concerts still scheduled, the remaining dates were cancelled. The last date played was Lincoln, Nebraska, on 20 October 1973. In a late-night meeting after that show, the band told their sound engineer that the tour was over and Fleetwood Mac was splitting up.
In late 1973, after the collapse of the US tour, the band's manager, Clifford Davis, was left with major touring commitments to fulfil and no band. Fleetwood Mac had "temporarily disbanded" in Nebraska and its members had gone their separate ways. Davis was concerned that failing to complete the tour would destroy his reputation with bookers and promoters. He sent the band a letter in which he said he "hadn't slaved for years to be brought down by the whims of irresponsible musicians". Davis claimed that he owned the name 'Fleetwood Mac' and the right to choose the band members, and he recruited members of the band Legs, which had recently issued one single under Davis's management, to tour the US in early 1974 under the name 'The New Fleetwood Mac' and perform the rescheduled dates. This band, who former guitarist Dave Walker said were "very good", consisted of Elmer Gantry (Dave Todd, formerly of Velvet Opera: vocals, guitar), Kirby Gregory (formerly of Curved Air: guitar), Paul Martinez (formerly of the Downliners Sect: bass), John Wilkinson (also known as Dave Wilkinson: keyboards) and Australian drummer Craig Collinge (formerly of Manfred Mann Ch III, the Librettos, Procession and Third World War).
The members of this group were told that Mick Fleetwood would join them after the tour had started, to validate the use of the name, and claimed that Fleetwood had been involved in planning it. Davis and others stated that Fleetwood had committed himself to the project and had given instructions to hire musicians and rehearse the band. Davis said Collinge had been hired only as a temporary stand-in drummer for rehearsals and the first two gigs, and that Fleetwood had agreed to appear on the rest of the tour after he had sorted out personal matters, but then had backed out after the tour started. Fleetwood said later that he had not promised to appear on the tour.
The 'New Fleetwood Mac' tour began on 16 January 1974 at the Syria Mosque in Pittsburgh, Pennsylvania, and according to one of the band members, the first concert "went down a storm". The promoter was dubious at first, but said later that the crowd had loved the band and they were "actually really good." More successful gigs followed, but then word got around that this was not the real Fleetwood Mac and audiences became hostile. The band was turned away from several gigs and the next half-dozen were pulled by promoters. The band struggled on and played further dates in the face of increasing hostility and heckling, more dates were pulled, the keyboard player quit, and after a concert in Edmonton where bottles were thrown at the stage, the tour collapsed. The band dissolved and the remainder of the tour was cancelled.
The lawsuit that followed regarding who owned the rights to the name 'Fleetwood Mac' put the original Fleetwood Mac on hiatus for almost a year. Although the band was named after Mick Fleetwood and John McVie, they had apparently signed contracts in which they had forfeited the rights to the name. Their record company, Warner Bros. Records, when appealed to, said they didn't know who owned it. The dispute was eventually settled amicably out of court, four years later, in what was described as "a reasonable settlement not unfair to either party." In later years Fleetwood said that, in the end, he was grateful to Davis because the lawsuit was the reason the band moved to California.
Nobody from the alternative lineup was ever made a part of the real Fleetwood Mac, although some of them later played in Danny Kirwan's studio band. Gantry and Gregory went on to become members of Stretch, whose 1975 UK hit single "Why Did You Do It" was written about the touring debacle. Gantry later collaborated with the Alan Parsons Project. Martinez went on to play with the Deep Purple offshoot Paice Ashton Lord, as well as Robert Plant's backing band.
While the other band had been on tour, Welch stayed in Los Angeles and connected with entertainment attorneys. He realised that the original Fleetwood Mac was being neglected by Warner Bros and that they would need to change their base of operation from England to America, to which the rest of the band agreed. Rock promoter Bill Graham wrote a letter to Warner Bros to convince them that the real Fleetwood Mac was, in fact, Fleetwood, Welch, and the McVies. This did not end the legal battle but the band was able to record as Fleetwood Mac again. Instead of hiring another manager, Fleetwood Mac, having re-formed, decided to manage themselves.
In September 1974, Fleetwood Mac signed a new recording contract with Warner Bros, but remained on the Reprise label. The band released their ninth studio album, "Heroes Are Hard to Find", in September 1974 and, for the first time in its history, the band had only one guitarist. While on tour they added a second keyboardist, Doug Graves, who had been an engineer on "Heroes Are Hard to Find". In late 1974 Graves was preparing to become a permanent member of the band by the end of their US tour.
However, Graves did not ultimately join full-time; Christine McVie explained the decision in a 1980 interview.
Robert ("Bobby") Hunt, who had been in the band Head West with Bob Welch back in 1970, replaced Graves. Neither musician proved to be a long-term addition to the line-up. Welch left soon after the tour ended (on 5 December 1974 at Cal State University), having grown tired of touring and legal struggles. Nevertheless, the tour had enabled the "Heroes" album to reach a higher position on the American charts than any of the band's previous records.
After Welch announced that he was leaving the band, Fleetwood began searching for a replacement. While Fleetwood was checking out Sound City Studios in Los Angeles, the house engineer, Keith Olsen, played him a track he had recorded in the studio, "Frozen Love", from the album "Buckingham Nicks" (1973). Fleetwood liked it and was introduced to the guitarist from the band, Lindsey Buckingham, who was at Sound City that day recording demos. Fleetwood asked him to join Fleetwood Mac and Buckingham agreed, on the condition that his music partner and girlfriend, Stevie Nicks, be included. Buckingham and Nicks joined the band on New Year's Eve 1974, within four weeks of the previous incarnation splitting.
In 1975, the new line-up released another self-titled album, their tenth studio album. The album was a breakthrough for the band and became a huge hit, reaching No. 1 in the US and selling over 7 million copies. Among the hit singles from this album were Christine McVie's "Over My Head" and "Say You Love Me" and Stevie Nicks's "Rhiannon", as well as the much-played album track "Landslide", a live rendition of which became a hit twenty years later on "The Dance" album.
In 1976, the band was suffering from severe stress. With success came the end of John and Christine McVie's marriage, as well as Buckingham and Nicks's long-term romantic relationship. Fleetwood, meanwhile, was in the midst of divorce proceedings from his wife, Jenny. The pressure on Fleetwood Mac to release a successful follow-up album, combined with their new-found wealth, led to creative and personal tensions which were allegedly fuelled by high consumption of drugs and alcohol.
The band's eleventh studio album, "Rumours" (the band's first release on the main Warner label after Reprise was retired and all of its acts were reassigned to the parent label), was released in the spring of 1977. In this album, the band members laid bare the emotional turmoil they were experiencing at the time. "Rumours" was critically acclaimed and won the Grammy Award for Album of the Year in 1978. The album generated multiple Top Ten singles, including Buckingham's "Go Your Own Way", Nicks's US No. 1 "Dreams" and Christine McVie's "Don't Stop" and "You Make Loving Fun". Buckingham's "Second Hand News", Nicks's "Gold Dust Woman" and "The Chain" (the only song written by all five band members) also received significant radio airplay. By 2003 "Rumours" had sold over 19 million copies in the US alone (certified as a diamond album by the RIAA) and a total of 40 million copies worldwide, bringing it to eighth on the list of best-selling albums. Fleetwood Mac supported the album with a lucrative tour.
On 10 October 1979, Fleetwood Mac were honoured with a star on the Hollywood Walk of Fame for their contributions to the music industry at 6608 Hollywood Boulevard.
Buckingham convinced Fleetwood to let his work on their next album be more experimental, and to be allowed to work on tracks at home before bringing them to the rest of the band in the studio. The result of this, the band's twelfth studio album "Tusk", was a 20-track double album released in 1979. It produced three hit singles: Lindsey Buckingham's "Tusk" (US No. 8), which featured the USC Trojan Marching Band, Christine McVie's "Think About Me" (US No. 20), and Stevie Nicks's six-and-a-half minute opus "Sara" (US No. 7). "Sara" was cut to four-and-a-half minutes for both the hit single and the first CD-release of the album, but the unedited version has since been restored on the 1988 greatest hits compilation, the 2004 reissue of "Tusk" and Fleetwood Mac's 2002 release of "The Very Best of Fleetwood Mac". Original guitarist Peter Green also took part in the sessions of "Tusk" although his playing, on the Christine McVie track "Brown Eyes", is not credited on the album. In an interview in 2019 Fleetwood described "Tusk" as his "personal favourite" and said, "Kudos to Lindsey ... for us not doing a replica of 'Rumours'."
"Tusk" sold four million copies worldwide. Fleetwood blamed the album's relative lack of commercial success on the RKO radio chain having played the album in its entirety prior to release, thereby allowing mass home taping.
The band embarked on an 11-month tour to support and promote "Tusk". They travelled across the world, including the US, Australia, New Zealand, Japan, France, Belgium, Germany, the Netherlands, and the United Kingdom. In Germany, they shared the bill with reggae superstar Bob Marley. On this world tour, the band recorded music for their first live album, which was released at the end of 1980.
The band's thirteenth studio album, "Mirage", was released in 1982. Following 1981 solo albums by Nicks ("Bella Donna"), Fleetwood ("The Visitor"), and Buckingham ("Law and Order"), there was a return to a more conventional approach. Buckingham had been chided by critics, fellow band members and music business managers for the lesser commercial success of "Tusk". Recorded at Château d'Hérouville in France and produced by Richard Dashut, "Mirage" was an attempt to recapture the huge success of "Rumours". Its hits included Christine McVie's "Hold Me" and "Love in Store" (co-written by Robbie Patton and Jim Recor, respectively), Stevie Nicks's "Gypsy", and Lindsey Buckingham's "Oh Diane", which made the Top 10 in the UK. Minor hits were also scored with Buckingham's "Eyes of the World" and "Can't Go Back".
In contrast to the Tusk Tour, the band embarked on only a short tour of 18 American cities, the Los Angeles show being recorded and released on video. They also headlined the first US Festival, on 5 September 1982, for which the band was paid $500,000. "Mirage" was certified double platinum in the US.
Following "Mirage" the band went on hiatus, which allowed members to pursue solo careers. Stevie Nicks released two more solo albums (1983's "The Wild Heart" and 1985's "Rock a Little"). Lindsey Buckingham issued "Go Insane" in 1984, the same year that Christine McVie made an eponymous album (yielding the Top 10 hit "Got a Hold on Me" and the Top 40 hit "Love Will Show Us How"). All three met with success, Nicks being the most popular. During this period Mick Fleetwood had filed for bankruptcy, Nicks was admitted to the Betty Ford Clinic for addiction problems and John McVie had suffered an addiction-related seizure, all of which were attributed to the lifestyle of excess afforded to them by their worldwide success. It was rumoured that Fleetwood Mac had disbanded, but Buckingham commented that he was unhappy to allow "Mirage" to remain as the band's last effort.
The "Rumours" line-up of Fleetwood Mac recorded one more album, their fourteenth studio album, "Tango in the Night", in 1987. As with various other Fleetwood Mac albums, the material started off as a Buckingham solo album before becoming a group project. The album went on to become their best-selling release since "Rumours", especially in the UK where it hit No. 1 three times in the following year. The album sold three million copies in the US and contained four hits: Christine McVie's "Little Lies" and "Everywhere" ('Little Lies' being co-written with McVie's new husband Eddy Quintela), Sandy Stewart and Stevie Nicks's "Seven Wonders", and Lindsey Buckingham's "Big Love". "Family Man" (Buckingham and Richard Dashut), and "Isn't It Midnight" (Christine McVie), were also released as singles, with less success.
With a ten-week tour scheduled, Buckingham backed out at the last minute, saying he felt his creativity was being stifled. A group meeting at Christine McVie's house on 7 August 1987 resulted in turmoil as tensions came to a head. Mick Fleetwood said in his autobiography that there was a physical altercation between Buckingham and Nicks. Buckingham left the band the following day. After his departure, Fleetwood Mac added two new guitarists, Billy Burnette and Rick Vito, again without auditions.
Burnette was the son of Dorsey Burnette and nephew of Johnny Burnette, both of The Rock and Roll Trio. He had already worked with Mick Fleetwood in Zoo, with Christine McVie as part of her solo band, had done some session work with Stevie Nicks, and backed Lindsey Buckingham on "Saturday Night Live". Fleetwood and Christine McVie had played on his "Try Me" album in 1985. Vito, a Peter Green admirer, had played with many artists from Bonnie Raitt to John Mayall, and worked with John McVie on two Mayall albums.
The 1987–88 "Shake the Cage" tour was the first outing for this line-up. It was successful enough to warrant the release of a concert video, entitled "Tango in the Night", which was filmed at San Francisco's Cow Palace arena in December 1987.
Capitalising on the success of "Tango in the Night", the band released a "Greatest Hits" album in 1988. It featured singles from the 1975–1988 era and included two new compositions: "No Questions Asked", written by Nicks, and "As Long as You Follow", written by McVie and Quintela. "As Long as You Follow" was released as a single in 1988 but only made No. 43 in the US and No. 66 in the UK, although it reached No. 1 on the US Adult Contemporary chart. The "Greatest Hits" album, which peaked at No. 3 in the UK and No. 14 in the US (though it has since sold over 8 million copies there), was dedicated by the band to Buckingham, with whom they were now reconciled.
In 1990, Fleetwood Mac released their fifteenth studio album, "Behind the Mask". With this album the band veered away from the stylised sound that Buckingham had evolved during his tenure in the band (which was also evident in his solo work) and developed a more adult contemporary style with producer Greg Ladanyi. The album yielded only one Top 40 hit, McVie's "Save Me". "Behind the Mask" only achieved Gold album status in the US, peaking at No. 18 on the "Billboard" album chart, though it entered the UK Albums Chart at No. 1. It received mixed reviews and was seen by some music critics as a low point for the band in the absence of Lindsey Buckingham (who had actually made a guest appearance playing on the title track). But "Rolling Stone" magazine said that Vito and Burnette were "the best thing to ever happen to Fleetwood Mac". The subsequent "Behind the Mask" tour saw the band play sold-out shows at London's Wembley Stadium. In the final show in Los Angeles, Buckingham joined the band on stage. The two women of the band, McVie and Nicks, had decided that the tour would be their last (McVie's father had died during the tour), although both stated that they would still record with the band. In 1991, however, Nicks and Rick Vito announced they were leaving Fleetwood Mac altogether.
In 1992, Mick Fleetwood arranged a 4-disc box set, spanning highlights from the band's 25-year history, entitled "25 Years – The Chain" (an edited 2-disc set was also available). A notable inclusion in the box set was "Silver Springs", a Stevie Nicks composition that was recorded during the "Rumours" sessions but was omitted from the album and used as the B-side of "Go Your Own Way". Nicks had requested use of this track for her 1991 best-of compilation "TimeSpace", but Fleetwood had refused as he had planned to include it in this collection as a rarity. The disagreement between Nicks and Fleetwood garnered press coverage and was believed to have been the main reason for Nicks leaving the band in 1991. The box set also included a new Stevie Nicks/Rick Vito composition, "Paper Doll", which was released in the US as a single and produced by Lindsey Buckingham and Richard Dashut. There were also two new Christine McVie compositions, "Heart of Stone" and "Love Shines". "Love Shines" was released as a single in the UK and elsewhere. Lindsey Buckingham also contributed a new song, "Make Me a Mask". Mick Fleetwood also released a deluxe hardcover companion book to coincide with the release of the box set, titled "My 25 Years in Fleetwood Mac". The volume featured notes written by Fleetwood detailing the band's 25-year history and many rare photographs.
The Buckingham/Nicks/McVie/McVie/Fleetwood line-up reunited in 1993 at the request of US President Bill Clinton for his first Inaugural Ball. Clinton had made Fleetwood Mac's "Don't Stop" his campaign theme song. His request for it to be performed at the Inauguration Ball was met with enthusiasm by the band, although this line-up had no intention of reuniting again.
Inspired by the new interest in the band, Mick Fleetwood, John McVie, and Christine McVie recorded another album as Fleetwood Mac, with Billy Burnette taking lead guitar duties. Burnette left in March 1993 to record a country album and pursue an acting career and Bekka Bramlett, who had worked a year earlier with Mick Fleetwood's Zoo, was recruited to take his place. Solo singer-songwriter/guitarist and Traffic member Dave Mason, who had worked with Bekka's parents Delaney & Bonnie twenty-five years earlier, was subsequently added. In March 1994 Billy Burnette, a good friend and co-songwriter with Delaney Bramlett, returned to the band with Fleetwood's blessing.
The band, minus Christine McVie, toured in 1994, opening for Crosby, Stills, & Nash and in 1995 as part of a package with REO Speedwagon and Pat Benatar. This tour saw the band perform classic Fleetwood Mac songs from their 1967–1974 era. In 1995, at a concert in Tokyo, the band was greeted by former member Jeremy Spencer, who performed a few songs with them.
On 10 October 1995, Fleetwood Mac released their sixteenth studio album, "Time", which was not a success. Although it reached the UK Top 60 for one week, the album made no impact in the US, failing to reach the "Billboard" Top 200 albums chart, a reversal for a band that had been a mainstay of that chart for most of the previous two decades. Shortly after the album's release, Christine McVie informed the band that it would be her last. Bramlett and Burnette subsequently formed a country music duo, Bekka & Billy.
Just weeks after disbanding Fleetwood Mac, Mick Fleetwood announced that he was working with Lindsey Buckingham again. John McVie was added to the sessions, and later Christine McVie. Stevie Nicks also enlisted Lindsey Buckingham to produce a song for a soundtrack.
In May 1996 Mick Fleetwood, John McVie, Christine McVie, and Stevie Nicks performed together at a private party in Louisville, Kentucky, prior to the Kentucky Derby, with Steve Winwood filling in for Lindsey Buckingham. A week later the "Twister" film soundtrack was released, which featured the Stevie Nicks-Lindsey Buckingham duet "Twisted", with Mick Fleetwood on drums. This eventually led to a full reunion of the "Rumours" line-up, which officially reformed in March 1997.
The regrouped Fleetwood Mac performed a live concert on a soundstage at Warner Bros. Studios in Burbank, California, on 22 May 1997. The concert was recorded, and from this performance came the 1997 live album "The Dance", which brought Fleetwood Mac back to the top of the US album charts for the first time in 10 years. "The Dance" returned Fleetwood Mac to a superstar status they had not enjoyed since "Tango in the Night". The album was certified for 5 million units by the RIAA. An arena tour followed the MTV premiere of "The Dance" and kept the reunited Fleetwood Mac on the road throughout much of 1997, the 20th anniversary of "Rumours". With additional musicians Neale Heywood on guitar, Brett Tuggle on keyboards, Lenny Castro on percussion, and Sharon Celani (who had toured with Fleetwood Mac in the late 1980s) and Mindy Stein on backing vocals, this would be the final appearance of the classic line-up including Christine McVie for 16 years. Heywood and Celani remain touring members to this day.
In 1998 Fleetwood Mac were inducted into the Rock and Roll Hall of Fame. Members inducted included the original band, Mick Fleetwood, John McVie, Peter Green, Jeremy Spencer and Danny Kirwan, and "Rumours"-era members Christine McVie, Stevie Nicks and Lindsey Buckingham. Bob Welch was not included, despite his key role in keeping the band alive during the early 1970s. The "Rumours"-era version of the band performed both at the induction ceremony and at the Grammy Awards programme that year. Peter Green attended the induction ceremony but did not perform with his former bandmates, opting instead to perform his composition "Black Magic Woman" with Santana, who were inducted the same night. Neither Jeremy Spencer nor Danny Kirwan attended. Fleetwood Mac also received the "Outstanding Contribution to Music" award at the Brit Awards (British Phonographic Industry Awards) the same year.
In 1998 Christine McVie left the band. Her departure left Buckingham and Nicks to sing all the lead vocals for the band's seventeenth album, "Say You Will", released in 2003, although McVie contributed some backing vocals and keyboards. The album debuted at No. 3 on the "Billboard" 200 chart (No. 6 in the UK) and yielded chart hits with "Peacekeeper" and the title track, as well as a successful world arena tour which lasted through 2004. The tour grossed $27,711,129 and was ranked No. 21 among the top 25 grossing tours of 2004.
Around 2004–05 there were rumours of a reunion of the early line-up of Fleetwood Mac involving Peter Green and Jeremy Spencer. While these two apparently remained unconvinced, in April 2006 bassist John McVie, during a question-and-answer session on the "Penguin" Fleetwood Mac fan website, said of the reunion idea:
In interviews given in November 2006 to support his solo album "Under the Skin", Buckingham stated that plans for the band to reunite for a 2008 tour were still on the cards, although recording plans had been put on hold for the foreseeable future. In an interview Stevie Nicks gave to the UK newspaper "The Daily Telegraph" in September 2007, she stated that she was unwilling to carry on with the band unless Christine McVie returned. However, in a later interview, Mick Fleetwood said "... be very happy and hopeful that we will be working again. I can tell you everyone's going to be extremely excited about what's happening with Fleetwood Mac."
On 14 March 2008, the Associated Press reported Sheryl Crow as saying that she would be working with Fleetwood Mac in 2009. Crow and Stevie Nicks had collaborated in the past and Crow had stated that Nicks had been a great teacher and inspiration to her. In a subsequent interview, Buckingham said that after discussions between the band and Crow, the potential collaboration with Crow had "lost its momentum". In an interview in June 2008 Nicks said that Crow would not be joining Fleetwood Mac as a replacement for Christine McVie. According to Nicks, "the group will start working on material and recording probably in October, and finish an album." On 7 October 2008 Mick Fleetwood confirmed on the BBC's "The One Show" that the band were working in the studio. He also announced plans for a world tour in 2009.
In late 2008, it was announced that Fleetwood Mac would tour in 2009, beginning in March. As in the 2003–2004 tour, Christine McVie would not be featured in the line-up. The tour was branded as a greatest hits show entitled "Unleashed", although album tracks such as "Storms" and "I Know I'm Not Wrong" were also played.
During their show on 20 June 2009 in New Orleans, Louisiana, Stevie Nicks premiered part of a new song that she had written about Hurricane Katrina. The song was later released as "New Orleans" on Stevie Nicks's 2011 album "In Your Dreams", with Mick Fleetwood on drums. In October and November 2009 the band toured Europe, followed by Australia and New Zealand in December. In October, "The Very Best of Fleetwood Mac" was re-released in an extended two-disc format (this format having been released in the US in 2002), entering at number six on the UK Albums Chart. On 1 November 2009 a new one-hour documentary, "Fleetwood Mac: Don't Stop", was broadcast in the UK on BBC One, featuring recent interviews with all four current band members. During the documentary Nicks gave a candid summary of the current state of her relationship with Buckingham, saying "Maybe when we're 75 and Fleetwood Mac is a distant memory, we might be friends."
On 6 November 2009, Fleetwood Mac played the last show of the European leg of their "Unleashed" tour at London's Wembley Arena. Christine McVie was present in the audience. Stevie Nicks paid tribute to her from the stage to a standing ovation from the audience, saying that she thought about her former bandmate "every day", and dedicated that night's performance of "Landslide" to her. On 19 December 2009 Fleetwood Mac played the second-to-last show of their "Unleashed" tour to a sell-out crowd in New Zealand, at what was originally intended to be a one-off event at the TSB Bowl of Brooklands in New Plymouth. Tickets, after pre-sales, sold out within twelve minutes of public release. Another date, Sunday 20 December, was added and also sold out. The tour grossed $84,900,000 and was ranked No. 13 in the highest grossing worldwide tours of 2009. On 19 October 2010, Fleetwood Mac played a private show at the Phoenician Hotel in Scottsdale, Arizona for TPG (Texas Pacific Group).
On 3 May 2011, the Fox Network broadcast an episode of "Glee" entitled "Rumours" that featured six songs from the band's 1977 album. The show sparked renewed interest in the band and its commercially most successful album, and "Rumours" re-entered the "Billboard" 200 chart at No. 11 in the same week that Stevie Nicks's new solo album "In Your Dreams" debuted at No. 6. (Nicks was quoted by "Billboard" as saying that her new album was "my own little "Rumours".") The two recordings sold about 30,000 and 52,000 units respectively. Music downloads accounted for 91 percent of the "Rumours" sales. The spike in sales for "Rumours" represented an increase of 1,951%. It was the highest chart entry by a previously issued album since the Rolling Stones' reissue of "Exile on Main St." re-entered the chart at No. 2 on 5 June 2010. In an interview in July 2012 Nicks confirmed that the band would reunite for a tour in 2013.
Original Fleetwood Mac bassist Bob Brunning died on 18 October 2011 at the age of 68. Former guitarist and singer Bob Weston was found dead on 3 January 2012 at the age of 64. Former singer and guitarist Bob Welch was found dead from a self-inflicted gunshot wound on 7 June 2012 at the age of 66. Don Aaron, a spokesman at the scene, stated, "He died from an apparent self-inflicted gunshot wound to the chest." A suicide note was found. Welch had been struggling with health issues and was dealing with depression. His wife discovered his body.
The band's 2013 tour, which took place in 34 cities, started on 4 April in Columbus, Ohio. The band performed two new songs ("Sad Angel" and "Without You"), which Buckingham described as some of the most "Fleetwood Mac-ey" sounding songs since "Mirage". "Without You" was a reworking of a song from the Buckingham Nicks era. The band released their first new studio material in ten years, "Extended Play", on 30 April 2013. The EP debuted and peaked at No. 48 in the US and produced one single, "Sad Angel". On 25 and 27 September 2013, the second and third nights of the band's London O2 shows, Christine McVie joined them on stage for "Don't Stop".
On 27 October 2013, the band announced that John McVie had been diagnosed with cancer and cancelled their New Zealand and Australian performances so that he could undergo treatment. They said: "We are sorry not to be able to play these Australian and New Zealand dates. We hope our Australian and New Zealand fans as well as Fleetwood Mac fans everywhere will join us in wishing John and his family all the best." According to "The Guardian" on 22 November 2013, Christine McVie stated that she would like to return to Fleetwood Mac if they wanted her, and also affirmed that John McVie's prognosis was "really good."
On 11 January 2014, Mick Fleetwood announced that Christine McVie would be rejoining Fleetwood Mac. The news was confirmed on 13 January by the band's primary publicist, Liz Rosenberg, who said that an official announcement regarding a new album and tour would be forthcoming. In October 2013 Stevie Nicks had appeared in "American Horror Story: Coven", with Fleetwood Mac's song "Seven Wonders" playing in the background.
On with the Show, a 33-city North American tour, opened in Minneapolis, Minnesota, on 30 September 2014. A series of May–June 2015 arena dates in the United Kingdom went on sale on 14 November, selling out in minutes. Due to high demand, additional dates were added to the tour, including an Australian leg.
In January 2015, Buckingham suggested that the new album and tour might be Fleetwood Mac's last, and that the band would cease operations in 2015 or soon afterwards. He concluded: "We're going to continue working on the new album and the solo stuff will take a back seat for a year or two. A beautiful way to wrap up this last act." But Mick Fleetwood stated that the new album might take a few years to complete and that they were waiting for contributions from Nicks, who had been ambivalent about committing to a new record.
In August 2016, Fleetwood revealed that while the band had "a huge amount of recorded music", virtually none of it featured Nicks. Buckingham and Christine McVie, however, had contributed multiple songs to the new project. Fleetwood told "Ultimate Classic Rock": "She [McVie] ... wrote up a storm ... She and Lindsey could probably have a mighty strong duet album if they want. In truth, I hope it will come to more than that. There really are dozens of songs. And they’re really good. So we’ll see."
Nicks explained her reluctance to record another album with Fleetwood Mac. "Is it possible that Fleetwood Mac might do another record? I can never tell you yes or no, because I don't know. I honestly don't know... It's like, do you want to take a chance of going in and setting up in a room for like a year [to record an album] and having a bunch of arguing people? And then not wanting to go on tour because you just spent a year arguing?". She also emphasized that people do not buy as many records as they used to.
Buckingham and Christine McVie announced a new album, titled "Lindsey Buckingham/Christine McVie", which featured contributions from Mick Fleetwood and John McVie. "Lindsey Buckingham/Christine McVie" was released on 9 June 2017, preceded by the single "In My World". A 38-date tour was arranged which began on 21 June and concluded 16 November. Fleetwood Mac also planned to embark on another tour in 2018. The band headlined the second night of the Classic West concert (on 16 July 2017 at Dodger Stadium in Los Angeles) and the second night of the Classic East concert (at New York's Citi Field on 30 July 2017).
Fleetwood Mac were named MusiCares Person of the Year in 2018 and reunited to perform several songs at the Grammy-hosted gala honouring them. Artists including Lorde, Harry Styles, Little Big Town and Miley Cyrus also performed.
In April 2018, the song "Dreams" re-entered the Hot Rock Songs chart at No. 16 after a viral meme had featured the song. This chart re-entry came 40 years after the song had topped the Hot 100. The song's streaming totals also translated into 7,000 "equivalent album units", a jump of 12 percent, which helped "Rumours" to go from No. 21 to No. 13 on the Top Rock Albums chart.
That month Buckingham departed from the group a second time, having reportedly been dismissed. The reason was said to have been a disagreement about the nature of the tour, and in particular the question of whether newer or less well-known material would be included, as Buckingham wanted. Mick Fleetwood and the band appeared on "CBS This Morning" on 25 April 2018 and said that Buckingham would not sign off on a tour that the group had been planning for a year and a half and they had reached a "huge impasse" and "hit a brick wall". When asked if Buckingham had been fired, he said, "Well, we don't use that word because I think it's ugly." He also said that "Lindsey has huge amounts of respect and kudos to what he's done within the ranks of Fleetwood Mac and always will." In October 2018, Buckingham filed a lawsuit against Fleetwood Mac for breach of fiduciary duty, breach of oral contract and intentional interference with prospective economic advantage, among other charges. He stated that they eventually came to a settlement, which he would not share the terms of, but claimed he was "happy enough with it".
Former Tom Petty and the Heartbreakers guitarist Mike Campbell and Neil Finn of Crowded House were named to replace Buckingham. On "CBS This Morning", Fleetwood said that Fleetwood Mac had been reborn and that "This is the new lineup of Fleetwood Mac." Aside from touring, the band planned to record new music with Campbell and Finn in the future. In April 2018 the band announced the "An Evening with Fleetwood Mac" tour, starting in October 2018. The band launched the tour at the iHeartRadio Music Festival on 21 September 2018 at the T-Mobile Arena in Las Vegas, Nevada.
Danny Kirwan, guitarist, songwriter and early member of Fleetwood Mac (1968–1972) died in London, England, on 8 June 2018, aged 68. An obituary in "The New York Times" said he had died in his sleep after contracting pneumonia earlier in the year. The British music magazine "Mojo", in a two-page tribute to Kirwan's life and music, quoted Christine McVie as saying: "Danny Kirwan was "the" white English blues guy. Nobody else could play like him. He was a one-off ... Danny and Peter [Green] gelled so well together. Danny had a very precise, piercing vibrato – a unique sound ... He was a perfectionist; a fantastic musician and a fantastic writer." One of Kirwan's songs, "Tell Me All the Things You Do" from the 1970 album "Kiln House", was included in the set of the 2018–19 An Evening with Fleetwood Mac tour.
The following is a list of awards and nominations received by Fleetwood Mac:
https://en.wikipedia.org/wiki?curid=11787
F-Zero: Maximum Velocity
F-Zero: Maximum Velocity is a futuristic racing video game developed by NDcube and published by Nintendo for the Game Boy Advance. The game was released in Japan, North America and Europe in 2001. It is the first "F-Zero" game to be released on a handheld game console.
"Maximum Velocity" takes place twenty-five years after "F-Zero", in another F-Zero Grand Prix. The past generations of F-Zero had "piloted their way to fame", so it is the only "F-Zero" game without Captain Falcon, Samurai Goroh, Pico, or Dr. Stewart. Players control fast hovering crafts and use their speed-boosting abilities to navigate through the courses as quickly as possible.
Every race consists of five laps around a race track. A player loses the race if his or her machine explodes, from either taking too much damage or landing outside of the track; if the machine is ejected from the race, by falling to 20th place or by completing a lap with a rank outside that lap's rank limit; or if the player decides to give up. In the single-player Grand Prix mode, any of these outcomes costs the player a machine, and the race can be retried only if one or more spare machines remain.
For each lap completed the player is rewarded with a speed boost that can be used once at any time; one of the "SSS" marks is shaded green to indicate that a boost is available. A boost dramatically increases a player's speed but decreases their ability to turn. A boost used before a jump makes the player jump farther, which can allow the player to take a shortcut with the right vehicle. Boost time and speed vary according to the machine, and are usually tuned for balance; for example, one machine boasts a boost time of twelve seconds yet has the slowest boost speed in the entire game. Players can also take advantage of the varying deceleration of each vehicle: some vehicles, such as the Jet Vermilion, take longer than others to decelerate from top boost speed to normal speed once the boost has been used up. Players can also exploit this effect on boost pads.
The Grand Prix is the main single-player component of "Maximum Velocity". It consists of four series named after chess pieces: "Pawn", "Knight", "Bishop" and "Queen", the last of which is unlocked by winning the others on "Expert" mode. Each series has five races and four difficulty settings; a fifth setting, "Master" mode, is unlocked by winning Expert mode in each series, and completing it unlocks a new machine. The player needs to be in the top three at the end of the last lap in order to continue to the next race. If the player is unable to continue, the player loses a machine and can try the race again. If the player runs out of machines, the game ends and the player has to start the series from the beginning.
Championship is another single player component. It is basically the same as a "Time Attack" mode, except the player can only race on one, special course: the Synobazz Championship Circuit. This special course is not selectable in any other modes.
"Maximum Velocity" can be played in two multiplayer modes using the Game Boy Advance link cable, with either one cartridge or one cartridge per player. Two to four players can play in both modes.
In single cart, only one player needs to have a cartridge. The other players boot off the link cable network from the player with the cart using the GBA's netboot capability. All players drive a generic craft, and the game can only be played on one level, Silence. Silence and Fire Field are the only areas to return from previous games. Aptly, Silence in "Maximum Velocity" has no background music, unlike in most other F-Zero games.
In multi cart, each player needs to have a cartridge to play. This has many advantages over single cart: All players can use any machine in this game that has been unlocked by another player. Players can select any course in this game. After the race is finished, all of the players' ranking data are mixed and shared ("Mixed ranking" stored in each cart).
"F-Zero: Maximum Velocity" is one of the first titles to have been developed by NDcube. Like the original "F-Zero" for the SNES, "Maximum Velocity" implements a pseudo-3D visual technique based on the scaling and rotation effects of bitmap graphics. In this game, the technique uses two layers, one of which creates the illusion of depth.
"Maximum Velocity" is one of ten Game Boy Advance games released on December 16, 2011 to Nintendo 3DS Ambassadors, a program to give free downloadable games to early adopters who bought a Nintendo 3DS before its price drop. It was also released on the Wii U Virtual Console on April 3, 2014 in Japan and April 17, 2014 in North America and Europe.
On release, "Famitsu" magazine scored the game a 31 out of 40. "F-Zero: Maximum Velocity" went on to sell 334,145 copies in Japan and 273,229 copies in the U.S. as of 2005. The game has total sales of over 1 million copies worldwide and has an overall score of 86% on Metacritic and 83.37% on Game Rankings.
https://en.wikipedia.org/wiki?curid=11790
Felsic
In geology, felsic is an adjective describing igneous rocks that are relatively rich in elements that form feldspar and quartz. It is contrasted with mafic rocks, which are relatively richer in magnesium and iron. Felsic refers to silicate minerals, magma, and rocks which are enriched in the lighter elements such as silicon, oxygen, aluminium, sodium, and potassium. Felsic magma or lava is higher in viscosity than mafic magma/lava.
Felsic rocks are usually light in color and have specific gravities less than 3. The most common felsic rock is granite. Common felsic minerals include quartz, muscovite, orthoclase, and the sodium-rich plagioclase feldspars (albite-rich).
In modern usage, the term "acid rock", although sometimes used as a synonym, normally now refers specifically to a high-silica-content (greater than 63% SiO2 by weight) volcanic rock, such as rhyolite. Older, broader usage is now considered archaic. That usage, with the contrasting term "basic rock", was based on an incorrect idea, dating from the 19th century, that "silicic acid" was the chief form of silicon occurring in rocks.
The term "felsic" combines the words "feldspar" and "silica". The similarity of the resulting term to the German "felsig", "rocky" (from "Fels", "rock"), is purely accidental: "feldspar" is a borrowing of the German "Feldspat", so the link is to German "Feld", meaning "field".
In order for a rock to be classified as felsic, it generally needs to contain more than 75% felsic minerals; namely quartz, orthoclase and plagioclase. Rocks with greater than 90% felsic minerals can also be called leucocratic, from the Greek words for white and dominance.
Felsite is a petrologic field term used to refer to very fine-grained or aphanitic, light-colored volcanic rocks which might be later reclassified after a more detailed microscopic or chemical analysis.
In some cases, felsic volcanic rocks may contain phenocrysts of mafic minerals, usually hornblende, pyroxene or a feldspar mineral, and may need to be named after their phenocryst mineral, such as 'hornblende-bearing felsite'.
The chemical name of a felsic rock is given according to the TAS classification of Le Maitre (1975). However, this only applies to volcanic rocks. If the rock is analyzed and found to be felsic but is metamorphic and has no definite volcanic protolith, it may be sufficient to simply call it a 'felsic schist'. There are examples known of highly sheared granites which can be mistaken for rhyolites.
For phaneritic felsic rocks, the QAPF diagram should be used, and a name given according to the granite nomenclature. Often the species of mafic minerals is included in the name, for instance, hornblende-bearing granite, pyroxene tonalite or augite megacrystic monzonite, because the term "granite" already implies the presence of feldspar and quartz.
The rock texture thus determines the basic name of a felsic rock.
|
https://en.wikipedia.org/wiki?curid=11795
|
Frisians
The Frisians are a Germanic ethnic group indigenous to the coastal parts of the Netherlands and northwestern Germany. They inhabit an area known as Frisia and are concentrated in the Dutch provinces of Friesland and Groningen and, in Germany, East Frisia and North Frisia (which was a part of Denmark until 1864). The Frisian languages are still spoken by more than 500,000 people; West Frisian is officially recognised in the Netherlands (in Friesland), and North Frisian and Saterland Frisian are recognised as regional languages in Germany.
The ancient Frisii enter recorded history in the Roman account of Drusus's 12 BC war against the Rhine Germans and the Chauci. They occasionally appear in the accounts of Roman wars against the Germanic tribes of the region, up to and including the Revolt of the Batavi around 70 AD. Frisian mercenaries were hired to assist the Roman invasion of Britain in the capacity of cavalry. They are not mentioned again until 296, when they were deported into Roman territory as "laeti" (i.e., Roman-era serfs; see Binchester Roman Fort and Cuneus Frisionum). The discovery of a type of earthenware unique to 4th century Frisia, called "terp Tritzum", shows that an unknown number of them were resettled in Flanders and Kent, probably as "laeti" under Roman coercion.
From the 3rd through the 5th centuries Frisia suffered marine transgressions that made most of the land uninhabitable, aggravated by a change to a cooler and wetter climate. Whatever population may have remained dropped dramatically, and the coastal lands remained largely unpopulated for the next two centuries. When conditions improved, Frisia received an influx of new settlers, mostly Angles and Saxons. These people would eventually be referred to as 'Frisians', though they were not necessarily descended from the ancient Frisii. It is these 'new Frisians' who are largely the ancestors of the medieval and modern Frisians.
By the end of the 6th century, Frisian territory had expanded westward to the North Sea coast and, in the 7th century, southward down to Dorestad. This farthest extent of Frisian territory is sometimes referred to as "Frisia Magna". Early Frisia was ruled by a High King, with the earliest reference to a 'Frisian King' being dated 678.
In the early 8th century the Frisian nobles came into increasing conflict with the Franks to their south, resulting in a series of wars in which the Frankish Empire eventually subjugated Frisia in 734. These wars aided the efforts of Anglo-Irish missionaries, beginning with Saint Boniface, to convert the Frisian populace to Christianity, in which Saint Willibrord largely succeeded.
Some time after the death of Charlemagne, the Frisian territories were in theory under the control of the Count of Holland, but in practice the Hollandic counts, starting with Count Arnulf in 993, were unable to assert themselves as the sovereign lords of Frisia. The resulting stalemate resulted in a period of time called the 'Frisian freedom', a period in which feudalism and serfdom (as well as central or judicial administration) did not exist, and in which the Frisian lands only owed their allegiance to the Holy Roman Emperor.
During the 13th century, however, the counts of Holland became increasingly powerful and, starting in 1272, sought to reassert themselves as rightful lords of the Frisian lands in a series of wars, which (with a series of lengthy interruptions) ended in 1422 with the Hollandic conquest of Western Frisia and with the establishment of a more powerful noble class in Central and Eastern Frisia.
In 1524, Frisia became part of the Seventeen Provinces and in 1568 joined the Dutch revolt against Philip II, king of Spain, heir of the Burgundian territories; Central Frisia has remained a part of the Netherlands ever since. The eastern periphery of Frisia would become part of various German states (later Germany) and Denmark. The region also had an old tradition of peatland exploitation.
Though impossible to know exact numbers and migration patterns, research has indicated that many Frisians were part of the wave of ethnic groups to colonise areas of present day England alongside the Angles, Saxons and Jutes, starting from around the fifth century when Frisians arrived along the coastline of Kent. Studies have found the DNA of people tested in Central England to be "indistinguishable" from that of Frisians.
Frisians principally settled in modern-day Kent, East Anglia, the East Midlands, North East England, and Yorkshire. Across these areas, evidence of their settlement includes place names of Frisian origin, such as Frizinghall in Bradford and Frieston in Lincolnshire.
Similarities in dialect between Great Yarmouth and Friesland have been noted, originating from trade between these areas during the Middle Ages. Frisians are also known to have founded the Freston area of Ipswich.
In Scotland, historians have noted that colonies of Angles and Frisians settled as far north as the River Forth. This corresponds to those areas of Scotland which historically constituted part of Northumbria.
As both the Anglo-Saxons of England and the early Frisians were formed from similar tribal confederacies, their respective languages were very similar, together forming the Anglo-Frisian family. Old Frisian is the closest attested relative of Old English, and the modern Frisian dialects are in turn the closest related languages to contemporary English that do not themselves derive from Old English (although modern Frisian and English are not mutually intelligible).
The Frisian language group is divided into three mutually unintelligible languages: West Frisian, Saterland Frisian, and North Frisian.
Of these three languages both Saterland Frisian (2,000 speakers) and North Frisian (10,000 speakers) are endangered. West Frisian is spoken by around 350,000 native speakers in Friesland, and as many as 470,000 when including speakers in neighbouring Groningen province. West Frisian is not listed as threatened, although research published by Radboud University in 2016 has challenged that assumption.
Today there exists a tripartite division of the Frisians, into North Frisians, East Frisians and West Frisians, caused by Frisia's constant loss of territory in the Middle Ages. The West Frisians, in general, do not see themselves as part of a larger group of Frisians, and, according to a 1970 poll, identify themselves more with the Dutch than with the East or North Frisians. Therefore, the term 'Frisian', when applied to the speakers of all three Frisian languages, is a linguistic, ethnic and/or cultural concept, not a political one.
|
https://en.wikipedia.org/wiki?curid=11797
|
Filippo Tommaso Marinetti
Filippo Tommaso Emilio Marinetti (22 December 1876 – 2 December 1944) was an Italian poet, editor, art theorist, and founder of the Futurist movement. He was associated with the utopian and Symbolist artistic and literary community Abbaye de Créteil between 1907 and 1908. Marinetti is best known as the author of the first "Futurist Manifesto", which was written and published in 1909.
Emilio Angelo Carlo Marinetti (some documents give his name as "Filippo Achille Emilio Marinetti") spent the first years of his life in Alexandria, Egypt, where his father (Enrico Marinetti) and his mother (Amalia Grolli) lived together "more uxorio" (as if married). Enrico was a lawyer from Piedmont, and his mother was the daughter of a literary professor from Milan. They had come to Egypt in 1865, at the invitation of Khedive Isma'il Pasha, to act as legal advisers for foreign companies that were taking part in his modernization program.
His love for literature developed during the school years. His mother was an avid reader of poetry, and introduced the young Marinetti to the Italian and European classics. At age seventeen he started his first school magazine, "Papyrus"; the Jesuits threatened to expel him for publicizing Émile Zola's scandalous novels in the school.
He first studied in Egypt then in Paris, obtaining a "baccalauréat" degree in 1894 at the Sorbonne, and in Italy, graduating in law at the University of Pavia in 1899.
He decided not to be a lawyer but to develop a literary career. He experimented with every type of literature (poetry, narrative, theatre, "words in liberty"), signing everything "Filippo Tommaso Marinetti".
Marinetti and Constantin Brâncuși were visitors of the Abbaye de Créteil c. 1908 along with young writers like Roger Allard (one of the first to defend Cubism), Pierre Jean Jouve, and Paul Castiaux, who wanted to publish their works through the Abbaye. The Abbaye de Créteil was a "phalanstère" community founded in the autumn of 1906 by the painter Albert Gleizes and the poets Alexandre Mercereau and Charles Vildrac, among others. The movement drew its inspiration from the "Abbaye de Thélème," a fictional creation by Rabelais in his novel "Gargantua". It was closed down by its members early in 1908.
Marinetti is known best as the author of the "Futurist Manifesto", which he wrote in 1909. It was published in French on the front page of the most prestigious French daily newspaper, "Le Figaro", on 20 February 1909. In "The Founding and Manifesto of Futurism", Marinetti declared that "Art, in fact, can be nothing but violence, cruelty, and injustice." Georges Sorel, who influenced the entire political spectrum from anarchism to Fascism, also argued for the importance of violence. Futurism had both anarchist and Fascist elements; Marinetti later became an active supporter of Benito Mussolini.
Marinetti, who admired speed, had a minor car accident outside Milan in 1908 when he veered into a ditch to avoid two cyclists. He referred to the accident in the Futurist Manifesto: the Marinetti who was helped out of the ditch was a new man, determined to end the pretense and decadence of the prevailing Liberty style. He discussed a new and strongly revolutionary programme with his friends, in which they should end every artistic relationship with the past, "destroy the museums, the libraries, every type of academy". Together, he wrote, "We will glorify war—the world's only hygiene—militarism, patriotism, the destructive gesture of freedom-bringers, beautiful ideas worth dying for, and scorn for woman".
The Futurist Manifesto was read and debated all across Europe, but Marinetti's first 'Futurist' works were not as successful. In April, the opening night of his drama "Le Roi bombance" (The Feasting King), written in 1905, was interrupted by loud, derisive whistling by the audience... and by Marinetti himself, who thus introduced another element of Futurism, "the desire to be heckled." Marinetti did, however, fight a duel with a critic he considered too harsh.
His drama "La donna è mobile" (Poupées électriques), first presented in Turin, was not successful either. Nowadays, the play is remembered through a later version, named "Elettricità sessuale" (Sexual Electricity), and mainly for the appearance onstage of humanoid automatons, ten years before the Czech writer Karel Čapek would invent the term "robot".
In 1910 his first novel, "Mafarka il futurista", was tried for obscenity and cleared of all charges. That year, Marinetti discovered some allies in three young painters (Umberto Boccioni, Carlo Carrà, Luigi Russolo), who adopted the Futurist philosophy. Together with them (and with poets such as Aldo Palazzeschi), Marinetti began a series of Futurist Evenings, theatrical spectacles in which Futurists declaimed their manifestos in front of a crowd that in part attended the performances to throw vegetables at them.
The most successful "happening" of that period was the publicization of the "Manifesto Against Past-Loving Venice" in Venice. In the flier, Marinetti demands "fill(ing) the small, stinking canals with the rubble from the old, collapsing and leprous palaces" to "prepare for the birth of an industrial and militarized Venice, capable of dominating the great Adriatic, a great Italian lake."
In 1911, the Italo-Turkish War began and Marinetti departed for Libya as war correspondent for a French newspaper. His articles were eventually collected and published in "The Battle of Tripoli". He then covered the First Balkan War of 1912–13, witnessing the surprise success of Bulgarian troops against the Ottoman Empire in the Siege of Adrianople. In this period he also made a number of visits to London, which he considered 'the Futurist city par excellence', and where a number of exhibitions, lectures and demonstrations of Futurist music were staged. However, although a number of artists, including Wyndham Lewis, were interested in the new movement, only one British convert was made, the young artist C.R.W. Nevinson. Nevertheless, Futurism was an important influence upon Lewis's Vorticist philosophy.
About the same time Marinetti worked on a very anti-Roman Catholic and anti-Austrian verse-novel, "Le monoplan du Pape" ("The Pope's Aeroplane", 1912) and edited an anthology of futurist poets. But his attempts to renew the style of poetry did not satisfy him. So much so that, in his foreword to the anthology, he declared a new revolution: it was time to be done with traditional syntax and to use "words in freedom" ("parole in libertà"). His sound-poem "Zang Tumb Tumb", an account of the Battle of Adrianople, exemplifies words in freedom. Recordings can be heard of Marinetti reading some of his sound poems: "Battaglia, Peso + Odore" (1912); "Dune, parole in libertà" (1914); "La Battaglia di Adrianopoli" (1926) (recorded 1935).
Marinetti agitated for Italian involvement in World War I, and once Italy was engaged, promptly volunteered for service. In the fall of 1915 he and several other Futurists who were members of the Lombard Volunteer Cyclists were stationed at Lake Garda, in Trentino province, high in the mountains along the Italo-Austrian border. They endured several weeks of fighting in harsh conditions before the cyclists units, deemed inappropriate for mountain warfare, were disbanded.
Marinetti spent most of 1916 supporting Italy's war effort with speeches, journalism, and theatrical work, then returned to military service as a regular army officer in 1917. In May of that year he was seriously wounded while serving with an artillery battalion on the Isonzo front; he returned to service after a long recovery, and participated in the decisive Italian victory at Vittorio Veneto in October 1918.
After an extended courtship, in 1923 Marinetti married Benedetta Cappa (1897–1977), a writer and painter and a pupil of Giacomo Balla. Born in Rome, she had joined the Futurists in 1917. They had met in 1918, moved in together in Rome, and chose to marry only to avoid legal complications on a lecture tour of Brazil. They would have three daughters: Vittoria, Ala, and Luce.
Cappa and Marinetti collaborated on a genre of mixed-media assemblages in the mid-1920s they called "tattilismo" ("Tactilism"), and she was a strong proponent and practitioner of the aeropittura movement after its inception in 1929. She also produced three experimental novels. Cappa's major public work is likely a series of five murals at the Palermo Post Office (1926–1935) for the Fascist public-works architect Angiolo Mazzoni.
In early 1918 he founded the "Partito Politico Futurista" or Futurist Political Party, which only a year later merged with Benito Mussolini's "Fasci Italiani di Combattimento". Marinetti was one of the first affiliates of the Italian Fascist Party. In 1919 he co-wrote with Alceste De Ambris the Fascist Manifesto, the original manifesto of Italian Fascism. He opposed Fascism's later exaltation of existing institutions, terming them "reactionary," and, after walking out of the 1920 Fascist party congress in disgust, withdrew from politics for three years. However, he remained a notable force in developing the party philosophy throughout the regime's existence. For example, at the end of the "Congress of Fascist Culture" that was held in Bologna on 30 March 1925, Giovanni Gentile addressed Sergio Panunzio on the need to define Fascism more purposefully by way of Marinetti's opinion, stating, "Great spiritual movements make recourse to precision when their primitive inspirations—what F. T. Marinetti identified this morning as artistic, that is to say, the creative and truly innovative ideas, from which the movement derived its first and most potent impulse—have lost their force. We today find ourselves at the very beginning of a new life and we experience with joy this obscure need that fills our hearts—this need that is our inspiration, the genius that governs us and carries us with it."
As part of his campaign to overturn tradition, Marinetti also attacked traditional Italian food. His "Manifesto of Futurist Cooking" was published in the Turin "Gazzetta del Popolo" on 28 December 1930. Arguing that "People think, dress and act in accordance with what they drink and eat", Marinetti proposed wide-ranging changes to diet. He condemned pasta, blaming it for lassitude, pessimism and lack of virility, and promoted the eating of Italian-grown rice. In this, as in other ways, his proposed Futurist cooking was nationalistic, rejecting foreign foods and food names. It was also militaristic, seeking to stimulate men to be fighters.
Marinetti also sought to increase creativity. His attraction to whatever was new made scientific discoveries appealing to him, but his views on diet were not scientifically based. He was fascinated with the idea of processed food, predicting that someday pills would replace food as a source of energy, and calling for the creation of "plastic complexes" to replace natural foods. Food, in turn, would become a matter of artistic expression. Many of the meals Marinetti described and ate resemble performance art, such as the "Tactile Dinner", recreated in 2014 for an exhibit at the Guggenheim Museum. Participants wore pajamas decorated with sponge, sandpaper, and aluminum, and ate salads without using cutlery.
During the Fascist regime Marinetti sought to make Futurism the official state art of Italy but failed to do so. Mussolini was personally uninterested in art and chose to give patronage to numerous styles to keep artists loyal to the regime. Opening the exhibition of art by the Novecento Italiano group in 1923, he said: "I declare that it is far from my idea to encourage anything like a state art. Art belongs to the domain of the individual. The state has only one duty: not to undermine art, to provide humane conditions for artists, to encourage them from the artistic and national point of view." Mussolini's mistress, Margherita Sarfatti, successfully promoted the rival Novecento Group, and even persuaded Marinetti to be part of its board.
In Fascist Italy, modern art was tolerated and even approved by the Fascist hierarchy. Towards the end of the 1930s, some Fascist ideologues (for example, the ex-Futurist Ardengo Soffici) wished to import the concept of "degenerate art" from Germany to Italy and condemned modernism, although their demands were ignored by the regime. In 1938, hearing that Adolf Hitler wanted to include Futurism in a traveling exhibition of degenerate art, Marinetti persuaded Mussolini to refuse to let it enter Italy.
On 17 November 1938, Italy passed the Racial Laws, discriminating against Italian Jews, much like the discrimination pronounced in the Nuremberg Laws. The anti-Semitic trend in Italy resulted in attacks against modern art, judged too foreign, too radical and anti-nationalist. In the 11 January 1939 issue of the Futurist journal "Artecrazia", Marinetti expressed his condemnation of such attacks on modern art, noting that Futurism was both Italian and nationalist, not foreign, and that there were no Jews in Futurism. Furthermore, he claimed Jews were not active in the development of modern art. Regardless, the Italian state shut down "Artecrazia".
Marinetti made numerous attempts to ingratiate himself with the regime, becoming less radical and avant garde with each attempt. He relocated from Milan to Rome. He became an academician despite his condemnation of academies, saying, "It is important that Futurism be represented in the Academy."
He was an atheist, but by the mid-1930s he had come to accept the influence of the Catholic Church on Italian society. In "Gazzetta del Popolo", 21 June 1931, Marinetti proclaimed that "Only Futurist artists...are able to express clearly...the simultaneous dogmas of the Catholic faith, such as the Holy Trinity, the Immaculate Conception and Christ's Calvary." In his last works, written just before his death in 1944, "L'aeropoema di Gesù" ("The Aeropoem of Jesus") and "Quarto d'ora di poesia per la X Mas" ("A Quarter Hour of Poetry for the X Mas"), Marinetti sought to reconcile his newfound love for God and his passion for the action that accompanied him throughout his life.
There were other contradictions in his character: despite his nationalism, he was international, educated in Egypt and France, writing his first poems in French, publishing the Futurist Manifesto in a French newspaper and traveling to promote his ideas.
Marinetti volunteered for active service in the Second Italo-Abyssinian War and the Second World War, serving on the Eastern Front for a few weeks in the Summer and Autumn of 1942 at the age of 65.
He died of cardiac arrest in Bellagio on 2 December 1944 while working on a collection of poems praising the wartime achievements of the Decima Flottiglia MAS.
|
https://en.wikipedia.org/wiki?curid=11801
|
Franz Mesmer
Franz Anton Mesmer (23 May 1734 – 5 March 1815) was a German doctor with an interest in astronomy. He theorised the existence of a natural energy transference occurring between all animated and inanimate objects; this he called "animal magnetism", sometimes later referred to as "mesmerism". (In modern times New Age spiritualists have revived a similar idea.) Mesmer's theory attracted a wide following between about 1780 and 1850, and continued to have some influence until the end of the 19th century. In 1843 the Scottish doctor James Braid proposed the term "hypnosis" for a technique derived from animal magnetism; today the word "mesmerism" generally functions as a synonym of "hypnosis".
Mesmer was born in the village of Iznang, on the shore of Lake Constance in Swabia, Germany, a son of master forester Anton Mesmer (1701—after 1747) and his wife, Maria/Ursula (née Michel; 1701—1770). After studying at the Jesuit universities of Dillingen and Ingolstadt, he took up the study of medicine at the University of Vienna in 1759. In 1766 he published a doctoral dissertation with the Latin title "De planetarum influxu in corpus humanum" ("On the Influence of the Planets on the Human Body"), which discussed the influence of the moon and the planets on the human body and on disease. This was not medical astrology. Building largely on Isaac Newton's theory of the tides, Mesmer expounded on certain tides in the human body that might be accounted for by the movements of the sun and moon. Evidence assembled by Frank A. Pattie suggests that Mesmer plagiarized a part of his dissertation from a work by Richard Mead, an eminent English physician and Newton's friend. However, in Mesmer's day doctoral theses were not expected to be original.
In January 1768, Mesmer married Anna Maria von Posch, a wealthy widow, and established himself as a doctor in Vienna. In the summers he lived on a splendid estate and became a patron of the arts. In 1768, when court intrigue prevented the performance of "La finta semplice" (K. 51), for which the twelve-year-old Wolfgang Amadeus Mozart had composed 500 pages of music, Mesmer is said to have arranged a performance in his garden of Mozart's "Bastien und Bastienne" (K. 50), a one-act opera, though Mozart's biographer Nissen found no proof that this performance actually took place. Mozart later immortalized his former patron by including a comedic reference to Mesmer in his opera "Così fan tutte".
In 1774, Mesmer produced an "artificial tide" in a patient, Francisca Österlin, who suffered from hysteria, by having her swallow a preparation containing iron and then attaching magnets to various parts of her body. She reported feeling streams of a mysterious fluid running through her body and was relieved of her symptoms for several hours. Mesmer did not believe that the magnets had achieved the cure on their own. He felt that he had contributed animal magnetism, which had accumulated in his work, to her. He soon stopped using magnets as a part of his treatment.
In the same year Mesmer collaborated with Maximilian Hell.
In 1775, Mesmer was invited to give his opinion before the Munich Academy of Sciences on the exorcisms carried out by Johann Joseph Gassner (Gaßner), a priest and healer who grew up in Vorarlberg, Austria. Mesmer said that while Gassner was sincere in his beliefs, his cures resulted because he possessed a high degree of animal magnetism. This confrontation between Mesmer's secular ideas and Gassner's religious beliefs marked the end of Gassner's career as well as, according to Henri Ellenberger, the emergence of dynamic psychiatry.
The scandal that followed Mesmer's only partial success in curing the blindness of an 18-year-old musician, Maria Theresia Paradis, led him to leave Vienna in 1777. In February 1778 Mesmer moved to Paris, rented an apartment in a part of the city preferred by the wealthy and powerful, and established a medical practice. There he would reunite with Mozart who often visited him. Paris soon divided into those who thought he was a charlatan who had been forced to flee from Vienna and those who thought he had made a great discovery.
In his first years in Paris, Mesmer tried and failed to get either the Royal Academy of Sciences or the Royal Society of Medicine to provide official approval for his doctrines. He found only one physician of high professional and social standing, Charles d'Eslon, to become a disciple. In 1779, with d'Eslon's encouragement, Mesmer wrote an 88-page book, "Mémoire sur la découverte du magnétisme animal", to which he appended his famous 27 Propositions. These propositions outlined his theory at that time. Some contemporary scholars equate Mesmer's animal magnetism with the Qi (chi) of Traditional Chinese Medicine and mesmerism with medical Qigong practices.
According to d'Eslon, Mesmer understood health as the free flow of the process of life through thousands of channels in our bodies. Illness was caused by obstacles to this flow. Overcoming these obstacles and restoring flow produced crises, which restored health. When Nature failed to do this spontaneously, contact with a conductor of animal magnetism was a necessary and sufficient remedy. Mesmer aimed to aid or provoke the efforts of Nature. To cure an insane person, for example, involved causing a fit of madness. The advantage of magnetism involved accelerating such crises without danger.
Mesmer treated patients both individually and in groups. With individuals he would sit in front of his patient with his knees touching the patient's knees, pressing the patient's thumbs in his hands, looking fixedly into the patient's eyes. Mesmer made "passes", moving his hands from patients' shoulders down along their arms. He then pressed his fingers on the patient's hypochondrium region (the area below the diaphragm), sometimes holding his hands there for hours. Many patients felt peculiar sensations or had convulsions that were regarded as crises and supposed to bring about the cure. Mesmer would often conclude his treatments by playing some music on a glass armonica.
By 1780 Mesmer had more patients than he could treat individually and he established a collective treatment known as the "baquet." An English doctor who observed Mesmer described the treatment as follows: In the middle of the room is placed a vessel of about a foot and a half high which is called here a "baquet". It is so large that twenty people can easily sit round it; near the edge of the lid which covers it, there are holes pierced corresponding to the number of persons who are to surround it; into these holes are introduced iron rods, bent at right angles outwards, and of different heights, so as to answer to the part of the body to which they are to be applied. Besides these rods, there is a rope which communicates between the baquet and one of the patients, and from him is carried to another, and so on the whole round. The most sensible effects are produced on the approach of Mesmer, who is said to convey the fluid by certain motions of his hands or eyes, without touching the person. I have talked with several who have witnessed these effects, who have convulsions occasioned and removed by a movement of the hand...
In 1784, without Mesmer requesting it, King Louis XVI appointed four members of the Faculty of Medicine as commissioners to investigate animal magnetism as practiced by d'Eslon. At the request of these commissioners the King appointed five additional commissioners from the Royal Academy of Sciences. These included the chemist Antoine Lavoisier, the doctor Joseph-Ignace Guillotin, the astronomer Jean Sylvain Bailly, and the American ambassador Benjamin Franklin.
The commission conducted a series of experiments aimed not at determining whether Mesmer's treatment worked, but whether he had discovered a new physical fluid. The commission concluded that there was no evidence for such a fluid. Whatever benefit the treatment produced was attributed to "imagination." But one of the commissioners, the botanist Antoine Laurent de Jussieu, took exception to the official reports. He wrote a dissenting opinion that declared Mesmer's theory credible and worthy of further investigation.
The commission did not examine Mesmer, but investigated the practice of d'Eslon.
In August 1784 Mesmer visited a Mesmeric society in Lyon. In 1785 Mesmer left Paris. In 1790 he was in Vienna again to settle the estate of his deceased wife Maria Anna. By the time he sold his house in Vienna in 1801, he was living in Paris again.
Mesmer was driven into exile soon after the investigations on animal magnetism, although his influential student, Armand-Marie-Jacques de Chastenet, Marquis de Puységur (1751-1825), continued to have many followers until his death. Mesmer continued to practice in Frauenfeld, Switzerland, for a number of years and died in 1815 in Meersburg, Germany.
Abbé Faria, an Indo-Portuguese monk in Paris and a contemporary of Mesmer, claimed that "nothing comes from the magnetizer; everything comes from the subject and takes place in his imagination, i.e. autosuggestion generated from within the mind."
|
https://en.wikipedia.org/wiki?curid=11803
|
Foix–Alajouanine syndrome
Foix–Alajouanine syndrome, also called subacute ascending necrotizing myelitis, is a disease caused by an arteriovenous malformation of the spinal cord. The patients present with symptoms indicating spinal cord involvement (paralysis of arms and legs, numbness and loss of sensation and sphincter dysfunction), and pathological examination reveals disseminated nerve cell death in the spinal cord and abnormally dilated and tortuous vessels situated on the surface of the spinal cord. Surgical treatment can be tried in some cases. If surgical intervention is contraindicated, corticosteroids may be used.
The condition is named after Charles Foix and Théophile Alajouanine.
|
https://en.wikipedia.org/wiki?curid=11806
|
Ferromagnetism
Ferromagnetism is the basic mechanism by which certain materials (such as iron) form permanent magnets, or are attracted to magnets. In physics, several different types of magnetism are distinguished. Ferromagnetism (along with the similar effect ferrimagnetism) is the strongest type and is responsible for the common phenomenon of magnetism in magnets encountered in everyday life. Substances respond weakly to magnetic fields with three other types of magnetism—paramagnetism, diamagnetism, and antiferromagnetism—but the forces are usually so weak that they can be detected only by sensitive instruments in a laboratory. An everyday example of ferromagnetism is a refrigerator magnet used to hold notes on a refrigerator door. The attraction between a magnet and ferromagnetic material is "the quality of magnetism first apparent to the ancient world, and to us today".
Permanent magnets (materials that can be magnetized by an external magnetic field and remain magnetized after the external field is removed) are either ferromagnetic or ferrimagnetic, as are the materials that are noticeably attracted to them. Only a few substances are ferromagnetic. The common ones are iron, cobalt, nickel and most of their alloys, and some compounds of rare earth metals.
Ferromagnetism is very important in industry and modern technology, and is the basis for many electrical and electromechanical devices such as electromagnets, electric motors, generators, transformers, and magnetic storage such as tape recorders and hard disks, as well as for nondestructive testing of ferrous materials.
Ferromagnetic materials can be divided into magnetically "soft" materials like annealed iron, which can be magnetized but do not tend to stay magnetized, and magnetically "hard" materials, which do. Permanent magnets are made from "hard" ferromagnetic materials such as alnico and ferrite that are subjected to special processing in a strong magnetic field during manufacture to align their internal microcrystalline structure, making them very hard to demagnetize. To demagnetize a saturated magnet, a certain magnetic field must be applied, and this threshold depends on coercivity of the respective material. "Hard" materials have high coercivity, whereas "soft" materials have low coercivity. The overall strength of a magnet is measured by its magnetic moment or, alternatively, the total magnetic flux it produces. The local strength of magnetism in a material is measured by its magnetization.
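As a rough illustration, the hard/soft distinction can be sketched with a toy hysteresis-branch model; the tanh form, the branch shift by ±Hc, and the coercivity values below are illustrative assumptions, not measured material data.

```python
import math

def magnetization(H, Hc, Ms=1.0, width=0.2, descending=False):
    """Toy single-branch hysteresis curve: M(H) = Ms * tanh((H +/- Hc) / width).

    The descending branch is shifted by +Hc and the ascending branch by -Hc,
    so M crosses zero at H = -/+ Hc -- the coercive field.
    """
    shift = Hc if descending else -Hc
    return Ms * math.tanh((H + shift) / width)

def coercivity_class(Hc, threshold=1.0):
    """Crude split: 'hard' materials resist demagnetization (high coercivity)."""
    return "hard" if Hc >= threshold else "soft"

# A "soft" material (cf. annealed iron) vs a "hard" one (cf. alnico);
# coercivity values here are arbitrary model units, not real data.
soft_Hc, hard_Hc = 0.05, 5.0
print(coercivity_class(soft_Hc))  # soft
print(coercivity_class(hard_Hc))  # hard
# Remanence: the magnetization left at H = 0 on the descending branch
# stays near saturation for the hard material.
print(round(magnetization(0.0, hard_Hc, descending=True), 3))
```

The model captures only the qualitative point of the paragraph: the larger the coercive field, the larger the reverse field needed to drive the magnetization back through zero.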
Historically, the term "ferromagnetism" was used for any material that could exhibit spontaneous magnetization: a net magnetic moment in the absence of an external magnetic field. This general definition is still in common use.
However, in a landmark paper in 1948, Louis Néel showed there are two levels of magnetic alignment that result in this behavior. One is ferromagnetism in the strict sense, where all the magnetic moments are aligned. The other is "ferrimagnetism", where some magnetic moments point in the opposite direction but have a smaller contribution, so there is still a spontaneous magnetization.
In the special case where the opposing moments balance completely, the alignment is known as "antiferromagnetism". Therefore antiferromagnets do not have a spontaneous magnetization.
The table lists a selection of ferromagnetic and ferrimagnetic compounds, along with the temperature above which they cease to exhibit spontaneous magnetization (see Curie temperature).
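As a small worked example, a few representative Curie temperatures (approximate rounded literature values; exact figures vary by source and sample) can be used to test whether a material is in its spontaneously magnetized phase:

```python
# Approximate Curie temperatures in kelvin (rounded literature values).
CURIE_TEMPERATURE_K = {
    "Fe": 1043,  # iron
    "Co": 1388,  # cobalt
    "Ni": 627,   # nickel
    "Gd": 292,   # gadolinium -- loses ferromagnetism just below room temperature
}

def is_spontaneously_magnetized(material, temperature_k):
    """Above its Curie temperature a ferromagnet loses its spontaneous
    magnetization and becomes paramagnetic."""
    return temperature_k < CURIE_TEMPERATURE_K[material]

print(is_spontaneously_magnetized("Fe", 300))  # True: room temperature is far below Tc
print(is_spontaneously_magnetized("Gd", 300))  # False: room temperature is just above Tc
```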
Ferromagnetism is a property not just of the chemical make-up of a material, but of its crystalline structure and microstructure. There are ferromagnetic metal alloys whose constituents are not themselves ferromagnetic, called Heusler alloys, named after Fritz Heusler. Conversely there are non-magnetic alloys, such as types of stainless steel, composed almost exclusively of ferromagnetic metals.
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of a liquid alloy. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid component (B, C, Si, P, or Al) that lowers the melting point.
A relatively new class of exceptionally strong ferromagnetic materials are the rare-earth magnets. They contain lanthanide elements that are known for their ability to carry large magnetic moments in well-localized f-orbitals.
Most ferromagnetic materials are metals, since the conducting electrons are often responsible for mediating the ferromagnetic interactions. It is therefore a challenge to develop ferromagnetic insulators, especially multiferroic materials, which are both ferromagnetic and ferroelectric.
A number of actinide compounds are ferromagnets at room temperature or exhibit ferromagnetism upon cooling. PuP is a paramagnet with cubic symmetry at room temperature, but which undergoes a structural transition into a tetragonal state with ferromagnetic order when cooled below its TC = 125 K. In its ferromagnetic state, PuP's easy axis is in the <100> direction.
In NpFe2 the easy axis is <111>. Above its Curie temperature, NpFe2 is also paramagnetic and cubic. Cooling below the Curie temperature produces a rhombohedral distortion wherein the rhombohedral angle changes from 60° (cubic phase) to 60.53°. An alternate description of this distortion is to consider the length "c" along the unique trigonal axis (after the distortion has begun) and "a" as the distance in the plane perpendicular to "c". In the cubic phase this reduces to . Below the Curie temperature
which is the largest strain in any actinide compound. NpNi2 undergoes a similar lattice distortion below , with a strain of (43 ± 5) × 10−4. NpCo2 is a ferrimagnet below 15 K.
In 2009, a team of MIT physicists demonstrated that a lithium gas cooled to less than one kelvin can exhibit ferromagnetism. The team cooled fermionic lithium-6 to less than 150 nK (150 billionths of one kelvin) using infrared laser cooling. This was the first time that ferromagnetism had been demonstrated in a gas.
In 2018, a team of University of Minnesota physicists demonstrated that body-centered tetragonal ruthenium exhibits ferromagnetism at room temperature.
The Bohr–van Leeuwen theorem, discovered in the 1910s, showed that classical physics theories are unable to account for any form of magnetism, including ferromagnetism. Magnetism is now regarded as a purely quantum mechanical effect. Ferromagnetism arises due to two effects from quantum mechanics: spin and the Pauli exclusion principle.
One of the fundamental properties of an electron (besides that it carries charge) is that it has a magnetic dipole moment, i.e., it behaves like a tiny magnet, producing a magnetic field. This dipole moment comes from the more fundamental property of the electron that it has quantum mechanical spin. Due to its quantum nature, the spin of the electron can be in one of only two states; with the magnetic field either pointing "up" or "down" (for any choice of up and down). The spin of the electrons in atoms is the main source of ferromagnetism, although there is also a contribution from the orbital angular momentum of the electron about the nucleus. When these magnetic dipoles in a piece of matter are aligned, (point in the same direction) their individually tiny magnetic fields add together to create a much larger macroscopic field.
However, materials made of atoms with filled electron shells have a total dipole moment of zero: because the electrons all exist in pairs with opposite spin, every electron's magnetic moment is cancelled by the opposite moment of the second electron in the pair. Only atoms with partially filled shells (i.e., unpaired spins) can have a net magnetic moment, so ferromagnetism occurs only in materials with partially filled shells. Because of Hund's rules, the first few electrons in a shell tend to have the same spin, thereby increasing the total dipole moment.
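The Hund's-rule filling described above, and the resulting spin-only moment, can be sketched in code; this toy d-shell model ignores orbital contributions and crystal-field effects:

```python
import math

def unpaired_d_electrons(n_d):
    """Fill a d shell (5 orbitals) by Hund's first rule: each of the five
    orbitals takes one electron of the same spin before any pairing begins."""
    if not 0 <= n_d <= 10:
        raise ValueError("a d shell holds 0-10 electrons")
    return n_d if n_d <= 5 else 10 - n_d

def spin_only_moment(n_unpaired):
    """Spin-only magnetic moment in Bohr magnetons: mu = sqrt(n * (n + 2))."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

# Fe2+ has a 3d6 configuration: four unpaired spins survive after one
# orbital pairs up, giving mu = sqrt(24) ~ 4.9 Bohr magnetons.
n = unpaired_d_electrons(6)
print(n, round(spin_only_moment(n), 2))
```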
These unpaired dipoles (often called simply "spins" even though they also generally include orbital angular momentum) tend to align in parallel to an external magnetic field, an effect called paramagnetism. Ferromagnetism involves an additional phenomenon, however: in a few substances the dipoles tend to align spontaneously, giving rise to a spontaneous magnetization, even when there is no applied field.
When two nearby atoms have unpaired electrons, whether the electron spins are parallel or antiparallel affects whether the electrons can share the same orbit as a result of the quantum mechanical effect called the exchange interaction. This in turn affects the electron location and the Coulomb (electrostatic) interaction and thus the energy difference between these states.
The exchange interaction is related to the Pauli exclusion principle, which says that two electrons with the same spin cannot also be in the same spatial state (orbital). This is a consequence of the spin-statistics theorem and that electrons are fermions. Therefore, under certain conditions, when the orbitals of the unpaired outer valence electrons from adjacent atoms overlap, the distributions of their electric charge in space are farther apart when the electrons have parallel spins than when they have opposite spins. This reduces the electrostatic energy of the electrons when their spins are parallel compared to their energy when the spins are anti-parallel, so the parallel-spin state is more stable. In simple terms, the electrons, which repel one another, can move "further apart" by aligning their spins, so the spins of these electrons tend to line up. This difference in energy is called the exchange energy.
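The sign convention for the exchange energy can be stated compactly in Ising form; the coupling value below is a placeholder in arbitrary units:

```python
def exchange_energy(s1, s2, J=1.0):
    """Ising-style pair energy E = -J * s1 * s2 for spins s = +/-1.
    With J > 0 (ferromagnetic coupling) the parallel configuration
    has the lower energy, so parallel alignment is favored."""
    return -J * s1 * s2

parallel = exchange_energy(+1, +1)      # -J
antiparallel = exchange_energy(+1, -1)  # +J
print(parallel < antiparallel)          # parallel spins win for J > 0
```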
This energy difference can be orders of magnitude larger than the energy differences associated with the magnetic dipole-dipole interaction due to dipole orientation, which tends to align the dipoles antiparallel. In certain doped semiconductor oxides RKKY interactions have been shown to bring about periodic longer-range magnetic interactions, a phenomenon of significance in the study of spintronic materials.
The materials in which the exchange interaction is much stronger than the competing dipole-dipole interaction are frequently called "magnetic materials". For instance, in iron (Fe) the exchange force is about 1000 times stronger than the dipole interaction. Therefore, below the Curie temperature virtually all of the dipoles in a ferromagnetic material will be aligned. In addition to ferromagnetism, the exchange interaction is also responsible for the other types of spontaneous ordering of atomic magnetic moments occurring in magnetic solids, antiferromagnetism and ferrimagnetism.
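The link between the exchange strength and the Curie temperature can be illustrated with the standard mean-field estimate for the spin-1/2 Ising model, k_B·Tc = z·J; the coupling of ~0.01 eV per bond below is an illustrative assumption, not a fitted parameter for iron:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K
EV = 1.602176634e-19  # one electronvolt in joules

def mean_field_curie_temperature(J, z):
    """Mean-field estimate for the spin-1/2 Ising model: k_B * Tc = z * J,
    where J is the exchange coupling per bond and z the number of nearest
    neighbours. Mean-field theory systematically overestimates the true Tc."""
    return z * J / K_B

# Illustrative numbers: J ~ 0.01 eV per bond, z = 8 nearest neighbours
# (a bcc lattice, as in iron). The result lands near 1e3 K, the right
# order of magnitude for iron's Curie temperature.
print(round(mean_field_curie_temperature(0.01 * EV, z=8)))
```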
There are different exchange interaction mechanisms which create the magnetism in different ferromagnetic, ferrimagnetic, and antiferromagnetic substances. These mechanisms include direct exchange, RKKY exchange, double exchange, and superexchange.
Although the exchange interaction keeps spins aligned, it does not align them in a particular direction. Without magnetic anisotropy, the spins in a magnet randomly change direction in response to thermal fluctuations and the magnet is superparamagnetic. There are several kinds of magnetic anisotropy, the most common of which is magnetocrystalline anisotropy. This is a dependence of the energy on the direction of magnetization relative to the crystallographic lattice. Another common source of anisotropy, inverse magnetostriction, is induced by internal strains. Single-domain magnets also can have a "shape anisotropy" due to the magnetostatic effects of the particle shape. As the temperature of a magnet increases, the anisotropy tends to decrease, and there is often a blocking temperature at which a transition to superparamagnetism occurs.
The above would seem to suggest that every piece of ferromagnetic material should have a strong magnetic field, since all the spins are aligned, yet iron and other ferromagnets are often found in an "unmagnetized" state. The reason for this is that a bulk piece of ferromagnetic material is divided into tiny regions called "magnetic domains"
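The onset and loss of spontaneous alignment at the Curie temperature can be demonstrated with a minimal Monte Carlo simulation of the 2D Ising model, a standard toy model of a ferromagnet (units here set J = k_B = 1; lattice size and sweep count are arbitrary choices):

```python
import math
import random

def ising_magnetization(L=16, T=1.0, sweeps=100, seed=1):
    """Metropolis dynamics for the 2D Ising model (J = k_B = 1, periodic
    boundaries), started from the fully aligned state. Well below the
    critical temperature (~2.27 in these units) the alignment survives
    thermal noise; well above it, the spins disorder and the
    magnetization per spin |m| collapses toward zero."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1
    return abs(sum(map(sum, spins))) / (L * L)

m_cold = ising_magnetization(T=1.0)  # deep in the ferromagnetic phase
m_hot = ising_magnetization(T=5.0)   # well above the critical temperature
print(m_cold, m_hot)  # m_cold stays near 1, m_hot falls toward 0
```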
|
https://en.wikipedia.org/wiki?curid=11807
|
Francesco Cossiga
Francesco Cossiga (26 July 1928 – 17 August 2010) was an Italian politician. A member of Christian Democracy (DC), he was Prime Minister of Italy from 1979 to 1980 and the eighth President of Italy from 1985 to 1992. Cossiga is widely considered one of the most prominent and influential politicians of the Italian First Republic.
Cossiga also served as minister on several occasions, most notably as Italian Minister of the Interior. In that position he restructured the Italian police, civil protection and secret services. Due to his repressive approach to public protests, he has been described as a strongman and labeled the "iron minister". He was in office at the time of the kidnapping and murder of Aldo Moro by the Red Brigades, and resigned as Minister of the Interior when Moro was found dead in 1978. Cossiga was Prime Minister during the Bologna station bombing in 1980.
Before his political career, Cossiga was a professor of constitutional law at the University of Sassari.
Francesco Cossiga was born in Sassari on 26 July 1928, into a republican and anti-fascist middle-bourgeois family. He was the second-degree cousin of Enrico and Giovanni Berlinguer. Although he was commonly called "Cossìga", the original pronunciation of the surname is "Còssiga". His surname in Sardinian and Sassarese means "Corsica", likely pointing to the family's origin.
At the age of sixteen, he graduated three years early from the Domenico Alberto Azuni classical lyceum. The following year he joined Christian Democracy and, three years later, at only 19 years old, he graduated in law and began a university career as a professor of constitutional law at the faculty of jurisprudence of the University of Sassari.
During his period at the university he became a member of the Catholic Federation of University Students (FUCI), becoming the association's leader for Sassari.
After the 1958 general election Cossiga was elected in the Chamber of Deputies for the first time, representing the constituency of Cagliari–Sassari.
In February 1966 he became the youngest Undersecretary of the Ministry of Defence, in the government of Aldo Moro. In this role he had to face the aftermath of Piano Solo, a planned Italian "coup d'état" requested by then-President Antonio Segni two years earlier.
From November 1974 to February 1976 Cossiga was Minister of Public Administration in Moro's fourth government.
On 12 February 1976, Cossiga was appointed Minister of the Interior by Prime Minister Moro. During his term he restructured the Italian police, civil protection and secret services. Cossiga was often described as a strongman and labeled the "iron minister" for repressing public protests; during his tenure, protesters often stylized his surname as "Kossiga", writing the "SS" with the Nazi SS symbol.
In 1977 the city of Bologna was the scene of violent street clashes. In particular, on March 11 a militant of the far-left organization "Lotta Continua", Francesco Lorusso, was killed by a gunshot to the back (probably fired by a policeman), when police dispersed protesters against a mass meeting of Communion and Liberation, which was being held that morning at the University. This event served as a detonator for a long series of clashes with security forces for two days, which affected the entire city of Bologna.
Cossiga sent armored vehicles into the university area and other hot spots of the city to quell what he perceived as guerrilla warfare. Clashes with the police caused numerous casualties among people caught up in the riots, including uninvolved locals. No established left party, except the Youth Socialist Federation led by local secretary Emilio Lonardo, participated in the funeral of the student Lorusso, showing the dramatic split between the movement and the historical left parties.
Turin was also the scene of bloody clashes and attacks. On October 1, 1977, after a procession had started with an attack on the headquarters of the Italian Social Movement (MSI), a group of militants of "Lotta Continua" reached a downtown bar, "L'angelo azzurro" (The Blue Angel), frequented by young right-wing activists. They threw two Molotov cocktails, and Roberto Crescenzio, a totally apolitical student, died of burns. The perpetrators of the murder were never identified. "Lotta Continua" leader Silvio Viale called it a "tragic accident".
Another innocent victim of the riots of that year was Giorgiana Masi, who was killed in Rome by a gunshot during an event organized by the Radical Party to celebrate the third anniversary of the victory in the referendum on divorce. As the perpetrators of the murder remained unknown, the movement attributed the responsibility of the crime to police officers in plain clothes, which were immortalized at that time dressed in clothing of the style of young people of the movement.
Cossiga was in office at the time of the kidnapping and murder of the Christian Democratic leader Aldo Moro by the Marxist-Leninist extreme-left terrorist group Red Brigades. On the morning of 16 March 1978, the day on which the new cabinet led by Giulio Andreotti was supposed to have undergone a confidence vote in the Italian Parliament, the car of Moro, former prime minister and then president of DC, was assaulted by a group of Red Brigades terrorists in Via Fani in Rome. Firing automatic weapons, the terrorists killed Moro's bodyguards (two Carabinieri in Moro's car and three policemen in the following car) and kidnapped him.
Cossiga immediately formed two "crisis committees". The first was a technical-operational-political committee, chaired by Cossiga himself and, in his absence, by undersecretary Nicola Lettieri. Other members included the supreme commanders of the Italian police forces, the Carabinieri and the Guardia di Finanza, the recently appointed directors of SISMI and SISDE (respectively, Italy's military and civil intelligence services), the national secretary of CESIS (a secret information agency), the director of UCIGOS and the police prefect of Rome. The second was an information committee, including members of CESIS, SISDE, SISMI and SIOS, another military intelligence office.
A third, unofficial committee was also created, which never met officially: the "comitato di esperti" ("committee of experts"). Its existence was not disclosed until 1981, by Cossiga himself, during his interrogation by the Italian Parliament's commission on the Moro affair; he did not, however, reveal the committee's decisions and activities. This committee included Steve Pieczenik, a psychologist of the anti-terrorism section of the US State Department, and notable Italian criminologists. Pieczenik later declared that there had been numerous leaks about the committee's discussions, for which he blamed Cossiga.
On 9 May 1978, however, Moro's body was found in the trunk of a Renault 4 in Via Caetani after 55 days of imprisonment, during which Moro was subjected to a political trial by the so-called "people's court" set up by the Brigate Rosse, which demanded an exchange of prisoners from the Italian government. Despite the common interpretation, the car's location in Via Caetani was not halfway between the locations of the national offices of DC and of the Italian Communist Party (PCI) in Rome. Two days later, Cossiga resigned as Minister of the Interior. According to Italian journalist Enrico Deaglio, Cossiga, to justify his lack of action, "accused the leaders of CGIL and of the Communist Party of knowing where Moro was detained". Cossiga was also accused by Moro himself, in the letters Moro wrote during his detention, in which he said that his blood would fall on Cossiga.
One year after Moro's death and Cossiga's subsequent resignation as Interior Minister, he was appointed Prime Minister of Italy. He led a coalition government composed of Christian Democrats, Socialists, Democratic Socialists, Republicans and Liberals.
Cossiga was head of the government during the Bologna massacre, a terrorist bombing of the Bologna Central Station on the morning of 2 August 1980, which killed 85 people and wounded more than 200. The attack was attributed to the neo-fascist terrorist organization "Nuclei Armati Rivoluzionari" (Armed Revolutionary Nucleus), which always denied any involvement; other theories have been proposed, especially in correlation with the strategy of tension.
Francesco Cossiga first assumed the explosion to have been caused by an accident (the explosion of an old boiler located in the basement of the station). Nevertheless, the evidence gathered at the site soon made it clear that the attack was an act of terrorism. Already on 3 August, "L'Unità", the newspaper of the Communist Party, attributed responsibility for the attack to neo-fascists. Later, in a special session of the Senate, Cossiga supported the theory that neofascists were behind the attack: "unlike leftist terrorism, which strikes at the heart of the state through its representatives, black terrorism prefers the massacre because it promotes panic and impulsive reactions."
Later, according to media reports in 2004, taken up again in 2007, Cossiga, in a letter addressed to Enzo Fragala, leader of the National Alliance section in the Mitrokhin Committee, suggested Palestinian involvement, pointing to George Habash's Popular Front for the Liberation of Palestine (PFLP) and the Separat group of Ilich Ramirez Sanchez, known as "Carlos the Jackal". In addition, in 2008 Cossiga gave an interview to the BBC in which he reaffirmed his belief that the massacre was attributable not to black terrorism but to an "incident" of Palestinian resistance groups operating in Italy. He also declared that he was convinced of the innocence of Francesca Mambro and Giuseppe Valerio Fioravanti, the two neo-fascist terrorists accused of the massacre. The PFLP has always denied responsibility.
In October 1980, Cossiga resigned as Prime Minister after the rejection of the annual budget bill by the Italian Parliament.
Following the 1983 general election, Cossiga became a member of the Italian Senate; on 12 July, he was elected President of the Senate.
In the 1985 presidential election, Cossiga was elected as President of Italy with 752 votes out of 977. His candidacy was endorsed by the Christian Democracy, but supported also by communists, socialists, social democrats, liberals and republicans. This was the first time an Italian presidential candidate had won the election on the first ballot, where a two thirds majority is necessary.
The Cossiga presidency was essentially divided into two phases, reflecting the changing attitude of the head of state. In the first five years, Cossiga played his role in a traditional way, safeguarding the republican institutions under the Constitution, which makes the President of the Republic a kind of arbiter in relations between the powers of the state.
It was in his last two years as President that Cossiga began to express some unusual opinions regarding the Italian political system. He opined that the Italian parties, especially the Christian Democrats and the Communists, had to take into account the deep changes brought about by the fall of the Berlin Wall and the end of the Cold War. According to him, DC and PCI would be seriously affected by this change, but Cossiga believed that the political parties and the institutions themselves refused to recognize it.
Thus began a period of conflict and political controversy, often provocative and deliberately excessive, and with very strong media exposure. These statements, soon dubbed "esternazioni" or "mattock blows" ("picconate"), were considered by many to be inappropriate for a President, and often beyond his constitutional powers; his mental health was also questioned, and Cossiga felt compelled to declare, "I am the fake madman who speaks the truth." Cossiga suffered from bipolar disorder and depression in the last years of his life.
Among the President's statements were allegations of excessive politicization of the judiciary, and criticism of the fact that young magistrates, just entering service, were immediately assigned to Sicilian prosecutors' offices to handle mafia proceedings.
For his changed attitude, Cossiga was criticized by almost every party, with the exception of the Italian Social Movement (MSI), which stood beside him in defense of the "picconate". Among other things, he would later be considered one of the first to legitimize the MSI, recognizing it as a constitutional and democratic force.
Tension developed between Cossiga and Prime Minister Giulio Andreotti. This tension emerged when Andreotti revealed the existence of Gladio, a stay-behind organization with the official aim of countering a possible Soviet invasion through sabotage and guerrilla warfare behind enemy lines. Cossiga acknowledged his involvement in the establishment of the organization. The Democratic Party of the Left (successor to the Communist Party) started the procedure of impeachment (Presidents of Italy can be impeached only for high treason against the State or for an attempt to overthrow the Constitution). Although he threatened to prevent the impeachment procedure by dissolving Parliament, the impeachment request was ultimately dismissed.
Cossiga resigned two months before the end of his term, on 25 April 1992. In his last speech as President he stated "To young people I want to say to love the fatherland, to honor the nation, to serve the Republic, to believe in freedom and to believe in our country".
According to the Italian Constitution, after his resignation from the office of President, Cossiga became senator for life, joining his predecessors in the upper house of Parliament, with whom he also shared the title of President Emeritus of the Italian Republic.
On 12 February 1997, Cossiga survived unscathed a railway accident (), while traveling on a high-speed train from Milan to Rome that derailed near Piacenza.
In February 1998, Cossiga created the Democratic Union for the Republic (UDR), a Christian democratic political party, declaring it to be politically centrist. The UDR was a crucial component of the majority that supported Massimo D'Alema's government in October 1998, after Romano Prodi's government fell on a lost vote of confidence. Cossiga declared that his support for D'Alema was intended to end the conventional exclusion of former communist leaders from the premiership in Italy.
In 1999 UDR was dissolved and Cossiga returned to his activities as a senator, with competences in the Military Affairs' Commission.
In May 2006, Cossiga gave his support to the formation of Prodi's second government. In the same month, he introduced a bill that would allow the region of South Tyrol to hold a referendum in which the local electorate could decide whether to remain within the Republic of Italy, become independent, or become part of Austria again.
On 27 November 2006, he resigned from his position as a lifetime senator. His resignation was, however, rejected on 31 January 2007 by a vote of the Senate.
In May 2008, Cossiga voted in favor of the government of Silvio Berlusconi.
Cossiga died on 17 August 2010 from respiratory problems at the Agostino Gemelli Polyclinic. After his death, four letters written by Cossiga were sent to the four highest authorities of the state in office at the time of his death, President of the Republic Giorgio Napolitano, President of the Senate Renato Schifani, President of the Chamber of Deputies Gianfranco Fini and Prime Minister Silvio Berlusconi.
The funeral took place in his hometown, Sassari, at the Church of San Giuseppe. Cossiga is buried in the public cemetery of Sassari, in the family tomb, not far from one of his predecessors as President of Italy, Antonio Segni.
In 2007, Cossiga wrote (referring to the September 11 attacks of 2001): "all democratic circles in America and of Europe, especially those of the Italian centre-left, now know that the disastrous attack was planned and realized by the American CIA and Mossad with the help of the Zionist world, to place the blame on Arab countries and to persuade the Western powers to intervene in Iraq and Afghanistan". However, the previous year Cossiga had stated that he rejected conspiracy theories and that it "seems unlikely that September 11 was the result of an American plot."
In the same statement, Cossiga claimed that a video tape circulated by Osama bin Laden's al Qaeda and containing threats against Silvio Berlusconi was "produced in the studios of Mediaset in Milan" and forwarded to the "Islamist Al-Jazeera television network." The purpose of that video tape (which was actually an audio tape) was to raise "a wave of solidarity to Berlusconi" who was, at the time, facing political difficulties.
In 2008, Francesco Cossiga said that Mario Draghi was "a craven moneyman".
Cossiga blamed the loss of Itavia Flight 870, a passenger jet that crashed in 1980 with the loss of all 81 people on board, on a missile fired from a French Navy aircraft. On 23 January 2013 Italy's top criminal court ruled that there was "abundantly" clear evidence that the flight was brought down by a missile.
As President of the Republic, Cossiga was Head (and also Knight Grand Cross with Grand Cordon) of the Order of Merit of the Italian Republic (from 3 July 1985 to 28 April 1992), Military Order of Italy, Order of the Star of Italian Solidarity, Order of Merit for Labour and Order of Vittorio Veneto and Grand Cross of Merit of the Italian Red Cross. He has also been given honours and awards by other countries.
|
https://en.wikipedia.org/wiki?curid=11809
|
Lockheed Martin F-35 Lightning II
The Lockheed Martin F-35 Lightning II is an American family of single-seat, single-engine, all-weather stealth multirole combat aircraft. It is intended to perform both air superiority and strike missions while also providing electronic warfare and intelligence, surveillance, and reconnaissance capabilities. Lockheed Martin is the prime F-35 contractor, with principal partners Northrop Grumman and BAE Systems. The aircraft has three main variants: the conventional takeoff and landing F-35A (CTOL), the short take-off and vertical-landing F-35B (STOVL), and the carrier-based F-35C (CV/CATOBAR).
The aircraft descends from the Lockheed Martin X-35, which in 2001 beat the Boeing X-32 to win the Joint Strike Fighter (JSF) program. Its development is principally funded by the United States, with additional funding from program partner countries from NATO and close U.S. allies, including the United Kingdom, Italy, Australia, Canada, Norway, Denmark, the Netherlands, and formerly Turkey. Several other countries have ordered, or are considering ordering, the aircraft. The program has drawn much scrutiny and criticism for its unprecedented size, complexity, ballooning costs, and much-delayed deliveries. The acquisition strategy of concurrent production of the aircraft while it was still in development and testing led to expensive design changes and retrofits.
The F-35B entered service with the U.S. Marine Corps in July 2015, followed by the U.S. Air Force F-35A in August 2016 and the U.S. Navy F-35C in February 2019. The F-35 was first used in combat in 2018 by the Israeli Air Force. The U.S. plans to buy 2,456 F-35s through 2044, which will represent the bulk of the crewed tactical airpower of the U.S. Air Force, Navy, and Marine Corps for several decades. The aircraft is projected to operate until 2070.
The F-35 was the product of the Joint Strike Fighter (JSF) program, which was the merger of various combat aircraft programs from the 1980s and 1990s. One progenitor program was the Defense Advanced Research Projects Agency (DARPA) Advanced Short Take-Off/Vertical Landing (ASTOVL) which ran from 1983 to 1994; ASTOVL aimed to develop a Harrier Jump Jet replacement for the U.K. Royal Navy and the U.S. Marine Corps (USMC). Under one of ASTOVL's classified programs, the Supersonic STOVL Fighter (SSF), Lockheed Skunk Works conducted research for a stealthy supersonic STOVL fighter intended for both U.S. Air Force (USAF) and USMC; a key technology explored was the shaft-driven lift fan (SDLF) system. Lockheed's concept was a single-engine canard delta aircraft weighing about empty. ASTOVL was rechristened as the Common Affordable Lightweight Fighter (CALF) in 1993 and involved Lockheed, McDonnell Douglas, and Boeing.
In 1993, the Joint Advanced Strike Technology (JAST) program emerged following the cancellations of the USAF's Multi-Role Fighter (MRF) and the U.S. Navy's (USN) Advanced Fighter-Attack (A/F-X) programs. MRF, a program for a relatively affordable F-16 replacement, was scaled back and delayed due to post-Cold War defense cuts easing F-16 fleet usage and thus extending its service life, as well as increasing budget pressure from the F-22 program. The A/F-X, initially known as the Advanced-Attack (A-X), began in 1991 as the USN's follow-on to the Advanced Tactical Aircraft (ATA) program for an A-6 replacement; the resulting A-12 Avenger II was cancelled due to problems and cost overruns in 1991. In the same year, the termination of the Naval Advanced Tactical Fighter (NATF), an offshoot of the USAF's Advanced Tactical Fighter (ATF) program, to replace the F-14 resulted in additional fighter capability being added to A-X, which was then renamed A/F-X. Amid increased budget pressure, the Department of Defense's (DoD) Bottom-Up Review (BUR) in September 1993 announced MRF's and A/F-X's cancellations, with applicable experience brought to the emerging JAST program. JAST was not meant to develop a new aircraft, instead developing requirements, maturing technologies, and demonstrating concepts for advanced strike warfare.
As JAST progressed, the need for concept demonstrator aircraft by 1996 emerged, which would coincide with the full-scale flight demonstrator phase of ASTOVL/CALF. Because the ASTOVL/CALF concept appeared to align with the JAST charter, the two programs were eventually merged in 1994 under the JAST name, with the program now serving the USAF, USMC, and USN. JAST was subsequently renamed the Joint Strike Fighter (JSF) in 1995, with STOVL submissions by McDonnell Douglas, Northrop Grumman, Lockheed Martin, and Boeing. The JSF was expected to eventually replace large numbers of multi-role and strike fighters in the inventories of the US and its allies, including the Harrier, F-16, F/A-18, A-10, and F-117.
International participation is a key aspect of the JSF program, starting with United Kingdom participation in the ASTOVL program. Many international partners that operated the F-16 and F/A-18 and needed to modernize their air forces were interested in the JSF. The United Kingdom joined JAST/JSF as a founding member in 1995 and thus became the only Tier 1 partner of the JSF program; Italy, the Netherlands, Denmark, Norway, Canada, Australia, and Turkey joined the program during the Concept Demonstration Phase (CDP), with Italy and the Netherlands being Tier 2 partners and the rest Tier 3. Consequently, the aircraft was developed in cooperation with international partners and available for export.
Boeing and Lockheed Martin were selected in early 1997 for CDP, with their concept demonstrator aircraft designated X-32 and X-35 respectively; the McDonnell Douglas team was eliminated, and Northrop Grumman and British Aerospace joined the Lockheed Martin team. Each firm would produce two prototype air vehicles to demonstrate conventional takeoff and landing (CTOL), carrier takeoff and landing (CV), and STOVL. Lockheed Martin's design would leverage the work on the SDLF system conducted under the ASTOVL/CALF program. The key aspect of the X-35 that enabled STOVL operation was the SDLF system: a lift fan in the forward center fuselage, activated by engaging a clutch that connects a drive shaft to the engine's turbine, augmenting the thrust from the engine's swivel nozzle. Research from prior aircraft incorporating similar systems, such as the Convair Model 200, Rockwell XFV-12, and Yakovlev Yak-141, was also taken into consideration. By contrast, Boeing's X-32 employed a direct lift system, in which the augmented turbofan would be reconfigured when engaging in STOVL operation.
Lockheed Martin's commonality strategy was to replace the STOVL variant's SDLF with a fuel tank and the aft swivel nozzle with a two-dimensional thrust vectoring nozzle for the CTOL variant. This would enable an identical aerodynamic configuration for the STOVL and CTOL variants, while the CV variant would have an enlarged wing in order to reduce landing speed for carrier recovery. Due to aerodynamic characteristics and carrier recovery requirements from the JAST merger, the design configuration settled on a conventional tail compared to the canard delta design from the ASTOVL/CALF; notably, the conventional tail configuration offers much lower risk for carrier recovery compared to the ASTOVL/CALF canard configuration, which was designed without carrier compatibility in mind. This enabled greater commonality between all three variants, as the commonality goal was still very high at this stage of the design. Lockheed Martin's prototypes would consist of the X-35A for demonstrating CTOL before converting it to the X-35B for STOVL demonstration and the larger-winged X-35C for CV compatibility demonstration.
The X-35A first flew on 24 October 2000 and conducted flight tests for subsonic and supersonic flying qualities, handling, range, and maneuver performance. After 28 flights, the aircraft was then converted into the X-35B for STOVL testing, with key changes including the addition of the SDLF, the three-bearing swivel module (3BSM), and roll-control ducts. The X-35B would successfully demonstrate the SDLF system by performing stable hover, vertical landing, and short takeoff in less than . The X-35C first flew on 16 December 2000 and conducted field landing carrier practice tests.
On 26 October 2001, Lockheed Martin was declared the winner and was awarded the System Development and Demonstration (SDD) contract; Pratt & Whitney was separately awarded a contract to develop the F135 engine for the JSF. The F-35 designation, which was out of sequence with standard DoD numbering, was allegedly determined on the spot by program manager Major General Mike Hough; this came as a surprise even to Lockheed Martin, which had expected the "F-24" designation for the JSF.
As the JSF program moved into the SDD phase, the X-35 demonstrator design was modified to create the F-35 combat aircraft. The forward fuselage was lengthened by to make room for mission avionics, while the horizontal stabilizers were moved aft to retain balance and control. The diverterless supersonic inlet changed from a four-sided to a three-sided cowl shape and was moved aft. The fuselage section was fuller, the top surface raised by along the centerline to accommodate weapons bays. Following the designation of the X-35 prototypes, the three variants were designated F-35A (CTOL), F-35B (STOVL), and F-35C (CV). Prime contractor Lockheed Martin performs overall systems integration and final assembly and checkout (FACO), while Northrop Grumman and BAE Systems supply components for mission systems and airframe.
Adding the systems of a fighter aircraft added weight. The F-35B gained the most, largely due to a 2003 decision to enlarge the weapons bays for commonality between variants; the total weight growth was reportedly up to , over 8%, causing all STOVL key performance parameter (KPP) thresholds to be missed. In December 2003, the STOVL Weight Attack Team (SWAT) was formed to reduce the weight increase; changes included more engine thrust, thinned airframe members, smaller weapons bays and vertical stabilizers, less thrust fed to the roll-post outlets, and redesigning the wing-mate joint, electrical elements, and the airframe immediately aft of the cockpit. Many changes from the SWAT effort were applied to all three variants for commonality. By September 2004, these efforts had reduced the F-35B's weight by over , while the F-35A and F-35C were reduced in weight by and respectively. The weight reduction work cost $6.2 billion and caused an 18-month delay.
The first F-35A, designated AA-1, was rolled out in Fort Worth, Texas, on 19 February 2006 and first flew on 15 December 2006. The aircraft was given the name "Lightning II" in 2006.
The software was developed as six releases, or Blocks, for SDD. The first two Blocks, 1A and 1B, readied the F-35 for initial pilot training and multi-level security. Block 2A improved the training capabilities, while 2B was the first combat-ready release planned for the USMC's Initial Operating Capability (IOC). Block 3i retained the capabilities of 2B while adding new hardware and was planned for the USAF's IOC. The final release for SDD, Block 3F, would have the full flight envelope and all baseline combat capabilities. Alongside software releases, each Block also incorporated avionics hardware updates and air vehicle improvements from flight and structural testing. In what is known as "concurrency", some low rate initial production (LRIP) aircraft lots would be delivered in early Block configurations and eventually upgraded to Block 3F once development was complete. After 17,000 flight test hours, the final flight for the SDD phase was completed in April 2018. Like the F-22, the F-35 has been targeted by cyberattacks and technology theft efforts, as well as potential vulnerabilities in the integrity of the supply chain.
Testing found several major problems: early F-35B airframes had premature cracking, the F-35C arrestor hook design was unreliable, fuel tanks were too vulnerable to lightning strikes, the helmet display had problems, and more. Software was repeatedly delayed due to its unprecedented scope and complexity. In 2009, the DoD Joint Estimate Team (JET) estimated that the program was 30 months behind the public schedule. In 2011, the program was "re-baselined"; that is, its cost and schedule goals were changed, pushing the IOC from the planned 2010 to July 2015. The decision to simultaneously test, fix defects, and begin production was criticized as inefficient; in 2014, Under Secretary of Defense for Acquisition Frank Kendall called it "acquisition malpractice". The three variants shared just 25% of their parts, far below the anticipated commonality of 70%. The program received considerable criticism for cost overruns and for the total projected lifetime cost, as well as quality management shortcomings by contractors.
The JSF program was expected to cost about $200 billion in acquisition in base-year 2002 dollars when SDD was awarded in 2001. As early as 2005, the Government Accountability Office (GAO) had identified major program risks in cost and schedule. The costly delays strained the relationship between the Pentagon and contractors; Program Executive Officer Lt. General Christopher Bogdan highlighted the frayed relationship in 2012. By 2017, delays and cost overruns had pushed the F-35 program's expected lifetime (i.e., to 2070) cost to $1.5 trillion in then-year dollars: $406.5 billion for acquisition plus $1.1 trillion for operations and maintenance. The unit cost of LRIP lot 13 F-35A was $79.2 million. Delays in development and operational test & evaluation have pushed full-rate production to 2021.
The first combat-capable Block 2B configuration, which had basic air-to-air and strike capabilities, was declared ready by the USMC in July 2015. The Block 3F configuration began operational test and evaluation (OT&E) in December 2018, the completion of which will conclude SDD. The F-35 program is also conducting sustainment and upgrade development, with early LRIP aircraft gradually upgraded to the baseline Block 3F standard by 2021.
The F-35 is expected to be continually upgraded over its lifetime. The first upgrade program, called Continuous Capability Development and Delivery (C2D2), began in 2019 and is currently planned to run to 2024. The near-term development priority of C2D2 is Block 4, which would integrate additional weapons, including those unique to international customers, refresh the avionics, improve electronic support measures (ESM) capabilities, and add Remotely Operated Video Enhanced Receiver (ROVER) support. C2D2 also places greater emphasis on agile software development to enable quicker releases. In 2018, the Air Force Life Cycle Management Center (AFLCMC) awarded contracts to General Electric and Pratt & Whitney to develop more powerful and efficient adaptive cycle engines for potential application in the F-35, leveraging the research done under the Adaptive Engine Transition Program (AETP).
Defense contractors have offered upgrades to the F-35 outside of official program contracts. In 2013, Northrop Grumman disclosed its development of a directional infrared countermeasures (DIRCM) suite, named Threat Nullification Defensive Resource (ThNDR). The countermeasure system would share the same space as the Distributed Aperture System (DAS) sensors and acts as a laser missile jammer to protect against infrared-homing missiles.
The United States is the primary customer and financial backer, with planned procurement of 1,763 F-35As for the USAF, 353 F-35Bs and 67 F-35Cs for the USMC, and 273 F-35Cs for the USN. Additionally, the United Kingdom, Italy, the Netherlands, Canada, Turkey, Australia, Norway, and Denmark have agreed to contribute US$4.375 billion towards development costs, with the United Kingdom contributing about 10% of the planned development costs as the sole Tier 1 partner. The initial plan was that the U.S. and eight major partner nations would acquire over 3,100 F-35s through 2035. The three tiers of international participation generally reflect financial stake in the program, the amount of technology transfer and subcontracts open for bid by national companies, and the order in which countries can obtain production aircraft. Alongside program partner countries, Israel and Singapore have joined as Security Cooperative Participants (SCP). Sales to SCP and non-partner nations are made through the Pentagon's Foreign Military Sales program. Turkey was removed from the F-35 program in July 2019 over security concerns.
Japan announced on 20 December 2011 its intent to purchase 42 F-35s to replace the F-4 Phantom II, with 38 to be assembled domestically and deliveries beginning in 2016. Due to delays in development and testing, many initial orders have been postponed. Italy reduced its order from 131 to 90 F-35s in 2012. Australia decided to buy the F/A-18F Super Hornet in 2006 and the EA-18G Growler in 2013 as interim measures.
On 3 April 2012, the Auditor General of Canada Michael Ferguson published a report outlining problems with Canada's procurement of the jet; the report states that the government knowingly understated the final cost of 65 F-35s by $10 billion. Following the 2015 Federal Election, the Canadian government under the Liberal Party decided not to proceed with a sole-sourced purchase and launched a competition to choose an aircraft.
In January 2019, Singapore officially announced its plan to buy a small number of F-35s for an evaluation of capabilities and suitability before deciding on more to replace its F-16 fleet. In May 2019, Poland announced plans to buy 32 F-35As to replace its Soviet-era jets; the contract was signed on 31 January 2020.
The F-35 is a family of single-engine, supersonic, stealth multirole fighters. The second fifth-generation fighter to enter US service and the first operational supersonic STOVL stealth fighter, the F-35 emphasizes low observables, advanced avionics, and sensor fusion that enable a high level of situational awareness and long-range lethality; the USAF considers the aircraft its primary strike fighter for conducting suppression of enemy air defense (SEAD) missions, owing to its advanced sensors and mission systems.
The F-35 has a wing-tail configuration with two vertical stabilizers canted for stealth. Flight control surfaces include leading-edge flaps, flaperons, rudders, and all-moving horizontal tails (stabilators); leading edge root extensions also run forwards to the inlets. The relatively short 35-foot wingspan of the F-35A and F-35B is set by the requirement to fit inside USN amphibious assault ship parking areas and elevators; the F-35C's larger wing is more fuel efficient. The fixed diverterless supersonic inlets (DSI) use a bumped compression surface and forward-swept cowl to shed the boundary layer of the forebody away from the inlets, which form a Y-duct for the engine. Structurally, the F-35 drew upon lessons from the F-22; composites comprise 35% of airframe weight, with the majority being bismaleimide and composite epoxy materials as well as some carbon nanotube-reinforced epoxy in newer production lots. The F-35 is considerably heavier than the lightweight fighters it replaces, with the lightest variant having an empty weight of ; much of the weight can be attributed to the internal weapons bays and the extensive avionics carried.
While lacking the raw performance of the larger twin-engine F-22, the F-35 has kinematics competitive with fourth generation fighters such as the F-16 and F/A-18, especially with ordnance mounted because the F-35's internal weapons carriage eliminates parasitic drag from external stores. All variants have a top speed of Mach 1.6, attainable with full internal payload. The powerful F135 engine gives good subsonic acceleration and energy, with supersonic dash in afterburner. The large stabilators, leading edge extensions and flaps, and canted rudders provide excellent high alpha (angle-of-attack) characteristics, with a trimmed alpha of 50°. Relaxed stability and fly-by-wire controls provide excellent handling qualities and departure resistance. Having over double the F-16's internal fuel, the F-35 has considerably greater combat radius, while stealth also enables a more efficient mission flight profile.
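The connection between internal fuel and combat radius can be illustrated with the standard Breguet range equation for jet aircraft (a textbook approximation offered here for illustration only, not an official performance model):

```latex
% Breguet range equation for jet-powered cruise:
%   V   - cruise speed
%   c   - thrust-specific fuel consumption
%   L/D - lift-to-drag ratio
%   W_i, W_f - aircraft weight at start and end of cruise
R = \frac{V}{c}\,\frac{L}{D}\,\ln\!\frac{W_i}{W_f}
```

Holding speed, fuel consumption, and lift-to-drag ratio fixed, a larger internal fuel load raises the weight ratio \(W_i/W_f\) and hence the achievable range, while carrying fuel and weapons internally avoids the external-store drag that would otherwise degrade \(L/D\).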
The F-35's mission systems are among the most complex aspects of the aircraft. The avionics and sensor fusion are designed to enhance the pilot's situational awareness and command and control capabilities and facilitate network-centric warfare. Key sensors include the Northrop Grumman AN/APG-81 active electronically scanned array (AESA) radar, BAE Systems AN/ASQ-239 Barracuda electronic warfare system, Northrop Grumman/Raytheon AN/AAQ-37 Distributed Aperture System (DAS), Lockheed Martin AN/AAQ-40 Electro-Optical Targeting System (EOTS) and Northrop Grumman AN/ASQ-242 Communications, Navigation, and Identification (CNI) suite. The F-35 was designed with sensor intercommunication to provide a cohesive image of the local battlespace and availability for any possible use and combination with one another; for example, the APG-81 radar also acts as a part of the electronic warfare system.
Much of the F-35's software was developed in C and C++ programming languages, while Ada83 code from the F-22 was also used; the Block 3F software has 8.6 million lines of code. The Green Hills Software Integrity DO-178B real-time operating system (RTOS) runs on integrated core processors (ICPs); data networking includes the IEEE 1394b and Fibre Channel buses. To enable fleet software upgrades for the software-defined radio systems and greater upgrade flexibility and affordability, the avionics leverage commercial off-the-shelf (COTS) components when practical. The mission systems software, particularly for sensor fusion, was one of the program's most difficult parts and responsible for substantial program delays.
The APG-81 radar uses electronic scanning for rapid beam agility and incorporates passive and active air-to-air modes, strike modes, and synthetic aperture radar (SAR) capability, with multiple target tracking at ranges in excess of . The antenna is tilted backwards for stealth. Complementing the radar is the AAQ-37 DAS, which consists of six infrared sensors that provide all-aspect missile launch warning and target tracking; the DAS acts as a situational awareness infrared search-and-track (SAIRST) and gives the pilot spherical infrared and night-vision imagery on the helmet visor. The ASQ-239 Barracuda electronic warfare system has ten radio frequency antennas embedded into the edges of the wing and tail to provide all-aspect radar warning receiver (RWR) coverage. It also provides sensor fusion of radio frequency and infrared tracking functions, geolocation threat targeting, and multispectral image countermeasures for self-defense against missiles. The electronic warfare system is capable of detecting and jamming hostile radars. The AAQ-40 EOTS is mounted internally behind a faceted low-observable window under the nose and performs laser targeting, forward-looking infrared (FLIR), and long range IRST functions. The ASQ-242 CNI suite uses a half dozen different physical links, including the Multifunction Advanced Data Link (MADL), for covert CNI functions. Through sensor fusion, information from radio frequency receivers and infrared sensors is combined to form a single tactical picture for the pilot. All-aspect target direction and identification can be shared via MADL with other platforms without compromising low observability. Link 16 is present for communication with legacy systems.
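The core idea of combining overlapping measurements into one estimate can be sketched in miniature. The following is a generic inverse-variance weighting example, the simplest textbook form of sensor fusion; it is not the F-35's actual fusion algorithm, and all sensor names and values are hypothetical:

```python
# Conceptual sketch of sensor fusion (illustrative only): two sensors measure
# the same target bearing with different uncertainties; inverse-variance
# weighting combines them into a single estimate that is more certain than
# either measurement alone.

def fuse(measurements):
    """Fuse (value, variance) pairs; returns (fused_value, fused_variance)."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return fused_value, 1.0 / total

# Hypothetical inputs: a radar bearing of 30.0 deg with variance 4.0, and a
# more precise infrared bearing of 31.0 deg with variance 1.0.
bearing, variance = fuse([(30.0, 4.0), (31.0, 1.0)])
print(bearing, variance)  # the fused bearing leans toward the more precise sensor
```

A real fusion system must also solve track association (deciding which detections belong to the same target) and filter measurements over time, typically with Kalman-filter variants; the weighting above only shows why a fused estimate outperforms any single sensor.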
The F-35 was designed from the outset to incorporate improved processors, sensors, and software enhancements over its lifespan. Technology Refresh 3, which includes a new core processor and a new cockpit display, is planned for Lot 15 aircraft. Lockheed Martin has offered the Advanced EOTS for the Block 4 configuration; the improved sensor fits into the same area as the baseline EOTS with minimal changes. In June 2018, Lockheed Martin picked Raytheon for an improved DAS. The USAF has studied the potential for the F-35 to orchestrate attacks by unmanned combat aerial vehicles (UCAVs) via its sensors and communications equipment.
Stealth is a key aspect of the F-35's design, and radar cross-section (RCS) is minimized through careful shaping of the airframe and the use of radar-absorbent materials (RAM); visible measures to reduce RCS include alignment of edges, serration of skin panels, and the masking of the engine face and turbine. Additionally, the F-35's diverterless supersonic inlet (DSI) uses a compression bump and forward-swept cowl rather than a splitter gap or bleed system to divert the boundary layer away from the inlet duct, eliminating the diverter cavity and further reducing radar signature. The RCS of the F-35 has been characterized as lower than a metal golf ball at certain frequencies and angles; in some conditions, the F-35 compares favorably to the F-22 in stealth. For maintainability, the F-35's stealth design took lessons learned from prior stealth aircraft such as the F-22; the F-35's radar-absorbent fibermat skin is more durable and requires less maintenance than older topcoats. The aircraft also has reduced infrared and visual signatures as well as strict controls of radio frequency emitters to prevent their detection. The F-35's stealth design is primarily focused on high-frequency X-band wavelengths; low-frequency radars can spot stealthy aircraft due to Rayleigh scattering, but such radars are also conspicuous, susceptible to clutter, and lack precision. To disguise its RCS, the aircraft can mount four Luneburg lens reflectors.
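The "metal golf ball" comparison can be made concrete with the optical-region approximation for a conducting sphere (an order-of-magnitude illustration; actual F-35 RCS figures are classified):

```latex
% Optical-region RCS of a conducting sphere of radius r (valid for r >> wavelength):
\sigma = \pi r^2
\quad\Rightarrow\quad
\sigma_\text{golf ball} \approx \pi \,(0.0213\ \text{m})^2
\approx 1.4\times 10^{-3}\ \text{m}^2
\approx -28\ \text{dBsm}
```

using the regulation golf-ball diameter of about 42.7 mm and the convention \( \text{dBsm} = 10\log_{10}(\sigma / 1\ \text{m}^2) \). For comparison, conventional fighters are typically quoted at several square meters, so the claim implies a reduction of roughly three to four orders of magnitude at the frequencies and angles concerned.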
Noise from the F-35 caused concerns in residential areas near potential bases for the aircraft, and residents near two such bases—Luke Air Force Base, Arizona, and Eglin Air Force Base, Florida—requested environmental impact studies in 2008 and 2009 respectively. Although the noise level in decibels was comparable to that of prior fighters such as the F-16, the sound power of the F-35 is stronger, particularly at lower frequencies. Subsequent surveys and studies have indicated that the noise of the F-35 was not perceptibly different from the F-16 and F/A-18E/F, though the greater low-frequency noise was noticeable for some observers.
The glass cockpit was designed to give the pilot good situational awareness. The main display is a 20- by 8-inch (50 by 20 cm) panoramic touchscreen, which shows flight instruments, stores management, CNI information, and integrated caution and warnings; the pilot can customize the arrangement of the information. Below the main display is a smaller stand-by display. The cockpit has a speech-recognition system developed by Adacel. The F-35 does not have a head-up display; instead, flight and combat information is displayed on the visor of the pilot's helmet in a helmet-mounted display system (HMDS). The one-piece tinted canopy is hinged at the front and has an internal frame for structural strength. The Martin-Baker US16E ejection seat is launched by a twin-catapult system housed on side rails. Flight controls follow a hands-on throttle-and-stick (HOTAS) arrangement with a right-hand side stick. For life support, an onboard oxygen-generation system (OBOGS) is fitted and powered by the Integrated Power Package (IPP), with an auxiliary oxygen bottle and backup oxygen system for emergencies.
The Vision Systems International helmet display is a key piece of the F-35's human-machine interface. Instead of the head-up display mounted atop the dashboard of earlier fighters, the HMDS puts flight and combat information on the helmet visor, allowing the pilot to see it no matter which way he or she is facing. Infrared and night vision imagery from the Distributed Aperture System can be displayed directly on the HMDS and enables the pilot to "see through" the aircraft. The HMDS allows an F-35 pilot to fire missiles at targets even when the nose of the aircraft is pointing elsewhere by cuing missile seekers at high angles off-boresight. Each helmet costs $400,000. The HMDS weighs more than traditional helmets, and there is concern that it can endanger lightweight pilots during ejection.
Due to the HMDS's vibration, jitter, night-vision and sensor display problems during development, Lockheed Martin and Elbit issued a draft specification in 2011 for an alternative HMDS based on the AN/AVS-9 night vision goggles as backup, with BAE Systems chosen later that year. A cockpit redesign would be needed to adopt an alternative HMDS. Following progress on the baseline helmet, development on the alternative HMDS was halted in October 2013. In 2016, the Gen 3 helmet with improved night vision camera, new liquid crystal displays, automated alignment and software enhancements was introduced with LRIP lot 7.
To preserve its stealth shaping, the F-35 has two internal weapons bays with four weapons stations. The two outboard weapon stations each can carry ordnance up to , or for F-35B, while the two inboard stations carry air-to-air missiles. Air-to-surface weapons for the outboard station include the Joint Direct Attack Munition (JDAM), Paveway series of bombs, Joint Standoff Weapon (JSOW), and cluster munitions (Wind Corrected Munitions Dispenser). The station can also carry multiple smaller munitions such as the GBU-39 Small Diameter Bombs (SDB), GBU-53/B SDB II, and the SPEAR 3 anti-tank missiles; up to four SDBs can be carried per station for the F-35A and F-35C, and three for F-35B. The inboard station can carry the AIM-120 AMRAAM. Two compartments behind the weapons bays contain flares, chaff, and towed decoys.
The aircraft can use six external weapons stations for missions that do not require stealth. The wingtip pylons each can carry an AIM-9X or AIM-132 ASRAAM and are canted outwards to reduce their radar cross-section. Additionally, each wing has an inboard station and a middle station, or for F-35B. The external wing stations can carry large air-to-surface weapons that would not fit inside the weapons bays such as the AGM-158 Joint Air to Surface Stand-off Missile (JASSM) cruise missile. An air-to-air missile load of eight AIM-120s and two AIM-9s is possible using internal and external weapons stations; a configuration of six bombs, two AIM-120s and two AIM-9s can also be arranged. The F-35A is armed with a 25 mm GAU-22/A rotary cannon mounted internally near the left wing root with 182 rounds carried; the gun is more effective against ground targets than the 20 mm cannon carried by other USAF fighters. The F-35B and F-35C have no internal gun and instead can use a Terma A/S multi-mission pod (MMP) carrying the GAU-22/A and 220 rounds; the pod is mounted on the centerline of the aircraft and shaped to reduce its radar cross-section. In lieu of the gun, the pod can also be used for different equipment and purposes, such as electronic warfare, aerial reconnaissance, or rear-facing tactical radar.
Lockheed Martin is developing a weapon rack called Sidekick that would enable the internal outboard station to carry two AIM-120s, thus increasing the internal air-to-air payload to six missiles, currently offered for Block 4. Block 4 will also have a rearranged hydraulic line and bracket to allow the F-35B to carry four SDBs per internal outboard station; integration of the MBDA Meteor is also planned. The USAF and USN are planning to integrate the AGM-88G AARGM-ER internally in the F-35A and F-35C. Norway and Australia are funding an adaptation of the Naval Strike Missile (NSM) for the F-35; designated Joint Strike Missile (JSM), two missiles can be carried internally with an additional four externally. Nuclear weapons delivery via internal carriage of the B61 nuclear bomb is planned for Block 4B in 2024. Both hypersonic missiles and directed-energy weapons such as solid-state lasers are currently being considered as future upgrades. Lockheed Martin is studying the integration of a fiber laser that uses spectral beam combining to merge multiple individual laser modules into a single high-power beam, which can be scaled to various levels.
The USAF plans for the F-35A to take up the close air support (CAS) mission in contested environments; amid criticism that it is not as well suited as a dedicated attack platform, USAF chief of staff Mark Welsh placed a focus on weapons for CAS sorties, including guided rockets, fragmentation rockets that shatter into individual projectiles before impact, and more compact ammunition for higher capacity gun pods. Fragmentary rocket warheads create greater effects than cannon shells as each rocket creates a "thousand-round burst", delivering more projectiles than a strafing run.
The single-engine aircraft is powered by the Pratt & Whitney F135 low-bypass augmented turbofan with rated thrust of . Derived from the Pratt & Whitney F119 used by the F-22, the F135 has a larger fan and higher bypass ratio to increase subsonic fuel efficiency, and unlike the F119, is not optimized for supercruise. The engine contributes to the F-35's stealth by having a low-observable augmenter, or afterburner, that incorporates fuel injectors into thick curved vanes; these vanes are covered by ceramic radar-absorbent materials and mask the turbine. The stealthy augmenter had problems with pressure pulsations, or "screech", at low altitude and high speed early in its development. The low-observable axisymmetric nozzle consists of 15 partially overlapping flaps that create a sawtooth pattern at the trailing edge, which reduces radar signature and creates shed vortices that reduce the infrared signature of the exhaust plume. Due to the engine's large dimensions, the USN had to modify its underway replenishment system to facilitate at-sea logistics support.
The F135-PW-600 variant for the F-35B incorporates the SDLF to allow STOVL operations. Designed by Lockheed Martin and developed by Rolls-Royce, the SDLF, also known as the Rolls-Royce LiftSystem, consists of the lift fan, drive shaft, two roll posts, and a "three-bearing swivel module" (3BSM). The thrust vectoring 3BSM nozzle allows the main engine exhaust to be deflected downward at the tail of the aircraft and is moved by a "fueldraulic" actuator that uses pressurized fuel as the working fluid. Unlike the Harrier's Rolls-Royce Pegasus engine, which relies entirely on direct engine thrust for lift, the F-35B's system augments the swivel nozzle's thrust with the lift fan; the fan is powered by the low-pressure turbine through a drive shaft when engaged with a clutch and placed near the front of the aircraft to provide a counterbalancing thrust. Roll control during slow flight is achieved by diverting unheated engine bypass air through wing-mounted thrust nozzles called roll posts.
An alternative engine, the General Electric/Rolls-Royce F136, was being developed in the 2000s; originally, F-35 engines from Lot 6 onward were to be competitively tendered. Using technology from the General Electric YF120, the F136 was claimed to have a greater temperature margin than the F135. The F136 was canceled in December 2011 due to lack of funding.
In 2016, the Adaptive Engine Transition Program (AETP) was launched to develop and test adaptive cycle engines, with one major potential application being the re-engining of the F-35. Both GE and P&W were awarded contracts to develop class demonstrators, with the designations XA100 and XA101 respectively. In 2017, P&W announced the F135 Growth Option 1.0 and 2.0; Growth Option 1.0, which had finished testing and was production ready in May 2017, was a power module upgrade that offered 6–10% thrust improvement and 5–6% fuel burn reduction. The power module could be retrofitted onto older engines and seamlessly added to future engines at a small cost increase and no impact on delivery. Growth Option 2.0 would be the adaptive cycle XA101. In June 2018, Pratt & Whitney changed its development plan for the F135 and instead offered an adaptive three-stream fan as Growth Option 2.0, separate from the XA101, which would instead have a new engine core.
The F-35 is designed to require less maintenance than earlier stealth aircraft. Some 95% of all field-replaceable parts are "one deep" — that is, nothing else need be removed to reach the desired part; for instance, the ejection seat can be replaced without removing the canopy. The F-35 has a fibermat radar-absorbent material (RAM) baked into the skin, which is more durable, easier to work with, and faster to cure than older RAM coatings; similar coatings are being considered for application on older stealth aircraft such as the F-22. Skin corrosion on the F-22 led the F-35's designers to use a skin gap filler that causes less galvanic corrosion, as well as fewer gaps in the airframe skin needing filler and better drainage. The flight control system uses electro-hydrostatic actuators rather than traditional hydraulic systems; these controls can be powered by lithium-ion batteries in case of emergency. Commonality between the different variants allowed the USMC to create their first aircraft maintenance Field Training Detachment to apply the USAF's lessons to their F-35 operations.
The F-35 was intended to be supported by a computerized maintenance management system named Autonomic Logistics Information System (ALIS). In concept, any aircraft could be serviced at any F-35 maintenance facility, with all parts globally tracked and shared as needed. Due to numerous problems, such as unreliable diagnoses, excessive connectivity requirements, and security vulnerabilities, program officials plan to replace ALIS with the cloud-based Operational Data Integrated Network (ODIN) by 2022.
The first F-35A, AA-1, conducted its engine run in September 2006 and first flew on 15 December 2006. Unlike all subsequent aircraft, AA-1 did not have the weight optimization from SWAT; consequently, it mainly tested subsystems common to subsequent aircraft, such as the propulsion, electrical system, and cockpit displays. This aircraft was retired from flight testing in December 2009 and was used for live-fire testing at NAS China Lake.
The first F-35B, BF-1, flew on 11 June 2008, while the first weight-optimized F-35A and F-35C, AF-1 and CF-1, flew on 14 November 2009 and 6 June 2010 respectively. The F-35B's first hover was on 17 March 2010, followed by its first vertical landing the next day. The F-35 Integrated Test Force (ITF) consisted of 18 aircraft at Edwards Air Force Base and Naval Air Station Patuxent River. Nine aircraft at Edwards (five F-35As, three F-35Bs, and one F-35C) performed flight sciences testing such as F-35A envelope expansion, flight loads, and stores separation, as well as mission systems testing. The other nine aircraft at Patuxent River (five F-35Bs and four F-35Cs) were responsible for F-35B and C envelope expansion and STOVL and CV suitability testing. Additional carrier suitability testing was conducted at Naval Air Warfare Center Aircraft Division at Lakehurst, New Jersey. Two non-flying aircraft of each variant were used to test static loads and fatigue. For testing avionics and mission systems, a modified Boeing 737-300 with a duplication of the cockpit, the Lockheed Martin CATBird, has been used. Field testing of the F-35's sensors was conducted during Exercise Northern Edge 2009 and 2011, serving as significant risk-reduction steps.
Flight tests revealed several serious deficiencies that required costly redesigns, caused delays, and resulted in several fleet-wide groundings. In 2011, the F-35C failed to catch the arresting wire in all eight landing tests; a redesigned tail hook was delivered two years later. By June 2009, many of the initial flight test targets had been accomplished but the program was behind schedule. Software and mission systems were among the biggest sources of delays for the program, with sensor fusion proving especially challenging. In fatigue testing, the F-35B suffered several premature cracks, requiring a redesign of the structure. A third non-flying F-35B is currently planned to test the redesigned structure. The F-35B and C also had problems with the horizontal tails suffering heat damage from prolonged afterburner use. Early flight control laws had problems with "wing drop" and also made the airplane sluggish, with high angles-of-attack tests in 2015 against an F-16 showing a lack of energy.
At-sea testing of the F-35B was first conducted aboard . In October 2011, two F-35Bs conducted three weeks of initial sea trials, called Development Test I. The second F-35B sea trials, Development Test II, began in August 2013, with tests including nighttime operations; two aircraft completed 19 nighttime vertical landings using DAS imagery. The first operational testing involving six F-35Bs was done on the "Wasp" in May 2015. The final Development Test III on involving operations in high sea states was completed in late 2016. A Royal Navy F-35 conducted the first "rolling" landing on board the HMS "Queen Elizabeth" in October 2018.
After the redesigned tail hook arrived, the F-35C's carrier-based Development Test I began in November 2014 aboard and focused on basic day carrier operations and establishing launch and recovery handling procedures. Development Test II, which focused on night operations, weapons loading, and full power launches, took place in October 2015. The final Development Test III was completed in August 2016, and included tests of asymmetric loads and certifying systems for landing qualifications and interoperability. Operational test of the F-35C began in 2018.
The F-35's reliability and availability have fallen short of requirements, especially during early years of testing. The ALIS maintenance and logistics system was plagued by excessive connectivity requirements and faulty diagnoses. In late 2017, the GAO reported the time needed to repair an F-35 part averaged 172 days, which was "twice the program's objective," and that a shortage of spare parts was degrading readiness. In 2019, while individual F-35 units achieved mission capable rates exceeding the target of 80% for short periods during deployed operations, fleet-wide rates remained below target. The fleet availability goal of 65% was also not met, although the trend shows improvement. Gun accuracy of the F-35A remains unacceptable.
Operational test and evaluation (OT&E) with Block 3F, the final configuration for SDD, began in December 2018.
The F-35A and F-35B were cleared for basic flight training in early 2012. However, lack of system maturity at the time led to concerns over safety as well as concerns by the Director of Operational Test & Evaluation (DOT&E) over electronic warfare testing, budget, and concurrency for the Operational Test and Evaluation master plan. On 10 September 2012, despite problems remaining in the operational testing plan, the USAF began an operational utility evaluation (OUE) of the F-35A, including logistical support, maintenance, personnel training, and pilot execution. OUE flights began on 26 October and were completed on 14 November after 24 flights, each pilot having completed six flights. On 16 November 2012, the USMC received the first F-35B at MCAS Yuma, although Marine pilots had several flight restrictions. During the Low Rate Initial Production (LRIP) phase, the three U.S. military services jointly developed tactics and procedures using flight simulators, testing effectiveness, discovering problems and refining design. In January 2013, training began at Eglin Air Force Base with capacity for 100 pilots and 2,100 maintainers at once. On 8 January 2015, RAF Lakenheath in the UK was chosen as the first base in Europe to station two USAF F-35 squadrons, with 48 aircraft adding to the 48th Fighter Wing's existing F-15C and F-15E squadrons.
The USMC declared Initial Operational Capability (IOC) for the F-35B in the Block 2B configuration on 31 July 2015 after operational trials. However, limitations remained in night operations, communications, software and weapons carriage capabilities. USMC F-35Bs participated in their first Red Flag exercise in July 2016 with 67 sorties conducted. USAF F-35A in the Block 3i configuration achieved IOC with the USAF on 2 August 2016, and the F-35C in Block 3F with the USN on 28 February 2019. USAF F-35As conducted their first Red Flag exercise in 2017; system maturity had improved and the aircraft scored a kill ratio of 15:1 against an F-16 aggressor squadron in a high-threat environment.
The F-35's operating cost is higher than those of some older fighters. In fiscal year 2018, the F-35A's cost per flight hour (CPFH) was $44,000, a number that was reduced to $35,000 in 2019. For comparison, in 2015 the CPFH of the A-10 was $17,716; the F-15C, $41,921; and the F-16C, $22,514. Lockheed Martin hopes to reduce it to $25,000 by 2025 through performance-based logistics and other measures.
The USMC plans to disperse its F-35Bs among forward deployed bases to enhance survivability while remaining close to a battlespace, similar to RAF Harrier deployment in the Cold War, which relied on the use of off-base locations that offered short runways, shelter, and concealment. Under this concept, known as distributed STOVL operations (DSO), F-35Bs would operate from temporary bases in allied territory within the range of hostile ballistic and cruise missiles and be moved between temporary locations inside the enemy's 24- to 48-hour targeting cycle. This strategy accounts for the F-35B's short range, the shortest of the three variants, with mobile forward arming and refueling points (M-Farps) accommodating KC-130 and MV-22 Osprey aircraft to rearm and refuel the jets, as well as littoral areas providing sea links to mobile distribution sites. M-Farps can be based on small airfields, multi-lane roads, or damaged main bases, while F-35Bs return to rear-area USAF bases or friendly ships for scheduled maintenance. Helicopter-portable metal planking is needed to protect unprepared roads from the F-35B's engine exhaust; the USMC is studying lighter heat-resistant alternatives.
The first U.S. combat employment began in July 2018 with USMC F-35Bs from the amphibious assault ship , with the first combat strike on 27 September 2018 against a Taliban target in Afghanistan. This was followed by a USAF deployment to Al Dhafra Air Base, UAE on 15 April 2019. On 27 April 2019, USAF F-35As were first used in combat in an airstrike on an Islamic State tunnel network in northern Iraq.
In service, some USAF pilots have nicknamed the aircraft "Panther" in lieu of the official "Lightning II".
The United Kingdom's Royal Air Force and Royal Navy both operate the F-35B, known simply as the Lightning in British service; it has replaced the Harrier GR9, which was retired in 2010, and Tornado GR4, which was retired in 2019. The F-35 is to be Britain's primary strike aircraft for the next three decades. One of the Royal Navy's requirements for the F-35B was a Shipborne Rolling and Vertical Landing (SRVL) mode to increase maximum landing weight by using wing lift during landing. In July 2013, Chief of the Air Staff, Air Chief Marshal Sir Stephen Dalton announced that No. 617 (The Dambusters) Squadron would be the RAF's first operational F-35 squadron. The second operational squadron will be the Fleet Air Arm's 809 Naval Air Squadron in April 2023.
No. 17 (Reserve) Test and Evaluation Squadron (TES) stood up on 12 April 2013 as the Operational Evaluation Unit for the Lightning, becoming the first British squadron to operate the type. By June 2013, the RAF had received three F-35s of the 48 on order, all initially based at Eglin Air Force Base. In June 2015, the F-35B undertook its first launches from a ski-jump at NAS Patuxent River. When operated at sea, British F-35Bs will use ships fitted with ski-jumps, as will the Italian Navy's. British F-35Bs are not intended to receive the Brimstone 2 missile. On 5 July 2017, it was announced the second UK-based RAF squadron would be No. 207 Squadron, which reformed on 1 August 2019 as the Lightning Operational Conversion Unit. No. 617 Squadron reformed on 18 April 2018 during a ceremony in Washington, D.C., US, becoming the first RAF front-line squadron to operate the type; it received its first four F-35Bs on 6 June, flying from MCAS Beaufort to RAF Marham. Both No. 617 Squadron and its F-35s were declared combat ready on 10 January 2019.
In April 2019, No. 617 Squadron deployed to RAF Akrotiri, Cyprus, the type's first overseas deployment. On 25 June 2019, the first combat use of an RAF F-35B was reportedly undertaken as armed reconnaissance flights searching for Islamic State targets in Iraq and Syria. In October 2019, "the Dambusters" and No. 17 TES F-35s were embarked on HMS "Queen Elizabeth" for the first time. No. 617 Squadron departed RAF Marham on 22 January 2020 for their first Exercise Red Flag with the Lightning.
The Israeli Air Force declared the F-35 operationally capable on 6 December 2017. According to Kuwaiti newspaper "Al Jarida", in July 2018, a test mission of at least three IAF F-35s flew to Iran's capital Tehran and back from Tel Aviv. While the mission was publicly unconfirmed, regional leaders acted on the report; Iran's supreme leader Ali Khamenei reportedly fired the air force chief and commander of Iran's Revolutionary Guard Corps over the mission.
On 22 May 2018, Israeli Air Force chief Amikam Norkin said that the service had employed their F-35Is in two attacks on two battle fronts, marking the first combat operation of an F-35 by any country. Norkin said it had been flown "all over the Middle East", and showed photos of an F-35I flying over Beirut in daylight. In July 2019, Israel reportedly expanded its strikes against Iranian missile shipments; IAF F-35Is allegedly struck Iranian targets in Iraq twice.
The F-35A is the conventional takeoff and landing (CTOL) variant intended for the USAF and other air forces. It is the smallest, lightest version and capable of 9 g, the highest of all variants.
Although the F-35A currently conducts aerial refueling via boom and receptacle method, the aircraft can be modified for probe-and-drogue refueling if needed by the customer. A drag chute pod can be installed on the F-35A, with the Royal Norwegian Air Force being the first operator to adopt it.
The F-35B is the short takeoff and vertical landing (STOVL) variant of the aircraft. Similar in size to the A variant, the B sacrifices about a third of the A variant's fuel volume to accommodate the SDLF. This variant is limited to 7 g. Unlike other variants, the F-35B has no landing hook. The "STOVL/HOOK" control instead engages conversion between normal and vertical flight.
The F-35C variant is designed for catapult-assisted take-off and arrested recovery operations from aircraft carriers. Compared to the F-35A, the F-35C features larger wings with foldable wingtip sections, larger wing and tail control surfaces for improved low-speed control, stronger landing gear for the stresses of carrier arrested landings, a twin-wheel nose gear, and a stronger tailhook for use with carrier arrestor cables. The larger wing area allows for decreased landing speed while increasing both range and payload. The F-35C is limited to 7.5 g.
A study has been conducted for a possible upgrade of the F-35A to be fielded by the 2035 target date of the USAF's Future Operating Concept.
The F-35I "Adir" (, meaning "Awesome", or "Mighty One") is an F-35A with unique Israeli modifications. The US initially refused to allow such changes before permitting Israel to integrate its own electronic warfare systems, including sensors and countermeasures. The main computer has a plug-and-play function for add-on systems; proposals include an external jamming pod, and new Israeli air-to-air missiles and guided bombs in the internal weapon bays. A senior IAF official said that the F-35's stealth may be partly overcome within 10 years despite a 30 to 40 year service life, thus Israel's insistence on using their own electronic warfare systems. Israel Aerospace Industries (IAI) has considered a two-seat F-35 concept; an IAI executive noted: "There is a known demand for two seats not only from Israel but from other air forces". IAI plans to produce conformal fuel tanks.
The Canadian CF-35 is a proposed variant that would differ from the F-35A through the addition of a drogue parachute and may include an F-35B/C-style refueling probe. In 2012, it was revealed that the CF-35 would employ the same boom refueling system as the F-35A. One alternative proposal would have been the adoption of the F-35C for its probe refueling and lower landing speed; however, the Parliamentary Budget Officer's report cited the F-35C's limited performance and payload as being too high a price to pay. Following the 2015 Federal Election the Liberal Party, whose campaign had included a pledge to cancel the F-35 procurement, formed a new government and commenced an open competition to replace the existing CF-18 Hornet.
On 23 June 2014, an F-35A's engine caught fire at Eglin Air Force Base. The pilot escaped unharmed, while the aircraft sustained an estimated US$50 million of damages. The accident caused all flights to be halted on 3 July. The fleet returned to flight on 15 July with flight envelope restrictions. In June 2015, the USAF Air Education and Training Command (AETC) issued its official report, which blamed the failure on the third stage rotor of the engine's fan module, pieces of which cut through the fan case and upper fuselage. Pratt & Whitney applied an extended "rub-in" to increase the gap between the second stator and the third rotor integral arm seal, as well as design alterations to pre-trench the stator by early 2016.
The first crash occurred on 28 September 2018 involving a USMC F-35B near Marine Corps Air Station Beaufort, South Carolina; the pilot ejected safely. The cause of the crash was attributed to a faulty fuel tube; all F-35s were grounded on 11 October pending a fleet-wide inspection of the tubes. The next day, most USAF and USN F-35s returned to flight status following the inspection.
On 9 April 2019, a Japan Air Self-Defense Force F-35A attached to Misawa Air Base disappeared from radar about 84 miles (135 km) east of the Aomori Prefecture during a training mission over the Pacific Ocean. The pilot, Major Akinori Hosomi, had radioed his intention to abort the drill before disappearing. Both US and Japanese Navy assets searched for the missing aircraft and pilot, finding debris on the water that confirmed its crash; Hosomi's remains were recovered in June. In response, Japan grounded its 12 F-35As. There was speculation that China or Russia might attempt to salvage it; the Japanese Defense Ministry announced there had been no "reported activities" from either country. The F-35 reportedly did not send a distress signal nor did the pilot attempt any recovery maneuvers as the aircraft descended at a rapid rate. The accident report attributed the cause to the pilot's spatial disorientation.
On 19 May 2020, a USAF F-35A from the 58th Fighter Squadron crashed while landing at Eglin Air Force Base, Florida. The pilot ejected and was in stable condition.
Food additive
Food additives are substances added to food to preserve flavor or enhance its taste, appearance, or other qualities. Some additives have been used for centuries; for example, preserving food by pickling (with vinegar), salting (as with bacon), preserving sweets, or using sulfur dioxide (as with wines). With the advent of processed foods in the second half of the twentieth century, many more additives have been introduced, of both natural and artificial origin. Food additives also include substances that may be introduced to food indirectly (called "indirect additives") in the manufacturing process, through packaging, or during storage or transport.
To regulate these additives and inform consumers, each additive is assigned a unique number called an "E number", which is used in Europe for all approved additives. This numbering scheme has now been adopted and extended by the "Codex Alimentarius" Commission to internationally identify all additives, regardless of whether they are approved for use.
E numbers are all prefixed by "E", but countries outside Europe use only the number, whether the additive is approved in Europe or not.
For example, acetic acid is written as E260 on products sold in Europe, but is simply known as additive 260 in some countries. Additive 103, alkannin, is not approved for use in Europe so does not have an E number, although it is approved for use in Australia and New Zealand. Since 1987, Australia has had an approved system of labelling for additives in packaged foods. Each food additive has to be named or numbered. The numbers are the same as in Europe, but without the prefix "E".
The United States Food and Drug Administration (FDA) lists these items as "generally recognized as safe" (GRAS); they are listed under both their Chemical Abstracts Service number and FDA regulation under the United States Code of Federal Regulations.
Food additives can be divided into several groups, although there is some overlap because some additives exert more than one effect. For example, salt is both a preservative and a flavoring.
With the increasing use of processed foods since the 19th century, food additives are more widely used. Many countries regulate their use. For example, boric acid was widely used as a food preservative from the 1870s to the 1920s, but was banned after World War I due to its toxicity, as demonstrated in animal and human studies. During World War II, the urgent need for cheap, available food preservatives led to it being used again, but it was finally banned in the 1950s. Such cases led to a general mistrust of food additives, and an application of the precautionary principle led to the conclusion that only additives that are known to be safe should be used in foods. In the United States, this led to the adoption of the Delaney clause, an amendment to the Federal Food, Drug, and Cosmetic Act of 1938, stating that no carcinogenic substances may be used as food additives. However, after the banning of cyclamates in the United States and Britain in 1969, saccharin, the only remaining legal artificial sweetener at the time, was found to cause cancer in rats. Widespread public outcry in the United States, partly communicated to Congress by postage-paid postcards supplied in the packaging of sweetened soft drinks, led to the retention of saccharin, despite its violation of the Delaney clause. However, in 2000, saccharin was found to be carcinogenic in rats due only to their unique urine chemistry.
Periodically, concerns have been expressed about a linkage between additives and hyperactivity; however, "no clear evidence of ADHD was provided".
In 2007, Food Standards Australia New Zealand published an official shoppers' guide addressing concerns about food additives and their labeling. In the EU it can take 10 years or more to obtain approval for a new food additive. This includes five years of safety testing, followed by two years for evaluation by the European Food Safety Authority and another three years before the additive receives an EU-wide approval for use in every country in the European Union. Apart from testing and analyzing food products during the whole production process to ensure safety and compliance with regulatory standards, Trading Standards officers (in the UK) protect the public from any illegal use or potentially dangerous mis-use of food additives by performing random testing of food products.
There has been significant controversy associated with the risks and benefits of food additives. Natural additives may be similarly harmful or be the cause of allergic reactions in certain individuals. For example, safrole was used to flavor root beer until it was shown to be carcinogenic. Due to the application of the Delaney clause, it may not be added to foods, even though it occurs naturally in sassafras and sweet basil.
A subset of food additives, micronutrients added in food fortification processes preserve nutrient value by providing vitamins and minerals to foods such as flour, cereal, margarine and milk which normally would not retain such high levels. Added ingredients, such as air, bacteria, fungi, and yeast, also contribute manufacturing and flavor qualities, and reduce spoilage.
ISO has published a series of standards regarding the topic and these standards are covered by ICS 67.220.
Fridtjof Nansen
Fridtjof Wedel-Jarlsberg Nansen (; 10 October 1861 – 13 May 1930) was a Norwegian explorer, scientist, diplomat, humanitarian and Nobel Peace Prize laureate. In his youth he was a champion skier and ice skater. He led the team that made the first crossing of the Greenland interior in 1888, traversing the island on cross-country skis. He won international fame after reaching a record northern latitude of 86°14′ during his "Fram" expedition of 1893–1896. Although he retired from exploration after his return to Norway, his techniques of polar travel and his innovations in equipment and clothing influenced a generation of subsequent Arctic and Antarctic expeditions.
Nansen studied zoology at the Royal Frederick University in Christiania and later worked as a curator at the University Museum of Bergen where his research on the central nervous system of lower marine creatures earned him a doctorate and helped establish neuron doctrine. Later, neuroscientist Santiago Ramón y Cajal won the 1906 Nobel Prize in Medicine for his research on the same subject. After 1896 his main scientific interest switched to oceanography; in the course of his research he made many scientific cruises, mainly in the North Atlantic, and contributed to the development of modern oceanographic equipment.
As one of his country's leading citizens, in 1905 Nansen spoke out for the ending of Norway's union with Sweden, and was instrumental in persuading Prince Carl of Denmark to accept the throne of the newly independent Norway. Between 1906 and 1908 he served as the Norwegian representative in London, where he helped negotiate the Integrity Treaty that guaranteed Norway's independent status.
In the final decade of his life, Nansen devoted himself primarily to the League of Nations, following his appointment in 1921 as the League's High Commissioner for Refugees. In 1922 he was awarded the Nobel Peace Prize for his work on behalf of the displaced victims of the First World War and related conflicts. Among the initiatives he introduced was the "Nansen passport" for stateless persons, a certificate that used to be recognised by more than 50 countries. He worked on behalf of refugees until his sudden death in 1930, after which the League established the Nansen International Office for Refugees to ensure that his work continued. This office received the Nobel Peace Prize in 1938. His name is commemorated in numerous geographical features, particularly in the polar regions.
The Nansen family originated in Denmark. Hans Nansen (1598–1667), a trader, was an early explorer of the White Sea region of the Arctic Ocean. In later life he settled in Copenhagen, becoming the city's "borgmester" in 1654. Later generations of the family lived in Copenhagen until the mid-18th century, when Ancher Antoni Nansen moved to Norway (then in a union with Denmark). His son, Hans Leierdahl Nansen (1764–1821), was a magistrate first in the Trondheim district, later in Jæren. After Norway's separation from Denmark in 1814, he entered national political life as the representative for Stavanger in the first Storting, and became a strong advocate of union with Sweden. After suffering a paralytic stroke in 1821 Hans Leierdahl Nansen died, leaving a four-year-old son, Baldur Fridtjof Nansen, the explorer's father.
Baldur was a lawyer without ambitions for public life, who became Reporter to the Supreme Court of Norway. He married twice, the second time to Adelaide Johanne Thekla Isidore Bølling Wedel-Jarlsberg from Bærum, a niece of Herman Wedel-Jarlsberg who had helped frame the Norwegian constitution of 1814 and was later the Swedish king's Norwegian Viceroy. Baldur and Adelaide settled at Store Frøen, an estate at Aker, a few kilometres north of Norway's capital city, Christiania (since renamed Oslo). The couple had three children; the first died in infancy, the second, born 10 October 1861, was Fridtjof Wedel-Jarlsberg Nansen.
Store Frøen's rural surroundings shaped the nature of Nansen's childhood. In the short summers the main activities were swimming and fishing, while in the autumn the chief pastime was hunting for game in the forests. The long winter months were devoted mainly to skiing, which Nansen began to practice at the age of two, on improvised skis. At the age of 10 he defied his parents and attempted the ski jump at the nearby Huseby installation. This exploit had near-disastrous consequences, as on landing the skis dug deep into the snow, pitching the boy forward: "I, head first, described a fine arc in the air ... [W]hen I came down again I bored into the snow up to my waist. The boys thought I had broken my neck, but as soon as they saw there was life in me ... a shout of mocking laughter went up." Nansen's enthusiasm for skiing was undiminished, though as he records, his efforts were overshadowed by those of the skiers from the mountainous region of Telemark, where a new style of skiing was being developed. "I saw this was the only way", wrote Nansen later.
At school, Nansen worked adequately without showing any particular aptitude. Studies took second place to sports, or to expeditions into the forests where he would live "like Robinson Crusoe" for weeks at a time. Through such experiences Nansen developed a marked degree of self-reliance. He became an accomplished skier and a highly proficient skater. Life was disrupted when, in the summer of 1877, Adelaide Nansen died suddenly. Distressed, Baldur Nansen sold the Store Frøen property and moved with his two sons to Christiania. Nansen's sporting prowess continued to develop; at 18 he broke the world one-mile (1.6 km) skating record, and in the following year won the national cross-country skiing championship, a feat he would repeat on 11 subsequent occasions.
In 1880 Nansen passed his university entrance examination, the "examen artium". He decided to study zoology, claiming later that he chose the subject because he thought it offered the chance of a life in the open air. He began his studies at the Royal Frederick University in Christiania early in 1881.
Early in 1882 Nansen took "...the first fatal step that led me astray from the quiet life of science." Professor Robert Collett of the university's zoology department proposed that Nansen take a sea voyage, to study Arctic zoology at first hand. Nansen was enthusiastic, and made arrangements through a recent acquaintance, Captain Axel Krefting, commander of the sealer "Viking". The voyage began on 11 March 1882 and extended over the following five months. In the weeks before sealing started, Nansen was able to concentrate on scientific studies. From water samples he showed that, contrary to previous assumption, sea ice forms on the surface of the water rather than below. His readings also demonstrated that the Gulf Stream flows beneath a cold layer of surface water. Through the spring and early summer "Viking" roamed between Greenland and Spitsbergen in search of seal herds. Nansen became an expert marksman, and on one day proudly recorded that his team had shot 200 seals. In July, "Viking" became trapped in the ice close to an unexplored section of the Greenland coast; Nansen longed to go ashore, but this was impossible. However, he began to develop the idea that the Greenland icecap might be explored, or even crossed. On 17 July the ship broke free from the ice, and early in August was back in Norwegian waters.
Nansen did not resume formal studies at the university. Instead, on Collett's recommendation, he accepted a post as curator in the zoological department of the Bergen Museum. He was to spend the next six years of his life there—apart from a six-month sabbatical tour of Europe—working and studying with leading figures such as Gerhard Armauer Hansen, the discoverer of the leprosy bacillus, and Daniel Cornelius Danielssen, the museum's director who had turned it from a backwater collection into a centre of scientific research and education. Nansen's chosen area of study was the then relatively unexplored field of neuroanatomy, specifically the central nervous system of lower marine creatures. Before leaving for his sabbatical in February 1886 he published a paper summarising his research to date, in which he stated that "anastomoses or unions between the different ganglion cells" could not be demonstrated with certainty. This unorthodox view was confirmed by the simultaneous researches of the embryologist Wilhelm His and the psychiatrist August Forel. Nansen is considered the first Norwegian defender of the neuron theory, originally proposed by Santiago Ramón y Cajal. His subsequent paper, "The Structure and Combination of Histological Elements of the Central Nervous System", published in 1887, became his doctoral thesis.
The idea of an expedition across the Greenland icecap grew in Nansen's mind throughout his Bergen years. In 1887, after the submission of his doctoral thesis, he finally began organising this project. Before then, the two most significant penetrations of the Greenland interior had been those of Adolf Erik Nordenskiöld in 1883, and Robert Peary in 1886. Both had set out from Disko Bay on the western coast, and had travelled some distance eastward before turning back. By contrast, Nansen proposed to travel from east to west, ending rather than beginning his trek at Disko Bay. A party setting out from the inhabited west coast would, he reasoned, have to make a return trip, as no ship could be certain of reaching the dangerous east coast and picking them up. By starting from the east—assuming that a landing could be made there—Nansen's would be a one-way journey towards a populated area. The party would have no line of retreat to a safe base; the only way to go would be forward, a situation that fitted Nansen's philosophy completely.
Nansen rejected the complex organisation and heavy manpower of other Arctic ventures, and instead planned his expedition for a small party of six. Supplies would be manhauled on specially designed lightweight sledges. Much of the equipment, including sleeping bags, clothing and cooking stoves, also needed to be designed from scratch. These plans received a generally poor reception in the press; one critic had no doubt that "if [the] scheme be attempted in its present form ... the chances are ten to one that he will ... uselessly throw his own and perhaps others' lives away". The Norwegian parliament refused to provide financial support, believing that such a potentially risky undertaking should not be encouraged. The project was eventually launched with a donation from a Danish businessman, Augustin Gamél; the rest came mainly from small contributions from Nansen's countrymen, through a fundraising effort organised by students at the university.
Despite the adverse publicity, Nansen received numerous applications from would-be adventurers. He wanted expert skiers, and attempted to recruit from the skiers of Telemark, but his approaches were rebuffed. Nordenskiöld had advised Nansen that Sami people from Finnmark, in the far north of Norway, were expert snow travellers, so Nansen recruited a pair, Samuel Balto and Ole Nielsen Ravna. The remaining places went to Otto Sverdrup, a former sea-captain who had more recently worked as a forester; Oluf Christian Dietrichson, an army officer, and Kristian Kristiansen, an acquaintance of Sverdrup's. All had experience of outdoor life in extreme conditions, and were experienced skiers. Just before the party's departure, Nansen attended a formal examination at the university, which had agreed to receive his doctoral thesis. In accordance with custom he was required to defend his work before appointed examiners acting as "devil's advocates". He left before knowing the outcome of this process.
The sealer "Jason" picked up Nansen's party on 3 June 1888 from the Icelandic port of Ísafjörður. They sighted the Greenland coast a week later, but thick pack ice hindered progress. With the coast still some distance off, Nansen decided to launch the small boats. They were within sight of Sermilik Fjord on 17 July; Nansen believed it would offer a route up the icecap.
The expedition left "Jason" "in good spirits and with the highest hopes of a fortunate result." Days of extreme frustration followed as they drifted south. Weather and sea conditions prevented them from reaching the shore. They spent most of the time camping on the ice itself—it was too dangerous to launch the boats.
By 29 July, they found themselves south of the point where they had left the ship. That day they finally reached land, but they were too far south to begin the crossing. After a brief rest, Nansen ordered the team back into the boats to begin rowing north. The party battled northward along the coast through the ice floes for the next 12 days. On the first day they encountered a large Eskimo encampment near Cape Steen Bille, and occasional contacts with the nomadic native population continued as the journey progressed.
The party reached Umivik Bay on 11 August. Nansen decided they needed to begin the crossing; although they were still far south of his intended starting place, the season was becoming too advanced. After they landed at Umivik, they spent the next four days preparing for their journey. They set out on the evening of 15 August, heading north-west towards Christianhaab on the western shore of Disko Bay.
Over the next few days, the party struggled to ascend. The inland ice had a treacherous surface with many hidden crevasses, and the weather was bad; at one point progress stopped for three days because of violent storms and continuous rain. The last ship was due to leave Christianhaab by mid-September. On 26 August, Nansen concluded they would not be able to reach it in time, and ordered a change of course due west, towards Godthaab, a shorter journey. The rest of the party, according to Nansen, "hailed the change of plan with acclamation."
They continued climbing until 11 September, when they reached the summit of the icecap; night-time temperatures there fell far below freezing. From then on the downward slope made travelling easier, yet the terrain was rugged and the weather remained hostile. Progress was slow: fresh snowfalls made dragging the sledges like pulling them through sand.
On 26 September, they battled their way down the edge of a fjord westward towards Godthaab. Sverdrup constructed a makeshift boat out of parts of the sledges, willows, and their tent. Three days later, Nansen and Sverdrup began the last stage of the journey; rowing down the fjord.
On 3 October, they reached Godthaab, where the Danish town representative greeted them. He first informed Nansen that he had secured his doctorate, a matter that "could not have been more remote from [Nansen's] thoughts at that moment." The team had accomplished their crossing in 49 days. Throughout the journey, they had maintained meteorological, geographical and other records relating to the previously unexplored interior.
The rest of the team arrived in Godthaab on 12 October. Nansen soon learned no ship was likely to call at Godthaab until the following spring. Still, they were able to send letters back to Norway via a boat leaving Ivigtut at the end of October. He and his party spent the next seven months in Greenland. On 15 April 1889, the Danish ship "Hvidbjørnen" finally entered the harbour. Nansen recorded: "It was not without sorrow that we left this place and these people, among whom we had enjoyed ourselves so well."
"Hvidbjørnen" reached Copenhagen on 21 May 1889. News of the crossing had preceded its arrival, and Nansen and his companions were feted as heroes. This welcome, however, was dwarfed by the reception in Christiania a week later, when crowds of between thirty and forty thousand—a third of the city's population—thronged the streets as the party made its way to the first of a series of receptions. The interest and enthusiasm generated by the expedition's achievement led directly to the formation that year of the Norwegian Geographical Society.
Nansen accepted the position of curator of the Royal Frederick University's zoology collection, a post which carried a salary but involved no duties; the university was satisfied by the association with the explorer's name. Nansen's main task in the following weeks was writing his account of the expedition, but he found time late in June to visit London, where he met the Prince of Wales (the future Edward VII), and addressed a meeting of the Royal Geographical Society (RGS).
The RGS president, Sir Mountstuart Elphinstone Grant Duff, said that Nansen had claimed "the foremost place amongst northern travellers", and later awarded him the Society's prestigious Founder's Medal. This was one of many honours Nansen received from institutions all over Europe. He was invited by a group of Australians to lead an expedition to Antarctica, but declined, believing that Norway's interests would be better served by a North Pole conquest.
On 11 August 1889 Nansen announced his engagement to Eva Sars, the daughter of Michael Sars, a zoology professor who had died when Eva was 11 years old. The couple had met some years previously, at the skiing resort of Frognerseteren, where Nansen recalled seeing "two feet sticking out of the snow". Eva was three years older than Nansen, and despite the evidence of this first meeting, was an accomplished skier. She was also a celebrated classical singer who had been coached in Berlin by Désirée Artôt, one-time paramour of Tchaikovsky. The engagement surprised many; since Nansen had previously expressed himself forcefully against the institution of marriage, Otto Sverdrup assumed he had read the message wrongly. The wedding took place on 6 September 1889, less than a month after the engagement.
Nansen first began to consider the possibility of reaching the North Pole after reading meteorologist Henrik Mohn's theory on polar drift in 1884. Artefacts found on the coast of Greenland were identified as having come from the "Jeannette" expedition; the ship had been crushed and sunk off the Siberian coast, on the opposite side of the Arctic Ocean, in June 1881. Mohn surmised that the location of the artefacts indicated the existence of an ocean current flowing from east to west all the way across the polar sea, and possibly over the pole itself.
The idea remained fixed in Nansen's mind for the next couple of years. After his triumphant return from Greenland he developed a detailed plan for a polar venture, and made his idea public in February 1890, at a meeting of the newly formed Norwegian Geographical Society. Previous expeditions, he argued, had approached the North Pole from the west and failed because they were working against the prevailing east-west current; the secret was to work with the current.
A workable plan would require a sturdy and manoeuvrable small ship, capable of carrying fuel and provisions for twelve men for five years. This ship would enter the ice pack close to the approximate location of "Jeannette's" sinking, drifting west with the current towards the pole and beyond it—eventually reaching the sea between Greenland and Spitsbergen.
Experienced polar explorers were dismissive: Adolphus Greely called the idea "an illogical scheme of self-destruction". Equally dismissive were Sir Allen Young, a veteran of the searches for Franklin's lost expedition, and Sir Joseph Dalton Hooker, who had sailed to the Antarctic on the Ross expedition. Nansen still managed to secure a grant from the Norwegian parliament after an impassioned speech. Additional funding was secured through a national appeal for private donations.
Nansen chose naval engineer Colin Archer to design and build a ship. Archer designed an extraordinarily sturdy vessel with an intricate system of crossbeams and braces of the toughest oak timbers. Its rounded hull was designed to push the ship upwards when beset by pack ice. Speed and manoeuvrability were to be secondary to its ability as a safe and warm shelter during their predicted confinement.
The ship's low length-to-beam ratio gave it a stubby appearance, justified by Archer: "A ship that is built with exclusive regard to its suitability for [Nansen's] object must differ essentially from any known vessel." It was christened "Fram" and launched on 6 October 1892.
Nansen selected a party of twelve from thousands of applicants. Otto Sverdrup, who had taken part in Nansen's earlier Greenland expedition, was appointed as the expedition's second-in-command. Competition was so fierce that army lieutenant and dog-driving expert Hjalmar Johansen signed on as ship's stoker, the only position still available.
"Fram" left Christiania on 24 June 1893, cheered on by thousands of well-wishers. After a slow journey around the coast, the final port of call was Vardø, in the far north-east of Norway. "Fram" left Vardø on 21 July, following the North-East Passage route pioneered by Nordenskiöld in 1878–1879, along the northern coast of Siberia. Progress was impeded by fog and ice conditions in the mainly uncharted seas.
The crew also experienced the dead water phenomenon, where a ship's forward progress is impeded by friction caused by a layer of fresh water lying on top of heavier salt water. Nevertheless, Cape Chelyuskin, the most northerly point of the Eurasian continental mass, was passed on 10 September.
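The dead-water drag is usually attributed to internal waves excited at the interface between the fresh surface layer and the denser salt water beneath: a ship moving at about the internal-wave speed loses energy into waves it cannot see. A rough feel for why this bites slow ships comes from the standard two-layer long-wave formula, which gives an interfacial wave speed far below the ordinary surface-wave speed. The layer depths and densities below are illustrative values only, not measurements from the voyage:

```python
import math

def internal_wave_speed(h1, h2, rho1, rho2, g=9.81):
    """Long-wave speed at the interface of a two-layer fluid:
    c = sqrt(g' * h1*h2 / (h1 + h2)), with reduced gravity
    g' = g * (rho2 - rho1) / rho2.  Layer 1 (fresh) lies on layer 2 (salt)."""
    g_reduced = g * (rho2 - rho1) / rho2
    return math.sqrt(g_reduced * h1 * h2 / (h1 + h2))

# Illustrative: 5 m of nearly fresh meltwater over 50 m of salt water.
c_internal = internal_wave_speed(h1=5.0, h2=50.0, rho1=1000.0, rho2=1025.0)
c_surface = math.sqrt(9.81 * 55.0)  # long surface-wave speed, same total depth

print(round(c_internal, 2))  # about 1 m/s
print(round(c_surface, 1))   # over 20 m/s
```

A ship creeping along at around one metre per second can thus resonate with the interfacial waves while the sea surface stays glassy, which is consistent with the mysterious drag the crew experienced.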
Heavy pack ice was sighted ten days later at around latitude 78°N, as "Fram" approached the area in which "Jeannette" had been crushed. Nansen followed the line of the pack northwards before ordering the engines stopped and the rudder raised. From this point "Fram's" drift began. The first weeks in the ice were frustrating, as the drift moved unpredictably; sometimes north, sometimes south.
By 19 November, "Fram's" latitude was south of that at which she had entered the ice. Only after the turn of the year, in January 1894, did the northerly direction become generally settled; the 80°N mark was finally passed on 22 March. Nansen calculated that, at this rate, it might take the ship five years to reach the pole. As the ship's northerly progress continued at a rate rarely above a kilometre and a half per day, Nansen began privately to consider a new plan—a dog sledge journey towards the pole. With this in mind, he began to practise dog-driving, making many experimental journeys over the ice.
In November, Nansen announced his plan: when the ship passed latitude 83°N, he and Hjalmar Johansen would leave the ship with the dogs and make for the pole while "Fram", under Sverdrup, continued its drift until it emerged from the ice in the North Atlantic. After reaching the pole, Nansen and Johansen would make for the nearest known land, the recently discovered and sketchily mapped Franz Josef Land. They would then cross to Spitsbergen where they would find a ship to take them home.
The crew spent the rest of the winter of 1894 preparing clothing and equipment for the forthcoming sledge journey. Kayaks were built, to be carried on the sledges until needed for the crossing of open water. Preparations were interrupted early in January when violent tremors shook the ship. The crew disembarked, fearing the vessel would be crushed, but "Fram" proved herself equal to the danger. On 8 January 1895, the ship's position was 83°34′N, surpassing Greely's previous record of 83°24′N.
With the ship's latitude at 84°4′N and after two false starts, Nansen and Johansen began their journey on 14 March 1895. Nansen allowed 50 days to reach the pole. After a week of travel, a sextant observation indicated that they were ahead of schedule. However, uneven surfaces made skiing more difficult, and their speeds slowed. They also realised they were marching against a southerly drift, and that distances travelled did not necessarily equate to distance progressed.
On 3 April, Nansen began to doubt whether the pole was attainable. Unless their speed improved, their food would not last them to the pole and back to Franz Josef Land. He confided in his diary: "I have become more and more convinced we ought to turn before time." Four days later, after making camp, he observed the way ahead was "... a veritable chaos of iceblocks stretching as far as the horizon." Nansen recorded their latitude as 86°13′6″N—almost three degrees beyond the previous record—and decided to turn around and head back south.
At first Nansen and Johansen made good progress south, but they suffered a serious setback on 13 April when, in their eagerness to break camp, they forgot to wind their chronometers, making it impossible to calculate their longitude and so to navigate accurately to Franz Josef Land. They restarted the watches based on Nansen's guess that they were at 86°E, but from then on they were uncertain of their true position. The tracks of an Arctic fox, observed towards the end of April, were the first trace of a living creature other than their dogs since they had left "Fram". They soon saw bear tracks, and by the end of May saw evidence of nearby seals, gulls and whales.
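The reason a stopped chronometer was so damaging is that longitude was found by comparing local solar time (from a sextant sight) with a reference time kept by the watch: the Earth turns 15° per hour, so every minute of unknown clock error is a quarter-degree of unknown longitude. A back-of-envelope sketch; the ten-minute error below is an invented example, not a figure from Nansen's records:

```python
import math

# The Earth rotates 360 degrees in 24 hours = 15 degrees per hour.
DEG_PER_HOUR = 360 / 24

def longitude_error_deg(clock_error_minutes):
    """Longitude uncertainty (degrees) produced by an unknown clock error."""
    return DEG_PER_HOUR * clock_error_minutes / 60

def east_west_error_km(clock_error_minutes, latitude_deg):
    """The same uncertainty as ground distance; a degree of longitude
    shrinks with the cosine of latitude as meridians converge poleward."""
    km_per_deg_at_equator = 111.32
    return (longitude_error_deg(clock_error_minutes)
            * km_per_deg_at_equator * math.cos(math.radians(latitude_deg)))

# Hypothetical: a chronometer that stood still for ten minutes.
print(round(longitude_error_deg(10), 2))       # 2.5 degrees of longitude
print(round(east_west_error_km(10, 86.0), 1))  # about 19 km east-west at 86 N
```

Near the pole the ground error is modest because meridians converge, but the same clock error grows steadily as a party works south towards a small target such as Franz Josef Land.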
On 31 May, Nansen calculated they were only a short distance from Cape Fligely, Franz Josef Land's northernmost point. Travel conditions worsened as increasingly warmer weather caused the ice to break up. On 22 June, the pair decided to rest on a stable ice floe while they repaired their equipment and gathered strength for the next stage of their journey. They remained on the floe for a month.
The day after leaving this camp, Nansen recorded: "At last the marvel has come to pass—land, land, and after we had almost given up our belief in it!" Whether this still-distant land was Franz Josef Land or a new discovery they did not know—they had only a rough sketch map to guide them. The edge of the pack ice was reached on 6 August, and there they shot the last of their dogs—since 24 April they had regularly killed the weakest to feed the others. The two kayaks were lashed together, a sail was raised, and they made for the land.
It soon became clear this land was part of an archipelago. As they moved southwards, Nansen tentatively identified a headland as Cape Felder on the western edge of Franz Josef Land. Towards the end of August, as the weather grew colder and travel became increasingly difficult, Nansen decided to camp for the winter. In a sheltered cove, with stones and moss for building materials, the pair erected a hut which was to be their home for the next eight months. With ready supplies of bear, walrus and seal to keep their larder stocked, their principal enemy was not hunger but inactivity. After muted Christmas and New Year celebrations, in slowly improving weather, they began to prepare to leave their refuge, but it was 19 May 1896 before they were able to resume their journey.
On 17 June, during a stop for repairs after the kayaks had been attacked by a walrus, Nansen thought he heard a dog barking as well as human voices. He went to investigate, and a few minutes later saw the figure of a man approaching. It was the British explorer Frederick Jackson, who was leading an expedition to Franz Josef Land and was camped at Cape Flora on nearby Northbrook Island. The two were equally astonished by their encounter; after some awkward hesitation Jackson asked: "You are Nansen, aren't you?", and received the reply "Yes, I am Nansen."
Johansen was picked up and the pair were taken to Cape Flora where, during the following weeks, they recuperated from their ordeal. Nansen later wrote that he could "still scarcely grasp" their sudden change of fortune; had it not been for the walrus attack that caused the delay, the two parties might have been unaware of each other's existence.
On 7 August, Nansen and Johansen boarded Jackson's supply ship "Windward", and sailed for Vardø where they arrived on the 13th. They were greeted by Henrik Mohn, the originator of the polar drift theory, who was in the town by chance. The world was quickly informed by telegram of Nansen's safe return, but as yet there was no news of "Fram".
Taking the weekly mail steamer south, Nansen and Johansen reached Hammerfest on 18 August, where they learned that "Fram" had been sighted. She had emerged from the ice north and west of Spitsbergen, as Nansen had predicted, and was now on her way to Tromsø. She had not passed over the pole, nor exceeded Nansen's northern mark. Without delay Nansen and Johansen sailed for Tromsø, where they were reunited with their comrades.
The homeward voyage to Christiania was a series of triumphant receptions at every port. On 9 September, "Fram" was escorted into Christiania's harbour and welcomed by the largest crowds the city had ever seen. The crew were received by King Oscar, and Nansen, reunited with his family, remained at the palace for several days as a special guest. Tributes arrived from all over the world; typical was that from the British mountaineer Edward Whymper, who wrote that Nansen had made "almost as great an advance as has been accomplished by all other voyages in the nineteenth century put together".
Nansen's first task on his return was to write his account of the voyage. This he did remarkably quickly, producing 300,000 words of Norwegian text by November 1896; the English translation, titled "Farthest North", was ready in January 1897. The book was an instant success, and secured Nansen's long-term financial future. Nansen included without comment the one significant adverse criticism of his conduct, that of Greely, who had written in "Harper's Weekly" on Nansen's decision to leave "Fram" and strike for the pole: "It passes comprehension how Nansen could have thus deviated from the most sacred duty devolving on the commander of a naval expedition."
During the 20 years following his return from the Arctic, Nansen devoted most of his energies to scientific work. In 1897 he accepted a professorship in zoology at the Royal Frederick University, which gave him a base from which he could tackle the major task of editing the reports of the scientific results of the "Fram" expedition. This was a much more arduous task than writing the expedition narrative. The results were eventually published in six volumes, and according to a later polar scientist, Robert Rudmose-Brown, "were to Arctic oceanography what the "Challenger" expedition results had been to the oceanography of other oceans."
In 1900, Nansen became director of the Christiania-based International Laboratory for North Sea Research, and helped found the International Council for the Exploration of the Sea. Through his connection with the latter body, in the summer of 1900 Nansen embarked on his first visit to Arctic waters since the "Fram" expedition, a cruise to Iceland and Jan Mayen Land on the oceanographic research vessel "Michael Sars", named after Eva's father. Shortly after his return he learned that his Farthest North record had been passed, by members of the Duke of the Abruzzi's Italian expedition. They had reached 86°34′N on 24 April 1900, in an attempt to reach the North Pole from Franz Josef Land. Nansen received the news philosophically: "What is the value of having goals for their own sake? They all vanish ... it is merely a question of time."
Nansen was now considered an oracle by all would-be explorers of the north and south polar regions. Abruzzi had consulted him, as had the Belgian Adrien de Gerlache, each of whom took expeditions to the Antarctic. Although Nansen refused to meet his own countryman and fellow-explorer Carsten Borchgrevink (whom he considered a fraud), he gave advice to Robert Falcon Scott on polar equipment and transport, prior to the 1901–04 "Discovery" expedition. At one point Nansen seriously considered leading a South Pole expedition himself, and asked Colin Archer to design two ships. However, these plans remained on the drawing board.
By 1901 Nansen's family had expanded considerably. A daughter, Liv, had been born just before "Fram" set out; a son, Kåre, was born in 1897, followed by a daughter, Irmelin, in 1900, and a second son, Odd, in 1901. The family home, which Nansen had built in 1891 from the profits of his Greenland expedition book, was now too small. Nansen acquired a plot of land in the Lysaker district and built, substantially to his own design, a large and imposing house which combined some of the characteristics of an English manor house with features from the Italian renaissance.
The house was ready for occupation by April 1902; Nansen called it "Polhøgda" (in English "polar heights"), and it remained his home for the rest of his life. A fifth and final child, son Asmund, was born at Polhøgda in 1903.
The union between Norway and Sweden, imposed by the Great Powers in 1814, had been under considerable strain through the 1890s, the chief issue in question being Norway's rights to its own consular service. Nansen, although not by inclination a politician, had spoken out on the issue on several occasions in defence of Norway's interests. It seemed, early in the 20th century, that agreement between the two countries might be possible, but hopes were dashed when negotiations broke down in February 1905. The Norwegian government fell, and was replaced by one led by Christian Michelsen, whose programme was one of separation from Sweden.
In February and March Nansen published a series of newspaper articles which placed him firmly in the separatist camp. The new prime minister wanted Nansen in the cabinet, but Nansen had no political ambitions. However, at Michelsen's request he went to Berlin and then to London where, in a letter to "The Times", he presented Norway's legal case for a separate consular service to the English-speaking world. On 17 May 1905, Norway's Constitution Day, Nansen addressed a large crowd in Christiania, saying: "Now have all ways of retreat been closed. Now remains only one path, the way forward, perhaps through difficulties and hardships, but forward for our country, to a free Norway". He also wrote a book, "Norway and the Union with Sweden", to promote Norway's case abroad.
On 23 May the Storting passed the Consulate Act establishing a separate consular service. King Oscar refused his assent; on 27 May the Norwegian cabinet resigned, but the king would not recognise this step. On 7 June the Storting unilaterally announced that the union with Sweden was dissolved. In a tense situation the Swedish government agreed to Norway's request that the dissolution should be put to a referendum of the Norwegian people. This was held on 13 August 1905 and resulted in an overwhelming vote for independence, at which point King Oscar relinquished the crown of Norway while retaining the Swedish throne. A second referendum, held in November, determined that the new independent state should be a monarchy rather than a republic. In anticipation of this, Michelsen's government had been considering the suitability of various princes as candidates for the Norwegian throne. Faced with King Oscar's refusal to allow anyone from his own House of Bernadotte to accept the crown, the favoured choice was Prince Charles of Denmark. In July 1905 Michelsen sent Nansen to Copenhagen on a secret mission to persuade Charles to accept the Norwegian throne. Nansen was successful; shortly after the second referendum Charles was proclaimed king, taking the name Haakon VII. He and his wife, the British princess Maud, were crowned in the Nidaros Cathedral in Trondheim on 22 June 1906.
In April 1906 Nansen was appointed Norway's first Minister in London. His main task was to work with representatives of the major European powers on an Integrity Treaty which would guarantee Norway's position. Nansen was popular in England, and got on well with King Edward, though he found court functions and diplomatic duties disagreeable; "frivolous and boring" was his description. However, he was able to pursue his geographical and scientific interests through contacts with the Royal Geographical Society and other learned bodies. The Treaty was signed on 2 November 1907, and Nansen considered his task complete. Resisting the pleas of, among others, King Edward that he should remain in London, on 15 November Nansen resigned his post. A few weeks later, still in England as the king's guest at Sandringham, Nansen received word that Eva was seriously ill with pneumonia. On 8 December he set out for home, but before he reached Polhøgda he learned, from a telegram, that Eva had died.
After a period of mourning, Nansen returned to London. He had been persuaded by his government to rescind his resignation until after King Edward's state visit to Norway in April 1908. His formal retirement from the diplomatic service was dated 1 May 1908, the same day on which his university professorship was changed from zoology to oceanography. This new designation reflected the general character of Nansen's more recent scientific interests.
In 1905, he had supplied the Swedish physicist Walfrid Ekman with the data which established the principle in oceanography known as the Ekman spiral. Based on Nansen's observations of ocean currents recorded during the "Fram" expedition, Ekman concluded that the effect of wind on the sea's surface produced currents which "formed something like a spiral staircase, down towards the depths".
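Ekman's "spiral staircase" has a compact textbook form: with depth, the current's speed decays exponentially while its direction rotates steadily (clockwise in the Northern Hemisphere, starting 45° to the right of the wind at the surface). A minimal sketch of that classical solution; the surface speed and Ekman depth below are arbitrary illustrative values, not data from the "Fram" expedition:

```python
import math

def ekman_velocity(z, v0=0.1, d_e=50.0):
    """Current (u, v) in m/s at depth z (z <= 0, metres) for the classical
    Ekman spiral, Northern Hemisphere, wind blowing along +y:
        u = v0 * exp(pi*z/d_e) * cos(pi/4 + pi*z/d_e)
        v = v0 * exp(pi*z/d_e) * sin(pi/4 + pi*z/d_e)
    v0 is the surface current speed, d_e the Ekman depth (illustrative)."""
    k = math.pi * z / d_e          # zero at the surface, -pi at the Ekman depth
    speed = v0 * math.exp(k)       # speed decays exponentially with depth
    angle = math.pi / 4 + k        # direction rotates clockwise going down
    return speed * math.cos(angle), speed * math.sin(angle)

# Surface: full speed, deflected 45 degrees to the right of the wind.
u_s, v_s = ekman_velocity(0.0)
# At the Ekman depth the current is reversed, reduced to exp(-pi) ~ 4% of v0.
u_d, v_d = ekman_velocity(-50.0)

print(round(math.hypot(u_s, v_s), 3))        # surface speed = v0
print(round(math.hypot(u_d, v_d) / 0.1, 3))  # fraction remaining at depth
```

Tracing (u, v) as z descends draws exactly the narrowing corkscrew of vectors Ekman described as "something like a spiral staircase".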
In 1909 Nansen combined with Bjørn Helland-Hansen to publish an academic paper, "The Norwegian Sea: its Physical Oceanography", based on the "Michael Sars" voyage of 1900. Nansen had by now retired from polar exploration, the decisive step being his release of "Fram" to fellow Norwegian Roald Amundsen, who was planning a North Pole expedition. When Amundsen made his controversial change of plan and set out for the South Pole, Nansen stood by him.
Between 1910 and 1914, Nansen participated in several oceanographic voyages. In 1910, aboard the Norwegian naval vessel "Fridtjof", he carried out researches in the northern Atlantic, and in 1912 he took his own yacht, "Veslemøy", to Bear Island and Spitsbergen. The main objective of the "Veslemøy" cruise was the investigation of salinity in the North Polar Basin. One of Nansen's lasting contributions to oceanography was his work designing instruments and equipment; the "Nansen bottle" for taking deep water samples remained in use into the 21st century, in a version updated by Shale Niskin.
At the request of the Royal Geographical Society, Nansen began work on a study of Arctic discoveries, which developed into a two-volume history of the exploration of the northern regions up to the beginning of the 16th century. This was published in 1911 as "Nord i Tåkeheimen" ("In Northern Mists"). That year he renewed an acquaintance with Kathleen Scott, wife of Robert Falcon Scott whose Terra Nova Expedition had sailed for Antarctica in 1910.
Biographer Roland Huntford has asserted, without any compelling evidence, that Nansen and Kathleen Scott had a brief love affair. Louisa Young, in her biography of Lady Scott, refutes the claim. Many women were attracted to Nansen, and he had a reputation as a womaniser. His personal life was troubled around this time; in January 1913 he received news of the suicide of Hjalmar Johansen, who had returned in disgrace from Amundsen's successful South Pole expedition. In March 1913, Nansen's youngest son Asmund died after a long illness.
In the summer of 1913, Nansen travelled to the Kara Sea, at the invitation of Jonas Lied, as part of a delegation investigating a possible trade route between Western Europe and the Siberian interior. The party then took a steamer up the Yenisei River to Krasnoyarsk, and travelled on the Trans-Siberian Railway to Vladivostok before turning for home. Nansen published a report of the trip in "Through Siberia". The life and culture of the Russian peoples aroused in Nansen an interest and sympathy he would carry through to his later life. Immediately before the First World War, Nansen joined Helland-Hansen in an oceanographical cruise in eastern Atlantic waters.
On the outbreak of war in 1914, Norway declared its neutrality, alongside Sweden and Denmark. Nansen was appointed as the president of the Norwegian Union of Defence, but had few official duties, and continued with his professional work as far as circumstances permitted. As the war progressed, the loss of Norway's overseas trade led to acute shortages of food in the country, which became critical in April 1917, when the United States entered the war and placed extra restrictions on international trade. Nansen was dispatched to Washington by the Norwegian government; after months of discussion, he secured food and other supplies in return for the introduction of a rationing system. When his government hesitated over the deal, he signed the agreement on his own initiative.
Within a few months of the war's end in November 1918, a draft agreement had been accepted by the Paris Peace Conference to create a League of Nations, as a means of resolving disputes between nations by peaceful means. The foundation of the League at this time was providential as far as Nansen was concerned, giving him a new outlet for his restless energy. He became president of the Norwegian League of Nations Society, and although the Scandinavian nations with their traditions of neutrality initially held themselves aloof, his advocacy helped to ensure that Norway became a full member of the League in 1920, and he became one of its three delegates to the League's General Assembly.
In April 1920, at the League's request, Nansen began organising the repatriation of around half a million prisoners of war, stranded in various parts of the world. Of these, 300,000 were in Russia which, gripped by revolution and civil war, had little interest in their fate. Nansen was able to report to the Assembly in November 1920 that around 200,000 men had been returned to their homes. "Never in my life", he said, "have I been brought into touch with so formidable an amount of suffering."
Nansen continued this work for a further two years until, in his final report to the Assembly in 1922, he was able to state that 427,886 prisoners had been repatriated to around 30 different countries. In paying tribute to his work, the responsible committee recorded that the story of his efforts "would contain tales of heroic endeavour worthy of those in the accounts of the crossing of Greenland and the great Arctic voyage."
Even before this work was complete, Nansen was involved in a further humanitarian effort. On 1 September 1921, prompted by the British delegate Philip Noel-Baker, he accepted the post of the League's High Commissioner for Refugees. His main brief was the resettlement of around two million Russian refugees displaced by the upheavals of the Russian Revolution.
At the same time he tried to tackle the urgent problem of famine in Russia; following a widespread failure of crops around 30 million people were threatened with starvation and death. Despite Nansen's pleas on behalf of the starving, Russia's revolutionary government was feared and distrusted internationally, and the League was reluctant to come to its peoples' aid. Nansen had to rely largely on fundraising from private organisations, and his efforts met with limited success. Later he was to express himself bitterly on the matter:
A major problem impeding Nansen's work on behalf of refugees was that most of them lacked documentary proof of identity or nationality. Without legal status in their country of refuge, their lack of papers meant they were unable to go anywhere else. To overcome this, Nansen devised a document that became known as the "Nansen passport", a form of identity for stateless persons that was in time recognised by more than 50 governments, and which allowed refugees to cross borders legally. Although the passport was created initially for refugees from Russia, it was extended to cover other groups.
While attending the Conference of Lausanne in November 1922, Nansen learned that he had been awarded the Nobel Peace Prize for 1922. The citation referred to "his work for the repatriation of the prisoners of war, his work for the Russian refugees, his work to bring succour to the millions of Russians afflicted by famine, and finally his present work for the refugees in Asia Minor and Thrace". Nansen donated the prize money to international relief efforts.
After the Greco-Turkish War of 1919–1922, Nansen travelled to Constantinople to negotiate the resettlement of hundreds of thousands of refugees, mainly ethnic Greeks who had fled from Turkey after the defeat of the Greek Army. The impoverished Greek state was unable to take them in, and so Nansen devised a scheme for a population exchange whereby half a million Turks in Greece were returned to Turkey, with full financial compensation, while further loans facilitated the absorption of the refugee Greeks into their homeland. Despite some controversy over the principle of a population exchange, the plan was implemented successfully over a period of several years.
From 1925 onwards, Nansen devoted much time to trying to help Armenian refugees, victims of the Armenian genocide at the hands of the Ottoman Empire during the First World War and of further ill-treatment thereafter. His goal was the establishment of a national home for these refugees, within the borders of Soviet Armenia. His main assistant in this endeavour was Vidkun Quisling, the future Nazi collaborator and head of a Norwegian puppet government during the Second World War.
After visiting the region, Nansen presented the Assembly with a modest plan for the irrigation of an area of land on which 15,000 refugees could be settled. The plan ultimately failed, because even with Nansen's unremitting advocacy the money to finance the scheme was not forthcoming. Despite this failure, his reputation among the Armenian people remains high.
Nansen wrote "Armenia and the Near East" (1923), in which he describes the plight of the Armenians in the wake of Armenia's loss of independence to the Soviet Union. The book was translated into many languages. After his visit to Armenia, Nansen wrote two further books: "Across Armenia" (1927) and "Through the Caucasus to the Volga" (1930).
Within the League's Assembly, Nansen spoke out on many issues besides those related to refugees. He believed that the Assembly gave the smaller countries such as Norway a "unique opportunity for speaking in the councils of the world." He believed that the extent of the League's success in reducing armaments would be the greatest test of its credibility. He was a signatory to the Slavery Convention of 25 September 1926, which sought to outlaw the use of forced labour. He supported a settlement of the post-war reparations issue and championed Germany's membership of the League, which was granted in September 1926 after intensive preparatory work by Nansen.
On 17 January 1919 Nansen married Sigrun Munthe, a long-time friend with whom he had had a love affair in 1905, while Eva was still alive. The marriage was resented by the Nansen children, and proved unhappy; an acquaintance writing of them in the 1920s said Nansen appeared unbearably miserable and Sigrun steeped in hate.
Nansen's League of Nations commitments through the 1920s meant that he was mostly absent from Norway, and was able to devote little time to scientific work. Nevertheless, he continued to publish occasional papers. He entertained the hope that he might travel to the North Pole by airship, but could not raise sufficient funding. In any event he was forestalled in this ambition by Amundsen, who flew over the pole in Umberto Nobile's airship "Norge" in May 1926. Two years later Nansen broadcast a memorial oration to Amundsen, who had disappeared in the Arctic while organising a rescue party for Nobile whose airship had crashed during a second polar voyage. Nansen said of Amundsen: "He found an unknown grave under the clear sky of the icy world, with the whirring of the wings of eternity through space."
In 1926 Nansen was elected Rector of the University of St Andrews in Scotland, the first foreigner to hold this largely honorary position. He used the occasion of his inaugural address to review his life and philosophy, and to deliver a call to the youth of the next generation. He ended:
We all have a Land of Beyond to seek in our life—what more can we ask? Our part is to find the trail that leads to it. A long trail, a hard trail, maybe; but the call comes to us, and we have to go. Rooted deep in the nature of every one of us is the spirit of adventure, the call of the wild—vibrating under all our actions, making life deeper and higher and nobler.
Nansen largely avoided involvement in domestic Norwegian politics, but in 1924 he was persuaded by the long-retired former Prime Minister Christian Michelsen to take part in a new anti-communist political grouping, the Fatherland League. There were fears in Norway that should the Marxist-oriented Labour Party gain power it would introduce a revolutionary programme. At the inaugural rally of the League in Oslo (as Christiania had now been renamed), Nansen declared: "To talk of the right of revolution in a society with full civil liberty, universal suffrage, equal treatment for everyone ... [is] idiotic nonsense."
Following continued turmoil among the centre-right parties, an independent petition proposing that Nansen head a centre-right national unity government on a balanced-budget programme even gained some momentum in 1926, an idea he did not reject. He was the headline speaker at the single largest Fatherland League rally, with 15,000 attendees, in Tønsberg in 1928. In 1929 he went on his final tour for the League on the ship "Stella Polaris", holding speeches from Bergen to Hammerfest.
In between his various duties and responsibilities, Nansen had continued to take skiing holidays when he could. In February 1930, aged 68, he took a short break in the mountains with two old friends, who noted that Nansen was slower than usual and appeared to tire easily. On his return to Oslo he was laid up for several months, with influenza and later phlebitis, and was visited on his sickbed by King Haakon VII.
Although Nansen was himself an atheist, he was a close friend of a clergyman named Wilhelm.
Nansen died of a heart attack on 13 May 1930. He was given a non-religious state funeral before cremation, after which his ashes were laid under a tree at Polhøgda. Nansen's daughter Liv recorded that there were no speeches, just music: Schubert's "Death and the Maiden", which Eva used to sing.
In his lifetime and thereafter, Nansen received honours and recognition from many countries. Among the many tributes paid to him subsequently was that of Lord Robert Cecil, a fellow League of Nations delegate, who spoke of the range of Nansen's work, done with no regard for his own interests or health: "Every good cause had his support. He was a fearless peacemaker, a friend of justice, an advocate always for the weak and suffering."
Nansen was a pioneer and innovator in many fields. As a young man he embraced the revolution in skiing methods that transformed it from a means of winter travel to a universal sport, and quickly became one of Norway's leading skiers. He was later able to apply this expertise to the problems of polar travel, in both his Greenland and his "Fram" expeditions.
He invented the "Nansen sledge" with broad, ski-like runners, the "Nansen cooker" to improve the heat efficiency of the standard spirit stoves then in use, and the layer principle in polar clothing, whereby the traditionally heavy, awkward garments were replaced by layers of lightweight material. In science, Nansen is recognised both as one of the founders of modern neurology, and as a significant contributor to early oceanographical science, in particular for his work in establishing the Central Oceanographic Laboratory in Christiania.
Through his work on behalf of the League of Nations, Nansen helped to establish the principle of international responsibility for refugees. Immediately after his death the League set up the Nansen International Office for Refugees, a semi-autonomous body under the League's authority, to continue his work. The Nansen Office faced great difficulties, in part arising from the large numbers of refugees from the European dictatorships during the 1930s. Nevertheless, it secured the agreement of 14 countries (including a reluctant Great Britain) to the Refugee Convention of 1933.
It also helped to repatriate 10,000 Armenians to Yerevan in Soviet Armenia, and to find homes for a further 40,000 in Syria and Lebanon. In 1938, the year in which it was superseded by a wider-ranging body, the Nansen Office was awarded the Nobel Peace Prize. In 1954, the League's successor body, the United Nations, established the Nansen Medal, later named the Nansen Refugee Award, given annually by the United Nations High Commissioner for Refugees to an individual, group or organisation "for outstanding work on behalf of the forcibly displaced".
Numerous geographical features bear his name: the Nansen Basin and the Nansen-Gakkel Ridge in the Arctic Ocean; Mount Nansen in the Yukon region of Canada; Mount Nansen, Mount Fridtjof Nansen and Nansen Island, all in Antarctica; as well as Nansen Island in the Kara Sea, Nansen Land in Greenland and Nansen Island in Franz Josef Land; 853 Nansenia, an asteroid; Nansen crater at the Moon's north pole and Nansen crater on Mars. His Polhøgda mansion is now home to the Fridtjof Nansen Institute, an independent foundation which engages in research on environmental, energy and resource management politics.
"Just a life – the story of Fridtjof Nansen" was released, a 1968 Norwegian/Soviet biographical film with Knut Wigert as Nansen.
The Royal Norwegian Navy launched the first of a series of five "Fridtjof Nansen"-class frigates in 2004, with HNoMS "Fridtjof Nansen" as its lead ship.
Free market
In economics, a free market is a system in which the prices for goods and services are self-regulated by the open market and by consumers. In a free market, the laws and forces of supply and demand are free from any intervention by a government or other authority, and from all forms of economic privilege, monopolies and artificial scarcities. Proponents of the concept of free market contrast it with a regulated market in which a government intervenes in supply and demand through various methods such as tariffs used to restrict trade and to protect the local economy. In an idealized free-market economy, prices for goods and services are set freely by the forces of supply and demand and are allowed to reach their point of equilibrium without intervention by government policy.
Scholars contrast the concept of a free market with the concept of a coordinated market in fields of study such as political economy, new institutional economics, economic sociology and political science. All of these fields emphasize the importance in currently existing market systems of rule-making institutions external to the simple forces of supply and demand which create space for those forces to operate to control productive output and distribution. Although free markets are commonly associated with capitalism within a market economy in contemporary usage and popular culture, free markets have also been advocated by anarchists, socialists and some proponents of cooperatives and advocates of profit sharing.
Criticism of the theoretical concept may regard systems with significant market power, inequality of bargaining power, or information asymmetry as less than free, with regulation being necessary to control those imbalances in order to allow markets to function more efficiently as well as produce more desirable social outcomes.
The Heritage Foundation, a conservative think tank based in Washington, D.C. that defines capitalism as the free market which is free from state intervention and government regulation, tried to identify the key factors necessary to measure the degree of freedom of economy of a particular country. In 1986, they introduced the Index of Economic Freedom which is based on some fifty variables. While this and other similar indices do not necessarily define a free market, The Heritage Foundation measures the degree to which a modern economy is free. The variables are divided into the following major groups:
According to The Heritage Foundation, these free-market principles are what helped the United States transition to a free-market economy: international free trade strengthened the economy, and Americans had to embrace it in order to prosper from it. Each group is assigned a numerical value between 1 and 5, and the index is the arithmetical mean of these values, rounded to the nearest hundredth. Initially, countries which were traditionally considered capitalistic received high ratings, but the method improved over time. Some economists, such as Milton Friedman and other "laissez-faire" economists, have argued that there is a direct relationship between economic growth and economic freedom, and some studies suggest this is true. Ongoing debates exist among scholars regarding methodological issues in empirical studies of the connection between economic freedom and economic growth, and these debates continue to explore what that relationship entails.
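The index arithmetic described above can be sketched in a few lines. The category names and scores below are purely illustrative, not actual Heritage Foundation data or categories:

```python
# Illustrative sketch: each category is scored on a 1-5 scale, and the
# index is the arithmetic mean of the scores, rounded to the nearest
# hundredth. Category names and values here are invented for the example.
scores = {
    "trade_policy": 2.0,
    "fiscal_burden": 3.5,
    "government_intervention": 2.5,
    "monetary_policy": 1.5,
    "property_rights": 2.0,
}

index = round(sum(scores.values()) / len(scores), 2)
print(index)  # 2.3
```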
The Free Market Monument Foundation defines the principles of a free market as such:
For classical economists such as Adam Smith, the term free market does not necessarily refer to a market free from government interference, but rather free from all forms of economic privilege, monopolies and artificial scarcities. This implies that economic rents, i.e. profits generated from a lack of perfect competition, must be reduced or eliminated as much as possible through free competition.
Economic theory suggests the returns to land and other natural resources are economic rents that cannot be reduced in this way because of their perfectly inelastic supply. Some economic thinkers emphasize the need to share those rents as an essential requirement for a well-functioning market. It is suggested that this would both eliminate the need for regular taxes that have a negative effect on trade (see deadweight loss) and release land and resources that are speculated upon or monopolised, two changes that improve competition and free-market mechanisms. Winston Churchill supported this view with the statement: "Land is the mother of all monopoly". The American economist and social philosopher Henry George, the most famous proponent of this thesis, wanted to accomplish this through a high land value tax that replaces all other taxes. Followers of his ideas are often called Georgists, geoists or geolibertarians.
Léon Walras, one of the founders of the neoclassical economics who helped formulate the general equilibrium theory, had a very similar view. He argued that free competition could only be realized under conditions of state ownership of natural resources and land. Additionally, income taxes could be eliminated because the state would receive income to finance public services through owning such resources and enterprises.
The "laissez-faire" principle expresses a preference for an absence of non-market pressures on prices and wages such as those from discriminatory government taxes, subsidies, tariffs, regulations of purely private behavior, or government-granted or coercive monopolies. In "The Pure Theory of Capital", Friedrich Hayek argued that the goal is the preservation of the unique information contained in the price itself.
The definition of free market has been disputed and made complex by collectivist political philosophers and socialist economic ideas. This contention arose from the divergence from classical economists such as Richard Cantillon, Adam Smith, David Ricardo and Thomas Robert Malthus and from the continental economic science developed primarily by the Spanish scholastic and French classical economists, including Anne-Robert-Jacques Turgot, Baron de Laune, Jean-Baptiste Say and Frédéric Bastiat. During the marginal revolution, subjective value theory was rediscovered.
Although "laissez-faire" has been commonly associated with capitalism, there is a similar economic theory associated with socialism called left-wing or socialist "laissez-faire", or free-market anarchism, also known as free-market anti-capitalism and free-market socialism to distinguish it from "laissez-faire" capitalism. Critics of "laissez-faire" as commonly understood argue that a truly "laissez-faire" system would be anti-capitalist and socialist.
Various forms of socialism based on free markets have existed since the 19th century. Early notable socialist proponents of free markets include Pierre-Joseph Proudhon, Benjamin Tucker and the Ricardian socialists. These economists believed that genuinely free markets and voluntary exchange could not exist within the exploitative conditions of capitalism. These proposals ranged from various forms of worker cooperatives operating in a free-market economy such as the mutualist system proposed by Proudhon, to state-owned enterprises operating in unregulated and open markets. These models of socialism are not to be confused with other forms of market socialism (e.g. the Lange model) where publicly owned enterprises are coordinated by various degrees of economic planning, or where capital good prices are determined through marginal cost pricing.
Advocates of free-market socialism such as Jaroslav Vanek argue that genuinely free markets are not possible under conditions of private ownership of productive property. Instead, he contends that the class differences and inequalities in income and power that result from private ownership enable the interests of the dominant class to skew the market to their favor, either in the form of monopoly and market power, or by utilizing their wealth and resources to legislate government policies that benefit their specific business interests. Additionally, Vanek states that workers in a socialist economy based on cooperative and self-managed enterprises have stronger incentives to maximize productivity because they would receive a share of the profits (based on the overall performance of their enterprise) in addition to receiving their fixed wage or salary. The stronger incentives to maximize productivity that he conceives as possible in a socialist economy based on cooperative and self-managed enterprises might be accomplished in a capitalist free-market if employee-owned companies were the norm as envisioned by various thinkers including Louis O. Kelso and James S. Albus.
Socialists also assert that free-market capitalism leads to an excessively skewed distribution of income and to economic instabilities which in turn lead to social instability. As a result, corrective measures in the form of social welfare, re-distributive taxation and regulatory measures, along with their associated administrative costs, are required, creating agency costs for society. These costs would not be required in a self-managed socialist economy.
Conditions that must exist for unregulated markets to behave as free markets are summarized at perfect competition. An absence of any of these perfect competition ideal conditions is a market failure. Most schools of economics allow that regulatory intervention may provide a substitute force to counter a market failure. Under this thinking, this form of market regulation may be better than an unregulated market at providing a free market.
Demand for an item (such as goods or services) refers to the economic market pressure from people trying to buy it. Buyers have a maximum price they are willing to pay and sellers have a minimum price they are willing to offer their product. The point at which the supply and demand curves meet determines the equilibrium price of the good and the quantity traded. Sellers willing to offer their goods at a lower price than the equilibrium price receive the difference as producer surplus. Buyers willing to pay a higher price than the equilibrium price receive the difference as consumer surplus.
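As an illustration of the equilibrium described above, assuming simple linear demand and supply curves (the coefficients are invented for the example), the market-clearing price and quantity and the two surpluses can be computed directly:

```python
# Sketch with illustrative linear curves:
#   demand: P = 100 - 2*Q      supply: P = 20 + 2*Q
# The equilibrium is where the two curves meet.

def equilibrium(a, b, c, d):
    """Solve a - b*Q = c + d*Q for the market-clearing quantity and price."""
    q = (a - c) / (b + d)
    p = a - b * q
    return q, p

q_star, p_star = equilibrium(a=100, b=2, c=20, d=2)
print(q_star, p_star)  # 20.0 60.0

# Surpluses are the triangular areas between each curve and the price line.
consumer_surplus = 0.5 * (100 - p_star) * q_star  # below demand, above price
producer_surplus = 0.5 * (p_star - 20) * q_star   # above supply, below price
print(consumer_surplus, producer_surplus)  # 400.0 400.0
```

With these symmetric coefficients the two surpluses happen to be equal; steeper supply or flatter demand would shift the split.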
The model is commonly applied to wages in the market for labor, where the typical roles of supplier and consumer are reversed: the suppliers are individuals, who try to sell (supply) their labor for the highest price, and the consumers are businesses, which try to buy (demand) the type of labor they need at the lowest price. As more people offer their labor in that market, the supply curve shifts to the right, the equilibrium wage decreases and the equilibrium level of employment increases. The opposite happens if fewer people offer their labor and the supply curve shifts to the left.
In a free market, individuals and firms taking part in these transactions have the liberty to enter, leave and participate in the market as they so choose. Prices and quantities are allowed to adjust according to economic conditions in order to reach equilibrium and properly allocate resources. However, in many countries around the world governments seek to intervene in the free market in order to achieve certain social or political agendas. Governments may attempt to create social equality or equality of outcome by intervening in the market through actions such as imposing a minimum wage (price floor) or erecting price controls (price ceiling). Other lesser-known goals are also pursued, such as in the United States, where the federal government subsidizes owners of fertile land to not grow crops in order to prevent the supply curve from further shifting to the right and decreasing the equilibrium price. This is done under the justification of maintaining farmers' profits; due to the relative inelasticity of demand for crops, increased supply would lower the price but not significantly increase quantity demanded, thus placing pressure on farmers to exit the market. Those interventions are often done in the name of maintaining basic assumptions of free markets such as the idea that the costs of production must be included in the price of goods. Pollution and depletion costs are sometimes not included in the cost of production (a manufacturer that withdraws water at one location then discharges it polluted downstream, avoiding the cost of treating the water), therefore governments may opt to impose regulations in an attempt to try to internalize all of the cost of production and ultimately include them in the price of the goods.
Advocates of the free market contend that government intervention hampers economic growth by disrupting the natural allocation of resources according to supply and demand while critics of the free market contend that government intervention is sometimes necessary to protect a country's economy from better-developed and more influential economies, while providing the stability necessary for wise long-term investment. Milton Friedman pointed to failures of central planning, price controls and state-owned corporations, particularly in the Soviet Union and China while Ha-Joon Chang cites the examples of post-war Japan and the growth of South Korea's steel industry.
With varying degrees of mathematical rigor over time, the general equilibrium theory has demonstrated that under certain conditions of competition the law of supply and demand predominates in this ideal free and competitive market, influencing prices toward an equilibrium that balances the demands for the products against the supplies. At these equilibrium prices, the market distributes the products to the purchasers according to each purchaser's preference or utility for each product and within the relative limits of each buyer's purchasing power. This result is described as market efficiency, or more specifically a Pareto optimum.
This equilibrating behavior of free markets requires certain assumptions about their agents—collectively known as perfect competition—which therefore cannot be results of the market that they create. Among these assumptions are several which are impossible to fully achieve in a real market, such as complete information, interchangeable goods and services and lack of market power. The question then is what approximations of these conditions guarantee approximations of market efficiency and which failures in competition generate overall market failures. Several Nobel Prizes in Economics have been awarded for analyses of market failures due to asymmetric information.
A free market does not require the existence of competition; however, it does require a framework that allows new market entrants. Hence, in the absence of coercive barriers, such as paid licensing certification for certain services and businesses, competition between businesses flourishes through the demands of consumers or buyers. A free market often suggests the presence of the profit motive, although neither a profit motive nor profit itself is necessary for one. All modern free markets are understood to include entrepreneurs, both individuals and businesses. Typically, a modern free-market economy would include other features, such as a stock exchange and a financial services sector, but they do not define it.
Friedrich Hayek popularized the view that market economies promote spontaneous order which results in a better "allocation of societal resources than any design could achieve". According to this view, market economies are characterized by the formation of complex transactional networks which produce and distribute goods and services throughout the economy. These networks are not designed, but they nevertheless emerge as a result of decentralized individual economic decisions. The idea of spontaneous order is an elaboration on the invisible hand proposed by Adam Smith in "The Wealth of Nations". About the individual, Smith wrote:
By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for society that it was no part of it. By pursuing his own interest, he frequently promotes that of the society more effectually than when he really intends to promote it. I have never known much good done by those who affected to trade for the public good.
Smith pointed out that one does not get one's dinner by appealing to the brother-love of the butcher, the farmer or the baker. Rather, one appeals to their self-interest and pays them for their labor, arguing:
It is not from the benevolence of the butcher, the brewer or the baker, that we expect our dinner, but from their regard to their own self-interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.
Supporters of this view claim that spontaneous order is superior to any order that does not allow individuals to make their own choices of what to produce, what to buy, what to sell and at what prices due to the number and complexity of the factors involved. They further believe that any attempt to implement central planning will result in more disorder, or a less efficient production and distribution of goods and services.
Critics such as political economist Karl Polanyi question whether a spontaneously ordered market can exist, completely free of distortions of political policy, claiming that even the ostensibly freest markets require a state to exercise coercive power in some areas, namely to enforce contracts, govern the formation of labor unions, spell out the rights and obligations of corporations, shape who has standing to bring legal actions and define what constitutes an unacceptable conflict of interest.
Critics of the free market have argued that in real world situations it has proven to be susceptible to the development of price fixing monopolies. Such reasoning has led to government intervention, e.g. the United States antitrust law.
Two prominent Canadian authors argue that government at times has to intervene to ensure competition in large and important industries. Naomi Klein illustrates this roughly in her work "The Shock Doctrine" and John Ralston Saul more humorously illustrates this through various examples in "The Collapse of Globalism and the Reinvention of the World". While its supporters argue that only a free market can create healthy competition and therefore more business and reasonable prices, opponents say that a free market in its purest form may result in the opposite. According to Klein and Saul, the merging of companies into giant corporations or the privatization of government-run industry and national assets often results in monopolies or oligopolies requiring government intervention to force competition and reasonable prices. Another form of market failure is speculation, where transactions are made to profit from short-term fluctuation rather than from the intrinsic value of the companies or products. This criticism has been challenged by historians such as Lawrence Reed, who argued that monopolies have historically failed to form even in the absence of antitrust law. This is because monopolies are inherently difficult to maintain: a company that tries to maintain its monopoly by buying out new competitors, for instance, is incentivizing newcomers to enter the market in hope of a buy-out.
American philosopher and author Cornel West has derisively termed what he perceives as dogmatic arguments for "laissez-faire" economic policies as free-market fundamentalism. West has contended that such mentality "trivializes the concern for public interest" and "makes money-driven, poll-obsessed elected officials deferential to corporate goals of profit – often at the cost of the common good". American political philosopher Michael J. Sandel contends that in the last thirty years the United States has moved beyond just having a market economy and has become a market society where literally everything is for sale, including aspects of social and civic life such as education, access to justice and political influence. The economic historian Karl Polanyi was highly critical of the idea of the market-based society in his book "The Great Transformation", noting that any attempt at its creation would undermine human society and the common good.
Critics of free market economics range from those who reject markets entirely in favour of a planned economy, as advocated by various Marxists, to those who wish to see market failures regulated to various degrees or supplemented by government interventions. Keynesians support market roles for government, such as using fiscal policy for economic stimulus when actions in the private sector lead to sub-optimal economic outcomes of depressions or recessions. Keynesians invoke the business cycle to explain liquidity traps, in which underconsumption occurs, as an argument for government intervention with fiscal policy. David McNally of the University of Houston argues in the Marxist tradition that the logic of the market inherently produces inequitable outcomes and leads to unequal exchanges, arguing that Adam Smith's moral intent and moral philosophy espousing equal exchange was undermined by the practice of the free market he championed. According to McNally, the development of the market economy involved coercion, exploitation and violence that Smith's moral philosophy could not countenance. McNally also criticizes market socialists for believing in the possibility of fair markets based on equal exchanges to be achieved by purging parasitical elements from the market economy, such as private ownership of the means of production, arguing that market socialism is an oxymoron when socialism is defined as an end to wage labour.
Some would argue that only one known example of a true free market exists, namely the black market. The black market is under constant threat by the police, but under no circumstances do the police regulate the substances that are being created. The black market produces wholly unregulated goods, which are purchased and consumed unregulated: anyone can produce anything at any time, and anyone can purchase anything available at any time. The alternative view is that the black market is not a free market at all, since high prices and natural monopolies are often enforced through murder, theft and destruction. Black markets can only exist on the periphery of regulated markets where laws are being regularly enforced.
Ford GT40
The Ford GT40 is an American high-performance endurance racing car. The Mk I, Mk II, and Mk III variants were designed and built in England based upon the British Lola Mk6. Only the Mk IV model was designed and built in the United States. The range was powered by a series of American-built Ford V8 engines modified for racing.
The GT40 effort was launched by Ford Motor Company to win long-distance sports car races against Ferrari, which won every 24 Hours of Le Mans race from 1960 to 1965. The GT40 broke Ferrari's streak in 1966 and went on to win the next three annual races. The Mk II's victory was the first win for an American manufacturer in a major European race since Jimmy Murphy's triumph with Duesenberg at the 1921 French Grand Prix. In 1967, the Mk IV became the only car designed and built entirely in the United States to achieve the overall win at Le Mans.
The Mk I, the oldest of the cars, won in 1968 and 1969, becoming the second chassis to win Le Mans more than once. (This Ford/Shelby chassis, #P-1075, was believed to have been the first until the Ferrari 275P chassis 0816 was revealed to have won the 1964 race after winning the 1963 race in 250P configuration and with a 0814 chassis plate.) It used an American Ford V8 engine, originally of 4.7-liter (289 cubic inch) displacement, which was later enlarged to 4.9 liters (302 cubic inches), with custom alloy Gurney–Weslake cylinder heads.
Early cars were simply named "Ford GT" (for Grand Touring), the name of Ford's project to prepare the cars for the international endurance racing circuit. The "40" represented its height of 40 inches (1.02 m), measured at the windshield, the minimum allowed. The first 12 "prototype" vehicles carried serial numbers GT-101 to GT-112. When "production" began, the subsequent cars—the Mk I, Mk II, Mk III, and Mk IV—were numbered GT40P/1000 through GT40P/1145 and were thus officially "GT40s". The Mk IVs were numbered J1–J12.
The contemporary Ford GT is a modern homage to the GT40.
Henry Ford II had wanted a Ford at Le Mans since the early 1960s. In early 1963, Ford reportedly received word through a European intermediary that Enzo Ferrari was interested in selling to Ford Motor Company. Ford reportedly spent several million dollars in an audit of Ferrari factory assets and in legal negotiations, only to have Ferrari unilaterally cut off talks at a late stage due to disputes about the ability to direct open-wheel racing. Ferrari, who wanted to remain the sole operator of his company's motorsports division, was angered when he was told that he would not be allowed to race at the Indianapolis 500 if the deal went through, since Ford fielded Indy cars using its own engine and did not want competition from Ferrari. Enzo cut the deal off out of spite, and Henry Ford II, enraged, directed his racing division to find a company that could build a Ferrari-beater on the world endurance-racing circuit.
To this end, Ford began negotiations with Lotus, Lola, and Cooper. Cooper had no experience in GT or prototype racing, and its performances in Formula One were declining.
The Lola proposal was chosen, since Lola had used a Ford V8 engine in its mid-engined Lola Mk 6 (also known as the Lola GT). It was one of the most advanced racing cars of the time and made a noted performance at Le Mans in 1963, even though the car did not finish, due to low gearing and slow revving out on the Mulsanne Straight. However, Eric Broadley, Lola Cars' owner and chief designer, agreed only to a short-term personal contribution to the project, without involving Lola Cars.
The agreement with Broadley included a one-year collaboration between Ford and Broadley, and the sale of the two Lola Mk 6 chassis builds to Ford. To form the development team, Ford also hired the ex-Aston Martin team manager John Wyer. Ford Motor Co. engineer Roy Lunn was sent to England; he had designed the mid-engined Mustang I concept car powered by a 1.7-liter V4. Despite the small engine of the Mustang I, Lunn was the only Dearborn engineer to have some experience with a mid-engined car.
Overseen by Harley Copp, the team of Broadley, Lunn, and Wyer began working on the new car at the Lola Factory in Bromley. At the end of 1963, the team moved to Slough, near Heathrow Airport. Ford then established Ford Advanced Vehicles (FAV) Ltd, a new subsidiary under the direction of Wyer, to manage the project.
The first chassis, built by Abbey Panels of Coventry, was delivered on 16 March 1964, with fiberglass moldings produced by Fibre Glass Engineering Ltd of Farnham. The first "Ford GT", the GT/101, was unveiled in England on 1 April and soon after exhibited in New York. The purchase price of the completed car for competition use was £5,200.
It was powered by the 4.7 L (289 cu in) Fairlane engine with a Colotti transaxle, the same power plant used by the Lola GT and the single-seater Lotus 29 that came in a highly controversial second at the Indy 500 in 1963. (An aluminum-block DOHC version, known as the Ford Indy engine, was used in later years at Indy; it won in 1965 in the Lotus 38.)
The Ford GT40 was first raced in May 1964 at the Nürburgring "1000 km race" where it retired with suspension failure after holding second place early in the event. Three weeks later at the 24 Hours of Le Mans, all three entries retired although the Ginther/Gregory car led the field from the second lap until its first pitstop. After a season-long series of dismal results under John Wyer in 1964, the program was handed over to Carroll Shelby after the 1964 Nassau race. The cars were sent directly to Shelby, still bearing the dirt and damage from the Nassau race. Carroll Shelby was noted for complaining that the cars were poorly maintained when he received them, but later information revealed the cars were packed up as soon as the race was over, and FAV never had a chance to clean and organize the cars to be transported to Shelby.
Shelby's first victory came on their maiden race with the Ford program, with Ken Miles and Lloyd Ruby taking a Shelby American-entered Ford GT40 to victory in the Daytona 2000 in February 1965. One month later Ken Miles and Bruce McLaren came in second overall (to the winning Chaparral in the sports class) and first in prototype class at the Sebring 12-hour race. The rest of the season, however, was a disappointment.
The experience gained in 1964 and 1965 allowed the 7-liter Mk II to dominate the following year. In February, the GT40 again won at Daytona. This was the first year Daytona was run in the 24-hour format, and Mk IIs finished first, second, and third. In March, at the 1966 12 Hours of Sebring, GT40s again took the top three finishes, with the X-1 roadster first, a Mk II second, and a Mk I third. Then in June, at the 24 Hours of Le Mans, the GT40 achieved yet another 1–2–3 result.
The Le Mans finish, however, was clouded in controversy. The No. 1 car of Ken Miles and Denny Hulme held a four-lap lead over the No. 2 car of Bruce McLaren and Chris Amon. This lead disintegrated when the No. 1 car was forced to make a pit stop to replace its brake rotors, an incorrect set having been fitted a lap earlier during a scheduled rotor change after the No. 2 crew had taken the correct rotors. This meant that in the final few hours the Ford GT40 of New Zealanders Bruce McLaren and Chris Amon closely trailed the leading Ford GT40 driven by Englishman Ken Miles and New Zealander Denny Hulme. With a multimillion-dollar program finally on the very brink of success, Ford team officials faced a difficult choice. They could allow the drivers to settle the outcome by racing each other—and risk one or both cars breaking down or crashing. They could dictate a finishing order to the drivers—guaranteeing that one set of drivers would be extremely unhappy. Or they could arrange a tie, with the McLaren/Amon and Miles/Hulme cars crossing the line side by side.
The team chose the last and informed McLaren and Miles of the decision just before the two got in their cars for the final stint. Then, not long before the finish, the Automobile Club de l'Ouest (ACO), organizers of the Le Mans event, informed Ford that the difference in starting positions would be taken into account at a close finish. This meant that the McLaren/Amon vehicle, which had started slightly further back than the Hulme/Miles car, would have covered slightly more ground over the 24 hours and would, therefore, be the winner. Secondly, Ford officials admitted later, the company's contentious relationship with Miles, its top contract driver, placed executives in a difficult position. They could reward an outstanding driver who had been at times extremely difficult to work with, or they could decide in favor of drivers (McLaren/Amon) who had committed less to the Ford program but who had been easier to deal with. Ford stuck with the orchestrated photo finish, but Miles, deeply bitter over this decision after his dedication to the program, issued his own protest by suddenly slowing just yards from the finish and letting McLaren across the line first. Miles died in a testing accident in the J-car (later to become the Mk IV) at Riverside Raceway in California just two months later.
Miles' death occurred at the wheel of the Ford "J-car", an iteration of the GT40 that included several unique features. These included aluminum honeycomb chassis construction and a "bread van" body design that experimented with "Kammback" aerodynamic theories. The fatal accident was attributed at least partly to the unproven aerodynamics of the J-car design, as well as to the strength of the experimental chassis. The team embarked on a complete redesign of the car, which became known as the Mk IV. The Mk IV, a newer design with a Mk II engine but a different chassis and a different body, won the following year at Le Mans (when four Mk IVs, three Mk IIs, and three Mk Is raced). The high speeds achieved in that race caused a rule change, which came into effect in 1968: prototypes were limited to a capacity of 3.0 liters, the same as in Formula One. This took out the V12-powered Ferrari 330P as well as the Chaparral and the Mk IV.
Sports cars like the GT40 and the Lola T70 were still allowed, with a maximum capacity of 5.0 L, provided at least 50 cars had been built. John Wyer's revised 4.7-liter Mk I (bored to 4.9 liters, with O-rings cut and installed between the block and head to prevent head gasket failure, a common problem with the 4.7 engine) won the 24 Hours of Le Mans in 1968 against the fragile smaller prototypes. This result, added to four other round wins for the GT40, gave Ford victory in the 1968 International Championship for Makes. The GT40's intended 3.0 L replacement, the Ford P68, and the Mirage cars proved a dismal failure. While facing more experienced prototypes and the new yet still unreliable 4.5 L flat-12-powered Porsche 917s, Wyer's 1969 24 Hours of Le Mans winners Jacky Ickx and Jackie Oliver managed to beat the remaining 3.0-liter Porsche 908 by just a few seconds with the already outdated GT40 Mk I—in the very car that had won in 1968, the legendary GT40P/1075. Apart from brake wear in the Porsche and the decision not to change brake pads so close to the race end, the winning combination was relaxed driving by both GT40 drivers and heroic efforts at the right time by the then Le Mans rookie Ickx, who won Le Mans five more times in later years.
In addition to four consecutive overall Le Mans victories, Ford also won the following four FIA international titles (at what was then unofficially known as the World Sportscar Championship) with the GT40:
The Mk.I was the original Ford GT40. Early prototypes were powered by alloy V8 engines and production models were powered by engines as used in the Ford Mustang. Five prototype models were built with roadster bodywork, including the Ford X-1. Two lightweight cars (of a planned five) were built by Alan Mann Racing in 1966, with light alloy bodies and other weight-saving modifications.
The Mk.I met with little success in its initial tune for the 1964 and 1965 Le Mans races. The first success came after the cars' demise at the Nassau Speed Weekend in November 1964, when the racing program was handed over to Carroll Shelby. Shelby's team modified the Ford GT40 and achieved its first win at Daytona in February 1965. The car was later further modified and run by John Wyer in 1968 and 1969, winning Le Mans in both those years and Sebring in 1969. The Mk.II and Mk.IV were both obsolete after the FIA changed the rules to ban unlimited-capacity engines, ruling out the Ford V8. However, the Mk.I, with its smaller engine, was legally able to race as a homologated sports car because of its production numbers.
In 1968 competition came from the Porsche 908, the first prototype built for the 3-liter Group 6. The 1968 season culminated in resounding success at the 24 Hours of Le Mans, with Pedro Rodríguez and Lucien Bianchi taking a clear lead over the Porsches, driving the #9 car in 'Gulf Oil' colors. The season had begun slowly for JW, losing at Sebring and Daytona before taking a first win at the BOAC International 500 at Brands Hatch. Later victories included the Grand Prix de Spa, the 21st Annual Watkins Glen Sports Car Road Race, and the 1000 km di Monza. The engine installed in this car was a naturally aspirated Windsor V8 with a compression ratio of 10.6:1, fuel fed by four 2-barrel 48 IDA Weber carburetors, producing peak power at 6,000 rpm and maximum torque at 4,750 rpm.
31 Mk I cars were built at the Slough factory in "road" trim, which differed little from the race versions. Wire wheels, carpet, ruched fabric map pockets in the doors and a cigarette lighter made up most of the changes. Some cars deleted the ventilated seats, and at least one (chassis 1049) was built with the opening, metal-framed, windows from the Mk III.
The X-1 was a roadster built to contest the Fall 1965 North American Pro Series, a forerunner of Can-Am, entered by the Bruce McLaren team and driven by Chris Amon. The car had an aluminum chassis built at Abbey Panels and was originally powered by a 4.7-liter (289 ci) engine. The real purpose of this car was to test several improvements originating from Kar Kraft, Shelby, and McLaren. Several gearboxes were used: a Hewland LG500 and at least one automatic gearbox. It was later upgraded to Mk.II specifications with a 7.0-liter (427 ci) engine and a standard four ratio Kar Kraft (subsidiary of Ford) gearbox, however, the car kept specific features such as its open roof and lightweight aluminum chassis. The car went on to win the 12 Hours of Sebring in 1966. The X-1 was a one-off and having been built in the United Kingdom and being liable for United States tariffs, was later ordered to be destroyed by United States customs officials.
The Mk.II was very similar in appearance to the Mk.I but used the 7.0-liter FE (427 ci) engine from the Ford Galaxie, used in NASCAR at the time and modified for road course use. The car's chassis was similar to the British-built Mk.I chassis, but it and other parts of the car had to be redesigned and modified by Shelby to accommodate the larger and heavier 427 engine. A new Kar Kraft-built four-speed gearbox replaced the ZF five-speed used in the Mk.I. This car is sometimes called the "Ford Mk.II".
In 1966, the Mk.II dominated Le Mans, taking European audiences by surprise and beating Ferrari to finish 1-2-3 in the standings. After the success of these Mk.II cars, the Ford GT40 went on to win the race for the next three years.
For 1967, the Mk.IIs were upgraded to "B" spec; they had redesigned bodywork and twin Holley carburetors for an additional 15 hp. A batch of improperly heat-treated input shafts in the transaxles sidelined virtually every Ford in the race at Daytona, however, and Ferrari won 1-2-3. The Mk.IIBs were also used for Sebring and Le Mans that year and won the Reims 12 Hours in France. For the Daytona 24 Hours, two Mk II models (chassis 1016 and 1047) had their engines re-badged as Mercury engines, as Ford saw a good opportunity to advertise that division of the company.
The Mk III was a road car only, of which seven were built. The car had four headlamps, the rear part of the body was expanded to make room for luggage, the 4.7-liter engine was detuned, the shock absorbers were softened, the shift lever was moved to the center, an ashtray was added, and the car was available with the steering wheel on the left side. As the Mk III looked significantly different from the racing models, many customers interested in buying a GT40 for road use chose instead to buy a Mk I, which was available from Wyer Ltd. Of the seven Mk IIIs produced, four were left-hand drive.
In an effort to develop a car with better aerodynamics (potentially resulting in superior control and speed compared to competitors), the decision was made to re-conceptualize and redesign everything about the vehicle other than its powerful 7-liter engine, which meant abandoning the original Mk.I/Mk.II chassis. To bring the car into alignment with Ford's "in-house" ideology at the time, the partnerships with English firms were curtailed, resulting in the sale of Ford Advanced Vehicles (acquired by John Wyer) and leading to a new vehicle designed by Ford's studios and produced by Ford's subsidiary Kar Kraft under Ed Hull. There was also a partnership with the Brunswick Aircraft Corporation for expertise on the novel use of aluminum honeycomb panels bonded together to form a lightweight, rigid "tub". The car was designated the J-car, as it was constructed to meet the new Appendix J regulations introduced by the FIA in 1966.
The first J-car was completed in March 1966 and set the fastest time at the Le Mans trials that year. The tub was extremely light, and the entire car weighed less than the Mk II. It was decided to run the Mk IIs due to their proven reliability, however, and little or no development was done on the J-car for the rest of the season. Following Le Mans, the development program for the J-car was resumed, and a second car was built. During a test session at Riverside International Raceway in August 1966 with Ken Miles driving, the car suddenly went out of control at the end of Riverside's high-speed, 1-mile-long back straight. The aluminum honeycomb chassis did not live up to its design goal, shattering upon impact. The car burst into flames, killing Miles. It was determined that the unique, flat-topped "bread van" aerodynamics of the car, lacking any sort of spoiler, were implicated in generating excess lift. Therefore, a conventional but significantly more aerodynamic body was designed for the subsequent development of the J-car, which was officially known as the Mk IV. A total of nine cars were constructed with J-car chassis numbers, although six were designated as Mk IVs and one as the G7A.
The Mk IV was built around a reinforced J chassis powered by the same 7.0 L engine as the Mk II. Excluding the engine, gearbox, some suspension parts and the brakes from the Mk.II, the Mk.IV was totally different from other GT40s, using a specific, all-new chassis and bodywork. It was undoubtedly the most radical and American variant of all the GT40s over the years. As a direct result of the Miles accident, the team installed a NASCAR-style steel-tube roll cage in the Mk.IV, which made it much safer, but the roll cage was so heavy that it negated most of the weight saving of the then highly advanced, radically innovative honeycomb-panel construction. The Mk.IV had a long, streamlined shape, which gave it exceptional top speed, crucial to doing well at Le Mans in those days (a circuit made up predominantly of straights)—the race it was ultimately built for. A 2-speed automatic gearbox was tried, but during the extensive testing of the J-car in 1966 and 1967, it was decided that the 4-speed from the Mk.II would be retained. Dan Gurney often complained about the weight of the Mk.IV, since the car was heavier than the Ferrari 330 P4. During practice at Le Mans in 1967, in an effort to preserve the highly stressed brakes, Gurney developed a strategy (also adopted by co-driver A. J. Foyt) of backing completely off the throttle several hundred yards before the approach to the Mulsanne hairpin and virtually coasting into the braking area. This technique saved the brakes, but the resulting increase in the car's recorded lap times during practice led to speculation within the Ford team that Gurney and Foyt, in an effort to compromise on chassis settings, had hopelessly "dialed out" their car. The car proved to be fastest in a straight line that year, thanks to its streamlined aerodynamics, achieving 212 mph on the 3.6-mile Mulsanne Straight.
The Mk.IV ran in only two races—the 1967 12 Hours of Sebring and the 1967 24 Hours of Le Mans—and won both events. Only one Mk.IV was completed for Sebring; the pressure from Ford had increased considerably after the company's humiliation at Daytona two months earlier. Mario Andretti and Bruce McLaren won Sebring, and Dan Gurney and A. J. Foyt won Le Mans (in the Mk.IV that had seemed least likely to win), where the Ford-backed Shelby American and Holman & Moody teams each fielded two Mk.IVs. The installation of the roll cage was ultimately credited by many with saving the life of Andretti, who crashed violently at the Esses during the 1967 Le Mans 24 Hours but escaped with minor injuries.
Unlike the earlier Mk.I–III cars, which were built in England, the Mk.IVs were built in the United States by Kar Kraft. Le Mans 1967 remains the only all-American victory in Le Mans history—American drivers, team, chassis, engine, and tires. A total of six Mk IVs were constructed. One of the Mk IVs was rebuilt as the Ford G7 in 1968 and used in the Can-Am series in 1969 and 1970, but with no success. This car is sometimes called the "Ford Mk.IV".
For years Peter Thorp had searched for a GT40 in good condition. Most of the cars had problems, including the dreaded rust issue. His company, Safir Engineering, was building and fielding Formula 3 race cars and, in addition, had a Token Formula One car purchased from Ron Dennis's company, Rondel Racing. Formula One events in which Safir Engineering competed included Brands Hatch and Silverstone. Safir was also redesigning Range Rovers, modifying them to six-wheel drive and exporting them. Safir's technical capabilities were such that it could rebuild GT40s. It was with this in mind that Thorp approached John Willment for his thoughts. It was soon decided that there would be a further, limited run of the significant GT40. JW Engineering would oversee the build, and Safir was to do the work. The continued JW Engineering/Safir Engineering production would use sequential serial numbers starting at the last used GT40 serial number and moving forward. Maintaining the GT40 Mark nomenclature, this continued production would be named the GT40 MkV.
JW Engineering wished to complete GT40 chassis numbers GT40P-1087, 1088, and 1089. This was supposed to take place prior to the beginning of Safir production; however, the completion of these three chassis was considerably delayed.
Ford's Len Bailey was hired to inspect the proposed build and engineer any changes he thought prudent to ensure the car was safe, as well as to minimize problems experienced in the past. Bailey changed the front suspension to Alan Mann specifications, which minimized nose-dive under braking. Zinc-coated steel replaced the previously uncoated, rust-prone sheet metal. The vulnerable drive donuts were replaced with CV joints, and the leak-prone rubber gas tanks were replaced with aluminum tanks. The GT40 chassis was thus upgraded without any major changes.
Tennant Panels supplied the roof structure, and the balance of the chassis was completed by Safir. Bill Pink, noted for his electrical experience and for the wiring installation of previous GT40s, was brought in, and Jim Rose was hired for his experience working at both Alan Mann and Shelby. After the manufacture of chassis 1120, John Etheridge was hired to manage the GT40 build. Chassis were supplied by Adams McCall Engineering and parts by Tennant Panels.
For the most part, the MkV very closely resembled the MkI, although there were a few changes and, as with the 1960s production, very few cars were identical.
The first car, GT40P-1090, had an open top in place of roofed doors. Most engines were Ford small blocks, fed by Webers or a four-barrel carburetor. Safir produced five big-block GT40s, serial numbers GT40P-1128 to GT40P-1132; these aluminum big-block cars all had easily removable door roof sections. Most GT40s were high-performance street cars, but some of the MkV production can be described as full race. Two road cars, GT40P-1133 (a roadster) and GT40P-1142 (with roofed doors), were built as lightweights, which included an aluminum honeycomb chassis and carbon fiber bodywork.
Several kit cars and replicas inspired by the Ford GT40 have been built. They are generally intended for assembly by the customer in a home workshop or garage. There are two alternatives to the kit car approach, either continuation models (exact and licensed replicas true to the original GT40) or modernizations (replicas with upgraded components, ergonomics & trim for improved usability, drivability, and performance).
At the 1995 North American International Auto Show, the Ford GT90 concept was shown and at the 2002 show, a new GT40 Concept was unveiled by Ford.
While similar in appearance to the original cars, it was bigger, wider, and 3 inches (76 mm) taller than the original 40 inches (1020 mm). Three production prototype cars were shown in 2003 as part of Ford's centenary, and delivery of the production Ford GT began in the fall of 2004. The Ford GT was assembled in the Ford Wixom plant and painted by Saleen, Incorporated at their Saleen Special Vehicles plant in Troy, Michigan.
A British company, Safir Engineering, which continued to produce a limited number of GT40s (the MkV) in the 1980s under an agreement with Walter Hayes of Ford and John Willment of J.W. Automotive Engineering, owned the GT40 trademark at that time, and when it completed production, it sold the excess parts, tooling, design, and trademark to a small American company called Safir GT40 Spares, Limited, based in Ohio. Safir GT40 Spares licensed the use of the GT40 trademark to Ford for the initial 2002 show car, but when Ford decided to make the production vehicle, negotiations between the two failed, and as a result, the new Ford GT does not wear the GT40 badge. Bob Wood, one of three partners who own Safir GT40 Spares, said: "When we talked with Ford, they asked what we wanted. We said that Ford owns Beanstalk in New York, the company that licenses the Blue Oval for Ford on such things as T-shirts. Since Beanstalk gets 7.5 percent of the retail cost of the item for licensing the name, we suggested 7.5 percent on each GT40 sold." In this instance, Ford wished to purchase, not just license, the GT40 trademark. At the then-estimated $125,000 per copy, 7.5% of 4,500 vehicles would have totalled approximately $42,187,500. It was widely and erroneously reported, following an "Automotive News Weekly" story, that Safir "demanded" the $40 million for the sale of the trademark. Discussions between Safir and Ford ensued; however, in fact, the Ford Motor Company never made a written offer to purchase the famed GT40 trademark. Later models or prototypes have also been called the Ford GT but have had different numbering, such as the Ford GT90 or the Ford GT70. The GT40 name and trademark are currently licensed to Superformance in the USA.
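The royalty figure quoted above is straightforward arithmetic; as a quick illustrative check (all figures taken from the text, not from any official Ford or Safir calculation):

```python
# Illustrative check of the royalty arithmetic described in the text:
# a 7.5% royalty on an estimated $125,000 retail price, across 4,500 cars.
unit_price = 125_000      # estimated price per car, USD
royalty_rate = 0.075      # 7.5% licensing rate suggested by Safir
planned_units = 4_500     # estimated production run

total_royalty = unit_price * royalty_rate * planned_units
print(f"${total_royalty:,.0f}")  # → $42,187,500
```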
A second-generation Ford GT was unveiled at the 2015 North American International Auto Show. It features a 3.5L twin-turbocharged V6 engine, carbon fiber monocoque and body panels, pushrod suspension and active aerodynamics. It entered the 2016 season of the FIA World Endurance Championship and the United SportsCar Championship, and started being sold in a street-legal version at Ford dealerships in 2017.
Glycine
Glycine (symbol Gly or G; ) is an amino acid that has a single hydrogen atom as its side chain. It is the simplest amino acid (since carbamic acid is unstable), with the chemical formula NH2‐CH2‐COOH. Glycine is one of the proteinogenic amino acids. It is encoded by all the codons starting with GG (GGU, GGC, GGA, GGG). Glycine is integral to the formation of alpha-helices in secondary protein structure due to its compact form. For the same reason, it is the most abundant amino acid in collagen triple-helices. Glycine is also an inhibitory neurotransmitter; interference with its release within the spinal cord (such as during a Clostridium tetani infection) can cause spastic paralysis due to uninhibited muscle contraction.
Glycine is a colorless, sweet-tasting crystalline solid. It is the only achiral proteinogenic amino acid. It can fit into hydrophilic or hydrophobic environments, due to its minimal side chain of only one hydrogen atom. The acyl radical is glycyl.
Glycine was discovered in 1820 by the French chemist Henri Braconnot when he hydrolyzed gelatin by boiling it with sulfuric acid. He originally called it "sugar of gelatin", but the French chemist Jean-Baptiste Boussingault showed that it contained nitrogen. The American scientist Eben Norton Horsford, then a student of the German chemist Justus von Liebig, proposed the name "glycocoll"; however, the Swedish chemist Berzelius suggested the simpler name "glycine". The name comes from the Greek word γλυκύς "sweet tasting" (which is also related to the prefixes "glyco-" and "gluco-", as in "glycoprotein" and "glucose"). In 1858, the French chemist Auguste Cahours determined that glycine was an amine of acetic acid.
Although glycine can be isolated from hydrolyzed protein, this is not used for industrial production, as it can be manufactured more conveniently by chemical synthesis. The two main processes are amination of chloroacetic acid with ammonia, giving glycine and ammonium chloride, and the Strecker amino acid synthesis, which is the main synthetic method in the United States and Japan. About 15 thousand tonnes are produced annually in this way.
Glycine is also cogenerated as an impurity in the synthesis of EDTA, arising from reactions of the ammonia coproduct.
Glycine's acid–base properties are its most important. In aqueous solution, glycine is amphoteric: at low pH the molecule can be protonated with a pKa of about 2.4, and at high pH it loses a proton with a pKa of about 9.6 (precise values of pKa depend on temperature and ionic strength).
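Since both pKa values are stated, glycine's average charge at any pH follows from the Henderson–Hasselbalch equation. The following is a minimal sketch; the function and constant names are illustrative, not from any library:

```python
def fraction_protonated(pH, pKa):
    """Henderson-Hasselbalch: fraction of an ionizable group in its protonated form."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# Approximate pKa values quoted above (they vary with temperature and ionic strength).
PKA_CARBOXYL = 2.4   # -COOH <-> -COO- + H+
PKA_AMINE = 9.6      # -NH3+ <-> -NH2 + H+

def net_charge(pH):
    """Average net charge of glycine in solution: +1 contribution from the
    protonated amine, -1 from the deprotonated carboxylate."""
    plus = fraction_protonated(pH, PKA_AMINE)            # fraction carrying -NH3+
    minus = 1.0 - fraction_protonated(pH, PKA_CARBOXYL)  # fraction carrying -COO-
    return plus - minus

# Isoelectric point of an amino acid with two ionizable groups:
# midway between the two pKa values, i.e. 6.0 for glycine.
pI = (PKA_CARBOXYL + PKA_AMINE) / 2
```

At pH 6.0 the net charge is essentially zero (the zwitterion dominates), while at strongly acidic or basic pH the charge approaches +1 or −1, matching the amphoteric behaviour described above.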
Glycine functions as a bidentate ligand for many metal ions. A typical complex is Cu(glycinate)2, i.e. Cu(H2NCH2CO2)2, which exists both in cis and trans isomers.
As a bifunctional molecule, glycine reacts with many reagents. These can be classified into N-centered and carboxylate-centered reactions.
The amine undergoes the expected reactions. With acid chlorides, one obtains the amidocarboxylic acid, such as hippuric acid and acetylglycine. With nitrous acid, one obtains glycolic acid (van Slyke determination). With methyl iodide, the amine becomes quaternized to give betaine, a natural product:
Glycine condenses with itself to give peptides, beginning with the formation of glycylglycine:
Pyrolysis of glycine or glycylglycine gives 2,5-diketopiperazine, the cyclic diamide.
Glycine is not essential to the human diet, as it is biosynthesized in the body from the amino acid serine, which is in turn derived from 3-phosphoglycerate, but the metabolic capacity for glycine biosynthesis does not satisfy the need for collagen synthesis. In most organisms, the enzyme serine hydroxymethyltransferase catalyses this transformation via the cofactor pyridoxal phosphate:
In the liver of vertebrates, glycine synthesis is catalyzed by glycine synthase (also called glycine cleavage enzyme). This conversion is readily reversible:
Glycine is degraded via three pathways. The predominant pathway in animals and plants is the reverse of the glycine synthase pathway mentioned above. In this context, the enzyme system involved is usually called the glycine cleavage system:
In the second pathway, glycine is degraded in two steps. The first step is the reverse of glycine biosynthesis from serine with serine hydroxymethyl transferase. Serine is then converted to pyruvate by serine dehydratase.
In the third pathway of its degradation, glycine is converted to glyoxylate by D-amino acid oxidase. Glyoxylate is then oxidized by hepatic lactate dehydrogenase to oxalate in an NAD+-dependent reaction.
The half-life of glycine and its elimination from the body varies significantly based on dose. In one study, the half-life varied between 0.5 and 4.0 hours.
Glycine is extremely sensitive to antibiotics that target folate; blood glycine levels drop severely within a minute of antibiotic injection. Some antibiotics can deplete more than 90% of glycine within a few minutes of being administered.
The principal function of glycine is as a precursor to proteins. Most proteins incorporate only small quantities of glycine, a notable exception being collagen, which contains about 35% glycine due to its periodically repeated role in the formation of collagen's helix structure in conjunction with hydroxyproline. In the genetic code, glycine is coded by all codons starting with GG, namely GGU, GGC, GGA and GGG.
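The GG* codon pattern and glycine's abundance in collagen can both be checked in a few lines; the helper names and the collagen-like sample sequence below are illustrative:

```python
# Every codon beginning with "GG" encodes glycine in the standard genetic code.
GLYCINE_CODONS = {"GGU", "GGC", "GGA", "GGG"}

def codes_for_glycine(codon):
    """True if an RNA codon (or a DNA codon, written with T for U) encodes glycine."""
    return codon.upper().replace("T", "U") in GLYCINE_CODONS

def glycine_fraction(protein):
    """Fraction of glycine residues (one-letter code G) in a protein sequence."""
    return protein.count("G") / len(protein)

# Collagen's Gly-X-Y repeat places glycine at every third position, which is
# why collagen is roughly one-third glycine; this fragment is illustrative.
sample = "GPAGPPGKDGPA"
```

Running `glycine_fraction(sample)` on the repeat-structured fragment gives exactly one third, in line with the ~35% glycine content of collagen noted above.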
In higher eukaryotes, δ-aminolevulinic acid, the key precursor to porphyrins, is biosynthesized from glycine and succinyl-CoA by the enzyme ALA synthase. Glycine provides the central C2N subunit of all purines.
Glycine is an inhibitory neurotransmitter in the central nervous system, especially in the spinal cord, brainstem, and retina. When glycine receptors are activated, chloride enters the neuron via ionotropic receptors, causing an inhibitory postsynaptic potential (IPSP). Strychnine is a strong antagonist at ionotropic glycine receptors, whereas bicuculline is a weak one. Glycine is also a required co-agonist, along with glutamate, for NMDA receptors; in contrast to its inhibitory role in the spinal cord, glycine facilitates excitatory transmission at these glutamatergic receptors. The LD50 of glycine is 7930 mg/kg in rats (oral), and it usually causes death by hyperexcitability.
In the US, glycine is typically sold in two grades: United States Pharmacopeia (“USP”), and technical grade. USP grade sales account for approximately 80 to 85 percent of the U.S. market for glycine. If purity greater than the USP standard is needed, for example for intravenous injections, a more expensive pharmaceutical grade glycine can be used. Technical grade glycine, which may or may not meet USP grade standards, is sold at a lower price for use in industrial applications, e.g., as an agent in metal complexing and finishing.
Glycine is not widely used in foods for its nutritional value, except in infusions. Instead, glycine's role in food chemistry is as a flavorant. It is mildly sweet, and it counters the aftertaste of saccharin. It also has preservative properties, perhaps owing to its complexation with metal ions. Metal glycinate complexes, e.g. copper(II) glycinate, are used as supplements in animal feeds.
Glycine is an intermediate in the synthesis of a variety of chemical products. It is used in the manufacture of the herbicides glyphosate, iprodione, glyphosine, imiprothrin, and eglinazine. It is also used as an intermediate in the synthesis of medicines such as thiamphenicol.
Glycine is a significant component of some solutions used in the SDS-PAGE method of protein analysis. It serves as a buffering agent, maintaining pH and preventing sample damage during electrophoresis. Glycine is also used to remove protein-labeling antibodies from Western blot membranes to enable the probing of numerous proteins of interest from SDS-PAGE gel. This allows more data to be drawn from the same specimen, increasing the reliability of the data, reducing the amount of sample processing, and number of samples required. This process is known as stripping.
The presence of glycine outside the earth was confirmed in 2009, based on the analysis of samples that had been taken in 2004 by the NASA spacecraft Stardust from comet Wild 2 and subsequently returned to earth. Glycine had previously been identified in the Murchison meteorite in 1970. The discovery of cometary glycine bolstered the theory of panspermia, which claims that the "building blocks" of life are widespread throughout the Universe. In 2016, detection of glycine within Comet 67P/Churyumov-Gerasimenko by the Rosetta spacecraft was announced.
The detection of glycine outside the solar system in the interstellar medium has been debated. In 2008, the Max Planck Institute for Radio Astronomy discovered the spectral lines of a glycine-like molecule aminoacetonitrile in the Large Molecule Heimat, a giant gas cloud near the galactic center in the constellation Sagittarius.
|
https://en.wikipedia.org/wiki?curid=11835
|
GeekSpeak
GeekSpeak is a podcast with two to four hosts who focus on technology and technology news of the week. Though originally a radio tech call-in program, which first aired in 1998 on KUSP, GeekSpeak has been a weekly podcast since 2004.
The program's slogan is "Bridging the gap between geeks and the rest of humanity."
GeekSpeak was created and originally broadcast on KUSP by Chris Neklason of Cruzio, Steve Schaefer of Guenther Computer, and board operator Ray Price from KUSP. Shortly thereafter, Mark Hanford of Cruzio joined the program.
Currently, the host/producer is Lyle Troxell, who took over in September 2000.
In April 2016, citing financial difficulties, KUSP announced it would stop broadcasting GeekSpeak; the final broadcast aired on May 5, 2016.
GeekSpeak episodes have been distributed as an internet archive since 2001. The podcast went live before March 5, 2005, with its first episode dated December 3, 2004.
|
https://en.wikipedia.org/wiki?curid=11844
|
Le Corbusier
Charles-Édouard Jeanneret (6 October 1887 – 27 August 1965), known as Le Corbusier, was a Swiss-French architect, designer, painter, urban planner, writer, and one of the pioneers of what is now called modern architecture. He was born in Switzerland and became a French citizen in 1930. His career spanned five decades, and he designed buildings in Europe, Japan, India, and North and South America.
Dedicated to providing better living conditions for the residents of crowded cities, Le Corbusier was influential in urban planning, and was a founding member of the Congrès International d'Architecture Moderne (CIAM). Le Corbusier prepared the master plan for the city of Chandigarh in India, and contributed specific designs for several buildings there, especially the government buildings.
On 17 July 2016, seventeen projects by Le Corbusier in seven countries were inscribed in the list of UNESCO World Heritage Sites as The Architectural Work of Le Corbusier, an Outstanding Contribution to the Modern Movement.
Charles-Édouard Jeanneret was born on 6 October 1887 in La Chaux-de-Fonds, a small city in the French-speaking Neuchâtel canton in north-western Switzerland, in the Jura mountains, across the border from France. It was an industrial town, devoted to manufacturing watches. (He adopted the pseudonym "Le Corbusier" in 1920.) His father was an artisan who enameled boxes and watches, and his mother taught piano. His elder brother Albert was an amateur violinist. He attended a kindergarten that used Fröbelian methods.
Like his contemporaries Frank Lloyd Wright and Mies van der Rohe, Le Corbusier lacked formal training as an architect. He was attracted to the visual arts; at the age of fifteen he entered the municipal art school in La-Chaux-de-Fonds which taught the applied arts connected with watchmaking. Three years later he attended the higher course of decoration, founded by the painter Charles L'Eplattenier, who had studied in Budapest and Paris. Le Corbusier wrote later that L'Eplattenier had made him "a man of the woods" and taught him painting from nature. His father frequently took him into the mountains around the town. He wrote later, "we were constantly on mountaintops; we grew accustomed to a vast horizon." His architecture teacher in the Art School was architect René Chapallaz, who had a large influence on Le Corbusier's earliest house designs. He reported later that it was the art teacher L'Eplattenier who made him choose architecture. "I had a horror of architecture and architects," he wrote. "...I was sixteen, I accepted the verdict and I obeyed. I moved into architecture."
Le Corbusier began teaching himself by going to the library to read about architecture and philosophy, by visiting museums, by sketching buildings, and by constructing them. In 1905, he and two other students, under the supervision of their teacher, René Chapallaz, designed and built his first house, the Villa Fallet, for the engraver Louis Fallet, a friend of his teacher Charles L'Eplattenier. Located on the forested hillside near Chaux-de-fonds, it was a large chalet with a steep roof in the local alpine style and carefully crafted colored geometric patterns on the façade. The success of this house led to his construction of two similar houses, the Villas Jacquemet and Stotzer, in the same area.
In September 1907, he made his first trip outside of Switzerland, going to Italy; then that winter traveling through Budapest to Vienna, where he stayed for four months and met Gustav Klimt and tried, without success, to meet Josef Hoffmann. In Florence, he visited the Florence Charterhouse in Galluzzo, which made a lifelong impression on him. "I would have liked to live in one of what they called their cells," he wrote later. "It was the solution for a unique kind of worker's housing, or rather for a terrestrial paradise." He traveled to Paris, and for fourteen months between 1908 and 1910 he worked as a draftsman in the office of the architect Auguste Perret, the pioneer of the use of reinforced concrete in residential construction and the architect of the Art Deco landmark Théâtre des Champs-Élysées. Two years later, between October 1910 and March 1911, he traveled to Germany and worked four months in the office of Peter Behrens, where Ludwig Mies van der Rohe and Walter Gropius were also working and learning.
In 1911, he traveled again with his friend August Klipstein for five months; this time he journeyed to the Balkans and visited Serbia, Bulgaria, Turkey, Greece, as well as Pompeii and Rome, filling nearly 80 sketchbooks with renderings of what he saw—including many sketches of the Parthenon, whose forms he would later praise in his work "Vers une architecture" (1923). He spoke of what he saw during this trip in many of his books, and it was the subject of his last book, "Le Voyage d'Orient".
In 1912, he began his most ambitious project: a new house for his parents, also located on the forested hillside near La-Chaux-de-Fonds. The Jeanneret-Perret house was larger than the others, and in a more innovative style; the horizontal planes contrasted dramatically with the steep alpine slopes, and the white walls and lack of decoration were in sharp contrast with the other buildings on the hillside. The interior spaces were organized around the four pillars of the salon in the center, foretelling the open interiors he would create in his later buildings. The project was more expensive to build than he imagined; his parents were forced to move from the house within ten years and relocate to a more modest house. However, it led to a commission to build an even more imposing villa in the nearby village of Le Locle for a wealthy watch manufacturer, Georges Favre-Jacot. Le Corbusier designed the new house in less than a month. The building was carefully designed to fit its hillside site, and the interior plan was spacious and designed around a courtyard for maximum light, a significant departure from the traditional house.
During World War I, Le Corbusier taught at his old school in La-Chaux-de-Fonds. He concentrated on theoretical architectural studies using modern techniques. In December 1914, along with the engineer Max Dubois, he began a serious study of the use of reinforced concrete as a building material. He had first discovered concrete working in the office of Auguste Perret, the pioneer of reinforced concrete architecture in Paris, but now wanted to use it in new ways.
"Reinforced concrete provided me with incredible resources," he wrote later, "and variety, and a passionate plasticity in which by themselves my structures will be the rhythm of a palace, and a Pompeiian tranquility." This led him to his plan for the Dom-Ino House (1914–15). This model proposed an open floor plan consisting of three concrete slabs supported by six thin reinforced concrete columns, with a stairway providing access to each level on one side of the floor plan. The system was originally designed to provide large numbers of temporary residences after World War I, producing only slabs, columns and stairways; residents could build exterior walls with the materials around the site. He described it in his patent application as "a juxtaposable system of construction according to an infinite number of combinations of plans." This would permit, he wrote, "the construction of the dividing walls at any point on the façade or the interior."
Under this system, the structure of the house did not have to appear on the outside, but could be hidden behind a glass wall, and the interior could be arranged in any way the architect liked. After it was patented, Le Corbusier designed a number of houses according to the system, which were all white concrete boxes. Although some of these were never built, they illustrated his basic architectural ideas which would dominate his works throughout the 1920s. He refined the idea in his 1927 book on the "Five Points of a New Architecture". This design, which called for the disassociation of the structure from the walls, and the freedom of plans and façades, became the foundation for most of his architecture over the next ten years.
In August 1916, Le Corbusier received his largest commission ever, to construct a villa for the Swiss watchmaker Anatole Schwob, for whom he had already completed several small remodeling projects. He was given a large budget and the freedom to design not only the house, but also to create the interior decoration and choose the furniture. Following the precepts of Auguste Perret, he built the structure out of reinforced concrete and filled the gaps with brick. The center of the house is a large concrete box with two semicolumn structures on both sides, which reflects his ideas of pure geometrical forms. A large open hall with a chandelier occupied the center of the building. "You can see," he wrote to Auguste Perret in July 1916, "that Auguste Perret left more in me than Peter Behrens."
Le Corbusier's grand ambitions collided with the ideas and budget of his client, and led to bitter conflicts. Schwob went to court and denied Le Corbusier access to the site, or the right to claim to be the architect. Le Corbusier responded, "Whether you like it or not, my presence is inscribed in every corner of your house." Le Corbusier took great pride in the house, and reproduced pictures of it in several of his books.
Le Corbusier moved to Paris definitively in 1917 and began his own architectural practice with his cousin, Pierre Jeanneret (1896–1967), a partnership that would last until the 1950s, with an interruption in the World War II years.
In 1918, Le Corbusier met the Cubist painter Amédée Ozenfant, in whom he recognised a kindred spirit. Ozenfant encouraged him to paint, and the two began a period of collaboration. Rejecting Cubism as irrational and "romantic", the pair jointly published their manifesto, "Après le cubisme", and established a new artistic movement, Purism. Ozenfant and Le Corbusier began writing for a new journal, "L'Esprit Nouveau", and promoted his ideas of architecture with energy and imagination.
In the first issue of the journal, in 1920, Charles-Edouard Jeanneret adopted Le Corbusier (an altered form of his maternal grandfather's name, Lecorbésier) as a pseudonym, reflecting his belief that anyone could reinvent themselves. Adopting a single name to identify oneself was in vogue by artists in many fields during that era, especially in Paris.
Between 1918 and 1922, Le Corbusier did not build anything, concentrating his efforts on Purist theory and painting. In 1922, he and his cousin Pierre Jeanneret opened a studio in Paris at 35 rue de Sèvres.
They set up an architectural practice together. From 1927 to 1937 they worked together with Charlotte Perriand at the Le Corbusier-Pierre Jeanneret studio. In 1929 the trio prepared the “House fittings” section for the Decorative Artists Exhibition and asked for a group stand, renewing and widening the 1928 avant-garde group idea. This was refused by the Decorative Artists Committee. They resigned and founded the Union of Modern Artists (“Union des artistes modernes”: UAM).
His theoretical studies soon advanced into several different single-family house models. Among these was the Maison "Citrohan". The project's name was a reference to the French Citroën automaker, for the modern industrial methods and materials Le Corbusier advocated using in the house's construction as well as the way in which he intended the homes would be consumed, similar to other commercial products, like the automobile.
As part of the Maison Citrohan model, Le Corbusier proposed a three-floor structure, with a double-height living room, bedrooms on the second floor, and a kitchen on the third floor. The roof would be occupied by a sun terrace. On the exterior Le Corbusier installed a stairway to provide second-floor access from ground level. Here, as in other projects from this period, he also designed the façades to include large uninterrupted banks of windows. The house used a rectangular plan, with exterior walls that were not filled by windows but left as white, stuccoed spaces. Le Corbusier and Jeanneret left the interior aesthetically spare, with any movable furniture made of tubular metal frames. Light fixtures usually comprised single, bare bulbs. Interior walls also were left white.
In 1922 and 1923, Le Corbusier devoted himself to advocating his new concepts of architecture and urban planning in a series of polemical articles published in "L'Esprit Nouveau". At the Paris Salon d'Automne in 1922, he presented his plan for the Ville Contemporaine, a model city for three million people, whose residents would live and work in a group of identical sixty-story tall apartment buildings surrounded by lower zig-zag apartment blocks and a large park. In 1923, he collected his essays from "L'Esprit Nouveau" and published his first and most influential book, "Towards an Architecture". He presented his ideas for the future of architecture in a series of maxims, declarations, and exhortations, pronouncing that "a grand epoch has just begun. There exists a new spirit. There already exist a crowd of works in the new spirit, they are found especially in industrial production. Architecture is suffocating in its current uses. "Styles" are a lie. Style is a unity of principles which animates all the work of a period and which result in a characteristic spirit...Our epoch determines each day its style...Our eyes, unfortunately, don't know how to see it yet," and his most famous maxim, "A house is a machine to live in." Most of the many photographs and drawings in the book came from outside the world of traditional architecture; the cover showed the promenade deck of an ocean liner, while others showed racing cars, airplanes, factories, and the huge concrete and steel arches of zeppelin hangars.
An important early work of Le Corbusier was the Esprit Nouveau Pavilion, built for the 1925 Paris International Exhibition of Modern Decorative and Industrial Arts, the event which later gave Art Deco its name. Le Corbusier built the pavilion in collaboration with Amédée Ozenfant and with his cousin Pierre Jeanneret. Le Corbusier and Ozenfant had broken with Cubism and formed the Purism movement in 1918 and in 1920 founded their journal "L'Esprit Nouveau". In his new journal, Le Corbusier vividly denounced the decorative arts: "Decorative Art, as opposed to the machine phenomenon, is the final twitch of the old manual modes, a dying thing." To illustrate his ideas, he and Ozenfant decided to create a small pavilion at the Exposition, representing his idea of the future urban housing unit. A house, he wrote, "is a cell within the body of a city. The cell is made up of the vital elements which are the mechanics of a house...Decorative art is antistandardizational. Our pavilion will contain only standard things created by industry in factories and mass produced, objects truly of the style of today...my pavilion will therefore be a cell extracted from a huge apartment building."
Le Corbusier and his collaborators were given a plot of land located behind the Grand Palais in the center of the Exposition. The plot was forested, and exhibitors could not cut down trees, so Le Corbusier built his pavilion with a tree in the center, emerging through a hole in the roof. The building was a stark white box with an interior terrace and square glass windows. The interior was decorated with a few cubist paintings and with a few pieces of mass-produced commercially available furniture, entirely different from the expensive, one-of-a-kind pieces in the other pavilions. The chief organizers of the Exposition were furious, and built a fence to partially hide the pavilion. Le Corbusier had to appeal to the Ministry of Fine Arts, which ordered that the fence be taken down.
Besides the furniture, the pavilion exhibited a model of his Plan Voisin, his provocative plan for rebuilding a large part of the centre of Paris. He proposed to bulldoze a large area north of the Seine and replace the narrow streets, monuments and houses with giant sixty-story cruciform towers placed within an orthogonal street grid and park-like green space. His scheme was met with criticism and scorn from French politicians and industrialists, although they were favorable to the ideas of Taylorism and Fordism underlying his designs. The plan was never seriously considered, but it provoked discussion concerning how to deal with the overcrowded poor working-class neighborhoods of Paris, and it later saw partial realization in the housing developments built in the Paris suburbs in the 1950s and 1960s.
The Pavilion was ridiculed by many critics, but Le Corbusier, undaunted, wrote: "Right now one thing is sure. 1925 marks the decisive turning point in the quarrel between the old and new. After 1925, the antique-lovers will have virtually ended their lives . . . Progress is achieved through experimentation; the decision will be awarded on the field of battle of the 'new'."
In 1925, Le Corbusier combined a series of articles about decorative art from "L'Esprit Nouveau" into a book, "L'art décoratif d'aujourd'hui" ("The Decorative Art of Today"). The book was a spirited attack on the very idea of decorative art. His basic premise, repeated throughout the book, was: "Modern decorative art has no decoration." He attacked with enthusiasm the styles presented at the 1925 Exposition of Decorative Arts: "The desire to decorate everything about one is a false spirit and an abominable small perversion...The religion of beautiful materials is in its final death agony...The almost hysterical onrush in recent years toward this quasi-orgy of decor is only the last spasm of a death already predictable." He cited the 1912 book of the Austrian architect Adolf Loos "Ornament and crime", and quoted Loos's dictum, "The more a people are cultivated, the more decor disappears." He attacked the deco revival of classical styles, what he called "Louis Philippe and Louis XVI moderne"; he condemned the "symphony of color" at the Exposition, and called it "the triumph of assemblers of colors and materials. They were swaggering in colors... They were making stews out of fine cuisine." He condemned the exotic styles presented at the Exposition based on the art of China, Japan, India and Persia. "It takes energy today to affirm our western styles." He criticized the "precious and useless objects that accumulated on the shelves" in the new style. He attacked the "rustling silks, the marbles which twist and turn, the vermilion whiplashes, the silver blades of Byzantium and the Orient…Let's be done with it!"
"Why call bottles, chairs, baskets and objects decorative?" Le Corbusier asked. "They are useful tools….Decor is not necessary. Art is necessary." He declared that in the future the decorative arts industry would produce only "objects which are perfectly useful, convenient, and have a true luxury which pleases our spirit by their elegance and the purity of their execution, and the efficiency of their services. This rational perfection and precise determinate creates the link sufficient to recognize a style." He described the future of decoration in these terms: "The ideal is to go work in the superb office of a modern factory, rectangular and well-lit, painted in white Ripolin (a major French paint manufacturer); where healthy activity and laborious optimism reign." He concluded by repeating "Modern decoration has no decoration".
The book became a manifesto for those who opposed the more traditional styles of the decorative arts. In the 1930s, as Le Corbusier predicted, the modernized versions of Louis Philippe and Louis XVI furniture and the brightly colored wallpapers of stylized roses were replaced by a more sober, more streamlined style. Gradually the modernism and functionality proposed by Le Corbusier overtook the more ornamental style. The shorthand title that Le Corbusier used in the book, "1925 Expo: Arts Deco", was adapted in 1966 by the art historian Bevis Hillier for a catalog of an exhibition on the style, and in 1968 in the title of a book, "Art Deco of the 20s and 30s". Thereafter the term "Art Deco" was commonly used as the name of the style.
The notoriety that Le Corbusier achieved from his writings and the Pavilion at the 1925 Exposition led to commissions to build a dozen residences in Paris and in the Paris region in his "purist style." These included the Maison La Roche/Albert Jeanneret (1923–1925), which now houses the Fondation Le Corbusier; the Maison Guiette in Antwerp, Belgium (1926); a residence for Jacques Lipchitz; the Maison Cook; and the Maison Planeix. In 1927, he was invited by the German Werkbund to build three houses in the model city of Weissenhof near Stuttgart, based on the Citrohan House and other theoretical models he had published. He described this project in detail in one of his best-known essays, the "Five Points of Architecture".
The following year he began the Villa Savoye (1928–1931), which became one of the most famous of Le Corbusier's works, and an icon of modernist architecture. Located in Poissy, in a landscape surrounded by trees and a large lawn, the house is an elegant white box poised on rows of slender pylons, surrounded by a horizontal band of windows which fill the structure with light. The service areas (parking, rooms for servants and laundry room) are located under the house. Visitors enter a vestibule from which a gentle ramp leads to the house itself. The bedrooms and salons of the house are distributed around a suspended garden; the rooms look both out at the landscape and into the garden, which provides additional light and air. Another ramp leads up to the roof, and a stairway leads down to the cellar under the pillars.
Villa Savoye succinctly summed up the five points of architecture that he had elucidated in "L'Esprit Nouveau" and the book "Vers une architecture", which he had been developing throughout the 1920s. First, Le Corbusier lifted the bulk of the structure off the ground, supporting it by "pilotis", reinforced concrete stilts. These "pilotis", in providing the structural support for the house, allowed him to elucidate his next two points: a free façade, meaning non-supporting walls that could be designed as the architect wished, and an open floor plan, meaning that the floor space was free to be configured into rooms without concern for supporting walls. The second floor of the Villa Savoye includes long strips of ribbon windows that allow unencumbered views of the large surrounding garden, and which constitute the fourth point of his system. The fifth point was the roof garden to compensate for the green area consumed by the building and replacing it on the roof. A ramp rising from ground level to the third-floor roof terrace allows for a promenade architecturale through the structure. The white tubular railing recalls the industrial "ocean-liner" aesthetic that Le Corbusier much admired.
Le Corbusier was quite rhapsodic when describing the house in "Précisions" in 1930: "the plan is pure, exactly made for the needs of the house. It has its correct place in the rustic landscape of Poissy. It is Poetry and lyricism, supported by technique." The house had its problems: the roof persistently leaked, due to construction faults, but it became a landmark of modern architecture and one of the best-known works of Le Corbusier.
Thanks to his passionate articles in L'Esprit Nouveau, his participation in the 1925 Decorative Arts Exposition and the conferences he gave on the new spirit of architecture, Le Corbusier had become well known in the architectural world, though he had only built residences for wealthy clients. In 1926, he entered the competition for the construction of a headquarters for the League of Nations in Geneva with a plan for an innovative lakeside complex of modernist white concrete office buildings and meeting halls. There were 337 projects in the competition. It appeared that Le Corbusier's project was the first choice of the architectural jury, but after much behind-the-scenes maneuvering the jury declared it was unable to pick a single winner, and the project was given instead to the top five architects, who were all neoclassicists. Le Corbusier was not discouraged; he presented his own plans to the public in articles and lectures to show the opportunity that the League of Nations had missed.
In 1926, Le Corbusier received the opportunity he had been looking for; he was commissioned by a Bordeaux industrialist, Henry Frugès, a fervent admirer of his ideas on urban planning, to build a complex of worker housing, the Cité Frugès, at Pessac, a suburb of Bordeaux. Le Corbusier described Pessac as "A little like a Balzac novel", a chance to create a whole community for living and working. The Frugès quarter became his first laboratory for residential housing: a series of rectangular blocks composed of modular housing units located in a garden setting. Like the unit displayed at the 1925 Exposition, each housing unit had its own small terrace. The earlier villas he constructed all had white exterior walls, but for Pessac, at the request of his clients, he added color; panels of brown, yellow and jade green, coordinated by Le Corbusier. Originally planned to have some two hundred units, it finally contained about fifty to seventy housing units, in eight buildings. Pessac became the model on a small scale for his later and much larger Cité Radieuse projects.
In 1928, Le Corbusier took a major step toward establishing modernist architecture as the dominant European style. Le Corbusier had met with many of the leading German and Austrian modernists during the competition for the League of Nations in 1927. In the same year, the German Werkbund organized an architectural exposition at the Weissenhof Estate in Stuttgart. Seventeen leading modernist architects in Europe were invited to design twenty-one houses; Le Corbusier and Mies van der Rohe played a major part. In 1927 Le Corbusier, Pierre Chareau and others proposed the foundation of an international conference to establish the basis for a common style. The first meeting of the "Congrès Internationaux d'Architecture Moderne" or International Congresses of Modern Architects (CIAM), was held in a château on Lake Leman in Switzerland 26–28 June 1928. Those attending included Le Corbusier, Robert Mallet-Stevens, Auguste Perret, Pierre Chareau and Tony Garnier from France; Victor Bourgeois from Belgium; Walter Gropius, Erich Mendelsohn, Ernst May and Mies van der Rohe from Germany; Josef Frank from Austria; Mart Stam and Gerrit Rietveld from the Netherlands, and Adolf Loos from Czechoslovakia. A delegation of Soviet architects was invited to attend, but they were unable to obtain visas. Later members included Josep Lluís Sert of Spain and Alvar Aalto of Finland. No one attended from the United States. A second meeting was organized in 1930 in Brussels by Victor Bourgeois on the topic "Rational methods for groups of habitations". A third meeting, on "The functional city", was scheduled for Moscow in 1932, but was cancelled at the last minute. Instead the delegates held their meeting on a cruise ship traveling between Marseille and Athens. On board, they together drafted a text on how modern cities should be organized.
The text, called The Athens Charter, after considerable editing by Le Corbusier and others, was finally published in 1943 and became an influential text for city planners in the 1950s and 1960s. The group met once more in Paris in 1937 to discuss public housing and was scheduled to meet in the United States in 1939, but the meeting was cancelled because of the war. The legacy of the CIAM was a roughly common style and doctrine which helped define modern architecture in Europe and the United States after World War II.
Le Corbusier saw the new society founded in the Soviet Union after the Russian Revolution as a promising laboratory for his architectural ideas. He met the Russian architect Konstantin Melnikov during the 1925 Decorative Arts Exposition in Paris, and admired the construction of Melnikov's constructivist USSR pavilion, the only truly modernist building in the Exposition other than his own Esprit Nouveau pavilion. At Melnikov's invitation he traveled to Moscow, where he found that his writings had been published in Russian; he gave lectures and interviews, and between 1928 and 1932 he constructed an office building for the Tsentrosoyuz, the headquarters of Soviet trade unions.
In 1932, he was invited to take part in an international competition for the new Palace of the Soviets in Moscow, which was to be built on the site of the Cathedral of Christ the Saviour, demolished on Stalin's orders. Le Corbusier contributed a highly original plan, a low-level complex of circular and rectangular buildings and a rainbow-like arch from which the roof of the main meeting hall was suspended. To Le Corbusier's distress, his plan was rejected by Stalin in favor of a plan for a massive neoclassical tower, the highest in Europe, crowned with a statue of Vladimir Lenin. The Palace was never built; construction was stopped by World War II and a swimming pool took its place; after the collapse of the USSR the cathedral was rebuilt on its original site.
Between 1928 and 1934, as Le Corbusier's reputation grew, he received commissions to construct a wide variety of buildings. In 1928 he received a commission from the Soviet government to construct the headquarters of the Tsentrosoyuz, or central office of trade unions, a large office building whose glass walls alternated with plaques of stone. He built the Villa de Mandrot in Le Pradet (1929–1931); and an apartment in Paris for Charles de Beistegui at the top of an existing building on the Champs-Élysées (1929–1932; later demolished). In 1929–1930 he constructed a floating homeless shelter for the Salvation Army on the left bank of the Seine at the Pont d'Austerlitz. Between 1929 and 1933, he built a larger and more ambitious project for the Salvation Army, the Cité de Refuge, on rue Cantagrel in the 13th arrondissement of Paris. He also constructed the Swiss Pavilion in the Cité Universitaire in Paris with 46 units of student housing (1929–33). He designed furniture to go with the building; the main salon was decorated with a montage of black-and-white photographs of nature. In 1948, he replaced this with a colorful mural he painted himself. In Geneva he built a glass-walled apartment building with forty-five units, the Immeuble Clarté. Between 1931 and 1945 he built an apartment building with fifteen units, including an apartment and studio for himself on the 6th and 7th floors, at 4 rue Nungesser-et-Coli in the 16th arrondissement of Paris, overlooking the Bois de Boulogne. His apartment and studio are owned today by the Fondation Le Corbusier, and can be visited.
As the global Great Depression enveloped Europe, Le Corbusier devoted more and more time to his ideas for urban design and planned cities. He believed that his new, modern architectural forms would provide an organizational solution that would raise the quality of life for the working classes. In 1922 he had presented his model of the Ville Contemporaine, a city of three million inhabitants, at the Salon d'Automne in Paris. His plan featured tall office towers surrounded by lower residential blocks in a park setting. He reported that "analysis leads to such dimensions, to such a new scale, and to the creation of an urban organism so different from those that exist that the mind can hardly imagine it." The Ville Contemporaine, presenting an imaginary city in an imaginary location, did not attract the attention that Le Corbusier wanted. For his next proposal, the Plan Voisin (1925), he took a much more provocative approach; he proposed to demolish a large part of central Paris and to replace it with a group of sixty-story cruciform office towers surrounded by parkland. This idea shocked most viewers, as it was certainly intended to do. The plan included a multi-level transportation hub that included depots for buses and trains, as well as highway intersections, and an airport. Le Corbusier had the fanciful notion that commercial airliners would land between the huge skyscrapers. He segregated pedestrian circulation paths from the roadways and created an elaborate road network. Groups of lower-rise zigzag apartment blocks, set back from the street, were interspersed among the office towers. Le Corbusier wrote: "The center of Paris, currently threatened with death, threatened by exodus, is in reality a diamond mine...To abandon the center of Paris to its fate is to desert in face of the enemy."
As no doubt Le Corbusier expected, no one hurried to implement the Plan Voisin, but he continued working on variations of the idea and recruiting followers. In 1929, he traveled to Brazil where he gave lectures on his architectural ideas. He returned with drawings of his own vision for Rio de Janeiro; he sketched serpentine multi-story apartment buildings on pylons, like inhabited highways, winding through Rio de Janeiro.
In 1931, he developed a visionary plan for another city, Algiers, then part of France. This plan, like his Rio de Janeiro plan, called for the construction of an elevated viaduct of concrete, carrying residential units, which would run from one end of the city to the other. This plan, unlike his earlier Plan Voisin, was more conservative, because it did not call for the destruction of the old city of Algiers; the residential housing would be built over the top of the old city. This plan, like his Paris plans, provoked discussion, but never came close to realization.
In 1935, Le Corbusier made his first visit to the United States. He was asked by American journalists what he thought about New York City skyscrapers; he responded, characteristically, that he found them "much too small". He wrote a book describing his experiences in the States, "Quand les cathédrales étaient blanches, voyage au pays des timides" (When the Cathedrals Were White: Voyage to the Land of the Timid), whose title expressed his view of the lack of boldness in American architecture.
He wrote a great deal but built very little in the late 1930s. The titles of his books expressed the combined urgency and optimism of his messages: "Cannons? Munitions? No thank you, Lodging please!" (1938) and "The lyricism of modern times and urbanism" (1939).
In 1928, the French Minister of Labour, Louis Loucheur, won the passage of a French law on public housing, calling for the construction of 260,000 new housing units within five years. Le Corbusier immediately began to design a new type of modular housing unit, which he called the Maison Loucheur, which would be suitable for the project. These units, made with metal frames, were designed to be mass-produced and then transported to the site, where they would be inserted into frameworks of steel and stone; the government insisted on stone walls to win the support of local building contractors. The standardisation of apartment buildings was the essence of what Le Corbusier termed the "Ville Radieuse" or "Radiant City", in a new book which he published in 1935. The Radiant City was similar to his earlier Contemporary City and Plan Voisin, with the difference that residences would be assigned by family size, rather than by income and social position. In his 1935 book, he developed his ideas for a new kind of city, where the principal functions (heavy industry, manufacturing, habitation and commerce) would be clearly separated into their own neighbourhoods, carefully planned and designed. However, before any units could be built, World War II intervened.
During the War and the German occupation of France, Le Corbusier did his best to promote his architectural projects. He moved to Vichy for a time, where the collaborationist government of Marshal Philippe Pétain was located, offering his services for architectural projects, including his plan for the reconstruction of Algiers, but they were rejected. He continued writing, completing "Sur les quatre routes" (On the Four Routes) in 1941. After 1942, Le Corbusier left Vichy for Paris. He became for a time a technical adviser at Alexis Carrel's eugenics foundation; he resigned from this position on 20 April 1944. In 1943, he founded a new association of modern architects and builders, ASCORAL, the Assembly of Constructors for a Renewal of Architecture, but there were no projects to build.
When the war ended, Le Corbusier was nearly sixty years old, and he had not had a single project realized in ten years. He tried, without success, to obtain commissions for several of the first large reconstruction projects, but his proposals for the reconstruction of the town of Saint-Dié and for La Rochelle were rejected. Still, he persisted; Le Corbusier finally found a willing partner in Raoul Dautry, the new Minister of Reconstruction and Urbanism. Dautry agreed to fund one of his projects, a "Unité d'habitation de grandeur conforme", or housing units of standard size, with the first one to be built in Marseille, which had been heavily damaged during the war.
This was his first public commission, and was a major breakthrough for Le Corbusier. He gave the building the name of his pre-war theoretical project, the "Cité Radieuse", and followed the principles that he had studied before the war: he proposed a giant reinforced concrete framework, into which modular apartments would fit like bottles into a bottle rack. Like the Villa Savoye, the structure was poised on concrete pylons, though, because of the shortage of steel to reinforce the concrete, the pylons were more massive than usual. The building contained 337 duplex apartment modules to house a total of 1,600 people. Each module was three stories high, and contained two apartments, combined so each had two levels (see diagram above). The modules ran from one side of the building to the other, and each apartment had a small terrace at each end. They were ingeniously fitted together like pieces of a Chinese puzzle, with a corridor slotted through the space between the two apartments in each module. Residents had a choice of twenty-three different configurations for the units. Le Corbusier designed furniture, carpets and lamps to go with the building, all purely functional; the only decoration was a choice of interior colors that Le Corbusier gave to residents. The only mildly decorative features of the building were the ventilator shafts on the roof, which Le Corbusier made to look like the smokestacks of an ocean liner, a functional form that he admired.
The building was designed not just to be a residence, but to offer all the services needed for living. On every third floor, between the modules, there was a wide corridor, like an interior street, which ran the length of the building from one end to the other. This served as a sort of commercial street, with shops, eating places, a nursery school and recreational facilities. A running track and a small stage for theater performances were located on the roof. The building itself was surrounded by trees and a small park.
Le Corbusier wrote later that the Unité d'Habitation concept was inspired by the visit he had made to the Florence Charterhouse at Galluzzo in Italy, in 1907 and 1910 during his early travels. He wanted to recreate, he wrote, an ideal place "for meditation and contemplation." He also learned from the monastery, he wrote, that "standardization led to perfection," and that "all of his life a man labours under this impulse: to make the home the temple of the family."
The Unité d'Habitation marked a turning point in the career of Le Corbusier; in 1952, he was made a Commander of the Légion d'Honneur in a ceremony held on the roof of his new building. He had progressed from being an outsider and critic of the architectural establishment to its centre, as the most prominent French architect.
Le Corbusier made another almost identical Unité d'Habitation in Rezé-les-Nantes in the Loire-Atlantique Department between 1948 and 1952, and three more over the following years, in Berlin, Briey-en-Forêt and Firminy; and he designed a factory for the company of Claude and Duval, in Saint-Dié in the Vosges. In the post-Second World War decades Le Corbusier’s fame moved beyond architectural and planning circles as he became one of the leading intellectual figures of the time.
In early 1947 Le Corbusier submitted a design for the headquarters of the United Nations, which was to be built beside the East River in New York. Instead of a competition, the design was to be selected by a Board of Design Consultants composed of leading international architects nominated by member governments, including Le Corbusier, Oscar Niemeyer of Brazil, Howard Robertson from Britain, Nikolai Bassov of the Soviet Union, and five others from around the world. The committee was under the direction of the American architect Wallace K. Harrison, who was also architect for the Rockefeller family, which had donated the site for the building.
Le Corbusier submitted his plan for the Secretariat, called Plan 23, one of the 58 submitted. In Le Corbusier's plan, offices, council chambers and the General Assembly hall were in a single block in the center of the site. He lobbied hard for his project, and asked the younger Brazilian architect, Niemeyer, to support and assist him on his plan. Niemeyer, to help Le Corbusier, refused to submit his own design and did not attend the meetings until the Director, Harrison, insisted. Niemeyer then submitted his own plan, Plan 32, with the office building and the council chambers and General Assembly hall in separate buildings. After much discussion, the Committee chose Niemeyer's plan, but suggested that he collaborate with Le Corbusier on the final project. Le Corbusier urged Niemeyer to put the General Assembly Hall in the center of the site, though this would eliminate Niemeyer's plan to have a large plaza in the center. Niemeyer agreed with Le Corbusier's suggestion, and the headquarters was built, with minor modifications, according to their joint plan.
Le Corbusier was an avowed atheist, but he also had a strong belief in the ability of architecture to create a sacred and spiritual environment. In the postwar years, he designed two important religious buildings; the chapel of Notre-Dame-du-Haut at Ronchamp (1950–1955); and the Convent of Sainte Marie de La Tourette (1953–1960). Le Corbusier wrote later that he was greatly aided in his religious architecture by a Dominican father, Père Couturier, who had founded a movement and review of modern religious art.
Le Corbusier first visited the remote mountain site of Ronchamp in May 1950, saw the ruins of the old chapel, and drew sketches of possible forms. He wrote afterwards: "In building this chapel, I wanted to create a place of silence, of peace, of prayer, of interior joy. The feeling of the sacred animated our effort. Some things are sacred, others aren't, whether they're religious or not."
The second major religious project undertaken by Le Corbusier was the Convent of Sainte Marie de La Tourette in L'Arbresle in the Rhone Department (1953–1960). Once again it was Father Couturier who engaged Le Corbusier in the project. He invited Le Corbusier to visit the starkly simple and imposing 12th–13th century Le Thoronet Abbey in Provence, and also drew on his memories of his youthful visit to the Ema Charterhouse near Florence. This project involved not only a chapel, but a library, refectory, rooms for meetings and reflection, and dormitories for the friars. For the living space he used the same Modulor concept for measuring the ideal living space that he had used in the Unité d'Habitation in Marseille.
Le Corbusier used raw concrete to construct the convent, which is placed on the side of a hill. The three blocks of dormitories form a U, closed by the chapel, with a courtyard in the center. The convent has a flat roof, and is placed on sculpted concrete pillars. Each of the residential cells has a small loggia with a concrete sunscreen looking out at the countryside. The centerpiece of the convent is the chapel, a plain box of concrete, which he called his "Box of miracles". Unlike the highly finished façade of the Unité d'Habitation, the façade of the chapel is raw, unfinished concrete. He described the building in a letter to Albert Camus in 1957: "I'm taken with the idea of a 'box of miracles'...as the name indicates, it is a rectangular box made of concrete. It doesn't have any of the traditional theatrical tricks, but the possibility, as its name suggests, to make miracles." The interior of the chapel is extremely simple: only benches in a plain, unfinished concrete box, with light coming through a single square in the roof and six small bands on the sides. The crypt beneath has intense blue, red and yellow walls, and is illuminated by sunlight channeled from above. The monastery has other unusual features, including floor-to-ceiling panels of glass in the meeting rooms, window panels that fragmented the view into pieces, and a system of concrete and metal tubes like gun barrels which aimed sunlight through colored prisms and projected it onto the walls of the sacristy and the secondary altars of the crypt on the level below. These were whimsically termed the "machine guns" of the sacristy and the "light cannons" of the crypt.
In 1960, Le Corbusier began a third religious building, the Church of Saint Pierre in the new town of Firminy-Vert, where he had built a Unité d'Habitation and a cultural and sports centre. While he made the original design, construction did not begin until five years after his death, and work continued under different architects until it was completed in 2006. The most spectacular feature of the church is the sloping concrete tower that covers the entire interior, similar to that in the Assembly Building in his complex at Chandigarh. Windows high in the tower illuminate the interior. Le Corbusier originally proposed that tiny windows also project the form of a constellation on the walls. Later architects designed the church to project the constellation Orion.
Le Corbusier's largest and most ambitious project was the design of Chandigarh, the capital city of the Punjab and Haryana States of India, created after India received independence in 1947. Le Corbusier was contacted in 1950 by Indian Prime Minister Jawaharlal Nehru, and invited to propose a project. An American architect, Albert Mayer, had made a plan in 1947 for a city of 150,000 inhabitants, but the Indian government wanted a grander and more monumental city. Corbusier worked on the plan with two British specialists in urban design and tropical climate architecture, Maxwell Fry and Jane Drew, and with his cousin, Pierre Jeanneret, who moved to India and supervised the construction until his death.
Le Corbusier, as always, was rhapsodic about his project; "It will be a city of trees," he wrote, "of flowers and water, of houses as simple as those at the time of Homer, and of a few splendid edifices of the highest level of modernism, where the rules of mathematics will reign." His plan called for residential, commercial and industrial areas, along with parks and a transportation infrastructure. In the middle was the capitol, a complex of four major government buildings: the Palace of the National Assembly, the High Court of Justice, the Palace of the Secretariat of Ministers, and the Palace of the Governor. For financial and political reasons, the Palace of the Governor was dropped well into the construction of the city, throwing the final project somewhat off-balance. From the beginning, Le Corbusier worked, as he reported, "Like a forced laborer." He dismissed the earlier American plan as "Faux-Moderne" and overly filled with parking spaces and roads. His intent was to present what he had learned in forty years of urban study, and also to show the French government the opportunities they had missed in not choosing him to rebuild French cities after the War. His design made use of many of his favorite ideas: an architectural promenade, incorporating the local landscape and the sunlight and shadows into the design; the use of the Modulor to give a correct human scale to each element; and his favourite symbol, the open hand ("The hand is open to give and to receive"). He placed a monumental open hand statue in a prominent place in the design.
Le Corbusier's design called for the use of raw concrete, whose surface was not smoothed or polished and which showed the marks of the forms in which it dried. Pierre Jeanneret wrote to his cousin that he was in a continual battle with the construction workers, who could not resist the urge to smooth and finish the raw concrete, particularly when important visitors were coming to the site. At one point one thousand workers were employed on the site of the High Court of Justice. Le Corbusier wrote to his mother, "It is an architectural symphony which surpasses all my hopes, which flashes and develops under the light in a way which is unimaginable and unforgettable. From far, from up close, it provokes astonishment; all made with raw concrete and a cement cannon. Adorable, and grandiose. In all the centuries no one has seen that."
The High Court of Justice, begun in 1951, was finished in 1956. The building was radical in its design; a parallelogram topped with an inverted parasol. Along the walls were thick, high concrete grills which served as sunshades. The entry featured a monumental ramp and columns that allowed the air to circulate. The pillars were originally white limestone, but in the 1960s they were repainted in bright colors, which better resisted the weather.
The Secretariat, the largest building that housed the government offices, was constructed between 1952 and 1958. It is an enormous block, eight levels high, served by a ramp which extends from the ground to the top level. The ramp was designed to be partly sculptural and partly practical. Since there were no modern building cranes at the time of construction, the ramp was the only way to get materials to the top of the construction site. The Secretariat had two features which were borrowed from his design for the Unité d'Habitation in Marseille: concrete grill sunscreens over the windows and a roof terrace.
The most important building of the capitol complex was the Palace of Assembly (1952–61), which faced the High Court at the other end of a five hundred meter esplanade with a large reflecting pool in the front. This building features a central courtyard, over which is the main meeting hall for the Assembly. On the roof at the rear of the building is a signature feature of Le Corbusier, a large tower, similar in form to the smokestack of a ship or the ventilation tower of a heating plant. Le Corbusier added touches of color and texture with an immense tapestry in the meeting hall and a large gateway decorated with enamel. He wrote of this building, "A Palace magnificent in its effect, from the new art of raw concrete. It is magnificent and terrible; terrible meaning that there is nothing cold about it to the eyes."
The 1950s and 1960s were a difficult period for Le Corbusier's personal life; his wife Yvonne died in 1957, and his mother, to whom he was closely attached, died in 1960. He remained active in a wide variety of fields; in 1955 he published "Le Poème de l'angle droit", a portfolio of lithographs, published in the same collection as the book "Jazz" by Henri Matisse. In 1958 he collaborated with the composer Edgar Varèse on a work called "Le Poème électronique", a show of sound and light, for the Philips Pavilion at the International Exposition in Brussels. In 1960 he published a new book, "L'Atelier de la recherche patiente" ("The Workshop of Patient Research"), simultaneously published in four languages. He received growing recognition for his pioneering work in modernist architecture; in 1959, a successful international campaign was launched to have his Villa Savoye, threatened with demolition, declared an historic monument; it was the first time that a work by a living architect received this distinction. In 1962, in the same year as the dedication of the Palace of the Assembly in Chandigarh, the first retrospective exhibit on his work was held at the National Museum of Modern Art in Paris. In 1964, in a ceremony held in his atelier on rue de Sèvres, he was awarded the Grand Cross of the Légion d'honneur by Culture Minister André Malraux.
His later architectural work was extremely varied, and often based on designs of earlier projects. In 1952–1958, he designed a series of tiny vacation cabins for a site next to the Mediterranean at Roquebrune-Cap-Martin. He built a similar cabin for himself, but the rest of the project was not realized until after his death. In 1953–1957, he designed a residential building for Brazilian students in the Cité Universitaire in Paris. Between 1954 and 1959, he built the National Museum of Western Art in Tokyo. His other projects included a cultural centre and stadium for the town of Firminy, where he had built his first housing project (1955–1958); and a stadium in Baghdad, Iraq (much altered since its construction). He also constructed three new "Unités d'Habitation", apartment blocks on the model of the original in Marseille, the first in Berlin (1956–1958), the second in Briey-en-Forêt in the Meurthe-et-Moselle Department; and the third (1959–1967) in Firminy. In 1960–1963, he built his only building in the United States; the Carpenter Center for the Visual Arts in Cambridge, Massachusetts.
Le Corbusier died of a heart attack at age 77 in 1965 after swimming at the French Riviera. At the time of his death, several projects were on the drawing boards: the church of Saint-Pierre in Firminy, finally completed in modified form in 2006; and a Palace of Congresses for Strasbourg (1962–65) and a hospital in Venice (1961–1965), neither of which was built. Le Corbusier designed an art gallery beside the lake in Zürich for gallery owner Heidi Weber in 1962–1967. Now called the Centre Le Corbusier, it is one of his last finished works.
The Fondation Le Corbusier (FLC) functions as his official estate. The US copyright representative for the Fondation Le Corbusier is the Artists Rights Society.
Le Corbusier defined the principles of his new architecture in "Les cinq points de l'architecture moderne", published in 1927, and co-authored by his cousin, Pierre Jeanneret. They summarized the lessons he had learned in the previous years, which he put literally into concrete form in his villas constructed in the late 1920s, most dramatically in the Villa Savoye (1928–1931).
The five points are: the "pilotis", reinforced concrete stilts that lift the bulk of the structure off the ground; the free façade, of non-supporting walls that can be designed as the architect wishes; the open floor plan, configured into rooms without concern for supporting walls; the long horizontal ribbon windows; and the roof garden, replacing the green area consumed by the building.
The "Architectural Promenade" was another idea dear to Le Corbusier, which he particularly put into play in his design of the Villa Savoye. In 1928, in "Une Maison, un Palais", he described it: "Arab architecture gives us a precious lesson: it is best appreciated in walking, on foot. It is in walking, in going from one place to another, that you see develop the features of the architecture. In this house (Villa Savoye) you find a veritable architectural promenade, offering constantly varying aspects, unexpected, sometimes astonishing." The promenade at Villa Savoye, Le Corbusier wrote, both in the interior of the house and on the roof terrace, often erased the traditional difference between the inside and outside.
In the 1930s, Le Corbusier expanded and reformulated his ideas on urbanism, eventually publishing them in "La Ville radieuse" (The Radiant City) in 1935. Perhaps the most significant difference between the Contemporary City and the Radiant City is that the latter abandoned the class-based stratification of the former; housing was now assigned according to family size, not economic position. Some have read dark overtones into "The Radiant City": from the "astonishingly beautiful assemblage of buildings" that was Stockholm, for example, Le Corbusier saw only "frightening chaos and saddening monotony." He dreamed of "cleaning and purging" the city, bringing "a calm and powerful architecture"—referring to steel, plate glass, and reinforced concrete. Although Le Corbusier's designs for Stockholm did not succeed, later architects took his ideas and partly "destroyed" the city with them.
Le Corbusier hoped that politically minded industrialists in France would lead the way with their efficient Taylorist and Fordist strategies adopted from American industrial models to reorganize society. As Norma Evenson has put it, "the proposed city appeared to some an audacious and compelling vision of a brave new world, and to others a frigid megalomaniacally scaled negation of the familiar urban ambient."
"His ideas—his urban planning and his architecture—are viewed separately," Perelman noted of Le Corbusier, "whereas they are one and the same thing."
In "La Ville radieuse", he conceived an essentially apolitical society, in which the bureaucracy of economic administration effectively replaces the state.
Le Corbusier was heavily indebted to the thought of the 19th-century French utopians Saint-Simon and Charles Fourier. There is a noteworthy resemblance between the concept of the unité and Fourier's phalanstery. From Fourier, Le Corbusier adopted at least in part his notion of administrative, rather than political, government.
The Modulor was a standard model of the human form which Le Corbusier devised to determine the correct amount of living space needed for residents in his buildings. It was also his rather original way of dealing with differences between the metric system and British or American system, since the Modulor was not attached to either one.
Le Corbusier explicitly used the golden ratio in his Modulor system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's "Vitruvian Man", the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture. In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. Many scholars see the Modulor as a humanistic expression but it is also argued that: "It's exactly the opposite (...) It's the mathematicization of the body, the standardization of the body, the rationalization of the body."
He took Leonardo's suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system.
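The navel-height sectioning described above can be checked numerically. The following is an illustrative sketch only: the 183 cm overall height is the figure commonly associated with the final Modulor, and is treated here as an assumption.

```python
import math

# Golden-ratio sectioning of the Modulor figure (illustrative sketch).
# The 183 cm height is an assumed value, not taken from this article.
PHI = (1 + math.sqrt(5)) / 2   # golden ratio, approximately 1.618

height = 183.0                 # assumed model height in cm
navel = height / PHI           # section at the navel: height/navel = PHI
print(f"navel height = {navel:.1f} cm")        # about 113.1 cm
print(f"ratio check: {height / navel:.3f}")    # recovers PHI
```

Subdividing the two sections again in golden ratio, as the text describes for the knees and throat, is a matter of repeating the same division on each part.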
Le Corbusier's 1927 Villa Stein in Garches exemplified the Modulor system's application. The villa's rectangular ground plan, elevation, and inner structure closely approximate golden rectangles.
Le Corbusier placed systems of harmony and proportion at the centre of his design philosophy, and his faith in the mathematical order of the universe was closely bound to the golden section and the Fibonacci series, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in Man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages, and the learned."
The Open Hand (La Main Ouverte) is a recurring motif in Le Corbusier's architecture, a sign for him of "peace and reconciliation. It is open to give and open to receive." The largest of the many Open Hand sculptures that Le Corbusier created is a version in Chandigarh, India known as "Open Hand Monument".
Le Corbusier was an eloquent critic of the finely crafted, hand-made furniture, made with rare and exotic woods, inlays and coverings, presented at the 1925 Exposition of Decorative Arts. Following his usual method, Le Corbusier first wrote a book with his theories of furniture, complete with memorable slogans. In his 1925 book "L'Art Décoratif d'aujourd'hui", he called for furniture that used inexpensive materials and could be mass-produced. Le Corbusier described three different furniture types: "type-needs", "type-furniture", and "human-limb objects". He defined human-limb objects as: "Extensions of our limbs and adapted to human functions that are type-needs and type-functions, therefore type-objects and type-furniture. The human-limb object is a docile servant. A good servant is discreet and self-effacing in order to leave his master free. Certainly, works of art are tools, beautiful tools. And long live the good taste manifested by choice, subtlety, proportion, and harmony". He further declared, "Chairs are architecture, sofas are bourgeois".
Le Corbusier first relied on ready-made furniture from Thonet to furnish his projects, such as his pavilion at the 1925 Exposition. In 1928, following the publication of his theories, he began experimenting with furniture design. In 1928, he invited the architect Charlotte Perriand to join his studio as a furniture designer. His cousin, Pierre Jeanneret, also collaborated on many of the designs. For the manufacture of his furniture, he turned to the German firm Gebrüder Thonet, which had begun making chairs with tubular steel, a material originally used for bicycles, in the early 1920s. Le Corbusier admired the design of Marcel Breuer and the Bauhaus, who in 1925 had begun making sleek modern tubular club chairs. Mies van der Rohe had begun making his own version in a sculptural curved form with a cane seat in 1927.
The first results of the collaboration between Le Corbusier and Perriand were three types of chairs made with chrome-plated tubular steel frames: the LC4 chaise longue (1927–28), with a covering of cowhide, which gave it a touch of exoticism; the "Fauteuil Grand Confort" (LC3) (1928–29), a club chair with a tubular frame which resembled the comfortable Art Deco club chairs that became popular in the 1920s; and the "Fauteuil à dossier basculant" (LC1) (1928–29), a low seat suspended in a tubular steel frame, also with cowhide upholstery. These chairs were designed specifically for two of his projects, the "Maison la Roche" in Paris and a pavilion for Barbara and Henry Church. All three clearly showed the influence of Mies van der Rohe and Marcel Breuer. The line of furniture was expanded with additional designs for Le Corbusier's 1929 "Salon d'Automne" installation, "Equipment for the Home". Despite Le Corbusier's intention that his furniture should be inexpensive and mass-produced, his pieces were originally costly to make and were not mass-produced until many years later, when he was famous.
The political views of Le Corbusier were rather variable over time. In the 1920s, he co-founded and contributed articles about urbanism to the fascist journals "Plans", "Prélude" and "L'Homme Réel". He also penned pieces in favour of Nazi anti-semitism for those journals, as well as "hateful editorials". Between 1925 and 1928, Le Corbusier had connections to Le Faisceau, a short-lived French fascist party led by Georges Valois. Valois later became an anti-fascist. Le Corbusier knew another former member of Faisceau, Hubert Lagardelle, a former labor leader and syndicalist who had become disaffected with the political left. In 1934, after Lagardelle had obtained a position at the French embassy in Rome, he arranged for Le Corbusier to lecture on architecture in Italy. Lagardelle later served as minister of labor in the pro-Axis Vichy regime. While Le Corbusier sought commissions from the Vichy regime, particularly the redesign of Marseille after its Jewish population had been forcefully removed, he was unsuccessful, and the only appointment he received from it was membership of a committee studying urbanism. Alexis Carrel, a eugenicist surgeon, appointed Le Corbusier to the Department of Bio-Sociology of the "Foundation for the Study of Human Problems", an institute promoting eugenics policies under the Vichy regime.
Le Corbusier has been accused of anti-semitism. He wrote to his mother in October 1940, prior to a referendum held by the Vichy government: "The Jews are having a bad time. I occasionally feel sorry. But it appears their blind lust for money has rotted the country". He was also accused of belittling the Muslim population of Algeria, then part of France. When Le Corbusier proposed a plan for the rebuilding of Algiers, he condemned the existing housing for European Algerians, complaining that it was inferior to that inhabited by indigenous Algerians: "the civilized live like rats in holes", while "the barbarians live in solitude, in well-being." His plan for rebuilding Algiers was rejected, and thereafter Le Corbusier mostly avoided politics.
Few other 20th-century architects were praised, or criticized, as much as Le Corbusier. In his eulogy to Le Corbusier at the memorial ceremony for the architect in the courtyard of the Louvre on 1 September 1965, French Culture Minister André Malraux declared, "Le Corbusier had some great rivals, but none of them had the same significance in the revolution of architecture, because none bore insults so patiently and for so long."
Later criticism of Le Corbusier was directed at his ideas of urban planning. In 1998 the architectural historian Witold Rybczynski wrote in "Time" magazine: "He called it the Ville Radieuse, the Radiant City. Despite the poetic title, his urban vision was authoritarian, inflexible and simplistic. Wherever it was tried—in Chandigarh by Le Corbusier himself or in Brasilia by his followers—it failed. Standardization proved inhuman and disorienting. The open spaces were inhospitable; the bureaucratically imposed plan, socially destructive. In the US, the Radiant City took the form of vast urban-renewal schemes and regimented public housing projects that damaged the urban fabric beyond repair. Today, these megaprojects are being dismantled, as superblocks give way to rows of houses fronting streets and sidewalks. Downtowns have discovered that combining, not separating, different activities is the key to success. So is the presence of lively residential neighborhoods, old as well as new. Cities have learned that preserving history makes more sense than starting from zero. It has been an expensive lesson, and not one that Le Corbusier intended, but it too is part of his legacy."
The public housing projects influenced by his ideas have been criticized for isolating poor communities in monolithic high-rises and breaking the social ties integral to a community's development. One of his most influential detractors has been Jane Jacobs, who delivered a scathing critique of Le Corbusier's urban design theories in her seminal work "The Death and Life of Great American Cities".
For some critics, Le Corbusier's urbanism was the model for a fascist state. These critics cited Le Corbusier himself, who wrote that "not all citizens could become leaders. The technocratic elite, the industrialists, financiers, engineers, and artists would be located in the city centre, while the workers would be removed to the fringes of the city".
Le Corbusier was concerned by problems he saw in industrial cities at the turn of the 20th century. He thought that industrial housing techniques led to crowding, dirtiness, and a lack of a moral landscape. He was a leader of the modernist movement to create better living conditions and a better society through housing. Ebenezer Howard's "Garden Cities of Tomorrow" heavily influenced Le Corbusier and his contemporaries.
Le Corbusier revolutionized urban planning, and was a founding member of the Congrès International d'Architecture Moderne (CIAM). One of the first to realize how the automobile would change human society, Le Corbusier conceived the city of the future with large apartment buildings isolated in a park-like setting on pilotis. Le Corbusier's plans were adopted by builders of public housing in Europe and the United States. In Great Britain urban planners turned to Le Corbusier's "Cities in the Sky" as a cheaper method of building public housing from the late 1950s. Le Corbusier criticized any effort at ornamentation of the buildings. The large spartan structures, in cities but not of them, have been criticized for being boring and unfriendly to pedestrians.
Several of the many architects who worked for Le Corbusier in his studio became prominent, including painter-architect Nadir Afonso, who absorbed Le Corbusier's ideas into his own aesthetics theory. Lúcio Costa's city plan of Brasília and the industrial city of Zlín planned by František Lydie Gahura in the Czech Republic are based on his ideas. Le Corbusier's thinking had profound effects on city planning and architecture in the Soviet Union during the Constructivist era.
Le Corbusier harmonized and lent credence to the idea of space as a set of destinations between which mankind moved continuously. He gave credibility to the automobile as transporter, and to freeways in urban spaces. His philosophies were useful to urban real estate developers in the American post-World War II period because they justified and lent intellectual support to the desire to raze traditional urban space for high density, high profit urban concentration. The freeways connected this new urbanism to low density, low cost, highly profitable suburban locales available to be developed for middle class single-family housing.
Missing from this scheme of movement was connectivity between isolated urban villages created for lower-middle and working classes, and the destination points in Le Corbusier's plan: suburban and rural areas, and urban commercial centers. The freeways as designed traveled over, at, or beneath grade levels of the living spaces of the urban poor, for example the Cabrini–Green housing project in Chicago. Such projects with no freeway exit ramps, cut off by freeway rights-of-way, became isolated from jobs and services concentrated at Le Corbusier's nodal transportation end points. As jobs migrated to the suburbs, urban village dwellers found themselves without freeway access points in their communities or public mass transit that could economically reach suburban job centers. Late in the post-War period, suburban job centers found labor shortages to be such a critical problem that they sponsored urban-to-suburban shuttle bus services to fill vacant working class and lower-middle class jobs, which did not typically pay enough to afford car ownership.
Le Corbusier influenced architects and urbanists worldwide. In the United States, Shadrach Woods; in Spain, Francisco Javier Sáenz de Oiza; in Brazil, Oscar Niemeyer; in Mexico, Mario Pani Darqui; in Chile, Roberto Matta; in Argentina, Antoni Bonet i Castellana (a Catalan exile), Juan Kurchan, Jorge Ferrari Hardoy, Amancio Williams, and Clorindo Testa in his first era; in Uruguay, the professors Justino Serralta and Carlos Gómez Gavazzo; in Colombia, Germán Samper Gnecco, Rogelio Salmona, and Dicken Castro; in Peru, Abel Hurtado and José Carlos Ortecho.
The Fondation Le Corbusier is a private foundation and archive honoring the work of Le Corbusier. It operates Maison La Roche, a museum located in the 16th arrondissement at 8–10, square du Dr Blanche, Paris, France, which is open daily except Sunday.
The foundation was established in 1968. It now owns Maison La Roche and Maison Jeanneret (which form the foundation's headquarters), as well as the apartment occupied by Le Corbusier from 1933 to 1965 at rue Nungesser et Coli in Paris 16e, and the "Small House" he built for his parents in Corseaux on the shores of Lac Leman (1924).
Maison La Roche and Maison Jeanneret (1923–24), also known as the La Roche-Jeanneret house, is a pair of semi-detached houses that was Le Corbusier's third commission in Paris. They are laid out at right angles to each other, with iron, concrete, and blank, white façades setting off a curved two-story gallery space. Maison La Roche is now a museum containing about 8,000 original drawings, studies and plans by Le Corbusier (in collaboration with Pierre Jeanneret from 1922 to 1940), as well as about 450 of his paintings, about 30 enamels, about 200 other works on paper, and a sizable collection of written and photographic archives. It describes itself as the world's largest collection of Le Corbusier drawings, studies, and plans.
In 2016, seventeen of Le Corbusier's buildings spanning seven countries were identified as UNESCO World Heritage Sites, reflecting "outstanding contribution to the Modern Movement".
Le Corbusier's portrait was featured on the 10 Swiss francs banknote, pictured with his distinctive eyeglasses.
The following place-names carry his name:
Leonhard Euler
Leonhard Euler (15 April 1707 – 18 September 1783) was a Swiss mathematician, physicist, astronomer, geographer, logician and engineer who made important and influential discoveries in many branches of mathematics, such as infinitesimal calculus and graph theory, while also making pioneering contributions to several branches such as topology and analytic number theory. He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics, astronomy and music theory.
Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history. He is also widely considered to be the most prolific, as his collected works fill 92 volumes, more than anyone else in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia.
A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all."
Leonhard Euler was born on 15 April 1707, in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, another pastor's daughter. He had two younger sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen, Switzerland, where Leonhard spent most of his childhood. Paul was a friend of the Bernoulli family; Johann Bernoulli, then regarded as Europe's foremost mathematician, would eventually be the most important influence on young Leonhard.
Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, at age thirteen, he enrolled at the University of Basel. In 1723, he received a Master of Philosophy with a dissertation that compared the philosophies of Descartes and Newton. During that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. At that time Euler's main studies included theology, Greek and Hebrew at his father's urging to become a pastor, but Bernoulli convinced his father that Leonhard was destined to become a great mathematician.
In 1726, Euler completed a dissertation on the propagation of sound with the title "De Sono". At that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the "Paris Academy Prize Problem" competition; the problem that year was to find the best way to place the masts on a ship. Pierre Bouguer, who became known as "the father of naval architecture", won and Euler took second place. Euler later won this annual prize twelve times.
Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg. On 31 July 1726, Nicolaus died of appendicitis after spending less than a year in Russia. When Daniel assumed his brother's position in the mathematics/physics division, he recommended that the post in physiology that he had vacated be filled by his friend Euler. In November 1726 Euler eagerly accepted the offer, but delayed making the trip to Saint Petersburg while he unsuccessfully applied for a physics professorship at the University of Basel.
Euler arrived in Saint Petersburg on 17 May 1727. He was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli with whom he often worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg. He also took on an additional job as a medic in the Russian Navy.
The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia and to close the scientific gap with Western Europe. As a result, it was made especially attractive to foreign scholars like Euler. The academy possessed ample financial resources and a comprehensive library drawn from the private libraries of Peter himself and of the nobility. Very few students were enrolled in the academy to lessen the faculty's teaching burden. The academy emphasized research and offered to its faculty both the time and the freedom to pursue scientific questions.
The Academy's benefactress, Catherine I, who had continued the progressive policies of her late husband, died on the day of Euler's arrival. The Russian nobility then gained power upon the ascension of the twelve-year-old Peter II. The nobility, suspicious of the academy's foreign scientists, cut funding and caused other difficulties for Euler and his colleagues.
Conditions improved slightly after the death of Peter II, and Euler swiftly rose through the ranks in the academy and was made a professor of physics in 1731. Two years later, Daniel Bernoulli, who was fed up with the censorship and hostility he faced at Saint Petersburg, left for Basel. Euler succeeded him as the head of the mathematics department.
On 7 January 1734, he married Katharina Gsell (1707–1773), a daughter of Georg Gsell, a painter from the Academy Gymnasium. The young couple bought a house by the Neva River. Of their thirteen children, only five survived childhood.
Concerned about the continuing turmoil in Russia, Euler left St. Petersburg on 19 June 1741 to take up a post at the "Berlin Academy", which he had been offered by Frederick the Great of Prussia. He lived for 25 years in Berlin, where he wrote over 380 articles. In Berlin, he published the two works for which he would become most renowned: the "Introductio in analysin infinitorum", a text on functions published in 1748, and the "Institutiones calculi differentialis", published in 1755 on differential calculus. In 1755, he was elected a foreign member of the Royal Swedish Academy of Sciences.
In addition, Euler was asked to tutor Friederike Charlotte of Brandenburg-Schwedt, the Princess of Anhalt-Dessau and Frederick's niece. Euler wrote over 200 letters to her in the early 1760s, which were later compiled into a best-selling volume entitled "Letters of Euler on different Subjects in Natural Philosophy Addressed to a German Princess". This work contained Euler's exposition on various subjects pertaining to physics and mathematics, as well as offering valuable insights into Euler's personality and religious beliefs. This book became more widely read than any of his mathematical works and was published across Europe and in the United States. The popularity of the "Letters" testifies to Euler's ability to communicate scientific matters effectively to a lay audience, a rare ability for a dedicated research scientist.
Despite Euler's immense contribution to the Academy's prestige, he eventually incurred the ire of Frederick and ended up having to leave Berlin. The Prussian king had a large circle of intellectuals in his court, and he found the mathematician unsophisticated and ill-informed on matters beyond numbers and figures. Euler was a simple, devoutly religious man who never questioned the existing social order or conventional beliefs, in many ways the polar opposite of Voltaire, who enjoyed a high place of prestige at Frederick's court. Euler was not a skilled debater and often made it a point to argue subjects that he knew little about, making him the frequent target of Voltaire's wit. Frederick also expressed disappointment with Euler's practical engineering abilities:
Euler's eyesight worsened throughout his mathematical career. In 1738, three years after nearly dying of fever, he became almost blind in his right eye, though Euler himself blamed the condition on the painstaking cartographic work he performed for the St. Petersburg Academy. Euler's vision in that eye worsened throughout his stay in Germany, to the extent that Frederick referred to him as "Cyclops". Euler remarked on his loss of vision, "Now I will have fewer distractions." He later developed a cataract in his left eye, which was discovered in 1766. Just a few weeks after its discovery, a failed surgical restoration rendered him almost totally blind. He was then 59 years old. However, his condition appeared to have little effect on his productivity, as he compensated for it with his mental calculation skills and exceptional memory. For example, Euler could repeat the "Aeneid" of Virgil from beginning to end without hesitation, and for every page in the edition he could indicate which line was the first and which the last. With the aid of his scribes, Euler's productivity in many areas of study actually increased. He produced, on average, one mathematical paper every week in the year 1775. The Eulers bore a double name, Euler-Schölpi, the latter of which derives from "schelb" and "schief", signifying squint-eyed, cross-eyed, or crooked. This suggests that the Eulers may have had a susceptibility to eye problems.
In 1760, with the Seven Years' War raging, Euler's farm in Charlottenburg was ransacked by advancing Russian troops. Upon learning of this event, General Ivan Petrovich Saltykov paid compensation for the damage caused to Euler's estate, with Empress Elizabeth of Russia later adding a further payment of 4000 roubles—an exorbitant amount at the time. The political situation in Russia stabilized after Catherine the Great's accession to the throne, so in 1766 Euler accepted an invitation to return to the St. Petersburg Academy. His conditions were quite exorbitant—a 3000 ruble annual salary, a pension for his wife, and the promise of high-ranking appointments for his sons. All of these requests were granted. He spent the rest of his life in Russia. However, his second stay in the country was marred by tragedy. A fire in St. Petersburg in 1771 cost him his home, and almost his life. In 1773, he lost his wife Katharina after 40 years of marriage.
Three years after his wife's death, Euler married her half-sister, Salome Abigail Gsell (1723–1794). This marriage lasted until his death. In 1782 he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences.
In St. Petersburg on 18 September 1783, after a lunch with his family, Euler was discussing the newly discovered planet Uranus and its orbit with fellow academician Anders Johan Lexell when he collapsed from a brain hemorrhage. He died a few hours later. A short obituary was written for the Russian Academy of Sciences, and the Russian mathematician Nicolas Fuss, one of Euler's disciples, wrote a more detailed eulogy, which he delivered at a memorial meeting. In his eulogy for the French Academy, the French mathematician and philosopher Marquis de Condorcet wrote:
Euler was buried next to Katharina at the Smolensk Lutheran Cemetery on Goloday Island. In 1785, the Russian Academy of Sciences put a marble bust of Leonhard Euler on a pedestal next to the Director's seat and, in 1837, placed a headstone on Euler's grave. To commemorate the 250th anniversary of Euler's birth, the headstone was moved in 1956, together with his remains, to the 18th-century necropolis at the Alexander Nevsky Monastery.
Euler worked in almost all areas of mathematics, such as geometry, infinitesimal calculus, trigonometry, algebra, and number theory, as well as continuum physics, lunar theory and other areas of physics. He is a seminal figure in the history of mathematics; if printed, his works, many of which are of fundamental interest, would occupy between 60 and 80 quarto volumes. Euler's name is associated with a large number of topics.
Euler is the only mathematician to have "two" numbers named after him: the important Euler's number in calculus, "e", approximately equal to 2.71828, and the Euler–Mascheroni constant γ (gamma) sometimes referred to as just "Euler's constant", approximately equal to 0.57721. It is not known whether γ is rational or irrational.
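Both constants can be approximated numerically. The following is an illustrative sketch, not from the source: "e" is computed from its factorial series, and γ from the difference between the harmonic numbers and the natural logarithm.

```python
import math

# Approximate e by the series 1/0! + 1/1! + 1/2! + ...
e_approx = sum(1 / math.factorial(k) for k in range(20))

# Approximate the Euler-Mascheroni constant by H_n - ln(n);
# the error shrinks roughly like 1/(2n), so convergence is slow.
n = 10**6
harmonic = sum(1 / k for k in range(1, n + 1))
gamma_approx = harmonic - math.log(n)

print(e_approx)      # about 2.71828...
print(gamma_approx)  # about 0.57721...
```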
Euler introduced and popularized several notational conventions through his numerous and widely circulated textbooks. Most notably, he introduced the concept of a function and was the first to write "f"("x") to denote the function "f" applied to the argument "x". He also introduced the modern notation for the trigonometric functions, the letter for the base of the natural logarithm (now also known as Euler's number), the Greek letter Σ for summations and the letter to denote the imaginary unit. The use of the Greek letter "π" to denote the ratio of a circle's circumference to its diameter was also popularized by Euler, although it originated with Welsh mathematician William Jones.
The development of infinitesimal calculus was at the forefront of 18th-century mathematical research, and the Bernoullis—family friends of Euler—were responsible for much of the early progress in the field. Thanks to their influence, studying calculus became the major focus of Euler's work. While some of Euler's proofs are not acceptable by modern standards of mathematical rigour (in particular his reliance on the principle of the generality of algebra), his ideas led to many great advances.
Euler is well known in analysis for his frequent use and development of power series, the expression of functions as sums of infinitely many terms, such as the exponential series e^x = 1 + x + x^2/2! + x^3/3! + ⋯
Notably, Euler directly proved the power series expansions for "e" and the inverse tangent function. (Indirect proof via the inverse power series technique was given by Newton and Leibniz between 1670 and 1680.) His daring use of power series enabled him to solve the famous Basel problem in 1735 (he provided a more elaborate argument in 1741): the sum of the reciprocals of the square numbers, 1 + 1/4 + 1/9 + 1/16 + ⋯, equals π²/6.
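The Basel problem's value, π²/6, can be checked by computing partial sums of the series of reciprocal squares; a small illustrative sketch:

```python
import math

def basel_partial(n):
    """Partial sum 1/1^2 + 1/2^2 + ... + 1/n^2 of the Basel series."""
    return sum(1 / k**2 for k in range(1, n + 1))

target = math.pi**2 / 6  # about 1.6449340668
for n in (10, 1000, 100000):
    print(n, basel_partial(n), target - basel_partial(n))
# the remaining error shrinks roughly like 1/n
```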
Euler introduced the use of the exponential function and logarithms in analytic proofs. He discovered ways to express various logarithmic functions using power series, and he successfully defined logarithms for negative and complex numbers, thus greatly expanding the scope of mathematical applications of logarithms. He also defined the exponential function for complex numbers, and discovered its relation to the trigonometric functions. For any real number "x" (taken to be in radians), Euler's formula states that the complex exponential function satisfies e^(ix) = cos x + i sin x.
A special case of the above formula, at "x" = π, is known as Euler's identity, e^(iπ) + 1 = 0,
called "the most remarkable formula in mathematics" by Richard P. Feynman, for its single uses of the notions of addition, multiplication, exponentiation, and equality, and the single uses of the important constants 0, 1, "e", "i", and "π". In 1988, readers of the "Mathematical Intelligencer" voted it "the Most Beautiful Mathematical Formula Ever". In total, Euler was responsible for three of the top five formulae in that poll.
De Moivre's formula is a direct consequence of Euler's formula.
In addition, Euler elaborated the theory of higher transcendental functions by introducing the gamma function, and he developed a new method for solving quartic equations. He found a way to calculate integrals with complex limits, foreshadowing the development of modern complex analysis, and he invented the calculus of variations, including its best-known result, the Euler–Lagrange equation.
Euler also pioneered the use of analytic methods to solve number theory problems. In doing so, he united two disparate branches of mathematics and introduced a new field of study, analytic number theory. In breaking ground for this new field, Euler created the theory of hypergeometric series, q-series, hyperbolic trigonometric functions and the analytic theory of continued fractions. For example, he proved the infinitude of primes using the divergence of the harmonic series, and he used analytic methods to gain some understanding of the way prime numbers are distributed. Euler's work in this area led to the development of the prime number theorem.
Euler's interest in number theory can be traced to the influence of Christian Goldbach, his friend in the St. Petersburg Academy. Much of Euler's early work on number theory was based on the works of Pierre de Fermat. Euler developed some of Fermat's ideas and disproved some of his conjectures.
Euler linked the nature of prime distribution with ideas in analysis. He proved that the sum of the reciprocals of the primes diverges. In doing so, he discovered the connection between the Riemann zeta function and the prime numbers; this is known as the Euler product formula for the Riemann zeta function.
Euler proved Newton's identities, Fermat's little theorem, Fermat's theorem on sums of two squares, and he made distinct contributions to Lagrange's four-square theorem. He also invented the totient function φ("n"), the number of positive integers less than or equal to the integer "n" that are coprime to "n". Using properties of this function, he generalized Fermat's little theorem to what is now known as Euler's theorem. He contributed significantly to the theory of perfect numbers, which had fascinated mathematicians since Euclid. He proved that the correspondence between even perfect numbers and Mersenne primes, one direction of which Euclid had established, is one-to-one, a result now known as the Euclid–Euler theorem. Euler also conjectured the law of quadratic reciprocity. The law is regarded as a fundamental theorem of number theory, and his ideas paved the way for the work of Carl Friedrich Gauss.
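The totient function and Euler's theorem are straightforward to demonstrate; the brute-force count below is a sketch for small "n" (practical implementations factorize "n" instead):

```python
from math import gcd

def totient(n: int) -> int:
    """Euler's totient: the count of 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Euler's theorem: a**totient(n) is congruent to 1 (mod n) when gcd(a, n) == 1
n, a = 10, 3
assert totient(n) == 4             # 1, 3, 7, 9 are coprime to 10
assert gcd(a, n) == 1
assert pow(a, totient(n), n) == 1  # 3**4 = 81, and 81 mod 10 == 1
```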
By 1772 Euler had proved that 2³¹ − 1 = 2,147,483,647 is a Mersenne prime. It may have remained the largest known prime until 1867.
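Euler verified the primality of 2³¹ − 1 by hand; today the Lucas–Lehmer test (a much later technique, shown purely as an illustration) confirms it instantly:

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime
    iff s(p-2) == 0 (mod 2**p - 1), where s(0) = 4 and s(k+1) = s(k)**2 - 2."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

assert (1 << 31) - 1 == 2_147_483_647
assert lucas_lehmer(31)       # Euler's Mersenne prime
assert not lucas_lehmer(11)   # 2**11 - 1 = 2047 = 23 * 89 is composite
```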
In 1735, Euler presented a solution to the problem known as the Seven Bridges of Königsberg. The city of Königsberg, Prussia was set on the Pregel River, and included two large islands that were connected to each other and the mainland by seven bridges. The problem is to decide whether it is possible to follow a path that crosses each bridge exactly once and returns to the starting point. It is not possible: there is no Eulerian circuit. This solution is considered to be the first theorem of graph theory, specifically of planar graph theory.
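Euler's argument rests on vertex degrees: a connected multigraph admits an Eulerian circuit exactly when every vertex has even degree. A minimal sketch (vertex labels are illustrative; connectivity is assumed, not checked):

```python
from collections import Counter

def has_eulerian_circuit(edges) -> bool:
    """Degree test for an Eulerian circuit in a connected multigraph:
    every vertex must be met by an even number of bridge ends."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# The seven bridges of Koenigsberg joined four land masses, here A, B, C, D
koenigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
assert not has_eulerian_circuit(koenigsberg)  # all four degrees are odd
assert has_eulerian_circuit([("A", "B"), ("B", "C"), ("C", "A")])  # a triangle
```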
Euler also discovered the formula "V" − "E" + "F" = 2 relating the number of vertices, edges and faces of a convex polyhedron, and hence of a planar graph. The constant in this formula is now known as the Euler characteristic for the graph (or other mathematical object), and is related to the genus of the object. The study and generalization of this formula, specifically by Cauchy and L'Huilier, is at the origin of topology.
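The polyhedron formula is simple to spot-check against the Platonic solids (illustrative values):

```python
def euler_characteristic(v: int, e: int, f: int) -> int:
    """V - E + F, which equals 2 for every convex polyhedron."""
    return v - e + f

assert euler_characteristic(8, 12, 6) == 2    # cube
assert euler_characteristic(4, 6, 4) == 2     # tetrahedron
assert euler_characteristic(12, 30, 20) == 2  # icosahedron
```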
Some of Euler's greatest successes were in solving real-world problems analytically, and in describing numerous applications of the Bernoulli numbers, Fourier series, Euler numbers, the constants "e" and π, continued fractions and integrals. He integrated Leibniz's differential calculus with Newton's Method of Fluxions, and developed tools that made it easier to apply calculus to physical problems. He made great strides in improving the numerical approximation of integrals, inventing what are now known as the Euler approximations. The most notable of these approximations are Euler's method and the Euler–Maclaurin formula. He also facilitated the use of differential equations, in particular introducing the Euler–Mascheroni constant:

γ = lim_{n→∞} (1 + 1/2 + 1/3 + ⋯ + 1/"n" − ln "n")
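Euler's method advances an initial-value problem y' = f(t, y) one small step at a time along the tangent line; a minimal sketch:

```python
import math

def euler_method(f, t0, y0, h, steps):
    """Fixed-step Euler approximation of y at t0 + steps*h
    for the initial-value problem y' = f(t, y), y(t0) = y0."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)  # follow the tangent line over one step
        t += h
    return y

# y' = y with y(0) = 1 has exact solution e**t; 1000 steps of size
# 0.001 land close to e at t = 1.
approx = euler_method(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
assert abs(approx - math.e) < 0.01
```

Halving the step size roughly halves the error, the hallmark of a first-order method.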
One of Euler's more unusual interests was the application of mathematical ideas in music. In 1739 he wrote the "Tentamen novae theoriae musicae," hoping to eventually incorporate musical theory as part of mathematics. This part of his work, however, did not receive wide attention and was once described as too mathematical for musicians and too musical for mathematicians.
In 1911, almost 130 years after Euler's death, Alfred J. Lotka used Euler's work to derive the Euler–Lotka equation for calculating rates of population growth for age-structured populations, a fundamental method that is commonly used in population biology and ecology.
Euler helped develop the Euler–Bernoulli beam equation, which became a cornerstone of engineering. Aside from successfully applying his analytic tools to problems in classical mechanics, Euler also applied these techniques to celestial problems. His work in astronomy was recognized by a number of Paris Academy Prizes over the course of his career. His accomplishments include determining with great accuracy the orbits of comets and other celestial bodies, understanding the nature of comets, and calculating the parallax of the Sun. His calculations also contributed to the development of accurate longitude tables.
In addition, Euler made important contributions in optics. He disagreed with Newton's corpuscular theory of light in the "Opticks", which was then the prevailing theory. His 1740s papers on optics helped ensure that the wave theory of light proposed by Christiaan Huygens would become the dominant mode of thought, at least until the development of the quantum theory of light.
In 1757 he published an important set of equations for inviscid flow, which are now known as the Euler equations. In differential form, the equations are:

∂ρ/∂"t" + ∇·(ρ"u") = 0
∂(ρ"u")/∂"t" + ∇·("u" ⊗ (ρ"u")) + ∇"p" = 0
∂"E"/∂"t" + ∇·("u"("E" + "p")) = 0

where ρ is the fluid mass density, "u" the fluid velocity vector, "p" the pressure, and "E" the total energy density of the fluid.
Euler is also well known in structural engineering for his formula giving the critical buckling load of an ideal strut, which depends only on its length and flexural stiffness:

"F" = π²"EI" / "L"²

where "F" is the maximum or critical force, "E" the modulus of elasticity, "I" the area moment of inertia of the cross section, and "L" the unsupported length of the strut.
Euler is also credited with using closed curves to illustrate syllogistic reasoning (1768). These diagrams have become known as Euler diagrams.
An Euler diagram is a diagrammatic means of representing sets and their relationships. Euler diagrams consist of simple closed curves (usually circles) in the plane that depict sets. Each Euler curve divides the plane into two regions or "zones": the interior, which symbolically represents the elements of the set, and the exterior, which represents all elements that are not members of the set. The sizes or shapes of the curves are not important; the significance of the diagram is in how they overlap. The spatial relationships between the regions bounded by each curve (overlap, containment or neither) corresponds to set-theoretic relationships (intersection, subset and disjointness). Curves whose interior zones do not intersect represent disjoint sets. Two curves whose interior zones intersect represent sets that have common elements; the zone inside both curves represents the set of elements common to both sets (the intersection of the sets). A curve that is contained completely within the interior zone of another represents a subset of it. Euler diagrams (and their generalization in Venn diagrams) were incorporated as part of instruction in set theory as part of the new math movement in the 1960s. Since then, they have also been adopted by other curriculum fields such as reading.
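The three spatial relationships (overlap, containment, separation) map directly onto set operations, as a short Python sketch with illustrative sets shows:

```python
a = {1, 2, 3}
b = {3, 4, 5}
c = {1, 2}
d = {6, 7}

assert a & b == {3}      # overlapping curves: a non-empty intersection
assert c <= a            # a curve drawn inside another: a subset
assert a.isdisjoint(d)   # curves with separate interiors: disjoint sets
```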
Even when dealing with music, Euler's approach is mainly mathematical. His writings on music are not particularly numerous (a few hundred pages, in his total production of about thirty thousand pages), but they reflect an early preoccupation and one that did not leave him throughout his life.
A first point of Euler's musical theory is the definition of "genres", i.e. of possible divisions of the octave using the prime numbers 3 and 5. Euler describes 18 such genres, with the general definition 2^m·A, where A is the "exponent" of the genre (i.e. the sum of the exponents of 3 and 5) and 2^m (where "m is an indefinite number, small or large, so long as the sounds are perceptible"), expresses that the relation holds independently of the number of octaves concerned. The first genre, with A = 1, is the octave itself (or its duplicates); the second genre, 2^m·3, is the octave divided by the fifth (fifth + fourth, C–G–C); the third genre is 2^m·5, major third + minor sixth (C–E–C); the fourth is 2^m·3², two-fourths and a tone (C–F–B♭–C); the fifth is 2^m·3·5 (C–E–G–B–C); etc. Genres 12 (2^m·3³·5), 13 (2^m·3²·5²) and 14 (2^m·3·5³) are corrected versions of the diatonic, chromatic and enharmonic, respectively, of the Ancients. Genre 18 (2^m·3³·5²) is the "diatonico-chromatic", "used generally in all compositions", and which turns out to be identical with the system described by Johann Mattheson. Euler later envisaged the possibility of describing genres including the prime number 7.
Euler devised a specific graph, the "Speculum musicum", to illustrate the diatonico-chromatic genre, and discussed paths in this graph for specific intervals, recalling his interest in the Seven Bridges of Königsberg (see above). The device drew renewed interest as the Tonnetz in neo-Riemannian theory (see also Lattice (music)).
Euler further used the principle of the "exponent" to propose a derivation of the "gradus suavitatis" (degree of suavity, of agreeableness) of intervals and chords from their prime factors – one must keep in mind that he considered just intonation, i.e. 1 and the prime numbers 3 and 5 only. Formulas have been proposed extending this system to any number of prime numbers, e.g. in the form

ds = Σ_i "k"_i("p"_i − 1) + 1

where the "p"_i are prime numbers and the "k"_i their exponents.
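Reading the extension as Euler's "gradus" function Γ("n") = 1 + Σ "k"_i("p"_i − 1) (a common formulation, assumed here), the degrees of simple intervals can be computed directly; for a ratio in lowest terms the formula is applied to the least common multiple of its terms:

```python
def prime_factorization(n: int) -> dict:
    """Map each prime factor p_i of n to its exponent k_i."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gradus_suavitatis(n: int) -> int:
    """Euler's degree of agreeableness: 1 + sum of k_i * (p_i - 1)."""
    return 1 + sum(k * (p - 1) for p, k in prime_factorization(n).items())

assert gradus_suavitatis(2) == 2     # the octave 1:2
assert gradus_suavitatis(6) == 4     # the fifth 2:3, via lcm(2, 3) = 6
assert gradus_suavitatis(120) == 10  # e.g. lcm(8, 15) for the major seventh
```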
Euler and his friend Daniel Bernoulli were opponents of Leibniz's monadism and the philosophy of Christian Wolff. Euler insisted that knowledge is founded in part on the basis of precise quantitative laws, something that monadism and Wolffian science were unable to provide. Euler's religious leanings might also have had a bearing on his dislike of the doctrine; he went so far as to label Wolff's ideas as "heathen and atheistic".
Much of what is known of Euler's religious beliefs can be deduced from his "Letters to a German Princess" and an earlier work, "Rettung der Göttlichen Offenbahrung Gegen die Einwürfe der Freygeister" ("Defense of the Divine Revelation against the Objections of the Freethinkers"). These works show that Euler was a devout Christian who believed the Bible to be inspired; the "Rettung" was primarily an argument for the divine inspiration of scripture.
There is a famous legend inspired by Euler's arguments with secular philosophers over religion, which is set during Euler's second stint at the St. Petersburg Academy. The French philosopher Denis Diderot was visiting Russia on Catherine the Great's invitation. However, the Empress was alarmed that the philosopher's arguments for atheism were influencing members of her court, and so Euler was asked to confront the Frenchman. Diderot was informed that a learned mathematician had produced a proof of the existence of God: he agreed to view the proof as it was presented in court. Euler appeared, advanced toward Diderot, and in a tone of perfect conviction announced this non-sequitur: "Sir, ("a" + "b"^"n")/"n" = "x", hence God exists—reply!"
Diderot, to whom (says the story) all mathematics was gibberish, stood dumbstruck as peals of laughter erupted from the court. Embarrassed, he asked to leave Russia, a request that was graciously granted by the Empress. However amusing the anecdote may be, it is apocryphal, given that Diderot himself did research in mathematics.
The legend was apparently first told by Dieudonné Thiébault with significant embellishment by Augustus De Morgan.
Euler was featured on the sixth series of the Swiss 10-franc banknote and on numerous Swiss, German, and Russian postage stamps. The asteroid 2002 Euler was named in his honor. He is also commemorated by the Lutheran Church on their Calendar of Saints on 24 May—he was a devout Christian (and believer in biblical inerrancy) who wrote apologetics and argued forcefully against the prominent atheists of his time.
Euler has an extensive bibliography. His best-known books include:
The first collection of Euler's work was made by Paul Heinrich von Fuss in 1862. A definitive collection of Euler's works, entitled "Opera Omnia", has been published since 1911 by the Euler Commission of the Swiss Academy of Sciences. A complete chronological list of Euler's works is available at "The Eneström Index". Full text, open access versions of many of Euler's papers are available in the original language and English translations at the Euler Archive, hosted by University of the Pacific. The Euler Archive was started at Dartmouth College before moving to the Mathematical Association of America and, most recently, to University of the Pacific in 2017.
https://en.wikipedia.org/wiki?curid=17902
Linear model
In statistics, the term linear model is used in different ways according to the context. The most common occurrence is in connection with regression models and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible.
For the regression case, the statistical model is as follows. Given a (random) sample ("Y"_i, "X"_i1, ..., "X"_ip), "i" = 1, ..., "n", the relation between the observations "Yi" and the independent variables "Xij" is formulated as

"Y"_i = β_0 + β_1 φ_1("X"_i1) + ⋯ + β_p φ_p("X"_ip) + ε_i,   "i" = 1, ..., "n"
where φ_1, ..., φ_p may be nonlinear functions. In the above, the quantities "εi" are random variables representing errors in the relationship. The "linear" part of the designation relates to the appearance of the regression coefficients, "βj" in a linear way in the above relationship. Alternatively, one may say that the predicted values corresponding to the above model, namely

Ŷ_i = β_0 + β_1 φ_1("X"_i1) + ⋯ + β_p φ_p("X"_ip),
are linear functions of the "βj".
Given that estimation is undertaken on the basis of a least squares analysis, estimates of the unknown parameters "βj" are determined by minimising a sum of squares function

S = Σ_{i=1}^{n} ("Y"_i − β_0 − β_1 φ_1("X"_i1) − ⋯ − β_p φ_p("X"_ip))²
From this, it can readily be seen that the "linear" aspect of the model means the following: the function to be minimised is a quadratic function of the "βj", for which minimisation is a relatively simple problem; the derivatives of the function are linear functions of the "βj", making it easy to find the minimising values; and the minimising values are themselves linear functions of the observations "Yi", which makes it relatively easy to derive the statistical properties of the estimates.
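For a single regressor with the identity basis function, the least-squares minimisation has the familiar closed form; a self-contained sketch (names illustrative):

```python
def fit_simple_linear(xs, ys):
    """Least-squares estimates (b0, b1) for y = b0 + b1*x + error,
    via the closed-form normal-equation solution."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx     # slope: covariance over variance
    b0 = my - b1 * mx  # intercept: the fitted line passes through the means
    return b0, b1

# With zero errors, the fit recovers the generating line y = 1 + 2x exactly
b0, b1 = fit_simple_linear([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
assert abs(b0 - 1.0) < 1e-12 and abs(b1 - 2.0) < 1e-12
```

The estimates are linear functions of the observed ys, which is exactly the "linear" aspect described above.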
An example of a linear time series model is an autoregressive moving average model. Here the model for values {"Xt"} in a time series can be written in the form

"X"_t = "c" + ε_t + Σ_{i=1}^{p} φ_i "X"_{t−i} + Σ_{i=1}^{q} θ_i ε_{t−i}
where again the quantities "εt" are random variables representing innovations which are new random effects that appear at a certain time but also affect values of "X" at later times. In this instance the use of the term "linear model" refers to the structure of the above relationship in representing "Xt" as a linear function of past values of the same time series and of current and past values of the innovations. This particular aspect of the structure means that it is relatively simple to derive relations for the mean and covariance properties of the time series. Note that here the "linear" part of the term "linear model" is not referring to the coefficients "φi" and "θi", as it would be in the case of a regression model, which looks structurally similar.
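The tractable moment structure is easy to see in the simplest case, an AR(1) model with no moving-average part (a sketch; the stationary-variance formula assumes |φ| < 1 and unit-variance innovations):

```python
import random

def simulate_ar1(phi: float, n: int, seed: int = 0):
    """Simulate X_t = phi * X_{t-1} + e_t with standard normal innovations."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

# Linearity gives Var(X_t) = 1 / (1 - phi**2) in the stationary regime;
# a long simulated path should agree with the formula.
phi = 0.5
xs = simulate_ar1(phi, 200_000)
sample_var = sum(x * x for x in xs) / len(xs)
assert abs(sample_var - 1 / (1 - phi ** 2)) < 0.05
```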
There are some other instances where "nonlinear model" is used to contrast with a linearly structured model, although the term "linear model" is not usually applied. One example of this is nonlinear dimensionality reduction.
https://en.wikipedia.org/wiki?curid=17904
Likelihood principle
In statistics, the likelihood principle is the proposition that, given a statistical model, all the evidence in a sample relevant to model parameters is contained in the likelihood function.
A likelihood function arises from a probability density function considered as a function of its distributional parameterization argument. For example, consider a model which gives the probability density function "ƒ""X"("x" | "θ") of observable random variable "X" as a function of a parameter "θ". Then for a specific value "x" of "X", the function "L"("θ" | "x") = "ƒ""X"("x" | "θ") is a likelihood function of "θ": it gives a measure of how "likely" any particular value of "θ" is, if we know that "X" has the value "x". The density function may be a density with respect to counting measure, i.e. a probability mass function.
Two likelihood functions are "equivalent" if one is a scalar multiple of the other. The likelihood principle is this: all information from the data that is relevant to inferences about the value of the model parameters is in the equivalence class to which the likelihood function belongs. The strong likelihood principle applies this same criterion to cases such as sequential experiments where the sample of data that is available results from applying a stopping rule to the observations earlier in the experiment.
Suppose "X" is the number of successes in twelve independent Bernoulli trials with probability "θ" of success on each trial, and "Y" is the number of independent Bernoulli trials needed to obtain three successes, again with probability "θ" of success on each trial. Then the observation that "X" = 3 induces the likelihood function

L("θ" | "X" = 3) = C(12, 3) θ³(1 − θ)⁹ = 220 θ³(1 − θ)⁹,

while the observation that "Y" = 12 induces the likelihood function

L("θ" | "Y" = 12) = C(11, 2) θ³(1 − θ)⁹ = 55 θ³(1 − θ)⁹,

where C("n", "k") denotes the binomial coefficient.
The likelihood principle says that, as the data are the same in both cases, the inferences drawn about the value of "θ" should also be the same. In addition, all the inferential content in the data about the value of "θ" is contained in the two likelihoods, and is the same if they are proportional to one another. This is the case in the above example, reflecting the fact that the difference between observing "X" = 3 and observing "Y" = 12 lies not in the actual data, but merely in the design of the experiment. Specifically, in one case, one has decided in advance to try twelve times; in the other, to keep trying until three successes are observed. The inference about "θ" should be the same, and this is reflected in the fact that the two likelihoods are proportional to each other.
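The proportionality of the two likelihoods in the "X" = 3 versus "Y" = 12 example can be checked directly (an illustrative sketch; `math.comb` requires Python 3.8+):

```python
from math import comb

def binomial_likelihood(theta: float, n: int = 12, k: int = 3) -> float:
    """Likelihood of theta after observing k successes in a fixed n trials."""
    return comb(n, k) * theta**k * (1 - theta) ** (n - k)

def neg_binomial_likelihood(theta: float, r: int = 3, n: int = 12) -> float:
    """Likelihood of theta after needing n trials to reach r successes."""
    return comb(n - 1, r - 1) * theta**r * (1 - theta) ** (n - r)

# The two likelihoods differ only by the constant comb(12, 3) / comb(11, 2) = 4,
# so they belong to the same equivalence class.
for theta in (0.1, 0.25, 0.5, 0.9):
    ratio = binomial_likelihood(theta) / neg_binomial_likelihood(theta)
    assert abs(ratio - 4.0) < 1e-9
```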
This is not always the case, however. The use of frequentist methods involving p-values leads to different inferences for the two cases above, showing that the outcome of frequentist methods depends on the experimental procedure, and thus violates the likelihood principle.
A related concept is the law of likelihood, the notion that the extent to which the evidence supports one parameter value or hypothesis against another is equal to the ratio of their likelihoods, their likelihood ratio. That is,

Λ = L("a" | "x") / L("b" | "x") = P("x" | "a") / P("x" | "b")

is the degree to which the observation "x" supports parameter value or hypothesis "a" against "b". If this ratio is 1, the evidence is indifferent; if greater than 1, the evidence supports the value "a" against "b"; or if less, then vice versa.
In Bayesian statistics, this ratio is known as the Bayes factor, and Bayes' rule can be seen as the application of the law of likelihood to inference.
In frequentist inference, the likelihood ratio is used in the likelihood-ratio test, but other non-likelihood tests are used as well. The Neyman–Pearson lemma states the likelihood-ratio test is the most powerful test for comparing two simple hypotheses at a given significance level, which gives a frequentist justification for the law of likelihood.
Combining the likelihood principle with the law of likelihood yields the consequence that the parameter value which maximizes the likelihood function is the value which is most strongly supported by the evidence. This is the basis for the widely used method of maximum likelihood.
The likelihood principle was first identified by that name in print in 1962 (Barnard et al., Birnbaum, and Savage et al.), but arguments for the same principle, unnamed, and the use of the principle in applications goes back to the works of R.A. Fisher in the 1920s. The law of likelihood was identified by that name by I. Hacking (1965). More recently the likelihood principle as a general principle of inference has been championed by A. W. F. Edwards. The likelihood principle has been applied to the philosophy of science by R. Royall.
Birnbaum proved that the likelihood principle follows from two more primitive and seemingly reasonable principles, the "conditionality principle" and the "sufficiency principle". The conditionality principle says that if an experiment is chosen by a random process independent of the states of nature "θ", then only the experiment actually performed is relevant to inferences about "θ". The sufficiency principle says that if "T"("X") is a sufficient statistic for "θ", and if in two experiments with data "x"1 and "x"2 we have "T"("x"1) = "T"("x"2), then the evidence about "θ" given by the two experiments is the same.
Some widely used methods of conventional statistics, for example many significance tests, are not consistent with the likelihood principle.
Let us briefly consider some of the arguments for and against the likelihood principle.
Birnbaum's proof of the likelihood principle has been disputed by philosophers of science, including Deborah Mayo and statisticians including Michael Evans. On the other hand, a new proof of the likelihood principle has been provided by Greg Gandenberger that addresses some of the counterarguments to the original proof.
Unrealized events play a role in some common statistical methods. For example, the result of a significance test depends on the "p"-value, the probability of a result as extreme or more extreme than the observation, and that probability may depend on the design of the experiment. To the extent that the likelihood principle is accepted, such methods are therefore denied.
Some classical significance tests are not based on the likelihood. A commonly cited example is the optional stopping problem. Suppose I tell you that I tossed a coin 12 times and in the process observed 3 heads. You might make some inference about the probability of heads and whether the coin was fair. Suppose now I tell you that I tossed the coin "until" I observed 3 heads, and that I tossed it 12 times. Will you now make some different inference?
The likelihood function is the same in both cases: it is proportional to

θ³(1 − θ)⁹.
According to the likelihood principle, the inference should be the same in either case.
Suppose a number of scientists are assessing the probability of a certain outcome (which we shall call 'success') in experimental trials. Conventional wisdom suggests that if there is no bias towards success or failure then the success probability would be one half. Adam, a scientist, conducted 12 trials and obtains 3 successes and 9 failures. One of those successes was the 12th and last observation. Then Adam left the lab.
Bill, a colleague in the same lab, continued Adam's work and published Adam's results, along with a significance test. He tested the null hypothesis that "p", the success probability, is equal to a half, versus "p" < 0.5. The probability of the observed result that out of 12 trials 3 or something fewer (i.e. more extreme) were successes, if "H"0 is true, is

P("X" ≤ 3) = (1/2)¹² [C(12, 0) + C(12, 1) + C(12, 2) + C(12, 3)]
which is 299/4096 = 7.3%. Thus the null hypothesis is not rejected at the 5% significance level.
Charlotte, another scientist, reads Bill's paper and writes a letter, saying that it is possible that Adam kept trying until he obtained 3 successes, in which case the probability of needing to conduct 12 or more experiments, i.e. of seeing at most 2 successes in the first 11 trials, is given by

P("Y" ≥ 12) = (1/2)¹¹ [C(11, 0) + C(11, 1) + C(11, 2)]
which is 134/4096 = 3.27%. Now the result "is" statistically significant at the 5% level. Note that there is no contradiction between these two results; both computations are correct.
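Both tail probabilities can be reproduced in a few lines (illustrative):

```python
from math import comb

# Bill's p-value: 3 or fewer successes in 12 fair trials
p_binomial = sum(comb(12, k) for k in range(4)) / 2**12
assert p_binomial == 299 / 4096      # about 7.3%: not significant at 5%

# Charlotte's p-value: 12 or more trials needed for 3 successes,
# i.e. at most 2 successes in the first 11 trials
p_neg_binomial = sum(comb(11, k) for k in range(3)) / 2**11
assert p_neg_binomial == 134 / 4096  # about 3.27%: significant at 5%
```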
To these scientists, whether a result is significant or not depends on the design of the experiment, not on the likelihood (in the sense of the likelihood function) of the parameter value being 1/2.
Results of this kind are considered by some as arguments against the likelihood principle. For others it exemplifies the value of the likelihood principle and is an argument against significance tests.
Similar themes appear when comparing Fisher's exact test with Pearson's chi-squared test.
An argument in favor of the likelihood principle is given by Edwards in his book "Likelihood". He cites the following story from J.W. Pratt, slightly condensed here. Note that the likelihood function depends only on what actually happened, and not on what "could" have happened.
This story can be translated to Adam's stopping rule above, as follows. Adam stopped immediately after 3 successes, because his boss Bill had instructed him to do so. After the publication of the statistical analysis by Bill, Adam realizes that he has missed a second instruction from Bill to conduct 12 trials instead, and that Bill's paper is based on this second instruction. Adam is very glad that he got his 3 successes after exactly 12 trials, and explains to his friend Charlotte that by coincidence he executed the second instruction. Later, he is astonished to hear about Charlotte's letter explaining that "now" the result is significant.
https://en.wikipedia.org/wiki?curid=17905
Led Zeppelin
Led Zeppelin were an English rock band formed in London in 1968. The group consisted of vocalist Robert Plant, guitarist Jimmy Page, bassist/keyboardist John Paul Jones, and drummer John Bonham. With their heavy, guitar-driven sound, they are regularly cited as one of the progenitors of heavy metal, although their style drew from a variety of influences, including blues and folk music.
After changing their name from the New Yardbirds, Led Zeppelin signed a deal with Atlantic Records that afforded them considerable artistic freedom. Although the group were initially unpopular with critics, they achieved significant commercial success with eight studio albums released over ten years, from "Led Zeppelin" (1969) to "In Through the Out Door" (1979). Their untitled fourth studio album, commonly known as "Led Zeppelin IV" (1971), and featuring the song "Stairway to Heaven", is among the most popular and influential works in rock music, and helped to secure the group's popularity.
Page wrote most of Led Zeppelin's music, particularly early in their career, while Plant generally supplied the lyrics. Jones's keyboard-based compositions later became central to the group's catalogue, which featured increasing experimentation. The latter half of their career saw a series of record-breaking tours that earned the group a reputation for excess and debauchery. Although they remained commercially and critically successful, their output and touring schedule were limited during the late 1970s, and the group disbanded following Bonham's death from alcohol-related asphyxia in 1980. In the decades that followed, the former members sporadically collaborated and participated in one-off Led Zeppelin reunions. The most successful of these was the 2007 Ahmet Ertegun Tribute Concert in London, with Bonham's son Jason Bonham on drums.
Many critics consider Led Zeppelin one of the most successful, innovative, and influential rock groups in history. They are one of the best-selling music artists in the history of audio recording; various sources estimate the group's record sales at 200 to 300 million units worldwide. With RIAA-certified sales of 111.5 million units, they are the third-best-selling band and fifth-best-selling act in the US. Each of their nine studio albums placed in the top 10 of the "Billboard" album chart and six reached the number-one spot. They achieved eight consecutive UK number-one albums. "Rolling Stone" magazine described them as "the heaviest band of all time", "the biggest band of the Seventies", and "unquestionably one of the most enduring bands in rock history". They were inducted into the Rock and Roll Hall of Fame in 1995; the museum's biography of the band states that they were "as influential" during the 1970s as the Beatles were during the 1960s.
In 1966, London-based session guitarist Jimmy Page joined the blues-influenced rock band the Yardbirds to replace bassist Paul Samwell-Smith. Page soon switched from bass to lead guitar, creating a dual lead guitar line-up with Jeff Beck. Following Beck's departure in October 1966, the Yardbirds, tired from constant touring and recording, began to wind down. Page wanted to form a supergroup with him and Beck on guitars, and the Who's Keith Moon and John Entwistle on drums and bass, respectively. Vocalists Steve Winwood and Steve Marriott were also considered for the project. The group never formed, although Page, Beck, and Moon did record a song together in 1966, "Beck's Bolero", in a session that also included bassist-keyboardist John Paul Jones.
The Yardbirds played their final gig in July 1968 at Luton College of Technology in Bedfordshire. They were still committed to several concerts in Scandinavia, so drummer Jim McCarty and vocalist Keith Relf authorised Page and bassist Chris Dreja to use the Yardbirds' name to fulfill the band's obligations. Page and Dreja began putting a new line-up together. Page's first choice for the lead singer was Terry Reid, but Reid declined the offer and suggested Robert Plant, a singer for the Band of Joy and Hobbstweedle. Plant eventually accepted the position, recommending former Band of Joy drummer John Bonham. John Paul Jones inquired about the vacant position of bass guitarist, at the suggestion of his wife, after Dreja dropped out of the project to become a photographer. Page had known Jones since they were both session musicians, and agreed to let him join as the final member.
The four played together for the first time in a room below a record store on Gerrard Street in London. Page suggested that they attempt "Train Kept A-Rollin'", originally a jump blues song popularised in a rockabilly version by Johnny Burnette, which had been covered by the Yardbirds. "As soon as I heard John Bonham play", Jones recalled, "I knew this was going to be great ... We locked together as a team immediately". Before leaving for Scandinavia, the group took part in a recording session for the P. J. Proby album "Three Week Hero". The album's track "Jim's Blues", with Plant on harmonica, was the first studio track to feature all four future members of Led Zeppelin.
The band completed the Scandinavian tour as the New Yardbirds, playing together for the first time in front of a live audience at Gladsaxe Teen Clubs in Gladsaxe, Denmark, on 7 September 1968. Later that month, they began recording their first album, which was based on their live set. The album was recorded and mixed in nine days, and Page covered the costs. After the album's completion, the band were forced to change their name after Dreja issued a cease and desist letter, stating that Page was allowed to use the New Yardbirds moniker for the Scandinavian dates only. One account of how the new band's name was chosen held that Moon and Entwistle had suggested that a supergroup with Page and Beck would go down like a "lead balloon", an idiom for disastrous results. The group dropped the 'a' in "lead" at the suggestion of their manager, Peter Grant, so that those unfamiliar with the term would not pronounce it "leed". The word "balloon" was replaced by "zeppelin", a word which, according to music journalist Keith Shadwick, brought "the perfect combination of heavy and light, combustibility and grace" to Page's mind.
Grant secured a $143,000 advance contract from Atlantic Records in November 1968—at the time, the biggest deal of its kind for a new band. Atlantic was a label with a catalogue of mainly blues, soul, and jazz artists, but in the late 1960s it began to take an interest in British progressive rock acts. Record executives signed Led Zeppelin without having ever seen them. Under the terms of their contract, the band had autonomy in deciding when they would release albums and tour, and had the final say over the contents and design of each album. They would also decide how to promote each release and which tracks to release as singles. They formed their own company, Superhype, to handle all publishing rights.
The band began their first tour of the UK on 4 October 1968, still billed as the New Yardbirds; they played their first show as Led Zeppelin at the University of Surrey in Battersea on 25 October. Tour manager Richard Cole, who would become a major figure in the touring life of the group, organised their first North American tour at the end of the year. Their debut album, "Led Zeppelin", was released in the US during the tour on 12 January 1969 and peaked at number 10 on the "Billboard" chart; it was released in the UK, where it peaked at number 6, on 31 March. According to Stephen Thomas Erlewine, the album's memorable guitar riffs, lumbering rhythms, psychedelic blues, groovy, bluesy shuffles and hints of English folk music made it "a significant turning point in the evolution of hard rock and heavy metal".
In their first year Led Zeppelin completed four US and four UK concert tours, and also released their second album, "Led Zeppelin II". Recorded mostly on the road at various North American studios, it was an even greater commercial success than their first album, and reached the number one chart position in the US and the UK. The album further developed the mostly blues-rock musical style established on their debut release, creating a sound that was "heavy and hard, brutal and direct", and which would be highly influential and frequently imitated. Steve Waksman has suggested that "Led Zeppelin II" was "the musical starting point for heavy metal".
The band saw their albums as indivisible, complete listening experiences, disliking the re-editing of existing tracks for release as singles. Grant maintained an aggressive pro-album stance, particularly in the UK, where there were few radio and TV outlets for rock music. Without the band's consent, however, some songs were released as singles, particularly in the US. In 1969 an edited version of "Whole Lotta Love", a track from their second album, was released as a single in the US. It reached number four in the "Billboard" chart in January 1970, selling over one million copies and helping to cement the band's popularity. The group also increasingly shunned television appearances, citing their preference that their fans hear and see them in live concerts.
Following the release of their second album, Led Zeppelin completed several more US tours. They played initially in clubs and ballrooms, and then in larger auditoriums as their popularity grew. Some early Led Zeppelin concerts lasted more than four hours, with expanded and improvised live versions of their repertoire. Many of these shows have been preserved as bootleg recordings. It was during this period of intensive concert touring that the band developed a reputation for off-stage excess.
In 1970, Page and Plant retired to Bron-Yr-Aur, a remote cottage in Wales, to commence work on their third album, "Led Zeppelin III". The result was a more acoustic style that was strongly influenced by folk and Celtic music, and showcased the band's versatility. The album's rich acoustic sound initially received mixed reactions, with critics and fans surprised at the turn from the primarily electric arrangements of the first two albums, further fuelling the band's hostility to the musical press. It reached number one in the UK and US charts, but its stay would be the shortest of their first five albums. The album's opening track, "Immigrant Song", was released as a US single in November 1970 against the band's wishes, reaching the top twenty on the "Billboard" chart.
During the 1970s, Led Zeppelin reached new heights of commercial and critical success that made them one of the most influential groups of the era, eclipsing their earlier achievements. The band's image also changed as the members began to wear elaborate, flamboyant clothing, with Page taking the lead in a glittering moon-and-stars outfit. Led Zeppelin augmented their live shows with lasers, professional light shows and mirror balls. They began travelling in a private jet airliner, a Boeing 720 (nicknamed "the Starship"), rented out entire sections of hotels (including the Continental Hyatt House in Los Angeles, known colloquially as the "Riot House"), and became the subject of frequently repeated stories of debauchery. One involved John Bonham riding a motorcycle through a rented floor of the Riot House, while another involved the destruction of a room in the Tokyo Hilton, leading to the group being banned from that establishment for life. Although Led Zeppelin developed a reputation for trashing their hotel suites and throwing television sets out of the windows, some suggest that these tales have been exaggerated. According to music journalist Chris Welch, "[Led Zeppelin's] travels spawned many stories, but it was a myth that [they] were constantly engaged in acts of wanton destruction and lewd behaviour".
Led Zeppelin released their fourth album on 8 November 1971. It is variously referred to as "Led Zeppelin IV", "Untitled", "IV", or, due to the four symbols appearing on the record label, as "Four Symbols", "Zoso" or "Runes". The band had wanted to release the fourth album with no title or information, in response to the music press "going on about Zeppelin being a hype", but the record company wanted something on the cover, so it was agreed that four symbols would appear, representing both the four members of the band and the fact that it was their fourth album. With 37 million copies sold, "Led Zeppelin IV" is one of the best-selling albums in history, and its massive popularity cemented Led Zeppelin's status as superstars in the 1970s. By 2006, it had sold 23 million copies in the United States alone. The track "Stairway to Heaven", never released as a single, was the most requested and most played song on American rock radio in the 1970s. The group followed up the album's release with tours of the UK, Australasia, North America, Japan, and the UK again from late 1971 through early 1973.
Led Zeppelin's next album, "Houses of the Holy", was released in March 1973. It featured further experimentation by the band, who expanded their use of synthesisers and mellotron orchestration. The predominantly orange album cover, designed by the London-based design group Hipgnosis, depicts images of nude children climbing the Giant's Causeway in Northern Ireland. Although the children are not shown from the front, the cover was controversial at the time of the album's release. As with the band's fourth album, neither their name nor the album title was printed on the sleeve.
"Houses of the Holy" topped charts worldwide, and the band's subsequent concert tour of North America in 1973 broke records for attendance, as they consistently filled large auditoriums and stadiums. At Tampa Stadium in Florida, they played to 56,800 fans, breaking the record set by the Beatles' 1965 Shea Stadium concert and grossing $309,000. Three sold-out shows at Madison Square Garden in New York City were filmed for a motion picture, but the theatrical release of this project ("The Song Remains the Same") was delayed until 1976. Before the final night's performance, $180,000 ($ today) of the band's money from gate receipts was stolen from a safe deposit box at the Drake Hotel.
In 1974, Led Zeppelin took a break from touring and launched their own record label, Swan Song, named after an unreleased song. The record label's logo is based on a drawing called "Evening: Fall of Day" (1869) by William Rimmer. The drawing features a figure of a winged human-like being interpreted as either Apollo or Icarus. The logo can be found on Led Zeppelin memorabilia, especially T-shirts. In addition to using Swan Song as a vehicle to promote their own albums, the band expanded the label's roster, signing artists such as Bad Company, the Pretty Things and Maggie Bell. The label was successful while Led Zeppelin existed, but folded less than three years after they disbanded.
In 1975, Led Zeppelin's double album "Physical Graffiti" was their first release on the Swan Song label. It consisted of fifteen songs, of which eight had been recorded at Headley Grange in 1974 and seven had been recorded earlier. A review in "Rolling Stone" magazine referred to "Physical Graffiti" as Led Zeppelin's "bid for artistic respectability", adding that the only bands Led Zeppelin had to compete with for the title "The World's Best Rock Band" were the Rolling Stones and the Who. The album was a massive commercial and critical success. Shortly after the release of "Physical Graffiti", all previous Led Zeppelin albums simultaneously re-entered the top-200 album chart, and the band embarked on another North American tour, now employing sophisticated sound and lighting systems. In May 1975, Led Zeppelin played five sold-out nights at the Earls Court Arena in London, at the time the largest arena in Britain.
Following their triumphant Earls Court appearances, Led Zeppelin took a holiday and planned an autumn tour in America, scheduled to open with two outdoor dates in San Francisco. In August 1975, however, Plant and his wife Maureen were involved in a serious car crash while on holiday in Rhodes, Greece. Plant suffered a broken ankle and Maureen was badly injured; a blood transfusion saved her life. Unable to tour, he headed to the Channel Island of Jersey to spend August and September recuperating, with Bonham and Page in tow. The band then reconvened in Malibu, California. During this forced hiatus much of the material for their next album, "Presence", was written.
By this time, Led Zeppelin were the world's number one rock attraction, having outsold most bands of the time, including the Rolling Stones. "Presence", released in March 1976, marked a change in the Led Zeppelin sound towards more straightforward, guitar-based jams, departing from the acoustic ballads and intricate arrangements featured on their previous albums. Though it was a platinum seller, "Presence" received a mixed reaction among fans and the music press, with some critics suggesting that the band's excesses may have caught up with them. Page had begun using heroin during recording sessions for the album, a habit which may have affected the band's later live shows and studio recordings, although he has since denied this.
Because of Plant's injuries, Led Zeppelin did not tour in 1976. Instead, the band completed the concert film "The Song Remains the Same" and the accompanying soundtrack album. The film premiered in New York City on 20 October 1976, but was given a lukewarm reception by critics and fans. The film was particularly unsuccessful in the UK, where, unwilling to tour since 1975 because of their tax exile status, Led Zeppelin faced an uphill battle to recapture the public's affection.
In 1977, Led Zeppelin embarked on another major concert tour of North America. The band set another attendance record, with an audience of 76,229 at their Silverdome concert on 30 April. It was, according to the "Guinness Book of Records", the largest attendance to that date for a single act show. Although the tour was financially profitable, it was beset by off-stage problems. On 19 April, over 70 people were arrested as about 1,000 fans tried to gatecrash Cincinnati Riverfront Coliseum for two sold-out concerts, while others tried to gain entry by throwing rocks and bottles through glass doors. On 3 June, a concert at Tampa Stadium was cut short because of a severe thunderstorm, despite tickets indicating "Rain or Shine". A riot broke out, resulting in arrests and injuries.
After the 23 July show at the Day on the Green festival at the Oakland Coliseum in Oakland, California, Bonham and members of Led Zeppelin's support staff were arrested after a member of promoter Bill Graham's staff was badly beaten during the band's performance. The following day's second Oakland concert was the group's final live appearance in the United States. Two days later, as they checked in at a French Quarter hotel for their 30 July performance at the Louisiana Superdome, Plant received news that his five-year-old son, Karac, had died from a stomach virus. The rest of the tour was immediately cancelled, prompting widespread speculation about Led Zeppelin's future.
In November 1978, the group recorded at Polar Studios in Stockholm, Sweden. The resulting album, "In Through the Out Door", featured sonic experimentation that again drew mixed reactions from critics. Nevertheless, the album reached number one in the UK and the US in just its second week of release. With this album's release, Led Zeppelin's entire catalogue returned to the "Billboard" Top 200 in the weeks of 27 October and 3 November 1979.
In August 1979, after two warm-up shows in Copenhagen, Led Zeppelin headlined two concerts at the Knebworth Music Festival, playing to a crowd of approximately 104,000 on the first night. A brief, low-key European tour was undertaken in June and July 1980, featuring a stripped-down set without the usual lengthy jams and solos. On 27 June, at a show in Nuremberg, Germany, the concert came to an abrupt halt in the middle of the third song, when Bonham collapsed onstage and was rushed to hospital. Speculation in the press suggested that his collapse had been the result of excessive alcohol and drug use, but the band claimed that he had simply overeaten.
A North American tour, the band's first since 1977, was scheduled to commence on 17 October 1980. On 24 September, Bonham was picked up by Led Zeppelin assistant Rex King to attend rehearsals at Bray Studios. During the journey, Bonham asked to stop for breakfast, where he downed four quadruple vodkas with a ham roll. After taking a bite of the ham roll he said to his assistant, "breakfast". He continued to drink heavily after arriving at the studio. The rehearsals were halted late that evening and the band retired to Page's house—the Old Mill House in Clewer, Windsor.
After midnight, Bonham, who had fallen asleep, was taken to bed and placed on his side. At 1:45 pm the next day, Benji LeFevre (Led Zeppelin's new tour manager) and John Paul Jones found Bonham dead. The cause of death was asphyxiation from vomit; the finding was accidental death. An autopsy found no other recreational drugs in Bonham's body. Although he had recently begun to take Motival (a cocktail of the antipsychotic fluphenazine and the tricyclic antidepressant nortriptyline) to combat his anxiety, it is unclear if these substances interacted with the alcohol in his system. Bonham's remains were cremated and his ashes interred on 12 October 1980, at Rushock parish church, Worcestershire.
The planned North American tour was cancelled, and despite rumours that Cozy Powell, Carmine Appice, Barriemore Barlow, Simon Kirke, or Bev Bevan would join the group as his replacement, the remaining members decided to disband. A 4 December 1980 press statement stated that, "We wish it to be known that the loss of our dear friend, and the deep sense of undivided harmony felt by ourselves and our manager, have led us to decide that we could not continue as we were." The statement was signed simply "Led Zeppelin".
Following Zeppelin's dissolution, the first significant project for the members was the Honeydrippers, which Plant initially formed in 1981, and which released its only album in 1984. The group featured Page on lead guitar, along with studio musicians and friends of the pair, including Jeff Beck, Paul Shaffer, and Nile Rodgers. Plant focused on a different direction from Zeppelin, playing standards and in a more R&B style, highlighted by a cover of "Sea of Love" that peaked at number three on the "Billboard" chart in early 1985.
"Coda" – a collection of Zeppelin outtakes and unused tracks – was issued in November 1982. It included two tracks from the Royal Albert Hall in 1970, one each from the "Led Zeppelin III" and "Houses of the Holy" sessions, and three from the "In Through the Out Door" sessions. It also featured a 1976 Bonham drum instrumental with electronic effects added by Page, called "Bonzo's Montreux".
On 13 July 1985, Page, Plant, and Jones reunited for the Live Aid concert at JFK Stadium, Philadelphia, playing a short set featuring drummers Tony Thompson and Phil Collins, and bassist Paul Martinez. Collins had contributed to Plant's first two solo albums while Martinez was a member of Plant's solo band. The performance was marred by a lack of rehearsal with the two drummers, Page's struggles with an out-of-tune guitar, poorly functioning monitors, and Plant's hoarse voice. Page described the performance as "pretty shambolic", while Plant characterised it as an "atrocity".
The three members reunited again on 14 May 1988, for the Atlantic Records 40th Anniversary concert, with Bonham's son Jason on drums. The result was again disjointed: Plant and Page had argued immediately prior to taking the stage about whether to play "Stairway to Heaven", and Jones' keyboards were absent from the live television feed. Page described the performance as "one big disappointment" and Plant said "the gig was foul".
The first Led Zeppelin box set, featuring tracks remastered under Page's supervision, was released in 1990 and bolstered the band's reputation, leading to abortive discussions among members about a reunion. This set included four previously unreleased tracks, including a version of Robert Johnson's "Travelling Riverside Blues". The song peaked at number seven on the "Billboard" Album Rock Tracks chart. "Led Zeppelin Boxed Set 2" was released in 1993; the two box sets together contained all known studio recordings, as well as some rare live tracks.
In 1994, Page and Plant reunited for a 90-minute "UnLedded" MTV project. They later released an album, "No Quarter", which featured some reworked Led Zeppelin songs, and embarked on a world tour the following year. This is said to be the beginning of a rift between the band members, as Jones was not even told of the reunion.
In 1995, Led Zeppelin were inducted into the United States Rock and Roll Hall of Fame by Steven Tyler and Joe Perry of Aerosmith. Jason and Zoë Bonham also attended, representing their late father. At the induction ceremony, the band's inner rift became apparent when Jones joked upon accepting his award, "Thank you, my friends, for finally remembering my phone number", causing consternation and awkward looks from Page and Plant. Afterwards, they played one brief set with Tyler and Perry, with Jason Bonham on drums, and then a second with Neil Young, this time with Michael Lee playing the drums.
In 1997, Atlantic released a single edit of "Whole Lotta Love" in the US and the UK, the only single the band released in their homeland, where it peaked at number 21. November 1997 saw the release of "Led Zeppelin BBC Sessions", a two-disc set largely recorded in 1969 and 1971. Page and Plant released another album called "Walking into Clarksdale" in 1998, featuring all new material, but after disappointing sales, the partnership dissolved before a planned Australian tour.
2003 saw the release of the triple live album "How the West Was Won", and "Led Zeppelin DVD", a six-hour chronological set of live footage that became the best-selling music DVD in history. In July 2007, Atlantic/Rhino and Warner Home Video announced three Zeppelin titles to be released that November: "Mothership", a 24-track best-of spanning the band's career; a reissue of the soundtrack "The Song Remains the Same", including previously unreleased material; and a new DVD. Zeppelin also made their catalogue legally available for download, becoming one of the last major rock bands to do so.
On 10 December 2007, Zeppelin reunited for the Ahmet Ertegun Tribute Concert at the O2 Arena in London, with Jason Bonham again taking his father's place on drums. According to "Guinness World Records 2009", the show set a record for the "Highest Demand for Tickets for One Music Concert" as 20 million requests were submitted online. Critics praised the performance and there was widespread speculation about a full reunion. Page, Jones and Jason Bonham were reported to be willing to tour, and to be working on material for a new Zeppelin project. Plant continued his touring commitments with Alison Krauss, stating in September 2008 that he would not record or tour with the band. "I told them I was busy and they'd simply have to wait," he recalled in 2014. "I would come around eventually, which they were fine with – at least to my knowledge. But it turns out they weren't. And what's even more disheartening, Jimmy used it against me."
Jones and Page reportedly looked for a replacement for Plant; candidates included Steven Tyler of Aerosmith and Myles Kennedy of Alter Bridge. However, in January 2009, it was confirmed that the project had been abandoned. "Getting the opportunity to play with Jimmy Page, John Paul Jones and Jason Bonham was pretty special," Kennedy recalled. "That is pretty much the zenith right there. That was a crazy, good experience. It's something I still think of often ... It's so precious to me."
A film of the O2 performance, "Celebration Day", premiered on 17 October 2012 and was released on DVD on 19 November. The film grossed $2 million in one night, and the live album peaked at number 4 in the UK and number 9 in the US. Following the film's premiere, Page revealed that he had been remastering the band's discography. The first wave of albums, "Led Zeppelin", "Led Zeppelin II", and "Led Zeppelin III", were released on 2 June 2014. The second wave of albums, "Led Zeppelin IV" and "Houses of the Holy", were released on 27 October 2014. "Physical Graffiti" was released on 23 February 2015, almost forty years to the day after the original release. The fourth and final wave of studio album reissues, "Presence", "In Through the Out Door", and "Coda", were released on 31 July 2015.
Through this remastering project, each studio album was reissued on CD and vinyl and was also available in a Deluxe Edition, which contained a bonus disc of previously unheard material ("Coda"s Deluxe Edition included two bonus discs). Each album was also available in a Super Deluxe Edition Box Set, which included the remastered album and bonus disc on both CD and 180-gram vinyl, a high-definition audio download card of all content at 96 kHz/24 bit, a hardbound book filled with rare and previously unseen photos and memorabilia, and a high-quality print of the original album cover.
On 6 November 2015, the "Mothership" compilation was reissued using the band's newly remastered audio tracks. The reissuing campaign continued the next year with the re-release of "BBC Sessions" on 16 September 2016. The reissue contained a bonus disc with nine unreleased BBC recordings, including the heavily bootlegged but never officially released "Sunshine Woman".
To commemorate the band's 50th anniversary, Page, Plant and Jones announced an official illustrated book celebrating 50 years since the formation of the band. Also released for the celebration was a reissue of "How the West Was Won" on 23 March 2018, which includes the album's first pressing on vinyl. For Record Store Day on 21 April 2018, Led Zeppelin released a 7" single "Rock and Roll" (Sunset Sound Mix)/"Friends" (Olympic Studio Mix), their first single in 21 years.
Led Zeppelin's music was rooted in the blues. The influence of American blues artists such as Muddy Waters and Skip James was particularly apparent on their first two albums, as was the distinct country blues style of Howlin' Wolf. Tracks were structured around the twelve-bar blues on every studio album except for one, and the blues directly and indirectly influenced other songs both musically and lyrically. The band were also strongly influenced by the music of the British, Celtic, and American folk revivals. Scottish folk guitarist Bert Jansch helped inspire Page, who adapted Jansch's open tunings and aggressive strokes into his own playing. The band also drew on a wide variety of genres, including world music, and elements of early rock and roll, jazz, country, funk, soul, and reggae, particularly on "Houses of the Holy" and the albums that followed.
The material on the first two albums was largely constructed out of extended jams of blues standards and folk songs. This method led to the mixing of musical and lyrical elements of different songs and versions, as well as improvised passages, to create new material, but would lead to later accusations of plagiarism and legal disputes over copyright. Usually the music was developed first, sometimes with improvised lyrics that might then be rewritten for the final version of the song. From the visit to Bron-Yr-Aur in 1970, the songwriting partnership between Page and Plant became predominant, with Page supplying the music, largely via his acoustic guitar, and Plant emerging as the band's chief lyricist. Jones and Bonham then added to the material, in rehearsal or in the studio, as a song was developed. In the later stages of the band's career, Page took a back seat in composition and Jones became increasingly important in producing music, often composed on the keyboard. Plant would then add lyrics before Page and Bonham developed their parts.
Early lyrics drew on the band's blues and folk roots, often mixing lyrical fragments from different songs. Many of the band's songs dealt with themes of romance, unrequited love and sexual conquest, which were common in rock, pop and blues music. Some of their lyrics, especially those derived from the blues, have been interpreted as misogynistic. Particularly on "Led Zeppelin III", they incorporated elements of mythology and mysticism into their music, which largely grew out of Plant's interest in legends and history. These elements were often taken to reflect Page's interest in the occult, which resulted in accusations that the recordings contained subliminal satanic messages, some of which were said to be contained in backmasking; these claims were generally dismissed by the band and music critics. The pastoral fantasies in Plant's songwriting were inspired by the landscape of the Black Country region and J. R. R. Tolkien's high fantasy novel "The Lord of the Rings". Susan Fast argues that as Plant emerged as the band's main lyricist, the songs more obviously reflected his alignment with the West Coast counterculture of the 1960s. In the later part of the band's career Plant's lyrics became more autobiographical, and less optimistic, drawing on his own experiences and circumstances.
According to musicologist Robert Walser, "Led Zeppelin's sound was marked by speed and power, unusual rhythmic patterns, contrasting terraced dynamics, singer Robert Plant's wailing vocals, and guitarist Jimmy Page's heavily distorted crunch". These elements mean that they are often cited as one of the originators of hard rock and heavy metal and they have been described as the "definitive heavy metal band", although the band members have often eschewed the label. Part of this reputation depends on the band's use of distorted guitar riffs on songs like "Whole Lotta Love" and "The Wanton Song". Often riffs were not doubled by guitar, bass and drums exactly, but instead there were melodic or rhythmic variations; as in "Black Dog", where three different time signatures are used. Page's guitar playing incorporated elements of the blues scale with those of eastern music. Plant's use of high-pitched shrieks has been compared to Janis Joplin's vocal technique. Robert Christgau found him integral to the group's heavy "power blues" aesthetic, functioning as a "mechanical effect" similarly to Page's guitar parts. While noting Plant "hints at real feeling" on some of their acoustic songs, Christgau believed he abandoned traditional blues singing's emphasis on emotional projection in favor of vocal precision and dynamics: "Whether he is mouthing sexist blues cliches or running through one of the band's half-audible, half-comprehensible ... lyrics about chivalry or the counter-culture, his voice is devoid of feeling. Like the tenors and baritones of yore, he wants his voice to be an instrument—specifically, an electric guitar." Bonham's drumming was noted for its power, his rapid rolls and his fast beats on a single bass drum; while Jones' basslines have been described as melodic and his keyboard playing added a classical touch to the band's sound.
Led Zeppelin have been widely viewed as a hard rock band, although Christgau regarded them as art rock as well. According to popular music scholar Reebee Garofalo, "because hip critics could not find a constructive way of positioning themselves in relation to Led Zeppelin's ultra-macho presentation, they were excluded from the art rock category despite their broad range of influences." As Christgau wrote in 1972, the band could be considered art rock because they "relate to rock and roll not organically but intellectually", idealizing the "amplified beat" as "a kind of formal challenge". Unlike their contemporaries in Jethro Tull and Yes, who use "the physical compulsion of beat and volume to involve the mind", Led Zeppelin "make body music of an oddly cerebral cast, arousing aggression rather than sexuality." As such, along with other second-generation English hard rock bands like Black Sabbath and Mott the Hoople, they can attract both intellectuals and working-class youths in "a strange potential double audience." Years later, "In Through the Out Door"s "tuneful synthesizer pomp" further confirmed for Christgau that they were an art rock band.
Page stated that he wanted Led Zeppelin to produce music that had "light and shade". This began to be more clearly realised beginning with "Led Zeppelin III", which made greater use of acoustic instruments. This approach has been seen as exemplified in the fourth album, particularly on "Stairway to Heaven", which begins with acoustic guitar and recorder and ends with drums and heavy electric sounds. Towards the end of their recording career, they moved to a more mellow and progressive sound, dominated by Jones' keyboard motifs. They also increasingly made use of various layering and production techniques, including multi-tracking and overdubbed guitar parts. Their emphasis on the sense of dynamics and ensemble arrangement has been seen as producing an individualistic style that transcends any single music genre. Ian Peddie argues that they were "... loud, powerful and often heavy, but their music was also humorous, self-reflective and extremely subtle".
Many have considered Led Zeppelin to be one of the most successful, innovative, and influential bands in the history of rock music. Rock critic Mikal Gilmore said, "Led Zeppelin—talented, complex, grasping, beautiful and dangerous—made one of the most enduring bodies of composition and performance in twentieth-century music, despite everything they had to overpower, including themselves".
Led Zeppelin have influenced hard rock and heavy metal bands such as Deep Purple, Black Sabbath, Rush, Queen, Aerosmith, the Black Crowes, and Megadeth as well as progressive metal bands like Tool and Dream Theater. They influenced some early punk and post-punk bands, among them the Ramones, Joy Division and the Cult. They were also an important influence on the development of alternative rock, as bands adapted elements from the "Zeppelin sound" of the mid-1970s, including the Smashing Pumpkins, Nirvana, Pearl Jam, and Soundgarden. Bands and artists from diverse genres have acknowledged the influence of Led Zeppelin, such as Madonna, Shakira, Lady Gaga, Kesha, and Katie Melua.
Led Zeppelin have been credited with a major impact on the nature of the music business, particularly in the development of album-orientated rock (AOR) and stadium rock. In 1988 John Kalodner, then-A&R executive of Geffen Records, remarked that "In my opinion, next to the Beatles they're the most influential band in history. They influence the way music is on records, AOR radio, concerts. They set the standards for the AOR-radio format with 'Stairway to Heaven,' having AOR hits without necessarily having Top 40 hits. They're the ones who did the first real big arena concert shows, consistently selling out and playing stadiums without support. People can do as well as them, but nobody surpasses them". Andrew Loog Oldham, the former producer and manager of the Rolling Stones, commented on how Led Zeppelin had a major influence on the record business, and the way rock concerts were managed and presented to huge audiences. In 2007, they were a featured artist in the stadium rock episode of the BBC/VH1 series "Seven Ages of Rock".
The band have sold over 200 million albums worldwide according to some sources, while others state that they have sold in excess of 300 million records, including 111.5 million certified units in the United States. According to the Recording Industry Association of America, Led Zeppelin are the third-highest-selling band, the fifth-highest-selling music act in the US, and one of only three acts to earn five or more Diamond albums. They achieved eight consecutive number-ones on the UK Albums Chart, a record for most consecutive UK number-one albums shared with ABBA. Led Zeppelin remain one of the most bootlegged artists in the history of rock music.
Led Zeppelin also made a significant cultural impact. Jim Miller, editor of "Rolling Stone Illustrated History of Rock & Roll", argues that "on one level, Led Zeppelin represents the final flowering of the sixties' psychedelic ethic, which casts rock as passive sensory involvement". Led Zeppelin have also been described as "the quintessential purveyors" of masculine and aggressive "cock rock", although this assertion has been challenged. The band's fashion-sense has been seminal; Simeon Lipman, head of pop culture at Christie's auction house, has commented that "Led Zeppelin have had a big influence on fashion because the whole aura surrounding them is so cool, and people want a piece of that". Led Zeppelin laid the foundation for the big hair of 1980s glam metal bands such as Mötley Crüe and Skid Row. Other musicians have also adapted elements from Led Zeppelin's attitude to clothes, jewellery and hair, such as the hipster flares and tight band T-shirts of Kings of Leon, shaggy hair, clingy T-shirts and bluesman hair of Jack White of the White Stripes, and Kasabian guitarist Sergio Pizzorno's silk scarves, trilbies and side-laced tight jeans.
Led Zeppelin have collected many honours and awards throughout the course of their career. They were inducted into the Rock and Roll Hall of Fame in 1995, and the UK Music Hall of Fame in 2006. Among the band's awards are an American Music Award in 2005, and the Polar Music Prize in 2006. Led Zeppelin were the recipient of a Grammy Lifetime Achievement Award in 2005, and four of their recordings have been inducted into the Grammy Hall of Fame. They have been awarded five Diamond albums, as well as fourteen Multi-Platinum, four Platinum and one Gold album in the United States, while in the UK they have five Multi-Platinum, six Platinum, one Gold and four Silver albums. In addition to listing five of their albums among "the 500 Greatest Albums of All Time", "Rolling Stone" named Led Zeppelin the 14th-greatest artist of all time in 2004.
In 2005, Page was appointed an Officer of the Order of the British Empire in recognition of his charity work, and in 2009 Plant was honoured as a Commander of the Order of the British Empire for his services to popular music. The band are ranked number one on VH1's "100 Greatest Artists of Hard Rock" and "Classic Rock"'s "50 best live acts of all time". They were named as the best rock band in a poll by BBC Radio 2. They were awarded an Ivor Novello Award for "Outstanding Contribution to British Music" in 1977, as well as a "Lifetime Achievement Award" at the 42nd Annual Ivor Novello awards ceremony in 1997. The band were honoured at the 2008 MOJO Awards with the "Best Live Act" prize for their one-off reunion, and were described as the "greatest rock and roll band of all time". Led Zeppelin were named as 2012 recipients of the Kennedy Center Honors.
Edward Plunkett, 18th Baron of Dunsany
Edward John Moreton Drax Plunkett, 18th Baron of Dunsany (24 July 1878 – 25 October 1957), was an Anglo-Irish writer and dramatist; his work, mostly in the fantasy genre, was published under the name Lord Dunsany. More than ninety books of his work were published in his lifetime, and both original work and compilations have continued to appear. Dunsany's œuvre includes many hundreds of published short stories, as well as plays, novels and essays. He achieved great fame and success with his early short stories and plays, and during the 1910s was considered one of the greatest living writers of the English-speaking world; he is today best known for his 1924 fantasy novel "The King of Elfland's Daughter" and "The Gods of Pegāna", wherein he devised his own fictional pantheon and laid the groundwork for the fantasy genre. He was the inventor of an asymmetric version of chess called Dunsany's Chess.
Born and raised in London, to the second-oldest title (created 1439) in the Irish peerage, Dunsany lived much of his life at what may be Ireland's longest-inhabited house, Dunsany Castle near Tara, worked with W. B. Yeats and Lady Gregory, received an honorary doctorate from Trinity College, Dublin, was chess and pistol-shooting champion of Ireland, and travelled and hunted extensively. He died in Dublin after an attack of appendicitis.
Edward Plunkett ("Dunsany"), known to his family as "Eddie," was the first son of John William Plunkett, 17th Baron of Dunsany (1853–1899), and his wife, Ernle Elizabeth Louisa Maria Grosvenor Ernle-Erle-Drax, née Ernle Elizabeth Louisa Maria Grosvenor Burton (1855–1916).
From a historically wealthy and famous family, Lord Dunsany was related to many well-known Irish figures. He was a kinsman of the Catholic Saint Oliver Plunkett, the martyred Archbishop of Armagh whose ring and crozier head are still held by the Dunsany family. He was also related to the prominent Anglo-Irish unionist and later nationalist / Home Rule politician Sir Horace Plunkett and George Count Plunkett, Papal Count and Republican politician, father of Joseph Plunkett, executed for his part in the 1916 Rising.
His mother was a cousin of Sir Richard Burton, and from her he inherited his considerable height of 6'4". The Countess of Fingall, wife of Dunsany's cousin, the Earl of Fingall, wrote a best-selling account of the life of the aristocracy in Ireland in the late 19th century and early 20th century called "Seventy Years Young".
Plunkett's only sibling to reach adulthood, a younger brother, was the noted British naval officer Sir Reginald Drax; the two were estranged from around 1916, for reasons not fully clear but connected to their mother's will. Another younger brother died in infancy.
Edward Plunkett grew up at the family properties, most notably Dunstall Priory in Shoreham, Kent, and Dunsany Castle in County Meath, but also family homes such as in London. His schooling was at Cheam, Eton College and finally the Royal Military College, Sandhurst, which he entered in 1896.
The title passed to him at his father's death at a fairly young age, in 1899, and the young Lord Dunsany returned to Dunsany Castle after war duty, in 1901. In that year he was also confirmed as an elector for the Representative Peers for Ireland in the House of Lords.
In 1903, he met Lady Beatrice Child Villiers (1880–1970), youngest daughter of The 7th Earl of Jersey (head of the Jersey banking family), who was then living at Osterley Park, and they were married in 1904. Their only child, Randal, was born in 1906. Beatrice was supportive of Dunsany's interests and assisted him in his writing by typing his manuscripts, helping to select work for his collections, including the 1954 retrospective short story collection, and overseeing his literary heritage after his death.
The Dunsanys were socially active in both Dublin and London and travelled between their homes in Meath, London and Kent, other than during World Wars I and II and the Irish War of Independence. Dunsany himself circulated with many other literary figures of the time. To many of these in Ireland he was first introduced by his uncle, the co-operative pioneer Sir Horace Plunkett, who also helped to manage his estate and investments for a time. He was friendly with, for example, George William Russell, Oliver St. John Gogarty and, for a time, W. B. Yeats. He also socialised at times with George Bernard Shaw and H.G. Wells and was a friend of Rudyard Kipling.
In 1910 Dunsany commissioned a two-storey extension to Dunsany Castle, with a billiards room, bedrooms and other facilities. The billiards room includes the crests of all the Lords Dunsany up to the 18th.
Dunsany served as a Second Lieutenant in the Coldstream Guards during the Second Boer War.
He volunteered in the First World War and was appointed Captain in the Royal Inniskilling Fusiliers. He was stationed for some time at Ebrington Barracks in Derry. Having heard of disturbances in Dublin in 1916, during the Easter Rising, while on leave, he drove in to offer assistance and was wounded, with a bullet lodged in his skull. After recovery at Jervis Street Hospital and later what was then the King George V Hospital (now St. Bricin's Military Hospital), he returned to duty. His military belt was lost in this episode and was later used at the burial of Michael Collins. Having been refused forward positioning in 1916, being listed as valuable as a trainer, in the latter stages of the war he spent time in the trenches and in the very last period wrote propaganda material for the War Office with MI7b(1). At Dunsany Castle there is a book of wartime photos with lost members of his command marked.
During the Irish War of Independence, Dunsany was charged with violating the Restoration of Order in Ireland Regulations, tried by court-martial on 4 February 1921, convicted, and sentenced to pay a fine of 25 pounds or serve three months in prison without labour. The Crown Forces had searched Dunsany Castle and had found two double-barrelled shotguns, two rook rifles, four Very pistols, an automatic pistol, and a large quantity of pistol ammunition, along with shotgun and rifle ammunition.
During the Second World War, Dunsany signed up for the Irish Army Reserve and the British Home Guard, the two countries' local defence forces, and was especially active in Shoreham, Kent, the most-bombed village in England during the Battle of Britain.
Dunsany's fame arose chiefly from his prolific writings, and he was involved with the Irish Literary Revival. Supporting the Revival, Dunsany was a major donor to the Abbey Theatre, and he moved in Irish literary circles. He was well acquainted with W. B. Yeats (who rarely acted as editor but gathered and published a Dunsany selection), Lady Gregory, Percy French, "AE" Russell, Oliver St John Gogarty, Padraic Colum (with whom he jointly wrote a play) and others. He befriended and supported Francis Ledwidge, to whom he gave the use of his library, and Mary Lavin.
Dunsany made his first literary tour to the United States in 1919 and made further such visits right up to the 1950s, in the early years mostly to the eastern seaboard and later, notably, to California.
Dunsany's own work, and contribution to the Irish literary heritage, was recognised through an honorary degree from Trinity College, Dublin.
In 1940, Dunsany was appointed Byron Professor of English in Athens University, Greece. Having reached Athens by a circuitous route, he was so successful that he was offered a post as Professor of English in Istanbul. However, he had to be evacuated due to the German invasion of Greece in April 1941, returning home by an even more complex route than he had come, his travels forming a basis for a long poem published in book form (A Journey, in 5 cantos: The Battle of Britain, The Battle of Greece, The Battle of the Mediterranean, Battles Long Ago, The Battle of the Atlantic; Special edition January 1944). Olivia Manning's character, "Lord Pinkrose", in her novel sequence, the "Fortunes of War", was a mocking portrait of Dunsany during this period.
In 1947, Dunsany transferred his Meath estate to his son and heir under a trust, and settled in Kent, at his Shoreham house, Dunstall Priory and farm, not far from the home of Rudyard Kipling, a friend. He visited Ireland only occasionally thereafter, and engaged actively in life in Shoreham and London. He also began a new period of visits to the United States, notably California, as recounted in Hazel Littlefield-Smith's biographical "Dunsany, King of Dreams."
In 1957, Lord Dunsany became ill while eating with the Earl and Countess of Fingall at Dunsany, in what proved to be an attack of appendicitis, and died in hospital in Dublin at the age of 79. He had directed that he be buried in the churchyard of the ancient church of St. Peter and St. Paul, Shoreham, Kent, in memory of shared war times. His funeral was attended by a wide range of family (including the Pakenhams, Jerseys and Fingalls) and Shoreham figures, and representatives of his old regiment and various bodies in which he had taken an interest. A memorial service was held at Kilmessan in Meath with a reading of Crossing the Bar, which was noted as coinciding with a passing flock of geese.
Lady Beatrice survived Lord Dunsany, living on primarily at Shoreham, overseeing his literary legacy until her death in 1970, while their son, Randal, succeeded him in the Barony, and was in turn succeeded by his grandson, the artist Edward Plunkett, to whom literary rights passed directly.
Aside from his literary work, Dunsany was a keen chess player, set chess puzzles for journals including "The Times" (of London), played José Raúl Capablanca to a draw (in a simultaneous exhibition), and also invented Dunsany's Chess, an asymmetric chess variant that is notable for not involving any fairy pieces, unlike many variants that require the player to learn unconventional piece movements. He was president of both the Irish Chess Union and the Kent County Chess Association for some years and of Sevenoaks Chess Club for 54 years.
Dunsany was a keen horseman, hunter and sportsman, for many years hosting the hounds of a local hunt as well as hunting in parts of Africa, and was at one time the pistol-shooting champion of Ireland.
Dunsany also campaigned for animal rights, being known especially for his opposition to the "docking" of dogs' tails, and was president of the West Kent branch of the RSPCA in his later years.
He enjoyed cricket, provided the local cricket ground situated near Dunsany Crossroads, and later played for and presided at Shoreham Cricket Club in Kent.
He was a supporter of Scouting over many years, serving as President of the Sevenoaks district Boy Scouts Association. He also supported the amateur drama group the Shoreham Players.
Dunsany provided support for the British Legion in both Ireland and Kent, including grounds in Trim and poetry for the Irish branch's annual memorial service on a number of occasions.
Dunsany was a prolific writer, penning short stories, novels, plays, poetry, essays and autobiography, and publishing over 90 books in his lifetime, not including individual plays. Books have continued to appear, with more than 120 having been issued as of 2017. Dunsany's works have been published in many languages.
The then Edward Plunkett began his authorial career in the late 1890s with a few published verses, such as "Rhymes from a Suburb" and "The Spirit of the Bog", but he made a lasting impression in 1905 when he burst onto the publishing scene, writing as Lord Dunsany, with the well-received collection "The Gods of Pegāna."
Dunsany's most notable fantasy short stories were published in collections from 1905 to 1919. Fantasy did not yet exist as a recognised genre, so the stories were received simply as a curious form of literature. He paid for the publication of the first collection, "The Gods of Pegāna," earning a commission on sales; he never had to do so again, as the vast majority of his extensive writings sold.
The stories in his first two books, and perhaps the beginning of his third, were set within an invented world, Pegāna, with its own gods, history and geography. Starting with this book, Dunsany's name is linked to that of Sidney Sime, his chosen artist, who illustrated much of his work, notably until 1922.
Dunsany's style varied significantly throughout his writing career. Prominent Dunsany scholar S. T. Joshi has described these shifts as Dunsany moving on after he felt he had exhausted the potential of a style or medium. From the naïve fantasy of his earliest writings, through his early short-story work in 1904–1908, he turned to the self-conscious fantasy of "The Book of Wonder" in 1912, in which he almost seems to be parodying his lofty early style.
Each of his collections varies in mood; "A Dreamer's Tales" varies from the wistfulness of "Blagdaross" to the horrors of "Poor Old Bill" and "Where the Tides Ebb and Flow" to the social satire of "The Day of the Poll."
The opening paragraph of "The Hoard of the Gibbelins" from "The Book of Wonder," (1912) gives a good indication of both the tone and tenor of Dunsany's style at the time:
After "The Book of Wonder," Dunsany began to write plays – many of which were even more successful, at the time, than his early story collections – while also continuing to write short stories. He continued to write plays for the theatre into the 1930s, including the famous "If", and a number for radio production.
Although many of Dunsany's stage plays were successfully produced within his lifetime, he also wrote a number of "chamber plays" (or closet dramas), which were intended only to be read privately (as if they were stories) or performed on the radio, rather than staged. Some of Dunsany's chamber or radio plays contain supernatural events – such as a character spontaneously appearing out of thin air, or vanishing in full view of the audience – without any explanation of how the effect is to be staged, a matter of no importance, since Dunsany did not intend these works to be performed live and visible.
Following a successful lecture touring in the US in 1919–1920 and with his reputation now principally related to his plays, Dunsany temporarily reduced his output of short stories, concentrating on plays, novels and poetry for a time.
His poetry, now little seen, was for a time so popular that it is recited by the lead character of F. Scott Fitzgerald's "This Side of Paradise," and one of his poems, the sonnet "A Dirge of Victory," was the only poem included in the Armistice Day edition of the Times of London.
Launching another phase of his work, Dunsany's first novel, "Don Rodriguez: Chronicles of Shadow Valley," was published in 1922. It is set in "a Romantic Spain that never was," and follows the adventures of a young nobleman, Don Rodriguez, and his servant in their search for a castle for Rodriguez. It has been argued that Dunsany's inexperience with the novel form shows in the episodic nature of "Don Rodriguez." In 1924, Dunsany published his second novel, "The King of Elfland's Daughter," a return to his early style of writing, which is considered by many to be Dunsany's finest novel and a classic of fantasy writing. In his next novel, "The Charwoman's Shadow," Dunsany returned to the Spanish milieu and to the light style of "Don Rodriguez," to which it is related.
Though his style and medium shifted frequently, Dunsany's thematic concerns remained essentially the same. Many of Dunsany's later novels had an explicitly Irish theme, from the semi-autobiographical "The Curse of the Wise Woman" to "His Fellow Men."
One of Dunsany's best-known characters was Joseph Jorkens, an obese middle-aged raconteur who frequented the fictional Billiards Club in London, and who would tell fantastic stories if someone would buy him a large whiskey and soda. From his tales, it was obvious that Mr Jorkens had travelled to all seven continents, was extremely resourceful, and well-versed in world cultures, but always came up short on becoming rich and famous. The "Jorkens" books, which sold well, were among the first of a type which was to become popular in fantasy and science fiction writing: extremely improbable "club tales" told at a gentleman's club or bar.
Dunsany's writing habits were considered peculiar by some. Lady Beatrice said that "He always sat on a crumpled old hat while composing his tales." (The hat was eventually stolen by a visitor to Dunsany Castle.) Dunsany almost never rewrote anything; everything he ever published was a first draft. Much of his work was penned with quill pens, which he made himself; Lady Beatrice was usually the first to see the writings, and would help type them. It has been said that Lord Dunsany would sometimes conceive stories while hunting, and would return to the Castle and draw in his family and servants to re-enact his visions before he set them on paper.
Dunsany's work was translated from an early stage, to languages including Spanish, French, Japanese, German, Italian, Dutch, Russian, Czech and Turkish. His uncle, Horace Plunkett, mentioned that he had been translated into 14 languages already by the 1920s.
Lord Dunsany was a Fellow of the Royal Society of Literature, a member, and at one point the President, of the Authors' Society, and likewise President of the Shakespeare Reading Society from 1938 until his death in 1957, succeeded by Sir John Gielgud.
Dunsany was also a Fellow of the Royal Geographical Society, and an honorary member of the Institut Historique et Heraldique de France.
He was initially an Associate Member of the Irish Academy of Letters, founded by Yeats and others, and later a full member. At one of their meetings, after 1922, he asked Seán Ó Faoláin, who was presiding, "Do we not toast the King?" Ó Faoláin replied that there was only one toast: to the Nation; but after it was given and Ó Faoláin had called for coffee, he saw Dunsany, standing quietly among the bustle, raise his glass discreetly, and whisper "God bless him".
"The Curse of the Wise Woman" received the Harmsworth Literary Award in Ireland.
Dunsany received an honorary doctorate, D.Litt., from Trinity College, Dublin, in 1940.
Dunsany was nominated for the Nobel Prize by Irish PEN, but lost to Bertrand Russell.
S. T. Joshi and Darrell Schweitzer have been working on the Dunsany œuvre for over twenty years, gathering stories and essays and reference material, and producing both an initial bibliography (together) and scholarly studies of Dunsany's work (separate works). They issued an updated version of the bibliography in 2013. Joshi edited "The Collected Jorkens", "The Ginger Cat and other lost plays" and co-edited "The Ghost in the Corner and other stories". Both are well-known figures in the fields of speculative fiction.
In the late 1990s a curator, J.W. (Joe) Doyle, was appointed by the Dunsany estate, working at Dunsany Castle, among other things locating and organising the author's manuscripts, typescripts and other materials. Doyle discovered both works known to exist but "lost", such as the plays "The Ginger Cat" and "The Murderers," some Jorkens stories, and the novel "The Pleasures of a Futuroscope" (subsequently published by Hippocampus Press) and unknown, unpublished works, notably including "The Last Book of Jorkens", to the first edition of which he wrote an introduction, and an unnamed 1956 short story collection, published as part of "The Ghost in the Corner and other stories" in 2017.
In the 2000s a PhD researcher, Tania Scott, from the University of Glasgow, worked on Dunsany for some time, and has spoken at literary and other conventions. A Swedish fan, Martin Andersson, has also been active in research and publication in the mid-2010s.
Dunsany's literary rights passed from the author to a Trust, which still owns them. These rights were first managed by Beatrice, Lady Dunsany, and are currently administered by Curtis Brown of London and partner companies worldwide (some past US deals, for example, have been listed by Locus Magazine as by SCG).
All of Dunsany's work is in copyright in parts of the world, including the UK and European Union, with the early work (published before 1 January 1925) being in the public domain in the United States, and all of his work out of copyright in parts of the world with copyright durations of life + 60 or less.
Dunsany's primary home, over 820 years old, can be visited at certain times of year, and tours usually include the Library, but not the tower room he often liked to work in. His other home, Dunstall Priory, was sold to a fan, Grey Gowrie, later head of the Arts Council of the UK, and thence passed on to other owners; the family still own farm- and down-land in the area, and a Tudor cottage in Shoreham village. The grave of Lord Dunsany and his wife can be seen in the Church of England graveyard in the village (most of the previous barons are buried in the grounds of Dunsany Castle).
Dunsany's original manuscripts are collected in the family archive, including some specially bound volumes of some of his works. As noted, there has been a curator since the late 1990s and scholarly access is possible by application.
Ludwig van Beethoven
Ludwig van Beethoven (baptised 17 December 1770 – 26 March 1827) was a German composer and pianist; his music is amongst the most performed of the classical music repertoire, and he is one of the most admired composers in the history of Western music. His works span the transition from the classical period to the romantic era in classical music. His career has conventionally been divided into early, middle, and late periods. The "early" period in which he forged his craft is typically seen to last until 1802. His "middle" period, sometimes characterised as "heroic", showing an individual development from the "classical" styles of Joseph Haydn and Wolfgang Amadeus Mozart, covers the years 1802 to 1812, during which he increasingly suffered from deafness. In the "late" period from 1812 to his death in 1827, he extended his innovations in musical form and expression.
Beethoven was born in Bonn. His musical talent was obvious at an early age, and he was initially harshly and intensively taught by his father Johann van Beethoven. He was later taught by the composer and conductor Christian Gottlob Neefe, under whose tuition he published his first work, a set of keyboard variations, in 1783. He found relief from a dysfunctional home life with the family of Helene von Breuning, whose children he loved, befriended and taught piano. At age 21, he moved to Vienna, which subsequently became his base, and studied composition with Haydn. Beethoven then gained a reputation as a virtuoso pianist, and he was soon courted by Karl Alois, Prince Lichnowsky for compositions, which resulted in his three Opus 1 piano trios (the earliest works to which he accorded an opus number) in 1795.
His first major orchestral work, the First Symphony, appeared in 1800, and his first set of string quartets was published in 1801. During this period, his hearing began to deteriorate, but he continued to conduct, premiering his Third and Fifth Symphonies in 1804 and 1808, respectively. His Violin Concerto appeared in 1806. His last piano concerto (No. 5, Op. 73, known as the 'Emperor'), dedicated to his frequent patron Archduke Rudolf of Austria, was premiered in 1810, but not with Beethoven as soloist. He was almost completely deaf by 1814, and he then gave up performing and appearing in public. He described his problems with health and his unfulfilled personal life in two letters, his "Heiligenstadt Testament" (1802) to his brothers and his unsent love letter to an unknown "Immortal Beloved" (1812).
In the years from 1810, increasingly less socially involved, Beethoven composed many of his most admired works including his later symphonies and his mature chamber music and piano sonatas. His only opera, "Fidelio", which had been first performed in 1805, was revised to its final version in 1814. He composed his "Missa Solemnis" in the years 1819–1823, and his final, Ninth, Symphony, one of the first examples of a choral symphony, in 1822–1824. Written in his last years, his late string quartets of 1825–26 are amongst his final achievements. After some months of bedridden illness he died in 1827. Beethoven's works remain mainstays of the classical music repertoire.
Beethoven was the grandson of Ludwig van Beethoven (1712–1773), a musician from the town of Mechelen in the Austrian Duchy of Brabant (in what is now the Flemish region of Belgium) who had moved to Bonn at the age of 21. Ludwig was employed as a bass singer at the court of Clemens August, Archbishop-Elector of Cologne, eventually rising to become, in 1761, Kapellmeister (music director) and hence a pre-eminent musician in Bonn. The portrait he commissioned of himself towards the end of his life remained displayed in his grandson's rooms as a talisman of his musical heritage. Ludwig had one son, Johann (1740–1792), who worked as a tenor in the same musical establishment and gave keyboard and violin lessons to supplement his income.
Johann married Maria Magdalena Keverich in 1767; she was the daughter of Heinrich Keverich (1701–1751), who had been the head chef at the court of the Archbishopric of Trier. Beethoven was born of this marriage in Bonn at what is now the Beethoven House Museum, Bonngasse 20. There is no authentic record of the date of his birth; however, the registry of his baptism, in the Catholic Parish of St. Remigius on 17 December 1770, survives, and the custom in the region at the time was to carry out baptism within 24 hours of birth. There is a consensus (with which Beethoven himself agreed) that his birth date was 16 December, but no documentary proof of this.
Of the seven children born to Johann van Beethoven, only Ludwig, the second-born, and two younger brothers survived infancy. Kaspar Anton Karl was born on 8 April 1774, and Nikolaus Johann (generally known as Johann), the youngest, was born on 2 October 1776.
Beethoven's first music teacher was his father. He later had other local teachers: the court organist Gilles van den Eeden (d. 1782), Tobias Friedrich Pfeiffer (a family friend, who provided keyboard tuition), and Franz Rovantini (a relative, who instructed him in playing the violin and viola). From the outset his tuition regime, which began in his fifth year, was harsh and intensive, often reducing him to tears; with the involvement of the insomniac Pfeiffer there were irregular late-night sessions with the young Beethoven being dragged from his bed to the keyboard. His musical talent was obvious at a young age. Johann, aware of Leopold Mozart's successes in this area (with his son Wolfgang and daughter Nannerl), attempted to promote his son as a child prodigy, claiming that Beethoven was six (he was seven) on the posters for his first public performance in March 1778.
In 1780 or 1781, Beethoven began his studies with his most important teacher in Bonn, Christian Gottlob Neefe. Neefe taught him composition; in March 1783 appeared Beethoven's first published work, a set of keyboard variations (WoO 63). Beethoven soon began working with Neefe as assistant organist, at first unpaid (1782), and then as a paid employee (1784) of the court chapel. His first three piano sonatas, WoO 47, sometimes known as "Kurfürsten" ("Elector") sonatas for their dedication to the Elector Maximilian Friedrich (1708–1784), were published in 1783. In the same year the first printed reference to Beethoven appeared in the "Magazin der Musik" – "Louis van Beethoven [sic] ... a boy of 11 years and most promising talent. He plays the piano very skilfully and with power, reads at sight very well ... the chief piece he plays is "Das wohltemperierte Klavier" of Sebastian Bach, which Herr Neefe puts into his hands ..." Maximilian Friedrich's successor as the Elector of Bonn was Maximilian Franz. He gave some support to Beethoven, appointing him Court Organist and paying towards his visit to Vienna of 1792.
He was introduced in these years to several people who became important in his life. He often visited the cultivated von Breuning family, at whose home he taught piano to some of the children, and where the widowed Frau von Breuning offered him a motherly friendship. Here he also met Franz Wegeler, a young medical student, who became a lifelong friend (and was to marry one of the von Breuning daughters). The von Breuning family environment offered an alternative to his home life, which was increasingly dominated by his father's decline. Another frequenter of the von Breunings was Count Ferdinand von Waldstein, who became a friend and financial supporter during Beethoven's Bonn period. Waldstein was to commission in 1791 Beethoven's first work for the stage, the ballet "Musik zu einem Ritterballett" (WoO 1).
In the period 1785–90 there is virtually no record of Beethoven's activity as a composer. This may be attributed to the lukewarm response his initial publications had attracted, and also to ongoing problems in the Beethoven family. His mother died in 1787, shortly after Beethoven's first visit to Vienna, where he stayed for about two weeks and almost certainly met Mozart. In 1789 Beethoven's father was forcibly retired from the service of the Court (as a consequence of his alcoholism) and it was ordered that half of his father's pension be paid directly to Ludwig for support of the family. He contributed further to the family's income by teaching (to which Wegeler said he had "an extraordinary aversion") and by playing viola in the court orchestra. This familiarized him with a variety of operas, including works by Mozart, Gluck and Paisiello. Here he also befriended Anton Reicha, a composer, flautist and violinist of about his own age who was a nephew of the court orchestra's conductor, Josef Reicha.
From 1790 to 1792, Beethoven composed a number of works (none were published at the time) showing a growing range and maturity. Musicologists have identified a theme similar to those of his Third Symphony in a set of variations written in 1791. It was perhaps on Neefe's recommendation that Beethoven received his first commissions; the Literary Society in Bonn commissioned a cantata to mark the occasion of the death in 1790 of Joseph II (WoO 87), and a further cantata, to celebrate the subsequent accession of Leopold II as Holy Roman Emperor (WoO 88), may have been commissioned by the Elector. These two "Emperor Cantatas" were never performed at the time and they remained lost until the 1880s, when they were described by Johannes Brahms as "Beethoven through and through" and as such prophetic of the style which would mark his music as distinct from the classical tradition.
Beethoven was probably first introduced to Joseph Haydn in late 1790, when the latter was travelling to London and stopped in Bonn around Christmas time. A year and a half later, they met in Bonn on Haydn's return trip from London to Vienna in July 1792, when Beethoven played in the orchestra at the Redoute in Godesberg. It is likely that arrangements were made at that time for Beethoven to study with the older master. Waldstein wrote to him before his departure: "You are going to Vienna in fulfilment of your long-frustrated wishes ... With the help of assiduous labour you shall receive Mozart's spirit from Haydn's hands."
Beethoven left Bonn for Vienna in November 1792, amid rumours of war spilling out of France; he learned shortly after his arrival that his father had died. Over the next few years, Beethoven responded to the widespread feeling that he was a successor to the recently deceased Mozart by studying that master's work and writing works with a distinctly Mozartian flavour.
He did not immediately set out to establish himself as a composer, but rather devoted himself to study and performance. Working under Haydn's direction, he sought to master counterpoint. He also studied violin under Ignaz Schuppanzigh. Early in this period, he also began receiving occasional instruction from Antonio Salieri, primarily in Italian vocal composition style; this relationship persisted until at least 1802, and possibly as late as 1809.
With Haydn's departure for England in 1794, Beethoven was expected by the Elector to return home to Bonn. He chose instead to remain in Vienna, continuing his instruction in counterpoint with Johann Albrechtsberger and other teachers. In any case, by this time it must have seemed clear to his employer that Bonn would fall to the French, as it did in October 1794, effectively leaving Beethoven without a stipend or the necessity to return. However, a number of Viennese noblemen had already recognised his ability and offered him financial support, among them Prince Joseph Franz Lobkowitz, Prince Karl Lichnowsky, and Baron Gottfried van Swieten.
Assisted by his connections with Haydn and Waldstein, Beethoven began to develop a reputation as a performer and improviser in the salons of the Viennese nobility. His friend Nikolaus Simrock began publishing his compositions, starting with a set of keyboard variations on a theme of Dittersdorf (WoO 66). By 1793, he had established a reputation in Vienna as a piano virtuoso, but he apparently withheld works from publication so that their eventual appearance would have greater impact.
His first public performance in Vienna came in March 1795, when he premiered one of his piano concertos. Shortly after this performance, he arranged for the publication of the first of his compositions to which he assigned an opus number, the three piano trios, Opus 1. These works were dedicated to his patron Prince Lichnowsky, and were a financial success; Beethoven's profits were nearly sufficient to cover his living expenses for a year. In 1799 Beethoven participated in (and won) a notorious piano 'duel' at the home of Baron Raimund Wetzlar (a former patron of Mozart) against the virtuoso Joseph Wölfl; and in the following year he similarly triumphed against Daniel Steibelt at the salon of Count Moritz von Fries. Beethoven's eighth piano sonata, the "Pathétique" (Op. 13), published in 1799, is described by the musicologist Barry Cooper as "surpass[ing] any of his previous compositions, in strength of character, depth of emotion, level of originality, and ingenuity of motivic and tonal manipulation."
Beethoven composed his first six string quartets (Op. 18) between 1798 and 1800 (commissioned by, and dedicated to, Prince Lobkowitz). They were published in 1801. He also completed his Septet (Op. 20) in 1799, which was one of his most popular works during his lifetime. With premieres of his First and Second Symphonies in 1800 and 1803, he became regarded as one of the most important of a generation of young composers following Haydn and Mozart. But his melodies, musical development, use of modulation and texture, and characterisation of emotion all set him apart from his influences, and heightened the impact some of his early works made when they were first published. For the premiere of his First Symphony, he hired the Burgtheater on 2 April 1800, and staged an extensive programme, including works by Haydn and Mozart, as well as his Septet, the Symphony, and one of his piano concertos (the latter three works all then unpublished). The concert, which the "Allgemeine musikalische Zeitung" described as "the most interesting concert in a long time," was not without difficulties; among the criticisms was that "the players did not bother to pay any attention to the soloist." By the end of 1800, Beethoven and his music were already much in demand from patrons and publishers.
In May 1799, he taught piano to the daughters of Hungarian Countess Anna Brunsvik. During this time, he fell in love with the younger daughter Josephine. Amongst his other students, from 1801 to 1805, he tutored Ferdinand Ries, who went on to become a composer and later wrote about their encounters. The young Carl Czerny, who later became a renowned music teacher himself, studied with Beethoven from 1801 to 1803. In late 1801, he met a young countess, Julie Guicciardi, through the Brunsvik family; he mentions his love for Julie in a November 1801 letter to a friend, but class difference prevented any consideration of pursuing this. He dedicated his 1802 Sonata Op. 27 No. 2, now commonly known as the "Moonlight Sonata", to her.
In the spring of 1801 he completed "The Creatures of Prometheus", a ballet. The work received numerous performances in 1801 and 1802, and he rushed to publish a piano arrangement to capitalise on its early popularity. In the spring of 1802 he completed the Second Symphony, intended for performance at a concert that was cancelled. The symphony received its premiere instead at a subscription concert in April 1803 at the Theater an der Wien, where he had been appointed composer in residence. In addition to the Second Symphony, the concert also featured the First Symphony, the Third Piano Concerto, and the oratorio "Christ on the Mount of Olives". Reviews were mixed, but the concert was a financial success; he was able to charge three times the cost of a typical concert ticket.
His business dealings with publishers also began to improve in 1802 when his brother Kaspar, who had previously assisted him casually, began to assume a larger role in the management of his affairs. In addition to negotiating higher prices for recently composed works, Kaspar also began selling some of his earlier unpublished compositions, and encouraged him (against Beethoven's preference) to also make arrangements and transcriptions of his more popular works for other instrument combinations. Beethoven acceded to these requests, as he could not prevent publishers from hiring others to do similar arrangements of his works.
Beethoven told the English pianist Charles Neate (in 1815) that he dated his hearing loss from a fit he suffered in 1798 induced by a quarrel with a singer. During its gradual decline, his hearing was further impeded by a severe form of tinnitus. As early as 1801, he wrote to Wegeler and another friend Karl Amenda, describing his symptoms and the difficulties they caused in both professional and social settings (although it is likely some of his close friends were already aware of the problems). The cause was probably otosclerosis, perhaps accompanied by degeneration of the auditory nerve.
On the advice of his doctor, Beethoven moved to the small Austrian town of Heiligenstadt, just outside Vienna, from April to October 1802 in an attempt to come to terms with his condition. There he wrote the document now known as the Heiligenstadt Testament, a letter to his brothers which records his thoughts of suicide due to his growing deafness and records his resolution to continue living for and through his art. The letter was never sent and was discovered in his papers after his death. The letters to Wegeler and Amenda were not so despairing; in them Beethoven commented also on his ongoing professional and financial success at this period, and his determination, as he expressed it to Wegeler, to "seize Fate by the throat; it shall certainly not crush me completely." In 1806, Beethoven noted on one of his musical sketches "Let your deafness no longer be a secret – even in art."
Beethoven's hearing loss did not prevent him from composing music, but it made playing at concerts—an important source of income at this phase of his life—increasingly difficult. (It also contributed substantially to his social withdrawal.) Czerny remarked however that Beethoven could still hear speech and music normally until 1812. Beethoven never became totally deaf; in his final years he was still able to distinguish low tones and sudden loud sounds.
Beethoven's return to Vienna from Heiligenstadt was marked by a change in musical style, and is now often designated as the start of his middle or "heroic" period characterised by many original works composed on a grand scale. According to Carl Czerny, Beethoven said, "I am not satisfied with the work I have done so far. From now on I intend to take a new way." An early major work employing this new style was the Third Symphony in E flat Op. 55, known as the "Eroica", written in 1803-04. The idea of creating a symphony based on the career of Napoleon may have been suggested to Beethoven by Count Bernadotte in 1798. Beethoven, sympathetic to the ideal of the heroic revolutionary leader, originally gave the symphony the title "Bonaparte", but disillusioned by Napoleon declaring himself Emperor in 1804, he scratched Napoleon's name from the manuscript's title page, and the symphony was published in 1806 with its present title and the subtitle "to celebrate the memory of a great man." The "Eroica" was longer and larger in scope than any previous symphony. When it premiered in early 1805 it received a mixed reception. Some listeners objected to its length or misunderstood its structure, while others viewed it as a masterpiece.
Other middle period works extend in the same dramatic manner the musical language Beethoven had inherited. The Rasumovsky string quartets, and the "Waldstein" and "Appassionata" piano sonatas share the heroic spirit of the Third Symphony. Other works of this period include the Fourth through Eighth Symphonies, the oratorio "Christ on the Mount of Olives", the opera "Fidelio", and the Violin Concerto. Beethoven was hailed in 1810 by the writer and composer E. T. A. Hoffmann, in an influential review in the "Allgemeine musikalische Zeitung", as the greatest of (what he considered) the three "Romantic" composers, (that is, ahead of Haydn and Mozart); in Beethoven's Fifth Symphony his music, wrote Hoffmann, "sets in motion terror, fear, horror, pain, and awakens the infinite yearning that is the essence of romanticism".
During this time Beethoven's income came from publishing his works, from performances of them, and from his patrons, for whom he gave private performances and copies of works they commissioned for an exclusive period prior to their publication. Some of his early patrons, including Prince Lobkowitz and Prince Lichnowsky, gave him annual stipends in addition to commissioning works and purchasing published works. Perhaps his most important aristocratic patron was Archduke Rudolf of Austria, the youngest son of Emperor Leopold II, who in 1803 or 1804 began to study piano and composition with him. They became friends, and their meetings continued until 1824. Beethoven was to dedicate 14 compositions to Rudolf, including some of his major works such as the "Archduke" Trio Op. 97 (1811) and "Missa solemnis" Op. 123 (1823).
His position at the Theater an der Wien was terminated when the theatre changed management in early 1804, and he was forced to move temporarily to the suburbs of Vienna with his friend Stephan von Breuning. This slowed work for a time on "Leonore" (his original title for his opera), his largest work to date. It was delayed again by the Austrian censor, and finally premiered, under its present title of "Fidelio", in November 1805 to houses that were nearly empty because of the French occupation of the city. In addition to being a financial failure, this version of "Fidelio" was also a critical failure, and Beethoven began revising it.
Despite this failure, Beethoven continued to attract recognition. In 1807 the musician and publisher Muzio Clementi secured the rights for publishing his works in England, and Haydn's former patron Prince Esterházy commissioned a mass (the Mass in C, Op. 86) for his wife's name-day. But he could not count on such recognition alone. A colossal benefit concert which he organized in December 1808, and was widely advertised, included the premieres of the Fifth and Sixth ("Pastoral") symphonies, the Fourth Piano Concerto, extracts from the Mass in C, the scena and aria "Ah! perfido" Op. 65 and the Choral Fantasy op. 80. There was a large audience, (including Czerny and the young Ignaz Moscheles). But it was under-rehearsed, involved many stops and starts, and during the Fantasia Beethoven was noted shouting at the musicians "badly played, wrong, again!" The financial outcome is unknown.
In the autumn of 1808, after having been rejected for a position at the Royal Theatre, Beethoven had received an offer from Napoleon's brother Jérôme Bonaparte, then king of Westphalia, for a well-paid position as Kapellmeister at the court in Cassel. To persuade him to stay in Vienna, the Archduke Rudolf, Prince Kinsky and Prince Lobkowitz, after receiving representations from Beethoven's friends, pledged to pay him a pension of 4000 florins a year. In the event, Archduke Rudolf paid his share of the pension on the agreed date. Kinsky, immediately called to military duty, did not contribute and died in November 1812 after falling from his horse. The Austrian currency destabilized and Lobkowitz went bankrupt in 1811, so that to benefit from the agreement Beethoven eventually had recourse to the law, which in 1815 brought him some recompense.
The imminence of war reaching Vienna itself was felt in early 1809. In April Beethoven had completed writing his Piano Concerto No. 5 in E flat major, Op. 73, which the musicologist Alfred Einstein has described as "the apotheosis of the military concept" in Beethoven's music. Archduke Rudolf left the capital with the Imperial family in early May, prompting Beethoven's piano sonata "Les Adieux" (Sonata No. 26, Op. 81a), actually entitled by Beethoven in German "Das Lebewohl" (The Farewell), of which the final movement, "Das Wiedersehen" (The Return), is dated in the manuscript 30 January 1810, the day of Rudolf's homecoming. During the French bombardment of Vienna in May Beethoven took refuge in the cellar of the house of his brother Kaspar. The subsequent occupation of Vienna and the disruptions to cultural life and to Beethoven's publishers, together with Beethoven's poor health at the end of 1809, explain his significantly reduced output during this period, although other notable works of the year include his String Quartet No. 10 in F major, Op. 74 (known as "The Harp") and the Piano Sonata No. 24 in F sharp major op. 78, dedicated to Josephine's sister Therese Brunsvik.
At the end of 1809 Beethoven was commissioned to write incidental music for Goethe's play "Egmont". The result (an overture, and nine additional entr'actes and vocal pieces, Op. 84), which appeared in 1810, fitted well with Beethoven's "heroic" style, and he became interested in Goethe, setting three of his poems as songs (Op. 83) and learning about the poet from a mutual acquaintance, Bettina Brentano (who also wrote to Goethe at this time about Beethoven). Other works of this period in a similar vein were the F minor String Quartet Op. 95, to which Beethoven gave the subtitle "Quartetto serioso", and the Op. 97 Piano Trio in B flat major known, from its dedication to his patron Rudolf, as the "Archduke Trio".
In the spring of 1811 Beethoven became seriously ill, suffering headaches and high fever. His doctor Johann Malfatti recommended him to take a cure at the spa of Teplitz (now Teplice in Czechia) where he wrote two more overtures and sets of incidental music for dramas, this time by August von Kotzebue – "King Stephen" Op. 117 and "The Ruins of Athens" Op. 113. Advised again to visit Teplitz in 1812 he met there with Goethe, who wrote: "His talent amazed me; unfortunately he is an utterly untamed personality, who is not altogether wrong in holding the world to be detestable, but surely does not make it any more enjoyable ... by his attitude." Beethoven wrote to his publishers Breitkopf and Härtel that "Goethe delights far too much in the court atmosphere, far more than is becoming in a poet." But following their meeting he began a setting for choir and orchestra of Goethe's "Meeresstille und glückliche Fahrt" "(Calm Sea and Prosperous Voyage)" (Op. 112), completed in 1815. After this was published in 1822 with a dedication to the poet, Beethoven wrote to him "The admiration, the love and esteem which already in my youth I cherished for the one and only immortal Goethe have persisted."
While he was at Teplitz in 1812 he wrote a ten-page love letter to his "Immortal Beloved", which he never sent to its addressee. The identity of the intended recipient was long a subject of debate, although the musicologist Maynard Solomon has convincingly demonstrated that the intended recipient must have been Antonie Brentano; other candidates have included Julie Guicciardi, Therese Malfatti and Josephine Brunsvik.
All of these had been regarded by Beethoven as possible soulmates during his first decade in Vienna. Guicciardi, although she flirted with Beethoven, never had any serious interest in him and married Wenzel Robert von Gallenberg in November 1803. (Beethoven insisted to his later secretary and biographer, Anton Schindler, that Guicciardi had "sought me out, crying, but I scorned her.") Josephine, since Beethoven's initial infatuation with her, had married the elderly Count Joseph Deym, who died in 1804. Beethoven began to visit her and commenced a passionate correspondence. Initially he accepted that Josephine could not love him, but he continued to address himself to her even after she had moved to Budapest, finally acknowledging in his last letter to her of 1807 that he had got the message: "I thank you for wishing still to appear as if I were not altogether banished from your memory". Malfatti was the niece of Beethoven's doctor, and he had proposed to her in 1810. He was 40, she was 19 – the proposal was rejected. She is now remembered as the recipient of the piano bagatelle "Für Elise".
Antonie (Toni) Brentano (née von Birkenstock), ten years younger than Beethoven, was the wife of Franz Brentano, the half-brother of Bettina Brentano, who provided Beethoven's introduction to the family. It would seem that Antonie and Beethoven had an affair during 1811-1812. Antonie left Vienna with her husband in late 1812 and never met with (or apparently corresponded with) Beethoven again, although in her later years she wrote and spoke fondly of him.
After 1812 there are no reports of any romantic liaisons of Beethoven; it is however clear from his correspondence of the period and, later, from the conversation books, that he would occasionally resort to prostitutes.
In early 1813 Beethoven apparently went through a difficult emotional period, and his compositional output dropped. His personal appearance degraded—it had generally been neat—as did his manners in public, notably when dining.
Family issues may have played a part in this. Beethoven had visited his brother Johann at the end of October 1812. He wished to end Johann's cohabitation with Therese Obermayer, a woman who already had an illegitimate child. He was unable to convince Johann to end the relationship and appealed to the local civic and religious authorities, but Johann and Therese married on 8 November.
The illness and eventual death of his brother Kaspar from tuberculosis became an increasing concern. Kaspar had been ill for some time; in 1813 Beethoven lent him 1500 florins, to procure the repayment of which he was ultimately led to complex legal measures. After Kaspar died on 15 November 1815, Beethoven immediately became embroiled in a protracted legal dispute with Kaspar's wife Johanna over custody of their son Karl, then nine years old. Beethoven had successfully applied to Kaspar to have himself named sole guardian of the boy. A late codicil to Kaspar's will gave him and Johanna joint guardianship. Beethoven was successful in having his nephew removed from her custody in January 1816 and placed in a private school, but in 1818 he was again preoccupied by the legal processes around Karl. While giving evidence to the court for the nobility, the Landrechte, Beethoven was unable to prove that he was of noble birth and as a consequence, on 18 December 1818 the case was transferred to the civil magistracy of Vienna, where he lost sole guardianship. He only regained custody after intensive legal struggles in 1820. During the years that followed, Beethoven frequently interfered in his nephew's life in what Karl perceived as an overbearing manner.
Beethoven was finally motivated to begin significant composition again in June 1813, when news arrived of Napoleon's defeat at the Battle of Vitoria by a coalition led by the Duke of Wellington. The inventor Mälzel persuaded him to write a work commemorating the event for his mechanical instrument the Panharmonicon. This Beethoven also transcribed for orchestra as "Wellington's Victory" (Op. 91, also known as the "Battle Symphony"). It was first performed on 8 December, along with his Seventh Symphony, Op. 92, at a charity concert for victims of the war, a concert whose success led to its repeat on 12 December. The orchestra included a number of leading and rising musicians who happened to be in Vienna at the time, including Giacomo Meyerbeer and Domenico Dragonetti. The work received repeat performances at concerts staged by Beethoven in January and February 1814. These concerts brought Beethoven more profit than any others in his career, and enabled him to buy the bank shares that were eventually to be the most valuable assets in his estate at his death.
Beethoven's renewed popularity led to demands for a revival of "Fidelio", which, in its third revised version, was also well received at its July opening in Vienna, and was frequently staged there during the following years. Beethoven's publishers, Artaria, commissioned the 20-year-old Moscheles to prepare a piano score of the opera, which he inscribed "Finished, with God's help!" – to which Beethoven added "O Man, help thyself." That summer Beethoven composed a piano sonata for the first time in five years, his Sonata in E minor, Op. 90. He was also one of many composers who produced music in a patriotic vein to entertain the many heads of state and diplomats who came to the Congress of Vienna that began in November 1814, with the cantata "Der glorreiche Augenblick (The Glorious Moment)" (Op. 136) and similar choral works which, in the words of Maynard Solomon, "broadened Beethoven's popularity, [but] did little to enhance his reputation as a serious composer."
In April and May 1814, playing in his "Archduke" Trio, Beethoven made his last public appearances as a soloist. The composer Louis Spohr noted: "the piano was badly out of tune, which Beethoven minded little, since he did not hear it ... there was scarcely anything left of the virtuosity of the artist ... I was deeply saddened." From 1814 onwards Beethoven used for conversation ear-trumpets designed by Johann Nepomuk Maelzel (a number of these are on display at the Beethoven-Haus in Bonn).
His 1815 compositions include an expressive second setting of the poem "An die Hoffnung" (Op. 94). Compared to its first setting in 1805 (a gift for Josephine Brunsvik), it was "far more dramatic ... The entire spirit is that of an operatic scena." But his energy seemed to be dropping: apart from these works, he wrote the two cello sonatas Op. 102 nos. 1 and 2, and a few minor pieces, and began but abandoned a sixth piano concerto.
Between 1815 and 1819 Beethoven's output dropped again to a level unique in his mature life. He attributed part of this to a lengthy illness (he called it an "inflammatory fever") that he had for more than a year, starting in October 1816. His biographer Maynard Solomon suggests it was also doubtless a consequence of the ongoing legal problems concerning his nephew Karl, and of Beethoven finding himself increasingly at odds with current musical trends. Unsympathetic to developments in German romanticism that featured the supernatural (as in operas by Spohr, Heinrich Marschner and Carl Maria von Weber), he also "resisted the impending Romantic fragmentation of the ... cyclic forms of the Classical era into small forms and lyric mood pieces" and turned towards study of Bach, Handel and Palestrina. An old connection was renewed in 1817 when Maelzel sought, and obtained, Beethoven's endorsement for his newly developed metronome. During these years the few major works completed include Beethoven's only song cycle, "An die ferne Geliebte" Op. 98 (1816), and the gigantic "Hammerklavier" Sonata (Sonata No. 29 in B flat major, Op. 106) (1818). It was also in 1818 that he began musical sketches that would eventually form part of his final Ninth Symphony.
By early 1818 Beethoven's health had improved, and his nephew Karl, now aged 11, moved in with him in January (although within a year Karl's mother had won him back in the courts). By now Beethoven's hearing had again seriously deteriorated, making it necessary for Beethoven and his interlocutors to carry out conversations in writing, in notebooks. These 'conversation books' are a rich written resource for his life from this period onwards. They contain discussions about music, business and personal life; they are also a valuable source for his contacts and for investigations into how he intended his music should be performed, and of his opinions of the art of music. His household management had also improved somewhat; Nanette Streicher, who had assisted in his care during his illness, continued to provide some support, and he finally found a skilled cook. A testimonial to the esteem in which Beethoven was held in England was the presentation to him in this year of a Broadwood piano by Thomas Broadwood, proprietor of the piano firm, for which Beethoven expressed grateful thanks. He was not well enough, however, to carry out a visit to London that year which had been proposed by the Philharmonic Society.
Despite the time occupied by his ongoing legal struggles over Karl, which involved continuing extensive correspondence and lobbying, two events sparked off Beethoven's major composition projects in 1819. The first was the announcement of Archduke Rudolf's promotion to Cardinal-Archbishop as Archbishop of Olomouc (now in Czechia), which triggered the "Missa Solemnis" Op. 123, intended to be ready for his installation in Olomouc in March 1820. The other was the invitation by the publisher Antonio Diabelli to fifty Viennese composers, including Beethoven, Franz Schubert, Czerny and the 8-year-old Franz Liszt, to compose a variation each on a theme which he provided. Beethoven was spurred to outdo the competition and by mid-1819 had already completed 20 variations of what were to become the 33 "Diabelli Variations" op. 120. Neither of these works was to be completed for a few years. A significant tribute of 1819, however, was Archduke Rudolf's set of forty piano variations on a theme written for him by Beethoven (WoO 200), and dedicated to the master. Beethoven's portrait of this year, which was one of the most familiar images of him for the next century, was described by Schindler as, despite its artistic weaknesses, "in the rendering of that particular look, the majestic forehead ... the firmly shut mouth and the chin shaped like a shell, ... truer to nature than any other picture."
Beethoven's determination over the following years to write the "Mass" for Rudolf was not motivated by any devout Catholicism. Although born a Catholic, the form of religion as practised at the court in Bonn where he grew up was, in the words of Maynard Solomon, "a compromise ideology that permitted a relatively peaceful coexistence between the Church and rationalism." Beethoven's "Tagebuch" (a diary he kept on an occasional basis between 1812 and 1818) shows his interest in a variety of religious philosophies, including those of India, Egypt and the Orient and the writings of the Rig-Veda. In a letter to Rudolf of July 1821 Beethoven shows his belief in a personal God: "God ... sees into my innermost heart and knows that as a man I perform most conscientiously and on all occasions the duties which Humanity, God, and Nature enjoin upon me." On one of the sketches for the "Missa Solemnis" he wrote "Plea for inner and outer peace."
Beethoven's status was confirmed by the series of "Concerts spirituels" given in Vienna by the choirmaster Franz Xaver Gebauer in the 1819/1820 and 1820/1821 seasons, during which all eight of his symphonies to date, plus the oratorio "Christus" and the Mass in C, were performed. Beethoven was typically underwhelmed: when in an April 1820 conversation book a friend mentioned Gebauer, Beethoven wrote in reply "Geh! Bauer" ("Begone, peasant!").
It was in 1819 that Beethoven was first approached by the publisher Moritz Schlesinger who won the suspicious composer round, whilst visiting him at Mödling, by procuring for him a plate of roast veal. One consequence of this was that Schlesinger was to secure Beethoven's three last piano sonatas and his final quartets; part of the attraction to Beethoven was that Schlesinger had publishing facilities in Germany and France, and connections in England, which could overcome problems of copyright piracy. The first of the three sonatas, for which Beethoven contracted with Schlesinger in 1820 at 30 ducats per sonata (further delaying completion of the Mass), was sent to the publisher at the end of that year (the Sonata in E major, Op. 109, dedicated to Maximiliane, Antonie Brentano's daughter).
The start of 1821 saw Beethoven once again in poor health, suffering from rheumatism and jaundice. Despite this he continued work on the remaining piano sonatas he had promised to Schlesinger (the Sonata in A flat major Op. 110 was published in December), and on the Mass. In early 1822 Beethoven sought a reconciliation with his brother Johann, whose marriage in 1812 had met with his disapproval, and Johann now became a regular visitor (as witnessed by the conversation books of the period) and began to assist him in his business affairs, including lending him money against ownership of some of his compositions. He also sought some reconciliation with the mother of his nephew, including supporting her income, although this did not meet with the approval of the contrary Karl. Two commissions at the end of 1822 improved Beethoven's financial prospects. In November the Philharmonic Society of London offered a commission for a symphony, which he accepted with delight, as an appropriate home for the Ninth Symphony on which he was working. Also in November Prince Nikolai Galitzin of Saint Petersburg offered to pay Beethoven's asking price for three string quartets. Beethoven set the price at the high level of 50 ducats per quartet in a letter dictated to his nephew Karl, who was then living with him.
During 1822, Anton Schindler, who in 1840 became one of Beethoven's earliest and most influential (but not always reliable) biographers, began to work as the composer's unpaid secretary. He was later to claim that he had been a member of Beethoven's circle since 1814, but there is no evidence for this. Cooper suggests that "Beethoven greatly appreciated his assistance, but did not think much of him as a man."
The year 1823 saw the completion of three notable works, all of which had occupied Beethoven for some years, namely the "Missa Solemnis", the Ninth Symphony and the "Diabelli Variations".
Beethoven at last presented the manuscript of the completed "Missa" to Rudolph on 19 March (more than a year after the Archduke's enthronement as Archbishop). He was not however in a hurry to get it published or performed as he had formed a notion that he could profitably sell manuscripts of the work to various courts in Germany and Europe at 50 ducats each. One of the few who took up this offer was Louis XVIII of France, who also sent Beethoven a heavy gold medallion. The Symphony and the variations took up most of the rest of Beethoven's working year. Diabelli hoped to publish both works, but the potential prize of the Mass excited many other publishers to lobby Beethoven for it, including Schlesinger and Carl Friedrich Peters. (In the end it was obtained by Schott.)
Beethoven had become critical of the Viennese reception of his works. He told the visiting Johann Friedrich Rochlitz in 1822: You will hear nothing of me here ... "Fidelio"? They cannot give it, nor do they want to listen to it. The symphonies? They have no time for them. My concertos? Everyone grinds out only the stuff he himself has made. The solo pieces? They went out of fashion long ago, and here fashion is everything. At the most, Schuppanzigh occasionally digs up a quartet. He therefore enquired about premiering the "Missa" and the Ninth Symphony in Berlin. When his Viennese admirers learnt of this, they pleaded with him to arrange local performances. Beethoven was won over, and the symphony was first performed, along with sections of the "Missa Solemnis", on 7 May 1824, to great acclaim at the Kärntnertortheater. Beethoven stood by the conductor Michael Umlauf during the concert beating time (although Umlauf had warned the singers and orchestra to ignore him), and because of his deafness was not even aware of the applause which followed until he was turned to witness it. The "Allgemeine musikalische Zeitung" gushed, "inexhaustible genius had shown us a new world", and Carl Czerny wrote that the Symphony "breathes such a fresh, lively, indeed youthful spirit ... so much power, innovation, and beauty as ever [came] from the head of this original man, although he certainly sometimes led the old wigs to shake their heads." The concert did not net Beethoven much money, as the expenses of mounting it were very high. A second concert on 24 May, in which the producer guaranteed him a minimum fee, was poorly attended; nephew Karl noted that "many people [had] already gone into the country". It was Beethoven's last public concert.
Beethoven accused Schindler of either cheating him or mismanaging the ticket receipts; this led to the replacement of Schindler as Beethoven's secretary by Karl Holz (who was second violinist in the Schuppanzigh Quartet), although by 1826 Beethoven and Schindler were reconciled.
Beethoven then turned to writing the string quartets for Galitzin, despite failing health. The first of these, the quartet in E♭ major, Op. 127 was premiered by the Schuppanzigh Quartet in March 1825. While writing the next, the quartet in A minor, Op. 132, in April 1825, he was struck by a sudden illness. Recuperating in Baden, he included in the quartet its slow movement to which he gave the title "Holy song of thanks ('Heiliger Dankgesang') to the Divinity, from a convalescent, in the Lydian mode." The next quartet to be completed was the Thirteenth, op. 130, in B♭ major. In six movements, the last, contrapuntal movement proved to be very difficult for both the performers and the audience at its premiere in March 1826 (again by the Schuppanzigh Quartet). Beethoven was persuaded by the publisher Artaria, for an additional fee, to write a new finale, and to issue the last movement as a separate work (the Grosse Fugue, Op. 133). Beethoven's favourite was the last of this series, the quartet in C minor Op. 131, which he rated as his most perfect single work.
Beethoven's relations with his nephew Karl had continued to be stormy; Beethoven's letters to him were demanding and reproachful. In August, Karl, who had been seeing his mother again against Beethoven's wishes, attempted suicide by shooting himself in the head. He survived and after discharge from hospital went to recuperate in the village of Gneixendorf with Beethoven and his uncle Johann. Whilst in Gneixendorf, Beethoven completed a further quartet (Op. 135 in F major), which he sent to Schlesinger. Under the introductory slow chords in the last movement Beethoven wrote in the manuscript "Muss es sein?" ("Must it be?"); the response, over the faster main theme of the movement, is "Es muss sein!" ("It must be!"). The whole movement is headed "Der schwer gefasste Entschluss" ("The Difficult Decision"). Following this, in November Beethoven completed his final composition, the replacement finale for the Op. 130 quartet. Beethoven at this time was already ill and depressed; he began to quarrel with Johann, insisting that Johann make Karl his heir, in preference to Johann's wife.
On his return journey to Vienna from Gneixendorf in December 1826, illness struck Beethoven again. He was attended until his death by Dr. Andreas Wawruch, who throughout December noticed symptoms including fever, jaundice and dropsy, with swollen limbs, coughing and breathing difficulties. Several operations were carried out to tap off the excess fluid from Beethoven's abdomen.
Karl stayed by Beethoven's bedside during December, but left after the beginning of January to join the army at Iglau and did not see his uncle again, although he wrote to him shortly afterwards "My dear father ... I am living in contentment and regret only that I am separated from you." Immediately following Karl's departure, Beethoven wrote a will making his nephew his sole heir. Later in January, Beethoven was attended by Dr. Malfatti, whose treatment (recognizing the seriousness of his patient's condition) was largely centred on alcohol. As the news spread of the severity of Beethoven's condition, many old friends came to visit, including Diabelli, Schuppanzigh, Lichnowsky, Schindler, the composer Johann Nepomuk Hummel and his pupil Ferdinand Hiller. Many tributes and gifts were also sent, including £100 from the Philharmonic Society in London and a case of expensive wine from Schott. During this period, Beethoven was almost completely bedridden despite occasional brave efforts to rouse himself. On 24 March, he said to Schindler and the others present "Plaudite, amici, comoedia finita est" ("Applaud, friends, the comedy is over.") Later that day, when the wine from Schott arrived, he whispered "Pity – too late."
Beethoven died on 26 March 1827 at the age of 56; only his friend Anselm Hüttenbrenner and a "Frau van Beethoven" (possibly his old enemy Johanna van Beethoven) were present. According to Hüttenbrenner, at about 5 in the afternoon there was a flash of lightning and a clap of thunder: "Beethoven opened his eyes, lifted his right hand and looked up for several seconds with his fist clenched ... not another breath, not a heartbeat more." Many visitors came to the death-bed; some locks of the dead man's hair were retained by Hüttenbrenner and Hiller, amongst others. An autopsy revealed Beethoven suffered from significant liver damage, which may have been due to his heavy alcohol consumption, and also considerable dilation of the auditory and other related nerves.
Beethoven's funeral procession in Vienna on 29 March 1827 was attended by an estimated 10,000 people. Franz Schubert and the violinist Joseph Mayseder were among the torchbearers. A funeral oration by the poet Franz Grillparzer was read. Beethoven was buried in a dedicated grave in the Währing cemetery, north-west of Vienna, after a requiem mass at the church of the Holy Trinity (Dreifaltigkeitskirche) in Alserstrasse. Beethoven's remains were exhumed for study in 1863, and moved in 1888 to Vienna's Zentralfriedhof where they were reinterred in a grave adjacent to that of Schubert.
The historian William Drabkin notes that as early as 1818 a writer had proposed a three-period division of Beethoven's works, and that such a division (albeit often adopting different dates or works to denote changes in period) eventually became a convention adopted by all of Beethoven's biographers, starting with Schindler, F.-J. Fétis and Wilhelm von Lenz. Later writers sought to identify sub-periods within this generally accepted structure. Its drawbacks include that it generally omits a fourth period, that is, the early years in Bonn, whose works are less often considered; and that it ignores the differential development of Beethoven's composing styles over the years for different categories of work. The piano sonatas, for example, were written throughout Beethoven's life in a progression that can be interpreted as continuous development; the symphonies do not all demonstrate linear progress; of all of the types of composition, perhaps the quartets, which seem to group themselves in three periods (Op. 18 in 1801-1802, Opp. 59, 74 and 95 in 1806-1814, and the quartets, today known as 'late', from 1824 onwards) fit this categorization most neatly. Drabkin concludes that "now that we have lived with them so long ... as long as there are programme notes, essays written to accompany recordings, and all-Beethoven recitals, it is hard to imagine us ever giving up the notion of discrete stylistic periods."
Some forty compositions, including ten very early works written by Beethoven up to 1785, survive from the years that Beethoven lived in Bonn. It has been suggested that Beethoven largely abandoned composition between 1785 and 1790, possibly as a result of negative critical reaction to his first published works. A 1784 review in Johann Nikolaus Forkel's influential "Musikalischer Almanack" compared Beethoven's efforts to those of rank beginners. The three early piano quartets of 1785 (WoO 36), closely modelled on violin sonatas of Mozart, show his dependency on music of the period. Beethoven himself was not to give any of the Bonn works an opus number, save for those which he reworked for use later in his career, for example some of the songs in his Op. 52 collection (1805) and the Wind Octet reworked in Vienna in 1793 to become his String Quintet, Op. 4. Charles Rosen points out that Bonn was something of a backwater compared to Vienna; Beethoven was unlikely to be acquainted with the mature works of Haydn or Mozart, and Rosen opines that his early style was closer to that of Hummel or Muzio Clementi. Kerman suggests that at this stage Beethoven was not especially notable for his works in sonata style, but more for his vocal music; his move to Vienna in 1792 set him on the path to develop the music in the genres he became known for.
The conventional "first period" begins after Beethoven's arrival in Vienna in 1792. In the first few years he seems to have composed less than he did at Bonn, and his Piano Trios, op.1 were not published until 1795. From this point onward, he had mastered the 'Viennese style' (best known today from Haydn and Mozart) and was making the style his own. His works from 1795 to 1800 are larger in scale than was the norm (writing sonatas in four movements, not three, for instance); typically he uses a scherzo rather than a minuet and trio; and his music often includes dramatic, even sometimes over-the-top, uses of extreme dynamics and tempi and chromatic harmony. It was this that led Haydn to believe the third trio of Op.1 was too difficult for an audience to appreciate.
He also explored new directions and gradually expanded the scope and ambition of his work. Some important pieces from the early period are the first and second symphonies, the set of six string quartets Opus 18, the first two piano concertos, and the first dozen or so piano sonatas, including the famous "Pathétique" sonata, Op. 13.
His middle (heroic) period began shortly after the personal crisis brought on by his recognition of encroaching deafness. It includes large-scale works that express heroism and struggle. Middle-period works include six symphonies (Nos. 3–8), the last two piano concertos, the Triple Concerto and violin concerto, five string quartets (Nos. 7–11), several piano sonatas (including the "Waldstein" and "Appassionata" sonatas), the "Kreutzer" violin sonata and his only opera, "Fidelio".
The "middle period" is sometimes associated with a "heroic" manner of composing, but the use of the term "heroic" has become increasingly controversial in Beethoven scholarship. The term is more frequently used as an alternative name for the middle period. The appropriateness of the term "heroic" to describe the whole middle period has been questioned as well: while some works, like the Third and Fifth Symphonies, are easy to describe as "heroic", many others, like his Symphony No. 6, "Pastoral" or his Piano Sonata No. 24, are not.
Beethoven's late period began in the decade 1810-1819. He began a renewed study of older music, including works by Johann Sebastian Bach and George Frideric Handel, that were then being published in the first attempts at complete editions. Many of Beethoven's late works include fugal material. The overture "The Consecration of the House" (1822) was an early work to attempt to incorporate these influences. A new style emerged, now called his "late period". He returned to the keyboard to compose his first piano sonatas in almost a decade: the works of the late period include the last five piano sonatas and the "Diabelli Variations", the last two sonatas for cello and piano, the last five string quartets (including the massive "Große Fuge"), and two works for very large forces: the "Missa Solemnis" and the Ninth Symphony. Works from this period are characterised by their intellectual depth, their formal innovations, and their intense, highly personal expression. The String Quartet, Op. 131 has seven linked movements, and the Ninth Symphony adds choral forces to the orchestra in the last movement.
The Beethoven Monument in Bonn was unveiled in August 1845, in honour of the 75th anniversary of his birth. It was the first statue of a composer created in Germany, and the music festival that accompanied the unveiling was the impetus for the very hasty construction of the original Beethovenhalle in Bonn (it was designed and built within less than a month, on the urging of Franz Liszt). A statue to Mozart had been unveiled in Salzburg, Austria, in 1842. Vienna did not honour Beethoven with a statue until 1880.
There is a museum, the Beethoven House, the place of his birth, in central Bonn. The same city has hosted a musical festival, the Beethovenfest, since 1845. The festival was initially irregular but has been organised annually since 2007.
The Ira F. Brilliant Center for Beethoven Studies serves as a museum, research center, and host of lectures and performances devoted solely to his life and works.
His music features twice on the Voyager Golden Record, a phonograph record containing a broad sample of the images, common sounds, languages, and music of Earth, sent into outer space with the two Voyager probes.
The third largest crater on Mercury is named in his honour, as is the main-belt asteroid 1815 Beethoven.
A 7-foot cast bronze statue of Beethoven by sculptor Arnold Foerster was installed in 1932 in Pershing Square, Los Angeles; it was dedicated to William Andrews Clark Jr., founder of the Los Angeles Philharmonic.
Lleyton Hewitt
Lleyton Glynn Hewitt (born 24 February 1981) is an Australian semi-retired professional tennis player and former world No. 1. He is the most recent Australian to win a men's singles Grand Slam title.
In November 2001 Hewitt became, at the age of 20, the youngest male in the ATP era to be ranked No. 1 in the world in singles. He won 30 singles titles and 3 doubles titles, his highlights being the 2001 US Open and 2002 Wimbledon men's singles titles, the 2000 US Open men's doubles title, back-to-back Tennis Masters Cup titles in 2001 and 2002, and the Davis Cup with Australia in 1999 and 2003. Hewitt reached the final of the 2004 US Open, where he was defeated by Roger Federer in straight sets. Between 1997 and 2016, he contested twenty consecutive Australian Open men's singles tournaments, reaching the 2005 final where he was defeated by Marat Safin in four sets.
Hewitt was born in Adelaide, South Australia. His father, Glynn, is a former Australian Rules Football player, and his mother, Cherilyn, was a physical education teacher. His younger sister is Jaslyn Hewitt, a former tennis coach and bodybuilder and his brother-in-law (Jaslyn's husband) is Rob Shehadie. Hewitt also played Australian Football until the age of 13, when he decided to pursue a tennis career. His junior tennis club was Seaside Tennis Club in Henley Beach. He was also coached by Peter Smith at Denman Tennis Club in Mitcham.
Hewitt commenced his professional career in 1998. He became one of the youngest winners of an Association of Tennis Professionals (ATP) tournament when he won the 1998 Next Generation Adelaide International, defeating Jason Stoltenberg in the final, having defeated Andre Agassi in the semi-finals. Aaron Krickstein, who won Tel Aviv in 1983, and Michael Chang, who won San Francisco in 1988, were both younger than Hewitt when they claimed their first ATP titles. Hewitt then left Immanuel College to concentrate on his tennis career. He was an Australian Institute of Sport scholarship holder.
He finished his professional tennis career on 24 January 2016 after 20 straight Australian Open appearances. His last professional singles match was against David Ferrer in the second round of the 2016 Australian Open at the Rod Laver Arena on 21 January 2016.
As a junior Hewitt posted a 44–19 record in singles and reached as high as No. 17 in the world in 1997 (and No. 13 in doubles).
In 2000, Hewitt reached his first Grand Slam final at the Wimbledon mixed doubles partnering Belgian Kim Clijsters, his then girlfriend. They lost the match to Americans Kimberly Po and Donald Johnson. Hewitt later won his first Grand Slam title at the US Open when he along with Max Mirnyi claimed the men's doubles championship, thus becoming the youngest male (at 19 years, 6 months) to win a Grand Slam doubles crown in the open era. At the end of the year, Hewitt became the first teenager in ATP history to qualify for the year-end Tennis Masters Cup (ATP World Tour Finals).
Hewitt started off the 2001 season well by winning the Medibank International in Sydney, and went on to win tournaments in London (Queen's Club) and 's-Hertogenbosch. He captured his first Grand Slam singles title at the US Open in 2001, when he beat former world No. 1 Yevgeny Kafelnikov in the semi-finals and defeated then four-time champion Pete Sampras the next day in straight sets. This win made Hewitt the most recent male player to win a Grand Slam singles and doubles title during his career. The Australian went on to win the Tokyo Open and again qualify for the year-end Tennis Masters Cup held in Sydney. During the tournament, Hewitt won all matches in his group. He then went on to defeat Sébastien Grosjean in the final to take the title and gain the No. 1 ranking.
Hewitt won a total of six titles in 2001.
The year 2002 was once again a solid year for Hewitt, winning three titles in San Jose, Indian Wells and London (Queen's Club). He followed his 2001 US Open win by capturing the Wimbledon singles title. He defeated Jonas Björkman, Grégory Carraz, Julian Knowle, Mikhail Youzhny, Sjeng Schalken and home favourite Tim Henman before dominating first-time finalist David Nalbandian in straight sets; Hewitt lost only two sets (both to Schalken) throughout the championship. His victory reinforced the idea that, although the tournament had tended to be dominated by serve-and-volleyers, a baseliner could still triumph on grass (Hewitt was the first 'baseliner' to win the tournament since Agassi in 1992).
For his third straight year, he qualified for the year-end Tennis Masters Cup, held in Shanghai, and successfully defended his title by defeating Juan Carlos Ferrero in the final. Hewitt's win helped him finish the year ranked No. 1 for a second straight year.
In 2003, Hewitt defeated former No. 1 Gustavo Kuerten for the championship at Indian Wells. But at Wimbledon, as the defending champion, Hewitt lost in the first round to qualifier Ivo Karlović. Hewitt became the first defending Wimbledon men's champion in the open era to lose in the first round. Only once before in the tournament's 126-year history had a defending men's champion lost in the opening round, in 1967, when Manuel Santana was beaten by Charlie Pasarell. Hewitt was only the third defending Grand Slam champion in the open era to lose in the first round, after Boris Becker at the 1997 Australian Open and Patrick Rafter at the 1999 US Open. After Wimbledon in 2003, Hewitt lost in the final of the tournament in Los Angeles, the second round of the ATP Masters Series tournament in Montreal, and the first round of the ATP Masters Series tournament in Cincinnati. At the US Open, Hewitt lost in the quarterfinals to Juan Carlos Ferrero. Hewitt played only Davis Cup matches for the remainder of the year, recording five-set wins over Roger Federer and Juan Carlos Ferrero in the semi-finals and final respectively, as Australia went on to win the Davis Cup. Hewitt used much of his spare time in late 2003 to bulk up, gaining 7 kg.
In 2004, Hewitt became the first man in history to lose in each Grand Slam singles tournament to the eventual champion. At the Australian Open, he was defeated in the fourth round by Swiss Roger Federer. At the French Open, he was defeated in a quarterfinal by Argentine Gastón Gaudio. At Wimbledon, he was defeated in a quarterfinal by Federer, and at the US Open, he was defeated in the final by Federer, losing two out of the three sets at love. At the year ending 2004 Tennis Masters Cup, Hewitt defeated Andy Roddick to advance to the final, but was yet again defeated by defending champion Federer.
In 2005, Hewitt won his only title at the Sydney Medibank International defeating little-known Czech player Ivo Minář. Hewitt spent much time in the late stages of 2004 working with his former coach and good friend, Roger Rasheed, on bulking up his physique. His hard work paid off during the Australian summer, when he defeated an in-form No. 2 Andy Roddick to reach his first Australian Open final in 2005. He was the first Australian player to reach the final since Pat Cash in 1988. In the final, he faced fourth seed, Marat Safin, who had defeated No. 1 and defending champion Roger Federer in the semi-finals. After easily taking the first set, he was defeated by the Russian despite being up a break in the third set.
At Wimbledon, Hewitt reached the semi-finals, but lost to eventual champion Federer. Two months later, Hewitt again lost to Federer in the US Open semi-final, although this time he was able to take one set from the Swiss. Hewitt had at this point lost to the eventual champion at seven consecutive Grand Slam tournaments he had played (he missed the 2005 French Open because of injury). Hewitt pulled out of the Tennis Masters Cup tournament in Shanghai in November 2005 so that he could be with his wife Bec, who was due to give birth.
Hewitt was defeated in the second round of the 2006 Australian Open by Juan Ignacio Chela of Argentina. He then reached the finals of the San Jose and Las Vegas tournaments, losing to British youngster Andy Murray and American James Blake, respectively. But he lost to Tim Henman in the second round of the Miami Masters, a player he had defeated eight times previously in as many matches. At the 2006 French Open, Hewitt reached the fourth round, where he lost to defending champion and eventual winner Rafael Nadal in four sets.
Hewitt won his first tournament of 2006 (after a 17-month hiatus from winning a tournament), when he beat Blake in the final of the Queen's Club Championships. This was his fourth title there, equalling the records of John McEnroe and Boris Becker. During the 2006 Wimbledon Championships, Hewitt survived a five-set match against South Korea's Hyung-Taik Lee that was played over two days. He then defeated Olivier Rochus and David Ferrer, before losing to Marcos Baghdatis in the quarterfinals. At the 2006 Legg Mason Tennis Classic in Washington, D.C., Hewitt was defeated by Arnaud Clément in the quarterfinals, after defeating Vincent Spadea in the second round and Denis Gremelmayr in the third round.
Hewitt participated at the 2006 US Open, despite having an injured knee. Hewitt won his first three matches in straight sets against, respectively, Albert Montañés, Jan Hernych, and Novak Djokovic. He defeated Richard Gasquet in five sets to advance to the quarterfinals for the seventh consecutive year. He then lost to Roddick.
At the 2007 Australian Open, Hewitt lost in the third round to tenth-seeded Chilean and eventual runner-up Fernando González. With his win in Las Vegas in March, Hewitt had won at least one ATP title annually for ten consecutive years. This was a record among active players at the time. Hewitt reached the 2007 Hamburg Masters semi-finals, where he pushed eventual finalist Rafael Nadal to three sets. At the 2007 French Open, Hewitt, for the second straight time lost in the fourth round to Nadal. At the 2007 Wimbledon Championships, Hewitt won his first three matches, including a four-set third round victory over Guillermo Cañas. He then faced fourth seed Novak Djokovic in the fourth round, which he lost.
After Wimbledon, it was announced that he had hired former Australian tennis pro Tony Roche to coach him during Grand Slam and Masters tournaments in 2007 and 2008. At the Masters tournaments in Montréal and Cincinnati Hewitt reached the quarterfinals and semi-finals, respectively. In both cases, he lost to Roger Federer.
He was seeded 16th at the 2007 US Open, but for the first time in eight consecutive appearances at Flushing Meadows, he did not reach the quarterfinals or further. He lost in the second round to Argentine Agustín Calleri.
At the 2008 Australian Open, he advanced to the fourth round as the 19th seed, defeating 15th-seeded and 2006 Australian Open finalist Marcos Baghdatis in a thrilling third-round match. The 282-minute match started at 11:52 pm and ended at 4:34 am the following morning. It was a characteristically "gutsy" performance and cemented Hewitt's reputation as a tough competitor. Hewitt lost his fourth-round match in straight sets to third-seeded and eventual champion Novak Djokovic.
A hip injury Hewitt acquired in March 2008 affected his preparation for the French Open and forced the loss of 300 ranking points, as Hewitt was unable to defend his semi-final appearance at the Hamburg Masters or to compete in supplementary tournaments. However, Hewitt made the third round at Roland Garros, before losing a five-set thriller to fifth seed David Ferrer.
Despite his ongoing hip problem, Hewitt was able to compete at the Queens Club Championship with moderate success, falling to second seed Novak Djokovic in the quarterfinals. His good form continued into Wimbledon, Hewitt making the fourth round for the second successive year, before losing to No. 1 and top seed Roger Federer.
After Wimbledon, Hewitt elected to miss the Montreal and Cincinnati Masters in an effort to give his hip sufficient rest to enable him to play at the 2008 Beijing Olympics, where he defeated Jonas Björkman in the first round before losing to second seed Rafael Nadal. However, the more notable incident in the Olympics occurred in Hewitt's opening-round doubles match with Chris Guccione against Argentines Juan Mónaco and Agustín Calleri. The match went to an advantage third set with Hewitt and Guccione prevailing 18–16. After the Olympics, due to the further damage Hewitt's hip sustained at the Olympics, he was left with no option but to pull out of the US Open and skip the rest of the season to have hip surgery. 2008 was the first year since 1997 in which Hewitt did not win a title.
After returning from hip surgery, Hewitt played his first match in 2009 at the Hopman Cup, where he defeated Nicolas Kiefer in three sets. Hewitt then participated in the Medibank International Sydney, winning his first two matches, but losing in the quarterfinals to David Nalbandian. Hewitt then went on to play in the 2009 Australian Open, where he was unseeded in a Grand Slam for the first time since 2000. He faced Fernando González in the first round and lost in five sets.
At the tournament in Memphis, he caused an upset in the first round by defeating James Blake in three sets. He then defeated fellow Australian Chris Guccione in the second round and Christophe Rochus in the quarterfinals. He faced Andy Roddick in the semi-finals, but lost in a close match. Hewitt then lost in the first round of Delray Beach to Yen-Hsun Lu, the eighth seed. Hewitt also competed in the BNP Paribas Open in Indian Wells, California, and reached the second round, being defeated by Fernando González. At the Sony Ericsson Open in Miami, Hewitt played Israeli Dudi Sela in the first round. Hewitt lost the first set, before recovering to win the match. Hewitt was then defeated by seventh seed Gilles Simon of France in straight sets.
At the 2009 U.S. Men's Clay Court Championships, Hewitt defeated seventh seed Diego Junqueira. Hewitt advanced to the quarterfinals after defeating Sergio Roitman in just 57 minutes, and then Guillermo García-López to advance to the semi-finals, where he defeated Evgeny Korolev. He defeated Wayne Odesnik in the final, for his first title since 2007 and his first clay-court title in a decade. Hewitt entered the Monte Carlo Masters as a wild card. He lost in the first round to Marat Safin. Hewitt admitted to running out of energy in the second set.
At the 2009 BMW Open, Hewitt recorded his 500th career win after defeating Philipp Petzschner in the first round, becoming one of only four active players to achieve this milestone; the others were Roger Federer, Carlos Moyá and, later, Andy Roddick, who reached it at the 2009 Legg Mason Tennis Classic in Washington, D.C. In the 2009 French Open, he defeated 26th seed Ivo Karlović in five sets in the first round, and then defeated Andrey Golubev in the second. He lost to No. 1 Rafael Nadal in the third round. His next tournament was the 2009 Aegon Championships in London. He was seeded 15th and drew Eduardo Schwank in the first round, whom he easily dispatched. In the second round, he went three sets against Portuguese Frederico Gil, dropping the first set before going on to win. Former rival Andy Roddick awaited Hewitt in the third round, and the match certainly did not disappoint: as they had many times in the past, the former No. 1 players battled through a tough and intense match, which Roddick won.
In the 2009 Wimbledon Championships, Hewitt faced the prospect of meeting Rafael Nadal in the second round. However, Nadal withdrew due to injury, and his place in the draw was taken by No. 5 Juan Martín del Potro. Hewitt defeated American Robby Ginepri in the first round, using his strong service game to advantage and losing only one service game the entire match. He then upended del Potro in straight sets. The third round also produced a straight-set victory for Hewitt, as he defeated Philipp Petzschner. He reversed a two-set deficit to defeat Radek Štěpánek in the fourth round, another classic Hewitt fightback to thrill the many Australians on hand to witness the match. His Cinderella run ended in the quarterfinals against sixth seed Andy Roddick in a five-set thriller featuring two tiebreaks, Hewitt losing 3–6, 7–6 (10), 6–7 (1), 6–4, 4–6. It was the first time Hewitt had reached the quarterfinals of a major since the 2006 US Open.
After an extended break, Hewitt began working his way into the US Open Series by playing in Washington at the Legg Mason Classic, where he made it into the third round before losing a three-set battle with Juan Martín del Potro. At the Montreal Masters, Hewitt lost in the first round to former No. 1 Juan Carlos Ferrero. Cincinnati saw Hewitt reach the quarterfinals for the sixth time, where he lost to Roger Federer in straight sets; during the first round of the tournament, Hewitt had shown his trademark fighting abilities by saving two match points to win against an in-form Robin Söderling. At the US Open, Hewitt progressed into the third round, where he played Federer for the 23rd time in their decade-long rivalry. Hewitt took the first set 6–4 before the 15-time Grand Slam champion took control of the second. The third set was tight, with both players saving multiple break points, but Federer eventually prevailed in four sets.
In late September, Hewitt travelled to Malaysia for the first time to take part in the inaugural Malaysian Open held in Kuala Lumpur. The new tournament was part of the ATP's new dedicated Asian swing. Hewitt lost in the first round to Swedish player Joachim Johansson. In Tokyo, Hewitt was drawn to once again meet del Potro in the quarterfinals, but was given a clear path when del Potro was knocked out by qualifier Édouard Roger-Vasselin in the first round. After defeating Fabrice Santoro in the second round, Hewitt downed Roger-Vasselin to reach his first semi-final since winning the US Men's Clay Court Championships in April, but lost to Mikhail Youzhny. He then competed in the 2009 Shanghai ATP Masters 1000, where he won in the first round, defeating John Isner, before losing to Gaël Monfils.
Hewitt began his 2010 season partnering Samantha Stosur at the Hopman Cup. The Australians were the top seeds for the exhibition tournament. They, however, fared worse than expected, losing ties against Romania and Spain, and therefore failing to reach the final.
He was seeded fourth in the Medibank International and, like the previous year, reached the quarterfinals, losing to eventual champion Marcos Baghdatis. At the 2010 Australian Open, he lost to Roger Federer in the fourth round.
A week after his exit from the Australian Open, Hewitt announced at a press conference at Melbourne Park that he had undergone another hip operation on 28 January 2010 in Hobart, this time on his right hip, similar to his earlier left hip operation.
Hewitt returned to the tour at the U.S. Men's Clay Court Championships as the singles defending champion. He won his first match since the Australian Open, partnering coach Nathan Healey in the doubles, defeating James Cerretani and Adil Shamasdin, but lost to top seeds the Bryan brothers in the semi-finals. Hewitt received a first-round bye, as he was seeded fourth in singles. In his first match, against lucky loser Somdev Devvarman, Hewitt dropped the first set, before battling to win in three sets. He then lost to Juan Ignacio Chela. Hewitt's next tournament was scheduled to be the Monte-Carlo Rolex Masters. However, he withdrew due to a recurring injury.
Hewitt then reached the second round in Barcelona, before losing to Eduardo Schwank, and lost in the second round of the Internazionali BNL d'Italia to Guillermo García-López. Hewitt then travelled back to Australia to participate in a Davis Cup tie against Japan, winning his two singles matches.
At the French Open, Hewitt reached the third round, before losing to Rafael Nadal, who went on to win the title without dropping a set and take the No. 1 ranking.
On 13 June, Hewitt defeated Roger Federer in the final of the Gerry Weber Open in Halle, Germany, a grass-court tuneup for Wimbledon Championships. The win was Hewitt's first over Federer since 2003 and snapped a 15-match losing streak against the Swiss.
At Wimbledon, Hewitt was seeded 15th and lost to third seed Novak Djokovic in the fourth round. After dropping the first two sets, Hewitt took advantage of a stomach illness affecting Djokovic to take the third set. However, Hewitt could not complete the comeback, and ended up losing in four sets.
At the Atlanta Tennis Championship, Hewitt lost in the first round to Lukáš Lacko. After receiving a first-round bye at the Legg Mason Classic, Hewitt retired in the second round due to a leg injury. He pulled out of the Rogers Cup in Toronto to recover, and returned in Cincinnati. Hewitt defeated Yen-Hsun Lu in the opening round, before losing in three sets to fifth seed Robin Söderling.
Hewitt was 32nd seed at the US Open and lost his first-round match to Paul-Henri Mathieu in five sets. It was his earliest exit at the US Open. He withdrew from the Asian hard-court swing due to a wrist injury suffered during the Australian Davis Cup playoff loss to Belgium.
Hewitt began his 15th season on the ATP Tour at the Hopman Cup in Perth. He defeated his Belgian opponent Ruben Bemelmans and went on to win the tie for Australia with a three-set victory in the mixed doubles, partnering Alicia Molik. He next played No. 3 Novak Djokovic, but lost in straight sets. For his final singles match of the tournament, he played Kazakhstani Andrey Golubev, defeating him in straight sets.
After the Hopman Cup, Hewitt competed in the AAMI Kooyong Classic, an exhibition tournament in the build-up to the Australian Open. He started the tournament solidly, taking out third seed Mikhail Youzhny. In the second round, he defeated Russian Nikolay Davydenko, and in the final he defeated Frenchman Gaël Monfils. It was the first time that Hewitt had played in the tournament.
At the 2011 Australian Open, Hewitt was defeated in the first round in five sets by Argentina's David Nalbandian. Hewitt was up two sets to one and, during the fourth set, had the chance to finish off the match at 3–1 and 0–40 in his favour, but failed to capitalise on the situation. Furthermore, Hewitt had two match point opportunities in the final set to close out victory; however, one of these was met with an excellent drop shot from Nalbandian, who went on to save the other and secure the victory.
After the Australian Open, Hewitt participated in the SAP Open, an ATP World Tour 250 event. He defeated his first-round opponent Björn Phau, and proceeded to the second round against Brian Dabul. Hewitt had some problems with Dabul, losing the first set, but managed to defeat him. In the quarterfinals, Hewitt played against former US Open champion Juan Martín del Potro, who was on a comeback from a wrist injury. In a weak performance, Hewitt lost.
The next tournament that Hewitt took part in was the Regions Morgan Keegan Championships and the Cellular South Cup, an ATP World Tour 500 event in Memphis, Tennessee. Hewitt beat Lu Yen-Hsun in the opening round and advanced to the second round against Adrian Mannarino. Despite losing the first set, Hewitt defeated Mannarino. In the quarterfinals, Hewitt played top seed Andy Roddick and, despite being a set up, lost the match.
Hewitt then played in the 2011 BNP Paribas Open, an ATP World Tour Masters 1000 event. His first-round opponent was Chinese Taipei's Lu Yen-Hsun. This was the second time in a row the two had played each other in the first round, and he suffered a shock defeat. This was to be Hewitt's last event on the ATP Tour for over three months after he underwent surgery on his left foot. He made his comeback at the 2011 Gerry Weber Open in Halle, Germany, where he returned as defending champion. He was originally scheduled to face top seed Roger Federer in the opening round. However, the Swiss withdrew after reaching the final of the French Open. Hewitt therefore took on an alternate from Argentina, Leonardo Mayer and came through the match comfortably. In the second round, he played Andreas Seppi and defeated him. However, Hewitt's reign as champion of Halle came to an end at the hands of home favourite Philipp Kohlschreiber, when the Australian went down in straight sets. During this match, Hewitt turned his ankle when he came in to the net to try to reach a net cord ball. The following week, Hewitt had to retire during a first round match at the Aegon International against Olivier Rochus. This was a result of the niggling ankle injury he had picked up at Halle the week before.
Hewitt came into Wimbledon with doubts over his fitness and condition and was unseeded in the 2011 Wimbledon Championships draw. Hewitt faced Kei Nishikori of Japan in the first round and won in four close sets. In the second round, Hewitt faced fifth seed Robin Söderling. Hewitt won the first set in a tiebreak and the second set. Söderling fought back to take the match in five sets.
Hewitt's next tournament was the 2011 Atlanta Tennis Championships, an ATP World Tour 250 event and the first event of the US hard-court swing. Hewitt won his first-round match against the American qualifier Phillip Simmonds in straight sets, but went on to lose his second-round encounter against another American qualifier, Rajeev Ram. After this defeat, Hewitt, who had been scheduled to play in Los Angeles the following week, opted not to take up the offer of a wildcard and withdrew from the event to recover from his foot injury. He was then offered a wild card to play at the 2011 US Open, but was unable to take it up due to the foot injury, which ended his season.
Hewitt began his 2012 season at the Hopman Cup. In the opening singles tie against Spain, Hewitt lost in singles to Fernando Verdasco. For the mixed doubles match, Hewitt partnered with Jarmila Gajdošová. They lost the match in three sets 6–3, 3–6, 9–11, despite being 5–1 up in the final set tie-breaker. In the second tie against France, Hewitt lost to Richard Gasquet in singles and in straight sets in mixed doubles. In the final tie against China, Hewitt defeated Wu Di in straight sets and won the mixed doubles match. His next tournament was the Apia International, where he lost in the first round against Serbian fifth seed Viktor Troicki.
His next tournament was the 2012 Australian Open. In doubles, partnering countryman Peter Luczak, the Australians reached the second round, where they lost in straight sets to the Bryan brothers. In singles, where he was awarded a wildcard, Hewitt won his first-round match, defeating unseeded Cedrik-Marcel Stebe in almost four hours. Long-time rival Andy Roddick, seeded 15th, awaited Hewitt in the second round. After dropping the first set, Hewitt won the next two; Roddick then retired due to a groin injury and Hewitt advanced. In the third round, he faced the 23rd seed Milos Raonic of Canada. Playing at night in front of a boisterous Australian crowd, Hewitt dispatched Raonic in 3 hours 6 minutes. In the fourth round, Hewitt faced defending champion and No. 1-ranked Novak Djokovic. Djokovic won the first two sets fairly easily and was leading 3–0 in the third when Hewitt launched a spirited comeback, taking the set 6–4. Djokovic eventually prevailed, however, winning the match in four sets and ending Hewitt's run. Hewitt's next two matches were in February at the Davis Cup, where he won one singles match and one doubles match partnering Chris Guccione, results that helped Australia advance to the playoffs once more. After this, Hewitt needed an operation to have a plate inserted in his toe.
Hewitt returned with a wildcard at the French Open, where he lost in the first round to Blaž Kavčič. He then began his grass season at the Queen's Club Championships, losing in the first round to Croatian Ivo Karlović. Hewitt's next tournament was the 2012 Wimbledon Championships, where he was defeated in the first round by fifth seed Jo-Wilfried Tsonga. During the tournament, the ITF released its wild cards for the 2012 Olympics, and Hewitt's name was on the singles list, setting up his third appearance at the Olympic Games after 2000 and 2008. After his loss to Tsonga, Hewitt played doubles at Wimbledon partnering countryman Chris Guccione; they reached the third round before losing in four sets.
After Wimbledon, looking to prepare for the Olympics, Hewitt was granted a wild card at Newport. In the opening round, he defeated Canadian Vasek Pospisil. In the second round, he won in three sets, ousting American Tim Smyczek. In his next match, the Australian won against Israeli Dudi Sela. With this win, Hewitt went on to the semi-finals (his first since Halle 2010), where he was victorious over American Rajeev Ram. He lost to top seed John Isner in the final.
Playing in the Olympics, Hewitt was drawn against Sergiy Stakhovsky and won. Marin Čilić, seeded 13th, awaited in the second round and Hewitt dispatched the Croat in two sets to advance to the third round. There, he met 2nd seed Novak Djokovic. After losing the first set, Djokovic overpowered Hewitt to take the final two sets and eliminate Hewitt from the tournament. In the mixed doubles, he and Sam Stosur reached the quarterfinals, where they lost two sets to one to Britain's Andy Murray and Laura Robson.
Beginning the American hard-court season, Hewitt received a wild card into the Cincinnati Masters, where he beat Mikhail Youzhny in the first round before losing to Viktor Troicki in the second. His next tournament was the US Open, where another wild card completed his "Wild Card Slam", having received wild cards into all four Grand Slam tournaments in 2012. In the first round, Hewitt beat Tobias Kamke for his first match win at Flushing Meadows since 2009. In the second round, Hewitt won a marathon five-set match against Gilles Müller. In the third round, Hewitt lost to fourth seed and No. 5-ranked David Ferrer, despite having set points in the first set.
Hewitt started off 2013 in Brisbane, where he lost in the second round to Denis Istomin in straight sets. Prior to the Australian Open, Hewitt took part in the exhibition tournament AAMI Kooyong Classic, in which he defeated Milos Raonic, Tomáš Berdych, and Juan Martín del Potro en route to claiming his second title there. Due to this excellent result in the preparation event, expectations of Hewitt were high going into the 2013 Australian Open. However, he suffered his sixth first-round exit at his home slam, losing to No. 9 Janko Tipsarević in straight sets. Hewitt then played in the Davis Cup against Taiwan and won in both singles and doubles.
He played the SAP Open next in San Jose, losing his second-round match to third-seeded American Sam Querrey in a three-set thriller. He also claimed a wild card to play doubles with fellow Australian Marinko Matosevic, beating the No. 1 American duo of Mike Bryan and Bob Bryan in the quarterfinals before losing to Xavier Malisse and Frank Moser in the final. With this doubles run, Hewitt surpassed the 100-win mark in doubles. He next participated in the U.S. National Indoor Tennis Championships in Memphis, where he faced Yen-Hsun Lu in the opening round, saving two match points to edge Lu in three sets. He then lost to Denis Istomin, again in the second round.
Hewitt moved on to play the BNP Paribas Open in Indian Wells, ousting Lukáš Rosol and 15th seed John Isner, before losing to No. 18 Stanislas Wawrinka. Hewitt lost to Gilles Simon in the opening round at Roland Garros. After winning the first two sets, he succumbed in five. In his first match at the Aegon Championships Queen's Club, he beat Mike Russell in three sets. He followed this with victory over Grigor Dimitrov in straight sets. He then defeated Sam Querrey to book a place in the quarterfinals. In the quarterfinals, he defeated No. 8 Juan Martín del Potro in three sets, to progress to the semi-finals. Hewitt played Marin Čilić in the semi-finals, but was beaten in three sets. At Wimbledon, Hewitt beat top ten player Stanislas Wawrinka in the first round in straight sets. He was then defeated by German qualifier Dustin Brown in the second round in four sets.
In July 2013, he made it to his first final of the year at the Hall of Fame Championships, defeating Matthew Ebden, Prakash Amritraj, Jan Hernych, and John Isner on the way. He was beaten by Nicolas Mahut, having served for the championship at 5–4 in the second set. His form continued at the Atlanta Open, where he defeated Édouard Roger-Vasselin 6–4, 6–4, Rhyne Williams 7–6, 6–4 and, in the quarterfinals, Ivan Dodig 1–6, 6–3, 6–0. Hewitt played John Isner in the semi-finals, but lost in three tough sets. His 2013 US Open run started well, beating Brian Baker in four sets and following up with a five-set epic upset of fellow former US Open champion Juan Martín del Potro, in which Hewitt came back from two sets to one down against the No. 6, winning a fourth-set tiebreak and sealing the match 6–1 in the fifth. He beat Evgeny Donskoy in the third round to set up a fourth-round match with Mikhail Youzhny. Hewitt lost to Youzhny 3–6, 6–3, 7–6, 4–6, 7–5, despite leading 4–1 in the fourth set and serving for the match at 5–3 in the fifth. A measure of the success of Hewitt's 2013 season is that he won the Newcombe Medal as the most outstanding Australian tennis player of 2013, a year in which he returned to the world's top 100.
Hewitt kicked off the 2014 season as an unseeded entrant at the 2014 Brisbane International. He won his first-round match against Thanasi Kokkinakis in straight sets, then defeated sixth seed Feliciano López in the second round. His quarterfinal encounter against qualifier Marius Copil resulted in a straight-set victory, and in the semi-finals he prevailed over second seed Kei Nishikori, setting up a final against seventeen-time Grand Slam winner Roger Federer, who held an 18–8 head-to-head record against Hewitt. Hewitt managed to turn the tide on Federer, winning 6–1, 4–6, 6–3 and capturing the title, his 29th and first since 2010. As a result, his ranking rose from 60th to 43rd, and he became Australian number one again.
At AAMI Classic, he defeated Andy Murray in two tiebreaks.
In the 2014 Australian Open, Hewitt played both singles and doubles as an unseeded player. In his first-round singles match, he lost to No. 24 seed Andreas Seppi. In doubles, Hewitt partnered with retired former Australian number one Patrick Rafter; however, the duo did not manage to win their first-round match against Eric Butorac and Raven Klaasen, losing 4–6, 5–7. After the tournament, Hewitt's singles ranking rose to No. 38, his highest position since late 2010. Hewitt later claimed his 600th ATP win by beating Robin Haase in the first round of the 2014 Sony Open Tennis, becoming only the third active player to reach that milestone.
After the Australian Open, Hewitt played as part of the Australian Davis Cup team, losing his match against Jo-Wilfried Tsonga 3–6, 2–6, 6–7 (2–7). He then competed in the 2014 U.S. National Indoor Tennis Championships in Memphis in the United States. With a bye in the round of 32, he went on to defeat Marcos Baghdatis in three sets 1–6, 6–2, 6–0, before losing to Michael Russell 3–6, 6–7 (6–8). His next tournament was the Delray Beach tournament, where he beat Bradley Klahn in straight sets 6–3, 6–1. He then faced his compatriot Marinko Matosevic, but was forced to retire after injuring his shoulder, having lost the opening set 6–7 (2–7).
Hewitt played at the BNP Paribas Open, where he defeated Matthew Ebden 7–6 (7–2), 3–6, 6–3 before losing to Kevin Anderson 6–7 (5–7), 4–6. He then played at the Sony Open Tennis, where he defeated Robin Haase in the round of 128, 3–6, 6–3, 6–3, before losing to No. 1 Rafael Nadal. Hewitt then played at the U.S. Men's Clay Court Championships, where he lost in the round of 16 to Sam Querrey.
Hewitt suffered three consecutive first-round losses: at the BMW Open to Albert Ramos-Vinolas 7–6 (6), 1–6, 0–6, at the Mutua Madrid Open to Santiago Giraldo 5–7, 6–4, 2–6, and at the French Open (Roland Garros) to Carlos Berlocq 6–3, 2–6, 1–6, 4–6. This ended Hewitt's clay-court season.
At the Aegon Championships, Hewitt won in the first round against Daniel Gimeno-Traver in straight sets, before losing to Feliciano López in straight sets 3–6, 4–6. Following this tournament, Hewitt played at Wimbledon, where he won in the first round against Michał Przysiężny, 6–2, 6–7 (14–16), 6–1, 6–4, before losing in the second round in five sets to Jerzy Janowicz, 5–7, 4–6, 7–6 (7), 6–4, 3–6.
He next competed at the Newport Hall of Fame Tennis Championships, where he was seeded third. Hewitt advanced to the final for the third consecutive year, where he faced Ivo Karlović. Hewitt slayed his Newport demons, defeating the big-serving Croat in three sets: 6–3, 6–7 (4), 7–6 (3). It was his 30th career singles title. Hewitt went on to win the doubles title with countryman Chris Guccione later that same day.
In July 2014, the book "Facing Hewitt" by author Scoop Malinowski was published; it contains over 50 interviews with ATP players about their experiences of playing Hewitt. Hewitt received a copy in Newport after his quarterfinal win over American Steve Johnson.
On 10 August 2014, Hewitt defeated Austria's Jürgen Melzer in three sets (3–6, 6–4, 6–4) at the Cincinnati Masters to reach 610 wins on the ATP Tour. That enabled him to rise to number 19 on the all-time wins list, topping Björn Borg and Yevgeny Kafelnikov in the process.
Hewitt began his 2015 season as the defending champion of the Brisbane International. In the first round he was defeated in straight sets (3–6, 2–6) by fellow Australian Sam Groth in 58 minutes. As a result, he dropped from rank No. 50 to No. 84 and lost his position of No. 1 Australian which he had held for many consecutive months. Hewitt played the first Fast4 short-form tennis exhibition match against Roger Federer but lost in five sets.
Hewitt then made his 19th consecutive Australian Open appearance, the fourth-longest streak at any Grand Slam tournament. In the first round he beat wild card Zhang Ze in four sets. He then lost in five sets to his second-round opponent Benjamin Becker, despite winning the first two sets.
At a media conference, Hewitt announced plans to retire after the 2016 Australian Open and become captain of the Australian Davis Cup team after Pat Rafter moved on from the position, making him the seventh man to captain the team. "I had thought long and hard and I plan to play the Australian Open next year and then finish," he said. "At the moment, [the Davis Cup] is the main focus for us and then I will be looking towards the grass court season and finishing here in Melbourne, which would be special to play 20 Australian Opens". It would be Australia's first time in the Davis Cup World Group in six years. Rafter and John Newcombe are the only other Australian men to have been ranked No. 1 since rankings were established in 1973.
Hewitt then played the Miami Open and lost in the first round to Thomaz Bellucci in three sets. He was then awarded a wildcard to the 2015 U.S. Men's Clay Court Championships where he also lost in the first round to Go Soeda.
Hewitt skipped the remainder of the clay court season including the 2015 French Open, instead opting to focus on the grass season and Wimbledon. He began his grass court season at the 2015 Topshelf Open where he lost to Nicolas Mahut in the first round. He also was awarded a wildcard into the Men's Doubles where he partnered compatriot Matt Reid. They upset the fourth seeds Draganja/Kontinen in the first round.
At Wimbledon, Hewitt was awarded a wildcard and was defeated by Jarkko Nieminen 6–3, 3–6, 6–4, 0–6, 9–11 in the first round of his eighteenth and final appearance at the tournament. It was the 44th five-set match of his Grand Slam career. Despite three straight breaks of serve in the fifth set, Hewitt saved three match points while serving at 4–5 and held serve each time until the 20th game of the set. Afterwards, both the crowd and Nieminen himself gave Hewitt a standing ovation. Partnering compatriot Thanasi Kokkinakis, the wild-card duo reached the third round of the Wimbledon men's doubles via two five-set matches, including a defeat of the 15th seeds, but they lost to the fourth seeds. Hewitt also played in the mixed doubles with compatriot Casey Dellacqua on a wild card and lost in the second round, seemingly ending his Wimbledon career.
Hewitt partnered Sam Groth to win Australia's Davis Cup quarterfinals doubles rubber against Kazakhstan in Darwin on 18 July. With their spectacular performance, Groth and Hewitt were selected to play the last two reverse-singles rubbers, replacing Kyrgios and Kokkinakis respectively. After Groth's win, Hewitt won the deciding fifth rubber against Nedovyesov to put Australia at 3–2 to reach the semi-finals. It was Australia's first win from 0–2 down since 1939.
Hewitt, on a wild card, defeated compatriot John-Patrick Smith 6–3, 6–4 at the Citi Open in Washington, D.C. in August. He lost in the second round of the US Open to Bernard Tomic in five sets, despite having two match points. Hewitt, partnered with Sam Groth, then lost a tough Davis Cup semi-final doubles rubber against the British Murray brothers in five sets. Todd Woodbridge hailed it as the "Best [doubles] I've watched for years."
Having previously announced his intentions to retire after the 2016 Australian Open, Hewitt confirmed that his final season would consist of that, the Hopman Cup and the exhibition World Tennis Challenge.
In his 20th appearance at the Australian Open, he won his first-round match against fellow Australian James Duckworth in straight sets. He then lost in the second round in three competitive sets to eighth seed David Ferrer, 2–6, 4–6, 4–6. After the match, players including Roger Federer, Rafael Nadal, Andy Murray and Nick Kyrgios paid tribute to him as a man who had been at the top of the game for years and had continually displayed the fighting spirit he became synonymous with.
He was made a Member of the Order of Australia in the awards announced on Australia Day.
In March, Hewitt came out of retirement to replace the injured Nick Kyrgios in the first-round Davis Cup tie against the US at the Kooyong Lawn Tennis Club. He played doubles with John Peers against the Bryan brothers; the Australian duo fought back from two sets to love down, but lost the fifth set.
In June it was announced that Hewitt would be taking a wildcard into the Wimbledon doubles competition, playing alongside young compatriot Jordan Thompson. In the first round, the pair saved eight match points to defeat Nicolás Almagro and David Marrero 19–17 in the deciding set. However, they lost to the eighth seeds in the second round.
In December 2017, it was announced that Hewitt would come out of retirement and accept a doubles wildcard with compatriot Sam Groth at the 2018 Australian Open.
Hewitt and Jordan Thompson accepted a wildcard to play Doubles at the 2018 Brisbane International. They lost in the first round to Grigor Dimitrov and Ryan Harrison 3–6, 6–1, [5–10].
Hewitt then played in the Fast4 exhibition in Sydney, where he lost to Grigor Dimitrov. Hewitt and Kyrgios then went on to win the doubles, beating Alexander Zverev and Grigor Dimitrov. After that, he played the Tie Break Tens in Melbourne, where he won his opening match against Novak Djokovic before losing to world No. 1 Rafael Nadal.
In the Australian Open doubles, Hewitt and Groth made a run to the quarterfinals, including a win over third seeds Jean-Julien Rojer and Horia Tecău. This was his best doubles result at the Australian Open in his career.
Hewitt's doubles comeback continued at the 2018 Estoril Open, where he partnered Alex de Minaur; they defeated second seeds Michael Venus and Raven Klaasen before losing in the quarterfinals. He then reached the semi-finals of the 2018 Fuzion 100 Surbiton Trophy men's doubles with Alex Bolt, before Venus and Klaasen gained revenge over Hewitt and Bolt at the 2018 Rosmalen Grass Court Championships men's doubles.
Hewitt then teamed up with another Australian, Nick Kyrgios, at the 2018 Queen's Club Championships doubles tournament, where they defeated third seeds Nicolas Mahut and Pierre-Hugues Herbert before losing in the quarterfinals. In the 2018 Wimbledon Championships men's doubles, Hewitt again received a wildcard with Alex Bolt, but the pair again lost in the first round to Venus and Klaasen. After this loss, Hewitt teamed up with Jordan Thompson and lost in the first round of the 2018 Hall of Fame Tennis Championships in Newport.
Hewitt's last professional match of 2018 was in the 2018 Davis Cup World Group Play-offs against Austria, where Hewitt paired up with experienced doubles specialist John Peers to defeat the Austrian team of doubles specialist Oliver Marach and experienced clay-courter Jürgen Melzer.
In 2019, Hewitt played doubles at a number of tournaments. In a pairing with Jordan Thompson, they lost in the first round of the Sydney International. A week later he teamed up with John-Patrick Smith at the Australian Open, yet again losing in the first round.
Hewitt and countryman Jordan Thompson received a wildcard to play at the Wimbledon Championships. They reached the second round before losing to Raven Klaasen and Michael Venus in straight sets.
Also that year, he played doubles in New York, Houston, Surbiton, and 's-Hertogenbosch, partnering the likes of Alexei Popyrin and, yet again, Thompson.
Hewitt once again featured in the Australian summer of tennis, this time choosing to participate in the new Adelaide International, the first time he had played tour tennis in his home town for over a decade. He partnered Jordan Thompson but lost in the first round to Cristian Garin and Juan Ignacio Londero. The two chose to compete at the Australian Open a week later, but lost in the first round in straight sets to Korean duo Min-Kyu Song and Nam Ji-Sung.
Hewitt made his Davis Cup debut for Australia in the 1999 Davis Cup quarterfinals at age 18, against the United States in Chestnut Hill, Massachusetts. In the first rubber of the tie Hewitt faced No. 8 and Wimbledon quarterfinalist Todd Martin. Hewitt caused a major upset over Martin and went on to win his second singles rubber against Alex O'Brien as well. The great start to his Davis Cup career continued in the 1999 semi-finals against Russia, where he recorded another two wins, against Marat Safin and Yevgeny Kafelnikov. He tasted his first Davis Cup defeat in the 1999 final against France but became a Davis Cup champion anyway. In 2000 Hewitt and Australia again made the Davis Cup final but fell to Spain in Barcelona.
In 2001 Hewitt was again part of the Australian team that reached the Davis Cup final, but the Australians lost the fifth rubber and handed France a 3–2 win. Determined to make amends for his last few finals, Hewitt led the Australian team to the 2003 Davis Cup final against Spain, where he defeated Juan Carlos Ferrero in five sets. The team came away victorious 3–1 overall and Hewitt claimed his second Davis Cup title. By the age of 22, he had recorded more wins in Davis Cup singles than any other Australian player. Following the retirement of Pat Rafter and the semi-retirement of Mark Philippoussis, Hewitt was forced to lead the Australian Davis Cup team with little success from his peers. In the 2006 quarterfinals in Melbourne, Hewitt defeated Belarusian Vladimir Voltchkov in just 91 minutes. Voltchkov said before the match that "Hewitt has no weapons to hurt me." Hewitt responded, "Voltchkov doesn't have a ranking [of 457] to hurt me." In the semi-finals in Buenos Aires on clay, Hewitt lost to Argentine José Acasuso in five sets.
Despite a World Group semi-final appearance in 2006, Hewitt and Australia were relegated to the Asia/Oceania region in 2008. Hewitt continued to show his commitment to the team by competing in the regional ties, but the team fell in the playoff stages every year between 2008 and 2011. In the 2011 playoffs, he played against Roger Federer and Stanislas Wawrinka on a grass court in Sydney, losing both matches. In doubles, together with Chris Guccione, he was able to defeat Federer and Wawrinka, but this was not enough to take Australia to the World Group.
In 2012, Hewitt won his singles and doubles matches against China in February, which allowed Australia to return to the playoffs, where they lost to Germany. After defeating Chinese Taipei and Uzbekistan, Australia earned the right to reach the playoffs again in 2013. They ended up routing Poland 4–1 on Polish soil, including a convincing 6–1 6–3 6–2 win for Hewitt over recent Wimbledon quarterfinalist Łukasz Kubot. In 2014, Australia crashed out 5–0 in the World Group first round on the French clay of La Roche-sur-Yon, where Jo-Wilfried Tsonga beat Hewitt in both singles and doubles. Perth's grass courts then hosted yet another playoff tie for Australia in September 2014. Hewitt won both his singles match (against Farrukh Dustov) and the subsequent doubles rubber (partnering Chris Guccione against Dustov and Istomin) in straight sets, while up-and-coming Nick Kyrgios won his encounter with Denis Istomin to give Australia an unassailable 3–0 lead over Uzbekistan, enabling their country to return to the World Group in 2015. Sam Groth and Nick Kyrgios wrapped up a 5–0 victory a day later. Australia opened their 2015 campaign away to the Czech Republic in a 6–8 March tie.
Hewitt played in the semi-finals of the 2015 Davis Cup against Great Britain. He played doubles with Sam Groth, losing in five sets to brothers Andy and Jamie Murray.
He came out of retirement to play the first round match against the United States at the 2016 Davis Cup as a player-captain, where he and partner John Peers lost to the Bryan brothers in a five-setter.
He competed in the 2018 Davis Cup World Group Play-offs, again as a player-captain in doubles with Peers. They won the rubber against the Austrian duo Oliver Marach and Jürgen Melzer in four sets.
Hewitt is the sole holder of several Australian Davis Cup records, including most wins, most singles wins, most ties played and most years played. His Davis Cup career has included wins over players who were in the top ten at the time, including Todd Martin, Marat Safin, Yevgeny Kafelnikov, Roger Federer, Gustavo Kuerten, Sébastien Grosjean and Juan Carlos Ferrero.
Hewitt made his World Team Cup debut for Australia in 2000 at the age of 19. He recorded two singles victories, over Albert Costa and Marcelo Ríos, but fell to Yevgeny Kafelnikov in his last group stage match. Hewitt returned to the World Team Cup in 2001 and led Australia to the title by recording singles wins over Àlex Corretja, Magnus Norman, and Tommy Haas in the group stages. In the final, Hewitt defeated No. 2 Marat Safin.
Hewitt made his third appearance at the tournament in 2003, where he entered as the No. 1 singles player and went undefeated in his singles matches, recording wins over Jiří Novák, James Blake and Carlos Moyá, but it was not enough to send Australia through to the final.
Fresh from their 2003 Davis Cup victory, Hewitt and Mark Philippoussis entered the 2004 World Team Cup with high hopes. In the group stages Hewitt recorded victories over Robby Ginepri and Martin Verkerk but fell to Gastón Gaudio in his last group singles match. Despite the loss, Australia still advanced to the final, where Hewitt fell to Fernando González and Australia lost 2–1.
After a six-year hiatus Hewitt returned to compete in the 2010 World Team Cup and won his first match against John Isner but fell to Nicolás Almagro in his last match.
A 19-year-old Hewitt entered his first Olympics in 2000 and was given the fourth seeding in the draw. Hewitt was considered a strong favorite for a medal given his victory at the Sydney International earlier in the year, but despite competing in his home nation he went out in the first round to Max Mirnyi 6–3 6–3. Hewitt elected not to compete in the 2004 Athens Olympic Games, deciding instead to focus on the 2004 US Open, where he finished runner-up. He returned for his second Olympic Games in Beijing for both the singles and doubles competitions. A first round 7–5 7–6 victory over Jonas Björkman set up a second round clash with the number 2 seed, Rafael Nadal, who eliminated Hewitt 6–1 6–2 and went on to win the singles gold medal. Pairing up with Chris Guccione in the doubles, the team recorded victories over Agustín Calleri/Juan Mónaco and Rafael Nadal/Tommy Robredo before falling to the Bryan brothers in the quarterfinals.
Hewitt competed in his third Olympics at London 2012, where he entered the men's singles event and defeated Ukrainian Sergiy Stakhovsky in the first round. He was the only Australian in any tennis event to progress past the first round. In the second round Hewitt took out 13th-seeded Croatian Marin Čilić. In the third round Hewitt stunned the tennis world by winning the first set against the number 2 ranked Novak Djokovic, but ended up falling in three sets. He also sent an application to the International Olympic Committee to enter the men's doubles competition with Chris Guccione, but the application was rejected. Following his men's doubles rejection, Hewitt decided to apply for a spot in the mixed doubles competition with Sam Stosur. The pair were granted entry and defeated Polish pair Marcin Matkowski and Agnieszka Radwańska in the first round. In the quarterfinals, Hewitt and Stosur faced British pair Andy Murray and Laura Robson, losing the encounter.
Peter Smith, Darren Cahill, Jason Stoltenberg, Roger Rasheed, Scott Draper, Tony Roche, Nathan Healey, Brett Smith and Peter Luczak are all former coaches of Hewitt.
Hewitt and Roger Federer played each other on 27 occasions. Early in their careers, Hewitt dominated Federer, winning seven of their first nine meetings, including a victory from two sets down in the 2003 Davis Cup semi-final, which allowed Australia to defeat Switzerland. However, from 2004 onward, Federer dominated the rivalry, winning 16 of the last 18 meetings to finish with an 18–9 overall head-to-head record. This is Hewitt's longest rivalry as these two first played each other as juniors in 1996. They met in one Grand Slam final, the 2004 US Open final, where Federer won his first US Open title in a lopsided encounter in which Federer won the first and third sets 6–0 and the second set on a tiebreak. Federer met Hewitt at six of the Grand Slam tournaments in which he lifted the trophy, including all five of his triumphs between 2004 and 2005. Their last meeting was at the 2014 Brisbane International, where Hewitt triumphed over Federer in three sets, for his first title since 2010, when he also beat Federer to the Halle title.
Hewitt and Federer teamed up in the men's doubles at Wimbledon in 1999. They got to the third round before losing to Jonas Björkman and Pat Rafter.
Hewitt's second-longest rivalry was against American Andy Roddick, whom he played on 14 occasions. Early on, Hewitt dominated the rivalry, with six wins from their first seven meetings. One of those wins was a five-set victory at the 2001 US Open, the tournament at which Hewitt captured his first Grand Slam singles title. In later years, Roddick began to dominate Hewitt, and the rivalry finished at 7 wins each.
A rivalry and feud between Hewitt and Argentinian tennis players began at the 2002 Wimbledon final, where Hewitt defeated Argentina's David Nalbandian in straight sets. The rivalry hit its boiling point in 2005, over a series of matches spread between the 2005 Australian Open and the 2005 Davis Cup quarterfinals between Australia and Argentina. In the third round of the 2005 Australian Open, Hewitt faced Argentinian Juan Ignacio Chela and fired up Chela with his over-zealous celebrations of Chela's unforced errors, causing the Argentinian to spit at Hewitt during a change of ends. Hewitt then faced David Nalbandian in the quarterfinals on Australia Day, with Hewitt coming out victorious 10–8 in the fifth set. Later in 2005 Hewitt faced Guillermo Coria in the Davis Cup quarterfinals, where the rivalry flared up again before dying down the following year in the 2006 Davis Cup semi-finals, where Argentina came out victorious 5–0 over Hewitt and the Australians.
Hewitt is a defensive counterpuncher. He typically likes to stay back towards the baseline during a rally and will usually approach the net only to catch a short reply or drop shot from his opponent. Hewitt's lack of penetration in his groundstrokes, most notably in his forehand, a typically dominant shot in most male players, forces him to rely on placement rather than simply "dominating" the point. At the 2004 Cincinnati Masters final, commentator MaliVai Washington said that Hewitt was even more difficult to "ace" than Agassi because he gets more returns in play. Hewitt's tactics typically involve putting difficult service returns in play, consistently chasing down attempted winning shots from his opponent, and keeping the ball deep until he feels he can hit a winner.
Although he is known primarily as a baseliner, Hewitt is a skilled volleyer and is known for having one of the best overhead smashes in the game. His signature shot, however, is the offensive topspin lob, a shot that he executes efficiently off both wings when his opponent approaches the net. US Davis Cup captain Patrick McEnroe, Jim Courier and Tim Henman have all described Hewitt's lob as being the best in the world (although Henman has since declared Andy Murray to have succeeded him). In Andre Agassi's book "Open", Hewitt is described as one of the best shot selectors in the history of Men's Tennis.
In July 2000, Hewitt signed a multiyear endorsement deal with Nike. He is currently sponsored by American athletic apparel company Athletic DNA and the Japanese sports manufacturer Yonex, with whom he signed a "Head to Toe" deal in late 2005. Hewitt has used Yonex racquets since as early as 2000, starting with the Yonex Super RD Tour 95. Yonex provides Hewitt's racquets, shoes and accessories. Hewitt's Yonex shoes (SHT-306) are inscribed with his nickname "Rusty" along with an image of an Australian flag. He used the Yonex RDS Tour 90 model before debuting a new racquet, the Yonex RQiS 1 Tour, at the Montreal Masters on 7 August 2007, then switched to the Yonex RDiS 100 mid in 2009. In 2011, he switched to the Yonex VCORE 95 D, using a grip size of 4 3/8 (L3). Since mid-2011, he has alternated between Yonex, Nike, Adidas, Asics and Fila shoes.
Hewitt is a keen supporter of Australian rules football, having played the game earlier in his career, and is currently the joint No. 1 ticket holder for the Adelaide Crows, alongside MP Kate Ellis. He had once had a close friendship with Crows star Andrew McLeod, but this broke down amid much public controversy in 2005. Hewitt had produced a DVD titled "Lleyton Hewitt: The Other Side" which precipitated the falling out between him and McLeod over filming of certain Aboriginal sites.
Hewitt and Belgian tennis player Kim Clijsters started a relationship in January 2000, during the Australian Open. The two announced their engagement just before Christmas 2003, but separated in October 2004, cancelling a planned February 2005 wedding.
On 30 January 2005, shortly after losing the 2005 Australian Open final to Marat Safin, Hewitt proposed to Australian actress Bec Cartwright after they had been dating for six weeks. They married on 21 July 2005 at the Sydney Opera House and they have three children together.
In late 2008, to extend his tennis career and reduce the amount of tax he would otherwise have had to pay, Hewitt relocated his family for the European and North American season to their home in the Old Fort Bay estate, in Nassau, Bahamas.
Hewitt has a nickname, "Rusty", which was given to him by Darren Cahill who at the time thought Hewitt resembled the character Rusty, from the National Lampoon film series. Hewitt has also been given the nickname 'Rocky' by fans, which originated from his shouts of "C'mon Balboa", in reference to the character Rocky Balboa from the Sylvester Stallone film "Rocky". Hewitt has also been compared to the character.
Hewitt has been involved in several public controversies. He was involved in a racism dispute while playing James Blake at the 2001 US Open. After being foot-faulted twice by a black linesman on crucial points in the third set, Hewitt was accused of dragging race into the situation by suggesting the similarity in skin colour of Blake and the official was playing a part in the decision to penalise him.
At the 2001 French Open Hewitt twice called the Chair Umpire and net judge "spastics" and was subsequently forced to apologise to the spastic community in Australia following a public backlash.
Hewitt's constant "c'mons" when he won a point or his opponents made an error have been remarked upon as poor sportsmanship by opponents and media commentators. Notably this behaviour particularly riled his 2005 Australian Open second-round opponent James Blake.
Lars von Trier
Lars von Trier (born Lars Trier; 30 April 1956) is a Danish film director and screenwriter with a prolific and controversial career spanning almost four decades. His work is known for its genre and technical innovation, confrontational examination of existential, social, and political issues, and his treatment of subjects such as mercy, sacrifice, and mental health.
Among his more than 100 awards and 200 nominations at film festivals worldwide, von Trier has received: the Palme d'Or (for "Dancer in the Dark"), the Grand Prix (for "Breaking the Waves"), the Prix du Jury (for "Europa"), and the Technical Grand Prize (for "The Element of Crime" and "Europa") at the Cannes Film Festival.
Von Trier is the founder and shareholder of the international film production company Zentropa Films, which has sold more than 350 million tickets and garnered seven Academy Award nominations over the past 25 years.
Von Trier was born in Kongens Lyngby, Denmark, north of Copenhagen, to Inger Høst and Fritz Michael Hartmann (the head of Denmark's Ministry of Social Affairs and a World War II resistance fighter). He received his surname from Høst's husband, Ulf Trier, whom he believed to be his biological father until 1989.
He studied film theory at the University of Copenhagen and film direction at the National Film School of Denmark. At 25, he won two Best School Film awards at the Munich International Festival of Film Schools for "Nocturne" and "Last Detail". The same year, he added the German nobiliary particle "von" to his name, possibly as a satirical homage to the equally self-invented titles of directors Erich von Stroheim and Josef von Sternberg, and saw his graduation film "Images of Liberation" released as a theatrical feature.
In 1984, "The Element of Crime", von Trier's breakthrough film, received twelve awards at seven international festivals including the Technical Grand Prize at Cannes, and a nomination for the Palme d'Or. The film's slow, non-linear pace, innovative and multi-leveled plot design, and dark dreamlike visual effects combine to create an allegory for traumatic European historical events.
His next film, "Epidemic" (1987), was also shown at Cannes in the Un Certain Regard section. The film features two story lines that ultimately collide: the chronicle of two filmmakers (played by von Trier and screenwriter Niels Vørsel) in the midst of developing a new project, and a dark science fiction tale of a futuristic plague, the very film von Trier and Vørsel are depicted making.
Von Trier has occasionally referred to his films as falling into thematic and stylistic trilogies. This pattern began with the "Europa" trilogy, which illuminated traumatic periods in Europe both in the past and the future, and comprises "The Element of Crime" (1984), "Epidemic" (1987), and "Europa" (1991).
Von Trier directed "Medea" (1988) for television, which won him the Jean d'Arcy prize in France. It is based on a screenplay by Carl Th. Dreyer and stars Udo Kier. Trier completed the Europa trilogy in 1991 with "Europa" (released as "Zentropa" in the US), which won the Prix du Jury at the 1991 Cannes Film Festival, and picked up awards at other major festivals. In 1990 he also directed the music video for the song "Bakerman" by Laid Back. This video was re-used in 2006 by the English DJ and artist Shaun Baker in his remake of the song.
Seeking financial independence and creative control over their projects, in 1992 von Trier and producer Peter Aalbæk Jensen founded the film production company Zentropa Entertainment. Named after a fictional railway company in "Europa", their most recent film at the time, Zentropa has produced many movies other than Trier's own, as well as several television series. It has also produced hardcore sex films: "Constance" (1998), "Pink Prison" (1999), "HotMen CoolBoyz" (2000), and "All About Anna" (2005). To make money for his newly founded company, von Trier made "The Kingdom" (Danish title "Riget", 1994) and "The Kingdom II" ("Riget II", 1997), a pair of miniseries recorded in the Danish national hospital, the name "Riget" being a colloquial name for the hospital known as Rigshospitalet (lit. The Kingdom's Hospital) in Danish. A projected third season of the series was derailed by the deaths of the actors behind two of the major characters: Ernst-Hugo Järegård (Dr. Helmer) in 1998 and Kirsten Rolffes (Mrs. Drusse) in 2000.
In 1995 von Trier and Thomas Vinterberg presented their manifesto for a new cinematic movement, which they called Dogme 95. The Dogme 95 concept, which led to international interest in Danish film, inspired filmmakers all over the world. In 2008, together with their fellow Dogme directors Kristian Levring and Søren Kragh-Jacobsen, von Trier and Vinterberg received the European Film Award for European Achievement in World Cinema.
In 1996 von Trier conducted an unusual theatrical experiment in Copenhagen involving 53 actors, which he titled "Psychomobile 1: The World Clock". A documentary chronicling the project was directed by Jesper Jargil, and was released in 2000 with the title "De Udstillede" (The Exhibited).
Von Trier achieved his greatest international success with his "Golden Heart" trilogy. Each film in the trilogy is about a naive heroine who maintains her "golden heart" despite the tragedies she experiences. The trilogy consists of "Breaking the Waves" (1996), "The Idiots" (1998), and "Dancer in the Dark" (2000). While all three films are sometimes associated with the Dogme 95 movement, only "The Idiots" is a certified Dogme 95 film.
"Breaking the Waves" (1996), the first film in his "Golden Heart" trilogy, won the Grand Prix at the Cannes Film Festival and featured Emily Watson, who was nominated for the Academy Award for Best Actress. Its grainy images and hand-held photography pointed towards Dogme 95, but the film violated several of the manifesto's rules and therefore does not qualify as a Dogme 95 film.
The second film in the trilogy, "The Idiots" (1998), was nominated for the Palme d'Or at the Cannes Film Festival, which von Trier attended in person despite his dislike of traveling.
In 2000 von Trier premiered a musical featuring Icelandic musician Björk, "Dancer in the Dark". The film won the Palme d'Or at Cannes. The song "I've Seen It All" (co-written by von Trier) received an Academy Award nomination for Best Original Song.
"The Five Obstructions" (2003), made by von Trier and Jørgen Leth, is a documentary that incorporates lengthy sections of experimental films. The premise is that von Trier challenges director Jørgen Leth, his friend and mentor, to remake his old experimental film "The Perfect Human" (1967) five times, each time with a different "obstruction" (or obstacle) specified by von Trier.
A proposed trilogy, von Trier's "Land of Opportunities" consists of "Dogville" (2003), "Manderlay" (2005), and "," which is yet to be made. "Dogville" and "Manderlay" were both shot with the same distinctive, extremely stylized approach, placing the actors on a bare sound stage with no set decoration and the buildings' walls marked by chalk lines on the floor, a style inspired by 1970s televised theatre. "Dogville" (2003) starred Nicole Kidman and "Manderlay" (2005) starred Bryce Dallas Howard in the same main role as Grace Margaret Mulligan. Both films have casts of major international actors, including Harriet Andersson, Lauren Bacall, James Caan, Danny Glover, and Willem Dafoe, and question various issues relating to American society, such as intolerance (in "Dogville") and slavery (in "Manderlay").
In 2006 von Trier released a Danish-language comedy film, "The Boss of It All". It was shot using an experimental process that he has called Automavision, which involves the director choosing the best possible fixed camera position and then allowing a computer to randomly choose when to tilt, pan, or zoom.
Following "The Boss of It All", von Trier scripted an autobiographical film, "" in 2007, which went on to be directed by Jacob Thuesen. The film tells the story of vonTrier's years as a student at the National Film School of Denmark. It stars Jonatan Spang as vonTrier's alter ego, called "Erik Nietzsche", and is narrated by vonTrier himself. All the main characters in the film are based on real people from the Danish film industry, with thinly veiled portrayals including Jens Albinus as director Nils Malmros, Dejan Čukić as screenwriter Mogens Rukov, and Søren Pilmark.
The "Depression" trilogy consists of "Antichrist", "Melancholia", and "Nymphomaniac". The three films star Charlotte Gainsbourg, and deal with characters who suffer depression or grief in different ways. This trilogy is said to represent the depression that Trier himself experiences.
Von Trier's next feature film was "Antichrist", a film about "a grieving couple who retreat to their cabin in the woods, hoping a return to Eden will repair their broken hearts and troubled marriage; but nature takes its course and things go from bad to worse". The film stars Willem Dafoe and Charlotte Gainsbourg. It premiered in competition at the 2009 Cannes Film Festival, where the festival's jury honoured the movie by giving the Best Actress award to Gainsbourg.
In 2011 von Trier released "Melancholia", a psychological drama. The film was in competition at the 2011 Cannes Film Festival.
Known to be provocative in interviews, von Trier's remarks during the press conference before the premiere of "Melancholia" in Cannes caused significant controversy in the media, leading the festival to declare him "persona non grata" and to ban him from the festival for one year (without, however, excluding "Melancholia" from that year's competition). Minutes before the end of the interview, Trier was asked by a journalist about his German roots and the Nazi aesthetic in response to the director's description of the film's genre as "German romance". The director, who was brought up with his Jewish father and only found out later in life that his biological father was a non-Jewish German, appeared offended by the connotation and responded by discussing his German identity. He joked that since he was no longer Jewish he now "understands" and "sympathizes" with Hitler, that he is not against the Jews except for Israel which is "a pain in the ass" and that he is a Nazi. These remarks caused a stir in the media which, for the most part, presented the incident as an antisemitic scandal. The director released a formal apology immediately after the controversial press conference and kept apologizing for his joke during all of the interviews he gave in the weeks following the incident, admitting that he was not sober, and saying that he did not need to explain that he is not a Nazi. The actors of "Melancholia" who were present during the incident – Dunst, Gainsbourg, Skarsgård – defended the director, pointing to his provocative sense of humor and his depression. The director of the Cannes festival later characterized the controversy as "unfair" and as "stupid" as von Trier's bad joke, concluding that his films are welcome at the festival and that von Trier is considered a "friend". In 2019, von Trier stated that he made this remark at the "only press conference I ever had when I was sober."
Following "Melancholia", von Trier began the production of "Nymphomaniac", a film about the sexual awakening of a woman played by Charlotte Gainsbourg.
In early December 2013, a four-hour version of the five-and-a-half-hour film was shown to the press in a private preview session. The cast also included Stellan Skarsgård (in his sixth film for von Trier), Shia LaBeouf, Willem Dafoe, Jamie Bell, Christian Slater, and Uma Thurman. In response to claims that he had merely created a "porn film", Skarsgård stated "... if you look at this film, it's actually a really bad porn movie, even if you fast forward. And after a while you find you don't even react to the explicit scenes. They become as natural as seeing someone eating a bowl of cereal." Von Trier refused to attend the private screening due to the negative response to Nazi-related remarks he had made at the 2011 Cannes Film Festival, which had led to his expulsion from it. In the director's defense, Skarsgård stated at the screening, "Everyone knows he's not a Nazi, and it was disgraceful the way the press had these headlines saying he was."
For its public release in the United Kingdom, the four-hour version of "Nymphomaniac" was divided into two "volumes", "Volume I" and "Volume II", and the film's British premiere was on 22 February 2014. In interviews prior to the release date, Gainsbourg and co-star Stacy Martin revealed that prosthetic vaginas, body doubles, and special effects were used for the production of the film. Martin also stated that the film's characters were a reflection of the director himself and referred to the experience as an "honour" that she enjoyed.
The film was also released in two "volumes" for the Australian release on 20 March 2014, with an interval separating the back-to-back sections. In his review of the film for 3RRR's film criticism program, "Plato's Cave", presenter Josh Nelson stated that, since the production of "Breaking the Waves", the filmmaker to whom von Trier is most akin is Alfred Hitchcock, due to his portrayal of feminine issues. Nelson also mentioned filmmaker Andrei Tarkovsky as another influence whom Trier himself has also cited.
In February 2014, an uncensored version of "Volume I" was shown at the Berlin Film Festival, with no announcement of when or if the complete five-and-a-half-hour "Nymphomaniac" would be made available to the public. The complete version premiered at the 2014 Venice Film Festival and was shortly afterward released in a limited theatrical run worldwide that fall.
In 2015, von Trier started to work on a new feature film, "The House That Jack Built" (2018), which was originally planned as an eight-part television series. The story is about a serial killer, seen from the murderer's point of view. Shooting started in March 2017 in Sweden, with shooting moving to Copenhagen in May.
In February 2017, von Trier explained in his own words that "The House That Jack Built" "celebrates the idea that life is evil and soulless, which is sadly proven by the recent rise of the 'Homo trumpus' – the rat king". The film premiered at the Cannes Film Festival in May 2018.
Despite more than 100 walkouts by audience members when initially screened at the Cannes Film Festival, the film still received a 10-minute standing ovation.
Von Trier is heavily influenced by the work of Carl Theodor Dreyer and the film "The Night Porter". He was so inspired by the short film "The Perfect Human", directed by Jørgen Leth, that he challenged Leth to redo the short five times in the feature film "The Five Obstructions".
Von Trier's writing style has been heavily influenced by his work with actors on set, as well as the Dogme 95 manifesto that he co-authored. In an interview with "Creative Screenwriting", von Trier described his process as "writing a sketch and keep[ing] the story simple...then part of the script work is with the actors."
While reflecting on the storytelling across his body of work, von Trier said, "All the stories are about a realist who comes into conflict with life. I’m not crazy about real life, and real life is not crazy about me." He further described his process as dividing different parts of his personality into different characters.
Von Trier has cited Danish filmmaker Carl Dreyer as a writing influence, pointing to Dreyer's method of overwriting his scripts then significantly cutting the length down.
Von Trier has said that "a film should be like a stone in your shoe". To create original art he feels that filmmakers must distinguish themselves stylistically from other films, often by placing restrictions on the filmmaking process. The most famous such restriction is the cinematic "vow of chastity" of the Dogme 95 movement with which he is associated. In "Dancer in the Dark", he used jump cuts and dramatically different color palettes and camera techniques to separate the "real world" from the musical portions of the film, and in "Dogville" everything was filmed on a sound stage with no set, where the walls of the buildings in the fictional town were marked as lines on the floor.
Von Trier often shoots digitally and operates the camera himself, preferring to continuously shoot the actors in-character without stopping between takes. In "Dogville" he let actors stay in character for hours, in the style of method acting. These techniques often put great strain on the actors, most famously with Björk during the filming of "Dancer in the Dark".
Von Trier would later return to explicit images in "Antichrist" (2009), exploring darker themes, but he ran into problems when he tried once more with "Nymphomaniac", which had 90 minutes cut out (reducing it from five-and-one-half to four hours) for its international release in 2013 in order to be commercially viable, taking nearly a year to be shown complete anywhere in an uncensored director's cut.
Von Trier also attributes many of his profound ideas to his former mentor, Thomas Boguszewski. "Thomas' genius is one I could never match," says von Trier, "but it would be a shame not to try."
In a Skype interview for IndieWire, von Trier compared his approach to actors with "how a chef would work with a potato or a piece of meat", clarifying that working with actors has differed on each film based on the production conditions.
Von Trier has occasionally courted controversy by his treatment of his leading ladies. He and Björk famously fell out during the shooting of "Dancer in the Dark", to the point where Björk would abscond from filming for days at a time. She stated about Trier, who among other things shattered a monitor while it was next to her, "...you can take quite sexist film directors like Woody Allen or Stanley Kubrick and still they are the one that provide the soul to their movies. In Lars von Trier’s case it is not so and he knows it. He needs a female to provide his work soul. And he envies them and hates them for it. So he has to destroy them during the filming. And hide the evidence." Despite this, other actresses such as Kirsten Dunst and Charlotte Gainsbourg have spoken out in defence of von Trier's approach. "Nymphomaniac" star Stacy Martin has stated that he never forced her to do anything that was outside her comfort zone. She said "I don't think he's a misogynist. The fact that he sometimes depicts women as troubled or dangerous or dark or even evil; that doesn't automatically make him anti-feminist. It's a very dated argument. I think that Lars loves women."
Nicole Kidman, who starred in von Trier's "Dogville", said in an interview with ABC Radio National: "I think I tried to quit the film three times because he said, 'I want to tie you up and whip you, and that's not to be kind.' I was, like, what do you mean? I've come all this way to rehearse with you, to work with you, and now you're telling me you want to tie me up and whip me? But that's Lars, and Lars takes his clothes off and stands there naked and you're like, 'Oh, put your clothes back on, Lars, please, let's just shoot the film.' But he's very, very raw and he's almost like a child in that he'll say and do anything. And we would have to eat dinner every night and most of the time that would end with me in tears because Lars would sit next to me and drink peach schnapps and get drunk and get abusive and I'd leave and...anyway, then we'd go to work the next morning."
In October 2017, Björk posted on her Facebook page that she had been sexually harassed by a "Danish film director she worked with". She commented:
The "Los Angeles Times" found evidence identifying him as Lars von Trier. Von Trier has rejected Björk's allegation that he sexually harassed her during the making of the film "Dancer in the Dark", telling Danish daily "Jyllands-Posten" in its online edition, "That was not the case. But that we were definitely not friends, that’s a fact." Peter Aalbaek Jensen, the producer of "Dancer in the Dark", told "Jyllands-Posten" that "As far as I remember we [Lars von Trier and I] were the victims. That woman was stronger than both Lars von Trier and me and our company put together. She dictated everything and was about to close a movie of 100m kroner [$16m]."
After von Trier's statement, Björk explained the details about this incident, saying:
Björk's manager, Derek Birkett, has also criticised von Trier's past conduct, stating:
Von Trier has a penchant for working with actors and production members more than once. His main crew members and producer team have remained intact since the film "Europa". The list of actors reappearing in his films, even for small parts or cameos, is also extensive. Many of them have repeatedly expressed their devotion to von Trier and willingness to return on set with him, even without payment. He uses the same regular group of actors in many of his films including Jean-Marc Barr, Udo Kier and Stellan Skarsgård who was cast in several von Trier films: "Breaking the Waves", "Dancer in the Dark", "Dogville", and "Nymphomaniac".
"Note: This list shows only the actors who have collaborated with von Trier in three or more productions."
In 1989, von Trier's mother told him on her deathbed that the man von Trier thought was his biological father was not, and that he was the result of a liaison she had with her former employer, Fritz Michael Hartmann (1909–2000), who was descended from a long line of German-speaking, Roman Catholic classical musicians. Hartmann's grandfather was Emil Hartmann, his great-grandfather J. P. E. Hartmann, his uncles included Niels Gade and Johan Ernst Hartmann, and Niels Viggo Bentzon was his cousin. She stated that she did this to give her son "artistic genes".
"Until that point I thought I had a Jewish background. But I'm really more of a Nazi. I believe that my biological father's German family went back two further generations. Before she died, my mother told me to be happy that I was the son of this other man. She said my foster father had had no goals and no strength. But he was a loving man. And I was very sad about this revelation. And you then feel manipulated when you really do turn out to be creative. If I'd known that my mother had this plan, I would have become something else. I would have shown her. The slut!"
During the German occupation of Denmark, von Trier's biological father Fritz Michael Hartmann worked as a civil servant and joined a resistance group, "Frit Danmark", actively counteracting any pro-German and pro-Nazi colleagues in his department. Another member of this infiltrative resistance group was Hartmann's colleague Viggo Kampmann, who would later become prime minister of Denmark. After von Trier had had four awkward meetings with his biological father, Hartmann refused further contact.
Von Trier's mother considered herself a Communist, while his father was a Social Democrat. Both were committed nudists, and von Trier went on several childhood holidays to nudist camps. His parents regarded the disciplining of children as reactionary. He has noted that he was brought up in an atheist family, and that although Ulf Trier was Jewish, he was not religious. His parents did not allow much room in their household for "feelings, religion, or enjoyment", and also refused to make any rules for their children, with complex effects upon von Trier's personality and development.
In a 2005 interview with "Die Zeit", von Trier said, "I don't know if I'm all that Catholic really. I'm probably not. Denmark is a very Protestant country. Perhaps I only turned Catholic to piss off a few of my countrymen."
In 2009, he said, "I'm a very bad Catholic. In fact I'm becoming more and more of an atheist."
Von Trier periodically suffers from depression, and also from various fears and phobias, including an intense fear of flying. This fear frequently places severe constraints on him and his crew, necessitating that virtually all of his films be shot in either Denmark or Sweden. As he quipped in an interview, "Basically, I'm afraid of everything in life, except film making."
He has stated on numerous occasions that these depressive episodes render him incapable of performing his work and unable to fulfill social obligations.
Monty Python's Life of Brian
Monty Python's Life of Brian, also known as Life of Brian, is a 1979 British comedy film starring and written by the comedy group Monty Python (Graham Chapman, John Cleese, Terry Gilliam, Eric Idle, Terry Jones and Michael Palin). It was also directed by Jones. The film tells the story of Brian Cohen (played by Chapman), a young Jewish man who is born on the same day as—and next door to—Jesus Christ, and is subsequently mistaken for the Messiah.
Following the withdrawal of funding by EMI Films just days before production was scheduled to begin, long-time Monty Python fan and former member of the Beatles, George Harrison, arranged financing for "Life of Brian" through the formation of his company HandMade Films.
The film's themes of religious satire were controversial at the time of its release, drawing accusations of blasphemy, and protests from some religious groups. Thirty-nine local authorities in the United Kingdom either imposed an outright ban, or imposed an X (18 years) certificate, effectively preventing the film from being shown, since the distributors said it could not be shown unless it was unedited and carried the original AA (14) certificate. Some countries, including Ireland and Norway, banned its showing, with a few of these bans lasting decades. The filmmakers used such notoriety to benefit their marketing campaign, with posters in Sweden reading, "So funny, it was banned in Norway!"
The film was a box office success, the fourth-highest-grossing film in the United Kingdom in 1979, and highest grossing of any British film in the United States that year. It has remained popular and was named "greatest comedy film of all time" by several magazines and television networks, and it later received a 95% "Fresh" rating on Rotten Tomatoes with the consensus, "One of the more cutting-edge films of the 1970s, this religious farce from the classic comedy troupe is as poignant as it is funny and satirical." In a 2006 Channel 4 poll, "Life of Brian" was ranked first on their list of the 50 Greatest Comedy Films.
Brian Cohen is born in a stable next door to the one in which Jesus is born, which initially confuses the three wise men who come to praise the future King of the Jews. Brian later grows up into an idealistic young man who resents the continuing Roman occupation of Judea. While attending Jesus's Sermon on the Mount, Brian becomes infatuated with an attractive young rebel, Judith. His desire for her and hatred of the Romans inspire him to join the "People's Front of Judea" (PFJ), one of many fractious and bickering independence movements which spend more time fighting each other than the Romans.
Brian participates in an abortive attempt by the PFJ to kidnap the wife of Roman governor Pontius Pilate but is captured by the palace guards. Escaping when the guards suffer paroxysms of laughter over Pilate's speech impediment, Brian winds up trying to blend in among prophets preaching in a busy plaza. He stops his sermon mid-sentence when some Roman soldiers depart, leaving his small but intrigued audience demanding to know more. Brian grows frantic when people start following him and declare him to be the messiah. After spending a night in bed with Judith, Brian discovers an enormous crowd assembled outside his mother's house. Her attempts at dispersing the crowd are rebuffed, so she consents to Brian addressing them. He urges them to think for themselves, but they parrot his words as doctrine.
The PFJ seeks to exploit Brian's celebrity status by having him minister to a thronging crowd of followers demanding miracle cures. Brian sneaks out the back, only to be captured by the Romans and sentenced to crucifixion. In celebration of Passover, a crowd has assembled outside the palace of Pilate, who offers to pardon a prisoner of their choice. The crowd shouts out names containing the letter "r", mocking Pilate's rhotacistic speech impediment. Eventually, Judith appears in the crowd and calls for the release of Brian, which the crowd echoes, and Pilate agrees to "welease Bwian".
His order is eventually relayed to the guards, but in a scene that parodies the climax of the film "Spartacus", various crucified people all claim to be "Brian" so they can be free and the wrong man is released. Other opportunities for a reprieve for Brian are denied as the PFJ and then Judith praise his martyrdom, while his mother expresses regret for having raised him. Hope is renewed when a crack suicide squad from the "Judean People's Front" charges and prompts the Roman soldiers to flee; however, the squad commits mass suicide as a form of political protest. Condemned to a slow and painful death, Brian finds his spirits lifted by his fellow sufferers, who cheerfully sing "Always Look on the Bright Side of Life."
Several characters remained unnamed during the film but do have names that are used in the soundtrack album track listing and elsewhere. There is no mention in the film that Eric Idle's ever-cheerful joker is called "Mr. Cheeky", or that the Roman guard played by Michael Palin is named "Nisus Wettus".
Spike Milligan plays a prophet, ignored because his acolytes are chasing after Brian. By coincidence Milligan was visiting his old World War II battlefields in Tunisia where the film was being made. The Pythons were alerted to this one morning and he was promptly included in the scene being filmed. He disappeared in the afternoon before he could be included in any of the close-up or publicity shots for the film.
There are various stories about the origins of "Life of Brian". Shortly after the release of "Monty Python and the Holy Grail" (1975), Eric Idle flippantly suggested that the title of the Pythons' forthcoming feature would be "Jesus Christ: Lust for Glory" (a play on the UK title for the 1970 American film "Patton"). This was after he had become frustrated at repeatedly being asked what it would be called, despite the troupe not having given the matter of a third film any consideration. However, they shared a distrust of organised religion, and, after witnessing the critically acclaimed "Holy Grail"s enormous financial turnover, confirming an appetite among the fans for more cinematic endeavours, they began to seriously consider a film lampooning the New Testament era in the same way that "Holy Grail" had lampooned Arthurian legend. All they needed was an idea for a plot. Eric Idle and Terry Gilliam, while promoting "Holy Grail" in Amsterdam, had come up with a sketch in which Jesus' cross is falling apart because of the idiotic carpenters who built it and he angrily tells them how to do it correctly. However, after an early brainstorming stage, and despite being non-believers, they agreed that Jesus was "definitely a good guy" and found nothing to mock in his actual teachings: "He's not particularly funny, what he's saying isn't mockable, it's very decent stuff", said Idle later. After settling on the name Brian for their new protagonist, one idea considered was that of "the 13th disciple". The focus eventually shifted to a separate individual born at a similar time and location who would be mistaken for the Messiah, but had no desire to be followed as such.
The first draft of the screenplay, provisionally titled "The Gospel According to St. Brian", was ready by Christmas 1976. The final pre-production draft was ready in January 1978, following "a concentrated two-week writing and water-skiing period in Barbados". The film would not have been made without Python fan former Beatle George Harrison, who set up HandMade Films to help fund it at a cost of £3 million. Harrison put up the money for it as he "wanted to see the movie"—later described by Terry Jones as the "world's most expensive cinema ticket."
The original backers—EMI Films and, particularly, Bernard Delfont—had been scared off at the last minute by the subject matter. The very last words in the film are: "I said to him, 'Bernie, they'll never make their money back on this one'", teasing Delfont for his lack of faith in the project. Terry Gilliam later said, "They pulled out on the Thursday. The crew was supposed to be leaving on the Saturday. Disastrous. It was because they read the script ... finally." As a reward for his help, Harrison appears in a cameo appearance as Mr. Papadopoulos, "owner of the Mount", who briefly shakes hands with Brian in a crowd scene (at 1:09 in the film). His one word of dialogue (a cheery but out of place Scouse "'ullo") had to be dubbed in later by Michael Palin.
Terry Jones was solely responsible for directing, having amicably agreed with Gilliam (who co-directed "Holy Grail") to do so, with Gilliam concentrating on the look of the film. "Holy Grail"s production had often been stilted by their differences behind the camera. Gilliam again contributed two animated sequences (one being the opening credits) and took charge of set design. However, this did not put an absolute end to their feuding. On the DVD commentary, Gilliam expresses pride in one set in particular, the main hall of Pilate's fortress, which had been designed so that it looked like an ancient synagogue that the Romans had converted by dumping their structural artefacts (such as marble floors and columns) on top. He reveals his consternation at Jones for not paying enough attention to it in the cinematography. Gilliam also worked on the matte paintings, useful in particular for the very first shot of the three wise men against a star-scape and in giving the illusion of the whole of the outside of the fortress being covered in graffiti. Perhaps the most significant contribution from Gilliam was the scene in which Brian accidentally leaps off a high building and lands inside a starship about to engage in an interstellar war. This was done "in camera" using a hand-built model starship and miniature pyrotechnics, likely influenced by the recently released "Star Wars". Afterwards, George Lucas met Terry Gilliam in San Francisco and praised him for his work.
The film was shot on location in Monastir, Tunisia, which allowed the production to reuse sets from Franco Zeffirelli's "Jesus of Nazareth" (1977). The Tunisian shoot was documented by Iain Johnstone for his BBC film "The Pythons". Many locals were employed as extras on "Life of Brian". Director Jones noted, "They were all very knowing because they'd all worked for Franco Zeffirelli on "Jesus of Nazareth", so I had these elderly Tunisians telling me, 'Well, Mr Zeffirelli wouldn't have done it like that, you know.'" Further location shooting also took place in Tunisia, at Sousse (Jerusalem outer walls and gateway), Carthage (Roman theatre) and Matmata (Sermon on the Mount and Crucifixion).
Graham Chapman, suffering from alcoholism, was so determined to play the lead role – at one point coveted by Cleese – that he dried out in time for filming, so much so that he also acted as the on-set doctor.
Following shooting between 16 September and 12 November 1978, a two-hour rough cut of the film was put together for its first private showing in January 1979. Over the next few months "Life of Brian" was re-edited and re-screened a number of times for different preview audiences, losing a number of entire filmed sequences.
A number of scenes were cut during the editing process. Five deleted scenes, a total of 13 minutes, including the controversial "Otto", were first made available in 1997 on the Criterion Collection Laserdisc. An unknown amount of raw footage was destroyed in 1998 by the company that bought Handmade Films. However, a number of the deleted scenes (of varying quality) were shown the following year on the Paramount Comedy Channel in the UK. The scenes shown included three shepherds discussing sheep and completely missing the arrival of the angel heralding Jesus's birth, which would have been at the very start of the film; a segment showing the attempted kidnap of Pilate's wife (a large woman played by John Case) whose escape results in a fistfight; a scene introducing hardline Zionist Otto, leader of the Judean People's Front (played by Eric Idle) and his men who practise a suicide run in the courtyard; and a brief scene in which Judith releases some birds into the air in an attempt to summon help. The shepherds' scene has badly distorted sound, and the kidnap scene has poor colour quality. The same scenes that were on the Criterion laserdisc can now be found on the Criterion Collection DVD.
The most controversial cuts were the scenes involving Otto, initially a recurring character, who had a thin Adolf Hitler moustache and spoke with a German accent, shouting accusations of "racial impurity" at Judeans who were conceived (as Brian was) when their mothers were raped by Roman centurions, as well as other Nazi phrases. The logo of the Judean People's Front, designed by Terry Gilliam, was a Star of David with a small line added to each point so it resembled a swastika, most familiar in the West as the symbol of the anti-Semitic Nazi movement. The rest of this faction also all had the same thin moustaches, and wore a spike on their helmets, similar to those on Imperial German helmets. The official reason for the cutting was that Otto's dialogue slowed down the narrative. However, Gilliam, writing in "The Pythons Autobiography by The Pythons", said he thought it should have stayed, saying "Listen, we've alienated the Christians, let's get the Jews now." Idle himself was said to have been uncomfortable with the character; "It's essentially a pretty savage attack on rabid Zionism, suggesting it's rather akin to Nazism, which is a bit strong to take, but certainly a point of view." Michael Palin's personal journal entries from the period when various edits of "Brian" were being test-screened consistently reference the Pythons' and filmmakers' concerns that the Otto scenes were slowing the story down and thus were top of the list to be chopped from the final cut of the film. However, Oxford Brookes University historian David Nash says the removal of the scene represented "a form of self-censorship" and the Otto sequence "which involved a character representative of extreme forms of Zionism" was cut "in the interests of smoothing the way for the film's distribution in America."
The only scene with Otto that remains in the film is during the crucifixion sequence. Otto arrives with his "crack suicide squad", sending the Roman soldiers fleeing in terror. Instead of doing anything useful, they "attack" by committing mass suicide in front of the cross ("Zat showed 'em, huh?" says the dying Otto, to which Brian despondently replies "You silly sods!"), ending Brian's hope of rescue (they do however show some signs of life during the famous rendition of "Always Look on the Bright Side of Life" when they are seen waving their toes in unison in time to the music). Terry Jones once mentioned that the only reason this excerpt was not cut too was due to continuity reasons, as their dead bodies were very prominently placed throughout the rest of the scene. He acknowledged that some of the humour of this sole remaining contribution was lost through the earlier edits, but felt they were necessary to the overall pacing.
Otto's scenes, and those with Pilate's wife, were cut from the film after the script had gone to the publishers, and so they can be found in the published version of the script. Also present is a scene where, after Brian has led the Fifth Legion to the headquarters of the People's Front of Judea, Reg (John Cleese) says "You cunt!! You stupid, bird-brained, flat-headed..." The profanity was overdubbed to "you klutz" before the film was released. Cleese approved of this editing as he felt the reaction to the four-letter word would "get in the way of the comedy."
An early listing of the sequence of sketches reprinted in "Monty Python: The Case Against" by Robert Hewison reveals that the film was to have begun with a set of sketches at an English public school. Much of this material was first printed in the "Monty Python's The Life of Brian / Monty Python Scrapbook" that accompanied the original script publication of "The Life of Brian" and subsequently reused. The song "All Things Dull and Ugly" and the parody scripture reading "Martyrdom of St. Victor" were performed on "Monty Python's Contractual Obligation Album" (1980). The idea of a violent rugby match between school masters and small boys was filmed in "Monty Python's The Meaning of Life" (1983). A sketch about a boy who dies at school appeared on the unreleased "The Hastily Cobbled Together for a Fast Buck Album" (1981).
An album was also released by Monty Python in 1979 in conjunction with the film. In addition to the "Brian Song" and "Always Look on the Bright Side of Life", it contains scenes from the film with brief linking sections performed by Eric Idle and Graham Chapman. The album opens with a brief rendition of "Hava Nagila" on Scottish bagpipes. A CD version was released in 1997.
An album of the songs sung in "Monty Python's Life of Brian" was released on the Disky label. "Always Look on the Bright Side of Life" was later re-released with great success, after being sung by British football fans. Its popularity became truly evident in 1982 during the Falklands War when sailors aboard the destroyer HMS "Sheffield", severely damaged in an Argentinean Exocet missile attack on 4 May, started singing it while awaiting rescue. Many people have come to see the song as a life-affirming ode to optimism. One of its more famous renditions was by the dignitaries of Manchester's bid to host the 2000 Olympic Games, just after they were awarded to Sydney. Idle later performed the song as part of the 2012 Summer Olympics closing ceremony. "Always Look on the Bright Side of Life" is also featured in Eric Idle's "Spamalot", a Broadway musical based upon "Monty Python and the Holy Grail", and was sung by the rest of the Monty Python group at Graham Chapman's memorial service and at the "Monty Python Live At Aspen" special. The song is a staple at Iron Maiden concerts, where the recording is played after the final encore.
For the original British and Australian releases, a spoof travelogue narrated by John Cleese, "Away From It All", was shown before the film itself. It consisted mostly of stock travelogue footage and featured arch comments from Cleese. For instance, a shot of Bulgarian girls in ceremonial dresses was accompanied by the comment "Hard to believe, isn't it, that these simple happy folk are dedicated to the destruction of Western Civilisation as we know it!", Communist Bulgaria being a member of the Warsaw Pact at the time. Not only was this a spoof of travelogues "per se", it was a protest against the then common practice in Britain of showing cheaply made banal short features before a main feature.
"Life of Brian" opened on 17 August 1979 in five North American theatres and grossed US$140,034 ($28,007 per screen) in its opening weekend. Its total gross was $19,398,164. It was the highest grossing British film in North America that year. Released on 8 November 1979 in the UK, the film was the fourth highest-grossing film in Britain in 1979. In London, it opened at the Plaza cinema and grossed £40,470 in its opening week.
On 30 April 2004, "Life of Brian" was re-released on five North American screens to "cash in" (as Terry Jones put it) on the box office success of Mel Gibson's "The Passion of the Christ". It grossed $26,376 ($5,275 per screen) in its opening weekend. It ran until October 2004, playing at 28 screens at its widest point, eventually grossing $646,124 during its re-release. By comparison, a re-release of "Monty Python and the Holy Grail" had earned $1.8 million three years earlier. A DVD of the film was also released that year.
Reviews from critics were mostly positive on the film's release. Movie historian Leonard Maltin reported that "This will probably offend every creed and denomination equally, but it shouldn't. The funniest and most sustained feature from Britain's bad boys." Vincent Canby of "The New York Times" called the film "the foulest-spoken biblical epic ever made, as well as the best-humored—a nonstop orgy of assaults, not on anyone's virtue, but on the funny bone. It makes no difference that some of the routines fall flat because there are always others coming along immediately after that succeed." Roger Ebert gave the film three stars out of four, writing, "What's endearing about the Pythons is their good cheer, their irreverence, their willingness to allow comic situations to develop through a gradual accumulation of small insanities." Gene Siskel of the "Chicago Tribune" gave the film three-and-a-half stars, calling it "a gentle but very funny parody of the life of Jesus, as well as of biblical movies." Kevin Thomas of the "Los Angeles Times" declared, "Even those of us who find Monty Python too hit-and-miss and gory must admit that its latest effort has numerous moments of hilarity." Clyde Jeavons of "The Monthly Film Bulletin" wrote that the script was "occasionally over-raucous and crude," but found the second half of the film "cumulatively hilarious," with "a splendidly tasteless finale, which even Mel Brooks might envy." Gary Arnold of "The Washington Post" had a negative opinion of the film, writing that it was "a cruel fiction to foster the delusion that 'Brian' is bristling with blasphemous nifties and throbbing with impious wit. If only it were! One might find it easier to keep from nodding off."
Over time, "Life of Brian" has regularly been cited as a significant contender for the title "greatest comedy film of all time", and has been named as such in polls conducted by "Total Film" magazine in 2000, the British TV network Channel 4 where it topped the poll in the 50 Greatest Comedy Films, and "The Guardian" in 2007. Rotten Tomatoes lists it as one of the best reviewed comedies, with a 95% approval rating from 61 published reviews. A 2011 poll by "Time Out" magazine ranked it as the third greatest comedy film ever made, behind "Airplane!" and "This is Spinal Tap".
The BFI declared "Life of Brian" to be the 28th best British film of all time, in their equivalent of the original AFI's 100 Years...100 Movies list. It was the seventh highest ranking comedy on this list (four of the better placed efforts were classic Ealing Films). Another Channel 4 poll in 2001 named it the 23rd greatest film of all time (the only comedy that came higher was Billy Wilder's "Some Like It Hot", which was ranked 5th). In 2016, "Empire" magazine ranked "Life of Brian" 2nd in their list of the 100 best British films, with only David Lean’s "Lawrence of Arabia" ranking higher.
Various polls have voted the line, "He's not the Messiah, he's a very naughty boy!" (spoken by Brian's mother Mandy to the crowd assembled outside her house) the funniest in film history. Other famous lines from the film have featured in polls, such as, "What have the Romans ever done for us?" and "I'm Brian and so's my wife".
Richard Webster comments in "A Brief History of Blasphemy" (1990) that "internalised censorship played a significant role in the handling" of "Monty Python's Life of Brian". In his view, "As a satire on religion, this film might well be considered a rather slight production. As blasphemy it was, even in its original version, extremely mild. Yet the film was surrounded from its inception by intense anxiety, in some quarters of the Establishment, about the offence it might cause. As a result it gained a certificate for general release only after some cuts had been made. Perhaps more importantly still, the film was shunned by the BBC and ITV, who declined to show it for fear of offending Christians in the UK. Once again a blasphemy was restrained – or its circulation effectively curtailed – not by the force of law but by the internalisation of this law." On its initial release in the UK, the film was banned by several town councils – some of which had no cinemas within their boundaries, or had not even seen the film. A member of Harrogate council, one of those that banned the film, revealed during a television interview that the council had not seen the film, and had based their opinion on what they had been told by the Nationwide Festival of Light, a grouping with an evangelical Christian base, of which they knew nothing.
In New York (the film's release in the US preceded British distribution), screenings were picketed by both rabbis and nuns ("Nuns with banners!" observed Michael Palin). It was also banned for eight years in Ireland and for a year in Norway (it was marketed in Sweden as "The film so funny that it was banned in Norway"). During the film's theatrical run in Finland, a text explaining that the film was a parody of Hollywood historical epics was added to the opening credits.
In the UK, Mary Whitehouse, and other traditionalist Christians, pamphleteered and picketed locations where the local cinema was screening the film, a campaign that was felt to have boosted publicity. Leaflets arguing against the film's representation of the New Testament (for example, suggesting that the Wise Men would not have approached the wrong stable as they do in the opening of the film) were documented in Robert Hewison's book "Monty Python: The Case Against".
One of the most controversial scenes was the film's ending: Brian's crucifixion. Many Christian protesters said that it was mocking Jesus' suffering by turning it into a "Jolly Boys Outing" (such as when Mr Cheeky turns to Brian and says: "See, not so bad once you're up!"), capped by Brian's fellow sufferers suddenly bursting into song. This is reinforced by the fact that several characters throughout the film claim crucifixion is not as bad as it seems. For example, when Brian asks his cellmate in prison what will happen to him, he replies: "Oh, you'll probably get away with crucifixion". In another example, an old man who works with the People's Front of Judea dismisses crucifixion as "a doddle" and says being stabbed would be worse.
The director, Terry Jones, issued the following riposte to this criticism: "Any religion that makes a form of torture into an icon that they worship seems to me a pretty sick sort of religion quite honestly." The Pythons also pointed out that crucifixion was a standard form of execution in ancient times and not just one especially reserved for Jesus.
Shortly after the film was released, Cleese and Palin engaged in a debate on the BBC2 discussion programme "Friday Night, Saturday Morning" with Malcolm Muggeridge and Mervyn Stockwood, the Bishop of Southwark, who put forward arguments against the film. Muggeridge and Stockwood, it was later claimed, had arrived 15 minutes late to see a screening of the picture prior to the debate, missing the establishing scenes demonstrating that Brian and Jesus were two different characters, and hence contended that it was a send-up of Christ himself. Both Pythons later felt that there had been a strange role reversal in the manner of the debate, with two young upstart comedians attempting to make serious, well-researched points, while the Establishment figures engaged in cheap jibes and point scoring. They also expressed disappointment in Muggeridge, whom all in Python had previously respected as a satirist (he had recently converted to Christianity after meeting Mother Teresa and experiencing what he described as a miracle). Cleese expressed that his reputation had "plummeted" in his eyes, while Palin commented, "He was just being Muggeridge, preferring to have a very strong contrary opinion as opposed to none at all." Muggeridge's verdict on the film was that it was "Such a tenth-rate film that it couldn't possibly destroy anyone's genuine faith." In a 2013 interview on BBC Radio 4, Cleese stated that having recently watched the discussion again he "was astonished, first of all, at how stupid [the two members of the Church] were, and how boring the debate became". He added: "I think the sad thing was that there was absolutely no attempt at a proper discussion – no attempt to find any common ground."
The Pythons unanimously deny that they were ever out to destroy people's faith. On the DVD audio commentary, they contend that the film is heretical because it lampoons the practices of modern organised religion, but that it does not blasphemously lampoon the God that Christians and Jews worship. When Jesus does appear in the film (on the Mount, speaking the Beatitudes), he is played straight (by actor Kenneth Colley) and portrayed with respect. The music and lighting make it clear that there is a genuine aura around him. The comedy begins when members of the crowd mishear his statements of peace, love and tolerance ("I think he said, 'blessed are the cheese makers'"). Importantly, he is distinct from the character of Brian, which is also evident in the scene where an annoying and ungrateful ex-leper pesters Brian for money, while moaning that since Jesus cured him, he has lost his source of income in the begging trade (referring to Jesus as a "bloody do-gooder").
James Crossley, however, has argued that the film makes the distinction between Jesus and the character of Brian to make a contrast between the traditional Christ of both faith and cinema and the historical figure of Jesus in critical scholarship and how critical scholars have argued that ideas later got attributed to Jesus by his followers. Crossley points out that the film uses a number of potentially controversial scholarly theories about Jesus but now with reference to Brian, such as the Messianic Secret, the Jewishness of Jesus, Jesus the revolutionary, and having a single mother.
Not all the Pythons agree on the definition of the movie's tone. There was a brief exchange that occurred when the surviving members reunited in Aspen, Colorado, in 1998. In the section where "Life of Brian" is discussed, Terry Jones says, "I think the film is heretical, but it's not blasphemous." Eric Idle can be heard to concur, adding, "It's a heresy." However, John Cleese, disagreeing, counters, "I don't think it's a heresy. It's making fun of the way that people misunderstand the teaching." Jones responds, "Of course it's a heresy, John! It's attacking the Church! And that has to be heretical." Cleese replies, "No, it's not attacking the Church, necessarily. It's about people who cannot agree with each other."
In a later interview, Jones said the film "isn't blasphemous because it doesn't touch on belief at all. It is heretical, because it touches on dogma and the interpretation of belief, rather than belief itself."
The film continues to cause controversy; in February 2007, the Church of St Thomas the Martyr in Newcastle upon Tyne held a public screening in the church itself, with song-sheets, organ accompaniment, stewards in costume and false beards for female members of the audience (alluding to an early scene where a group of women disguise themselves as men so that they are able to take part in a stoning). Although the screening was a sell-out, some Christian groups, notably the conservative Christian Voice, were highly critical of the decision to allow the screening to go ahead. Stephen Green, the head of Christian Voice, insisted that "You don't promote Christ to the community by taking the mick out of him." The Reverend Jonathan Adams, one of the church's clergy, defended his taste in comedy, saying that it did not mock Jesus, and that it raised important issues about the hypocrisy and stupidity that can affect religion. Again on the film's DVD commentary, Cleese also spoke up for religious people who have come forward and congratulated him and his colleagues on the film's highlighting of double standards among purported followers of their own faith.
Some bans continued into the 21st century. In 2008, Torbay Council finally permitted the film to be shown after it won an online vote for the English Riviera International Comedy Film Festival. In 2009, it was announced that a thirty-year-old ban of the film in the Welsh town of Aberystwyth had finally been lifted, and the subsequent showing was attended by Terry Jones and Michael Palin alongside mayor Sue Jones-Davies (who portrayed Judith Iscariot in the film). However, before the showing, an Aberystwyth University student discovered that a ban had only been discussed by the council and in fact that it had been shown (or scheduled to be shown) at a cinema in the town in 1981. In 2013, a German official in the state of North Rhine-Westphalia considered the film to be possibly offensive to Christians and hence subject to a local regulation prohibiting its public screening on Good Friday, despite protests by local atheists.
The film pokes fun at revolutionary groups and 1970s British left-wing politics. According to Roger Wilmut, "What the film does do is place modern stereotypes in a historical setting, which enables it to indulge in a number of sharp digs, particularly at trade unionists and guerilla organisations". The groups in the film all oppose the Roman occupation of Judea, but fall into the familiar pattern of intense competition among factions that appears, to an outsider, to be over ideological distinctions so small as to be invisible, thus portraying the phenomenon of the narcissism of small differences. Such disunity indeed fatally beleaguered real-life Judean resistance against Roman rule. Michael Palin says that the various separatist movements were modelled on "modern resistance groups, all with obscure acronyms which they can never remember and their conflicting agendas".
The People's Front of Judea, composed of the Pythons' characters, harangue their "rivals" with cries of "splitters" and stand vehemently opposed to the Judean People's Front, the Campaign for a Free Galilee, and the Judean Popular People's Front (the last composed of a single old man, mocking the size of real revolutionary Trotskyist factions). The infighting among revolutionary organisations is demonstrated most dramatically when the PFJ attempts to kidnap Pontius Pilate's wife, but encounters agents of the Campaign for a Free Galilee, and the two factions begin a violent brawl over which of them conceived of the plan first. When Brian exhorts them to cease their fighting to struggle "against the common enemy," the revolutionaries stop and cry in unison, "the Judean People's Front!" However, they soon resume their fighting and, with two Roman legionaries watching bemusedly, continue until Brian is left the only survivor, at which point he is captured.
Other scenes have the freedom fighters wasting time in debate, with one of the debated items being that they should not waste their time debating so much. There is also a famous scene in which Reg gives a revolutionary speech asking, "What have the Romans ever done for us?" at which point the listeners outline all forms of positive aspects of the Roman occupation such as sanitation, medicine, education, wine, public order, irrigation, roads, a fresh water system, public health and peace, followed by "what have the Romans ever done for us except sanitation, medicine, education...". Python biographer George Perry notes, "The People's Liberation Front of Judea conducts its meetings as though they have been convened by a group of shop stewards". This joke is the reverse of a similar conversation recorded in the Babylonian Talmud; some authors have even suggested the joke is based on the Talmudic text.
The depiction of Jesus in two short scenes at the start of the film is strongly based on Christian iconography. The Sermon on the Mount is recited literally, and the resistance fighters leave it angry because Jesus is too pacifistic for them. ("Well, blessed is just about everyone with a vested interest in the status quo…") Beyond its respectful depiction of Jesus, the film never suggests that there is no God or that Jesus is not the son of God, in the reading of most viewers. The appearance of a leper healed by Jesus even affirms the Gospels and their reports of Jesus performing miracles.
Any direct reference to Jesus disappears after the introductory scenes, yet his life story partially serves as a framework and subtext for the story of Brian. Brian's being the illegitimate son of a Roman may allude to the polemical legend that Jesus was the son of the Roman soldier Panthera. Posing as a prophet, Brian himself talks about "the lilies on the field" and, more plainly, says, "Don't pass judgment on other people or else you might get judged yourself." The implication is that Brian is incoherently repeating things he has heard from Jesus.
Besides Jesus, one other figure in the film is also named in the Gospels: Pontius Pilate, who is made an absolute laughingstock. And although there is an allusion to Barabbas before the crucifixion, no character in "Life of Brian" bears any resemblance to Judas or Caiaphas. According to scholars, this rules out an anti-Semitic interpretation. ("Whether intended or not, this decision not to have a Caiaphas character avoids the possibility that the film might be viewed as anti-Semitic.") The crucifixion, a central motif in Christian iconography, is viewed in its historical context within the narrative style of the film. Its depiction as a routine mass crucifixion left practising Christians bewildered.
According to film theorists and Monty Python alike, the intended target of the satire was not Jesus and his teachings but religious dogmatism. This is made clear early in the film during the Sermon on the Mount. Not only do the poor acoustics make it difficult to hear what Jesus says, but the audience also fails to interpret what little it does hear sensibly. When Jesus says "blessed are the peacemakers", listeners at the back catch the phonetically similar "cheesemakers", and in turn interpret it as a metaphor, a beatification of those who produce dairy products.
"Life of Brian" satirises, in the words of David Hume, the "strong propensity of mankind to [believe in] the extraordinary and the marvellous". When Brian cuts his sermon short and turns away from the crowd, they mistake his behaviour for an unwillingness to share the secret of eternal life, and follow him everywhere. In their need to submit to an authority, the crowd declares him first a prophet and eventually a messiah. The faithful gather beneath Brian's window en masse to receive God's blessing, at which point Brian utters the central message of the film: "you don't need to follow anybody! You've got to think for yourselves!" Monty Python saw this central message of the satire confirmed by the protests of practising Christians after the film was released.
According to Terry Jones, "Life of Brian" "is not blasphemy but heresy", because Brian contests the authority of the Church while belief in God itself remains untouched. He goes on to mention that "Christ [is] saying all of these wonderful things about people living together in peace and love, and then for the next two thousand years people are putting each other to death in His name because they can't agree on how He said it, or in what order He said it." The followers' dispute over the correct interpretation of a sandal Brian has lost is, in Jones's words, the "history of the Church in three minutes." Kevin Shilbrack shares the view that one can enjoy the film and still be religious.
Largely lost in the controversy was the film's mockery of dogmatism among left-wing parties. According to John Cleese, an almost unmanageable number of left-wing organisations and parties had formed in the United Kingdom at the time. Doctrinal purity, he said, mattered so much to them that they would rather fight each other than their political opponents. In the film, the leader of the People's Front of Judea makes it clear that their hatred for the Judean People's Front is greater than their hatred for the Romans. So caught up in constant debate is this "rather looney bunch of revolutionaries" that they indirectly accept the occupying forces, and their execution methods, as a fate all must endure. In the end they thank Brian for his sacrifice instead of rescuing him.
Hardly mentioned in the discussion was the film's sideswipe at the women's movement, which began drawing wide attention in the 1970s. In the language of political activists, resistance fighter Stan asserts "his right as a man" to be a woman. From that moment on the group accepts him as Loretta, since the right to give birth is not theirs to take away, and as a further consequence the term "sibling" replaces "brother" and "sister".
One of the most commented-upon scenes in the film is when Brian tells his followers that they are all individuals and don't need to follow anybody. According to Edward Slowik, this is a rare moment in which Monty Python puts a philosophical concept into words so openly and directly: "Life of Brian" accurately depicts the existentialist view that everybody must give meaning to their own life.
Brian can thus be called an existentialist in the tradition of Friedrich Nietzsche and Jean-Paul Sartre: he is honest with himself and others and lives as authentic a life as he can. However, Brian is too naïve to qualify as a hero in the sense of Albert Camus. For Camus, the search for the meaning of one's own life takes place in a deeply meaningless and absurd world; the "absurd hero" rebels against this meaninglessness while holding on to their goals, knowing that the fight leaves no lasting impact. Brian, by contrast, is unable to recognise the meaninglessness of his own situation and therefore cannot triumph over it.
In "Monty Python and Philosophy", Kevin Shilbrack states that the film's fundamental view is that the world is absurd and every life must be lived without a greater meaning. He points out that the penultimate verse of the song that closes the film, "Always Look on the Bright Side of Life", expresses this message clearly.
Shilbrack concludes that the finale shows the executions served no purpose: the deaths were meaningless and no better world awaited the dead. On this reading, some would claim that the film presents a nihilistic world view that contradicts any basis of religion. However, Shilbrack argues, "Life of Brian" offers humour to counterbalance the nihilism: religion and humour are compatible with each other, and one should laugh at the absurdity one cannot fight.
Spin-offs include a script-book, "The Life of Brian of Nazareth", which was printed back-to-back with "MONTYPYTHONSCRAPBOOK" as a single volume. Printing this book also caused problems, owing to rarely used United Kingdom blasphemy laws dictating what can and cannot be written about religion. The publisher refused to print both halves of the book, so the original prints were produced by two different companies.
Julian Doyle, the film's editor, wrote "The Life of Brian/Jesus", a book which not only describes the filmmaking and editing process but argues that it is the most accurate Biblical film ever made. In October 2008, a memoir by Kim "Howard" Johnson titled "Monty Python's Tunisian Holiday: My Life with Brian" was released. Johnson became friendly with the Pythons during the filming of "Life of Brian", and the memoir draws on his notes and memories of the behind-the-scenes filming and make-up.
With the success of Eric Idle's musical retelling of "Monty Python and the Holy Grail", called "Spamalot", Idle announced that he would be giving "Life of Brian" a similar treatment. The oratorio, called "Not the Messiah (He's a Very Naughty Boy)", was commissioned to be part of the festival called Luminato in Toronto in June 2007, and was written/scored by Idle and John Du Prez, who also worked with Idle on "Spamalot". "Not the Messiah" is a spoof of Handel's "Messiah". It runs approximately 50 minutes, and was conducted at its world premiere by Toronto Symphony Orchestra music director Peter Oundjian, who is Idle's cousin. "Not the Messiah" received its US premiere at the Caramoor International Music Festival in Katonah, New York. Oundjian and Idle joined forces once again for a double performance of the oratorio in July 2007.
In October 2011, BBC Four premiered the made-for-television comedy film "Holy Flying Circus", written by Tony Roche and directed by Owen Harris. The "Pythonesque" film explores the events surrounding the 1979 television debate on the talk show "Friday Night, Saturday Morning" between John Cleese and Michael Palin on one side and, on the other, Malcolm Muggeridge and Mervyn Stockwood, then Bishop of Southwark.
In a "Not the Nine O'Clock News" sketch, a bishop who has directed a scandalous film called "The Life of Christ" is hauled over the coals by a representative of the "Church of Python", claiming that the film is an attack on "Our Lord, John Cleese" and on the members of Python, who, in the sketch, are the objects of Britain's true religious faith. This was a parody of the infamous "Friday Night, Saturday Morning" programme, broadcast a week previously. The bishop (played by Rowan Atkinson) claims that the reaction to the film has surprised him, as he "didn't expect the Spanish Inquisition."
Radio host John Williams of Chicago's WGN 720 AM has used "Always Look on the Bright Side of Life" in a segment of his Friday shows. The segment is used to highlight good events from the past week in listeners' lives and what has made them smile. In the 1997 film "As Good as It Gets", the misanthropic character played by Jack Nicholson sings "Always Look on the Bright Side of Life" as evidence of the character's change in attitude.
A BBC history series "What the Romans Did for Us", written and presented by Adam Hart-Davis and broadcast in 2000, takes its title from Cleese's rhetorical question "What have the Romans ever done for us?" in one of the film's scenes. (Cleese himself parodied this line in a 1986 BBC advert defending the Television Licence Fee: "What has the BBC ever given us?").
Former British Prime Minister Tony Blair in his Prime Minister's Questions of 3 May 2006 made a shorthand reference to the types of political groups, "Judean People's Front" or "People's Front of Judea", lampooned in "Life of Brian". This was in response to a question from the Labour MP David Clelland, asking "What has the Labour government ever done for us?" – itself a parody of John Cleese's "What have the Romans ever done for us?"
On New Year's Day 2007, and again on New Year's Eve, UK television station Channel 4 dedicated an entire evening to the Monty Python phenomenon, during which an hour-long documentary was broadcast called "The Secret Life of Brian" about the making of "The Life of Brian" and the controversy that was caused by its release. The Pythons featured in the documentary and reflected upon the events that surrounded the film. This was followed by a screening of the film itself. The documentary (in a slightly extended form) was one of the special features on the 2007 DVD re-release – the "Immaculate Edition", also the first Python release on Blu-ray.
Most recently, in June 2014 King's College London hosted an academic conference on the film, in which internationally renowned Biblical scholars and historians discussed the film and its reception, looking both at how the Pythons had made use of scholarship and texts, and how the film can be used creatively within modern scholarship on the Historical Jesus. In a panel discussion, including Terry Jones and theologian Richard Burridge, John Cleese described the event as "the most interesting thing to come out of Monty Python". The papers from the conference have gone on to prompt the publication of a book, edited by Joan E. Taylor, the conference organiser, "Jesus and Brian: Exploring the Historical Jesus and His Times via Monty Python's Life of Brian", published by Bloomsbury in 2015.
https://en.wikipedia.org/wiki?curid=17920
Loglan
Loglan is a constructed language originally designed for linguistic research, particularly for investigating the Sapir–Whorf Hypothesis. The language was developed beginning in 1955 by Dr James Cooke Brown with the goal of making a language so different from natural languages that people learning it would think in a different way if the hypothesis were true. In 1960 "Scientific American" published an article introducing the language. Loglan is the first among, and the main inspiration for, the languages known as logical languages, which also include Lojban.
Brown founded The Loglan Institute (TLI) to develop the language and other applications of it. He always considered the language an incomplete research project, and although he released many papers about its design, he continued to claim legal restrictions on its use. Because of this, a group of his followers later formed the Logical Language Group to create the language Lojban along the same principles, but with the intention to make it freely available and encourage its use as a real language.
Supporters of Lojban use the term "Loglan" generically, to refer both to their own language and to Brown's, calling the latter "TLI Loglan" when disambiguation is needed. Although the United States Patent and Trademark Office eventually upheld the non-trademarkability of the term "Loglan", many supporters and members of The Loglan Institute find this usage offensive, and reserve "Loglan" for the TLI version of the language.
Loglan (an abbreviation for "logical language") was created to investigate whether people speaking a "logical language" would in some way think more logically, as the Sapir-Whorf hypothesis might predict. The language's grammar is based on predicate logic. The grammar was intended to be small enough to be teachable and manageable, yet complex enough to allow people to think and converse in the language.
Brown intended Loglan to be as culturally neutral as possible, and metaphysically parsimonious, which means that obligatory categories are kept to a minimum. An example of an obligatory category in English is the time-tense of verbs, as it is impossible to express a finite verb without also expressing a tense.
Brown also intended the language to be completely regular and unambiguous. Each sentence can be parsed in only one way. Furthermore, the syllabic structure of words was designed so that a sequence of syllables can be separated into words in only one way, even if the word separation is not clear from pauses in speech. It has a small number of phonemes, so that regional "accents" are less likely to produce unintelligible speech. To make the vocabulary easier to learn, words were constructed to have elements in common with related words in the world's eight most widely spoken languages.
The alphabet of Loglan has two historical versions. In that of 1975 there were only 21 letters with their corresponding phonemes. In the final version of 1989 five more phonemes had been incorporated: letter H (/h/) was added to the alphabet in 1977 by popular demand; letter Y (/ə/) was added in 1982 to work as a kind of hyphen between the terms of a complex word; letters Q (/θ/), W (/y/) and X (/x/) were added in 1986 in order to allow the incorporation of the Linnaean vocabulary of biology, and they were useful to give more exact pronunciations to many borrowed names.
Loglan has three types of words: "predicates" (also called "content words"), "structure words" (also called "little words"), and "names". The majority of words are predicates; these are words that carry meaning. Structure words are words that modify predicates or show how they are related to each other, like English conjunctions and prepositions. Names in Loglan are spelled in accordance with Loglan phonetics, so if the name comes from another language, the Loglan spelling may differ from the spelling in that language. A name in Loglan always ends with a consonant, which helps to distinguish names from other types of words, which always end in a vowel. If a name in its native language ends in a vowel, it is conventional to add an "s" to form the Loglan name; for example, the English name "Mary" is rendered in Loglan as "Meris" (pronounced /ˈmɛriːs/).
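The final-consonant convention for names described above can be sketched as a small rule. This is an illustrative Python sketch of that one rule only: the full phonetic respelling of a foreign name into Loglan (e.g. "Mary" becoming "Meri" before the "s" is added) is not modeled here, and the function name is my own.

```python
VOWELS = set("aeiou")

def loglanize_ending(name: str) -> str:
    """Apply the Loglan name convention: a name must end in a consonant.

    If the (already phonetically respelled) name ends in a vowel,
    append 's'. Only this final-consonant rule is modeled, not the
    full respelling into Loglan phonetics.
    """
    if name and name[-1].lower() in VOWELS:
        return name + "s"
    return name

print(loglanize_ending("Meri"))   # Meris (cf. English "Mary" -> "Meris")
print(loglanize_ending("Adam"))   # Adam  (already ends in a consonant)
```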
Loglan makes no distinction between nouns, verbs, adjectives and adverbs. A predicate may act as any of these depending on its position in a sentence. Each predicate has its own argument structure with fixed positions for arguments. For example: "vedma" is the word for "sell". It takes four arguments, the seller, the item sold, the buyer and the price, in that order. When a predicate is used as a verb, the first argument appears before the predicate and any subsequent arguments appear after it. So "S pa vedma T B P" means "S sold T to B for price P". (The structure word "pa" is the past tense marker, discussed in more detail below.) Not all arguments need be present; for example "S pa vedma T B" means "S sold T to B", "S pa vedma T" means "S sold T", and "S pa vedma" simply means "S sold (something)".
Certain structure words can be used to reorder the arguments of a predicate, to emphasize one of the arguments by putting it first. For example, "nu" swaps the first and second arguments of any predicate. So "T pa nu vedma S" means the same thing as "S pa vedma T" and might be translated "T was sold by S". Similarly, "fu" swaps the first and third argument, and "ju" swaps the first and fourth argument. Thus "B pa fu vedma T S" = "B bought T from S" and "P pa ju vedma T B" = "P was paid to buy T by B".
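The fixed argument positions and the "nu"/"fu"/"ju" conversions amount to simple permutations of an argument list, which can be modeled directly. A sketch, using the "vedma" example from the text; the role labels are my own descriptions, not Loglan terms.

```python
# vedma takes four arguments in a fixed order: seller, item, buyer, price.
VEDMA_ROLES = ["seller", "item", "buyer", "price"]

# Each conversion operator swaps the first argument with another position:
#   nu: swap 1st and 2nd,  fu: swap 1st and 3rd,  ju: swap 1st and 4th
SWAPS = {"nu": 1, "fu": 2, "ju": 3}

def convert(args, op):
    """Return a new argument list with the given conversion applied."""
    out = list(args)
    i = SWAPS[op]
    out[0], out[i] = out[i], out[0]
    return out

args = ["S", "T", "B", "P"]      # "S pa vedma T B P": S sold T to B for P
print(convert(args, "nu"))       # ['T', 'S', 'B', 'P'] — T was sold by S
print(convert(args, "fu"))       # ['B', 'T', 'S', 'P'] — B bought T from S
print(convert(args, "ju"))       # ['P', 'T', 'B', 'S'] — P was paid to buy T by B
```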
The structure word "le" makes a predicate behave as a noun, so that it can be used as an argument of another predicate. The three-place predicate "matma" means "M is the mother of C by father F", so "le matma" means "the mother". Thus "le matma pa vedma" means "the mother sold (something)", while "le vedma pa matma" means "the seller was a mother (of someone)".
A name can be used as an argument by preceding it with the structure word "la". Thus "la Adam vedma" means "Adam sells". Unlike in English and many other languages, this structure word is required; an unadorned name cannot be used as an argument. (The sentence "Adam vedma" is an imperative meaning "Adam, sell (something)." In this case the name is used as a vocative, not as an argument.)
A name, or any other word or phrase, can be explicitly quoted with the structure words "li" and "lu" to use the word itself, rather than the thing that word refers to, as an argument. Thus "li Adam lu corta purda" means ""Adam" is a short word." Without the li/lu quotes, the sentence "la Adam corta purda" ("Adam is a short word") makes the unusual claim that Adam, the person himself, is a short word.
Any predicate can be used as an adjective or adverb by placing the predicate before the expression that it modifies. The predicate "sadji" means "X is wiser than Y about Z". So "le sadji matma pa vedma" means "the wise mother sold" and "le matma vedma pa sadji" means "the motherly seller was wise". Predicates can be used adverbially to modify the main predicate in the sentence in the same way. So "le matma pa sadji vedma" means "the mother wisely sold". The structure word "go" can be used to invert the normal word order, so that the modifier follows the expression being modified. Thus "le matma go sadji" (the mother who is wise) means the same as "le sadji matma" (the wise mother).
A string of more than two predicates is left associative. This grouping can be changed by using the structure word "ge" which groups what follows into a single unit. Thus Loglan can distinguish between the many possible meanings of the ambiguous English phrase "the pretty little girls' school", as in these examples:
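The default left-associative grouping, and the regrouping effect of "ge", can be sketched with nested tuples. This is an illustrative model only: a real Loglan parser is considerably more involved, and the numeric position at which "ge" applies is my own device for the sketch.

```python
from functools import reduce

def group_left(preds):
    """Default Loglan grouping: a string of predicates associates left,
    so A B C D parses as (((A B) C) D)."""
    return reduce(lambda acc, p: (acc, p), preds)

def group_with_ge(preds, ge_at):
    """'ge' groups everything that follows it into a single unit,
    so A ge B C D parses as (A ((B C) D))."""
    head = preds[:ge_at]
    tail = group_left(preds[ge_at:])
    return group_left(head + [tail]) if head else tail

words = ["pretty", "little", "girls", "school"]
print(group_left(words))
# ((('pretty', 'little'), 'girls'), 'school')  — "((pretty little) girls) school"
print(group_with_ge(words, 1))
# ('pretty', (('little', 'girls'), 'school')) — "pretty ge little girls school"
```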
Predicates can be modified to indicate the time at which something occurred (English tense) with the optional structure words "na" (present), "pa" (past) and "fa" (future). Thus "le matma na vedma" means "the mother is (now) selling", while "le matma fa vedma" means "the mother will sell". Marking the verb for tense is optional, so the word "ga" can be used when the time is not being specified. So "le matma ga vedma" means "the mother sells (at some unspecified time in the past, present or future)".
A set of structure words called "free variables" are used like English pronouns, but are designed to avoid the ambiguity of pronouns in such sentences as "Adam told Greg that he needed to leave." The free variable "da" refers to the most recently mentioned noun, "de" refers to the one mentioned prior to that, "di" to the one prior to that, and so on. Compare the sentences
Free variables apply equally to people of any gender and inanimate objects; there is no distinction similar to that between English "he", "she" and "it". This explains why "di" rather than "de" was used in the second example. "la Adam pa vedma le negda la Greg i de gacpi" would mean "Adam sold the egg to Greg; it (the egg) was happy."
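The recency rule for the free variables "da", "de" and "di" can be sketched as a simple lookup over referents in order of mention. This is a hedged illustration of the rule as described above, not a real anaphora resolver; the data is taken from the example sentence in the text.

```python
# Sketch of the free-variable recency rule: "da" resolves to the most
# recently mentioned referent, "de" to the one before that, "di" to the
# one before that.

def resolve(variable, mentions):
    """Resolve da/de/di against a list of referents in order of mention."""
    offset = {"da": 1, "de": 2, "di": 3}[variable]
    return mentions[-offset]

# "la Adam pa vedma le negda la Greg" mentions, in order:
mentions = ["Adam", "egg", "Greg"]
print(resolve("da", mentions))  # Greg  (most recent mention)
print(resolve("de", mentions))  # egg   (so "de gacpi" = the egg was happy)
print(resolve("di", mentions))  # Adam
```

This matches the example in the text: in "la Adam pa vedma le negda la Greg i de gacpi", "de" picks out the egg, the second most recently mentioned referent.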
Loglan has several sets of conjunctions to express the fourteen possible logical connectives. One set is used to combine predicate expressions ("e" = and, "a" = or, "o" = if and only if), and another set is used to combine predicates to make more complex predicates ("ce", "ca", "co"). The sentence "la Kim matma e sadji" means "Kim is a mother and is wise", while "la Kim matma ce sadji vedma" means "Kim is a motherly and wise seller", or "Kim sells in a motherly and wise manner". In the latter sentence, "ce" is used to combine matma and sadji into one predicate which modifies vedma. The sentence "la Kim matma e sadji vedma" would mean "Kim is a mother and wisely sells."
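The truth-functional behaviour of the three connectives named above can be tabulated directly. The connective names come from the text; modelling them as Python truth functions is our illustration.

```python
# Truth tables for three of Loglan's predicate connectives:
# "e" (and), "a" (inclusive or), "o" (if and only if).

CONNECTIVES = {
    "e": lambda p, q: p and q,   # "la Kim matma e sadji": both must hold
    "a": lambda p, q: p or q,    # at least one must hold
    "o": lambda p, q: p == q,    # biconditional: both or neither
}

for name, fn in CONNECTIVES.items():
    rows = [(p, q, fn(p, q)) for p in (True, False) for q in (True, False)]
    print(name, rows)
```

So "la Kim matma e sadji" is true only when both "Kim is a mother" and "Kim is wise" are true, while the "o" form would also be true if both were false.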
A special conjunction "ze" is used to create a "mixed" predicate which may be true even if it is not necessarily true for either of the component predicates. For example, "le negda ga nigro ze blabi" means "the egg is black-and-white". This would be true if the egg were striped or speckled; in that case it would not be true that the egg is black nor that it is white. On the other hand, "le negda ga nigro e blabi" makes the unlikely claim that "the egg is black and (it is also) white".
There is a set of words used for expressing attitudes about what one is saying, which convey conviction, intention, obligation and emotion. These words follow what they modify, but when used at the start of a sentence they modify the entire sentence. For example:
Loglan was mentioned in several science fiction works: Robert A. Heinlein's well-known books "The Moon Is a Harsh Mistress" and "The Number of the Beast", Robert Rimmer's utopian book "Love Me Tomorrow" (1978), and Stanisław Lem's novel "His Master's Voice".
Loglan's inventor, James Cooke Brown, also wrote a utopian science fiction novel called "The Troika Incident" (1970) that uses Loglan phrases but calls the language a different name, "Panlan".
Loglan is used as the official interspecies language in the roleplaying game "".
Archival material related to the creation and teaching of Loglan, including flashcards and grammar explanations, can be found in the Faith Rich Papers, located at Chicago Public Library Special Collections, Chicago, Illinois.
https://en.wikipedia.org/wiki?curid=17922
League of Nations
The League of Nations, abbreviated as LON (French: "Société des Nations", abbreviated as "SDN" or "SdN"), was the first worldwide intergovernmental organisation whose principal mission was to maintain world peace. It was founded on 10 January 1920 following the Paris Peace Conference that ended the First World War; in 1919 U.S. president Woodrow Wilson won the Nobel Peace Prize for his role as the leading architect of the League.
The organisation's primary goals, as stated in its Covenant, included preventing wars through collective security and disarmament and settling international disputes through negotiation and arbitration. Other issues in this and related treaties included labour conditions, just treatment of native inhabitants, human and drug trafficking, the arms trade, global health, prisoners of war, and protection of minorities in Europe. The Covenant of the League of Nations was signed on 28 June 1919 as Part I of the Treaty of Versailles, and it became effective together with the rest of the Treaty on 10 January 1920. The first meeting of the Council of the League took place on 16 January 1920, and the first meeting of the Assembly of the League took place on 15 November 1920.
The diplomatic philosophy behind the League represented a fundamental shift from the preceding hundred years. The League lacked its own armed force and depended on the victorious First World War Allies (France, the United Kingdom, Italy and Japan were the permanent members of the Executive Council) to enforce its resolutions, keep to its economic sanctions, or provide an army when needed. The Great Powers were often reluctant to do so. Sanctions could hurt League members, so they were reluctant to comply with them. During the Second Italo-Abyssinian War, when the League accused Italian soldiers of targeting Red Cross medical tents, Benito Mussolini responded that "the League is very well when sparrows shout, but no good at all when eagles fall out."
At its greatest extent from 28 September 1934 to 23 February 1935, it had 58 members. After some notable successes and some early failures in the 1920s, the League ultimately proved incapable of preventing aggression by the Axis powers in the 1930s. The credibility of the organization was weakened by the fact that the United States never joined the League and the Soviet Union joined late and was soon expelled after invading Finland. Germany withdrew from the League, as did Japan, Italy, Spain and others. The onset of the Second World War showed that the League had failed its primary purpose, which was to prevent any future world war. The League lasted for 26 years; the United Nations (UN) replaced it after the end of the Second World War and inherited several agencies and organisations founded by the League.
The concept of a peaceful community of nations had been proposed as far back as 1795, when Immanuel Kant's "Perpetual Peace: A Philosophical Sketch" outlined the idea of a league of nations to control conflict and promote peace between states. Kant argued for the establishment of a peaceful world community, not in a sense of a global government, but in the hope that each state would declare itself a free state that respects its citizens and welcomes foreign visitors as fellow rational beings, thus promoting peaceful society worldwide. International co-operation to promote collective security originated in the Concert of Europe that developed after the Napoleonic Wars in the 19th century in an attempt to maintain the "status quo" between European states and so avoid war. This period also saw the development of international law, with the first Geneva Conventions establishing laws dealing with humanitarian relief during wartime, and the international Hague Conventions of 1899 and 1907 governing rules of war and the peaceful settlement of international disputes. As historians William H. Harbaugh and Ronald E. Powaski point out, Theodore Roosevelt was the first American President to call for an international league. At the acceptance for his Nobel Prize, Roosevelt said: "it would be a masterstroke if those great powers honestly bent on peace would form a League of Peace."
The forerunner of the League of Nations, the Inter-Parliamentary Union (IPU), was formed by the peace activists William Randal Cremer and Frédéric Passy in 1889 (and is currently still in existence as an international body with a focus on the various elected legislative bodies of the world.) The IPU was founded with an international scope, with a third of the members of parliaments (in the 24 countries that had parliaments) serving as members of the IPU by 1914. Its foundational aims were to encourage governments to solve international disputes by peaceful means. Annual conferences were established to help governments refine the process of international arbitration. Its structure was designed as a council headed by a president, which would later be reflected in the structure of the League.
At the start of the First World War, the first schemes for an international organisation to prevent future wars began to gain considerable public support, particularly in Great Britain and the United States. Goldsworthy Lowes Dickinson, a British political scientist, coined the term "League of Nations" in 1914 and drafted a scheme for its organisation. Together with Lord Bryce, he played a leading role in the founding of the group of internationalist pacifists known as the Bryce Group, later the League of Nations Union. The group became steadily more influential among the public and as a pressure group within the then governing Liberal Party. In Dickinson's 1915 pamphlet "After the War", he wrote of his "League of Peace" as being essentially an organisation for arbitration and conciliation. He felt that the secret diplomacy of the early twentieth century had brought about war and thus could write that, "the impossibility of war, I believe, would be increased in proportion as the issues of foreign policy should be known to and controlled by public opinion." The "Proposals" of the Bryce Group were circulated widely, both in England and the US, where they had a profound influence on the nascent international movement.
Within two weeks of the start of the war, feminists began to mobilise against the war. Having been barred from participating in prior peace organizations, American women formed a Women's Peace Parade Committee to plan a silent protest to the war. Led by chairwoman Fanny Garrison Villard, women from trade unions, feminist organizations, and social reform organizations, such as Kate Waller Barrett, Mary Ritter Beard, Carrie Chapman Catt, Rose Schneiderman, Lillian Wald, and others, organized 1500 women, who marched down Manhattan's Fifth Avenue on 29 August 1914. As a result of the parade, Jane Addams became interested in proposals by two European suffragists—Hungarian Rosika Schwimmer and British Emmeline Pethick-Lawrence—to hold a peace conference. On 9–10 January 1915, a peace conference directed by Addams was held in Washington, D. C., where the delegates adopted a platform calling for creation of international bodies with administrative and legislative powers to develop a "permanent league of neutral nations" to work for peace and disarmament.
Within months a call was made for an international women's conference to be held in The Hague. Coordinated by Mia Boissevain, Aletta Jacobs and Rosa Manus, the Congress, which opened on 28 April 1915, was attended by 1,136 participants from both belligerent and neutral nations, and resulted in the establishment of an organization which would become the Women's International League for Peace and Freedom (WILPF). At the close of the conference, two delegations of women were dispatched to meet European heads of state over the next several months. They secured agreement from reluctant Foreign Ministers, who overall felt that such a body would be ineffective, but agreed to participate or not impede creation of a neutral mediating body, if other nations agreed and if President Woodrow Wilson would initiate a body. In the midst of the War, Wilson refused.
In 1915, a similar body to the Bryce group proposals was set up in the United States by a group of like-minded individuals, including William Howard Taft. It was called the League to Enforce Peace and was substantially based on the proposals of the Bryce Group. It advocated the use of arbitration in conflict resolution and the imposition of sanctions on aggressive countries. None of these early organisations envisioned a continuously functioning body; with the exception of the Fabian Society in England, they maintained a legalistic approach that would limit the international body to a court of justice. The Fabians were the first to argue for a "Council" of states, necessarily the Great Powers, who would adjudicate world affairs, and for the creation of a permanent secretariat to enhance international co-operation across a range of activities.
In the course of the diplomatic efforts surrounding World War I, both sides had to clarify their long-term war aims. By 1916, long-range thinkers in Britain, the leading Allied power, and in the neutral United States had begun to design a unified international organisation to prevent future wars. Historian Peter Yearwood argues that when the new coalition government of David Lloyd George took power in December 1916, there was widespread discussion among intellectuals and diplomats of the desirability of establishing such an organisation. When Lloyd George was challenged by Wilson to state his position with an eye on the postwar situation, he endorsed such an organisation. Wilson himself included in his Fourteen Points in January 1918 a "league of nations to ensure peace and justice." British foreign secretary, Arthur Balfour, argued that, as a condition of durable peace, "behind international law, and behind all treaty arrangements for preventing or limiting hostilities, some form of international sanction should be devised which would give pause to the hardiest aggressor."
The war had had a profound impact, affecting the social, political and economic systems of Europe and inflicting psychological and physical damage. Several empires collapsed: first the Russian Empire in February 1917, followed by the German Empire, Austro-Hungarian Empire and Ottoman Empire. Anti-war sentiment rose across the world; the First World War was described as "the war to end all wars", and its possible causes were vigorously investigated. The causes identified included arms races, alliances, militaristic nationalism, secret diplomacy, and the freedom of sovereign states to enter into war for their own benefit. One proposed remedy was the creation of an international organisation whose aim was to prevent future war through disarmament, open diplomacy, international co-operation, restrictions on the right to wage war, and penalties that made war unattractive.
In London, Balfour commissioned the first official report into the matter in early 1918, under the initiative of Lord Robert Cecil. The British committee was finally appointed in February 1918. It was led by Walter Phillimore (and became known as the Phillimore Committee), but also included Eyre Crowe, William Tyrrell, and Cecil Hurst. The recommendations of the Phillimore Committee included the establishment of a "Conference of Allied States" that would arbitrate disputes and impose sanctions on offending states. The proposals were approved by the British government, and much of the committee's results were later incorporated into the Covenant of the League of Nations.
The French also drafted a much more far-reaching proposal in June 1918; they advocated annual meetings of a council to settle all disputes, as well as an "international army" to enforce its decisions.
The American President Woodrow Wilson instructed Edward M. House to draft a US plan which reflected Wilson's own idealistic views (first articulated in the Fourteen Points of January 1918), as well as the work of the Phillimore Commission. The outcome of House's work and Wilson's own first draft proposed the termination of "unethical" state behaviour, including forms of espionage and dishonesty. Methods of compulsion against recalcitrant states would include severe measures, such as "blockading and closing the frontiers of that power to commerce or intercourse with any part of the world and to use any force that may be necessary..."
The two principal drafters and architects of the covenant of the League of Nations were the British politician Lord Robert Cecil and the South African statesman Jan Smuts. Smuts' proposals included the creation of a Council of the great powers as permanent members and a non-permanent selection of the minor states. He also proposed the creation of a Mandate system for captured colonies of the Central Powers during the war. Cecil focused on the administrative side and proposed annual Council meetings and quadrennial meetings for the Assembly of all members. He also argued for a large and permanent secretariat to carry out the League's administrative duties.
At the Paris Peace Conference in 1919, Wilson, Cecil and Smuts all put forward their draft proposals. After lengthy negotiations between the delegates, the Hurst–Miller draft was finally produced as a basis for the Covenant. After more negotiation and compromise, the delegates finally approved the proposal to create the League of Nations on 25 January 1919. The final Covenant of the League of Nations was drafted by a special commission, and the League was established by Part I of the Treaty of Versailles. On 28 June 1919, 44 states signed the Covenant, including 31 states which had taken part in the war on the side of the Triple Entente or joined it during the conflict.
French women's rights advocates invited international feminists to participate in a parallel conference to the Paris Conference in hopes that they could gain permission to participate in the official conference. The Inter-Allied Women's Conference asked to be allowed to submit suggestions to the peace negotiations and commissions and were granted the right to sit on commissions dealing specifically with women and children. Though they asked for enfranchisement and full legal protection under the law equal with men, those rights were ignored. Women won the right to serve in all capacities, including as staff or delegates in the League of Nations organization. They also won a declaration that member nations should prevent trafficking of women and children and should equally support humane conditions for children, women and men labourers. At the Zürich Peace Conference held between 17–19 May 1919, the women of the WILPF condemned the terms of the Treaty of Versailles for both its punitive measures, as well as its failure to provide for condemnation of violence and exclusion of women from civil and political participation. Upon reading the Rules of Procedure for the League of Nations, Catherine Marshall, a British suffragist, discovered that the guidelines were completely undemocratic and they were modified based on her suggestion.
The League would be made up of a General Assembly (representing all member states), an Executive Council (with membership limited to major powers), and a permanent secretariat. Member states were expected to "respect and preserve as against external aggression" the territorial integrity of other members and to disarm "to the lowest point consistent with domestic safety." All states were required to submit complaints for arbitration or judicial inquiry before going to war. The Executive Council would create a Permanent Court of International Justice to make judgements on the disputes.
Despite Wilson's efforts to establish and promote the League, for which he was awarded the Nobel Peace Prize in October 1919, the United States never joined. Senate Republicans led by Henry Cabot Lodge wanted a League with the reservation that only Congress could take the U.S. into war. Lodge gained a majority of senators, Wilson refused to allow a compromise, and the needed two-thirds majority was lacking.
The League held its first council meeting in Paris on 16 January 1920, six days after the Versailles Treaty and the Covenant of the League of Nations came into force. On 1 November 1920, the headquarters of the League was moved from London to Geneva, where the first General Assembly was held on 15 November 1920. The Palais Wilson on Geneva's western lakeshore, named after US President Woodrow Wilson in recognition of his efforts towards the establishment of the League, was the League's first permanent home.
The official languages of the League of Nations were French and English.
In 1939, a semi-official emblem for the League of Nations emerged: two five-pointed stars within a blue pentagon. They symbolised the Earth's five continents and "five races." A bow at the top displayed the English name ("League of Nations"), while another at the bottom showed the French ("Société des Nations").
The main constitutional organs of the League were the Assembly, the Council, and the Permanent Secretariat. It also had two essential wings: the Permanent Court of International Justice and the International Labour Organization. In addition, there were several auxiliary agencies and commissions. Each organ's budget was allocated by the Assembly (the League was supported financially by its member states).
The relations between the Assembly and the Council and the competencies of each were for the most part not explicitly defined. Each body could deal with any matter within the sphere of competence of the League or affecting peace in the world. Particular questions or tasks might be referred to either.
Unanimity was required for the decisions of both the Assembly and the Council, except in matters of procedure and some other specific cases such as the admission of new members. This requirement was a reflection of the League's belief in the sovereignty of its component nations; the League sought a solution by consent, not by dictation. In case of a dispute, the consent of the parties to the dispute was not required for unanimity.
The Permanent Secretariat, established at the seat of the League at Geneva, comprised a body of experts in various spheres under the direction of the general secretary. Its principal sections were Political, Financial and Economics, Transit, Minorities and Administration (administering the Saar and Danzig), Mandates, Disarmament, Health, Social (Opium and Traffic in Women and Children), Intellectual Cooperation and International Bureaux, Legal, and Information. The staff of the Secretariat was responsible for preparing the agenda for the Council and the Assembly and publishing reports of the meetings and other routine matters, effectively acting as the League's civil service. In 1931 the staff numbered 707.
The Assembly consisted of representatives of all members of the League, with each state allowed up to three representatives and one vote. It met in Geneva and, after its initial sessions in 1920, it convened once a year in September. The special functions of the Assembly included the admission of new members, the periodical election of non-permanent members to the Council, the election with the Council of the judges of the Permanent Court, and control of the budget. In practice, the Assembly was the general directing force of League activities.
The League Council acted as a type of executive body directing the Assembly's business. It began with four permanent members (Great Britain, France, Italy, and Japan) and four non-permanent members that were elected by the Assembly for a three-year term. The first non-permanent members were Belgium, Brazil, Greece, and Spain.
The composition of the Council was changed several times. The number of non-permanent members was first increased to six on 22 September 1922 and to nine on 8 September 1926. Werner Dankwort of Germany pushed for his country to join the League; joining in 1926, Germany became the fifth permanent member of the Council. Later, after Germany and Japan both left the League, the number of non-permanent seats was increased from nine to eleven, and the Soviet Union was made a permanent member giving the Council a total of fifteen members. The Council met, on average, five times a year and in extraordinary sessions when required. In total, 107 sessions were held between 1920 and 1939.
The League oversaw the Permanent Court of International Justice and several other agencies and commissions created to deal with pressing international problems. These included the Disarmament Commission, the International Labour Organization (ILO), the Mandates Commission, the International Commission on Intellectual Cooperation (precursor to UNESCO), the Permanent Central Opium Board, the Commission for Refugees, and the Slavery Commission. Three of these institutions were transferred to the United Nations after the Second World War: the International Labour Organization, the Permanent Court of International Justice (as the International Court of Justice), and the Health Organisation (restructured as the World Health Organization).
The Permanent Court of International Justice was provided for by the Covenant, but not established by it. The Council and the Assembly established its constitution. Its judges were elected by the Council and the Assembly, and its budget was provided by the latter. The Court was to hear and decide any international dispute which the parties concerned submitted to it. It might also give an advisory opinion on any dispute or question referred to it by the Council or the Assembly. The Court was open to all the nations of the world under certain broad conditions.
The International Labour Organization was created in 1919 on the basis of Part XIII of the Treaty of Versailles. The ILO, although having the same members as the League and being subject to the budget control of the Assembly, was an autonomous organisation with its own Governing Body, its own General Conference and its own Secretariat. Its constitution differed from that of the League: representation had been accorded not only to governments but also to representatives of employers' and workers' organisations. Albert Thomas was its first director.
The ILO successfully restricted the addition of lead to paint, and convinced several countries to adopt an eight-hour work day and forty-eight-hour working week. It also campaigned to end child labour, increase the rights of women in the workplace, and make shipowners liable for accidents involving seamen. After the demise of the League, the ILO became an agency of the United Nations in 1946.
The League's health organisation had three bodies: the Health Bureau, containing permanent officials of the League; the General Advisory Council or Conference, an executive section consisting of medical experts; and the Health Committee. The Committee's purpose was to conduct inquiries, oversee the operation of the League's health work, and prepare work to be presented to the Council. This body focused on ending leprosy, malaria, and yellow fever, the latter two by starting an international campaign to exterminate mosquitoes. The Health Organisation also worked successfully with the government of the Soviet Union to prevent typhus epidemics, including organising a large education campaign.
The League of Nations had devoted serious attention to the question of international intellectual co-operation since its creation. The First Assembly in December 1920 recommended that the Council take action aiming at the international organisation of intellectual work, which it did by adopting a report presented by the Fifth Committee of the Second Assembly and inviting a Committee on Intellectual Cooperation to meet in Geneva in August 1922. The French philosopher Henri Bergson became the first chairman of the committee. The work of the committee included: inquiry into the conditions of intellectual life, assistance to countries where intellectual life was endangered, creation of national committees for intellectual co-operation, co-operation with international intellectual organisations, protection of intellectual property, inter-university co-operation, co-ordination of bibliographical work and international interchange of publications, and international co-operation in archaeological research.
Introduced by the second International Opium Convention, the Permanent Central Opium Board had to supervise the statistical reports on trade in opium, morphine, cocaine and heroin. The board also established a system of import certificates and export authorisations for the legal international trade in narcotics.
The Slavery Commission sought to eradicate slavery and slave trading across the world, and fought forced prostitution. Its main success was through pressing the governments who administered mandated countries to end slavery in those countries. The League secured a commitment from Ethiopia to end slavery as a condition of membership in 1923, and worked with Liberia to abolish forced labour and intertribal slavery. The United Kingdom had not supported Ethiopian membership of the League on the grounds that "Ethiopia had not reached a state of civilisation and internal security sufficient to warrant her admission."
The League also succeeded in reducing the death rate of workers constructing the Tanganyika railway from 55 to 4 percent. Records were kept to control slavery, prostitution, and the trafficking of women and children. Partly as a result of pressure brought by the League of Nations, Afghanistan abolished slavery in 1923, Iraq in 1924, Nepal in 1926, Transjordan and Persia in 1929, Bahrain in 1937, and Ethiopia in 1942.
Led by Fridtjof Nansen, the Commission for Refugees was established on 27 June 1921 to look after the interests of refugees, including overseeing their repatriation and, when necessary, resettlement. At the end of the First World War, there were two to three million ex-prisoners of war from various nations dispersed throughout Russia; within two years of the commission's foundation, it had helped 425,000 of them return home. It established camps in Turkey in 1922 to aid the country with an ongoing refugee crisis, helping to prevent the spread of cholera, smallpox and dysentery as well as feeding the refugees in the camps. It also established the Nansen passport as a means of identification for stateless people.
The Committee for the Study of the Legal Status of Women sought to inquire into the status of women all over the world. It was formed in 1937, and later became part of the United Nations as the Commission on the Status of Women.
The Covenant of the League said little about economics. Nonetheless, in 1920 the Council of the League called for a financial conference. The First Assembly at Geneva provided for the appointment of an Economic and Financial Advisory Committee to provide information to the conference. In 1923, a permanent Economic and Financial Organization came into being.
Of the League's 42 founding members, 23 (24 counting Free France) remained members until it was dissolved in 1946. In the founding year, six other states joined, only two of which remained members throughout the League's existence. Under the Weimar Republic, Germany was admitted to the League of Nations through a resolution passed on 8 September 1926.
An additional 15 countries joined later. The largest number of member states was 58, between 28 September 1934 (when Ecuador joined) and 23 February 1935 (when Paraguay withdrew).
On 26 May 1937, Egypt became the last state to join the League. The first member to withdraw permanently from the League was Costa Rica on 22 January 1925; having joined on 16 December 1920, this also makes it the member to have most quickly withdrawn. Brazil was the first founding member to withdraw (14 June 1926), and Haiti the last (April 1942). Iraq, which joined in 1932, was the first member that had previously been a League of Nations mandate.
The Soviet Union became a member on 18 September 1934, and was expelled on 14 December 1939 for invading Finland. In expelling the Soviet Union, the League broke its own rule: only 7 of 15 members of the Council voted for expulsion (United Kingdom, France, Belgium, Bolivia, Egypt, South Africa, and the Dominican Republic), short of the majority required by the Covenant. Three of these members had been made Council members the day before the vote (South Africa, Bolivia, and Egypt). This was one of the League's final acts before it practically ceased functioning due to the Second World War.
At the end of the First World War, the Allied powers were confronted with the question of the disposal of the former German colonies in Africa and the Pacific, and the several Arabic-speaking provinces of the Ottoman Empire. The Peace Conference adopted the principle that these territories should be administered by different governments on behalf of the League – a system of national responsibility subject to international supervision. This plan, defined as the mandate system, was adopted by the "Council of Ten" (the heads of government and foreign ministers of the main Allied powers: Britain, France, the United States, Italy, and Japan) on 30 January 1919 and transmitted to the League of Nations.
League of Nations mandates were established under Article 22 of the Covenant of the League of Nations. The Permanent Mandates Commission supervised League of Nations mandates, and also organised plebiscites in disputed territories so that residents could decide which country they would join. There were three mandate classifications: A, B and C.
The A mandates (applied to parts of the old Ottoman Empire) were "certain communities" that had
The B mandates were applied to the former German colonies that the League took responsibility for after the First World War. These were described as "peoples" that the League said were
South West Africa and certain South Pacific Islands were administered by League members under C mandates. These were classified as "territories"
The territories were governed by mandatory powers, such as the United Kingdom in the case of the Mandate of Palestine, and the Union of South Africa in the case of South-West Africa, until the territories were deemed capable of self-government. Fourteen mandate territories were divided up among seven mandatory powers: the United Kingdom, the Union of South Africa, France, Belgium, New Zealand, Australia and Japan. With the exception of the Kingdom of Iraq, which joined the League on 3 October 1932, these territories did not begin to gain their independence until after the Second World War, in a process that did not end until 1990. Following the demise of the League, most of the remaining mandates became United Nations Trust Territories.
In addition to the mandates, the League itself governed the Territory of the Saar Basin for 15 years, before it was returned to Germany following a plebiscite, and the Free City of Danzig (now Gdańsk, Poland) from 15 November 1920 to 1 September 1939.
The aftermath of the First World War left many issues to be settled, including the exact position of national boundaries and which country particular regions would join. Most of these questions were handled by the victorious Allied powers in bodies such as the Allied Supreme Council. The Allies tended to refer only particularly difficult matters to the League. This meant that, during the early interwar period, the League played little part in resolving the turmoil resulting from the war. The questions the League considered in its early years included those designated by the Paris Peace treaties.
As the League developed, its role expanded, and by the middle of the 1920s it had become the centre of international activity. This change can be seen in the relationship between the League and non-members. The United States and Russia, for example, increasingly worked with the League. During the second half of the 1920s, France, Britain and Germany were all using the League of Nations as the focus of their diplomatic activity, and each of their foreign secretaries attended League meetings at Geneva during this period. They also used the League's machinery to try to improve relations and settle their differences.
Åland is a collection of around 6,500 islands in the Baltic Sea, midway between Sweden and Finland. The islands are almost exclusively Swedish-speaking, but in 1809, the Åland Islands, along with Finland, were taken by Imperial Russia. In December 1917, during the turmoil of the Russian October Revolution, Finland declared its independence, but most of the Ålanders wished to rejoin Sweden. The Finnish government considered the islands to be a part of their new nation, as the Russians had included Åland in the Grand Duchy of Finland, formed in 1809. By 1920, the dispute had escalated to the point that there was danger of war. The British government referred the problem to the League's Council, but Finland would not let the League intervene, as they considered it an internal matter. The League created a small panel to decide whether it should investigate the matter and, on an affirmative finding, appointed a neutral commission. In June 1921, the League announced its decision: the islands were to remain a part of Finland, but with guaranteed protection of the islanders, including demilitarisation. With Sweden's reluctant agreement, this became the first European international agreement concluded directly through the League.
The Allied powers referred the problem of Upper Silesia to the League after they had been unable to resolve the territorial dispute. After the First World War, Poland laid claim to Upper Silesia, which had been part of Prussia. The Treaty of Versailles had recommended a plebiscite in Upper Silesia to determine whether the territory should become part of Germany or Poland. Complaints about the attitude of the German authorities led to rioting and eventually to the first two Silesian Uprisings (1919 and 1920). A plebiscite took place on 20 March 1921, with 59.6 per cent (around 500,000) of the votes cast in favour of joining Germany, but Poland claimed the conditions surrounding it had been unfair. This result led to the Third Silesian Uprising in 1921.
On 12 August 1921, the League was asked to settle the matter; the Council created a commission with representatives from Belgium, Brazil, China and Spain to study the situation. The committee recommended that Upper Silesia be divided between Poland and Germany according to the preferences shown in the plebiscite and that the two sides should decide the details of the interaction between the two areas – for example, whether goods should pass freely over the border due to the economic and industrial interdependence of the two areas. In November 1921, a conference was held in Geneva to negotiate a convention between Germany and Poland. A final settlement was reached, after five meetings, in which most of the area was given to Germany, but with the Polish section containing the majority of the region's mineral resources and much of its industry. When this agreement became public in May 1922, bitter resentment was expressed in Germany, but the treaty was still ratified by both countries. The settlement produced peace in the area until the beginning of the Second World War.
The frontiers of the Principality of Albania had not been set during the Paris Peace Conference in 1919, as they were left for the League to decide; they had not yet been determined by September 1921, creating an unstable situation. Greek troops conducted military operations in the south of Albania. Kingdom of Serbs, Croats and Slovenes (Yugoslav) forces became engaged, after clashes with Albanian tribesmen, in the northern part of the country. The League sent a commission of representatives from various powers to the region. In November 1921, the League decided that the frontiers of Albania should be the same as they had been in 1913, with three minor changes that favoured Yugoslavia. Yugoslav forces withdrew a few weeks later, albeit under protest.
The borders of Albania again became the cause of international conflict when Italian General Enrico Tellini and four of his assistants were ambushed and killed on 24 August 1923 while marking out the newly decided border between Greece and Albania. Italian leader Benito Mussolini was incensed and demanded that a commission investigate the incident within five days. Whatever the results of the investigation, Mussolini insisted that the Greek government pay Italy fifty million lire in reparations. The Greeks said they would not pay unless it was proved that the crime was committed by Greeks.
Mussolini sent a warship to shell the Greek island of Corfu, and Italian forces occupied the island on 31 August 1923. This contravened the League's covenant, so Greece appealed to the League to deal with the situation. The Allies agreed (at Mussolini's insistence) that the Conference of Ambassadors should be responsible for resolving the dispute because it was the conference that had appointed General Tellini. The League Council examined the dispute, but then passed on their findings to the Conference of Ambassadors to make the final decision. The conference accepted most of the League's recommendations, forcing Greece to pay fifty million lire to Italy, even though those who committed the crime were never discovered. Italian forces then withdrew from Corfu.
The port city of Memel (now Klaipėda) and the surrounding area, with a predominantly German population, was under provisional Entente control according to Article 99 of the Treaty of Versailles. The French and Polish governments favoured turning Memel into an international city, while Lithuania wanted to annex the area. By 1923, the fate of the area had still not been decided, prompting Lithuanian forces to invade in January 1923 and seize the port. After the Allies failed to reach an agreement with Lithuania, they referred the matter to the League of Nations. In December 1923, the League Council appointed a Commission of Inquiry. The commission chose to cede Memel to Lithuania and give the area autonomous rights. The Klaipėda Convention was approved by the League Council on 14 March 1924, and then by the Allied powers and Lithuania. In 1939 Germany retook the region following the rise of the Nazis and an ultimatum to Lithuania, demanding the return of the region under threat of war. The League of Nations failed to prevent the cession of the Memel region to Germany.
With League oversight, the Sanjak of Alexandretta in the French Mandate of Syria was given autonomy in 1937. Renamed Hatay, its parliament declared independence as the Republic of Hatay in September 1938, after elections the previous month. It was annexed by Turkey with French consent in mid-1939.
The League resolved a dispute between the Kingdom of Iraq and the Republic of Turkey over control of the former Ottoman province of Mosul in 1926. According to the British, who had been awarded a League of Nations mandate over Iraq in 1920 and therefore represented Iraq in its foreign affairs, Mosul belonged to Iraq; on the other hand, the new Turkish republic claimed the province as part of its historic heartland. A League of Nations Commission of Inquiry, with Belgian, Hungarian and Swedish members, was sent to the region in 1924; it found that the people of Mosul did not want to be part of either Turkey or Iraq, but if they had to choose, they would pick Iraq. In 1925, the commission recommended that the region stay part of Iraq, under the condition that the British hold the mandate over Iraq for another 25 years, to ensure the autonomous rights of the Kurdish population. The League Council adopted the recommendation and decided on 16 December 1925 to award Mosul to Iraq. Although Turkey had accepted the League of Nations' arbitration in the Treaty of Lausanne (1923), it rejected the decision, questioning the Council's authority. The matter was referred to the Permanent Court of International Justice, which ruled that, when the Council made a unanimous decision, it must be accepted. Nonetheless, Britain, Iraq and Turkey ratified a separate treaty on 5 June 1926 that mostly followed the decision of the League Council and also assigned Mosul to Iraq. It was agreed that Iraq could still apply for League membership within 25 years and that the mandate would end upon its admission.
After the First World War, Poland and Lithuania both regained their independence but soon became immersed in territorial disputes. During the Polish–Soviet War, Lithuania signed the Moscow Peace Treaty with the Soviet Union that laid out Lithuania's frontiers. This agreement gave Lithuanians control of the city of Vilnius, the old Lithuanian capital, but a city with a majority Polish population. This heightened tension between Lithuania and Poland and led to fears that they would resume the Polish–Lithuanian War, and on 7 October 1920, the League negotiated the Suwałki Agreement establishing a cease-fire and a demarcation line between the two nations. On 9 October 1920, General Lucjan Żeligowski, commanding a Polish military force in contravention of the Suwałki Agreement, took the city and established the Republic of Central Lithuania.
After a request for assistance from Lithuania, the League Council called for Poland's withdrawal from the area. The Polish government indicated they would comply, but instead reinforced the city with more Polish troops. This prompted the League to decide that the future of Vilnius should be determined by its residents in a plebiscite and that the Polish forces should withdraw and be replaced by an international force organised by the League. The plan was met with resistance in Poland, Lithuania, and the Soviet Union, which opposed any international force in Lithuania. In March 1921, the League abandoned plans for the plebiscite. After unsuccessful proposals by Paul Hymans to create a federation between Poland and Lithuania, intended as a reincarnation of the former union the two countries had shared before losing their independence, Vilnius and the surrounding area were formally annexed by Poland in March 1922. After Lithuania took over the Klaipėda Region, the Allied Conference set the frontier between Lithuania and Poland, leaving Vilnius within Poland, on 14 March 1923. Lithuanian authorities refused to accept the decision, and officially remained in a state of war with Poland until 1927. It was not until the 1938 Polish ultimatum that Lithuania restored diplomatic relations with Poland and thus "de facto" accepted the borders.
There were several border conflicts between Colombia and Peru in the early part of the 20th century, and in 1922, their governments signed the Salomón-Lozano Treaty in an attempt to resolve them. As part of this treaty, the border town of Leticia and its surrounding area was ceded from Peru to Colombia, giving Colombia access to the Amazon River. On 1 September 1932, business leaders from the Peruvian rubber and sugar industries who had lost land as a result organised an armed takeover of Leticia. At first, the Peruvian government did not recognise the military takeover, but President of Peru Luis Sánchez Cerro decided to resist a Colombian re-occupation. The Peruvian Army occupied Leticia, leading to an armed conflict between the two nations. After months of diplomatic negotiations, the governments accepted mediation by the League of Nations, and their representatives presented their cases before the Council. A provisional peace agreement, signed by both parties in May 1933, provided for the League to assume control of the disputed territory while bilateral negotiations proceeded. In May 1934, a final peace agreement was signed, resulting in the return of Leticia to Colombia, a formal apology from Peru for the 1932 invasion, demilitarisation of the area around Leticia, free navigation on the Amazon and Putumayo Rivers, and a pledge of non-aggression.
The Saar was a province formed from parts of Prussia and the Rhenish Palatinate and placed under League control by the Treaty of Versailles. A plebiscite was to be held after fifteen years of League rule to determine whether the province should belong to Germany or France. When the referendum was held in 1935, 90.3 per cent of voters supported becoming part of Germany, which was quickly approved by the League Council.
In addition to territorial disputes, the League also tried to intervene in other conflicts between and within nations. Among its successes were its fight against the international trade in opium and sexual slavery, and its work to alleviate the plight of refugees, particularly in Turkey in the period up to 1926. One of its innovations in this latter area was the 1922 introduction of the Nansen passport, which was the first internationally recognised identity card for stateless refugees.
After an incident involving sentries on the Greek-Bulgarian border in October 1925, fighting began between the two countries. Three days after the initial incident, Greek troops invaded Bulgaria. The Bulgarian government ordered its troops to make only token resistance, and evacuated between ten thousand and fifteen thousand people from the border region, trusting the League to settle the dispute. The League condemned the Greek invasion, and called for both Greek withdrawal and compensation to Bulgaria.
Following accusations of forced labour on the large American-owned Firestone rubber plantation and American accusations of slave trading, the Liberian government asked the League to launch an investigation. The resulting commission was jointly appointed by the League, the United States, and Liberia. In 1930, a League report confirmed the presence of slavery and forced labour. The report implicated many government officials in the selling of contract labour and recommended that they be replaced by Europeans or Americans, which generated anger within Liberia and led to the resignation of President Charles D. B. King and his vice-president. The Liberian government outlawed forced labour and slavery and asked for American help in social reforms.
The Mukden Incident, also known as the "Manchurian Incident", was a decisive setback that weakened the League: its major members refused to confront Japanese aggression, and Japan itself withdrew from the organisation.
Under the agreed terms of the Twenty-One Demands with China, the Japanese government had the right to station its troops in the area around the South Manchurian Railway, a major trade route between the two countries, in the Chinese region of Manchuria. In September 1931, a section of the railway was lightly damaged by the Japanese Kwantung Army as a pretext for an invasion of Manchuria. The Japanese army claimed that Chinese soldiers had sabotaged the railway and in apparent retaliation (acting contrary to orders from Tokyo) occupied all of Manchuria. They renamed the area Manchukuo, and on 9 March 1932 set up a puppet government, with Pu Yi, the former emperor of China, as its executive head. This new entity was recognised only by the governments of Italy, Spain and Nazi Germany; the rest of the world still considered Manchuria legally part of China.
The League of Nations sent observers. The Lytton Report appeared a year later (October 1932). It declared Japan to be the aggressor and demanded Manchuria be returned to China. The report passed 42–1 in the Assembly in 1933 (only Japan voting against), but instead of removing its troops from China, Japan withdrew from the League. In the end, as British historian Charles Mowat argued, collective security was dead:
The League failed to prevent the 1932 war between Bolivia and Paraguay over the arid Gran Chaco region. Although the region was sparsely populated, it contained the Paraguay River, which would have given either landlocked country access to the Atlantic Ocean, and there was also speculation, later proved incorrect, that the Chaco would be a rich source of petroleum. Border skirmishes throughout the late 1920s culminated in an all-out war in 1932 when the Bolivian army attacked the Paraguayans at Fort Carlos Antonio López at Lake Pitiantuta. Paraguay appealed to the League of Nations, but the League did not take action when the Pan-American Conference offered to mediate instead. The war was a disaster for both sides, causing 57,000 casualties for Bolivia, whose population was around three million, and 36,000 dead for Paraguay, whose population was approximately one million. It also brought both countries to the brink of economic disaster. By the time a ceasefire was negotiated on 12 June 1935, Paraguay had seized control of most of the region, as was later recognised by the 1938 truce.
In October 1935, Italian dictator Benito Mussolini sent 400,000 troops to invade Abyssinia (Ethiopia). Marshal Pietro Badoglio led the campaign from November 1935, ordering bombing, the use of chemical weapons such as mustard gas, and the poisoning of water supplies, against targets which included undefended villages and medical facilities. The modern Italian Army defeated the poorly armed Abyssinians and captured Addis Ababa in May 1936, forcing Emperor of Ethiopia Haile Selassie to flee.
The League of Nations condemned Italy's aggression and imposed economic sanctions in November 1935, but the sanctions were largely ineffective since they did not ban the sale of oil or close the Suez Canal (controlled by Britain). As Stanley Baldwin, the British Prime Minister, later observed, this was ultimately because no one had the military forces on hand to withstand an Italian attack. In October 1935, the US President, Franklin D. Roosevelt, invoked the recently passed Neutrality Acts and placed an embargo on arms and munitions to both sides, and extended a further "moral embargo" against the belligerent Italians covering other trade items. On 5 October and later on 29 February 1936, the United States endeavoured, with limited success, to limit its exports of oil and other materials to normal peacetime levels. The League sanctions were lifted on 4 July 1936, but by that point, Italy had already gained control of the urban areas of Abyssinia.
The Hoare–Laval Pact of December 1935 was an attempt by the British Foreign Secretary Samuel Hoare and the French Prime Minister Pierre Laval to end the conflict in Abyssinia by proposing to partition the country into an Italian sector and an Abyssinian sector. Mussolini was prepared to agree to the pact, but news of the deal leaked out. Both the British and French public vehemently protested against it, describing it as a sell-out of Abyssinia. Hoare and Laval were forced to resign, and the British and French governments dissociated themselves from the two men. In June 1936, although there was no precedent for a head of state addressing the Assembly of the League of Nations in person, Haile Selassie spoke to the Assembly, appealing for its help in protecting his country.
The Abyssinian crisis showed how the League could be influenced by the self-interest of its members; one of the reasons why the sanctions were not very harsh was that both Britain and France feared the prospect of driving Mussolini and Adolf Hitler into an alliance.
On 17 July 1936, the Spanish Army launched a coup d'état, leading to a prolonged armed conflict between Spanish Republicans (the elected leftist national government) and the Nationalists (conservative, anti-communist rebels who included most officers of the Spanish Army). Julio Álvarez del Vayo, the Spanish Minister of Foreign Affairs, appealed to the League in September 1936 for arms to defend Spain's territorial integrity and political independence. The League members would not intervene in the Spanish Civil War nor prevent foreign intervention in the conflict. Adolf Hitler and Mussolini continued to aid General Francisco Franco's Nationalists, while the Soviet Union helped the Spanish Republic. In February 1937, the League did ban foreign volunteers, but this was in practice a symbolic move.
Following a long record of instigating localised conflicts throughout the 1930s, Japan began a full-scale invasion of China on 7 July 1937. On 12 September, the Chinese representative, Wellington Koo, appealed to the League for international intervention. Western countries were sympathetic to the Chinese in their struggle, particularly in their stubborn defence of Shanghai, a city with a substantial number of foreigners. The League was unable to provide any practical measures; on 4 October, it turned the case over to the Nine Power Treaty Conference.
The Nazi-Soviet Pact of 23 August 1939 contained secret protocols outlining spheres of interest. Finland and the Baltic states, as well as eastern Poland, fell into the Soviet sphere. After invading Poland on 17 September 1939, on 30 November the Soviets invaded Finland. Then "the League of Nations for the first time expelled a member who had violated the Covenant." The League action of 14 December 1939 stung. "The Soviet Union was the only League member ever to suffer such an indignity."
Article 8 of the Covenant gave the League the task of reducing "armaments to the lowest point consistent with national safety and the enforcement by common action of international obligations". A significant amount of the League's time and energy was devoted to this goal, even though many member governments were uncertain that such extensive disarmament could be achieved or was even desirable. The Allied powers were also under obligation by the Treaty of Versailles to attempt to disarm, and the armament restrictions imposed on the defeated countries had been described as the first step toward worldwide disarmament. The League Covenant assigned the League the task of creating a disarmament plan for each state, but the Council devolved this responsibility to a special commission set up in 1926 to prepare for the 1932–1934 World Disarmament Conference. Members of the League held different views towards the issue. The French were reluctant to reduce their armaments without a guarantee of military help if they were attacked; Poland and Czechoslovakia felt vulnerable to attack from the west and wanted the League's response to aggression against its members to be strengthened before they disarmed. Without this guarantee, they would not reduce armaments because they felt the risk of attack from Germany was too great. Fear of attack increased as Germany regained its strength after the First World War, especially after Adolf Hitler gained power and became German Chancellor in 1933. In particular, Germany's attempts to overturn the Treaty of Versailles and the reconstruction of the German military made France increasingly unwilling to disarm.
The World Disarmament Conference was convened by the League of Nations in Geneva in 1932, with representatives from 60 states. It was a failure. A one-year moratorium on the expansion of armaments, later extended by a few months, was proposed at the start of the conference. The Disarmament Commission obtained initial agreement from France, Italy, Spain, Japan, and Britain to limit the size of their navies but no final agreement was reached. Ultimately, the Commission failed to halt the military build-up by Germany, Italy, Spain and Japan during the 1930s.
The League was mostly silent in the face of major events leading to the Second World War, such as Hitler's remilitarisation of the Rhineland, occupation of the Sudetenland and "Anschluss" of Austria, which had been forbidden by the Treaty of Versailles. In fact, League members themselves re-armed. In 1933, Japan simply withdrew from the League rather than submit to its judgement, as did Germany the same year (using the failure of the World Disarmament Conference to agree to arms parity between France and Germany as a pretext), followed by Italy in 1937 and Spain in 1939. The final significant act of the League was to expel the Soviet Union in December 1939 after it invaded Finland.
The onset of the Second World War demonstrated that the League had failed in its primary purpose, the prevention of another world war. There were a variety of reasons for this failure, many connected to general weaknesses within the organisation. Additionally, the power of the League was limited by the United States' refusal to join.
The origins of the League as an organisation created by the Allied powers as part of the peace settlement to end the First World War led to it being viewed as a "League of Victors". The League's neutrality tended to manifest itself as indecision. It required a unanimous vote of nine, later fifteen, Council members to enact a resolution; hence, conclusive and effective action was difficult, if not impossible. It was also slow in coming to its decisions, as certain ones required the unanimous consent of the entire Assembly. This problem mainly stemmed from the fact that the primary members of the League of Nations were not willing to accept the possibility of their fate being decided by other countries, and by enforcing unanimous voting had effectively given themselves veto power.
Representation at the League was often a problem. Though it was intended to encompass all nations, many never joined, or their period of membership was short. The most conspicuous absentee was the United States. President Woodrow Wilson had been a driving force behind the League's formation and strongly influenced the form it took, but the US Senate voted not to join on 19 November 1919. Ruth Henig has suggested that, had the United States become a member, it would have also provided support to France and Britain, possibly making France feel more secure, and so encouraging France and Britain to co-operate more fully regarding Germany, thus making the rise to power of the Nazi Party less likely. Conversely, Henig acknowledges that if the US had been a member, its reluctance to engage in war with European states or to enact economic sanctions might have hampered the ability of the League to deal with international incidents. The structure of the US federal government might also have made its membership problematic, as its representatives at the League could not have made decisions on behalf of the executive branch without having the prior approval of the legislative branch.
In January 1920, when the League was born, Germany was not permitted to join because it was seen as having been the aggressor in the First World War. Soviet Russia was also initially excluded because Communist regimes were not welcomed and membership would have been initially dubious due to the Russian Civil War, in which both sides claimed to be the legitimate government of the country. The League was further weakened when major powers left in the 1930s. Japan began as a permanent member of the Council, since the country was an Allied Power in the First World War, but withdrew in 1933 after the League voiced opposition to its occupation of Manchuria. Italy began as a permanent member of the Council but withdrew in 1937, roughly a year after the end of the Second Italo-Ethiopian War. Spain also began as a permanent member of the Council, but withdrew in 1939 after the Spanish Civil War ended in a victory for the Nationalists. The League had accepted Germany, also as a permanent member of the Council, in 1926, deeming it a "peace-loving country", but Adolf Hitler pulled Germany out when he came to power in 1933.
Another important weakness grew from the contradiction between the idea of collective security that formed the basis of the League and international relations between individual states. The League's collective security system required nations to act, if necessary, against states they considered friendly, and in a way that might endanger their national interests, to support states for which they had no normal affinity. This weakness was exposed during the Abyssinia Crisis, when Britain and France had to balance maintaining the security they had attempted to create for themselves in Europe "to defend against the enemies of internal order", in which Italy's support played a pivotal role, with their obligations to Abyssinia as a member of the League.
On 23 June 1936, in the wake of the collapse of League efforts to restrain Italy's war against Abyssinia, the British Prime Minister, Stanley Baldwin, told the House of Commons that collective security had
Ultimately, Britain and France both abandoned the concept of collective security in favour of appeasement in the face of growing German militarism under Hitler.
In this context, the League of Nations was also the institution where the first international debate on terrorism took place, following the 1934 assassination of King Alexander I of Yugoslavia in Marseille, France. That debate exhibited conspiratorial framings, many of which are detectable in the discourse on terrorism among states after 9/11.
American diplomatic historian Samuel Flagg Bemis originally supported the League, but after two decades changed his mind:
The League of Nations lacked an armed force of its own and depended on the Great Powers to enforce its resolutions, which they were very unwilling to do. Its two most important members, Britain and France, were reluctant to use sanctions and even more reluctant to resort to military action on behalf of the League. Immediately after the First World War, pacifism became a strong force among both the people and governments of the two countries. The British Conservatives were especially tepid to the League and preferred, when in government, to negotiate treaties without the involvement of that organisation. Moreover, the League's advocacy of disarmament for Britain, France, and its other members, while at the same time advocating collective security, meant that the League was depriving itself of the only forceful means by which it could uphold its authority.
When the British cabinet discussed the concept of the League during the First World War, Maurice Hankey, the Cabinet Secretary, circulated a memorandum on the subject. He started by saying, "Generally it appears to me that any such scheme is dangerous to us because it will create a sense of security which is wholly fictitious". He attacked the British pre-war faith in the sanctity of treaties as delusional and concluded by claiming:
The Foreign Office civil servant Sir Eyre Crowe also wrote a memorandum to the British cabinet claiming that "a solemn league and covenant" would just be "a treaty, like other treaties". "What is there to ensure that it will not, like other treaties, be broken?" Crowe went on to express scepticism of the planned "pledge of common action" against aggressors because he believed the actions of individual states would still be determined by national interests and the balance of power. He also criticised the proposal for League economic sanctions because it would be ineffectual and that "It is all a question of real military preponderance". Universal disarmament was a practical impossibility, Crowe warned.
As the situation in Europe escalated into war, the Assembly transferred enough power to the Secretary General on 30 September 1938 and 14 December 1939 to allow the League to continue to exist legally and carry on reduced operations. The headquarters of the League, the Palace of Nations, remained unoccupied for nearly six years until the Second World War ended.
At the 1943 Tehran Conference, the Allied powers agreed to create a new body to replace the League: the United Nations. Many League bodies, such as the International Labour Organization, continued to function and eventually became affiliated with the UN. The designers of the structures of the United Nations intended to make it more effective than the League.
The final meeting of the League of Nations took place on 18 April 1946 in Geneva. Delegates from 34 nations attended the assembly. This session concerned itself with liquidating the League: it transferred assets worth approximately $22,000,000 (U.S.) in 1946 (including the Palace of Nations and the League's archives) to the UN, returned reserve funds to the nations that had supplied them, and settled the debts of the League. Robert Cecil, addressing the final session, said:
The Assembly passed a resolution that "With effect from the day following the close of the present session of the Assembly [i.e., April 19], the League of Nations shall cease to exist except for the sole purpose of the liquidation of its affairs as provided in the present resolution." A Board of Liquidation consisting of nine persons from different countries spent the next 15 months overseeing the transfer of the League's assets and functions to the United Nations or specialised bodies, finally dissolving itself on 31 July 1947.
The archive of the League of Nations was transferred to the United Nations Office at Geneva and is now an entry in the UNESCO Memory of the World Register.
In the past few decades, by research using the League Archives at Geneva, historians have reviewed the legacy of the League of Nations as the United Nations has faced similar troubles to those of the interwar period. Current consensus views that, even though the League failed to achieve its ultimate goal of world peace, it did manage to build new roads towards expanding the rule of law across the globe; strengthened the concept of collective security, giving a voice to smaller nations; helped to raise awareness to problems like epidemics, slavery, child labour, colonial tyranny, refugee crises and general working conditions through its numerous commissions and committees; and paved the way for new forms of statehood, as the mandate system put the colonial powers under international observation.
Professor David Kennedy portrays the League as a unique moment when international affairs were "institutionalised", as opposed to the pre–First World War methods of law and politics.
The principal Allies in the Second World War (the UK, the USSR, France, the U.S., and the Republic of China) became permanent members of the United Nations Security Council in 1946; in 1971, the People's Republic of China replaced the Republic of China (then only in control of Taiwan) as permanent member of the UN Security Council, and in 1991 the Russian Federation assumed the seat of the dissolved USSR.
Decisions of the Security Council are binding on all members of the UN, and unanimous decisions are not required, unlike in the League Council. Permanent members of the Security Council can wield a veto to protect their vital interests.
The League of Nations archives is a collection of the League's records and documents. It consists of approximately 15 million pages of content dating from the inception of the League of Nations in 1919 extending through its dissolution, which commenced in 1946. It is located at the United Nations Office at Geneva.
In 2017, the UN Library & Archives Geneva launched the Total Digital Access to the League of Nations Archives Project (LONTAD), with the intention of preserving, digitizing, and providing online access to the League of Nations archives. It is scheduled for completion in 2022.
https://en.wikipedia.org/wiki?curid=17926
Logic programming
Logic programming is a programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP) and Datalog. In all of these languages, rules are written in the form of "clauses":

H :- B1, ..., Bn.

and are read declaratively as logical implications:

H if B1 and ... and Bn.

H is called the "head" of the rule and B1, ..., Bn is called the "body". Facts are rules that have no body, and are written in the simplified form:

H.
In the simplest case in which H, B1, ..., Bn are all atomic formulae, these clauses are called definite clauses or Horn clauses. However, there are many extensions of this simple case, the most important one being the case in which conditions in the body of a clause can also be negations of atomic formulas. Logic programming languages that include this extension have the knowledge representation capabilities of a non-monotonic logic.
In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be controlled by the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures:
Consider the following clause as an example:

fallible(X) :- human(X).

based on an example used by Terry Winograd to illustrate the programming language Planner. As a clause in a logic program, it can be used both as a procedure to test whether X is fallible by testing whether X is human, and as a procedure to find an X which is fallible by finding an X which is human. Even facts have a procedural interpretation. For example, the clause:

human(socrates).

can be used both as a procedure to show that socrates is human, and as a procedure to find an X that is human by "assigning" socrates to X.
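This dual use of a single clause can be sketched in Python. The sketch is a loose analogy, not real Prolog: the generator plays the role of unification, and the names `human` and `fallible` simply mirror the example above.

```python
# A minimal sketch (not a real Prolog engine): the clause
# "fallible(X) :- human(X)." and the fact "human(socrates)."
facts = {("human", "socrates")}

def human(x=None):
    """Test a given x, or generate all known humans."""
    for pred, arg in facts:
        if pred == "human" and (x is None or x == arg):
            yield arg

def fallible(x=None):
    """fallible(X) :- human(X): reduce the goal to the subgoal human(X)."""
    yield from human(x)
```

Calling `fallible("socrates")` uses the clause as a test, while `fallible()` uses the same clause to generate an answer, mirroring the two procedural readings.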
The declarative reading of logic programs can be used by a programmer to verify their correctness. Moreover, logic-based program transformation techniques can also be used to transform logic programs into logically equivalent programs that are more efficient. In the Prolog family of logic programming languages, the programmer can also use the known problem-solving behaviour of the execution mechanism to improve the efficiency of programs.
The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the clausal form of logic for representing computer programs was made by Cordell Green. This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language which places no constraints on the order in which operations are performed.
Logic programming in its present form can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in artificial intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert.
Although it was based on the proof methods of logic, Planner, developed at MIT, was the first language to emerge within this proceduralist paradigm. Planner featured pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was the subset of Planner, called Micro-Planner, implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. It was used to implement Winograd's natural-language understanding program SHRDLU, which was a landmark at that time. To cope with the very limited memory systems at the time, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the programming languages QA-4, Popler, Conniver, QLISP, and the concurrent language Ether.
Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover. Kowalski, on the other hand, developed SLD resolution, a variant of SL-resolution, and showed how it treats implications as goal-reduction procedures. Kowalski collaborated with Colmerauer in Marseille, who developed these ideas in the design and implementation of the programming language Prolog.
The Association for Logic Programming was founded in 1986 to promote logic programming.
Prolog gave rise to the programming languages ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog, as well as a variety of concurrent logic programming languages, constraint logic programming languages and Datalog.
Logic programming can be viewed as controlled deduction. An important concept in logic programming is the separation of programs into their logic component and their control component. With pure logic programming languages, the logic component alone determines the solutions produced. The control component can be varied to provide alternative ways of executing a logic program. This notion is captured by the slogan

Algorithm = Logic + Control

where "Logic" represents a logic program and "Control" represents different theorem-proving strategies.
In the simplified, propositional case in which a logic program and a top-level atomic goal contain no variables, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or".
Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal is considered at a time. Other search strategies, such as parallel search, intelligent backtracking, or best-first search to find an optimal solution, are also possible.
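The and-or tree search with Prolog-style depth-first backtracking can be sketched for the propositional case. The program below is hypothetical, chosen only to exercise both the "or" (alternative clauses) and "and" (conjoined subgoals) branching:

```python
# Hypothetical propositional program: each head maps to a list of
# alternative bodies ("or" branches), each body a list of subgoals
# ("and" branches). A fact has an empty body.
program = {
    "a": [["b", "c"], ["d"]],   # a :- b, c.   a :- d.
    "b": [[]],                  # b.
    "d": [[]],                  # d.
}

def solve(goal):
    """Depth-first, backtracking search of the and-or tree."""
    for body in program.get(goal, []):        # "or": try each clause in turn
        if all(solve(sub) for sub in body):   # "and": every subgoal must hold
            return True                       # commit; earlier failures backtracked
    return False
```

Solving "a" first tries the clause a :- b, c, fails on the undefined "c", backtracks, and succeeds via a :- d, exactly the last-in-first-out behaviour described above.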
In the more general case, where sub-goals share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in concurrent logic programming.
For most practical applications, as well as for applications that require non-monotonic reasoning in artificial intelligence, Horn clause logic programs need to be extended to normal logic programs, with negative conditions. A "clause" in a normal logic program has the form:

H :- A1, ..., An, not B1, ..., not Bm.

and is read declaratively as a logical implication:

H if A1 and ... and An and not B1 and ... and not Bm.

where H and all the Ai and Bi are atomic formulas. The negation in the negative literals not Bi is commonly referred to as "negation as failure", because in most implementations, a negative condition not Bi is shown to hold by showing that the positive condition Bi fails to hold. For example:
canfly(X) :- bird(X), not abnormal(X).
abnormal(X) :- wounded(X).
bird(john).
bird(mary).
wounded(john).
Given the goal of finding something that can fly:

:- canfly(X).
there are two candidate solutions, which solve the first subgoal bird(X), namely X = john and X = mary. The second subgoal not abnormal(john) of the first candidate solution fails, because wounded(john) succeeds and therefore abnormal(john) succeeds. However, the second subgoal not abnormal(mary) of the second candidate solution succeeds, because wounded(mary) fails and therefore abnormal(mary) fails. Therefore, X = mary is the only solution of the goal.
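This negation-as-failure computation can be sketched in Python for the ground case. The sketch hard-codes the two rules of the example; a real Prolog system would handle arbitrary clauses and variables via unification:

```python
# Mini evaluator with negation as failure over ground facts (a sketch).
facts = {("bird", "john"), ("bird", "mary"), ("wounded", "john")}

def holds(pred, x):
    if pred == "abnormal":                 # abnormal(X) :- wounded(X).
        return holds("wounded", x)
    if pred == "canfly":                   # canfly(X) :- bird(X), not abnormal(X).
        # "not" is implemented as failure of the positive goal.
        return holds("bird", x) and not holds("abnormal", x)
    return (pred, x) in facts              # base facts

def canfly_solutions():
    """Enumerate candidates from the first subgoal bird(X), then filter."""
    return [x for p, x in sorted(facts) if p == "bird" and holds("canfly", x)]
```

As in the text, john is filtered out because abnormal(john) succeeds via wounded(john), leaving mary as the only solution.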
Micro-Planner had a construct, called "thnot", which when applied to an expression returns the value true if (and only if) the evaluation of the expression fails. An equivalent operator is normally built into modern Prolog implementations. It is normally written as not(Goal) or \+ Goal, where Goal is some goal (proposition) to be proved by the program. This operator differs from negation in first-order logic: a negation such as \+ X == 1 fails when the variable X has been bound to the atom 1, but it succeeds in all other cases, including when X is unbound. This makes Prolog's reasoning non-monotonic: X = 1, \+ X == 1 always fails, while \+ X == 1, X = 1 can succeed, binding X to 1, depending on whether X was initially bound (note that standard Prolog executes goals in left-to-right order).
The logical status of negation as failure was unresolved until Keith Clark [1978] showed that, under certain natural conditions, it is a correct (and sometimes complete) implementation of classical negation with respect to the completion of the program. Completion amounts roughly to regarding the set of all the program clauses with the same predicate on the left hand side, say

H :- Body1.
...
H :- Bodyk.

as a definition of the predicate

H iff (Body1 or ... or Bodyk)

where "iff" means "if and only if". Writing the completion also requires explicit use of the equality predicate and the inclusion of a set of appropriate axioms for equality. However, the implementation of negation by failure needs only the if-halves of the definitions without the axioms of equality.
For example, the completion of the program above is:

canfly(X) iff bird(X), not abnormal(X).
abnormal(X) iff wounded(X).
bird(X) iff X = john or X = mary.
wounded(X) iff X = john.
X = X.
not john = mary.
not mary = john.
The notion of completion is closely related to McCarthy's circumscription semantics for default reasoning, and to the closed world assumption.
As an alternative to the completion semantics, negation as failure can also be interpreted epistemically, as in the stable model semantics of answer set programming. In this interpretation not(Bi) means literally that Bi is not known or not believed. The epistemic interpretation has the advantage that it can be combined very simply with classical negation, as in "extended logic programming", to formalise such phrases as "the contrary can not be shown", where "contrary" is classical negation and "can not be shown" is the epistemic interpretation of negation as failure.
The fact that Horn clauses can be given a procedural interpretation and, vice versa, that goal-reduction procedures can be understood as Horn clauses + backward reasoning means that logic programs combine declarative and procedural representations of knowledge. The inclusion of negation as failure means that logic programming is a kind of non-monotonic logic.
Despite its simplicity compared with classical logic, this combination of Horn clauses and negation as failure has proved to be surprisingly expressive. For example, it provides a natural representation for the common-sense laws of cause and effect, as formalised by both the situation calculus and event calculus. It has also been shown to correspond quite naturally to the semi-formal language of legislation. In particular, Prakken and Sartor credit the representation of the British Nationality Act as a logic program with being "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences".
The programming language Prolog was developed in 1972 by Alain Colmerauer. It emerged from a collaboration between Colmerauer in Marseille and Robert Kowalski in Edinburgh. Colmerauer was working on natural language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer and Kowalski discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem provers, like hyper-resolution, behave as bottom-up parsers and others, like SL-resolution (1971), behave as top-down parsers.
It was in the following summer of 1972 that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications. This dual declarative/procedural interpretation later became formalised in the Prolog notation

H :- B1, ..., Bn.

which can be read (and used) both declaratively and procedurally. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, where H, B1, ..., Bn are all atomic predicate logic formulae, and that SL-resolution could be restricted (and generalised) to LUSH or SLD-resolution. Kowalski's procedural interpretation and LUSH were described in a 1973 memo, published in 1974.
Colmerauer, with Philippe Roussel, used this dual interpretation of clauses as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp. Edinburgh Prolog became the "de facto" standard and strongly influenced the definition of ISO standard Prolog.
Abductive logic programming is an extension of normal logic programming that allows some predicates, declared as abducible predicates, to be "open" or undefined. A clause in an abductive logic program has the form:

H :- B1, ..., Bn, A1, ..., An.

where H is an atomic formula that is not abducible, all the Bi are literals whose predicates are not abducible, and the Ai are atomic formulas whose predicates are abducible. The abducible predicates can be constrained by integrity constraints, which can have the form:

false :- L1, ..., Ln.

where the Li are arbitrary literals (defined or abducible, and atomic or negated). For example:
canfly(X) :- bird(X), normal(X).
false :- normal(X), wounded(X).
bird(john).
bird(mary).
wounded(john).
where the predicate normal is abducible.
Problem solving is achieved by deriving hypotheses expressed in terms of the abducible predicates as solutions of problems to be solved. These problems can be either observations that need to be explained (as in classical abductive reasoning) or goals to be solved (as in normal logic programming). For example, the hypothesis normal(mary) explains the observation canfly(mary). Moreover, the same hypothesis entails the only solution X = mary of the goal of finding something which can fly:

:- canfly(X).
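The abductive step can be sketched in Python for this small example. The sketch is a brute-force search over candidate hypotheses rather than a general abductive proof procedure, and the individuals list is an assumption introduced for the sketch:

```python
# Sketch of abduction: hypothesise "normal(X)" atoms that explain
# canfly(X) while respecting the constraint false :- normal(X), wounded(X).
facts = {("bird", "john"), ("bird", "mary"), ("wounded", "john")}
individuals = ["john", "mary"]   # assumed finite domain for the sketch

def explains_canfly(x, hypotheses):
    # canfly(X) :- bird(X), normal(X), with normal supplied by hypothesis.
    return ("bird", x) in facts and ("normal", x) in hypotheses

def consistent(hypotheses):
    # Integrity constraint: nothing may be both normal and wounded.
    return all(("wounded", x) not in facts for _, x in hypotheses)

def abduce(observation):
    """Single-atom hypotheses that consistently explain canfly(observation)."""
    return [{("normal", x)} for x in individuals
            if x == observation
            and consistent({("normal", x)})
            and explains_canfly(x, {("normal", x)})]
```

The hypothesis normal(john) is rejected by the integrity constraint (john is wounded), so only normal(mary) survives, matching the text.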
Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret Negation as Failure as a form of abductive reasoning.
Because mathematical logic has a long tradition of distinguishing between object language and metalanguage, logic programming also allows metalevel programming. The simplest metalogic program is the so-called "vanilla" meta-interpreter:

solve(true).
solve((A,B)) :- solve(A), solve(B).
solve(A) :- clause(A,B), solve(B).

where true represents an empty conjunction, and clause(A,B) means that there is an object-level clause of the form A :- B.
Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. It can also be used to implement any logic which is specified as inference rules. Metalogic is used in logic programming to implement metaprograms, which manipulate other programs, databases, knowledge bases or axiomatic theories as data.
Constraint logic programming combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates, declared as constraint predicates, to occur as literals in the body of clauses. A constraint logic program is a set of clauses of the form:

H :- C1, ..., Cn, B1, ..., Bn.

where H and all the Bi are atomic formulas, and the Ci are constraints. Declaratively, such clauses are read as ordinary logical implications:

H if C1 and ... and Cn and B1 and ... and Bn.
However, whereas the predicates in the heads of clauses are defined by the constraint logic program, the predicates in the constraints are predefined by some domain-specific model-theoretic structure or theory.
Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are checked for satisfiability by a domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.
The following constraint logic program represents a toy temporal database of john's history as a teacher:
teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T) :- 2005 ≤ T, T ≤ 2012.
rank(john, instructor, T) :- 1990 ≤ T, T < 2010.
rank(john, professor, T) :- 2010 ≤ T, T < 2014.
Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both taught logic and was a professor:

:- teaches(john, logic, T), rank(john, professor, T).

The solution is 2010 ≤ T, T ≤ 2012.
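For this particular query, the constraint solving reduces to intersecting intervals of years. The Python sketch below is a simplification (it treats the bounds as closed integer intervals rather than a general constraint store):

```python
# Each matching clause contributes an interval constraint on T
# (bounds normalised to closed integer intervals for the sketch).
teaches_logic = (2005, 2012)   # 2005 <= T, T <= 2012
professor = (2010, 2013)       # 2010 <= T, T < 2014

def intersect(a, b):
    """Conjoin two interval constraints on T; None if unsatisfiable."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None
```

Intersecting the two intervals gives 2010 ≤ T ≤ 2012, the solution stated above; an empty intersection would correspond to an unsatisfiable conjunction of constraints.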
Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.
Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s by its choice for the systems programming language of the Japanese Fifth Generation Project (FGCS).
A concurrent logic program is a set of guarded Horn clauses of the form:

H :- G1, ..., Gn | B1, ..., Bn.

The conjunction G1, ..., Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:

H if G1 and ... and Gn and B1 and ... and Bn.
However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, ... , Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, ..., Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".
For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge) , which can be used to shuffle two lists Left and Right, combining them into a single list Merge that preserves the ordering of the two lists Left and Right:
shuffle([], [], []).
shuffle(Left, Right, Merge) :-
    Left = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Rest, Right, ShortMerge).
shuffle(Left, Right, Merge) :-
    Right = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Left, Rest, ShortMerge).
Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2] by invoking the goal clause:
shuffle([ace, queen, king], [1, 4, 2], Merge).
The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].
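The committed-choice ("don't care") behaviour can be sketched in Python, with the arbitrary commitment between clauses whose guards hold modelled by a random choice. This is a simulation of the observable behaviour, not of the parallel execution model itself:

```python
import random

def shuffle(left, right):
    """Merge two lists, preserving each list's internal order."""
    merge = []
    left, right = list(left), list(right)
    while left or right:
        # The guards of both recursive clauses hold whenever the
        # corresponding list is non-empty: commit arbitrarily to one.
        src = random.choice([xs for xs in (left, right) if xs])
        merge.append(src.pop(0))
    return merge
```

Any single run produces one interleaving, such as [ace, queen, 1, king, 4, 2]; rerunning may produce a different one, but every result preserves the order of both input lists.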
Arguably, concurrent logic programming is based on message passing, so it is subject to the same indeterminacy as other concurrent message-passing systems, such as Actors (see Indeterminacy in concurrent computation). Carl Hewitt has argued that concurrent logic programming is not based on logic in his sense that computational steps cannot be logically deduced. However, in concurrent logic programming, any result of a terminating computation is a logical consequence of the program, and any partial result of a partial computation is a logical consequence of the program and the residual goal (process network). Thus the indeterminacy of computations implies that not all logical consequences of the program can be deduced.
Concurrent constraint logic programming combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to use only one.
Inductive logic programming is concerned with generalizing positive and negative examples in the context of background knowledge: machine learning of logic programs. Recent work in this area, combining logic programming, learning and probability, has given rise to the new field of statistical relational learning and probabilistic inductive logic programming.
Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.
Basing logic programming within linear logic has resulted in the design of logic programming languages which are considerably more expressive than those based on classical logic. Horn clause programs can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic programming languages based on linear logic include LO [Andreoli & Pareschi, 1991], Lolli, ACL, and Forum [Miller, 1996]. Forum provides a goal-directed interpretation of all of linear logic.
F-logic extends logic programming with objects and the frame syntax.
Logtalk extends the Prolog programming language with support for objects, protocols, and other OOP concepts. It supports most standard-compliant Prolog systems as backend compilers.
Transaction logic is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a subset of Transaction logic is available in the Flora-2 system. Other prototypes are also available.
https://en.wikipedia.org/wiki?curid=17927
Lake Tana
Lake Tana (also spelled T'ana; an older variant is Tsana, Ge'ez: ጻና "Ṣānā"; sometimes called "Dembiya" after the region to the north of the lake) is the source of the Blue Nile and is the largest lake in Ethiopia. Located in Amhara Region in the north-western Ethiopian Highlands, the lake is approximately long and wide, with a maximum depth of , and an elevation of . Lake Tana is fed by the Gilgel Abay, Reb and Gumara rivers. Its surface area ranges from , depending on season and rainfall. The lake level has been regulated since the construction of the control weir where the lake discharges into the Blue Nile. This controls the flow to the Blue Nile Falls (Tis Abbai) and the hydro-power station.
In 2015, the Lake Tana region was nominated as a UNESCO Biosphere Reserve recognizing its national and international natural and cultural importance.
Lake Tana was formed by volcanic activity, blocking the course of inflowing rivers in the early Pleistocene epoch, about 5 million years ago.
The lake was originally much larger than it is today. Seven large permanent rivers feed the lake as well as 40 small seasonal rivers. The main tributaries to the lake are Gilgel Abbay (Little Nile River), and the Megech, Gumara, and Rib rivers.
Lake Tana has a number of islands, whose number varies depending on the level of the lake. It has fallen about in the last 400 years. According to Manoel de Almeida (a Portuguese missionary in the early 17th century), there were 21 islands, seven to eight of which had monasteries on them "formerly large, but now much diminished." When James Bruce visited the area in the later 18th century, he noted that the locals counted 45 inhabited islands, but stated he believed that "the number may be about eleven." A 20th-century geographer named 37 islands, of which he believed 19 have or had monasteries or churches on them.
Remains of ancient Ethiopian emperors and treasures of the Ethiopian Church are kept in the isolated island monasteries (including Kebran Gabriel, Ura Kidane Mehret, Narga Selassie, Daga Estifanos, Medhane Alem of Rema, Kota Maryam, and Mertola Maryam). On the island of Tana Qirqos is a rock shown to Paul B. Henze, on which he was told the Virgin Mary had rested on her journey back from Egypt; he was also told that Frumentius, who introduced Christianity to Ethiopia, is "allegedly buried on Tana Cherqos." The body of Yekuno Amlak is interred in the monastery of St. Stephen on Daga Island. Emperors whose tombs are also on Daga include Dawit I, Zara Yaqob, Za Dengel, and Fasilides. Other important islands in Lake Tana include Dek, Mitraha, Gelila Zakarias, Halimun and Briguida.
The monasteries are believed to have been built over earlier religious sites. They include the fourteenth-century Debre Maryam, and the eighteenth-century Narga Selassie, Tana Qirqos (said to have housed the Ark of the Covenant before it was moved to Axum), and Ura Kidane Mehret, known for its regalia. A ferry service links Bahir Dar with Gorgora via Dek Island and various lakeshore villages.
There is also Zege Peninsula on the southwest portion of the lake. Zege is the site of the Azwa Maryam monastery.
Compared to other tropical lakes, the waters in Lake Tana are relatively cold, typically ranging from about . The water has a pH that is neutral to somewhat alkaline and its transparency is quite low.
Because of the large seasonal variations in the inflow of its tributaries, rain and evaporation, the water levels of Lake Tana typically vary by in a year, peaking in September–October just after the main wet season. When the water levels are high, the plains around the lake often are flooded and other permanent swamps in the region become connected to the lake.
Since there are no inflows that link the lake to other large waterways and the main outflow, the Blue Nile, is obstructed by the Blue Nile Falls, the lake supports a highly distinctive aquatic fauna, which generally is related to species from the Nile Basin. The lake's nutrient levels are low.
There are 27 fish species in Lake Tana and 20 of these are endemic. This includes one of only two known cyprinid species flocks (the other, from Lake Lanao in the Philippines, has been decimated by introduced species). It consists of 15 relatively large "Labeobarbus" barbs that formerly were included in "Barbus". Among these, "L. acutirostris", "L. longissimus", "L. megastoma" and "L. truttiformis" are strictly piscivorous, and "L. dainellii", "L. gorguari", "L. macrophtalmus" and "L. platydorsus" are mostly piscivorous. Their most important prey are the small "Enteromius" and "Garra" species. The remaining "Labeobarbus" in Lake Tana have other specialized feeding habits: "L. beso" (non-endemic and not closely related to the others) feeds on algae, "L. surkis" mostly on macrophytes, "L. gorgorensis" on macrophytes and molluscs, "L. brevicephalus" on zooplankton (although juveniles of all members of the species flock feed on zooplankton), "L. osseensis" on macrophytes and adult insects, and "L. crassibarbis", "L. intermedius" (non-endemic but closely related to the others), "L. nedgia" and "L. tsanensis" on benthic invertebrates such as chironomid larvae. Among the endemic "Labeobarbus", eight species spawn in the lake's wetlands, while the rest move seasonally into its tributaries to spawn.
In addition to the "Labeobarbus" species flock, the endemic species are "Enteromius pleurogramma", "E. tanapelagius", "Garra regressus", "G. tana" and "Afronemacheilus abyssinicus" (one of only two African stone loaches). The remaining non-endemic species are the Nile tilapia (widespread in Africa, but represented in the lake by the endemic subspecies "tana"), "E. humilis", "G. dembecha", "G. dembeensis" and the large African sharptooth catfish.
Lake Tana supports a large fishing industry, mainly based on the "Labeobarbus" barbs, Nile tilapia and sharptooth catfish. According to the Ethiopian Department of Fisheries and Aquaculture, 1,454 tons of fish were landed in 2011 at Bahir Dar, which the department estimated was 15% of the sustainable amount. Nevertheless, a review comparing catches in 2001 to those ten years earlier found that the typical sizes of both the tilapia and the catfish had decreased significantly, and that populations of the "Labeobarbus" barbs that breed in the tributaries had declined significantly. Among the endemic fish, most are considered threatened (endangered or vulnerable) or data deficient (available data insufficient for evaluating a status) by the IUCN. In the early 2000s, the local government introduced fisheries legislation for the first time, and it is hoped that this will have a positive effect on the fish populations.
Other serious threats are habitat destruction and pollution. Bahir Dar has become a large, rapidly growing city, and its wastewater is generally released directly into the lake. The vegetation in the lake's wetlands, which are an important nursery for the "Labeobarbus" and other fish, is being cleared at a fast pace. A potentially serious threat to the unique ecosystem would be the introduction of a large and efficient predatory species such as the Nile perch, which has been implicated in numerous extinctions in Lake Victoria. The piscivorous "Labeobarbus" of Lake Tana are relatively inefficient predators that can only take fish up to about 15% of their own length.
Among other fauna, the lake supports relatively few invertebrates: There are fifteen species of molluscs, including one endemic, and also an endemic freshwater sponge.
About 230 species of birds, including more than 80 wetland birds such as the great white pelican, African darter, hamerkop, storks, African spoonbill, ibis, ducks, kingfishers and African fish eagle, are known from Lake Tana. It is an important resting and feeding ground for many Palearctic migrant waterbirds.
There are no crocodiles, but the African softshell turtle has been recorded near the Blue Nile outflow from the lake. Hippos are present, mostly near the Blue Nile outflow.
Lola Graham
Lola Glenn Graham (23 September 1918 – 2 January 1992) was an Australian pianist. She first came to public attention after winning a musical competition at age six by playing the piano. She attended Shelford Church of England Girls' Grammar School and passed her music examinations in December 1933. In October 1936 her piano teacher, Sheila MacFie, organised a recital for Graham and fellow student Eda Ashton at the British Music Society's rooms, Melbourne. In April 1942 Graham and Ashton were pianists for a radio broadcast on 3LO on the Australian Broadcasting Commission's network. In May of the following year her chamber music piano work was praised by "The Argus" reporter: "Graham showed virtuosity in her playing of Albanesi's Sonata in C Major."
She worked in radio for most of her career. In October 1946 she performed a duo piano recital with Mamie Reid on national radio. She worked in live musical theatre both as a band member and accompanist. Graham married Fred Menhennitt on 23 February 1957 and the couple had two sons. She was a backing musician for Barry Humphries, and in 1962, she provided piano on his album, "A Nice Night's Entertainment". She died, aged 73, after being diagnosed with cancer.
Liquid-crystal display
A liquid-crystal display (LCD) is a flat-panel display or other electronically modulated optical device that uses the light-modulating properties of liquid crystals combined with polarizers. Liquid crystals do not emit light directly, instead using a backlight or reflector to produce images in color or monochrome. LCDs are available to display arbitrary images (as in a general-purpose computer display) or fixed images with low information content, which can be displayed or hidden, such as preset words, digits, and seven-segment displays, as in a digital clock. They use the same basic technology, except that arbitrary images are made from a matrix of small pixels, while other displays have larger elements. LCDs can either be normally on (positive) or off (negative), depending on the polarizer arrangement. For example, a character positive LCD with a backlight will have black lettering on a background that is the color of the backlight, and a character negative LCD will have a black background with the letters being of the same color as the backlight. Optical filters are added to white on blue LCDs to give them their characteristic appearance.
LCDs are used in a wide range of applications, including LCD televisions, computer monitors, instrument panels, aircraft cockpit displays, and indoor and outdoor signage. Small LCD screens are common in portable consumer devices such as digital cameras, watches, calculators, and mobile telephones, including smartphones. LCD screens are also used on consumer electronics products such as DVD players, video game devices and clocks. LCD screens have replaced heavy, bulky cathode ray tube (CRT) displays in nearly all applications, and they are available in a wider range of screen sizes than CRT and plasma displays, from tiny digital watches to very large television receivers. LCDs are slowly being replaced by OLEDs, which can be easily made into different shapes and offer a lower response time, wider color gamut, virtually infinite contrast and viewing angles, lower weight for a given display size, a slimmer profile (because OLEDs use a single glass or plastic panel whereas LCDs use two glass panels; the thickness of the panels increases with size, but the increase is more noticeable on LCDs) and potentially lower power consumption (as the display is only "on" where needed and there is no backlight). OLEDs, however, are more expensive for a given display size due to the very expensive electroluminescent materials or phosphors that they use. Also because of the phosphors, OLEDs suffer from screen burn-in, and there is currently no way to recycle OLED displays, whereas LCD panels can be recycled, although the technology required to do so is not yet widespread. One attempt to keep LCDs competitive is the quantum dot display, which offers performance similar to an OLED display, but the quantum dot sheet that gives these displays their characteristics cannot yet be recycled.
Since LCD screens do not use phosphors, they rarely suffer image burn-in when a static image is displayed on a screen for a long time, e.g., the table frame for an airline flight schedule on an indoor sign. LCDs are, however, susceptible to image persistence. The LCD screen is more energy-efficient and can be disposed of more safely than a CRT can. Its low electrical power consumption enables it to be used in battery-powered electronic equipment more efficiently than a CRT can be. By 2008, annual sales of televisions with LCD screens exceeded sales of CRT units worldwide, and the CRT became obsolete for most purposes.
Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, and two polarizing filters (parallel and perpendicular), the axes of transmission of which are (in most of the cases) perpendicular to each other. Without the liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer. Before an electric field is applied, the orientation of the liquid-crystal molecules is determined by the alignment at the surfaces of electrodes. In a twisted nematic (TN) device, the surface alignment directions at the two electrodes are perpendicular to each other, and so the molecules arrange themselves in a helical structure, or twist. This induces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid crystal layer. This light will then be mainly polarized perpendicular to the second filter, and thus be blocked and the pixel will appear black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts thus constituting different levels of gray. Most color LCD systems use the same technique, with color filters used to generate red, green, and blue pixels. The color filters are made with a photolithography process. Red, green, blue and black resists are used. All resists contain a finely ground powdered pigment, with particles being just 40 nanometers across. The black resist is the first to be applied; this will create a black grid that will separate red, green and blue subpixels from one another. After the black resist has been dried in an oven and exposed to UV light through a photomask, the unexposed areas are washed away. 
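The twisted-nematic behavior described above can be captured in a minimal idealized model: in the waveguiding limit, the polarization of the light follows the director twist, so the fraction passed by the crossed analyzer is simply sin² of the remaining twist angle. The sketch below assumes this idealization (real cells follow the more complicated Gooch–Tarry relation), and the function name is illustrative:

```python
import math

def tn_transmission(twist_deg: float) -> float:
    """Idealized transmission of a TN cell between crossed polarizers.

    In the waveguiding limit the polarization follows the director twist,
    so the crossed analyzer passes sin^2(twist). A full 90-degree twist
    (no voltage) transmits fully; a fully untwisted cell (high voltage,
    twist = 0) transmits nothing, i.e. the pixel is black.
    """
    return math.sin(math.radians(twist_deg)) ** 2

# Applied voltage untwists the helix: sweep from 90 degrees (off) to 0 (on).
for twist in (90, 60, 30, 0):
    print(f"twist {twist:2d} deg -> relative transmission {tn_transmission(twist):.2f}")
```

Intermediate voltages leave a partial twist, which is exactly how the varying gray levels mentioned above arise.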
Then the same process is repeated with the remaining resists. This fills the holes in the black grid (or matrix) with their corresponding colored resists. Another color-generation method, used in early color PDAs and some calculators, varied the voltage in a super-twisted nematic LCD, where the variable twist between tighter-spaced plates causes a varying birefringence (double refraction), thus changing the hue. These displays were typically restricted to 3 colors per pixel: orange, green, and blue.
The optical effect of a TN device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, TN displays with low information content and no backlighting are usually operated between crossed polarizers such that they appear bright with no voltage (the eye is much more sensitive to variations in the dark state than in the bright state). Since most 2010-era LCDs are used in television sets, monitors and smartphones, they have high-resolution matrix arrays of pixels to display arbitrary images using backlighting with a dark background. When no image is displayed, different arrangements are used. For this purpose, TN LCDs are operated between parallel polarizers, whereas IPS LCDs feature crossed polarizers. In many applications IPS LCDs have replaced TN LCDs, in particular in smartphones such as iPhones. Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided either by applying an alternating current or by reversing the polarity of the electric field as the device is addressed (the response of the liquid crystal layer is identical regardless of the polarity of the applied field).
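The polarity-reversal idea in the last sentences (alternating the field so the liquid crystal sees zero average DC, while the RMS drive that determines the optical state is unchanged) can be sketched as a simple frame-inversion scheme. The function name and voltage values below are illustrative, not taken from any real driver:

```python
def frame_inversion(levels, frames):
    """Alternate the sign of each pixel's drive voltage every frame.

    The time-averaged DC across the liquid crystal is then zero (avoiding
    ionic degradation), while the magnitude, and hence the RMS value that
    sets the optical response, is the same every frame.
    """
    drive = []
    for f in range(frames):
        sign = 1 if f % 2 == 0 else -1
        drive.append([sign * v for v in levels])
    return drive

# Three hypothetical pixel voltages driven over four frames.
waveform = frame_inversion([2.0, 3.5, 5.0], frames=4)
for pixel in range(3):
    dc = sum(frame[pixel] for frame in waveform) / len(waveform)
    print(f"pixel {pixel}: mean DC = {dc}")  # 0.0 for an even frame count
```

Real panels use finer-grained variants (line or dot inversion) of the same DC-balancing principle.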
Displays for a small number of individual digits or fixed symbols (as in digital watches and pocket calculators) can be implemented with independent electrodes for each segment. In contrast, full alphanumeric or variable graphics displays are usually implemented with pixels arranged as a matrix consisting of electrically connected rows on one side of the LC layer and columns on the other side, which makes it possible to address each pixel at the intersections. The general method of matrix addressing consists of sequentially addressing one side of the matrix, for example by selecting the rows one-by-one and applying the picture information on the other side at the columns row-by-row. "For details on the various matrix addressing schemes see" passive-matrix and active-matrix addressed LCDs.
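A rough sketch of the sequential matrix addressing just described: with only rows + columns drive lines (instead of one wire per pixel), the controller selects one row at a time and presents that row's picture data on all column lines in parallel. This is a conceptual model only, not real driver code, and the names are made up:

```python
def scan_matrix(image):
    """Sequentially address a pixel matrix, one row per scan step.

    Only rows + cols drive lines are needed instead of rows * cols wires,
    because each pixel sits at the intersection of one row and one column.
    """
    rows, cols = len(image), len(image[0])
    panel = [[0] * cols for _ in range(rows)]
    for r in range(rows):            # the row driver selects one line...
        for c in range(cols):        # ...while column drivers apply data in parallel
            panel[r][c] = image[r][c]
    return panel

image = [[0, 1, 1], [1, 0, 1]]
assert scan_matrix(image) == image   # after one full scan, the panel shows the image
```

For a 2×3 image this needs 5 drive lines rather than 6 individual pixel wires; the saving grows rapidly with resolution.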
LCDs, along with OLED displays, are manufactured in cleanrooms using large sheets of glass whose size has increased over time. Several displays are manufactured at the same time, and then cut from the sheet of glass, also known as the mother glass. The increase in size allows more displays or larger displays to be made, just like with increasing wafer sizes in semiconductor manufacturing. The glass sizes are as follows:
Until Gen 8, manufacturers would not agree on a single mother glass size and as a result, different manufacturers would use slightly different glass sizes for the same generation. The thickness of the mother glass also increases with each generation, so larger mother glass sizes are better suited for larger displays. An LCD Module (LCM) is a ready-to-use LCD. Thus, a factory that makes LCD Modules does not necessarily make LCDs, it may only assemble them into the modules.
The origins and the complex history of liquid-crystal displays from the perspective of an insider during the early days were described by Joseph A. Castellano in "Liquid Gold: The Story of Liquid Crystal Displays and the Creation of an Industry".
Another report on the origins and history of LCD from a different perspective until 1991 has been published by Hiroshi Kawamoto, available at the IEEE History Center.
A description of Swiss contributions to LCD developments, written by Peter J. Wild, can be found at the "Engineering and Technology History Wiki".
In 1888, Friedrich Reinitzer (1858–1927) discovered the liquid crystalline nature of cholesterol extracted from carrots (that is, two melting points and generation of colors) and published his findings at a meeting of the Vienna Chemical Society on May 3, 1888 (F. Reinitzer: "Beiträge zur Kenntniss des Cholesterins, Monatshefte für Chemie (Wien) 9, 421–441 (1888)"). In 1904, Otto Lehmann published his work ""Flüssige Kristalle"" (Liquid Crystals). In 1911, Charles Mauguin first experimented with liquid crystals confined between plates in thin layers.
In 1922, Georges Friedel described the structure and properties of liquid crystals and classified them into 3 types (nematics, smectics and cholesterics). In 1927, Vsevolod Frederiks devised the electrically switched light valve, called the Fréedericksz transition, the essential effect of all LCD technology. In 1936, the Marconi Wireless Telegraph company patented the first practical application of the technology, "The Liquid Crystal Light Valve". In 1962, George W. Gray published the first major English-language work on the subject, "Molecular Structure and Properties of Liquid Crystals". Also in 1962, Richard Williams of RCA found that liquid crystals had some interesting electro-optic characteristics, and he realized an electro-optical effect by generating stripe patterns in a thin layer of liquid crystal material by the application of a voltage. This effect is based on an electro-hydrodynamic instability forming what are now called "Williams domains" inside the liquid crystal.
The MOSFET (metal-oxide-semiconductor field-effect transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, and presented in 1960. Building on their work with MOSFETs, Paul K. Weimer at RCA developed the thin-film transistor (TFT) in 1962. It was a type of MOSFET distinct from the standard bulk MOSFET.
In the late 1960s, pioneering work on liquid crystals was undertaken by the UK's Royal Radar Establishment at Malvern, England. The team at RRE supported ongoing work by George William Gray and his team at the University of Hull who ultimately discovered the cyanobiphenyl liquid crystals, which had correct stability and temperature properties for application in LCDs.
In 1964, George H. Heilmeier, then working at the RCA laboratories on the effect discovered by Williams, achieved the switching of colors by field-induced realignment of dichroic dyes in a homeotropically oriented liquid crystal. Practical problems with this new electro-optical effect led Heilmeier to continue working on scattering effects in liquid crystals, culminating in the first operational liquid-crystal display, based on what he called the "dynamic scattering mode" (DSM). Application of a voltage to a DSM display switches the initially clear transparent liquid crystal layer into a milky, turbid state. DSM displays could be operated in transmissive and in reflective mode, but they required a considerable current to flow for their operation. Heilmeier was inducted into the National Inventors Hall of Fame and credited with the invention of LCDs. His work is an IEEE Milestone.
The idea of a TFT-based liquid-crystal display (LCD) was conceived by Bernard Lechner of RCA Laboratories in 1968. Lechner, F.J. Marlowe, E.O. Nester and J. Tults demonstrated the concept in 1968 with an 18x2 matrix dynamic scattering mode (DSM) LCD that used standard discrete MOSFETs.
On December 4, 1970, the twisted nematic field effect (TN) in liquid crystals was filed for patent by Hoffmann-LaRoche in Switzerland, (Swiss patent No. 532 261) with Wolfgang Helfrich and Martin Schadt (then working for the Central Research Laboratories) listed as inventors. Hoffmann-La Roche then licensed the invention to the Swiss manufacturer Brown, Boveri & Cie which produced TN displays for wristwatches and other applications during the 1970s for the international markets including the Japanese electronics industry, which soon produced the first digital quartz wristwatches with TN-LCDs and numerous other products. James Fergason, while working with Sardari Arora and Alfred Saupe at Kent State University Liquid Crystal Institute, filed an identical patent in the United States on April 22, 1971. In 1971, the company of Fergason, ILIXCO (now LXD Incorporated), produced LCDs based on the TN-effect, which soon superseded the poor-quality DSM types due to improvements of lower operating voltages and lower power consumption. Tetsuro Hama and Izuhiko Nishimura of Seiko received a US patent dated February 1971, for an electronic wristwatch incorporating a TN-LCD. In 1972, the first wristwatch with TN-LCD was launched on the market: The Gruen Teletime which was a four digit display watch.
In 1972, the concept of the active-matrix thin-film transistor (TFT) liquid-crystal display panel was prototyped in the United States by T. Peter Brody's team at Westinghouse, in Pittsburgh, Pennsylvania. In 1973, Brody, J. A. Asars and G. D. Dixon at Westinghouse Research Laboratories demonstrated the first thin-film-transistor liquid-crystal display (TFT LCD). All modern high-resolution and high-quality electronic visual display devices use TFT-based active-matrix displays. Brody and Fang-Chen Luo demonstrated the first flat active-matrix liquid-crystal display (AM LCD) in 1974, and Brody coined the term "active matrix" in 1975.
In 1972 North American Rockwell Microelectronics Corp introduced the use of DSM LCD displays for calculators for marketing by Lloyds Electronics Inc, though these required an internal light source for illumination. Sharp Corporation followed with DSM LCD displays for pocket-sized calculators in 1973 and then mass-produced TN LCD displays for watches in 1975. Other Japanese companies soon took a leading position in the wristwatch market, like Seiko and its first 6-digit TN-LCD quartz wristwatch. Color LCDs based on "Guest-Host" interaction were invented by a team at RCA in 1968. A particular type of such a color LCD was developed by Japan's Sharp Corporation in the 1970s, receiving patents for their inventions, such as a patent by Shinji Kato and Takaaki Miyazaki in May 1975, and then improved by Fumiaki Funada and Masataka Matsuura in December 1975. TFT LCDs similar to the prototypes developed by a Westinghouse team in 1972 were patented in 1976 by a team at Sharp consisting of Fumiaki Funada, Masataka Matsuura, and Tomio Wada, then improved in 1977 by a Sharp team consisting of Kohei Kishi, Hirosaku Nonomura, Keiichiro Shimizu, and Tomio Wada. However, these TFT-LCDs were not yet ready for use in products, as problems with the materials for the TFTs were not yet solved.
In 1983, researchers at Brown, Boveri & Cie (BBC) Research Center, Switzerland, invented the "super-twisted nematic (STN) structure" for passive matrix-addressed LCDs. H. Amstutz et al. were listed as inventors in the corresponding patent applications filed in Switzerland on July 7, 1983, and October 28, 1983. Patents were granted in Switzerland CH 665491, Europe EP 0131216, and many more countries. In 1980, Brown Boveri started a 50/50 joint venture with the Dutch Philips company, called Videlec. Philips had the required know-how to design and build integrated circuits for the control of large LCD panels. In addition, Philips had better access to markets for electronic components and intended to use LCDs in new product generations of hi-fi, video equipment and telephones. In 1984, Philips researchers Theodorus Welzen and Adrianus de Vaan invented a video speed-drive scheme that solved the slow response time of STN-LCDs, enabling high-resolution, high-quality, and smooth-moving video images on STN-LCDs. In 1985, Philips inventors Theodorus Welzen and Adrianus de Vaan solved the problem of driving high-resolution STN-LCDs using low-voltage (CMOS-based) drive electronics, allowing the application of high-quality (high resolution and video speed) LCD panels in battery-operated portable products like notebook computers and mobile phones. In 1985, Philips acquired 100% of the Videlec AG company based in Switzerland. Afterwards, Philips moved the Videlec production lines to the Netherlands. Years later, Philips successfully produced and marketed complete modules (consisting of the LCD screen, microphone, speakers etc.) in high-volume production for the booming mobile phone industry.
The first color LCD televisions were developed as handheld televisions in Japan. In 1980, Hattori Seiko's R&D group began development on color LCD pocket televisions. In 1982, Seiko Epson released the first LCD television, the Epson TV Watch, a wristwatch equipped with a small active-matrix LCD television. Sharp Corporation introduced dot matrix TN-LCD in 1983. In 1984, Epson released the ET-10, the first full-color, pocket LCD television. The same year, Citizen Watch, introduced the Citizen Pocket TV, a 2.7-inch color LCD TV, with the first commercial TFT LCD display. In 1988, Sharp demonstrated a 14-inch, active-matrix, full-color, full-motion TFT-LCD. This led to Japan launching an LCD industry, which developed large-size LCDs, including TFT computer monitors and LCD televisions. Epson developed the 3LCD projection technology in the 1980s, and licensed it for use in projectors in 1988. Epson's VPJ-700, released in January 1989, was the world's first compact, full-color LCD projector.
In 1990, inventors conceived, under various names, electro-optical effects as alternatives to twisted nematic field-effect LCDs (TN- and STN-LCDs). One approach was to use interdigital electrodes on one glass substrate only, producing an electric field essentially parallel to the glass substrates. To take full advantage of the properties of this in-plane switching (IPS) technology, further work was needed. After thorough analysis, details of advantageous embodiments were filed in Germany by Guenter Baur "et al." and patented in various countries. The Fraunhofer Institute ISE in Freiburg, where the inventors worked, assigned these patents to Merck KGaA, Darmstadt, a supplier of LC substances. Shortly thereafter, in 1992, engineers at Hitachi worked out various practical details of the IPS technology, interconnecting the thin-film transistor array as a matrix and avoiding undesirable stray fields between pixels. Hitachi also further improved the viewing-angle dependence by optimizing the shape of the electrodes ("Super IPS"). NEC and Hitachi became early manufacturers of active-matrix addressed LCDs based on the IPS technology. This was a milestone for implementing large-screen LCDs with acceptable visual performance for flat-panel computer monitors and television screens. In 1996, Samsung developed the optical patterning technique that enables multi-domain LCDs. Multi-domain and in-plane switching subsequently remained the dominant LCD designs through 2006. In the late 1990s, the LCD industry began shifting away from Japan, towards South Korea and Taiwan, and later to China.
In 2007 the image quality of LCD televisions surpassed that of cathode-ray-tube (CRT) TVs. In the fourth quarter of 2007, LCD televisions surpassed CRT TVs in worldwide sales for the first time. LCD TVs had been projected to account for 50% of the 200 million TVs shipped globally in 2006, according to Displaybank. In October 2011, Toshiba announced 2560 × 1600 pixels on a 6.1-inch (155 mm) LCD panel, suitable for use in a tablet computer, especially for Chinese character display. The 2010s also saw the wide adoption of TGP (Tracking Gate-line in Pixel), which moves the driving circuitry from the borders of the display to in between the pixels, allowing for narrow bezels. LCDs can be made transparent and flexible, but unlike OLED and microLED, which can also be made flexible and transparent, they cannot emit light without a backlight. Fujifilm makes special films used for increasing the viewing angles of LCD panels.
In 2016, Panasonic developed IPS LCDs with a contrast ratio of 1,000,000:1, rivaling OLEDs. This technology was later put into mass production as dual layer or dual panel LCDs. The technology uses 2 liquid crystal layers instead of one, and may be used along with a mini-LED backlight and quantum dot sheets.
Since LCDs produce no light of their own, they require external light to produce a visible image. In a transmissive type of LCD, the light source is provided at the back of the glass stack and is called a backlight. Active-matrix LCDs are almost always backlit. Passive LCDs may be backlit but many use a reflector at the back of the glass stack to utilize ambient light. Transflective LCDs combine the features of a backlit transmissive display and a reflective display.
Common implementations of LCD backlight technology include cold-cathode fluorescent lamps (CCFLs) and LEDs.
Today, most LCD screens are designed with an LED backlight instead of the traditional CCFL backlight, and that backlight is dynamically controlled with the video information (dynamic backlight control). This combination with dynamic backlight control, invented by Philips researchers Douglas Stanton, Martinus Stroomer and Adrianus de Vaan, simultaneously increases the dynamic range of the display system (also marketed as "HDR", high-dynamic-range television, or called Full-area Local Area Dimming, FLAD).
LCD backlight systems are made highly efficient by applying optical films such as prismatic structures, which direct the light into the desired viewing directions, and reflective polarizing films, which recycle the polarized light that was formerly absorbed by the first polarizer of the LCD (an approach invented by Philips researchers Adrianus de Vaan and Paulus Schaareman), generally achieved using so-called DBEF films manufactured and supplied by 3M. These polarizers consist of a large stack of uniaxially oriented birefringent films that reflect the formerly absorbed polarization mode of the light. Such reflective polarizers, using uniaxially oriented polymerized liquid crystals (birefringent polymers or birefringent glue), were invented in 1989 by Philips researchers Dirk Broer, Adrianus de Vaan and Joerg Brambring. The combination of such reflective polarizers and LED dynamic backlight control makes today's LCD televisions far more efficient than the CRT-based sets, leading to a worldwide energy saving of 600 TWh (2017), equal to 10% of the electricity consumption of all households worldwide, or twice the energy production of all solar cells in the world.
Because the LCD layer generates the desired high-resolution images at video speeds using very low-power electronics, in combination with LED-based backlight technologies, LCD technology has become the dominant display technology for products such as televisions, desktop monitors, notebooks, tablets, smartphones and mobile phones. Although competing OLED technology is being pushed to the market, OLED displays do not feature the HDR capabilities that LCDs combined with 2D LED backlight technologies have, which is why the annual market for such LCD-based products is still growing faster (in volume) than that for OLED-based products. The efficiency of LCDs (and of products like portable computers, mobile phones and televisions) could be improved further by preventing the light from being absorbed in the colour filters of the LCD. Such reflective colour filter solutions have not yet been implemented by the LCD industry and have not made it further than laboratory prototypes, but they will likely be implemented to increase efficiency compared to OLED technologies.
A standard television receiver screen, a modern LCD panel, has over six million pixels, all individually powered by a wire network embedded in the screen. The fine wires, or pathways, form a grid, with vertical wires across the whole screen on one side and horizontal wires across the whole screen on the other. To this grid each pixel has a positive connection on one side and a negative connection on the other. The total number of wires needed for a 1080p display is thus 3 × 1920 running vertically and 1080 running horizontally, for a total of 6,840 wires: three subpixel columns (red, green and blue) for each of the 1,920 pixel columns gives 5,760 wires running vertically, plus 1,080 rows of wires running horizontally. For a panel that is 28.8 inches (73 centimeters) wide, that means a wire density of 200 wires per inch along the horizontal edge. The LCD panel is powered by LCD drivers that are carefully matched up with the edge of the LCD panel at the factory. The drivers may be installed using several methods, the most common of which are COG (Chip-On-Glass) and TAB (Tape-Automated Bonding). The same principles apply to smartphone screens, which are much smaller than TV screens. LCD panels typically use thinly coated metallic conductive pathways on a glass substrate to form the cell circuitry that operates the panel. It is usually not possible to use soldering techniques to directly connect the panel to a separate copper-etched circuit board. Instead, interfacing is accomplished using anisotropic conductive film or, for lower densities, elastomeric connectors.
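The wire-count arithmetic in the paragraph above can be checked directly (the figures are those given in the text; the variable names are just for illustration):

```python
# Wiring for a 1080p RGB panel: one column line per subpixel column,
# one row line per pixel row.
width_px, height_px, subpixels = 1920, 1080, 3

column_wires = width_px * subpixels      # 3 x 1920 = 5760 vertical lines
row_wires = height_px                    # 1080 horizontal lines
total = column_wires + row_wires         # 6840 wires in all

panel_width_in = 28.8                    # panel width from the text
density = column_wires / panel_width_in  # 200 wires per inch along the edge

print(total, density)
```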
Monochrome and later color passive-matrix LCDs were standard in most early laptops (although a few used plasma displays) and the original Nintendo Game Boy until the mid-1990s, when color active-matrix became standard on all laptops. The commercially unsuccessful Macintosh Portable (released in 1989) was one of the first to use an active-matrix display (though still monochrome). Passive-matrix LCDs are still used in the 2010s for applications less demanding than laptop computers and TVs, such as inexpensive calculators. In particular, these are used on portable devices where less information content needs to be displayed, lowest power consumption (no backlight) and low cost are desired or readability in direct sunlight is needed.
Displays with a passive-matrix structure employ "super-twisted nematic" STN technology (invented by the Brown Boveri Research Center, Baden, Switzerland, in 1983; scientific details were published), double-layer STN (DSTN), which addresses a color-shifting problem with the former, or color-STN (CSTN), in which color is added by using an internal filter. STN LCDs have been optimized for passive-matrix addressing. They exhibit a sharper threshold of the contrast-vs-voltage characteristic than the original TN LCDs. This is important, because pixels are subjected to partial voltages even while not selected. Crosstalk between activated and non-activated pixels has to be handled properly by keeping the RMS voltage of non-activated pixels below the threshold voltage, as discovered by Peter J. Wild in 1972, while activated pixels are subjected to voltages above threshold (the voltages according to the "Alt & Pleshko" drive scheme). Driving STN displays according to the Alt & Pleshko drive scheme requires very high line addressing voltages. Welzen and de Vaan invented an alternative (non "Alt & Pleshko") drive scheme requiring much lower voltages, so that the STN display could be driven using low-voltage CMOS technologies. STN LCDs have to be continuously refreshed by alternating pulsed voltages of one polarity during one frame and pulses of opposite polarity during the next frame. Individual pixels are addressed by the corresponding row and column circuits. This type of display is called "passive-matrix addressed", because the pixel must retain its state between refreshes without the benefit of a steady electrical charge. As the number of pixels (and, correspondingly, columns and rows) increases, this type of display becomes less feasible. Slow response times and poor contrast are typical of passive-matrix addressed LCDs with too many pixels driven according to the "Alt & Pleshko" drive scheme.
Welzen and de Vaan also invented a non-RMS drive scheme that made it possible to drive STN displays at video rates and show smooth moving video images on an STN display. Citizen, amongst others, licensed these patents and successfully introduced several STN-based LCD pocket televisions to the market.
Bistable LCDs do not require continuous refreshing; rewriting is only required when the picture information changes. In 1984 HA van Sprang and AJSM de Vaan invented an STN-type display that could be operated in a bistable mode, enabling extremely high-resolution images of up to 4,000 lines or more using only low voltages. Since a pixel may be in either an on state or an off state at the moment new information needs to be written to it, the addressing method of these bistable displays is rather complex, which is why they did not make it to the market. That changed in the 2010s, when "zero-power" (bistable) LCDs became available. Potentially, passive-matrix addressing can be used with such devices if their write/erase characteristics are suitable, which was the case for e-books showing still pictures only. After a page is written to the display, the display may be disconnected from power while that information remains readable. This has the advantage that such e-books can run for a long time on just a small battery. High-resolution color displays, such as modern LCD computer monitors and televisions, use an active-matrix structure. A matrix of thin-film transistors (TFTs) is added to the electrodes in contact with the LC layer. Each pixel has its own dedicated transistor, allowing each column line to access one pixel. When a row line is selected, all of the column lines are connected to a row of pixels and voltages corresponding to the picture information are driven onto all of the column lines. The row line is then deactivated and the next row line is selected. All of the row lines are selected in sequence during a refresh operation. Active-matrix addressed displays look brighter and sharper than passive-matrix addressed displays of the same size, and generally have quicker response times, producing much better images.
Sharp produces bistable reflective LCDs with a 1-bit SRAM cell per pixel that only requires small amounts of power to maintain an image.
Segment LCDs can also produce color by using field-sequential color (FSC LCD). This kind of display has a high-speed passive segment LCD panel with an RGB backlight. The backlight quickly changes color, making it appear white to the naked eye, and the LCD panel is synchronized with the backlight. For example, to make a segment appear red, the segment is turned ON only while the backlight is red; to make a segment appear magenta, the segment is turned ON while the backlight is blue, remains ON while the backlight becomes red, and turns OFF when the backlight becomes green. To make a segment appear black, the segment is simply never turned ON. An FSC LCD divides a color image into three images (one red, one green and one blue) and displays them in order. Due to persistence of vision, the three monochromatic images appear as one color image. An FSC LCD needs an LCD panel with a refresh rate of 180 Hz, and the response time is reduced to just 5 milliseconds, compared with normal STN LCD panels, which have a response time of 16 milliseconds. FSC LCDs contain a chip-on-glass driver IC and can also be used with a capacitive touchscreen.
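The field-sequencing logic can be sketched as a small lookup, under the convention that a segment turned ON transmits the backlight (the table and function names below are hypothetical, written only to illustrate the scheme):

```python
# Sketch of field-sequential color: each frame is split into red, green and
# blue backlight fields, and a segment is ON only during the fields whose
# primaries make up its target color.

FIELDS = ("red", "green", "blue")

COLOR_FIELDS = {
    "black":   set(),                     # never transmits any field
    "red":     {"red"},
    "green":   {"green"},
    "blue":    {"blue"},
    "magenta": {"red", "blue"},
    "cyan":    {"green", "blue"},
    "yellow":  {"red", "green"},
    "white":   {"red", "green", "blue"},  # transmits every field
}

def segment_states(target):
    """ON/OFF state of one segment during each backlight field."""
    return [(field, field in COLOR_FIELDS[target]) for field in FIELDS]

print(segment_states("magenta"))
# [('red', True), ('green', False), ('blue', True)]
```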
Samsung introduced UFB (Ultra Fine & Bright) displays in 2002, which utilized the super-birefringent effect. According to Samsung, they had the luminance, color gamut, and most of the contrast of a TFT-LCD, but consumed only as much power as an STN display. UFB displays were used in a variety of Samsung cellular-telephone models until late 2006, when Samsung stopped producing them; they were also used in certain models of LG mobile phones.
Twisted nematic displays contain liquid crystals that twist and untwist at varying degrees to allow light to pass through. When no voltage is applied to a TN liquid crystal cell, polarized light passes through the 90-degree-twisted LC layer. In proportion to the voltage applied, the liquid crystals untwist, changing the polarization and blocking the light's path. By properly adjusting the level of the voltage, almost any gray level or transmission can be achieved. FSTN (film-compensated super-twisted nematic) can improve image sharpness.
In-plane switching is an LCD technology that aligns the liquid crystals in a plane parallel to the glass substrates. In this method, the electrical field is applied through opposite electrodes on the same glass substrate, so that the liquid crystals can be reoriented (switched) essentially in the same plane, although fringe fields inhibit a homogeneous reorientation. This requires two transistors for each pixel instead of the single transistor needed for a standard thin-film transistor (TFT) display. Before LG introduced Enhanced IPS in 2009, the additional transistors blocked more of the transmission area, requiring a brighter backlight and consuming more power, which made this type of display less desirable for notebook computers. Panasonic currently uses an enhanced version, eIPS, for its large-size LCD-TV products, as does Hewlett-Packard in its WebOS-based TouchPad tablet and its Chromebook 11.
In 2011, LG claimed the smartphone LG Optimus Black (IPS LCD, marketed as the NOVA display) had a brightness of up to 700 nits, compared with a competitor's IPS LCD at 518 nits, and roughly double that of an active-matrix OLED (AMOLED) display at 305 nits. LG also claimed the NOVA display to be 50 percent more efficient than regular LCDs and to consume only 50 percent of the power of AMOLED displays when producing white on screen. When it comes to contrast ratio, AMOLED displays still perform best due to their underlying technology, where black levels are displayed as pitch black and not as dark gray. On August 24, 2011, Nokia announced the Nokia 701 and also claimed the world's brightest display at 1000 nits. The screen also had Nokia's Clearblack layer, improving the contrast ratio and bringing it closer to that of AMOLED screens.
Super-IPS was introduced later, with even better response times and color reproduction.
Known as fringe field switching (FFS) until 2003, advanced fringe field switching is similar to IPS or S-IPS, offering superior performance and color gamut with high luminosity. AFFS was developed by Hydis Technologies Co., Ltd, Korea (formerly Hyundai Electronics, LCD Task Force). AFFS-applied notebook applications minimize color distortion while maintaining a wider viewing angle for a professional display. Color shift and deviation caused by light leakage is corrected by optimizing the white gamut, which also enhances white/gray reproduction. In 2004, Hydis Technologies Co., Ltd licensed AFFS to Japan's Hitachi Displays. Hitachi uses AFFS to manufacture high-end panels. In 2006, HYDIS licensed AFFS to Sanyo Epson Imaging Devices Corporation. Shortly thereafter, Hydis introduced a high-transmittance evolution of the AFFS display, called HFFS (FFS+). Hydis introduced AFFS+ with improved outdoor readability in 2007. AFFS panels are mostly utilized in the cockpit displays of the latest commercial aircraft. However, it is no longer produced as of February 2015.
Vertical-alignment displays are a form of LCDs in which the liquid crystals naturally align vertically to the glass substrates. When no voltage is applied, the liquid crystals remain perpendicular to the substrate, creating a black display between crossed polarizers. When voltage is applied, the liquid crystals shift to a tilted position, allowing light to pass through and create a gray-scale display depending on the amount of tilt generated by the electric field. It has a deeper-black background, a higher contrast ratio, a wider viewing angle, and better image quality at extreme temperatures than traditional twisted-nematic displays. Compared to IPS, the black levels are still deeper, allowing for a higher contrast ratio, but the viewing angle is narrower, with color and especially contrast shift being more apparent.
Blue phase mode LCDs have been shown as engineering samples early in 2008, but they are not in mass-production. The physics of blue phase mode LCDs suggest that very short switching times (≈1 ms) can be achieved, so time sequential color control can possibly be realized and expensive color filters would be obsolete.
Some LCD panels have defective transistors, causing permanently lit or unlit pixels which are commonly referred to as stuck pixels or dead pixels respectively. Unlike integrated circuits (ICs), LCD panels with a few defective transistors are usually still usable. Manufacturers' policies for the acceptable number of defective pixels vary greatly. At one point, Samsung held a zero-tolerance policy for LCD monitors sold in Korea. As of 2005, though, Samsung adheres to the less restrictive ISO 13406-2 standard. Other companies have been known to tolerate as many as 11 dead pixels in their policies.
Dead pixel policies are often hotly debated between manufacturers and customers. To regulate the acceptability of defects and to protect the end user, ISO released the ISO 13406-2 standard, which was made obsolete in 2008 with the release of ISO 9241, specifically ISO 9241-302, 303, 305 and 307:2008 on pixel defects. However, not every LCD manufacturer conforms to the ISO standard, and the standard is quite often interpreted in different ways. LCD panels are more likely to have defects than most ICs due to their larger size. For example, a 300 mm SVGA LCD with 8 defects would be rejected outright (a 0% yield), whereas a 150 mm wafer with only 3 defects would still yield 134 acceptable dies out of 137. In recent years, quality control has improved. An SVGA LCD panel with 4 defective pixels is usually considered defective, and customers can request an exchange for a new one. Some manufacturers, notably in South Korea, where some of the largest LCD panel manufacturers such as LG are located, now offer a zero-defective-pixel guarantee, backed by an extra screening process which sorts panels into "A" and "B" grades. Many manufacturers will replace a product with even one defective pixel. Even where such guarantees do not exist, the location of defective pixels is important: a display with only a few defective pixels may be unacceptable if they are near each other. LCD panels also have defects known as "clouding" (or, less commonly, "mura"), uneven patches of change in luminance that are most visible in dark or black areas of displayed scenes. As of 2010, most premium-branded computer LCD panel manufacturers specify their products as having zero defects.
The zenithal bistable device (ZBD), developed by Qinetiq (formerly DERA), can retain an image without power. The crystals may exist in one of two stable orientations ("black" and "white") and power is only required to change the image. ZBD Displays is a spin-off company from QinetiQ who manufactured both grayscale and color ZBD devices. Kent Displays has also developed a "no-power" display that uses polymer stabilized cholesteric liquid crystal (ChLCD). In 2009 Kent demonstrated the use of a ChLCD to cover the entire surface of a mobile phone, allowing it to change colors, and keep that color even when power is removed.
In 2004 researchers at the University of Oxford demonstrated two new types of zero-power bistable LCDs based on Zenithal bistable techniques. Several bistable technologies, like the 360° BTN and the bistable cholesteric, depend mainly on the bulk properties of the liquid crystal (LC) and use standard strong anchoring, with alignment films and LC mixtures similar to the traditional monostable materials. Other bistable technologies, "e.g.", BiNem technology, are based mainly on the surface properties and need specific weak anchoring materials.
Some of these issues relate to full-screen displays, others to small displays such as those on watches. Many of the comparisons are with CRT displays.
Several different families of liquid crystals are used in liquid crystal displays. The molecules used have to be anisotropic and to exhibit mutual attraction. Polarizable rod-shaped molecules (biphenyls, terphenyls, etc.) are common. A common form is a pair of aromatic benzene rings, with a nonpolar moiety (pentyl, heptyl, octyl, or alkyl oxy group) on one end and a polar one (nitrile, halogen) on the other. Sometimes the benzene rings are separated with an acetylene group, ethylene, CH=N, CH=NO, N=N, N=NO, or ester group. In practice, eutectic mixtures of several chemicals are used to achieve a wider operating temperature range (−10..+60 °C for low-end and −20..+100 °C for high-performance displays). For example, the E7 mixture is composed of three biphenyls and one terphenyl: 39 wt.% of 4'-pentyl[1,1'-biphenyl]-4-carbonitrile (nematic range 24..35 °C), 36 wt.% of 4'-heptyl[1,1'-biphenyl]-4-carbonitrile (nematic range 30..43 °C), 16 wt.% of 4'-octoxy[1,1'-biphenyl]-4-carbonitrile (nematic range 54..80 °C), and 9 wt.% of 4"-pentyl[1,1':4',1"-terphenyl]-4-carbonitrile (nematic range 131..240 °C).
The production of LCD screens uses nitrogen trifluoride (NF3) as an etching fluid during the production of the thin-film components. NF3 is a potent greenhouse gas, and its relatively long half-life may make it a potentially harmful contributor to global warming. A report in "Geophysical Research Letters" suggested that its effects were theoretically much greater than better-known sources of greenhouse gases like carbon dioxide. As NF3 was not in widespread use at the time, it was not covered by the Kyoto Protocol and has been deemed "the missing greenhouse gas".
Critics of the report point out that it assumes that all of the NF3 produced would be released to the atmosphere. In reality, the vast majority of NF3 is broken down during the cleaning processes; two earlier studies found that only 2 to 3% of the gas escapes destruction after its use. Furthermore, the report failed to compare NF3's effects with what it replaced, perfluorocarbon, another powerful greenhouse gas, of which anywhere from 30 to 70% escapes to the atmosphere in typical use.
Latency (engineering)
Latency (known within gaming circles as lag) is the time interval between a stimulation and its response, or, from a more general point of view, a time delay between the cause and the effect of some physical change in the system being observed. Latency is physically a consequence of the limited velocity at which any physical interaction can propagate. The magnitude of this velocity is always less than or equal to the speed of light. Therefore, every physical system with any physical separation (distance) between cause and effect will experience some sort of latency, regardless of the nature of the stimulation to which it has been exposed.
The precise definition of latency depends on the system being observed and the nature of stimulation. In communications, the lower limit of latency is determined by the medium being used. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human–machine interaction, perceptible latency has a strong effect on user satisfaction and usability.
Online games are sensitive to latency (or "lag"), since fast response times to new events occurring during a game session are rewarded while slow response times may carry penalties. Due to a delay in transmission of game events, a player with a high latency internet connection may show slow responses in spite of appropriate reaction time. This gives players with low latency connections a technical advantage.
Minimizing latency is of interest in the capital markets, particularly where algorithmic trading is used to process market updates and turn around orders within milliseconds. Low-latency trading occurs on the networks used by financial institutions to connect to stock exchanges and electronic communication networks (ECNs) to execute financial transactions. Joel Hasbrouck and Gideon Saar (2011) measure latency based on three components: the time it takes for information to reach the trader, execution of the trader's algorithms to analyze the information and decide a course of action, and the generated action to reach the exchange and get implemented. Hasbrouck and Saar contrast this with the way in which latencies are measured by many trading venues who use much more narrow definitions, such as, the processing delay measured from the entry of the order (at the vendor's computer) to the transmission of an acknowledgement (from the vendor's computer). Electronic trading now makes up 60% to 70% of the daily volume on the New York Stock Exchange and algorithmic trading close to 35%. Trading using computers has developed to the point where millisecond improvements in network speeds offer a competitive advantage for financial institutions.
Network latency in a packet-switched network is measured as either one-way (the time from the source sending a packet to the destination receiving it) or round-trip delay time (the one-way latency from source to destination plus the one-way latency from the destination back to the source). Round-trip latency is more often quoted, because it can be measured from a single point. Note that round-trip latency excludes the amount of time that a destination system spends processing the packet. Many software platforms provide a service called ping that can be used to measure round-trip latency. Ping uses the Internet Control Message Protocol (ICMP) "echo request", which causes the recipient to send the received packet back as an immediate response, thus providing a rough way of measuring round-trip delay time. Ping cannot perform accurate measurements, principally because ICMP is intended only for diagnostic or control purposes and differs from real communication protocols such as TCP. Furthermore, routers and internet service providers might apply different traffic shaping policies to different protocols. For more accurate measurements it is better to use specific software, for example hping, Netperf or Iperf.
However, in a non-trivial network, a typical packet will be forwarded over multiple links and gateways, each of which will not begin to forward the packet until it has been completely received. In such a network, the minimal latency is the sum of the transmission delay of each link, plus the forwarding latency of each gateway. In practice, minimal latency also includes queuing and processing delays. Queuing delay occurs when a gateway receives multiple packets from different sources heading towards the same destination. Since typically only one packet can be transmitted at a time, some of the packets must queue for transmission, incurring additional delay. Processing delays are incurred while a gateway determines what to do with a newly received packet. Bufferbloat can also cause increased latency that is an order of magnitude or more. The combination of propagation, serialization, queuing, and processing delays often produces a complex and variable network latency profile.
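The additive model described above can be sketched in a few lines; the numbers and the `path_latency` helper are hypothetical, chosen only to illustrate how per-link serialization, propagation, and per-gateway forwarding delays sum:

```python
# Minimal one-way latency of a store-and-forward path: the sum of each
# link's serialization and propagation delay plus each gateway's
# forwarding delay (queuing delay is ignored here for simplicity).

def path_latency(packet_bits, links, forwarding_delay_s):
    """links: list of (bandwidth_bit_per_s, propagation_delay_s) tuples."""
    total = 0.0
    for bandwidth, propagation in links:
        total += packet_bits / bandwidth   # serialization onto each link
        total += propagation               # propagation along each link
    total += forwarding_delay_s * (len(links) - 1)  # intermediate gateways
    return total

# A 1500-byte packet over three 100 Mbit/s links with 1 ms propagation
# each, and 50 us of forwarding delay per gateway:
print(path_latency(1500 * 8, [(100e6, 1e-3)] * 3, 50e-6))  # ~3.46 ms
```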
Latency limits total throughput in reliable two-way communication systems as described by the bandwidth-delay product.
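A quick illustration of the bandwidth-delay product (the function and figures are illustrative, not from the text): with a fixed amount of data allowed in flight per round trip, throughput is capped at window size divided by round-trip time.

```python
# Throughput ceiling imposed by latency in a windowed, reliable protocol.

def max_throughput(window_bytes, rtt_s):
    """Upper bound on throughput (bytes/s) when at most window_bytes
    of unacknowledged data may be in flight per round trip."""
    return window_bytes / rtt_s

# A classic 64 KiB TCP window over a 100 ms round trip caps throughput
# at about 5.2 Mbit/s, regardless of how fast the link itself is:
bps = max_throughput(65536, 0.100) * 8
print(bps / 1e6)   # ~5.24 Mbit/s
```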
Latency in optical fiber is largely a function of the speed of light, which is 299,792,458 meters/second in vacuum. This would equate to a latency of 3.33 µs for every kilometer of path length. The index of refraction of most fiber optic cables is about 1.5, meaning that light travels about 1.5 times as fast in a vacuum as it does in the cable. This works out to about 5.0 µs of latency for every kilometer. In shorter metro networks, higher latency can be experienced due to extra distance in building risers and cross-connects. To calculate the latency of a connection, one has to know the distance traveled by the fiber, which is rarely a straight line, since it has to traverse geographic contours and obstacles, such as roads and railway tracks, as well as other rights-of-way.
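The per-kilometre figure above follows directly from the speed of light and the refractive index; a small sketch (the helper name and the 5,600 km example path are assumptions for illustration):

```python
# Fiber latency: light in glass travels at roughly c / n, giving about
# 5 us of one-way delay per kilometre for a refractive index near 1.5.

C = 299_792_458.0  # speed of light in vacuum, m/s

def fiber_latency_s(path_km, refractive_index=1.5):
    """One-way latency over a fiber path of the given length."""
    speed = C / refractive_index       # ~2e8 m/s inside the glass
    return path_km * 1000.0 / speed

print(fiber_latency_s(1) * 1e6)        # ~5.0 us per kilometre
print(fiber_latency_s(5600) * 1e3)     # ~28 ms over a 5,600 km path
```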
Due to imperfections in the fiber, light degrades as it is transmitted through it. For distances of greater than 100 kilometers, amplifiers or regenerators are deployed. Latency introduced by these components needs to be taken into account.
Satellites in geostationary orbits are far enough away from Earth that communication latency becomes significant – about a quarter of a second for a trip from one ground-based transmitter to the satellite and back to another ground-based transmitter; close to half a second for two-way communication from one Earth station to another and then back to the first. Low Earth orbit is sometimes used to cut this delay, at the expense of more complicated satellite tracking on the ground and requiring more satellites in the satellite constellation to ensure continuous coverage.
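The quarter-second figure can be reproduced from the geometry alone (free-space propagation only, ignoring processing delays and assuming the shortest path, straight up and down):

```python
# Propagation delay for one geostationary-satellite hop: up to the
# satellite and back down to another ground station.

C_KM_S = 299_792.458        # speed of light, km/s
GEO_ALTITUDE_KM = 35_786    # geostationary altitude above the equator

def hop_latency_s(altitude_km=GEO_ALTITUDE_KM):
    """Up-and-down propagation delay for a single satellite hop."""
    return 2 * altitude_km / C_KM_S

print(hop_latency_s())       # ~0.24 s, one ground-satellite-ground trip
print(2 * hop_latency_s())   # ~0.48 s for a two-way exchange
```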
Audio latency is the delay between when an audio signal enters and when it emerges from a system. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in air.
Any individual workflow within a system of workflows can be subject to some type of operational latency. It may even be the case that an individual system may have more than one type of latency, depending on the type of participant or goal-seeking behavior. This is best illustrated by the following two examples involving air travel.
From the point of view of a passenger, latency can be described as follows. Suppose John Doe flies from London to New York. The latency of his trip is the time it takes him to go from his house in England to the hotel he is staying at in New York. This is independent of the throughput of the London-New York air link – whether there were 100 passengers a day making the trip or 10000, the latency of the trip would remain the same.
From the point of view of flight operations personnel, latency can be entirely different. Consider the staff at the London and New York airports. Only a limited number of planes are able to make the transatlantic journey, so when one lands they must prepare it for the return trip as quickly as possible. It might take, for example:
Assuming the above are done consecutively, minimum plane turnaround time is:
However, cleaning, refueling and loading the cargo can be done at the same time. Passengers can be loaded after cleaning is complete. The reduced latency, then, is:
The people involved in the turnaround are interested only in the time it takes for their individual tasks. When all of the tasks are done at the same time, however, it is possible to reduce the latency to the length of the longest task. If some steps have prerequisites, it becomes more difficult to perform all steps in parallel. In the example above, the requirement to clean the plane before loading passengers results in a minimum latency longer than any single task.
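The serial-versus-parallel point can be made concrete with hypothetical task times (the minute figures below are invented for illustration and do not come from the text):

```python
# Hypothetical turnaround task durations, in minutes.
TASKS_MIN = {"clean": 15, "refuel": 40, "load_cargo": 25, "load_passengers": 20}

# Done one after another, latency is the sum of every task:
serial = sum(TASKS_MIN.values())   # 100 minutes

# Refueling and cargo loading run alongside the clean-then-board chain,
# so parallel latency collapses to the longest path through the tasks:
parallel = max(
    TASKS_MIN["refuel"],
    TASKS_MIN["load_cargo"],
    TASKS_MIN["clean"] + TASKS_MIN["load_passengers"],  # prerequisite chain
)                                  # 40 minutes

print(serial, parallel)            # 100 40
```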
Any mechanical process encounters limitations modeled by Newtonian physics. The behavior of disk drives provides an example of mechanical latency. Here, it is the seek time for the actuator arm to be positioned above the appropriate track, plus the rotational latency for the data encoded on a platter to rotate from its current position to a position under the disk read-and-write head.
Computers run instructions in the context of a process. In the context of computer multitasking, the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system schedules the process for each transition (high-low or low-high) based on a hardware clock such as the High Precision Event Timer. The latency is the delay between the events generated by the hardware clock and the actual transitions of voltage from high to low or low to high.
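A small experiment makes this scheduling latency visible: request a short sleep repeatedly and measure how much later than requested each wakeup arrives. The overshoot approximates the delay the paragraph describes; actual numbers depend entirely on the operating system and hardware.

```python
import time

def wakeup_latencies(period_s=0.001, samples=50):
    """Per-iteration overshoot (seconds) beyond the requested sleep."""
    overshoots = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(period_s)                 # ask the OS for a 1 ms sleep
        elapsed = time.perf_counter() - start
        overshoots.append(elapsed - period_s)  # how late the wakeup was
    return overshoots

lats = wakeup_latencies()
print(f"worst wakeup overshoot: {max(lats) * 1e6:.0f} us")
```

On a general-purpose desktop OS the overshoot is typically well above zero, which is exactly the limitation that real-time extensions such as PREEMPT_RT (mentioned below) aim to reduce.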
Many desktop operating systems have performance limitations which create additional latency. The problem may be mitigated with real-time extensions and patches such as PREEMPT_RT.
On embedded systems, the real-time execution of instructions is often supported by a real-time operating system.
In simulation applications, latency refers to the time delay, often measured in milliseconds, between initial input and output clearly discernible to the simulator trainee or simulator subject. Latency is sometimes also called "transport delay". Some authorities distinguish between latency and transport delay by using the term "latency" in the sense of the extra time delay of a system over and above the reaction time of the vehicle being simulated, but this requires detailed knowledge of the vehicle dynamics and can be controversial.
In simulators with both visual and motion systems, it is particularly important that the latency of the motion system not be greater than that of the visual system, or symptoms of simulator sickness may result. This is because, in the real world, motion cues are those of acceleration and are quickly transmitted to the brain, typically in less than 50 milliseconds; this is followed some milliseconds later by a perception of change in the visual scene. The visual scene change is essentially one of change of perspective or displacement of objects such as the horizon, which takes some time to build up to discernible amounts after the initial acceleration which caused the displacement. A simulator should, therefore, reflect the real-world situation by ensuring that the motion latency is equal to or less than that of the visual system, and not the other way round.
History of public transport authorities in London
The history of public transport authorities in London details the various organisations that were responsible for the public transport network in and around London, England, from 1933 until 2000 and used the London Transport brand. Their responsibilities encompassed the buses, coaches, trams and the London Underground. The period began with the creation of the London Passenger Transport Board, which covered the County of London and adjacent counties within a 30-mile (48-km) radius. This area later came under the control of the London Transport Executive and then the London Transport Board. The area of responsibility was reduced to that of the Greater London administrative area in 1970, when the Greater London Council, and then London Regional Transport, took over responsibility. Since 2000, the Greater London Authority has been the transport authority and the executive agency has been called Transport for London, ending the 67-year use of the "London Transport" name.
Prior to 1933, the ownership and management of the transport system in London was distributed among a large number of independent and separate organisations. The Underground railway system had been developed and was owned by the Underground Electric Railways Company of London (UERL) and the Metropolitan Railway. Tram and trolleybus networks were owned by various local authorities and public companies, and buses were owned by numerous companies. Many of these services were in competition with one another, leading to wasteful duplication. The London County Council managed tram operations within the County of London, but its responsibility did not extend to the bus or tram routes that ran outside its area, or to the railways, which also extended into neighbouring counties. A Royal Commission on London Government in the 1920s did not permit the London County Council to extend its area of responsibility, and an ad hoc London Traffic Area was created to regulate motor traffic in the wider London region. In the 1930s another ad hoc solution was sought to improve the control and coordination of public transport.
The London Passenger Transport Board (LPTB) was the transport authority from 1 July 1933 to 31 December 1947. It unified services in the London area for the first time. The London Passenger Transport Act 1933 removed responsibility for tram routes from the London County Council, three county boroughs and a number of other local authorities in the Greater London area. It brought the UERL lines under the same control, and took over supervision of buses from the Metropolitan Police. The area of responsibility of the LPTB was far greater than the current Greater London boundaries and was known as the London Passenger Transport Area. The period saw massive expansion of the tube network and was directly responsible for the expansion of the suburbs. The extensive New Works Programme was halted by World War II, with some projects abandoned and others completed after the end of hostilities. The roundel symbol designed in 1918 was adopted by the London Passenger Transport Board, and the London Transport brand and architectural style were perfected during this period. The iconic tube map, designed in 1931, was published in 1933.
The London Transport Executive (LTE) was the transport authority from 1 January 1948 to 31 December 1962. London Transport was taken into public ownership and became part of the British Transport Commission, which brought London Transport and British Railways under the same control for the first and last time. The period saw the start of direct recruitment from the Caribbean and the repair and replacement of stock and stations damaged during the war as well as completion of delayed projects such as the Central line eastern extension. The AEC Routemaster bus was introduced in 1956. Trams were withdrawn in 1952 and trolleybuses in 1962.
The London Transport Board was the transport authority from 1 January 1963 to 31 December 1969. It reported directly to the Minister of Transport, ending its direct association with the management of British Railways. During this period many of Britain's unprofitable railways were closed down, but as most routes in the capital were widely used, the Beeching Axe had little effect in London. There was, however, little investment in public transport, and the motor car increased in popularity. The Victoria line was opened during this period - although work had started in the early 1960s - and the AEC Merlin single-deck bus was introduced.
The Greater London Council was the transport authority from 1 January 1970 to 28 June 1984 and the executive agency was called the London Transport Executive. The legislation creating the Greater London Council (GLC) was already passed in 1963 when the London Transport Board was created. However, control did not pass to the new authority until 1 January 1970. The GLC broadly controlled only those services within the boundaries of Greater London. The (green painted) country buses and Green Line Coaches had been passed in 1969 to a new company, London Country Bus Services, which in 1970 became part of the National Bus Company. The period is perhaps the most controversial in London's transport history and there was a severe lack of funding from central government and staff shortages.
The inter-modal zonal ticketing system currently used by Transport for London originated in this period. Following the Greater London Council election in 1981, the incoming Labour administration simplified fares in Greater London by introducing four new bus fare zones and two central London Underground zones, named "City" and "West End", where flat fares applied for the first time. This was accompanied by a cut in prices of about a third and was marketed as the "Fares Fair" campaign. Following successful legal action against it, London bus fares were doubled on 21 March 1982 and London Underground fares increased by 91%. The two central area zones were retained and the fares to all other stations were restructured to be graduated at three-mile intervals. In 1983, a third revision of fares was undertaken, and a new inter-modal Travelcard season ticket was launched covering five new numbered zones, representing an overall cut in prices of around 25%. The "One Day Travelcard" was launched in 1984 and on weekdays was only sold for travel after 09.30.
London Regional Transport was the transport authority from 29 June 1984 to 2 July 2000. The GLC was abolished in 1986, with responsibility for public transport having been removed two years earlier, in 1984. The new authority, London Regional Transport (LRT), again came under direct state control, reporting to the Secretary of State for Transport. The London Regional Transport Act contained provision for setting up subsidiary companies to run the Underground and bus services, and in 1985 London Underground Limited (LUL), a wholly owned subsidiary of London Regional Transport, was set up to manage the tube network. In 1988 ten individual line business units were created to manage the network. London Buses Limited was constituted to progress the privatisation of London bus services. London Transport was converted to a route operating contract tendering authority, and the former bus operating interests and assets of London Transport were split into 12 business units under the banner "London Buses". The 12 units competed for contracts with private operators from 1984, and were all sold off by 1994/5, becoming private operators themselves.
Further amendments to the fare system were made during this period, including the integration of the separately managed British Rail services. In January 1985 the "Capitalcard" season ticket was launched, offering validity on British Rail as well as London Underground and London Buses. It was priced around 10-15% higher than the Travelcard. In June 1986 the "One Day Capitalcard" was launched. The Capitalcard brand ended in January 1989 when the Travelcard gained validity on British Rail. In January 1991 Zone 5 was split to create a new Zone 6. The Docklands Light Railway was opened on 31 August 1987 and was included in the zonal Travelcard ticketing scheme.
The Greater London Authority, a replacement authority for the GLC, was set up in 2000 with a transport executive called Transport for London (TfL) that took control from 3 July 2000. It is the first London transport authority since 1933 not to be commonly called "London Transport". The London Underground did not pass to TfL until after a Private Finance Initiative (PFI) agreement for maintenance was completed in 2003.
https://en.wikipedia.org/wiki?curid=17937
Light
Light or visible light is electromagnetic radiation within the portion of the electromagnetic spectrum that can be perceived by the human eye. Visible light is usually defined as having wavelengths in the range of 400–700 nanometers (nm), or 4.00 × 10^-7 to 7.00 × 10^-7 m, between the infrared (with longer wavelengths) and the ultraviolet (with shorter wavelengths). This wavelength range corresponds to frequencies of roughly 430–750 terahertz (THz).
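The wavelength-to-frequency correspondence follows from f = c / λ. A minimal sketch (the function name is illustrative, not from the text):

```python
# Convert the visible-wavelength band to frequencies via f = c / wavelength.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_to_frequency_thz(wavelength_nm):
    """Return the frequency in terahertz for a vacuum wavelength in nanometres."""
    wavelength_m = wavelength_nm * 1e-9
    return C / wavelength_m / 1e12

# The 400-700 nm band maps to roughly 430-750 THz, as quoted above.
print(round(wavelength_to_frequency_thz(700)))  # red end, ~428 THz
print(round(wavelength_to_frequency_thz(400)))  # violet end, ~749 THz
```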
The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire, from ancient campfires to modern kerosene lamps. With the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence. For example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey.
The primary properties of visible light are intensity, propagation direction, frequency or wavelength spectrum, and polarization, while its speed in a vacuum, 299,792,458 meters per second, is one of the fundamental constants of nature. Visible light, as with all types of electromagnetic radiation (EMR), is experimentally found to always move at this speed in a vacuum.
In physics, the term "light" sometimes refers to electromagnetic radiation of any wavelength, whether visible or not. In this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of EM radiation, visible light propagates as waves. However, the energy imparted by the waves is absorbed at single locations the way particles are absorbed. The absorbed energy of the EM waves is called a photon, and represents the quanta of light. When a wave of light is transformed and absorbed as a photon, the energy of the wave instantly collapses to a single location, and this location is where the photon "arrives." This is what is called the wave function collapse. This dual wave-like and particle-like nature of light is known as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics.
Generally, EM radiation (the designation "radiation" excludes static electric, magnetic, and near fields), or EMR, is classified by wavelength into radio waves, microwaves, infrared, the visible spectrum that we perceive as light, ultraviolet, X-rays, and gamma rays.
The behavior of EMR depends on its wavelength. Higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries.
EMR in the visible light region consists of quanta (called photons) that are at the lower end of the energies capable of causing electronic excitation within molecules, which leads to changes in the bonding or chemistry of the molecule. At the lower end of the visible light spectrum, EMR becomes invisible to humans (infrared) because its photons no longer have enough individual energy to cause a lasting molecular change (a change in conformation) in retinal, the visual molecule in the human retina whose change of shape triggers the sensation of vision.
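The energy per quantum is E = hc/λ, so red photons carry less energy than violet ones. A short sketch making the per-photon energies concrete (the electronvolt values are standard physics, not stated in the text):

```python
# Photon energy E = h*c / wavelength, expressed in electronvolts.
H = 6.62607015e-34    # Planck constant, J*s
C = 299_792_458.0     # speed of light in vacuum, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

# Visible photons span roughly 1.8-3.1 eV; infrared photons fall below this.
print(round(photon_energy_ev(700), 2))  # red, ~1.77 eV
print(round(photon_energy_ev(400), 2))  # violet, ~3.1 eV
```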
There exist animals that are sensitive to various types of infrared, but not by means of quantum-absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, which is how these animals detect it.
Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nm and the internal lens below 400 nm. Furthermore, the rods and cones located in the retina of the human eye cannot detect the very short (below 360 nm) ultraviolet wavelengths and are in fact damaged by ultraviolet. Many animals with eyes that do not require lenses (such as insects and shrimp) are able to detect ultraviolet, by quantum photon-absorption mechanisms, in much the same chemical way that humans detect visible light.
Various sources define visible light as narrowly as 420–680 nm to as broadly as 380–800 nm. Under ideal laboratory conditions, people can see infrared up to at least 1050 nm; children and young adults may perceive ultraviolet wavelengths down to about 310–313 nm.
Plant growth is also affected by the color spectrum of light, a process known as photomorphogenesis.
The speed of light in a vacuum is defined to be exactly 299,792,458 m/s (approx. 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the meter is now defined in terms of the speed of light. All forms of electromagnetic radiation move at exactly this same speed in vacuum.
Different physicists have attempted to measure the speed of light throughout history. Galileo attempted to measure the speed of light in the seventeenth century. An early experiment to measure the speed of light was conducted by Ole Rømer, a Danish physicist, in 1676. Using a telescope, Rømer observed the motions of Jupiter and one of its moons, Io. Noting discrepancies in the apparent period of Io's orbit, he calculated that light takes about 22 minutes to traverse the diameter of Earth's orbit. However, the size of that orbit was not known at the time. If Rømer had known the diameter of the Earth's orbit, he would have calculated a speed of 227,000,000 m/s.
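Rømer's arithmetic is speed = orbit diameter / crossing time. Using the modern value of the astronomical unit (which Rømer himself lacked, hence he could only report a delay):

```python
# Romer's inference: speed = (diameter of Earth's orbit) / (light's crossing time).
AU = 1.495978707e11   # astronomical unit, m (modern value, assumed here)
diameter = 2 * AU     # diameter of Earth's orbit, m
delay = 22 * 60       # Romer's ~22-minute crossing time, s

speed = diameter / delay
print(f"{speed:.3g} m/s")  # ~2.27e8 m/s, the 227,000,000 m/s quoted above
```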
Another more accurate measurement of the speed of light was performed in Europe by Hippolyte Fizeau in 1849. Fizeau directed a beam of light at a mirror several kilometers away. A rotating cog wheel was placed in the path of the light beam as it traveled from the source, to the mirror and then returned to its origin. Fizeau found that at a certain rate of rotation, the beam would pass through one gap in the wheel on the way out and the next gap on the way back. Knowing the distance to the mirror, the number of teeth on the wheel, and the rate of rotation, Fizeau was able to calculate the speed of light as 313,000,000 m/s.
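Since the beam passes one gap going out and the next gap coming back, the wheel advances one tooth-plus-gap during the round trip, giving c = 2·d·N·f. The specific numbers below (distance, tooth count, rotation rate) are the commonly cited figures for Fizeau's 1849 run, not stated in the text:

```python
# Fizeau's toothed-wheel estimate. The round-trip time equals the time for the
# wheel to advance one tooth-plus-gap, 1/(N*f), so c = 2*d*N*f.
d = 8633.0      # source-to-mirror distance, m (commonly cited value)
n_teeth = 720   # teeth on the wheel (commonly cited value)
f = 25.2        # rotation rate, rev/s (commonly cited value)

c_estimate = 2 * d * n_teeth * f
print(f"{c_estimate:.3g} m/s")  # ~3.13e8 m/s, matching the figure above
```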
Léon Foucault carried out an experiment which used rotating mirrors to obtain a value of 298,000,000 m/s in 1862. Albert A. Michelson conducted experiments on the speed of light from 1877 until his death in 1931. He refined Foucault's methods in 1926 using improved rotating mirrors to measure the time it took light to make a round trip from Mount Wilson to Mount San Antonio in California. The precise measurements yielded a speed of 299,796,000 m/s.
The effective velocity of light in various transparent substances containing ordinary matter, is less than in vacuum. For example, the speed of light in water is about 3/4 of that in vacuum.
Two independent teams of physicists were said to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium, one team at Harvard University and the Rowland Institute for Science in Cambridge, Massachusetts, and the other at the Harvard–Smithsonian Center for Astrophysics, also in Cambridge. However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrary later time, as stimulated by a second laser pulse. During the time it had "stopped" it had ceased to be light.
The study of light and the interaction of light and matter is termed optics. The observation and study of optical phenomena such as rainbows and the aurora borealis offer many clues as to the nature of light.
Refraction is the bending of light rays when passing through a surface between one transparent material and another. It is described by Snell's Law:

n1 sin θ1 = n2 sin θ2

where θ1 is the angle between the ray and the surface normal in the first medium, θ2 is the angle between the ray and the surface normal in the second medium, and n1 and n2 are the indices of refraction, "n" = 1 in a vacuum and "n" > 1 in a transparent substance.
When a beam of light crosses the boundary between a vacuum and another medium, or between two different media, the wavelength of the light changes, but the frequency remains constant. If the beam of light is not orthogonal (or rather normal) to the boundary, the change in wavelength results in a change in the direction of the beam. This change of direction is known as refraction.
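Snell's law, n1 sin θ1 = n2 sin θ2, can be solved directly for the refracted angle. A sketch for a ray entering water, taking n ≈ 1.33 as an assumed index (consistent with light travelling at about 3/4 of its vacuum speed in water):

```python
# Solve Snell's law n1*sin(theta1) = n2*sin(theta2) for the refracted angle.
import math

def refracted_angle_deg(theta1_deg, n1, n2):
    """Angle from the surface normal in the second medium, in degrees."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# A ray hitting water (n ~ 1.33) at 45 degrees bends toward the normal.
theta2 = refracted_angle_deg(45.0, 1.0, 1.33)
print(round(theta2, 1))  # ~32.1 degrees
```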
The refractive quality of lenses is frequently used to manipulate light in order to change the apparent size of images. Magnifying glasses, spectacles, contact lenses, microscopes and refracting telescopes are all examples of this manipulation.
There are many sources of light. A body at a given temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is sunlight, the radiation emitted by the chromosphere of the Sun; it peaks in the visible region of the electromagnetic spectrum when plotted in wavelength units, and roughly 44% of the sunlight energy that reaches the ground is visible. Another example is incandescent light bulbs, which emit only around 10% of their energy as visible light and the remainder as infrared. A common thermal light source in history is the glowing solid particles in flames, but these also emit most of their radiation in the infrared, and only a fraction in the visible spectrum.
The peak of the black-body spectrum is in the deep infrared, at about 10 micrometre wavelength, for relatively cool objects like human beings. As the temperature increases, the peak shifts to shorter wavelengths, producing first a red glow, then a white one, and finally a blue-white color as the peak moves out of the visible part of the spectrum and into the ultraviolet. These colors can be seen when metal is heated to "red hot" or "white hot". Blue-white thermal emission is not often seen, except in stars (the commonly seen pure-blue color in a gas flame or a welder's torch is in fact due to molecular emission, notably by CH radicals emitting a wavelength band around 425 nm, and is not seen in stars or pure thermal radiation).
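The shift of the peak with temperature is Wien's displacement law, λ_peak = b/T (the law and constant are standard physics, invoked here as an assumption since the text does not name them):

```python
# Wien's displacement law: the black-body peak wavelength is b / T.
B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k):
    return B / temperature_k * 1e6  # micrometres

print(round(peak_wavelength_um(310), 1))   # human body: ~9.3 um, deep infrared
print(round(peak_wavelength_um(5800), 2))  # Sun's surface: ~0.5 um, visible
```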
Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum of each atom. Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as neon lamps and neon signs, mercury-vapor lamps, etc.), and flames (light from the hot gas itself—so, for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated, as in a laser or a microwave maser.
Deceleration of a free charged particle, such as an electron, can produce visible radiation: cyclotron radiation, synchrotron radiation, and bremsstrahlung radiation are all examples of this. Particles moving through a medium faster than the speed of light in that medium can produce visible Cherenkov radiation. Certain chemicals produce visible radiation by chemiluminescence. In living things, this process is called bioluminescence. For example, fireflies produce light by this means, and boats moving through water can disturb plankton which produce a glowing wake.
Certain substances produce light when they are illuminated by more energetic radiation, a process known as fluorescence. Some substances emit light slowly after excitation by more energetic radiation. This is known as phosphorescence. Phosphorescent materials can also be excited by bombarding them with subatomic particles. Cathodoluminescence is one example. This mechanism is used in cathode ray tube television sets and computer monitors.
Certain other mechanisms can produce light:
When the concept of light is intended to include very-high-energy photons (gamma rays), additional generation mechanisms include:
Light is measured with two main alternative sets of units: radiometry consists of measurements of light power at all wavelengths, while photometry measures light with wavelength weighted with respect to a standardized model of human brightness perception. Photometry is useful, for example, to quantify illumination intended for human use. The SI units for both systems are summarized in the following tables.
The photometry units are different from most systems of physical units in that they take into account how the human eye responds to light. The cone cells in the human eye are of three types which respond differently across the visible spectrum, and the cumulative response peaks at a wavelength of around 555 nm. Therefore, two sources of light which produce the same intensity (W/m2) of visible light do not necessarily appear equally bright. The photometry units are designed to take this into account, and therefore are a better representation of how "bright" a light appears to be than raw intensity. They relate to raw power by a quantity called luminous efficacy, and are used for purposes like determining how to best achieve sufficient illumination for various tasks in indoor and outdoor settings. The illumination measured by a photocell sensor does not necessarily correspond to what is perceived by the human eye, and without filters which may be costly, photocells and charge-coupled devices (CCD) tend to respond to some infrared, ultraviolet or both.
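Radiant power maps to perceived brightness through the luminous efficacy: luminous flux = 683 lm/W × V(λ) × power, where V(λ) is the eye's photopic response, equal to 1 at the 555 nm peak. A sketch (the red-light V value is an approximate tabulated figure quoted from memory, not from the text):

```python
# Luminous flux = 683 lm/W * V(lambda) * radiant power: equal watts at
# different wavelengths look very different in brightness.
V = {555: 1.0, 650: 0.107}  # photopic luminosity function (approximate values)

def luminous_flux_lm(power_w, wavelength_nm):
    return 683.002 * V[wavelength_nm] * power_w

print(round(luminous_flux_lm(1.0, 555)))  # 1 W of green light: ~683 lm
print(round(luminous_flux_lm(1.0, 650)))  # 1 W of red light: ~73 lm, far dimmer
```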
Light exerts physical pressure on objects in its path, a phenomenon which can be deduced by Maxwell's equations, but can be more easily explained by the particle nature of light: photons strike and transfer their momentum. Light pressure is equal to the power of the light beam divided by "c", the speed of light. Due to the magnitude of "c", the effect of light pressure is negligible for everyday objects. For example, a one-milliwatt laser pointer exerts a force of about 3.3 piconewtons on the object being illuminated; thus, one could lift a U.S. penny with laser pointers, but doing so would require about 30 billion 1-mW laser pointers. However, in nanometre-scale applications such as nanoelectromechanical systems (NEMS), the effect of light pressure is more significant, and exploiting light pressure to drive NEMS mechanisms and to flip nanometre-scale physical switches in integrated circuits is an active area of research. At larger scales, light pressure can cause asteroids to spin faster, acting on their irregular shapes as on the vanes of a windmill. The possibility of making solar sails that would accelerate spaceships in space is also under investigation.
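The force on an absorbing target is simply the beam power divided by c, which reproduces the piconewton figure for a laser pointer:

```python
# Radiation force on a fully absorbing target: F = P / c.
C = 299_792_458.0  # speed of light in vacuum, m/s

def radiation_force_n(power_w):
    return power_w / C

force = radiation_force_n(1e-3)  # a 1 mW laser pointer
print(f"{force * 1e12:.2f} pN")  # ~3.34 pN, matching the figure above
```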
Although the motion of the Crookes radiometer was originally attributed to light pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a partial vacuum. This should not be confused with the Nichols radiometer, in which the (slight) motion caused by torque (though not enough for full rotation against friction) "is" directly caused by light pressure.
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."
Usually light momentum is aligned with its direction of motion. However, for example in evanescent waves momentum is transverse to direction of propagation.
In the fifth century BC, Empedocles postulated that everything was composed of four elements; fire, air, earth and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun.
In about 300 BC, Euclid wrote "Optica", in which he studied the properties of light. Euclid postulated that light travelled in straight lines and he described the laws of reflection and studied them mathematically. He questioned that sight is the result of a beam from the eye, for he asks how one sees the stars immediately, if one closes one's eyes, then opens them at night. If the beam from the eye travels infinitely fast this is not a problem.
In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote that "The light & heat of the sun; these are composed of minute atoms which, when they are shoved off, lose no time in shooting right across the interspace of air in the direction imparted by the shove." (from "On the nature of the Universe"). Despite being similar to later particle theories, Lucretius's views were not generally accepted. Ptolemy (c. 2nd century) wrote about the refraction of light in his book "Optics".
In ancient India, the Hindu schools of Samkhya and Vaisheshika, from around the early centuries AD, developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements ("tanmatra") out of which emerge the gross elements. The atomicity of these elements is not specifically mentioned and it appears that they were actually taken to be continuous.
On the other hand, the Vaisheshika school gives an atomic theory of the physical world on the non-atomic ground of ether, space and time. (See "Indian atomism".) The basic atoms are those of earth ("prthivi"), water ("pani"), fire ("agni"), and air ("vayu"). Light rays are taken to be a stream of high-velocity "tejas" (fire) atoms. The particles of light can exhibit different characteristics depending on the speed and the arrangements of the "tejas" atoms.
The "Vishnu Purana" refers to sunlight as "the seven rays of the sun".
The Indian Buddhists, such as Dignāga in the 5th century and Dharmakirti in the 7th century, developed a type of atomism that is a philosophy about reality being composed of atomic entities that are momentary flashes of light or energy. They viewed light as being an atomic entity equivalent to energy.
René Descartes (1596–1650) held that light was a mechanical property of the luminous body, rejecting the "forms" of Ibn al-Haytham and Witelo as well as the "species" of Bacon, Grosseteste, and Kepler. In 1637 he published a theory of the refraction of light that assumed, incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes arrived at this conclusion by analogy with the behaviour of sound waves. Although Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like a wave and in concluding that refraction could be explained by the speed of light in different media.
Descartes is not the first to use the mechanical analogies but because he clearly asserts that light is only a mechanical property of the luminous body and the transmitting medium, Descartes' theory of light is regarded as the start of modern physical optics.
Pierre Gassendi (1592–1655), an atomist, proposed a particle theory of light which was published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age, and preferred his view to Descartes' theory of the "plenum". He stated in his "Hypothesis of Light" of 1675 that light was composed of corpuscles (particles of matter) which were emitted in all directions from a source. One of Newton's arguments against the wave nature of light was that waves were known to bend around obstacles, while light travelled only in straight lines. He did, however, explain the phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing that a light particle could create a localised wave in the aether.
Newton's theory could be used to predict the reflection of light, but could only explain refraction by incorrectly assuming that light accelerated upon entering a denser medium because the gravitational pull was greater. Newton published the final version of his theory in his "Opticks" of 1704. His reputation helped the particle theory of light to hold sway during the 18th century. The particle theory of light led Laplace to argue that a body could be so massive that light could not escape from it. In other words, it would become what is now called a black hole. Laplace withdrew his suggestion later, after a wave theory of light became firmly established as the model for light (as has been explained, neither a particle nor a wave theory is fully correct). A translation of Newton's essay on light appears in "The large scale structure of space-time", by Stephen Hawking and George F. R. Ellis.
The fact that light could be polarized was for the first time qualitatively explained by Newton using the particle theory. Étienne-Louis Malus in 1810 created a mathematical particle theory of polarization. Jean-Baptiste Biot in 1812 showed that this theory explained all known phenomena of light polarization. At that time the polarization was considered as the proof of the particle theory.
To explain the origin of colors, Robert Hooke (1635–1703) developed a "pulse theory" and compared the spreading of light to that of waves in water in his 1665 work "Micrographia" ("Observation IX"). In 1672 Hooke suggested that light's vibrations could be perpendicular to the direction of propagation. Christiaan Huygens (1629–1695) worked out a mathematical wave theory of light in 1678, and published it in his "Treatise on light" in 1690. He proposed that light was emitted in all directions as a series of waves in a medium called the "Luminiferous ether". As waves are not affected by gravity, it was assumed that they slowed down upon entering a denser medium.
The wave theory predicted that light waves could interfere with each other like sound waves (as noted around 1800 by Thomas Young). Young showed by means of a diffraction experiment that light behaved as waves. He also proposed that different colors were caused by different wavelengths of light, and explained color vision in terms of three-colored receptors in the eye. Another supporter of the wave theory was Leonhard Euler. He argued in "Nova theoria lucis et colorum" (1746) that diffraction could more easily be explained by a wave theory. In 1816 André-Marie Ampère gave Augustin-Jean Fresnel an idea that the polarization of light can be explained by the wave theory if light were a transverse wave.
Later, Fresnel independently worked out his own wave theory of light, and presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's mathematical work to produce a convincing argument in favor of the wave theory, helping to overturn Newton's corpuscular theory. By the year 1821, Fresnel was able to show via mathematical methods that polarization could be explained by the wave theory of light if and only if light was entirely transverse, with no longitudinal vibration whatsoever.
The weakness of the wave theory was that light waves, like sound waves, would need a medium for transmission. The existence of the hypothetical substance "luminiferous aether" proposed by Huygens in 1678 was cast into strong doubt in the late nineteenth century by the Michelson–Morley experiment.
Newton's corpuscular theory implied that light would travel faster in a denser medium, while the wave theory of Huygens and others implied the opposite. At that time, the speed of light could not be measured accurately enough to decide which theory was correct. The first to make a sufficiently accurate measurement was Léon Foucault, in 1850. His result supported the wave theory, and the classical particle theory was finally abandoned, only to partly re-emerge in the 20th century.
In 1845, Michael Faraday discovered that the plane of polarization of linearly polarized light is rotated when the light rays travel along the magnetic field direction in the presence of a transparent dielectric, an effect now known as Faraday rotation. This was the first evidence that light was related to electromagnetism. In 1846 he speculated that light might be some form of disturbance propagating along magnetic field lines. Faraday proposed in 1847 that light was a high-frequency electromagnetic vibration, which could propagate even in the absence of a medium such as the ether.
Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light. Maxwell discovered that self-propagating electromagnetic waves would travel through space at a constant speed, which happened to be equal to the previously measured speed of light. From this, Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in 1862 in "On Physical Lines of Force". In 1873, he published "A Treatise on Electricity and Magnetism", which contained a full mathematical description of the behavior of electric and magnetic fields, still known as Maxwell's equations. Soon after, Heinrich Hertz confirmed Maxwell's theory experimentally by generating and detecting radio waves in the laboratory, and demonstrating that these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction, diffraction, and interference. Maxwell's theory and Hertz's experiments led directly to the development of modern radio, radar, television, electromagnetic imaging, and wireless communications.
In the quantum theory, photons are seen as wave packets of the waves described in the classical theory of Maxwell. The quantum theory was needed to explain effects, even with visible light, that Maxwell's classical theory could not (such as spectral lines).
In 1900 Max Planck, attempting to explain black-body radiation suggested that although light was a wave, these waves could gain or lose energy only in finite amounts related to their frequency. Planck called these "lumps" of light energy "quanta" (from a Latin word for "how much"). In 1905, Albert Einstein used the idea of light quanta to explain the photoelectric effect, and suggested that these light quanta had a "real" existence. In 1923 Arthur Holly Compton showed that the wavelength shift seen when low intensity X-rays scattered from electrons (so called Compton scattering) could be explained by a particle-theory of X-rays, but not a wave theory. In 1926 Gilbert N. Lewis named these light quanta particles photons.
Eventually the modern theory of quantum mechanics came to picture light as (in some sense) "both" a particle and a wave, and (in another sense), as a phenomenon which is "neither" a particle nor a wave (which actually are macroscopic phenomena, such as baseballs or ocean waves). Instead, modern physics sees light as something that can be described sometimes with mathematics appropriate to one type of macroscopic metaphor (particles), and sometimes another macroscopic metaphor (water waves), but is actually something that cannot be fully imagined. As in the case for radio waves and the X-rays involved in Compton scattering, physicists have noted that electromagnetic radiation tends to behave more like a classical wave at lower frequencies, but more like a classical particle at higher frequencies, but never completely loses all qualities of one or the other. Visible light, which occupies a middle ground in frequency, can easily be shown in experiments to be describable using either a wave or particle model, or sometimes both.
In February 2018, scientists reported, for the first time, the discovery of a new form of light, which may involve polaritons, that could be useful in the development of quantum computers.
|
https://en.wikipedia.org/wiki?curid=17939
|
Lipid
In biology and biochemistry, a lipid is a macrobiomolecule that is soluble in nonpolar solvents. Non-polar solvents are typically hydrocarbons used to dissolve other naturally occurring hydrocarbon lipid molecules that do not (or do not easily) dissolve in water, including fatty acids, waxes, sterols, fat-soluble vitamins (such as vitamins A, D, E, and K), monoglycerides, diglycerides, triglycerides, and phospholipids.
The functions of lipids include storing energy, signaling, and acting as structural components of cell membranes. Lipids have applications in the cosmetic and food industries as well as in nanotechnology.
Scientists sometimes define lipids as hydrophobic or amphiphilic small molecules; the amphiphilic nature of some lipids allows them to form structures such as vesicles, multilamellar/unilamellar liposomes, or membranes in an aqueous environment. Biological lipids originate entirely or in part from two distinct types of biochemical subunits or "building-blocks": ketoacyl and isoprene groups. Using this approach, lipids may be divided into eight categories: fatty acids, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits).
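The eight-way classification above can be summarized as a small lookup table grouped by biosynthetic building block; this is an illustrative data structure mirroring the text, not an official LIPID MAPS API:

```python
# Eight lipid categories grouped by their biosynthetic "building-block".
LIPID_CATEGORIES = {
    "ketoacyl-derived": [
        "fatty acids",
        "glycerolipids",
        "glycerophospholipids",
        "sphingolipids",
        "saccharolipids",
        "polyketides",
    ],
    "isoprene-derived": [
        "sterol lipids",
        "prenol lipids",
    ],
}

all_categories = [cat for group in LIPID_CATEGORIES.values() for cat in group]
```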
Although the term "lipid" is sometimes used as a synonym for fats, fats are a subgroup of lipids called triglycerides. Lipids also encompass molecules such as fatty acids and their derivatives (including tri-, di-, monoglycerides, and phospholipids), as well as other sterol-containing metabolites such as cholesterol. Although humans and other mammals use various biosynthetic pathways both to break down and to synthesize lipids, some essential lipids cannot be made this way and must be obtained from the diet.
"Lipids may be regarded as organic substances relatively insoluble in water, soluble in organic solvents (alcohol, ether, etc.), actually or potentially related to fatty acids and utilized by the living cells."
In 1815, Henri Braconnot classified lipids ("graisses") in two categories, "suifs" (solid greases or tallow) and "huiles" (fluid oils). In 1823, Michel Eugène Chevreul developed a more detailed classification, including oils, greases, tallow, waxes, resins, balsams and volatile oils (or essential oils).
The first successful synthesis of a triglyceride molecule was by Théophile-Jules Pelouze in 1844, when he produced tributyrin by reacting butyric acid with glycerin in the presence of concentrated sulfuric acid. Several years later, Marcellin Berthelot, one of Pelouze's students, synthesized tristearin and tripalmitin by reaction of the analogous fatty acids with glycerin in the presence of gaseous hydrogen chloride at high temperature.
In 1827, William Prout recognized fat ("oily" alimentary matters), along with protein ("albuminous") and carbohydrate ("saccharine"), as an important nutrient for humans and animals.
For a century, chemists regarded "fats" as only simple lipids made of fatty acids and glycerol (glycerides), but new forms were described later. Theodore Gobley (1847) discovered phospholipids in mammalian brain and hen egg, which he named "lecithins". Thudichum discovered in the human brain phospholipids (cephalin), glycolipids (cerebroside), and sphingolipids (sphingomyelin).
The terms lipoid, lipin, lipide and lipid have been used with varied meanings from author to author. In 1912, Rosenbloom and Gies proposed the substitution of "lipoid" by "lipin". In 1920, Bloor introduced a new classification for "lipoids": simple lipoids (greases and waxes), compound lipoids (phospholipoids and glycolipoids), and the derived lipoids (fatty acids, alcohols, sterols).
The word "lipide", which stems etymologically from the Greek "lipos" (fat), was introduced in 1923 by the French pharmacologist Gabriel Bertrand. Bertrand included in the concept not only the traditional fats (glycerides) but also the "lipoids", with a complex constitution. The word "lipide" was unanimously approved by the international commission of the "Société de Chimie Biologique" during the plenary session on 3 July 1923. The word "lipide" was later anglicized as "lipid" because of its English pronunciation ('lɪpɪd). In French, the suffix "-ide", from the Ancient Greek "-ίδης" (meaning 'son of' or 'descendant of'), is always pronounced (ɪd).
In 1947, lipids were divided into "simple lipids", comprising greases and waxes (true waxes, sterols, alcohols).
Lipids have been classified into eight categories by the Lipid MAPS consortium as follows:
Fatty acids, or fatty acid residues when they are part of a lipid, are a diverse group of molecules synthesized by chain-elongation of an acetyl-CoA primer with malonyl-CoA or methylmalonyl-CoA groups in a process called fatty acid synthesis. They are made of a hydrocarbon chain that terminates with a carboxylic acid group; this arrangement gives the molecule a polar, hydrophilic end and a nonpolar, hydrophobic end that is insoluble in water. The fatty acid structure is one of the most fundamental categories of biological lipids and is commonly used as a building-block of more structurally complex lipids. The carbon chain, typically between four and 24 carbons long, may be saturated or unsaturated, and may be attached to functional groups containing oxygen, halogens, nitrogen, and sulfur. If a fatty acid contains a double bond, there is the possibility of either "cis" or "trans" geometric isomerism, which significantly affects the molecule's configuration. "Cis"-double bonds cause the fatty acid chain to bend, an effect that is compounded with more double bonds in the chain. The three double bonds in 18-carbon linolenic acid, the most abundant fatty-acyl chain of plant thylakoid membranes, render these membranes highly fluid despite low environmental temperatures, and also make linolenic acid give dominating sharp peaks in high-resolution 13-C NMR spectra of chloroplasts. This in turn plays an important role in the structure and function of cell membranes. Most naturally occurring fatty acids are of the "cis" configuration, although the "trans" form does exist in some natural and partially hydrogenated fats and oils.
Examples of biologically important fatty acids include the eicosanoids, derived primarily from arachidonic acid and eicosapentaenoic acid, that include prostaglandins, leukotrienes, and thromboxanes. Docosahexaenoic acid is also important in biological systems, particularly with respect to sight. Other major lipid classes in the fatty acid category are the fatty esters and fatty amides. Fatty esters include important biochemical intermediates such as wax esters, fatty acid thioester coenzyme A derivatives, fatty acid thioester ACP derivatives and fatty acid carnitines. The fatty amides include N-acyl ethanolamines, such as the cannabinoid neurotransmitter anandamide.
Glycerolipids are composed of mono-, di-, and tri-substituted glycerols, the best-known being the fatty acid triesters of glycerol, called triglycerides. The word "triacylglycerol" is sometimes used synonymously with "triglyceride". In these compounds, the three hydroxyl groups of glycerol are each esterified, typically by different fatty acids. Because they function as an energy store, these lipids comprise the bulk of storage fat in animal tissues. The hydrolysis of the ester bonds of triglycerides and the release of glycerol and fatty acids from adipose tissue are the initial steps in metabolizing fat.
Additional subclasses of glycerolipids are represented by glycosylglycerols, which are characterized by the presence of one or more sugar residues attached to glycerol via a glycosidic linkage. Examples of structures in this category are the digalactosyldiacylglycerols found in plant membranes and seminolipid from mammalian sperm cells.
Glycerophospholipids, usually referred to as phospholipids (though sphingomyelins are also classified as phospholipids), are ubiquitous in nature and are key components of the lipid bilayer of cells, as well as being involved in metabolism and cell signaling. Neural tissue (including the brain) contains relatively high amounts of glycerophospholipids, and alterations in their composition have been implicated in various neurological disorders. Glycerophospholipids may be subdivided into distinct classes, based on the nature of the polar headgroup at the "sn"-3 position of the glycerol backbone in eukaryotes and eubacteria, or the "sn"-1 position in the case of archaebacteria.
Examples of glycerophospholipids found in biological membranes are phosphatidylcholine (also known as PC, GPCho or lecithin), phosphatidylethanolamine (PE or GPEtn) and phosphatidylserine (PS or GPSer). In addition to serving as a primary component of cellular membranes and binding sites for intra- and intercellular proteins, some glycerophospholipids in eukaryotic cells, such as phosphatidylinositols and phosphatidic acids are either precursors of or, themselves, membrane-derived second messengers. Typically, one or both of these hydroxyl groups are acylated with long-chain fatty acids, but there are also alkyl-linked and 1Z-alkenyl-linked (plasmalogen) glycerophospholipids, as well as dialkylether variants in archaebacteria.
Sphingolipids are a complicated family of compounds that share a common structural feature, a sphingoid base backbone that is synthesized "de novo" from the amino acid serine and a long-chain fatty acyl CoA, then converted into ceramides, phosphosphingolipids, glycosphingolipids and other compounds. The major sphingoid base of mammals is commonly referred to as sphingosine. Ceramides (N-acyl-sphingoid bases) are a major subclass of sphingoid base derivatives with an amide-linked fatty acid. The fatty acids are typically saturated or mono-unsaturated with chain lengths from 16 to 26 carbon atoms.
The major phosphosphingolipids of mammals are sphingomyelins (ceramide phosphocholines), whereas insects contain mainly ceramide phosphoethanolamines and fungi have phytoceramide phosphoinositols and mannose-containing headgroups. The glycosphingolipids are a diverse family of molecules composed of one or more sugar residues linked via a glycosidic bond to the sphingoid base. Examples of these are the simple and complex glycosphingolipids such as cerebrosides and gangliosides.
Sterols, such as cholesterol and its derivatives, are an important component of membrane lipids, along with the glycerophospholipids and sphingomyelins. Other examples of sterols are the bile acids and their conjugates, which in mammals are oxidized derivatives of cholesterol and are synthesized in the liver. The plant equivalents are the phytosterols, such as β-sitosterol, stigmasterol, and brassicasterol; the latter compound is also used as a biomarker for algal growth. The predominant sterol in fungal cell membranes is ergosterol.
Sterols are steroids in which one of the hydrogen atoms is substituted with a hydroxyl group, at position 3 in the carbon chain. They have in common with steroids the same fused four-ring core structure. Steroids have different biological roles as hormones and signaling molecules. The eighteen-carbon (C18) steroids include the estrogen family whereas the C19 steroids comprise the androgens such as testosterone and androsterone. The C21 subclass includes the progestogens as well as the glucocorticoids and mineralocorticoids. The secosteroids, comprising various forms of vitamin D, are characterized by cleavage of the B ring of the core structure.
Prenol lipids are synthesized from the five-carbon-unit precursors isopentenyl diphosphate and dimethylallyl diphosphate that are produced mainly via the mevalonic acid (MVA) pathway. The simple isoprenoids (linear alcohols, diphosphates, etc.) are formed by the successive addition of C5 units, and are classified according to number of these terpene units. Structures containing greater than 40 carbons are known as polyterpenes. Carotenoids are important simple isoprenoids that function as antioxidants and as precursors of vitamin A. Another biologically important class of molecules is exemplified by the quinones and hydroquinones, which contain an isoprenoid tail attached to a quinonoid core of non-isoprenoid origin. Vitamin E and vitamin K, as well as the ubiquinones, are examples of this class. Prokaryotes synthesize polyprenols (called bactoprenols) in which the terminal isoprenoid unit attached to oxygen remains unsaturated, whereas in animal polyprenols (dolichols) the terminal isoprenoid is reduced.
Saccharolipids describe compounds in which fatty acids are linked directly to a sugar backbone, forming structures that are compatible with membrane bilayers. In the saccharolipids, a monosaccharide substitutes for the glycerol backbone present in glycerolipids and glycerophospholipids. The most familiar saccharolipids are the acylated glucosamine precursors of the Lipid A component of the lipopolysaccharides in Gram-negative bacteria. Typical lipid A molecules are disaccharides of glucosamine, which are derivatized with as many as seven fatty-acyl chains. The minimal lipopolysaccharide required for growth in "E. coli" is Kdo2-Lipid A, a hexa-acylated disaccharide of glucosamine that is glycosylated with two 3-deoxy-D-manno-octulosonic acid (Kdo) residues.
Polyketides are synthesized by polymerization of acetyl and propionyl subunits by classic enzymes as well as iterative and multimodular enzymes that share mechanistic features with the fatty acid synthases. They comprise many secondary metabolites and natural products from animal, plant, bacterial, fungal and marine sources, and have great structural diversity. Many polyketides are cyclic molecules whose backbones are often further modified by glycosylation, methylation, hydroxylation, oxidation, or other processes. Many commonly used anti-microbial, anti-parasitic, and anti-cancer agents are polyketides or polyketide derivatives, such as erythromycins, tetracyclines, avermectins, and antitumor epothilones.
Eukaryotic cells feature compartmentalized membrane-bound organelles that carry out different biological functions. The glycerophospholipids are the main structural component of biological membranes, such as the cellular plasma membrane and the intracellular membranes of organelles; in animal cells, the plasma membrane physically separates the intracellular components from the extracellular environment. The glycerophospholipids are amphipathic molecules (containing both hydrophobic and hydrophilic regions) that contain a glycerol core linked to two fatty acid-derived "tails" by ester linkages and to one "head" group by a phosphate ester linkage. While glycerophospholipids are the major component of biological membranes, other non-glyceride lipid components such as sphingomyelin and sterols (mainly cholesterol in animal cell membranes) are also found in biological membranes. In plants and algae, the galactosyldiacylglycerols and sulfoquinovosyldiacylglycerol, which lack a phosphate group, are important components of membranes of chloroplasts and related organelles and are the most abundant lipids in photosynthetic tissues, including those of higher plants, algae and certain bacteria.
Plant thylakoid membranes have as their largest lipid component the non-bilayer-forming monogalactosyl diglyceride (MGDG), and few phospholipids; despite this unique lipid composition, chloroplast thylakoid membranes have been shown to contain a dynamic lipid-bilayer matrix, as revealed by magnetic resonance and electron microscope studies.
A biological membrane is a form of lamellar phase lipid bilayer. The formation of lipid bilayers is an energetically preferred process when the glycerophospholipids described above are in an aqueous environment. This is known as the hydrophobic effect. In an aqueous system, the polar heads of lipids align towards the polar, aqueous environment, while the hydrophobic tails minimize their contact with water and tend to cluster together, forming a vesicle; depending on the concentration of the lipid, this biophysical interaction may result in the formation of micelles, liposomes, or lipid bilayers. Other aggregations are also observed and form part of the polymorphism of amphiphile (lipid) behavior. Phase behavior is an area of study within biophysics and is the subject of current academic research. Micelles and bilayers form in the polar medium by a process known as the hydrophobic effect. When dissolving a lipophilic or amphiphilic substance in a polar environment, the polar molecules (i.e., water in an aqueous solution) become more ordered around the dissolved lipophilic substance, since the polar molecules cannot form hydrogen bonds to the lipophilic areas of the amphiphile. So in an aqueous environment, the water molecules form an ordered "clathrate" cage around the dissolved lipophilic molecule.
The formation of lipids into protocell membranes represents a key step in models of abiogenesis, the origin of life.
Triglycerides, stored in adipose tissue, are a major form of energy storage in both animals and plants. They are a concentrated source of energy because their carbons are fully reduced: triglyceride carbons are almost all bonded to hydrogens, unlike in carbohydrates, so glycogen would contribute only about half the energy per unit mass. The adipocyte, or fat cell, is designed for continuous synthesis and breakdown of triglycerides in animals, with breakdown controlled mainly by the activation of hormone-sensitive lipase. The complete oxidation of fatty acids provides high caloric content, about 38 kJ/g (9 kcal/g), compared with 17 kJ/g (4 kcal/g) for the breakdown of carbohydrates and proteins. Migratory birds that must fly long distances without eating use the stored energy of triglycerides to fuel their flights.
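The energy densities quoted above can be turned into a small comparison; the function name and the 10-gram example are made up for illustration:

```python
# Approximate energy densities from the text: ~38 kJ/g for fat versus
# ~17 kJ/g for carbohydrate or protein (about 9 vs 4 kcal/g).
KJ_PER_GRAM = {"fat": 38.0, "carbohydrate": 17.0, "protein": 17.0}

def total_energy_kj(grams):
    """Total energy in kJ given grams of each macronutrient."""
    return sum(KJ_PER_GRAM[name] * g for name, g in grams.items())

fat_energy = total_energy_kj({"fat": 10.0})            # 380 kJ
carb_energy = total_energy_kj({"carbohydrate": 10.0})  # 170 kJ
# Gram for gram, fat stores more than twice the energy of carbohydrate,
# which is why triglycerides are the preferred long-term energy store.
```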
In recent years, evidence has emerged showing that lipid signaling is a vital part of the cell signaling. Lipid signaling may occur via activation of G protein-coupled or nuclear receptors, and members of several different lipid categories have been identified as signaling molecules and cellular messengers. These include sphingosine-1-phosphate, a sphingolipid derived from ceramide that is a potent messenger molecule involved in regulating calcium mobilization, cell growth, and apoptosis; diacylglycerol (DAG) and the phosphatidylinositol phosphates (PIPs), involved in calcium-mediated activation of protein kinase C; the prostaglandins, which are one type of fatty-acid derived eicosanoid involved in inflammation and immunity; the steroid hormones such as estrogen, testosterone and cortisol, which modulate a host of functions such as reproduction, metabolism and blood pressure; and the oxysterols such as 25-hydroxy-cholesterol that are liver X receptor agonists. Phosphatidylserine lipids are known to be involved in signaling for the phagocytosis of apoptotic cells or pieces of cells. They accomplish this by being exposed to the extracellular face of the cell membrane after the inactivation of flippases which place them exclusively on the cytosolic side and the activation of scramblases, which scramble the orientation of the phospholipids. After this occurs, other cells recognize the phosphatidylserines and phagocytosize the cells or cell fragments exposing them.
The "fat-soluble" vitamins (A, D, E and K) – which are isoprene-based lipids – are essential nutrients stored in the liver and fatty tissues, with a diverse range of functions. Acyl-carnitines are involved in the transport and metabolism of fatty acids in and out of mitochondria, where they undergo beta oxidation. Polyprenols and their phosphorylated derivatives also play important transport roles, in this case the transport of oligosaccharides across membranes. Polyprenol phosphate sugars and polyprenol diphosphate sugars function in extra-cytoplasmic glycosylation reactions, in extracellular polysaccharide biosynthesis (for instance, peptidoglycan polymerization in bacteria), and in eukaryotic protein N-glycosylation. Cardiolipins are a subclass of glycerophospholipids containing four acyl chains and three glycerol groups that are particularly abundant in the inner mitochondrial membrane. They are believed to activate enzymes involved with oxidative phosphorylation. Lipids also form the basis of steroid hormones.
The major dietary lipids for humans and other animals are animal and plant triglycerides, sterols, and membrane phospholipids. The process of lipid metabolism synthesizes and degrades the lipid stores and produces the structural and functional lipids characteristic of individual tissues.
In animals, when there is an oversupply of dietary carbohydrate, the excess carbohydrate is converted to triglycerides. This involves the synthesis of fatty acids from acetyl-CoA and the esterification of fatty acids in the production of triglycerides, a process called lipogenesis. Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acetyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group. The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional protein, while in plant plastids and bacteria, separate enzymes perform each step in the pathway. The fatty acids may be subsequently converted to triglycerides that are packaged in lipoproteins and secreted from the liver.
The synthesis of unsaturated fatty acids involves a desaturation reaction, whereby a double bond is introduced into the fatty acyl chain. For example, in humans, the desaturation of stearic acid by stearoyl-CoA desaturase-1 produces oleic acid. The doubly unsaturated fatty acid linoleic acid as well as the triply unsaturated α-linolenic acid cannot be synthesized in mammalian tissues, and are therefore essential fatty acids and must be obtained from the diet.
Triglyceride synthesis takes place in the endoplasmic reticulum by metabolic pathways in which acyl groups in fatty acyl-CoAs are transferred to the hydroxyl groups of glycerol-3-phosphate and diacylglycerol.
Terpenes and isoprenoids, including the carotenoids, are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is steroid biosynthesis. Here, the isoprene units are joined together to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other steroids such as cholesterol and ergosterol.
Beta oxidation is the metabolic process by which fatty acids are broken down in the mitochondria or in peroxisomes to generate acetyl-CoA. For the most part, fatty acids are oxidized by a mechanism that is similar to, but not identical with, a reversal of the process of fatty acid synthesis. That is, two-carbon fragments are removed sequentially from the carboxyl end of the acid after steps of dehydrogenation, hydration, and oxidation to form a beta-keto acid, which is split by thiolysis. The acetyl-CoA is then ultimately converted into ATP, CO2, and H2O using the citric acid cycle and the electron transport chain. Hence the citric acid cycle can start at acetyl-CoA when fat is being broken down for energy if there is little or no glucose available. The energy yield of the complete oxidation of the fatty acid palmitate is 106 ATP. Unsaturated and odd-chain fatty acids require additional enzymatic steps for degradation.
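The 106-ATP figure for palmitate can be reproduced with simple bookkeeping. This sketch assumes the common textbook yields (roughly 10 ATP per acetyl-CoA through the citric acid cycle and electron transport chain, 1.5 per FADH2, 2.5 per NADH, minus 2 ATP equivalents for activating the fatty acid to its CoA thioester); the function is illustrative, not a standard library routine:

```python
def beta_oxidation_atp(n_carbons):
    """Approximate ATP yield from complete oxidation of a saturated,
    even-chain fatty acid with n_carbons carbon atoms."""
    if n_carbons % 2 != 0:
        raise ValueError("odd-chain fatty acids require extra enzymatic steps")
    cycles = n_carbons // 2 - 1   # rounds of beta-oxidation
    acetyl_coa = n_carbons // 2   # two-carbon fragments produced
    # Each cycle yields 1 FADH2 (~1.5 ATP) and 1 NADH (~2.5 ATP);
    # each acetyl-CoA yields ~10 ATP; activation costs 2 ATP equivalents.
    return acetyl_coa * 10 + cycles * (1.5 + 2.5) - 2

# Palmitate (C16): 8 acetyl-CoA over 7 cycles gives 106 ATP, as in the text.
```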
Most of the fat found in food is in the form of triglycerides, cholesterol, and phospholipids. Some dietary fat is necessary to facilitate absorption of fat-soluble vitamins (A, D, E, and K) and carotenoids. Humans and other mammals have a dietary requirement for certain essential fatty acids, such as linoleic acid (an omega-6 fatty acid) and alpha-linolenic acid (an omega-3 fatty acid), because they cannot be synthesized from simple precursors in the diet. Both of these fatty acids are 18-carbon polyunsaturated fatty acids differing in the number and position of the double bonds. Most vegetable oils are rich in linoleic acid (safflower, sunflower, and corn oils). Alpha-linolenic acid is found in the green leaves of plants, and in selected seeds, nuts, and legumes (in particular flax, rapeseed, walnut, and soy). Fish oils are particularly rich in the longer-chain omega-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Many studies have shown positive health benefits associated with consumption of omega-3 fatty acids on infant development, cancer, cardiovascular diseases, and various mental illnesses, such as depression, attention-deficit hyperactivity disorder, and dementia. In contrast, it is now well established that consumption of trans fats, such as those present in partially hydrogenated vegetable oils, is a risk factor for cardiovascular disease. Otherwise beneficial unsaturated fats can be converted into trans fats by overcooking.
A few studies have suggested that total dietary fat intake is linked to an increased risk of obesity and diabetes. However, a number of very large studies, including the Women's Health Initiative Dietary Modification Trial, an eight-year study of 49,000 women, the Nurses' Health Study and the Health Professionals Follow-up Study, revealed no such links. None of these studies suggested any connection between percentage of calories from fat and risk of cancer, heart disease, or weight gain. The Nutrition Source, a website maintained by the Department of Nutrition at the Harvard School of Public Health, summarizes the current evidence on the impact of dietary fat: "Detailed research—much of it done at Harvard—shows that the total amount of fat in the diet isn't really linked with weight or disease."
|
https://en.wikipedia.org/wiki?curid=17940
|
Lie algebra
In mathematics, a Lie algebra (pronounced "Lee") is a vector space 𝔤 together with an operation called the Lie bracket, an alternating bilinear map [·,·] : 𝔤 × 𝔤 → 𝔤, that satisfies the Jacobi identity. The vector space 𝔤 together with this operation is a non-associative algebra, meaning that the Lie bracket is not necessarily associative.
Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: any Lie group gives rise to a Lie algebra, which is its tangent space at the identity. Conversely, to any finite-dimensional Lie algebra over real or complex numbers, there is a corresponding connected Lie group unique up to finite coverings (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras.
In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics.
An elementary example is the space ℝ³ of three-dimensional vectors with the bracket operation defined by the cross product, [x, y] = x × y. This is skew-symmetric since x × y = −(y × x), and instead of associativity it satisfies the Jacobi identity:
x × (y × z) + y × (z × x) + z × (x × y) = 0.
This is the Lie algebra of the Lie group of rotations of space, and each vector v ∈ ℝ³ may be pictured as an infinitesimal rotation around the axis "v", with angular velocity equal to the magnitude of "v". The Lie bracket is a measure of the non-commutativity between two rotations: since a rotation commutes with itself, we have the alternating property [v, v] = v × v = 0.
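The cross-product example can be checked numerically; the three vectors below are arbitrary choices:

```python
# Bracket on R^3 given by the cross product: [a, b] = a x b.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vec_sum(*vectors):
    """Componentwise sum of several 3-vectors."""
    return tuple(sum(components) for components in zip(*vectors))

x, y, z = (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0), (0.0, 4.0, -2.0)

# Alternating property: [x, x] = x x x = 0.
alternating = cross(x, x)
# Jacobi identity: x x (y x z) + y x (z x x) + z x (x x y) = 0.
jacobi = vec_sum(cross(x, cross(y, z)),
                 cross(y, cross(z, x)),
                 cross(z, cross(x, y)))
```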
Lie algebras were introduced to study the concept of infinitesimal transformations by Marius Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name "Lie algebra" was given by Hermann Weyl in the 1930s; in older texts, the term "infinitesimal group" is used.
A Lie algebra is a vector space 𝔤 over some field "F" together with a binary operation [·,·] : 𝔤 × 𝔤 → 𝔤 called the Lie bracket, satisfying the following axioms:
Bilinearity: [ax + by, z] = a[x, z] + b[y, z] and [z, ax + by] = a[z, x] + b[z, y] for all scalars "a", "b" in "F" and all elements "x", "y", "z" in 𝔤.
Alternativity: [x, x] = 0 for all "x" in 𝔤.
The Jacobi identity: [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 for all "x", "y", "z" in 𝔤.
Using bilinearity to expand the Lie bracket [x + y, x + y] and using alternativity shows that [x, y] + [y, x] = 0 for all elements "x", "y" in 𝔤, showing that bilinearity and alternativity together imply anticommutativity: [x, y] = −[y, x].
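The expansion referred to here can be written out in full; over a field of characteristic other than 2, anticommutativity conversely implies alternativity:

```latex
0 = [x+y,\, x+y]
  = [x,x] + [x,y] + [y,x] + [y,y]
  = [x,y] + [y,x]
\quad\Longrightarrow\quad [x,y] = -[y,x].
```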
It is customary to denote a Lie algebra by a lower-case fraktur letter such as 𝔤. If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU("n") is 𝔰𝔲(n).
Elements of a Lie algebra 𝔤 are said to generate it if the smallest subalgebra containing these elements is 𝔤 itself. The "dimension" of a Lie algebra is its dimension as a vector space over "F". The cardinality of a minimal generating set of a Lie algebra is always less than or equal to its dimension.
See the classification of low-dimensional real Lie algebras for other small examples.
The Lie bracket is not required to be associative, meaning that [[x, y], z] need not equal [x, [y, z]]. (However, it is "flexible".) Nonetheless, much of the terminology of associative rings and algebras is commonly applied to Lie algebras. A "Lie subalgebra" is a subspace 𝔥 ⊆ 𝔤 which is closed under the Lie bracket. An "ideal" 𝔦 ⊆ 𝔤 is a subalgebra satisfying the stronger condition: [𝔤, 𝔦] ⊆ 𝔦.
A Lie algebra "homomorphism" is a linear map φ : 𝔤 → 𝔥 compatible with the respective Lie brackets: φ([x, y]) = [φ(x), φ(y)] for all "x", "y" in 𝔤.
As for associative rings, ideals are precisely the kernels of homomorphisms; given a Lie algebra 𝔤 and an ideal 𝔦 in it, one constructs the "factor algebra" or "quotient algebra" 𝔤/𝔦, and the first isomorphism theorem holds for Lie algebras.
Since the Lie bracket is a kind of infinitesimal commutator of the corresponding Lie group, we say that two elements x, y ∈ 𝔤 "commute" if their bracket vanishes: [x, y] = 0.
The centralizer subalgebra of a subset S ⊆ 𝔤 is the set of elements commuting with "S": that is, 𝔷_𝔤(S) = { x ∈ 𝔤 : [x, s] = 0 for all s ∈ S }. The centralizer of 𝔤 itself is the "center" 𝔷(𝔤). Similarly, for a subspace "S", the normalizer subalgebra of "S" is 𝔫_𝔤(S) = { x ∈ 𝔤 : [x, s] ∈ S for all s ∈ S }. Equivalently, if "S" is a Lie subalgebra, 𝔫_𝔤(S) is the largest subalgebra such that "S" is an ideal of 𝔫_𝔤(S).
For formula_47, the commutator of two elements formula_48 shows that formula_49 is a subalgebra, but not an ideal. Indeed, every one-dimensional subspace of a Lie algebra carries an induced abelian Lie algebra structure, but it is generally not an ideal; in a simple Lie algebra, no abelian subalgebra can ever be an ideal.
For two Lie algebras formula_50 and formula_51, their direct sum Lie algebra is the vector space
formula_52consisting of all pairs formula_53, with the operation
so that the copies of formula_55 commute with each other: formula_56. Let formula_14 be a Lie algebra and formula_58 an ideal of formula_14. If the canonical map formula_60 splits (i.e., admits a section), then formula_14 is said to be a semidirect product of formula_58 and formula_63, formula_64. See also semidirect sum of Lie algebras.
Levi's theorem says that a finite-dimensional Lie algebra over a field of characteristic zero is a semidirect product of its radical and a complementary subalgebra (a Levi subalgebra).
A "derivation" on the Lie algebra formula_14 (or on any non-associative algebra) is a linear map formula_66 that obeys the Leibniz law, that is,
for all formula_37. The "inner derivation" associated to any formula_69 is the adjoint mapping formula_70 defined by formula_71. (This is a derivation as a consequence of the Jacobi identity.) The outer derivations are derivations which do not come from the adjoint representation of the Lie algebra. If formula_14 is semisimple, every derivation is inner.
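As a numerical sketch (NumPy assumed; the names are illustrative), one can verify the Leibniz law for the inner derivation ad_x, which, as noted above, follows from the Jacobi identity:

```python
import numpy as np

def bracket(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

rng = np.random.default_rng(1)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

def ad_x(v):
    # The inner derivation associated to x: ad_x(v) = [x, v].
    return bracket(x, v)

# Leibniz law: ad_x([y, z]) = [ad_x(y), z] + [y, ad_x(z)].
leibniz = np.allclose(ad_x(bracket(y, z)),
                      bracket(ad_x(y), z) + bracket(y, ad_x(z)))
```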
The derivations form a vector space formula_73, which is a Lie subalgebra of formula_74; the bracket is the commutator. The inner derivations form a Lie subalgebra of formula_73.
For example, given a Lie algebra ideal formula_76, the adjoint representation formula_77 of formula_14 acts as outer derivations on formula_58, since formula_80 for any formula_81 and formula_82. The Lie algebra formula_83 of upper triangular matrices in formula_84 has an ideal formula_85 of strictly upper triangular matrices (where the only non-zero elements are above the diagonal of the matrix). For instance, the commutator of elements in formula_86 and formula_87 gives formula_88, which shows that there exist outer derivations from formula_86 in formula_90.
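The fact that the bracket of two upper triangular matrices is strictly upper triangular (so the strictly upper triangular matrices form an ideal of the upper triangular ones) can be checked numerically. A sketch, assuming NumPy:

```python
import numpy as np

def bracket(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

rng = np.random.default_rng(2)
# Two random upper triangular matrices, elements of the Lie algebra b.
b1 = np.triu(rng.standard_normal((4, 4)))
b2 = np.triu(rng.standard_normal((4, 4)))

c = bracket(b1, b2)

# The diagonal entries of b1 @ b2 and b2 @ b1 agree (products of diagonals),
# so the commutator is strictly upper triangular: it lies in the ideal n.
strictly_upper = np.allclose(np.tril(c), 0)
```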
Let "V" be a finite-dimensional vector space over a field "F", formula_91 the Lie algebra of linear transformations and formula_92 a Lie subalgebra. Then formula_14 is said to be split if the roots of the characteristic polynomials of all linear transformations in formula_14 are in the base field "F". More generally, a finite-dimensional Lie algebra formula_14 is said to be split if it has a Cartan subalgebra whose image under the adjoint representation formula_96 is a split Lie algebra. A split real form of a complex semisimple Lie algebra (cf. #Real form and complexification) is an example of a split real Lie algebra. See also split Lie algebra for further information.
Any vector space formula_97 endowed with the identically zero Lie bracket becomes a Lie algebra. Such Lie algebras are called abelian, cf. below. Any one-dimensional Lie algebra over a field is abelian, by the alternating property of the Lie bracket.
Two important subalgebras of formula_108 are:
A complex matrix group is a Lie group consisting of matrices, formula_115, where the multiplication of "G" is matrix multiplication. The corresponding Lie algebra formula_1 is the space of matrices which are tangent vectors to "G" inside the linear space formula_117: this consists of derivatives of smooth curves in "G" at the identity: formula_118. The Lie bracket of formula_14 is given by the commutator of matrices, formula_120. Given the Lie algebra, one can recover the Lie group as the image of the matrix exponential mapping formula_121 defined by formula_122, which converges for every matrix formula_123: that is, formula_124.
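As an illustration of recovering group elements from the Lie algebra via the matrix exponential, here is a sketch (NumPy assumed; `mat_exp` is a hand-rolled truncated power series, not a library routine): exponentiating a traceless matrix produces a matrix of determinant 1, that is, an element of SL(2).

```python
import numpy as np

def mat_exp(x, terms=40):
    """Truncated power series sum_k x^k / k! for the matrix exponential."""
    out, term = np.zeros_like(x), np.eye(x.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ x / (k + 1)
    return out

X = np.array([[1.0, 2.0],
              [3.0, -1.0]])        # trace zero, so X lies in sl(2)
g = mat_exp(X)

# det(exp X) = exp(tr X) = 1, so g lies in SL(2).
det_is_one = np.isclose(np.linalg.det(g), 1.0)
```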
The following are examples of Lie algebras of matrix Lie groups:
Since
for any natural number formula_144 and any formula_145, one sees that the resulting Lie group elements are upper triangular 2×2 matrices with unit lower diagonal:
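Concretely, the claim can be checked for a strictly upper triangular 2×2 matrix, whose square vanishes, so the exponential series terminates after the linear term (a sketch, NumPy assumed; the value of `a` is arbitrary):

```python
import numpy as np

a = 2.5
N = np.array([[0.0, a],
              [0.0, 0.0]])         # strictly upper triangular, nilpotent

squared_is_zero = np.allclose(N @ N, 0)

# Since N^2 = 0, the exponential series stops: exp(N) = I + N.
expN = np.eye(2) + N
unipotent = np.allclose(expN, [[1.0, a], [0.0, 1.0]])
```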
Given a vector space "V", let formula_91 denote the Lie algebra consisting of all linear endomorphisms of "V", with bracket given by formula_120. A "representation" of a Lie algebra formula_14 on "V" is a Lie algebra homomorphism
A representation is said to be "faithful" if its kernel is zero. Ado's theorem states that every finite-dimensional Lie algebra has a faithful representation on a finite-dimensional vector space.
For any Lie algebra formula_14, we can define a representation
given by formula_165; it is a representation on the vector space formula_14 called the adjoint representation.
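That the adjoint map is indeed a representation, i.e. ad([x, y]) = ad(x)ad(y) − ad(y)ad(x), is a restatement of the Jacobi identity, and can be checked numerically (sketch, NumPy assumed, names illustrative):

```python
import numpy as np

def bracket(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

rng = np.random.default_rng(3)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

# ad([x, y]) applied to a test vector z ...
lhs = bracket(bracket(x, y), z)
# ... equals (ad(x) ad(y) - ad(y) ad(x)) applied to z.
rhs = bracket(x, bracket(y, z)) - bracket(y, bracket(x, z))

is_homomorphism = np.allclose(lhs, rhs)
```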
One important aspect of the study of Lie algebras (especially semisimple Lie algebras) is the study of their representations. (Indeed, most of the books listed in the references section devote a substantial fraction of their pages to representation theory.) Although Ado's theorem is an important result, the primary goal of representation theory is not to find a faithful representation of a given Lie algebra formula_14. Indeed, in the semisimple case, the adjoint representation is already faithful. Rather, the goal is to understand "all" possible representations of formula_14, up to the natural notion of equivalence. In the semisimple case over a field of characteristic zero, Weyl's theorem says that every finite-dimensional representation is a direct sum of irreducible representations (those with no nontrivial invariant subspaces). The irreducible representations, in turn, are classified by a theorem of the highest weight.
The representation theory of Lie algebras plays an important role in various parts of theoretical physics. There, one considers operators on the space of states that satisfy certain natural commutation relations. These commutation relations typically come from a symmetry of the problem—specifically, they are the relations of the Lie algebra of the relevant symmetry group. An example would be the angular momentum operators, whose commutation relations are those of the Lie algebra formula_151 of the rotation group SO(3). Typically, the space of states is very far from being irreducible under the pertinent operators, but one can attempt to decompose it into irreducible pieces. In doing so, one needs to know the irreducible representations of the given Lie algebra. In the study of the quantum hydrogen atom, for example, quantum mechanics textbooks give (without calling it that) a classification of the irreducible representations of the Lie algebra formula_151.
Lie algebras can be classified to some extent. In particular, this has an application to the classification of Lie groups.
Analogously to abelian, nilpotent, and solvable groups, defined in terms of the derived subgroups, one can define abelian, nilpotent, and solvable Lie algebras.
A Lie algebra formula_14 is "abelian" if the Lie bracket vanishes, i.e. ["x","y"] = 0, for all "x" and "y" in formula_14. Abelian Lie algebras correspond to commutative (or abelian) connected Lie groups such as vector spaces formula_173 or tori formula_174, and are all of the form formula_175 meaning an "n"-dimensional vector space with the trivial Lie bracket.
A more general class of Lie algebras is defined by the vanishing of all commutators of given length. A Lie algebra formula_14 is "nilpotent" if the lower central series
Lie rings need not be Lie groups under addition. Any Lie algebra is an example of a Lie ring. Any associative ring can be made into a Lie ring by defining a bracket operator formula_101. Conversely, to any Lie algebra there is an associated associative ring, called the universal enveloping algebra.
Lie rings are used in the study of finite "p"-groups through the Lazard correspondence. The lower central factors of a finite "p"-group are finite abelian "p"-groups, so modules over Z/"p"Z. The direct sum of the lower central factors is given the structure of a Lie ring by defining the bracket to be the commutator of two coset representatives. The Lie ring structure is enriched with another module homomorphism, the "p"th power map, making the associated Lie ring a so-called restricted Lie ring.
Lie rings are also useful in the definition of "p"-adic analytic groups and their endomorphisms, by studying Lie algebras over rings of integers such as the "p"-adic integers. The definition of finite groups of Lie type due to Chevalley involves restricting from a Lie algebra over the complex numbers to a Lie algebra over the integers, and then reducing modulo "p" to get a Lie algebra over a finite field.
Lie group
In mathematics, a Lie group (pronounced "Lee") is a group that is also a differentiable manifold: its elements are organized continuously and smoothly, as opposed to a discrete group, whose elements are separated. Classically, such groups were found by studying matrix subgroups formula_1 contained in formula_2 or formula_3, the group of formula_4 invertible matrices over formula_5 or formula_6. Lie groups are named after the Norwegian mathematician Sophus Lie (1842–1899), who laid the foundations of the theory of continuous transformation groups.
In rough terms, a Lie group is a continuous group: it is a group whose elements are described by several real parameters. As such, Lie groups provide a natural model for the concept of continuous symmetry, such as rotational symmetry in three dimensions (given by the special orthogonal group formula_7). Lie groups are widely used in many parts of modern mathematics and physics. Lie's original motivation for introducing Lie groups was to model the continuous symmetries of differential equations, in much the same way that finite groups are used in Galois theory to model the discrete symmetries of algebraic equations.
Lie groups are smooth differentiable manifolds and as such can be studied using differential calculus, in contrast with the case of more general topological groups. One of the key ideas in the theory of Lie groups is to replace the "global" object, the group, with its "local" or linearized version, which Lie himself called its "infinitesimal group" and which has since become known as its Lie algebra.
Lie groups play an enormous role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various "geometries" by specifying an appropriate transformation group that leaves certain geometric properties invariant. Thus Euclidean geometry corresponds to the choice of the group E(3) of distance-preserving transformations of the Euclidean space R3, conformal geometry corresponds to enlarging the group to the conformal group, whereas in projective geometry one is interested in the properties invariant under the projective group. This idea later led to the notion of a G-structure, where "G" is a Lie group of "local" symmetries of a manifold.
Lie groups (and their associated Lie algebras) play a major role in modern physics, with the Lie group typically playing the role of a symmetry of a physical system. Here, the representations of the Lie group (or of its Lie algebra) are especially important. Representation theory is used extensively in particle physics. Groups whose representations are of particular importance include the rotation group SO(3) (or its double cover SU(2)), the special unitary group SU(3) and the Poincaré group.
On a "global" level, whenever a Lie group acts on a geometric object, such as a Riemannian or a symplectic manifold, this action provides a measure of rigidity and yields a rich algebraic structure. The presence of continuous symmetries expressed via a Lie group action on a manifold places strong constraints on its geometry and facilitates analysis on the manifold. Linear actions of Lie groups are especially important, and are studied in representation theory.
In the 1940s–1950s, Ellis Kolchin, Armand Borel, and Claude Chevalley realised that many foundational results concerning Lie groups can be developed completely algebraically, giving rise to the theory of algebraic groups defined over an arbitrary field. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups, as well as in algebraic geometry. The theory of automorphic forms, an important branch of modern number theory, deals extensively with analogues of Lie groups over adele rings; "p"-adic Lie groups play an important role, via their connections with Galois representations in number theory.
A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication
means that "μ" is a smooth mapping of the product manifold into "G". These two requirements can be combined to the single requirement that the mapping
be a smooth mapping of the product manifold into "G".
We now present an example of a group with an uncountable number of elements that is not a Lie group under a certain topology. The group given by
with formula_17 a "fixed" irrational number, is a subgroup of the torus formula_18 that is not a Lie group when given the subspace topology. If we take any small neighborhood formula_19 of a point formula_20 in formula_21, for example, the portion of formula_21 in formula_19 is disconnected. The group formula_21 winds repeatedly around the torus without ever reaching a previous point of the spiral and thus forms a dense subgroup of formula_18.
The group formula_21 can, however, be given a different topology, in which the distance between two points formula_27 is defined as the length of the shortest path "in the group " formula_21 joining formula_29 to formula_30. In this topology, formula_21 is identified homeomorphically with the real line by identifying each element with the number formula_32 in the definition of formula_21. With this topology, formula_21 is just the group of real numbers under addition and is therefore a Lie group.
The group formula_21 is an example of a "Lie subgroup" of a Lie group that is not closed. See the discussion below of Lie subgroups in the section on basic concepts.
Let formula_36 denote the group of formula_4 invertible matrices with entries in formula_6. Any closed subgroup of formula_36 is a Lie group; Lie groups of this sort are called matrix Lie groups. Since most of the interesting examples of Lie groups can be realized as matrix Lie groups, some textbooks restrict attention to this class, including those of Hall and Rossmann. Restricting attention to matrix Lie groups simplifies the definition of the Lie algebra and the exponential map. The following are standard examples of matrix Lie groups.
All of the preceding examples fall under the heading of the classical groups.
A complex Lie group is defined in the same way using complex manifolds rather than real ones (example: formula_59), and similarly, using an alternate metric completion of formula_60, one can define a "p"-adic Lie group over the "p"-adic numbers, a topological group in which each point has a "p"-adic neighborhood.
Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples. The answer to this question turned out to be negative: in 1952, Gleason, Montgomery and Zippin showed that if "G" is a topological manifold with continuous group operations, then there exists exactly one analytic structure on "G" which turns it into a Lie group (see also Hilbert–Smith conjecture). If the underlying manifold is allowed to be infinite-dimensional (for example, a Hilbert manifold), then one arrives at the notion of an infinite-dimensional Lie group. It is possible to define analogues of many Lie groups over finite fields, and these give most of the examples of finite simple groups.
The language of category theory provides a concise definition for Lie groups: a Lie group is a group object in the category of smooth manifolds. This is important, because it allows generalization of the notion of a Lie group to Lie supergroups.
A Lie group can be defined as a (Hausdorff) topological group that, near the identity element, looks like a transformation group, with no reference to differentiable manifolds. First, we define an immersely linear Lie group to be a subgroup "G" of the general linear group formula_36 such that
(For example, a closed subgroup of formula_36; that is, a matrix Lie group satisfies the above conditions.)
Then a "Lie group" is defined as a topological group that (1) is locally isomorphic near the identities to an immersely linear Lie group and (2) has at most countably many connected components. Showing that the topological definition is equivalent to the usual one is technical (and beginning readers should skip the following) but is done roughly as follows:
The topological definition implies the statement that if two Lie groups are isomorphic as topological groups, then they are isomorphic as Lie groups. In fact, it states the general principle that, to a large extent, "the topology of a Lie group" together with the group law determines the geometry of the group.
Lie groups occur in abundance throughout mathematics and physics. Matrix groups or algebraic groups are (roughly) groups of matrices (for example, orthogonal and symplectic groups), and these give most of the more common examples of Lie groups.
The only connected Lie groups with dimension one are the real line formula_5 (with the group operation being addition) and the circle group formula_70 of complex numbers with absolute value one (with the group operation being multiplication). The formula_70 group is often denoted as formula_72, the group of formula_73 unitary matrices.
In two dimensions, if we restrict attention to simply connected groups, then they are classified by their Lie algebras. There are (up to isomorphism) only two Lie algebras of dimension two. The associated simply connected Lie groups are formula_74 (with the group operation being vector addition) and the affine group in dimension one, described in the previous subsection under "first examples."
There are several standard ways to form new Lie groups from old ones:
Some examples of groups that are "not" Lie groups (except in the trivial sense that any group having at most countably many elements can be viewed as a 0-dimensional Lie group, with the discrete topology), are:
To every Lie group we can associate a Lie algebra whose underlying vector space is the tangent space of the Lie group at the identity element and which completely captures the local structure of the group. Informally we can think of elements of the Lie algebra as elements of the group that are "infinitesimally close" to the identity, and the Lie bracket of the Lie algebra is related to the commutator of two such infinitesimal elements. Before giving the abstract definition we give a few examples:
The concrete definition given above for matrix groups is easy to work with, but has some minor problems: to use it we first need to represent a Lie group as a group of matrices, but not all Lie groups can be represented in this way, and it is not even obvious that the Lie algebra is independent of the representation we use. To get around these problems we give
the general definition of the Lie algebra of a Lie group (in 4 steps):
This Lie algebra formula_93 is finite-dimensional and it has the same dimension as the manifold "G". The Lie algebra of "G" determines "G" up to "local isomorphism", where two Lie groups are called locally isomorphic if they look the same near the identity element.
Problems about Lie groups are often solved by first solving the corresponding problem for the Lie algebras, and the result for groups then usually follows easily.
For example, simple Lie groups are usually classified by first classifying the corresponding Lie algebras.
We could also define a Lie algebra structure on "Te" using right invariant vector fields instead of left invariant vector fields. This leads to the same Lie algebra, because the inverse map on "G" can be used to identify left invariant vector fields with right invariant vector fields, and acts as −1 on the tangent space "Te".
The Lie algebra structure on "Te" can also be described as follows:
the commutator operation
on "G" × "G" sends ("e", "e") to "e", so its derivative yields a bilinear operation on "TeG". This bilinear operation is actually the zero map, but the second derivative, under the proper identification of tangent spaces, yields an operation that satisfies the axioms of a Lie bracket, and it is equal to twice the one defined through left-invariant vector fields.
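This second-order appearance of the bracket can be seen numerically for matrix groups: the group commutator exp(tX)exp(tY)exp(−tX)exp(−tY) equals I + t²[X, Y] + O(t³). A sketch (NumPy assumed; `mat_exp` is a hand-rolled truncated series):

```python
import numpy as np

def mat_exp(x, terms=30):
    """Truncated power series sum_k x^k / k!."""
    out, term = np.zeros_like(x), np.eye(x.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ x / (k + 1)
    return out

def bracket(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(4)
X, Y = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

t = 1e-3
C = (mat_exp(t * X) @ mat_exp(t * Y)
     @ mat_exp(-t * X) @ mat_exp(-t * Y))   # group commutator

# Approximating C by I + t^2 [X, Y] beats the naive guess C ~ I by a factor ~t.
err_naive = np.linalg.norm(C - np.eye(2))
err_bracket = np.linalg.norm(C - (np.eye(2) + t**2 * bracket(X, Y)))
second_order = err_bracket < err_naive / 10
```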
If "G" and "H" are Lie groups, then a Lie group homomorphism "f" : "G" → "H" is a smooth group homomorphism. In the case of complex Lie groups, such a homomorphism is required to be a holomorphic map. However, these requirements are a bit stringent; every continuous homomorphism between real Lie groups turns out to be (real) analytic.
The composition of two Lie homomorphisms is again a homomorphism, and the class of all Lie groups, together with these morphisms, forms a category. Moreover, every Lie group homomorphism induces a homomorphism between the corresponding Lie algebras. Let formula_95 be a Lie group homomorphism and let formula_96 be its derivative at the identity. If we identify the Lie algebras of "G" and "H" with their tangent spaces at the identity elements then formula_96 is a map between the corresponding Lie algebras:
One can show that formula_96 is actually a Lie algebra homomorphism (meaning that it is a linear map which preserves the Lie bracket). In the language of category theory, we then have a covariant functor from the category of Lie groups to the category of Lie algebras which sends a Lie group to its Lie algebra and a Lie group homomorphism to its derivative at the identity.
Two Lie groups are called "isomorphic" if there exists a bijective homomorphism between them whose inverse is also a Lie group homomorphism. Equivalently, it is a diffeomorphism which is also a group homomorphism.
Isomorphic Lie groups necessarily have isomorphic Lie algebras; it is then reasonable to ask how isomorphism classes of Lie groups relate to isomorphism classes of Lie algebras.
The first result in this direction is Lie's third theorem, which states that every finite-dimensional, real Lie algebra is the Lie algebra of some (linear) Lie group. One way to prove Lie's third theorem is to use Ado's theorem, which says every finite-dimensional real Lie algebra is isomorphic to a matrix Lie algebra. Meanwhile, for every finite-dimensional matrix Lie algebra, there is a linear group (matrix Lie group) with this algebra as its Lie algebra.
On the other hand, Lie groups with isomorphic Lie algebras need not be isomorphic. Furthermore, this result remains true even if we assume the groups are connected. To put it differently, the "global" structure of a Lie group is not determined by its Lie algebra; for example, if "Z" is any discrete subgroup of the center of "G" then "G" and "G"/"Z" have the same Lie algebra (see the table of Lie groups for examples). An example of importance in physics are the groups SU(2) and SO(3). These two groups have isomorphic Lie algebras, but the groups themselves are not isomorphic, because SU(2) is simply connected but SO(3) is not.
On the other hand, if we require that the Lie group be simply connected, then the global structure is determined by its Lie algebra: two simply connected Lie groups with isomorphic Lie algebras are isomorphic. (See the next subsection for more information about simply connected Lie groups.) In light of Lie's third theorem, we may therefore say that there is a one-to-one correspondence between isomorphism classes of finite-dimensional real Lie algebras and isomorphism classes of simply connected Lie groups.
A Lie group formula_1 is said to be simply connected if every loop in formula_1 can be shrunk continuously to a point in formula_1. This notion is important because of the following result that has simple connectedness as a hypothesis:
Lie's third theorem says that every finite-dimensional real Lie algebra is the Lie algebra of a Lie group. It follows from Lie's third theorem and the preceding result that every finite-dimensional real Lie algebra is the Lie algebra of a "unique" simply connected Lie group.
An example of a simply connected group is the special unitary group SU(2), which as a manifold is the 3-sphere. The rotation group SO(3), on the other hand, is not simply connected. (See Topology of SO(3).) The failure of SO(3) to be simply connected is intimately connected to the distinction between integer spin and half-integer spin in quantum mechanics. Other examples of simply connected Lie groups include the special unitary group SU(n), the spin group (double cover of rotation group) Spin(n) for formula_113, and the compact symplectic group Sp(n).
Methods for determining whether a Lie group is simply connected or not are discussed in the article on fundamental groups of Lie groups.
The exponential map from the Lie algebra formula_114 of the general linear group formula_115 to formula_115 is defined by the matrix exponential, given by the usual power series:
for matrices formula_87. If formula_1 is a closed subgroup of formula_115, then the exponential map takes the Lie algebra of formula_1 into formula_1; thus, we have an exponential map for all matrix groups. Every element of formula_1 that is sufficiently close to the identity is the exponential of a matrix in the Lie algebra.
The definition above is easy to use, but it is not defined for Lie groups that are not matrix groups, and it is not clear that the exponential map of a Lie group does not depend on its representation as a matrix group. We can solve both problems using a more abstract definition of the exponential map that works for all Lie groups, as follows.
For each vector formula_87 in the Lie algebra formula_93 of formula_1 (i.e., the tangent space to formula_1 at the identity), one proves that there is a unique one-parameter subgroup formula_128 such that formula_129. Saying that formula_130 is a one-parameter subgroup means simply that formula_130 is a smooth map into formula_1 and that
for all formula_134 and formula_135. The operation on the right hand side is the group multiplication in formula_1. The formal similarity of this formula with the one valid for the exponential function justifies the definition
This is called the exponential map, and it maps the Lie algebra formula_93 into the Lie group formula_1. It provides a diffeomorphism between a neighborhood of 0 in formula_93 and a neighborhood of formula_141 in formula_1. This exponential map is a generalization of the exponential function for real numbers (because formula_5 is the Lie algebra of the Lie group of positive real numbers with multiplication), for complex numbers (because formula_6 is the Lie algebra of the Lie group of non-zero complex numbers with multiplication) and for matrices (because formula_145 with the regular commutator is the Lie algebra of the Lie group formula_146 of all invertible matrices).
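A classical matrix instance of the exponential map is so(3) → SO(3): exponentiating a skew-symmetric matrix yields a rotation matrix, orthogonal with determinant 1. A sketch (NumPy assumed; the entries of `A` are arbitrary):

```python
import numpy as np

def mat_exp(x, terms=40):
    """Truncated power series sum_k x^k / k!."""
    out, term = np.zeros_like(x), np.eye(x.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ x / (k + 1)
    return out

A = np.array([[ 0.0, -1.2,  0.7],
              [ 1.2,  0.0, -0.4],
              [-0.7,  0.4,  0.0]])   # skew-symmetric: A.T == -A, so A is in so(3)

R = mat_exp(A)

orthogonal = np.allclose(R.T @ R, np.eye(3))     # R is orthogonal
det_one = np.isclose(np.linalg.det(R), 1.0)      # with determinant 1: R is in SO(3)
```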
Because the exponential map is surjective on some neighbourhood formula_147 of formula_141, it is common to call elements of the Lie algebra infinitesimal generators of the group formula_1. The subgroup of formula_1 generated by formula_147 is the identity component of formula_1.
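The one-parameter subgroup law used in the abstract definition above, γ(s + t) = γ(s)γ(t), can be verified for the matrix exponential, since sX and tX commute (sketch, NumPy assumed):

```python
import numpy as np

def mat_exp(x, terms=40):
    """Truncated power series sum_k x^k / k!."""
    out, term = np.zeros_like(x), np.eye(x.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ x / (k + 1)
    return out

rng = np.random.default_rng(6)
X = rng.standard_normal((2, 2))
s, t = 0.3, 0.5

# exp((s + t) X) = exp(s X) exp(t X), because s X and t X commute.
one_parameter = np.allclose(mat_exp((s + t) * X),
                            mat_exp(s * X) @ mat_exp(t * X))
```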
The exponential map and the Lie algebra determine the "local group structure" of every connected Lie group, because of the Baker–Campbell–Hausdorff formula: there exists a neighborhood formula_19 of the zero element of formula_93, such that for formula_155 we have
where the omitted terms are known and involve Lie brackets of four or more elements. In case formula_87 and formula_158 commute, this formula reduces to the familiar exponential law formula_159
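The first correction term of the Baker–Campbell–Hausdorff formula can be tested numerically: for small X and Y, exp(X)exp(Y) is approximated by exp(X + Y + ½[X, Y]) with an error one order smaller than that of the naive guess exp(X + Y). A sketch (NumPy assumed; `mat_exp` is a hand-rolled truncated series):

```python
import numpy as np

def mat_exp(x, terms=30):
    """Truncated power series sum_k x^k / k!."""
    out, term = np.zeros_like(x), np.eye(x.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ x / (k + 1)
    return out

rng = np.random.default_rng(5)
eps = 1e-3
X = eps * rng.standard_normal((2, 2))
Y = eps * rng.standard_normal((2, 2))

lhs = mat_exp(X) @ mat_exp(Y)

# Naive guess exp(X + Y): error of order eps^2 (the missing [X, Y]/2 term).
err_naive = np.linalg.norm(lhs - mat_exp(X + Y))
# BCH to second order: error of order eps^3.
err_bch = np.linalg.norm(lhs - mat_exp(X + Y + 0.5 * (X @ Y - Y @ X)))

bch_wins = err_bch < err_naive / 10
```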
The exponential map relates Lie group homomorphisms. That is, if formula_160 is a Lie group homomorphism and formula_161 the induced map on the corresponding Lie algebras, then for all formula_162 we have
In other words, the following diagram commutes,
The exponential map from the Lie algebra to the Lie group is not always onto, even if the group is connected (though it does map onto the Lie group for connected groups that are either compact or nilpotent). For example, the exponential map of SL(2, R) is not surjective. Also, the exponential map is neither surjective nor injective for infinite-dimensional (see below) Lie groups modelled on a "C"∞ Fréchet space, even from an arbitrarily small neighborhood of 0 to the corresponding neighborhood of 1.
A Lie subgroup formula_21 of a Lie group formula_1 is a Lie group that is a subset of formula_1 and such that the inclusion map from formula_21 to formula_1 is an injective immersion and group homomorphism. According to Cartan's theorem, a closed subgroup of formula_1 admits a unique smooth structure which makes it an embedded Lie subgroup of formula_1—i.e. a Lie subgroup such that the inclusion map is a smooth embedding.
Examples of non-closed subgroups are plentiful; for example take formula_1 to be a torus of dimension 2 or greater, and let formula_21 be a one-parameter subgroup of "irrational slope", i.e. one that winds around in "G". Then there is a Lie group homomorphism formula_173 with formula_174. The closure of formula_21 will be a sub-torus in formula_1.
The exponential map gives a one-to-one correspondence between the connected Lie subgroups of a connected Lie group formula_1 and the subalgebras of the Lie algebra of formula_1. Typically, the subgroup corresponding to a subalgebra is not a closed subgroup. There is no criterion solely based on the structure of formula_1 which determines which subalgebras correspond to closed subgroups.
One important aspect of the study of Lie groups is their representations, that is, the way they can act (linearly) on vector spaces. In physics, Lie groups often encode the symmetries of a physical system. The way one makes use of this symmetry to help analyze the system is often through representation theory. Consider, for example, the time-independent Schrödinger equation in quantum mechanics, formula_180. Assume the system in question has the rotation group SO(3) as a symmetry, meaning that the Hamiltonian operator formula_181 commutes with the action of SO(3) on the wave function formula_182. (One important example of such a system is the hydrogen atom.) This assumption does not necessarily mean that the solutions formula_182 are rotationally invariant functions. Rather, it means that the "space" of solutions to formula_180 is invariant under rotations (for each fixed value of formula_185). This space, therefore, constitutes a representation of SO(3). These representations have been classified, and the classification leads to a substantial simplification of the problem, essentially converting a three-dimensional partial differential equation into a one-dimensional ordinary differential equation.
The case of a connected compact Lie group "K" (including the just-mentioned case of SO(3)) is particularly tractable. In that case, every finite-dimensional representation of "K" decomposes as a direct sum of irreducible representations. The irreducible representations, in turn, were classified by Hermann Weyl. The classification is in terms of the "highest weight" of the representation. The classification is closely related to the classification of representations of a semisimple Lie algebra.
One can also study (in general infinite-dimensional) unitary representations of an arbitrary Lie group (not necessarily compact). For example, it is possible to give a relatively simple explicit description of the representations of the group SL(2,R) and the representations of the Poincaré group.
According to the most authoritative source on the early history of Lie groups (Hawkins, p. 1), Sophus Lie himself considered the winter of 1873–1874 as the birth date of his theory of continuous groups. Hawkins, however, suggests that it was "Lie's prodigious research activity during the four-year period from the fall of 1869 to the fall of 1873" that led to the theory's creation ("ibid"). Some of Lie's early ideas were developed in close collaboration with Felix Klein. Lie met with Klein every day from October 1869 through 1872: in Berlin from the end of October 1869 to the end of February 1870, and in Paris, Göttingen and Erlangen in the subsequent two years ("ibid", p. 2). Lie stated that all of the principal results were obtained by 1884. But during the 1870s all his papers (except the very first note) were published in Norwegian journals, which impeded recognition of the work throughout the rest of Europe ("ibid", p. 76). In 1884 a young German mathematician, Friedrich Engel, came to work with Lie on a systematic treatise to expose his theory of continuous groups. From this effort resulted the three-volume "Theorie der Transformationsgruppen", published in 1888, 1890, and 1893. The term "groupes de Lie" first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse.
Lie's ideas did not stand in isolation from the rest of mathematics. In fact, his interest in the geometry of differential equations was first motivated by the work of Carl Gustav Jacobi, on the theory of partial differential equations of first order and on the equations of classical mechanics. Much of Jacobi's work was published posthumously in the 1860s, generating enormous interest in France and Germany (Hawkins, p. 43). Lie's "idée fixe" was to develop a theory of symmetries of differential equations that would accomplish for them what Évariste Galois had done for algebraic equations: namely, to classify them in terms of group theory. Lie and other mathematicians showed that the most important equations for special functions and orthogonal polynomials tend to arise from group theoretical symmetries. In Lie's early work, the idea was to construct a theory of "continuous groups", to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations. However, the hope that Lie Theory would unify the entire field of ordinary differential equations was not fulfilled. Symmetry methods for ODEs continue to be studied, but do not dominate the subject. There is a differential Galois theory, but it was developed by others, such as Picard and Vessiot, and it provides a theory of quadratures, the indefinite integrals required to express solutions.
Additional impetus to consider continuous groups came from ideas of Bernhard Riemann, on the foundations of geometry, and their further development in the hands of Klein. Thus three major themes in 19th century mathematics were combined by Lie in creating his new theory: the idea of symmetry, as exemplified by Galois through the algebraic notion of a group; geometric theory and the explicit solutions of differential equations of mechanics, worked out by Poisson and Jacobi; and the new understanding of geometry that emerged in the works of Plücker, Möbius, Grassmann and others, and culminated in Riemann's revolutionary vision of the subject.
Although today Sophus Lie is rightfully recognized as the creator of the theory of continuous groups, a major stride in the development of their structure theory, which was to have a profound influence on subsequent development of mathematics, was made by Wilhelm Killing, who in 1888 published the first paper in a series entitled "Die Zusammensetzung der stetigen endlichen Transformationsgruppen" ("The composition of continuous finite transformation groups") (Hawkins, p. 100). The work of Killing, later refined and generalized by Élie Cartan, led to classification of semisimple Lie algebras, Cartan's theory of symmetric spaces, and Hermann Weyl's description of representations of compact and semisimple Lie groups using highest weights.
In 1900 David Hilbert challenged Lie theorists with his Fifth Problem presented at the International Congress of Mathematicians in Paris.
Weyl brought the early period of the development of the theory of Lie groups to fruition, for not only did he classify irreducible representations of semisimple Lie groups and connect the theory of groups with quantum mechanics, but he also put Lie's theory itself on firmer footing by clearly enunciating the distinction between Lie's "infinitesimal groups" (i.e., Lie algebras) and the Lie groups proper, and began investigations of topology of Lie groups. The theory of Lie groups was systematically reworked in modern mathematical language in a monograph by Claude Chevalley.
Lie groups may be thought of as smoothly varying families of symmetries. Examples of symmetries include rotation about an axis. What must be understood is the nature of 'small' transformations, for example, rotations through tiny angles, that link nearby transformations. The mathematical object capturing this structure is called a Lie algebra (Lie himself called them "infinitesimal groups"). It can be defined because Lie groups are smooth manifolds, so have tangent spaces at each point.
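As an illustrative sketch (added here; the numerical setup is an assumption, not part of the article): the tangent-space structure can be made concrete for rotations. The generators of infinitesimal rotations about the three coordinate axes are antisymmetric 3×3 matrices, and they close under the commutator [X, Y] = XY − YX, which is the Lie bracket of the Lie algebra so(3).

```python
# Sketch: the Lie bracket on so(3), the Lie algebra of the rotation group.
# Jx, Jy, Jz generate infinitesimal rotations about the x-, y-, z-axes.

def matmul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def bracket(x, y):
    """Lie bracket [x, y] = x y - y x."""
    xy, yx = matmul(x, y), matmul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(3)] for i in range(3)]

Jx = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
Jy = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
Jz = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

# The bracket closes on the generators: [Jx, Jy] = Jz, and cyclically.
assert bracket(Jx, Jy) == Jz
```

The cyclic relations [Jx, Jy] = Jz, [Jy, Jz] = Jx, [Jz, Jx] = Jy encode, at the infinitesimal level, how rotations about different axes fail to commute.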
The Lie algebra of any compact Lie group (very roughly: one for which the symmetries form a bounded set) can be decomposed as a direct sum of an abelian Lie algebra and some number of simple ones. The structure of an abelian Lie algebra is mathematically uninteresting (since the Lie bracket is identically zero); the interest is in the simple summands. Hence the question arises: what are the simple Lie algebras of compact groups? It turns out that they mostly fall into four infinite families, the "classical Lie algebras" A"n", B"n", C"n" and D"n", which have simple descriptions in terms of symmetries of Euclidean space. But there are also just five "exceptional Lie algebras" that do not fall into any of these families. E8 is the largest of these.
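For concreteness (standard identifications, given here as a reference rather than drawn from the text above), the four classical families correspond to the Lie algebras of familiar compact matrix groups:

```latex
A_n \cong \mathfrak{su}(n+1), \qquad
B_n \cong \mathfrak{so}(2n+1), \qquad
C_n \cong \mathfrak{sp}(n), \qquad
D_n \cong \mathfrak{so}(2n).
```

The five exceptional Lie algebras are G2, F4, E6, E7 and E8, of dimensions 14, 52, 78, 133 and 248 respectively, which is the sense in which E8 is the largest.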
Lie groups are classified according to their algebraic properties (simple, semisimple, solvable, nilpotent, abelian), their connectedness (connected or simply connected) and their compactness.
A first key result is the Levi decomposition, which says that every simply connected Lie group is the semidirect product of a solvable normal subgroup and a semisimple subgroup.
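A standard example may make this concrete (added for illustration): at the Lie algebra level the Levi decomposition reads g = r ⋊ s, with r the solvable radical and s semisimple. For the group of area-preserving affine maps of the plane, x ↦ Ax + b with det A = 1, the decomposition is

```latex
\mathfrak{g} \;=\; \mathbb{R}^{2} \rtimes \mathfrak{sl}(2,\mathbb{R}),
```

where the translations R² form the abelian (hence solvable) radical and sl(2, R) is semisimple.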
The identity component of any Lie group is an open normal subgroup, and the quotient group is a discrete group. The universal cover of any connected Lie group is a simply connected Lie group, and conversely any connected Lie group is a quotient of a simply connected Lie group by a discrete normal subgroup of the center. Any Lie group "G" can be decomposed into discrete, simple, and abelian groups in a canonical way as follows. Write "G"con for the connected component of the identity, "G"sol for the largest connected normal solvable subgroup, and "G"nil for the largest connected normal nilpotent subgroup,
so that we have a sequence of normal subgroups
1 ⊆ "G"nil ⊆ "G"sol ⊆ "G"con ⊆ "G".
Then "G"/"G"con is discrete, "G"con/"G"sol is a central extension of a product of simple connected groups, "G"sol/"G"nil is abelian (every connected abelian Lie group is isomorphic to a product of copies of R and the circle group), and "G"nil is nilpotent, so its ascending central series has abelian quotients.
This can be used to reduce some problems about Lie groups (such as finding their unitary representations) to the same problems for connected simple groups and nilpotent and solvable subgroups of smaller dimension.
Lie groups are often defined to be finite-dimensional, but there are many groups that resemble Lie groups, except for being infinite-dimensional. The simplest way to define infinite-dimensional Lie groups is to model them locally on Banach spaces (as opposed to Euclidean space in the finite-dimensional case), and in this case much of the basic theory is similar to that of finite-dimensional Lie groups. However this is inadequate for many applications, because many natural examples of infinite-dimensional Lie groups are not Banach manifolds. Instead one needs to define Lie groups modeled on more general locally convex topological vector spaces. In this case the relation between the Lie algebra and the Lie group becomes rather subtle, and several results about finite-dimensional Lie groups no longer hold.
The literature is not entirely uniform in its terminology as to exactly which properties of infinite-dimensional groups qualify the group for the prefix "Lie" in "Lie group". On the Lie algebra side of affairs, things are simpler since the qualifying criteria for the prefix "Lie" in "Lie algebra" are purely algebraic. For example, an infinite-dimensional Lie algebra may or may not have a corresponding Lie group. That is, there may be a group corresponding to the Lie algebra, but it might not be nice enough to be called a Lie group, or the connection between the group and the Lie algebra might not be nice enough (for example, failure of the exponential map to be onto a neighborhood of the identity). It is the "nice enough" that is not universally defined.
Some of the examples that have been studied include:
Lake Erie
Lake Erie is the fourth-largest lake by surface area of the five Great Lakes in North America, and the eleventh-largest lake in the world by surface area. It is the southernmost, shallowest, and smallest by volume of the Great Lakes, and therefore also has the shortest average water residence time. At its deepest point Lake Erie is deep.
Situated on the International Boundary between Canada and the United States, Lake Erie's northern shore is the Canadian province of Ontario, specifically the Ontario Peninsula, with the U.S. states of Michigan, Ohio, Pennsylvania, and New York on its western, southern, and eastern shores. These jurisdictions divide the surface area of the lake with water boundaries.
The lake was named by the Erie people, a Native American people who lived along its southern shore. The tribal name "erie" is a shortened form of the Iroquoian word , meaning "long tail".
Situated below Lake Huron, Erie's primary inlet is the Detroit River. The main natural outflow from the lake is via the Niagara River, which provides hydroelectric power to Canada and the U.S. as it spins huge turbines near Niagara Falls at Lewiston, New York and Queenston, Ontario. Some outflow occurs via the Welland Canal, part of the St. Lawrence Seaway, which diverts water for ship passages from Port Colborne, Ontario on Lake Erie, to St. Catharines on Lake Ontario, an elevation difference of . Lake Erie's environmental health has been an ongoing concern for decades, with issues such as overfishing, pollution, algae blooms, and eutrophication generating headlines.
Lake Erie (42.2° N, 81.2° W) has a mean elevation of above sea level. It has a surface area of , with a length of and a breadth of at its widest points.
It is the shallowest of the Great Lakes, with an average depth of 10 fathoms 3 feet or , and a maximum depth of . For comparison, Lake Superior has an average depth of 80 fathoms 3 feet or , a volume of , and a shoreline of 2,726 statute miles (4,385 km). Because it is the shallowest, Lake Erie is also the warmest of the Great Lakes, and in 1999 this almost became a problem for two nuclear power plants that depend on cold lake water for reactor cooling: the warm summer of 1999 brought lake temperatures close to the limit beyond which the plants could no longer be kept cool. Because of its shallowness, and despite being the warmest lake in summer, it is also the first to freeze in winter. The shallowest section of Lake Erie is the western basin, where depths average only ; as a result, "the slightest breeze can kick up lively waves," also known as seiches. The "waves build very quickly", according to other accounts. Sometimes fierce waves springing up unexpectedly have led to dramatic rescues; in one instance, a Cleveland resident trying to measure the dock near his house became trapped but was rescued by a fire department diver from Avon Lake, Ohio:
This area is also known as the "thunderstorm capital of Canada" with "breathtaking" lightning displays. Lake Erie is primarily fed by the Detroit River (from Lake Huron and Lake St. Clair) and drains via the Niagara River and Niagara Falls into Lake Ontario. Navigation downstream is provided by the Welland Canal, part of the Saint Lawrence Seaway. Other major contributors to Lake Erie include the Grand River, the Huron River, the Maumee River, the Sandusky River, the Buffalo River, and the Cuyahoga River. The drainage basin covers .
Point Pelee National Park, the southernmost point of the Canadian mainland, is located on a peninsula extending into the lake. Several islands are found in the western end of the lake; these belong to Ohio except for Pelee Island and eight neighboring islands, which are part of Ontario.
Major cities along Lake Erie include Buffalo, New York; Erie, Pennsylvania; Cleveland, Ohio; and Toledo, Ohio.
Islands tend to be located in the western side of the lake and total 31 in number (13 in Canada, 18 in the U.S.). The island-village of Put-in-Bay on South Bass Island attracts young crowds who sometimes wear "red bucket hats" and are prone to "break off cartwheels in the park" and general merriment. Kelleys Island was depicted by the "Chicago Tribune" as having charms that were "more subtle" than Put-in-Bay, and offers amenities such as beach lounging, hiking, biking and "marveling at deep glacial grooves left in limestone." Pelee Island is the largest of Erie's islands, accessible by ferry from Leamington, Ontario and Sandusky, Ohio. The island has a "fragile and unique ecosystem" with plants rarely found in Canada, such as wild hyacinth, yellow horse gentian ("Triosteum angustifolium") and prickly pear cactus, as well as two endangered snakes, the blue racer and the Lake Erie water snake. Songbirds migrate to Pelee in spring, and monarch butterflies stop over during the fall.
Lake Erie has a lake retention time of 2.6 years, the shortest of all the Great Lakes. The lake's surface area is . Lake Erie's water level fluctuates with the seasons as in the other Great Lakes. Generally, the lowest levels are in January and February, and the highest in June or July, although there have been exceptions. The average yearly level varies depending on long-term precipitation. Short-term level changes are often caused by seiches that are particularly high when southwesterly winds blow across the length of the lake during storms. These cause water to pile up at the eastern end of the lake. Storm-driven seiches can cause damage onshore. During one storm in November 2003, the water level at Buffalo rose by with waves of for a rise of . Meanwhile, at the western end of the lake, Toledo experienced a similar drop in water level. Lake water is used for drinking purposes.
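The 2.6-year retention time quoted above is consistent with a simple volume-over-outflow estimate. The sketch below is a back-of-envelope check; the lake volume (~484 km³) and mean Niagara outflow (~5,900 m³/s) are commonly cited figures assumed here, not taken from the text:

```python
# Back-of-envelope check of Lake Erie's ~2.6-year retention time:
# retention time ~ lake volume / mean outflow.
# Assumed values (not from the text above): volume ~484 km^3,
# mean outflow via the Niagara River ~5,900 m^3/s.

volume_km3 = 484.0
outflow_m3_per_s = 5_900.0

seconds_per_year = 365.25 * 24 * 3600
outflow_km3_per_year = outflow_m3_per_s * seconds_per_year / 1e9  # m^3 -> km^3

retention_years = volume_km3 / outflow_km3_per_year  # comes out near 2.6
```

Dividing the assumed volume by the annual outflow gives roughly 2.6 years, matching the stated figure.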
Lake Erie was carved out by glacier ice, and in its current form is less than 4,000 years old, which is a short span in geological terms. Before this, the land on which the lake now sits went through several complex stages. A large lowland basin formed over two million years ago as a result of an eastern flowing river that existed well before the Pleistocene ice ages. This ancient drainage system was destroyed by the first major glacier in the area, while it deepened and enlarged the lowland areas, allowing water to settle and form a lake. The glaciers were able to carve away more land on the eastern side of the lowland because the bedrock is made of shale which is softer than the carbonate rocks of dolomite and limestone on the western side. Thus, the eastern and central basins of the modern lake are much deeper than the western basin, which averages only deep and is rich in nutrients and fish. Lake Erie is the shallowest of the Great Lakes because the ice was relatively thin and lacked erosion power when it reached that far south, according to one view.
As many as three glaciers advanced and retreated over the land causing temporary lakes to form in the time periods in between each of them. Because each lake had a different volume of water their shorelines rested at differing elevations. The last of these lakes to form, Lake Warren, existed between about 13,000 and 12,000 years ago. It was deeper than the current Lake Erie, so its shoreline existed about inland from the modern one. The shorelines of these lakes left behind high ground sand ridges that cut through swamps and were used as trails for Indians and later, pioneers. These trails became primitive roads which were eventually paved. U.S. Route 30 west of Delphos and U.S. Route 20 west of Norwalk and east of Cleveland were formed in this manner. One can still see some of these ancient sand dunes that formed in the Oak Openings Region in Northwestern Ohio. There, the sandy dry lake bed soil was not enough to support large trees with the exception of a few species of oaks, forming a rare oak savanna.
At the time of European contact, several groups of Native American cultures were living around the shores of the eastern end of the lake. The Erie tribe (from whom the lake takes its name) lived along the southern edge, while the Neutrals (also known as Attawandaron) lived along the northern shore. Near Port Stanley there is a Native American village site dating from the 16th century, known as the "Southwold Earthworks", where as many as 800 Neutral Native Americans once lived; the archaeological remains include double earth walls winding around the grass-covered perimeter. Europeans named the tribe the "Neutral Indians" because these people refused to fight with other tribes. Both tribes were conquered and assimilated by their hostile eastern neighbors, the Iroquois Confederacy, between 1651 and 1657, in what is referred to as part of the Beaver Wars.
For decades after those wars, the land around eastern Lake Erie was claimed and utilized by the Iroquois as a hunting ground. As the power of the Iroquois waned during the last quarter of the 17th century, several other, mainly Anishinaabe Native American tribes, displaced them from the territories they claimed on the north shore of the lake. There was a legend of a Native American woman named Huldah, who, despairing over her lost British lover, hurled herself from a high rock from Pelee Island.
In 1669, the Frenchman Louis Jolliet was the first documented European to sight Lake Erie, although there is speculation that Étienne Brûlé may have come across it in 1615. Lake Erie was the last of the Great Lakes to be explored by Europeans, since the Iroquois who occupied the Niagara River area were in conflict with the French, and they did not allow explorers or traders to pass through. Explorers followed rivers out of Lake Ontario and portaged directly into Lake Huron. British authorities in Canada were nervous about possible expansion by American settlers across Lake Erie, so Colonel Talbot developed the Talbot Trail in 1809 as a way to stimulate settlement to the area; Talbot recruited settlers from Ireland and Scotland and there are numerous places named after him, such as Port Talbot and the Talbot River and Talbotville in southern Ontario.
During the War of 1812, Oliver Hazard Perry captured an entire British fleet in 1813 near Put-in-Bay, Ohio, despite having inferior numbers. American soldiers swept through the Ontario area around Port Rowan burning towns and villages, but spared a gristmill owned by a Canadian mason named John Backhouse, according to one report. Apart from two exceptions, the American Revolutionary War and the War of 1812, both conflicts between the U.S. and the United Kingdom, relations between the U.S. and Canada have been remarkably friendly, with an "unfortified boundary" and an agreement "that has kept all fleets of war off the Great Lakes."
In 1837, rebellions broke out between Canadian settlers and the British colonial government. These primarily concerned political reforms and land allocation issues. Some of the rebels stationed themselves in the U.S. and crossed the ice from Sandusky Bay to Pelee Island wearing "tattered overcoats and worn-out boots", and carrying muskets, pitchforks, and swords, but the islanders had already fled. Later, there was a battle on the ice with the Royal 32nd Regiment, and the rebels were driven to retreat.
Settlers established commercial fisheries on the north coast of the lake around the 1850s, and fishing became an important business. In the pre-Civil War years, railways sprouted everywhere, and by around 1852 railways circled the lake. Maritime traffic picked up, although the lake was usually closed by ice from December to early April, and ships had to wait for the ice to clear before proceeding. Since slavery had been abolished in Canada in 1833 but was still legal in the southern states of the U.S., a Lake Erie crossing was sometimes required for fugitive slaves seeking freedom:
Merchant shippers lacked modern radar and weather forecasting so vessels were often caught up in intense gales:
Reports of disasters usually came from sea captains passing information to reporters; in 1868, the captain of the "Grace Whitney" saw a sunken vessel with "three men clinging to the masthead" but could not help because of the gale and high seas.
A balloonist named John Steiner of Philadelphia made an ambitious trip across the lake in 1857. He was described in "The New York Times" as an "eronaut" or "aeronaut"; powered boats were called "propellers"; and fast was deemed "railroad speed". Here's an account of the day-long voyage over the lake:
In 1885, lake winds were so strong that water levels dropped substantially, sometimes by as much as two feet, so that at ports such as Toledo, watercraft could not load coal or depart the port.
During the history of the lake as a "fishery", there has been marked battling between opposing interest groups. Here's an 1895 newspaper account in which critics of commercial fishing issued dire predictions and called for government action to solve the problem:
Predictions of the lake being over-fished in 1895 were premature, since the fishery has survived commercial and sport fishing, pollution in the middle of the 20th century, invasive species and other ailments, but state and provincial governments, as well as national governments, have played a greater role as time went by. Business boomed; in 1901, the Carnegie Company proposed building a new harbor near Erie in Elk Creek to accommodate shipments from its tube-plant site nearby. In 1913, a memorial to Commodore Perry was built on Put-in-Bay island featuring a Doric column.
During the Prohibition years from 1919 to 1933, a "great deal of alcohol crossed Erie" along with "mobster corpses" dumped into the Detroit River which sometimes washed up on the beaches of Pelee Island. Notable rum runners included Thomas Joseph McGinty and the Purple Gang. The Coast Guard attempted to interdict the Canadian liquor with its Rum Patrol, and a casino operated on Middle Island.
During the 20th century, commercial fishing was prevalent, but so was the boom in manufacturing industry around the lake, and often rivers and streams were used as sewers to flush untreated sewage which ended up in the lake. Sometimes poorly constructed sanitary systems meant that when old mains broke, raw sewage would spill directly into the Cuyahoga and into the lake. A report in "Time" magazine in 1969 described the lake as a "gigantic cesspool" since only three of 62 beaches were rated "completely safe for swimming".
By 1975 the popular commercial fish blue pike had been declared extinct, although the declaration may have been premature. By the 1980s, there were about 130 fishing vessels with about 3,000 workers, but commercial fishing was declining rapidly, particularly from the American side.
In 2005, the Great Lakes States of Ohio, Michigan, New York, Pennsylvania, Illinois, Indiana, Wisconsin, Minnesota and the Canadian Provinces of Ontario and Quebec endorsed the Great Lakes-St. Lawrence River Basin Sustainable Water Resources Compact (Compact). The Compact was signed into law by President George W. Bush in September 2008. An international water rights policy overseen by the Great Lakes Commission, the Compact aims to prevent diversion of water from Great Lakes to distant states, as well as to set standards for use and conservation. It had support from both political parties, including former United States Senator George Voinovich (R-OH) and former Governor Jennifer Granholm (D-MI), but is not popular in the southwestern states due to frequent drought conditions and water scarcity.
Like the other Great Lakes, Erie produces lake-effect snow when the first cold winds of winter pass over the warm waters. When the temperatures of the relatively warm surface water and the colder air above it differ by at least to , "lake-effect snow becomes possible:"
Heavy lake-effect snowfalls can occur when cold air travels or longer over a large unfrozen lake. Lake-effect snow makes Buffalo and Erie the eleventh and thirteenth snowiest places in the entire United States, respectively, according to data collected from the National Climatic Data Center. Since winds blow primarily west to east along the main axis of the lake, lake-effect snow is more pronounced at the eastern end, in cities such as Buffalo and Erie. Buffalo typically gets of snow each winter, and sometimes of snow; the snowiest city is Syracuse, New York, which can receive heavy snowfall from both the lake-effect process and large coastal cyclones. A storm around Christmas in 2001 pounded Buffalo with of snow.
The lake effect ends or its effect is reduced, however, when the lake freezes over. In January 2011, for example, residents of Cleveland were glad when Lake Erie was "90 percent frozen" since it meant that the area had "made it over the hump" in terms of enduring repeated snowfalls which required much shoveling. Being the shallowest of the Great Lakes, it is the most likely to freeze and frequently does. On February 16, 2010, meteorologists reported that the lake had frozen over, marking the first time the lake had completely frozen over since the winter of 1995–1996. In contrast, Lake Michigan has never completely frozen over since the warmer and deeper portion is in the south, although it came close to being totally frozen during three harsh winters over the past century. When the lake freezes over, this usually shuts down the lake effect snowfall. In past years, lake ice was so thick that it was possible to drive over it or go sailing on iceboats; but in the first decade of the 21st century, the ice has not been thick enough for such activities. Many lake residents take advantage of the ice and travel; some drive to Canada and back. Here's one account of ice life around Put-in-Bay:
Strong winds have caused lake currents to shift sediment on the bottom, leading to "wickedly shifting sandbars" that have been the cause of shipwrecks. But winds can serve a peaceful purpose as well; there have been proposals to place electricity-producing wind turbines at windy and shallow points in the lake and along the coast, both in the United States and Canada. In 2010, there were plans for GE to develop five wind turbines to generate 20 megawatts of power by 2012, with plans to generate 1,000 megawatts by 2020; one proposal called for "gearless turbines" with long blades helped along by magnets. A nonprofit development group near Cleveland was developing plans to construct hundreds of turbines in the lake. A former steel mill site on the eastern edge of the lake in Buffalo, New York was redeveloped as an urban wind farm in 2007. Known as Steel Winds, the project currently houses 14 turbines capable of generating up to 35 megawatts of electricity. Samsung has proposed building an offshore wind farm on the north shore of the lake, from Port Maitland to Nanticoke, a distance of , but the plan has been met with opposition from residents for a number of reasons. Canadians near Leamington and Kingsville have organized protest groups to thwart attempts to bring wind turbines to the lake; reasons against the turbines include spoiling lake views. Plans to install turbines in Pigeon Bay, south of Leamington, were met with opposition as well. The notion that bird and bat migration may be hurt by the wind turbines has been used to argue against them; a reporter in "The Globe and Mail" wrote, "Given the tendency of turbines to make mincemeat of things airborne, it doesn't require great imagination to figure out what would happen."
The lake is also responsible for microclimates that are important to agriculture. Along its north shore is one of the richest areas of Canada's fruit and vegetable production; this southernmost tip, particularly in the area around Leamington, is known as Canada's "tomato capital". The area around Port Rowan in Ontario has special trees which grow because of the "tempering effect of the lake", and species include tulip trees, flowering dogwood, sassafras and sour gum. In this area there are many greenhouses which produce a "variety of tropical plants rarely cultivated so far north", including some species of cacti, because of the lake's tempering effect. Along the southeastern shore of Ohio, Pennsylvania, and New York is an important grape growing region, as are the islands in the lake. Apple orchards are abundant in northeast Ohio to western New York.
According to one estimate, of water evaporates each year from the surface of the lake, which allows for rainfall and other precipitation in surrounding areas. There are conflicting reports about the overall effect of global warming on the Great Lakes region, including Lake Erie. One account suggests that climate change is causing greater evaporation of lake water, leading to warmer temperatures as well as winter ice that is thinner or absent altogether, fueling concerns that "Erie appears to be shrinking" and is the most likely candidate among the five Great Lakes to "turn into a festering mud puddle." In 2010, the "Windsor Star" reported that the lake experienced "record-breaking temperatures" reaching in mid-August and compared the lake to a "bath tub". But long-term weather patterns remain subject to controversy.
Lake Erie has a complex ecosystem with many species in constant interaction. Human activity, such as pollution and maritime ship traffic, can affect this environment in numerous ways. Interactions involving newly introduced species can have beneficial as well as harmful effects; some introductions, such as that of Pacific salmon, have been seen as beneficial. Occasionally there have been mass die-offs of certain species of fish, sometimes for reasons unknown, as with the large numbers of rainbow smelt that died in May 2010.
The lake has been plagued with a number of invasive species, including zebra and quagga mussels, the goby and the grass carp. One estimate was that there have been 180 invasive species in the Great Lakes, some having traveled in ballast water in international ships. Zebra mussels and gobies have been credited with the increased population and size of smallmouth bass in Lake Erie. In 2008 there were concerns that the "newest invader swarming in the Great Lakes", which was the bloody-red shrimp, might harm fish populations and promote algae blooms.
Environmentalists and biologists study lake conditions via installations such as the Franz Theodore Stone Laboratory on Gibraltar Island. The lab, which was established in 1895, is the oldest biological field station in the United States. Stone Laboratory was donated to the Ohio State University by Julius Stone in 1925 as part of the university's Ohio Sea Grant College Program. In addition, the Great Lakes Institute of the University of Windsor has experts who study issues such as lake sediment pollution and the flow of contaminants such as phosphorus.
Common invasive animal species in Lake Erie include zebra mussels, quagga mussels, round gobies, spiny water fleas, fishhook water fleas, the sea lamprey, and white perch. The lake's main invasive plant species are Eurasian milfoil and purple loosestrife.
An ongoing concern is that "nutrient overloading from fertilizers, human and animal waste", known as eutrophication, in which additional nitrogen and phosphorus enter the lake, will cause plant life to "run wild and multiply like crazy". Since there are fewer wetlands, which act like "Nature's kidneys" by filtering nutrients, and greater "channelization of waterways", nutrients in the water can cause algal blooms to sprout, as well as "low-oxygen dead zones", in a complex interaction of natural forces. As of the 2010s, much of the phosphorus in the lake comes from fertilizer applied to no-till soybean and corn fields but washed into streams by heavy rains. The algal blooms result from growth of "Microcystis", a toxic blue-green alga that the zebra mussels which infest the lake do not eat.
There periodically is a "dead zone", or region of low oxygen, in the lake, whose exact location varies. Scientists from the National Oceanic and Atmospheric Administration have been studying the lake's blue-green algae blooms and trying to find ways to predict when they are spreading or where they might make landfall; typically the blooms arrive late each summer. This problem was extreme in the mid and late 1960s, and the Lake Erie Wastewater Management Study (LEWMS) conducted by the Buffalo District of the US Army Corps of Engineers determined that the eutrophication was due to "point sources", such as industrial outfalls and municipal sanitary and storm sewer outfalls, as well as "diffuse sources", such as overland runoff from farm and forest land. All these sources contribute nutrients, primarily phosphorus, to the lake. The growth of organisms in the lake then spikes to the point that oxygen levels are depleted. LEWMS made recommendations for reducing point-source outflows, as well as reducing farm contributions of phosphorus by changing fertilizer usage, employing "no-till" farming and other conservation practices. Many industrial and municipal sources have since been greatly reduced. The improved farming practices, which were voluntary, were followed for a while, resulting in a remarkable recovery of the lake in the 1970s.
The conservation practices, however, are not monitored and have not been kept up. One recent account suggests that the seasonal algae blooms in Lake Erie were possibly caused by "runoff from cities, fertilizers, zebra mussels, and livestock near water." A second report focuses on the zebra mussels as the cause of "big oxygen-poor dead zones", since they filter so much sediment that this produces an overgrowth of algae. One report suggests the oxygen-poor zone began about 1993 in the lake's central basin and becomes more pronounced during summer months, but why this happens remains somewhat of a mystery; some scientists speculate that the dead zone is a naturally occurring phenomenon. Another report cited Ohio's Maumee River as the main source of polluted runoff of phosphorus from industries, municipalities, tributaries and agriculture, and in 2008, satellite images showed the algal bloom heading toward Pelee Island, and possibly toward Lake Erie's central basin. Two-year, $2 million studies have tried to understand the growing dead zone, described as a "10-foot-thick layer of cold water at the bottom" in one area, which stretches "across the lake's center". It kills fish and the microscopic creatures of the lake's food chain, fouls the water, and may cause further problems in later years for sport and commercial fishing.
Algae blooms continued in early 2013, but new farming techniques, climate change, and even a change in Lake Erie's ecosystem have made phosphorus pollution more intractable.
Blue-green algae, or cyanobacteria, blooms were problematic in August 2019. According to a news report that month, "scientists fully expect [it] to overwhelm much of western Lake Erie again this summer". By August 12, 2019, the bloom extended for roughly 50 kilometres. "A large bloom does not necessarily mean the cyanobacteria ... will produce toxins", said Michael McKay, executive director of the Great Lakes Institute for Environmental Research (GLIER) at the University of Windsor. "Not enough is being done to stop fertilizer and phosphorus from getting into the lake and causing blooms," he added. Water testing was being conducted in August. The largest Lake Erie blooms to date occurred in 2015, at 10.5 on the severity index, and in 2011, at 10, according to the U.S. National Oceanic and Atmospheric Administration (NOAA). In early August, the 2019 bloom was expected to measure 7.5 on the severity index, but could range between 6 and 9. At that time, satellite images depicted a bloom stretching up to 1,300 square kilometres on Lake Erie, with the epicenter near Toledo, Ohio.
The Lake Erie water snake, a subspecies of the northern water snake ("Nerodia sipedon"), lives in the vicinity of Ohio's Put-in-Bay Harbor and had been placed on the threatened species list; a threatened species is one which may soon become an endangered species. By 2010, the water snake population had grown to over 12,000 snakes. Though its bite is non-venomous, the snake is a key predator in the lake's aquatic ecosystem, feeding on mudpuppies as well as walleye and smallmouth bass. It has also been helpful in keeping the population of goby fish in check. The snakes mate from late May through early June and can be found in large "mating balls" in which one female is bunched with several males.
There is a concern that Asian carp might enter the Great Lakes region and alter the ecosystem negatively. They have been described as "greedy giants that suck plankton from the water with the brutal efficiency of vacuum cleaners" and scientists worry that they may unravel the "aquatic food web" by crowding out other species.
There was concern in 2007 that snakehead fish could get into the Great Lakes area. Officials warn that if the fish invades, it could "decimate the aquatic food chain". A YouTube video mentioned in a newspaper account has a man claiming that the fish could "bite your entire hand off". The fish can reach in length, can "survive out of water for four days", "has a mouth full of teeth that can shear fish in half" and can "eat ducks and small mammals." The snakehead cannot, however, live in a lake that has completely frozen over, since it must come to the surface to breathe via its swim bladder.
In 1999, Doppler radar weather sensors detected millions of mayflies heading for Presque Isle, appearing as blue and green splotches on the radar in clouds measuring long. These insects were a sign of Lake Erie's move back to health, since mayflies require clean water to thrive. Biologist Masteller of Penn State Erie declared the bugs a "nice nuisance", since they signified the lake's return to health after forty years of absence. Each is long; the three main species of mayflies are "Ephemera simulans", "Hexagenia rigida" and "Hexagenia limbata". The insects mate over a 72-hour period from June through September: they fly in masses up to the shore, mate in the air, then the females each lay up to 8,000 eggs over the water; the eggs sink back down and the cycle repeats. The clouds of mayflies have sometimes caused power outages and made roads slippery with squashed insects. Because zebra mussels filter excess nutrients from the lake, mayfly larvae are able to thrive.
There have been incidents of birds dying from botulism, in 2000 and in 2002. Birds affected included grebes, common and red-breasted mergansers, loons, diving ducks, ring-billed gulls and herring gulls. One account suggests that bird populations are in trouble, notably the woodland warblers, whose populations had declined by around 60 percent as of 2008. Possible causes for declines in bird populations are farming practices, loss of habitat, soil depletion and erosion, and toxic chemicals. In 2006, there were concerns about possible bird flu after two wild swans on the lake were found diseased, but it was learned that they did not carry the deadly H5N1 virus. There were sightings of a magnificent frigatebird, a tropical bird with a two-metre wingspan, over the lake in 2008.
Lake Erie infamously became very polluted in the 1960s and 1970s as a result of the quantity of heavy industry situated in cities on its shores, with reports of bacteria-laden beaches and fish contaminated by industrial waste. In the 1970s, patches of the lake were declared dead because of industrial waste as well as sewage from runoffs; as "The New York Times" reporter Denny Lee wrote in 2004, "The lake, after all, is where the Rust Belt meets the water."
The water quality deteriorated partially due to increasing levels of the nutrient phosphorus in both the water and lake-bottom sediments. The resulting high nutrient levels caused eutrophication; algal blooms, masses of algae and fish kills increasingly fouled the shoreline during this period. There were incidents of the oily surfaces of tributary rivers emptying into Lake Erie catching fire: in 1969, Cleveland's Cuyahoga River erupted in flames, chronicled in a "Time" magazine article which lamented a tendency to use rivers flowing through major cities as "convenient, free sewers"; the Detroit River caught fire on another occasion. The outlook was gloomy.
In December 1970, a federal grand jury investigation led by U.S. Attorney Robert Jones began into water pollution allegedly caused by about 12 companies in northeastern Ohio. It was the first grand jury investigation of water pollution in the area. The grand jury indicted four corporations for polluting Lake Erie and waterways in northeast Ohio: facing fines were Cleveland Electric Illuminating Co., Shell Oil Co., the Uniroyal Chemical Division of Uniroyal Inc. and Olin Corp. The Attorney General of the United States, John N. Mitchell, held a press conference on December 18, 1970, referencing new pollution-control litigation, with particular reference to work with the new Environmental Protection Agency, and announcing the filing of a lawsuit that morning against the Jones and Laughlin Steel Corporation for discharging substantial quantities of cyanide into the Cuyahoga River near Cleveland. U.S. Attorney Robert Jones filed the misdemeanor charges in District Court, alleging violations of the 1899 Rivers and Harbors Act.
Cleveland's director of public utilities, Ben Stefanski, pursued a massive effort to "scrub the Cuyahoga"; the effort cost $100 million in bonds, according to one estimate. New sewer lines were built, and Clevelanders approved a bond issue by 2 to 1 to seriously upgrade the city's sewage system. Federal officials acted as well: the United States Congress passed the Clean Water Act of 1972, and in that year the United States and Canada established water pollution limits in an International Water Quality Agreement. The Corps' LEWMS, mentioned above, was also instituted at that time. The controls were effective, but recovery took decades; by 1999, large numbers of mayflies were again being spotted on the lake after a forty-year absence, signalling a return to health.
The clearing of the water column is also partly due to the introduction and rapid spread of zebra mussels from Europe, which had the effect of covering "the basin floor like shag carpeting", with each creature filtering "a liter of fresh water a day", helping to restore the lake to a cleaner state. The 1972 Great Lakes Water Quality Agreement also significantly reduced the dumping and runoff of phosphorus into the lake. The lake has since become clean enough to allow sunlight to penetrate its water and produce algae and seaweed, but a dead zone persists in the central Lake Erie basin during the late summer. The United States Environmental Protection Agency has studied this cyclic phenomenon since 2005. There have been instances of beach closings at Presque Isle off the coast of northwestern Pennsylvania because of unexplained E. coli contamination, possibly caused by storm water overflows after heavy downpours.
Since the 1970s, environmental regulation has led to a great increase in water quality and the return of economically important fish species such as walleye, along with other biological life. There was substantial evidence that the new controls had greatly reduced levels of DDT in the water by 1979. Cleanup efforts were described in 1979 as a notable environmental success story, suggesting that the cumulative effect of legislation, studies, and bans had reversed the effects of pollution.
Joint U.S.–Canadian agreements pushed 600 of 864 major industrial dischargers to meet requirements for keeping the water clean. One estimate was that $5 billion was spent to upgrade sewage treatment plants. Water quality has continued to improve since the 1970s.
There was a tentative exploratory plan to capture CO2, compress it to a liquid form, and pump it into porous rock a half-mile (800 m) beneath Lake Erie. According to chemical engineer Peter Douglas, there is sufficient storage space beneath Lake Erie to hold between 15 and 50 years of liquid CO2 emissions from the 4,000-megawatt Nanticoke coal plant. There has, however, been no substantial progress on the plan since 2007.
Lake Erie is home to one of the world's largest freshwater commercial fisheries. Lake Erie's fish populations are the most abundant of the Great Lakes, partially because of the lake's relatively mild temperatures and plentiful supply of plankton, the basic building block of the food chain. The lake's fish population accounts for an estimated 50% of all fish inhabiting the Great Lakes. The lake is "loaded with superstars" such as steelhead, walleye (American usage) or pickerel (Canadian usage), smallmouth bass and perch, as well as bass, trout, salmon, whitefish, smelt, and many others. The lake also hosts a long list of well-established introduced species. Common non-indigenous fish species include the rainbow smelt, alewife, white perch and common carp. Non-native sport fish such as rainbow trout and brown trout are stocked specifically for anglers to catch. Attempts to stock coho salmon failed, and its numbers are once again dwindling. Commercial landings are dominated by yellow perch and walleye, with substantial quantities of rainbow smelt and white bass also taken. Anglers target walleye and yellow perch, with some effort directed at rainbow trout. A variety of other species are taken in smaller quantities by both commercial and sport fleets.
Up until the end of the 1950s, the most commonly caught commercial fish (more than 50% of the commercial catch) was a subspecies of the walleye known as the blue walleye ("Sander vitreus glaucus"), sometimes erroneously called "blue pike". In the 1970s and 1980s, as pollution in the lake declined, counts of walleyes caught grew from 112,000 in 1975 to 4.1 million in 1985, with estimates of around 33 million walleyes in the basin, many of or more. Not all walleyes thrived: the combination of overfishing and the eutrophication of the lake by pollution caused the blue walleye population to collapse, and in the mid-1980s the subspecies was declared extinct. But the Lake Erie walleye was reportedly present in record numbers, even in 1989, according to one report. There have been concerns about rising levels of mercury in walleye; a study by the Canadian Ministry of the Environment noted an "increasing concentration trend", but one within acceptable limits established by authorities in Pennsylvania. Because of PCBs, it was recommended that persons eat no more than one walleye meal per month. Because of these and other concerns, in 1990 the National Wildlife Federation was on the verge of issuing a negative fish-consumption advisory for walleye and smallmouth bass, which had been the bread-and-butter catch of an $800 million commercial fishing industry.
The longest fish in Lake Erie is reportedly the sturgeon, which can grow to long and weigh , but it is an endangered species and mostly lives on the bottom of the lake. In 2009, there was a confirmed instance of a sturgeon being caught and returned to the lake alive, and there are hopes that the sturgeon population is resurging.
Estimates vary about the fishing market for the Great Lakes region. One estimate of the total market for fishing, including commercial as well as sport or recreational fishing, for all of the Great Lakes, was $4 billion annually, in 2007. A second estimate was that the fishing industry was valued at more than $7 billion.
But since high levels of pollution were discovered in the 1960s and 1970s, there has been continued debate over the desired intensity of commercial fishing. Commercial fishing in Lake Erie has been hurt by the bad economy as well as by government regulations limiting the size of the catch; one report suggested that the numbers of fishing boats and employees had declined by two-thirds in recent decades. Another concern had been that pollution in the lake, as well as toxins found inside fish, were working against commercial fishing interests. U.S. fishermen based along Lake Erie "lost their livelihood" over the past few decades, described as being "caught in a net of laws and bans", according to the "Pittsburgh Post-Gazette", and no longer catch fish such as whitefish for markets in New York. Pennsylvania had a special $3 stamp on fishing licenses to help "compensate commercial fishermen for their losses", but this program ended after five years, turning Erie's commercial fishing industry into an "artifact." One account blamed the commercial fishing ban on a "test of wills" between commercial and recreational fishermen: "One side needed large hauls. The other feared the lake was being emptied."
Commercial fishing is now predominantly based in Canadian communities, with a much smaller fishery—largely restricted to yellow perch—in Ohio. One account suggested that Canadian fishermen are "still at it and making money" and that they "know how to fish" by "using the old nets." The Ontario fishery is one of the most intensively managed in the world. However, there are reports that some Canadian commercial fishermen are dissatisfied with fishing quotas and have sued their government over the matter, and there have been complaints that the legislative body writing the quotas is "dominated by the U.S." and that sport fishing interests are favored at the expense of commercial fishing interests. Quota cuts of 30 to 45 percent for certain fish were made in 2007. The Lake Erie fishery was one of the first in the world managed on individual transferable quotas, and it features mandatory daily catch reporting and intensive auditing of the catch reporting system. Still, the commercial fishery is the target of critics who would like to see the lake managed for the exclusive benefit of sport fishing and the various industries serving the sport fishery. According to one report, the Canadian town of Port Dover is the home of the lake's largest fishing fleet, and the town features miniature golf, dairy bars, French-fry stands, and restaurants serving perch.
The lake can be thought of as a common asset with multiple purposes including being a "fishery". There was direct competition between commercial fishermen and sport fishermen (including charter boats and sales of fishing licenses) throughout the lake's history, with both sides seeking government assistance from either Washington or Ottawa, and trying to make their case in the "court" of public opinion through newspaper reporting. But other groups have entered the political process as well, including environmentalists, lakefront property owners, industry owners and workers seeking cost-effective solutions for sewage, ferry boat operators, even corporations making electric-generating wind turbines.
Management of the fishery is by consensus of all management agencies with an interest in the resource, which include the states of New York, Pennsylvania, Ohio and Michigan and the province of Ontario, working under the mandate of the Great Lakes Fishery Commission. The commission makes assessments using sophisticated mathematical modeling systems. The Commission has been the focus of considerable recrimination, primarily from angler and charter fishing groups in the U.S. which have had a historical antipathy to commercial fishing interests. This conflict is complex, dating from the 1960s and earlier, with the result in the United States that, by 2011, commercial fishing had been mostly eliminated from Great Lakes states. One report suggests that battling between diverse fishing interests began around Lake Michigan and evolved to cover the entire Great Lakes region. The analysis suggests that in the Lake Erie context, the competition between sport and commercial fishing involves universal patterns of such disputes, and that these conflicts are cultural, not scientific, and therefore not resolvable by reference to ecological data.
The lake also supports a strong sport fishery. While commercial fishing declined, sport fishing has remained. The deep, cool waters that offer the best fishing are on the Canadian side of the lake. As a result, a fishing boat that crosses the international border triggers the security requirements of border crossings, and fishermen are advised to carry their passports. If their boat crosses the invisible border line in the lake, then upon returning to the American shore, passengers must "drive to a local government reporting station and pose for pictures" for Customs officers by videophone. There are cumbersome rules for fishing-boat operators as well, who must fax passengers' personal information to a government agency an hour before leaving; officers "will be watching and doing spot checks from patrol boats and government aircraft". In 2008, authorities from the Pennsylvania Fish and Boat Commission tried stocking the lake with brown trout in an effort to build what is called a "put-grow-and-take" fishery. There was a report that charter boat fishing increased substantially on the American side, from 46 to 638 charter boats in operation in Ohio alone between 1975 and 1985, as pollution levels declined and populations of walleye increased substantially in the lake. In 1984, Ohio sold 27,000 nonresident fishing permits, and sport fishing was described as big business. In 1992, there were accounts of fishermen catching walleyes, and that the "runt of a five-man daily limit of 25 walleye might be a nuisance of ." It is possible to fish off piers in winter for the burbot, also known as the eelpout, mudblow, lawyer fish, cusk, or freshwater cod, which looks "ugly" but tastes great; the burbot makes a midwinter spawning run and is reportedly one of "Erie's glacial relics."
In winter, when the lake freezes, many fishermen go out on the ice, cut holes, and fish; it is even possible to build bonfires on the ice. But venturing onto Lake Erie ice can be dangerous. In a freak incident in 2009, warming temperatures, winds of and currents pushing eastward dislodged a miles-wide ice floe, which broke away from the shore and trapped more than 130 fishermen offshore; one man died while the rest were rescued by helicopters or boats.
The lake's formerly more extensive lakebed creates a favorable environment for agriculture in the bordering areas of Ontario, Ohio, Michigan, Pennsylvania, and New York. The Lake Erie sections of western New York State have a climate suitable for growing grapes, and there have been many vineyards and wineries in Chautauqua County as well as in Erie County in northwestern Pennsylvania. Much grape juice is produced in this region. The Canadian region of Lake Erie's north shore is also becoming a more prominent wine region; it has been dubbed the Lake Erie North Shore, or LENS, region and includes Pelee Island, and since it is farther north than comparable wine-growing areas in the world, the season is longer in terms of light. A longer growing season, due to lake-moderated temperatures, makes early frosts less likely.
The drainage basin has produced well-fertilized soil, and the north coast of Ohio is widely referred to as the state's nursery capital.
Lake Erie is a favorite for divers, since there are many shipwrecks, perhaps 1,400 to 8,000 according to one estimate, of which about 270 are "confirmed shipwreck locations." Most wrecks are undiscovered but believed to be well preserved and in good condition, lying at most only below the water surface. One report suggests there are more "wrecks per square mile" than at any other freshwater location, including wrecks of Native American watercraft. There are efforts to identify shipwreck sites and survey the lake floor to map the location of underwater sites, possibly for further study or exploration. While the lake is relatively warmer than the other Great Lakes, there is a thermocline, meaning that as a diver descends, the water temperature drops about , requiring a wetsuit. One estimate is that Lake Erie holds a quarter of the estimated 8,000 shipwrecks in the Great Lakes. The wrecks are preserved because the water is cold and salt-free, creating "intact time capsules down there". Divers have a policy of not removing or touching anything at a wreck, or else the "next person won't be able to see it"; the occasions when artifacts were removed were met with "outrage" by the diving community. The cold conditions make diving difficult and "strenuous", requiring divers with skill and experience. One charter firm from western New York State takes about 1,500 divers to Lake Erie shipwrecks in a typical season from April through October.
In 1991, the 19th-century sidewheeler "Atlantic" was discovered. It had sunk in 1852, west of Long Point, Ontario, after a collision with "Ogdensburg", a steamship sometimes referred to as a "propeller" in 19th-century parlance; survivors from "Atlantic" were saved by "Ogdensburg". One account suggests 130 people drowned, while another suggests about 20. The aftermath of the disaster led to calls for authorities to seize the captains of both ships so "that the cause of the collision may be correctly ascertained", as well as calls for more lifeboats and improved life preservers, since the earlier ones had proved "totally useless." There was speculation that the sunken vessel had been a gambling ship, and therefore that there might have been money aboard, but most historians were skeptical. In 1998, the wreck of the vessel "Adventure" became the first shipwreck registered with the state of Ohio as an "underwater archaeological site"; when it was discovered that "Adventure"s propeller had been removed and given to a junkyard, the propeller was rescued days before being converted to scrap metal and returned to its underwater home at the dive site. In 2003, divers discovered the steamer "Canobie", which sank in 1921, near Presque Isle. Other wrecks include the fish tub "Neal H. Dow" (1910), the "steamer-cum-barge" "Elderado" (1880), "W. R. Hanna", "Dundee", which sank north of Cleveland in 1900, "F. H. Prince", and "The Craftsman". In 2007, the wreck of the steamship named after "Mad" Anthony Wayne was found near Vermilion, Ohio in of water; the vessel sank in 1850 after its boilers exploded, killing 38 people. The wreck belongs to the state of Ohio and "salvaging it is illegal", but divers may visit it after it is surveyed. In addition, there are wrecks of smaller vessels, with occasional drownings of fishermen.
The finding of the well-preserved wreck of the Canadian-built British troop transport warship "Caledonia", sunk during the War of 1812, has led to accusations about plundering of the site and legal wrangling about whether the vessel should be resurfaced in time for the 2013 bicentennial of the end of the war.
Research into shipwrecks has been organized by the Peachman Lake Erie Shipwreck Research Center, or PLESRC, located on the grounds of the Great Lakes Historical Society. In 2008, the Great Lakes Historical Society announced plans to survey the underwater battle site of the Battle of Lake Erie in preparation for the bicentennial celebration of the battle in 2013.
There are numerous public parks around the lake. In western Pennsylvania, a wildlife reserve was established in 1991 in Springfield Township for hiking, fishing, cross-country skiing and walking along the beach. In Ontario, Long Point is a peninsula on the northwest shore near Port Rowan that extends into Lake Erie and is a stopover for migrating birds as well as turtles; one reporter found a "turtle-crossing" sign along the road. Long Point Provincial Park is located there and has been designated a UNESCO biosphere reserve. In Ontario's Sand Hill Park, east of Port Burwell, there is a high dune so steep it requires people to "crawl like crabs to the summit", although there are lake views from the top.
Crystal Beach, in the village of Crystal Beach, Ontario, at the eastern end of the lake, is one of several south-facing beaches on the Canadian side. It is therefore well situated for sunbathers, facing the sun from sunrise to sunset. The beach is gently sloping, with no sharp drop-offs or rip currents, and is usually cooled by southwest breezes, even on the hottest days.
In southern Michigan, Sterling State Park has campgrounds and facilities for hiking, biking, fishing and boating, with a sand beach for sunbathing, swimming, and picnicking.
"The New York Times" reporter Donna Marchetti took a bike tour around the Lake Erie perimeter in 1997, traveling per day and staying at bed and breakfasts. She went through the cities of Cleveland, Erie, Windsor, Detroit and Toledo, as well as resort towns, vineyards, and cornfields. The trip highlights were the "small port towns and rural farmlands of southern Ontario", although there are few bike repair shops on the Ontario portion of the route.
Lake Erie's islands tend to be in the westernmost part of the lake and vary in character.
Kayaking has become more popular along the lake, particularly in places such as Put-in-Bay, Ohio. There are extensive views, with steep cliffs, exotic wildlife and "100 miles of paddle-friendly shoreline." Long-distance swimmers have swum across the lake to set records; for example, a 15-year-old amputee swam the stretch across the lake in 2001. In 2008, 14-year-old Jade Scognamillo swam from New York's Sturgeon Point to Ontario's Crystal Beach, completing the 11.9-mile (19.2-km) swim in five hours, 40 minutes and 35 seconds and becoming the youngest swimmer to make the crossing; it is illegal for swimmers younger than 14 to attempt it. In Port Dover, Ontario, brave swimmers do high-dives at the annual "Polar Bear Swim" on the beach; in 2011, the water was , although the air was warmer, which did not deter 14-year-old Austin Merrell. Currents can pose a problem, and there have been occasional drownings.
The lake is dotted with distinctive lighthouses. A lighthouse off the coast of Cleveland, beset with cold winter lake spray, takes on an unusual, artistic icy shape, although the ice sometimes prevents the light from being seen by maritime vessels.
A New York Times reporter, biking through the region in 1997, found the Ontario town of Port Stanley to be the "prettiest of the port towns" with a lively "holiday air" but no "ticky-tacky commercialism".
There are numerous vineyards around the lake, including ones on Pelee Island which makes wines including pinot noir, riesling and chardonnay.
People can rent summer houses and cabins near the lake to enjoy the beaches and swimming, and to be close to activities such as wine tours, fishing and water parks. Presque Isle, a peninsula jutting out into the lake in northwestern Pennsylvania, has fine beaches, although there were incidents in 2006 when the beaches had to be closed because of unexplained unhealthy water conditions involving E. coli bacteria.
It was described as a "spit of sand, trees and swamp that arcs off the shore", with seafood restaurants and beautiful sunsets. Pelee Island, Canada's southernmost point and only three miles from Ohio, is a place that "forces you to do nothing".
Pleasure boat operators offer dinner cruises in the Cleveland area on the Cuyahoga River as well as Lake Erie.
The lake has been a "bustling thoroughfare" for maritime vessels for centuries. Ships headed eastward can take the Welland Canal, descending through a series of eight locks to Lake Ontario, a passage that takes about 12 hours, according to one source. Thousands of ships make this journey each year. During the 19th century, ships could enter the Buffalo River and travel the Erie Canal eastward to Albany, then south to New York City along the Hudson River. Generally there is heavy traffic on the lake except during the winter months from January through March, when ice prevents vessels from traveling safely. In 2007, there was a protest against Ontario's energy policy of allowing coal shipping on the lake: Greenpeace activists climbed a ladder on a freighter and "locked themselves to the conveyor belt device that helps to unload the ship's cargo"; three activists were arrested, the ship was delayed for more than four hours, and anti-coal messages were painted on the ship.
Lake Erie has the heaviest ship traffic of the Great Lakes and the roughest waters, which has given it the highest number of known shipwrecks among them. There have been other accidents as well; for example, in 2010, according to "The Star", crewmen from the freighter "Hermann Schoening" were sickened by phosphine gas, which had been used to fumigate the ship for pests; rescuers took them by tugboat to receive medical attention.
The Port of Cleveland generated over $350 million and handled over 15 million tons of cargo in a recent year. The current port facility is unable to handle larger cargo ships, and the cranes needed to lift goods such as steel onto truck trailers are insufficient to meet current shipping standards.
Ferryboats operate in numerous places, such as the Jet Express Ferry from Sandusky and Port Clinton. However, plans to operate a ferryboat between the U.S. port of Erie and the Ontario port of Port Dover ran into a slew of political problems, including security restrictions on both sides as well as additional fees required to hire border inspectors. In particular, Canada was described as having a "sticky set of laws"; the project was abandoned.
The Great Lakes Circle Tour is a designated scenic road system connecting all of the Great Lakes and the St. Lawrence River. One reporter thought the roads on the Canadian side were narrower, sometimes without shoulders, but were less trafficked except for the roads around the Ontario towns of Fort Erie and Port Colborne. Drivers can cross from the United States to the Canadian town of Fort Erie by going over the Peace Bridge.
In 2004, debris from a plane carrying nine people was found off an island in Lake Erie.
Since the border between the two nations is largely unpatrolled, it is possible for people to cross undetected from one country to the other, in either direction, by boat. In 2010, Canadian police arrested persons crossing the border illegally from the United States to Canada, near the Ontario town of Amherstburg.
https://en.wikipedia.org/wiki?curid=17946
Lake Ontario
Lake Ontario is one of the five Great Lakes of North America. It is surrounded on the north, west, and southwest by the Canadian province of Ontario, and on the south and east by the American state of New York, whose water boundaries meet in the middle of the lake. Ontario, Canada's most populous province, was named for the lake. Many cities, including Toronto (Canada's most populous city), Rochester, and Hamilton, are located on the lake's shores. In the Huron language, the name "Ontarí'io" means "great lake". Its primary inlet is the Niagara River from Lake Erie. The last in the Great Lakes chain, Lake Ontario serves as the outlet to the Atlantic Ocean via the Saint Lawrence River. It is the only Great Lake that does not border the state of Michigan.
Lake Ontario is the easternmost of the Great Lakes and the smallest in surface area (7,340 sq mi, 18,960 km2), although it exceeds Lake Erie in volume (393 cu mi, 1,639 km3). It is the 13th largest lake in the world. When its islands are included, the lake's shoreline is long. As the last lake in the Great Lakes' hydrologic chain, Lake Ontario has the lowest mean surface elevation of the lakes at 243 feet (74 m) above sea level; lower than its neighbor upstream. Its maximum length is and its maximum width is . The lake's average depth is 47 fathoms 1 foot (283 ft; 86 m), with a maximum depth of 133 fathoms 4 feet (802 ft; 244 m). The lake's primary source is the Niagara River, draining Lake Erie, with the St. Lawrence River serving as the outlet. The drainage basin covers 24,720 square miles (64,030 km2). As with all the Great Lakes, water levels change both within the year (owing to seasonal changes in water input) and among years (owing to longer-term trends in precipitation). These water level fluctuations are an integral part of lake ecology, and produce and maintain extensive wetlands. The lake also has an important freshwater fishery, although it has been negatively affected by factors including over-fishing, water pollution and invasive species.
Baymouth bars built by prevailing winds and currents have created a significant number of lagoons and sheltered harbors, mostly near (but not limited to) Prince Edward County, Ontario and the easternmost shores. Perhaps the best-known example is Toronto Bay, chosen as the site of the Upper Canada (Ontario) capital for its strategic harbour. Other prominent examples include Hamilton Harbour, Irondequoit Bay, Presqu'ile Bay, and Sodus Bay. The bars themselves are the sites of long beaches, such as Sandbanks Provincial Park and Sandy Island Beach State Park. These sand bars are often associated with large wetlands, which support large numbers of plant and animal species, as well as providing important rest areas for migratory birds. Presqu'ile, on the north shore of Lake Ontario, is particularly significant in this regard. One unique feature of the lake is the Z-shaped Bay of Quinte, which separates Prince Edward County from the Ontario mainland, save for an isthmus near Trenton; this feature also supports many wetlands and aquatic plants, as well as associated fisheries.
Major rivers draining into Lake Ontario include the Niagara River, Don River, Humber River, Trent River, Cataraqui River, Genesee River, Oswego River, Black River, Little Salmon River, and the Salmon River.
The lake basin was carved out of soft, weak Silurian-age rocks by the Wisconsin ice sheet during the last ice age. The action of the ice occurred along the pre-glacial Ontarian River valley which had approximately the same orientation as today's basin. Material that was pushed southward by the ice sheet left landforms such as drumlins, kames, and moraines, both on the modern land surface and the lake bottom, reorganizing the region's entire drainage system. As the ice sheet retreated toward the north, it still dammed the St. Lawrence valley outlet, so the lake surface was at a higher level. This stage is known as Lake Iroquois. During that time the lake drained through present-day Syracuse, New York into the Mohawk River, thence to the Hudson River and the Atlantic. The shoreline created during this stage can be easily recognized by the (now dry) beaches and wave-cut hills 10 to 25 miles (15 to 40 km) from the present shoreline.
When the ice finally receded from the St. Lawrence valley, the outlet was below sea level, and for a short time the lake became a bay of the Atlantic Ocean, in association with the Champlain Sea. Gradually the land rebounded from the release of the weight of about 6,500 feet (2,000 m) of ice that had been stacked on it. It is still rebounding about 12 inches (30 cm) per century in the St. Lawrence area. Since the ice receded from the area last, the most rapid rebound still occurs there. This means the lake bed is gradually tilting southward, inundating the south shore and turning river valleys into bays. Both north and south shores experience shoreline erosion, but the tilting amplifies this effect on the south shore, causing loss to property owners.
The name Ontario is derived from the Huron word "Ontarí'io", which means "great lake". The lake was a border between the Huron people and the Iroquois Confederacy in the pre-Columbian era. In the 1600s, the Iroquois drove out the Huron from southern Ontario and settled the northern shores of Lake Ontario. When the Iroquois withdrew and the Anishnabeg / Ojibwa / Mississaugas moved in from the north to southern Ontario, they retained the Iroquois name.
It is believed the first European to reach the lake was Étienne Brûlé in 1615. As was their practice, the French explorers introduced other names for the lake. In 1632 and 1656, the lake was referred to as Lac de St. Louis or Lake St. Louis by Samuel de Champlain and cartographer Nicolas Sanson respectively (likely for Louis XIV of France). In 1660, Jesuit historian Francis Creuxius coined the name "Lacus Ontarius". In a map drawn in the "Relation des Jésuites" (1662–1663), the lake bears the legend "Lac Ontario ou des Iroquois" with the name "Ondiara" in smaller type. A French map produced in 1712 (currently in the Canadian Museum of History), created by military engineer Jean-Baptiste de Couagne, identified Lake Ontario as "Lac Frontenac", named after Louis de Buade, Comte de Frontenac et de Palluau. He was a French soldier, courtier, and Governor General of New France from 1672 to 1682 and from 1689 to his death in 1698.
Artifacts which are believed to be of Norse origin have been found in the area of Sodus Bay, indicating the possibility of trading by the indigenous peoples with Norse explorers on the east coast of North America.
A series of trading posts were established by both the British and French, such as Fort Frontenac (Kingston) in 1673, Fort Oswego in 1722, and Fort Rouillé (Toronto) in 1750. After the French and Indian War, all forts around the lake were under British control. The United States did not take possession of forts on present-day American territory until the signing of the Jay Treaty in 1794. Permanent, non-military European settlement began during the American Revolution. As the easternmost lake and the nearest to the Atlantic seaboard of Canada and the United States, its population centres are among the oldest in the Great Lakes basin, with Kingston, Ontario, formerly the capital of Canada, dating to the 1670s (Fort Frontenac). The lake became a hub of commercial activity following the War of 1812, with canal building on both sides of the border and heavy travel by lake steamers. Steamer activity peaked in the mid-19th century before giving way to competition from railway lines.
In the late 19th and early 20th centuries, a type of scow known as a "stone hooker" was in operation on the north-west shore, particularly around Port Credit and Bronte. Stonehooking was the practice of raking flat fragments of Dundas shale from the shallow lake floor of the area for use in construction, particularly in the growing city of Toronto.
The Great Lakes watershed is a region of high biodiversity, and Lake Ontario is important for its diversity of birds, fish, reptiles, amphibians, and plants. Many of these special species are associated with shorelines, particularly sand dunes, lagoons, and wetlands. The importance of wetlands to the lake has been appreciated, and many of the larger wetlands have protected status. However, these wetlands are changing in part because the natural water level fluctuations have been reduced. Many wetland plants are dependent upon low water levels to reproduce. When water levels are stabilized, the area and diversity of the marsh are reduced. This is particularly true of meadow marsh (also known as wet meadow wetlands); for example, in Eel Bay near Alexandria Bay, regulation of lake levels has resulted in large losses of wet meadow. Often this is accompanied by the invasion of cattails, which displace many of the native plant species and reduce plant diversity. Eutrophication may accelerate this process by providing nitrogen and phosphorus for the more rapid growth of competitively dominant plants. Similar effects are occurring on the north shore, in wetlands such as Presqu'ile, which have interdunal wetlands with high plant diversity and many unusual plant species.
Most of the forests around the lake are deciduous forests dominated by trees including maple, oak, beech, ash and basswood. These are classified as part of the Mixedwood Plains Ecozone by Environment Canada, or as the Eastern Great Lakes and Hudson Lowlands by the United States Environmental Protection Agency, or as the Great Lakes Ecoregion by The Nature Conservancy. Deforestation in the vicinity of the lake has had many negative impacts, including loss of forest birds, extinction of native salmon, and increased amounts of sediment flowing into the lake. In some areas more than 90 percent of the forest cover has been removed and replaced by agriculture. Certain tree species, such as hemlock, have also been particularly depleted by past logging activity. Guidelines for restoration stress the importance of maintaining and restoring forest cover, particularly along streams and wetlands.
The open water is less affected by shoreline features, such as wetlands, and more affected by the nutrient levels that control the production of algae. Algae are the basis of the open-water food web, and the source of the primary production that ends up as lake trout and walleye at its top.
Like the other Great Lakes, Lake Ontario once had an important commercial fishery. It has been largely destroyed, mostly by over-fishing. Consider the lake sturgeon as but one example. Lake sturgeon are huge fish: they can grow up to three meters long and exceed 190 kg in weight. The females mature slowly and require decades to reach sexual maturity. It was once an abundant species in Lake Ontario. "In 1860, this species, taken on incidental catches of other fishes, was killed and dumped back in the lake, piled up on shore to dry and be burned, fed to pigs, or dug into the earth as fertilizer." It was even stacked like cordwood and used to fuel steamboats. Once its value was realized, "They were taken by every available means from spearing and jigging to set lines of baited or unbaited hooks laid on the bottom to trapnets, poundnets and gillnets." Over 5 million pounds were taken from adjoining Lake Erie in a single year. The fishery had largely collapsed by 1900, and the populations have never recovered. Like most sturgeons, the lake sturgeon is now rare and is protected in many areas. Populations in the Oswego River are being actively managed for recovery.
This food web has been damaged not only by over-fishing and changes in nutrient levels, but also by other types of pollution from industrial chemicals, agricultural fertilizers, untreated sewage, phosphates from laundry detergents, and pesticides. Some pollutant chemicals that have been found in the lake include DDT, benzo["a"]pyrene and other pesticides; PCBs, aramite, chromium, lead, mirex, mercury, and carbon tetrachloride. The International Joint Commission has identified areas where pollution is particularly intense (point sources) and mapped them as Areas of Concern. A Remedial Action Plan has been developed for each area. Some Lake Ontario Areas of Concern include the Oswego River and Rochester Embayment on the American side, and Hamilton Harbour and Toronto on the Canadian side.
By the 1960s and 1970s, the increased pollution caused frequent algal blooms to occur in the summer. These blooms killed large numbers of fish, and left decomposing piles of filamentous algae and dead fish along the shores. At times the blooms became so thick that waves could not break. Fish-eating birds such as the osprey, bald eagle and cormorant were being poisoned by contaminated fish. Since the 1960s and 1970s, environmental concerns have forced a cleanup of industrial and municipal wastes. Cleanup has been accomplished through better treatment plants, tighter environmental regulations, deindustrialization and increased public awareness. Today, Lake Ontario has recovered some of its pristine quality; for example, walleye, a fish species considered a marker of clean water, are now found. However, regional airshed pollution remains a concern. The lake has also become an important sport fishery, although with introduced species (Coho and Chinook salmon) rather than the native species. Bald eagle and osprey populations are also beginning to recover.
Invasive species are a problem for Lake Ontario, particularly lamprey and zebra mussels. Lamprey are being controlled by poisoning in the juvenile stage in the streams where they breed. Zebra mussels in particular are difficult to control, and pose major challenges for the lake and its waterways.
The lake has a natural seiche rhythm of eleven minutes. The seiche effect normally amounts to only about 3⁄4 inch (2 cm) but can be greatly amplified by earth movement, winds, and atmospheric pressure changes.
Because of its great depth, the lake as a whole never freezes in winter, but an ice sheet covering between 10% and 90% of the lake area typically develops, depending on the severity of the winter. Ice sheets typically form along the shoreline and in slack water bays, where the lake is not as deep. During the winters of 1877 and 1878, the ice sheet coverage was up to 95–100% in most of the lake. In the winter of 1812, the ice cover was stable enough that the American naval commander stationed at Sackets Harbor feared a British attack from Kingston over the ice.
When the cold winds of winter pass over the warmer water of the lake, they pick up moisture and drop it as lake effect snow. Since the prevailing winter winds are from the northwest, the southern and southeastern shoreline of the lake is referred to as the snowbelt. In some winters the area between Oswego and Pulaski may receive twenty or more feet (600 cm) of snowfall.
Also impacted by lake-effect snow is the Tug Hill Plateau, an area of elevated land to the east of Lake Ontario whose terrain creates ideal conditions for lake-effect snowfall. The "Hill", as it is often called, typically receives more snow than any other region in the eastern United States. As a result, Tug Hill is a popular location for winter enthusiasts, such as snowmobilers and cross-country skiers. Lake-effect snow often extends inland as far as Syracuse, which often records the most winter snowfall accumulation of any large city in the United States. Some cities elsewhere in the world receive more snow annually, such as Quebec City and Sapporo, Japan; Sapporo is often regarded as the snowiest city in the world.
Foggy conditions (particularly in fall) can be created by thermal contrasts and can be an impediment for recreational boaters. In a normal winter, Lake Ontario will be at most one quarter ice-covered, in a mild winter almost completely unfrozen. Lake Ontario has completely frozen over on five recorded occasions: from about January 20 to about March 20, 1830; in 1874; in 1893; in 1912; and in February 1934.
Lake breezes in spring tend to retard fruit bloom until the frost danger is past, and in the autumn delay the onset of fall frost, particularly on the south shore. Cool onshore winds also retard early bloom of plants and flowers until later in the spring season, protecting them from possible frost damage. Such microclimatic effects have enabled tender fruit production in a continental climate, with the southwest shore supporting a major fruit-growing area. Apples, cherries, pears, plums, and peaches are grown in many commercial orchards around Rochester. Between Stoney Creek and Niagara-on-the-Lake on the Niagara Peninsula is a major fruit-growing and wine-making area. The wine-growing region extends over the international border into Niagara and Orleans counties. Apple varieties that tolerate a more extreme climate are grown on the lake's north shore, around Cobourg.
A large conurbation called the Golden Horseshoe occupies the lake's westernmost shores, anchored by the cities of Toronto and Hamilton. Ports on the Canadian side include St. Catharines, Oshawa, Cobourg and Kingston, near the St. Lawrence River outlet. Close to 9 million people, or over a quarter of Canada's population, live within the watershed of Lake Ontario. The American shore is largely rural, with the exception of Rochester and the much smaller ports at Oswego and Sackets Harbor. The city of Syracuse is inland, connected to the lake by the New York State Canal System. Over 2 million people live in Lake Ontario's American watershed.
A high-speed passenger/vehicle ferry, the "Spirit of Ontario I", operated between Toronto and Rochester from June 17, 2004, to January 10, 2006, when the service was cancelled. The Crystal Lynn II, out of Irondequoit, New York, has been operating between Irondequoit Bay and Henderson, New York since May 2000, operated by Capt. Bob Tein.
The Great Lakes Waterway connects the lake downstream to the Atlantic Ocean via the St. Lawrence Seaway, and upstream via the Welland Canal to Lake Erie and the rest of the chain. The Trent-Severn Waterway for pleasure boats connects Lake Ontario at the Bay of Quinte to Georgian Bay (Lake Huron), via Lake Simcoe. The Oswego Canal connects the lake at Oswego to the New York State Canal System, with outlets to the Hudson River, Lake Erie, and Lake Champlain.
The Rideau Canal, also for pleasure boats, connects Lake Ontario at Kingston to the Ottawa River in downtown Ottawa, Ontario.
Nearly all of Lake Ontario's islands are on the eastern and north-eastern shores, between the Prince Edward County headland and the lake's outlet at Kingston. The Toronto Islands on the north-western shore are the remnants of a sand spit formed by coastal erosion, whereas the mostly larger eastern islands are underlain by the basement rock found throughout the region. The largest island is Wolfe Island, at the east end of the lake. It is accessible by ferry from both Canada and the U.S.
The Great Lakes Circle Tour and Seaway Trail are designated scenic road systems connecting all of the Great Lakes and the St. Lawrence River. As the Seaway Trail is posted on the U.S. side only, Lake Ontario is the only one of the five Great Lakes to have no posted bi-national circle tour.
In the 1800s, there were reports of an alleged creature, similar to the so-called Loch Ness Monster, being sighted in the lake. The creature is described as large with a long neck, green in colour, and generally causes a break in the surface waves.
Nearly 50 people have successfully swum across the lake. The first person to accomplish the feat was Marilyn Bell, who did it in 1954 at the age of 16. Toronto's Marilyn Bell Park is named in her honour. The park opened in 1984, just east of the spot where Bell completed her swim. In 1974, Diana Nyad became the first person to swim across the lake against the current (from north to south). On August 28, 2007, 14-year-old Natalie Lambert from Kingston, Ontario, made the swim, leaving Sackets Harbor, New York, and reaching Kingston's Confederation Basin less than 24 hours after she entered the lake. On August 19, 2012, 14-year-old Annaleise Carr became the youngest person to swim across the lake, completing the 32-mile (52 km) crossing from Niagara-on-the-Lake to Marilyn Bell Park in just under 27 hours.
The government of Ontario, which holds the lakebed rights to the Canadian portion of the lake under the Beds of Navigable Waters Act, does not permit offshore wind power generation. In "Trillium Power Wind Corporation v. Ontario (Natural Resources)", the Superior Court of Justice held that Trillium Power, an "Applicant of Record" since 2004 that had invested $35,000 in fees and claimed an injury of $2.25 billion after the Crown's 2011 policy decision against offshore wind farms, disclosed no reasonable cause of action.
While the Great Lakes once supported an industrial-scale fishery, with record hauls in 1899, overfishing later blighted the industry. Today, only recreational fishing remains.
Lake Ontario is also the site of several major commercial ports including the Port of Toronto and the Port of Hamilton. Hamilton Harbour is also the location of major steel production facilities.
https://en.wikipedia.org/wiki?curid=17947
Lake Michigan
Lake Michigan is one of the five Great Lakes of North America. It is the second-largest of the Great Lakes by volume and the third-largest by surface area, after Lake Superior and Lake Huron, and is slightly smaller in area than the U.S. state of West Virginia. To the east, its basin is conjoined with that of Lake Huron through the narrow Straits of Mackinac, giving it the same surface elevation as its easterly counterpart; the two are technically a single lake.
Lake Michigan is the only one of the Great Lakes located entirely within the territory of the United States. It is shared, from west to east, by the U.S. states of Wisconsin, Illinois, Indiana, and Michigan. Ports along its shores include Chicago, Illinois; Milwaukee, Wisconsin; Green Bay, Wisconsin; Gary, Indiana; and Muskegon, Michigan. Green Bay is a large bay in its northwest and Grand Traverse Bay is in the northeast. The word "Michigan" is believed to come from the Ojibwe word "michi-gami" meaning "great water".
Some of the earliest human inhabitants of the Lake Michigan region were the Hopewell Indians. Their culture declined after 800 AD, and for the next few hundred years, the region was the home of peoples known as the Late Woodland Indians. In the early 17th century, when western European explorers made their first forays into the region, they encountered descendants of the Late Woodland Indians: the Chippewa; Menominee; Sauk; Fox; Winnebago; Miami; Ottawa; and Potawatomi. The French explorer Jean Nicolet is believed to have been the first European to reach Lake Michigan, possibly in 1634 or 1638. In the earliest European maps of the region, the name "Lake Illinois", after the Illinois Confederation of tribes, appears in addition to "Michigan".
Lake Michigan is joined via the narrow, open-water Straits of Mackinac with Lake Huron, and the combined body of water is sometimes called Michigan–Huron (also Huron–Michigan). The Straits of Mackinac were an important Native American and fur trade route. Located on the southern side of the Straits is the town of Mackinaw City, Michigan, the site of Fort Michilimackinac, a reconstructed French fort founded in 1715, and on the northern side is St. Ignace, Michigan, site of a French Catholic mission to the Indians, founded in 1671. In 1673, Jacques Marquette, Louis Joliet and their crew of five Métis voyageurs followed Lake Michigan to Green Bay and up the Fox River, nearly to its headwaters, in their search for the Mississippi River, cf. Fox–Wisconsin Waterway. The eastern end of the Straits was controlled by Fort Mackinac on Mackinac Island, a British colonial and early American military base and fur trade center, founded in 1781.
With the advent of European exploration into the area in the late 17th century, Lake Michigan became part of a line of waterways leading from the Saint Lawrence River to the Mississippi River and thence to the Gulf of Mexico. French coureurs des bois and voyageurs established small ports and trading communities, such as Green Bay, on the lake during the late 17th and early 18th centuries.
In the 19th century, Lake Michigan played a major role in the development of Chicago and the Midwestern United States west of the lake. For example, 90% of the grain shipped from Chicago travelled east over Lake Michigan in peak years, and the share only rarely fell below 50% even after the Civil War and the major expansion of railroad shipping.
The first person to reach the deep bottom of Lake Michigan was J. Val Klump, a scientist at the University of Wisconsin–Milwaukee. Klump reached the bottom via submersible as part of a 1985 research expedition.
In 2007, a row of stones paralleling an ancient shoreline was discovered by Mark Holley, professor of underwater archeology at Northwestern Michigan College. This formation lies below the surface of the lake. One of the stones is said to have a carving resembling a mastodon. So far the formation has not been authenticated.
The warming of Lake Michigan was the subject of a report by Purdue University in 2018. In each decade since 1980, steady increases in surface temperature have occurred. This warming is likely to reduce native habitat and to adversely affect the survival of native species.
It is the sole Great Lake wholly within the borders of the United States; the others are shared with Canada. It lies in the region known as the American Midwest.
Lake Michigan has a surface area of 22,404 sq mi (58,026 km2): 13,237 square miles (34,284 km2) lie in Michigan, 7,358 square miles (19,056 km2) in Wisconsin, 234 square miles (606 km2) in Indiana, and 1,576 square miles (4,079 km2) in Illinois. This makes it the largest lake entirely within one country by surface area (Lake Baikal, in Russia, is larger by water volume), and the fifth-largest lake in the world. It is the larger half of Lake Michigan–Huron, which is the largest body of fresh water in the world by surface area. It is long by wide with a shoreline long. The lake's average depth is 46 fathoms 3 feet (279 ft; 85 m), while its greatest depth is 153 fathoms 5 feet (923 ft; 281 m). It contains a volume of 1,180 cubic miles (4,918 km³) of water. Green Bay in the northwest is its largest bay; Grand Traverse Bay in the northeast is another large bay. Lake Michigan's deepest region, which lies in its northern half, is called the Chippewa Basin (named after prehistoric Lake Chippewa) and is separated from the South Chippewa Basin by a relatively shallower area called the Mid Lake Plateau.
Twelve million people live along Lake Michigan's shores, mainly in the Chicago and Milwaukee metropolitan areas. The economy of many communities in northern Michigan and Door County, Wisconsin is supported by tourism, with large seasonal populations attracted by Lake Michigan. Seasonal residents often have summer homes along the waterfront and return home for the winter. The southern tip of the lake near Gary, Indiana is heavily industrialized. Cities on the shores of Lake Michigan include:
Illinois
Indiana
Michigan
Wisconsin
The Saint Lawrence Seaway and Great Lakes Waterway opened the Great Lakes to ocean-going vessels. Wider ocean-going container ships do not fit through the locks on these routes, and thus shipping is limited on the lakes. Despite their vast size, large sections of the Great Lakes freeze in winter, interrupting most shipping. Some icebreakers ply the lakes.
The Great Lakes are also connected by the Illinois Waterway to the Gulf of Mexico via the Illinois River (from Chicago) and the Mississippi River. An alternate track is via the Illinois River (from Chicago), to the Mississippi, up the Ohio, and then through the Tennessee-Tombigbee Waterway (combination of a series of rivers and lakes and canals), to Mobile Bay and the Gulf. Commercial tug-and-barge traffic on these waterways is heavy.
Pleasure boats can also enter or exit the Great Lakes by way of the Erie Canal and Hudson River in New York. The Erie Canal connects to the Great Lakes at the east end of Lake Erie (at Buffalo, NY) and at the south side of Lake Ontario (at Oswego, NY).
Lake Michigan has many beaches. The region is often referred to as the "Third Coast" of the United States, after those of the Atlantic Ocean and the Pacific Ocean. The sand is often soft and off-white, known as "singing sands" because of the squeaking noise (caused by its high quartz content) it emits when walked upon. Some beaches have sand dunes covered in green beach grass and sand cherries, and the water is usually clear and cool even in the late summer months. However, because prevailing westerly winds tend to move the surface water toward the east, there is a flow of warmer water to the Michigan shore in the summer.
The sand dunes located on the east shore of Lake Michigan are the largest freshwater dune system in the world. In fact, in multiple locations along the shoreline, the dunes rise several hundred feet above the lake surface. Large dune formations can be seen in many state parks, national forests and national parks along the Indiana and Michigan shoreline. Some of the most expansive and unique dune formations can be found at Indiana Dunes National Park, Saugatuck Dunes State Park, Warren Dunes State Park, Hoffmaster State Park, Silver Lake State Park, Ludington State Park, and Sleeping Bear Dunes National Lakeshore. Small dune formations can be found on the western shore of Lake Michigan at Illinois Beach State Park and moderate sized dune formations can be found in Kohler-Andrae State Park and Point Beach State Forest in Wisconsin. A large dune formation can be found in Whitefish Dunes State Park in Wisconsin in the Door Peninsula. Lake Michigan beaches in Northern Michigan are the only place in the world, aside from a few inland lakes in that region, where one can find Petoskey stones, the state stone.
The beaches of the western coast and the northernmost part of the east coast are often rocky, with some sandy beaches due to local conditions; while the southern and eastern beaches are typically sandy and dune-covered. This is partly because of the prevailing winds from the west (which also cause thick layers of ice to build on the eastern shore in winter).
The Chicago city waterfront is composed of parks, beaches, harbors and marinas, and residential developments connected by the Chicago Lakefront Trail. Where there are no beaches or marinas, stone or concrete revetments protect the shoreline from erosion. The Chicago lakefront is accessible for about between the city's southern and northern limits along the lake.
Two passenger and vehicle ferries operate ferry services on Lake Michigan, both connecting Wisconsin on the western shore with Michigan on the east. From May to October, the historic steam ship, , operates daily between Manitowoc, Wisconsin, and Ludington, Michigan, connecting U.S. Highway 10 between the two cities. The "Lake Express", established in 2004, carries passengers and vehicles across the lake between Milwaukee, Wisconsin, and Muskegon, Michigan.
The National Park Service maintains the Sleeping Bear Dunes National Lakeshore and Indiana Dunes National Park. Parts of the shoreline are within the Hiawatha National Forest and the Manistee National Forest. The Manistee National Forest section of the shoreline includes the Nordhouse Dunes Wilderness. The Lake Michigan division of the Michigan Islands National Wildlife Refuge is also within the lake.
There are numerous state and local parks located on the shores of the lake or on islands within the lake. A partial list follows.
The Milwaukee Reef, running under Lake Michigan from a point between Milwaukee and Racine to a point between Grand Haven and Muskegon, divides the lake into northern and southern basins. Each basin has a clockwise flow of water, deriving from rivers, winds, and the Coriolis effect. Prevailing westerly winds tend to move the surface water toward the east, producing a moderating effect on the climate of western Michigan. There is a mean difference in summer temperatures of 5 to 10 degrees Fahrenheit (2 to 5 degrees Celsius) between the Wisconsin and Michigan shores.
Hydrologically, Lake Michigan and Lake Huron are the same body of water (sometimes called Lake Michigan–Huron), but they are normally considered distinct lakes. Counted together, they form the largest body of fresh water in the world by surface area. The Mackinac Bridge is generally considered the dividing line between them. Both lakes are part of the Great Lakes Waterway. The main inflow to Lake Michigan from Lake Superior, through Lake Huron, is controlled by locks operated by the bi-national Lake Superior Board of Control.
Historic high water
Historic low water
In January 2013, Lake Michigan's monthly mean water levels dipped to an all-time low of , the lowest since record keeping began in 1918. The lakes were below their long-term average and had declined 17 inches since January 2012. Keith Kompoltowicz, chief of watershed hydrology for the U.S. Army Corps of Engineers' district office in Detroit, explained that the biggest factors behind the lower 2013 water levels were a combination of the "lack of a large snowpack" in the winter of 2011–2012 and very hot, dry conditions in the summer of 2012. Since then, water levels have rebounded, rising about 6 feet (2 meters) in 6.5 years, and are close to breaking the record high level.
Lake Michigan, like the other Great Lakes, supplies drinking water to millions of people in bordering areas. The lakes are collectively administered by the state and provincial governments adjacent to them pursuant to the Great Lakes Compact.
Environmental problems can still plague the lake. Steel mills and refineries operate near the Indiana shoreline. The "Chicago Tribune" reported that BP is a major polluter, dumping thousands of pounds of raw sludge into the lake every day from its Whiting, Indiana, oil refinery. In March 2014 BP's Whiting refinery was responsible for spilling more than of oil into the lake.
Lake Michigan is home to a small variety of fish species and other organisms. It was originally home to lake whitefish, lake trout, yellow perch, panfish, largemouth bass, smallmouth bass and bowfin, as well as some species of catfish. Improvements to the Welland Canal in 1918 opened the lake to an invasion of sea lampreys, which, together with overharvesting, caused a decline in native lake trout populations and, in turn, a population explosion of another invasive species, the alewife. In response, salmonids, including various strains of brown trout, steelhead (rainbow trout), coho and chinook salmon, were introduced as predators to control the alewife population. The program was so successful that the introduced trout and salmon populations exploded, creating a large sport fishery for these species. Lake Michigan is now stocked annually with steelhead, brown trout, and coho and chinook salmon, which have also begun reproducing naturally in some Lake Michigan tributaries. However, several introduced invasive species, such as lampreys, round goby, zebra mussels and quagga mussels, continue to cause major changes in water clarity and fertility, with knock-on effects throughout Lake Michigan's ecosystem that threaten the vitality of native fish populations.
Fisheries in inland waters of the United States are small compared to marine fisheries. The largest fisheries are the landings from the Great Lakes, worth about $14 million in 2001. Michigan's commercial fishery today consists mainly of 150 tribe-licensed commercial fishing operations, through the Chippewa-Ottawa Resource Authority (CORA) and tribes belonging to the Great Lakes Indian Fish and Wildlife Commission (GLIFWC), which harvest 50 percent of the Great Lakes commercial catch in Michigan waters, along with 45 state-licensed commercial fishing enterprises. The prime commercial species is the lake whitefish (Coregonus clupeaformis). The annual harvest declined from an average of from 1981 through 1999 to more recent annual harvests of . The price for lake whitefish dropped from $1.04/lb to as low as $0.40/lb during periods of high production.
Sport fishing targets include salmon, whitefish, smelt, lake trout and walleye. In the late 1960s, successful stocking programs for Pacific salmon led to the development of Lake Michigan's charter fishing industry.
Like all of the Great Lakes, Lake Michigan is today used as a major mode of transport for bulk goods. In 2002, 162 million net tons of dry bulk cargo were moved via the Lakes. This was, in order of volume: iron ore, grain and potash. The iron ore and much of the stone and coal are used in the steel industry. There is also some shipping of liquid and containerized cargo, but most container vessels cannot pass the locks on the Saint Lawrence Seaway because the ships are too wide. The total amount of shipping on the lakes has been on a downward trend for several years.
The Port of Chicago, operated by the Illinois International Port District, has grain (14 million bushels) and bulk liquid (800,000 barrels) storage facilities along Lake Calumet. The central element of the Port District, Calumet Harbor, is maintained by the U.S. Army Corps of Engineers.
Tourism and recreation are major industries on all of the Great Lakes:
https://en.wikipedia.org/wiki?curid=17948
Fibonacci
Fibonacci (, also , ; – ), also known as Leonardo Bonacci, Leonardo of Pisa, or Leonardo Bigollo Pisano ('Leonardo the Traveller from Pisa'), was an Italian mathematician from the Republic of Pisa, considered to be "the most talented Western mathematician of the Middle Ages".
The name he is commonly called, "Fibonacci", was coined in 1838 by the Franco-Italian historian Guillaume Libri and is short for ('son of Bonacci'). However, even earlier, in 1506, Perizolo, a notary of the Holy Roman Empire, mentioned Leonardo as "Lionardo Fibonacci".
Fibonacci popularized the Hindu–Arabic numeral system in the Western World primarily through his composition in 1202 of "Liber Abaci" ("Book of Calculation"). He also introduced Europe to the sequence of Fibonacci numbers, which he used as an example in "Liber Abaci".
Fibonacci was born around 1170 to Guglielmo, an Italian merchant and customs official. Guglielmo directed a trading post in Bugia, Algeria. Fibonacci travelled with him as a young boy, and it was in Bugia, where he was educated, that he learned about the Hindu–Arabic numeral system.
Fibonacci travelled around the Mediterranean coast, meeting with many merchants and learning about their systems of doing arithmetic. He soon realised the many advantages of the Hindu-Arabic system, which, unlike the Roman numerals used at the time, allowed easy calculation using a place-value system. In 1202, he completed the "Liber Abaci" ("Book of Abacus" or "The Book of Calculation"), which popularized Hindu–Arabic numerals in Europe.
Fibonacci was a guest of Emperor Frederick II, who enjoyed mathematics and science. In 1240, the Republic of Pisa honored Fibonacci (referred to as Leonardo Bigollo) by granting him a salary in a decree that recognized him for the services that he had given to the city as an advisor on matters of accounting and instruction to citizens.
Fibonacci is thought to have died between 1240 and 1250, in Pisa.
In the "Liber Abaci" (1202), Fibonacci introduced the so-called "modus Indorum" (method of the Indians), today known as the Hindu–Arabic numeral system. The manuscript book advocated numeration with the digits 0–9 and place value. The book showed the practical use and value of the new Hindu-Arabic numeral system by applying the numerals to commercial bookkeeping, converting weights and measures, calculation of interest, money-changing, and other applications. The book was well-received throughout educated Europe and had a profound impact on European thought. The original 1202 manuscript is not known to exist.
In a 1228 copy of the manuscript, the first section introduces the Hindu–Arabic numeral system, compares it with other systems such as Roman numerals, and gives methods for converting other numeral systems into Hindu–Arabic numerals. Replacing the Roman numeral system (with its ancient Egyptian multiplication method and its reliance on an abacus for calculations) with the Hindu–Arabic system was an advance that made business calculations easier and faster, assisting the growth of banking and accounting in Europe.
The second section explains the uses of Hindu-Arabic numerals in business, for example converting different currencies, and calculating profit and interest, which were important to the growing banking industry. The book also discusses irrational numbers and prime numbers.
"Liber Abaci" posed and solved a problem involving the growth of a population of rabbits based on idealized assumptions. The solution, generation by generation, was a sequence of numbers later known as Fibonacci numbers. Although Fibonacci's "Liber Abaci" contains the earliest known description of the sequence outside of India, the sequence had been described by Indian mathematicians as early as the sixth century.
In the Fibonacci sequence, each number is the sum of the previous two numbers. Fibonacci omitted the "0" included today and began the sequence with 1, 1, 2, ... . He carried the calculation up to the thirteenth place, the value 233, though another manuscript carries it to the next place, the value 377. Fibonacci did not speak about the golden ratio as the limit of the ratio of consecutive numbers in this sequence.
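The rule described here, where each term is the sum of the two before it, is simple to state in code. The following Python sketch (illustrative only, not part of the original text) generates the sequence exactly as Fibonacci presented it, beginning 1, 1 and omitting the modern leading 0:

```python
def fibonacci_terms(n):
    """Return the first n terms of the sequence as Fibonacci gave it,
    starting 1, 1 rather than with the modern leading 0."""
    terms = [1, 1]
    while len(terms) < n:
        # Each new term is the sum of the previous two.
        terms.append(terms[-1] + terms[-2])
    return terms[:n]

# Carried to the thirteenth place, the final value is 233,
# matching the calculation in "Liber Abaci":
print(fibonacci_terms(13))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
```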
In the 19th century, a statue of Fibonacci was set in Pisa. Today it is located in the western gallery of the Camposanto, historical cemetery on the Piazza dei Miracoli.
There are many mathematical concepts named after Fibonacci because of a connection to the Fibonacci numbers. Examples include the Brahmagupta–Fibonacci identity, the Fibonacci search technique, and the Pisano period. Beyond mathematics, namesakes of Fibonacci include the asteroid 6765 Fibonacci and the art rock band The Fibonaccis.
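As one illustration, the Brahmagupta–Fibonacci identity mentioned above states that a product of two sums of two squares is itself a sum of two squares, in two ways:

```latex
(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2
                       = (ac + bd)^2 + (ad - bc)^2
```

For example, with a = 1, b = 2, c = 1, d = 3: 5 × 10 = 50 = 25 + 25 = 49 + 1.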
https://en.wikipedia.org/wiki?curid=17949
Lake Superior
Lake Superior, the largest of the Great Lakes of North America, is also the world's largest freshwater lake by surface area, and the third-largest freshwater lake by volume. It is shared by the Canadian province of Ontario to the north, the U.S. state of Minnesota to the west, and Wisconsin and the Upper Peninsula of Michigan to the south. The farthest north and west of the Great Lakes chain, Superior has the highest elevation of all five Great Lakes and drains into the St. Marys River.
The Ojibwe name for the lake is "gichi-gami" (pronounced as "gitchi-gami" and "kitchi-gami" in other dialects), meaning "great sea." Henry Wadsworth Longfellow wrote the name as "Gitche Gumee" in The Song of Hiawatha, as did Gordon Lightfoot in his song "The Wreck of the Edmund Fitzgerald." According to other sources, the actual Ojibwe name is "Ojibwe Gichigami" ("Ojibwe's Great Sea") or "Anishinaabe Gichigami" ("Anishinaabe's Great Sea"). The 1878 dictionary by Father Frederic Baraga, the first one written for the Ojibway language, gives the Ojibwe name as Otchipwe-kitchi-gami (reflecting "Ojibwe Gichigami").
The first French explorers approaching the great inland sea by way of the Ottawa River and Lake Huron during the 17th century referred to their discovery as "le lac supérieur". Properly translated, the expression means "Upper Lake"—that is, the lake above Lake Huron. The lake was also called "Lac Tracy" (named for Alexandre de Prouville de Tracy) by 17th century Jesuit missionaries. The British, upon taking control of the region from the French in the 1760s following the French and Indian War, anglicized the lake's name to "Superior", "on account of its being superior in magnitude to any of the lakes on that vast continent".
Lake Superior empties into Lake Huron via the St. Marys River and the Soo Locks. Lake Superior is the largest freshwater lake in the world in area and the third largest in volume, behind Lake Baikal in Siberia and Lake Tanganyika in East Africa. The Caspian Sea, while larger than Lake Superior in both surface area and volume, is brackish; though presently isolated, prehistorically the Caspian has been repeatedly connected to and then isolated from the Mediterranean via the Black Sea.
Lake Superior has a surface area of , which is approximately the size of South Carolina or Austria. It has a maximum length of and maximum breadth of . Its average depth is with a maximum depth of . Lake Superior contains 2,900 cubic miles (12,100 km³) of water. There is enough water in Lake Superior to cover the entire land mass of North and South America to a depth of . The shoreline of the lake stretches (including islands).
American limnologist J. Val Klump was the first person to reach the lowest depth of Lake Superior, on July 30, 1985, as part of a scientific expedition. At 122 fathoms 1 foot () below sea level, that point is the second-lowest spot in the continental interior of the United States and the third-lowest spot in the interior of North America, after Iliamna Lake in Alaska (942 feet [287 m] below sea level) and Great Slave Lake in the Northwest Territories of Canada ( below sea level). (Though Crater Lake is the deepest lake in the United States and deeper than Lake Superior, Crater Lake's elevation is higher, so its deepest point is "above" sea level.)
While the temperature of the surface of Lake Superior varies seasonally, the temperature below is an almost constant 39 °F (4 °C). This variation in temperature makes the lake seasonally stratified. Twice per year, however, the water column reaches a uniform temperature of 39 °F (4 °C) from top to bottom and the lake waters mix thoroughly. This feature makes the lake dimictic. Because of its volume, Lake Superior has a retention time of 191 years.
Annual storms on Lake Superior regularly feature wave heights of over . Waves well over have been recorded.
Lake Superior is fed by more than 200 rivers, including the Nipigon River, the St. Louis River, the Pigeon River, the Pic River, the White River, the Michipicoten River, the Bois Brule River and the Kaministiquia River. Lake Superior drains into Lake Huron via the St. Marys River. There are rapids at the river's upper (Lake Superior) end, where the river bed has a relatively steep gradient. The Soo Locks enable ships to bypass the rapids and to overcome the height difference between Lakes Superior and Huron.
The lake's average surface elevation is above sea level. Until approximately 1887, the natural hydraulic conveyance through the St. Marys River rapids determined the outflow from Lake Superior. By 1921, development in support of transportation and hydroelectric power resulted in gates, locks, power canals and other control structures completely spanning St. Marys rapids. The regulating structure is known as the Compensating Works and is operated according to a regulation plan known as Plan 1977-A. Water levels, including diversions of water from the Hudson Bay watershed, are regulated by the International Lake Superior Board of Control, which was established in 1914 by the International Joint Commission.
Lake Superior's water level was at a new record low in September 2007, slightly less than the previous record low in 1926. Water levels recovered within a few days.
Historic high water
The lake's water level fluctuates from month to month, with the highest lake levels in October and November. The normal high-water mark is above datum (601.1 ft or 183.2 m). In the summer of 1985, Lake Superior reached its highest recorded level at above datum. The winter of 1986 set new high-water records through the winter and spring months (January to June), ranging from to above Chart Datum.
Historic low water
The lake's lowest levels occur in March and April. The normal low-water mark is below datum (601.1 ft or 183.2 m). In the winter of 1926 Lake Superior reached its lowest recorded level at below datum. Additionally, the entire first half of the year (January to June) included record low months. The low water was a continuation of the dropping lake levels from the previous year, 1925, which set low-water records for October through December. During the nine-month period of October 1925 to June 1926, water levels ranged from to below Chart Datum. In the summer of 2007 monthly historic lows were set; August at , September at .
According to a study by professors at the University of Minnesota Duluth, Lake Superior may have warmed faster than its surrounding area. Summer surface temperatures in the lake appeared to have increased by about between 1979 and 2007, compared with an approximately increase in the surrounding average air temperature. The increase in the lake's surface temperature may be related to the decreasing ice cover. Less winter ice cover allows more solar radiation to penetrate and warm the water. If trends continue, Lake Superior, which freezes over completely once every 20 years, could routinely be ice-free by 2040. This would be a significant departure from historical records as, according to Hubert Lamb, Samuel Champlain reported ice along the shores of Lake Superior in June 1608. Warmer temperatures could lead to more snow in the lake effect snow belts along the shores of the lake, especially in the Upper Peninsula of Michigan. Two recent consecutive winters (2013–2014 and 2014–2015) brought unusually high ice coverage to the Great Lakes, and on March 6, 2014, overall ice coverage peaked at 92.5%, the second-highest in recorded history. Lake Superior's ice coverage further beat 2014's record in 2019, reaching 95% coverage.
The largest island in Lake Superior is Isle Royale in the state of Michigan. Isle Royale contains several lakes, some of which also contain islands. Other well-known islands include Madeline Island in the state of Wisconsin, Michipicoten Island in the province of Ontario, and Grand Island (the location of the Grand Island National Recreation Area) in the state of Michigan.
The larger cities on Lake Superior include the twin ports of Duluth, Minnesota and Superior, Wisconsin; Thunder Bay, Ontario; Marquette, Michigan; and the twin cities of Sault Ste. Marie, Michigan, and Sault Ste. Marie, Ontario. Duluth-Superior, at the western end of Lake Superior, is the most inland point on the St. Lawrence Seaway and the most inland port in the world.
Among the scenic places on the lake are Apostle Islands National Lakeshore, Isle Royale National Park, Porcupine Mountains Wilderness State Park, Pukaskwa National Park, Lake Superior Provincial Park, Grand Island National Recreation Area, Sleeping Giant (Ontario) and Pictured Rocks National Lakeshore.
The Great Lakes Circle Tour is a designated scenic road system connecting all of the Great Lakes and the St. Lawrence River.
Lake Superior's size reduces the severity of the seasons of its humid continental climate (more typically seen in locations like Nova Scotia). The water surface's slow reaction to temperature changes, seasonally ranging between 32 and 55 °F (0–13 °C) around 1970, helps to moderate surrounding air temperatures in the summer (cooler with frequent sea breeze formations) and winter, and creates lake effect snow in colder months. The hills and mountains that border the lake hold moisture and fog, particularly in the fall.
The lake's surface temperature rose by 4.5 °F (2.5 °C) from 1979 to 2006.
The rocks of Lake Superior's northern shore date back to the early history of the earth. During the Precambrian (between 4.5 billion and 540 million years ago) magma forcing its way to the surface created the intrusive granites of the Canadian Shield. These ancient granites can be seen on the North Shore today. It was during the Penokean orogeny, part of the process that created the Great Lakes Tectonic Zone, that many valuable metals were deposited. The region surrounding the lake has proved to be rich in minerals. Copper, iron, silver, gold and nickel are or were the most frequently mined. Examples include the Hemlo gold mine near Marathon, copper at Point Mamainse, silver at Silver Islet and uranium at Theano Point.
The mountains steadily eroded, depositing layers of sediments that compacted and became limestone, dolomite, taconite and the shale at Kakabeka Falls.
The continent was later riven, creating one of the deepest rifts in the world. The lake lies in this long-extinct Mesoproterozoic rift valley, the Midcontinent Rift. Magma was injected between layers of sedimentary rock, forming diabase sills. This hard diabase protects the layers of sedimentary rock below, forming the flat-topped mesas in the Thunder Bay area.
Amethyst formed in some of the cavities created by the Midcontinent Rift and there are several amethyst mines in the Thunder Bay area.
Lava erupted from the rift and formed the black basalt rock of Michipicoten Island, Black Bay Peninsula, and St. Ignace Island.
During the Wisconsin glaciation 10,000 years ago, ice covered the region at a thickness of . The land contours familiar today were carved by the advance and retreat of the ice sheet. The retreat left gravel, sand, clay and boulder deposits. Glacial meltwaters gathered in the Superior basin, creating Lake Minong, a precursor to Lake Superior. Without the immense weight of the ice, the land rebounded, and a drainage outlet formed at Sault Ste. Marie, which would become known as the St. Marys River.
The first people came to the Lake Superior region 10,000 years ago after the retreat of the glaciers in the last Ice Age. They are known as the Plano, and they used stone-tipped spears to hunt caribou on the northwestern side of Lake Minong.
The next documented people are known as the Shield Archaic (c. 5000–500 BC). Evidence of this culture can be found at the eastern and western ends of the Canadian shore. They used bows and arrows and dugout canoes; fished, hunted, and mined copper for tools and weapons; and established trading networks. They are believed to be the direct ancestors of the Ojibwe and Cree.
The Laurel people (c. 500 BC to AD 500) developed seine net fishing, evidence being found at rivers around Superior such as the Pic and Michipicoten.
Another culture known as the Terminal Woodland Indians (c. AD 900–1650) has been found. They were Algonkian people who hunted, fished and gathered berries. They used snow shoes, birch bark canoes and conical or domed lodges. At the mouth of the Michipicoten River, nine layers of encampments have been discovered. Most of the Pukaskwa Pits were likely made during this time.
The Anishinaabe, which includes the Ojibwe or Chippewa, have inhabited the Lake Superior region for over five hundred years and were preceded by the Dakota, Fox, Menominee, Nipigon, Noquet and Gros Ventres. They called Lake Superior either "Ojibwe Gichigami" ("the Ojibwe's Great Sea") or "Anishnaabe Gichgamiing" ("the Anishinaabe's Great Sea"). After the arrival of Europeans, the Anishinaabe made themselves the middle-men between the French fur traders and other Native peoples. They soon became the dominant Native American nation in the region: they forced out the Sioux and Fox and won a victory against the Iroquois west of Sault Ste. Marie in 1662. By the mid-18th century, the Ojibwe occupied all of Lake Superior's shores.
In the 18th century, the fur trade in the region was booming, with the Hudson's Bay Company having a virtual monopoly until 1783, when the North West Company was formed to rival the Hudson's Bay Company. The North West Company built forts on Lake Superior at Grand Portage, Fort William, Nipigon, the Pic River, the Michipicoten River, and Sault Ste. Marie. But by 1821, with competition taking too great a toll on both, the companies merged under the Hudson's Bay Company name.
Many towns around the lake are either current or former mining areas, or engaged in processing or shipping. Today, tourism is another significant industry; the sparsely populated Lake Superior country, with its rugged shorelines and wilderness, attracts tourists and adventurers.
Lake Superior has been an important link in the Great Lakes Waterway, providing a route for the transportation of iron ore as well as grain and other mined and manufactured materials. Large cargo vessels called lake freighters, as well as smaller ocean-going freighters, transport these commodities across Lake Superior.
Shipping was slow to arrive at Lake Superior in the 19th century. The first steamboat to run on the lake, the "Independence", did not appear until 1847, whereas the first steamers on the other Great Lakes began arriving in 1816.
Because of ice, the Lake is closed to shipping from mid-January to late March. Exact dates for the shipping season vary each year, depending on weather conditions that form and break the ice.
The southern shore of Lake Superior between Grand Marais, Michigan, and Whitefish Point is known as the "Graveyard of the Great Lakes" and more ships have been lost around the Whitefish Point area than any other part of Lake Superior. These shipwrecks are now protected by the Whitefish Point Underwater Preserve.
Storms that claimed multiple ships include the Mataafa Storm in 1905 and the Great Lakes Storm of 1913.
Wreckage of "Cyprus", an ore carrier that sank on October 11, 1907, during a Lake Superior storm in 77 fathoms () of water, was located in August 2007. All but Charles G. Pitz of "Cyprus"s 23 crew perished. The ore carrier sank on her second voyage, while hauling iron ore from Superior, Wisconsin, to Buffalo, New York. Built in Lorain, Ohio, "Cyprus" was launched on August 17, 1907.
In 1918 the last warships to sink in the Great Lakes, French minesweepers "Inkerman" and "Cerisoles", vanished in a Lake Superior storm, perhaps upon striking the uncharted danger of the Superior Shoal in an otherwise deep part of the lake. With 78 crewmembers dead, their sinking marked the largest loss of life on Lake Superior to date.
According to legend, "Lake Superior seldom gives up her dead". This is because of the unusually low temperature of the water, estimated at under on average around 1970. Normally, bacteria feeding on a sunken decaying body will generate gas inside the body, causing it to float to the surface after a few days. But Lake Superior's water is cold enough year-round to inhibit bacterial growth, and bodies tend to sink and never resurface. This is alluded to in Lightfoot's "The Wreck of the "Edmund Fitzgerald"" ballad with the line "The lake, it is said, never gives up her dead". "Fitzgerald" adventurer Joe MacInnis reported that, in July 1994, explorer Frederick Shannon's Expedition 94 to the wreck of "Edmund Fitzgerald" discovered and filmed a man's body near the port side of her pilothouse, not far from the open door, "fully clothed, wearing an orange life jacket, and lying face down in the sediment".
More than 80 species of fish have been found in Lake Superior. Species native to the lake include: banded killifish, bloater, brook trout, burbot, cisco, lake sturgeon, lake trout, lake whitefish, longnose sucker, muskellunge, northern pike, pumpkinseed, rock bass, round whitefish, smallmouth bass, walleye, white sucker and yellow perch. In addition, many fish species have been either intentionally or accidentally introduced to Lake Superior: Atlantic salmon, brown trout, carp, chinook salmon, coho salmon, freshwater drum, pink salmon, rainbow smelt, rainbow trout, round goby, ruffe, sea lamprey and white perch.
Lake Superior has fewer dissolved nutrients relative to its water volume than the other Great Lakes, making it an oligotrophic lake that is less productive in terms of fish populations. This is a result of the underdeveloped soils in its relatively small watershed, as well as the watershed's small human population and limited agriculture. However, nitrate concentrations in the lake have been rising continuously for more than a century. They remain far below levels considered dangerous to human health, but this steady, long-term rise is an unusual record of environmental nitrogen buildup. It may relate to anthropogenic alterations of the regional nitrogen cycle, but researchers are still unsure of the cause of this change to the lake's ecology.
As for other Great Lakes fish, populations have also been affected by the accidental or intentional introduction of foreign species such as the sea lamprey and Eurasian ruffe. Accidental introductions have occurred in part by the removal of natural barriers to navigation between the Great Lakes. Overfishing has also been a factor in the decline of fish populations.
https://en.wikipedia.org/wiki?curid=17951
Leipzig
Leipzig (, also , , ; Upper Saxon: ; Sorbian: "Lipsk") is the most populous city in the German state of Saxony. With a population of 600,000 inhabitants as of 2019 (1.1 million residents in the larger urban zone), it is Germany's eighth most populous city as well as the second most populous city in the area of former East Germany after (East) Berlin. Together with Halle (Saale), the largest city of the neighbouring state of Saxony-Anhalt, the city forms the polycentric conurbation of Leipzig-Halle. Between the two cities (in Schkeuditz) lies Leipzig/Halle International Airport.
Leipzig is located about southwest of Berlin in the Leipzig Bay, which constitutes the southernmost part of the North German Plain, at the confluence of the White Elster River (progression: ) and two of its tributaries: the Pleiße and the Parthe. The name of the city as well as the names of many of its boroughs are of Slavic origin.
Leipzig has been a trade city since at least the time of the Holy Roman Empire. The city sits at the intersection of the Via Regia and the Via Imperii, two important medieval trade routes. Leipzig was once one of the major European centres of learning and culture in fields such as music and publishing. After the Second World War and during the period of the German Democratic Republic (East Germany), Leipzig remained a major urban centre in East German terms, but its cultural and economic importance declined. Events in Leipzig in 1989 played a significant role in precipitating the fall of communism in Central and Eastern Europe, mainly through demonstrations starting from St. Nicholas Church. The immediate effects of German reunification included the collapse of the local economy, which had come to depend on highly polluting heavy industry, as well as severe unemployment and urban blight. Starting around 2000, however, the decline was first arrested and then reversed. Leipzig has since undergone significant change, with the restoration of major historical buildings, the demolition of derelict properties of little historical value, and the development of new industries and a modern transport infrastructure.
Leipzig today is an economic centre and, according to the GfK market research institution, the most livable city in Germany; according to HWWI and Berenberg Bank, it has the second-best future prospects of all cities in Germany. The city is one of two seats of the German National Library (together with Frankfurt), as well as the seat of the German Federal Administrative Court. Leipzig Zoo is one of the most modern zoos in Europe and ranks first in Germany and second in Europe according to Anthony Sheridan. Since the opening of the Leipzig City Tunnel in 2013, Leipzig has formed the centrepiece of the S-Bahn Mitteldeutschland public transit system. Leipzig is currently listed as a Gamma World City, as Germany's "Boomtown", and as the European City of the Year 2019.
Leipzig has long been a major centre for music, both classical as well as modern "dark alternative music" or darkwave genres. The Oper Leipzig is one of the most prominent opera houses in Germany. Leipzig is also home to the University of Music and Theatre "Felix Mendelssohn Bartholdy". It was during a stay in this city that Friedrich Schiller wrote his poem "Ode to Joy". The Leipzig Gewandhaus Orchestra, established in 1743, is one of the oldest symphony orchestras in the world. Johann Sebastian Bach is one among many major composers who lived and worked in Leipzig.
The name Leipzig is derived from the Slavic word "Lipsk", which means "settlement where the linden trees (British English: lime trees; U.S. English: basswood trees) stand". An older spelling of the name in English is "Leipsic". The Latin name "Lipsia" was also used. The name is cognate with Lipetsk in Russia and Liepāja in Latvia.
In 1937 the Nazi government officially renamed the city "Reichsmessestadt Leipzig" (Imperial Trade Fair City Leipzig).
Since 1989 Leipzig has been informally dubbed "Hero City" ("Heldenstadt"), in recognition of the role that the Monday demonstrations there played in the fall of the East German regime – the name alludes to the honorary title awarded in the former Soviet Union to certain cities that played a key role in the victory of the Allies during the Second World War. The continued use of this nickname for Leipzig is reflected, for example, in the name of a blog for local arts and culture, "Heldenstadt.de".
More recently, the city has sometimes been nicknamed the "Boomtown of eastern Germany", "Hypezig" or "The better Berlin", celebrated by the media as a hip urban centre with a vibrant lifestyle, a creative scene and many startups.
Leipzig was first documented in 1015 in the chronicles of Bishop Thietmar of Merseburg and was endowed with city and market privileges in 1165 by Otto the Rich. Leipzig Trade Fair, started in the Middle Ages, has become an event of international importance and is the oldest surviving trade fair in the world.
There are records of commercial fishing operations on the river Pleiße in Leipzig dating back to 1305, when the Margrave Dietrich the Younger granted the fishing rights to the church and convent of St Thomas.
There were a number of monasteries in and around the city, including a Franciscan monastery after which the Barefoot Alley is named, and a monastery of Irish monks (destroyed in 1544).
The foundation of the University of Leipzig in 1409 initiated the city's development into a centre of German law and the publishing industry, and towards being the location of the Reichsgericht (Imperial Court of Justice) and the German National Library (founded in 1912).
During the Thirty Years' War, two battles took place at Breitenfeld, just outside the Leipzig city walls. The first Battle of Breitenfeld took place in 1631 and the second in 1642. Both battles resulted in victories for the Swedish-led side.
On 24 December 1701, an oil-fueled street lighting system was introduced. The city employed light guards who had to follow a specific schedule to ensure the punctual lighting of the 700 lanterns.
The Leipzig region was the arena of the 1813 Battle of Leipzig between Napoleonic France and an allied coalition of Prussia, Russia, Austria and Sweden. It was the largest battle in Europe before the First World War and the coalition victory ended Napoleon's presence in Germany and would ultimately lead to his first exile on Elba. The Monument to the Battle of the Nations celebrating the centenary of this event was completed in 1913. In addition to stimulating German nationalism, the war had a major impact in mobilizing a civic spirit in numerous volunteer activities. Many volunteer militias and civic associations were formed, and collaborated with churches and the press to support local and state militias, patriotic wartime mobilization, humanitarian relief and postwar commemorative practices and rituals.
When it was made a terminus of the first German long-distance railway to Dresden (the capital of Saxony) in 1839, Leipzig became a hub of Central European railway traffic, with Leipzig Hauptbahnhof the largest terminal station by area in Europe. The railway station has two grand entrance halls, the eastern one for the Royal Saxon State Railways and the western one for the Prussian state railways.
In the 19th century, Leipzig was a centre of the German and Saxon liberal movements. The first German labor party, the General German Workers' Association ("Allgemeiner Deutscher Arbeiterverein", ADAV) was founded in Leipzig on 23 May 1863 by Ferdinand Lassalle; about 600 workers from across Germany travelled to the foundation on the new railway. Leipzig expanded rapidly to more than 700,000 inhabitants. Huge "Gründerzeit" areas were built, which mostly survived both war and post-war demolition.
With the opening of a fifth production hall in 1907, the Leipziger Baumwollspinnerei became the largest cotton mill company on the continent, housing over 240,000 spindles. Daily production surpassed 5 million kilograms of yarn.
During the 1930s and 1940s, music was prominent throughout Leipzig. Many students attended the Felix Mendelssohn Bartholdy College of Music and Theatre (then named Landeskonservatorium). However, in 1944 it was closed due to World War II. It reopened soon after the war ended in 1945.
On 22 May 1930, Carl Friedrich Goerdeler was elected mayor of Leipzig. He was well known as an opponent of the Nazi regime. He resigned in 1937 when, in his absence, his Nazi deputy ordered the destruction of the city's statue of Felix Mendelssohn. On Kristallnacht in 1938, the 1855 Moorish Revival Leipzig synagogue, one of the city's most architecturally significant buildings, was deliberately destroyed. Goerdeler was later executed by the Nazis on 2 February 1945.
Several thousand forced labourers were stationed in Leipzig during the Second World War.
Beginning in 1933, many Jewish citizens of Leipzig were members of the Gemeinde, a large Jewish religious community spread throughout Germany, Austria and Switzerland. In October 1935, the Gemeinde helped found the Lehrhaus (English: a house of study) in Leipzig to provide different forms of studies to Jewish students who were prohibited from attending any institutions in Germany. Jewish studies were emphasized and much of the Jewish community of Leipzig became involved.
Like other cities under Nazi control, Leipzig was subject to aryanisation. Beginning in 1933, and intensifying in 1939, Jewish business owners were forced to give up their possessions and stores. Eventually Nazi officials were able to evict Jews from their own homes or force many of those living in the city to sell their houses. Many who sold their homes emigrated from Leipzig. Others moved to Judenhäuser, smaller houses that acted as ghettos, housing large groups of people.
As with other cities in Europe during the Holocaust, the Jews of Leipzig were greatly affected by the Nuremberg Laws. However, due to the Leipzig Trade Fair and the international attention it garnered, Leipzig was especially cautious about its public image. Despite this, the Leipzig authorities were not afraid to strictly apply and enforce anti-semitic measures. Shortly before Kristallnacht, Polish Jews living in the city were expelled.
On 20 December 1937, the Nazi authorities renamed the city Reichsmessestadt Leipzig, meaning the "Imperial Trade Fair City Leipzig". In early 1938, Zionism gained ground among Leipzig's Jewish citizens, many of whom attempted to flee before the deportations began. On 28 October 1938, Heinrich Himmler ordered the deportation of Polish Jews from Leipzig to Poland.
On 9 November 1938, as part of Kristallnacht, in Gottschedstrasse (German: Gottschedstraße), now a popular dining and nightlife area in Leipzig, synagogues and businesses were set on fire. Only a couple of days later, on 11 November 1938, many Jews in the Leipzig area were deported to the Buchenwald Concentration Camp. As World War II came to an end, much of Leipzig was destroyed. Following the war, the Communist Party of Germany (German: "Kommunistische Partei Deutschlands", "KPD") provided aid for the reconstruction of the city.
In 1933, a census recorded that over 11,000 Jews were living in Leipzig. In the 1939 census, the number had fallen to roughly 4,500, and by January 1942 only 2,000 remained. In that month, these 2,000 Jews began to be deported. On 13 July 1942, 170 Jews were deported from Leipzig to Auschwitz Concentration Camp. On 19 September 1942, 440 Jews were deported from Leipzig to Theresienstadt Concentration Camp. On 18 June 1943, the remaining 18 Jews still in Leipzig were deported from Leipzig to Auschwitz Concentration Camp. According to records of the two waves of deportations to Auschwitz there were no survivors. According to records of the Theresienstadt deportation, only 53 Jews survived.
Until late 1943, there was little threat of aerial bombing to the city. On the morning of 4 December 1943, however, the British Royal Air Force dropped over 1,000 tons of explosives, resulting in the deaths of nearly 1,000 civilians. This was the heaviest raid on the city up to that time. Because many of the buildings hit stood close together, a firestorm developed that overwhelmed the firefighters who rushed to the city. Unlike in the neighbouring city of Dresden, this was a largely conventional bombing with high explosives rather than incendiaries. The resultant pattern of loss was a patchwork rather than wholesale destruction of the centre, but it was nevertheless extensive.
The Allied ground advance into Germany reached Leipzig in late April 1945. The U.S. 2nd Infantry Division and U.S. 69th Infantry Division fought their way into the city on 18 April and completed its capture on 19 April 1945, after fierce house-to-house and block-to-block urban fighting. That month, the SS Gruppenführer and Mayor of Leipzig, Bruno Erich Alfred Freyberg, his wife and daughter; the Deputy Mayor and City Treasurer, Ernst Kurt Lisso, his wife and daughter; and Volkssturm Major Walter Dönicke committed suicide in Leipzig City Hall.
The United States turned the city over to the Red Army as it pulled back from the line of contact with Soviet forces in July 1945 to the designated occupation zone boundaries. Leipzig became one of the major cities of the German Democratic Republic (East Germany).
Following the end of World War II in 1945, Leipzig saw a slow return of Jews to the city.
In the mid-20th century, the city's trade fair assumed renewed importance as a point of contact with the Comecon Eastern Europe economic bloc, of which East Germany was a member. At this time, trade fairs were held at a site in the south of the city, near the Monument to the Battle of the Nations.
The planned economy of the German Democratic Republic, however, was not kind to Leipzig. Before the Second World War, Leipzig had developed a mixture of industry, creative business (notably publishing), and services (including legal services). During the period of the German Democratic Republic, services became the concern of the state, concentrated in (East) Berlin; creative business moved to West Germany; and Leipzig was left only with heavy industry. To make matters worse, this industry was extremely polluting, making Leipzig an even less attractive city to live in. Between 1950 and the end of the German Democratic Republic, the population of Leipzig fell from 600,000 to 500,000.
In October 1989, after prayers for peace at St. Nicholas Church, established in 1983 as part of the peace movement, the Monday demonstrations started as the most prominent mass protest against the East German government. The reunification of Germany, however, was at first not good for Leipzig. The centrally planned heavy industry that had become the city's speciality was, in terms of the advanced economy of reunited Germany, almost completely unviable, and closed. Within only six years, 90% of jobs in industry had vanished. As unemployment rocketed, the population fell dramatically; some 100,000 people left Leipzig in the ten years after reunification, and vacant and derelict housing became an urgent problem.
Starting in 2000, an ambitious (and subsequently much-praised) urban-renewal plan first stopped Leipzig's decline and then reversed it. The plan focused on saving and improving as much as possible of the city's urban structure, especially its attractive historic center and various architectural gems, and attracting new industries, partly through infrastructure improvement.
Nowadays, Leipzig is an important economic centre in Germany. Since the 2010s, the city has been celebrated by the media as a hip urban centre with a very high quality of living, and it is often called "The new Berlin". Leipzig is also Germany's fastest-growing city. Leipzig was the German candidate for the 2012 Summer Olympics, but the bid was unsuccessful. After ten years of construction, the Leipzig City Tunnel opened on 14 December 2013. Leipzig forms the centrepiece of the S-Bahn Mitteldeutschland public transit system, which operates in the four German states of Saxony, Saxony-Anhalt, Thuringia and Brandenburg.
Leipzig lies at the confluence of the rivers White Elster, Pleiße and Parthe, in the Leipzig Bay, on the most southerly part of the North German Plain, which is the part of the North European Plain in Germany. The site is characterized by swampy areas such as the Leipzig Riverside Forest, though there are also some limestone areas to the north of the city. The landscape is mostly flat though there is also some evidence of moraine and drumlins.
Although there are some forest parks within the city limits, the area surrounding Leipzig is relatively unforested. During the 20th century, there were several open-cast mines in the region, many of which are being converted for use as lakes (see also: Neuseenland).
Leipzig is also situated at the intersection of the ancient roads known as the Via Regia (King's highway), which traversed Germany in an east–west direction, and the Via Imperii (Imperial Highway), a north–south road.
Leipzig was a walled city in the Middle Ages and the current "ring" road around the historic centre of the city follows the line of the old city walls.
Since 1992 Leipzig has been divided administratively into ten districts, which in turn contain a total of 63 subdistricts. Some of these correspond to outlying villages which have been annexed by Leipzig.
Like much of eastern Germany, Leipzig has an oceanic climate (Köppen: "Cfb", close to "Dfb" under the 0 °C US isotherm) with significant continental influences due to its inland location. Winters are cool to cold and summers are generally warm. Precipitation in winter is about half that of the summer. The amount of sunshine differs significantly between winter and summer, with an average of around 51 hours of sunshine in December (1.7 hours a day) compared with 229 hours of sunshine in July (7.4 hours a day).
Leipzig has a population of about 570,000. In 1930, the population reached its historical peak of over 700,000. It decreased steadily from 1950 to about 530,000 in 1989. In the 1990s, the population decreased rapidly, to 437,000 in 1998, mostly due to outward migration and suburbanisation. After almost doubling the city area by incorporating surrounding towns in 1999, the number stabilised and started to rise again, with an increase of 1,000 in 2000. Leipzig is now the fastest-growing city in Germany with over 500,000 inhabitants.
The growth of the past 10–15 years has mostly been due to inward migration. In recent years, inward migration accelerated, reaching an increase of 12,917 in 2014.
In the years following German reunification, many people of working age took the opportunity to move to the states of the former West Germany to seek employment. This was a contributory factor to falling birth rates: births dropped from 7,000 in 1988 to fewer than 3,000 in 1994. However, the number of children born in Leipzig has risen since the late 1990s. In 2011, it reached 5,490 births, resulting in a rate of natural increase (RNI) of −17.7 (compared with −393.7 in 1995).
The unemployment rate decreased from 18.2% in 2003 to 9.8% in 2014 and 7.6% in June 2017.
The percentage of the population from an immigrant background is low compared with other German cities: only 5.6% of the population were foreigners, compared with the German national average of 7.7%.
The number of people with an immigrant background (immigrants and their children) grew from 49,323 in 2012 to 77,559 in 2016, making them 13.3% of the city's population (Leipzig's population 579,530 in 2016).
The largest minorities (first and second generation) in Leipzig by country of origin as of 31 December 2018 are:
The historic central area of Leipzig features a Renaissance-style ensemble of buildings from the sixteenth century, including the old city hall in the marketplace. There are also several baroque period trading houses and former residences of rich merchants. As Leipzig grew considerably during the economic boom of the late-nineteenth century, the town has many buildings in the historicist style representative of the "Gründerzeit" era. Approximately 35% of Leipzig's flats are in buildings of this type. The new city hall, completed in 1905, is built in the same style.
Some 64,000 apartments in Leipzig were built in Plattenbau buildings during Communist rule in East Germany, and although some of these have been demolished and the numbers living in this type of accommodation have declined in recent years, at least 10% of Leipzig's population (50,000 people) still live in Plattenbau accommodation. Grünau, for example, has approximately 40,000 people living in this sort of accommodation.
The St. Paul's Church was destroyed by the Communist government in 1968 to make room for a new main building for the university. After some debate, the city decided to establish a new, mainly secular building at the same location, called Paulinum, which was completed in 2012. Its architecture alludes to the look of the former church and it includes space for religious use by the faculty of theology, including the original altar from the old church and two newly built organs.
Many commercial buildings were built in the 1990s as a result of tax breaks after German reunification.
The tallest structure in Leipzig is the chimney of the Stahl- und Hartgusswerk Bösdorf GmbH. The City-Hochhaus Leipzig is the tallest high-rise building in the city; from 1972 to 1973 it was Germany's tallest building.
One of the highlights of the city's contemporary arts was the Neo Rauch retrospective opening in April 2010 at the Leipzig Museum of Fine Arts. This is a show devoted to the father of the New Leipzig School of artists. According to "The New York Times", this scene "has been the toast of the contemporary art world" for the past decade. In addition, there are eleven galleries in the so-called Spinnerei.
The Grassi Museum complex contains three more of Leipzig's major collections: the Ethnography Museum, Applied Arts Museum and Musical Instrument Museum (the last of which is run by the University of Leipzig). The university also runs the Museum of Antiquities.
Founded in March 2015, the G2 Kunsthalle houses the Hildebrand Collection. This private collection focuses on the so-called New Leipzig School. Leipzig's first private museum dedicated to contemporary art in Leipzig after the turn of the millennium is located in the city centre close to the famous St. Thomas Church on the third floor of the former GDR processing centre.
Other museums in Leipzig include the following:
Leipzig is well known for its large parks. The "Leipziger Auwald" (riparian forest) lies mostly within the city limits. Neuseenland is an area south of Leipzig where old open-cast mines are being converted into a huge lake district. It is planned to be finished in 2060.
Johann Sebastian Bach worked in Leipzig from 1723 to 1750, conducting the Thomanerchor (St. Thomas Church Choir) at the St. Thomas Church, the St. Nicholas Church and the Paulinerkirche, the university church of Leipzig (destroyed in 1968). The composer Richard Wagner was born in Leipzig in 1813, in the Brühl. Robert Schumann was also active in Leipzig's musical life, having been invited by Felix Mendelssohn when the latter established Germany's first musical conservatoire in the city in 1843. Gustav Mahler was second conductor (working under Artur Nikisch) at the Leipzig Opera from June 1886 until May 1888, and achieved his first significant recognition while there by completing and publishing Carl Maria von Weber's opera Die Drei Pintos. Mahler also completed his own First Symphony while living in Leipzig.
Today the conservatory is the University of Music and Theatre Leipzig. A broad range of subjects are taught, including artistic and teacher training in all orchestral instruments, voice, interpretation, coaching, piano chamber music, orchestral conducting, choir conducting and musical composition in various musical styles. The drama departments teach acting and scriptwriting.
The Bach-Archiv Leipzig, an institution for the documentation and research of the life and work of Bach (and also of the Bach family), was founded in Leipzig in 1950 by Werner Neumann. The Bach-Archiv organizes the prestigious International Johann Sebastian Bach Competition, initiated in 1950 as part of a music festival marking the bicentennial of Bach's death. The competition is now held every two years in three changing categories. The Bach-Archiv also organizes performances, especially the international festival Bachfest Leipzig, and runs the Bach-Museum.
The city's musical tradition is also reflected in the worldwide fame of the Leipzig Gewandhaus Orchestra, under its chief conductor Andris Nelsons, and the Thomanerchor.
The MDR Leipzig Radio Symphony Orchestra is Leipzig's second largest symphony orchestra. Its current chief conductor is Kristjan Järvi. Both the Gewandhausorchester and the MDR Leipzig Radio Symphony Orchestra perform in the Gewandhaus concert hall.
For over sixty years Leipzig has offered a "school concert" programme for children, with over 140 concerts every year in venues such as the Gewandhaus and over 40,000 children attending.
As for contemporary music, Leipzig is known for its independent music scene and subcultural events. For twenty years Leipzig has been home to the world's largest Gothic festival, the annual Wave-Gotik-Treffen (WGT), at which thousands of fans of gothic and dark-styled music from across Europe and the world gather in early summer. The first Wave-Gotik-Treffen was held at the Eiskeller club, today known as Conne Island, in the Connewitz district. Mayhem's notorious album Live in Leipzig was also recorded at the Eiskeller club. Leipzig Pop Up is an annual music trade fair for the independent music scene, as well as a music festival taking place on Pentecost weekend. The city's most famous indie labels are Moon Harbour Recordings (house) and Kann Records (house/techno/psychedelic). Several venues offer live music on a daily basis, including the Moritzbastei, which was once part of the city's fortifications and is one of the oldest student clubs in Europe, with concerts in various styles. For over 15 years "Tonelli's" has offered free concerts every day of the week, though door charges may apply on Saturdays.
The cover photo for the Beirut band's 2005 album Gulag Orkestar, according to the sleeve notes, was stolen from a Leipzig library by Zach Condon.
The city of Leipzig is also the birthplace of Till Lindemann, best known as the lead vocalist of Rammstein, a band formed in 1994.
More than 300 sport clubs in the city represent 78 different disciplines. Over 400 athletic facilities are available to citizens and club members.
The German Football Association (DFB) was founded in Leipzig in 1900. The city was the venue for the 2006 FIFA World Cup draw, and hosted four first-round matches and one match in the round of 16 in the central stadium.
VfB Leipzig won the first national Association football championship in 1903. The club was reformed as 1. FC Lokomotive Leipzig in 1966 and has had a glorious past in international competition as well, having been champions of the 1965–66 Intertoto Cup, semi-finalists in the 1973–74 UEFA Cup, and runners-up in the 1986–87 European Cup Winners' Cup.
Red Bull entered local football in May 2009, having previously been denied the right to buy into FC Sachsen Leipzig in 2006. The newly founded RB Leipzig declared its intention to rise through the ranks of German football and bring Bundesliga football back to the region. RB Leipzig was promoted to the top level of the Bundesliga after finishing the 2015–16 2. Bundesliga season as runners-up. The club finished runners-up in its first ever Bundesliga season and made its debut in the UEFA Champions League in 2017.
List of Leipzig men and women's football clubs playing at state level and above:
Note 1: The RB Leipzig women's football team was formed in 2016 and began play in the 2016–17 season.
Note 2: The club began play in the 2008–09 season.
Since the beginning of the 20th century, ice hockey gained popularity, and several local clubs established departments dedicated to that sport.
SC DHfK Leipzig is the men's handball club in Leipzig; it won the East German handball championship six times (1959, 1960, 1961, 1962, 1965 and 1966) and the EHF Champions League in 1966. The club was promoted to the Handball-Bundesliga as champion of the 2. Bundesliga in the 2014–15 season. It plays in the Arena Leipzig, which has a capacity of 6,327 spectators for HBL games but can hold up to 7,532 spectators for handball at maximum capacity.
Handball-Club Leipzig is one of the most successful women's handball clubs in Germany, having won 20 domestic championships since 1956 and 3 Champions League titles. However, the team was relegated to the third-tier league in 2017 after failing to meet the economic standards demanded by the league licence.
From 1950 to 1990 Leipzig was host of the Deutsche Hochschule für Körperkultur (DHfK, German College of Physical Culture), the national sports college of the GDR.
Leipzig also hosted the Fencing World Cup in 2005 and hosts a number of international competitions in a variety of sports each year.
Leipzig made a bid to host the 2012 Summer Olympics, but the bid did not make the shortlist after the International Olympic Committee pared the candidates down to five.
Markkleeberger See is a new lake next to Markkleeberg, a suburb on the south side of Leipzig. A former open-pit coal mine, it was flooded in 1999 with groundwater and developed in 2006 as a tourist area. On its southeastern shore is Germany's only pump-powered artificial whitewater slalom course, Markkleeberg Canoe Park (Kanupark Markkleeberg), a venue which rivals the Eiskanal in Augsburg for training and international canoe/kayak competition.
Leipzig Rugby Club competes in the German Rugby Bundesliga but finished at the bottom of their group in 2013.
Leipzig University, founded in 1409, is one of Europe's oldest universities. The philosopher and mathematician Gottfried Wilhelm Leibniz was born in Leipzig in 1646 and attended the university from 1661 to 1666. Nobel Prize laureate Werner Heisenberg worked here as a physics professor (from 1927 to 1942), as did Nobel Prize laureates Gustav Ludwig Hertz (physics), Wilhelm Ostwald (chemistry) and Theodor Mommsen (Nobel Prize in literature). Other former faculty members include mineralogist Georg Agricola, writer Gotthold Ephraim Lessing, philosopher Ernst Bloch, eccentric founder of psychophysics Gustav Theodor Fechner, and psychologist Wilhelm Wundt. Among the university's many noteworthy students were writers Johann Wolfgang Goethe and Erich Kästner, philosopher Friedrich Nietzsche, political activist Karl Liebknecht, and composer Richard Wagner. Germany's chancellor since 2005, Angela Merkel, studied physics at Leipzig University. The university has about 30,000 students.
A part of Leipzig University is the German Institute for Literature, which was founded in 1955 under the name "Johannes R. Becher-Institut". Many noted writers have graduated from this school, including Heinz Czechowski, Kurt Drawert, Adolf Endler, Ralph Giordano, Kerstin Hensel, Sarah and Rainer Kirsch, Angela Krauß, Erich Loest and Fred Wander. After its closure in 1990 the institute was refounded in 1995 with new teachers.
The Academy of Visual Arts ("Hochschule für Grafik und Buchkunst") was established in 1764. Its 530 students are enrolled in courses in painting and graphics, book design/graphic design, photography and media art. The school also houses an Institute for Theory.
The University of Music and Theatre offers a broad range of subjects ranging from training in orchestral instruments, voice, interpretation, coaching, piano chamber music, orchestral conducting, choir conducting and musical composition to acting and scriptwriting.
The Leipzig University of Applied Sciences (HTWK) has approximately 6,200 students and is the second biggest institution of higher education in Leipzig. It was founded in 1992, merging several older schools. As a university of applied sciences (German: "Fachhochschule") its status is slightly below that of a university, with more emphasis on the practical part of the education. The HTWK offers many engineering courses, as well as courses in computer science, mathematics, business administration, librarianship, museum studies and social work. It is mainly located in the south of the city.
The private Leipzig Graduate School of Management (in German "Handelshochschule Leipzig", HHL) is the oldest business school in Germany. According to The Economist, HHL is one of the best business schools in the world, ranked at number six overall.
Leipzig is currently the home of twelve research institutes and the Saxon Academy of Sciences and Humanities.
Max Planck Society: Max Planck Institute for Mathematics in the Sciences, Max Planck Institute for Human Cognitive and Brain Sciences, and Max Planck Institute for Evolutionary Anthropology.
Fraunhofer Society institutes: Fraunhofer IZI and Fraunhofer IMW.
Helmholtz Association of German Research Centres: Helmholtz Centre for Environmental Research
Leibniz Association: several institutes, including a Leibniz institute for Jewish history.
Leipzig is home to one of the world's oldest schools "Thomasschule zu Leipzig" (St. Thomas' School, Leipzig), which gained fame for its long association with the Bach family of musicians and composers.
The Lutheran Theological Seminary is a seminary of the Evangelical Lutheran Free Church in Leipzig. The seminary trains students to become pastors for the Evangelical Lutheran Free Church or for member church bodies of the Confessional Evangelical Lutheran Conference.
The city is a location for automobile manufacturing by BMW and Porsche in large plants north of the city. In 2011 and 2012 DHL transferred the bulk of its European air operations from Brussels Airport to Leipzig/Halle Airport. Kirow Ardelt AG, the world market leader in breakdown cranes, is based in Leipzig. The city also houses the European Energy Exchange, the leading energy exchange in Central Europe. VNG – Verbundnetz Gas AG, one of Germany's largest natural gas suppliers, is headquartered in Leipzig. In addition, within its larger metropolitan area, Leipzig has developed an important petrochemical center.
Some of the largest employers in the area (outside of manufacturing) include software companies such as Spreadshirt and the various schools and universities in and around the Leipzig/Halle region. The University of Leipzig attracts millions of euros of investment yearly and is in the middle of a massive construction and refurbishment programme to celebrate its 600th anniversary.
Leipzig also benefits from world leading medical research (Leipzig Heart Centre) and a growing biotechnology industry.
Many bars, restaurants and stores found in the downtown area are patronized by German and foreign tourists. Leipzig Hauptbahnhof itself is the location of a shopping mall. Leipzig is one of Germany's most visited cities with over 3 million overnight stays in 2017.
In 2010, Leipzig was included in the top 10 cities to visit by "The New York Times", and ranked 39th globally out of 289 cities for innovation in the 4th Innovation Cities Index published by Australian agency 2thinknow. In 2015, Leipzig was ranked as having the third-best future prospects among the 30 largest German cities. In recent years Leipzig has often been nicknamed the "Boomtown of eastern Germany" or "Hypezig", as it has had the highest rate of population growth of any German city.
Companies with operations in or around Leipzig include:
In December 2013, according to a study by GfK, Leipzig was ranked as the most livable city in Germany.
In 2015/2016, Leipzig was named the second-best city for students in Germany (after Munich).
In a 2017 study, the Leipzig inner city ranked first among all large cities in Germany due to its urban aesthetics, gastronomy, and shopping opportunities.
Since 2018 it has also had the second-best future prospects of all cities in Germany, surpassed only by Munich in 2018 and Berlin in 2019.
According to the 2017 Global Least & Most Stressful Cities Ranking, Leipzig was one of the least stressful cities in the World. It was ranked 25th out of 150 cities worldwide and above Dortmund, Cologne, Frankfurt, and Berlin.
In 2018, Leipzig won the European Cities of Future prize in the category of "Best Large City for Human Capital & Lifestyle".
Leipzig was named European City of the Year at the 2019 Urbanism Awards.
According to a 2019 study by Forschungsinstitut Prognos, Leipzig is the most dynamic region in Germany. Within 15 years, the city climbed 230 places, ranking 104th of all 401 German regions in 2019.
Leipzig was named one of the "52 Places to Go in 2020" by "The New York Times", making it the highest-ranking German destination on the list.
Leipzig Hauptbahnhof has been ranked the best railway station in Germany and the third-best in Europe, surpassed only by St Pancras railway station and Zürich Hauptbahnhof.
Founded at the crossing of Via Regia and Via Imperii, Leipzig has been a major interchange of inter-European traffic and commerce since medieval times. After the reunification of Germany, immense efforts to restore and expand the traffic network were undertaken, leaving the city area with excellent infrastructure.
Opened in 1915, Leipzig Central Station is the largest overhead railway station in Europe in terms of its built-up area. At the same time, it is an important supra-regional junction in the ICE and Intercity network of the Deutsche Bahn as well as a connection point for S-Bahn and regional traffic in the Halle/Leipzig area.
In Leipzig, the Intercity Express routes (Hamburg-)Berlin-Leipzig-Nuremberg-Munich and Dresden-Leipzig-Erfurt-Frankfurt am Main-(Wiesbaden/Saarbrücken) intersect. After completion of the high-speed line to Erfurt, the ICE will run on both lines via Leipzig/Halle Airport and Erfurt. Leipzig is also the starting point for the intercity lines Leipzig-Halle (Saale)-Magdeburg-Braunschweig-Hannover-Dortmund-Köln and -Bremen-Oldenburg(-Norddeich Mole). Together, the two lines provide an hourly service and also stop at Leipzig/Halle Airport. The only international connection is the daily EuroCity Leipzig-Prague.
Most major and medium-sized towns in Saxony and southern Saxony-Anhalt can be reached without changing trains. There are also direct connections via regional express lines to Falkenberg/Elster-Cottbus, Hoyerswerda and Dessau-Magdeburg as well as Chemnitz. Neighbouring Halle (Saale) can be reached via two S-Bahn lines, one of which runs hourly via Leipzig/Halle Airport. The surrounding area of Leipzig is served by numerous regional and S-Bahn lines.
The city's rail connections are currently being greatly improved by major construction projects, particularly within the framework of the German Unity transport projects. The line to Berlin has been upgraded and has been passable at 200 km/h since 2006. On 13 December 2015, the high-speed line from Leipzig to Erfurt, designed for 300 km/h, was put into operation. Its continuation to Nuremberg is scheduled for completion in December 2017. This integration into the high-speed network will considerably reduce ICE journey times from Leipzig to Nuremberg, Munich and Frankfurt am Main. The Leipzig-Dresden railway line, the first German long-distance railway to go into operation in 1839, is also being upgraded for 200 km/h, so the ICE will be able to operate from Leipzig to Dresden in the near future. The most important construction project in regional transport was the four-kilometre-long City Tunnel, which went into operation in December 2013 as the main line of the S-Bahn Mitteldeutschland.
For freight traffic, there are freight stations in the districts of Wahren and Engelsdorf. In addition, a large freight traffic centre has been set up near the Schkeuditzer Kreuz junction for goods handling between road and rail, as well as a freight station on the site of the DHL hub at Leipzig/Halle Airport.
Leipzig is the core of the S-Bahn Mitteldeutschland line network. Together with the tram, six of the ten lines form the backbone of local public transport and an important link to the region and neighbouring Halle. The main line of the S-Bahn consists of the underground S-Bahn stations Hauptbahnhof, Markt, Wilhelm-Leuschner-Platz and Bayerischer Bahnhof leading through the City Tunnel, as well as the above-ground station Leipzig MDR. There are a total of 30 S-Bahn stations in the Leipzig city area. Endpoints of the S-Bahn lines include Oschatz, Zwickau, Geithain and Bitterfeld. Two lines run to Halle, one of them via Leipzig/Halle Airport. In 2015, the network was extended to Dessau and Lutherstadt Wittenberg.
With the timetable change in December 2004, the networks of Leipzig and Halle were combined to form the Leipzig-Halle S-Bahn. However, this network only served as a transitional solution and was replaced by the S-Bahn Mitteldeutschland on 15 December 2013. At the same time, the main line tunnel, marketed as the Leipzig City Tunnel, went into operation. The tunnel, which is almost four kilometres long, crosses the entire city centre from the main railway station to the Bavarian railway station, with S-Bahn stations up to 22 metres underground. This construction created, for the first time, a continuous north-south axis, which had not previously existed because the main station is a north-facing terminus. The connection to the south of the city and the federal state was thus greatly improved.
The Leipziger Verkehrsbetriebe, in existence since 1 January 1917, operates a total of 13 tram lines and 51 bus lines in the city.
The tram network is the largest in Saxony, ahead of Dresden, and the second largest in Germany after Berlin.
The longest line in the Leipzig network is line 11, which connects Schkeuditz with Markkleeberg over 22 kilometres and is the only tram line in Leipzig to run in three tariff zones of the Central German Transport Association.
Night bus lines N1 to N9 and the night tram N17 provide night service. On Saturdays, Sundays and holidays the tram line N10 and the bus line N60 also operate. The central transfer point between the bus and tram lines, and to the S-Bahn, is Leipzig Central Station.
Like most German cities, Leipzig has a traffic layout designed to be bicycle-friendly. There is an extensive cycle network. In most of the one-way central streets, cyclists are explicitly allowed to cycle both ways. A few cycle paths have been built or declared since 1990.
Since 2004 Leipzig has had a bicycle-sharing system. Bikes can be borrowed and returned via smartphone app or by telephone. Since 2018, the system has allowed flexible borrowing and returning of bicycles in the inner city; in this zone, bicycles can be picked up and dropped off at almost any street corner. Outside this zone, bikes are picked up and returned at fixed stations. The current locations of the bikes can be seen in the app. There are cooperation arrangements with the Leipzig public transport companies and car-sharing providers in order to offer as complete a mobility chain as possible.
Several federal motorways pass by Leipzig: the A 14 in the north, the A 9 in the west and the A 38 in the south. The three motorways form a triangular partial ring of the double ring Mitteldeutsche Schleife around Halle and Leipzig. To the south towards Chemnitz, the A 72 is also partly under construction or being planned.
The federal roads B 2, B 6, B 87, B 181, B 184 and B 186 lead through the city area.
The ring, which corresponds to the course of the old city fortification, surrounds the city centre of Leipzig, which today is largely traffic-calmed.
Leipzig has a dense network of car-sharing stations. Since 2018 there has also been a station-less car-sharing system, in which cars can be parked and booked anywhere in the inner city without having to specify a particular car or period in advance. Finding and booking is done via a smartphone app.
In addition to conventional taxi services, Leipzig is one of the few cities in Germany with a ridesharing provider. Taxi-like rides can be booked via an app; in contrast to a taxi, however, the start and destination must be set in advance, and other passengers sharing a route may be picked up along the way.
Since March 2018 there has been a central bus station directly east of Leipzig Central Station.
In addition to a large number of national lines, several international bus lines also serve Leipzig. The cities of Bregenz, Budapest, Milan, Prague, Sofia and Zurich, among others, can be reached directly. Around 30,000 journeys and 1.5 million passengers a year are expected at the new bus station.
Some lines also stop at Leipzig/Halle Airport, located at the A 9/A 14 motorway junction, and at the Leipziger Messe. Passengers can take the S-Bahn from there to the city centre.
Leipzig/Halle Airport is the international commercial airport of the region. It is located at the Schkeuditzer Kreuz junction northwest of Leipzig, halfway between the two major cities. The easternmost section of the new Erfurt-Leipzig/Halle line gave the airport a long-distance railway station, which was integrated into the ICE network when the line was completed in 2015.
Passenger flights are operated to the major German hub airports, European metropolises and holiday destinations, especially in the Mediterranean region and North Africa. The airport is of international importance in the cargo sector. In Germany, it ranks second behind Frankfurt am Main, fifth in Europe and 26th worldwide (as of 2011). DHL uses the airport as its central European hub. It is also the home base of the freight airlines Aerologic and European Air Transport Leipzig.
The former military airport near Altenburg, Thuringia, known as Leipzig-Altenburg Airport and about a half-hour drive from Leipzig, was served by Ryanair until 2010.
In the first half of the 20th century, construction of the Elster-Saale canal was begun in order to connect Leipzig, via the White Elster and Saale, to the network of waterways. The outbreak of the Second World War stopped most of the work, though some may have continued through the use of forced labor. The Lindenauer port was almost completed but never connected to either the Elster-Saale canal or the Karl-Heine canal. The Leipzig rivers (White Elster, New Luppe, Pleiße, and Parthe) have largely artificial river beds within the city and are supplemented by some channels. These waterways are suitable only for small leisure boat traffic.
Through the renovation and reconstruction of existing mill races and watercourses in the south of the city and flooded disused open cast mines, the city's navigable water network is being expanded. The city commissioned planning for a link between Karl Heine Canal and the disused Lindenauer port in 2008. Still more work was scheduled to complete the Elster-Saale canal. Such a move would allow small boats to reach the Elbe from Leipzig. The intended completion date has been postponed because of an unacceptable cost-benefit ratio.
"Mein Leipzig lob' ich mir! Es ist ein klein Paris und bildet seine Leute." (I praise my Leipzig! It is a small Paris and educates its people.) – Frosch, a university student in Goethe's "Faust, Part One"
"Ich komme nach Leipzig, an den Ort, wo man die ganze Welt im Kleinen sehen kann." (I'm coming to Leipzig, to the place where one can see the whole world in miniature.) – Gotthold Ephraim Lessing
"Extra Lipsiam vivere est miserrime vivere." (To live outside Leipzig is to live miserably.) – Benedikt Carpzov the Younger
"Das angenehme Pleis-Athen, Behält den Ruhm vor allen, Auch allen zu gefallen, Denn es ist wunderschön." (The pleasurable Pleiss-Athens, earns its fame above all, appealing to every one, too, for it is mightily beauteous.) – Johann Sigismund Scholze
Leipzig is twinned with:
https://en.wikipedia.org/wiki?curid=17955
LimeWire
LimeWire was a free peer-to-peer file sharing (P2P) client for Windows, OS X, Linux and Solaris. LimeWire used the Gnutella network as well as the BitTorrent protocol, with BitTorrent support provided by libtorrent. A freeware version and a purchasable "enhanced" version were available.
On October 26, 2010, U.S. federal court judge Kimba Wood issued an injunction ordering LimeWire to prevent "the searching, downloading, uploading, file trading and/or file distribution functionality, and/or all functionality" of its software in "Arista Records LLC v. Lime Group LLC". A trial investigating the damages necessary to compensate the affected record labels was scheduled to begin in January 2011. As a result of the injunction, LimeWire stopped distributing the LimeWire software, and versions 5.5.11 and newer have been disabled using a backdoor installed by the company. However, version 5.5.10 and all prior versions of LimeWire remain fully functional and cannot be disabled unless a user upgrades to one of the newer versions. The program has been "resurrected" by the creators of WireShare (formerly known as "LimeWire Pirate Edition").
Written in the Java programming language, LimeWire can run on any computer with a Java Virtual Machine installed. Installers were provided for Apple's Mac OS X, Microsoft's Windows, and Linux. Support for Mac OS 9 and other previous versions was dropped with the release of LimeWire 4.0.10. From version 4.8 onwards, LimeWire works as a UPnP Internet Gateway Device controller in that it can automatically set up packet-forwarding rules with UPnP-capable routers.
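The UPnP IGD behaviour described above can be sketched concretely. The `AddPortMapping` action and its argument names come from the standard UPnP WANIPConnection service; everything else here (port 6346 as a typical Gnutella port, the LAN address, the description string) is illustrative, not LimeWire's actual implementation, and a real controller would first discover the router's control URL via SSDP before POSTing a request like this:

```python
# Sketch of the SOAP request a UPnP IGD controller such as LimeWire would
# POST to a router's WANIPConnection control URL to open a port.
# Values are hypothetical; the control URL and LAN IP are normally
# discovered at runtime via SSDP and the device-description XML.
from xml.sax.saxutils import escape

SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

def add_port_mapping_body(external_port, internal_port, internal_client,
                          protocol="TCP", description="Gnutella client",
                          lease_seconds=0):
    """Build the SOAP envelope for the standard AddPortMapping action."""
    args = [
        ("NewRemoteHost", ""),                   # empty = any remote host
        ("NewExternalPort", str(external_port)),
        ("NewProtocol", protocol),
        ("NewInternalPort", str(internal_port)),
        ("NewInternalClient", internal_client),  # LAN IP of this machine
        ("NewEnabled", "1"),
        ("NewPortMappingDescription", escape(description)),
        ("NewLeaseDuration", str(lease_seconds)),  # 0 = no expiry
    ]
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in args)
    return (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        f'<s:Body><u:AddPortMapping xmlns:u="{SERVICE}">{fields}'
        '</u:AddPortMapping></s:Body></s:Envelope>'
    )

# 6346 is the well-known Gnutella port; the internal IP is a placeholder.
body = add_port_mapping_body(6346, 6346, "192.168.1.23")
# The envelope would be sent as an HTTP POST with the header:
#   SOAPAction: "urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping"
```

Once such a mapping exists, inbound Gnutella connections from the internet reach the client behind the NAT without manual router configuration, which is the point of running as an IGD controller.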
LimeWire offers sharing of its library through the Digital Audio Access Protocol (DAAP). As such, when LimeWire is running and configured to allow it, any shared files can be detected and downloaded on the local network by DAAP-enabled devices (e.g., Zune, iTunes). Beginning with LimeWire 4.13.9, connections could be encrypted with Transport Layer Security (TLS); from LimeWire 4.13.11 onwards, TLS became the default connection option.
Until October 2010, Lime Wire LLC, the New York City based developer of LimeWire, distributed two versions of the program: a basic free version, and an enhanced version, LimeWire PRO, which sold for a fee of $21.95 with 6 months of updates, or around $35.00 with 1 year of updates. The company claimed the paid version provides faster downloads and 66% better search results. This is accomplished by facilitating direct connection with up to 10 hosts of an identical searched file at any one time, whereas the free version is limited to a maximum of 8 hosts.
Being free software, LimeWire has spawned forks, including LionShare, an experimental software development project at Penn State University, and Acquisition, a Mac OS X-based Gnutella client with a proprietary interface. Researchers at Cornell University developed a reputation management add-in called Credence that allows users to distinguish between "genuine" and "suspect" files before downloading them. An October 12, 2005 report states that some of LimeWire's free and open source software contributors forked the project and called it FrostWire.
LimeWire was the first file sharing program to support firewall-to-firewall file transfers, a feature introduced in version 4.2, which was released in November 2004. LimeWire also includes BitTorrent support, but is limited to three torrent uploads and three torrent downloads, which coexist with ordinary downloads. LimeWire 5.0 added an instant messenger that uses the XMPP protocol, an open communication standard. Users can chat and share files with individuals or groups of friends in their buddy list.
From version 5.5.1, LimeWire added key activation, which requires the user to enter a unique key to activate the "Pro" version of the software. This was intended to stop people from illegally downloading the "Pro" version, though ways to bypass the protection remain, as demonstrated by the pirate edition. For example, cracks are available on the internet, and people can continue using LimeWire Pro 5.5.1 Beta, which is also the first version to include AVG. The most recent stable version of LimeWire is 5.5.16.
Versions of LimeWire prior to 5.5.10 can still connect to the Gnutella network and users of these versions are still able to download files, even though a message is displayed concerning the injunction during the startup process of the software. LimeWire versions 5.5.11 and newer feature an auto-update feature that allowed Lime Wire LLC to disable newer versions of the LimeWire software. Older versions of LimeWire prior to version 5.5.11, however, do not include the auto-update feature and are still fully functional. As a result, neither the Recording Industry Association of America (RIAA) nor Lime Wire LLC have the ability to disable older versions of LimeWire, unless the user chooses to upgrade to a newer version of LimeWire.
On November 10, 2010, a secret group of developers calling itself the "Secret Dev Team" sought to keep the application working by releasing the "LimeWire Pirate Edition". The software is based on LimeWire 5.6 Beta and aims to keep the Windows version working while removing the threat of spyware or adware. The exclusive features of LimeWire PRO were also unlocked, and all security features installed by Lime Wire LLC were removed.
A number of forks from LimeWire have appeared, with the goal of giving users more freedom, or objecting to decisions made by Lime Wire LLC they disagreed with.
FrostWire was started in September 2004 by members of the LimeWire open source community, after LimeWire's distributor considered adding "blocking" code in response to RIAA pressure and the threat of legal action in light of the U.S. Supreme Court's decision in "MGM Studios, Inc. v. Grokster, Ltd.". When activated, the code could block users from sharing licensed files. The code was activated after lawsuits were filed against LimeWire over P2P downloading; it blocked all of LimeWire's users and redirected them to FrostWire. FrostWire has since completely moved from Gnutella (LimeWire's file sharing network) to the BitTorrent protocol.
In November 2010, as a response to the legal challenges regarding LimeWire, an anonymous individual by the handle of Meta Pirate released a modified version of LimeWire Pro, which was entitled LimeWire Pirate Edition. It came without the Ask.com toolbar, advertising, spyware, and backdoors, as well as all dependencies on Lime Wire LLC servers.
In response to allegations that a current or former member of Lime Wire LLC staff wrote and released the software, the company stated: "LimeWire is not behind these efforts. LimeWire does not authorize them. LimeWire is complying with the Court's October 26, 2010 injunction."
The LimeWire team, after being accused by the RIAA of being complicit in the development of LimeWire Pirate Edition, swiftly acted to shut down the LimeWire Pirate Edition website. A court order was issued to close down the website, and, to remain anonymous, Meta Pirate, the developer of LimeWire PE, did not contest the order.
According to its SourceForge website, WireShare is the newest fork of the original LimeWire open source project (a successor to LimeWire Pirate Edition, whose name was dropped for legal reasons). The software was developed to help keep the Gnutella network alive and to maintain a good-faith continuation of the original project, without adware or spyware.
Prior to April 2004, the free version of LimeWire was distributed with a bundled program called LimeShop (a variant of TopMoxie), which was spyware. Among other things, LimeShop monitored online purchases in order to redirect sales commissions to Lime Wire LLC. Uninstallation of LimeWire would not remove LimeShop. With the removal of all bundled software in LimeWire 3.9.4 (released on April 20, 2004), these objections were addressed. LimeWire currently has a facility that allows its server to contact a running LimeWire client and gather various information.
In LimeWire versions before 5.0, users could accidentally configure the software to allow access to any file on their computer, including documents with personal information. Recent versions of LimeWire do not allow unintentional sharing of documents or applications. In 2005, the US Federal Trade Commission issued a consumer warning regarding the dangers of using peer-to-peer file sharing networks, stating that using such networks can lead to identity theft and lawsuits.
An identity theft scheme involving LimeWire was discovered in Denver in 2006. On September 7, 2007, Gregory Thomas Kopiloff of Seattle was arrested in what the U.S. Justice Department described as its first case against someone accused of using file sharing computer programs to commit identity theft. According to federal prosecutors, Kopiloff used LimeWire to search other people's computers for inadvertently shared financial information and then used it to obtain credit cards for an online shopping spree.
One investigation showed that of 123 randomly selected downloaded files, 37 contained malware – about 30%. In mid-2008, a Macintosh trojan exploiting a vulnerability involving Apple Remote Desktop was distributed via LimeWire affecting users of Mac OS X Tiger and Leopard. The ability to distribute such malware and viruses has also been reduced in versions of LimeWire 5.0 and greater, with the program defaulting to not share or search for executable files.
On May 5, 2009, a P2P industry spokesman represented Lime Wire and others at a U.S. House of Representatives legislative hearing on H.R. 1319, "The Informed P2P User Act."
On February 15, 2010, LimeWire reversed its previous anti-bundling stance and announced the inclusion of an Ask.com-powered browser toolbar that users had to explicitly opt-out of to prevent installation. The toolbar sends web and bittorrent searches to Ask.com, and LimeWire searches to an instance of LimeWire on the user's machine.
LimeWire automatically receives a cryptographically signed file, called simpp.xml, containing an IP block list.
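Such a block list can be applied on the client side by parsing the distributed file and checking each peer address against it. The XML layout below is purely hypothetical (the actual simpp.xml schema, and the signature verification that precedes parsing, are not documented here); the sketch only illustrates the CIDR-matching step, using documentation-reserved example addresses:

```python
# Minimal sketch of checking peer IPs against an XML block list.
# The <range cidr=""> layout is an assumption, not the real simpp.xml format;
# a real client would first verify the file's cryptographic signature.
import ipaddress
import xml.etree.ElementTree as ET

BLOCKLIST_XML = """
<blocklist version="1">
  <range cidr="203.0.113.0/24"/>
  <range cidr="198.51.100.7/32"/>
</blocklist>
"""

def load_blocklist(xml_text):
    """Parse the block list into a list of ip_network objects."""
    root = ET.fromstring(xml_text)
    return [ipaddress.ip_network(r.get("cidr")) for r in root.findall("range")]

def is_blocked(ip, networks):
    """Return True if the given peer address falls in any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

nets = load_blocklist(BLOCKLIST_XML)
print(is_blocked("203.0.113.55", nets))  # True: inside the blocked /24
print(is_blocked("192.0.2.1", nets))     # False: not in any listed range
```

Keeping the list as CIDR ranges rather than individual addresses keeps the distributed file small while still letting the client reject whole hostile subnets.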
LimeWire was the key technology behind the now-defunct cybersecurity firm Tiversa, which is alleged to have used information from the network to pressure prospective clients into engaging the company's services.
According to a June 2005 report in "The New York Times", Lime Wire LLC was considering ceasing its distribution of LimeWire because the outcome of "MGM v. Grokster" "handed a tool to judges that they can declare inducement whenever they want to."
On May 12, 2010, Judge Kimba Wood of the United States District Court for the Southern District of New York ruled in "Arista Records LLC v. Lime Group LLC" that LimeWire and its creator, Mark Gorton, had committed copyright infringement, engaged in unfair competition, and induced others to commit copyright infringement. On October 26, 2010, LimeWire was ordered to disable the "searching, downloading, uploading, file trading and/or file distribution functionality" after losing a court battle with the RIAA over claims of copyright infringement. The RIAA also announced intentions to pursue legal action over the damages caused by the program in January to compensate the affected record labels. In retaliation, the RIAA's website was taken offline on October 29 via denial-of-service attacks executed by members of Operation Payback and Anonymous.
In response to the ruling, a company spokesperson said that the company is not shutting down, but will use its "best efforts" to cease distributing and supporting P2P software.
In early 2011, the RIAA announced its intention to sue LimeWire, pursuing a statutory damages theory that claimed up to $72 trillion in damages, a sum greater than the current GDP of the entire global economy. Around 11,000 songs on LimeWire had been tagged as copyright-infringing, and the RIAA estimated that each had been downloaded thousands of times, the penalties accruing to the above sum.
A trial to decide the eventual amount of damages owed by LimeWire to thirteen record labels, including Warner Music Group and Sony Music, all represented by the RIAA, began in early May and continued until May 13, 2011, when Gorton agreed to pay the 13 record companies $105 million in an out-of-court settlement.
Mitch Bainwol, chairman of the RIAA, referred to the "resolution of the case [as] another milestone in the continuing evolution of online music to a legitimate marketplace that appropriately rewards creators."
https://en.wikipedia.org/wiki?curid=17956
Latveria
Latveria is a fictional nation appearing in American comic books published by Marvel Comics. It is depicted within the storylines of Marvel's comic titles as an isolated European country ruled by the fictional Supreme Lord Doctor Doom, supposedly located in the Banat region. It is surrounded by the Carpathian Mountains, and also borders Symkaria (home of Silver Sable) to the south. Its capital is Doomstadt.
Latveria first appeared in "Fantastic Four Annual" #2, published in 1964. Victor von Doom is the ruler of Latveria. Though he has been dethroned a number of times, Victor has invariably managed to return to the throne of his country within a matter of months.
Victor also has a council who obey him entirely. In "Fantastic Four" #536 in 2006, he killed his own Prime Minister for claiming control of Latveria in his absence and threatened to kill two other ministers if they failed to find the landing spot of Thor's hammer.
Doctor Doom's style of rule can best be described as an absolute monarchy: there is no legislature, and one minister boasted that "Doctor Doom decides everything. His slightest whim is Latverian law!" Doom is shown to have surveillance devices throughout the kingdom and hidden weapons to prevent his people from leaving without his consent. In one story he activates a force field around Latveria that prevents anybody from leaving, though it apparently also serves as a defense against nuclear attack.
Located in southeast Europe, Latveria was formed out of land annexed from southern Hungary centuries before, and possibly land from Serbia as well as Romania.
At some point, Doctor Doom had his army of Servo Guards invade Rotruvia where he was successful at annexing it.
Due to Doom's undertakings that drive him away from Latveria, the monarch is often absent. After Doom's descent into Hell, the nation became a target for conquest by the neighboring countries. This forces Reed Richards to seize control of the country, attempting to pry the populace out from under the thumb of Doom, while at the same time disarming all of Doom's weaponry and technology, so if he ever returned, he would come back to absolutely nothing. In the process, Richards relocated Doom from Hell into a pocket dimension of his own design, and although Doom used his consciousness-switching abilities to escape, the death of his host body seemingly caused him to die as well, and the Fantastic Four pulled out of the country.
However Doom survives this and rules Latveria for a time with a 'puppet' Prime Minister and robotic enforcers.
After the Fantastic Four left, the United States attempted to fill the void left by Doom by establishing a democracy for the nation. The Countess Lucia von Bardas was elected as Prime Minister. However, when it was revealed that von Bardas was employing the Tinkerer to use Doom's technology to arm various tech-based villains in the United States, S.H.I.E.L.D. Commander Nick Fury took action.
During "Secret War", Fury and a number of superheroes invaded Latveria without permission of the US Government and attempted to assassinate von Bardas. While von Bardas survived, she was horribly disfigured and sought to destroy Fury and the heroes responsible. She was killed by S.H.I.E.L.D. Agent Daisy Johnson while trying to blow up New York with the armor of the various villains she employed.
Much of Latveria was destroyed and the population severely reduced by an attack executed by the Marquis of Death (a.k.a. "Doom's Master").
S.H.I.E.L.D., under the leadership of Iron Man and his team of U.S. sanctioned Avengers invaded Latveria after discovering Doom's (unintentional) involvement in the release of a symbiote virus on New York. The country was yet again devastated and Doom was taken into custody for crimes against humanity.
Doom is released from prison due to the influence of H.A.M.M.E.R. director Norman Osborn. He restores his nation with the use of his time travel technology.
During the "Avengers vs. X-Men" storyline, Spider-Man fights against a Juggernaut-empowered Colossus here.
The common geographic description of Latveria places it as a small nation, around the area where Hungary, Romania and Serbia (Vojvodina) meet in real life. To its south in the Marvel universe is the nation of Symkaria, which is depicted as a benevolent constitutional monarchy in contrast to the dictatorship to its north. The capital city of Latveria is Doomstadt, formerly Hassenstadt, renamed when Doom seized power, located just north of the Kline River. The administrative center is Castle Doom.
The population consists of mixed European stock and Romani people, in whose welfare von Doom takes a particular interest. Victor von Doom, being Roma himself, has declared the Romani a protected class and attempts to shower them with benefits; however, due to Latveria's poor economy and oppressive rule, their lifestyles hardly outshine those of other ethnicities, and the Romani by and large live in the same fear of their own government as do their fellow Latverians.
Because it lacks a native superhero populace, Latveria relies largely on Doom's robot sentinels, called Doombots, to keep law and order. One of the few known Latverian superhumans is Dreadknight, whom Doom himself created by punishing Dreadknight's alter ego for hoarding ideas from him. Dreadknight has since tried to get revenge on Doctor Doom, only to be thwarted by various superheroes.
Aside from superhuman activity, the Latverian military appears to function in multiple capacities; in addition to being responsible for the defense of Latveria (or, more accurately, for keeping Victor von Doom on the throne), it has been commissioned to make arrests and functions as Latveria's secret police.
Latveria is generally depicted as a rural nation with a primitive economy and a population living an almost medieval lifestyle, likely enforced by Doom. Nonetheless, the state itself is consistently depicted as a global superpower on par with or even surpassing any nation on Earth, including the United States, and rivalled only by the likes of Wakanda. This is largely due to Doom himself being a scientific genius of the highest order, not only possessing but actually inventing numerous technological wonders, including time and interdimensional travel, personally creating a highly sophisticated robot army of myriad designs and capabilities, and frequently coming into possession of, or outright creating, various devices that could be classified as weapons of mass destruction. Thus, despite the country being both extremely small and economically backward, it is a powerhouse in military and technological terms and therefore has a vastly disproportionate influence on global affairs relative to its size and GDP. Doom also proudly claims that the country is free of poverty, disease, famine and crime, and while citizens of the nation are commonly shown to be oppressed and to live in fear of their monarch, they are also shown to be relatively well cared for, so long as they do not cross Doom. Other occasions suggest that Doom is at the centre of a self-propagated personality cult and is admired and worshipped by segments of the populace in spite of his mistreatments, and he is often demonstrated to be at least a more stable and less corrupt ruler than any other Latverian leader who has replaced him.
In the future depicted in "Loki: Agent of Asgard", Doctor Doom discovers Latveria completely destroyed after King Loki destroyed the Earth. Doom attempts to prevent this future by imprisoning the Loki of the present.
In the "Marvel 1602" storyline, Latveria is ruled by Count Otto von Doom, also known as "Otto the Handsome". It is inhabited by mythical beings, and Latveria experiments on intricate clockwork devices, one of which was used to kill Queen Elizabeth I of England. The native language appears to bear a close resemblance to modern German.
In the alternate future called "Marvel 2099", various power struggles over the fate of Latveria end with most of the country's population destroyed by chemical weapons known as "necrotoxins".
In the "Marvel Zombies" storyline, Latveria is one of the last few outposts of humanity, as Doctor Doom gathers up the fittest and most fertile of the Latverian survivors in order to send them off to other dimensions. An army of super-zombies lay siege to Doom's castle and eventually break inside. Despite this and Doom himself being bitten, all the Latverian citizens successfully escape.
Latveria was introduced as a bankrupt peasant nation, but thanks to Doctor Doom it was made the ninth richest country on Earth. The townsfolk wear Doom's dragon tattoos, which incorporate microfibers that interface with the brain, acting as mind-control devices. Where this Latveria lies is unclear, but Belgian flags are on display in the background of the one picture of Latveria shown.
In "Ultimate Marvel Team-Up", Latveria was presented as an impoverished dictatorial theocracy, under "his holiness" President Victor von Doom (wearing his traditional Marvel armour and cloak). The Latverians attempted, in collusion with the United States via Nick Fury, to steal the Iron Man technology from Tony Stark; this failed, partly due to the intervention of Spider-Man. At some point, however, Doom declared a holy war on the United States, creating tensions between the two countries. This would be ignored and retconned away in later Ultimate Marvel titles.
|
https://en.wikipedia.org/wiki?curid=17957
|
Least common multiple
In arithmetic and number theory, the least common multiple, lowest common multiple, or smallest common multiple of two integers "a" and "b", usually denoted by lcm("a", "b"), is the smallest positive integer that is divisible by both "a" and "b". Since division of integers by zero is undefined, this definition has meaning only if "a" and "b" are both different from zero. However, some authors define lcm("a",0) as 0 for all "a", which is the result of taking the lcm to be the least upper bound in the lattice of divisibility.
The lcm of the denominators of two fractions is the "lowest common denominator" (lcd), which must be found before the fractions can be added, subtracted or compared. The lcm of more than two integers is also well-defined: it is the smallest positive integer that is divisible by each of them.
A multiple of a number is the product of that number and an integer. For example, 10 is a multiple of 5 because 5 × 2 = 10, so 10 is divisible by 5 and 2. Because 10 is the smallest positive integer that is divisible by both 5 and 2, it is the least common multiple of 5 and 2. By the same principle, 10 is the least common multiple of −5 and −2 as well.
The least common multiple of two integers "a" and "b" is denoted as lcm("a", "b"). Some older textbooks use ["a", "b"], while the programming language J uses "a*.b".
Multiples of 4 are: 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, ...
Multiples of 6 are: 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, ...
"Common multiples" of 4 and 6 are the numbers that are in both lists: 12, 24, 36, 48, 60, 72, ...
So, from this list of the first few common multiples of the numbers 4 and 6, their "least common multiple" is 12.
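The definition above can be sketched directly in code. The following is a minimal, deliberately naive Python sketch (the function name `lcm_brute_force` is an illustrative choice, not from the source) that scans successive multiples of one number until it finds one divisible by the other:

```python
def lcm_brute_force(a, b):
    """Return the least common multiple by scanning successive multiples of a."""
    a, b = abs(a), abs(b)          # the lcm is defined via positive multiples
    if a == 0 or b == 0:
        return 0                   # convention used by some authors: lcm(a, 0) = 0
    multiple = a
    while multiple % b != 0:       # find the first multiple of a also divisible by b
        multiple += a
    return multiple

print(lcm_brute_force(4, 6))       # 12, matching the list of common multiples above
print(lcm_brute_force(-5, -2))     # 10, signs do not matter
```

This brute-force scan is only practical for small inputs; the gcd-based formula discussed later in the article is the standard efficient method.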
When adding, subtracting, or comparing simple fractions, the least common multiple of the denominators (often called the lowest common denominator) is used, because each of the fractions can be expressed as a fraction with this denominator. For example,
where the denominator 42 was used because it is the least common multiple of 21 and 6.
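A small Python sketch of this rescaling, using the denominators 21 and 6 mentioned in the text (the numerators 2 and 1 are arbitrary illustrative choices, not taken from the source):

```python
from math import gcd

def lcd(*denominators):
    """Lowest common denominator = lcm of all the denominators."""
    result = 1
    for d in denominators:
        result = result * d // gcd(result, d)
    return result

common = lcd(21, 6)                       # 42
a = 2 * (common // 21)                    # 2/21 rescaled to 4/42
b = 1 * (common // 6)                     # 1/6  rescaled to 7/42
print(f"{a}/{common} + {b}/{common} = {a + b}/{common}")   # 4/42 + 7/42 = 11/42
```

Once both fractions share the denominator 42, adding (or comparing) them reduces to arithmetic on the numerators.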
Suppose there are two meshing gears in a machine, having "m" and "n" teeth, respectively, and the gears are marked by a line segment drawn from the center of the first gear to the center of the second gear. When the gears begin rotating, the number of rotations the first gear must complete to realign the line segment can be calculated by using lcm("m", "n"). The first gear must complete lcm("m", "n")/"m" rotations for the realignment. By that time, the second gear will have made lcm("m", "n")/"n" rotations.
Suppose there are three planets revolving around a star which take "l", "m" and "n" units of time respectively to complete their orbits. Assume that "l", "m" and "n" are integers. Assuming the planets started moving around the star after an initial linear alignment, all the planets attain a linear alignment again after lcm("l", "m", "n") units of time. At this time, the first, second and third planet will have completed lcm("l", "m", "n")/"l", lcm("l", "m", "n")/"m" and lcm("l", "m", "n")/"n" orbits respectively around the star.
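The planetary-alignment reasoning can be checked numerically. The sketch below folds the lcm over any number of periods; the periods 4, 6 and 10 are hypothetical values chosen for the example, not from the source:

```python
from math import gcd
from functools import reduce

def lcm(*nums):
    """lcm of any number of positive integers, computed pairwise."""
    return reduce(lambda x, y: x * y // gcd(x, y), nums)

# Hypothetical orbital periods l, m, n (in units of time):
l, m, n = 4, 6, 10
t = lcm(l, m, n)                  # time until the next linear alignment
print(t)                          # 60
print(t // l, t // m, t // n)     # orbits completed by each planet: 15 10 6
```

Folding pairwise works because the lcm is associative: lcm("l", "m", "n") = lcm(lcm("l", "m"), "n").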
The following formula reduces the problem of computing the least common multiple to the problem of computing the greatest common divisor (gcd), also known as the greatest common factor: lcm("a", "b") = |"a" · "b"| / gcd("a", "b").
This formula is also valid when exactly one of "a" and "b" is 0, since gcd("a", 0) = |"a"|. However, if both "a" and "b" are 0, this formula would cause division by zero; lcm(0, 0) = 0 is a special case.
There are fast algorithms for computing the gcd that do not require the numbers to be factored, such as the Euclidean algorithm. To return to the example above, lcm(4, 6) = 4 × 6 / gcd(4, 6) = 24 / 2 = 12.
Because gcd("a", "b") is a divisor of both "a" and "b", it is more efficient to compute the lcm by dividing "before" multiplying: lcm("a", "b") = (|"a"| / gcd("a", "b")) · |"b"|.
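A Python sketch of the gcd-based method, with the Euclidean algorithm written out explicitly. Dividing before multiplying keeps intermediate values small, which matters in languages with fixed-width integers (Python's integers are arbitrary-precision, so the benefit there is only keeping numbers small):

```python
def gcd(a, b):
    """Euclidean algorithm; no factorization of a or b is needed."""
    while b:
        a, b = b, a % b
    return abs(a)

def lcm(a, b):
    """lcm via gcd, dividing before multiplying to keep intermediates small."""
    if a == 0 or b == 0:
        return 0                   # special case: lcm(a, 0) = lcm(0, 0) = 0
    return abs(a // gcd(a, b) * b)

print(lcm(4, 6))    # 12
print(lcm(21, 6))   # 42
print(lcm(0, 5))    # 0
```

The explicit zero check mirrors the article's caveat: applying the formula with "a" = "b" = 0 would divide by zero, so lcm(0, 0) = 0 is handled separately.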
This identity is self-dual:
Then
where the absolute bars || denote the cardinality of a set.
The least common multiple can be defined generally over commutative rings as follows: Let "a" and "b" be elements of a commutative ring "R". A common multiple of "a" and "b" is an element "m" of "R" such that both "a" and "b" divide "m" (that is, there exist elements "x" and "y" of "R" such that "ax" = "m" and "by" = "m"). A least common multiple of "a" and "b" is a common multiple that is minimal in the sense that for any other common multiple "n" of "a" and "b", "m" divides "n".
In general, two elements in a commutative ring can have no least common multiple or more than one. However, any two least common multiples of the same pair of elements are associates. In a unique factorization domain, any two elements have a least common multiple. In a principal ideal domain, the least common multiple of "a" and "b" can be characterised as a generator of the intersection of the ideals generated by "a" and "b" (the intersection of a collection of ideals is always an ideal).
|
https://en.wikipedia.org/wiki?curid=17961
|
Louis St. Laurent
Louis Stephen St. Laurent ("Saint-Laurent" or "St-Laurent" in French, baptized Louis-Étienne St-Laurent; 1 February 1882 – 25 July 1973) was a Canadian politician who served as the 12th prime minister of Canada, from 15 November 1948 to 21 June 1957. He was a Liberal with a strong base in the Catholic francophone community, from which he had long mobilised support for Prime Minister William Lyon Mackenzie King. St. Laurent was an enthusiastic proponent of Canada's joining NATO in 1949 to fight the spread of Communism, overcoming opposition from some intellectuals, the Labor-Progressive Party, and many French Canadians. The contrast with Mackenzie King was not dramatic – they agreed on most policies. St. Laurent harboured a stronger hatred of communism and less fear of the United States. He was neither an idealist nor a bookish intellectual, but an "eminently moderate, cautious ... man ... and a strong Canadian nationalist".
Louis St. Laurent () was born on 1 February 1882 in Compton, Quebec, a village in the Eastern Townships, to Jean-Baptiste-Moïse Saint-Laurent, a French Canadian, and Mary Anne Broderick, an Irish Canadian. He grew up fluently bilingual. His English had a noticeable Irish brogue, while his gestures (such as a hunch of the shoulders) were French.
He received degrees from Séminaire Saint-Charles-Borromée (B.A. 1902) and Université Laval (LL.L. 1905). He was offered, but declined, a Rhodes Scholarship upon his graduation from Laval in 1905. In 1908, he married Jeanne Renault (1886–1966), with whom he had two sons and three daughters, including Jean-Paul St. Laurent.
St-Laurent worked as a lawyer from 1905 to 1941, also becoming a professor of law at Université Laval in 1914. St-Laurent practised corporate and constitutional law in Quebec and became one of the country's most respected counsel. He served as President of the Canadian Bar Association from 1930 to 1932. In 1913 he was one of the defending counsel for Harry Kendall Thaw, who was seeking to avoid extradition from Quebec.
St-Laurent's father, a Compton shopkeeper, was a staunch supporter of the Liberal Party of Canada and was particularly enamoured with Sir Wilfrid Laurier. When Laurier led the Liberals to victory in the 1896 election, 14-year-old Louis relayed the election returns from the telephone in his father's store. However, while an ardent Liberal, Louis remained aloof from active politics for much of his life, focusing instead on his legal career and family. He became one of Quebec's leading lawyers and was so highly regarded that he was offered a position in the Cabinet of the Conservative Prime Minister Arthur Meighen in 1926 and was offered a seat as a justice in the Supreme Court of Canada; he declined both offers.
It was not until he was nearly 60 that St-Laurent finally agreed to enter politics when Liberal Prime Minister William Lyon Mackenzie King appealed to his sense of duty in late 1941.
King's Quebec lieutenant, Ernest Lapointe, had died in November 1941. King believed that his Quebec lieutenant had to be strong enough and respected enough to help deal with the volatile conscription issue. King had been in his political infancy when he witnessed the Conscription Crisis of 1917 during World War I and he wanted to prevent the same divisions from threatening his government. No Quebec or francophone members of King's cabinet or government were willing to step into the role, but many recommended St. Laurent to take the post instead. On these recommendations, King recruited St. Laurent to cabinet as Minister of Justice, Lapointe's former post, on 9 December. St. Laurent agreed to go to Ottawa out of a sense of duty, but only on the understanding that his foray into politics was temporary and that he would return to Quebec at the conclusion of the war. In February 1942, he won a by-election for Quebec East, Lapointe's former riding. The riding had also previously been held by Laurier. St-Laurent supported King's decision to introduce conscription in 1944, despite the lack of support from other French Canadians (see Conscription Crisis of 1944). His support prevented more than a handful of Quebec Liberal Members of Parliament (MPs) from leaving the party, and was therefore crucial to keeping the government and the party united.
He had to deal with the defection of Soviet cipher clerk Igor Gouzenko in Ottawa in September 1945; Gouzenko's revelations and subsequent investigations over the following few years showed major Soviet espionage in North America.
King came to regard St-Laurent as his most trusted minister and natural successor. He persuaded St-Laurent that it was his duty to remain in government following the war in order to help with the construction of a post-war international order and promoted him to the position of Secretary of State for External Affairs (foreign minister) in 1945, a portfolio King had previously always kept for himself. In this role, St-Laurent represented Canada at the Dumbarton Oaks Conference and San Francisco Conference that led to the founding of the United Nations (UN).
At the conferences, St-Laurent, compelled by his belief that the UN would be ineffective in times of war and armed conflict without some military means to impose its will, advocated the adoption of a UN military force. This force, he proposed, would be used in situations that called for both tact and might to preserve peace or prevent combat. In 1956, this idea was actualized by St-Laurent and his Secretary of State for External Affairs Lester B. Pearson in the development of UN Peacekeepers that helped to put an end to the Suez Crisis.
In 1948, Mackenzie King retired, and quietly persuaded his senior ministers to support St-Laurent's selection as the new Liberal leader at the Liberal leadership convention of August 1948. St-Laurent won, and was sworn in as Prime Minister of Canada on 15 November, making him Canada's second French-Canadian Prime Minister, after Wilfrid Laurier.
In the 1949 federal election that followed his ascension to the Liberal leadership, many, including Liberal party insiders, wondered whether St-Laurent would appeal to the post-war populace of Canada. On the campaign trail, St-Laurent's image was developed into somewhat of a 'character' and what is considered to be the first 'media image' to be used in Canadian politics. St-Laurent chatted with children, gave speeches in his shirt sleeves, and had a 'common touch' that turned out to be appealing to voters. At one event during the 1949 election campaign, he disembarked from his train and, instead of approaching the assembled crowd of adults and reporters, gravitated to, and began chatting with, a group of children on the platform. A reporter submitted an article entitled "'Uncle Louis' can't lose!", which earned him the nickname "Uncle Louis" in the media (Papa Louis in Quebec). With this common touch and broad appeal, he subsequently led the party to victory in the election against the Progressive Conservative Party led by George Drew. The Liberals won 190 seats—the most in Canadian history at the time, and still a record for the party.
His reputation as Prime Minister was impressive. He demanded hard work of all of his MPs and Ministers, and worked hard himself. He was reputed to be as knowledgeable on some ministerial portfolios as the ministers responsible themselves. To that end, Jack Pickersgill (a minister in St-Laurent's cabinet) said as prime minister St-Laurent had: "as fine an intelligence as was ever applied to the problems of government in Canada. He left it a richer, a more generous and more united country than it had been before he became prime minister."
St-Laurent led the Liberals to another powerful majority in the 1953 federal election. While the Liberals lost several seats, they still had 111 more seats than the Tories, enabling them to dominate the House of Commons of Canada.
St-Laurent and his cabinet oversaw Canada's expanding international role in the postwar world. His stated desire was for Canada to occupy a social, military and economic middle power role in the post-World War II world. In 1947, he identified five basic principles of Canadian foreign policy and five practical applications regarding Canada's international relations. Always highly sensitive to cleavages of language, religion, and region, he stressed national unity, insisting, "that our external policies shall not destroy our unity ... for a disunited Canada will be a powerless one." He also stressed political liberty and rule of law in the sense of opposition to totalitarianism.
Militarily, St-Laurent was a leading proponent of the establishment of the North Atlantic Treaty Organization (NATO) in 1949, serving as an architect and signatory of the treaty document. Involvement in such an organization marked a departure from King, who had been reticent about joining a military alliance. Under his leadership, Canada supported the United Nations (U.N.) in the Korean War and committed the third-largest overall contribution of troops, ships and aircraft to the U.N. forces in the conflict. Troops sent to Korea were selected on a voluntary basis.
In 1956, St-Laurent's Secretary of State for External Affairs, Lester B. Pearson, helped solve the Suez Crisis between Great Britain, France, Israel and Egypt, bringing forward St-Laurent's 1946 views on a U.N. military force in the form of the United Nations Emergency Force (UNEF), or peacekeeping. It is widely believed that the actions directed by St-Laurent and Pearson could well have averted a nuclear war. These actions were recognized when Pearson won the 1957 Nobel Peace Prize.
St-Laurent was an early supporter of British Prime Minister Clement Attlee's proposal to transform the British Commonwealth from a club of white dominions into a multi-racial partnership. The leaders of the other "white dominions" were less than enthusiastic. It was St-Laurent who drafted the London Declaration, recognizing King George VI as Head of the Commonwealth as a means of allowing India to remain in the international association once it became a republic.
St-Laurent's government was modestly progressive, fiscally conservative and run with business-like efficiency. Robertson says, "St Laurent's administrations from 1949 to 1956 probably gave Canada the most consistently good, financially responsible, trouble-free government the country has had in its entire history."
His government used taxation surpluses no longer needed by the wartime military to pay back in full Canada's debts accrued during the World Wars and the Great Depression. With remaining revenues, St-Laurent oversaw the expansion of Canada's social programs, including the gradual expansion of social welfare programs such as family allowances, old age pensions, government funding of university and post-secondary education, and an early form of Medicare termed "Hospital Insurance" at the time. This scheme laid the groundwork for Tommy Douglas' healthcare system in Saskatchewan, and Pearson's nationwide universal healthcare in the late 1960s. Under this legislation, the federal government paid around 50% of the cost of provincial health plans to cover "a basic range of inpatient services in acute, convalescent, and chronic hospital care." The condition for the cost-sharing agreements was that all citizens were to be entitled to these benefits, and by March 1963, 98.8% of Canadians were covered by "Hospital Insurance". According to historian Katherine Boothe, however, St. Laurent did not regard government health insurance as a "good policy idea", instead favouring the expansion of voluntary insurance through existing plans. In 1951, for instance, St. Laurent spoke in support of the medical profession assuming "the administration and responsibility for, a scheme that would provide prepaid medical attendance to any Canadian who needed it".
In addition, St-Laurent modernized and established new social and industrial policies for the country during his time in the prime minister's office. These measures included the universalization of old-age pensions for all Canadians aged seventy and above (1951), the introduction of old age assistance for needy Canadians aged sixty-five and above (1951), the introduction of allowances for the blind (1951) and the disabled (1954), amendments to the National Housing Act (1954) which provided federal government financing to non-profit organisations as well as the provinces for the renovation or construction of hostels or housing for students, the disabled, the elderly, and families on low incomes, and unemployment assistance (1956) for unemployed employables on welfare who had exhausted (or did not qualify for) unemployment insurance benefits. During his last term as Prime Minister, St-Laurent's government used $100 million in death taxes to establish the Canada Council to support research in the arts, humanities, and social sciences.
In 1949, the former lawyer of many Supreme Court cases, St-Laurent ended the practice of appealing Canadian legal cases to the Judicial Committee of the Privy Council of Great Britain, making the Supreme Court of Canada the highest avenue of legal appeal available to Canadians. In that same year, St-Laurent negotiated the British North America (No. 2) Act, 1949 with Britain which 'partially patriated' the Canadian Constitution, most significantly giving the Canadian Parliament the authority to amend portions of the constitution. Also in 1949, following two referenda within the province, St-Laurent and Premier Joey Smallwood negotiated the entry of Newfoundland into Confederation.
When asked in 1949 whether he would outlaw the Communist Party in Canada, St-Laurent responded that the party posed little threat and that such measures would be drastic.
In 1952, he advised Queen Elizabeth II to appoint Vincent Massey as the first Canadian-born Governor-General. Each of the aforementioned actions were and are seen as significant in furthering the cause of Canadian autonomy from Britain and developing a national identity on the international stage.
In 1956, using the Constitutional taxation authority of the federal level of government, St-Laurent's government introduced the policy of "Equalization payments" which redistributes taxation revenues between provinces to assist the poorer provinces in delivering government programs and services, a move that has been considered a strong one in solidifying the Canadian federation, particularly with his home province of Québec.
The government also engaged in massive public works and infrastructure projects such as building the Trans-Canada Highway (1949), the St. Lawrence Seaway (1954) and the Trans-Canada Pipeline. It was this last project that was to sow the seeds that led to the downfall of the St-Laurent government.
St-Laurent was initially very well received by the Canadian public, but by 1957, "Uncle Louis" (as he was sometimes referred to) began to appear tired, old and out of touch; he was 75 years old and had many hard years of work behind him. His government was also perceived to have grown too close to business interests. The 1956 Pipeline Debate led to the widespread impression that the Liberals had grown arrogant in power. On numerous occasions, the government invoked closure in order to curtail debate and ensure that its Pipeline Bill passed by a specific deadline. St. Laurent was criticized for a lack of restraint exercised on his minister C. D. Howe, who was widely perceived as extremely arrogant. Western Canadians felt particularly alienated by the government, believing that the Liberals were kowtowing to interests in Ontario and Quebec and the United States. (The opposition accused the government of accepting overly costly contracts that could never be completed on schedule. In the end, the pipeline was completed early and under budget.) The pipeline conflict turned out to be meaningless, insofar as the construction work was concerned, since pipe could not be obtained in 1956 from a striking American factory, and no work could have been done that year. The uproar in Parliament regarding the pipeline left a lasting impression on the electorate, and was a decisive factor in the Liberals' defeat at the hands of the Progressive Conservatives, led by John Diefenbaker, in the 1957 election. Because the Liberals were still mostly classically liberal, Diefenbaker promised to outspend the incumbent Liberals, who campaigned on plans to stay the course of fiscal conservatism they had followed through St-Laurent's term in the 1940s and 1950s.
St-Laurent was the first Prime Minister to live in the present official residence of the Prime Minister of Canada: 24 Sussex Drive, from 1951 to 1957, the end of his term in office.
By 1957 St. Laurent was 75 years old and tired. His party had been in power for 22 years, and by this time had accumulated too many factions and alienated too many groups. He was ready to retire, but was persuaded to fight one last campaign. In the 1957 election, the Liberals won 200,000 more votes nationwide than the Progressive Conservatives (40.75% Liberals to 38.81% PC). However, most of those votes were wasted with huge majorities in Quebec. Largely due to dominating the rest of the country, the Progressive Conservatives took the greatest number of seats with 112 seats (42% of the House) to the Liberals' 104 (39.2%). Some ministers wanted St. Laurent to stay on and offer to form a minority government, arguing that the popular vote had supported them and the party's long years of experience would make them a more effective minority.
Another option circulated within the party saw the balance of power held by either the Co-operative Commonwealth Federation (CCF), with their 25 seats, or the Social Credit Party of Canada, with their 15 seats. St-Laurent was encouraged by others to reach out to the CCF and at least four of six independent/small party MPs to form a coalition majority government, which would have held 134 of the 265 seats in Parliament—50.1% of the total. St. Laurent, however, had no desire to stay in office; he believed that the nation had passed a verdict against his government and his party. In any case, the CCF and Socreds had pledged to cooperate with a Tory government. It was very likely that St. Laurent would have been defeated on the floor of the House had he tried to stay in power with a minority government, and would not have stayed in office for long even if he survived that confidence vote. With this in mind, St. Laurent resigned on 21 June 1957—ending the longest uninterrupted run in government for a party at the federal level in Canadian history.
St-Laurent chose the following jurists to be appointed as justices of the Supreme Court of Canada by the Governor General:
After a short period as Leader of the Opposition, St-Laurent, now more than 75 years old, had lost his motivation to be involved in politics. He announced his intention to retire from politics. What had been a "temporary" political career had lasted 17 years. He was succeeded as Liberal Party leader by his former Secretary of State for External Affairs and representative at the United Nations, Lester B. Pearson, at the party's leadership convention in 1958.
After his political retirement, he returned to practising law and living quietly and privately with his family. During his retirement, he was called into the public spotlight one final time in 1967 to be made a Companion of the Order of Canada, a newly created award.
Order of Canada citation
St. Laurent was appointed a Companion of the Order of Canada on 6 July 1967. His citation reads:
Former Prime Minister of Canada. For his service to his country.
Louis Stephen St-Laurent died from heart failure on 25 July 1973, in Quebec City, Quebec, aged 91, and was buried at Saint Thomas d'Aquin Cemetery in his hometown of Compton, Quebec. He was survived by granddaughters Helen, Marie and Francine, and grandsons Louis St-Laurent II and Michael S. O'Donnell.
St. Laurent was ranked #4 on a survey of the first 20 prime ministers (through Jean Chrétien) of Canada done by Canadian historians, and used by J. L. Granatstein and Norman Hillmer in their book "Prime Ministers: Ranking Canada's Leaders".
The house and grounds in Compton where St. Laurent was born were designated a National Historic Site of Canada in 1973. St. Laurent's residence at 201 Grande-Allée Est in Quebec City is protected as a Recognized Federal Heritage Building.
Louis St-Laurent School in Edmonton, Alberta, is named in his honour.
|
https://en.wikipedia.org/wiki?curid=17962
|
Louis Leakey
Louis Seymour Bazett Leakey (7 August 1903 – 1 October 1972) was a British paleoanthropologist and archaeologist whose work was important in demonstrating that humans evolved in Africa, particularly through discoveries made at Olduvai Gorge with his wife, fellow paleontologist Mary Leakey. Having established a program of palaeoanthropological inquiry in eastern Africa, he also motivated many future generations to continue this scholarly work. Several members of Leakey's family became prominent scholars themselves.
Another of Leakey's legacies stems from his role in fostering field research of primates in their natural habitats, which he saw as key to understanding human evolution. He personally focused on three female researchers, Jane Goodall, Dian Fossey, and Birutė Galdikas, whom he called The Trimates. Each went on to become an important scholar in the field of primatology. Leakey also encouraged and supported many other PhD candidates, most notably from the University of Cambridge, and played a role in creating organizations for future research in Africa and for protecting wildlife there.
Louis's parents, Harry (1868–1940) and Mary (May) Bazett Leakey (died 1948), were Church of England missionaries in British East Africa (now Kenya). Harry was the son of James Shirley Leakey (1824–1871), one of the eleven children of the portrait painter James Leakey. Harry Leakey was assigned to an established post of the Church Mission Society among the Kikuyu at Kabete, in the highlands north of Nairobi. The station was at that time a hut and two tents. Louis's earliest home had an earthen floor, a leaky thatched roof, rodents and insects, and no heating system except for charcoal braziers. The facilities slowly improved over time. The mission, a center of activity, set up a clinic in one of the tents, and later a girls' school. Harry was working on a translation of the Bible into the Gikuyu language. He had a distinguished career in the CMS, becoming canon of the station.
Louis had a younger brother, Douglas, and two older sisters, Gladys and Julia. Both sisters married missionaries: Gladys married Leonard Beecher, Anglican Bishop of Mombasa and then Archbishop of East Africa from 1960 to 1970; Julia married Lawrence Barham, the second Bishop of Rwanda and Burundi from 1964 to 1966; their son Ken Barham was later the Bishop of Cyangugu in Rwanda.
The Leakey household came to contain Miss Oakes (a governess), Miss Higgenbotham (another missionary), and Mariamu (a Kikuyu nurse). Louis grew up, played, and learned to hunt with the native Kikuyus. He also learned to walk with the distinctive gait of the Kikuyu and speak their language fluently, as did his siblings. He was initiated into the Kikuyu ethnic group, an event of which he never spoke, as he was sworn to secrecy.
Louis requested and was given permission to build and move into a hut, Kikuyu style, at the end of the garden. It was home to his personal collection of natural objects, such as birds' eggs and skulls. All the children developed a keen interest in and appreciation of the pristine natural surroundings in which they found themselves. They raised baby animals, later turning them over to zoos. Louis read a gift book, "Days Before History", by H. R. Hall (1907), a juvenile fictional work illustrating the prehistory of Britain. He began to collect tools and was further encouraged in this activity by a role model, Arthur Loveridge, first curator (1914) of the Natural History Museum in Nairobi, predecessor of the Coryndon Museum. This interest may have predisposed him toward a career in archaeology. His father was also a role model: Canon Leakey co-founded the East Africa and Uganda Natural History Society.
Neither Harry nor May were of strong constitution. From 1904–1906 the entire family lived at May's mother's house in Reading, Berkshire, England, while Harry recovered from neurasthenia, and again in 1911–1913, while May recovered from general frailty and exhaustion. During the latter stay, Harry bought a house in Boscombe, Hampshire.
In Britain, the Leakey children attended elementary school; in Africa, they had a tutor. The family sat out World War I in Africa. When the sea lanes opened again in 1919, they returned to Boscombe, where Louis was sent to Weymouth College, a private boys' school, when he was 16 years old. In three years there, he did not do well and complained of hazing and rules that he considered an infringement on his freedom. Advised by one teacher to seek employment in a bank, he secured help from an English teacher in applying to St John's College, Cambridge. He received a scholarship for his high scores on the entrance exams.
Louis matriculated at the University of Cambridge, his father's alma mater, in 1922, intending to become a missionary to British East Africa.
He frequently told a story about his final exams. When he had arrived in Britain, he had notified the registrar that he was fluent in Swahili. When he came to his finals, he asked to be examined in this language, and the authorities agreed. Then one day, he received two letters. One instructed him to report at a certain time and place for a "viva voce" examination in Swahili. The other asked if, at the same time and place, he would examine a candidate in Swahili.
In 1922 the British had been awarded German East Africa as part of the settlement of World War I. Within the Tanganyika Territory the Germans had discovered a site rich in dinosaur fossils, Tendaguru. Louis was told by C. W. Hobley, a friend of the family, that the British Museum of Natural History was going to send a fossil-hunting expedition led by William E. Cutler to the site. Louis applied and was hired to locate the site and manage the administrative details. In 1924 they departed for Africa. They never found a complete dinosaur skeleton, and Louis was recalled from the site by Cambridge in 1925.
Louis switched his focus to anthropology, and found a new mentor in Alfred Cort Haddon, head of the Cambridge department. In 1926, Louis graduated with a "double first", or high honours, in anthropology and archaeology. He had used some of his preexisting qualifications; for example, Kikuyu was offered and accepted as the second modern language in which he was required to be proficient, even though no one there could test him on it. The university accepted an affidavit from a Kikuyu chief signed with a thumbprint.
From 1925 on Louis lectured and wrote on African archaeological and palaeontological topics. On graduation he was such a respected figure that Cambridge sent him to East Africa to study prehistoric African humans. He excavated dozens of sites, undertaking for the first time a systematic study of the artifacts. Some of his names for archaeological cultures are still in use; for example, the Elmenteitan.
In 1927, Louis received a visit at a site called Gamble's Cave, near Lake Elmenteita, by two women on a holiday, one of whom was Frida Avern (1902–1993). Avern had done some course work in archaeology. Louis and Frida began a relationship, which continued upon his return to Cambridge. In 1928, they married and continued work near Lake Elmenteita. Finds from Gamble's Cave were donated by Leakey to the British Museum in 1931. At that time he discovered the Acheulean site of Kariandusi, which he excavated in 1928.
On the strength of his work there, he obtained a post-graduate research fellowship at St. John's College and returned to Cambridge in 1929 to classify and prepare the finds from Elmenteita. His patron and mentor at Cambridge was now Arthur Keith. While cleaning two skeletons he had found, he noticed a similarity to one found in Olduvai Gorge by Professor Hans Reck, a German national, whom Louis had met in 1925 in Germany while on business for Keith.
The geology of Olduvai was known. In 1913, Reck had extricated a skeleton from Bed II in the gorge wall. He argued that it must have the date of the bed, which was believed to be 600,000 years old, in the mid-Pleistocene. Early dates for human evolution were not widely accepted by the general public at the time. Reck became involved in a media uproar. He was barred from going back to settle the question by the war and then by the terms of the transfer of Tanganyika from Germany to Britain. In 1929, Louis visited Berlin to talk to the now skeptical Reck. Noting an Acheulean tool in Reck's collection of artifacts from Olduvai, he bet Reck he could find ancient stone tools at Olduvai within 24 hours.
Louis received his Ph.D. in 1930 at the age of 27. His first child, a daughter named Priscilla Muthoni Leakey, was born in 1931. His headaches and epilepsy returned, and he was prescribed Luminal, which he took for the rest of his life.
In November 1931, Louis led an expedition to Olduvai whose members included Reck, whom Louis allowed to enter the gorge first. Leakey had bet Reck that he would find Acheulean tools within the first 24 hours, which he did. These verified the provenance of the 1913 find, now called Olduvai Man. Non-humanoid fossils and tools were extracted from the ground in large numbers. Frida, less enthusiastic about the venture, delayed joining her husband for Priscilla's sake. She did arrive eventually, however, and Louis put her to work. Frida's site became FLK, for Frida Leakey's Karongo ("gully").
Back in Cambridge, the skeptics were not impressed. To find supporting evidence of the antiquity of Reck's Olduvai Man, Louis returned to Africa, excavating at Kanam and Kanjera. He easily found more fossils, which he named Homo kanamensis. While he was gone, the opposition worked up some "evidence" of the intrusion of Olduvai Man into an earlier layer, evidence that seemed convincing at the time, but is missing and unverifiable now. On his return, Louis' finds were carefully examined by a committee of 26 scientists and were tentatively accepted as valid.
With Frida's dowry money, the Leakeys bought a large brick house in Girton near Cambridge, which they named "The Close."
Frida, now pregnant, suffered from morning sickness much of the time and was unable to work on the illustrations for Louis's second book, "Adam's Ancestors." At a dinner party given in his honor after a lecture of his at the Royal Anthropological Institute, Gertrude Caton-Thompson introduced her own illustrator, the twenty-year-old Mary Nicol. Louis convinced Mary to take on the illustration of his book, and a few months later companionship turned to romance. Frida gave birth to Colin in December 1933, and the next month Louis asked her for a divorce. She would not sue for divorce until 1936.
A panel at Cambridge investigated his morals. Grants dried up, but his mother raised enough money for another expedition to Olduvai, Kanam and Kanjera, the latter two on the Winam Gulf. His previous work there was questioned by P. G. H. Boswell, whom he invited to verify the sites for himself. Arriving at Kanam and Kanjera in 1935, they found that the iron markers Louis had used to mark the sites had been removed by the Luo tribe for use as harpoons and the sites could not now be located. To make matters worse, all the photos Louis took were ruined by a light leak in the camera. After an irritating and fruitless two-month search, Boswell left for England, promising, as Louis understood it, not to publish a word until Louis returned.
Boswell immediately set out to publish as many words as he was able, beginning with a letter in "Nature" dated 9 March 1935, destroying Reck's and Louis's dates of the fossils and questioning Louis's competence. Despite the searches for the iron markers, Boswell averred that "the earlier expedition (of 1931–32) neither marked the localities on the ground nor recorded the sites on a map." In a field report of March 1935, Louis accused Boswell of reneging on his word, but Boswell asserted he had made no such promise, and now having public opinion on his side, warned Louis to withdraw the claim. Louis was not only forced to retract the accusation in his final field report in June 1935, but also to recant his support of Reck. Louis was through at Cambridge. Even his mentors turned on him.
Meeting Mary in Africa, he proceeded to Olduvai with a small party. Louis' parents continued to urge him to return to Frida, and would pay for everyone in the party but Mary. Mary joined him under a stigma but her skill and competence eventually won over the other participants. Louis and his associates did the groundwork for future excavation at Olduvai, uncovering dozens of sites for a broad sampling, as was his method. They were named after the excavator: SHK (Sam Howard's karongo), BK (Peter Bell's), SWK (Sam White's), MNK (Mary Nicol's). Louis and Mary conducted a temporary clinic for the Maasai, made preliminary investigations of Laetoli, and ended by studying the rock paintings at the Kisese/Cheke region.
Louis and Mary returned to England in 1935 without positions or any place to stay except Mary's mother's apartment. They soon leased Steen Cottage in Great Munden, a Hertfordshire settlement whose unusual name Louis, with his sense of humor, noted in his "Memoirs" (Chapter 5) as "the village of Nasty." They lived without heat, electricity, or plumbing, fetching water from a well and writing by oil lantern. They lived in poverty for 18 months at this low point of their fortunes, visited at first only by Mary's relatives. Louis gardened for subsistence and exercise and improved the house and grounds. He appealed at last to the Royal Society, who relented with a small grant to continue work on his collection.
Louis had already involved himself in Kikuyu tribal affairs in 1928, taking a stand against female genital cutting. He got into a shouting match in Kikuyu one evening with Jomo Kenyatta, later the president of Kenya, who was lecturing on the topic. R. Coupland at Oxford recommended he apply to the Rhodes Trust for a grant to write a study of the Kikuyu, and it was given late in 1936 along with a salary for two years. In January 1937 the Leakeys travelled to Kenya. Colin would not see his father for 20 years.
Louis returned to Kiambaa near Nairobi and persuaded Senior Chief Koinange, who designated a committee of chiefs, to help him describe the Kikuyu the way they had been. Mary excavated at Waterfall Cave. She fell ill with double pneumonia and was near death for two weeks in the hospital in Nairobi, during which time her mother was sent for. Contrary to expectation, she recovered and began another excavation at Hyrax Hill and then Njoro River Cave. Louis got an extension of his grant, which he used partially for fossil-hunting. Leakey discoveries began to appear in the newspapers again.
Tensions between the Kikuyu and the settlers increased alarmingly. Louis jumped into the fray as an exponent of the middle ground. In "Kenya: Contrasts and Problems", he angered the settlers by proclaiming Kenya could never be a "white man's country."
The government offered Louis work as a policeman in intelligence, which he accepted. He traveled the country as a pedlar, reporting on the talk. When Britain went to war in September 1939 the Kenyan government drafted Louis into its African intelligence service. Apart from some bumbling around, during which he and some settlers stalked each other as possible saboteurs of the Sagana Railway Bridge, his first task was to supply and arm Ethiopian guerrillas against the Italian invaders of their country. He created a clandestine network using his childhood friends among the Kikuyu. They also hunted fossils on the sly.
Louis conducted interrogations, analyzed handwriting, wrote radio broadcasts and took on regular police investigations. He loved a good mystery of any sort. The white leadership of the King's African Rifles used him extensively to clear up many cultural mysteries; for example, he helped an officer remove a curse he had inadvertently put on his men.
Mary continued to find and excavate sites. Jonathan Leakey was born in 1940. She worked in the Coryndon Memorial Museum (later called the National Museums of Kenya) where Louis joined her as an unpaid honorary curator in 1941. Their life was a menage of police work and archaeology. They investigated Rusinga Island and Olorgesailie. At the latter site they were assisted by a team of Italian experts recruited from the prisoners of war and paroled for the purpose.
In 1942 the Italian menace ended, but the Japanese began to reconnoiter with a view toward landing in force. Louis found himself in counter-intelligence work, which he performed with zest and imagination. Deborah was born, but died at three months. They lived in a rundown, bug-infested Nairobi home provided by the museum. Jonathan was attacked by army ants in his crib.
In 1944 Richard Leakey was born. In 1945 the family's income from police work all but vanished. By now Louis was getting plenty of job offers but he chose to stay on in Kenya as Curator of the Coryndon Museum, with an annual salary and a house, but more importantly, to continue palaeoanthropological research.
In January 1947 Louis conducted the first Pan-African Congress of Prehistory at Nairobi. Sixty scientists from 26 countries attended, delivering papers and visiting the Leakey sites. The conference restored Louis to the scientific fold and made him a major figure in it. With the money that now poured in Louis undertook the famous expeditions of 1948 and beyond at Rusinga Island in Lake Victoria, where Mary discovered the most complete Proconsul fossil up to that time.
Charles Watson Boise donated money for a boat to be used for transport on Lake Victoria, "The Miocene Lady". Its skipper, Hassan Salimu, was later to deliver Jane Goodall to Gombe. Philip Leakey was born in 1949. In 1950, Louis was awarded an honorary doctorate by Oxford University.
While the Leakeys were at Lake Victoria, the Kikuyu struck at the European settlers of the Kenyan highlands, who seemed to have the upper hand and were insisting on a "white" government of a "white" Africa. In 1949 the Kikuyu formed a secret society, the Mau Mau, which attacked settlers and especially loyalist Kikuyu.
Louis had attempted to warn Sir Philip Mitchell, governor of the colony, that nocturnal meetings and forced oaths were not Kikuyu customs and foreboded violence, but was ignored. Now he found himself pulled away from anthropology to investigate the Mau Mau. During this period his life was threatened and a reward placed on his head. The Leakeys began to pack pistols, termed "European National Dress." The government placed him under 24-hour guard.
In 1952, after a massacre of loyal chiefs, the government arrested Jomo Kenyatta, president of the Kenya African Union. Louis was summoned to be a court interpreter, but withdrew after an accusation of mistranslation because of prejudice against the defendant. He returned on request to translate documents only. Although convicted, Kenyatta did not receive the death penalty, owing to the lack of evidence linking him to the Mau Mau; he was instead sentenced to several years of hard labor and banned from Kenya.
The government brought in British troops and formed a home guard of 20,000 Kikuyu. During this time Louis played a difficult and contradictory role. He sided with the settlers, serving as their spokesman and intelligence officer, helping to ferret out bands of guerrillas. On the other hand, he continued to advocate for the Kikuyu in his 1954 book "Defeating Mau Mau" and numerous talks and articles. He recommended a multi-racial government, land reform in the highlands, a wage hike for the Kikuyu, and many other reforms, most of which were eventually adopted.
The British realized the rebellion was being directed from urban centers, instituted military law and rounded up the committees. Following Louis' suggestion, thousands of Kikuyu were placed in re-education camps and resettled in new villages. The rebellion continued from bases under Mt. Kenya until 1956, when, deprived of its leadership and supplies, it had to disperse. The state of emergency lasted until 1960. In 1963 Kenya became independent, with Jomo Kenyatta as prime minister.
Beginning in 1951, Louis and Mary began intensive research at Olduvai Gorge. A trial trench in Bed II at BK in 1951 was followed by a more extensive excavation in 1952. They found what Louis termed an Oldowan "slaughter-house", an ancient bog where animals had been trapped and butchered. Excavations stopped in 1953 but were briefly resumed in 1955 with Jean Brown.
In 1959, excavations at Bed I were opened. While Louis was sick in camp, Mary discovered the fossilized skull OH 5 at FLK, "Paranthropus boisei", famously identified as ""Zinjanthropus"" or "Zinj." The question was whether the fossil belonged to a previous genus discovered by Robert Broom, "Paranthropus", or a member of a different genus ancestral to humans. Louis opted for "Zinjanthropus", a decision opposed by Wilfrid Le Gros Clark, but one which attracted the attention of Melville Bell Grosvenor, president of the National Geographic Society. That contact resulted in an article in "National Geographic" and a large grant to continue work at Olduvai.
In 1960, geophysicists Jack Evernden and Garniss Curtis dated Bed I from 1.89 to 1.75 million years ago, confirming the great antiquity of fossil hominids in Africa.
In 1960, Louis appointed Mary director of excavation at Olduvai. She brought in a staff of Kamba assistants, including Kamoya Kimeu, who later discovered many of eastern Africa's most famous fossils. At Olduvai, Mary set up Camp 5 and began work with her own staff and associates.
At "Jonny's site", FLK-NN, Jonathan Leakey discovered two skull fragments without the Australopithecine sagittal crest, which Mary connected with Broom's and Robinson's "Telanthropus". The problem with it was its contemporaneity with "Zinjanthropus". On being mailed photographs, Le Gros Clark retorted casually, "Shades of Piltdown." Louis cabled him immediately and had some strong words about this suggestion of his incompetence. Clark apologized.
Not long afterwards, in 1960, Louis, his son Philip and Ray Pickering discovered a fossil he termed "Chellean Man", (Olduvai Hominid 9), in context with Oldowan tools. After reconstruction Louis and Mary called it "Pinhead." It was subsequently identified as "Homo erectus", contemporaneous with "Paranthropus" at 1.4 million years old.
In 1961 Louis got a salary as well as a grant from the National Geographic Society and turned over the acting directorship of Coryndon to a subordinate. He created the Centre for Prehistory and Paleontology on the same grounds, moved his collections to it, and appointed himself director. This was his new operations center. He opened another excavation at Fort Ternan on Lake Victoria. Shortly after, Heselon discovered "Kenyapithecus wickeri", named after the owner of the property. Louis promptly celebrated with George Gaylord Simpson, who happened to be present, aboard the "Miocene Lady" with "Leakey Safari Specials", a drink made of condensed milk and cognac.
In 1962 Louis was visiting Olduvai when Ndibo Mbuika discovered the first tooth of "Homo habilis" at MNK. Louis and Mary thought it was female and named her Cinderella, or Cindy. Phillip Tobias identified Jonny's Child with it and Raymond Dart came up with the name "Homo habilis" at Louis' request, which Tobias translated as "handyman." It was seen as intermediary between gracile "Australopithecus" and "Homo".
In 1959 Leakey, while at the British Museum of Natural History in London, received a visit from Ruth DeEtte Simpson, an archaeologist from California. Simpson had acquired what looked like ancient scrapers from a site in the Calico Hills and showed them to Leakey.
In 1963, Leakey obtained funds from the National Geographic Society and commenced archaeological excavations with Simpson. Excavations at the site carried out by Leakey and Simpson revealed that they had located stone artifacts which were dated 100,000 years or older, suggesting a human presence in North America much earlier than others had estimated.
The geologist Vance Haynes made three visits to the site in 1973 and claimed that the artifacts found by Leakey were naturally formed geofacts. According to Haynes, the geofacts were formed by stones fracturing in an ancient river on the site.
In her autobiography, Mary Leakey wrote that because of Louis's involvement with the Calico Hills site she had lost academic respect for him and that the Calico excavation project was "catastrophic to his professional career and was largely responsible for the parting of our ways".
One of Louis's legacies stems from his role in fostering field research of primates in their natural habitats, which he understood as key to unraveling the mysteries of human evolution. He personally chose three female researchers, Jane Goodall, Dian Fossey, and Birutė Galdikas, calling them The Trimates. Each went on to become an important scholar in the field of primatology, immersing themselves in the study of chimpanzees, gorillas and orangutans, respectively. Leakey also encouraged and supported many other Ph.D. candidates, most notably from Cambridge University.
During his final years Louis became famous as a lecturer in the United Kingdom and United States. He did not excavate any longer, as he was crippled with arthritis, for which he had a hip replacement in 1968. He raised funds and directed his family and associates. In Kenya he was a facilitator for hundreds of scientists exploring the East African Rift system for fossils.
In 1968, Louis refused an honorary doctorate from the University of Witwatersrand in Johannesburg, primarily because of apartheid in South Africa. Mary accepted one, and they thereafter led separate professional lives.
In the last few years Louis' health began to fail more seriously. He had his first heart attacks and spent six months in the hospital. An empathy over health brought him and Dian Fossey together for a brief romance, which she broke off. Richard began to assume more and more of his father's responsibilities, which Louis resisted, but in the end was forced to accept.
On 1 October 1972, Louis had a heart attack in Jane Goodall's apartment in London. Jane sat up all night with him in St. Stephen's Hospital and left at 9:00 a.m. He died 30 minutes later at the age of 69.
Mary wanted to cremate Louis and fly the ashes back to Nairobi. Richard intervened. Louis' dead body was flown home and interred at Limuru, near the graves of his parents.
In denial, the family did not face the question of a memorial marker for a year. When Richard went to place a stone on the grave he found one already there, courtesy of Louis' former secretary Rosalie Osborn. The inscription was signed with the letters, "ILYFA", "I'll love you forever always", which Rosalie used to place on her letters to him. Richard left it in place.
Louis Leakey was married to Mary Leakey, who made the noteworthy discovery of fossil footprints at Laetoli. Found preserved in volcanic ash in Tanzania, they are the earliest record of bipedal gait.
He is also the father of paleoanthropologist Richard Leakey and the botanist Colin Leakey. Louis's cousin, Nigel Gray Leakey, was a recipient of the Victoria Cross during World War II.
Leakey's books are listed below. The gaps between books are filled by too many articles to list. It was Louis who began the Leakey tradition of publishing in "Nature".
Liar paradox
In philosophy and logic, the classical liar paradox or liar's paradox or antinomy of the liar is the statement of a liar that he or she is lying: for instance, declaring that "I am lying". If the liar is indeed lying, then the liar is telling the truth, which means the liar just lied. In "this sentence is a lie" the paradox is strengthened in order to make it amenable to more rigorous logical analysis. It is still generally called the "liar paradox", even though it abstracts away from the liar who makes the statement. Trying to assign to this statement, the strengthened liar, a classical binary truth value leads to a contradiction.
If "this sentence is false" is true, then it is false, but the sentence states that it is false, and if it is false, then it must be true, and so on.
The Epimenides paradox (circa 600 BC) has been suggested as an example of the liar paradox, but they are not logically equivalent. The semi-mythical seer Epimenides, a Cretan, reportedly stated that "All Cretans are liars." However, Epimenides' statement that all Cretans are liars can be resolved as false, given that he knows of at least one other Cretan who does not lie. It is precisely in order to avoid uncertainties deriving from the human factor and from fuzzy concepts that modern logicians proposed a "strengthened" liar such as the sentence "this sentence is false".
The paradox's name in Ancient Greek is "pseudómenos lógos" (ψευδόμενος λόγος). One version of the liar paradox is attributed to the Greek philosopher Eubulides of Miletus who lived in the 4th century BC. Eubulides reportedly asked, "A man says that he is lying. Is what he says true or false?"
The paradox was once discussed by St. Jerome in a sermon.
The Indian grammarian-philosopher Bhartrhari (late fifth century AD) was well aware of a liar paradox which he formulated as "everything I am saying is false" (sarvam mithyā bravīmi). He analyzes this statement together with the paradox of "unsignifiability" and explores the boundary between statements that are unproblematic in daily life and paradoxes.
There was discussion of the liar paradox in early Islamic tradition for at least five centuries, starting from late 9th century, and apparently without being influenced by any other tradition. Naṣīr al-Dīn al-Ṭūsī could have been the first logician to identify the liar paradox as self-referential.
The problem of the liar paradox is that it seems to show that common beliefs about truth and falsity actually lead to a contradiction. Sentences can be constructed that cannot consistently be assigned a truth value even though they are completely in accord with grammar and semantic rules.
The simplest version of the paradox is the sentence:

(A) This statement is false.
If (A) is true, then "This statement is false" is true. Therefore, (A) must be false. The hypothesis that (A) is true leads to the conclusion that (A) is false, a contradiction.
If (A) is false, then "This statement is false" is false. Therefore, (A) must be true. The hypothesis that (A) is false leads to the conclusion that (A) is true, another contradiction. Either way, (A) is both true and false, which is a paradox.
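The case analysis above can also be checked mechanically. The sketch below (an illustration, not part of the classical literature) encodes the liar's self-description as the constraint v == (not v) and enumerates both classical truth values, confirming that neither is consistent:

```python
# The liar sentence (A) says of itself "this statement is false", so under
# classical bivalent semantics its truth value v would have to satisfy
# v == (not v). Enumerate both candidates and keep the consistent ones.
def consistent_values():
    return [v for v in (True, False) if v == (not v)]

print(consistent_values())  # [] — no classical truth value is consistent
```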
However, that the liar sentence can be shown to be true if it is false and false if it is true has led some to conclude that it is "neither true nor false". This response to the paradox is, in effect, the rejection of the claim that every statement has to be either true or false, also known as the principle of bivalence, a concept related to the law of the excluded middle.
The proposal that the statement is neither true nor false has given rise to the following, strengthened version of the paradox:

(B) This statement is not true.
If (B) is neither true nor false, then it must be not true. Since this is what (B) itself states, it means that (B) must be true. Since initially (B) was not true and is now true, another paradox arises.
Another reaction to the paradox of (A) is to posit, as Graham Priest has, that the statement is both true and false. Nevertheless, even Priest's analysis is susceptible to the following version of the liar:

(C) This statement is only false.
If (C) is both true and false, then (C) is only false. But then, it is not true. Since initially (C) was true and is now not true, it is a paradox. However, it has been argued that by adopting a two-valued relational semantics (as opposed to functional semantics), the dialetheic approach can overcome this version of the Liar.
There are also multi-sentence versions of the liar paradox. The following is the two-sentence version:

(D1) The following statement is true.
(D2) The preceding statement is false.
Assume (D1) is true. Then (D2) is true. This would mean that (D1) is false. Therefore, (D1) is both true and false.
Assume (D1) is false. Then (D2) is false. This would mean that (D1) is true. Thus (D1) is both true and false. Either way, (D1) is both true and false – the same paradox as (A) above.
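The same exhaustive check works for the two-sentence version. In this sketch (the encoding is assumed from the analysis above) (D1) asserts that (D2) is true and (D2) asserts that (D1) is false, and no assignment of classical truth values satisfies both constraints:

```python
from itertools import product

# (D1) asserts that (D2) is true: d1 == d2.
# (D2) asserts that (D1) is false: d2 == (not d1).
solutions = [(d1, d2)
             for d1, d2 in product((True, False), repeat=2)
             if d1 == d2 and d2 == (not d1)]
print(solutions)  # [] — the pair has no consistent valuation
```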
The multi-sentence version of the liar paradox generalizes to any circular sequence of such statements (wherein the last statement asserts the truth/falsity of the first statement), provided there are an odd number of statements asserting the falsity of their successor; the following is a three-sentence version, with each statement asserting the falsity of its successor:

(E1) The statement (E2) is false.
(E2) The statement (E3) is false.
(E3) The statement (E1) is false.
Assume (E1) is true. Then (E2) is false, which means (E3) is true, and hence (E1) is false, leading to a contradiction.
Assume (E1) is false. Then (E2) is true, which means (E3) is false, and hence (E1) is true. Either way, (E1) is both true and false – the same paradox as with (A) and (D1).
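The parity condition, an odd number of falsity-asserting statements, can likewise be verified by enumeration. In this sketch each of n statements asserts the falsity of its successor, with the last pointing back to the first; odd cycles admit no consistent valuation, while even cycles admit the two alternating ones:

```python
from itertools import product

def cycle_solutions(n):
    # v[i] == (not v[(i+1) % n]): statement i asserts that its successor is
    # false, and the last statement wraps around to the first.
    return [vs for vs in product((True, False), repeat=n)
            if all(vs[i] == (not vs[(i + 1) % n]) for i in range(n))]

print(len(cycle_solutions(3)))  # 0 — the three-sentence liar is paradoxical
print(len(cycle_solutions(4)))  # 2 — an even cycle has alternating solutions
```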
There are many other variants, and many complements, possible. In normal sentence construction, the simplest version of the complement is the sentence:

(F) This statement is true.
If F is assumed to bear a truth value, then it presents the problem of determining the object of that value. But, a simpler version is possible, by assuming that the single word 'true' bears a truth value. The analogue to the paradox is to assume that the single word 'false' likewise bears a truth value, namely that it is false. This reveals that the paradox can be reduced to the mental act of assuming that the very idea of fallacy bears a truth value, namely that the very idea of fallacy is false: an act of misrepresentation. So, the symmetrical version of the paradox can be built in the same way on the single word 'false'.
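Under the same enumeration used for the liar, the complement behaves differently: since "this statement is true" constrains its value v only by v == v, both classical values are consistent, so the sentence is underdetermined rather than contradictory (a sketch continuing the earlier encoding):

```python
# The truth-teller complement, "this statement is true", constrains its
# value v only by v == v, which every classical value satisfies.
truth_teller_values = [v for v in (True, False) if v == v]
print(truth_teller_values)  # [True, False] — consistent either way
```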
Alfred Tarski diagnosed the paradox as arising only in languages that are "semantically closed", by which he meant a language in which it is possible for one sentence to predicate truth (or falsehood) of another sentence in the same language (or even of itself). To avoid self-contradiction, it is necessary when discussing truth values to envision levels of languages, each of which can predicate truth (or falsehood) only of languages at a lower level. So, when one sentence refers to the truth-value of another, it is semantically higher. The sentence referred to is part of the "object language", while the referring sentence is considered to be a part of a "meta-language" with respect to the object language. It is legitimate for sentences in "languages" higher on the semantic hierarchy to refer to sentences lower in the "language" hierarchy, but not the other way around. This prevents a system from becoming self-referential.
However, this system is incomplete. One would like to be able to make statements such as "For every statement in level α of the hierarchy, there is a statement at level α+1 which asserts that the first statement is false." This is a true, meaningful statement about the hierarchy that Tarski defines, but it refers to statements at every level of the hierarchy, so it must be above every level of the hierarchy, and is therefore not possible within the hierarchy (although bounded versions of the sentence are possible).
Arthur Prior asserts that there is nothing paradoxical about the liar paradox. His claim (which he attributes to Charles Sanders Peirce and John Buridan) is that every statement includes an implicit assertion of its own truth. Thus, for example, the statement "It is true that two plus two equals four" contains no more information than the statement "two plus two equals four", because the phrase "it is true that..." is always implicitly there. And in the self-referential spirit of the Liar Paradox, the phrase "it is true that..." is equivalent to "this whole statement is true and ...".
Thus the following two statements are equivalent:
The latter is a simple contradiction of the form "A and not A", and hence is false. There is therefore no paradox because the claim that this two-conjunct Liar is false does not lead to a contradiction. Eugene Mills presents a similar answer.
Saul Kripke argued that whether a sentence is paradoxical or not can depend upon contingent facts. If the only thing Smith says about Jones is
"A majority of what Jones says about me is false."
and Jones says only these three things about Smith:
"Smith is a big spender."
"Smith is soft on crime."
"Everything Smith says about me is true."
If Smith really is a big spender but is "not" soft on crime, then both Smith's remark about Jones and Jones's last remark about Smith are paradoxical.
Kripke proposes a solution in the following manner. If a statement's truth value is ultimately tied up in some evaluable fact about the world, that statement is "grounded". If not, that statement is "ungrounded". Ungrounded statements do not have a truth value. Liar statements and liar-like statements are ungrounded, and therefore have no truth value.
Jon Barwise and John Etchemendy propose that the liar sentence (which they interpret as synonymous with the Strengthened Liar) is ambiguous. They base this conclusion on a distinction they make between a "denial" and a "negation". If the liar means, "It is not the case that this statement is true", then it is denying itself. If it means, "This statement is not true", then it is negating itself. They go on to argue, based on situation semantics, that the "denial liar" can be true without contradiction while the "negation liar" can be false without contradiction. Their 1987 book makes heavy use of non-well-founded set theory.
Graham Priest and other logicians, including J. C. Beall and Bradley Armour-Garb, have proposed that the liar sentence should be considered to be both true and false, a point of view known as dialetheism. Dialetheism is the view that there are true contradictions. Dialetheism raises its own problems. Chief among these is that since dialetheism recognizes the liar paradox, an intrinsic contradiction, as being true, it must discard the long-recognized principle of explosion, which asserts that any proposition can be deduced from a contradiction, unless the dialetheist is willing to accept trivialism – the view that "all" propositions are true. Since trivialism is an intuitively false view, dialetheists nearly always reject the explosion principle. Logics that reject it are called "paraconsistent".
Andrew Irvine has argued in favour of a non-cognitivist solution to the paradox, suggesting that some apparently well-formed sentences will turn out to be neither true nor false and that "formal criteria alone will inevitably prove insufficient" for resolving the paradox.
The Indian grammarian-philosopher Bhartrhari (late fifth century AD) dealt with paradoxes such as the liar in a section of one of the chapters of his magnum opus the Vākyapadīya. Although chronologically he precedes all modern treatments of the problem of the liar paradox, it has only very recently become possible for those who cannot read the original Sanskrit sources to confront his views and analyses with those of modern logicians and philosophers because sufficiently reliable editions and translations of his work have only started becoming available since the second half of the 20th century. Bhartrhari's solution fits into his general approach to language, thought and reality, which has been characterized by some as "relativistic", "non-committal" or "perspectivistic". With regard to the liar paradox ("sarvam mithyā bhavāmi" "everything I am saying is false") Bhartrhari identifies a hidden parameter which can change unproblematic situations in daily communication into a stubborn paradox. Bhartrhari's solution can be understood in terms of the solution proposed in 1992 by Julian Roberts: "Paradoxes consume themselves. But we can keep apart the warring sides of the contradiction by the simple expedient of temporal contextualisation: what is 'true' with respect to one point in time need not be so in another ... The overall force of the 'Austinian' argument is not merely that 'things change', but that rationality is essentially temporal in that we need time in order to reconcile and manage what would otherwise be mutually destructive states." According to Roberts's suggestion, it is the factor "time" which allows us to reconcile the separated "parts of the world" that play a crucial role in the solution of Barwise and Etchemendy. The capacity of time to prevent a direct confrontation of the two "parts of the world" is here external to the "liar".
In the light of Bhartrhari's analysis, however, the extension in time which separates two perspectives on the world or two "parts of the world" – the part before and the part after the function accomplishes its task – is inherent in any "function": also the function to signify which underlies each statement, including the "liar". The unsolvable paradox – a situation in which we have either contradiction ("virodha") or infinite regress ("anavasthā") – arises, in case of the liar and other paradoxes such as the unsignifiability paradox (Bhartrhari's paradox), when abstraction is made from this function ("vyāpāra") and its extension in time, by accepting a simultaneous, opposite function ("apara vyāpāra") undoing the previous one.
For a better understanding of the liar paradox, it is useful to write it down in a more formal way. If "this statement is false" is denoted by A and its truth value is being sought, it is necessary to find a condition that restricts the choice of possible truth values of A. Because A is self-referential it is possible to give the condition by an equation.
If some statement, B, is assumed to be false, one writes "B = false". The statement (C) that the statement B is false would be written as C = "B = false". Now, the liar paradox can be expressed as the statement A that asserts that A is false: A = "A = false".
This is an equation from which the truth value of A = "this statement is false" could hopefully be obtained. In the Boolean domain "A = false" is equivalent to "not A", and therefore the equation is not solvable. This is the motivation for the reinterpretation of A. The simplest logical approach to make the equation solvable is the dialetheistic approach, in which case the solution is A being both "true" and "false". Other resolutions mostly include some modifications of the equation; Arthur Prior claims that the equation should be A = "A = false and A = true", and therefore A is false. In computational verb logic, the liar paradox is extended to statements like, "I hear what he says; he says what I don't hear", where verb logic must be used to resolve the paradox.
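The two readings of the equation can be made concrete in a few lines of code. The model below is a toy illustration, with a set-of-classical-values representation loosely inspired by Priest's dialetheism (the representation and names are ours, not a standard formalization): in the Boolean domain A == (not A) has no solution, while admitting the "glut" value {true, false} yields exactly one.

```python
# Classical Boolean domain: A = "A = false" becomes A == (not A).
boolean_solutions = [a for a in (True, False) if a == (not a)]
print(boolean_solutions)  # [] -- the equation has no classical solution

# Toy dialetheist domain: a truth value is a nonempty set of classical
# values; {True, False} is the "glut" (both true and false).  Negation
# acts pointwise on the set, so the glut is its own negation.
def neg(v):
    return frozenset(not x for x in v)

values = [frozenset({True}), frozenset({False}), frozenset({True, False})]
glut_solutions = [v for v in values if v == neg(v)]
# Only the glut satisfies A = not A, matching the dialetheistic solution.
```

This mirrors the prose: reinterpreting what a "truth value" is turns an unsolvable equation into one with a (contradictory) solution.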
Gödel's incompleteness theorems are two fundamental theorems of mathematical logic which state inherent limitations of sufficiently powerful axiomatic systems for mathematics. The theorems were proven by Kurt Gödel in 1931, and are important in the philosophy of mathematics. Roughly speaking, in proving the first incompleteness theorem, Gödel used a modified version of the liar paradox, replacing "this sentence is false" with "this sentence is not provable", called the "Gödel sentence G". His proof showed that for any sufficiently powerful theory T, G is true, but not provable in T. The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.
To prove the first incompleteness theorem, Gödel represented statements by numbers. Then the theory at hand, which is assumed to prove certain facts about numbers, also proves facts about its own statements. Questions about the provability of statements are represented as questions about the properties of numbers, which would be decidable by the theory if it were complete. In these terms, the Gödel sentence states that no natural number exists with a certain, strange property. A number with this property would encode a proof of the inconsistency of the theory. If there were such a number then the theory would be inconsistent, contrary to the consistency hypothesis. So, under the assumption that the theory is consistent, there is no such number.
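The key device, representing statements by numbers, only requires an injective encoding. The sketch below uses a simple byte-based encoding rather than Gödel's actual prime-power scheme; the helper names are our own, chosen for illustration.

```python
def godel_number(s: str) -> int:
    """Toy 'Gödel numbering': injectively encode a string as a natural
    number.  A 0x01 sentinel byte preserves leading zero bytes; Gödel's
    own scheme used products of prime powers instead."""
    return int.from_bytes(b"\x01" + s.encode("utf-8"), "big")

def decode(n: int) -> str:
    width = (n.bit_length() + 7) // 8        # total bytes, sentinel included
    return n.to_bytes(width, "big")[1:].decode("utf-8")

stmt = "this sentence is not provable"
g = godel_number(stmt)
assert decode(g) == stmt
# A property of the statement is now a property of the number g, so a
# sufficiently strong theory of arithmetic can speak about its own sentences.
```

Any injective, mechanically decodable encoding works for this purpose; what matters for the theorem is that provability, once arithmetized this way, is expressible inside the theory itself.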
It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently by Gödel (when he was working on the proof of the incompleteness theorem) and by Alfred Tarski.
George Boolos has since sketched an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula.
The liar paradox is occasionally used in fiction to shut down artificial intelligences, which are presented as being unable to process the sentence. In the "Star Trek: The Original Series" episode "I, Mudd", the liar paradox is used by Captain Kirk and Harry Mudd to confuse and ultimately disable an android holding them captive. In the 1973 "Doctor Who" serial "The Green Death", the Doctor temporarily stumps the insane computer BOSS by asking it "If I were to tell you that the next thing I say would be true, but the last thing I said was a lie, would you believe me?" However, BOSS eventually decides the question is irrelevant and summons security.
In the 2011 videogame "Portal 2", GLaDOS attempts to use the "this sentence is false" paradox to defeat the naïve artificial intelligence Wheatley, but, lacking the intelligence to realize the statement is a paradox, he simply responds, "Um, true. I'll go with true. There, that was easy." and is unaffected, although the frankencubes around him do spark and go offline.
In the seventh episode of "Minecraft: Story Mode", titled "Access Denied", the main character Jesse and his friends are captured by a supercomputer named PAMA. After PAMA takes control of two of Jesse's friends, Jesse learns that PAMA stalls when processing a paradox, and uses one to confuse it and escape with his last friend. One of the paradoxes the player can make him say is the liar paradox.
In chapter 21 of Douglas Adams's "The Hitchhiker's Guide to the Galaxy", he describes a solitary old man inhabiting a small asteroid at the spatial coordinates where there should have been a whole planet dedicated to Biro (ballpoint pen) life forms. This old man repeatedly claimed that nothing was true, though he was later discovered to be lying.
Rollins Band's 1994 song "Liar" alluded to the paradox when the narrator ends the song by stating "I'll lie again and again and I'll keep lying, I promise".
Robert Earl Keen's song "The Road Goes On and On" alludes to the paradox. The song is widely believed to be written as part of Keen's feud with Toby Keith, who is presumably the "liar" Keen refers to.
https://en.wikipedia.org/wiki?curid=17967
Leon M. Lederman
Leon Max Lederman (July 15, 1922 – October 3, 2018) was an American experimental physicist who received the Wolf Prize in Physics in 1982, along with Martin Lewis Perl, for their research on quarks and leptons, and the Nobel Prize in Physics in 1988, along with Melvin Schwartz and Jack Steinberger, for their research on neutrinos. Lederman was Director Emeritus of Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois. He founded the Illinois Mathematics and Science Academy, in Aurora, Illinois in 1986, and was Resident Scholar Emeritus there from 2012 until his death in 2018.
An accomplished scientific writer, he became known for his 1993 book "The God Particle", which established the popularity of that term for the Higgs boson.
Lederman was born in New York City, New York, to Morris and Minna (Rosenberg) Lederman. His parents were Ukrainian-Jewish immigrants from Kiev and Odessa. Lederman graduated from James Monroe High School in the South Bronx, and received his bachelor's degree from the City College of New York in 1943.
He next enlisted in the United States Army during World War II, intending to become a physicist after his service. Following his discharge in 1946, he enrolled at Columbia University's graduate school, receiving his Ph.D. in 1951.
Lederman became a faculty member at Columbia University, and he was promoted to full professor in 1958 as Eugene Higgins Professor of Physics. In 1960, on leave from Columbia, he spent time at CERN in Geneva as a Ford Foundation Fellow. He took an extended leave of absence from Columbia in 1979 to become director of Fermilab. Resigning from Columbia (and retiring from Fermilab) in 1989, he then taught briefly at the University of Chicago. He then moved to the physics department of the Illinois Institute of Technology, where he served as the Pritzker Professor of Science. In 1992, Lederman served as President of the American Association for the Advancement of Science.
Unusually for a Nobel Prize-winning professor, Lederman took it upon himself to teach physics to non-physics majors at the University of Chicago.
Lederman served as President of the Board of Sponsors of the Bulletin of the Atomic Scientists, and at the time of his death was Chair Emeritus. He also served on the board of trustees for Science Service, now known as Society for Science & the Public, from 1989 to 1992, and was a member of the JASON defense advisory group. Lederman was also one of the main proponents of the "Physics First" movement. Also known as "Right-side Up Science" and "Biology Last," this movement seeks to rearrange the current high school science curriculum so that physics precedes chemistry and biology.
Lederman was an early supporter of Science Debate 2008, an initiative to get the then-candidates for president, Barack Obama and John McCain, to debate the nation's top science policy challenges. In October 2010, Lederman participated in the USA Science and Engineering Festival's Lunch with a Laureate program where middle and high school students engaged in an informal conversation with a Nobel Prize-winning scientist over a brown-bag lunch. Lederman was also a member of the USA Science and Engineering Festival's Advisory Board.
In 1956, Lederman worked on parity violation in weak interactions. R. L. Garwin, Leon Lederman, and R. Weinrich modified an existing cyclotron experiment, and they immediately verified the parity violation. They delayed publication of their results until after Wu's group was ready, and the two papers appeared back-to-back in the same physics journal. Among his achievements are the discovery of the muon neutrino in 1962 and the bottom quark in 1977. These helped establish his reputation as among the top particle physicists.
In 1977, a group of physicists, the E288 experiment team, led by Lederman announced that a particle with a mass of about 6.0 GeV was being produced by the Fermilab particle accelerator. After taking further data, the group discovered that this particle did not actually exist, and the "discovery" was named "Oops-Leon" as a pun on the original name and Lederman's first name.
As the director of Fermilab, Lederman was a prominent supporter of the Superconducting Super Collider project, which was endorsed around 1983, and was a major proponent and advocate throughout its lifetime. Also at Fermilab, he oversaw the construction of the Tevatron, for decades the world's highest-energy particle collider. Lederman later wrote his 1993 popular science book "The God Particle" – which sought to promote awareness of the significance of such a project – in the context of the project's last years and the changing political climate of the 1990s. The increasingly moribund project was finally shelved that same year after some $2 billion of expenditures. In "The God Particle" he wrote, "The history of atomism is one of reductionism – the effort to reduce all the operations of nature to a small number of laws governing a small number of primordial objects" while stressing the importance of the Higgs boson.
In 1988, Lederman received the Nobel Prize for Physics along with Melvin Schwartz and Jack Steinberger "for the neutrino beam method and the demonstration of the doublet structure of the leptons through the discovery of the muon neutrino". Lederman also received the National Medal of Science (1965), the Elliott Cresson Medal for Physics (1976), the Wolf Prize for Physics (1982) and the Enrico Fermi Award (1992). In 1995, he received the Chicago History Museum "Making History Award" for Distinction in Science Medicine and Technology.
Lederman's best friend during his college years, Martin J. Klein, convinced him of "the splendors of physics during a long evening over many beers". He was known for his sense of humor in the physics community. On August 26, 2008 Lederman was video-recorded by a science focused organization called ScienCentral, on the street in a major U.S. city, answering questions from passersby. He answered questions such as "What is the strong force?" and "What happened before the Big Bang?".
Despite his Jewish background, Lederman was an atheist. He had three children with his first wife, Florence Gordon, and toward the end of his life lived with his second wife, Ellen (Carr), in Driggs, Idaho.
Lederman began to suffer from memory loss in 2011 and, after struggling with medical bills, he had to sell his Nobel medal for $765,000 to cover the costs in 2015. He died of complications from dementia on October 3, 2018, at a care facility in Rexburg, Idaho at the age of 96.
https://en.wikipedia.org/wiki?curid=17970
Louis the Pious
Louis the Pious (778 – 20 June 840), also called the Fair, and the Debonaire, was the King of the Franks and co-emperor with his father, Charlemagne, from 813. He was also King of Aquitaine from 781.
As the only surviving adult son of Charlemagne and Hildegard, he became the sole ruler of the Franks after his father's death in 814, a position which he held until his death, save for the period 833–34, during which he was deposed.
During his reign in Aquitaine, Louis was charged with the defence of the empire's southwestern frontier. He conquered Barcelona from the Muslims in 801 and asserted Frankish authority over Pamplona and the Basques south of the Pyrenees in 812. As emperor he included his adult sons, Lothair, Pepin, and Louis, in the government and sought to establish a suitable division of the realm among them. The first decade of his reign was characterised by several tragedies and embarrassments, notably the brutal treatment of his nephew Bernard of Italy, for which Louis atoned in a public act of self-debasement.
In the 830s his empire was torn by civil war between his sons, only exacerbated by Louis's attempts to include his son Charles by his second wife in the succession plans. Though his reign ended on a high note, with order largely restored to his empire, it was followed by three years of civil war. Louis is generally compared unfavourably to his father, though the problems he faced were of a distinctly different sort.
Louis was born while his father Charlemagne was on campaign in Spain, at the Carolingian villa of Cassinogilum, according to Einhard and the anonymous chronicler called Astronomus; the place is usually identified with Chasseneuil, near Poitiers. He was the third son of Charlemagne by his wife Hildegard.
Louis was crowned King of Aquitaine as a three-year-old child in 781. In the following year he was sent to Aquitaine accompanied by regents and a court. Charlemagne constituted this sub-kingdom in order to secure the border of his realm after the destructive war against the Aquitanians and Basques under Waifer (capitulated "c". 768) and later Hunald II, which culminated in the disastrous Battle of Roncesvalles (778). Charlemagne wanted his son Louis to grow up in the area where he was to reign. However, in 785, wary of the customs his son might be acquiring in Aquitaine, Charlemagne, who had remarried Fastrada after Hildegard's death in 783, sent for Louis. Louis presented himself in Saxony at the royal Council of Paderborn dressed in Basque costume along with other youths in the same garb, which may have made a good impression in Toulouse, since the Basques of Vasconia were a mainstay of the Aquitanian army.
In 794, Charlemagne gave four former Gallo-Roman villas to Louis, with the thought that he would take each in turn as winter residence: Doué-la-Fontaine in today's Anjou, Ebreuil in Allier, Angeac-Charente, and the disputed Cassinogilum. Charlemagne's intention was to see all his sons brought up as natives of their given territories, wearing the national costume of the region and ruling by the local customs. Thus the children were sent to their respective realms at a young age. The marches, peripheral principalities, played a vital role as bulwarks against exterior threats to the empire. Louis reigned over the Spanish March. In 797, Barcelona, the largest city of the "Marca", fell to the Franks when Zeid, its governor, rebelled against Córdoba and, failing, handed it to them. The Umayyad authority recaptured it in 799. However, Louis marched the entire army of his kingdom, including Gascons with their duke Sancho I of Gascony, Provençals under Leibulf, and Goths under Bera, over the Pyrenees and besieged it for two years, wintering there from 800 to 801, when it capitulated. King Louis was formally invested with his armour in 791 at the age of fourteen. However, the princes were not given independence from central authority, as Charlemagne wished to implant in them the concepts of empire and unity by sending them on remote military expeditions. Louis joined his brother Pippin in the Mezzogiorno campaign in Italy against Duke Grimoald of Benevento at least once.
Louis was one of Charlemagne's three legitimate sons to survive infancy. His twin brother, Lothair, died during infancy. According to the Frankish custom of partible inheritance, Louis had expected to share his inheritance with his brothers, Charles the Younger, King of Neustria, and Pepin, King of Italy. In the "Divisio Regnorum" of 806, Charlemagne had slated Charles the Younger as his successor as emperor and chief king, ruling over the Frankish heartland of Neustria and Austrasia, while giving Pepin the Iron Crown of Lombardy, which Charlemagne possessed by conquest. To Louis's kingdom of Aquitaine, he added Septimania, Provence, and part of Burgundy. However, Charlemagne's other legitimate sons died – Pepin in 810 and Charles in 811 – and Louis was crowned co-emperor with an already ailing Charlemagne in Aachen in 813. On his father's death in 814, he inherited the entire Carolingian Empire and all its possessions (with the sole exception of the kingdom of Italy, which, although within Louis's empire, Charlemagne had in 813 ordered that Bernard, Pepin's son, be made and called king of).
While at his villa of Doué-la-Fontaine, Anjou, Louis received news of his father's death. He rushed to Aachen and crowned himself emperor to shouts of "Vivat Imperator Ludovicus" by the attending nobles.
Upon arriving at the imperial court in Aachen in an atmosphere of suspicion and anxiety on both sides, Louis's first act was to purge the palace of what he considered undesirable. He destroyed the old Germanic pagan tokens and texts which had been collected by Charlemagne. He further exiled members of the court he deemed morally "dissolute", including some of his own relatives.
He quickly sent all of his many unmarried (half-)sisters and nieces to nunneries in order to avoid any possible entanglements from overly powerful brothers-in-law. Sparing his illegitimate half-brothers Drogo, Hugh and Theoderic, he forced his father's cousins, Adalard and Wala, to be tonsured, placing them into monastic exile at St-Philibert on the island of Noirmoutier and at Corbie, respectively, despite the latter's initial loyalty.
He made Bernard, margrave of Septimania, and Ebbo, Archbishop of Reims his chief counsellors. The latter, born a serf, was raised by Louis to that office, but betrayed him later. He retained some of his father's ministers, such as Elisachar, abbot of St. Maximin near Trier, and Hildebold, Archbishop of Cologne. Later he replaced Elisachar with Hildwin, abbot of many monasteries.
He also employed Benedict of Aniane (the Second Benedict), a Septimanian Visigoth, whom he made abbot of the newly established "Inden Monastery" at Aix-la-Chapelle and charged with the reform of the Frankish church. One of Benedict's primary reforms was to ensure that all religious houses in Louis's realm adhered to the Rule of Saint Benedict, named for its creator, Benedict of Nursia. From the start of his reign, Louis's coinage imitated his father Charlemagne's portrait, which gave it an image of imperial authority and prestige. In 816, Pope Stephen IV, who had succeeded Leo III, visited Reims and again crowned Louis on Sunday, 5 October.
On 9 April 817, Maundy Thursday, Louis and his court were crossing a wooden gallery from the cathedral to the palace in Aachen when the gallery collapsed, killing many. Louis, having barely survived and feeling the imminent danger of death, began planning for his succession. Three months later, with the approval of his Aachen court and the clergy, he issued an imperial decree of eighteen chapters, the "Ordinatio Imperii", that laid out plans for an orderly dynastic succession. The term "Ordinatio Imperii" is a modern (19th-century) creation. The decree is called "divisio imperii" in the only surviving contemporary manuscript.
In 815, Louis had already given his two eldest sons a share in the government, when he had sent his elder sons Lothair and Pepin to govern Bavaria and Aquitaine respectively, though without the royal titles. He proceeded to divide the empire among his three sons:
If one of the subordinate kings died, he was to be succeeded by his sons. If he died childless, Lothair would inherit his kingdom. In the event of Lothair dying without sons, one of Louis the Pious' younger sons would be chosen to replace him by "the people". Above all, the Empire would not be divided: the Emperor would rule supreme over the subordinate kings, whose obedience to him was mandatory.
With this settlement, Louis attempted to combine his sense for the Empire's unity, supported by the clergy, while at the same time providing positions for all of his sons. Instead of treating his sons equally in status and land, he elevated his first-born son Lothair above his younger brothers and gave him the largest part of the Empire as his share.
The decree failed to create order, as it omitted Bernard, who immediately began to conspire. When Louis later began to issue changes in favor of Charles the Bald, the son of his second wife Judith, his sons Lothair, Pepin and Louis refused to accept them. The rule that sons were favoured over brothers in the succession also remained untouched.
The "ordinatio imperii" of Aachen left Bernard in Italy in an uncertain and subordinate position as king of Italy, and he began plotting to declare independence. Upon hearing of this, Louis immediately directed his army towards Italy, and headed for Chalon-sur-Saône. Intimidated by the emperor's swift action, Bernard met his uncle at Chalon, under invitation, and surrendered. He was taken to Aachen by Louis, who there had him tried and condemned to death for treason. Louis had the sentence commuted to blinding, which was duly carried out; Bernard did not survive the ordeal, however, dying after two days of agony. Others also suffered: Theodulf of Orléans, in eclipse since the death of Charlemagne, was accused of having supported the rebellion, and was thrown into a monastic prison, dying soon afterwards; it was rumored that he had been poisoned. The fate of his nephew deeply marked Louis's conscience for the rest of his life.
In 822, as a deeply religious man, Louis performed penance for causing Bernard's death, at his palace of Attigny near Vouziers in the Ardennes, before Pope Paschal I and a council of clerics and nobles of the realm that had been convened for the reconciliation of Louis with his three younger half-brothers: Hugo, whom he soon made abbot of St-Quentin; Drogo, whom he soon made Bishop of Metz; and Theodoric. This act of contrition, partly in emulation of Theodosius I, had the effect of greatly reducing his prestige as a Frankish ruler, for he also recited a list of minor offences about which no secular ruler of the time would have taken any notice. He also made the egregious error of releasing Wala and Adalard from their monastic confinements, placing the former in a position of power in the court of Lothair and the latter in a position in his own house.
At the start of Louis's reign, the many tribes – Danes, Obotrites, Slovenes, Bretons, Basques – which inhabited his frontierlands were still in awe of the Frankish emperor's power and dared not stir up any trouble. In 816, however, the Sorbs rebelled and were quickly followed by Slavomir, chief of the Obotrites, who was captured and abandoned by his own people, being replaced by Ceadrag in 818. Soon, Ceadrag too had turned against the Franks and allied with the Danes, who were to become the greatest menace of the Franks in a short time.
A greater Slavic menace was gathering on the southeast. There, Ljudevit, duke of Pannonia, was harassing the border at the Drava and Sava rivers. The margrave of Friuli, Cadolah, was sent out against him, but he died on campaign and, in 820, his margravate was invaded by Slovenes. In 821, an alliance was made with Borna, duke of Dalmatia, and Ljudevit was brought to heel. In 824 several Slav tribes in the north-western parts of Bulgaria acknowledged Louis's suzerainty, and after he was reluctant to settle the matter peacefully with the Bulgarian ruler Omurtag, in 827 the Bulgarians attacked the Franks in Pannonia and regained their lands.
On the far southern edge of his great realm, Louis had to control the Lombard princes of Benevento whom Charlemagne had never subjugated. He extracted promises from Princes Grimoald IV and Sico, but to no effect.
On the southwestern frontier, problems commenced early when c. 812, Louis the Pious crossed the western Pyrenees 'to settle matters' in Pamplona. The expedition made its way back north, where it narrowly escaped an ambush attempt arranged by the Basques in the pass of Roncevaux thanks to the precautions he took, i.e. hostages. Séguin, duke of Gascony, was then deposed by Louis in 816, possibly for failing to suppress or collaborating with the Basque revolt south of the western Pyrenees, so sparking off a Basque uprising that was duly put down by the Frankish emperor in Dax. Seguin was replaced by Lupus III, who was dispossessed in 818 by the emperor. In 820 an assembly at Quierzy-sur-Oise decided to send an expedition against the Cordoban caliphate (827). The counts in charge of the army, Hugh, count of Tours, and Matfrid, count of Orléans, were slow in acting and the expedition came to naught.
In 818, as Louis was returning from a campaign to Brittany, he was greeted by news of the death of his wife, Ermengarde. Ermengarde was the daughter of Ingerman, the duke of Hesbaye. Louis had been close to his wife, who had been involved in policymaking. It was rumoured that she had played a part in her nephew's death and Louis himself believed her own death was divine retribution for that event. It took many months for his courtiers and advisors to convince him to remarry, but eventually he did, in 820, to Judith, daughter of Welf, count of Altdorf. In 823 Judith gave birth to a son, who was named Charles.
The birth of this son damaged the "Partition of Aachen", as Louis's attempts to provide for his fourth son met with stiff resistance from his older sons, and the last two decades of his reign were marked by civil war. At Worms in 829, Louis gave Alemannia to Charles, with the title of king or duke (historians differ on this), thus enraging his son and co-emperor Lothair, whose promised share was thereby diminished. An insurrection was soon at hand.
With the urging of the vengeful Wala and the cooperation of his brothers, Lothair accused Judith of having committed adultery with Bernard of Septimania, even suggesting that Bernard was the true father of Charles. Ebbo and Hilduin abandoned the emperor at that point, Bernard having risen to greater heights than either of them. Agobard, Archbishop of Lyon, and Jesse, bishop of Amiens, too, opposed the redivision of the empire and lent their episcopal prestige to the rebels.
In 830, at Wala's insistence that Bernard of Septimania was plotting against him, Pepin of Aquitaine led an army of Gascons, with the support of the Neustrian magnates, all the way to Paris. At Verberie, Louis the German joined him. At that time, the emperor returned from another campaign in Brittany to find his empire at war with itself. He marched as far as Compiègne, an ancient royal town, before being surrounded by Pepin's forces and captured. Judith was incarcerated at Poitiers and Bernard fled to Barcelona.
Then Lothair finally set out with a large Lombard army, but Louis had promised his sons Louis the German and Pepin of Aquitaine greater shares of the inheritance, prompting them to shift loyalties in favour of their father. When Lothair tried to call a general council of the realm in Nijmegen, in the heart of Austrasia, the Austrasians and Rhinelanders came with a following of armed retainers, and the disloyal sons were forced to free their father and bow at his feet (831). Lothair was pardoned, but disgraced and banished to Italy.
Pepin returned to Aquitaine and Judith – after being forced to humiliate herself with a solemn oath of innocence – to Louis's court. Only Wala was severely dealt with, making his way to a secluded monastery on the shores of Lake Geneva. Hilduin, abbot of Saint Denis, was exiled to Paderborn, and Elisachar and Matfrid were deprived of their honours north of the Alps, but they did not lose their freedom.
The next revolt occurred a mere two years later, in 832. The disaffected Pepin was summoned to his father's court, where he was so poorly received that he left against his father's orders. Immediately, fearing that Pepin would be stirred up to revolt by his nobles and desiring to reform his morals, Louis the Pious summoned all his forces to meet in Aquitaine in preparation for an uprising, but Louis the German garnered an army of Slav allies and conquered Swabia before the emperor could react. Once again the elder Louis divided his vast realm. At Jonac, he declared Charles king of Aquitaine, depriving Pepin of it (he was less harsh with the younger Louis), and restored the whole rest of the empire to Lothair, who was not yet involved in the civil war. Lothair was, however, interested in usurping his father's authority. His ministers had been in contact with Pepin and may have convinced him and Louis the German to rebel, promising Louis the German Alemannia, the kingdom of Charles.
Soon Lothair, with the support of Pope Gregory IV, whom he had confirmed in office without his father's support, joined the revolt in 833. While Louis was at Worms gathering a new force, Lothair marched north. Louis marched south. The armies met on the plains of the Rothfeld. There, Gregory met the emperor and may have tried to sow dissension amongst his ranks. Soon much of Louis's army had evaporated before his eyes, and he ordered his few remaining followers to go, because "it would be a pity if any man lost his life or limb on my account." The resigned emperor was taken to Saint-Médard de Soissons, his son Charles to Prüm, and the queen to Tortona. The despicable show of disloyalty and disingenuousness earned the site the name Field of Lies, or Lügenfeld, or Campus Mendacii, "ubi plurimorum fidelitas exstincta est".
On 13 November 833, Ebbo, with Agobard of Lyon, presided over a synod at the Church of Saint Medard in Soissons which saw Louis undertake public penance for the second time in his reign. The penitential ritual began when Louis arrived at the church and confessed multiple times to the crimes levelled against him. The alleged crimes were both old and recent, including oath breaking, violation of the public peace and inability to control his adulterous wife, Judith of Bavaria. Afterwards, he threw his sword belt at the base of the altar and received judgement through the imposition of the hands of the bishops. Louis was to live the rest of his life as a penitent, never to hold office again. The penance divided the aristocracy. The anonymous biographer of the Vita Hludovici criticized the whole affair on the basis that God does not judge twice for sins committed and confessed. Lothair's allies were generously compensated: Ebbo himself received the monastery of St Vaast, whilst Pepin was allowed to keep the lands reclaimed from his father.
Men like Rabanus Maurus, Louis' younger half-brothers Drogo and Hugh, and Emma, Judith's sister and Louis the German's new wife, worked on the younger Louis to make peace with his father, for the sake of unity of the empire. The humiliation to which Louis was then subjected at Notre Dame in Compiègne turned the loyal barons of Austrasia and Saxony against Lothair, and the usurper fled to Burgundy, skirmishing with loyalists near Chalon-sur-Saône. Louis was restored the next year, on 1 March 834.
On Lothair's return to Italy, Wala, Jesse, and Matfrid, formerly count of Orléans, died of a pestilence. On 2 February 835, at the palace of Thionville, Louis presided over a general council to deal with the events of the previous year. At this council, known as the Synod of Thionville, Louis was reinvested with his ancestral garb and the crown, symbols of Carolingian rulership. Furthermore, the penance of 833 was officially reversed and Archbishop Ebbo officially resigned after confessing to a capital crime, whilst Agobard of Lyon and Bartholomew, Archbishop of Narbonne, were also deposed. Later that year Lothair fell ill; once again events turned in Louis's favour.
In 836, however, the family made peace and Louis restored Pepin and Louis, deprived Lothair of all save Italy, and gave it to Charles in a new division, given at the diet of Crémieu. At about that time, the Vikings terrorized and sacked Utrecht and Antwerp. In 837, they went up the Rhine as far as Nijmegen, and their king, Rorik, demanded the weregild of some of his followers killed on previous expeditions before Louis the Pious mustered a massive force and marched against them. They fled, but it would not be the last time they harried the northern coasts. In 838, they even claimed sovereignty over Frisia, but a treaty was confirmed between them and the Franks in 839. Louis the Pious ordered the construction of a North Sea fleet and the sending of "missi dominici" into Frisia to establish Frankish sovereignty there.
In 837, Louis crowned Charles king over all of Alemannia and Burgundy and gave him a portion of his brother Louis' land. Louis the German promptly rose in revolt, and the emperor redivided his realm again at Quierzy-sur-Oise, giving all of the young king of Bavaria's lands, save Bavaria itself, to Charles. Emperor Louis did not stop there, however. His devotion to Charles knew no bounds. When Pepin died in 838, Louis declared Charles the new king of Aquitaine. The nobles, however, elected Pepin's son Pepin II. When Louis threatened invasion, the third great civil war of his reign broke out. In the spring of 839, Louis the German invaded Swabia, Pepin II and his Gascon subjects fought all the way to the Loire, and the Danes returned to ravage the Frisian coast (sacking Dorestad for a second time).
Lothair, for the first time in a long time, allied with his father and pledged support at Worms in exchange for a redivision of the inheritance. At a final "placitum" held at Worms on 20 May, Louis gave Bavaria to Louis the German and disinherited Pepin II, leaving the entire remainder of the empire to be divided roughly into an eastern part and a western. Lothair was given the choice of which partition he would inherit and he chose the eastern, including Italy, leaving the western for Charles. The emperor quickly subjugated Aquitaine and had Charles recognised by the nobles and clergy at Clermont-en-Auvergne in 840. Louis then, in a final flash of glory, rushed into Bavaria and forced the younger Louis into the Ostmark. The empire now settled as he had declared it at Worms, he returned in July to Frankfurt am Main, where he disbanded the army. The final civil war of his reign was over.
Louis fell ill soon after his final victorious campaigns and retreated to his summer hunting lodge on an island in the Rhine near his palace at Ingelheim. He died on 20 June 840 in the presence of many bishops and clerics and in the arms of his half-brother Drogo as he pardoned his son Louis, proclaimed Lothair emperor and commended the absent Charles and Judith to his protection.
Soon dispute plunged the surviving brothers into yet another civil war. It lasted until 843, with the signing of the Treaty of Verdun, which settled the division of the empire into three sovereign entities. West Francia and East Francia became the kernels of modern France and Germany respectively. Middle Francia, which included Burgundy, the Low Countries and northern Italy among other regions, proved short-lived: it lasted only until 855 and was later reorganized as Lotharingia. The dispute over the kingship of Aquitaine was not fully settled until 860.
Louis was buried in the Abbey of Saint-Arnould in Metz.
By his first wife, Ermengarde of Hesbaye (married c. 794), he had three sons and three daughters:
By his second wife, Judith of Bavaria, he had a daughter and a son:
By Theodelinde of Sens, he had two illegitimate children:
https://en.wikipedia.org/wiki?curid=17972
Irgun
The Irgun (full title: "HaIrgun HaTsvai HaLeumi B'Eretz Yisrael", lit. "The National Military Organization in the Land of Israel") was a Zionist paramilitary organization that operated in Mandate Palestine between 1931 and 1948. The organization is also referred to as Etzel, an acronym of its Hebrew initials, or by the abbreviation IZL. It was an offshoot of the older and larger Jewish paramilitary organization Haganah (Hebrew for "Defence"). When the group broke from the Haganah it became known as the "Haganah Bet" (literally "Defense 'B'" or "Second Defense"), or alternatively as "haHaganah haLeumit" or "Hama'amad". Irgun members were absorbed into the Israel Defense Forces at the start of the 1948 Arab–Israeli war.
The Irgun policy was based on what was then called Revisionist Zionism founded by Ze'ev Jabotinsky. According to Howard Sachar, "The policy of the new organization was based squarely on Jabotinsky's teachings: every Jew had the right to enter Palestine; only active retaliation would deter the Arabs; only Jewish armed force would ensure the Jewish state".
Two of the operations for which the Irgun is best known are the bombing of the King David Hotel in Jerusalem on 22 July 1946 and the Deir Yassin massacre, carried out together with Lehi on 9 April 1948.
The Irgun has been viewed as a terrorist organization or organization which carried out terrorist acts. Specifically the organization "committed acts of terrorism and assassination against the British, whom it regarded as illegal occupiers, and it was also violently anti-Arab" according to the Encyclopædia Britannica. In particular the Irgun was described as a terrorist organization by the United Nations, British, and United States governments; in media such as "The New York Times" newspaper; as well as by the Anglo-American Committee of Inquiry, the 1946 Zionist Congress and the Jewish Agency. However, academics such as Bruce Hoffman and Max Abrahms have written that the Irgun went to considerable lengths to avoid harming civilians, such as issuing pre-attack warnings; according to Hoffman, Irgun leadership urged "targeting the physical manifestations of British rule while avoiding the deliberate infliction of bloodshed." Irgun's tactics appealed to many Jews who believed that any action taken in the cause of the creation of a Jewish state was justified, including terrorism.
The Irgun was a political predecessor to Israel's right-wing "Herut" (or "Freedom") party, which in turn evolved into today's Likud party. Likud has led or been part of most Israeli governments since 1977.
Members of the Irgun came mostly from Betar and from the Revisionist Party both in Palestine and abroad. The Revisionist Movement made up a popular backing for the underground organization. Ze'ev Jabotinsky, founder of Revisionist Zionism, commanded the organization until he died in 1940. He formulated the general realm of operation, regarding "Restraint" and the end thereof, and was the inspiration for the organization overall. An additional major source of ideological inspiration was the poetry of Uri Zvi Greenberg. The symbol of the organization, with the motto רק כך (only thus), underneath a hand holding a rifle in the foreground of a map showing both Mandatory Palestine and the Emirate of Transjordan (at the time, both were administered under the terms of the British Mandate for Palestine), implied that force was the only way to "liberate the homeland."
The number of members of the Irgun varied from a few hundred to a few thousand. Most of its members were people who joined the organization's command, under which they carried out various operations and filled positions, largely in opposition to British law. Most of them were "ordinary" people, who held regular jobs, and only a few dozen worked full-time in the Irgun.
The Irgun disagreed with the policy of the Yishuv and with the World Zionist Organization, both on strategy and basic ideology and on public relations and military tactics, such as the use of armed force to accomplish Zionist ends, operations against the Arabs during the riots, and relations with the British mandatory government. The Irgun therefore tended to ignore the decisions made by the Zionist leadership and the Yishuv's institutions. As a result, the elected bodies refused to recognize the independent organization, and for most of its existence it was regarded as irresponsible and its actions as deserving to be thwarted. Accordingly, the Irgun accompanied its armed operations with public-relations campaigns aiming to convince the public of the Irgun's way and of the problems with the official political leadership of the Yishuv. The Irgun put out numerous advertisements and an underground newspaper, and even ran the first independent Hebrew radio station – Kol Zion HaLochemet.
As members of an underground armed organization, Irgun personnel did not normally call Irgun by its name, but rather used other names. In the first years of its existence it was known primarily as "Ha-Haganah Leumit"' (The National Defense), and also by names such as "Haganah Bet" ("Second Defense"), "Irgun Bet" ("Second Irgun"), the "Parallel Organization" and the "Rightwing Organization". Later on it became most widely known as המעמד (the Stand). The anthem adopted by the Irgun was "Anonymous Soldiers", written by Avraham (Yair) Stern who was at the time a commander in the Irgun. Later on Stern defected from the Irgun and founded Lehi, and the song became the anthem of the Lehi. The Irgun's new anthem then became the third verse of the "Betar Song", by Ze'ev Jabotinsky.
The Irgun gradually evolved from its humble origins into a serious and well-organized paramilitary organization. The movement developed a hierarchy of ranks and a sophisticated command-structure, and came to demand serious military training and strict discipline from its members. It developed clandestine networks of hidden arms-caches and weapons-production workshops, safe-houses, and training camps, along with a secret printing facility for propaganda posters.
The ranks of the Irgun were (in ascending order):
The Irgun was led by a High Command, which set policy and gave orders. Directly underneath it was a General Staff, which oversaw the activities of the Irgun. The General Staff was divided into a military and a support staff. The military staff was divided into operational units that oversaw operations and support units in charge of planning, instruction, weapons caches and manufacture, and first aid. The military and support staff never met jointly; they communicated through the High Command. Beneath the General Staff were six district commands: Jerusalem, Tel Aviv, Haifa-Galilee, Southern, Sharon, and Shomron, each led by a district commander. A local Irgun district unit was called a "Branch". A "brigade" in the Irgun was made up of three sections. A section was made up of two groups, at the head of each was a "Group Head", and a deputy. Eventually, various units were established, which answered to a "Center" or "Staff".
The head of the Irgun High Command was the overall commander of the organization, but the designation of his rank varied. During the revolt against the British, Irgun commander Menachem Begin and the entire High Command held the rank of "Gundar Rishon". His predecessors, however, had held their own ranks. A rank of Military Commander (Seren) was awarded to the Irgun commander Yaakov Meridor and a rank of High Commander (Aluf) to David Raziel. Until his death in 1940, Jabotinsky was known as the "Military Commander of the Etzel" or the "Ha-Matzbi Ha-Elyon" ("Supreme Commander").
Under the command of Menachem Begin, the Irgun was divided into different corps:
The Irgun's commanders planned for it to have a regular combat force, a reserve, and shock units, but in practice there were not enough personnel for a reserve or for a shock force.
The Irgun emphasized that its fighters be highly disciplined. Strict drill exercises were carried out at ceremonies at different times, and strict attention was given to discipline, formal ceremonies and military relationships between the various ranks. The Irgun put out professional publications on combat doctrine, weaponry, leadership, drill exercises, etc. Among these publications were three books written by David Raziel, who had studied military history, techniques, and strategy:
A British analysis noted that the Irgun's discipline was "as strict as any army in the world."
The Irgun operated a sophisticated recruitment and military-training regime. Those wishing to join had to find and make contact with a member, meaning only those who personally knew a member or were persistent could find their way in. Once contact had been established, a meeting was set up with the three-member selection committee at a safe-house, where the recruit was interviewed in a darkened room, with the committee either positioned behind a screen, or with a flashlight shone into the recruit's eyes. The interviewers asked basic biographical questions, and then asked a series of questions designed to weed out romantics and adventurers and those who had not seriously contemplated the potential sacrifices. Those selected attended a four-month series of indoctrination seminars in groups of five to ten, where they were taught the Irgun's ideology and the code of conduct it expected of its members. These seminars also had another purpose - to weed out the impatient and those of flawed purpose who had gotten past the selection interview. Then, members were introduced to other members, were taught the locations of safe-houses, and given military training. Irgun recruits trained with firearms, hand grenades, and were taught how to conduct combined attacks on targets. Arms handling and tactics courses were given in clandestine training camps, while practice shooting took place in the desert or by the sea. Eventually, separate training camps were established for heavy-weapons training. The most rigorous course was the explosives course for bomb-makers, which lasted a year. The British authorities believed that some Irgun members enlisted in the Jewish section of the Palestine Police Force for a year as part of their training, during which they also passed intelligence. 
In addition to the Irgun's sophisticated training program, many Irgun members were veterans of the Haganah (including the Palmach), the British Armed Forces, and Jewish partisan groups that had waged guerrilla warfare in Nazi-occupied Europe, thus bringing significant military training and combat experience into the organization. The Irgun also operated a course for its intelligence operatives, in which recruits were taught espionage, cryptography, and analysis techniques.
Of the Irgun's members, almost all were part-time members. They were expected to maintain their civilian jobs, dividing their time between civilian life and underground activities. There were never more than 40 full-time members, who were given a small expense stipend on which to live. Upon joining, every member received an underground name. The Irgun's members were divided into cells, and worked with the members of their own cells. The identities of Irgun members in other cells were withheld. This ensured that an Irgun member taken prisoner could betray no more than a few comrades.
In addition to the Irgun's members in Palestine, underground Irgun cells composed of local Jews were established in Europe following World War II. An Irgun cell was also established in Shanghai, home to many European-Jewish refugees. The Irgun also set up a Swiss bank account. Eli Tavin, the former head of Irgun intelligence, was appointed commander of the Irgun abroad.
In November 1947, the Jewish insurgency came to an end as the UN approved the partition of Palestine; the British had announced their intention to withdraw the previous month. As the British left and the 1947–48 Civil War in Mandatory Palestine got underway, the Irgun came out of the underground and began to function as a standing army rather than an underground organization. It began openly recruiting, training, and raising funds, and established bases, including training facilities. It also introduced field communications and created a medical unit and supply service.
Until World War II the group armed itself with weapons purchased in Europe, primarily Italy and Poland, and smuggled to Palestine. The Irgun also established workshops that manufactured spare parts and attachments for the weapons. Also manufactured were land mines and simple hand grenades. Another way in which the Irgun armed itself was theft of weapons from the British Police and military.
The Irgun's first steps were taken in the aftermath of the Riots of 1929. In the Jerusalem branch of the Haganah there were feelings of disappointment and internal unrest directed at the leadership of the movement and the Histadrut (at that time the organization running the Haganah). These feelings stemmed from the view that the Haganah was not adequately defending Jewish interests in the region. Likewise, critics of the leadership spoke out against alleged failures in arming the movement, in its readiness, and in its policy of restraint and not fighting back. On April 10, 1931, commanders and equipment managers announced that they refused to return weapons to the Haganah that had been issued to them earlier, prior to the Nebi Musa holiday. These weapons were later returned by the commander of the Jerusalem branch, Avraham Tehomi, a.k.a. "Gideon". However, the commanders who decided to rebel against the leadership of the Haganah relayed a message regarding their resignations to the Vaad Leumi, and this schism created a new independent movement.
The leader of the new underground movement was Avraham Tehomi, alongside other founding members who were all senior commanders in the Haganah, members of Hapoel Hatzair and of the Histadrut. Also among them was Eliyahu Ben Horin, an activist in the Revisionist Party. This group was known as the "Odessan Gang", because they previously had been members of the "Haganah Ha'Atzmit" of Jewish Odessa. The new movement was named "Irgun Tsvai Leumi", ("National Military Organization") in order to emphasize its active nature in contrast to the Haganah. Moreover, the organization was founded with the desire to become a true military organization and not just a militia as the Haganah was at the time.
In the autumn of that year the Jerusalem group merged with other armed groups affiliated with Betar. The Betar groups' center of activity was in Tel Aviv, and they began their activity in 1928 with the establishment of "Officers and Instructors School of Betar". Students at this institution had broken away from the Haganah earlier, for political reasons, and the new group called itself the "National Defense", הגנה הלאומית. During the riots of 1929 Betar youth participated in the defense of Tel Aviv neighborhoods under the command of Yermiyahu Halperin, at the behest of the Tel Aviv city hall. After the riots the Tel Avivian group expanded, and was known as "The Right Wing Organization".
After the Tel Aviv expansion another branch was established in Haifa. Towards the end of 1932 the Haganah branch of Safed also defected and joined the Irgun, as did many members of the Maccabi sports association. At that time the movement's underground newsletter, "Ha'Metsudah" (the Fortress), also began publication, expressing the active trend of the movement. The Irgun also increased its numbers by expanding draft regiments of Betar – groups of volunteers, committed to two years of security and pioneer activities. These regiments were based in places from which new Irgun strongholds stemmed, including the settlements of Yesod HaMa'ala, Mishmar HaYarden, Rosh Pina, Metula and Nahariya in the north; in the center – Hadera, Binyamina, Herzliya, Netanya and Kfar Saba; and south of there – Rishon LeZion, Rehovot and Ness Ziona. Later on regiments were also active in the Old City of Jerusalem ("the Kotel Brigades") among others. Primary training centers were based in Ramat Gan, Qastina (by Kiryat Mal'akhi of today) and other places.
In 1933 there were some signs of unrest, seen by the incitement of the local Arab leadership to act against the authorities. The strong British response put down the disturbances quickly. During that time the Irgun operated in a similar manner to the Haganah and was a guarding organization. The two organizations cooperated in ways such as coordination of posts and even intelligence sharing.
Within the Irgun, Tehomi was the first to serve as "Head of the Headquarters" or "Chief Commander". Alongside Tehomi served the senior commanders, or "Headquarters" of the movement. As the organization grew, it was divided into district commands.
In August 1933 a "Supervisory Committee" for the Irgun was established, which included representatives from most of the Zionist political parties. The members of this committee were Meir Grossman (of the Hebrew State Party), Rabbi Meir Bar-Ilan (of the Mizrachi Party, either Immanuel Neumann or Yehoshua Supersky (of the General Zionists) and Ze'ev Jabotinsky or Eliyahu Ben Horin (of Hatzohar).
The Great Arab Revolt of 1936–1939 broke out on April 19, 1936, in protest against Jewish immigration to Palestine and with the aim of ending it. The riots took the form of attacks by Arab rioters ambushing main roads, bombing of roads and settlements, and vandalism of property and agriculture. In the beginning, the Irgun and the Haganah generally maintained a policy of restraint, apart from a few instances. Some expressed resentment at this policy, leading to internal unrest in the two organizations. The Irgun tended to retaliate more often, and sometimes Irgun members patrolled areas beyond their positions in order to encounter attackers ahead of time. However, there were differences of opinion regarding what to do within the Haganah as well. Due to the joining of many Betar Youth members, Jabotinsky (founder of Betar) had a great deal of influence over Irgun policy. Nevertheless, Jabotinsky was of the opinion that for moral reasons violent retaliation was not to be undertaken.
In November 1936 the Peel Commission was sent to inquire into the outbreak of the riots and propose a solution to end the Revolt. In early 1937 there were still some in the Yishuv who felt the commission would recommend a partition of Mandatory Palestine (the land west of the Jordan River), thus creating a Jewish state on part of the land. The Irgun leadership, as well as the "Supervisory Committee", held similar beliefs, as did some members of the Haganah and the Jewish Agency. This belief strengthened the policy of restraint and led to the position that there was no room for separate defense institutions in the future Jewish state. Tehomi was quoted as saying: "We stand before great events: a Jewish state and a Jewish army. There is a need for a single military force". This position intensified the differences of opinion regarding the policy of restraint, both within the Irgun and within the political camp aligned with the organization. The leadership committee of the Irgun supported a merger with the Haganah. On April 24, 1937 a referendum was held among Irgun members regarding its continued independent existence. David Raziel and Avraham (Yair) Stern came out publicly in support of the continued existence of the Irgun.
In April 1937 the Irgun split after the referendum. Approximately 1,500–2,000 people, about half of the Irgun's membership, including the senior command staff and regional committee members, along with most of the Irgun's weapons, returned to the Haganah, which at that time was under the Jewish Agency's leadership. The Supervisory Committee's control over the Irgun ended, and Jabotinsky assumed command. In the returnees' opinion, the Haganah's subordination to the national institutions under the Jewish Agency's leadership justified their return; furthermore, they no longer saw significant ideological differences between the movements. Those who remained in the Irgun were primarily young activists, mostly laypeople, who sided with the independent existence of the Irgun. In fact, most of those who remained were originally Betar people. Moshe Rosenberg estimated that approximately 1,800 members remained. In theory, the Irgun remained an organization not aligned with a political party, but in reality the supervisory committee was disbanded and the Irgun's continued ideological path was outlined according to Ze'ev Jabotinsky's school of thought and his decisions, until the movement eventually became Revisionist Zionism's military arm. One of the major changes in policy by Jabotinsky was the end of the policy of restraint.
On April 27, 1937 the Irgun founded a new headquarters, staffed by Moshe Rosenberg at the head, Avraham (Yair) Stern as secretary, David Raziel as head of the Jerusalem branch, Hanoch Kalai as commander of Haifa and Aharon Haichman as commander of Tel Aviv. On 20 Tammuz, (June 29) the day of Theodor Herzl's death, a ceremony was held in honor of the reorganization of the underground movement. For security purposes this ceremony was held at a construction site in Tel Aviv.
Ze'ev Jabotinsky placed Col. Robert Bitker at the head of the Irgun. Bitker had previously served as Betar commissioner in China and had military experience. A few months later, probably due to total incompatibility with the position, Jabotinsky replaced Bitker with Moshe Rosenberg. When the Peel Commission report was published a few months later, the Revisionist camp decided not to accept the commission's recommendations. Moreover, the organizations of Betar, Hatzohar and the Irgun began to increase their efforts to bring Jews to the Land of Israel illegally. This Aliyah was known as the עליית אף על פי "Af Al Pi (Nevertheless) Aliyah". In contrast to this position, the Jewish Agency began acting on behalf of the Zionist interest on the political front, and continued the policy of restraint. From this point onwards the differences between the Haganah and the Irgun were much more obvious.
According to Jabotinsky's "Evacuation Plan", which called for millions of European Jews to be brought to Palestine at once, the Irgun helped the illegal immigration of European Jews to the land of Israel. This was named by Jabotinsky the "National Sport". The most significant part of this immigration prior to World War II was carried out by the Revisionist camp, largely because the Yishuv institutions and the Jewish Agency shied away from such actions on grounds of cost and their belief that Britain would in the future allow widespread Jewish immigration.
The Irgun joined forces with Hatzohar and Betar in September 1937, when it assisted with the landing of a convoy of 54 Betar members at Tantura Beach (near Haifa). The Irgun was responsible for discreetly bringing the Olim, or Jewish immigrants, to the beaches and dispersing them among the various Jewish settlements. The Irgun then began participating in the organisation of the immigration enterprise itself and undertook the task of accompanying the ships, beginning with the "Draga", which arrived at the coast of British Palestine in September 1938. In August of the same year, an agreement was made between Ari Jabotinsky (the son of Ze'ev Jabotinsky), the Betar representative, and Hillel Kook, the Irgun representative, to coordinate the immigration (also known as Ha'apala). The agreement was reaffirmed at the "Paris Convention" in February 1939, which Ze'ev Jabotinsky and David Raziel attended. Afterwards, the "Aliyah Center" was founded, made up of representatives of Hatzohar, Betar, and the Irgun, thereby making the Irgun a full participant in the process.
The difficult conditions on the ships demanded a high level of discipline. The people on board the ships were often split into units, led by commanders. In addition to having a daily roll call and the distribution of food and water (usually very little of either), organized talks were held to provide information regarding the actual arrival in Palestine. One of the largest ships was the "Sakaria", with 2,300 passengers, which equalled about 0.5% of the Jewish population in Palestine. The first vessel arrived on April 13, 1937, and the last on February 13, 1940. All told, about 18,000 Jews immigrated to Palestine with the help of the Revisionist organizations and private initiatives by other Revisionists. Most were not caught by the British.
Irgun members continued to defend settlements, but at the same time began attacks on Arab villages, thus ending the policy of restraint. The attacks were intended to instill fear on the Arab side, so as to make the Arabs wish for peace and quiet. In March 1938, David Raziel published in the underground newspaper "By the Sword" an article that became foundational for the Irgun as a whole, in which he coined the term "Active Defense":
The first attacks began around April 1936, and by the end of World War II, more than 250 Arabs had been killed. Examples include:
During 1936, Irgun members carried out approximately ten attacks.
Throughout 1937 the Irgun continued this line of operation.
At that time, however, these acts were not yet part of a formulated Irgun policy. Not all of the aforementioned operations received a commander's approval, and Jabotinsky was not in favor of such actions at the time; he still hoped to establish a Jewish force that could operate in the open rather than underground. However, what the Irgun saw as the failure of the Peel Commission, together with the renewal of Arab violence, caused it to rethink its official policy.
14 November 1937 was a watershed in Irgun activity. From that date, the Irgun increased its reprisals. Following an increase in the number of attacks aimed at Jews, including the killing of five kibbutz members near Kiryat Anavim (today kibbutz Ma'ale HaHamisha), the Irgun undertook a series of attacks in various places in Jerusalem, killing five Arabs; operations were also undertaken in Haifa (shooting at the Arab-populated Wadi Nisnas neighborhood) and in Herzliya. The date, on which the operations killed ten Arabs in all, is known as the day the policy of restraint (Havlagah) ended, or as Black Sunday. From then on the organization, with the approval of Jabotinsky and Headquarters, fully adopted the policy of "active defense".
The British responded with the arrest of Betar and Hatzohar members as suspected members of the Irgun. Military courts were empowered to act under the "Time of Emergency Regulations" and even to sentence people to death. One such case was that of Yehezkel Altman, a guard in a Betar battalion in the Nahalat Yizchak neighborhood of Tel Aviv, who shot at an Arab bus without his commanders' knowledge, in response to a shooting at Jewish vehicles on the Tel Aviv–Jerusalem road the day before. He turned himself in and was sentenced to death, a sentence later commuted to life imprisonment.
Despite the arrests, Irgun members continued fighting. Jabotinsky lent his moral support to these activities. In a letter to Moshe Rosenberg on 18 March 1938 he wrote:
Although the Irgun continued activities such as these, under Rosenberg's orders they were greatly curtailed. Moreover, fearing the British threat of the death sentence for anyone found carrying a weapon, the organization suspended all operations for eight months. Opposition to this policy, however, gradually increased. In April 1938, responding to the killing of six Jews, Betar members from the Rosh Pina Brigade undertook a reprisal mission without their commander's consent, as described by historian Avi Shlaim:
Although the incident ended without casualties, the three participants were caught, and one of them, Shlomo Ben-Yosef, was sentenced to death. Demonstrations around the country, as well as pressure from institutions and figures such as Dr. Chaim Weizmann and the Chief Rabbi of Mandatory Palestine, Yitzhak HaLevi Herzog, did not lead to a reduction of his sentence. Among Shlomo Ben-Yosef's writings in Hebrew, the following was later found:
On 29 June 1938 he was executed, and was the first of the Olei Hagardom. The Irgun revered him after his death and many regarded him as an example.
In light of this, and due to the anger of the Irgun leadership over the decision to adopt a policy of restraint until that point, Jabotinsky relieved Rosenberg of his post and replaced him with David Raziel, who proved to be the most prominent Irgun commander until Menachem Begin. Jabotinsky simultaneously instructed the Irgun to end its policy of restraint, leading to armed offensive operations until the end of the Arab Revolt in 1939. In this time, the Irgun mounted about 40 operations against Arabs and Arab villages, for instance:
This action led the British Parliament to discuss the disturbances in Palestine. On 23 February 1939 the Secretary of State for the Colonies, Malcolm MacDonald, revealed the British intention to cancel the mandate and establish a state that would preserve Arab rights. This caused a wave of riots and attacks by Arabs against Jews, and the Irgun responded four days later with a series of attacks on Arab buses and other targets. The British used military force against the Arab rioters, and in its latter stages the revolt by the Arab community in Palestine deteriorated into a series of internal gang wars.
At the same time, the Irgun also established itself in Europe, building underground cells that participated in organizing migration to Palestine. The cells were made up almost entirely of Betar members, and their primary activity was military training in preparation for emigration to Palestine. Ties formed with the Polish authorities led to courses in which Irgun commanders were trained by Polish officers in advanced military subjects such as guerrilla warfare, tactics and the laying of land mines. Avraham (Yair) Stern was notable among the cell organizers in Europe. In 1937 the Polish authorities began to deliver large amounts of weapons to the underground. According to Irgun activists, Poland supplied the organization with 25,000 rifles as well as other materiel and weapons, and by summer 1939 the Irgun's Warsaw warehouses held 5,000 rifles and 1,000 machine guns. The training and support provided by Poland would have allowed the organization to mobilize 30,000–40,000 men, but the transfer of handguns, rifles, explosives and ammunition stopped with the outbreak of World War II. The Irgun was also active in training pilots at the flight school in Lod, so that they could serve in an air force in the future war for independence.
Towards the end of 1938 the positions of the Irgun and the Haganah began to converge. Many abandoned the belief that the land would be divided and a Jewish state would soon exist. The Haganah founded a special operations unit, פו"מ (pronounced "poom"), which carried out reprisal attacks following Arab violence; these operations continued into 1939. Furthermore, opposition within the Yishuv to illegal immigration significantly decreased, and the Haganah began to bring Jews to Palestine using rented ships, as the Irgun had in the past.
The publication of the MacDonald White Paper of 1939 brought with it new edicts that the British presented as intended to lead to a more equitable settlement between Jews and Arabs. It was considered by some Jews, however, to damage the continued development of the Jewish community in Palestine: chief among its provisions were the prohibition on selling land to Jews and smaller quotas for Jewish immigration. The entire Yishuv was furious at the contents of the White Paper, and there were demonstrations against the "Treacherous Paper", which was seen as precluding the establishment of a Jewish homeland in Palestine.
Under the temporary command of Hanoch Kalai, the Irgun began sabotaging strategic infrastructure such as electricity facilities and radio and telephone lines. It also started publicizing its activity and goals through street announcements, newspapers, and the underground radio station Kol Zion HaLochemet. On August 26, 1939, the Irgun killed Ralph Cairns, a British police officer who, as head of the Jewish Department in the Palestine Police, had tortured a number of young underground members. Cairns and Ronald Barker, another British police officer, were killed by an Irgun IED.
The British increased their efforts against the Irgun. As a result, on August 31 the British police arrested members meeting in the Irgun headquarters. On the next day, September 1, 1939, World War II broke out.
Following the outbreak of war, Ze'ev Jabotinsky and the New Zionist Organization voiced their support for Britain and France. In mid-September 1939 Raziel was moved from his place of detention in Tzrifin. This, among other events, encouraged the Irgun to announce a cessation of its activities against the British so as not to hinder Britain's effort to fight "the Hebrew's greatest enemy in the world – German Nazism". This announcement ended with the hope that after the war a Hebrew state would be founded "within the historical borders of the liberated homeland". After this announcement Irgun, Betar and Hatzohar members, including Raziel and the Irgun leadership, were gradually released from detention. The Irgun did not rule out joining the British army and the Jewish Brigade. Irgun members did enlist in various British units. Irgun members also assisted British forces with intelligence in Romania, Bulgaria, Morocco and Tunisia. An Irgun unit also operated in Syria and Lebanon. David Raziel later died during one of these operations.
During the Holocaust, Betar members revolted numerous times against the Nazis in occupied Europe. The largest of these revolts was the Warsaw Ghetto Uprising, in which an armed underground organization fought, formed by Betar and Hatzoar and known as the "Żydowski Związek Wojskowy (ŻZW)" (Jewish Military Union). Despite its political origins, the ŻZW accepted members without regard to political affiliation, and had contacts established before the war with elements of the Polish military. Because of differences over objectives and strategy, the ŻZW was unable to form a common front with the mainstream ghetto fighters of the Żydowska Organizacja Bojowa, and fought independently under the military leadership of Paweł Frenkiel and the political leadership of Dawid Wdowiński.
There were instances of Betar members enlisted in the British military smuggling British weapons to the Irgun.
From 1939 onwards, an Irgun delegation in the United States worked for the creation of a Jewish army made up of Jewish refugees and Jews from Palestine, to fight alongside the Allied Forces. In July 1943 the "Emergency Committee to Save the Jewish People in Europe" was formed, and worked until the end of the war to rescue the Jews of Europe from the Nazis and to garner public support for a Jewish state. However, it was not until January 1944 that US President Franklin Roosevelt established the War Refugee Board, which achieved some success in saving European Jews.
Throughout this entire period, the British continued enforcing the White Paper's provisions, which included a ban on the sale of land, restrictions on Jewish immigration and increased vigilance against illegal immigration. Part of the reason why the British banned land sales (to anyone) was the confused state of the post Ottoman land registry; it was difficult to determine who actually owned the land that was for sale.
Within the ranks of the Irgun this created much disappointment and unrest, centred on disagreement with the leadership of the New Zionist Organization, David Raziel and the Irgun Headquarters. On June 18, 1939, Avraham (Yair) Stern and others of the leadership were released from prison, and a rift opened between them and the Irgun and Hatzohar leadership. The controversy centred on whether the underground movement should submit to public political leadership and whether it should fight the British. On his release from prison Raziel resigned from Headquarters. To his chagrin, senior Irgun members carried out independent operations, and some commanders even doubted his loyalty.
In his place, Stern was elected to the leadership. In the past, Stern had founded secret Irgun cells in Poland without Jabotinsky's knowledge, in opposition to his wishes. Furthermore, Stern was in favor of removing the Irgun from the authority of the New Zionist Organization, whose leadership urged Raziel to return to the command of the Irgun. He finally consented. Jabotinsky wrote to Raziel and to Stern, and these letters were distributed to the branches of the Irgun:
Stern was sent a telegram with an order to obey Raziel, who was reappointed. However, these events did not prevent the splitting of the organization. Suspicion and distrust were rampant among the members. Out of the Irgun a new organization was created on July 17, 1940, which was first named "The National Military Organization in Israel" (as opposed to the "National Military Organization in the Land of Israel") and later on changed its name to Lehi, an acronym for Lohamei Herut Israel, "Fighters for the Freedom of Israel", (לח"י – לוחמי חירות ישראל). Jabotinsky died in New York on August 4, 1940, yet this did not prevent the Lehi split. Following Jabotinsky's death, ties were formed between the Irgun and the New Zionist Organization. These ties would last until 1944, when the Irgun declared a revolt against the British.
The primary difference between the Irgun and the newly formed organization was its intention to fight the British in Palestine, regardless of their war against Germany. Later, additional operational and ideological differences developed that contradicted some of the Irgun's guiding principles. For example, the Lehi, unlike the Irgun, supported a population exchange with local Arabs.
The split damaged the Irgun both organizationally and in morale, and the death of Jabotinsky, its spiritual leader, deepened the blow. Together, these factors brought about a mass abandonment by members. The British took advantage of this weakness to gather intelligence and arrest Irgun activists. The new Irgun leadership, which included Meridor, Yerachmiel Ha'Levi, Rabbi Moshe Zvi Segal and others, used the forced hiatus in activity to rebuild the damaged organization. The period was also marked by greater cooperation between the Irgun and the Jewish Agency; however, David Ben-Gurion's uncompromising demand that the Irgun accept the Agency's command foiled any further cooperation.
In both the Irgun and the Haganah, more voices were heard opposing any cooperation with the British. Nevertheless, the Irgun carried out an operation in the service of Britain aimed at sabotaging pro-Nazi forces in Iraq, including a plan to assassinate Haj Amin al-Husayni; among the participants were Raziel and Yaakov Meridor. On May 20, 1941, during a Luftwaffe air raid on RAF Habbaniya near Baghdad, David Raziel, commander of the Irgun, was killed.
In late 1943 a joint Haganah – Irgun initiative was developed, to form a single fighting body, unaligned with any political party, by the name of עם לוחם ("Fighting Nation"). The new body's first plan was to kidnap the British High Commissioner of Palestine, Sir Harold MacMichael and take him to Cyprus. However, the Haganah leaked the planned operation and it was thwarted before it got off the ground. Nevertheless, at this stage the Irgun ceased its cooperation with the British. As Eliyahu Lankin tells in his book:
In 1943 the Polish II Corps, commanded by Władysław Anders, arrived in Palestine from Iraq. The British insisted that no Jewish units of the army be created. Eventually, many of the soldiers of Jewish origin that arrived with the army were released and allowed to stay in Palestine. One of them was Menachem Begin, whose arrival in Palestine created new-found expectations within the Irgun and Betar. Begin had served as head of the Betar movement in Poland, and was a respected leader. Yaakov Meridor, then the commander of the Irgun, raised the idea of appointing Begin to the post. In late 1943, when Begin accepted the position, a new leadership was formed. Meridor became Begin's deputy, and other members of the board were Aryeh Ben Eliezer, Eliyahu Lankin, and Shlomo Lev Ami.
On February 1, 1944 the Irgun put up posters all around the country, proclaiming a revolt against the British mandatory government. The posters began by saying that all of the Zionist movements stood by the Allied Forces and over 25,000 Jews had enlisted in the British military. The hope to establish a Jewish army had died. European Jewry was trapped and was being destroyed, yet Britain, for its part, did not allow any rescue missions. This part of the document ends with the following words:
The Irgun then declared that, for its part, the ceasefire was over and they were now at war with the British. It demanded the transfer of rule to a Jewish government, to implement ten policies. Among these were the mass evacuation of Jews from Europe, the signing of treaties with any state that recognized the Jewish state's sovereignty, including Britain, granting social justice to the state's residents, and full equality to the Arab population. The proclamation ended with:
The Irgun began the campaign from a position of weakness. At the start of the revolt it numbered only about 1,000 members, including some 200 fighters, and possessed about 4 submachine guns, 40 rifles, 60 pistols, 150 hand grenades, and 2,000 kilograms of explosives; its funds amounted to about £800.
The Irgun began a militant campaign against the symbols of government, in an attempt to harm the regime's operation as well as its reputation. The first attack came on February 12, 1944, against the government immigration offices, a symbol of the immigration laws, in the three largest cities: Jerusalem, Tel Aviv, and Haifa. The attacks went smoothly and ended with no casualties, as they took place on a Saturday night, when the buildings were empty. On February 27 the income tax offices in the same cities were bombed, also on a Saturday night; warnings had been posted near the buildings beforehand. On March 23 the national headquarters of the British police in the Russian Compound in Jerusalem was attacked, and part of it was blown up. These attacks in the first few months were sharply condemned by the organized leadership of the Yishuv and by the Jewish Agency, who saw them as dangerous provocations.
At the same time the Lehi also renewed its attacks against the British. The Irgun continued to attack police stations and headquarters, including the fortified Tegart fort at Latrun. One relatively complex operation was the takeover of the radio station in Ramallah on May 17, 1944.
One symbolic act by the Irgun took place before Yom Kippur of 1944. It posted notices around town warning British officers not to come to the Western Wall on Yom Kippur, and for the first time since the mandate began, no British police were present to prevent the Jews from the traditional shofar blowing at the end of the fast. After the fast that year the Irgun attacked four police stations in Arab settlements. To obtain weapons, the Irgun carried out "confiscation" operations: it robbed British armouries and smuggled the stolen weapons to its own hiding places. During this phase of activity the Irgun also cut all official ties with the New Zionist Organization, so as not to tie the political movement's fate to that of the underground organization.
Begin wrote in his memoirs, "The Revolt":
In October 1944 the British began expelling hundreds of arrested Irgun and Lehi members to detention camps in Africa. 251 detainees from Latrun were flown on thirteen planes, on October 19 to a camp in Asmara, Eritrea. Eleven additional transports were made. Throughout the period of their detention, the detainees often initiated rebellions and hunger strikes. Many escape attempts were made until July 1948 when the exiles were returned to Israel. While there were numerous successful escapes from the camp itself, only nine men actually made it back all the way. One noted success was that of Yaakov Meridor, who escaped nine times before finally reaching Europe in April 1948. These tribulations were the subject of his book "Long is the Path to Freedom: Chronicles of one of the Exiles".
On November 6, 1944, Lord Moyne, British Deputy Resident Minister of State in Cairo, was assassinated by Lehi members Eliyahu Hakim and Eliyahu Bet-Zuri. The act raised concerns within the Yishuv about how the British regime would respond to the underground's violence. The Jewish Agency therefore decided to launch a "Hunting Season", known as the "Saison" (from the French "la saison de chasse", the hunting season).
The Irgun's recuperation was noticeable when it began to renew its cooperation with the Lehi in May 1945, when it sabotaged oil pipelines, telephone lines and railroad bridges. All in all, over 1,000 members of the Irgun and Lehi were arrested and interned in British camps during the "Saison". Eventually the Hunting Season died out, and there was even talk of cooperation with the Haganah leading to the formation of the Jewish Resistance Movement.
Towards the end of July 1945 the Labour party was elected to power in Britain. The Yishuv leadership had high hopes that this would change the anti-Zionist policy the British had maintained until then. These hopes were quickly dashed when the government limited Jewish immigration with the intention that Jews would make up no more than one-third of the population of Mandatory Palestine (the land west of the Jordan River). This, along with stepped-up arrests and the pursuit of underground members and illegal immigration organizers, led to the formation of the Jewish Resistance Movement, a body that consolidated the armed resistance to the British of the Irgun, Lehi, and Haganah. For ten months the Irgun and the Lehi cooperated within it, carrying out nineteen attacks and defense operations; the Haganah and Palmach carried out ten such operations, and the Haganah also assisted in landing 13,000 illegal immigrants.
Tension between the underground movements and the British increased with the increase in operations. On April 23, 1946, an Irgun operation to seize weapons from the Tegart fort at Ramat Gan resulted in a firefight with the police in which an Arab constable and two Irgun fighters were killed, including one who jumped on an explosive device to save his comrades. A third fighter, Dov Gruner, was wounded and captured. He stood trial and was sentenced to death by hanging, refusing to sign a pardon request.
In 1946, British relations with the Yishuv worsened, building up to Operation Agatha of June 29. The authorities ignored the Anglo-American Committee of Inquiry's recommendation to allow 100,000 Jews into Palestine at once. Following the discovery of documents tying the Jewish Agency to the Jewish Resistance Movement, the Irgun was asked to speed up its plans for the King David Hotel bombing of July 22. The hotel housed the documents, the British Secretariat, the military command and a branch of the Criminal Investigation Division of the police. The Irgun later claimed to have sent a warning that was ignored; Palestinian and U.S. sources confirm that the Irgun issued numerous warnings for civilians to evacuate the hotel before the bombing. Ninety-one people were killed when a 350 kg bomb placed in the basement caused a large section of the hotel to collapse; only 13 of the dead were British soldiers.
The King David Hotel bombing and the arrest of Jewish Agency and other Yishuv leaders as part of Operation Agatha caused the Haganah to cease its armed activity against the British, and the Yishuv and Jewish Agency leaders were released from prison. From then until the end of the British mandate, resistance activities were led by the Irgun and Lehi. In early September 1946 the Irgun renewed its attacks against civil structures, railroads, communication lines and bridges. One operation was the attack on the train station in Jerusalem, in which Meir Feinstein was arrested; he later committed suicide while awaiting execution. According to the Irgun, such attacks were legitimate, since the trains primarily served the British for the redeployment of their forces. The Irgun also published leaflets, in three languages, warning the public not to use specific trains in danger of being attacked; for a while, the British stopped train traffic at night. In addition, the Irgun carried out repeated attacks on military and police traffic using disguised, electrically detonated roadside mines that an operator hiding nearby could set off as a vehicle passed. It raided military bases and police stations for arms (often disguised as British soldiers), launched bombing, shooting, and mortar attacks on military and police installations and checkpoints, and robbed banks for funds, having lost access to Haganah funding with the collapse of the Jewish Resistance Movement.
On October 31, 1946, in response to the British barring the entry of Jews into Palestine, the Irgun blew up the British Embassy in Rome, a center of British efforts to monitor and stop Jewish immigration. The Irgun also carried out a few other operations in Europe: a British troop train was derailed, and an attempt against another troop train failed. An attack on a British officers' club in Vienna took place in 1947, and attacks on another officers' club in Vienna and a sergeants' club in Germany took place in 1948.
In December 1946 a young Irgun member was sentenced to 18 years' imprisonment and 18 lashes for robbing a bank. The Irgun made good on its threat of retaliation: after the prisoner was flogged, Irgun members kidnapped British officers and flogged them in public. The operation, known as the "Night of the Beatings", brought an end to British punitive floggings. Taking these acts seriously, the British moved many British families in Palestine into the confines of military bases, and some returned home.
On February 14, 1947, Ernest Bevin announced that the Jews and Arabs would not agree on any British-proposed solution for the land, and that the issue must therefore be brought to the United Nations (UN) for a final decision. The Yishuv regarded the transfer of the issue to the UN as a British delaying tactic: while a UN inquiry commission was established and its ideas discussed, the Yishuv would weaken. The Mossad LeAliyah Bet, the organization for "Immigration B" (illegal immigration), increased the number of ships bringing in Jewish refugees. The British still strictly enforced the policy of limited Jewish immigration, and illegal immigrants were placed in detention camps in Cyprus, which increased the anger of the Jewish community towards the mandate government.
The Irgun stepped up its activity, and from February 19 until March 3 it attacked 18 British military camps, convoy routes, vehicles, and other facilities. The most notable of these attacks was the bombing of a British officers' club located in Goldsmith House in Jerusalem, inside a heavily guarded security zone. Covered by machine-gun fire, an Irgun assault team in a truck penetrated the zone and lobbed explosives into the building; thirteen people, including two officers, were killed. As a result, martial law was imposed over much of the country, enforced by approximately 20,000 British soldiers. Despite this, attacks continued throughout the martial law period, most notably an Irgun attack on the Royal Army Pay Corps base at the Schneller Orphanage, in which a British soldier was killed.
Throughout its struggle against the British, the Irgun sought to publicize its cause around the world. By humiliating the British, it attempted to focus global attention on Palestine, hoping that any British overreaction would be widely reported and thus result in more political pressure against the British. Begin described this strategy as turning Palestine into a "glass house". The Irgun also re-established many representative offices internationally, and by 1948 operated in 23 states. In these countries, the Irgun sometimes acted against the local British representatives or led public relations campaigns against Britain. According to Bruce Hoffman: "In an era long before the advent of 24/7 global news coverage and instantaneous satellite-transmitted broadcasts, the Irgun deliberately attempted to appeal to a worldwide audience far beyond the immediate confines of its local struggle, and beyond even the ruling regime's own homeland."
On April 16, 1947, Irgun members Dov Gruner, Yehiel Dresner, Eliezer Kashani, and Mordechai Alkahi were hanged in Acre Prison, while singing Hatikvah. On April 21 Meir Feinstein and Lehi member Moshe Barazani blew themselves up, using a smuggled grenade, hours before their scheduled hanging. And on May 4 one of the Irgun's largest operations took place – the raid on Acre Prison. The operation was carried out by 23 men, commanded by Dov Cohen – AKA "Shimshon", along with the help of the Irgun and Lehi prisoners inside the prison. The Irgun had informed them of the plan in advance and smuggled in explosives. After a hole was blasted in the prison wall, the 41 Irgun and Lehi members who had been chosen to escape then ran to the hole, blasting through inner prison gates with the smuggled explosives. Meanwhile, Irgun teams mined roads and launched a mortar attack on a nearby British Army camp to delay the arrival of responding British forces. Although the 41 escapees managed to get out of the prison and board the escape trucks, some were rapidly recaptured and nine of the escapees and attackers were killed. Five Irgun men in the attacking party were also captured. Overall, 27 of the 41 designated escapees managed to escape. Along with the underground movement members, other criminals – including 214 Arabs – also escaped. Of the five attackers who were caught, three of them – Avshalom Haviv, Meir Nakar, and Yaakov Weiss, were sentenced to death.
After the death sentences of the three were confirmed, the Irgun tried to save them by kidnapping hostages — British sergeants Clifford Martin and Mervyn Paice — in the streets of Netanya. British forces closed off and combed the area in search of the two, but did not find them. On July 29, 1947, in the afternoon, Meir Nakar, Avshalom Haviv, and Yaakov Weiss were executed. Approximately thirteen hours later the hostages were hanged in retaliation by the Irgun and their bodies, booby-trapped with an explosive, afterwards strung up from trees in woodlands south of Netanya. This action caused an outcry in Britain and was condemned both there and by Jewish leaders in Palestine.
This episode has been given as a major influence on the British decision to terminate the Mandate and leave Palestine. The United Nations Special Committee on Palestine (UNSCOP) was also influenced by this and other actions. At the same time another incident was developing – the events of the ship "Exodus 1947". The 4,500 Holocaust survivors on board were not allowed to enter Palestine. UNSCOP also covered the events. Some of its members were even present at Haifa port when the putative immigrants were forcefully removed from their ship (later found to have been rigged with an IED by some of its passengers) onto the deportation ships, and later commented that this strong image helped them press for an immediate solution for Jewish immigration and the question of Palestine.
Two weeks later, the House of Commons convened for a special debate on events in Palestine, and concluded that their soldiers should be withdrawn as soon as possible.
UNSCOP's conclusion was a unanimous decision to end the British mandate, and a majority decision to divide Mandatory Palestine (the land west of the Jordan River) between a Jewish state and an Arab state. During the UN's deliberations regarding the committee's recommendations the Irgun avoided initiating any attacks, so as not to influence the UN negatively on the idea of a Jewish state. On November 29 the UN General Assembly voted in favor of ending the mandate and establishing two states on the land. That very same day the Irgun and the Lehi renewed their attacks on British targets. The next day the local Arabs began attacking the Jewish community, thus beginning the first stage of the 1948 Palestine War. The first attacks on Jews were in Jewish neighborhoods of Jerusalem, in and around Jaffa, and in Bat Yam, Holon, and the Ha'Tikvah neighborhood in Tel Aviv.
In the autumn of 1947, the Irgun had approximately 4,000 members. The goal of the organization at that point was the conquest of the land between the Jordan River and the Mediterranean Sea for the future Jewish state and preventing Arab forces from driving out the Jewish community. The Irgun became almost an overt organization, establishing military bases in Ramat Gan and Petah Tikva. It began recruiting openly, thus significantly increasing in size. During the war the Irgun fought alongside the Lehi and the Haganah in the front against the Arab attacks. At first the Haganah maintained a defensive policy, as it had until then, but after the Convoy of 35 incident it completely abandoned its policy of restraint: "Distinguishing between individuals is no longer possible, for now – it is a war, and even the innocent shall not be absolved."
The Irgun also began carrying out reprisal missions, as it had under David Raziel's command. At the same time though, it published announcements calling on the Arabs to lay down their weapons and maintain a ceasefire:
However, the mutual attacks continued. The Irgun attacked the Arab villages of Tira near Haifa, Yehudiya ('Abassiya) in the center, and Shuafat by Jerusalem. The Irgun also attacked in the Wadi Rushmiya neighborhood in Haifa and Abu Kabir in Jaffa. On December 29 Irgun units arrived by boat at the Jaffa shore and a gunfight between them and Arab gangs ensued. The following day a bomb was thrown from a speeding Irgun car at a group of Arab men waiting to be hired for the day at the Haifa oil refinery, killing seven Arabs and injuring dozens. In response, some Arab workers attacked Jews in the area, killing 41. This sparked a Haganah response in Balad al-Sheykh, which resulted in the deaths of 60 civilians. The Irgun's goal in the fighting was to move the battles from Jewish populated areas to Arab populated areas. On January 1, 1948, the Irgun attacked again in Jaffa, its men wearing British uniforms; later in the month it attacked in Beit Nabala, a base for many Arab fighters. On 5 January 1948 the Irgun detonated a lorry bomb outside Jaffa's Ottoman-built Town Hall, killing 14 and injuring 19. In Jerusalem, two days later, Irgun members in a stolen police van rolled a barrel bomb into a large group of civilians who were waiting for a bus by the Jaffa Gate, killing around sixteen. In the pursuit that followed three of the attackers were killed and two taken prisoner.
On 6 April 1948, the Irgun raided the British Army camp at Pardes Hanna killing six British soldiers and their commanding officer.
The Deir Yassin massacre was carried out in a village west of Jerusalem that had signed a non-belligerency pact with its Jewish neighbors and the Haganah, and repeatedly had barred entry to foreign irregulars. On 9 April approximately 120 Irgun and Lehi members began an operation to capture the village. During the operation, the villagers fiercely resisted the attack, and a battle broke out. In the end, the Irgun and Lehi forces advanced gradually through house-to-house fighting. The village was only taken after the Irgun began systematically dynamiting houses, and after a Palmach unit intervened and employed mortar fire to silence the villagers' sniper positions. The operation resulted in five Jewish fighters dead and 40 injured. Some 100 to 120 villagers were also killed.
There are allegations that Irgun and Lehi forces committed war crimes during and after the capture of the village. These allegations include reports that fleeing individuals and families were fired at, and prisoners of war were killed after their capture. A Haganah report writes:
Some say that this incident accelerated the Arab exodus from Palestine.
The Irgun cooperated with the Haganah in the conquest of Haifa. At the regional commander's request, on April 21 the Irgun took over an Arab post above Hadar Ha'Carmel as well as the Arab neighborhood of Wadi Nisnas, adjacent to the Lower City.
The Irgun acted independently in the conquest of Jaffa (part of the proposed Arab State according to the UN Partition Plan). On April 25 Irgun units, about 600 strong, left the Irgun base in Ramat Gan towards Arab Jaffa. Difficult battles ensued, and the Irgun faced resistance from the Arabs as well as the British. Under the command of Amichai "Gidi" Paglin, the Irgun's chief operations officer, the Irgun captured the neighborhood of Manshiya, which threatened the city of Tel Aviv. Afterwards the force continued to the sea, towards the area of the port, and using mortars, shelled the southern neighborhoods.
On May 14, 1948 the establishment of the State of Israel was proclaimed. The declaration of independence was followed by the establishment of the Israel Defense Forces (IDF), and the process of absorbing all military organizations into the IDF started. On June 1, an agreement had been signed between Menachem Begin and Yisrael Galili for the absorption of the Irgun into the IDF. One of the clauses stated that the Irgun had to stop smuggling arms. Meanwhile, in France, Irgun representatives purchased a ship, renamed "Altalena" (a pseudonym of Ze'ev Jabotinsky), and weapons. The ship sailed on June 11 and arrived at the Israeli coast on June 20, during the first truce of the 1948 Arab–Israeli War. Although United Nations Security Council Resolution 50 declared an arms embargo in the region, neither side respected it.
When the ship arrived the Israeli government, headed by Ben-Gurion, was adamant in its demand that the Irgun surrender and hand over all of the weapons. Ben-Gurion said: "We must decide whether to hand over power to Begin or to order him to cease his activities. If he does not do so, we will open fire! Otherwise, we must decide to disperse our own army."
There were two confrontations between the newly formed IDF and the Irgun: when "Altalena" reached Kfar Vitkin in the late afternoon of Sunday, June 20, many Irgun militants, including Begin, waited on the shore. A clash with the Alexandroni Brigade, commanded by Dan Even (Epstein), occurred. Fighting ensued and there were a number of casualties on both sides. The clash ended in a ceasefire and the transfer of the weapons on shore to the local IDF commander, with the ship, now reinforced with local Irgun members, sailing to Tel Aviv, where the Irgun had more supporters.
Many Irgun members, who joined the IDF earlier that month, left their bases and concentrated on the Tel Aviv beach. A confrontation between them and the IDF units started. In response, Ben-Gurion ordered Yigael Yadin (acting Chief of Staff) to concentrate large forces on the Tel Aviv beach and to take the ship by force. Heavy guns were transferred to the area and at four in the afternoon, Ben-Gurion ordered the shelling of the "Altalena". One of the shells hit the ship, which began to burn.
Sixteen Irgun fighters were killed in the confrontation with the army; six were killed in the Kfar Vitkin area and ten on Tel Aviv beach. Three IDF soldiers were killed: two at Kfar Vitkin and one in Tel Aviv.
After the shelling of the "Altalena", more than 200 Irgun fighters were arrested. Most of them were freed several weeks later. The Irgun militants were then fully integrated with the IDF and not kept in separate units.
The initial agreement for the integration of the Irgun into the IDF did not include Jerusalem, where a small remnant of the Irgun called the "Jerusalem Battalion", numbering around 400 fighters, and Lehi, continued to operate independently of the government. Following the assassination of UN Envoy for Peace Folke Bernadotte by Lehi in September 1948, the Israeli government determined to immediately dismantle the underground organizations. An ultimatum was issued to the Irgun to liquidate as an independent organization and integrate into the IDF or be destroyed, and Israeli troops surrounded the Irgun camp in the Katamon Quarter of Jerusalem. The Irgun accepted the ultimatum on September 22, 1948, and shortly afterward the remaining Irgun fighters in Jerusalem began enlisting in the IDF and turning over their arms. At Begin's orders, the Irgun in the diaspora formally disbanded on January 12, 1949, with the Irgun's former Paris headquarters becoming the European bureau of the Herut movement.
In order to increase the popularity of the Irgun organization and ideology, Irgun employed propaganda. This propaganda was mainly aimed at the British, and included the idea of Eretz Israel. According to Irgun, the Jewish state was not only to encompass all of Mandatory Palestine, but also the Emirate of Transjordan.
When the Labour party came into power in Britain in July 1945, Irgun published an announcement entitled, "We shall give the Labour Government a Chance to Keep Its Word." In this publication, Irgun stated, "Before it came to power, this Party undertook to return the Land of Israel to the people of Israel as a free state... Men and parties in opposition or in their struggle with their rivals, have, for twenty-five years, made us many promises and undertaken clear obligations; but, on coming to power, they have gone back on their words." In another publication, released following a British counter-offensive against Jewish organizations in Palestine and titled "Mobilize the Nation!", Irgun painted the British regime as hostile to the Jewish people, even comparing the British to the Nazis. In response to what was seen as British aggression, Irgun called for a Hebrew Provisional Government and a Hebrew Liberation Army.
References to the Irgun as a terrorist organization came from sources including the Anglo-American Committee of Inquiry, newspapers and a number of prominent world and Jewish figures.
Leaders within the mainstream Jewish organizations, the Jewish Agency, Haganah and Histadrut, as well as the British authorities, routinely condemned Irgun operations as terrorism and branded it an illegal organization as a result of the group's attacks on civilian targets. However, privately at least the Haganah kept a dialogue with the dissident groups.
Ironically, in early 1947, "the British army in Mandate Palestine banned the use of the term 'terrorist' to refer to the Irgun zvai Leumi ... because it implied that British forces had reason to be terrified."
Irgun attacks prompted a formal declaration from the World Zionist Congress in 1946, which strongly condemned "the shedding of innocent blood as a means of political warfare."
The Israeli government, in September 1948, acting in response to the assassination of Count Folke Bernadotte, outlawed the Irgun and Lehi groups, declaring them terrorist organizations under the Prevention of Terrorism Ordinance.
In 1948, "The New York Times" published a letter signed by a number of prominent Jewish figures including Hannah Arendt, Albert Einstein, Sidney Hook, and Rabbi Jessurun Cardozo, which described Irgun as "a terrorist, right-wing, chauvinist organization in Palestine". The letter went on to state that Irgun and the Stern gang "inaugurated a reign of terror in the Palestine Jewish community. Teachers were beaten up for speaking against them, adults were shot for not letting their children join them. By gangster methods, beatings, window-smashing, and widespread robberies, the terrorists intimidated the population and exacted a heavy tribute."
Soon after World War II, Winston Churchill said "we should never have stopped immigration before the war", but that the Irgun were "the vilest gangsters" and that he would "never forgive the Irgun terrorists."
In 2006, Simon McDonald, the British ambassador in Tel Aviv, and John Jenkins, the Consul-General in Jerusalem, wrote in response to a pro-Irgun commemoration of the King David Hotel bombing: "We do not think that it is right for an act of terrorism, which led to the loss of many lives, to be commemorated." They also called for the removal of plaques at the site which presented as a fact that the deaths were due to the British ignoring warning calls. The plaques, in their original version, read:
McDonald and Jenkins said that no such warning calls were made, adding that even if they had, "this does not absolve those who planted the bomb from responsibility for the deaths."
Bruce Hoffman states: "Unlike many terrorist groups today, the Irgun's strategy was not deliberately to target or wantonly harm civilians." Max Abrahms writes that the Irgun "pioneered the practice of issuing pre-attack warnings to spare civilians", which was later emulated by the African National Congress (ANC) and other groups and proved "effective but not foolproof". In addition, Begin ordered attacks to take place at night and even during Shabbat to reduce the likelihood of civilian casualties. U.S. military intelligence found that "the Irgun Zvai Leumi is waging a general war against the government and at all times took special care not to cause damage or injury to persons". Although the King David Hotel bombing is widely considered a "prima facie" case of Irgun terrorism, Abrahms comments: "But this hotel wasn't a normal hotel. It served as the headquarters for the British Armed Forces in Palestine. By all accounts the intent wasn't to harm civilians."
"Ha'aretz" columnist and Israeli historian Tom Segev wrote of the Irgun: "In the second half of 1940, a few members of the Irgun Zvai Leumi (National Military Organization) – the anti-British terrorist group sponsored by the Revisionists and known by its acronym Etzel, and to the British simply as the Irgun – made contact with representatives of Fascist Italy, offering to cooperate against the British."
Clare Hollingworth, the "Daily Telegraph" and "The Scotsman" correspondent in Jerusalem during 1948 wrote several outspoken reports after spending several weeks in West Jerusalem:
A US military intelligence report, dated January 1948, described Irgun recruiting tactics amongst Displaced Persons (DP) in the camps across Germany:
Alan Dershowitz wrote in his book "The Case for Israel" that unlike the Haganah, the policy of the Irgun had been to encourage the flight of local Arabs.
Isoroku Yamamoto
Yamamoto held several important posts in the IJN, and undertook many of its changes and reorganizations, especially its development of naval aviation. He was the commander-in-chief during the early years of the Pacific War and oversaw major engagements including the attack on Pearl Harbor and the Battle of Midway. He was killed when American code breakers identified his flight plans, enabling the United States Army Air Forces to shoot down his plane. His death was a major blow to Japanese military morale during World War II.
Yamamoto was born in Nagaoka, Niigata. His father, Sadayoshi Takano (高野 貞吉), was an intermediate-rank "samurai" of the Nagaoka Domain. "Isoroku" is an old Japanese term meaning "56"; the name referred to his father's age at Isoroku's birth.
In 1916, Isoroku was adopted into the Yamamoto family (another family of former Nagaoka samurai) and took the Yamamoto name. It was a common practice for samurai families lacking sons to adopt suitable young men in this fashion to carry on the family name, the rank and the income that went with it. Isoroku married Reiko Mihashi in 1918; they had two sons and two daughters.
After graduating from the Imperial Japanese Naval Academy in 1904, Yamamoto served on the armored cruiser during the Russo-Japanese War. He was wounded at the Battle of Tsushima, losing two fingers (the index and middle fingers) on his left hand, as the cruiser was hit repeatedly by the Russian battle line. He returned to the Naval Staff College in 1914, emerging as a lieutenant commander in 1916. In December 1919, he was promoted to commander.
Yamamoto was part of the Japanese Navy establishment, which was a rival of the more aggressive army establishment, especially the officers of the Kwantung Army. He promoted a policy of a strong fleet to project force through gunboat diplomacy, rather than a fleet used primarily for transport of invasion land forces, as some of his political opponents in the army wanted. This stance led him to oppose the invasion of China. He also opposed war against the United States, partly because of his studies at Harvard University (1919–1921) and his two postings as a naval attaché in Washington, D.C., where he learned to speak fluent English. Yamamoto traveled extensively in the United States during his tour of duty there, where he studied American customs and business practices.
He was promoted to captain in 1923. On February 13, 1924, at the rank of captain, he was part of the Japanese delegation visiting the US Naval War College. Later that year, he changed his specialty from gunnery to naval aviation. His first command was a cruiser in 1928, followed by an aircraft carrier.
He participated in the London Naval Conference 1930 as a rear admiral and the London Naval Conference 1935 as a vice admiral, as the growing military influence on the government at the time deemed that a career military specialist needed to accompany the diplomats to the arms limitations talks. Yamamoto was a strong proponent of naval aviation, and served as head of the Aeronautics Department before accepting a post as commander of the First Carrier Division. Yamamoto opposed the Japanese invasion of northeast China in 1931, the subsequent full-scale land war with China in 1937, and the Tripartite Pact with Nazi Germany and fascist Italy in 1940. As Deputy Navy Minister, he apologized to United States Ambassador Joseph C. Grew for the bombing of the gunboat in December 1937. These issues made him a target of assassination threats by pro-war militarists.
Throughout 1938, many young army and naval officers began to speak publicly against Yamamoto and certain other Japanese admirals such as Mitsumasa Yonai and Shigeyoshi Inoue for their strong opposition to a tripartite pact with Nazi Germany, which the admirals saw as inimical to "Japan's natural interests". Yamamoto received a steady stream of hate mail and death threats from Japanese nationalists. His reaction to the prospect of death by assassination was passive and accepting. The admiral wrote:
"To die for Emperor and Nation is the highest hope of a military man. After a brave hard fight the blossoms are scattered on the fighting field. But if a person wants to take a life instead, still the fighting man will go to eternity for Emperor and country. One man's life or death is a matter of no importance. All that matters is the Empire. As Confucius said, "They may crush cinnabar, yet they do not take away its color; one may burn a fragrant herb, yet it will not destroy the scent." They may destroy my body, yet they will not take away my will."
The Japanese Army, annoyed at Yamamoto's unflinching opposition to a Rome-Berlin-Tokyo treaty, dispatched military police to "guard" Yamamoto, a ruse by the army to keep an eye on him. He was later reassigned from the naval ministry to sea as the commander-in-chief of the Combined Fleet on August 30, 1939. This was one of the last acts of the acting Navy Minister Mitsumasa Yonai, under Baron Hiranuma's short-lived administration, and was done partly to make it harder for assassins to target Yamamoto. Yonai was certain that if Yamamoto remained ashore, he would be killed before the year [1939] ended.
Yamamoto was promoted to admiral on November 15, 1940. This was in spite of the fact that, when Hideki Tōjō was appointed prime minister on October 18, 1941, many political observers thought that Yamamoto's career was essentially over. Tōjō had been Yamamoto's old opponent from the time when the latter served as Japan's deputy naval minister and Tōjō was the prime mover behind Japan's takeover of Manchuria. It was believed that Yamamoto would be appointed to command the Yokosuka Naval Base, "a nice safe demotion with a big house and no power at all". However, after a brief stint in the post, a new Japanese cabinet was announced, and Yamamoto found himself returned to his position of power despite his open conflicts with Tōjō and other members of the army's oligarchy who favored war with the European powers and the United States.
Two of the main reasons for Yamamoto's political survival were his immense popularity within the fleet, where he commanded the respect of his men and officers, and his close relations with the imperial family. He also had the acceptance of Japan's naval hierarchy:
Consequently, Yamamoto stayed in his post. With Tōjō now in charge of Japan's highest political office, it became clear the army would lead the navy into a war about which Yamamoto had serious reservations. He wrote to an ultranationalist:
This quote was spread by the militarists, minus the last sentence, where it was interpreted in America as a boast that Japan would conquer the entire continental United States. The omitted sentence showed Yamamoto's counsel of caution towards a war that could cost Japan dearly. Nevertheless, Yamamoto accepted the reality of impending war and planned for a quick victory by destroying the United States Pacific Fleet at Pearl Harbor in a preventive strike while simultaneously thrusting into the oil and rubber resource-rich areas of Southeast Asia, especially the Dutch East Indies, Borneo, and Malaya. In naval matters, Yamamoto opposed the building of the super-battleships as an unwise investment of resources.
Yamamoto was responsible for a number of innovations in Japanese naval aviation. Although remembered for his association with aircraft carriers, Yamamoto did more to influence the development of land-based naval aviation, particularly the Mitsubishi G3M and G4M medium bombers. His demand for great range and the ability to carry a torpedo was intended to conform to Japanese conceptions of bleeding the American fleet as it advanced across the Pacific. The planes did achieve long range, but long-range fighter escorts were not available. These planes were lightly constructed and when fully fueled, they were especially vulnerable to enemy fire. This earned the G4M the sardonic nickname the "flying cigarette lighter". Yamamoto would eventually die in one of these aircraft.
The range of the G3M and G4M contributed to a demand for great range in a fighter aircraft. This partly drove the requirements for the A6M Zero which was as noteworthy for its range as for its maneuverability. Both qualities were again purchased at the expense of light construction and flammability that later contributed to the A6M's high casualty rates as the war progressed.
As Japan moved toward war during 1940, Yamamoto gradually moved toward strategic as well as tactical innovation, again with mixed results. Prompted by talented young officers such as Lieutenant Commander Minoru Genda, Yamamoto approved the reorganization of Japanese carrier forces into the First Air Fleet, a consolidated striking force that gathered Japan's six largest carriers into one unit. This innovation gave great striking capacity, but also concentrated the vulnerable carriers into a compact target. Yamamoto also oversaw the organization of a similar large land-based organization in the 11th Air Fleet, which would later use the G3M and G4M to neutralize American air forces in the Philippines and sink the British "Force Z".
In January 1941, Yamamoto went even further and proposed a radical revision of Japanese naval strategy. For two decades, in keeping with the doctrine of Captain Alfred T. Mahan, the Naval General Staff had planned in terms of Japanese light surface forces, submarines, and land-based air units whittling down the American Fleet as it advanced across the Pacific until the Japanese Navy engaged it in a climactic "decisive battle" in the northern Philippine Sea (between the Ryukyu Islands and the Marianas), with battleships meeting in the traditional exchange between battle lines.
Correctly pointing out this plan had never worked even in Japanese war games, and painfully aware of American strategic advantages in military production capacity, Yamamoto proposed instead to seek parity with the Americans by first reducing their forces with a preventive strike, then following up with a "decisive battle" fought offensively, rather than defensively. Yamamoto hoped, but probably did not believe, that if the Americans could be dealt terrific blows early in the war, they might be willing to negotiate an end to the conflict. The Naval General Staff proved reluctant to go along and Yamamoto was eventually driven to capitalize on his popularity in the fleet by threatening to resign to get his way. Admiral Osami Nagano and the Naval General Staff eventually caved in to this pressure, but only insofar as approving the attack on Pearl Harbor.
The First Air Fleet commenced preparations for the Pearl Harbor raid, solving a number of technical problems along the way, including how to launch torpedoes in the shallow water of Pearl Harbor and how to craft armor-piercing bombs by machining down battleship gun projectiles.
As the U.S. and Japan were officially at peace, the First Air Fleet of six carriers attacked the U.S. on December 7, 1941, launching 353 aircraft against Pearl Harbor and other locations within Honolulu in two waves. The attack was a complete success according to the parameters of the mission, which sought to sink at least four American battleships and prevent the U.S. from interfering in Japan's southward advance for at least six months. Three American aircraft carriers were also considered a choice target, but these were not in port at the time of the attack.
In the end, five American battleships were sunk, three were damaged, and eleven other cruisers, destroyers, and auxiliaries were sunk or seriously damaged, 188 American aircraft were destroyed and 159 others damaged, and 2,403 people were killed and 1,178 others wounded. The Japanese lost 64 servicemen and only 29 aircraft, with 74 others damaged by anti-aircraft fire from the ground. The damaged aircraft were disproportionately dive and torpedo bombers, seriously impacting available firepower to exploit the first two waves' success, so the commander of the First Air Fleet, Naval Vice Admiral Chuichi Nagumo, withdrew. Yamamoto later lamented Nagumo's failure to seize the initiative to seek out and destroy the U.S. carriers, absent from the harbor, or further bombard various strategically important facilities on Oahu.
Nagumo had absolutely no idea where the American carriers might be, and remaining on station while his forces cast about looking for them ran the risk of his own forces being found first and attacked while his aircraft were absent searching. In any case, insufficient daylight remained after recovering the aircraft from the first two waves for the carriers to launch and recover a third before dark, and Nagumo's escorting destroyers lacked the fuel capacity for him to loiter long. Much has been made of Yamamoto's hindsight, but, in keeping with Japanese military tradition not to criticize the commander on the spot, he did not punish Nagumo for his withdrawal.
On the strategic, moral, and political level, the attack was a disaster for Japan, rousing American passions for revenge due to what is now famously called a "sneak attack". The shock of the attack, coming in an unexpected place with devastating results and without a declaration of war, galvanized the U.S. public's determination to avenge the attack. When asked by Prime Minister Fumimaro Konoe in mid-1941 about the outcome of a possible war with the United States, Yamamoto made a well-known and prophetic statement: If ordered to fight, he said, "I shall run wild considerably for the first six months or a year, but I have utterly no confidence for the second and third years." His prediction would be vindicated, as Japan easily conquered territories and islands in Asia and the Pacific for the first six months of the war, before suffering a major defeat at the Battle of Midway on June 4–7, 1942, which ultimately tilted the balance of power in the Pacific towards the U.S.
As a strategic blow intended to prevent American interference in the Dutch East Indies for six months, the Pearl Harbor attack was a success, but unbeknownst to Yamamoto, it was a pointless one. In 1935, in keeping with the evolution of War Plan Orange, the U.S. Navy had abandoned any intention of attempting to charge across the Pacific towards the Philippines at the outset of a war with Japan. In 1937, the U.S. Navy had further determined even fully manning the fleet to wartime levels could not be accomplished in less than six months, and myriad other logistic assets needed to execute a trans-Pacific movement simply did not exist and would require two years to construct after the onset of war.
In 1940, U.S. Chief of Naval Operations Admiral Harold Stark had penned a Plan Dog memorandum, which emphasized a defensive war in the Pacific while the US concentrated on defeating Nazi Germany first, and consigned Admiral Husband Kimmel's Pacific Fleet to merely keeping the Imperial Japanese Navy (IJN) out of the eastern Pacific and away from the shipping lanes to Australia. Moreover, it is questionable whether the US would have gone to war at all had Japan attacked only British and Dutch possessions in the Far East.
With the US fleet largely neutralized at Pearl Harbor, Yamamoto's Combined Fleet turned to the task of executing the larger Japanese war plan devised by the Imperial Japanese Army (IJA) and Navy General Staff. The First Air Fleet made a circuit of the Pacific, striking American, Australian, Dutch and British installations from Wake Island to Australia to Ceylon in the Indian Ocean. The 11th Air Fleet caught the US 5th Air Force on the ground in the Philippines hours after Pearl Harbor, and then sank the British Force Z battleship and battlecruiser underway at sea.
Under Yamamoto's able subordinates, Vice Admirals Jisaburō Ozawa, Nobutake Kondō, and Ibō Takahashi, the Japanese swept the inadequate remaining American, British, Dutch and Australian naval assets from the Dutch East Indies in a series of amphibious landings and surface naval battles culminating in the Battle of the Java Sea on February 27, 1942. Along with the occupation of the Dutch East Indies came the fall of Singapore on February 15, 1942, and the eventual reduction of the remaining American-Filipino defensive positions in the Philippines on the Bataan peninsula, April 9, 1942, and Corregidor Island on May 6, 1942. The Japanese had secured their oil- and rubber-rich "southern resources area".
By late-March, having achieved their initial aims with surprising speed and little loss, albeit against enemies ill-prepared to resist them, the Japanese paused to consider their next moves. Yamamoto and a few Japanese military leaders and officials waited, hoping that the United States or Great Britain would negotiate an armistice or a peace treaty to end the war. But when the British, as well as the Americans, expressed no interest in negotiating a ceasefire with Japan, Japanese thoughts turned to securing their newly seized territory and acquiring more with an eye to forcing one or more of their enemies out of the war.
Competing plans were developed at this stage, including thrusts to the west against India, the south against Australia, and east against the United States. Yamamoto was involved in this debate, supporting different plans at different times with varying degrees of enthusiasm and for varying purposes, including "horse-trading" for support of his own objectives.
Plans included ideas as ambitious as invading India or Australia, or seizing Hawaii. These grandiose ventures were inevitably set aside as the army could not spare enough troops from China for the first two, which would require a minimum of 250,000 men, nor shipping to support the latter two. (Shipping was allocated separately to the IJN and IJA, and jealously guarded.) Instead, the Imperial General Staff supported an army thrust into Burma in hopes of linking up with Indian Nationalists revolting against British rule, and attacks in New Guinea and the Solomon Islands designed to imperil Australia's lines of communication with the United States. Yamamoto argued for a decisive offensive strike in the east to finish off the US fleet, but the more conservative Naval General Staff officers were unwilling to risk it.
On April 18, in the midst of these debates, the Doolittle Raid struck Tokyo and surrounding areas, demonstrating the threat posed by US aircraft carriers, and giving Yamamoto an event he could exploit to get his way as further debate over military strategy came to a quick end. The Naval General Staff agreed to Yamamoto's Midway Island (MI) Operation, subsequent to the first phase of the operations against Australia's link with America, and concurrent with its plan to seize positions in the Aleutian Islands.
Yamamoto rushed planning for the Midway and Aleutians missions, while dispatching a force under Vice Admiral Takeo Takagi, including the Fifth Carrier Division (the large, new carriers "Shōkaku" and "Zuikaku"), to support the effort to seize the islands of Tulagi and Guadalcanal for seaplane and aeroplane bases, and the town of Port Moresby on Papua New Guinea's south coast facing Australia.
The Port Moresby (MO) Operation proved an unwelcome setback. Although Tulagi and Guadalcanal were taken, the Port Moresby invasion fleet was compelled to turn back when Takagi clashed with a US carrier task force in the Battle of the Coral Sea in early May. Although the Japanese sank the US carrier "Lexington" and damaged the "Yorktown", the Americans damaged the carrier "Shōkaku" so badly that she required dockyard repairs, and the Japanese lost the light carrier "Shōhō". Just as importantly, Japanese operational mishaps and US fighters and anti-aircraft fire devastated the dive bomber and torpedo plane formations of both "Shōkaku"s and "Zuikaku"s air groups. These losses sidelined "Zuikaku" while she awaited replacement aircraft and aircrews, and saw to tactical integration and training. These two ships would be sorely missed a month later at Midway.
Yamamoto's plan for Midway Island was an extension of his efforts to knock the US Pacific Fleet out of action long enough for Japan to fortify its defensive perimeter in the Pacific island chains. Yamamoto felt it necessary to seek an early, offensive decisive battle.
This plan was long believed to have been to draw American attention—and possibly carrier forces—north from Pearl Harbor by sending his Fifth Fleet (two light carriers, five cruisers, 13 destroyers, and four transports) against the Aleutians, raiding Dutch Harbor on Unalaska Island and invading the more distant islands of Kiska and Attu.
While Fifth Fleet attacked the Aleutians, First Mobile Force (four carriers, two battleships, three cruisers, and 12 destroyers) would raid Midway and destroy its air force. Once this was neutralized, Second Fleet (one light carrier, two battleships, 10 cruisers, 21 destroyers, and 11 transports) would land 5,000 troops to seize the atoll from the US Marines.
The seizure of Midway was expected to draw the US carriers west into a trap where the First Mobile Force would engage and destroy them. Afterwards, First Fleet (one light carrier, seven battleships, three cruisers and 13 destroyers), in conjunction with elements of Second Fleet, would mop up remaining US surface forces and complete the destruction of the US Pacific Fleet.
To guard against failure, Yamamoto initiated two security measures. The first was an aerial reconnaissance mission (Operation K) over Pearl Harbor to ascertain if the US carriers were there. The second was a picket line of submarines to detect the movement of US carriers toward Midway in time for First Mobile Force, First Fleet, and Second Fleet to combine against it. In the event, the first measure was aborted and the second delayed until after US carriers had already sortied.
The plan was a compromise and hastily prepared, apparently so it could be launched in time for the anniversary of Tsushima, but appeared well thought out, well organized, and finely timed when viewed from a Japanese viewpoint. Against four carriers, two light carriers, 11 battleships, 16 cruisers and 46 destroyers likely to be in the area of the main battle, the US could field only three carriers, eight cruisers, and 15 destroyers. The disparity appeared crushing. Only in numbers of carrier decks, available aircraft, and submarines was there near parity between the two sides. Despite various mishaps that developed in the execution, it appeared that—barring something unforeseen—Yamamoto held all the cards.
Unbeknownst to Admiral Yamamoto, the US had learned of Japanese plans thanks to the code breaking of Japanese naval code D (known to the US as JN-25). As a result, Admiral Chester Nimitz, the Pacific Fleet commander, was able to circumvent both of Yamamoto's security measures and place his outnumbered forces in a position to conduct an ambush. By Nimitz's calculation, his three available carrier decks, plus Midway, gave him rough parity with Nagumo's First Mobile Force.
Following a nuisance raid by Japanese flying boats in May, Nimitz dispatched a minesweeper to guard the intended refueling point for Operation K near French Frigate Shoals, causing the reconnaissance mission to be aborted and leaving Yamamoto ignorant of whether Pacific Fleet carriers were still at Pearl Harbor. It remains unclear why Yamamoto permitted the earlier attack, and why his submarines did not sortie sooner, as reconnaissance was essential to success at Midway. Nimitz also dispatched his carriers toward Midway early, and they passed the intended picket line force of submarines "en route" to their station, negating Yamamoto's back-up security measure. Nimitz's carriers positioned themselves to ambush the "Kidō Butai" (striking force) when it struck Midway. A token cruiser and destroyer force was sent toward the Aleutians, but otherwise Nimitz ignored them. On June 4, 1942, days before Yamamoto expected them to interfere in the Midway operation, US carrier-based aircraft destroyed the four carriers of the "Kidō Butai", catching the Japanese carriers at an especially vulnerable moment.
With his air power destroyed and his forces not yet concentrated for a fleet battle, Yamamoto maneuvered his remaining forces, still strong on paper, to trap the US forces. He was unable to do so because his initial dispositions had placed his surface combatants too far from Midway, and because Admiral Raymond Spruance prudently withdrew to the east in a position to further defend Midway Island, believing (based on a mistaken submarine report) the Japanese still intended to invade. Not knowing several battleships, including the powerful "Yamato", were on the Japanese order of battle, he did not comprehend the severe risk of a night surface battle, in which his carriers and cruisers would be at a disadvantage. However, his move to the east did avoid the possibility of such a battle taking place. Correctly perceiving he had lost and could not bring surface forces into action, Yamamoto aborted the invasion of Midway and withdrew. The defeat marked the high tide of Japanese expansion.
Yamamoto's plan for Midway Island has been the subject of much criticism. Some historians state it violated the principle of concentration of force, and was overly complex. Others point to similarly complex Allied operations, such as Operation MB8, that were successful, and note the extent to which the US intelligence "coup" derailed the operation before it began. Had Yamamoto's dispositions not denied Nagumo adequate pre-attack reconnaissance assets, both the American cryptanalytic success and the unexpected appearance of the American carriers would have been irrelevant.
The Battle of Midway checked Japanese momentum, but the IJN was still a powerful force, capable of regaining the initiative. It planned to resume the thrust with Operation FS aimed at eventually taking Samoa and Fiji to cut the US lifeline to Australia.
Yamamoto remained in command as commander-in-chief, retained at least partly to avoid diminishing the morale of the Combined Fleet. However, he had lost face as a result of the Midway defeat and the Naval General Staff were disinclined to indulge in further gambles. This reduced Yamamoto to pursuing the classic defensive "decisive battle strategy" he had attempted to overturn.
Yamamoto committed Combined Fleet units to a series of small attrition actions across the south and central Pacific that stung the Americans, but suffered losses he could ill afford in return. Three major efforts to beat the Americans moving on Guadalcanal precipitated a pair of carrier battles that Yamamoto commanded personally: the Eastern Solomons and Santa Cruz Islands in August and October, and finally a wild pair of surface engagements in November, all timed to coincide with Japanese Army pushes. The effort was wasted when the army could not hold up its end of the operation. Yamamoto's naval forces won a few victories and inflicted considerable losses and damage on the US fleet in several naval battles around Guadalcanal, which included the battles of Savo Island, Cape Esperance, and Tassafaronga, but he could never draw the US into a decisive fleet action. As a result, Japanese naval strength was reduced.
To boost morale following the defeat at Guadalcanal, Yamamoto decided to make an inspection tour throughout the South Pacific.
On April 14, 1943, the US naval intelligence effort, code-named "Magic", intercepted and decrypted a message containing specifics of Yamamoto's tour, including arrival and departure times and locations, as well as the number and types of aircraft that would transport and accompany him on the journey. Yamamoto, the itinerary revealed, would be flying from Rabaul to Balalae Airfield, on an island near Bougainville in the Solomon Islands, on the morning of April 18, 1943.
US President Franklin D. Roosevelt may have authorized Secretary of the Navy Frank Knox to "get Yamamoto," however no official record of such an order exists and sources disagree whether he did so. Knox essentially let Admiral Chester W. Nimitz make the decision. Nimitz first consulted Admiral William Halsey Jr., Commander, South Pacific, and then authorized the mission on April 17 to intercept Yamamoto's flight "en route" and shoot it down. A squadron of USAAF Lockheed P-38 Lightning aircraft were assigned the task as only they possessed sufficient range to intercept and engage. Select pilots from three units were informed that they were intercepting an "important high officer" with no specific name given.
On the morning of April 18, despite urging by local commanders to cancel the trip for fear of ambush, Yamamoto's two Mitsubishi G4M bombers, used as fast transport aircraft without bombs, left Rabaul as scheduled for the trip. Sixteen P-38s intercepted the flight over Bougainville and a dogfight ensued between them and the six escorting Mitsubishi A6M Zeroes. First Lieutenant Rex T. Barber engaged the first of the two Japanese transports, which turned out to be "T1-323" (Yamamoto's aircraft). He fired on the aircraft until it began to spew smoke from its left engine. Barber turned away to attack the other transport as Yamamoto's plane crashed into the jungle.
Yamamoto's body, along with the crash site, was found the next day in the jungle of the island of Bougainville by a Japanese search and rescue party, led by army engineer Lieutenant Tsuyoshi Hamasuna. According to Hamasuna, Yamamoto had been thrown clear of the plane's wreckage, his white-gloved hand grasping the hilt of his katana, still upright in his seat under a tree. Hamasuna said Yamamoto was instantly recognizable, head dipped down as if deep in thought. A post-mortem disclosed that Yamamoto had received two 0.50-caliber bullet wounds, one to the back of his left shoulder and another to the left side of his lower jaw that exited above his right eye. The Japanese navy doctor examining the body determined that the head wound killed Yamamoto. The more violent details of Yamamoto's death were hidden from the Japanese public. The medical report was whitewashed, changed "on orders from above", according to biographer Hiroyuki Agawa.
Yamamoto's staff cremated his remains at Buin and his ashes were returned to Tokyo aboard the battleship "Musashi", Yamamoto's last flagship. Yamamoto was given a full state funeral on June 5, 1943, where he received, posthumously, the title of Marshal Admiral and was awarded the Order of the Chrysanthemum (1st Class). He was also awarded Nazi Germany's Knight's Cross of the Iron Cross with Oak Leaves and Swords. Some of his ashes were buried in the public Tama Cemetery, Tokyo (多摩霊園) and the remainder at his ancestral burial grounds at the temple of Chuko-ji in Nagaoka City. He was succeeded as commander-in-chief of the Combined Fleet by Admiral Mineichi Koga.
Yamamoto practiced calligraphy. He and his wife, Reiko, had four children: two sons and two daughters. Yamamoto was an avid gambler, enjoying "Go", "shogi", billiards, bridge, mah jong, poker, and other games that tested his wits and sharpened his mind. He frequently made jokes about moving to Monaco and starting his own casino. He enjoyed the company of "geisha", and his wife Reiko revealed to the Japanese public in 1954 that Yamamoto was closer to his favorite "geisha" Kawai Chiyoko than to her, which stirred some controversy. His funeral procession passed by Kawai's quarters on the way to the cemetery. The claim that Yamamoto was a Catholic is likely due to confusion with retired Admiral Shinjiro Stefano Yamamoto, who was a decade older than Isoroku and died of natural causes in 1942.
Since the end of the Second World War, a number of Japanese and American films have depicted the character of Isoroku Yamamoto.
The first film to feature Yamamoto was Toho's 1953 film "Taiheiyō no Washi" (later released in the United States as "Eagle of the Pacific"), in which Yamamoto was portrayed by Denjirô Ôkôchi.
One of the most notable films is the 1970 movie "Tora! Tora! Tora!", which stars Japanese actor Sô Yamamura as Yamamoto, who states after the attack on Pearl Harbor: "I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve."
The 1960 film "The Gallant Hours" depicts the battle of wits between Vice-Admiral William Halsey, Jr. and Yamamoto from the start of the Guadalcanal Campaign in August 1942 to Yamamoto's death in April 1943. The film, however, portrays Yamamoto's death as occurring in November 1942, the day after the Naval Battle of Guadalcanal, and the P-38 aircraft that killed him as coming from Guadalcanal.
In Daiei Studios's 1969 film "Aa, kaigun" (later released in the United States as "Gateway to Glory"), Yamamoto was portrayed by Shôgo Shimada.
Award-winning Japanese actor Toshiro Mifune (star of "The Seven Samurai") portrayed Yamamoto in three films, among them the 1976 American film "Midway".
In Shūe Matsubayashi's 1981 film "Rengō Kantai" (lit. "Combined Fleet", later released in the United States as "The Imperial Navy"), Yamamoto was portrayed by Keiju Kobayashi.
In the 2001 film "Pearl Harbor", Yamamoto was portrayed by Oscar-nominated Japanese-born American actor Mako Iwamatsu. Like "Tora! Tora! Tora!", this film also features the sleeping giant quote.
In Toei's 2011 war film "Rengō Kantai Shirei Chōkan: Yamamoto Isoroku", Yamamoto was portrayed by Kōji Yakusho.
A fictionalized version of Yamamoto's death was portrayed in the "Baa Baa Black Sheep" episode "The Hawk Flies on Sunday", though only photos of Yamamoto were shown. In this episode, set much later in the war than in real life, the Black Sheep, a Marine Corsair squadron, joins an army squadron of P-51 Mustangs. The Marines intercepted fighter cover while the army shot down Yamamoto.
In the 2019 motion picture "Midway", Yamamoto is portrayed by Etsushi Toyokawa. This film also features Admiral Yamamoto speaking aloud the sleeping giant quote.
In the 1993 OVA series "Konpeki no Kantai" (lit. "Deep Blue Fleet"), instead of dying in the plane crash, Yamamoto blacks out and suddenly wakes up as his younger self, Isoroku Takano, after the Battle of Tsushima in 1905. His memory from the original timeline intact, Yamamoto uses his knowledge of the future to help Japan become a stronger military power, eventually launching a "coup d'état" against Hideki Tōjō's government. In the subsequent Pacific War, Japan's technologically advanced navy decisively defeats the United States, and grants all of the former European and American colonies in Asia full independence. Later on, Yamamoto convinces Japan to join forces with the United States and Britain to defeat Nazi Germany.
In the 2004 anime series "Zipang", Yamamoto works to develop the uneasy partnership with the crew of the JMSDF "Mirai", which has been transported back sixty years through time to the year 1942.
In the Axis of Time trilogy by author John Birmingham, after a naval task force from the year 2021 is accidentally transported back through time to 1942, Yamamoto assumes a leadership role in the dramatic alteration of Japan's war strategy.
In Douglas Niles' 2007 book "MacArthur's War: A Novel of the Invasion of Japan" (written with Michael Dobson), which focuses on General Douglas MacArthur and an alternate history of the Pacific War (following a considerably different outcome of the Battle of Midway), Yamamoto is portrayed sympathetically, with much of the action in the Japanese government seen through his eyes, though he could not change the major decisions of Japan in World War II.
In Robert Conroy's 2011 book "Rising Sun", Yamamoto directs the IJN to launch a series of attacks on the American West Coast, in the hope that the United States can be convinced to sue for peace, securing Japan's place as a world power; but he cannot escape his lingering fear that the war will ultimately doom Japan.
In Neal Stephenson's 1999 book "Cryptonomicon", Yamamoto's final moments are depicted, with him realising that Japan's naval codes have been broken and that he must inform headquarters.
In "The West Wing" episode "We Killed Yamamoto", the Chairman of the Joint Chiefs of Staff uses the assassination of Yamamoto to advocate for another assassination.
Infrared spectroscopy
Infrared spectroscopy (IR spectroscopy or vibrational spectroscopy) involves the interaction of infrared radiation with matter. It covers a range of techniques, mostly based on absorption spectroscopy. As with all spectroscopic techniques, it can be used to identify and study chemical substances. Samples may be solid, liquid, or gas. Infrared spectroscopy is conducted with an instrument called an infrared spectrometer (or spectrophotometer) to produce an infrared spectrum. An IR spectrum can be visualized as a graph of infrared light absorbance (or transmittance) on the vertical axis vs. frequency or wavelength on the horizontal axis. Typical units of frequency used in IR spectra are reciprocal centimeters (sometimes called wave numbers), with the symbol cm−1. Units of IR wavelength are commonly given in micrometers (formerly called "microns"), symbol μm, which are related to wave numbers in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible, as discussed below.
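The reciprocal relationship between wavenumbers and micrometers amounts to one division; a minimal sketch (function names are illustrative):

```python
# Wavenumbers (cm^-1) and wavelengths (um) are reciprocally related:
# nu_tilde [cm^-1] = 10,000 / lambda [um].

def wavenumber_to_um(wavenumber_cm1: float) -> float:
    """Convert a wavenumber in cm^-1 to a wavelength in micrometers."""
    return 1e4 / wavenumber_cm1

def um_to_wavenumber(wavelength_um: float) -> float:
    """Convert a wavelength in micrometers to a wavenumber in cm^-1."""
    return 1e4 / wavelength_um

# The conventional mid-IR limits: 4000 cm^-1 <-> 2.5 um, 400 cm^-1 <-> 25 um.
print(wavenumber_to_um(4000.0))  # 2.5
print(wavenumber_to_um(400.0))   # 25.0
```

The factor of 10,000 absorbs the unit change (1 cm = 10,000 μm), which is why the two conversion functions are identical.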
The infrared portion of the electromagnetic spectrum is usually divided into three regions: the near-, mid- and far-infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14000–4000 cm−1 (0.7–2.5 μm wavelength), can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4000–400 cm−1 (2.5–25 μm), is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm−1 (25–1000 μm), has low energy and may be used for rotational spectroscopy and low-frequency vibrations. The region from 2–130 cm−1, bordering the microwave region, is considered the terahertz region and may probe intermolecular vibrations. The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties.
Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These absorptions occur at resonant frequencies, i.e. the frequency of the absorbed radiation matches the vibrational frequency. The energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling.
In order for a vibrational mode in a sample to be "IR active", it must be associated with changes in the dipole moment. A permanent dipole is not necessary, as the rule requires only a change in dipole moment.
A molecule can vibrate in many ways, and each way is called a "vibrational mode". A molecule with N atoms has 3N – 5 vibrational modes if it is linear and 3N – 6 if it is nonlinear (these are also called vibrational degrees of freedom). As an example, H2O, a non-linear molecule, has 3 × 3 – 6 = 3 vibrational modes.
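The 3N – 5 / 3N – 6 counting rule is straightforward to encode; a minimal sketch (the function name is illustrative):

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Number of vibrational modes: 3N - 5 for linear, 3N - 6 for nonlinear."""
    if n_atoms < 2:
        raise ValueError("need at least a diatomic molecule")
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=False))  # H2O (bent): 3
print(vibrational_modes(2, linear=True))   # CO: 1
print(vibrational_modes(3, linear=True))   # CO2: 4
```

The 5 vs. 6 difference comes from rotation: a linear molecule has only two distinguishable rotational axes, leaving one extra degree of freedom for vibration.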
Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. CO, absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra.
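For a diatomic, the harmonic-oscillator model predicts the stretching wavenumber from the bond force constant and the reduced mass. The sketch below assumes an illustrative force constant of about 1860 N/m for CO (an assumed value, not from the text), which lands near the observed fundamental around 2143 cm−1:

```python
import math

# Harmonic-oscillator estimate of a diatomic stretching wavenumber:
# nu_tilde = (1 / (2*pi*c)) * sqrt(k / mu), with reduced mass mu = m1*m2/(m1+m2).
AMU = 1.66054e-27   # atomic mass unit, kg
C_CM = 2.99792e10   # speed of light, cm/s

def stretch_wavenumber(m1_amu: float, m2_amu: float, k_n_per_m: float) -> float:
    """Wavenumber (cm^-1) of a diatomic stretch in the harmonic approximation."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU
    return math.sqrt(k_n_per_m / mu) / (2 * math.pi * C_CM)

# CO (masses 12 and 16 amu) with an assumed force constant of ~1860 N/m
# gives roughly 2146 cm^-1, close to the observed band near 2143 cm^-1.
print(round(stretch_wavenumber(12.0, 16.0, 1860.0)))
```

The heavier the atoms or the weaker the bond, the lower the wavenumber, which is why big molecules spread their many bands across the mid-IR.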
The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate in nine different ways. Six of these vibrations involve only the CH2 portion: symmetric (s) and antisymmetric (as) stretching (ν), scissoring (δ), rocking (ρ), wagging (ω) and twisting (τ), as shown below. Structures that do not have the two additional X groups attached have fewer modes because some modes are defined by specific relationships to those other attached groups. For example, in water, the rocking, wagging, and twisting modes do not exist because these types of motions of the H atoms represent simple rotation of the whole molecule rather than vibrations within it. In case of more complex molecules, out-of-plane (γ) vibrational modes can be also present.
These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms.
The simplest and most important or "fundamental" IR bands arise from the excitations of normal modes, the simplest distortions of the molecule, from the ground state with vibrational quantum number v = 0 to the first excited state with vibrational quantum number v = 1. In some cases, overtone bands are observed. An overtone band arises from the absorption of a photon leading to a direct transition from the ground state to the second excited vibrational state (v = 2). Such a band appears at approximately twice the energy of the fundamental band for the same normal mode. Some excitations, so-called "combination modes", involve simultaneous excitation of more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in the energy and intensity of the bands.
The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of the IR is the same as the vibrational frequency of a bond or collection of bonds, absorption occurs. Examination of the transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This measurement can be achieved by scanning the wavelength range using a monochromator. Alternatively, the entire wavelength range is measured using a Fourier transform instrument and then a transmittance or absorbance spectrum is generated using a dedicated procedure.
This technique is commonly used for analyzing samples with covalent bonds. Simple spectra are obtained from samples with few IR active bonds and high levels of purity. More complex molecular structures lead to more absorption bands and more complex spectra.
Gaseous samples require a sample cell with a long pathlength to compensate for the low concentration. The pathlength of the sample cell depends on the concentration of the compound of interest. A simple glass tube with a length of 5 to 10 cm, equipped with infrared-transparent windows at both ends, can be used for concentrations down to several hundred ppm. Sample gas concentrations well below ppm can be measured with a White's cell, in which the infrared light is guided with mirrors to travel through the gas. White's cells are available with optical pathlengths starting from 0.5 m up to a hundred meters.
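The role of pathlength follows from the Beer–Lambert law, A = εlc: for a fixed dilute concentration, absorbance grows linearly with pathlength, so a multipass cell can turn an invisible band into a measurable one. A sketch with hypothetical numbers (the absorptivity and concentration below are illustrative, not from the text):

```python
def absorbance(epsilon: float, path_cm: float, conc: float) -> float:
    """Beer-Lambert law: A = epsilon * l * c (units must be consistent)."""
    return epsilon * path_cm * conc

# Hypothetical band: molar absorptivity 20 L mol^-1 cm^-1, ppm-level
# gas concentration ~4e-8 mol/L. A 10 cm tube gives a vanishing signal;
# a 100 m White's cell boosts it a thousandfold.
print(absorbance(20.0, 10.0, 4e-8))      # ~8e-06, far below the noise floor
print(absorbance(20.0, 10_000.0, 4e-8))  # ~8e-03, readily measurable
```

The same relation explains the reverse trade-off for concentrated liquids, where very short pathlengths (thin films) are needed to keep the bands on scale.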
Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used).
The plates are transparent to the infrared light and do not introduce any lines onto the spectra.
Solid samples can be prepared in a variety of ways. One common method is to crush the sample with an oily mulling agent (usually the mineral oil Nujol). A thin film of the mull is applied onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely (to remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass. A third technique is the "cast film" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care must be taken to ensure that the film is not too thick, otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 μm) film from a solid sample. This is one of the most important ways of analysing failed plastic products, for example, because the integrity of the solid is preserved.
In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into the sample cup which is inserted into the photoacoustic cell which is then sealed for the measurement. The sample may be one solid piece, powder or basically in any form for the measurement. For example, a piece of rock can be inserted into the sample cup and the spectrum measured from it.
It is typical to record a spectrum of both the sample and a "reference". This step controls for a number of variables, e.g. the infrared detector, which may affect the spectrum. The reference measurement makes it possible to eliminate the influence of the instrument.
The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is to simply remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just show the properties of the solute (at least approximately).
A common way to compare to a reference is sequentially: first measure the reference, then replace the reference by the sample and measure the sample. This technique is not perfectly reliable; if the infrared lamp is a bit brighter during the reference measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods, such as a "two-beam" setup (see figure), can correct for these types of effects to give very accurate results. The Standard addition method can be used to statistically cancel these errors.
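The sample and reference measurements combine into transmittance and absorbance as T = I_sample / I_reference and A = −log10 T; a minimal sketch with illustrative detector readings:

```python
import math

def transmittance(i_sample: float, i_reference: float) -> float:
    """Fraction of light transmitted, relative to the reference measurement."""
    return i_sample / i_reference

def absorbance(i_sample: float, i_reference: float) -> float:
    """Absorbance A = -log10(T); instrument factors cancel in the ratio."""
    return -math.log10(i_sample / i_reference)

# Illustrative detector readings at one wavenumber:
print(round(transmittance(50.0, 100.0), 3))  # 0.5
print(round(absorbance(50.0, 100.0), 3))     # 0.301
```

Because both intensities pass through the same optics and detector, source brightness, window losses, and detector response divide out, which is exactly the cancellation the reference measurement is meant to provide.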
Nevertheless, among the different absorption-based techniques used for gaseous species detection, cavity ring-down spectroscopy (CRDS) can be used as a calibration-free method. Because CRDS is based on measurements of photon lifetimes (and not the laser intensity), it requires no calibration or comparison with a reference.
Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra. Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an "interferogram", represents light output as a function of mirror position. A data-processing technique called Fourier transform turns this raw data into the desired result (the sample's spectrum): Light output as a function of infrared wavelength (or equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference.
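The core numerical step of FTIR can be sketched with a synthetic interferogram: a single spectral line produces a cosine in optical path difference, and a discrete Fourier transform recovers its position. This is a minimal illustration only; real instruments also apply apodization, zero-filling, and phase correction, all omitted here:

```python
import numpy as np

# Synthetic interferogram for a single spectral line at 1000 cm^-1:
# intensity vs. optical path difference x is I(x) ~ cos(2*pi*nu_tilde*x).
nu_tilde = 1000.0            # line position, cm^-1
dx = 1.0 / (2 * 4000.0)      # sampling step in cm -> 4000 cm^-1 Nyquist limit
x = np.arange(4096) * dx
interferogram = np.cos(2 * np.pi * nu_tilde * x)

# The Fourier transform of the interferogram is the spectrum;
# rfftfreq with d in cm gives the axis directly in cm^-1.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(x.size, d=dx)

print(wavenumbers[np.argmax(spectrum)])  # 1000.0
```

The mirror travel sets the maximum path difference, which in turn sets the spectral resolution: here 4096 samples over ~0.51 cm give bins about 2 cm−1 apart.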
An alternate method for acquiring spectra is the "dispersive" or "scanning monochromator" method. In this approach, the sample is irradiated sequentially with various single wavelengths. The dispersive method is more common in UV-Vis spectroscopy, but is less practical in the infrared than the FTIR method. One reason that FTIR is favored is called "Fellgett's advantage" or the "multiplex advantage": The information at all frequencies is collected simultaneously, improving both speed and signal-to-noise ratio. Another is called "Jacquinot's Throughput Advantage": A dispersive measurement requires detecting much lower light levels than an FTIR measurement. There are other advantages, as well as some disadvantages, but virtually all modern infrared spectrometers are FTIR instruments.
Various forms of infrared microscopy exist. These include IR versions of sub-diffraction microscopy such as IR NSOM, photothermal microspectroscopy, Nano-FTIR and atomic force microscope based infrared spectroscopy (AFM-IR).
Infrared spectroscopy is not the only method of studying molecular vibrational spectra. Raman spectroscopy involves an inelastic scattering process in which only part of the energy of an incident photon is absorbed by the molecule, and the remaining part is scattered and detected. The energy difference corresponds to absorbed vibrational energy.
The selection rules for infrared and for Raman spectroscopy are different at least for some molecular symmetries, so that the two methods are complementary in that they observe vibrations of different symmetries.
Another method is electron energy loss spectroscopy (EELS), in which the energy absorbed is provided by an inelastically scattered electron rather than a photon. This method is useful for studying vibrations of molecules adsorbed on a solid surface.
Recently, high-resolution EELS (HREELS) has emerged as a technique for performing vibrational spectroscopy in a transmission electron microscope (TEM). In combination with the high spatial resolution of the TEM, unprecedented experiments have been performed, such as nano-scale temperature measurements, mapping of isotopically labeled molecules, mapping of phonon modes in position- and momentum-space, vibrational surface and bulk mode mapping on nanocubes, and investigations of polariton modes in van der Waals crystals.
Analysis of vibrational modes that are IR-inactive but appear in inelastic neutron scattering is also possible at high spatial resolution using EELS. Although the spatial resolution of HREELS is very high, the bands are extremely broad compared to those of other techniques.
IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of intensity and position (frequency). The positions of these bands are summarized in correlation tables as shown below.
An infrared spectrum is often interpreted as having two regions.
In the functional group region there are one to a few troughs per functional group.
In the fingerprint region there are many troughs which form an intricate pattern which can be used like a fingerprint to determine the compound.
For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency. In such cases further information can be gleaned about the strength of a bond, relying on the empirical guideline called Badger's rule. Originally published by Richard Badger in 1934, this rule states that the strength of a bond correlates with the frequency of its vibrational mode. That is, an increase in bond strength leads to a corresponding frequency increase, and vice versa.
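The trend that Badger's rule describes can be sketched with the harmonic oscillator approximation (the force constants below are round, illustrative values, not fitted data):

```python
import math

AMU = 1.66054e-27   # atomic mass unit, kg
C = 2.9979e10       # speed of light, cm/s

def wavenumber(k, m_a, m_b):
    """Harmonic-oscillator band position in cm^-1 for a force
    constant k (N/m) and atomic masses m_a, m_b (amu)."""
    mu = m_a * m_b / (m_a + m_b) * AMU   # reduced mass, kg
    return math.sqrt(k / mu) / (2 * math.pi * C)

# A stiffer bond vibrates at a higher wavenumber, the trend that
# Badger's rule captures.  Illustrative force constants: roughly
# 500 N/m for a C-O single bond, roughly 1200 N/m for C=O.
print(round(wavenumber(500, 12, 16)))   # ~1112 cm^-1
print(round(wavenumber(1200, 12, 16)))  # ~1723 cm^-1
```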
Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and industry. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers.
It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. It can be used in determining the blood alcohol content of a suspected drunk driver.
IR-spectroscopy has been successfully used in analysis and identification of pigments in paintings and other art objects such as illuminated manuscripts.
A useful way of analyzing solid samples without the need for cutting samples uses ATR or attenuated total reflectance spectroscopy. Using this approach, samples are pressed against the face of a single crystal. The infrared radiation passes through the crystal and only interacts with the sample at the interface between the two materials.
With advances in computational filtering and manipulation of the results, samples in solution can now be measured accurately. (Water produces a broad absorbance across the range of interest, which would render the spectra unreadable without this computer treatment.)
Some instruments also automatically identify the substance being measured from a library of thousands of stored reference spectra.
Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Modern research instruments can take infrared measurements across the range of interest as frequently as 32 times a second. This can be done whilst simultaneous measurements are made using other techniques. This makes the observations of chemical reactions and processes quicker and more accurate.
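As a sketch of such monitoring (the absorbance readings are made up, for a hypothetical monomer band), the extent of reaction can be estimated from the decay of a single band's absorbance over time:

```python
# Hypothetical absorbance readings at a monomer band (for example a
# C=C stretch near 1640 cm^-1) taken during a polymerization: the
# band shrinks as the reaction consumes the double bonds.
times = [0, 10, 20, 30, 40]            # seconds
absorbances = [0.80, 0.52, 0.31, 0.18, 0.10]

a0 = absorbances[0]
for t, a in zip(times, absorbances):
    conversion = 1.0 - a / a0          # fraction of monomer reacted
    print(f"t = {t:3d} s   conversion = {conversion:.0%}")
```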
Infrared spectroscopy has also been successfully utilized in the field of semiconductor microelectronics: for example, infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, silicon nitride, etc.
Another important application of infrared spectroscopy is in the food industry, where it is used to measure the concentration of various compounds in different food products.
The instruments are now small, and can be transported, even for use in field trials.
Infrared Spectroscopy is also used in gas leak detection devices such as the DP-IR and EyeCGAs. These devices detect hydrocarbon gas leaks in the transportation of natural gas and crude oil.
In February 2014, NASA announced a greatly upgraded database, based on IR spectroscopy, for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
Recent developments include a miniature IR spectrometer that is linked to a cloud-based database and suitable for personal everyday use, and NIR-spectroscopic chips that can be embedded in smartphones and various gadgets.
The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy. For example, the O–O stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm−1 for ν(16O–16O) and ν(18O–18O), respectively.
By considering the O–O bond as a spring, the frequency of absorbance can be calculated as a wavenumber [= frequency/(speed of light)]:

$\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}$

where "k" is the spring constant for the bond, "c" is the speed of light, and "μ" is the reduced mass of the A–B system:

$\mu = \frac{m_A m_B}{m_A + m_B}$

($m_A$ is the mass of atom $A$).
The reduced masses for 16O–16O and 18O–18O can be approximated as 8 and 9 atomic mass units, respectively. Thus the two stretching frequencies are expected to differ by a factor of $\sqrt{9/8} \approx 1.06$, in good agreement with the observed ratio of 832/788 ≈ 1.056.
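This consistency check can be carried out directly, using the approximate reduced masses and the measured 16O–16O frequency quoted above:

```python
import math

nu_16 = 832.0             # measured nu(16O-16O), cm^-1
mu_16, mu_18 = 8.0, 9.0   # approximate reduced masses, amu

# Same spring constant, heavier reduced mass: the frequency
# scales as 1/sqrt(mu).
nu_18_predicted = nu_16 * math.sqrt(mu_16 / mu_18)
print(round(nu_18_predicted))  # 784, close to the observed 788 cm^-1
```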
The effect of isotopes, both on the vibration and the decay dynamics, has been found to be stronger than previously thought. In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves the symmetric stretch mode with a strong isotope dependence. For example, it was shown that for a natural silicon sample, the lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is increased to 29Si, the lifetime increases to 19 ps. In similar manner, when the silicon atom is changed to 30Si, the lifetime becomes 27 ps.
Two-dimensional infrared correlation spectroscopy analysis combines multiple samples of infrared spectra to reveal more complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution is enhanced. The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral changes at two different wavenumbers.
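The synchronous and asynchronous spectra can be computed with a short sketch (following Noda's generalized 2D correlation analysis; the perturbation series below is made up, and only numpy is assumed):

```python
import numpy as np

def two_d_correlation(spectra):
    """Synchronous and asynchronous 2D correlation spectra from an
    (m perturbations x n wavenumbers) array, following Noda's
    generalized 2D correlation analysis."""
    m, _ = spectra.shape
    dyn = spectra - spectra.mean(axis=0)          # dynamic spectra

    sync = dyn.T @ dyn / (m - 1)

    # Hilbert-Noda transformation matrix for the asynchronous part.
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = dyn.T @ (noda @ dyn) / (m - 1)

    return sync, asyn

# Hypothetical data: five spectra at increasing perturbation, three
# bands; the first two grow together while the third decays.
spectra = np.array([[0.1 * i, 0.2 * i, 1.0 - 0.1 * i] for i in range(5)])
sync, asyn = two_d_correlation(spectra)
print(sync.shape)  # (3, 3): sync[0, 1] > 0, sync[0, 2] < 0
```

Bands that change together give positive synchronous cross peaks; bands that change in opposite directions give negative ones.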
Nonlinear two-dimensional infrared spectroscopy is the infrared version of correlation spectroscopy. It is a technique that has become available with the development of femtosecond infrared laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during which the system is allowed to relax. The typical waiting time lasts from zero to several picoseconds, and the duration can be controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to monitor molecular dynamics on a picosecond timescale. Though still a relatively new technique, it is becoming increasingly popular for fundamental research.
As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes. In contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. These excitations result in excited state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct techniques, COSY and NOESY, are frequently used. The cross peaks in the first are related to the scalar coupling, while in the latter they are related to the spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy, analogs have been drawn to these 2DNMR techniques. Nonlinear two-dimensional infrared spectroscopy with zero waiting time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been used for determination of the secondary structure content of proteins.
https://en.wikipedia.org/wiki?curid=15412
Irenaeus
Irenaeus (; "Eirēnaios"; c. 130 – c. 202 AD) was a Greek bishop noted for his role in guiding and expanding Christian communities in what is now the south of France and, more widely, for the development of Christian theology by combating heresy and defining orthodoxy. Originating from Smyrna, now Izmir in Turkey, he had seen and heard the preaching of Polycarp, the last known living connection with the Apostles, who in turn was said to have heard John the Evangelist.
Chosen as bishop of Lugdunum, now Lyon, his best-known work is "Against Heresies", often cited as "Adversus Haereses", an attack on gnosticism, in particular that of Valentinus. To counter the doctrines of the gnostic sects claiming secret wisdom, he offered three pillars of orthodoxy: the scriptures, the tradition handed down from the apostles, and the teaching of the apostles' successors. Intrinsic to his writing is that the surest source of Christian guidance is the church of Rome, and he is the earliest surviving witness to regard all four of the now-canonical gospels as essential.
He is recognized as a saint in the Catholic Church, which celebrates his feast on 28 June, and in the Eastern Orthodox Churches, which celebrate his feast on 23 August.
Irenaeus was a Greek from Polycarp's hometown of Smyrna in Asia Minor, now İzmir, Turkey, born during the first half of the 2nd century. The exact date is thought to be between the years 120 and 140. Unlike many of his contemporaries, he was brought up in a Christian family rather than converting as an adult.
During the persecution of Marcus Aurelius, the Roman Emperor from 161–180, Irenaeus was a priest of the Church of Lyon. The clergy of that city, many of whom were suffering imprisonment for the faith, sent him in 177 to Rome with a letter to Pope Eleutherius concerning the heresy of Montanism, and that occasion bore emphatic testimony to his merits. While Irenaeus was in Rome, a persecution took place in Lyon. Returning to Gaul, Irenaeus succeeded the martyr Saint Pothinus and became the second bishop of Lyon.
During the religious peace which followed the persecution of Marcus Aurelius, the new bishop divided his activities between the duties of a pastor and of a missionary (as to which we have but brief data, late and not very certain). Almost all his writings were directed against Gnosticism. The most famous of these writings is "Adversus haereses" ("Against Heresies"). Irenaeus alludes to coming across Gnostic writings, and holding conversations with Gnostics, and this may have taken place in Asia Minor or in Rome. However, it also appears that Gnosticism was present near Lyon: he writes that there were followers of 'Marcus the Magician' living and teaching in the Rhone valley.
Little is known about the career of Irenaeus after he became bishop. The last action reported of him (by Eusebius, 150 years later) is that in 190 or 191, he exerted influence on Pope Victor I not to excommunicate the Christian communities of Asia Minor which persevered in the practice of the Quartodeciman celebration of Easter.
Nothing is known of the date of his death, which must have occurred at the end of the second or the beginning of the third century. He is regarded as a martyr by the Catholic Church and by some within the Orthodox Church. He was buried under the Church of Saint John in Lyon, which was later renamed St Irenaeus in his honour. The tomb and his remains were utterly destroyed in 1562 by the Huguenots.
Irenaeus wrote a number of books, but the most important that survives is the "Against Heresies" (or, in its Latin title, "Adversus haereses"). In Book I, Irenaeus talks about the Valentinian Gnostics and their predecessors, who he says go as far back as the magician Simon Magus. In Book II he attempts to provide proof that Valentinianism contains no merit in terms of its doctrines. In Book III Irenaeus purports to show that these doctrines are false, by providing counter-evidence gleaned from the Gospels. Book IV consists of Jesus' sayings, and here Irenaeus also stresses the unity of the Old Testament and the Gospel. In the final volume, Book V, Irenaeus focuses on more sayings of Jesus plus the letters of Paul the Apostle.
Irenaeus wrote: "One should not seek among others the truth that can be easily gotten from the Church. For in her, as in a rich treasury, the apostles have placed all that pertains to truth, so that everyone can drink this beverage of life. She is the door of life." (Irenaeus of Lyons, "Against Heresies", III.4) But he also said, "Christ came not only for those who believed from the time of Tiberius Caesar, nor did the Father provide only for those who are now, but for absolutely all men from the beginning, who, according to their ability, feared and loved God and lived justly ... and desired to see Christ and to hear His voice." Irenaeus recognized that all who feared and loved God, practiced justice and piety towards their neighbors, and desired to see Christ, insofar as they were able to do so, would be saved. Since many were not able to have an explicit desire to see Christ, but only implicit, it is clear that for Irenaeus this is enough.
The purpose of "Against Heresies" was to refute the teachings of various Gnostic groups; apparently, several Greek merchants had begun an oratorial campaign in Irenaeus' bishopric, teaching that the material world was the accidental creation of an evil god, from which we are to escape by the pursuit of "gnosis". Irenaeus argued that the true gnosis is in fact knowledge of Christ, which redeems rather than escapes from bodily existence.
Until the discovery of the Library of Nag Hammadi in 1945, "Against Heresies" was the best-surviving description of Gnosticism. Some religious scholars have argued the findings at Nag Hammadi have shown Irenaeus' description of Gnosticism to be inaccurate and polemic in nature. However, the general consensus among modern scholars is that Irenaeus was fairly accurate in his transmission of Gnostic beliefs, and that the Nag Hammadi texts have raised no substantial challenges to the overall accuracy of Irenaeus' information. Religious historian Elaine Pagels criticizes Irenaeus for describing Gnostic groups as sexual libertines, for example, when some of their own writings advocated chastity more strongly than did orthodox texts. However, the Nag Hammadi texts do not present a single, coherent picture of any unified Gnostic system of belief, but rather divergent beliefs of multiple Gnostic sects. Some of these sects were indeed libertine because they considered bodily existence meaningless; others praised chastity, and strongly prohibited any sexual activity, even within marriage.
Irenaeus also wrote "The Demonstration of the Apostolic Preaching" (also known as "Proof of the Apostolic Preaching"), an Armenian copy of which was discovered in 1904. This work seems to have been an instruction for recent Christian converts.
Eusebius attests to other works by Irenaeus, today lost, including "On the Ogdoad", an untitled letter to Blastus regarding schism, "On the Subject of Knowledge", "On the Monarchy" (or "How God is not the Cause of Evil"), and "On Easter".
Irenaeus exercised wide influence on the generation which followed. Both Hippolytus and Tertullian freely drew on his writings. However, none of his works aside from "Against Heresies" and "The Demonstration of the Apostolic Preaching" survive today, perhaps because his literal hope of an earthly millennium may have made him uncongenial reading in the Greek East. Even though no complete version of "Against Heresies" in its original Greek exists, we possess the full ancient Latin version, probably of the third century, as well as thirty-three fragments of a Syriac version and a complete Armenian version of books 4 and 5.
Irenaeus' works were first translated into English by John Keble and published in 1872 as part of the Library of the Fathers series.
Irenaeus pointed to the public rule of faith, authoritatively articulated by the preaching of bishops and inculcated in Church practice, especially worship, as an authentic apostolic tradition by which to read Scripture truly against heresies. He classified as Scripture not only the Old Testament but most of the books now known as the New Testament, while excluding many works, a large number by Gnostics, that flourished in the 2nd century and claimed scriptural authority. Irenaeus, as a student of Polycarp, who was a direct disciple of the Apostle John, believed that he was interpreting scriptures in the same hermeneutic as the Apostles. This connection to Jesus was important to Irenaeus because both he and the Gnostics based their arguments on Scripture. Irenaeus argued that since he could trace his authority to Jesus and the Gnostics could not, his interpretation of Scripture was correct. He also used "the Rule of Faith", a "proto-creed" with similarities to the Apostles' Creed, as a hermeneutical key to argue that his interpretation of Scripture was correct.
Before Irenaeus, Christians differed as to which gospel they preferred. The Christians of Asia Minor preferred the Gospel of John. The Gospel of Matthew was the most popular overall. Irenaeus asserted that four Gospels, Matthew, Mark, Luke, and John, were canonical scripture. Thus Irenaeus provides the earliest witness to the assertion of the four canonical Gospels, possibly in reaction to Marcion's edited version of the Gospel of Luke, which Marcion asserted was the one and only true gospel.
Based on the arguments Irenaeus made in support of only four authentic gospels, some interpreters deduce that the "fourfold Gospel" must have still been a novelty in Irenaeus' time. "Against Heresies" 3.11.7 acknowledges that many heterodox Christians use only one gospel while 3.11.9 acknowledges that some use more than four. The success of Tatian's Diatessaron in about the same time period is "... a powerful indication that the fourfold Gospel contemporaneously sponsored by Irenaeus was not broadly, let alone universally, recognized." (The apologist and ascetic Tatian had previously harmonized the four gospels into a single narrative, the "Diatessaron", circa 150–160.)
Irenaeus is also the earliest attestation that the Gospel of John was written by John the Apostle, and that the Gospel of Luke was written by Luke, the companion of Paul.
Scholars contend that Irenaeus quotes from 21 of the 27 New Testament books.
He may refer to Hebrews (in Book 2, Chapter 30) and James (Book 4, Chapter 16), and perhaps even 2 Peter (Book 5, Chapter 28), but does not cite Philemon, 3 John or Jude.
Irenaeus cited the New Testament approximately 1,000 times. About one third of his citations are made to Paul's letters. Irenaeus considered all 13 letters belonging to the Pauline corpus to have been written by Paul himself.
Irenaeus is also known as one of the first theologians to use the principle of apostolic succession to refute his opponents.
In his writing against the Gnostics, who claimed to possess a secret oral tradition from Jesus himself, Irenaeus maintained that the bishops in different cities are known as far back as the Apostles and that the bishops provided the only safe guide to the interpretation of Scripture. In a passage that became a "locus classicus" of Catholic-Protestant polemics, he cited the Roman church as an example of the unbroken chain of authority, which text Western polemics would use to assert the primacy of Rome over Eastern churches by virtue of its "preeminent authority".
With the lists of bishops to which Irenaeus referred, the doctrine of the apostolic succession of the bishops, firmly established in the Church at this time, could be linked. This succession was important to establish a chain of custody for orthodoxy. He felt it important, however, also to speak of a succession of elders (presbyters).
Irenaeus' point when refuting the Gnostics was that all of the Apostolic churches had preserved the same traditions and teachings in many independent streams. It was the unanimous agreement between these many independent streams of transmission that proved the orthodox faith, current in those churches, to be true.
The central point of Irenaeus' theology is the unity and the goodness of God, in opposition to the Gnostics' theory of God; a number of divine emanations (Aeons) along with a distinction between the Monad and the Demiurge. Irenaeus uses the Logos theology he inherited from Justin Martyr. Irenaeus was a student of Polycarp, who was said to have been tutored by John the Apostle. (John had used Logos terminology in the Gospel of John and the letter of 1 John). Irenaeus prefers to speak of the Son and the Spirit as the "hands of God".
Irenaeus' emphasis on the unity of God is reflected in his corresponding emphasis on the unity of salvation history. Irenaeus repeatedly insists that God began the world and has been overseeing it ever since this creative act; everything that has happened is part of his plan for humanity. The essence of this plan is a process of maturation: Irenaeus believes that humanity was created immature, and God intended his creatures to take a long time to grow into or assume the divine likeness.
Everything that has happened since has therefore been planned by God to help humanity overcome this initial mishap and achieve spiritual maturity. The world has been intentionally designed by God as a difficult place, where human beings are forced to make moral decisions, as only in this way can they mature as moral agents. Irenaeus likens death to the big fish that swallowed Jonah: it was only in the depths of the whale's belly that Jonah could turn to God and act according to the divine will. Similarly, death and suffering appear as evils, but without them we could never come to know God.
According to Irenaeus, the high point in salvation history is the advent of Jesus. For Irenaeus, the Incarnation of Christ was intended by God before he determined that humanity would be created. Irenaeus develops this idea based on Rom. 5:14, saying "For inasmuch as He had a pre-existence as a saving Being, it was necessary that what might be saved should also be called into existence, in order that the Being who saves should not exist in vain." Some theologians maintain that Irenaeus believed that Incarnation would have occurred even if humanity had never sinned; but the fact that they did sin determined his role as the savior.
Irenaeus sees Christ as the new Adam, who systematically "undoes" what Adam did: thus, where Adam was disobedient concerning God's edict concerning the fruit of the Tree of Knowledge of Good and Evil, Christ was obedient even to death on the wood of a tree. Irenaeus is the first to draw comparisons between Eve and Mary, contrasting the faithlessness of the former with the faithfulness of the latter. In addition to reversing the wrongs done by Adam, Irenaeus thinks of Christ as "recapitulating" or "summing up" human life.
Irenaeus conceives of our salvation as essentially coming about through the incarnation of God as a man. He characterizes the penalty for sin as death and corruption. God, however, is immortal and incorruptible, and simply by becoming united to human nature in Christ he conveys those qualities to us: they spread, as it were, like a benign infection. Irenaeus emphasizes that salvation occurs through Christ's Incarnation, which bestows incorruptibility on humanity, rather than emphasizing His Redemptive death in the crucifixion, although the latter event is an integral part of the former.
Part of the process of recapitulation is for Christ to go through every stage of human life, from infancy to old age, and simply by living it, sanctify it with his divinity. Although it is sometimes claimed that Irenaeus believed Christ did not die until he was older than is conventionally portrayed, the bishop of Lyon simply pointed out that because Jesus passed the permissible age for becoming a rabbi (30 years old and above), he recapitulated and sanctified the period between 30 and 50 years old, as per the Jewish custom of periodization of life, and so touched the beginning of old age at 50 years old. (See Adversus Haereses, book II, chapter 22.)
In the passage of "Adversus Haereses" under consideration, Irenaeus is clear that after receiving baptism at the age of thirty, citing Luke 3:23, Gnostics then falsely assert that "He [Jesus] preached only one year reckoning from His baptism," and also, "On completing His thirtieth year He [Jesus] suffered, being in fact still a young man, and who had by no means attained to advanced age." Irenaeus argues against the Gnostics by using scripture to add several years after his baptism by referencing 3 distinctly separate visits to Jerusalem. The first is when Jesus makes wine out of water, he goes up to the Paschal feast-day, after which he withdraws and is found in Samaria. The second is when Jesus goes up to Jerusalem for Passover and cures the paralytic, after which he withdraws over the sea of Tiberias. The third mention is when he travels to Jerusalem, eats the Passover, and suffers on the following day.
Irenaeus quotes scripture, which we reference as John 8:57, to suggest that Jesus ministers while in his 40s. In this passage, Jesus' opponents want to argue that Jesus has not seen Abraham, because Jesus is too young. Jesus' opponents argue that Jesus is not yet 50 years old. Irenaeus argues that if Jesus were in his thirties, his opponents would have argued that He was not yet 40 years old, since that would make Him even younger. Irenaeus' argument is that they would not weaken their own argument by adding years to Jesus' age. Irenaeus also writes that "The Elders witness to this, who in Asia conferred with John the Lord's disciple, to the effect that John had delivered these things unto them: for he abode with them until the times of Trajan. And some of them saw not only John, but others also of the Apostles, and had this same account from them, and witness to the aforesaid relation."
In Demonstration (74) Irenaeus notes "For Pontius Pilate was governor of Judæa, and he had at that time resentful enmity against Herod the king of the Jews. But then, when Christ was brought to him bound, Pilate sent Him to Herod, giving command to enquire of him, that he might know of a certainty what he should desire concerning Him; making Christ a convenient occasion of reconciliation with the king." Pilate was the prefect of the Roman province of Judaea from AD 26–36. He served under Emperor Tiberius Claudius Nero. Herod Antipas was tetrarch of Galilee and Perea, a client state of the Roman Empire. He ruled from 4 BC to 39 AD. In refuting Gnostic claims that Jesus preached for only one year after his baptism, Irenaeus used the "recapitulation" approach to demonstrate that by living beyond the age of thirty Christ sanctified even old age.
Many aspects of Irenaeus' presentation of salvation history depend on Paul's Epistles.
Irenaeus’ conception of salvation relies heavily on the understanding found in Paul's letters. Irenaeus first brings up the theme of victory over sin and evil that is afforded by Jesus's death. God's intervention has saved humanity from the Fall of Adam and the wickedness of Satan. Human nature has become joined with God's in the person of Jesus, thus allowing human nature to have victory over sin. Paul writes on the same theme, that Christ has come so that a new order is formed; to be under the Law is to be under the sin of Adam (Rom. 6:14, Gal. 5:18).
Reconciliation is also a theme of Paul's that Irenaeus stresses in his teachings on Salvation. Irenaeus believes Jesus coming in flesh and blood sanctified humanity so that it might again reflect the perfection associated with the likeness of the Divine. This perfection leads to a new life, in the lineage of God, which is forever striving for eternal life and unity with the Father. This is a carryover from Paul, who attributes this reconciliation to the actions of Christ: "For since death came through a human being, the resurrection of the dead has also come through a human being; for as all die in Adam, so all will be made alive in Christ" 1 Cor. 15:21–22.
A third theme in both Paul's and Irenaeus's conceptions of salvation is the sacrifice of Christ being necessary for the new life given to humanity in the triumph over evil. It is in this obedient sacrifice that Jesus is victor and reconciler, thus erasing the marks that Adam left on human nature. To argue against the Gnostics on this point, Irenaeus uses Colossians (Col. 2:13–14) to show that the debt which came by a tree has been paid for us in another tree. Furthermore, the first chapter of Ephesians is picked up in Irenaeus's discussion of the topic when he asserts, "By His own blood He redeemed us, as also His apostle declares, 'In whom we have redemption through His blood, even the remission of sins.'"
Irenaeus does not simply repeat Paul's message in his understanding of salvation. One of the major changes Irenaeus makes concerns when the Parousia will occur. Paul believed that it would happen soon, probably in his own lifetime (1 Thess. 4:15, 1 Cor. 15:51–52). However, the end times did not come immediately, and Christians began to worry and to have doubts about the faith. For Irenaeus, sin is seen as haste, just as Adam and Eve hastily ate from the tree of knowledge as they pleased. Redemption, on the other hand, was restored to humanity through Christ's submission to God's will. Thus the salvation of man is restored to the original trajectory controlled by God, which humanity's sinful haste had forfeited. This slower version of salvation is not something that Irenaeus received from Paul, but was a necessary construct given the delay of the second coming of Jesus.
The frequencies of quotations and allusions to the Pauline Epistles in "Against Heresies" are:
To counter his Gnostic opponents, Irenaeus significantly develops Paul's presentation of Christ as the Last Adam.
Irenaeus' presentation of Christ as the New Adam is based on Paul's Christ-Adam parallel in Romans 5:12–21. Irenaeus uses this parallel to demonstrate that Christ truly took human flesh. Irenaeus considered it important to emphasize this point because he understands the failure to recognize Christ's full humanity as the bond linking the various strains of Gnosticism together, as seen in his statement that "according to the opinion of no one of the heretics was the Word of God made flesh." Irenaeus believes that unless the Word became flesh, humans were not fully redeemed. He explains that by becoming man, Christ restored humanity to being in the image and likeness of God, which they had lost in the Fall of man. Just as Adam was the original head of humanity through whom all sinned, Christ is the new head of humanity who fulfills Adam's role in the Economy of Salvation. Irenaeus calls this process of restoring humanity recapitulation.
For Irenaeus, Paul's presentation of the Old Law (the Mosaic covenant) in this passage indicates that the Old Law revealed humanity's sinfulness but could not save them. He explains that "For as the law was spiritual, it merely made sin to stand out in relief, but did not destroy it. For sin had no dominion over the spirit, but over man." Since humans have a physical nature, they cannot be saved by a spiritual law. Instead, they need a human Savior. This is why it was necessary for Christ to take human flesh. Irenaeus summarizes how Christ's taking human flesh saves humanity with a statement that closely resembles Romans 5:19, "For as by the disobedience of the one man who was originally moulded from virgin soil, the many were made sinners, and forfeited life; so was it necessary that, by the obedience of one man, who was originally born from a virgin, many should be justified and receive salvation." The physical creation of Adam and Christ is emphasized by Irenaeus to demonstrate how the Incarnation saves humanity's physical nature.
Irenaeus emphasizes the importance of Christ's reversal of Adam's action. Through His obedience, Christ undoes Adam's disobedience. Irenaeus presents the Passion as the climax of Christ's obedience, emphasizing how this obedience on the tree of the Cross Phil. 2:8 undoes the disobedience that occurred through a tree Gen. 3:17.
Irenaeus' interpretation of Paul's discussion of Christ as the New Adam is significant because it helped develop the recapitulation theory of atonement. Irenaeus emphasizes that it is through Christ's reversal of Adam's action that humanity is saved, rather than considering the Redemption to occur in a cultic or juridical way.
The biblical passage "Death has been swallowed up in victory" implied for Irenaeus that the Lord will surely resurrect the first human, i.e. Adam, as one of the saved. According to Irenaeus, those who deny Adam's salvation are “shutting themselves out from life for ever”, and the first one who did so was Tatian. The notion that the Second Adam saved the first Adam was advocated not only by Irenaeus, but also by Gregory Thaumaturgus, which suggests that it was popular in the Early Church.
Valentinian Gnosticism was one of the major forms of Gnosticism that Irenaeus opposed.
According to the Gnostic view of Salvation, creation was perfect to begin with; it did not need time to grow and mature. For the Valentinians, the material world is the result of the loss of perfection which resulted from Sophia's desire to understand the Forefather. Therefore, one is ultimately redeemed, through secret knowledge, to enter the pleroma from which Achamoth originally fell.
According to the Valentinian Gnostics, there are three classes of human beings. They are the material, who cannot attain salvation; the psychic, who are strengthened by works and faith (they are part of the church); and the spiritual, who cannot decay or be harmed by material actions.
Essentially, ordinary humans—those who have faith but do not possess the special knowledge—will not attain salvation. Spirituals, on the other hand—those who obtain this great gift—are the only class that will eventually attain salvation.
In his article entitled "The Demiurge", J.P. Arendzen sums up the Valentinian view of the salvation of man. He writes, "The first, or carnal men, will return to the grossness of matter and finally be consumed by fire; the second, or psychic men, together with the Demiurge as their master, will enter a middle state, neither heaven (pleroma) nor hell (hyle); the purely spiritual men will be completely freed from the influence of the Demiurge and together with the Saviour and Achamoth, his spouse, will enter the pleroma divested of body (húle) and soul (psuché)."
In this understanding of salvation, the purpose of the Incarnation was to redeem the Spirituals from their material bodies. By taking a material body, the Son becomes the Savior and facilitates this entrance into the pleroma by making it possible for the Spirituals to receive his spiritual body. However, in becoming a body and soul, the Son Himself becomes one of those needing redemption. Therefore, the Word descends onto the Savior at His Baptism in the Jordan, which liberates the Son from his corruptible body and soul. His redemption from the body and soul is then applied to the Spirituals. In response to this Gnostic view of Christ, Irenaeus emphasized that the Word became flesh and developed a soteriology that emphasized the significance of Christ's material Body in saving humanity, as discussed in the sections above.
In his criticism of Gnosticism, Irenaeus made reference to a Gnostic gospel which portrayed Judas in a positive light, as having acted in accordance with Jesus' instructions. The recently discovered Gospel of Judas dates close to the period when Irenaeus lived (late 2nd century), and scholars typically regard this work as one of many Gnostic texts, showing one of many varieties of Gnostic beliefs of the period.
The first four books of "Against Heresies" constitute a minute analysis and refutation of the Gnostic doctrines. The fifth is a statement of positive belief contrasting the constantly shifting and contradictory Gnostic opinions with the steadfast faith of the church. He appeals to the Biblical prophecies to demonstrate the truthfulness of Christianity.
Irenaeus showed a close relationship between the predicted events of Daniel 2 and 7. Rome, the fourth prophetic kingdom, would end in a tenfold partition. The ten divisions of the empire are the "ten horns" of Daniel 7 and the "ten horns" in Revelation 17. A "little horn," which was to supplant three of Rome's ten divisions, was also the still future "eighth" in Revelation. Irenaeus concluded with the destruction of all kingdoms at the Second Advent, when Christ, the prophesied "stone," cut out of the mountain without hands, smote the image after Rome's division.
Irenaeus identified the Antichrist, another name of the apostate Man of Sin, with Daniel's Little Horn and John's Beast of Revelation 13. He sought to apply other expressions to the Antichrist, such as "the abomination of desolation," mentioned by Christ (Matt. 24:15) and the "king of a most fierce countenance," in Gabriel's explanation of the Little Horn of Daniel 8. But he is not very clear how "the sacrifice and the libation shall be taken away" during the "half-week," or three and one-half years of the Antichrist's reign.
Under the notion that the Antichrist, as a single individual, might be of Jewish origin, he fancies that the mention of "Dan," in Jeremiah 8:16, and the omission of that name from those tribes listed in Revelation 7, might indicate the Antichrist's tribe. This surmise became the foundation of a series of subsequent interpretations by other students of Bible prophecy.
Like the other early church fathers, Irenaeus interpreted the three and one-half "times" of the Little Horn of Daniel 7 as three and one-half literal years. Antichrist's three and a half years of sitting in the temple are placed immediately before the Second Coming of Christ. They are identified as the second half of the "one week" of Daniel 9. Irenaeus says nothing of the seventy weeks; we do not know whether he placed the "one week" at the end of the seventy or whether he had a gap.
Irenaeus is the first of the church fathers to consider the mystic number 666. While Irenaeus did propose some solutions of this numerical riddle, his interpretation was quite reserved. Thus, he cautiously states:
Although Irenaeus did speculate upon three names to symbolize this mystical number, namely Euanthas, Teitan, and Lateinos, nevertheless he was content to believe that the Antichrist would arise some time in the future after the fall of Rome and then the meaning of the number would be revealed.
Irenaeus declares that the Antichrist's future three-and-a-half-year reign, when he sits in the temple at Jerusalem, will be terminated by the second advent, with the resurrection of the just, the destruction of the wicked, and the millennial reign of the righteous. The general resurrection and the judgment follow the descent of the New Jerusalem at the end of the millennial kingdom.
Irenaeus calls those "heretics" who maintain that the saved are immediately glorified in the kingdom to come after death, before their resurrection. He avers that the millennial kingdom and the resurrection are actualities, not allegories, the first resurrection introducing this promised kingdom in which the risen saints are described as ruling over the renewed earth during the millennium, between the two resurrections.
Irenaeus held to the old Jewish tradition that the first six days of creation week were typical of the first six thousand years of human history, with Antichrist manifesting himself in the sixth period. And he expected the millennial kingdom to begin with the second coming of Christ to destroy the wicked and inaugurate, for the righteous, the reign of the kingdom of God during the seventh thousand years, the millennial Sabbath, as signified by the Sabbath of creation week.
In common with many of the fathers, Irenaeus did not distinguish between the new earth re-created in its eternal state—the thousand years of Revelation 20—when the saints are with Christ after His second advent, and the Jewish traditions of the Messianic kingdom. Hence, he applies Biblical and traditional ideas to his descriptions of this earth during the millennium, throughout the closing chapters of Book 5. This conception of the reign of resurrected and translated saints with Christ on this earth during the millennium, popularly known as chiliasm, was the increasingly prevailing belief of this time. Incipient distortions due to the admixture of current traditions, which figure in the extreme forms of chiliasm, caused a reaction against the earlier interpretations of Bible prophecies.
Irenaeus was not looking for a Jewish kingdom. He interpreted Israel as the Christian church, the spiritual seed of Abraham.
At times his expressions are highly fanciful. He tells, for instance, of a prodigious fertility of this earth during the millennium, after the resurrection of the righteous, "when also the creation, having been renovated and set free, shall fructify with an abundance of all kinds of food." In this connection, he attributes to Christ the saying about the vine with ten thousand branches, and the ear of wheat with ten thousand grains, and so forth, which he quotes from Papias of Hierapolis.
Irenaeus is often grouped with other early church fathers as teaching historic premillennialism, which maintains a belief in the earthly reign of Christ but differs from dispensational premillennialism in its view of when the rapture, the translation of the saints, occurs. In Against Heresies (V.XXIX.1) he says "And therefore, when in the end the Church shall be suddenly caught up from this, it is said, 'There shall be tribulation such as has not been since the beginning, neither shall be.'"
Irenaeus' exegesis does not give complete coverage. On the seals, for example, he merely alludes to Christ as the rider on the white horse. He stresses five factors with greater clarity and emphasis than Justin:
|
https://en.wikipedia.org/wiki?curid=15414
|
Involuntary commitment
Involuntary commitment or civil commitment (also known informally as sectioning or being sectioned in some jurisdictions, such as the United Kingdom) is a legal process through which an individual who is deemed by a qualified agent to have symptoms of severe mental disorder is ordered by a court into treatment in a psychiatric hospital (inpatient) or in the community (outpatient).
Criteria for civil commitment are established by laws which vary between nations. Commitment proceedings often follow a period of emergency hospitalization, during which an individual with acute psychiatric symptoms is confined for a relatively short duration (e.g. 72 hours) in a treatment facility for evaluation and stabilization by mental health professionals who may then determine whether further civil commitment is appropriate or necessary. If civil commitment proceedings follow, then the evaluation is presented in a formal court hearing where testimony and other evidence may also be submitted. The subject of the hearing is typically entitled to legal counsel and may challenge a commitment order through habeas corpus.
Historically, until the mid-1960s in most jurisdictions in the United States, all committals to public psychiatric facilities and most committals to private ones were involuntary. Since then, there have been alternating trends towards the abolition or substantial reduction of involuntary commitment, a trend known as "deinstitutionalization".
In most jurisdictions, involuntary commitment is applied to individuals believed to be experiencing a mental illness that impairs their ability to reason to such an extent that the agents of the law, state, or courts determine that decisions will be made for the individual under a legal framework. In some jurisdictions, this is a proceeding distinct from being found incompetent.
Involuntary commitment is used in some degree for each of the following although different jurisdictions have different criteria. Some jurisdictions limit court-ordered treatment to individuals who meet statutory criteria for presenting a danger to self or others. Other jurisdictions have broader criteria.
Training is gradually becoming available in mental health first aid to equip community members such as teachers, school administrators, police officers, and medical workers with training in recognizing, and authority in managing, situations where involuntary evaluations of behavior are applicable under law. The extension of first aid training to cover mental health problems and crises is a quite recent development. A mental health first aid training course was developed in Australia in 2001 and has been found to improve assistance provided to persons with an alleged mental illness or mental health crisis. This form of training has now spread to a number of other countries (Canada, Finland, Hong Kong, Ireland, Singapore, Scotland, England, Wales, and the United States). Mental health triage may be used in an emergency room to make a determination about potential risk and apply treatment protocols.
Observation is sometimes used to determine whether a person warrants involuntary commitment. It is not always clear on a relatively brief examination whether a person is psychotic or otherwise warrants commitment.
Austria, Belgium, Germany, Israel, the Netherlands, Northern Ireland, Russia, Taiwan, Ontario (Canada), and the United States have adopted commitment criteria based on the presumed danger of the defendant to self or to others. People with suicidal thoughts may act on these impulses and harm or kill themselves. People with psychosis are occasionally driven by their delusions or hallucinations to harm themselves or others. People with certain types of personality disorders can occasionally present a danger to themselves or others.
This concern has found expression in the standards for involuntary commitment in every US state and in other countries as the danger to self or others standard, sometimes supplemented by the requirement that the danger be imminent. In some jurisdictions, the danger to self or others standard has been broadened in recent years to include need-for-treatment criteria such as "gravely disabled".
Starting in the 1960s, there has been a worldwide trend toward moving psychiatric patients from hospital settings to less restrictive settings in the community, a shift known as "deinstitutionalization". Because the shift was typically not accompanied by a commensurate development of community-based services, critics say that deinstitutionalization has led to large numbers of people who would once have been inpatients instead being incarcerated or becoming homeless. In some jurisdictions, laws authorizing court-ordered outpatient treatment have been passed in an effort to compel individuals with chronic, untreated severe mental illness to take psychiatric medication while living outside the hospital (e.g. Laura's Law, Kendra's Law).
Before the 1960s deinstitutionalization there were earlier efforts to free psychiatric patients. Doctor Philippe Pinel (1745–1826) ordered the removal of chains from patients.
In a study of 269 patients from Vermont State Hospital done by Courtenay M. Harding, Ph.D., and associates, about two-thirds of the ex-patients did well after deinstitutionalization.
United Nations General Assembly Resolution 46/119, "Principles for the Protection of Persons with Mental Illness and the Improvement of Mental Health Care," is a non-binding resolution advocating certain broadly drawn procedures for the carrying out of involuntary commitment. These principles have been used in many countries where local laws have been revised or new ones implemented. The UN runs programs in some countries to assist in this process.
At certain places and times, the practice of involuntary commitment has been used for the suppression of dissent, or in a punitive way.
In the former Soviet Union, psychiatric hospitals were used as prisons to isolate political prisoners from the rest of society. British playwright Tom Stoppard wrote "Every Good Boy Deserves Favour" about the relationship between a patient and his doctor in one of these hospitals. Stoppard was inspired by a meeting with a Russian exile.
In 1927, after the execution of Sacco and Vanzetti in the United States, demonstrator Aurora D'Angelo was sent to a mental health facility for psychiatric evaluation after she participated in a rally in support of the anarchists.
|
https://en.wikipedia.org/wiki?curid=15416
|
Intermolecular force
Intermolecular forces (IMF) are the forces which mediate interaction between molecules, including forces of attraction or repulsion which act between molecules and other types of neighboring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics.
The investigation of intermolecular forces starts from macroscopic observations which indicate the existence and action of forces at a molecular level. These observations include non-ideal-gas thermodynamic behavior reflected by virial coefficients, vapor pressure, viscosity, surface tension, and adsorption data.
The first reference to the nature of microscopic forces is found in Alexis Clairaut's work "Theorie de la Figure de la Terre". Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell and Boltzmann.
Attractive intermolecular forces are categorized into the following types:
Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity, pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and Lennard-Jones potentials.
A "hydrogen bond" is the attraction between the lone pair of an electronegative atom and a hydrogen atom that is bonded to an electronegative atom, usually nitrogen, oxygen, or fluorine. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of the van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between two molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in the hydrogen bond is termed the acceptor molecule. The number of active pairs is the minimum of the number of hydrogens the donor can provide and the number of lone pairs the acceptor has.
Though not depicted in the diagram, water molecules have two active pairs, as the oxygen atom can interact with two hydrogens to form two hydrogen bonds. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge.
It is essentially due to electrostatic forces, although in aqueous media the association is driven by entropy and is often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and, in the solid state, usually show contacts determined only by the van der Waals radii of the ions.
Inorganic as well as organic ions display similar salt-bridge association ΔG values in water at moderate ionic strength I: around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, e.g., a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.
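The additivity just described can be sketched as a toy calculation. The helper below and the ~5 kJ/mol per unit-charge pair are illustrative assumptions built from the figures quoted above, not a quantitative model:

```python
# Toy estimate of salt-bridge association free energy, assuming (per the
# figures above) roughly 5 kJ/mol per unit-charge anion-cation pair and
# additivity, i.e. scaling with the product of the charge magnitudes.
def salt_bridge_dG(z_anion: int, z_cation: int, per_pair_kJ: float = 5.0) -> float:
    """Approximate association ΔG magnitude in kJ/mol."""
    return per_pair_kJ * abs(z_anion) * abs(z_cation)

print(salt_bridge_dG(-1, +1))  # 1:1 pair, e.g. acetate/ammonium: 5.0
print(salt_bridge_dG(-2, +1))  # doubly charged phosphate + ammonium: 10.0
```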
Dipole–dipole interactions are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion–ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3).
Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. These forces are discussed further in the section about the Keesom interaction, below.
Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding.
An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water: polar water molecules surround the ions, and the energy released in the process is known as the hydration enthalpy. The interaction is important in explaining the stability of various ions (like Cu2+) in water.
An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule.
The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical adsorption of gases, but also to a universal force of attraction between macroscopic bodies.
The first contribution to van der Waals forces is due to electrostatic interactions between charges (in molecular ions), dipoles (for polar molecules), quadrupoles (all molecules with symmetry lower than cubic), and permanent multipoles. It is termed the "Keesom interaction", named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent.
They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also, Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle-averaged interaction is given by the following equation:

V(r) = −m1²m2² / (3(4πε0εr)² kB T r⁶)

where m = dipole moment, ε0 = permittivity of free space, εr = dielectric constant of surrounding material, T = temperature, kB = Boltzmann constant, and r = distance between molecules.
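As a numeric sketch of the angle-averaged Keesom interaction described above (the dipole moments, 0.4 nm separation, and temperature are illustrative choices, not values from the text):

```python
import math

# Angle-averaged Keesom energy between two freely rotating permanent dipoles:
#   V(r) = -(m1^2 * m2^2) / (3 * (4*pi*eps0*eps_r)^2 * kB * T * r^6)
# Attractive, falls off as 1/r^6, and weakens as temperature rises.
EPS0 = 8.854e-12    # vacuum permittivity, F/m
KB = 1.381e-23      # Boltzmann constant, J/K
DEBYE = 3.336e-30   # 1 debye in C*m

def keesom_energy(m1, m2, r, T=298.0, eps_r=1.0):
    """Keesom energy in joules; m1, m2 in C*m, r in m."""
    return -(m1**2 * m2**2) / (3 * (4 * math.pi * EPS0 * eps_r)**2 * KB * T * r**6)

# Two water-like dipoles (1.85 D) separated by 0.4 nm in vacuum:
m = 1.85 * DEBYE
v = keesom_energy(m, m, 0.4e-9)
print(f"{v:.2e} J")  # small attractive (negative) energy on the 1e-21 J scale
```

Doubling the separation reduces the magnitude by a factor of 2⁶ = 64, reflecting the inverse-sixth-power dependence noted above.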
The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions.
The induced dipole forces appear from the induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and an induced dipole on another (induced by the former multipole). This interaction is called the "Debye force", named after Peter J. W. Debye.
One example of an induction interaction between a permanent dipole and an induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle-averaged interaction is given by the following equation:

V(r) = −m²α / ((4πε0εr)² r⁶)

where m = dipole moment and α = polarizability.
This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force.
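A companion sketch for the Debye induction term using the HCl–Ar example above. The HCl dipole moment, argon polarizability volume, and 0.4 nm separation are assumed, literature-style values, not figures from the text:

```python
import math

# Angle-averaged Debye (induction) energy between a permanent dipole m and a
# polarizable non-polar partner:
#   V(r) = -(m^2 * alpha) / ((4*pi*eps0*eps_r)^2 * r^6)
EPS0 = 8.854e-12    # vacuum permittivity, F/m
DEBYE = 3.336e-30   # 1 debye in C*m

def debye_energy(m, alpha, r, eps_r=1.0):
    """Induction energy in joules; m in C*m, alpha (SI polarizability) in C*m^2/V."""
    return -(m**2 * alpha) / ((4 * math.pi * EPS0 * eps_r)**2 * r**6)

# HCl (~1.08 D) inducing a dipole in Ar (polarizability volume ~1.64 A^3):
alpha_ar = 4 * math.pi * EPS0 * 1.64e-30   # convert volume (m^3) to SI alpha
v = debye_energy(1.08 * DEBYE, alpha_ar, 0.4e-9)
print(f"{v:.2e} J")  # weakly attractive; smaller than a comparable Keesom term
```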
The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom-atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range.
This comparison is approximate. The actual relative strengths will vary depending on the molecules involved. Ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance.
Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor).
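The short-range repulsion and long-range attraction described here are conventionally modeled by the Lennard-Jones potential; a minimal sketch in reduced units (σ = ε = 1):

```python
# Lennard-Jones 12-6 potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
# The r^-12 term dominates at short range (repulsion); the r^-6 term, which
# models the attractive van der Waals interactions, dominates at long range.
def lennard_jones(r, sigma=1.0, eps=1.0):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6)           # separation at the potential minimum
print(lennard_jones(0.9))      # positive: strong short-range repulsion
print(lennard_jones(r_min))    # -1.0: well depth of -eps
print(lennard_jones(3.0))      # small negative: weak long-range attraction
```

The minimum at r = 2^(1/6)σ marks the near-balance of attractive and repulsive forces characteristic of a condensed phase.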
In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature.
When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces.
Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals forces and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful quantum-chemical methods for visualizing such intermolecular interactions is the non-covalent interaction index, which is based on the electron density of the system; London dispersion forces play a major role in it.
|
https://en.wikipedia.org/wiki?curid=15417
|
List of Internet top-level domains
This list of Internet top-level domains (TLD) contains top-level domains, which are those domains in the DNS root zone of the Domain Name System of the Internet. A list of the top-level domains by the Internet Assigned Numbers Authority (IANA) is maintained at the Root Zone Database. IANA also oversees the approval process for new proposed top-level domains for ICANN. At one count, the root domain contained 1,511 top-level domains, a number of which have since been retired and are no longer functional. More recently, the IANA root database has included 1,584 TLDs, among them 55 that are not assigned (revoked), 8 that are retired and 11 test domains; these are thus not represented in ICANN's listing and do not appear in the root.zone file.
IANA distinguishes the following groups of top-level domains:
Seven generic top-level domains were created early in the development of the Internet, and predate the creation of ICANN in 1998.
As of 20 May 2017, there were 255 country-code top-level domains, purely in the Latin alphabet, using two-character codes. With the addition of internationalized domains, this number has since risen to 316.
Internationalised domain names have been proposed for Israel, Japan and Libya.
All of these TLDs are internationalized domain names (IDN) and support second-level IDNs.
ICANN/IANA has created some Special-Use domain names which are meant for special technical purposes. ICANN/IANA owns all of the Special-Use domain names.
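The grouping of TLDs described above can be sketched in code. This is a minimal illustration only: the `TLD_CATEGORIES` mapping below is a tiny hand-picked subset, not IANA's actual 1,500+ entry Root Zone Database, and the category labels are assumptions for the sketch:

```python
# Illustrative subset of IANA's TLD groups (generic, country-code,
# infrastructure, special-use). Not the full Root Zone Database.
TLD_CATEGORIES = {
    "com": "generic", "org": "generic", "net": "generic",
    "de": "country-code", "jp": "country-code", "uk": "country-code",
    "arpa": "infrastructure",
    "example": "special-use", "localhost": "special-use",
}

def classify(domain):
    """Return the IANA group of a domain's top-level domain, if known."""
    # The TLD is the rightmost label; strip any trailing root dot first.
    tld = domain.rstrip(".").rsplit(".", 1)[-1].lower()
    return TLD_CATEGORIES.get(tld, "unknown")

print(classify("wikipedia.org"))   # generic
print(classify("bbc.co.uk"))       # country-code
print(classify("foo.xyz"))         # unknown (not in this small subset)
```

A real tool would instead download and parse IANA's published TLD list rather than hard-code categories.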
|
https://en.wikipedia.org/wiki?curid=15422
|
Idealism
In philosophy, idealism is a diverse group of metaphysical philosophies which assert that "reality" is in some way indistinguishable or inseparable from human understanding and/or perception; that it is in some sense mentally constituted, or otherwise closely connected to ideas. According to German philosopher Immanuel Kant, a pioneer of modern idealist thought, idealism does “not concern the existence of things”, but asserts only that our “modes of representation” of them, above all "space" and "time", are not “determinations that belong to things in themselves” but essential features of our own minds. Kant called this position “transcendental” idealism (or sometimes “critical” idealism), since it holds that "reality" cannot be thought apart from the categories by which it is structured in human understanding. However, since Kant's view affirms the existence of "some" things independently of experience (namely, "things in themselves"), it is very different from the more traditional idealism of George Berkeley.
In contemporary scholarship, traditional idealist views are generally divided into two groups. Subjective idealism takes as its starting point that objects only exist to the extent that they are perceived by someone. Objective idealism posits the existence of an "objective" consciousness which exists before and, in some sense, independently of human consciousness, thereby bringing about the existence of objects independently of human minds.
Epistemologically, idealism manifests as a skepticism about the possibility of knowing any mind-independent thing. Such a thing as "knowledge", it would claim, cannot ever be had "mind-independently". In contrast to materialism, idealism asserts the "primacy" of consciousness as the origin and prerequisite of phenomena. Idealism holds consciousness or mind to be the "origin" of the material world – in the sense that it is a necessary condition for our positing of a material world – and it aims to explain the existing world according to these principles. As an ontological doctrine, idealism goes further, asserting that all entities are composed of mind or spirit. Ontological idealism thus rejects both physicalist and dualist views as failing to ascribe ontological priority to the mind.
The earliest extant arguments that the world of experience is grounded in the mental derive from India and Greece. The Hindu idealists in India and the Greek neoplatonists gave panentheistic arguments for an all-pervading consciousness as the ground or true nature of reality. In contrast, the Yogācāra school, which arose within Mahayana Buddhism in India in the 4th century CE, based its "mind-only" idealism to a greater extent on phenomenological analyses of personal experience. This turn toward the subjective anticipated empiricists such as George Berkeley, who revived idealism in 18th-century Europe by employing skeptical arguments against materialism. Beginning with Immanuel Kant, German idealists such as Georg Wilhelm Friedrich Hegel, Johann Gottlieb Fichte, Friedrich Wilhelm Joseph Schelling, and Arthur Schopenhauer dominated 19th-century philosophy. This tradition, which emphasized the mental or "ideal" character of all phenomena, gave birth to idealistic and subjectivist schools ranging from British idealism to phenomenalism to existentialism.
Phenomenology, an influential strain of philosophy since the beginning of the 20th century, also draws on the lessons of idealism. In his Being and Time, Martin Heidegger famously states: If the term idealism amounts to the recognition that being can never be explained through beings, but, on the contrary, always is the transcendental in its relation to any beings, then the only right possibility of philosophical problematics lies with idealism. In that case, Aristotle was no less an idealist than Kant. If idealism means a reduction of all beings to a subject or a consciousness, distinguished by staying "undetermined" in its own being, and ultimately is characterised negatively as "non-thingly", then this idealism is no less methodically naive than the most coarse-grained realism. Idealism as a philosophy came under heavy attack in the West at the turn of the 20th century. The most influential critics of both epistemological and ontological idealism were G. E. Moore and Bertrand Russell, but its critics also included the new realists. According to "Stanford Encyclopedia of Philosophy", the attacks by Moore and Russell were so influential that even more than 100 years later "any acknowledgment of idealistic tendencies is viewed in the English-speaking world with reservation". However, many aspects and paradigms of idealism did still have a large influence on subsequent philosophy.
"Idealism" is a term with several related meanings. It comes via Latin "idea" from the Ancient Greek "idea" (ἰδέα) from "idein" (ἰδεῖν), meaning "to see". The term entered the English language by 1743. It was first used in the abstract metaphysical sense "belief that reality is made up only of ideas" by Christian Wolff in 1747. The term re-entered the English language in this abstract sense by 1796.
In ordinary language, as when speaking of Woodrow Wilson's political idealism, it generally suggests the priority of ideals, principles, values, and goals over concrete realities. Idealists are understood to represent the world as it might or should be, unlike pragmatists, who focus on the world as it presently is. In the arts, similarly, idealism affirms imagination and attempts to realize a mental conception of beauty, a standard of perfection, juxtaposed to aesthetic naturalism and realism. The term "idealism" is also sometimes used in a sociological sense, which emphasizes how human ideas—especially beliefs and values—shape society.
Any philosophy that assigns crucial importance to the ideal or spiritual realm in its account of human existence may be termed "idealist". Metaphysical idealism is an ontological doctrine that holds that reality itself is incorporeal or experiential at its core. Beyond this, idealists disagree on which aspects of the mental are more basic. Platonic idealism affirms that abstractions are more basic to reality than the things we perceive, while subjective idealists and phenomenalists tend to privilege sensory experience over abstract reasoning. Epistemological idealism is the view that reality can only be known through ideas, that only psychological experience can be apprehended by the mind.
Subjective idealists like George Berkeley are anti-realists in terms of a mind-independent world, whereas transcendental idealists like Immanuel Kant are strong skeptics of such a world, affirming epistemological and not metaphysical idealism. Thus Kant defines "idealism" as "the assertion that we can never be certain whether all of our putative outer experience is not mere imagining". He claimed that, according to "idealism", "the reality of external objects does not admit of strict proof. On the contrary, however, the reality of the object of our internal sense (of myself and state) is clear immediately through consciousness". However, not all idealists restrict the real or the knowable to our immediate subjective experience. Objective idealists make claims about a transempirical world, but simply deny that this world is essentially divorced from or ontologically prior to the mental. Thus, Plato and Gottfried Leibniz affirm an objective and knowable reality transcending our subjective awareness—a rejection of epistemological idealism—but propose that this reality is grounded in ideal entities, a form of metaphysical idealism. Nor do all metaphysical idealists agree on the nature of the ideal; for Plato, the fundamental entities were non-mental abstract forms, while for Leibniz they were proto-mental and concrete monads.
As a rule, transcendental idealists like Kant affirm idealism's epistemic side without committing themselves to whether reality is "ultimately" mental; objective idealists like Plato affirm reality's metaphysical basis in the mental or abstract without restricting their epistemology to ordinary experience; and subjective idealists like Berkeley affirm both metaphysical and epistemological idealism.
Idealism as a form of metaphysical monism holds that consciousness, not matter, is the ground of all being. It is monist because it holds that there is only one type of thing in the universe and idealist because it holds that one thing to be consciousness.
Anaxagoras (480 BC) taught that "all things" were created by "Nous" ("Mind"). He held that Mind held the cosmos together and gave human beings a connection to the cosmos or a pathway to the divine.
Plato's theory of forms or "ideas" describes ideal forms (for example the platonic solids in geometry or abstracts like Goodness and Justice), as universals existing independently of any particular instance. Arne Grøn calls this doctrine "the classic example of a metaphysical idealism as a "transcendent" idealism", while Simone Klein calls Plato "the earliest representative of metaphysical objective idealism". Nevertheless, Plato holds that matter is real, though transitory and imperfect, and is perceived by our body and its senses and given existence by the eternal ideas that are perceived directly by our rational soul. Plato was therefore a metaphysical and epistemological dualist, an outlook that modern idealism has striven to avoid: Plato's thought cannot therefore be counted as idealist in the modern sense.
With the neoplatonist Plotinus, wrote Nathaniel Alfred Boll "there even appears, probably for the first time in Western philosophy, "idealism" that had long been current in the East even at that time, for it taught... that the soul has made the world by stepping from eternity into time...". Similarly, in regard to passages from the Enneads, "The only space or place of the world is the soul" and "Time must not be assumed to exist outside the soul". Ludwig Noiré wrote: "For the first time in Western philosophy we find idealism proper in Plotinus". However, Plotinus does not address whether we know external objects, unlike Schopenhauer and other modern philosophers.
Christian theologians have held idealist views, often based on neoplatonism, despite the influence of Aristotelian scholasticism from the 12th century onward. Later western theistic idealism such as that of Hermann Lotze offers a theory of the "world ground" in which all things find their unity: it has been widely accepted by Protestant theologians. Several modern religious movements, for example the organizations within the New Thought Movement and the Unity Church, may be said to have a particularly idealist orientation. The theology of Christian Science includes a form of idealism: it teaches that all that truly exists is God and God's ideas; that the world as it appears to the senses is a distortion of the underlying spiritual reality, a distortion that may be corrected (both conceptually and in terms of human experience) through a reorientation (spiritualization) of thought.
Wang Yangming, a Ming Chinese neo-Confucian philosopher, official, educationist, calligraphist and general, held that objects do not exist entirely apart from the mind because the mind shapes them. It is not the world that shapes the mind but the mind that gives reason to the world, so the mind alone is the source of all reason, having an inner light, an innate moral goodness and understanding of what is good.
There are currents of idealism throughout Indian philosophy, ancient and modern. Hindu idealism often takes the form of monism or non-dualism, espousing the view that a unitary consciousness is the essence or meaning of the phenomenal reality and plurality.
Buddhist idealism on the other hand is more epistemic and is not a metaphysical monism, which Buddhists consider eternalistic and hence not the middle way between extremes espoused by the Buddha.
The oldest reference to Idealism in Vedic texts is in Purusha Sukta of the Rig Veda. This sukta espouses panentheism by presenting cosmic being Purusha as both pervading all universe and yet being transcendent to it. Absolute idealism can be seen in Chāndogya Upaniṣad, where things of the objective world like the five elements and the subjective world such as will, hope, memory etc. are seen to be emanations from the Self.
Idealist notions have been propounded by the Vedanta schools of thought, which use the Vedas, especially the Upanishads as their key texts. Idealism was opposed by dualists Samkhya, the atomists Vaisheshika, the logicians Nyaya, the linguists Mimamsa and the materialists Cārvāka. There are various sub schools of Vedanta, like Advaita Vedanta (non-dual), Vishishtadvaita and Bhedabheda Vedanta (difference and non-difference).
The schools of Vedanta all attempt to explain the nature and relationship of Brahman (universal soul or Self) and Atman (individual self), which they see as the central topic of the Vedas. One of the earliest attempts at this was Bādarāyaņa's Brahma Sutras, which is canonical for all Vedanta sub-schools. Advaita Vedanta is a major sub school of Vedanta which holds a non-dual Idealistic metaphysics. According to Advaita thinkers like Adi Shankara (788–820) and his contemporary Maṇḍana Miśra, Brahman, the single unitary consciousness or absolute awareness, appears as the diversity of the world because of "maya" or illusion, and hence perception of plurality is "mithya", error. The world and all beings or souls in it have no separate existence from Brahman, universal consciousness, and the seemingly independent soul ("jiva") is identical to Brahman. These doctrines are represented in verses such as "brahma satyam jagan mithya; jīvo brahmaiva na aparah" (Brahman is alone True, and this world of plurality is an error; the individual self is not different from Brahman). Other forms of Vedanta like the Vishishtadvaita of Ramanuja and the Bhedabheda of Bhāskara are not as radical in their non-dualism, accepting that there is a certain difference between individual souls and Brahman. The Dvaita school of Vedanta, founded by Madhvacharya, maintains the opposing view that the world is real and eternal. It also argues that the real atman fully depends upon, and is a reflection of, the independent brahman.
The Tantric tradition of Kashmir Shaivism has also been categorized by scholars as a form of Idealism. The key thinker of this tradition is the Kashmirian Abhinavagupta (975–1025 CE).
Modern Vedic Idealism was defended by the influential Indian philosopher Sarvepalli Radhakrishnan in his 1932 "An Idealist View of Life" and other works, which espouse Advaita Vedanta. The essence of Hindu Idealism is captured by such modern writers as Sri Nisargadatta Maharaj, Sri Aurobindo, P. R. Sarkar, and Sohail Inayatullah.
Buddhist views which can be said to be similar to Idealism appear in Mahayana Buddhist texts such as the Samdhinirmocana sutra, Laṅkāvatāra Sūtra, Dashabhumika sutra, etc. These were later expanded upon by Indian Buddhist philosophers of the influential Yogacara school, like Vasubandhu, Asaṅga, Dharmakīrti, and Śāntarakṣita. Yogacara thought was also promoted in China, by Chinese philosophers and translators like Xuanzang.
There is a modern scholarly disagreement about whether Yogacara Buddhism can be said to be a form of idealism. As Saam Trivedi notes: "on one side of the debate, writers such as Jay Garfield, Jeffrey Hopkins, Paul Williams, and others maintain the idealism label, while on the other side, Stefan Anacker, Dan Lusthaus, Richard King, Thomas Kochumuttom, Alex Wayman, Janice Dean Willis, and others have argued that Yogacara is not idealist." The central point of issue is what Buddhist philosophers like Vasubandhu who used the term "vijñapti-matra" ("representation-only" or "cognition-only") and formulated arguments to refute external objects actually meant to say.
Vasubandhu's works include a refutation of external objects or externality itself, arguing that the true nature of reality is beyond subject-object distinctions. He views ordinary conscious experience as deluded in its perception of an external world separate from itself and instead argues that all there is, is "Vijñapti" (representation or conceptualization). Hence Vasubandhu begins his "Vimsatika" with the verse: "All this is consciousness-only, because of the appearance of non-existent objects, just as someone with an optical disorder may see non-existent nets of hair."
Likewise, the Buddhist philosopher Dharmakirti's view of the apparent existence of external objects is summed up by him in the Pramānaṿārttika (‘Commentary on Logic and Epistemology’): "Cognition experiences itself, and nothing else whatsoever. Even the particular objects of perception, are by nature just consciousness itself."
While some writers like Jay Garfield hold that Vasubandhu is a metaphysical idealist, others see him as closer to an epistemic idealist like Kant who holds that our knowledge of the world is simply knowledge of our own concepts and perceptions of a transcendental world. Sean Butler upholding that Yogacara is a form of idealism, albeit its own unique type, notes the similarity of Kant's categories and Yogacara's "Vāsanās", both of which are simply phenomenal tools with which the mind interprets the noumenal realm. Unlike Kant however who holds that the noumenon or thing-in-itself is unknowable to us, Vasubandhu holds that ultimate reality is knowable, but only through non-conceptual yogic perception of a highly trained meditative mind.
Writers like Dan Lusthaus who hold that Yogacara is not a metaphysical idealism point out, for example, that Yogācāra thinkers did not focus on consciousness to assert it as ontologically real, but simply to analyze how our experiences and thus our suffering is created. As Lusthaus notes: "no Indian Yogācāra text ever claims that the world is created by mind. What they do claim is that we mistake our projected interpretations of the world for the world itself, i.e. we take our own mental constructions to be the world." Lusthaus notes that there are similarities to Western epistemic idealists like Kant and Husserl, enough so that Yogacara can be seen as a form of epistemological idealism. However he also notes key differences like the concepts of karma and nirvana. Saam Trivedi meanwhile notes the similarities between epistemic idealism and Yogacara, but adds that Yogacara Buddhism is in a sense its own theory.
Similarly, Thomas Kochumuttom sees Yogacara as "an explanation of experience, rather than a system of ontology" and Stefan Anacker sees Vasubandhu's philosophy as a form of psychology and as a mainly therapeutic enterprise.
Subjective idealism (also known as immaterialism) describes a relationship between experience and the world in which objects are no more than collections or bundles of sense data in the perceiver. Proponents include Berkeley, Bishop of Cloyne, an Anglo-Irish philosopher who advanced a theory he called "immaterialism", later referred to as "subjective idealism". He contended that individuals can only know sensations and ideas of objects directly, not abstractions such as "matter", and that ideas also depend upon being perceived for their very existence: "esse est percipi" ("to be is to be perceived").
Arthur Collier published similar assertions, though there seems to have been no influence between the two contemporary writers. The only knowable reality is the represented image of an external object. Matter, as a cause of that image, is unthinkable and therefore nothing to us. An external world as absolute matter unrelated to an observer does not exist as far as we are concerned. The universe cannot exist as it appears if there is no perceiving mind. Collier was influenced by "An Essay Towards the Theory of the Ideal or Intelligible World" (1701) by the Cambridge Platonist John Norris.
Bertrand Russell's popular book "The Problems of Philosophy" highlights Berkeley's tautological premise for advancing idealism;
The Australian philosopher David Stove harshly criticized philosophical idealism, arguing that it rests on what he called "the worst argument in the world". Stove claims that Berkeley tried to derive a non-tautological conclusion from tautological reasoning. He argued that in Berkeley's case the fallacy is not obvious and this is because one premise is ambiguous between one meaning which is tautological and another which, Stove argues, is logically equivalent to the conclusion.
Alan Musgrave argues that conceptual idealists compound their mistakes with use/mention confusions;
and proliferation of hyphenated entities such as "thing-in-itself" (Immanuel Kant), "things-as-interacted-by-us" (Arthur Fine), "table-of-commonsense" and "table-of-physics" (Arthur Eddington) which are "warning signs" for conceptual idealism according to Musgrave because they allegedly do not exist but only highlight the numerous ways in which people come to know the world. This argument does not take into account the issues pertaining to hermeneutics, especially at the backdrop of analytic philosophy. Musgrave criticized Richard Rorty and postmodernist philosophy in general for confusion of use and mention.
A. A. Luce and John Foster are other subjectivists. Luce, in "Sense without Matter" (1954), attempts to bring Berkeley up to date by modernizing his vocabulary and putting the issues he faced in modern terms, and treats the Biblical account of matter and the psychology of perception and nature. Foster's "The Case for Idealism" argues that the physical world is the logical creation of natural, non-logical constraints on human sense-experience. Foster's latest defense of his views (phenomenalistic idealism) is in his book "A World for Us: The Case for Phenomenalistic Idealism".
Paul Brunton, a British philosopher, mystic, traveler, and guru, taught a type of idealism called "mentalism," similar to that of Bishop Berkeley, proposing a master world-image, projected or manifested by a world-mind, and an infinite number of individual minds participating. A tree does not cease to exist if nobody sees it, because the world-mind is projecting the idea of the tree to all minds.
John Searle, criticizing some versions of idealism, summarizes two important arguments for subjective idealism. The first is based on our perception of reality:
therefore;
Whilst agreeing with (2) Searle argues that (1) is false and points out that (3) does not follow from (1) and (2). The second argument runs as follows;
Searle contends that "Conclusion 2" does not follow from the premises.
Epistemological idealism is a subjectivist position in epistemology that holds that what one knows about an object exists only in one's mind. Proponents include Brand Blanshard.
Transcendental idealism, founded by Immanuel Kant in the eighteenth century, maintains that the mind shapes the world we perceive into the form of space-and-time.
The second edition (1787) of Kant's "Critique of Pure Reason" contained a "Refutation of Idealism" to distinguish his transcendental idealism from Descartes's Sceptical Idealism and Berkeley's anti-realist strain of Subjective Idealism. The section "Paralogisms of Pure Reason" is an implicit critique of Descartes' idealism. Kant says that it is not possible to infer the 'I' as an object (Descartes' "cogito ergo sum") purely from "the spontaneity of thought". Kant focused on ideas drawn from British philosophers such as Locke, Berkeley and Hume but distinguished his transcendental or critical idealism from previous varieties;
Kant distinguished between things as they appear to an observer and things in themselves, "that is, things considered without regard to whether and how they may be given to us". We cannot approach the "noumenon", the "thing-in-itself" ("Ding an sich"), without our own mental world. He added that the mind is not a blank slate, a "tabula rasa", but rather comes equipped with categories for organising our sense impressions.
In the first volume of his "Parerga and Paralipomena", Schopenhauer wrote his "Sketch of a History of the Doctrine of the Ideal and the Real". He defined the ideal as being mental pictures that constitute subjective knowledge. The ideal, for him, is what can be attributed to our own minds. The images in our head are what comprise the ideal. Schopenhauer emphasized that we are restricted to our own consciousness. The world that appears is only a representation or mental picture of objects. We directly and immediately know only representations. All objects that are external to the mind are known indirectly through the mediation of our mind. He offered a history of the concept of the "ideal" as "ideational" or "existing in the mind as an image".
Charles Bernard Renouvier was the first Frenchman after Nicolas Malebranche to formulate a complete idealistic system, and had a vast influence on the development of French thought. His system is based on Immanuel Kant's, as his chosen term "néo-criticisme" indicates; but it is a transformation rather than a continuation of Kantianism.
Friedrich Nietzsche argued that Kant commits an agnostic tautology and does not offer a satisfactory answer as to the "source" of a philosophical right to such-or-other metaphysical claims; he ridicules his pride in tackling "the most difficult thing that could ever be undertaken on behalf of metaphysics." The famous "thing-in-itself" was called a product of philosophical habit, which seeks to introduce a grammatical subject: because wherever there is cognition, there must be a "thing" that is cognized and allegedly it must be added to ontology as a being (whereas, to Nietzsche, only the world as ever changing appearances can be assumed). Yet he attacks the idealism of Schopenhauer and Descartes with an argument similar to Kant's critique of the latter "(see above)".
Objective idealism asserts that the reality of experiencing combines and transcends the realities of the object experienced and of the mind of the observer. Proponents include Thomas Hill Green, Josiah Royce, Benedetto Croce and Charles Sanders Peirce.
Schelling (1775–1854) claimed that Fichte's "I" needs the Not-I, because there is no subject without object, and vice versa. So there is no difference between the subjective and the objective, that is, the ideal and the real. This is Schelling's "absolute identity": the ideas or mental images in the mind are identical to the extended objects which are external to the mind.
Absolute idealism is G. W. F. Hegel's account of how existence is comprehensible as an all-inclusive whole. Hegel called his philosophy "absolute" idealism in contrast to the "subjective idealism" of Berkeley and the "transcendental idealism" of Kant and Fichte, which were not based on a critique of the finite and a dialectical philosophy of history as Hegel's idealism was. The exercise of reason and intellect enables the philosopher to know ultimate historical reality, the phenomenological constitution of self-determination, the dialectical development of self-awareness and personality in the realm of History.
In his "Science of Logic" (1812–1814) Hegel argues that finite qualities are not fully "real" because they depend on other finite qualities to determine them. Qualitative "infinity", on the other hand, would be more self-determining and hence more fully real. Similarly finite natural things are less "real"—because they are less self-determining—than spiritual things like morally responsible people, ethical communities and God. So any doctrine, such as materialism, that asserts that finite qualities or natural objects are fully real is mistaken.
Hegel certainly intends to preserve what he takes to be true of German idealism, in particular Kant's insistence that ethical reason can and does go beyond finite inclinations. For Hegel there must be some identity of thought and being for the "subject" (any human observer) to be able to know any observed "object" (any external entity, possibly even another human) at all. Under Hegel's concept of "subject-object identity," subject and object both have Spirit (Hegel's ersatz, redefined, nonsupernatural "God") as their "conceptual" (not metaphysical) inner reality—and in that sense are identical. But until Spirit's "self-realization" occurs and Spirit graduates from Spirit to "Absolute" Spirit status, subject (a human mind) mistakenly thinks every "object" it observes is something "alien," meaning something separate or apart from "subject." In Hegel's words, "The object is revealed to it [to "subject"] by [as] something alien, and it does not recognize itself." Self-realization occurs when Hegel (part of Spirit's nonsupernatural Mind, which is the collective mind of all humans) arrives on the scene and realizes that every "object" is "himself", because both subject and object are essentially Spirit. When self-realization occurs and Spirit becomes "Absolute" Spirit, the "finite" (man, human) becomes the "infinite" ("God," divine), replacing the imaginary or "picture-thinking" supernatural God of theism: man becomes God. Tucker puts it this way: "Hegelianism . . . is a religion of self-worship whose fundamental theme is given in Hegel's image of the man who aspires to be God himself, who demands 'something more, namely infinity.'" The picture Hegel presents is "a picture of a self-glorifying humanity striving compulsively, and at the end successfully, to rise to divinity."
Kierkegaard criticized Hegel's idealist philosophy in several of his works, particularly his claim to a comprehensive system that could explain the whole of reality. Where Hegel argues that an ultimate understanding of the logical structure of the world is an understanding of the logical structure of God's mind, Kierkegaard asserts that for God reality can be a system but it cannot be so for any human individual because both reality and humans are incomplete and all philosophical systems imply completeness. For Kierkegaard, a logical system is possible but an existential system is not, whereas Hegel holds that "What is rational is actual; and what is actual is rational". Hegel's absolute idealism blurs the distinction between existence and thought: our mortal nature places limits on our understanding of reality;
So-called systems have often been characterized and challenged in the assertion that they abrogate the distinction between good and evil, and destroy freedom. Perhaps one would express oneself quite as definitely, if one said that every such system fantastically dissipates the concept existence. ... Being an individual man is a thing that has been abolished, and every speculative philosopher confuses himself with humanity at large; whereby he becomes something infinitely great, and at the same time nothing at all.
A major concern of Hegel's "Phenomenology of Spirit" (1807) and of the philosophy of Spirit that he lays out in his "Encyclopedia of the Philosophical Sciences" (1817–1830) is the interrelation between individual humans, which he conceives in terms of "mutual recognition." However, what Climacus means by the aforementioned statement is that Hegel, in the "Philosophy of Right", believed the best solution was to surrender one's individuality to the customs of the State, identifying right and wrong in view of the prevailing bourgeois morality. Individual human will ought, at the State's highest level of development, to properly coincide with the will of the State. Climacus rejects Hegel's suppression of individuality by pointing out that it is impossible to create a valid set of rules or system in any society which can adequately describe existence for any one individual. Submitting one's will to the State denies personal freedom, choice, and responsibility.
Hegel, by contrast, does believe we can know the structure of God's mind, or ultimate reality. Hegel agrees with Kierkegaard that both reality and humans are incomplete, inasmuch as we are in time and reality develops through time. But the relation between time and eternity is outside time, and this is the "logical structure" that Hegel thinks we can know. Kierkegaard disputes this assertion, because it eliminates the clear distinction between ontology and epistemology. Existence and thought are not identical, and one cannot possibly think existence. Thought is always a form of abstraction, and thus not only is pure existence impossible to think, but all forms in existence are unthinkable; thought depends on language, which merely abstracts from experience, thus separating us from lived experience and the living essence of all beings. In addition, because we are finite beings, we cannot possibly know or understand anything that is universal or infinite, such as God, so we cannot know God exists, since that which transcends time simultaneously transcends human understanding.
Bradley saw reality as a monistic whole apprehended through "feeling", a state in which there is no distinction between the perception and the thing perceived. Like Berkeley, Bradley thought that nothing can be known to exist unless it is known by a mind.
Bradley was the apparent target of G. E. Moore's radical rejection of idealism. Moore claimed that Bradley did not understand the statement that something is real. We know for certain, through common sense and prephilosophical beliefs, that some things are real, whether they are objects of thought or not, according to Moore. The 1903 article "The Refutation of Idealism" is one of the first demonstrations of Moore's commitment to analysis. He examines each of the three terms in the Berkeleian aphorism "esse est percipi", "to be is to be perceived", finding that it must mean that the object and the subject are "necessarily" connected, so that "yellow" and "the sensation of yellow" are identical: "to be yellow" is "to be experienced as yellow". But it also seems there is a difference between "yellow" and "the sensation of yellow", and that ""esse" is held to be "percipi", solely because what is experienced is held to be identical with the experience of it". Though far from a complete refutation, this was the first strong statement by analytic philosophy against its idealist predecessors, or at any rate against the type of idealism represented by Berkeley.
Actual idealism is a form of idealism developed by Giovanni Gentile that grew into a "grounded" idealism contrasting with Kant and Hegel. The idea is a version of Occam's razor: the simplest explanation is always correct. Actual idealism is the idea that reality is the ongoing act of thinking, or in Italian "pensiero pensante". Any action done by humans is classified as human thought because the action was done due to predisposed thought. He further believes that thoughts are the only concepts that truly exist, since reality is defined through the act of thinking. This idea was derived from Gentile's paper, "The Theory of Mind As Pure Act".
Since thoughts are actions, any conjectured idea can be enacted. This idea not only affects the individual's life, but everyone around them, which in turn affects the state since the people are the state. Therefore, thoughts of each person are subsumed within the state. The state is a composition of many minds that come together to change the country for better or worse.
Gentile theorizes that thoughts can only be conjectured within the bounds of known reality; abstract thinking does not exist. Thoughts cannot be formed outside our known reality because we are the reality that halts ourselves from thinking externally. In accordance with "The Act of Thought of Pure Thought", our actions comprise our thoughts, our thoughts create perception, and perceptions define reality; thus we think within our created reality.
The present act of thought is reality, but the past is not reality; it is history. The reason is that the past can be rewritten through present knowledge and perspective of the event. The reality that is currently constructed can be completely changed through language (e.g. bias of omission, source, or tone). The unreliability of the recorded reality can skew the original concept and make the record of the past unreliable.
Actual idealism is regarded as a liberal and tolerant doctrine, since it acknowledges that every being pictures reality differently, each hatching their own ideas, even though reality remains a figment of thought.
Even though the core concept of the theory is famous for its simplicity, its application is regarded as extremely ambiguous. Over the years, philosophers have interpreted it in numerous different ways: Holmes took it as a metaphysics of the thinking act; Betti as a form of hermeneutics; Harris as a metaphysics of democracy; and Fogu as a modernist philosophy of history.
Giovanni Gentile was a key supporter of fascism, regarded by many as the "philosopher of fascism". His philosophy was the key to understanding fascism as it was believed by many who supported and loved it. They believed that if the a priori synthesis of subject and object is true, there is no difference between the individuals in society; they are all one, which means that they have equal rights, roles, and jobs. In a fascist state, submission is given to one leader because individuals act as one body. In Gentile's view, far more can be accomplished when individuals are under a corporate body than as a collection of autonomous individuals.
Pluralistic idealism such as that of Gottfried Leibniz takes the view that there are many individual minds that together underlie the existence of the observed world and make possible the existence of the physical universe. Unlike absolute idealism, pluralistic idealism does not assume the existence of a single ultimate mental reality or "Absolute". Leibniz's form of idealism, known as panpsychism, views "monads" as the true atoms of the universe and as entities having perception. The monads are "substantial forms of being": elemental, individual, subject to their own laws, non-interacting, each reflecting the entire universe. Monads are centers of force, which is substance, while space, matter and motion are phenomenal, and their form and existence are dependent on the simple and immaterial monads. There is a pre-established harmony by God, the central monad, between the world in the minds of the monads and the external world of objects. Leibniz's cosmology embraced traditional Christian theism. The English psychologist and philosopher James Ward, inspired by Leibniz, had also defended a form of pluralistic idealism. According to Ward, the universe is composed of "psychic monads" of different levels, interacting for mutual self-betterment.
Personalism is the view that the minds that underlie reality are the minds of persons. Borden Parker Bowne, a philosopher at Boston University, a founder and popularizer of personal idealism, presented it as a substantive reality of persons, the only reality, as known directly in self-consciousness. Reality is a society of interacting persons dependent on the Supreme Person of God. Other proponents include George Holmes Howison and J. M. E. McTaggart.
Howison's personal idealism was also called "California Personalism" by others to distinguish it from the "Boston Personalism" which was of Bowne. Howison maintained that both impersonal, monistic idealism and materialism run contrary to the experience of moral freedom. To deny freedom to pursue truth, beauty, and "benignant love" is to undermine every profound human venture, including science, morality, and philosophy. Personalistic idealists Borden Parker Bowne and Edgar S. Brightman and realistic (in some senses of the term, though he remained influenced by neoplatonism) personal theist Saint Thomas Aquinas address a core issue, namely that of dependence upon an infinite personal God.
Howison, in his book "The Limits of Evolution and Other Essays Illustrating the Metaphysical Theory of Personal Idealism", created a democratic notion of personal idealism that extended all the way to God, who was no longer the ultimate monarch but the ultimate democrat in eternal relation to other eternal persons. J. M. E. McTaggart's idealist atheism and Thomas Davidson's apeirotheism resemble Howison's personal idealism.
J. M. E. McTaggart argued that minds alone exist and only relate to each other through love. Space, time and material objects are unreal. In "The Unreality of Time" he argued that time is an illusion because it is impossible to produce a coherent account of a sequence of events. "The Nature of Existence" (1927) contained his arguments that space, time, and matter cannot possibly be real. In his "Studies in Hegelian Cosmology" (Cambridge, 1901, p. 196) he declared that metaphysics is not relevant to social and political action. McTaggart "thought that Hegel was wrong in supposing that metaphysics could show that the state is more than a means to the good of the individuals who compose it". For McTaggart, "philosophy can give us very little, if any, guidance in action... Why should a Hegelian citizen be surprised that his belief as to the organic nature of the Absolute does not help him in deciding how to vote? Would a Hegelian engineer be reasonable in expecting that his belief that all matter is spirit should help him in planning a bridge?"
Thomas Davidson taught a philosophy called "apeirotheism", a "form of pluralistic idealism...coupled with a stern ethical rigorism" which he defined as "a theory of Gods infinite in number." The theory was indebted to Aristotle's pluralism and his concepts of Soul, the rational, living aspect of a living substance which cannot exist apart from the body because it is not a substance but an essence, and "nous", rational thought, reflection and understanding. Although a perennial source of controversy, Aristotle arguably views the latter as both eternal and immaterial in nature, as exemplified in his theology of unmoved movers. Identifying Aristotle's God with rational thought, Davidson argued, contrary to Aristotle, that just as the soul cannot exist apart from the body, God cannot exist apart from the world.
Idealist notions took a strong hold among physicists of the early 20th century confronted with the paradoxes of quantum physics and the theory of relativity. In "The Grammar of Science", Preface to the 2nd Edition, 1900, Karl Pearson wrote, "There are many signs that a sound idealism is surely replacing, as a basis for natural philosophy, the crude materialism of the older physicists." This book influenced Einstein's regard for the importance of the observer in scientific measurements. In § 5 of that book, Pearson asserted that "...science is in reality a classification and analysis of the contents of the mind..." Also, "...the field of science is much more consciousness than an external world."
Arthur Eddington, a British astrophysicist of the early 20th century, wrote in his book "The Nature of the Physical World" that "The stuff of the world is mind-stuff": The mind-stuff of the world is, of course, something more general than our individual conscious minds... The mind-stuff is not spread in space and time; these are part of the cyclic scheme ultimately derived out of it... It is necessary to keep reminding ourselves that all knowledge of our environment from which the world of physics is constructed, has entered in the form of messages transmitted along the nerves to the seat of consciousness... Consciousness is not sharply defined, but fades into subconsciousness; and beyond that we must postulate something indefinite but yet continuous with our mental nature... It is difficult for the matter-of-fact physicist to accept the view that the substratum of everything is of mental character. But no one can deny that mind is the first and most direct thing in our experience, and all else is remote inference."
Ian Barbour, in his book "Issues in Science and Religion" (1966), p. 133, cites Arthur Eddington's "The Nature of the Physical World" (1928) for a text that argues that the Heisenberg uncertainty principle provides a scientific basis for "the defense of the idea of human freedom", and his "Science and the Unseen World" (1929) for support of philosophical idealism, "the thesis that reality is basically mental".
Sir James Jeans wrote: "The stream of knowledge is heading towards a non-mechanical reality; the Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter... we ought rather hail it as the creator and governor of the realm of matter."
Jeans, in an interview published in "The Observer" (London), when asked the question: "Do you believe that life on this planet is the result of some sort of accident, or do you believe that it is a part of some great scheme?" replied: I incline to the idealistic theory that consciousness is fundamental, and that the material universe is derivative from consciousness, not consciousness from the material universe... In general the universe seems to me to be nearer to a great thought than to a great machine. It may well be, it seems to me, that each individual consciousness ought to be compared to a brain-cell in a universal mind.
Addressing the British Association in 1934, Jeans said: What remains is in any case very different from the full-blooded matter and the forbidding materialism of the Victorian scientist. His objective and material universe is proved to consist of little more than constructs of our own minds. To this extent, then, modern physics has moved in the direction of philosophic idealism. Mind and matter, if not proved to be of similar nature, are at least found to be ingredients of one single system. There is no longer room for the kind of dualism which has haunted philosophy since the days of Descartes.
In "The Universe Around Us", Jeans writes: Finite picture whose dimensions are a certain amount of space and a certain amount of time; the protons and electrons are the streaks of paint which define the picture against its space-time background. Traveling as far back in time as we can, brings us not to the creation of the picture, but to its edge; the creation of the picture lies as much outside the picture as the artist is outside his canvas. On this view, discussing the creation of the universe in terms of time and space is like trying to discover the artist and the action of painting, by going to the edge of the canvas. This brings us very near to those philosophical systems which regard the universe as a thought in the mind of its Creator, thereby reducing all discussion of material creation to futility.
The chemist Ernest Lester Smith wrote a book "Intelligence Came First" (1975) in which he claimed that consciousness is a fact of nature and that the cosmos is grounded in and pervaded by mind and intelligence.
Bernard d'Espagnat, a French theoretical physicist best known for his work on the nature of reality, wrote a paper titled "The Quantum Theory and Reality". According to the paper: The doctrine that the world is made up of objects whose existence is independent of human consciousness turns out to be in conflict with quantum mechanics and with facts established by experiment.
In a "Guardian" article entitled "Quantum Weirdness: What We Call 'Reality' is Just a State of Mind", d'Espagnat wrote: What quantum mechanics tells us, I believe, is surprising to say the least. It tells us that the basic components of objects – the particles, electrons, quarks etc. – cannot be thought of as 'self-existent'.
He further writes that his research in quantum physics has led him to conclude that an "ultimate reality" exists, which is not embedded in space or time.
Inheritance
Inheritance is the practice of passing on private property, titles, debts, rights, and obligations upon the death of an individual. The rules of inheritance differ among societies and have changed over time.
In law, an "heir" is a person who is entitled to receive a share of the deceased's (the person who died) property, subject to the rules of inheritance in the jurisdiction of which the deceased was a citizen or where the deceased (decedent) died or owned property at the time of death.
The inheritance may be either under the terms of a will or by intestate laws if the deceased had no will. However, the will must comply with the laws of the jurisdiction at the time it was created or it will be declared invalid (for example, some states do not recognise holographic wills as valid, or only in specific circumstances) and the intestate laws then apply.
A person does not become an heir before the death of the deceased, since the exact identity of the persons entitled to inherit is determined only then. Members of ruling noble or royal houses who are expected to become heirs are called heirs apparent if first in line and incapable of being displaced from inheriting by another claim; otherwise, they are heirs presumptive. There is a further concept of joint inheritance, pending renunciation by all but one, which is called coparceny.
In modern law, the terms "inheritance" and "heir" refer exclusively to succession to property by descent from a deceased dying intestate. Takers in property succeeded to under a will are termed generally "beneficiaries", and the gifts specifically "devises" for real property, "bequests" for personal property (except money), or "legacies" for money.
Except in some jurisdictions where a person cannot be legally disinherited (such as the United States state of Louisiana, which allows disinheritance only under specifically enumerated circumstances), a person who would be an heir under intestate laws may be disinherited completely under the terms of a will (an example is that of the will of comedian Jerry Lewis; his will specifically disinherited his six children by his first wife, and their descendants, leaving his entire estate to his second wife).
Detailed anthropological and sociological studies have been made about customs of patrimonial inheritance, where only male children can inherit. Some cultures also employ matrilineal succession, where property can only pass along the female line, most commonly going to the sister's sons of the decedent; but also, in some societies, from the mother to her daughters. Some ancient societies and most modern states employ egalitarian inheritance, without discrimination based on gender and/or birth order.
In the Hebrew Bible, inheritance is patrimonial. The father, that is, the owner of the land, bequeaths only to his male descendants, so the Promised Land passes from one Jewish father to his sons.
If there were no living sons and no descendants of any previously living sons, daughters inherit. In Numbers 27, the daughters of Zelophehad (Mahlah, Noa, Hoglah, Milcah, and Tirzah) of the tribe of Manasseh come to Moses and ask for their father's inheritance, as they have no brothers. The order of inheritance is set out in Numbers 27:8–11: a man's sons inherit first, daughters if no sons, brothers if he has no children, and so on.
Later, in Numbers 36, some of the heads of the families of the tribe of Manasseh come to Moses and point out that, if a daughter inherits and then marries a man not from her paternal tribe, her land will pass from her birth-tribe's inheritance into her marriage-tribe's. So a further rule is laid down: if a daughter inherits land, she must marry someone within her father's tribe. (The daughters of Zelophehad marry the sons of their father's brothers. There is no indication that this was not their choice.)
The tractate Baba Bathra, written during late Antiquity in Babylon, deals extensively with issues of property ownership and inheritance according to Jewish Law. Other works of Rabbinical Law, such as the Hilkhot naḥalot : mi-sefer Mishneh Torah leha-Rambam, and the Sefer ha-yerushot: ʻim yeter ha-mikhtavim be-divre ha-halakhah be-ʻAravit uve-ʻIvrit uve-Aramit also deal with inheritance issues. The first, often abbreviated to Mishneh Torah, was written by Maimonides and was very important in Jewish tradition.
All these sources agree that the firstborn son is entitled to a double portion of his father's estate (Deuteronomy 21:17). This means that, for example, if a father left five sons, the firstborn receives a third of the estate and each of the other four receives a sixth. If he left nine sons, the firstborn receives a fifth and each of the other eight receives a tenth. If the eldest surviving son is not the firstborn son, he is not entitled to the double portion.
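The double-portion arithmetic above can be checked with a short calculation. This is an illustrative sketch, not a statement of rabbinic procedure; the function name `portions` and its interface are hypothetical. The firstborn counts as two shares and every other son as one, so an estate with n sons is divided into n + 1 shares:

```python
from fractions import Fraction

def portions(num_sons):
    """Divide an estate among sons under the double-portion rule:
    the firstborn counts as two shares, every other son as one."""
    total_shares = num_sons + 1  # one extra share for the firstborn
    firstborn = Fraction(2, total_shares)
    other = Fraction(1, total_shares)
    return [firstborn] + [other] * (num_sons - 1)

# Five sons: the firstborn receives 1/3, each of the other four 1/6.
print(portions(5))
# Nine sons: the firstborn receives 1/5, each of the other eight 1/10.
print(portions(9))
```

Using exact `Fraction` arithmetic makes it easy to confirm that the shares in each case sum to the whole estate, matching the worked examples in the text.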
Philo of Alexandria and Josephus also comment on the Jewish laws of inheritance, praising them above other law codes of their time. They also agreed that the firstborn son must receive a double portion of his father's estate.
The New Testament does not specifically mention anything about inheritance rights: the only story even mentioning inheritance is that of the Prodigal Son, but that involved the father voluntarily passing his estate to his two sons prior to his death; the younger son receiving his inheritance (1/3; the older son would have received 2/3 under then existing Jewish law) and squandering it.
The topic is generally not discussed among doctrinal statements of various denominations or sects, leaving that to be a matter of secular concern.
The Quran introduced a number of different rights and restrictions on matters of inheritance, including general improvements to the treatment of women and family life compared to the pre-Islamic societies that existed in the Arabian Peninsula at the time. Furthermore, the Quran introduced additional heirs that were not entitled to inheritance in pre-Islamic times, mentioning nine relatives specifically of which six were female and three were male. However, the inheritance rights of women remained inferior to those of men because in Islam someone always has a responsibility of looking after a woman's expenses. According to the Quran, for example, a son is entitled to twice as much inheritance as a daughter. The Quran also presented efforts to fix the laws of inheritance, and thus forming a complete legal system. This development was in contrast to pre-Islamic societies where rules of inheritance varied considerably. In addition to the above changes, the Quran imposed restrictions on testamentary powers of a Muslim in disposing his or her property.
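The 2:1 son-to-daughter ratio mentioned above can be illustrated with a minimal calculation. This is a deliberately simplified sketch covering only a portion split among children; actual Islamic inheritance law also assigns fixed shares to spouses, parents, and other heirs, and the function name `children_shares` is hypothetical:

```python
from fractions import Fraction

def children_shares(sons, daughters):
    """Split a portion among children under the 2:1 rule:
    each son receives twice the share of each daughter."""
    units = 2 * sons + daughters  # sons weighted double
    return Fraction(2, units), Fraction(1, units)  # (per-son, per-daughter)

# One son and one daughter: the son takes 2/3, the daughter 1/3.
print(children_shares(1, 1))
# Two sons and three daughters: each son 2/7, each daughter 1/7.
print(children_shares(2, 3))
```

The weighted-units approach generalizes to any mix of sons and daughters while keeping the shares exact.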
The Quran contains only three verses that give specific details of inheritance and shares, in addition to few other verses dealing with testamentary. But this information was used as a starting point by Muslim jurists who expounded the laws of inheritance even further using Hadith, as well as methods of juristic reasoning like Qiyas. Nowadays, inheritance is considered an integral part of Sharia law and its application for Muslims is mandatory, though many peoples (see Historical inheritance systems), despite being Muslim, have other inheritance customs.
The distribution of the inherited wealth has varied greatly among different cultures and legal traditions. In nations using civil law, for example, the right of children to inherit wealth from parents in pre-defined ratios is enshrined in law, as far back as the Code of Hammurabi (ca. 1750 BC). In the US State of Louisiana, the only US state where the legal system is derived from the Napoleonic Code, this system is known as "forced heirship" which prohibits disinheritance of adult children except for a few narrowly-defined reasons that a parent is obligated to prove. Other legal traditions, particularly in nations using common law, allow inheritances to be divided however one wishes, or to disinherit any child for any reason.
In cases of unequal inheritance, the majority might receive little while only a small number inherit a larger amount. A common pattern is for a son to take over a thriving, even multimillion-dollar, business while the daughter receives the balance of the estate, which is often worth far less than the business given to the son. This is especially seen in old-world cultures, but continues in many families to this day.
Arguments for eliminating forced heirship include the right to property and the merit of individual allocation of capital over government wealth confiscation and redistribution, but this does not resolve what some describe as the problem of unequal inheritance. In terms of inheritance inequality, some economists and sociologists focus on the intergenerational transmission of income or wealth, which is said to have a direct impact on one's mobility (or immobility) and class position in society. Nations differ on the political structure and policy options that govern the transfer of wealth.
According to American federal government statistics compiled by Mark Zandi, in 1985 the average US inheritance was $39,000. In subsequent years, the overall amount of total annual inheritance more than doubled, reaching nearly $200 billion. By 2050, an estimated $25 trillion in inheritance will be transmitted across generations.
Some researchers have attributed this rise to the baby boomer generation. Historically, the baby boomers were the largest generation of children born after World War II. For this reason, Thomas Shapiro suggests that this generation "is in the midst of benefiting from the greatest inheritance of wealth in history". Inherited wealth may help explain why many Americans who have become rich may have had a "substantial head start". In September 2012, according to the Institute for Policy Studies, "over 60 percent" of the Forbes richest 400 Americans "grew up in substantial privilege", and often (but not always) received substantial inheritances.
Other research has shown that many inheritances, large or small, are rapidly squandered. Similarly, analysis shows that over two-thirds of high-wealth families lose their wealth within two generations, and almost 80% of high-wealth parents "feel the next generation is not financially responsible enough to handle inheritance".
It has been argued that inheritance plays a significant effect on social stratification. Inheritance is an integral component of family, economic, and legal institutions, and a basic mechanism of class stratification. It also affects the distribution of wealth at the societal level. The total cumulative effect of inheritance on stratification outcomes takes three forms, according to scholars who have examined the subject.
The first form of inheritance is the inheritance of cultural capital (i.e. linguistic styles, higher-status social circles, and aesthetic preferences). The second form of inheritance is through familial interventions in the form of "inter vivos" transfers (i.e. gifts between the living), especially at crucial junctures in the life course. Examples include a child's milestone stages, such as going to college, getting married, getting a job, and purchasing a home. The third form of inheritance is the transfer of bulk estates at the time of death of the testators, thus resulting in significant economic advantage accruing to children during their adult years. The origin of the stability of inequalities is material (personal possessions one is able to obtain) and is also cultural, rooted in varying child-rearing practices that are geared to socialization according to social class and economic position. Child-rearing practices among those who inherit wealth may center around favoring some groups at the expense of others at the bottom of the social hierarchy.
It is further argued that the degree to which economic status and inheritance is transmitted across generations determines one's life chances in society. Although many have linked one's social origins and educational attainment to life chances and opportunities, education cannot serve as the most influential predictor of economic mobility. In fact, children of well-off parents generally receive better schooling and benefit from material, cultural, and genetic inheritances. Likewise, schooling attainment is often persistent across generations and families with higher amounts of inheritance are able to acquire and transmit higher amounts of human capital. Lower amounts of human capital and inheritance can perpetuate inequality in the housing market and higher education. Research reveals that inheritance plays an important role in the accumulation of housing wealth. Those who receive an inheritance are more likely to own a home than those who do not regardless of the size of the inheritance.
Often, racial or religious minorities and individuals from socially disadvantaged backgrounds receive less inheritance and wealth. As a result, such minorities may be excluded from inheritance privilege and are more likely to rent homes or live in poorer neighborhoods, as well as to achieve lower educational attainment, compared with whites in America. Individuals with a substantial amount of wealth and inheritance often intermarry with others of the same social class to protect their wealth and ensure the continuous transmission of inheritance across generations, thus perpetuating a cycle of privilege.
Nations with the highest income and wealth inequalities often have the highest rates of homicide and disease (such as obesity, diabetes, and hypertension), which results in high mortality rates. A "New York Times" article reveals that the U.S. is the world's wealthiest nation, but "ranks twenty-ninth in life expectancy, right behind Jordan and Bosnia" and "has the second highest mortality rate of the comparable OECD countries". This has been attributed in large part to the significant gap of inheritance inequality in the country, although there are clearly other factors, such as the affordability of healthcare.
When social and economic inequalities centered on inheritance are perpetuated by major social institutions such as family, education, religion, etc., these differing life opportunities are argued to be transmitted from each generation. As a result, this inequality is believed to become part of the overall social structure.
Dynastic wealth is monetary inheritance that is passed on to generations that didn't earn it. Dynastic wealth is linked to the term plutocracy. Much has been written about the rise and influence of dynastic wealth, including the bestselling book "Capital in the Twenty-First Century" by the French economist Thomas Piketty. Bill Gates uses the term in his article "Why Inequality Matters".
Many states have inheritance taxes or death duties, under which a portion of any estate goes to the government.
Ignatius of Antioch
Ignatius of Antioch (Greek: Ἰγνάτιος Ἀντιοχείας, "Ignátios Antiokheías"; died c. 108/140 AD), also known as Ignatius Theophorus ("Ignátios ho Theophóros", lit. "the God-bearing") or Ignatius Nurono (lit. "the fire-bearer"), was an early Christian writer and bishop of Antioch. While en route to Rome, where he met his martyrdom, Ignatius wrote a series of letters. This correspondence now forms a central part of a later collection of works attributed to the Apostolic Fathers. He is considered to be one of the three most important of these, together with Pope Clement I and Polycarp. His letters also serve as an example of early Christian theology. Important topics they address include ecclesiology, the sacraments, and the role of bishops.
Nothing is known of Ignatius' life apart from what may be inferred internally from his letters, except from later (sometimes spurious) traditions. It is said Ignatius converted to Christianity at a young age. Tradition identifies Ignatius, along with his friend Polycarp, as disciples of John the Apostle. Later in his life, Ignatius was chosen to serve as Bishop of Antioch; the fourth-century Church historian Eusebius writes that Ignatius succeeded Evodius. Theodoret of Cyrrhus claimed that St. Peter himself left directions that Ignatius be appointed to the episcopal see of Antioch. Ignatius called himself "Theophorus" (God Bearer). A tradition arose that he was one of the children whom Jesus Christ took in his arms and blessed, although if he was born around 50 AD, as is supposed, Jesus had been crucified approximately 20 years before his birth.
Ignatius' feast day was kept in his own Antioch on 17 October, the day on which he is now celebrated in the Catholic Church and generally in western Christianity, although from the 12th century until 1969 it was put at 1 February in the General Roman Calendar.
In the Eastern Orthodox Church it is observed on 20 December. The Synaxarium of the Coptic Orthodox Church of Alexandria places it on the 24th of the Coptic Month of Koiak (which is also the 24th day of the fourth month of Tahisas in the Synaxarium of The Ethiopian Orthodox Tewahedo Church), corresponding in three years out of every four to 20 December in the Julian Calendar, which currently falls on 2 January of the Gregorian Calendar.
Instead of being executed in his home town of Antioch, Ignatius was escorted to Rome by a company of ten Roman soldiers.
Scholars consider Ignatius' transport to Rome unusual, since those persecuted as Christians would be expected to be punished locally. Stevan Davies has pointed out that "no other examples exist from the Flavian age of any prisoners except citizens or prisoners of war being brought to Rome for execution."
If Ignatius were a Roman citizen, he could have appealed to the emperor, but then he would usually have been beheaded rather than tortured. Furthermore, the epistles of Ignatius state that he was put in chains during the journey to Rome, but it was illegal under Roman law for a citizen to be put in bonds during an appeal to the emperor.
Allen Brent argues that Ignatius was transferred to Rome at the request of the emperor in order to provide entertainment to the masses by being killed in the Colosseum. Brent insists, contrary to some, that "it was normal practice to transport condemned criminals from the provinces in order to offer spectator sport in the Colosseum at Rome."
Stevan Davies rejects the idea that Ignatius was transported to Rome for the games at the Colosseum. He reasons that "if Ignatius was in some way a donation by the Imperial Governor of Syria to the games at Rome, a single prisoner seems a rather miserly gift." Instead, Davies proposes that Ignatius may have been indicted by a legate, or representative, of the governor of Syria while the governor was away temporarily, and sent to Rome for trial and execution. Under Roman law, only the governor of a province or the emperor himself could impose capital punishment, so the legate would have faced the choice of imprisoning Ignatius in Antioch or sending him to Rome. Davies postulates that the legate may have decided to send Ignatius to Rome so as to minimize any further dissension among the Antiochene Christians.
Christine Trevett has called Davies' suggestion "entirely hypothetical" and concludes that no fully satisfactory solution to the problem can be found, writing, "I tend to take the bishop at his word when he says he is a condemned man. But the question remains, why is he going to Rome? The truth is that we do not know."
During the journey to Rome, Ignatius and his entourage of soldiers made a number of lengthy stops in Asia Minor, deviating from the most direct land route from Antioch to Rome. Scholars generally agree on the following reconstruction of Ignatius' route of travel:
During the journey, the soldiers seem to have allowed Ignatius to meet with entire congregations of Christians while in chains, at least while he was in Philadelphia (cf. Ign. Phil. 7), and numerous Christian visitors and messengers were allowed to meet with him on a one-on-one basis. These messengers allowed Ignatius to send six letters to nearby churches, and one to Polycarp, the bishop of Smyrna.
These aspects of Ignatius' martyrdom are also regarded by scholars as unusual. It is generally expected that a prisoner would be transported on the most direct, cost-effective route to their destination. Since travel by land in the Roman Empire was between five and fifty-two times more expensive than travel by sea, and Antioch was a major port city, the most efficient route would likely have been entirely by sea. Stevan Davies argues that Ignatius' circuitous route to Rome can only be explained by positing that he was not the main purpose of the soldiers' trip, and that the various stops in Asia Minor were for other state business. He suggests that such a scenario would also explain the relative freedom that Ignatius was given to meet with other Christians during the journey.
Due to the sparse and fragmentary nature of the documentation of Ignatius' life and martyrdom, the date of his death is subject to a significant amount of uncertainty. Tradition places the martyrdom of Ignatius in the reign of Trajan, who was emperor of Rome from 98 to 117 AD. But the earliest source for this Trajanic date is the 4th century church historian Eusebius of Caesarea, who is regarded by some modern scholars as an unreliable source for chronological information regarding the early church. Eusebius had an ideological interest in dating church leaders as early as possible, and ensuring that there were no gaps in succession between the original apostles of Jesus and the leaders of the church in his day. Unfortunately, the epistles attributed to Ignatius provide no clear indication as to their date.
While many scholars accept the traditional dating of Ignatius' martyrdom under Trajan, others have argued for a somewhat later date. Richard Pervo dated Ignatius' death to 135–140 AD. British classicist Timothy Barnes has argued for a date in the 140s AD, on the grounds that Ignatius seems to have quoted, in one of his epistles, a work of the Gnostic Ptolemy, who only became active in the 130s.
Ignatius himself wrote that he would be thrown to the beasts, and in the fourth century Eusebius reports the tradition that this came to pass, which is then repeated by Jerome, who is the first to explicitly mention "lions." John Chrysostom is the first to allude to the Colosseum as the place of Ignatius' martyrdom. Contemporary scholars are uncertain that any of these authors had sources other than Ignatius' own writings.
According to a medieval Christian text titled "Martyrium Ignatii", Ignatius' remains were carried back to Antioch by his companions after his martyrdom. The sixth-century writings of Evagrius Scholasticus state that the reputed remains of Ignatius were moved by the Emperor Theodosius II to the Tychaeum, or Temple of Tyche, which had been converted into a church dedicated to Ignatius. In 637 the relics were transferred to the Basilica di San Clemente in Rome.
There is a purported eye-witness account of his martyrdom, named the "Martyrium Ignatii", of medieval date. It is presented as being an eye-witness account for the church of Antioch, attributed to Ignatius' companions, Philo of Cilicia, deacon at Tarsus, and Rheus Agathopus, a Syrian.
Although James Ussher regarded it as genuine, the authenticity of the account is seriously questioned. If there is any genuine nucleus of the "Martyrium", it has been so greatly expanded with interpolations that no part of it is beyond question. Its most reliable manuscript is the 10th-century "Codex Colbertinus" (Paris), in which the "Martyrium" closes the collection. The "Martyrium" presents the confrontation of the bishop Ignatius with Trajan at Antioch, a familiar trope of "Acta" of the martyrs, and many details of the long, partly overland voyage to Rome. The Synaxarium of the Coptic Orthodox Church of Alexandria says that he was thrown to the wild beasts that devoured him and rent him to pieces.
The following seven epistles preserved under the name of Ignatius are generally considered authentic, since they were mentioned by the historian Eusebius in the first half of the fourth century.
Seven original epistles:
The text of these epistles is known in three different recensions, or editions: the Short Recension, found in a Syriac manuscript; the Middle Recension, found in Greek and Latin manuscripts; and the Long Recension, found in Latin manuscripts.
For some time, it was believed that the Long Recension was the only extant version of the Ignatian epistles, but around 1628 a Latin translation of the Middle Recension was discovered by Archbishop James Ussher, who published it in 1646. For around a quarter of a century after this, it was debated which recension represented the original text of the epistles. But ever since John Pearson's strong defense of the authenticity of the Middle Recension in the late 17th century, there has been a scholarly consensus that the Middle Recension is the original version of the text. The Long Recension is the product of a fourth-century Arian Christian, who interpolated the Middle Recension epistles in order to posthumously enlist Ignatius as an unwitting witness in theological disputes of that age. This individual also forged the six spurious epistles attributed to Ignatius (see below).
Manuscripts representing the Short Recension of the Ignatian epistles were discovered and published by William Cureton in the mid-19th century. For a brief period, there was a scholarly debate on the question of whether the Short Recension was earlier and more original than the Middle Recension. But by the end of the 19th century, Theodor Zahn and J. B. Lightfoot had established a scholarly consensus that the Short Recension is merely a summary of the text of the Middle Recension, and was therefore composed later.
Ever since the Protestant Reformation in the 16th century, the authenticity of all the Ignatian epistles has come under intense scrutiny. John Calvin called the epistles "rubbish published under Ignatius' name". Some Protestants have been inclined to deny the authenticity of all the epistles attributed to Ignatius because they seem to attest to the existence of a monarchical episcopate in the second century.
In 1886, Presbyterian minister and church historian William Dool Killen published an essay extensively arguing that none of the epistles attributed to Ignatius is authentic. Instead, he argued that Callixtus, bishop of Rome, forged the letters around AD 220 to garner support for a monarchical episcopate, modeling the renowned Saint Ignatius after his own life to give precedent for his own authority. Killen contrasted this episcopal polity with the presbyterian polity in the writings of Polycarp.
Some doubts about the authenticity of the original letters continued into the 20th century. In the late 1970s and 1980s, the scholars Robert Joly, Reinhard Hübner, Markus Vinzent, and Thomas Lenchner argued forcefully that the epistles of the Middle Recension were forgeries written during the reign of Marcus Aurelius (161–180 AD). Around the same time, the scholar Joseph Ruis-Camps published a study arguing that the Middle Recension letters were pseudepigraphically composed based on an original, smaller, authentic corpus of four letters (Romans, Magnesians, Trallians, and Ephesians). These publications stirred up tremendous, heated controversy in the scholarly community at the time, but today most scholars accept the authenticity of the seven original epistles.
The original text of six of the seven original letters is found in the Codex Mediceo Laurentianus written in Greek in the 11th century (which also contains the pseudepigraphical letters of the Long Recension, except that to the Philippians), while the letter to the Romans is found in the Codex Colbertinus.
Ignatius' letters bear signs of being written in great haste and without a proper plan, such as run-on sentences and an unsystematic succession of thought. Ignatius modeled his writings after Paul, Peter, and John, and freely quoted or paraphrased their works, as when he quoted 1 Corinthians 1:18 in his letter to the Ephesians:
Ignatius is known to have taught the deity of Christ:
The same section in text of the Long Recension says the following:
He stressed the value of the Eucharist, calling it a "medicine of immortality" ("Ignatius to the Ephesians" 20:2). The very strong desire for bloody martyrdom in the arena, which Ignatius expresses rather graphically in places, may seem quite odd to the modern reader. An examination of his soteriology shows that he regarded salvation as freedom from the powerful fear of death, which enabled one to face martyrdom bravely.
Ignatius is claimed to be the first known Christian writer to argue in favor of Christianity's replacement of the Sabbath with the Lord's Day:
Ignatius is the earliest known Christian writer to emphasize loyalty to a single bishop in each city (or diocese) who is assisted by both presbyters (elders) and deacons. Earlier writings only mention "either" bishops "or" presbyters.
For instance, his writings on bishops, presbyters and deacons:
He is also responsible for the first known use of the Greek word "katholikos" (καθολικός), meaning "universal", "complete" and "whole" to describe the church, writing:
It is from the word "katholikos" ("according to the whole") that the word "catholic" comes. When Ignatius wrote the Letter to the Smyrnaeans in about the year 107 and used the word "catholic", he used it as if it were a word already in use to describe the Church. This has led many scholars to conclude that the appellation "Catholic Church" with its ecclesial connotation may have been in use as early as the last quarter of the first century. On the Eucharist, he wrote in his letter to the Smyrnaeans:
In his letter addressed to the Christians of Rome, he entreats them to do nothing to prevent his martyrdom.
Several scholars have noted that there are striking similarities between Ignatius and the Christian-turned-Cynic philosopher Peregrinus Proteus, as described in Lucian's famous satire "The Passing of Peregrinus":
It is generally believed that these parallels are the result of Lucian intentionally copying traits from Ignatius and applying them to his satire of Peregrinus. If the dependence of Lucian on the Ignatian epistles is accepted, then this places an upper limit on the date of the epistles: around the 160s AD, just before "The Passing of Peregrinus" was written.
In 1892, Daniel Völter sought to explain the parallels by proposing that the Ignatian epistles were in fact "written" by Peregrinus, and later edited to conceal their provenance, but this speculative theory has failed to make a significant impact on the academic community.
Epistles attributed to Saint Ignatius but of spurious origin (their author is often called Pseudo-Ignatius in English) include:
|
https://en.wikipedia.org/wiki?curid=15435
|
ITU prefix
The International Telecommunication Union (ITU) allocates call sign prefixes for radio and television stations of all types. They also form the basis for, but do not exactly match, aircraft registration identifiers. These prefixes are agreed upon internationally, and are a form of country code. A call sign can be any number of letters and numerals, but each country must use only call signs that begin with the characters allocated for use in that country.
A few countries do not fully comply with these rules. Australian broadcast stations officially have—but do not use—the VL prefix, and Canada uses Chile's CB for its own Canadian Broadcasting Corporation stations. This is through a special agreement with the government of Chile, which is officially assigned the CB prefix.
With regard to the second and/or third letters in the prefixes in the list below, if the country in question is allocated all callsigns with A to Z in that position, then that country can also use call signs with the digits 0 to 9 in that position. For example, the United States is assigned KA–KZ, and therefore can also use prefixes like KW0 or K1.
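The letter-and-digit rule above can be sketched in a few lines of Python (a minimal illustration; the function name and the way an allocation block is encoded are assumptions for this sketch, not an official ITU data format):

```python
# Sketch of the rule described above: a country allocated the full A-Z
# range in a prefix position may also use the digits 0-9 in that position.
# The encoding (first letter plus a range for the second character) is
# illustrative, not an official ITU representation.

def prefix_permitted(prefix: str, first_letter: str, second_range: tuple) -> bool:
    """Check a two-character prefix against a single allocation block."""
    if len(prefix) != 2 or prefix[0] != first_letter:
        return False
    second = prefix[1]
    lo, hi = second_range
    if second.isalpha() and lo <= second <= hi:
        return True
    # A full A-Z allocation in the second position also grants digits 0-9.
    if (lo, hi) == ("A", "Z") and second.isdigit():
        return True
    return False

# The United States holds KA-KZ, so digit second characters are also valid:
print(prefix_permitted("KW", "K", ("A", "Z")))  # True
print(prefix_permitted("K1", "K", ("A", "Z")))  # True
# A country holding only a partial block gets no digits in that position:
print(prefix_permitted("C1", "C", ("A", "E")))  # False
```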
Many large countries in turn have internal rules on how and where specific subsets of their callsigns can be used (such as Mexico's XE for AM and XH for FM radio and television broadcasting), which are not covered here.
Unallocated: The following call sign prefixes are available for future allocation by the ITU. ("x" represents any letter; "n" represents any digit from 2–9.)
(* Indicates a prefix that has recently been returned to the ITU.)
Unavailable: Under present ITU guidelines the following call sign prefixes shall not be allocated. They are sometimes used unofficially – such as amateur radio operators operating in a disputed territory or in a nation state that has no official prefix (e.g. S0 in Western Sahara, station 1A0 at Knights of Malta headquarters in Rome, or station 1L in Liberland). ("x" represents any letter; "n" represents any digit from 2–9.)
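The "x"/"n" notation used in the two lists above can be read mechanically. As a hedged Python sketch (the helper name and the example pattern are illustrative, not ITU conventions), it translates directly into a regular expression:

```python
import re

# Translate the notation used in the lists above, where "x" stands for
# any letter and "n" for any digit from 2-9, into a regular expression.
# The function name and example pattern below are hypothetical.

def pattern_to_regex(pattern: str) -> re.Pattern:
    """Convert a notation string such as 'S0x' into a compiled regex."""
    parts = []
    for ch in pattern:
        if ch == "x":
            parts.append("[A-Z]")
        elif ch == "n":
            parts.append("[2-9]")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$")

rx = pattern_to_regex("S0x")  # e.g. a third letter after the unofficial S0 prefix
print(bool(rx.match("S0A")))  # True
print(bool(rx.match("S01")))  # False: "x" never stands for a digit
```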
|
https://en.wikipedia.org/wiki?curid=15437
|
IBM PC keyboard
The keyboard for IBM PC-compatible computers is standardized. However, during the more than 30 years in which the PC architecture has been frequently updated, many keyboard layout variations have been developed.
A well-known class of IBM PC keyboards is the Model M. Introduced in 1984 and manufactured by IBM, Lexmark, Maxi-Switch and Unicomp, the vast majority of Model M keyboards feature a buckling spring key design and many have fully swappable keycaps.
The PC keyboard changed over the years, often at the launch of new IBM PC versions.
Common additions to the standard layouts include additional power management keys, volume controls, media player controls, and miscellaneous user-configurable shortcuts for email client, World Wide Web browser, etc.
The IBM PC layout, particularly the Model M, has been extremely influential, and today most keyboards use some variant of it. This has caused problems for applications developed with alternative layouts, which require keys that are in awkward positions on the Model M layout – often requiring the pinkie to operate – and thus require remapping for comfortable use. One notable example is the Escape key, used by the vi editor: on the ADM-3A terminal it was located where the Tab key is on the IBM PC, but on the IBM PC the Escape key is in the corner; this is typically solved by remapping Caps Lock to Escape. Another example is the Emacs editor, which makes extensive use of modifier keys and uses the Control key more than the Meta key (for which the IBM PC substitutes the Alt key). These conventions date to the Knight keyboard, which had the Control key on the "inside" of the Meta key – opposite to the Model M, where it is on the "outside" of the Alt key – and to the space-cadet keyboard, where the four bucky bit keys (Control, Meta, Super, Hyper) are in a row, allowing several to be chorded easily, unlike on the Model M layout. This results in the "Emacs pinky" problem.
Although "PC Magazine" praised most aspects of the 1981 IBM PC keyboard's hardware design, it questioned "how IBM, that ultimate pro of keyboard manufacture, could put the left-hand shift key at the awkward reach they did". The magazine reported in 1982 that it received more letters to its "Wish List" column asking for the ability to determine the status of the three lock keys than on any other topic. "Byte" columnist Jerry Pournelle described the keyboard as "infuriatingly excellent". He praised its feel but complained that the Shift and other keys' locations were "enough to make a saint weep", and denounced the trend of PC compatible computers to emulate the layout but not the feel. He reported that the layout "nearly drove" science-fiction editor Jim Baen "crazy", and that "many of [Baen's] authors refused to work with that keyboard" so could not submit manuscripts in a compatible format. The magazine's official review was more sanguine. It praised the keyboard as "bar none, the best ... on any microcomputer" and described the unusual Shift key locations as "minor [problems] compared to some of the gigantic mistakes made on almost every other microcomputer keyboard".
"I wasn't thrilled with the placement of [the left Shift and Return] keys, either", IBM's Don Estridge stated in 1983. He defended the layout, however, stating that "every place you pick to put them is not a good place for somebody ... there's no consensus", and claimed that "if we were to change it now we would be in hot water".
The PC keyboard with its various keys has a long history of evolution reaching back to teletypewriters. In addition to the 'old' standard keys, the PC keyboard has accumulated several special keys over the years. Some of the additions have been inspired by the opportunity or requirement for improving user productivity with general office application software, while other slightly more general keyboard additions have become the factory standards after being introduced by certain operating system or GUI software vendors such as Microsoft.
|
https://en.wikipedia.org/wiki?curid=15440
|
Italian battleship Giulio Cesare
"Giulio Cesare (Julius Caesar)" was one of three dreadnought battleships built for the Royal Italian Navy () in the 1910s. Completed in 1914, she was little used and saw no combat during the First World War. The ship supported operations during the Corfu Incident in 1923 and spent much of the rest of the decade in reserve. She was rebuilt between 1933 and 1937 with more powerful guns, additional armor and considerably more speed than before.
During World War II, both "Giulio Cesare" and her sister ship "Conte di Cavour" participated in the Battle of Calabria in July 1940, when the former was lightly damaged. They were both present when British torpedo bombers attacked the fleet at Taranto in November 1940, but "Giulio Cesare" was not damaged. She escorted several convoys to North Africa and participated in the Battle of Cape Spartivento in late 1940 and the First Battle of Sirte in late 1941. She was designated as a training ship in early 1942, and escaped to Malta after the Italian armistice the following year. The ship was transferred to the Soviet Union in 1949 and renamed "Novorossiysk" (Новороссийск). The Soviets also used her for training until she was sunk in 1955, with the loss of 608 men, when an old German mine exploded. She was salvaged the following year and later scrapped.
The "Conte di Cavour" class was designed to counter the French dreadnoughts, a requirement that made the ships slower and more heavily armored than the first Italian dreadnought, "Dante Alighieri". The ships were long at the waterline and overall. They had a beam of , and a draft of . The "Conte di Cavour"-class ships displaced at normal load, and at deep load. They had a crew of 31 officers and 969 enlisted men. The ships were powered by three sets of Parsons steam turbines, two sets driving the outer propeller shafts and one set the two inner shafts. Steam for the turbines was provided by 24 Babcock & Wilcox boilers, half of which burned fuel oil and the other half both oil and coal. Designed to reach a maximum speed of from , "Giulio Cesare" failed to reach this goal on her sea trials, reaching only from . The ships carried enough coal and oil to give them a range of at .
The main battery of the "Conte di Cavour" class consisted of thirteen 305-millimeter Model 1909 guns, in five centerline gun turrets, with a twin-gun turret superfiring over a triple-gun turret in fore and aft pairs, and a third triple turret amidships. Their secondary armament consisted of eighteen guns mounted in casemates on the sides of the hull. For defense against torpedo boats, the ships carried fourteen guns; thirteen of these could be mounted on the turret tops, but they could be positioned in 30 different locations, including some on the forecastle and upper decks. They were also fitted with three submerged torpedo tubes, one on each broadside and the third in the stern.
The "Conte di Cavour"-class ships had a complete waterline armor belt that had a maximum thickness of amidships, which reduced to towards the stern and towards the bow. They had two armored decks: the main deck was thick on the flat that increased to on the slopes that connected it to the main belt. The second deck was thick. Frontal armor of the gun turrets was in thickness and the sides were thick. The armor protecting their barbettes ranged in thickness from . The walls of the forward conning tower were 280 millimeters thick.
Shortly after the end of World War I, the number of 76.2 mm guns was reduced to 13, all mounted on the turret tops, and six new 76.2-millimeter anti-aircraft (AA) guns were installed abreast the aft funnel. In addition two license-built 2-pounder () AA guns were mounted on the forecastle deck. In 1925–1926 the foremast was replaced by a four-legged (tetrapodal) mast, which was moved forward of the funnels, the rangefinders were upgraded, and the ship was equipped to handle a Macchi M.18 seaplane mounted on the amidships turret. Around that same time, either one or both of the ships was equipped with a fixed aircraft catapult on the port side of the forecastle.
"Giulio Cesare" began an extensive reconstruction in October 1933 at the Cantieri del Tirreno shipyard in Genoa that lasted until October 1937. A new bow section was grafted over the existing bow which increased her length by to and her beam increased to . The ship's draft at deep load increased to . All of the changes made increased her displacement to at standard load and at deep load. The ship's crew increased to 1,260 officers and enlisted men. Two of the propeller shafts were removed and the existing turbines were replaced by two Belluzzo geared steam turbines rated at . The boilers were replaced by eight Yarrow boilers. On her sea trials in December 1936, before her reconstruction was fully completed, "Giulio Cesare" reached a speed of from . In service her maximum speed was about and she had a range of at a speed of .
The main guns were bored out to and the center turret and the torpedo tubes were removed. All of the existing secondary armament and AA guns were replaced by a dozen 120 mm guns in six twin-gun turrets and eight AA guns in twin turrets. In addition the ship was fitted with a dozen Breda light AA guns in six twin-gun mounts and twelve Breda M31 anti-aircraft machine guns, also in twin mounts. In 1940 the 13.2 mm machine guns were replaced by AA guns in twin mounts. "Giulio Cesare" received two more twin mounts as well as four additional 37 mm guns in twin mounts on the forecastle between the two turrets in 1941. The tetrapodal mast was replaced with a new forward conning tower, protected with thick armor. Atop the conning tower there was a fire-control director fitted with two large stereo-rangefinders, with a base length of .
The deck armor was increased during the reconstruction to a total of over the engine and boiler rooms and over the magazines, although its distribution over three decks meant that it was considerably less effective than a single plate of the same thickness. The armor protecting the barbettes was reinforced with plates. All this armor weighed a total of . The existing underwater protection was replaced by the Pugliese torpedo defense system, which consisted of a large cylinder surrounded by fuel oil or water that was intended to absorb the blast of a torpedo warhead. It lacked, however, enough depth to be fully effective against contemporary torpedoes. A major problem of the reconstruction was that the ship's increased draft meant that her waterline armor belt was almost completely submerged at any significant load.
"Giulio Cesare", named after Julius Caesar, was laid down at the Gio. Ansaldo & C. shipyard in Genoa on 24 June 1910 and launched on 15 October 1911. She was completed on 14 May 1914 and served as a flagship in the southern Adriatic Sea during World War I. She saw no action, however, and spent little time at sea. Admiral Paolo Thaon di Revel, the Italian naval chief of staff, believed that Austro-Hungarian submarines and minelayers could operate too effectively in the narrow waters of the Adriatic. The threat from these underwater weapons to his capital ships was too serious for him to use the fleet in an active way. Instead, Revel decided to implement a blockade at the relatively safer southern end of the Adriatic with the battle fleet, while smaller vessels, such as the MAS torpedo boats, conducted raids on Austro-Hungarian ships and installations. Meanwhile, Revel's battleships would be preserved to confront the Austro-Hungarian battle fleet in the event that it sought a decisive engagement.
"Giulio Cesare" made port visits in the Levant in 1919 and 1920. Both "Giulio Cesare" and "Conte di Cavour" supported Italian operations on Corfu in 1923 after an Italian general and his staff were murdered at the Greek–Albanian frontier; Benito Mussolini, who had been looking for a pretext to seize Corfu, ordered Italian troops to occupy the island. "Cesare" became a gunnery training ship in 1928, after having been in reserve since 1926. She was reconstructed at Cantieri del Tirreno, Genoa, between 1933 and 1937. Both ships participated in a naval review by Adolf Hitler in the Bay of Naples in May 1938 and covered the invasion of Albania in May 1939.
Early in World War II, the ship took part in the Battle of Calabria (also known as the Battle of Punto Stilo), together with "Conte di Cavour", on 9 July 1940, as part of the 1st Battle Squadron, commanded by Admiral Inigo Campioni, during which she engaged major elements of the British Mediterranean Fleet. The British were escorting a convoy from Malta to Alexandria, while the Italians had finished escorting another from Naples to Benghazi, Libya. Admiral Andrew Cunningham, commander of the Mediterranean Fleet, attempted to interpose his ships between the Italians and their base at Taranto. Crews of the two fleets spotted each other in the middle of the afternoon and the battleships opened fire at 15:53 at a range of nearly . The leading British battleships replied a minute later. Three minutes after she opened fire, shells from "Giulio Cesare" began to straddle "Warspite", which made a small turn and increased speed at 16:00 to throw off the Italian ship's aim. Some rounds fired by "Giulio Cesare" overshot "Warspite" and near-missed the destroyers HMS "Decoy" and "Hereward", puncturing their superstructures with splinters. At that same time, a shell from "Warspite" struck "Giulio Cesare" at a distance of about . The shell pierced the rear funnel and detonated inside it, blowing out a hole nearly across. Fragments started several fires and their smoke was drawn into the boiler rooms, forcing four boilers off-line as their operators could not breathe. This reduced the ship's speed to . Uncertain how severe the damage was, Campioni ordered his battleships to turn away in the face of superior British numbers and they successfully disengaged. Repairs to "Giulio Cesare" were completed by the end of August and both ships unsuccessfully attempted to intercept British convoys to Malta in August and September.
On the night of 11 November 1940, "Giulio Cesare" and the other Italian battleships were at anchor in Taranto harbor when they were attacked by 21 Fairey Swordfish torpedo bombers from the British aircraft carrier "Illustrious", along with several other warships. One torpedo sank "Conte di Cavour" in shallow water, but "Giulio Cesare" was not hit during the attack. She participated in the Battle of Cape Spartivento on 27 November 1940, but never got close enough to any British ships to fire at them. The ship was damaged in January 1941 by splinters from a near miss during an air raid on Naples by Vickers Wellington bombers of the Royal Air Force; repairs at Genoa were completed in early February. On 8 February, she sailed to the Straits of Bonifacio to intercept what the Italians thought was a Malta convoy, but was actually a raid on Genoa. She failed to make contact with any British forces. She participated in the First Battle of Sirte on 17 December 1941, providing distant cover for a convoy bound for Libya, and briefly engaging the escort force of a British convoy. She also provided distant cover for another convoy to North Africa in early January 1942. "Giulio Cesare" was reduced to a training ship afterwards at Taranto and later Pola. After the Italian surrender on 9 September 1943, she steamed to Taranto, putting down a mutiny and enduring an ineffective attack by five German aircraft en route. She then sailed for Malta where she arrived on 12 September to be interned. The ship remained there until 17 June 1944 when she returned to Taranto where she remained for the next four years.
After the war, "Giulio Cesare" was allocated to the Soviet Union as part of the war reparations. She was moved to Augusta, Sicily, on 9 December 1948, where an unsuccessful attempt was made at sabotage. The ship was stricken from the naval register on 15 December and turned over to the Soviets on 6 February 1949 under the temporary name of "Z11" in Vlorë, Albania. She was renamed "Novorossiysk", after the Soviet city of that name on the Black Sea. The Soviets used her as a training ship, and gave her eight refits. In 1953, all Italian light AA guns were replaced by eighteen 37 mm 70-K AA guns in six twin mounts and six singles. Also replaced were her fire-control systems and radars. The Soviets intended to rearm her with their own 305 mm guns, but this was forestalled by her loss. While at anchor in Sevastopol on the night of 28/29 October 1955, an explosion ripped a hole in the forecastle forward of 'A' turret. The flooding could not be controlled, and she capsized with the loss of 608 men, including men sent from other ships to assist.
The cause of the explosion is still unclear. The official cause, regarded as the most probable, was a magnetic RMH or LMB bottom mine, laid by the Germans during World War II and triggered by the dragging of the battleship's anchor chain before mooring for the last time. Subsequent searches located 32 mines of these types, some of them within of the explosion. The damage was consistent with an explosion of of TNT, and more than one mine may have detonated. Nonetheless, other explanations for the ship's loss have been proposed, and the most popular of these is that she was sunk by Italian frogmen of the wartime special operations unit "Decima Flottiglia MAS" who – more than ten years after the cessation of hostilities – were either avenging the transfer of the former Italian battleship to the USSR or sinking it on behalf of NATO. "Novorossiysk" was stricken from the naval register on 24 February 1956, salvaged on 4 May 1957, and subsequently scrapped.
INS Vikrant (R11)
INS "Vikrant" (from Sanskrit "vikrānta", "courageous") was a of the Indian Navy. The ship was laid down as HMS "Hercules" for the British Royal Navy during World War II, but construction was put on hold when the war ended. India purchased the incomplete carrier in 1957, and construction was completed in 1961. "Vikrant" was commissioned as the first aircraft carrier of the Indian Navy and played a key role in enforcing the naval blockade of East Pakistan during the Indo-Pakistani War of 1971.
In its later years, the ship underwent major refits to embark modern aircraft, before being decommissioned in January 1997. She was preserved as a museum ship in Cuffe Parade, Mumbai until 2012. In January 2014, the ship was sold through an online auction and scrapped in November 2014 after final clearance from the Supreme Court.
In 1943 the Royal Navy commissioned six light aircraft carriers in an effort to counter the German and Japanese navies. The 1942 Design Light Fleet Carrier, commonly referred to as the British Light Fleet Carrier, was the result. Serving with eight navies between 1944 and 2001, these ships were designed and constructed by civilian shipyards as an intermediate step between the full-sized fleet aircraft carriers and the less expensive but limited-capability escort carriers.
Sixteen light fleet carriers were ordered, and all were laid down as what became the "Colossus" class in 1942 and 1943. The final six ships were modified during construction to handle larger and faster aircraft, and were re-designated the "Majestic" class. The improvements from the "Colossus" class to the "Majestic" class included heavier displacement, armament, catapult, aircraft lifts and aircraft capacity. Construction on the ships was suspended at the end of World War II, as the ships were surplus to the Royal Navy's peacetime requirements.
Instead, the carriers were modernized and sold to several Commonwealth nations. The ships were similar, but each varied depending on the requirements of the country to which the ship was sold.
HMS "Hercules", the fifth ship in the "Majestic" class, was ordered on 7 August 1942 and laid down on 14 October 1943 by Vickers-Armstrongs on the River Tyne. After World War II ended with Japan's surrender on 2 September 1945, she was launched on 22 September, and her construction was suspended in May 1946. At the time of suspension, she was 75 per cent complete. Her hull was preserved, and in May 1947 she was laid up in Gareloch off the Clyde. In January 1957, she was purchased by India and was towed to Belfast to complete her construction and modifications by Harland and Wolff. Several improvements to the original design were ordered by the Indian Navy, including an angled deck, steam catapults, and a modified island.
"Vikrant" displaced at standard load and at deep load. She had an overall length of , a beam of and a mean deep draught of . She was powered by a pair of Parsons geared steam turbines, driving two propeller shafts, using steam provided by four Admiralty three-drum boilers. The turbines developed a total of which gave a maximum speed of . "Vikrant" carried about of fuel oil that gave her a range of at , and at . The air and ship crew comprised 1,110 officers and men.
The ship was armed with sixteen Bofors anti-aircraft guns, but these were later reduced to eight. At various times, her aircraft consisted of Hawker Sea Hawk and Sea Harrier (STOVL) jet fighters, Sea King Mk 42B and HAL Chetak helicopters, and Breguet Alizé Br.1050 anti-submarine aircraft. The carrier fielded between 21 and 23 aircraft of all types. "Vikrant"s flight deck was designed to handle aircraft up to , but remained the heaviest landing weight of an aircraft. Larger lifts were installed.
The ship was equipped with one LW-05 air-search radar, one ZW-06 surface-search radar, one LW-10 tactical radar and one Type 963 aircraft landing radar with other communication systems.
The Indian Navy's first aircraft carrier was commissioned as INS "Vikrant" on 4 March 1961 in Belfast by Vijaya Lakshmi Pandit, the Indian High Commissioner to the United Kingdom. The name "Vikrant" was derived from the Sanskrit word "vikrānta" meaning "stepping beyond", "courageous" or "bold". Captain Pritam Singh Mahindroo was the first commanding officer of the ship, which carried British Hawker Sea Hawk fighter-bombers and French Alizé anti-submarine aircraft. On 18 May 1961, the first jet landed on her deck. It was piloted by Lieutenant Radhakrishna Hariram Tahiliani, who later served as admiral and Chief of the Naval Staff of India from 1984 to 1987. "Vikrant" formally joined the Indian Navy's fleet in Bombay (now Mumbai) on 3 November 1961, when she was received at Ballard Pier by then Prime Minister Jawaharlal Nehru.
In December of that year, the ship was deployed for Operation Vijay (the code name for the annexation of Goa) off the coast of Goa with two destroyers, and . "Vikrant" did not see action, and patrolled along the coast to deter foreign interference. During the Indo-Pakistani War of 1965, "Vikrant" was in dry dock refitting, and did not see any action.
In June 1970, "Vikrant" was docked at the Naval Dockyard, Bombay, due to many internal fatigue cracks and fissures in the water drums of her boilers that could not be repaired by welding. As replacement drums were not available locally, four new ones were ordered from Britain, and Naval Headquarters issued orders not to use the boilers until further notice. On 26 February 1971 the ship was moved from Ballard Pier Extension to the anchorage, without replacement drums. The main objective behind this move was to light up the boilers at reduced pressure, and work up the main and flight deck machinery that had been idle for almost seven months. On 1 March, the boilers were ignited, and basin trials up to 40 revolutions per minute (RPM) were conducted. Catapult trials were conducted on the same day.
The ship began preliminary sea trials on 18 March and returned two days later. Trials were again conducted on 26–27 April. The navy decided to limit the boilers to a pressure of and the propeller revolutions to 120 RPM ahead and 80 RPM astern, reducing the ship's speed to . With the growing expectations of a war with Pakistan in the near future, the navy started to transfer its ships to strategically advantageous locations in Indian waters. The primary concern of Naval Headquarters about the operation was the serviceability of "Vikrant". When asked his opinion regarding the involvement of "Vikrant" in the war, Fleet Operations Officer Captain Gulab Mohanlal Hiranandani told the Chief of the Naval Staff Admiral Sardarilal Mathradas Nanda:
Nanda and Hiranandani proved to be instrumental in taking "Vikrant" to war. There were objections that the ship might have severe operational difficulties that would expose the carrier to increased danger on operations. In addition, the three s acquired by the Pakistan Navy posed a significant risk to the carrier. In June, extensive deep sea trials were carried out, with steel safety harnesses around the three boilers still operational. Observation windows were fitted as a precautionary measure, to detect any steam leaks. By the end of June, the trials were complete and "Vikrant" was cleared to participate on operations, with its speed restricted to 14 knots.
As a part of preparations for the war, "Vikrant" was assigned to the Eastern Naval Command, then to the Eastern Fleet. This fleet consisted of INS "Vikrant", the two s and , the two Petya III-class corvettes and , and one submarine, . The main reason behind strengthening the Eastern Fleet was to counter the Pakistani maritime forces deployed in support of military operations in East Bengal. A surveillance area of , confined by a triangle with a base of and sides of and , was set up in the Bay of Bengal. Any ship in this area was to be challenged and checked. If found to be neutral, it would be escorted to the nearest Indian port, otherwise, it would be captured, and taken as a war prize.
In the meantime, intelligence reports confirmed that Pakistan was to deploy a US-built , . "Ghazi" was considered as a serious threat to "Vikrant" by the Indian Navy, as "Vikrant"s approximate position would be known by the Pakistanis once she started operating aircraft. Of the four available surface ships, INS "Kavaratti" had no sonar, which meant that the other three had to remain in close vicinity of "Vikrant", without which the carrier would be completely vulnerable to attack by "Ghazi".
On 23 July, "Vikrant" sailed off to Cochin in company with the Western Fleet. En route, before reaching Cochin on 26 July, Sea King landing trials were carried out. After the completion of the radar and communication trials on 28 July, she departed for Madras, escorted by "Brahmaputra" and "Beas". The next major problem was operating aircraft from the carrier. The commanding officer of the ship, Captain (later Vice Admiral) S. Prakash, was seriously concerned about flight operations. He was concerned that aircrew morale would be adversely affected if flight operations were not undertaken, which could be disastrous. Naval Headquarters remained stubborn on the speed restrictions, and sought confirmation from Prakash whether it was possible to embark an Alizé without compromising the speed restrictions. The speed restrictions imposed by the headquarters meant that Alizé aircraft would have to land at close to stalling speed. Eventually the aircraft weight was reduced, which allowed several of the aircraft to embark, along with a Seahawk squadron.
By the end of September, "Vikrant" and her escorts reached Port Blair. En route to Visakhapatnam, tactical exercises were conducted in the presence of the Flag Officer Commanding-in-Chief of the Eastern Naval Command. From Visakhapatnam, "Vikrant" set out for Madras for maintenance. Rear Admiral S. H. Sharma was appointed Flag Officer Commanding Eastern Fleet and arrived at Visakhapatnam on 14 October. After receiving the reports that Pakistan might launch preemptive strikes, maintenance was stopped for another tactical exercise, which was completed during the night of 26–27 October at Visakhapatnam. "Vikrant" then returned to Madras to resume maintenance. On 1 November, the Eastern Fleet was formally constituted, and on 13 November, all the ships set out for the Andaman and Nicobar Islands. To avoid misadventures, it was planned to sail "Vikrant" to a remote anchorage, isolating it from combat. Simultaneously, deception signals would give the impression that "Vikrant" was operating somewhere between Madras and Visakhapatnam.
On 23 November, an emergency was declared in Pakistan after a clash of Indian and Pakistani troops in East Pakistan two days earlier. On 2 December, the Eastern Fleet proceeded to its patrol area in anticipation of an attack by Pakistan. The Pakistan Navy had deployed "Ghazi" on 14 November with the explicit goal of targeting and sinking "Vikrant", and "Ghazi" reached a location near Madras by the 23rd. In an attempt to deceive the Pakistan Navy and "Ghazi", India's Naval Headquarters deployed "Rajput" as a decoy—the ship sailed off the coast of Vishakhapatnam and broadcast a significant amount of radio traffic, making her appear to be "Vikrant".
"Ghazi", meanwhile, sank off the Visakhapatnam coast under mysterious circumstances. On the night of 3–4 December, a muffled underwater explosion was detected by a coastal battery. The next morning, a local fisherman observed flotsam near the coast, causing Indian naval officials to suspect a vessel had sunk off the coast. The next day, a clearance diving team was sent to search the area, and they confirmed that "Ghazi" had sunk in shallow waters.
The cause of "Ghazi"s loss is unclear. The Indian Navy's official historian, Hiranandani, suggests three possibilities, after having analysed the position of the rudder and the extent of the damage suffered. The first was that "Ghazi" had come up to periscope depth to identify her position and may have seen an anti-submarine vessel that caused her to crash dive, which in turn may have led her to bury her bow in the bottom. The second possibility is closely related to the first: on the night of the explosion, "Rajput" was on patrol off Visakhapatnam and observed a severe disturbance in the water. Suspecting that it was a submarine, the ship dropped two depth charges on the spot, on a position that was very close to the wreckage. The third possibility is that there was a mishap while "Ghazi" was laying mines on the day before hostilities broke out.
"Vikrant" was redeployed towards Chittagong at the outbreak of hostilities. On 4 December, the ship's Sea Hawks struck shipping in Chittagong and Cox's Bazar harbours, sinking or incapacitating most of the ships present. Later strikes targeted Khulna and the Port of Mongla, which continued until 10 December, while other operations were flown to support a naval blockade of East Pakistan. On 14 December, the Sea Hawks attacked the cantonment area in Chittagong, destroying several Pakistani army barracks. Medium anti-aircraft fire was encountered during this strike. Simultaneous attacks by Alizés continued on Cox's Bazar. After this, "Vikrant"s fuel levels dropped to less than 25 per cent, and the aircraft carrier sailed to Paradip for refueling. The crew of INS "Vikrant" earned two Maha Vir Chakras and twelve Vir Chakra gallantry medals for their part in the war.
"Vikrant" did not see much service after the war, and was given two major modernisation refits—the first one from 1979 to 1981 and the second one from 1987 to 1989. In the first phase, her boilers, radars, communication systems and anti-aircraft guns were modernised, and facilities to operate Sea Harriers were installed. In the second phase, facilities to operate the new Sea Harrier Vertical/Short Take-Off and Landing (V/STOL) fighter aircraft and the new Sea King Mk 42B Anti-Submarine Warfare (ASW) helicopters were introduced. A 9.75-degree ski-jump ramp was fitted. The steam catapult was removed during this phase. Again in 1991, "Vikrant" underwent a six-month refit, followed by another fourteen-month refit in 1992–94. She remained operational thereafter, flying Sea Harriers, Sea Kings and Chetaks until her final sea outing on 23 November 1994. In the same year, a fire was also recorded aboard. In January 1995, the navy decided to keep "Vikrant" in "safe to float" state. She was laid up and formally decommissioned on 31 January 1997.
During her service, INS "Vikrant" embarked four squadrons of the Naval Air Arm of the Indian Navy:
Following decommissioning in 1997, the ship was earmarked for preservation as a museum ship in Mumbai. Lack of funding prevented progress on the ship's conversion to a museum and it was speculated that the ship would be made into a training ship. In 2001, the ship was opened to the public by the Indian Navy, but the Government of Maharashtra was unable to find a partner to operate the museum on a permanent, long-term basis and the museum was closed after it was deemed unsafe for the public in 2012.
In August 2013, Vice-Admiral Shekhar Sinha, chief of the Western Naval Command, said the Ministry of Defence would scrap the ship as she had become very difficult to maintain and no private bidders had offered to fund the museum's operations. On 3 December 2013, the Indian government decided to auction the ship. The Bombay High Court dismissed a public-interest lawsuit filed by Kiran Paigankar to stop the auction, stating the vessel's dilapidated condition did not warrant her preservation, nor were the necessary funds or government support available.
In January 2014, the ship was sold through an online auction to a Darukhana ship-breaker for . The Supreme Court of India dismissed another lawsuit challenging the ship's sale and scrapping on 14 August 2014. "Vikrant" remained beached off Darukhana in Mumbai Port while awaiting the final clearances of the Mumbai Port Trust. On 12 November 2014, the Supreme Court gave its final approval for the carrier to be scrapped, which commenced on 22 November 2014.
In memory of "Vikrant", the Vikrant Memorial was unveiled by Vice Admiral Surinder Pal Singh Cheema, Flag Officer Commanding-in-Chief of the Western Naval Command at K Subash Marg in the Naval Dockyard of Mumbai on 25 January 2016. The memorial is made from metal recovered from the ship.
In February 2016, Bajaj unveiled a new motorbike made with metal from "Vikrant"s scrap and named it Bajaj V in honour of "Vikrant".
The navy has named its first home-built carrier INS "Vikrant" in honour of INS "Vikrant" (R11). The new carrier is built by Cochin Shipyard Limited, and will displace . The keel was laid down in February 2009 and she was launched in August 2013. , the ship is being fitted out and is expected to be commissioned by the end of 2018.
The decommissioned ship featured prominently in the film "ABCD 2" as a backdrop while it was moored near Darukhana in Mumbai.
Western imperialism in Asia
Western imperialism in Asia involves the influence of people from Western Europe and associated states (such as Russia, Japan and the United States) in Asian territories and waters. Much of this process stemmed from the 15th-century search for trade routes to China that led directly to the Age of Discovery, and the introduction of early modern warfare into what Europeans first called the East Indies and later the Far East. By the early 16th century, the Age of Sail greatly expanded Western European influence and development of the spice trade under colonialism. European-style colonial empires and imperialism operated in Asia throughout six centuries of colonialism, formally ending with the independence of the Portuguese Empire's last colony East Timor in 2002. The empires introduced Western concepts of nation and the multinational state. This article attempts to outline the consequent development of the Western concept of the nation state.
The thrust of European political power, commerce, and culture in Asia gave rise to growing trade in commodities—a key development in the rise of today's modern world free market economy. In the 16th century, the Portuguese broke the (overland) monopoly of the Arabs and Italians in trade between Asia and Europe by the discovery of the sea route to India around the Cape of Good Hope. The ensuing rise of the rival Dutch East India Company gradually eclipsed Portuguese influence in Asia. Dutch forces first established independent bases in the East (most significantly Batavia, the heavily fortified headquarters of the Dutch East India Company) and then between 1640 and 1660 wrested Malacca, Ceylon, some southern Indian ports, and the lucrative Japan trade from the Portuguese. Later, the English and the French established settlements in India and established trade with China and their acquisitions would gradually surpass those of the Dutch. Following the end of the Seven Years' War in 1763, the British eliminated French influence in India and established the British East India Company (founded in 1600) as the most important political force on the Indian Subcontinent.
Before the Industrial Revolution in the mid-to-late 19th century, demand for oriental goods such as porcelain, silk, spices and tea remained the driving force behind European imperialism, and (with the important exception of British East India Company rule in India) the Western European stake in Asia remained confined largely to trading stations and strategic outposts necessary to protect trade. Industrialization, however, dramatically increased European demand for Asian raw materials; and the severe Long Depression of the 1870s provoked a scramble for new markets for European industrial products and financial services in Africa, the Americas, Eastern Europe, and especially in Asia. This scramble coincided with a new era in global colonial expansion known as "the New Imperialism", which saw a shift in focus from trade and indirect rule to formal colonial control of vast overseas territories ruled as political extensions of their mother countries. Between the 1870s and the beginning of World War I in 1914, the United Kingdom, France, and the Netherlands—the established colonial powers in Asia—added to their empires vast expanses of territory in the Middle East, the Indian Subcontinent, and South East Asia. In the same period, the Empire of Japan, following the Meiji Restoration; the German Empire, following the end of the Franco-Prussian War in 1871; Tsarist Russia; and the United States, following the Spanish–American War in 1898, quickly emerged as new imperial powers in East Asia and in the Pacific Ocean area.
In Asia, World War I and World War II were played out as struggles among several key imperial powers—conflicts involving the European powers along with Russia and the rising American and Japanese powers. None of the colonial powers, however, possessed the resources to withstand the strains of both world wars and maintain their direct rule in Asia. Although nationalist movements throughout the colonial world led to the political independence of nearly all of Asia's remaining colonies, decolonization was intercepted by the Cold War; and South East Asia, South Asia, the Middle East, and East Asia remained embedded in a world economic, financial, and military system in which the great powers compete to extend their influence. However, the rapid post-war economic development and rise of the industrialized developed countries of Taiwan, Singapore, South Korea, Japan and the developing countries of India, the People's Republic of China and its autonomous territory of Hong Kong, along with the collapse of the Soviet Union, have greatly diminished Western European influence in Asia. The United States remains influential with trade and military bases in Asia.
European exploration of Asia started in ancient Roman times along the Silk Road. Knowledge of lands as distant as China were held by the Romans. Trade with India through the Roman Egyptian Red Sea ports was significant in the first centuries of the Common Era.
In the 13th and 14th centuries, a number of Europeans, many of them Christian missionaries, had sought to reach China. The most famous of these travelers was Marco Polo. But these journeys had little permanent effect on East-West trade because of a series of political developments in Asia in the last decades of the 14th century, which put an end to further European exploration of Asia. The Yuan dynasty in China, which had been receptive to European missionaries and merchants, was overthrown, and the new Ming rulers proved unreceptive to religious proselytism. Meanwhile, the Turks consolidated control over the eastern Mediterranean, closing off key overland trade routes. Thus, until the 15th century, only minor trade and cultural exchanges between Europe and Asia continued at certain terminals controlled by Muslim traders.
Western European rulers determined to find new trade routes of their own. The Portuguese spearheaded the drive to find oceanic routes that would provide cheaper and easier access to South and East Asian goods. This charting of oceanic routes between East and West began with the unprecedented voyages of Portuguese and Spanish sea captains. Their voyages were influenced by medieval European adventurers, who had journeyed overland to the Far East and contributed to geographical knowledge of parts of Asia upon their return.
In 1488, Bartolomeu Dias rounded the southern tip of Africa under the sponsorship of Portugal's John II, from which point he noticed that the coast swung northeast (Cape of Good Hope). While Dias' crew forced him to turn back, by 1497, Portuguese navigator Vasco da Gama made the first open voyage from Europe to India. In 1520, Ferdinand Magellan, a Portuguese navigator in the service of the Crown of Castile ('Spain'), found a sea route into the Pacific Ocean.
In 1509, the Portuguese under Francisco de Almeida won the decisive Battle of Diu against a joint Mamluk and Arab fleet sent to expel the Portuguese from the Arabian Sea. The victory enabled Portugal to implement its strategy of controlling the Indian Ocean.
Early in the 16th century Afonso de Albuquerque emerged as the Portuguese colonial viceroy most instrumental in consolidating Portugal's holdings in Africa and in Asia. He understood that Portugal could wrest commercial supremacy from the Arabs only by force, and therefore devised a plan to establish forts at strategic sites which would dominate the trade routes and also protect Portuguese interests on land. In 1510, he conquered Goa in India, which enabled him to gradually consolidate control of most of the commercial traffic between Europe and Asia, largely through trade; Europeans started to carry on trade from forts, acting as foreign merchants rather than as settlers. In contrast, early European expansion in the "West Indies" (later known to Europeans as a separate continent from Asia that they would call the "Americas"), following the 1492 voyage of Christopher Columbus, involved heavy settlement in colonies that were treated as political extensions of the mother countries.
Lured by the potential of high profits from another expedition, the Portuguese established a permanent base in Cochin, south of the Indian trade port of Calicut in the early 16th century. In 1510, the Portuguese, led by Afonso de Albuquerque, seized Goa on the coast of India, which Portugal held until 1961, along with Diu and Daman (the remaining territory and enclaves in India from a former network of coastal towns and smaller fortified trading ports added and abandoned or lost centuries before). The Portuguese soon acquired a monopoly over trade in the Indian Ocean.
Portuguese viceroy Albuquerque (1509–1515) resolved to consolidate Portuguese holdings in Africa and Asia, and secure control of trade with the East Indies and China. His first objective was Malacca, which controlled the narrow strait through which most Far Eastern trade moved. Captured in 1511, Malacca became the springboard for further eastward penetration, starting with the voyage of António de Abreu and Francisco Serrão in 1512, ordered by Albuquerque, to the Moluccas. Years later the first trading posts were established in the Moluccas, or "Spice Islands", the source of some of the world's most sought-after spices, and from there smaller posts followed in Makassar and the Lesser Sunda Islands. By 1513–1516, the first Portuguese ships had reached Canton on the southern coast of China.
In 1513, after a failed attempt to conquer Aden, Albuquerque led an armada into the Red Sea, the first time a European fleet had entered it from the ocean; and in 1515, he consolidated Portuguese hegemony at the gates of the Persian Gulf, begun by him in 1507, by taking control of Muscat and Ormuz. Shortly after, other fortified bases and forts were annexed and built along the Gulf, and in 1521, through a military campaign, the Portuguese annexed Bahrain.
The Portuguese conquest of Malacca triggered the Malayan–Portuguese war. In 1521, Ming dynasty China defeated the Portuguese at the Battle of Tunmen and then defeated the Portuguese again at the Battle of Xicaowan. The Portuguese tried to establish trade with China by smuggling with pirates on the offshore islands off the coast of Zhejiang and Fujian, but they were driven away by the Ming navy in the 1530s–1540s.
In 1557, China decided to lease Macau to the Portuguese as a place where they could dry goods transported on their ships; the Portuguese held Macau until 1999. The Portuguese, based at Goa and Malacca, had now established a lucrative maritime empire in the Indian Ocean meant to monopolize the spice trade. The Portuguese also began a channel of trade with the Japanese, becoming the first recorded Westerners to have visited Japan. This contact introduced Christianity and firearms into Japan.
In 1505 (possibly earlier, in 1501), the Portuguese, through Lourenço de Almeida, the son of Francisco de Almeida, reached Ceylon. The Portuguese founded a fort at the city of Colombo in 1517 and gradually extended their control over the coastal areas and inland. In a series of military conflicts and political maneuvers, the Portuguese extended their control over the Sinhalese kingdoms, including Jaffna (1591), Raigama (1593), Sitawaka (1593), and Kotte (1594). However, the aim of unifying the entire island under Portuguese control faced fierce resistance from the Kingdom of Kandy. The Portuguese, led by Pedro Lopes de Sousa, launched a full-scale military invasion of the kingdom of Kandy in the Campaign of Danture of 1594. The invasion was a disaster for the Portuguese, with their entire army wiped out by Kandyan guerrilla warfare. Constantino de Sá, romantically celebrated in a 17th-century Sinhalese epic (also for his greater humanism and tolerance compared to other governors), led the last military operation, which also ended in disaster. He died in the Battle of Randeniwela, refusing to abandon his troops in the face of total annihilation.
The energies of Castile (later, the unified Spain), the other major colonial power of the 16th century, were largely concentrated on the Americas, not South and East Asia, but the Spanish did establish a footing in the Far East in the Philippines. After fighting the Portuguese over the Spice Islands from 1522 and settling the dispute between the two powers in the Treaty of Zaragoza of 1529, the Spanish, led by Miguel López de Legazpi, gradually settled and conquered the Philippines from 1564 onward. After the discovery of the return voyage to the Americas by Andrés de Urdaneta in 1565, cargoes of Chinese goods were transported from the Philippines to Mexico and from there to Spain. By this long route, Spain reaped some of the profits of Far Eastern commerce. Spanish officials converted the islands to Christianity and established some settlements, permanently establishing the Philippines as the area of East Asia most oriented toward the West in terms of culture and commerce. The Moro Muslims fought against the Spanish for over three centuries in the Spanish–Moro conflict.
The lucrative trade was vastly expanded when the Portuguese began to export slaves from Africa in 1541; however, over time, the rise of the slave trade left Portugal over-extended, and vulnerable to competition from other Western European powers. Envious of Portugal's control of trade routes, other Western European nations—mainly the Netherlands, France, and England—began to send in rival expeditions to Asia. In 1642, the Dutch drove the Portuguese out of the Gold Coast in Africa, the source of the bulk of Portuguese slave laborers, leaving this rich slaving area to other Europeans, especially the Dutch and the English.
Rival European powers began to make inroads in Asia as the Portuguese and Spanish trade in the Indian Ocean declined primarily because they had become hugely over-stretched financially due to the limitations on their investment capacity and contemporary naval technology. Both of these factors worked in tandem, making control over Indian Ocean trade extremely expensive.
The existing Portuguese interests in Asia proved sufficient to finance further colonial expansion and entrenchment in areas regarded as of greater strategic importance in Africa and Brazil. Portuguese maritime supremacy was lost to the Dutch in the 17th century, and with this came serious challenges for the Portuguese. However, they still clung to Macau and settled a new colony on the island of Timor. It was as recent as the 1960s and 1970s that the Portuguese began to relinquish their colonies in Asia. Goa was invaded by India in 1961 and became an Indian state in 1987; Portuguese Timor was abandoned in 1975 and was then invaded by Indonesia. It became an independent country in 2002, and Macau was handed back to the Chinese as per a treaty in 1999.
The arrival of the Portuguese and Spanish and their holy wars against Muslim states in the Malayan–Portuguese war, Spanish–Moro conflict and Castilian War inflamed religious tensions and turned Southeast Asia into an arena of conflict between Muslims and Christians. The Brunei Sultanate's capital at Kota Batu was assaulted by Governor Sande who led the 1578 Spanish attack.
The word "savages" in Spanish, cafres, was from the word "infidel" in Arabic - Kafir, and was used by the Spanish to refer to their own "Christian savages" who were arrested in Brunei. It was said "Castilians are kafir, men who have no souls, who are condemned by fire when they die, and that too because they eat pork" by the Brunei Sultan after the term "accursed doctrine" was used to attack Islam by the Spaniards which fed into hatred between Muslims and Christians sparked by their 1571 war against Brunei. The Sultan's words were in response to insults coming from the Spanish at Manila in 1578, other Muslims from Champa, Java, Borneo, Luzon, Pahang, Demak, Aceh, and the Malays echoed the rhetoric of holy war against the Spanish and Iberian Portuguese, calling them kafir enemies which was a contrast to their earlier nuanced views of the Portuguese in the Hikayat Tanah Hitu and Sejarah Melayu. The war by Spain against Brunei was defended in an apologia written by Doctor De Sande. The British eventually partitioned and took over Brunei while Sulu was attacked by the British, Americans, and Spanish which caused its breakdown and downfall after both of them thrived from 1500-1900 for four centuries. Dar al-Islam was seen as under invasion by "kafirs" by the Atjehnese led by Zayn al-din and by Muslims in the Philippines as they saw the Spanish invasion, since the Spanish brought the idea of a crusader holy war against Muslim Moros just as the Portuguese did in Indonesia and India against what they called "Moors" in their political and commercial conquests which they saw through the lens of religion in the 16th century.
In 1578, an attack was launched by the Spanish against Jolo, and in 1875 it was destroyed at their hands, and once again in 1974 it was destroyed by the Philippines. The Spanish first set foot on Borneo in Brunei.
The Spanish war against Brunei failed to conquer Brunei but it totally cut off the Philippines from Brunei's influence, the Spanish then started colonizing Mindanao and building fortresses. In response, the Bisayas, where Spanish forces were stationed, were subjected to retaliatory attacks by the Magindanao in 1599-1600 due to the Spanish attacks on Mindanao.
The Brunei royal family was related to the Muslim Rajahs who in ruled the principality in 1570 of Manila (Kingdom of Maynila) and this was what the Spaniards came across on their initial arrival to Manila, Spain uprooted Islam out of areas where it was shallow after they began to force Christianity on the Philippines in their conquests after 1521 while Islam was already widespread in the 16th century Philippines. In the Philippines in the Cebu islands the natives killed the Spanish fleet leader Magellan. Borneo's western coastal areas at Landak, Sukadana, and Sambas saw the growth of Muslim states in the sixteenth century, in the 15th century at Nanking, the capital of China, the death and burial of the Borneo Bruneian king Maharaja Kama took place upon his visit to China with Zheng He's fleet.
The Spanish were expelled from Brunei in 1579 after they attacked in 1578. There were fifty thousand inhabitants before the 1597 attack by the Spanish in Brunei.
During first contact with China, numerous aggressions and provocations were undertaken by the Portuguese They believed they could mistreat the non-Christians because they themselves were Christians and acted in the name of their religion in committing crimes and atrocities. This resulted in the Battle of Xicaowan where the local Chinese navy defeated and captured a fleet of Portuguese caravels.
The Portuguese decline in Asia was accelerated by attacks on their commercial empire by the Dutch and the English, which began a global struggle over the empire in Asia that lasted until the end of the Seven Years' War in 1763. The Netherlands revolt against Spanish rule facilitated Dutch encroachment on the Portuguese monopoly over South and East Asian trade. The Dutch looked on Spain's trade and colonies as potential spoils of war. When the two crowns of the Iberian peninsula were joined in 1581, the Dutch felt free to attack Portuguese territories in Asia.
By the 1590s, a number of Dutch companies were formed to finance trading expeditions in Asia. Because competition lowered their profits, and because of the doctrines of mercantilism, in 1602 the companies united into a cartel and formed the Dutch East India Company, and received from the government the right to trade and colonize territory in the area stretching from the Cape of Good Hope eastward to the Strait of Magellan.
In 1605, armed Dutch merchants captured the Portuguese fort at Amboyna in the Moluccas, which was developed into the company's first secure base. Over time, the Dutch gradually consolidated control over the great trading ports of the East Indies. This control allowed the company to monopolise the world spice trade for decades. Their monopoly over the spice trade became complete after they drove the Portuguese from Malacca in 1641 and Ceylon in 1658.
Dutch East India Company colonies or outposts were later established in Atjeh (Aceh), 1667; Macassar, 1669; and Bantam, 1682. The company established its headquarters at Batavia (today Jakarta) on the island of Java. Outside the East Indies, the Dutch East India Company colonies or outposts were also established in Persia (Iran), Bengal (now Bangladesh and part of India), Mauritius (1638-1658/1664-1710), Siam (now Thailand), Guangzhou (Canton, China), Taiwan (1624–1662), and southern India (1616–1795).
Ming dynasty China defeated the Dutch East India Company in the Sino-Dutch conflicts. The Chinese first defeated and drove the Dutch out of the Pescadores in 1624. The Ming navy under Zheng Zhilong defeated the Dutch East India Company's fleet at the 1633 Battle of Liaoluo Bay. In 1662, Zheng Zhilong's son Zheng Chenggong (also known as Koxinga) expelled the Dutch from Taiwan after defeating them in the Siege of Fort Zeelandia. ("see" History of Taiwan) Further, the Dutch East India Company trade post on Dejima (1641–1857), an artificial island off the coast of Nagasaki, was for a long time the only place where Europeans could trade with Japan.
The Vietnamese Nguyễn lords defeated the Dutch in a naval battle in 1643.
The Cambodians defeated the Dutch in the Cambodian–Dutch War in 1644.
In 1652, Jan van Riebeeck established an outpost at the Cape of Good Hope (the southwestern tip of Africa, currently in South Africa) to restock company ships on their journey to East Asia. This post later became a fully-fledged colony, the Cape Colony (1652–1806). As Cape Colony attracted increasing Dutch and European settlement, the Dutch founded the city of Kaapstad (Cape Town).
By 1669, the Dutch East India Company was the richest private company in history, with a huge fleet of merchant ships and warships, tens of thousands of employees, a private army consisting of thousands of soldiers, and a reputation on the part of its stockholders for high dividend payments.
The company was in almost constant conflict with the English; relations were particularly tense following the Amboyna Massacre in 1623. During the 18th century, Dutch East India Company possessions were increasingly focused on the East Indies. After the fourth war between the United Provinces and England (1780–1784), the company suffered increasing financial difficulties. In 1799, the company was dissolved, commencing official colonisation of the East Indies. During the era of New Imperialism the territorial claims of the Dutch East India Company (VOC) expanded into a fully fledged colony named the Dutch East Indies. Partly driven by re-newed colonial aspirations of fellow European nation states the Dutch strived to establish unchallenged control of the archipelago now known as Indonesia.
Six years into formal colonisation of the East Indies, in Europe the Dutch Republic was occupied by the French forces of Napoleon. The Dutch government went into exile in England and formally ceded its colonial possessions to Great Britain. The pro-French Governor General of Java Jan Willem Janssens, resisted a British invasion force in 1811 until forced to surrender. British Governor Raffles, who the later founded the city of Singapore, ruled the colony the following 10 years of the British interregnum (1806–1816).
After the defeat of Napoleon and the Anglo-Dutch Treaty of 1814 colonial government of the East Indies was ceded back to the Dutch in 1817. The loss of South Africa and the continued scramble for Africa stimulated the Dutch to secure unchallenged dominion over its colony in the East Indies. The Dutch started to consolidate its power base through extensive military campaigns and elaborate diplomatic alliances with indigenous rulers ensuring the Dutch tricolor was firmly planted in all corners of the Archipelago. These military campaigns included: the Padri War (1821–1837), the Java War (1825–1830) and the Aceh War (1873–1904). This raised the need for a considerable military buildup of the colonial army (KNIL). From all over Europe soldiers were recruited to join the KNIL.
The Dutch concentrated their colonial enterprise in the Dutch East Indies (Indonesia) throughout the 19th century. The Dutch lost control over the East Indies to the Japanese during much of World War II. Following the war, the Dutch fought Indonesian independence forces after Japan surrendered to the Allies in 1945. In 1949, most of what was known as the Dutch East Indies was ceded to the independent Republic of Indonesia. In 1962, also Dutch New Guinea was annexed by Indonesia de facto ending Dutch imperialism in Asia.
The English sought to stake out claims in India at the expense of the Portuguese dating back to the Elizabethan era. In 1600, Queen Elizabeth I incorporated the English East India Company (later the British East India Company), granting it a monopoly of trade from the Cape of Good Hope eastward to the Strait of Magellan. In 1639, it acquired Madras on the east coast of India, where it quickly surpassed Portuguese Goa as the principal European trading centre on the Indian Subcontinent.
Through bribes, diplomacy, and manipulation of weak native rulers, the company prospered in India, where it became the most powerful political force, and outrivaled its Portuguese and French competitors. For more than one hundred years, English and French trading companies had fought one another for supremacy, and, by the middle of the 18th century, competition between the British and the French had heated up. French defeat by the British under the command of Robert Clive during the Seven Years' War (1756–1763) marked the end of the French stake in India.
The British East India Company, although still in direct competition with French and Dutch interests until 1763, was able to extend its control over almost the whole of India in the century following the subjugation of Bengal at the 1757 Battle of Plassey. The British East India Company made great advances at the expense of a Mughal dynasty.
The reign of Aurangzeb had marked the height of Mughal power. By 1690 Mughal territorial expansion reached its greatest extent encompassing the entire Indian Subcontinent. But this period of power was followed by one of decline. Fifty years after the death of Aurangzeb, the great Mughal empire had crumbled. Meanwhile, marauding warlords, nobles, and others bent on gaining power left the Subcontinent increasingly anarchic. Although the Mughals kept the imperial title until 1858, the central government had collapsed, creating a power vacuum.
Aside from defeating the French during the Seven Years' War, Robert Clive, the leader of the Company in India, defeated a key Indian ruler of Bengal at the decisive Battle of Plassey (1757), a victory that ushered in the beginning of a new period in Indian history, that of informal British rule. While still nominally the sovereign, the Mughal Indian emperor became more and more of a puppet ruler, and anarchy spread until the company stepped into the role of policeman of India. The transition to formal imperialism, characterised by Queen Victoria being crowned "Empress of India" in the 1870s was a gradual process. The first step toward cementing formal British control extended back to the late 18th century. The British Parliament, disturbed by the idea that a great business concern, interested primarily in profit, was controlling the destinies of millions of people, passed acts in 1773 and 1784 that gave itself the power to control company policies and to appoint the highest company official in India, the Governor-General. (This system of dual control lasted until 1858.) By 1818, the East India Company was master of all of India. Some local rulers were forced to accept its overlordship; others were deprived of their territories. Some portions of India were administered by the British directly; in others native dynasties were retained under British supervision.
Until 1858, however, much of India was still officially the dominion of the Mughal emperor. Anger among some social groups, however, was seething under the governor-generalship of James Dalhousie (1847–1856), who annexed the Punjab (1849) after victory in the Second Sikh War, annexed seven princely states using the doctrine of lapse, annexed the key state of Oudh on the basis of misgovernment, and upset cultural sensibilities by banning Hindu practices such as sati.
The 1857 Sepoy Rebellion, or Indian Mutiny, an uprising initiated by Indian troops, called sepoys, who formed the bulk of the Company's armed forces, was the key turning point. Rumour had spread among them that their bullet cartridges were lubricated with pig and cow fat. The cartridges had to be bit open, so this upset the Hindu and Muslim soldiers. The Hindu religion held cows sacred, and for Muslims pork was considered haraam. In one camp, 85 out of 90 sepoys would not accept the cartridges from their garrison officer. The British harshly punished those who would not by jailing them. The Indian people were outraged, and on May 10, 1857, sepoys marched to Delhi, and, with the help of soldiers stationed there, captured it. Fortunately for the British, many areas remained loyal and quiescent, allowing the revolt to be crushed after fierce fighting. One important consequence of the revolt was the final collapse of the Mughal dynasty. The mutiny also ended the system of dual control under which the British government and the British East India Company shared authority. The government relieved the company of its political responsibilities, and in 1858, after 258 years of existence, the company relinquished its role. Trained civil servants were recruited from graduates of British universities, and these men set out to rule India. Lord Canning (created earl in 1859), appointed Governor-General of India in 1856, became known as "Clemency Canning" as a term of derision for his efforts to restrain revenge against the Indians during the Indian Mutiny. When the Government of India was transferred from the Company to the Crown, Canning became the first viceroy of India.
The Company initiated the first of the Anglo-Burmese wars in 1824, which led to total annexation of Burma by the Crown in 1885. The British ruled Burma as a province of British India until 1937, then administered her separately under the Burma Office except during the Japanese occupation of Burma, 1942–1945, until granted independence on 4 January 1948. (Unlike India, Burma opted not to join the Commonwealth of Nations.)
The denial of equal status to Indians was the immediate stimulus for the formation in 1885 of the Indian National Congress, initially loyal to the Empire but committed from 1905 to increased self-government and by 1930 to outright independence. The "Home charges", payments transferred from India for administrative costs, were a lasting source of nationalist grievance, though the flow declined in relative importance over the decades to independence in 1947.
Although majority Hindu and minority Muslim political leaders were able to collaborate closely in their criticism of British policy into the 1920s, British support for a distinct Muslim political organisation, the Muslim League from 1906 and insistence from the 1920s on separate electorates for religious minorities, is seen by many in India as having contributed to Hindu-Muslim discord and the country's eventual Partition.
France, which had lost its empire to the British by the end of the 18th century, had little geographical or commercial basis for expansion in Southeast Asia. After the 1850s, French imperialism was initially impelled by a nationalistic need to rival the United Kingdom and was supported intellectually by the notion that French culture was superior to that of the people of Annam (Vietnam), and its "mission civilisatrice"—or its "civilizing mission" of the Annamese through their assimilation to French culture and the Catholic religion. The pretext for French expansionism in Indochina was the protection of French religious missions in the area, coupled with a desire to find a southern route to China through Tonkin, the European name for a region of northern Vietnam.
French religious and commercial interests were established in Indochina as early as the 17th century, but no concerted effort at stabilizing the French position was possible in the face of British strength in the Indian Ocean and French defeat in Europe at the beginning of the 19th century. A mid-19th century religious revival under the Second Empire provided the atmosphere within which interest in Indochina grew. Anti-Christian persecutions in the Far East provided the pretext for the bombardment of Tourane (Danang) in 1847, and invasion and occupation of Danang in 1857 and Saigon in 1858. Under Napoleon III, France decided that French trade with China would be surpassed by the British, and accordingly the French joined the British against China in the Second Opium War from 1857 to 1860, and occupied parts of Vietnam as its gateway to China.
By the Treaty of Saigon in 1862, on June 5, the Vietnamese emperor ceded France three provinces of southern Vietnam to form the French colony of Cochinchina; France also secured trade and religious privileges in the rest of Vietnam and a protectorate over Vietnam's foreign relations. Gradually French power spread through exploration, the establishment of protectorates, and outright annexations. Their seizure of Hanoi in 1882 led directly to war with China (1883–1885), and the French victory confirmed French supremacy in the region. France governed Cochinchina as a direct colony, and central and northern Vietnam under the protectorates of Annam and Tonkin, and Cambodia as protectorates in one degree or another. Laos too was soon brought under French "protection".
By the beginning of the 20th century, France had created an empire in Indochina nearly 50 percent larger than the mother country. A Governor-General in Hanoi ruled Cochinchina directly and the other regions through a system of residents. Theoretically, the French maintained the precolonial rulers and administrative structures in Annam, Tonkin, Cochinchina, Cambodia, and Laos, but in fact the governor-generalship was a centralised fiscal and administrative regime ruling the entire region. Although the surviving native institutions were preserved in order to make French rule more acceptable, they were almost completely deprived of any independence of action. The ethnocentric French colonial administrators sought to assimilate the upper classes into France's "superior culture." While the French improved public services and provided commercial stability, the native standard of living declined and precolonial social structures eroded. Indochina, which had a population of over eighteen million in 1914, was important to France for its tin, pepper, coal, cotton, and rice. It is still a matter of debate, however, whether the colony was commercially profitable.
Tsarist Russia is not often regarded as a colonial power such as the United Kingdom or France because of the manner of Russian expansions: unlike the United Kingdom, which expanded overseas, the Russian empire grew from the centre outward by a process of accretion, like the United States. In the 19th century, Russian expansion took the form of a struggle of an effectively landlocked country for access to a warm water port.
Qing China defeated Russia in the Sino-Russian border conflicts.
While the British were consolidating their hold on India, Russian expansion had moved steadily eastward to the Pacific, then toward the Middle East. In the early 19th century it succeeded in conquering the South Caucasus and Dagestan from Qajar Iran following the Russo-Persian War (1804–13), the Russo-Persian War (1826–28) and the out coming treaties of Gulistan and Turkmenchay, giving Russia direct borders with both Persia's as well as Ottoman Turkey's heartlands. Later, they eventually reached the frontiers of Afghanistan as well (which had the largest foreign border adjacent to British holdings in India). In response to Russian expansion, the defense of India's land frontiers and the control of all sea approaches to the Subcontinent via the Suez Canal, the Red Sea, and the Persian Gulf became preoccupations of British foreign policy in the 19th century.
Anglo-Russian rivalry in the Middle East and Central Asia led to a brief confrontation over Afghanistan in the 1870s. In Persia (Iran), both nations set up banks to extend their economic influence. The United Kingdom went so far as to invade Tibet, a land subordinate to the Chinese empire, in 1904, but withdrew when it became clear that Russian influence was insignificant and when Chinese resistance proved tougher than expected.
In 1907, the United Kingdom and Russia signed an agreement which — on the surface —ended their rivalry in Central Asia. ("see" Anglo-Russian Entente) As part of the entente, Russia agreed to deal with the sovereign of Afghanistan only through British intermediaries. In turn, the United Kingdom would not annex or occupy Afghanistan. Chinese suzerainty over Tibet also was recognised by both Russia and the United Kingdom, since nominal control by a weak China was preferable to control by either power. Persia was divided into Russian and British spheres of influence and an intervening "neutral" zone. The United Kingdom and Russia chose to reach these uneasy compromises because of growing concern on the part of both powers over German expansion in strategic areas of China and Africa.
Following the entente, Russia increasingly intervened in Persian domestic politics and suppressed nationalist movements that threatened both St. Petersburg and London. After the Russian Revolution, Russia gave up its claim to a sphere of influence, though Soviet involvement persisted alongside the United Kingdom's until the 1940s.
In the Middle East, in Persia (Iran) and the Ottoman Empire, a German company built a railroad from Constantinople to Baghdad and the Persian Gulf in the latter, while it built a railroad from the north of the country to the south, connecting the Caucasus with the Persian Gulf in the former. Germany wanted to gain economic influence in the region and then, perhaps, move on to India. This was met with bitter resistance by the United Kingdom, Russia, and France who divided the region among themselves.
The 16th century brought many Jesuit missionaries to China, such as Matteo Ricci, who established missions where Western science was introduced, and where Europeans gathered knowledge of Chinese society, history, culture, and science. During the 18th century, merchants from Western Europe came to China in increasing numbers. However, merchants were confined to Guangzhou and the Portuguese colony of Macau, as they had been since the 16th century. European traders were increasingly irritated by what they saw as the relatively high customs duties they had to pay and by the attempts to curb the growing import trade in opium. By 1800, its importation was forbidden by the imperial government. However, the opium trade continued to boom.
Early in the 19th century, serious internal weaknesses developed in the Qing dynasty that left China vulnerable to Western, Meiji period Japanese, and Russian imperialism. In 1839, China found itself fighting the First Opium War with Britain. China was defeated, and in 1842, signed the provisions of the Treaty of Nanking which were first of the unequal treaties signed during the Qing Dynasty. Hong Kong Island was ceded to Britain, and certain ports, including Shanghai and Guangzhou, were opened to British trade and residence. In 1856, the Second Opium War broke out. The Chinese were again defeated, and now forced to the terms of the 1858 Treaty of Tientsin. The treaty opened new ports to trade and allowed foreigners to travel in the interior. In addition, Christians gained the right to propagate their religion. The United States Treaty of Wanghia and Russia later obtained the same prerogatives in separate treaties.
Toward the end of the 19th century, China appeared on the way to territorial dismemberment and economic vassalage—the fate of India's rulers that played out much earlier. Several provisions of these treaties caused long-standing bitterness and humiliation among the Chinese: extraterritoriality (meaning that in a dispute with a Chinese person, a Westerner had the right to be tried in a court under the laws of his own country), customs regulation, and the right to station foreign warships in Chinese waters, including its navigable rivers.
Jane E. Elliott criticized the allegation that China refused to modernize or was unable to defeat Western armies as simplistic, noting that China embarked on a massive military modernization in the late 1800s after several defeats, buying weapons from Western countries and manufacturing their own at arsenals, such as the Hanyang Arsenal during the Boxer Rebellion. In addition, Elliott questioned the claim that Chinese society was traumatized by the Western victories, as many Chinese peasants (90% of the population at that time) living outside the concessions continued about their daily lives, uninterrupted and without any feeling of "humiliation".
Historians have judged the Qing dynasty's vulnerability and weakness to foreign imperialism in the 19th century to be based mainly on its maritime naval weakness while it achieved military success against westerners on land, the historian Edward L. Dreyer said that "China’s nineteenth-century humiliations were strongly related to her weakness and failure at sea. At the start of the Opium War, China had no unified navy and no sense of how vulnerable she was to attack from the sea; British forces sailed and steamed wherever they wanted to go...In the Arrow War (1856-60), the Chinese had no way to prevent the Anglo-French expedition of 1860 from sailing into the Gulf of Zhili and landing as near as possible to Beijing. Meanwhile, new but not exactly modern Chinese armies suppressed the midcentury rebellions, bluffed Russia into a peaceful settlement of disputed frontiers in Central Asia, and defeated the French forces on land in the Sino-French War (1884-85). But the defeat of the fleet, and the resulting threat to steamship traffic to Taiwan, forced China to conclude peace on unfavorable terms."
During the Sino-French War, Chinese forces defeated the French at the Battle of Cầu Giấy (Paper Bridge), Bắc Lệ ambush, Battle of Phu Lam Tao, Battle of Zhenhai, the Battle of Tamsui in the Keelung Campaign and in the last battle which ended the war, the Battle of Bang Bo (Zhennan Pass), which triggered the French Retreat from Lạng Sơn and resulted in the collapse of the French Jules Ferry government in the Tonkin Affair.
The Qing dynasty forced Russia to hand over disputed territory in Ili in the Treaty of Saint Petersburg (1881), in what was widely seen by the west as a diplomatic victory for the Qing. Russia acknowledged that Qing China potentially posed a serious military threat. Mass media in the west during this era portrayed China as a rising military power due to its modernization programs and as a major threat to the western world, invoking fears that China would successfully conquer western colonies like Australia.
The British observer Demetrius Charles de Kavanagh Boulger suggested a British-Chinese alliance to check Russian expansion in Central Asia.
During the Ili crisis when Qing China threatened to go to war against Russia over the Russian occupation of Ili, the British officer Charles George Gordon was sent to China by Britain to advise China on military options against Russia should a potential war break out between China and Russia.
The Russians observed the Chinese building up their arsenal of modern weapons during the Ili crisis, the Chinese bought thousands of rifles from Germany. In 1880, massive amounts of military equipment and rifles were shipped via boats to China from Antwerp as China purchased torpedoes, artillery, and 260,260 modern rifles from Europe.
The Russian military observer D. V. Putiatia visited China in 1888 and found that in Northeastern China (Manchuria) along the Chinese-Russian border, the Chinese soldiers were potentially able to become adept at "European tactics" under certain circumstances, and the Chinese soldiers were armed with modern weapons like Krupp artillery, Winchester carbines, and Mauser rifles.
Compared to Russian controlled areas, more benefits were given to the Muslim Kirghiz on the Chinese controlled areas. Russian settlers fought against the Muslim nomadic Kirghiz, which led the Russians to believe that the Kirghiz would be a liability in any conflict against China. The Muslim Kirghiz were sure that in an upcoming war, that China would defeat Russia.
Russian sinologists, the Russian media, the threat of internal rebellion, the pariah status inflicted by the Congress of Berlin, and the negative state of the Russian economy all led Russia to concede and negotiate with China in St. Petersburg, and to return most of Ili to China.
The rise of Japan since the Meiji Restoration as an imperial power led to further subjugation of China. In a dispute over China's longstanding claim of suzerainty in Korea, war broke out between China and Japan, resulting in humiliating defeat for the Chinese. By the Treaty of Shimonoseki (1895), China was forced to recognize effective Japanese rule of Korea, and Taiwan was ceded to Japan until its recovery by the Republic of China in 1945, at the end of World War II.
China's defeat at the hands of Japan was another trigger for future aggressive actions by Western powers. In 1897, Germany demanded and was given a set of exclusive mining and railroad rights in Shandong province. Russia obtained access to Dairen and Port Arthur and the right to build a railroad across Manchuria, thereby achieving complete domination over a large portion of northeastern China. The United Kingdom and France also received a number of concessions. At this time, much of China was divided up into "spheres of influence": Germany had influence in Jiaozhou (Kiaochow) Bay, Shandong, and the Yellow River valley; Russia had influence in the Liaodong Peninsula and Manchuria; the United Kingdom had influence in Weihaiwei and the Yangtze Valley; and France had influence in the Guangzhou Bay and the provinces of Yunnan, Guizhou and Guangxi.
China continued to be divided up into these spheres until the United States, which had no sphere of influence, grew alarmed at the possibility of its businessmen being excluded from Chinese markets. In 1899, Secretary of State John Hay asked the major powers to agree to a policy of equal trading privileges. In 1900, several powers agreed to the U.S.-backed scheme, giving rise to the "Open Door" policy, denoting freedom of commercial access and non-annexation of Chinese territory. In any event, it was in the European powers' interest to have a weak but independent Chinese government. The privileges of the Europeans in China were guaranteed in the form of treaties with the Qing government. In the event that the Qing government totally collapsed, each power risked losing the privileges that it already had negotiated.
The erosion of Chinese sovereignty and seizures of land from Chinese by foreigners contributed to a spectacular anti-foreign outbreak in June 1900, when the "Boxers" (properly the Society of Righteous and Harmonious Fists) attacked foreigners around Beijing. The Imperial Court was divided into anti-foreign and pro-foreign factions, with the pro-foreign faction led by Ronglu and Prince Qing hampering any military effort by the anti-foreign faction led by Prince Duan and Dong Fuxiang. The Qing Empress Dowager ordered all diplomatic ties to be cut off and all foreigners to leave the legations in Beijing for Tianjin. The foreigners refused to leave. Fueled by entirely false reports that the foreigners in the legations had been massacred, the Eight-Nation Alliance launched an expedition to Beijing to reach the legations, but it underestimated the Qing military. The Qing and Boxers defeated the foreigners of the Seymour Expedition, forcing them to turn back at the Battle of Langfang. Following the foreign attack on the Dagu Forts, the Qing declared war on the foreigners. The Qing forces and the foreigners fought a fierce engagement at the Battle of Tientsin before the foreigners could launch a second expedition. On their second attempt, the Gaselee Expedition, with a much larger force, the foreigners managed to reach Beijing and fight the Battle of Peking (1900). British and French forces looted, plundered and burned the Old Summer Palace to the ground for the second time (the first being in 1860, following the Second Opium War). German forces were particularly severe in exacting revenge for the killing of their ambassador on the orders of Kaiser Wilhelm II, who held anti-Asian sentiments, while Russia tightened its hold on Manchuria in the northeast until its crushing defeat by Japan in the war of 1904–1905.
The Qing court evacuated to Xi'an and threatened to continue the war against the foreigners, until the foreigners tempered their demands in the Boxer Protocol, promising that China would not have to give up any land, and dropped their demands for the execution of Dong Fuxiang and Prince Duan.
The correspondent Douglas Story observed Chinese troops in 1907 and praised their abilities and military skill.
Extraterritorial jurisdiction was abandoned by the United Kingdom and the United States in 1943. Chiang Kai-shek forced the French to hand over all their concessions back to Chinese control after World War II. Foreign political control over leased parts of China ended with the incorporation of Hong Kong and the small Portuguese territory of Macau into the People's Republic of China in 1997 and 1999 respectively.
Some Americans in the nineteenth century advocated the annexation of Taiwan from China. Aboriginals on Taiwan often attacked and massacred shipwrecked Western sailors. In 1867, during the Rover incident, Taiwanese aborigines attacked shipwrecked American sailors, killing the entire crew. They subsequently defeated a retaliatory expedition by the American military and killed another American during the battle.
As the United States emerged as a new imperial power in the Pacific and Asia, one of the two oldest Western imperialist powers in the regions, Spain, was finding it increasingly difficult to maintain control of territories it had held in the regions since the 16th century. In 1896, a widespread revolt against Spanish rule broke out in the Philippines. Meanwhile, the recent string of U.S. territorial gains in the Pacific posed an even greater threat to Spain's remaining colonial holdings.
As the U.S. continued to expand its economic and military power in the Pacific, it declared war against Spain in 1898. During the Spanish–American War, U.S. Admiral Dewey destroyed the Spanish fleet at Manila and U.S. troops landed in the Philippines. Spain later agreed by treaty to cede the Philippines in Asia and Guam in the Pacific. In the Caribbean, Spain ceded Puerto Rico to the U.S. The war also marked the end of Spanish rule in Cuba, which was to be granted nominal independence but remained heavily influenced by the U.S. government and U.S. business interests. One year following its treaty with Spain, the U.S. occupied the small Pacific outpost of Wake Island.
The Filipinos, who assisted U.S. troops in fighting the Spanish, wished to establish an independent state and, on June 12, 1898, declared independence from Spain. In 1899, fighting between the Filipino nationalists and the U.S. broke out; it took the U.S. almost fifteen years to fully subdue the insurgency. The U.S. sent 70,000 troops and suffered thousands of casualties. The Filipino insurgents, however, suffered considerably higher casualties than the Americans. Most casualties in the war were civilians dying primarily from disease.
U.S. attacks into the countryside often included scorched-earth campaigns in which entire villages were burned and destroyed, and civilians were concentrated into camps known as "protected zones". Most of these civilian casualties resulted from disease and famine. Reports of the execution of U.S. soldiers taken prisoner by the Filipinos led to disproportionate reprisals by American forces.
The Moro Muslims fought against the Americans in the Moro Rebellion.
In 1914, Dean C. Worcester, U.S. Secretary of the Interior for the Philippines (1901–1913), described "the regime of civilisation and improvement which started with American occupation and resulted in developing naked savages into cultivated and educated men". Nevertheless, some Americans, such as Mark Twain, deeply opposed American imperialism in the Philippines, leading to the abandonment of attempts to construct a permanent U.S. naval base and to use the islands as an entry point to the Chinese market. In 1916, Congress guaranteed the independence of the Philippines by 1945.
World War I brought about the fall of several empires in Europe. This had repercussions around the world. The defeated Central Powers included Germany and the Turkish Ottoman Empire. Germany lost all of its colonies in Asia. German New Guinea, a part of Papua New Guinea, became administered by Australia. German possessions and concessions in China, including Qingdao, became the subject of a controversy during the Paris Peace Conference when the Beiyang government in China agreed to cede these interests to Japan, to the anger of many Chinese people. Although the Chinese diplomats refused to sign the agreement, these interests were ceded to Japan with the support of the United States and the United Kingdom.
Turkey gave up her provinces: Syria, Palestine, and Mesopotamia (now Iraq) came under French and British control as League of Nations Mandates. The discovery of petroleum first in Iran and then in the Arab lands in the interbellum provided a new focus for activity on the part of the United Kingdom, France, and the United States.
In 1641, all Westerners were thrown out of Japan. For the next two centuries, Japan was free from Western contact, except for at the port of Nagasaki, which Japan allowed Dutch merchant vessels to enter on a limited basis.
Japan's freedom from Western contact ended on 8 July 1853, when Commodore Matthew Perry of the U.S. Navy sailed a squadron of black-hulled warships into Edo (modern Tokyo) harbor. The Japanese told Perry to sail to Nagasaki but he refused. Perry sought to present a letter from U.S. President Millard Fillmore to the emperor which demanded concessions from Japan. Japanese authorities responded by stating that they could not present the letter directly to the emperor, but scheduled a meeting on 14 July with a representative of the emperor. On 14 July, the squadron sailed towards the shore, giving a demonstration of their cannons' firepower thirteen times. Perry landed with a large detachment of Marines and presented the emperor's representative with Fillmore's letter. Perry said he would return, and did so, this time with even more warships. The U.S. show of force led to Japan's concession to the Convention of Kanagawa on 31 March 1854. This treaty conferred extraterritoriality on American nationals, as well as opening up further treaty ports beyond Nagasaki. It was followed by similar treaties with the United Kingdom, the Netherlands, Russia and France. These events made Japanese authorities aware that the country was lacking technologically and needed the strength of industrialism in order to keep their power. This realisation eventually led to a civil war and political reform known as the Meiji Restoration.
The Meiji Restoration of 1868 led to administrative overhaul, deflation and subsequent rapid economic development. Japan had limited natural resources of her own and sought both overseas markets and sources of raw materials, fuelling a drive for imperial conquest which began with the defeat of China in 1895.
Taiwan, ceded by Qing dynasty China, became the first Japanese colony. In 1899, Japan won agreements from the great powers to abandon extraterritoriality for their citizens, and an alliance with the United Kingdom established it in 1902 as an international power. Its spectacular defeat of Russia's navy in 1905 gave it the southern half of the island of Sakhalin; exclusive Japanese influence over Korea (propinquity); the former Russian lease of the Liaodong Peninsula with Port Arthur (Lüshunkou); and extensive rights in Manchuria (see the Russo-Japanese War).
The Empire of Japan and the Joseon Dynasty in Korea formed bilateral diplomatic relations in 1876. China lost its suzerainty of Korea after defeat in the Sino-Japanese War in 1894. Russia also lost influence on the Korean peninsula with the Treaty of Portsmouth as a result of the Russo-Japanese war in 1904. The Joseon Dynasty became increasingly dependent on Japan. Korea became a protectorate of Japan with the Japan–Korea Treaty of 1905. Korea was then "de jure" annexed to Japan with the Japan–Korea Treaty of 1910.
Japan was now one of the most powerful forces in the Far East, and in 1914, it entered World War I on the side of the Allies, seizing German-occupied Kiaochow and subsequently demanding Chinese acceptance of Japanese political influence and territorial acquisitions (Twenty-One Demands, 1915). Mass protests in Peking in 1919 coupled with Allied (and particularly U.S.) opinion led to Japan's abandonment of most of the demands and Shandong's 1922 return to China. Japan received the German territory from the Treaty of Versailles, 1919, sparking widespread Chinese nationalism.
Tensions with China increased over the 1920s, and in 1931 Japanese army units based in Manchuria seized control of the region without direction from Tokyo. Intermittent conflict with China led to full-scale war in mid-1937, drawing Japan toward an overambitious bid for Asian hegemony (Greater East Asia Co-Prosperity Sphere), which ultimately led to defeat and the loss of all its overseas territories after World War II (see Japanese expansionism and Japanese nationalism).
In the aftermath of World War II, European colonies, controlling more than one billion people throughout the world, still ruled most of the Middle East, South East Asia, and the Indian Subcontinent. However, the image of European pre-eminence was shattered by the wartime Japanese occupations of large portions of British, French, and Dutch territories in the Pacific. The destabilisation of European rule led to the rapid growth of nationalist movements in Asia—especially in Indonesia, Malaya, Burma, and French Indochina (Vietnam, Cambodia, and Laos).
The war, however, only accelerated forces already in existence undermining Western imperialism in Asia. Throughout the colonial world, the processes of urbanisation and capitalist investment created professional merchant classes that emerged as new Westernised elites. While imbued with Western political and economic ideas, these classes increasingly grew to resent their unequal status under European rule.
In India, the westward movement of Japanese forces towards Bengal during World War II had led to major concessions on the part of British authorities to Indian nationalist leaders. In 1947, the United Kingdom, devastated by war and embroiled in economic crisis at home, granted British India its independence as two nations: India and Pakistan. Myanmar (Burma) and Sri Lanka (Ceylon) also gained their independence from the United Kingdom the following year, in 1948. In the Middle East, the United Kingdom granted independence to Jordan in 1946 and, in 1948, ended its mandate of Palestine, from which the independent state of Israel emerged.
Following the end of the war, nationalists in Indonesia demanded complete independence from the Netherlands. A brutal conflict ensued, and finally, in 1949, through United Nations mediation, the Dutch East Indies achieved independence, becoming the new nation of Indonesia. Dutch imperialism moulded this new multi-ethnic state comprising roughly 3,000 islands of the Indonesian archipelago with a population at the time of over 100 million.
The end of Dutch rule opened up latent tensions between the roughly 300 distinct ethnic groups of the islands, with the major ethnic fault line being between the Javanese and the non-Javanese.
Netherlands New Guinea was under the Dutch administration until 1962 (see also West New Guinea dispute).
In the Philippines, the U.S. remained committed to its previous pledges to grant the islands their independence, and the Philippines became the first of the Western-controlled Asian colonies to be granted independence post-World War II. However, the Philippines remained under pressure to adopt a political and economic system similar to the U.S.
This aim was greatly complicated by the rise of new political forces. During the war, the "Hukbalahap" (People's Army), which had strong ties to the Communist Party of the Philippines (PKP), fought against the Japanese occupation of the Philippines and won strong popularity among many sectors of the Filipino working class and peasantry. In 1946, the PKP participated in elections as part of the Democratic Alliance. However, with the onset of the Cold War, its growing political strength drew a reaction from the ruling government and the United States, resulting in the repression of the PKP and its associated organisations. In 1948, the PKP began organizing an armed struggle against the government and continued U.S. military presence. In 1950, the PKP created the People's Liberation Army ("Hukbong Mapagpalaya ng Bayan"), which mobilised thousands of troops throughout the islands. The insurgency lasted until 1956, when the PKP gave up armed struggle.
In 1968, the PKP underwent a split, and in 1969 the Maoist faction of the PKP created the New People's Army. Maoist rebels re-launched an armed struggle against the government and the U.S. military presence in the Philippines, which continues to this day.
France remained determined to retain its control of Indochina. However, in Hanoi, in 1945, a broad front of nationalists and communists led by Ho Chi Minh declared an independent Republic of Vietnam, commonly referred to as the Viet Minh regime by Western outsiders. France, seeking to regain control of Vietnam, countered with a vague offer of self-government under French rule. France's offers were unacceptable to Vietnamese nationalists, and in December 1946 the Việt Minh launched a rebellion against the French authority governing the colonies of French Indochina. The first few years of the war involved a low-level rural insurgency against French authority. However, after the Chinese communists reached the northern border of Vietnam in 1949, the conflict turned into a conventional war between two armies equipped with modern weapons supplied by the United States and the Soviet Union. Meanwhile, France granted the State of Vietnam based in Saigon independence in 1949, while Laos and Cambodia received independence in 1953. The U.S. recognized the regime in Saigon and provided the French military effort with military aid.
Meanwhile, in Vietnam, the French war against the Viet Minh continued for nearly eight years. The French were gradually worn down by guerrilla and jungle fighting. The turning point for France occurred at Dien Bien Phu in 1954, which resulted in the surrender of ten thousand French troops. Paris was forced to accept a political settlement that year at the Geneva Conference, which led to a precarious set of agreements regarding the future political status of Laos, Cambodia, and Vietnam.
British colonies in South Asia, East Asia, and Southeast Asia:
French colonies in Southeast Asia:
Dutch, British, Portuguese colonies and Russian territories in Asia:
https://en.wikipedia.org/wiki?curid=15443
Entropy (information theory)
The information entropy, often just called entropy, is a basic quantity in information theory associated with any random variable; it can be interpreted as the average level of "information", "surprise", or "uncertainty" inherent in the variable's possible outcomes. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication".
The entropy is the expected value of the self-information, a related quantity also introduced by Shannon. The self-information quantifies the level of information or surprise associated with "one" particular outcome or event of a random variable, whereas the entropy quantifies how "informative" or "surprising" the "entire random variable" is, averaged over all its possible outcomes.
The entropy was originally created by Shannon as part of his theory of communication, in which a data communication system is composed of three elements: a source of data, a communication channel, and a receiver. In Shannon's theory, the "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his famous source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem.
The entropy can also be interpreted as the average rate at which information is produced by a stochastic source of data. When the data source produces a low-probability value (i.e., when a low-probability event occurs), the event carries more "information" than when the data source produces a high-probability value. This notion of "information" is formally represented by Shannon's self-information quantity, and is also sometimes interpreted as "surprisal". The amount of information conveyed by each individual event then becomes a random variable whose expected value is the information entropy.
Given a random variable X, with possible outcomes x_1, …, x_n, each with probability P(x_i), the entropy H(X) of X is as follows:
H(X) = E[I(X)] = −Σ_{i=1}^{n} P(x_i) log_b P(x_i)
where I(x_i) = −log_b P(x_i) is the self-information associated with a particular outcome x_i; I(X) is the self-information of the random variable X in general, treated as a new derived random variable; E[I(X)] is the expected value of this new random variable, equal to the sum of the self-information of each outcome, weighted by the probability of each outcome occurring; and b, the base of the logarithm, is a parameter that can be set in different ways to determine the choice of units for information entropy.
Information entropy is typically measured in bits (alternatively called "shannons"), corresponding to base 2 in the above equation. It is also sometimes measured in "natural units" (nats), corresponding to base e, or decimal digits (called "dits", "bans", or "hartleys"), corresponding to base 10.
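The definition above translates directly into code. As a small sketch (the function name and signature are illustrative, not from any particular library), choosing the unit of entropy simply means choosing the logarithm base:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy -sum p_i * log_b(p_i); terms with p_i == 0 contribute 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin has 1 bit of entropy; the same quantity in other units:
fair_coin = [0.5, 0.5]
print(entropy(fair_coin))               # 1.0 bit (shannon)
print(entropy(fair_coin, base=math.e))  # ~0.6931 nats
print(entropy(fair_coin, base=10))      # ~0.3010 hartleys
```

The different units are constant multiples of one another, since log_b(x) = ln(x)/ln(b).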
Shannon's definition of entropy is essentially unique, in that it is the only definition satisfying the following properties: it is determined entirely by the probability distribution of the data source, it is additive for independent sources, it is maximized at the uniform distribution, it is minimized (and equal to zero) when one event occurs with 100% probability, and it obeys a certain derived version of the chain rule of probability. Axiomatic derivations of entropy are explained further below on the page.
The definition of entropy used in information theory is directly analogous to the definition used in statistical thermodynamics, a relationship which is detailed on the page Entropy in thermodynamics and information theory.
The basic idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If an event is very probable, it is no surprise (and generally uninteresting) when that event happens as expected; hence transmission of such a message carries very little new information. However, if an event is unlikely to occur, it is much more informative to learn that the event happened or will happen. For instance, the knowledge that some particular number "will not" be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number "will" win a lottery has high value because it communicates the outcome of a very low probability event.
The information content (also called the "surprisal") of an event E is an increasing function of the reciprocal of the probability p(E) of the event, precisely I(E) = log(1/p(E)) = −log(p(E)). Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that casting a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability (p = 1/6) than each outcome of a coin toss (p = 1/2).
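The die-versus-coin comparison can be checked numerically; a quick sketch (names are illustrative):

```python
import math

def surprisal(p):
    """Self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

# Each die face (p = 1/6) is more surprising than each coin face (p = 1/2)...
print(surprisal(1 / 6))  # ~2.585 bits
print(surprisal(1 / 2))  # 1.0 bit

# ...so the die's average surprisal (entropy) exceeds the coin's.
die_entropy = sum((1 / 6) * surprisal(1 / 6) for _ in range(6))   # log2(6) ~ 2.585
coin_entropy = sum((1 / 2) * surprisal(1 / 2) for _ in range(2))  # 1.0
```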
Entropy is a measure of the "unpredictability" of the state, or equivalently, of its "average information content". To get an intuitive understanding of these terms, consider the example of a political poll. Usually, such polls happen because the outcome of the poll is not already known. In other words, the outcome of the poll is relatively "unpredictable", and actually performing the poll and learning the results gives some new "information"; these are just different ways of saying that the "a priori" entropy of the poll results is large. Now, consider the case that the same poll is performed a second time shortly after the first poll. Since the result of the first poll is already known, the outcome of the second poll can be predicted well and the results should not contain much new information; in this case the "a priori" entropy of the second poll result is small relative to that of the first.
Consider the example of a coin toss. If the probability of heads is the same as the probability of tails, then the entropy of the coin toss is as high as it could be for a two-outcome trial. There is no way to predict the outcome of the coin toss ahead of time: if one has to choose, there is no average advantage to be gained by predicting that the toss will come up heads or tails, as either prediction will be correct with probability 1/2. Such a coin toss has one bit of entropy since there are two possible outcomes that occur with equal probability, and learning the actual outcome contains one bit of information. In contrast, a coin toss using a coin that has two heads and no tails has zero entropy since the coin will always come up heads, and the outcome can be predicted perfectly. Analogously, a binary event with equiprobable outcomes has a Shannon entropy of log_2(2) = 1 bit. Similarly, one trit with equiprobable values contains log_2(3) (about 1.58496) bits of information because it can have one of three values.
English text, treated as a string of characters, has fairly low entropy, i.e., is fairly predictable. If we do not know exactly what is going to come next, we can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.
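A crude way to see this is to estimate the entropy of the empirical character distribution of a string. The sketch below uses a unigram estimate, which ignores inter-character dependencies and therefore overestimates the roughly 0.6–1.3 bits per character of real English text:

```python
from collections import Counter
import math

def char_entropy(text):
    """Per-character entropy (bits) of the empirical unigram distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive string has far lower per-character entropy than varied text.
print(char_entropy("aaaaaaaa"))  # 0.0
print(char_entropy("abcdefgh"))  # 3.0 (8 equiprobable characters)
print(char_entropy("to be or not to be"))
```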
If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original, but communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have "more" than one bit of information per bit of message, but that any value "less" than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains.
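A rough way to observe this in practice, using Python's standard zlib (this illustrates the tendency, not the exact Shannon bound), is that predictable input compresses far below one output bit per input bit, while random input does not compress at all:

```python
import os
import zlib

repetitive = b"AB" * 5000         # highly predictable, low entropy per byte
random_bytes = os.urandom(10000)  # ~8 bits of entropy per byte, incompressible

print(len(zlib.compress(repetitive)))    # far smaller than 10000
print(len(zlib.compress(random_bytes)))  # at or slightly above 10000
```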
If one were to transmit sequences comprising the 4 characters 'A', 'B', 'C', and 'D', a transmitted message might be 'ABADDCAB'. Information theory gives a way of calculating the smallest possible amount of information that will convey this. If all 4 letters are equally likely (25%), one can't do better (over a binary channel) than to have 2 bits encode (in binary) each letter: 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. If 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes, so that receiving a '1' says to look at another bit unless 2 bits of sequential 1s have already been received. In this case, 'A' would be coded as '0' (one bit), 'B' as '10', and 'C' and 'D' as '110' and '111'. It is easy to see that 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect.
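The arithmetic in this example can be checked directly; the probabilities and code lengths below are exactly the ones stated in the paragraph above:

```python
import math

probs = {'A': 0.70, 'B': 0.26, 'C': 0.02, 'D': 0.02}
code_len = {'A': 1, 'B': 2, 'C': 3, 'D': 3}  # '0', '10', '110', '111'

# Average bits per symbol under the variable-length code:
expected_len = sum(probs[s] * code_len[s] for s in probs)  # 0.70 + 0.52 + 0.06 + 0.06 = 1.34

# Entropy of the source, a lower bound on any lossless code's average length:
entropy = -sum(p * math.log2(p) for p in probs.values())   # ~1.09 bits/symbol

print(expected_len)  # 1.34 — fewer than the 2 bits of the fixed-length code
print(entropy)
```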
Shannon's theorem also implies that no lossless compression scheme can shorten "all" messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.
Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X with possible values {x_1, …, x_n} and probability mass function P(X) as:
Η(X) = E[I(X)] = E[−log(P(X))]
Here E is the expected value operator, and I is the information content of X.
I(X) is itself a random variable.
The entropy can explicitly be written as
Η(X) = −Σ_{i=1}^{n} P(x_i) log_b P(x_i),
where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10.
In the case of P(x_i) = 0 for some i, the value of the corresponding summand 0 log_b(0) is taken to be 0, which is consistent with the limit:
lim_{p→0+} p log(p) = 0.
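This convention for zero-probability outcomes can be checked numerically:

```python
import math

# The summand p * log(p) vanishes as p -> 0+, so 0 * log(0) is defined as 0.
for p in [0.1, 0.01, 1e-6, 1e-12]:
    print(p * math.log2(p))  # -0.332..., -0.0664..., ~-2e-5, ~-4e-11
```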
One may also define the conditional entropy of two random variables X and Y taking values x_i and y_j respectively, as
Η(X|Y) = −Σ_{i,j} p(x_i, y_j) log_b( p(x_i, y_j) / p(y_j) ),
where p(x_i, y_j) is the probability that X = x_i and Y = y_j. This quantity should be understood as the amount of randomness in the random variable X given the random variable Y.
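A direct computation of conditional entropy from a joint distribution might look like the following sketch (the joint probability table is invented for illustration):

```python
import math

# Hypothetical joint distribution p(x, y) over X in {0, 1} and Y in {0, 1}.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distribution p(y).
p_y = {}
for (x, y), p in joint.items():
    p_y[y] = p_y.get(y, 0.0) + p

# H(X|Y) = -sum_{x,y} p(x,y) * log2( p(x,y) / p(y) )
H_X_given_Y = -sum(p * math.log2(p / p_y[y])
                   for (x, y), p in joint.items() if p > 0)

print(H_X_given_Y)  # ~0.722 bits, less than H(X) = 1: knowing Y reduces uncertainty about X
```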
Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modelled as a Bernoulli process.
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because
Η(X) = −(1/2) log_2(1/2) − (1/2) log_2(1/2) = 1 bit.
However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then
Η(X) = −p log_2(p) − q log_2(q) = −0.7 log_2(0.7) − 0.3 log_2(0.3) ≈ 0.8813 bits.
Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.
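This whole coin-toss discussion is captured by the binary entropy function; a minimal sketch (the function name is illustrative):

```python
import math

def binary_entropy(p):
    """Entropy in bits of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # certain outcome: zero entropy
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0  (maximum: fair coin)
print(binary_entropy(0.7))  # ~0.8813 (biased coin: less than one full bit)
print(binary_entropy(1.0))  # 0.0  (double-headed coin: no uncertainty)
```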
Entropy can be normalized by dividing it by information length. This ratio is called metric entropy and is a measure of the randomness of the information.
To understand the meaning of −Σ p_i log(p_i), first define an information function I in terms of an event i with probability p_i. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information:
1. I(p) is monotonically decreasing in p: an increase in the probability of an event decreases the information from an observed event, and vice versa.
2. I(p) ≥ 0: information is a non-negative quantity.
3. I(1) = 0: events that always occur do not communicate information.
4. I(p_1 p_2) = I(p_1) + I(p_2): the information learned from independent events is the sum of the information learned from each event.
The last is a crucial property. It states that the joint probability of independent sources of information communicates as much information as the two individual events separately. In particular, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes, then there are mn possible outcomes of the joint event. This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both. Shannon discovered that the proper choice of function to quantify information, preserving this additivity, is logarithmic, i.e.,
Let formula_40 be the information function, which one assumes to be twice continuously differentiable; one has:
This differential equation leads to the solution formula_42 for any formula_43. Condition 2 leads to formula_44 and, especially, formula_45 can be chosen of the form formula_46 with formula_47, which is equivalent to choosing a specific base for the logarithm. The different units of information (bits for the binary logarithm, nats for the natural logarithm, bans for the decimal logarithm and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits.
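The unit conversions above are simply constant factors given by ratios of logarithm bases. A minimal sketch of the conversion (assuming nothing beyond the standard library):

```python
import math

# One bit of information expressed in other units: the units differ only by a
# constant factor, the ratio of the logarithm bases.
bits = 1.0
nats = bits * math.log(2)     # ln(2) ≈ 0.693 nats per bit
bans = bits * math.log10(2)   # log10(2) ≈ 0.301 decimal digits (bans) per bit

print(nats, bans)
```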
If there is a distribution where event i can happen with probability pi, and it is sampled N times with outcome i occurring ni = N pi times, the total amount of information we have received is
The "average" amount of information that we receive per event is therefore
As a result of the above, and owing to the use of the logarithm in the derived definition of entropy, the entropy is also additive for independent sources. For instance, the entropy of a fair coin toss is 1 bit, and the entropy of m tosses is m bits. In a straightforward representation, log2(n) bits are needed to represent a variable that can take one of n values if n is a power of 2. If these values are equally probable, the entropy (in bits) is equal to log2(n). If one of the values is more probable to occur than the others, an observation that this value occurs is less informative than if some less common outcome had occurred. Conversely, rarer events provide more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy (thought of as average information) received from non-uniformly distributed data is always less than or equal to log2(n). Entropy is zero when one outcome is certain to occur. The entropy quantifies these considerations when a probability distribution of the source data is known. The "meaning" of the events observed (the meaning of "messages") does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
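These additivity and maximality properties can be verified directly. The following sketch (illustrative only) computes the entropy of a uniform distribution, a product of independent sources, and a skewed distribution:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform variable over 2**3 = 8 values needs exactly 3 bits.
uniform8 = [1 / 8] * 8
print(entropy_bits(uniform8))  # 3.0

# Additivity for independent sources: the joint distribution of two
# independent fair coins has entropy 1 + 1 = 2 bits.
joint = [pa * pb for pa in (0.5, 0.5) for pb in (0.5, 0.5)]
print(entropy_bits(joint))  # 2.0

# A non-uniform distribution over the same 8 values has lower entropy.
skewed = [0.5] + [0.5 / 7] * 7
print(entropy_bits(skewed) < 3.0)  # True
```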
The inspiration for adopting the word "entropy" in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.
In statistical thermodynamics the most general formula for the thermodynamic entropy of a thermodynamic system is the Gibbs entropy,
where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872).
The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy, introduced by John von Neumann in 1927,
where ρ is the density matrix of the quantum mechanical system and Tr is the trace.
At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in "changes" in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of Boltzmann's constant indicates, the changes in S/kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.
The connection between thermodynamics and what is now known as information theory was first made by Ludwig Boltzmann and expressed by his famous equation:
where formula_53 is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), "W" is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and "kB" is Boltzmann's constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is "pi = 1/W". When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently "kB" times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate.
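The substitution described above can be checked numerically. The sketch below (illustrative; the value of kB is the exact SI figure) shows the Gibbs entropy collapsing to Boltzmann's kB ln W when all W microstates are equally likely:

```python
import math

# Boltzmann constant in J/K (exact since the 2019 SI redefinition).
k_B = 1.380649e-23

def gibbs_entropy(probs):
    """Gibbs entropy S = -k_B * sum(p_i * ln p_i) over microstate probabilities."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

# With W equally likely microstates (p_i = 1/W), the Gibbs formula reduces
# to Boltzmann's S = k_B * ln(W).
W = 1000
S_gibbs = gibbs_entropy([1 / W] * W)
S_boltzmann = k_B * math.log(W)
print(abs(S_gibbs - S_boltzmann) < 1e-30)  # True
```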
In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an "application" of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: "maximum entropy thermodynamics"). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.
Entropy is defined in the context of a probabilistic model. Independent fair coin flips have an entropy of 1 bit per flip. A source that always generates a long string of B's has an entropy of 0, since the next character will always be a 'B'.
The entropy rate of a data source means the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.
From the preceding example, note the following points:
Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits (see caveat below in italics). The formula can be derived by calculating the mathematical expectation of the "amount of information" contained in a digit from the information source. "See also" Shannon–Hartley theorem.
Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). "Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc." See Markov chain.
Entropy is one of several ways to measure diversity. Specifically, Shannon entropy is the logarithm of the true diversity index with parameter equal to 1.
Entropy effectively bounds the performance of the strongest lossless compression possible, which can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. See also Kolmogorov complexity. In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors.
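The entropy bound on lossless compression can be seen in practice. The sketch below (illustrative; it uses the standard-library zlib compressor rather than an entropy-optimal code) compresses a zero-entropy source and a maximum-entropy source:

```python
import random
import zlib

random.seed(0)

# A source that always emits 'B' has entropy 0: it compresses to almost nothing.
constant = b"B" * 10_000
print(len(zlib.compress(constant)))  # a few dozen bytes

# Uniformly random bytes have about 8 bits of entropy per byte: no lossless
# compressor can shrink them appreciably (overhead can even grow them slightly).
noise = bytes(random.getrandbits(8) for _ in range(10_000))
print(len(zlib.compress(noise)))  # close to 10,000 bytes
```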
A 2011 study in "Science" estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.
The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories: information stored on a medium, information received through one-way broadcast networks, and information exchanged through two-way telecommunication networks.
There are a number of entropy-related concepts that mathematically quantify information content in some way:
(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.
It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy "rate". Shannon himself used the term in this way.
If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.
The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., with F(1) = 1, F(2) = 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.
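The 7 bits/symbol figure can be reproduced directly. A short sketch (illustrative only) treats each of the first 128 Fibonacci numbers as a symbol and computes the empirical entropy:

```python
import math
from collections import Counter

# First 128 Fibonacci numbers, each treated as one symbol of the message.
fib = [1, 1]
while len(fib) < 128:
    fib.append(fib[-1] + fib[-2])

counts = Counter(fib)  # note: the symbol 1 appears twice, all others once
n = len(fib)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(entropy)  # ≈ 6.98 bits/symbol, just under log2(128) = 7
```

The value falls slightly short of log2(128) = 7 only because the symbol 1 repeats; with all symbols distinct the entropy would be exactly 7 bits/symbol.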
In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) formula_54 guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called "guesswork" can be used to measure the effort required for a brute force attack.
Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad combined with the plaintext using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
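The leak in the last case is easy to demonstrate. The sketch below (illustrative; a 32-bit pad stands in for the 1,000,000-bit one) fixes the first pad bit and shows that the corresponding plaintext bit passes through unencrypted:

```python
import random

random.seed(1)

n = 32  # a small pad for illustration
plaintext = [random.getrandbits(1) for _ in range(n)]

# A flawed pad: the first bit is fixed at 0, the rest are perfectly random.
pad = [0] + [random.getrandbits(1) for _ in range(n - 1)]

ciphertext = [p ^ k for p, k in zip(plaintext, pad)]

# XOR with a fixed 0 leaves the first plaintext bit exposed in the ciphertext.
print(ciphertext[0] == plaintext[0])  # True
```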
A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the preceding characters), the binary entropy is:
where pi is the probability of character i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:
where i is a state (certain preceding characters) and formula_57 is the probability of j given i as the previous character.
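The first-order formula can be evaluated concretely. The sketch below (an illustrative two-state source, not from the original article) weights each state's conditional entropy by its stationary probability:

```python
import math

# Transition matrix of a two-state first-order Markov source:
# P[i][j] = probability of emitting symbol j given previous symbol i.
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Stationary distribution pi solves pi = pi * P; for a two-state chain it is
# proportional to (P[1][0], P[0][1]).
total = P[0][1] + P[1][0]
pi = [P[1][0] / total, P[0][1] / total]

# Entropy rate: sum_i pi_i * sum_j -P[i][j] * log2(P[i][j])
rate = sum(
    pi[i] * -sum(p * math.log2(p) for p in P[i] if p > 0)
    for i in range(2)
)
print(rate)  # below 1 bit/symbol: the source is partly predictable
```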
For a second order Markov source, the entropy rate is
In general the b-ary entropy of a source formula_59 with source alphabet {a1, ..., an} and discrete probability distribution {p1, ..., pn}, where pi is the probability of ai, is defined by:
Note: the b in "b-ary entropy" is the number of different symbols of the "ideal alphabet" used as a standard yardstick to measure source alphabets. In information theory, two symbols are necessary and sufficient for an alphabet to encode information. Therefore, the default is to let b = 2 ("binary entropy"). Thus, the entropy of the source alphabet, with its given empiric probability distribution, is a number equal to the number (possibly fractional) of symbols of the "ideal alphabet", with an optimal probability distribution, necessary to encode for each symbol of the source alphabet. Also note: "optimal probability distribution" here means a uniform distribution: a source alphabet with n symbols has the highest possible entropy (for an alphabet with n symbols) when the probability distribution of the alphabet is uniform. This optimal entropy turns out to be logb(n).
A source alphabet with non-uniform distribution will have less entropy than if those symbols had uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:
Applying the basic properties of the logarithm, this quantity can also be expressed as:
Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy formula_63. Furthermore, the efficiency is indifferent to the choice of (positive) base b, since changing the base rescales numerator and denominator by the same constant factor.
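Both claims (the ratio lies in [0, 1] and is base-independent) are easy to check. A minimal sketch:

```python
import math

def entropy(probs, base=2):
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def efficiency(probs, base=2):
    """Entropy divided by its maximum, log_base(n); lies in [0, 1]."""
    n = len(probs)
    return entropy(probs, base) / math.log(n, base)

p = [0.5, 0.25, 0.125, 0.125]
print(efficiency(p))           # 0.875: H = 1.75 bits vs. maximum log2(4) = 2
print(efficiency(p, base=10))  # same value: efficiency is base-independent
```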
Shannon entropy is characterized by a small number of criteria, listed below. Any definition of entropy satisfying these assumptions has the form
where K is a constant corresponding to a choice of measurement units.
In the following, pi = Pr(X = xi) and Hn(p1, ..., pn) = H(X).
The measure should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.
The measure should be unchanged if the outcomes are re-ordered.
The measure should be maximal if all the outcomes are equally likely (uncertainty is highest when all possible events are equiprobable).
For equiprobable events the entropy should increase with the number of outcomes.
For continuous random variables, the multivariate Gaussian is the distribution with maximum differential entropy.
The amount of entropy should be independent of how the process is regarded as being divided into parts.
This last functional relationship characterizes the entropy of a system with sub-systems. It demands that the entropy of a system can be calculated from the entropies of its sub-systems if the interactions between the sub-systems are known.
Given an ensemble of uniformly distributed elements that are divided into boxes (sub-systems) with elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.
For positive integers bi where b1 + ... + bk = n,
Choosing k = n, b1 = ... = bn = 1, this implies that the entropy of a certain outcome is zero: H1(1) = 0. This implies that the efficiency of a source alphabet with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).
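The grouping property described above (the entropy of the whole equals the entropy of the box choice plus the weighted entropies within the boxes) can be verified numerically. An illustrative sketch with one arbitrary four-outcome distribution:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Split a 4-outcome distribution into two boxes (sub-systems).
p = [0.4, 0.1, 0.3, 0.2]
box_a, box_b = p[:2], p[2:]
w_a, w_b = sum(box_a), sum(box_b)  # probability of landing in each box

# Entropy of the whole = entropy of the box choice
#                        + weighted entropies of the distributions within boxes.
lhs = H(p)
rhs = H([w_a, w_b]) + w_a * H([q / w_a for q in box_a]) \
                    + w_b * H([q / w_b for q in box_b])
print(abs(lhs - rhs) < 1e-12)  # True
```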
The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X:
The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support formula_86 on the real line is defined by analogy, using the above form of the entropy as an expectation:
This formula is usually referred to as the continuous entropy, or differential entropy. A precursor of the continuous entropy is the expression for the functional in the H-theorem of Boltzmann.
Although the analogy between both functions is suggestive, the following question must be posed: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points.
To answer this question, a connection must be established between the two functions:
The aim is to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the (finite or infinite) bins whose probabilities are denoted by pn. As the continuous domain is generalised, the width must be made explicit.
To do this, start with a continuous function discretized into bins of size formula_88.
By the mean-value theorem there exists a value xi in each bin such that
the integral of the function can be approximated (in the Riemannian sense) by
where this limit and "bin size goes to zero" are equivalent.
We will denote
and expanding the logarithm, we have
As Δ → 0, we have
Note: as Δ → 0, the term log(Δ) diverges to −∞, which requires a special definition of the differential or continuous entropy:
which is, as said before, referred to as the differential entropy. This means that the differential entropy "is not" a limit of the Shannon entropy for Δ → 0. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
It turns out as a result that, unlike the Shannon entropy, the differential entropy is "not" in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when "x" is a dimensioned variable. "f(x)" will then have the units of "1/x". The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If "Δ" is some "standard" value of "x" (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as:
and the result will be the same for any choice of units for "x". In fact, the limit of discrete entropy as formula_96 would also include a term of formula_97, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.
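The pathologies of differential entropy noted above (negativity, sensitivity to coordinate rescaling) can be seen in the simplest case. For the uniform density on [0, a], the differential entropy evaluates in closed form to log2(a); the sketch below is illustrative only:

```python
import math

def uniform_differential_entropy(a):
    """Differential entropy (in bits) of the uniform density on [0, a]:
    h = -integral of (1/a) * log2(1/a) dx over [0, a] = log2(a)."""
    return math.log2(a)

print(uniform_differential_entropy(4.0))   # 2.0 bits
print(uniform_differential_entropy(0.25))  # -2.0 bits: it can be negative
# Rescaling x by a factor of 2 shifts h by exactly 1 bit, so differential
# entropy is not invariant under coordinate changes:
print(uniform_differential_entropy(8.0) - uniform_differential_entropy(4.0))
```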
Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure as follows. Assume that a probability distribution is absolutely continuous with respect to a measure , i.e. is of the form for some non-negative -integrable function with -integral 1, then the relative entropy can be defined as
In this form the relative entropy generalises (up to change in sign) both the discrete entropy, where the measure is the counting measure, and the differential entropy, where the measure is the Lebesgue measure. If the measure is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure . The relative entropy, and implicitly entropy and differential entropy, do depend on the "reference" measure .
Entropy has become a useful quantity in combinatorics.
A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A of Zd, we have
where Pi is the orthogonal projection in the ith coordinate:
The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then
where formula_102 is the Cartesian product of random variables with indexes j in Si (so the dimension of this vector is equal to the size of Si).
We sketch how Loomis–Whitney follows from this: Indeed, let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) H(X) = log2 |A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}. The range of formula_103 is contained in Pi(A) and hence formula_104. Now use this to bound the right side of Shearer's inequality and exponentiate the opposite sides of the resulting inequality.
For integers 0 < k < n let q = k/n. Then
where
Here is a sketch proof. Note that formula_107 is one term of the expression
Rearranging gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then,
since there are n + 1 terms in the summation. Rearranging gives the lower bound.
A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately formula_110.
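This approximation (the number of such strings is roughly 2 to the power n·H(k/n)) can be checked against the exact binomial coefficient. An illustrative sketch with n = 1000, k = 300:

```python
import math

def H(q):
    """Binary entropy function in bits."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

n, k = 1000, 300
exact = math.log2(math.comb(n, k))  # log2 of the number of such strings
approx = n * H(k / n)               # the entropy-based estimate

print(exact, approx)
# The estimate overshoots by only a lower-order O(log n) correction term.
print(0 < approx - exact < 10)  # True
```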
https://en.wikipedia.org/wiki?curid=15445